\section{Introduction} The goal of this article is to study definable one-dimensional Hausdorff topologies in o-minimal structures, and to understand when they are definably homeomorphic to a definable set in some $M^{n}$ with its affine topology (namely, the subspace topology induced from $M^{n}$). When we say that $\tau$ is a definable topology on a definable set $X$, we mean that $\tau$ has a basis which is definable in the language of the underlying o-minimal structure.\\ Our main theorem gives several conditions equivalent to $\left(X,\tau\right)$ being definably homeomorphic to a definable set with its affine topology. It is a combination of Theorem \ref{thm: TFAE affine} and Theorem \ref{thm: Condition =0000235 affine}:\\ \\ \textbf{Main theorem.}\emph{ Let $\mathcal{M}$ be an o-minimal expansion of an ordered group. Let $X\subseteq M^{n}$ be a definable bounded set with $\dim X=1$, and let $\tau$ be a definable Hausdorff topology on $X$. Then the following are equivalent: } \begin{enumerate} \item \emph{$\left(X,\tau\right)$ is definably homeomorphic to a definable subset of $M^{k}$ for some $k$, with its affine topology. } \item \emph{There is a finite set $G\subseteq X$ such that every $\tau$-open subset of $X\setminus G$ is open with respect to the affine topology on $X\setminus G$.
} \item \emph{Every definable subset of $X$ has finitely many definably connected components, with respect to $\tau$.} \item \emph{$\tau$ is regular and $X$ has finitely many definably connected components, with respect to $\tau$.} \end{enumerate} \noindent{\em Note: If $\mathcal M$ expands a real closed field then every definable set is in definable bijection with a bounded set, so the assumption that $X$ is bounded could be omitted.} \vspace{.1cm} We mention here a theorem of Erik Walsberg, which says that a definable metric space in an o-minimal expansion of a real closed field is definably homeomorphic to a definable set equipped with its affine topology if and only if it does not contain any infinite definable discrete set. This theorem can be found in \cite{Walsberg}, and we shall phrase it more precisely later on. Inspired by this work, we study general definable topological spaces, but restrict our attention to dimension $1$. \\ {\em The results of this article were part of the M.Sc. thesis of the second author at the University of Haifa. After the submission of the thesis we learned that Pablo Andujar Guerrero, Margaret Thomas and Erik Walsberg are working, independently, on similar questions. Finally, we thank the anonymous referee for some useful suggestions.} \section{Basic definitions } Below we take ``definable'' to mean ``definable with parameters''. \begin{defn} Let $\mathcal{M}=\left(M;\ldots\right)$ be a first-order structure of a fixed language $\mathcal{L}$. We say that {\em a collection $\mathcal B$ of subsets of $M^n$ is definable (in $\mathcal M$)} if there exists an $\mathcal{L}$-formula $\varphi(\bar{x},\bar{y})$ such that $$\mathcal B=\{\varphi(M^n,\bar{b}):\bar{b}\in M^m\}.$$ Let $X\subseteq M^{n}$ be a definable set. If $\mathcal{B}$ as above forms a basis for a topology $\tau$ on $X$, then we say that $\tau$ is a \emph{definable topology} on $X$. 
(This is the third of the possibilities for considering topological structures from a model-theoretic point of view in Pillay \cite{Pillay}, page 764, where it is named a \emph{first-order topological structure}.) Note that a basis for the neighborhoods of $\bar{a}\in X$ is given by \[ \mathcal{B}_{\bar{a}}=\left\{ \varphi\left(X,\bar{b}\right):\models \varphi\left(\bar{a},\bar{b}\right),\bar{b}\in M^m\right\} =\left\{ U\in\mathcal{B}:\bar{a}\in U\right\} . \] \end{defn} From now on, all topological operations, like closure or interior, are taken with respect to the underlying topology $\tau$, unless otherwise stated. The closure and interior of a subset $Z\subseteq X$ are denoted by $cl\left(Z\right)$ and $int\left(Z\right)$, respectively. \\ This article investigates definable topologies in o-minimal structures. We fix an o-minimal structure $\mathcal{M}=\left(M;<,\ldots\right)$ and list some examples of definable topologies in $\mathcal{M}$. All the following examples can be defined on $X=M^{n}$, unless otherwise stated. We can then also consider the induced topology $\tau|_{Y}$ for a definable set $Y\subseteq X$. \begin{enumerate} \item The order topology on $M$, which we denote by $\tau^{<}$. \item The affine topology $\tau^{af}$, which is the product topology with respect to $\tau^{<}$. \item In \cite{Dries}, the notion of a definable space is introduced, based on a finite atlas where each chart is modeled on a definable subset of some $M^{n}$, with its induced affine topology. It is easy to verify that the associated topology is definable. \item The discrete topology, which we denote by $\tau^{iso}$. \item The left-closed topology on $M$, with the left-closed intervals $\left[a,b\right)$ as basic open (and also closed) sets. We denote this topology by $\tau^{^{\left[\,\,\right)}}$. \item Every definable linear ordering $\prec$ on a definable set $X\subseteq M^{n}$ gives rise to a definable topology on $X$, namely the order topology with respect to $\prec\,$.
In \cite{Onshuus_Steinhorn}, Alf Onshuus and Charles Steinhorn study such definable linear orderings in o-minimal structures, and show that they are \textquotedblleft piecewise lexicographic\textquotedblright . \item A definable metric topology $\tau^{d}$: Let $\mathcal{M}=\left(M;<,+,\cdot,\ldots\right)$ be an o-minimal expansion of a real closed field. A \emph{definable metric} on $X\subseteq M^{n}$ is a definable function $d:X^{2}\rightarrow M_{+}$ such that for all $x,y,z\in X$: 1.~$d(x,y)=0\Leftrightarrow x=y.$ 2.~$d(x,y)=d(y,x)$. 3.~$d(x,z)\leq d(x,y)+d(y,z)$. The topology $\tau^{d}$ on $X$ is the topology whose basis is the collection of open balls with respect to $d$. \end{enumerate} The following theorem is due to Erik Walsberg, \cite{Walsberg}: \begin{thm} Let $(X,d)$ be a definable metric space in an o-minimal expansion of a real closed field. The following are equivalent: (1) $(X,\tau^{d})$ is definably homeomorphic to a definable set with its affine topology. (2) There is no infinite definable set $A\subseteq X$ such that $(A,d)$ is discrete. \end{thm} \begin{rem*} We note that the analogous result fails for definable topologies in o-minimal structures. Indeed, consider the topology $\tau^{^{\left[\,\,\right)}}$ on $M$. There is no infinite definable set $A\subseteq M$ such that $(A,\tau^{^{\left[\,\,\right)}}|_{A})$ is discrete, and yet $(M,\tau^{^{\left[\,\,\right)}})$ is not definably homeomorphic to any definable set with the induced affine topology (e.g. by Theorem \ref{thm: TFAE affine}). \end{rem*} We fix a definable set $X\subseteq M^{n}$ and a definable topology $\tau$ on $X$ with a definable basis $\mathcal{B}$, and proceed with some more definitions. \begin{defn} $\mathcal{F}\subseteq\mathcal{P}(X)$ is a \emph{filtered collection} if for every $B_{1},B_{2}\in\mathcal{F}$ there exists $B_{3}\in\mathcal{F}$ such that $B_{3}\subseteq B_{1}\cap B_{2}$.
\end{defn} An example of a filtered collection is a basis for the neighborhoods of any point $\bar{a}\in X$.\\ The following definition was given by Will Johnson in \cite{Johnson} (see \cite{Peterzil_Steinhorn} for an earlier definition in the o-minimal setting): \begin{defn} $\left(X,\tau\right)$ is \emph{definably compact} if every definable filtered collection of closed non-empty subsets of $X$ has non-empty intersection. \end{defn} The next fact is proved in \cite{Johnson} (Corollary 1.11 and the subsequent paragraph): \begin{fact} \label{lem: J} If $\mathcal{M}$ is o-minimal and $X\subseteq M^{n}$ is definable, then $X$ is definably compact with respect to the induced affine topology if and only if $X$ is $\tau^{af}$-closed and bounded. In particular, every definable filtered collection of $\tau^{af}$-closed and bounded non-empty subsets has non-empty intersection. \end{fact} Thus, in the o-minimal setting, the definition above is equivalent, for the affine topology, to the one in \cite{Peterzil_Steinhorn}. \medskip{} \textbf{In our context, whenever we say that a set is definably compact, we mean that it is definably compact with respect to the induced affine topology. }\\ We recall: \begin{defn} \label{def: regular} $\left(X,\tau\right)$ is \emph{regular} if for every $\bar{a}\in X$ and open $U\subseteq X$ with $\bar{a}\in U$ there is an open $W\subseteq X$ with $\bar{a}\in W$ such that $cl\left(W\right)\subseteq U$. \end{defn} \begin{fact} For any topology $\tau$ on $X$, $\left(X,\tau\right)$ is regular if and only if for every basis $\mathcal{B}$ for~$\tau$, for every point $\bar{a}\in X$ and open basic neighborhood $U\in\mathcal{B}$ of $\bar{a}$, there is an open basic neighborhood $W\in\mathcal{B}$ of $\bar{a}$ such that $cl\left(W\right)\subseteq U$.
\end{fact} It follows that for a definable topology $\tau$ on $X$, $\left(X,\tau\right)$ is regular if and only if it is definably regular, namely, in Definition \ref{def: regular} we may consider only definable $U,W$.\\ We continue with a new definition: \begin{defn} Let $T,S\subseteq\mathcal{P}\left(X\right)$ be two definable families of sets. We write $T\preceq S$ if for each $U\in T$ there is $V\in S$ such that $V\subseteq U$. If both $T\preceq S$ and $S\preceq T$ hold, we say that \emph{the families $T$ and $S$ are equivalent}, and write $T\sim S$. By $T\precneqq S$ we mean that $T\preceq S$ and $T\nsim S$. In particular, let $\tau$ and $\eta$ be definable topologies on $X$ with definable bases $\mathcal{B}^{\tau}$ and $\mathcal{B}^{\eta}$, respectively. The topology $\eta$ is finer than the topology $\tau$ if and only if for every $\bar{a}\in X$, $\mathcal{B}_{\bar{a}}^{\tau}\preceq\mathcal{B}_{\bar{a}}^{\eta}$. If we have both $\mathcal{B}_{\bar{a}}^{\tau}\preceq\mathcal{B}_{\bar{a}}^{\eta}$ and $\mathcal{B}_{\bar{a}}^{\eta}\preceq\mathcal{B}_{\bar{a}}^{\tau}$, we say that \emph{the bases $\mathcal{B}_{\bar{a}}^{\tau}$ and $\mathcal{B}_{\bar{a}}^{\eta}$ are equivalent}, and write $\mathcal{B}_{\bar{a}}^{\tau}\sim\mathcal{B}_{\bar{a}}^{\eta}$. It follows that $\tau=\eta$ if and only if for all $\bar{a}\in X$, $\mathcal{B}_{\bar{a}}^{\tau}\sim\mathcal{B}_{\bar{a}}^{\eta}$. \end{defn} \begin{lem} \label{Let--formula} Let $\psi\left(\bar{y}\right)$, $|\bar{y}|=m$, be an $\mathcal{L}$-formula. Let $\bar{a}\in X$, and assume that for each neighborhood $U\in\tau$ of $\bar{a}$ there exists a neighborhood $U_{\bar{b}}\in\mathcal{B}_{\bar{a}}$ of $\bar{a}$ such that $\mathcal{M}\models\psi\left(\bar{b}\right)$ and $U_{\bar{b}}\subseteq U$. Then $\left\{ U_{\bar{b}}:\mathcal{M}\models\psi\left(\bar{b}\right)\right\} \sim\mathcal{B}_{\bar{a}}$.
\end{lem} \begin{proof} Clearly, $\left\{ U_{\bar{b}}:\mathcal{M}\models\psi\left(\bar{b}\right)\right\} \subseteq\left\{ U_{\bar{b}}:\bar{b}\in M^m\right\} =\mathcal{B}_{\bar{a}}$. By the assumption of the lemma, for each $U\in\mathcal{B}_{\bar{a}}$ there is $U_{\bar{b}}\in\left\{ U_{\bar{b}}:\mathcal{M}\models\psi\left(\bar{b}\right)\right\} $ such that $U_{\bar{b}}\subseteq U$. Thus $\left\{ U_{\bar{b}}:\mathcal{M}\models\psi\left(\bar{b}\right)\right\} \sim\mathcal{B}_{\bar{a}}$. \end{proof} Informally, Lemma \ref{Let--formula} says that if a definable property holds for arbitrarily small neighborhoods of $\bar{a}$, then one can pick a definable basis for the neighborhoods of $\bar{a}$ such that this property holds for all of its sets. As an immediate corollary we obtain: \begin{lem} \label{Given-a-property} Let $\bar{a}\in X$, and assume that for every $U_{\bar{b}}\in\mathcal{B}_{\bar{a}}$, $\mathcal{M}\models\left(\psi_{1}\left(\bar{b}\right)\lor\psi_{2}\left(\bar{b}\right)\right)$. For $i=1,2$, denote $\mathcal{B}_{\bar{a}}^{i}=\left\{ U_{\bar{b}}\in\mathcal{B}_{\bar{a}}:\psi_{i}\left(\bar{b}\right)\right\} $. Then either $\mathcal{B}_{\bar{a}}^{1}\sim\mathcal{B}_{\bar{a}}$ or $\mathcal{B}_{\bar{a}}^{2}\sim\mathcal{B}_{\bar{a}}$. \end{lem} We end this part with some definitions that we use later on: \begin{defn} $\left(X,\tau\right)$ is \emph{definably connected} if there are no definable non-empty open sets $U,W$ such that $U\cap W=\emptyset$ and $U\cup W=X$. Equivalently, $X$ does not contain any definable proper non-empty clopen subset. A definable $Y\subseteq X$ is \emph{definably connected} (with respect to $\tau$) if the space $\left(Y,\tau|_{Y}\right)$ is definably connected. \end{defn} \begin{defn} Let $\tau$ be a definable topology on $X$. A \textbf{definable}, maximal definably connected subset of $X$ is called a \emph{definably connected component} of $X$.
If $X$ can be decomposed into finitely many definably connected components, then we say that $\left(X,\tau\right)$ has finitely many definably connected components. The space $\left(X,\tau\right)$ is called \emph{totally definably disconnected} if its only definably connected subsets are singletons and $\emptyset$. A definable \emph{subset} $A\subseteq X$ is called \emph{totally definably disconnected} if it is so with respect to the subspace topology. \end{defn} Note that each definably connected component is a clopen set (since the closure of a definably connected set is itself definably connected). \begin{rem*} We do not know in general the answer to the following question: Given a definable topology $\tau$ on $X$ and $\bar{a}\in X$, is the union of all (definable) definably connected subsets of $X$ which contain $\bar{a}$, a definable set itself? \end{rem*} \section{Towards the main results} From now on we assume that \\ \textbf{$\boldsymbol{\mathcal{M}=\left(M;<,+,\ldots\right)}$ is an o-minimal expansion of an ordered group, $\boldsymbol{X\subseteq M^{n}}$ is a definable one-dimensional bounded set and $\boldsymbol{\tau}$ is a definable Hausdorff topology on $\boldsymbol{X}$. For simplicity, assume that $\boldsymbol{X}$ and $\boldsymbol{\tau}$ are definable over $\boldsymbol{\emptyset}$. } Whenever we mention a topology without pointing out which one, we are referring to the topology $\tau$. Thus, whenever we write $\mathcal B_{\bar{a}}$ we refer to the basis of neighborhoods with respect to $\tau$. Having said that, we refer to various types of $<$-intervals by their usual names. E.g., the term ``open-interval'' always refers to an interval of the form $(a,b)$, for $a<b$ in $M$. By o-minimality, $X$ is a finite union of $0$-cells and bounded $1$-cells in $\mathcal{M}$.
Hence, by applying an appropriate $\mathcal{L}$-definable bijection we can assume that $X$ is a bounded subset of $M$:\textbf{ \[ \boldsymbol{X=\left(s_{1},t_{1}\right)\sqcup\ldots\sqcup\left(s_{l},t_{l}\right)\sqcup F}, \] where $\boldsymbol{F}$ is a finite set of points, each $\boldsymbol{s_{i},t_{i}}$ is in $\boldsymbol{M}$, and $\boldsymbol{s_{i}\neq t_{j}}$ for all $\boldsymbol{1\leq i\neq j\leq l}$. } Let $\mathcal{B}^{af}=\left\{ \left(b_{1},b_{2}\right):b_{1},b_{2}\in M\right\} $ be the standard definable basis for $\left(M,\tau^{af}\right)$. A basis for the $\tau^{af}$-neighborhoods in $X$ of $a\in X$ is \[ \mathcal{B}_{a}^{af}\left(X\right):=\left\{ \left(b_{1},b_{2}\right)\cap X:a\in\left(b_{1},b_{2}\right)\right\} . \] To simplify notation, we write $\mathcal{B}_{a}^{af}$ instead of $\mathcal{B}_{a}^{af}\left(X\right)$.\\ We need a few more definitions. Below the term ``locally'' refers to the affine topology. \begin{defn} We say that the point $a\in X$ is \emph{locally isolated} if there are $U\in\mathcal{B}_{a}$ and an open-interval $I\ni a$ such that $U\cap I=\left\{ a\right\} $. \end{defn} \begin{defn} We say that the point $a\in X$ is \emph{locally right-closed} if for every small enough $U\in\mathcal{B}_{a}$ there exist an open-interval $I_{U}\ni a$ and a point $a'\in X$, $a'<a$, such that $U\cap I_{U}=\left(a',a\right]$. A \emph{locally left-closed} point is defined similarly. \end{defn} \begin{defn} We say that the point $a\in X$ is \emph{locally Euclidean} if for every small enough $U\in\mathcal{B}_{a}$ there exist an open-interval $I_{U}\ni a$ and two points $a',a''\in X$, $a'<a<a''$, such that $U\cap I_{U}=\left(a',a''\right)$. \end{defn} Here is an easy observation: \begin{lem} \label{lem: locally iso or open or half} For every $a\in X$, exactly one of the following holds: (1) $a$ is locally isolated. (2) $a$ is locally right-closed.
(3) $a$ is locally left-closed. (4) $a$ is locally Euclidean. \end{lem} \begin{proof} Fix $a\in X$. By o-minimality, every definable subset $U$ containing $a$ is a finite union of points and intervals. This means that there exists an open-interval $I\ni a$ such that $I\cap U$ is either $\left\{ a\right\} $, or a half-closed interval of the form $\left(a',a\right]$ or $\left[a,a''\right)$, or an open-interval. Thus, each $U$ is of one of the above four types, and it is easy to see that each of these types is a definable property of $U$. The result follows by Lemma \ref{Given-a-property}. \end{proof} Notice that if $\mathcal{B}_{a}\npreceq\mathcal{B}_{a}^{af}$ then $a$ is not locally Euclidean. Hence we have: \begin{cor} \label{cor: a locally iso=00005Cleft=00005Cright} For every $a\in X$, if $\mathcal{B}_{a}\npreceq\mathcal{B}_{a}^{af}$ then $a$ is either locally isolated or locally right-closed or locally left-closed. \end{cor} \subsection{\label{subsec: shadows-points}The shadows of a point} For a set $U\subseteq X$, by $cl^{af}\left(U\right)$ we mean the $\tau^{af}$-closure of $U$ \textbf{in }$\boldsymbol{M}$, even if $cl^{af}\left(U\right)\nsubseteq X$. The definition below, which plays a key role in our analysis of the topology, makes sense for every definable topology on $X$ (independently of $\dim X$). \begin{defn} We define the \emph{set of shadows} of a point $a\in X$ to be \[ \boldsymbol{S}\left(a\right):=\bigcap_{U\in\mathcal{B}_{a}}cl^{af}\left(U\right). \] We call a point in $\boldsymbol{S}\left(a\right)$ a \emph{shadow} of $a$. \end{defn} It is not hard to see that $\boldsymbol{S}\left(a\right)$ does not depend on the specific choice of a basis. Intuitively, as the examples below show, $\boldsymbol{S}\left(a\right)$ is the set of all points in $M^n$ which are ``glued'' to $a$ by the topology $\tau$.
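To illustrate the definition, consider the left-closed topology $\tau^{^{\left[\,\,\right)}}$ restricted to $X$: a basis for the neighborhoods of $a\in X$ is given by the sets $\left[a,b\right)\cap X$ with $b>a$, so \[ \boldsymbol{S}\left(a\right)=\bigcap_{b>a}cl^{af}\left(\left[a,b\right)\cap X\right)\subseteq\bigcap_{b>a}\left[a,b\right]=\left\{ a\right\} , \] and hence $\boldsymbol{S}\left(a\right)=\left\{ a\right\} $ for every $a\in X$. Thus trivial shadow sets do not, by themselves, force $\tau$ to agree with the affine topology.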
\begin{example} \label{exa: =00005Cinfty} Let $\tau$ be the topology on $X=\left(s,t\right)\subseteq\mathbb{R}$ under which $X$ is homeomorphic to the figure $\infty$, defined as follows: Fix $a\in\left(s,t\right)$, and let \[ \mathcal{B}^{\tau}:=\left\{ \left(r_{1},r_{2}\right)\subseteq\left(s,t\right):r_{1}<r_{2}\leq a\text{ or }a\leq r_{1}<r_{2}\right\} \cup \] \[ \left\{ \left(s,r_{1}\right)\cup\left(r_{2},r_{3}\right)\cup\left(r_{4},t\right):r_{1}\leq r_{2}<a<r_{3}\leq r_{4}\right\} . \] The point $a$ corresponds to the middle point of $\infty$, where $s$ and $t$ are attached to $a$. The $\infty$ shape is formed by gluing the two ends of $\left(s,t\right)$ to the point $a$. Notice that $a$ is the only point such that $\mathcal{B}_{a}\nsim\mathcal{B}_{a}^{af}$, and $\boldsymbol{S}\left(a\right)=\left\{ a,s,t\right\} $. \end{example} \begin{example} Let $M^{2}\supseteq X=\left(0,1\right)\times\left\{ 1,2\right\} $. Let $\prec$ be the lexicographic order on $X$, and $\tau^{\prec}$ be the associated topology on $X$. One may check that for every $c\in\left(0,1\right)$, \[ \boldsymbol{S}\left(\left\langle c,1\right\rangle \right)=\boldsymbol{S}\left(\left\langle c,2\right\rangle \right)=\left\{ \left\langle c,1\right\rangle ,\left\langle c,2\right\rangle \right\} . \] \end{example} The following is immediate: \begin{fact} For every $a\in X$, (1) $a\in\boldsymbol{S}\left(a\right)$. (2) If $\tau$ is the affine topology on $X$ then $\boldsymbol{S}\left(a\right)=\left\{ a\right\} $. \end{fact} We still assume below that $X\subseteq M$ is a definable one-dimensional bounded set. \begin{lem} \label{lem:GeneralProperty2 of Y_a} For every $a\in X$, the following are equivalent: (1) $\boldsymbol{S}\left(a\right)=\left\{ a\right\} $. (2) $\mathcal{B}_{a}^{af}\preceq\mathcal{B}_{a}$.
(3) $\mathcal{B}_{a}\sim\mathcal{B}_{a}^{iso}$ or $\mathcal{B}_{a}\sim\mathcal{B}_{a}^{^{\left(\,\,\right]}}$ or $\mathcal{B}_{a}\sim\mathcal{B}_{a}^{^{\left[\,\,\right)}}$ or $\mathcal{B}_{a}\sim\mathcal{B}_{a}^{af}$ (where $\mathcal{B}_{a}^{af}=\mathcal{B}_{a}^{af}\left(X\right)$). \end{lem} \begin{proof} $(1)\Rightarrow(2)$: We prove the contrapositive. Assume $\mathcal{B}_{a}^{af}\npreceq\mathcal{B}_{a}$. Then there exists an interval $I\in\mathcal{B}_{a}^{af}$ such that for every $U\in\mathcal{B}_{a}$, $U\nsubseteq I$. Since $X$ is bounded, every $cl^{af}\left(U\right)$ is a non-empty $\tau^{af}$-closed and bounded set, and so is every $cl^{af}\left(U\setminus I\right)$. Thus, $\left\{ cl^{af}\left(U\setminus I\right):U\in\mathcal{B}_{a}\right\} $ is a definable filtered collection of $\tau^{af}$-closed and bounded non-empty sets, so by Fact \ref{lem: J}, its intersection is non-empty. Since $a\in I$, it does not belong to this intersection. Therefore, $\bigcap_{U\in\mathcal{B}_{a}}cl^{af}\left(U\setminus I\right)$ must contain an element different from $a$. Finally, note that \[ \boldsymbol{S}\left(a\right)=\bigcap_{U\in\mathcal{B}_{a}}cl^{af}\left(U\right)\supseteq\bigcap_{U\in\mathcal{B}_{a}}cl^{af}\left(U\setminus I\right), \] hence $\boldsymbol{S}\left(a\right)$ contains an element other than $a$. That is, $\left\{ a\right\} \subsetneq\boldsymbol{S}\left(a\right)$. $(2)\Rightarrow(3)$: By Lemma \ref{lem: locally iso or open or half}, $a$ satisfies exactly one of (1)-(4) there. Assume for example that it is locally right-closed, namely for every sufficiently small $U\in\mathcal{B}_{a}$ there exists an open-interval $I\ni a$ such that $I\cap U=\left(a',a\right]$. Note that this assumption implies that $\mathcal{B}_{a}\preceq\mathcal{B}_{a}^{^{\left(\,\,\right]}}$. We show that $\mathcal{B}_{a}\sim\mathcal{B}_{a}^{^{\left(\,\,\right]}}$ (the other cases are treated similarly). We need to prove $\mathcal{B}_{a}^{^{\left(\,\,\right]}}\preceq\mathcal{B}_{a}$.
Fix $\left(a-\epsilon,a\right]\in\mathcal{B}_{a}^{^{\left(\,\,\right]}}$; we show that for some $W\in\mathcal{B}_{a}$, $W\subseteq\left(a-\epsilon,a\right]$. Consider the interval $\left(a-\epsilon,a+\epsilon\right)$. By the assumption $\mathcal{B}_{a}^{af}\preceq\mathcal{B}_{a}$, there exists $U\in\mathcal{B}_{a}$ such that $U\subseteq\left(a-\epsilon,a+\epsilon\right)$. By our assumption on $a$, there must be an open-interval $I\ni a$ such that $I\cap U=\left(a',a\right]$. We can take $I$ small enough, and assume that $I=\left(a-\delta,a+\delta\right)\subseteq\left(a-\epsilon,a+\epsilon\right)$ with $a-\delta\leq a'<a$. Once again by the assumption $\mathcal{B}_{a}^{af}\preceq\mathcal{B}_{a}$, there exists $V\in\mathcal{B}_{a}$ such that $V\subseteq\left(a-\delta,a+\delta\right)$. Finally, there is $W\in\mathcal{B}_{a}$ such that $W\subseteq V\cap U$. Thus we have \[ W\subseteq V\cap U\subseteq\left(a-\delta,a+\delta\right)\cap U=\left(a',a\right]\subseteq\left(a-\epsilon,a\right]. \] Therefore $\mathcal{B}_{a}^{^{\left(\,\,\right]}}\preceq\mathcal{B}_{a}$, and thus by our assumptions we have $\mathcal{B}_{a}\sim\mathcal{B}_{a}^{^{\left(\,\,\right]}}$. $(3)\Rightarrow(1)$: Direct verification. \end{proof} \begin{lem} \label{lem: B_a < B_a^M --> Y_a =00005Cneq =00007Ba=00007D} For every $a\in X$, if $\mathcal{B}_{a}\precneqq\mathcal{B}_{a}^{af}$ then $\boldsymbol{S}\left(a\right)\neq\left\{ a\right\} $. \end{lem} \begin{proof} If $\mathcal{B}_{a}\precneqq\mathcal{B}_{a}^{af}$, then it follows that $\mathcal{B}_{a}^{af}\npreceq\mathcal{B}_{a}$, and thus by Lemma \ref{lem:GeneralProperty2 of Y_a} we have $\boldsymbol{S}\left(a\right)\neq\left\{ a\right\} $. \end{proof} \begin{lem} \label{lem: a locally isolated implies Y_a > =00007Ba=00007D} For every $a\in X$, if $a$ is locally isolated and not isolated then $\boldsymbol{S}\left(a\right)\supsetneqq\left\{ a\right\} $. \end{lem} \begin{proof} Since $a$ is not isolated, $\mathcal B_a\nsim \mathcal B_a^{iso}$.
Since $a$ is locally isolated, the other possibilities of Lemma \ref{lem:GeneralProperty2 of Y_a} (3) fail as well. Therefore, by the equivalence of Clauses (1) and (3) of that lemma, $\boldsymbol{S}\left(a\right)\supsetneqq\left\{ a\right\} $. \end{proof} Lemmas \ref{lem: (*)} - \ref{lem:GeneralProperty1 of Y_a} can be easily generalized to $X$ of arbitrary dimension $n$ (by considering basic $\tau^{af}$-open sets of dimension $n$ and their closure instead of open and closed-intervals). \begin{lem} \label{lem: (*)} For every $a,b\in X$, if $b\in\boldsymbol{S}\left(a\right)$ and $b\ne a$ then $\mathcal{B}_{b}\npreceq\mathcal{B}_{b}^{af}$. \end{lem} \begin{proof} If $\mathcal{B}_{b}\preceq\mathcal{B}_{b}^{af}$, then every $U\in\mathcal{B}_{b}$ would contain an open interval $I_{U}\ni b$ (intersected with $X$), hence $b$ could not be separated from $a$, in contradiction to the fact that $\tau$ is Hausdorff. \end{proof} \begin{lem} \label{lem:C1. b in Y_a if and only if a in cl(I)} Let $a\in X$ and $b\in M$. Then $b\in\boldsymbol{S}\left(a\right)\iff$ for every open-interval $I\ni b$, $a\in cl\left(I\cap X\right)$. \end{lem} \begin{proof} Let $b\in\boldsymbol{S}\left(a\right)=\bigcap_{U\in\mathcal{B}_{a}}cl^{af}\left(U\right)$. So for each $U\in\mathcal{B}_{a}$, $b\in cl^{af}\left(U\right)$. That is, for every $U\in\mathcal{B}_{a}$ and every $V\in\mathcal{B}_{b}^{af}$, we have $U\cap V\neq\emptyset$. Therefore, for every $U\in\mathcal{B}_{a}$ and every open-interval $I\ni b$, we have $U\cap I\neq\emptyset$. This exactly means that for every open-interval $I\ni b$, $a\in cl\left(I\cap X\right)$. For the other direction, simply reverse the steps: Assume that for every open-interval $I\ni b$ we have $a\in cl\left(I\cap X\right)$. That is, for every open-interval $I\ni b$, for every $U\in\mathcal{B}_{a}$, we have $U\cap I\neq\emptyset$. This means that for every $U\in\mathcal{B}_{a}$, $b\in cl^{af}\left(U\right)$.
Since $\boldsymbol{S}\left(a\right)=\bigcap_{U\in\mathcal{B}_{a}}cl^{af}\left(U\right)$, we get $b\in\boldsymbol{S}\left(a\right)$. \end{proof} \begin{lem} \label{lem:newC cl(I)} Let $\left(c,d\right)\subseteq X$ be an open-interval. Then \[ \left\{ x\in X:\boldsymbol{S}\left(x\right)\cap\left(c,d\right)\neq\emptyset\right\} \subseteq cl\left(\left(c,d\right)\right)\subseteq\left\{ x\in X:\boldsymbol{S}\left(x\right)\cap\left[c,d\right]\neq\emptyset\right\} . \] \end{lem} \begin{proof} The first inclusion follows from direction $\Rightarrow$ of Lemma \ref{lem:C1. b in Y_a if and only if a in cl(I)}. For the second inclusion, let $x_{0}\in cl\left(\left(c,d\right)\right)$. That is, for every $U\in\mathcal{B}_{x_{0}}$ we have $U\cap\left(c,d\right)\neq\emptyset$. Therefore, for every $U\in\mathcal{B}_{x_{0}}$ we also must have $cl^{af}\left(U\right)\cap cl^{af}\left(\left(c,d\right)\right)\neq\emptyset$. Since $cl^{af}\left(\left(c,d\right)\right)=\left[c,d\right]$, applying Fact \ref{lem: J} to the filtered collection $\left\{ cl^{af}\left(U\right)\cap\left[c,d\right]:U\in\mathcal{B}_{x_{0}}\right\} $ gives $\left(\bigcap_{U\in\mathcal{B}_{x_{0}}}cl^{af}\left(U\right)\right)\cap\left[c,d\right]\neq\emptyset$. That is, $\boldsymbol{S}\left(x_{0}\right)\cap\left[c,d\right]\neq\emptyset$. \end{proof} \begin{lem} \label{lem:C2. Y_a is finite } For every $a\in X$, $\boldsymbol{S}\left(a\right)$ is a finite set. Moreover, $|\boldsymbol{S}\left(a\right)|$ is uniformly bounded, that is, there exists $k\in\mathbb{N}$ such that for all $a\in X$, $|\boldsymbol{S}\left(a\right)|\leq k$. \end{lem} \begin{proof} Fix $a\in X$. By o-minimality and since $X\subseteq M$, for every definable $U\subseteq M$ the set $cl^{af}\left(U\right)\setminus U$ is finite; moreover, since $\left\{ cl^{af}\left(U\right)\setminus U:U\in\mathcal{B}_{a}\right\} $ is a definable family of finite sets, there is $N\in\mathbb{N}$ such that $|cl^{af}\left(U\right)\setminus U|\leq N$ for all $U\in\mathcal{B}_{a}$. Now let $x_{1},\ldots,x_{m}\in\boldsymbol{S}\left(a\right)\setminus\left\{ a\right\} $ be distinct. Since $\tau$ is Hausdorff, for each $i$ there is $U_{i}\in\mathcal{B}_{a}$ such that $x_{i}\notin U_{i}$, and since $\mathcal{B}_{a}$ is filtered, there is $U_{0}\in\mathcal{B}_{a}$ with $U_{0}\subseteq U_{1}\cap\ldots\cap U_{m}$. Each $x_{i}$ belongs to $\boldsymbol{S}\left(a\right)\subseteq cl^{af}\left(U_{0}\right)$ but not to $U_{0}$, so $x_{1},\ldots,x_{m}\in cl^{af}\left(U_{0}\right)\setminus U_{0}$, and hence $m\leq N$. It follows that $\boldsymbol{S}\left(a\right)$ is a finite set, and, choosing $U_{0}$ as above for all of $\boldsymbol{S}\left(a\right)\setminus\left\{ a\right\} $ at once, \[ |\boldsymbol{S}\left(a\right)|\leq|cl^{af}\left(U_{0}\right)\setminus U_{0}|+1, \] where the $+1$ stands for $a$ itself. Moreover, when $a$ varies over all elements of $X$ we have that $\left\{ \boldsymbol{S}\left(a\right):a\in X\right\} $ is a definable family of finite sets, hence by o-minimality it has a uniform bound. That is, there is $k\in\mathbb{N}$ such that for all $a\in X$, $|\boldsymbol{S}\left(a\right)|\leq k$. \end{proof} The generalization of Lemma \ref{lem:C2. Y_a is finite } above to arbitrary dimension would just say that for every $a\in X$, $\dim\left(\boldsymbol{S}\left(a\right)\right)<\dim X$. \begin{lem} \label{lem:GeneralProperty1 of Y_a} Denote $\boldsymbol{S}\left(a\right)=\left\{ a_{1},a_{2},\ldots,a_{r}\right\} $. Then for any open-intervals $I_{i}\ni a_{i}$, $1\leq i\leq r$, there exists $U\in\mathcal{B}_{a}$ such that $U\subseteq\bigcup_{i=1}^{r}I_{i}$. \end{lem} \begin{proof} Fix $I_{i}\ni a_{i},1\leq i\leq r$. Assume towards contradiction that for every $U\in\mathcal{B}_{a}$, $U\nsubseteq\bigcup_{i=1}^{r}I_{i}$. Therefore, $\left\{ cl^{af}\left(U\setminus\left(\bigcup_{i=1}^{r}I_{i}\right)\right):U\in\mathcal{B}_{a}\right\} $ is a definable filtered family of non-empty $\tau^{af}$-closed and bounded sets.
Thus, each set $cl^{af}\left(U\setminus\left(\bigcup_{i=1}^{r}I_{i}\right)\right)$ is definably compact, and therefore, by Fact \ref{lem: J}, their intersection is non-empty: \[ \emptyset\neq\bigcap_{U\in\mathcal{B}_{a}}cl^{af}\left(U\setminus\left(\bigcup_{i=1}^{r}I_{i}\right)\right)\subseteq\bigcap_{U\in\mathcal{B}_{a}}cl^{af}\left(U\right)=\boldsymbol{S}\left(a\right)=\left\{ a_{1},a_{2},\ldots,a_{r}\right\} . \] But since $a_{i}\in I_{i}=int^{af}\left(I_{i}\right)$ for each $1\leq i\leq r$, we have \[ a_{i}\notin\bigcap_{U\in\mathcal{B}_{a}}cl^{af}\left(U\setminus\left(\bigcup_{i=1}^{r}I_{i}\right)\right) \] for each $1\leq i\leq r$, and this is a contradiction. \end{proof} As we noted earlier, each $b$ is in $\boldsymbol{S}\left(b\right)$. Now we prove: \begin{lem} \label{lem:C3. 2pts a s.t. b in Y_a} For every $b\in M$, there are at most two points $a\in X$ other than $b$, such that $b\in\boldsymbol{S}\left(a\right)$. Moreover, if $b\in X$ is not locally isolated then there exists at most one such $a$. \end{lem} \begin{proof} If $b\in\boldsymbol{S}\left(a\right)$ then for every $U\in\mathcal{B}_{a}$ we have $b\in cl^{af}\left(U\right)$. Since $\mathcal{M}$ is o-minimal there is an open-interval $I_{U}\subseteq U$ such that $b\in cl^{af}\left(I_{U}\right)$. If $b\in X$ then there must be $U\in\mathcal{B}_{a}$ such that $b\notin U$, hence $b$ is one of the end points of $I_{U}$. If $b\notin X$ then also $b$ is an end point of $I_{U}$. Assume that there are two distinct points $a_{1},a_{2}\in X$ different from $b$, such that $b\in\boldsymbol{S}\left(a_{1}\right),\boldsymbol{S}\left(a_{2}\right)$. Then, since $\tau$ is Hausdorff, there are disjoint $U_{1}\in\mathcal{B}_{a_{1}}$, $U_{2}\in\mathcal{B}_{a_{2}}$ with $b\in cl^{af}\left(I_{U_{1}}\right)\cap cl^{af}\left(I_{U_{2}}\right)$. Since $U_{1}$ and $U_{2}$ are disjoint, so are the intervals $I_{U_{1}}$ and $I_{U_{2}}$, hence $b$ is a left end point of one of them and a right end point of the other.
For the same reason, $b$ is locally isolated, and there cannot be a third point $a_{3}$ with $b\in\boldsymbol{S}\left(a_{3}\right)$. \end{proof} \subsection{\label{subsec: Shadows-generic-points}Shadows of generic points} In this subsection we work in an elementary extension $\mathcal{N}=\left(N;<,\ldots\right)$ of $\mathcal{M}$ which is sufficiently saturated. Note that now we have $\left(X,\tau\right)=\left(X\left(N\right),\tau\left(N\right)\right)$ (instead of $\left(X,\tau\right)=\left(X\left(M\right),\tau\left(M\right)\right)$). It is easy to verify that $\tau\left(N\right)$ is still a Hausdorff topology on $X\left(N\right)$. We assume that $X$ and $\tau$ are definable over $\emptyset$. \\ We first recall the following known lemma: \begin{lem} \label{lem:Generic open interval} Let $a$ be a generic point in $X\subseteq M^n$ over $\emptyset$. Let $U\subseteq M^n$ be a definable affine-open neighborhood of $a$, defined over a set $A$ of parameters. Then there exists a definable affine-open neighborhood of $a$, $W\subseteq U$, defined over a set $B\supseteq A$, such that $\dim\left(a/A\right)=\dim\left(a/B\right)$. In particular, let $a\in M$ be a generic point over $\emptyset$. Then we can choose an arbitrarily small interval $\left(a_{1},a_{2}\right)$, $a_{1}<a<a_{2}$, such that $a$ is still generic over $\left\{ a_{1},a_{2}\right\} $. \end{lem} \begin{lem} \label{lem: a generic implies b generic} If $a\in X$ and $b\in\boldsymbol{S}\left(a\right)$, then $b$ is generic over $\emptyset$ if and only if $a$ is generic over $\emptyset$. \end{lem} \begin{proof} Since $\boldsymbol{S}\left(a\right)$ is an $a$-definable finite set by Lemma \ref{lem:C2. Y_a is finite }, $b\in\boldsymbol{S}\left(a\right)$ implies that $b\in acl\left(a\right)$. By Lemma \ref{lem:C3. 2pts a s.t. b in Y_a}, there are only finitely many points $a'\in X$ such that $b\in\boldsymbol{S}\left(a'\right)$. Since the set of all these points is definable over $b$, we also have $a\in acl\left(b\right)$.
It follows that $b$ is generic over $\emptyset$ if and only if $a$ is generic over $\emptyset$. \end{proof} \begin{lem} \label{lem:Lemma Y_b in Y_a} Let $a\in X$ be a generic point over $\emptyset$, and let $b\in\boldsymbol{S}\left(a\right)$. Then $\boldsymbol{S}\left(b\right)\subseteq\boldsymbol{S}\left(a\right)$. \end{lem} \begin{proof} Since $a$ is generic we must have $\boldsymbol{S}\left(a\right),\boldsymbol{S}\left(b\right)\subseteq X$. By Lemma \ref{lem:C2. Y_a is finite }, $\boldsymbol{S}\left(a\right)$ is a finite set. Denote $\boldsymbol{S}\left(a\right)=\left\{ a_{1},a_{2},\ldots a_{r}\right\} $ for some fixed ordering of $\boldsymbol{S}\left(a\right)$ in which $a_{1}=a$. By the genericity of $a$, there is a $\emptyset$-definable open-interval $J_{1}\subseteq N$ such that for all $x\in J_{1}$, $|\boldsymbol{S}\left(x\right)|=r$. Now, we can define $\emptyset$-definable functions $f_{i}:J_{1}\rightarrow N$, $1\leq i\leq r$, such that for every $x\in J_{1}$, we have $\boldsymbol{S}\left(x\right)=\left\{ f_{1}\left(x\right),\ldots,f_{r}\left(x\right)\right\} $ and $f_{1}\left(x\right)=x$. So each $f_{i}\left(x\right)$ is a shadow of $x$. By Lemma \ref{lem:C3. 2pts a s.t. b in Y_a}, the $f_{i}$ cannot be constant on any open-interval. Hence, by the Monotonicity Theorem for o-minimal structures~\cite{Dries} and the genericity of $a$, there is a $\emptyset$-definable open-interval $J_{2}\subseteq J_{1}$, $a\in J_{2}$, such that each $f_{i}$ is continuous (with respect to the $\tau^{af}$-topology) and strictly monotone on $J_{2}$. Therefore, $f_{i}|_{J_{2}}:J_{2}\rightarrow f_{i}\left(J_{2}\right)$ is a homeomorphism (with respect to $\tau^{af}$) for all $1\leq i\leq r$. Since $\boldsymbol{S}\left(a\right)$ is a finite set, there exists an open-interval $J_{3}\subseteq J_{2}$, $a\in J_{3}$, such that for all $1\leq i\neq j\leq r$, $f_{i}\left(J_{3}\right)\cap f_{j}\left(J_{3}\right)=\emptyset$. 
Note that we might need additional parameters to define $J_{3}$, but by Lemma \ref{lem:Generic open interval}, we can pick $J_{3}$ such that $a$ is still generic over its end points. To simplify notation we absorb these additional parameters into the language, and thus assume that $J_{3}$ is definable over $\emptyset$. Recall that $b\in\boldsymbol{S}\left(a\right)$. We now prove a claim:\\ \\ \textbf{Claim.} For every open-interval $J\subseteq J_{3}$ such that $a\in J$, there is $W\in\mathcal{B}_{b}$ such that $W\subseteq\bigcup_{i=1}^{r}f_{i}\left(J\right)$.\smallskip{} \emph{Proof. }Assume towards contradiction that for some open-interval $J\subseteq J_{3}$ such that $a\in J$, \[ (*)\text{ for every }W\in\mathcal{B}_{b},\text{ we have }W\nsubseteq\bigcup_{i=1}^{r}f_{i}\left(J\right). \] By Lemma \ref{lem:Generic open interval}, we can replace $J$ by an open-interval $J'\subseteq J$ with $a\in J'$, such that $b$ is still generic over the parameters defining $J'$. Note that we still have that for every $W\in\mathcal{B}_{b}$, $W\nsubseteq\bigcup_{i=1}^{r}f_{i}\left(J'\right)$. Thus we may assume that $b$ is generic over the parameters defining $J$. We can now formulate $(*)$ as a definable property of $b$, call it $\varphi\left(b\right)$. Since $b$ is generic over the parameters defining $J$, there exists an open-interval $I\ni b$ such that $\varphi\left(y\right)$ is true for all $y\in I$. By Lemma \ref{lem:GeneralProperty1 of Y_a}, there exists $U\in\mathcal{B}_{a}$ such that $U\subseteq\bigcup_{i=1}^{r}f_{i}\left(J\right)$. Clearly, no $y\in U$ satisfies $\varphi\left(y\right)$, hence $U\cap I=\emptyset$.
It follows that $b\notin cl^{af}\left(U\right)$, contradicting the fact that $b\in\boldsymbol{S}\left(a\right)$. $\boxempty$\\ Now we are ready to finish the proof of Lemma \ref{lem:Lemma Y_b in Y_a}. By the Claim, given an open-interval $J\subseteq J_{3}$ with $a\in J$, there is $W\in\mathcal{B}_{b}$ such that $W\subseteq\bigcup_{i=1}^{r}f_{i}\left(J\right)$. Thus, \[ cl^{af}\left(W\right)\subseteq\bigcup_{i=1}^{r}cl^{af}\left(f_{i}\left(J\right)\right). \] Recall that $\boldsymbol{S}\left(b\right)=\bigcap_{V\in\mathcal{B}_{b}}cl^{af}\left(V\right)$, and therefore \[ \boldsymbol{S}\left(b\right)\subseteq\bigcup_{i=1}^{r}cl^{af}\left(f_{i}\left(J\right)\right),\text{ for any open-interval \ensuremath{J\ni a}.} \] By the continuity of the $f_{i}$, the intersection of all $\bigcup_{i=1}^{r}cl^{af}\left(f_{i}\left(J\right)\right)$, as $J$ varies over all open-intervals containing $a$, is exactly $\left\{ a_{1},a_{2},\ldots a_{r}\right\} $. Thus, $\boldsymbol{S}\left(b\right)\subseteq\left\{ a_{1},a_{2},\ldots a_{r}\right\} =\boldsymbol{S}\left(a\right)$. \end{proof} We give an example of $a\in X$ and of $b\in\boldsymbol{S}\left(a\right)$ that are not generic over $\emptyset$, for which the result of Lemma \ref{lem:Lemma Y_b in Y_a} is not true: \begin{example} Let $X$ be an open-interval. We define a definable Hausdorff topology on $X$ by describing small enough basic neighborhoods of three distinct non-generic points $a,b,c\in X$: $U_{a}\in\mathcal{B}_{a}$ is of the form $U_{a}=\left\{ a\right\} \cup\left(b-\epsilon,b\right)$, $U_{b}\in\mathcal{B}_{b}$ is of the form $U_{b}=\left[b,b+\epsilon\right)\cup\left(c,c+\epsilon\right)$, and $c$ is an isolated point. Every other point $x\in X$ keeps its affine neighborhoods. One can verify that $\boldsymbol{S}\left(a\right)=\left\{ a,b\right\} $ and $\boldsymbol{S}\left(b\right)=\left\{ b,c\right\} $, so $\boldsymbol{S}\left(b\right)\nsubseteq\boldsymbol{S}\left(a\right)$ and thus Lemma \ref{lem:Lemma Y_b in Y_a} is not true in this case.
\end{example} \subsection{$\boldsymbol{\tau\subseteq\tau^{af}|_{X}}$ (every $\boldsymbol{\tau}$-open set is also $\boldsymbol{\tau^{af}}$-open in $\boldsymbol{X}$)} The purpose of this subsection is to analyze a special case, in which $\tau$ coarsens the affine topology on $X$. Namely, every $\tau$-open set can be written as the intersection of $X$ and a definable $\tau^{af}$-open subset of $M$. We aim to prove the next theorem: \begin{thm} \label{Let--and-finite} Assume that $\tau\subseteq\tau^{af}|_{X}$, that is, for all $x\in X$, $\mathcal{B}_{x}\preceq\mathcal{B}_{x}^{af}$. Then there are at most finitely many points $a\in X$ such that $\mathcal{B}_{a}\nsim\mathcal{B}_{a}^{af}$. Equivalently, there are at most finitely many $a\in X$ such that $\mathcal{B}_{a}^{af}\npreceq\mathcal{B}_{a}$. \end{thm} We first introduce: \begin{defn} Let $X=\left(s_{1},t_{1}\right)\sqcup\ldots\sqcup\left(s_{l},t_{l}\right)\sqcup F$ be a definable subset of $M$, where $F$ is finite. If $s_{i}\notin F$, then each set of the form $\left(s_{i},r\right)$ for $s_{i}<r\leq t_{i}$, is called a \emph{left generalized ray of $X$.} If $t_{i}\notin F$, then each set of the form $\left(r,t_{i}\right)$ for $s_{i}\leq r<t_{i}$, is called a \emph{right} \emph{generalized ray of $X$.} A left generalized ray and a right generalized ray are both called \emph{generalized rays}. \end{defn} For example, if $M=\mathbb{R}$ and $X=\left(3,\pi\right]=\left(3,\pi\right)\cup\left\{ \pi\right\} $, then $\left(3,3.1\right)$ is a left generalized ray, but $\left(3.1,\pi\right)$ is not a generalized ray. \begin{rem*} Note that if $U$ is a definable subset of $X$ and $b\in cl^{af}\left(U\right)\setminus X$, then $b$ is an endpoint of a generalized ray contained in $U$. \end{rem*} We will see that whenever $a\in X\setminus F$ has $\mathcal{B}_{a}\nsim\mathcal{B}_{a}^{af}$, every neighborhood $U\in\mathcal{B}_{a}$ contains a generalized ray.
As a result, we conclude that there are at most $2l$ points $a\in X$ such that $\mathcal{B}_{a}\nsim\mathcal{B}_{a}^{af}$: Indeed, if there were more than $2l$ such points, then by the pigeonhole principle, two of those points would have intersecting generalized rays, in contradiction to the fact that $\tau$ is Hausdorff. Therefore, in order to prove Theorem \ref{Let--and-finite}, it is sufficient to prove: \begin{lem} \label{Let--point-ray} Let $a\in X\setminus F$ be a point such that $\mathcal{B}_{a}\nsim\mathcal{B}_{a}^{af}$. Then every neighborhood $U\in\mathcal{B}_{a}$ contains a generalized ray. \end{lem} \begin{proof} Since we are working under the assumption that every $\tau$-open set is also $\tau^{af}$-open in $X$, $\mathcal{B}_{a}\nsim\mathcal{B}_{a}^{af}$ implies $\mathcal{B}_{a}\precneqq\mathcal{B}_{a}^{af}$. Therefore, by Lemma \ref{lem: B_a < B_a^M --> Y_a =00005Cneq =00007Ba=00007D}, we have $\boldsymbol{S}\left(a\right)\neq\left\{ a\right\} $. Let $b\in\boldsymbol{S}\left(a\right)$, $b\neq a$. We claim that $b\notin X$: Every $U\in\mathcal{B}_{a}$ is also a $\tau^{af}$-open neighborhood of $a$. So if we had $b\in X$, then since $b\in cl^{af}\left(U\right)$ for every $U\in\mathcal{B}_{a}$, $a$ and $b$ could not be separated, contradicting the fact that $\tau$ is Hausdorff. Therefore $b\notin X$, and as remarked above $U$ must contain a generalized ray.~$\square$ \medskip{} This ends the proof of Theorem \ref{Let--and-finite}. \end{proof} \subsection{Almost $\boldsymbol{\tau\subseteq\tau^{af}|_{X}}$ } The next technical lemma states two equivalent conditions that clarify what we mean by ``almost $\tau\subseteq\tau^{af}|_{X}$''. \begin{lem} \label{lem:almost T^M|_X} Let $X\subseteq M$ be a definable set, and let $\tau$ be a definable Hausdorff topology on $X$. Let $G\subseteq X$ be a finite set. The following are equivalent: (1) For every $a\in X\setminus G$, $\mathcal{B}_{a}^{\tau}\preceq\mathcal{B}_{a}^{af}$.
(2) Every $\tau$-open subset of $X\setminus G$ is also $\tau^{af}$-open in $X$ (that is, $\tau|_{X\setminus G}\subseteq\tau^{af}|_{X}$). \end{lem} \begin{proof} $(1)\Rightarrow(2)$: Take $U'\in\tau|_{X\setminus G}$. That is, there exists $U\in\tau$ such that $U'=U\cap\left(X\setminus G\right)$. Since $G$ is finite and $\tau$ is Hausdorff, $X\setminus G$ is $\tau$-open, and hence $U'\in\tau$. So for every $a\in U'$, there is a basic neighborhood $W_{a}\in\mathcal{B}_{a}^{\tau}$ such that $W_{a}\subseteq U'$. By (1), there is $V_{a}\in\mathcal{B}_{a}^{af}$ such that $V_{a}\subseteq W_{a}\subseteq U'\subseteq X$. Therefore, $U'=\bigcup_{a\in U'}V_{a}$ is $\tau^{af}$-open in $X$, and hence $U'\in\tau^{af}|_{X}$. $(2)\Rightarrow(1)$: Fix $a\in X\setminus G$, and let $U\in\mathcal{B}_{a}^{\tau}$. Since $G$ is $\tau$-closed, $U\setminus G\subseteq X\setminus G$ is $\tau$-open, that is, $U\setminus G\in\tau|_{X\setminus G}$. By (2), we have $U\setminus G\in\tau^{af}|_{X}$. Because $a\in U\setminus G$, there is a basic neighborhood $V\in\mathcal{B}_{a}^{af}$ such that $V\subseteq\left(U\setminus G\right)\subseteq U$. Thus, $\mathcal{B}_{a}^{\tau}\preceq\mathcal{B}_{a}^{af}$. \end{proof} We proceed with some general lemmas and a theorem. \begin{lem} \label{lem:tau-open if and only if <-open} Assume that $X$ is a subset of $M$ and that there is a finite set $G\subseteq X$ such that on $X\setminus G$, every $\tau$-open set is $\tau^{af}$-open in $X$. Then there exists a definable set $X'\subseteq M$ and a definable topology $\tau'$ on $X'$ such that $\left(X,\tau\right)$ is definably homeomorphic to $\left(X',\tau'\right)$, and on each open-interval $I'\subseteq X'$, a subset of $I'$ is $\tau'$-open if and only if it is $\tau^{af}$-open. \end{lem} \begin{proof} Denote $X=\left(s_{1},t_{1}\right)\sqcup\ldots\sqcup\left(s_{l},t_{l}\right)\sqcup F$ where $F$ is finite and $s_{i},t_{i}\in M$.
Since on $X\setminus G$ every $\tau$-open set is $\tau^{af}$-open in $X$, by applying Theorem \ref{Let--and-finite} to $\left(X\setminus G,\tau|_{X\setminus G}\right)$ we get that the set \[ A:=\left\{ a\in X\setminus G:\mathcal{B}_{a}\nsim\mathcal{B}_{a}^{af}\right\} =\left\{ a\in X\setminus G:\mathcal{B}_{a}\precneqq\mathcal{B}_{a}^{af}\right\} \] is finite. Denote $H:=F\cup G\cup A$, and fix the obvious cell decomposition of $M$ compatible with $\left\{ \left(s_{1},t_{1}\right),\ldots,\left(s_{l},t_{l}\right),H\right\} $. Define $X'$ as follows: Leave each $1$-cell as it is, and map $H$ to a disjoint set $H'$ of $\tau^{af}$-isolated points. So $X'=\left(q_{1},r_{1}\right)\sqcup\ldots\sqcup\left(q_{k},r_{k}\right)\sqcup H'$ for some $k\geq l$ and a finite set of points $H':=f\left(H\right)=f\left(F\cup G\cup A\right)$. This gives us a definable bijection $f:X\rightarrow X'$. Define the topology on $X'$ to be the obvious topology induced by $\tau$ and $f$, that is, $\tau':=\left\{ f\left(U\right):U\in\tau\right\} $. Thus, $f:\left(X,\tau\right)\longrightarrow\left(X',\tau'\right)$ is by definition a definable homeomorphism. Since every point $x$ of a $1$-cell $\left(q_{i},r_{i}\right)$ lies outside $H$, we have $\mathcal{B}_{x}\sim\mathcal{B}_{x}^{af}$, and it follows that for every subset $U'\subseteq\left(q_{i},r_{i}\right)$, $U'$ is $\tau'$-open if and only if $U'$ is $\tau^{af}$-open. \end{proof} In the process of proving Theorem \ref{thm: TFAE affine} below, we move from a definable topology on a one-dimensional $X\subseteq M^{n}$ to a definably homeomorphic topology on $X'\subseteq M$. While some properties are obviously invariant under definable homeomorphism, others might depend on the embedding of $X$ in $M^{n}$. We thus first need: \begin{lem} \label{lem: Kobi's} Let $X\subseteq M^{n}$ and $X'\subseteq M^{k}$ be definable one-dimensional sets.
If $f:X\to X'$ is a definable bijection, then there is a finite set $G\subseteq X$ such that for all $a\in X\setminus G$, the family of sets \[ f(\mathcal{B}_{a}^{af}(X))=\left\{ f(U\cap X):U\text{ is }\tau^{af}\text{-open in \ensuremath{M^{n}}},a\in U\right\} \] forms a basis for the neighborhoods of $f(a)$ in the affine topology $\tau^{af}$ on $X'$. \end{lem} \begin{proof} By basic properties of definable functions in o-minimal structures, there is a finite set $G\subseteq X$ such that $f:X\setminus G\to X'\setminus f(G)$ is a definable homeomorphism, with respect to the affine topology on both $X\setminus G$ and $X'\setminus f(G)$. The result follows. \end{proof} \section{The main theorems} \begin{thm} \label{thm: TFAE affine} Let $X\subseteq M^{n}$ be a definable bounded set, $\dim X=1$, and let $\tau$ be a definable Hausdorff topology on $X$. Then the following are equivalent: \end{thm} \begin{enumerate} \item \label{enu: homeomorphic affine}$\left(X,\tau\right)$ is definably homeomorphic to a definable set with its affine topology. \item \label{enu: subsets components}Every definable subset of $X$ has a finite number of definably connected components, with respect to $\tau$. \item \label{enu: coarsens affine}For all but finitely many $x\in X$, $\mathcal{B}_{x}\preceq\mathcal{B}_{x}^{af}$. \item \label{enu: cofinite affine}There is a finite set $G\subseteq X$ such that on $X\setminus G$ every $\tau$-open set is $\tau^{af}$-open in $X$. \end{enumerate} \begin{proof} We observe first that if $f:(X,\tau)\longrightarrow(X',\tau')$ is a definable homeomorphism, then for every $a\in X$, $f$ sends the basis of $\tau$-neighborhoods $\mathcal{B}_{a}$ to a basis of $\tau'$-neighborhoods of $f(a)\in X'$. By Lemma \ref{lem: Kobi's}, there exists a finite set $G\subseteq X$ such that for all $a\in X\setminus G$, $f(\mathcal{B}_{a}^{af}(X))=\mathcal{B}_{f(a)}^{af}(X')$.
It follows that for all $a\in X\setminus G$, \[ \mathcal{B}_{a}\preceq\mathcal{B}_{a}^{af}(X)\Longleftrightarrow\mathcal{B}_{f(a)}\preceq\mathcal{B}_{f(a)}^{af}(X'). \] Thus, property (\ref{enu: coarsens affine}) holds for $X$ if and only if it holds for $X'$. By the same lemma, (\ref{enu: cofinite affine}) holds for $X$ if and only if it holds for $X'$. Properties (\ref{enu: homeomorphic affine}) and (\ref{enu: subsets components}) are clearly invariant under definable homeomorphisms. The above discussion, together with Lemma \ref{lem:tau-open if and only if <-open}, allows us to assume that $X$ is a bounded subset of $M$. $(1)\Rightarrow(2)$: If $\left(X,\tau\right)$ is definably homeomorphic to a definable set with its affine topology, then by o-minimality, every definable subset of $X$ has a finite number of definably connected components. $(2)\Rightarrow(3)$: Assume towards contradiction that there is an infinite definable set of points $A\subseteq X$ such that $\mathcal{B}_{a}\npreceq\mathcal{B}_{a}^{af}$ for all $a\in A$. By Corollary \ref{cor: a locally iso=00005Cleft=00005Cright}, every $a\in A$ is either locally isolated or locally right-closed or locally left-closed. Notice that these properties are all definable properties of $a$. If there are infinitely many locally isolated points in $A$, then the set \linebreak{} $\left\{ a\in A:a\text{ is locally isolated}\right\} $ is an infinite definable set, and so it contains an interval. Notice that for every locally isolated point $a\in A$ and small enough $U_{a}\in\mathcal{B}_{a}$ there exists an open-interval $I\ni a$ such that $U_{a}\cap I=\left\{ a\right\} $. Fix $a_{0}\in A$ generic over $\emptyset$, a small enough $U_{a_{0}}\in\mathcal{B}_{a_{0}}$, and an open-interval $I_{0}\ni a_{0}$ such that $U_{a_{0}}\cap I_{0}=\left\{ a_{0}\right\} $.
Now, similarly to the proof of Lemma \ref{lem:Lemma Y_b in Y_a}, we can fix an open-interval $J_{3}\subseteq I_{0}\cap A$ of locally isolated points such that for every $a\in J_{3}$, for every small enough $U_{a}\in\mathcal{B}_{a}$, we have $U_{a}\cap J_{3}=\left\{ a\right\} $. Therefore, $J_{3}$ is a definable infinite set that is totally disconnected, contradicting (2). If there are infinitely many locally right-closed points in $A$, then the set \linebreak{} $\left\{ a\in A:a\text{ is locally right-closed}\right\} $ is a definable infinite set. Similarly to the above, there exists an open-interval $J$ such that every $a\in J$ is locally right-closed, and we can obtain such a $J$ such that for every $a\in J$ there is $U_{a}\in\mathcal{B}_{a}$ with $U_{a}\cap J=\left(a',a\right]$. Therefore, once again $J$ is a definable infinite set for which the only definably connected sets are singletons, contradicting (2). The remaining case is treated similarly. Notice that the proof of Lemma \ref{lem:Lemma Y_b in Y_a} is carried out in an elementary extension $\mathcal{N}$ of $\mathcal{M}$. However, the existence of an interval $J_{3}$ with all of these properties is easily seen to be a first-order property of the structure. Thus, after possibly quantifying over parameters, we obtain the existence of such an interval in the structure $\mathcal{M}$ in which we are working, and obtain a contradiction in $\mathcal{M}$. Thus we have shown that the set $A\subseteq X$ of all points $a\in X$ such that $\mathcal{B}_{a}\npreceq\mathcal{B}_{a}^{af}$ must be a finite set. $(3)\Rightarrow(4)$: Assume that for all but finitely many $x\in X$ we have $\mathcal{B}_{x}\preceq\mathcal{B}_{x}^{af}$, and denote $G=\left\{ x\in X:\mathcal{B}_{x}\npreceq\mathcal{B}_{x}^{af}\right\} $. Thus, for the finite set $G\subseteq X$ we have, by Lemma \ref{lem:almost T^M|_X}, that on $X\setminus G$, every $\tau$-open set is $\tau^{af}$-open in $X$.
$(4)\Rightarrow(1)$: We give a direct proof by showing how to embed $\left(X,\tau\right)$ in $M^{3}$. By Lemma \ref{lem:tau-open if and only if <-open}, we assume that $X\subseteq M$ is a finite union of disjoint open-intervals and $\tau^{af}$-isolated points: \[ X=\left(q_{1},r_{1}\right)\sqcup\ldots\sqcup\left(q_{k},r_{k}\right)\sqcup H, \] such that for every $U\subseteq\left(q_{i},r_{i}\right)$, $U$ is $\tau$-open if and only if $U$ is $\tau^{af}$-open. Because the only definable bijections at hand are those given by the underlying group operation $+$, we need to be slightly careful in our construction of the embedding. We first identify $H=\{h_1,\ldots, h_n\}$ with a finite subset of points in $M^3$ of the form $\langle p_i,0,0\rangle $, with $p_1<p_2<\cdots<p_n$ in $M$, such that for all $i_1,i_2=1,\ldots, n$, and $j=1,\ldots, k$, $2k|p_{i_1}-p_{i_2}|<r_j-q_j$. We would like to understand the $\tau$-neighborhoods of points in $H$. Consider $h\in H$ and $U\in\mathcal{B}_{h}$. Since $\tau$ is Hausdorff, the finite set $H$ is $\tau$-closed, and thus $U\setminus H$ is a $\tau$-open set. So by our assumption, $U\setminus H$ is also $\tau^{af}$-open. Thus, every small enough $U\in\mathcal{B}_{h}$ is of the form $U=V\sqcup\left\{ h\right\} $ for a certain $\tau^{af}$-open $V\subseteq U\setminus H$. If $h$ is not $\tau$-isolated, then similarly to the proof of Lemma \ref{Let--point-ray}, up to equivalence of bases every small enough $U\in\mathcal{B}_{h}$ is a finite union of generalized rays and the singleton $\left\{ h\right\} $ itself: \[ U=\left(\bigcup_{1\leq j\leq k}\left(q_{i_{j}},q_{i_{j}}+\epsilon\right)\right)\cup\left(\bigcup_{1\leq j\leq k}\left(r_{i_{j}}-\epsilon,r_{i_{j}}\right)\right)\sqcup\left\{ h\right\} . \] For every $i\in\left\{ 1,\ldots,k\right\} $, consider $\left(q_{i},r_{i}\right)$ and its two generalized rays. 
Assume first that for every $h\in H$ there exists a neighborhood $W\in\mathcal{B}_{h}$ that does not intersect $\left(q_{i},r_{i}\right)$. In this case we identify $\left(q_{i},r_{i}\right)$ with an interval on the $x$-axis in $M^{3}$ of the same length, whose affine closure does not intersect $H$. Assume now that there exists $h'\in H$ such that every neighborhood $W\in\mathcal{B}_{h'}$ intersects $\left(q_{i},r_{i}\right)$. As we pointed out above, it follows that every $W\in\mathcal{B}_{h'}$ contains a generalized ray, say a left generalized ray in $\left(q_{i},r_{i}\right)$. In this case we definably identify $\left(q_{i},r_{i}\right)$ with a piecewise-linear curve $C_{i}$ in $M^{3}$ such that $\left(h',0,0\right)$ is the endpoint of $C_{i}$ which corresponds to $q_{i}$. Note that since $\tau$ is Hausdorff, if such $h'$ exists then it is unique. If there is also $h''\in H$ such that every neighborhood $W\in\mathcal{B}_{h''}$ contains a right generalized ray of $\left(q_{i},r_{i}\right)$, then we choose the curve $C_{i}$ such that its other endpoint is $\left(h'',0,0\right)$. We may need to stretch, shrink or twist $C_{i}$ so it fits properly in $M^{3}$, without intersecting any other point of $H$ or any other image of an interval $\left(q_{j},r_{j}\right)$. All of the above can be done definably in $\mathcal{M}$. This is possible since both the set $H$ and the number $k$ are finite, and we chose the $p_{i}$'s to be sufficiently close to each other. If it happens that $h'=h''$, then in $M^{3}$, both sides of $C_{i}$ will be attached to $\left(h',0,0\right)$, closing a piecewise-linear loop. It may also happen that we have to attach both sides of another curve $C_{j}$ to this same $\left(h',0,0\right)$, and in this case we obtain several loops attached to the same point $\left(h',0,0\right)$.
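To illustrate the case $h'=h''$, here is a simple instance (the specific choices of $X$ and of the neighborhoods below are given only as an illustration, and are not used in the proof). Take $M=\mathbb{R}$, $X=\left(0,1\right)\sqcup\left\{ 2\right\} $ and $h=2$, let the points of $\left(0,1\right)$ have their affine neighborhoods, and let the basic $\tau$-neighborhoods of $h$ be the sets \[ U_{\epsilon}=\left\{ h\right\} \sqcup\left(0,\epsilon\right)\sqcup\left(1-\epsilon,1\right),\qquad0<\epsilon<1/2. \] Then $\tau$ is a definable Hausdorff topology, every $W\in\mathcal{B}_{h}$ contains both a left and a right generalized ray of $\left(0,1\right)$, so here $h'=h''=h$, and both endpoints of the curve $C_{1}$ are attached to the point corresponding to $h$; the image is a piecewise-linear loop, and indeed $\left(X,\tau\right)$ is definably homeomorphic to such a loop with its affine topology.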
It is straightforward to verify that by doing the above we get a definable embedding $f:\left(X,\tau\right)\rightarrow\left(M^{3},\tau^{af}\right)$, which is a definable homeomorphism onto its image. Therefore, the proof of this direction is complete. This ends the proof of Theorem \ref{thm: TFAE affine}. \end{proof} \subsection{Main theorem} Our goal is to prove Theorem \ref{thm: Condition =0000235 affine}, which yields an additional equivalent condition to the ones in Theorem \ref{thm: TFAE affine} for when a definable topology is definably homeomorphic to an affine one. Note that unlike condition (2) of Theorem \ref{thm: TFAE affine}, our new condition (2) of Theorem \ref{thm: Condition =0000235 affine} will only require $X$ itself to have finitely many definably connected components. On the way to proving the theorem we shall gain a better understanding of general definable one-dimensional Hausdorff topologies. \begin{thm} \label{thm: regular+connected implies affine} Let $X\subseteq M^{n}$ be a definable bounded set, $\dim X=1$, and let $\tau$ be a definable Hausdorff, regular topology on $X$. If $\left(X,\tau\right)$ is definably connected, then $\left(X,\tau\right)$ is definably homeomorphic to a definable set with its affine topology. \end{thm} \begin{proof} As before, we may assume that $X$ is a subset of $M$ of the form $X=\left(s_{1},t_{1}\right)\sqcup\ldots\sqcup\left(s_{l},t_{l}\right)\sqcup F$ with $F$ finite. We first prove this theorem in our sufficiently saturated elementary extension $\mathcal{N}$ of $\mathcal{M}$, and afterwards we explain why it holds also for our original $\mathcal{M}$. Note that the Hausdorffness and regularity of $\left(X,\tau\right)$ can be formulated in a first-order way, thus $\left(X\left(N\right),\tau\left(N\right)\right)$ is also Hausdorff and regular. Let us see that $X\left(N\right)$ is definably connected: Assume towards contradiction that $X\left(N\right)$ is not definably connected.
Let $Z=\varphi\left(N,\bar{c}\right)$ be a definable non-trivial clopen subset of $X\left(N\right)$. It is easy to see that there is a formula $\psi\left(\bar{y}\right)$, $|\bar{y}|=|\bar{c}|$, such that a tuple $\bar{c}'$ from $N$ satisfies $\psi$ if and only if $\varphi\left(N,\bar{c}'\right)$ is a non-trivial clopen subset of $X\left(N\right)$. Since $\mathcal{N}\vDash\exists\bar{y}\,\psi\left(\bar{y}\right)$, also $\mathcal{M}\vDash\exists\bar{y}\,\psi\left(\bar{y}\right)$. So for some $\bar{d}\subseteq M$, $\varphi\left(M,\bar{d}\right)$ is a non-trivial clopen subset of $X\left(M\right)$, and this is a contradiction. Therefore, $X\left(N\right)$ must be definably connected. \smallskip{} We begin with a claim: \\ \\ \textbf{Claim 1.} There are at most finitely many locally isolated points in $X$.\smallskip{} \emph{Proof. }Assume towards contradiction that there is $a\in X$ generic over $\emptyset$, which is locally isolated. Let $U\in\mathcal{B}_{a}$ and $I\ni a$ be an open-interval such that $U\cap I=\left\{ a\right\} $. As in the proof of Lemma \ref{lem:Lemma Y_b in Y_a}, we may assume that there are definable continuous strictly monotone functions $f_{1},\ldots,f_{r}:I\rightarrow X$, such that for all $x\in I$, $\boldsymbol{S}\left(x\right)=\left\{ f_{1}\left(x\right),\ldots,f_{r}\left(x\right)\right\} $ with $f_{1}\left(x\right)=x$, and $f_{i}\left(I\right)\cap f_{j}\left(I\right)=\emptyset$ for $1\leq i\neq j\leq r$. Since we assume that $\left(X,\tau\right)$ is definably connected, $a$ cannot be isolated. Thus, by Lemma \ref{lem: a locally isolated implies Y_a > =00007Ba=00007D} we conclude that $\boldsymbol{S}\left(a\right)\supsetneqq\left\{ a\right\} $, hence $r\geq2$. Let $b=f_{2}\left(a\right)\in\boldsymbol{S}\left(a\right)$, for $f_{2}$ as above.
We show that there is no $W\in\mathcal{B}_{a}$ such that $cl\left(W\right)\subseteq U$: Because $\boldsymbol{S}\left(a\right)\cap f_{2}\left(I\right)=\left\{ b\right\} $, for every $W\in\mathcal{B}_{a}$ there must be some interval of the form $\left(b',b\right)$ or $\left(b,b''\right)$ that is contained in $W\cap f_{2}\left(I\right)$. Without loss of generality, $\left(b',b\right)\subseteq W\cap f_{2}\left(I\right)$. By Lemma \ref{lem:newC cl(I)}, \[ cl\left(\left(b',b\right)\right)\supseteq\left\{ x\in X:\boldsymbol{S}\left(x\right)\cap\left(b',b\right)\neq\emptyset\right\} . \] By the definition of $f_{2}$, we also have \[ \left\{ x\in X:\boldsymbol{S}\left(x\right)\cap\left(b',b\right)\neq\emptyset\right\} \supseteq f_{2}^{-1}\left(\left(b',b\right)\right). \] It follows that $cl\left(\left(b',b\right)\right)$ contains an infinite subset of $I$, but $U\cap I=\left\{ a\right\} $, so $cl\left(W\right)$ cannot be contained in $U$ for $W\in\mathcal{B}_{a}$. That is, $\tau$ is not regular, and this is a contradiction. $\boxempty$ \\ \\ \textbf{Claim 2.} There are at most finitely many $x\in X$ such that $\mathcal{B}_{x}\sim\mathcal{B}_{x}^{^{\left[\,\,\right)}}$ or $\mathcal{B}_{x}\sim\mathcal{B}_{x}^{^{\left(\,\,\right]}}$.\smallskip{} \emph{Proof.} Assume towards contradiction that $J\subseteq X$ is an open-interval such that for every $x\in J$, $\mathcal{B}_{x}\sim\mathcal{B}_{x}^{^{\left[\,\,\right)}}$ (without loss of generality). Thus we can assume that for every $x\in J$, we have $\mathcal{B}_{x}=\mathcal{B}_{x}^{^{\left[\,\,\right)}}$. Notice that although $J$ is an open set and each interval $\left[c,d\right)\subseteq J$ is open as well, we cannot conclude immediately that $\left[c,d\right)$ is also closed, because we do not know that $X\setminus\left[c,d\right)$ is open. For this, we must use the regularity of $\tau$.
Fix $x_{0}\in J$ generic over $\emptyset$, and let $U:=\left[x_{0},z_{0}\right)\in\mathcal{B}_{x_{0}}$ be such that $U\subseteq J$. By the regularity of $\tau$ there exists $W=\left[x_{0},y_{0}\right)\in\mathcal{B}_{x_{0}}$ such that $cl\left(W\right)\subseteq U$. Note that since $cl\left(W\right)\setminus W\subseteq U\subseteq J$ and for every $x\in J$ we have $\mathcal{B}_{x}=\mathcal{B}_{x}^{^{\left[\,\,\right)}}$, we must have $cl\left(W\right)=W=\left[x_{0},y_{0}\right)$ (because every $a\in U\setminus\left[x_{0},y_{0}\right)$ has an open neighborhood disjoint from $\left[x_{0},y_{0}\right)$). Therefore, $cl\left(W\right)$ is also open in $\tau$, and hence it is clopen. This is a contradiction to $\left(X,\tau\right)$ being definably connected. $\boxempty$ \\ We proceed with our proof of Theorem \ref{thm: regular+connected implies affine}. We assume that $\left(X,\tau\right)$ is not definably homeomorphic to any definable set with its affine topology, and we show that $X$ contains a non-trivial definable clopen subset. In fact, given Claim 1 and Claim 2 we shall not make further use of regularity. \\ Since condition (\ref{enu: coarsens affine}) of Theorem \ref{thm: TFAE affine} fails, there are infinitely many $x\in X$ with $\mathcal{B}_{x}\npreceq\mathcal{B}_{x}^{af}$; in particular, there is a point $a\in X$ generic over $\emptyset$ such that $\mathcal{B}_{a}\npreceq\mathcal{B}_{a}^{af}$. By Corollary \ref{cor: a locally iso=00005Cleft=00005Cright}, $a$ is locally isolated or locally right-closed or locally left-closed. By Claim 1, $a$ is locally right-closed or left-closed. If $\boldsymbol{S}\left(a\right)=\left\{ a\right\} $, then we must have either $\mathcal{B}_{a}\sim\mathcal{B}_{a}^{^{\left[\,\,\right)}}$ or $\mathcal{B}_{a}\sim\mathcal{B}_{a}^{^{\left(\,\,\right]}}$ by Lemma~\ref{lem:GeneralProperty2 of Y_a}. Neither case is possible, by Claim~2. Thus, we assume from now on that for any generic $x\in X$ such that $\mathcal{B}_{x}\npreceq\mathcal{B}_{x}^{af}$, the set $\boldsymbol{S}\left(x\right)$ properly contains $\left\{ x\right\} $. \\ \\ \textbf{Claim 3.
}$|\boldsymbol{S}\left(a\right)|=2$, and if $b\in\boldsymbol{S}\left(a\right)$ then $\boldsymbol{S}\left(a\right)=\boldsymbol{S}\left(b\right)=\left\{ a,b\right\} $.\smallskip{} \emph{Proof. }Since $\boldsymbol{S}\left(a\right)$ properly contains $\left\{ a\right\} $, we have $|\boldsymbol{S}\left(a\right)|\geq2$. Let $b\in\boldsymbol{S}\left(a\right)$, $b\neq a$. Note that since $a$ is generic over $\emptyset$, by Lemma \ref{lem: a generic implies b generic} so is $b$. By Lemma \ref{lem: (*)}, $\mathcal{B}_{b}\npreceq\mathcal{B}_{b}^{af}$, so $\boldsymbol{S}\left(b\right)\supsetneqq\left\{ b\right\} $. Since $b$ is generic, it follows from Lemma \ref{lem:Lemma Y_b in Y_a} that $\boldsymbol{S}\left(b\right)\subseteq\boldsymbol{S}\left(a\right)$. Assume towards contradiction that $\boldsymbol{S}\left(b\right)\ne\left\{ a,b\right\} $. Hence, there is $c\in\boldsymbol{S}\left(b\right)$ (so also in $\boldsymbol{S}\left(a\right)$), $c\neq a,b$. By Lemma \ref{lem: a generic implies b generic}, $c$ is generic over $\emptyset$, so by Lemma \ref{lem:C3. 2pts a s.t. b in Y_a} it must also be locally isolated, contradicting Claim 1. Therefore, it must be that $\boldsymbol{S}\left(b\right)=\left\{ a,b\right\} $. By interchanging the roles of $a$ and $b$ in the above, we also get $\boldsymbol{S}\left(a\right)=\left\{ a,b\right\} $.~$\boxempty$ \\ We say that a point $x\in X$ \emph{inhabits the left side} of a point $y\in X$ if for every $U\in\mathcal{B}_{x}$ there exists $y'\in X$, $y'<y$, such that $\left(y',y\right)\subseteq U$. We say that $x$ \emph{inhabits the right side} of $y\in X$ if for every $U\in\mathcal{B}_{x}$ there exists $y''\in X$, $y<y''$, such that $\left(y,y''\right)\subseteq U$.
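For instance (an illustration only), in the example following Lemma \ref{lem:Lemma Y_b in Y_a}, where every small enough $U_{a}\in\mathcal{B}_{a}$ has the form $U_{a}=\left\{ a\right\} \cup\left(b-\epsilon,b\right)$, the point $a$ inhabits the left side of $b$: every such $U_{a}$ contains the interval $\left(b-\epsilon,b\right)$, so one may take $y'=b-\epsilon$. Similarly, since every small enough $U_{b}\in\mathcal{B}_{b}$ contains an interval $\left(b,b+\epsilon\right)$, the point $b$ inhabits the right side of itself.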
We note several easy observations for a point $a$ that is generic over $\emptyset$, not locally isolated, and such that $\mathcal{B}_{a}\mathcal{\npreceq B}_{a}^{af}$: (1) If $a$ inhabits the left side or the right side of $b$ then $b\in\boldsymbol{S}\left(a\right)$. (2) Conversely, if $b\in\boldsymbol{S}\left(a\right)$ then $a$ inhabits the left side or the right side (or~both) of $b$. (For the case $b=a$ we use here the fact that $a$ is not locally isolated). (3) $a$ cannot inhabit both sides of $b$. Indeed, since $b$ is generic over $\emptyset$, it is not locally isolated by Claim 1, and since $\tau$ is Hausdorff it must be possible to separate $a$ and $b$. (4) $a$ inhabits the left side (the right side) of $b$ if and only if $b\in\boldsymbol{S}\left(a\right)$ and $b$ is locally left-closed (locally right-closed). \\ By Claim 3, $\boldsymbol{S}\left(a\right)=\left\{ a,b\right\} =\boldsymbol{S}\left(b\right)$ for $b\neq a$, and from its proof we deduce that $a$ inhabits exactly one side of $a$ and exactly one side of~$b$, and so does $b$. As we have seen before, we can find an interval $J\ni a$ and definable continuous and strictly monotone functions $f_{1},f_{2}:J\rightarrow X$ with $f_{1}\left(J\right)\cap f_{2}\left(J\right)=\emptyset$, such that for every $x\in J$, $\boldsymbol{S}\left(x\right)=\left\{ f_{1}\left(x\right)=x,f_{2}\left(x\right)\right\} $. Moreover, the genericity of $a$ also implies that we may choose $J$ such that all $x\in J$ are ``of the same form'' as $a$.
Namely, \\ \\ (i) Every $x\in J$ is locally left-closed or every $x\in J$ is locally right-closed.\smallskip{} (ii) Every $y\in f_{2}\left(J\right)$ is locally left-closed or every $y\in f_{2}\left(J\right)$ is locally right-closed.\\ Without loss of generality, assume that every $x\in J$ is locally left-closed and every $y\in f_{2}\left(J\right)$ is locally right-closed (the other cases are treated similarly). By (4), $x$ inhabits the right side of $y:=f_{2}\left(x\right)$ and $y$ inhabits the left side of $x$. \\ \\ \textbf{Claim 4.} Under these assumptions, $f_{2}$ is strictly increasing.\smallskip{} \emph{Proof.} Assume towards a contradiction that $f_{2}$ is strictly decreasing. That is, for every $c,d\in J$, if $c<d$ then $f_{2}\left(c\right)>f_{2}\left(d\right)$. Fix $c\in J$ generic over $\emptyset$. By our assumption, $c$ is locally left-closed, that is, for every small enough $U\in\mathcal{B}_{c}$ there exists an open interval $I_{U}\ni c$ and a point $x>c$ such that $U\cap I_{U}=\left[c,x\right)$. By our assumption, $f_{2}\left(c\right)$ is locally right-closed. So $f_{2}$ being strictly decreasing implies that for every $W\in\mathcal{B}_{f_{2}\left(c\right)}$ and $x>c$, we have \[ W\cap f_{2}\left(\left[c,x\right)\right)=W\cap\left(f_{2}\left(x\right),f_{2}\left(c\right)\right]\neq\emptyset. \] Note that, since $f_{2}$ is strictly decreasing and $f_{2}\left(c\right)\in\boldsymbol{S}\left(c\right)$, for every $U\in\mathcal{B}_{c}$ there must be $y'<f_{2}\left(c\right)$ such that $\left(y',f_{2}\left(c\right)\right]\subseteq U$. Thus, for every such $U$ and every $W\in\mathcal{B}_{f_{2}\left(c\right)}$, we must have $W\cap U\supseteq W\cap\left(y',f_{2}\left(c\right)\right]\neq\emptyset$.
This contradicts the fact that $\tau$ is Hausdorff, and therefore $f_{2}$ must be strictly increasing.~$\boxempty$ \\ Recall that for every $x\in J$ we have $\boldsymbol{S}\left(x\right)=\left\{ x,y\right\} =\boldsymbol{S}\left(y\right)$, and as we just showed $f_{2}$ is strictly increasing. By Lemma \ref{lem:GeneralProperty1 of Y_a}, we know that for every small enough $U\in\mathcal{B}_{x}$, $U\subseteq\left(J\cup f_{2}\left(J\right)\right)$, and for every small enough $W\in\mathcal{B}_{y}$, $W\subseteq\left(J\cup f_{2}\left(J\right)\right)$. Therefore, under our assumptions we get that for every $x\in J$, \[ \mathcal{B}_{x}\sim\left\{ \left[x,x''\right)\cup\left(y,y''\right):x''\in J,x<x'',y''=f_{2}\left(x''\right)\right\} , \] and for every $y\in f_{2}\left(J\right)$, \[ \mathcal{B}_{y}\sim\left\{ \left(x',x\right)\cup\left(y',y\right]:y'\in f_{2}\left(J\right),y'<y,x'=f_{2}^{-1}\left(y'\right)\right\} . \] So by replacing the bases we can assume that \[ \left(\ast\right)\begin{cases} \mathcal{B}_{x}=\left\{ \left[x,x''\right)\cup\left(y,y''\right):x''\in J,x<x'',y''=f_{2}\left(x''\right)\right\} \\ \mathcal{B}_{y}=\left\{ \left(x',x\right)\cup\left(y',y\right]:y'\in f_{2}\left(J\right),y'<y,x'=f_{2}^{-1}\left(y'\right)\right\} . \end{cases} \] For every $x\in J$ and $y=f_{2}\left(x\right)\in f_{2}\left(J\right)$, we consider the definable families: \[ \mathcal{B}_{x}^{f}:=\left\{ U\cap f_{2}\left(J\right):U\in\mathcal{B}_{x}\right\} \text{ and\,\, }\mathcal{B}_{y}^{f^{-1}}:=\left\{ W\cap J:W\in\mathcal{B}_{y}\right\} . \] Thus we have: \\ \\ (iii) $\mathcal{B}_{x}^{f}=\left\{ \left(y,y''\right):y''\in f_{2}\left(J\right),y<y''\right\} $.\smallskip{} (iv) $\mathcal{B}_{y}^{f^{-1}}=\left\{ \left(x',x\right):x'\in J,x'<x\right\} $.
\\ We are now ready to prove that $X$ is not definably connected. \\ \\ \textbf{Claim 5.} For $a''\in J$ with $a''>a$, the set $Z:=\left[a,a''\right)\cup\left(f_{2}\left(a\right),f_{2}\left(a''\right)\right]$ is clopen.\smallskip{} \emph{Proof.} By $\left(\ast\right)$, $Z$ is open as the union of two basic open sets. We explain why $Z$ is also closed: By general properties of closure and since each singleton is closed, we have \[ cl\left(Z\right)=\left\{ a\right\} \cup cl\left(\left(a,a''\right)\right)\cup cl\left(\left(f_{2}\left(a\right),f_{2}\left(a''\right)\right)\right)\cup\left\{ f_{2}\left(a''\right)\right\} .
\] Thus, by Lemma \ref{lem:newC cl(I)} we deduce that \begin{align*} &\left\{ a\right\} \cup\left\{ x\in X:\boldsymbol{S}\left(x\right)\cap\left(a,a''\right)\ne\emptyset\right\} \cup\left\{ x\in X:\boldsymbol{S}\left(x\right)\cap\left(f_{2}\left(a\right),f_{2}\left(a''\right)\right)\ne\emptyset\right\} \cup\left\{ f_{2}\left(a''\right)\right\} \\ &\qquad\subseteq cl\left(Z\right)\subseteq\\ &\left\{ a\right\} \cup\left\{ x\in X:\boldsymbol{S}\left(x\right)\cap\left[a,a''\right]\ne\emptyset\right\} \cup\left\{ x\in X:\boldsymbol{S}\left(x\right)\cap\left[f_{2}\left(a\right),f_{2}\left(a''\right)\right]\ne\emptyset\right\} \cup\left\{ f_{2}\left(a''\right)\right\} . \end{align*} The difference between the right-hand side and the left-hand side is $\left\{ a'',f_{2}\left(a\right)\right\} $. Let us show that these two points are not in $cl\left(Z\right)$: For $a''$, we know that for $a'''\in J$ with $a'''>a''$, the set $\left[a'',a'''\right)\cup\left(f_{2}\left(a''\right),f_{2}\left(a'''\right)\right]$ is an open neighborhood of $a''$ which does not intersect $Z$. Thus, $a''\notin cl\left(Z\right)$. Similarly, the point $f_{2}\left(a\right)$ has an open neighborhood of the form $\left[a',a\right)\cup\left(f_{2}\left(a'\right),f_{2}\left(a\right)\right]$, which does not intersect $Z$. Thus, we have $f_{2}\left(a\right)\notin cl\left(Z\right)$.
Notice that $\boldsymbol{S}\left(x\right)\cap\left(a,a''\right)\ne\emptyset$ if and only if $x\in\left(a,a''\right)$, and $\boldsymbol{S}\left(x\right)\cap\left(f_{2}\left(a\right),f_{2}\left(a''\right)\right)\ne\emptyset$ if and only if $f_{2}\left(x\right)\in\left(f_{2}\left(a\right),f_{2}\left(a''\right)\right)$. Therefore, we conclude that \[ cl\left(Z\right)=\left\{ a\right\} \cup\left(a,a''\right)\cup\left(f_{2}\left(a\right),f_{2}\left(a''\right)\right)\cup\left\{ f_{2}\left(a''\right)\right\} =\left[a,a''\right)\cup\left(f_{2}\left(a\right),f_{2}\left(a''\right)\right]=Z, \] which proves that $Z$ is clopen.~$\boxempty$ \\ By Claim 5, $X$ contains the definable clopen set $Z$. That is, $\left(X,\tau\right)$ is not definably connected. This proves Theorem \ref{thm: regular+connected implies affine} for our sufficiently saturated $\mathcal{N}$. \\ \\ \textbf{Claim 6.} Theorem \ref{thm: regular+connected implies affine} holds also in $\mathcal{M}$.\smallskip{} \emph{Proof.} At the beginning of the proof of Theorem \ref{thm: regular+connected implies affine} we saw that if $\left(X\left(M\right),\tau\left(M\right)\right)$ is Hausdorff, regular and definably connected, then so is $\left(X\left(N\right),\tau\left(N\right)\right)$. Therefore, if $\mathcal{M}$ satisfies the conditions of Theorem \ref{thm: regular+connected implies affine}, then $\mathcal{N}$ satisfies them as well.
In this case, we get from the theorem that there exists a definable bijection $f_{\bar{c}}:X\left(N\right)\rightarrow S_{\bar{c}}$, where $S_{\bar{c}}\subseteq N^{k}$ for some $k$, such that $f_{\bar{c}}\,,S_{\bar{c}}$ are definable over parameters $\bar{c}\subseteq N$, and $f_{\bar{c}}$ is a homeomorphism of $\left(X\left(N\right),\tau\left(N\right)\right)$ and $S_{\bar{c}}$ with the affine topology. We can now write a formula $\psi\left(\bar{y}\right)$, $|\bar{y}|=|\bar{c}|$, such that for every tuple $\bar{c}'$ from $N$, \begin{align*} \mathcal{N}\vDash\psi\left(\bar{c}'\right)\Longleftrightarrow{} & f_{\bar{c}'}\text{ is a (definable) homeomorphism between }\left(X\left(N\right),\tau\left(N\right)\right)\\ & \text{and a (definable) }S_{\bar{c}'}\subseteq N^{k}\text{ with the affine topology}. \end{align*} Since $\mathcal{N}\vDash\exists\bar{y}\,\psi\left(\bar{y}\right)$, so does $\mathcal{M}$, and hence there exists $\bar{d}\subseteq M$ such that $f_{\bar{d}}:X\left(M\right)\rightarrow S_{\bar{d}}\left(M\right)$ is the desired homeomorphism. That is, Theorem \ref{thm: regular+connected implies affine} holds also for $\mathcal{M}$. This ends the proof of both Claim 6 and Theorem \ref{thm: regular+connected implies affine}. \end{proof} We can now add to Theorem \ref{thm: TFAE affine} another equivalent condition: \begin{thm} \label{thm: Condition =0000235 affine} Let $X\subseteq M^{n}$ be a definable bounded set, $\dim X=1$, and let $\tau$ be a definable Hausdorff topology on $X$. Then the following are equivalent: \end{thm} \begin{enumerate} \item $\left(X,\tau\right)$ is definably homeomorphic to a definable set with its affine topology. \item $\tau$ is regular, and $\left(X,\tau\right)$ has finitely many definably connected components. \end{enumerate} \begin{proof} $(1)\Rightarrow(2)$: This follows from the basic theory of o-minimal structures (as discussed in \cite{Dries}).
$(2)\Rightarrow(1)$: By assumption, $X$ is a disjoint union of definable sets $X_{1},\ldots,X_{m}$, each open (hence closed) in $X$, and definably connected with respect to $\tau$. Since $X$ is regular, so is each $X_{i}$ (with the induced $\tau$ topology). By Theorem \ref{thm: regular+connected implies affine}, each $X_{i}$ is definably homeomorphic to some $Y_{i}\subseteq M^{k_{i}}$ with its affine topology. Let $k:=\max_{1\leq i\leq m}k_{i}+1$, and embed each $Y_{i}$ in $M^{k}$. Furthermore, we may choose the sets $Y_{i}$ such that \[ cl^{af}\left(Y_{i}\right)\cap cl^{af}\left(Y_{j}\right)=\emptyset \] for $i\neq j$. It follows that $X$ is definably homeomorphic to $Y:=\bigsqcup_{i=1}^{m}Y_{i}$. \end{proof} A combination of Theorem \ref{thm: TFAE affine} and Theorem \ref{thm: Condition =0000235 affine} gives us the Main theorem, as stated in the introduction:\\ \\ \textbf{Main theorem.}\emph{ Let $\mathcal{M}$ be an o-minimal expansion of a linearly ordered group. Let $X\subseteq M^{n}$ be a definable bounded set with $\dim X=1$, and let $\tau$ be a definable Hausdorff topology on $X$. Then the following are equivalent: } \begin{enumerate} \item \emph{$\left(X,\tau\right)$ is definably homeomorphic to a definable subset of $M^{k}$ for some $k$, with its affine topology. } \item \emph{There is a finite set $G\subseteq X$ such that every $\tau$-open subset of $X\setminus G$ is open with respect to the affine topology on $X\setminus G$. } \item \emph{Every definable subset of $X$ has finitely many definably connected components, with respect to $\tau$.} \item \emph{$\tau$ is regular and $X$ has finitely many definably connected components, with respect to $\tau$.}\\ \end{enumerate} We end with another theorem that is an immediate consequence of our work towards Theorem \ref{thm: Condition =0000235 affine}: \begin{thm} Let $X\subseteq M^{n}$ be a definable bounded set with $\dim X=1$, and let $\tau$ be a definable Hausdorff topology on $X$.
Assume that $X$ has finitely many locally isolated points, and finitely many points $x$ such that $\mathcal{B}_{x}\sim\mathcal{B}_{x}^{^{\left[\,\,\right)}}$ or $\mathcal{B}_{x}\sim\mathcal{B}_{x}^{^{\left(\,\,\right]}}$. If, in addition, $\left(X,\tau\right)$ has finitely many definably connected components, then $\left(X,\tau\right)$ is definably homeomorphic to a definable set with its affine topology. \end{thm} \begin{proof} The proof follows from Theorem \ref{thm: Condition =0000235 affine}, since in the proof of Theorem \ref{thm: regular+connected implies affine} (which leads to Theorem \ref{thm: Condition =0000235 affine}), we only used regularity in Claim 1 and in Claim 2. \end{proof} \subsection{\label{subsec:Examples}Example } The next example is of a definable Hausdorff topology that is definably connected and not regular, and thus cannot be definably homeomorphic to a definable set with its affine topology. It demonstrates the necessity of two different assumptions in two of our principal theorems: For Theorem \ref{thm: TFAE affine}, it shows that for the direction $(2)\Rightarrow(1)$ it would not have been enough to only assume that $\left(X,\tau\right)$ is definably connected. For Theorem \ref{thm: regular+connected implies affine}, it demonstrates that it is not enough to only assume that $\left(X,\tau\right)$ is Hausdorff and definably connected, and it is necessary to add the assumption that $\tau$ is regular. \begin{example} Let $X=\left(0,1\right)\cup\left(1,2\right)\cup\left(2,3\right)\subseteq M$ be the union of three disjoint open intervals.
We define a definable topology $\tau$ on $X$ via families of basic neighborhoods of points: For $a\in\left(0,1\right)$, take \[ \mathcal{B}_{a}:=\left\{\left\{ a\right\} \cup\left(a+1,b\right)\cup\left(c,a+2\right):a+1<b<2\,,2<c<a+2\right\} , \] for $b\in\left(1,2\right)$, take \[ \mathcal{B}_{b}:=\left\{ \left(b',b\right]:1<b'<b\right\} , \] and for $c\in\left(2,3\right)$, take \[ \mathcal{B}_{c}:=\left\{ \left[c,c''\right):c<c''<3\right\} . \] One can check that this topology is not regular. This fact, as well as the points of $\left(1,2\right)\cup\left(2,3\right)$ having half-open interval neighborhoods, guarantees that $\left(X,\tau\right)$ is not definably homeomorphic to a definable set with its affine topology. The space $\left(X,\tau\right)$ is definably connected since the only definable clopen subsets of $X$ are $\emptyset$ and $X$ itself. Nevertheless, $X$ contains definable subsets that are totally definably disconnected (for instance, the only definably connected components of the interval $\left(1,2\right)$ are its singletons). \end{example} \subsection{Some final comments} \begin{enumerate} \item Note that the Main Theorem fails, as stated, without the assumption that $X$ is bounded: Let $\mathcal{M}=\left(\mathbb{R};<,+\right)$ and let $X\subseteq\mathbb{R}^{2}$ be the union of the line $\mathbb{R}\times\left\{ 0\right\} $ and two other points $p_{1},p_{2}\in\mathbb{R}^{2}$. We can endow $X$ with a topology $\tau$ which identifies it with $\left\{ -\infty\right\} \cup\mathbb{R}\cup\left\{ +\infty\right\} $, with $p_1,p_2$ identified with $-\infty$, $+\infty$, respectively. It is easy to verify that clauses (2)--(4) of Theorem \ref{thm: TFAE affine} hold. Let us see that (1) fails. Indeed, it is not hard to verify that $\left(X,\tau\right)$ is definably compact (note that here we make an exception and use the term ``definably compact'' with respect to the topology $\tau$).
Therefore, if $\left(X,\tau\right)$ were definably homeomorphic to a definable set $X'\subseteq\mathbb{R}^{n}$ with its affine topology, then $X'$ would have to be bounded. However, $X$ is unbounded, and in $\mathcal{M}$ there is no definable bijection between bounded and unbounded sets. \item Naturally, the ultimate goal of this project is to understand definable topologies in arbitrary dimension. Towards this goal some modifications are needed in the Main Theorem. For example, it is not hard to see that Clause (2) cannot hold as such, since the set of points $a\in X$ at which the $\tau$-topology is different from the affine topology can be infinite (but probably of dimension strictly smaller than $\dim X$). In any case, we do not know if the equivalence of (1), (3) and (4) in that theorem is still true for arbitrary dimension. \end{enumerate}
\section{Introduction} We consider the stationary radiative transfer equation \begin{align}\label{eq:rte} s\cdot\nabla \phi(r,s) + \mu(r) \phi(r,s) = \sigma(r) \Big( \int_\mathcal{S} \phi(r,s')\,{\rm ds}' - \phi(r,s)\Big) + f(r,s), \end{align} which models the equilibrium distribution of an ensemble of mutually non-interacting particles in an isotropically scattering medium. The function $\phi(r,s)$ here denotes the density of particles at a point $r\in\mathcal{R}$ moving in direction $s\in\mathcal{S}={\mathbb S}^{d-1}$, and the symbol $\nabla$ denotes derivatives with respect to the spatial variables $r$ only. The medium is characterized by rates $\mu$ and $\sigma$ of absorption and scattering. Interior sources are represented by $f$ and the inflow of particles over the boundary is modeled by \begin{align}\label{eq:rte_bc} \phi(r,s)=g(r,s) \qquad\text{for } r\in{\partial \R}, s\cdot n(r)<0, \end{align} where $n(r)$ is the unit outward normal at a point $r\in{\partial \R}$. Problems of the form \eqref{eq:rte}--\eqref{eq:rte_bc} arise in various applications, e.g., in neutron physics \cite{CaseZweifel67}, in medical imaging \cite{Arridge99}, in astrophysics \cite{Cercignani:1988,Chandrasekhar60,Peraiah04}, or climatology \cite{Kondratyev69}. In this paper, we are interested in the determination of the material properties, encoded in the spatially varying parameters $\mu$ and $\sigma$ from measurements \begin{align}\label{eq:obs} B\phi = \int_{s\cdot n(r)>0} \phi(r,s) s\cdot n \,{\rm ds} \end{align} of the outflow of particles over the boundary. This parameter identification problem can be formally written as an abstract operator equation \begin{align}\label{eq:ip_informal} B S(\mu,\sigma) = \mathcal{M}, \end{align} where $\mathcal{M}$ is a given measurement, and $S$ denotes the parameter-to-solution map defined by $S(\mu,\sigma)=\phi$ solving \eqref{eq:rte}--\eqref{eq:rte_bc}. Note that $S$ and $\mathcal{M}$ also depend on the sources $f$ and $g$. 
Due to the many important applications, the inverse problem \eqref{eq:ip_informal} has been investigated intensively in the literature. To give an impression of the basic properties of the problem, let us summarize some of the most important results: The parameters $\mu$, $\sigma$ can be uniquely identified if sufficiently many measurements are available \cite{ChoulliStefanov98}. In particular, multiple excitations $f$ and $g$ are required. The stability of the identification process with respect to perturbations in the data has been investigated in \cite{Bal08,BalJol08,McDowallStefanovTamason2010}. In general, the stability is very weak. Various methods to numerically solve the parameter identification problem have been proposed as well \cite{Dorn98,KloHie99,WrightArridgeSchweiger07}. It is by now well understood that solving \eqref{eq:ip_informal} is an ill-posed problem. For a stable solution, we will therefore consider Tikhonov regularization; to be precise, we define approximate solutions via minimization problems of the form \begin{align}\label{eq:tikhonov} &\| BS(\mu,\sigma)- \mathcal{M}\|_{L^q({\partial \R})}^q + \alpha \|\mu-\mu_0\|^p_{L^p(\mathcal{R})} + \alpha \|\sigma-\sigma_0\|^p_{L^p(\mathcal{R})} \to \min_{(\mu,\sigma) \in D(S)}, \end{align} where $\mu_0$ and $\sigma_0$ denote some a-priori information about the unknown parameters $\mu$, $\sigma$. The domain $D(S)$ will be defined below. This can be seen as an optimal control problem governed by an integro-partial differential equation. The main focus of this manuscript is to establish the existence of minimizers for \eqref{eq:tikhonov} and thus to ensure the well-posedness of the regularized problem. We will also show that \eqref{eq:tikhonov} is a regularization method in the sense of \cite{EHN96}. In addition, we will investigate iterative algorithms to approximate the minimizers.
The key ingredient for our arguments is a careful analysis of the mapping properties of the parameter-to-solution map $S$. We will establish its strong and weak continuity with respect to the corresponding $L^p$ and $L^q$ topologies, and derive various differentiability results. Let us mention that for particular choices of the parameter and measurement spaces, the stable solution of the inverse problem \eqref{eq:ip_informal} by Tikhonov regularization has been considered already in \cite{DiDoNaPaSi02,ThesisSch,TangHanHan2013}. Our results here are more general and require a much finer analysis of the operator $S$. We will make more detailed comments on this in the following sections. As a numerical method for minimizing the Tikhonov functional, we consider a variation of the iteratively regularized Gau\ss-Newton method. This method has been investigated in the framework of regularization methods in \cite{BakKok04,KaNeuSche08}. Here, we investigate its properties for the minimization of the regularized functional. The outline of the manuscript is as follows: in Section~\ref{sec:prelim}, we introduce the necessary notation and recall some existence results for the transport equation. After fixing the domain of $S$, we prove our main results about continuity, weak continuity, and differentiability of $S$ in Section~\ref{sec:properties}. We turn back to the optimal control problem in Section~\ref{sec:OCproblem} and investigate iterative methods for its solution in Section~\ref{sec:iterative}. For illustration of our theoretical considerations, some numerical results are presented in Section~\ref{sec:numerics}, and we conclude with a short summary. \section{Preliminaries} \label{sec:prelim} Let us introduce the basic notions and the functional analytic setting in which we investigate the solvability of the radiative transfer problem. The following physically reasonable and quite general assumptions will be used throughout the paper.
\begin{enumerate} \label{ass:1} \itemsep1ex \item[(A1)] $\mathcal{R}\subset\mathbb{R}^3$ is a bounded domain with Lipschitz boundary. \item[(A2)] $\mu\in L^\infty(\mathcal{R})$ and $0\leq \mu(r) \leq \overline{\mu}$ for a.e.\@ $r \in \mathcal{R}$ with some constant $\overline{\mu} \ge 0$. \item[(A3)] $\sigma\in L^\infty(\mathcal{R})$ and $0 \leq \sigma(r) \leq \overline{\sigma}$ for a.e.\@ $r \in \mathcal{R}$ with some constant $\overline{\sigma} \ge 0$. \end{enumerate} Since ${\partial \R}$ is Lipschitz continuous, we can define for almost every $r \in {\partial \R}$ the outward unit normal vector $n=n(r)$. We denote by $\Gamma:={\partial \R} \times \mathcal{S}$ the boundary of the tensor product domain $\mathcal{R}\times\S$ and decompose $\Gamma$ into an in- and outflow part by \begin{align}\label{eq:def_dD} \Gamma_{\pm} :=\{(r,s)\in{\partial \R}\times \mathcal{S}: \pm s\cdot n(r) >0\}. \end{align} We will search for solutions of the radiative transfer problem \eqref{eq:rte}--\eqref{eq:rte_bc} in the space \begin{align*} \mathbb{V}^p &:= \{ v\in L^p(\mathcal{R}\times\S): s\cdot \nabla v\in L^p(\mathcal{R}\times\S) \text{ and } v\in L^p(\Gamma_-; |s\cdot n|)\} \end{align*} which is equipped with the graph norm \begin{align*} \| v\|^p_{\mathbb{V}^p} &:= \|v\|^p_{L^p(\mathcal{R}\times\S)} + \|s\cdot\nabla v\|^p_{L^p(\mathcal{R}\times\S)} + \|v\|_{L^p(\Gamma_-;|s\cdot n|)}^p. \end{align*} Here $L^p(\Gamma_-;|s\cdot n|)$ denotes a weighted $L^p$-space with weighting function $|s\cdot n|$. Note that for $1 \le p \le \infty$, the spaces $\mathbb{V}^p$ are complete and that $\mathbb{V}^2$ is a Hilbert space. Due to the boundedness of the spatial domain $\mathcal{R}$, the embedding $\mathbb{V}^p \hookrightarrow \mathbb{V}^q$ is continuous for $q \le p$, but neither $\mathbb{V}^p \hookrightarrow \mathbb{V}^q$ nor $\mathbb{V}^p \hookrightarrow L^p(\mathcal{R}\times\S)$ are compact. 
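To sketch a minimal instance of the continuity of the embedding $\mathbb{V}^p\hookrightarrow\mathbb{V}^q$ for $q\le p$ (a standard H\"older argument, using only that the measure of $\mathcal{R}\times\S$ is finite and that $|s\cdot n|\le 1$ on $\Gamma$), note that for $v\in\mathbb{V}^p$,
\begin{align*}
\|v\|_{L^q(\mathcal{R}\times\S)} \le |\mathcal{R}\times\S|^{\frac{1}{q}-\frac{1}{p}}\, \|v\|_{L^p(\mathcal{R}\times\S)},
\end{align*}
and the same estimate applies to $s\cdot\nabla v$ and, with the weighted measure $|s\cdot n|\,{\rm ds}\,{\rm dr}$ on $\Gamma_-$, to the boundary term; summing the three contributions bounds $\|v\|_{\mathbb{V}^q}$ by a constant multiple of $\|v\|_{\mathbb{V}^p}$.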
For functions $u\in \mathbb{V}^p$ and $v \in \mathbb{V}^q$ with $1/p+1/q=1$, we obtain the integration-by-parts formula \begin{align} \label{eq:green} (s \cdot \nabla u, v)_{\mathcal{R}\times\S} = -(u, s \cdot \nabla v)_{\mathcal{R}\times\S} + (s \cdot n\; u, v)_{\Gamma}. \end{align} As usual, the symbol $(u,v)_D$ is used for the integral of the product of two functions over some domain $D$. Applying this formula to $u \in \mathbb{V}^p$ and $v = u |u|^{p-2}$ yields \begin{align} \label{eq:outflow} \|u\|_{L^p(\Gamma_+;|s \cdot n|)}^p \le \|u\|_{L^p(\Gamma_-;|s \cdot n|)}^p + p\|u\|_{L^p(\mathcal{R}\times\S)}^{p-1} \|s \cdot \nabla u \|_{L^p(\mathcal{R}\times\S)}, \end{align} i.e., the outflow trace of functions in $\mathbb{V}^p$ is well-defined and the trace operator is continuous from $\mathbb{V}^p$ to $L^p(\Gamma_+;|s\cdot n|)$. Via H\"older's inequality, we immediately obtain \begin{lemma} \label{lem:obs} The operator $B:\mathbb{V}^p \to L^p({\partial \R})$ defined in \eqref{eq:obs} is linear and bounded. \end{lemma} Let us introduce the transport operator \begin{align*} \mathcal{A}:\mathbb{V}^p \to L^p(\mathcal{R}\times\S), \quad (\mathcal{A}\phi)(r,s):= s\cdot\nabla\phi(r,s) \end{align*} which models the flow of particles in direction $s$, and the averaging operator \begin{align*} \Theta:L^p(\mathcal{R}\times\S) \to L^p(\mathcal{R}\times\S), \quad (\Theta\phi)(r,s):= \int_\mathcal{S} \phi(r,s')\,{\rm ds}', \end{align*} describing the scattering of particles by the background medium. The collision operator \begin{align*} \mathcal{C} = \mu I + \sigma (I - \Theta) \end{align*} then models the total interaction of particles with the medium. Note that $\mathcal{C}$ depends linearly on the parameters $\mu$ and $\sigma$, and we will sometimes write $\mathcal{C}(\mu,\sigma)$ to emphasize this dependence.
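As a consistency check for the positivity statement in the next lemma, assume that the angular measure is normalized so that $\int_\mathcal{S}{\rm ds}=1$ (otherwise $\Theta$ carries a factor $|\mathcal{S}|^{-1}$); then for $\phi\in L^2(\mathcal{R}\times\S)$ and a.e.\@ fixed $r$, the Cauchy--Schwarz inequality gives $\big(\int_\mathcal{S}\phi\,{\rm ds}\big)^2\le\int_\mathcal{S}|\phi|^2\,{\rm ds}$, and hence
\begin{align*}
(\mathcal{C}\phi,\phi)_{\mathcal{R}\times\S}
= \int_{\mathcal{R}\times\S} \mu\,|\phi|^2 \,{\rm d}(r,s)
+ \int_\mathcal{R} \sigma \Big( \int_\mathcal{S} |\phi|^2\,{\rm ds} - \Big(\int_\mathcal{S}\phi\,{\rm ds}\Big)^2 \Big)\,{\rm dr}
\;\ge\; 0,
\end{align*}
since $\mu,\sigma\ge0$ by (A2)--(A3).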
For later reference, let us summarize some basic properties of the operators, which follow more or less directly from their definition; see \cite{DL93vol6,EggSch10:3} for details. \begin{lemma} Let (A1)--(A3) hold. Then the operators $\mathcal{A} :\mathbb{V}^p\to L^p(\mathcal{R}\times\S)$, $\Theta: L^p(\mathcal{R}\times\S)\to L^p(\mathcal{R}\times\S)$, and $\mathcal{C}:L^p(\mathcal{R}\times\S) \to L^p(\mathcal{R}\times\S)$ are bounded linear operators. Moreover, $\Theta$ and $\mathcal{C}$ are self-adjoint and $\mathcal{C}$ is positive on $L^2(\mathcal{R}\times\S)$. \end{lemma} As already mentioned, the energy spaces $\mathbb{V}^p$ are not compactly embedded in $L^p(\mathcal{R}\times\S)$. The following result, known as the averaging lemma, serves as a substitute and will play a key role in our analysis. \begin{lemma}\label{lem:scatteringcompactness} For any $1<p<\infty$ the averaging operator $\Theta:\mathbb{V}_0^p\to L^p(\mathcal{R})$ is compact. Here $\mathbb{V}_0^p$ denotes the subspace of $\mathbb{V}^p$ with vanishing inflow boundary conditions. \end{lemma} We refer to \cite{GoLiPeSe88} for a proof of this result. Let us mention that averaging lemmas also play a key role for the spectral analysis of the radiative transfer equation. Using the operators defined above, the radiative transfer problem \eqref{eq:rte}--\eqref{eq:rte_bc} can be written in compact form: Given $f \in L^p(\mathcal{R}\times\S)$ and $g \in L^p(\Gamma_-;|s\cdot n|)$, find $\phi \in \mathbb{V}^p$ such that \begin{alignat}{3} \label{eq:op_eq} \mathcal{A} \phi + \mathcal{C} \phi &= f &\quad&\text{in } \mathcal{R}\times\S, & \\ \phi &= g &\quad &\text{on } \Gamma_-. \label{eq:op_bc} \end{alignat} The two equations have to hold in the sense of $L^p(\mathcal{R}\times\S)$ and $L^p(\Gamma_-;|s\cdot n|)$, respectively. The existence and uniqueness of solutions for this problem is established next. \begin{theorem}\label{thm:existence} Let (A1)--(A3) hold.
Then for any $f\in L^p(\mathcal{R}\times\S)$ and $g\in L^p(\Gamma_-; |s\cdot n|)$, $1\leq p \leq\infty$, the radiative transfer problem \eqref{eq:op_eq}--\eqref{eq:op_bc} has a unique solution $\phi\in \mathbb{V}^p$ and \begin{align*} \|\phi\|_{\mathbb{V}^p}\leq C \big(\|f\|_{L^p(\mathcal{R}\times\S)} + \|g\|_{L^p(\Gamma_-;|s\cdot n|)}\big) \end{align*} with a constant $C$ depending only on $\diam\mathcal{R}$, $p$ and the bounds $\overline{\mu}$ and $\overline{\sigma}$ in (A2)--(A3). \end{theorem} For a proof of this and further results, let us refer to \cite{DL93vol6,EggSch13LP} and the references given there. \section{Properties of the parameter-to-solution map} \label{sec:properties} In this section, we investigate the mapping properties of the parameter-to-solution map \begin{align} S : D(S) \subset L^p(\mathcal{R}) \times L^p(\mathcal{R}) \to \mathbb{V}^p, \qquad (\mu,\sigma) \mapsto \phi, \end{align} where $\phi$ is the solution of \eqref{eq:op_eq}--\eqref{eq:op_bc} for given data $f$ and $g$. The domain of $S$ is defined by \begin{align*} D(S):=\{ (\mu,\sigma)\in L^p(\mathcal{R})\times L^p(\mathcal{R}): \text{(A2)--(A3) hold} \}. \end{align*} Note that the operator $S$ also depends on the choice of $p$ and on the data $f$ and $g$. For ease of presentation, we will emphasize this dependence only if necessary. \subsection{Continuity}\label{sec:weakClosedness} Let us start by presenting some results about the continuity of $S$ with respect to the strong and weak topologies. The latter case will play a fundamental role in the analysis of the optimal control problem later on. \begin{theorem}[Continuity]\label{thm:continuity} Let $1<p,q<\infty$ and assume that $f \in L^q(\mathcal{R}\times\S)$ and $g\in L^q(\Gamma_-;|s\cdot n|)$. Then $S$ is continuous as a mapping from $L^p(\mathcal{R}) \times L^p(\mathcal{R})$ to $\mathbb{V}^q$.
\end{theorem} \begin{proof} Let $(\mu,\sigma)\in D(S)$ and $\{(\mu^n,\sigma^n)\} \subset D(S)$ such that $(\mu^n,\sigma^n)\to (\mu,\sigma)$ in $L^p(\mathcal{R})\times L^p(\mathcal{R})$. Furthermore, denote by $\phi$ and $\phi^n$ the solutions of \eqref{eq:op_eq}--\eqref{eq:op_bc} with parameters $(\mu,\sigma)$ and $(\mu^n,\sigma^n)$, respectively. Then \begin{align*} \big(\mathcal{A} + \mathcal{C}(\mu^n,\sigma^n)\big) (\phi^n-\phi) = (\mu-\mu^n)\phi + (\sigma-\sigma^n)(\phi-\Theta\phi). \end{align*} Since $\mu^n\to \mu$ in $L^p(\mathcal{R})$, we can choose a subsequence, again denoted by $\{\mu^n\}$, such that $\mu^n\to \mu$ a.e.\@ in $\mathcal{R}$ and consequently $\mu^n\phi \to \mu \phi$ a.e.\@ in $\mathcal{R}\times\S$. Since $|\mu^n \phi|\leq C |\phi|$ is uniformly bounded, Lebesgue's dominated convergence theorem ensures $(\mu-\mu^n)\phi \to 0$ in $L^q(\mathcal{R}\times\S)$. Similarly, $(\sigma-\sigma^n)(\phi-\Theta\phi)\to 0$ in $L^q(\mathcal{R}\times\mathcal{S})$. The uniform a-priori estimate of Theorem~\ref{thm:existence} then yields $\phi^n\to\phi$ in $\mathbb{V}^q$. Since this argument applies to every subsequence of $\{(\mu^n,\sigma^n)\}$ and always produces the same limit $\phi$, the whole sequence $\{\phi^n\}$ converges to $\phi$. \end{proof} We will show next that the parameter-to-solution map is also continuous in the weak topology. This directly implies the weak lower semi-continuity of the Tikhonov functional and thus yields the well-posedness of the regularization method. The proof of the result heavily relies on the compactness provided by the averaging lemma. \begin{theorem}[Weak continuity]\label{thm:weak_continuity} Let $1<p,q<\infty$ and assume that $f \in L^q(\mathcal{R}\times\S)$ and $g\in L^q(\Gamma_-;|s\cdot n|)$. Then $S$ is weakly continuous, i.e., if $ D(S) \ni (\mu^n,\sigma^n)\rightharpoonup (\mu,\sigma)$ in $L^p(\mathcal{R})\times L^p(\mathcal{R})$, then $(\mu,\sigma)\in D(S)$ and $S(\mu^n,\sigma^n) \rightharpoonup S(\mu,\sigma)$ in $\mathbb{V}^{q}$. \end{theorem} \begin{proof} Since $D(S)$ is closed and convex, $D(S)$ is weakly closed and $(\mu,\sigma)\in D(S)$.
Now let $\phi^n,\phi \in \mathbb{V}^p$ denote the unique solutions of \eqref{eq:op_eq}--\eqref{eq:op_bc} with parameters $\mu^n$, $\sigma^n$ and $\mu$, $\sigma$, respectively. Then the difference $\phi-\phi^n$ satisfies the transport problem \begin{align} \label{eq:difference} (\mathcal{A}+\mathcal{C}(\mu,\sigma))(\phi-\phi^n) = \tilde f^n \quad \text{in } \mathcal{R}\times\mathcal{S}, \qquad \phi-\phi^n=0 \quad \text{on } \Gamma_- \end{align} with right-hand side defined by \begin{align*} \tilde f^n = (\mu^n-\mu) \phi^n + (\sigma^n - \sigma) (\phi^n - \Theta \phi^n). \end{align*} By Theorem~\ref{thm:existence}, the operator $\mathcal{A}+\mathcal{C}$ is continuously invertible. It thus remains to prove that $\tilde f^n \rightharpoonup 0$. Multiplying the first term by $\psi\in C^\infty_0(\mathcal{R}\times\S)$ and integrating yields \begin{align*} \int_{\mathcal{R}\times\mathcal{S}} (\mu^n-\mu)\phi^n \psi\,{\rm d}(r,s) &= \int_\mathcal{R} (\mu^n-\mu) \int_\mathcal{S} \phi^n(r,s)\psi(r,s)\,{\rm ds}\,{\rm dr} =: I^n(\psi). \end{align*} Now by Lemma~\ref{lem:scatteringcompactness}, we obtain $\int_\mathcal{S} \phi^n \psi \,{\rm ds} = \Theta(\phi^n \psi) \to \Theta(\phi \psi)$ strongly in $L^p(\mathcal{R}\times\S)$. From this we conclude that $I^n(\psi) \to 0$ and as a consequence $(\mu^n-\mu)\phi^n \rightharpoonup 0$. The term involving $\sigma^n-\sigma$ can be treated in a similar way. \end{proof} For the following quantitative estimate, we require slightly stronger assumptions on the source terms. This kind of regularity seems to be necessary since, due to its hyperbolic type, the transport equation does not possess a regularizing effect. \begin{theorem}\label{thm:lipschitz} Let $f \in L^\infty(\mathcal{R}\times\S)$ and $g\in L^\infty(\Gamma_-)$. Then for any $1\leq q\leq p\leq \infty$ the operator $S$ is Lipschitz continuous as a mapping from $L^p(\mathcal{R}) \times L^p(\mathcal{R})$ to $\mathbb{V}^q$.
\end{theorem} \begin{proof} Let $(\mu,\sigma),(\tilde\mu,\tilde\sigma)\in D(S)$ and denote by $\phi,\tilde\phi\in\mathbb{V}^q$ the corresponding solutions of the transport problem \eqref{eq:op_eq}--\eqref{eq:op_bc}. The difference $\tilde\phi-\phi$ then satisfies \eqref{eq:difference} with right-hand side $\tilde f=(\tilde \mu - \mu) \tilde \phi + (\tilde \sigma - \sigma) (\tilde \phi - \Theta \tilde \phi)$. Using Theorem~\ref{thm:existence} we obtain \begin{align*} \|\phi-\tilde\phi\|_{\mathbb{V}^q} \leq C \big( \|\tilde\mu-\mu\|_{L^q(\mathcal{R})} + \|\tilde\sigma-\sigma\|_{L^q(\mathcal{R})} \big) \|\tilde \phi\|_{L^\infty(\mathcal{R}\times\S)}. \end{align*} Due to the regularity of the data $f$ and $g$, we have $\tilde \phi \in \mathbb{V}^\infty$, which completes the proof. \end{proof} \subsection{Differentiability}\label{sec:differentiability} As a next step, we investigate the differentiability of the parameter-to-solution map. We call a parameter pair $(\hat\mu,\hat\sigma)\in L^p(\mathcal{R})\times L^p(\mathcal{R})$ an admissible variation for $(\mu,\sigma)\in D(S)$, if the perturbed parameters satisfy $(\mu,\sigma)+t(\hat\mu,\hat\sigma)\in D(S)$ for $|t|\ll 1$. \begin{theorem}\label{thm:gateaux} Let $1\leq q\leq p\leq \infty$ and let $f \in L^\infty(\mathcal{R}\times\S)$ and $g\in L^\infty(\Gamma_-)$. For $(\mu,\sigma)\in D(S)$ and an admissible variation $(\hat\mu,\hat\sigma) \in L^p(\mathcal{R}) \times L^p(\mathcal{R})$, let $S'(\mu,\sigma)[\hat\mu,\hat\sigma] = w \in \mathbb{V}^q$ be defined as the unique solution of \begin{align}\label{eq:sensitivity} \mathcal{A} w + \mathcal{C} w = \tilde f \quad \text{in } \mathcal{R}\times\S, \qquad w =0 \quad \text{on } \Gamma_- \end{align} with $\tilde f = - \mathcal{C}(\hat\mu,\hat\sigma)\phi$ and $\phi\in\mathbb{V}^q$ solving \eqref{eq:op_eq}--\eqref{eq:op_bc} with parameters $(\mu,\sigma)$.
Then, there holds \begin{align}\label{eq:est_der} \|S'(\mu,\sigma)[\hat\mu,\hat\sigma]\|_{\mathbb{V}^q} \leq C \big(\|\hat\mu\|_{L^p(\mathcal{R})} + \|\hat\sigma\|_{L^p(\mathcal{R})} \big) \|g\|_{L^\infty(\Gamma_-)}. \end{align} \end{theorem} \begin{proof} Let $\phi_t\in\mathbb{V}^q$ denote the solution of \eqref{eq:op_eq}--\eqref{eq:op_bc} for parameters $(\mu,\sigma)+t(\hat\mu,\hat\sigma)\in D(S)$ and $t$ sufficiently small and let $w_t := (\phi_t-\phi)/t$. Then \begin{alignat*}{3} \mathcal{A} (w_t-w) + \mathcal{C} (w_t-w) &= \mathcal{C}(\hat\mu,\hat\sigma)(\phi-\phi_t) &\quad&\text{in } \mathcal{R}\times\S, \end{alignat*} and $w_t-w = 0$ on $\Gamma_-$. By Theorem~\ref{thm:existence} we thus obtain \begin{align*} \|w_t-w\|_{\mathbb{V}^q} \leq C \big(\|\hat\mu\|_{L^p(\mathcal{R})} + \|\hat\sigma\|_{L^p(\mathcal{R})} \big) \|\phi-\phi_t\|_{L^\infty(\mathcal{R}\times\S)}. \end{align*} The continuity of the parameter-to-solution map and the integrability condition on the data yield $\phi_t \to \phi$ in $L^\infty(\mathcal{R}\times\S)$, from which we conclude that $w_t\to w$ in $\mathbb{V}^q$ as $t\to 0$. The estimate \eqref{eq:est_der} follows again from Theorem~\ref{thm:existence}. \end{proof} One can see from \eqref{eq:sensitivity} that $S'$ depends linearly on the variation $(\hat\mu,\hat\sigma)$. By the continuous extension principle, the operator $S'(\mu,\sigma)$ can then be extended to a bounded linear operator $S'(\mu,\sigma):L^p(\mathcal{R})\times L^p(\mathcal{R})\to \mathbb{V}^q$, which we call the derivative of $S$ in the following. \begin{theorem}\label{thm:der_lip} Let $2\leq p\leq \infty$ and $1 \le q \le p/2$ and assume that $f \in L^\infty(\mathcal{R}\times\S)$ and $g\in L^\infty(\Gamma_-)$.
Then $S'$ is Lipschitz continuous, i.e., for $(\mu_1,\sigma_1)$, $(\mu_2,\sigma_2)\in D(S)$ there holds \begin{align*} & \| S'(\mu_{1},\sigma_{1}) - S'(\mu_{2},\sigma_{2})\|_{\mathcal{L}(L^{p}(\mathcal{R})\times L^{p}(\mathcal{R});\mathbb{V}^{q})} \\ & \qquad \qquad \qquad \leq L \big(\|\mu_{1}-\mu_{2}\|_{L^{p}(\mathcal{R})}+ \|\sigma_{1}-\sigma_{2}\|_{L^{p}(\mathcal{R})} \big) \|g\|_{L^\infty(\Gamma_-)}. \end{align*} \end{theorem} \begin{proof} Let $(\mu_{i},\sigma_{i})\in D(S)$, $i=1,2$, and let $w_{i}\in\mathbb{V}^q$, $i=1,2$, be the solutions of the sensitivity problems in Theorem~\ref{thm:gateaux} for some admissible direction $(\hat\mu,\hat\sigma)\in L^p(\mathcal{R})\times L^p(\mathcal{R})$. Then $w_1-w_2$ satisfies \eqref{eq:sensitivity} with $\tilde f=-\mathcal{C}(\hat\mu,\hat\sigma)(\phi_1-\phi_2)-\mathcal{C}(\mu_1-\mu_2,\sigma_1-\sigma_2)w_2$. Using H\"older's inequality the two parts of $\tilde f$ can be estimated individually by \begin{align} \| \mathcal{C}(\hat\mu,\hat\sigma)(\phi_1-\phi_2) \|_{L^{q}(\mathcal{R}\times\S)} &\leq C \big(\|\hat\mu\|_{L^{p}(\mathcal{R})}+\|\hat\sigma\|_{L^{p}(\mathcal{R})}\big) \|\phi_1-\phi_2\|_{L^{p}(\mathcal{R}\times\S)},\label{eq:sens1}\\ \| \mathcal{C}(\mu_1-\mu_2,\sigma_1-\sigma_2)w_2\|_{L^{q}(\mathcal{R}\times\S)} &\leq C \big(\|\mu_1-\mu_2\|_{L^{p}(\mathcal{R})}+\|\sigma_1-\sigma_2\|_{L^{p}(\mathcal{R})}\big)\|w_2\|_{L^{p}(\mathcal{R}\times\S)}.\label{eq:sens2} \end{align} Using Theorem~\ref{thm:lipschitz} and Theorem~\ref{thm:gateaux} we then obtain via the triangle inequality \begin{align*} \|\tilde f\|_{L^q(\mathcal{R}\times\mathcal{S})}\leq C \big(\|\hat\mu\|_{L^{p}(\mathcal{R})}+\|\hat\sigma\|_{L^{p}(\mathcal{R})}\big) \big(\|\mu_1-\mu_2\|_{L^{p}(\mathcal{R})}+\|\sigma_1-\sigma_2\|_{L^{p}(\mathcal{R})}\big). \end{align*} The Lipschitz estimate now follows from the a-priori estimates stated in Theorem~\ref{thm:existence}. 
\end{proof} Differentiability of $S$ has already been proven in \cite{DiDoNaPaSi02}, but under more restrictive assumptions and only for $p=\infty$, which turns out to be the simplest case. The proofs of \cite{DiDoNaPaSi02} cannot be applied to the more general setting considered here. By carefully inspecting the estimates \eqref{eq:sens1}--\eqref{eq:sens2}, using assumptions (A2)--(A3), H\"older's inequality, and interpolation, we obtain \begin{corollary}\label{cor:der_hoelder} Let $1\leq q <\infty$ and $q<p\leq 2q$ and assume that $f \in L^\infty(\mathcal{R}\times\S)$ and $g\in L^\infty(\Gamma_-)$. Then $S'$ is H\"older continuous with H\"older exponent $\frac{p-q}{q}$. \end{corollary} This estimate will allow us to obtain convergence of iterative minimization algorithms under very general conditions. With the same techniques as used to prove Theorem~\ref{thm:der_lip}, one can also analyze higher order derivatives. For later reference let us state a result about the existence of the Hessian. \begin{theorem}\label{thm:hessian} Let $p=3q$ for some $1\leq q\leq \infty$ and assume that $f \in L^\infty(\mathcal{R}\times\S)$ and $g\in L^\infty(\Gamma_-)$. Then $S : D(S) \subset L^p(\mathcal{R}) \times L^p(\mathcal{R}) \to \mathbb{V}^q$ is twice continuously differentiable and $S''$ is given by \begin{align*} S''(\mu,\sigma)[(\hat\mu_{1},\hat\sigma_{1}), (\hat\mu_{2},\hat\sigma_{2})] = H, \end{align*} where $H\in \mathbb{V}^q$ is the unique solution of \begin{alignat*}{3} \mathcal{A} H + \mathcal{C} H &= \mathcal{C}(\hat\mu_{1},\hat\sigma_{1})w(\hat\mu_{2},\hat\sigma_{2}) +\mathcal{C}(\hat\mu_{2},\hat\sigma_{2}) w(\hat\mu_{1},\hat\sigma_{1}) &\quad&\text{in } \mathcal{R}\times\S, &\\ H &=0 &\quad&\text{on } \Gamma_-. \end{alignat*} Moreover, $S''(\mu,\sigma)$ is Lipschitz continuous w.r.t. 
its arguments and \begin{align*} &\| S''(\mu_{1},\sigma_{1}) - S''(\mu_{2},\sigma_{2})\|_{\mathcal{L}(L^{p}(\mathcal{R})\times L^{p}(\mathcal{R}), L^{p}(\mathcal{R})\times L^{p}(\mathcal{R}); \mathbb{V}^{q})}\\ &\qquad \qquad \leq C \big(\|\mu_{1}-\mu_{2}\|_{L^{p}(\mathcal{R})}+ \|\sigma_{1}-\sigma_{2}\|_{L^{p}(\mathcal{R})} \big), \end{align*} with $C$ depending only on the domain, the bounds for the parameters, and the data. \end{theorem} As above, the Hessian should first be defined for admissible parameter variations and then be extended to a bounded bilinear map. The estimate then follows in the same way as the Lipschitz estimate for the first derivative. We will utilize the properties of the Hessian to show local convexity of the regularized functional \eqref{eq:tikhonov} in a Hilbert space setting. \section{The optimal control problem} \label{sec:OCproblem} Let us recall the optimal control problem \begin{align*} \| BS(\mu,\sigma)- \mathcal{M}\|_{L^q({\partial \R})}^q + \alpha \|\mu-\mu_0\|^p_{L^p(\mathcal{R})} + \alpha \|\sigma-\sigma_0\|^p_{L^p(\mathcal{R})} \to \min_{(\mu,\sigma) \in D(S)}, \end{align*} which consists in minimizing the Tikhonov functional for some $\alpha \ge 0$. Based on the results about the mapping properties of the parameter-to-solution map $S$ and the observation operator $B$, we will now comment on the existence and stability of minimizers. The arguments are rather standard, and we only sketch the main points. Let us refer to \cite{EHN96,EnKuNeu89} for details and proofs. \subsection{Existence of Minimizers} By weak continuity of $S$ and weak lower semi-continuity of norms, the Tikhonov functional is weakly lower semi-continuous and bounded from below. Due to the box constraints and the reflexivity of $L^p$, $1<p<\infty$, the domain $D(S)$ is weakly compact. This yields the existence of a minimizer $(\mu_\alpha,\sigma_\alpha)$ for any $\alpha\geq 0$. \subsection{Stability of Minimizers} The minimizers are stable w.r.t.
perturbations in the following sense: For $\alpha_n\to\alpha \geq 0$ and $\mathcal{M}^{n}\to \mathcal{M}$ there exists a sequence of minimizers $(\mu_{\alpha_n},\sigma_{\alpha_n})$ converging weakly to a minimizer $(\mu_\alpha,\sigma_\alpha)$. This follows from the weak compactness of $D(S)$ and weak continuity of $S$. If $\alpha>0$, then we can obtain strong convergence. \subsection{Convergence of Minimizers} From the stability result, we already deduce that subsequences of minimizers $(\mu_{\alpha_n},\sigma_{\alpha_n})$ converge weakly towards a minimizer of the $L^p$-norm residual of equation \eqref{eq:ip_informal} if $\alpha_n \to 0$. If the inverse problem is solvable and if $\alpha_n\to 0$ and $\|\mathcal{M}^n-\mathcal{M}\|_{L^p}^p/\alpha_n \to 0$, then convergence is strong and the limit is a solution of \eqref{eq:ip_informal}. \subsection{Remarks and generalizations} Note that, in general, uniqueness of solutions for the inverse problem \eqref{eq:ip_informal} or of minimizers for the optimal control problem \eqref{eq:tikhonov} cannot be expected. We will discuss this issue in more detail in the next section. Also note that, with the same arguments as above, we can analyze minimization problems of the form \begin{align*} \| BS(\mu,\sigma)- \mathcal{M}\|_{L^q({\partial \R})}^q + \alpha R(\mu,\sigma) \to \min, \end{align*} where $R$ is some more general regularization functional. One particular choice $R(\mu,\sigma) = \|\mu - \mu_0\|^2_{H^1(\mathcal{R}\times\S)} + \|\sigma - \sigma_0\|^2_{H^1(\mathcal{R}\times\S)}$ will be considered in more detail in the next section. Total variation regularization $R(\mu,\sigma) = |\mu|_{TV} + |\sigma|_{TV}$ is frequently used in image reconstruction; for an analysis see for instance \cite{AcarVogel}. Due to the continuous embedding of $H^1$ and $BV$ in certain $L^p$ spaces, the statements about existence, stability, and convergence of minimizers made above also hold true for these choices. 
Our results thus generalize those of \cite{TangHanHan2013}. Note however, that in dimension $d=3$, we cannot obtain Lipschitz- or H\"older continuity of the derivative $S'$ for $TV$-regularization, while for $H^1$ we even obtain Lipschitz continuous second derivatives. This is our guideline for the setting of the next section. \section{Iterative minimization algorithms}\label{sec:iterative} To ensure convergence of minimization algorithms, one has to impose some more restrictive conditions. In order to motivate the crucial assumptions, let us recall a basic convergence rate result from nonlinear regularization theory \cite{EHN96,EnKuNeu89}. To simplify the presentation, we restrict ourselves to a Hilbert space setting and consider the Tikhonov functional \begin{align} \label{eq:tikhonov2} \| BS(\mu,\sigma)- \mathcal{M}\|_{L^2({\partial \R})}^2 + \alpha \| \mu-\mu_0\|_{H^1(\mathcal{R})}^2 + \alpha \| \sigma-\sigma_0\|_{H^1(\mathcal{R})}^2. \end{align} Note that due to the continuous embedding of $H^1$ into $L^6$ in dimension $d \le 3$, we can use all properties of $S$ derived in Section~\ref{sec:properties} for $q=2$ and $p \le 6$. In particular, we infer from Theorem~\ref{thm:der_lip} and Theorem~\ref{thm:hessian} that $S$ has Lipschitz-continuous first and second derivatives. \subsection{Convergence Rates for Minimizers} It is well-known that quantitative estimates for convergence can only be obtained under some kind of source condition. We therefore assume in the following that there exists some $w\in \mathbb{V}^2$ such that \begin{align}\label{eq:source} (\mu^\dagger,\sigma^\dagger)-(\mu_0,\sigma_0) = S'(\mu^\dagger,\sigma^\dagger)^* w,\qquad L\|w\|_{\mathbb{V}^2}<1, \end{align} where $(\mu^\dagger,\sigma^\dagger)$ solves \eqref{eq:ip_informal} and $L$ is the Lipschitz constant of $S'$; see Theorem~\ref{thm:der_lip}. 
From the abstract theory of nonlinear Tikhonov regularization \cite{EHN96,EnKuNeu89}, we deduce that \begin{align*} \|(\mu_\alpha,\sigma_\alpha)-(\mu^\dagger,\sigma^\dagger)\|_{H^1(\mathcal{R})\times H^1(\mathcal{R})} = \mathcal{O}(\sqrt{\alpha})\quad\text{and}\quad \|BS(\mu_\alpha,\sigma_\alpha)-\mathcal{M}\|_{L^2({\partial \R})} = \mathcal{O}(\alpha), \end{align*} where $(\mu_\alpha,\sigma_\alpha)$ are corresponding minimizers of the Tikhonov functional with $\alpha>0$. Note that the best possible rate one could expect for the error in the parameters is $o(1)$, and for the residual is $\mathcal{O}(\sqrt{\alpha})$, if \eqref{eq:source} is not fulfilled. \subsection{An iterative algorithm for computing a minimizer}\label{sec:PGN} For minimizing the Tikhonov functional \eqref{eq:tikhonov2}, we consider a projected Gau\ss-Newton (PGN) method. To ease the notation, we use $x=(\mu,\sigma)$ and $F(x)=B S(\mu,\sigma)$. The method then reads \begin{align*} \hat x_{n+1} &= x_n + \big( F'(x_n)^* F'(x_n) + \alpha_n I\big)^{-1} \big[F'(x_n)^* (\mathcal{M} - F(x_n)) + \alpha_n (x_0 - x_n) \big] \\ x_{n+1} &= P_{D(S)}(\hat x_{n+1}). \end{align*} Here $P_{D(S)}$ denotes the metric projection onto $D(S)$ with respect to the $H^1$-norm and $F'(x)^* = S'(\mu,\sigma)^* B^*$ is the Hilbert space adjoint of the linearized parameter-to-measurement operator. As usual, $F'(x)^* w$ can be computed via the solution of an adjoint problem similar to \eqref{eq:sensitivity}. A detailed analysis of the PGN iteration in the framework of iterative regularization methods can be found in \cite{BakKok04,KalNeu06}. Here, we consider this algorithm for the approximation of minimizers $x_\alpha = (\mu_\alpha,\sigma_\alpha)$ of the Tikhonov functional \eqref{eq:tikhonov2}. To promote global convergence, we choose a geometrically decaying sequence $\alpha_n=\max\{\frac{\alpha_0}{2^n},\alpha\}$ of regularization parameters.
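The PGN update can be sketched on a low-dimensional toy problem. The following is a minimal illustration only: the map \texttt{F} is a generic smooth nonlinear map standing in for $BS$ (an assumption, not the transport operator), the box constraint plays the role of $D(S)$, and the Euclidean projection replaces the $H^1$ metric projection.

```python
import numpy as np

# Toy stand-in for F = B∘S: a smooth nonlinear map on R^2 (assumption,
# not the radiative transfer operator), with a box constraint as D(S).
def F(x):
    return np.array([x[0]**2 + x[1], x[0] - x[1]**2])

def F_prime(x):  # Jacobian of the toy map
    return np.array([[2.0 * x[0], 1.0],
                     [1.0, -2.0 * x[1]]])

def project(x, lo=-2.0, hi=2.0):
    # Euclidean projection onto the box (stands in for the H^1 projection)
    return np.clip(x, lo, hi)

x_true = np.array([0.5, -0.3])
M_data = F(x_true)              # exact synthetic measurements
x0 = np.zeros(2)                # prior / initial guess
alpha, alpha0 = 1e-10, 1e-2

x = x0.copy()
for n in range(60):
    alpha_n = max(alpha0 / 2**n, alpha)   # geometrically decaying sequence
    J = F_prime(x)
    rhs = J.T @ (M_data - F(x)) + alpha_n * (x0 - x)
    step = np.linalg.solve(J.T @ J + alpha_n * np.eye(2), rhs)
    x = project(x + step)
```

For this zero-residual toy problem the damped Gauss-Newton steps drive the residual $\|F(x_n)-\mathcal{M}\|$ to zero once $\alpha_n$ has decayed, while the projection keeps all iterates admissible.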
If the source condition \eqref{eq:source} holds with $\|w\|$ sufficiently small, then \begin{align*} \|x_n-x^\dag\|_{H^1(\mathcal{R})\times H^1(\mathcal{R})} \le C \sqrt{\alpha_n} \|w\|_{\mathbb{V}^2} \quad \text{and} \quad \|F(x_n) - \mathcal{M}\|_{L^2({\partial \R})} \le C \alpha_n \|w\|_{\mathbb{V}^2} \end{align*} with a constant $C$ not depending on $\alpha$ or $w$. For $\alpha = 0$, we recover the usual convergence rate statement of the iterative regularization method without data noise \cite[Chapter~4]{BakKok04}. For $\alpha>0$, the iteration remains bounded, but convergence does not follow directly. \subsection{Local convexity and convergence to minimizers} We will now explain that for $\alpha>0$ and under the source condition \eqref{eq:source}, the PGN iteration converges to a local minimizer $x_\alpha$ of the Tikhonov functional. Consider the Hessian of the Tikhonov functional given by \begin{align*} H(x) = F''(x)^* (F(x) - \mathcal{M}) + F'(x)^* F'(x) + \alpha I. \end{align*} One can easily see that, if $F$ is twice differentiable and the norm of the residual $F(x)-\mathcal{M}$ is sufficiently small such that $\|F''(x)^* (F(x)-\mathcal{M})\| < \alpha$, then the Hessian is positive definite. Now, by the Lipschitz estimate for the first derivative we deduce that $\|F''(x)\| \le L \|B\|$, and from the convergence rate estimates for nonlinear Tikhonov regularization we have $\|F(x_\alpha)-\mathcal{M}\| \le C \alpha \|w\|$. Hence we conclude that, if the source condition \eqref{eq:source} is valid and $\|w\|$ is sufficiently small, then the Tikhonov functional is locally convex in a neighborhood of the minimizers $x_\alpha$. From the estimates for $\|x_\alpha - x^\dag\|$ and $\|x_n-x^\dag\|$ and by the Lipschitz estimate for the first derivative, one can actually conclude that the region of convexity is always reached after a finite number of iterations. For a detailed analysis using similar arguments see \cite{Ram03}.
In the region of convexity, linear convergence then follows by standard arguments. \subsection{Remarks and Extensions} Using the abstract theory of regularization methods in Banach spaces \cite{HoKaPoSch07}, the statements of this section can in principle be extended to the $L^p$-$L^q$ setting considered earlier; see also \cite{Resmerita2005,ScherzerVariationalMethodsInImaging2009,SchusterKaltenbacherHofmannKazimierski2012}. The required convergence rates results for the GN method in Banach spaces have been established in \cite{KaltenbacherHofmann2010,KalSchSch09}. At the end of our discussion, let us mention that projected gradient methods, in combination with appropriate rules for the choice of the stepsize, can also be used for minimizing the Tikhonov functional. For these methods, convergence to stationary points can be established even without a source condition and merely under H\"older continuity of the derivative \cite{HiPiUlUl09}. The same holds true for the PGN method \cite{EggSch11}. \section{Computational Experiments} \label{sec:numerics} To illustrate the theoretical results of the previous sections, we will present some numerical experiments in the following. \subsection{Discretization} For the discretization of the radiative transfer problem \eqref{eq:rte}--\eqref{eq:rte_bc} we employ the $P_N$-FEM method. This is a Galerkin approximation using a truncated spherical harmonics expansion with respect to the direction $s$ and a mixed finite element approximation for the corresponding spatially dependent Fourier coefficients. Due to the variational character of the method, one can systematically obtain consistent discretizations of the parameter-to-solution operator $S$, its derivative $S'$, and the adjoint $(S')^*$. Let us refer to \cite{EggSch10:3,LewisMiller84,WrightArridgeSchweiger07} for an analysis of the method and details on the implementation.
\subsection{Test example and choice of parameters} We consider the setup depicted in Figure~\ref{fig:setup}: The computational domain $\mathcal{R}$ is a two-dimensional disk with radius $25$ mm. The absorption parameter $\mu$ is in the range of $0.005$ mm$^{-1}$ to $0.04$ mm$^{-1}$. The scattering $\sigma$ ranges from $5$ mm$^{-1}$ to $30$ mm$^{-1}$. This order of magnitude is typical for applications in optical tomography \cite{Arridge99}. The data $\mathcal{M}\in \mathbb{R}^{16\times 16}$ are generated by sequentially illuminating the object by one of the sources $g_j$ and recording the outgoing light on the $i$th detector for prescribed parameters $\mu$ and $\sigma$, i.e. \begin{align}\label{eq:measurement} \mathcal{M}_{ij} = \int_{\Sigma_i} B \phi_j(r) \,{\rm d} r. \end{align} Here $\phi_j$ is the photon density generated by the $j$th source and $\Sigma_i\subset{\partial \R}$ models the area of the $i$th detector; see Figure~\ref{fig:setup} for the arrangement of sources and detectors. For our numerical experiments, we choose a sequence of regularization parameters $\alpha_n = \max\{\frac{\alpha_0}{2^n},\alpha_{\min}\}$ with $\alpha_0=\frac{1}{100}$ and $\alpha_{\min}=10^{-10}$. As initial guess, we use the constant functions $\mu_0=0.015$ mm$^{-1}$ and $\sigma_0=15$ mm$^{-1}$. \begin{figure}[ht] \includegraphics[width=0.33\textwidth]{mesh}\hfill \includegraphics[width=0.33\textwidth]{mua}\hfill \includegraphics[width=0.33\textwidth]{mus} \caption{\label{fig:setup} Left: Grid with $1287$ vertices, blue circles denote the $16$ source positions, red triangles denote $16$ detector positions. Middle: True distribution of $\mu$. Right: True distribution of $\sigma$.} \end{figure} \subsection{Generation of Data and Non-uniqueness} Note that our choice of parameters $\mu$ and $\sigma$ depicted in Figure~\ref{fig:setup} cannot be expected to satisfy the source condition \eqref{eq:source}.
To be able to observe convergence rates, we therefore compute in a first step a minimizer $(\mu^\dagger,\sigma^\dagger):=(\mu_{\alpha_{\min}},\sigma_{\alpha_{\min}})$ of the Tikhonov functional with $\alpha_{\min}=10^{-10}$. The result of this preprocessing step is depicted in Figure~\ref{fig:xad}. \begin{figure}[ht] \includegraphics[width=0.4\textwidth]{xadMU}\quad \includegraphics[width=0.4\textwidth]{xadSIGMA} \caption{\label{fig:xad} Calibrated parameters $\mu^\dagger$ (left) and $\sigma^\dagger$ (right) obtained by minimizing the Tikhonov functional for initial guess $\mu_0=0.015$ and $\sigma_0=15$, $\alpha=10^{-10}$ and data $BS(\mu,\sigma)$ from Figure~\ref{fig:setup}.} \end{figure} Let us mention that we obtain different reconstructions $(\mu^\dagger,\sigma^\dagger)$ when changing the initial value $(\mu_0,\sigma_0)$, which is a clear indication of non-uniqueness for the inverse problem \eqref{eq:ip_informal}; see also \cite{Arridge99} for a theoretical explanation. Using the calibrated parameters $(\mu^\dag,\sigma^\dag)$ as truth-approximation, we then compute the measurements $\mathcal{M} = B S(\mu^\dag,\sigma^\dag)$ as in \eqref{eq:measurement}. The relative error in the data corresponding to the parameters depicted in Figures~\ref{fig:setup} and \ref{fig:xad} is less than $0.05$\%. This indicates the ill-posedness and possible non-uniqueness of the inverse problem. \subsection{Convergence rates for minimizers} In a first numerical test, we want to demonstrate the convergence of the minimizers $(\mu_\alpha,\sigma_\alpha)$ of the Tikhonov functional \eqref{eq:tikhonov2} towards the correct parameter pair $(\mu^\dag,\sigma^\dag)$ generated in the preprocessing step. We denote by $$ {\rm res}_\alpha = \|BS(\mu_\alpha,\sigma_\alpha)-\mathcal{M}\|_{2},\qquad {\rm err}_\alpha = \|(\mu_\alpha,\sigma_\alpha)-(\mu^\dagger,\sigma^\dagger)\|_{H^1(\mathcal{R})}, $$ the observed residuals and errors in the regularized solutions.
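For intuition, the qualitative behaviour of these quantities can be reproduced on a small linear toy problem. The sketch below is purely illustrative: a random ill-conditioned matrix \texttt{A} stands in for the linearized forward map, and the source condition \eqref{eq:source} is built in by choosing $x^\dagger = A^* w$; no transport equation is solved.

```python
import numpy as np

# Linear toy problem: Tikhonov minimizers x_a of ||A x - M||^2 + a ||x||^2.
# A is a generic ill-conditioned matrix (assumption), not a transport operator.
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((30, 30)))
V, _ = np.linalg.qr(rng.standard_normal((10, 10)))
s = 2.0 ** -np.arange(10)            # rapidly decaying singular values
A = (U[:, :10] * s) @ V.T

w = rng.standard_normal(30)
x_dag = A.T @ w                      # source condition: x_dag in range(A*)
M_data = A @ x_dag                   # exact data

def res_err(alpha):
    # residual ||A x_a - M|| and error ||x_a - x_dag|| of the minimizer
    x_a = np.linalg.solve(A.T @ A + alpha * np.eye(10), A.T @ M_data)
    return np.linalg.norm(A @ x_a - M_data), np.linalg.norm(x_a - x_dag)

alphas = [10.0 ** -k for k in range(1, 8)]
rates = [res_err(a) for a in alphas]
```

Under the built-in source condition one has, in exact arithmetic, ${\rm err}_\alpha \le \tfrac12\sqrt{\alpha}\,\|w\|$ and ${\rm res}_\alpha \le \alpha\,\|w\|$, i.e. the $\mathcal{O}(\sqrt{\alpha})$ and $\mathcal{O}(\alpha)$ behaviour discussed above.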
The convergence rates for the residual and the error can be seen in Figure~\ref{fig:rate}. \begin{figure}[ht] \includegraphics[width=0.4\textwidth]{rate_residuals}\qquad \includegraphics[width=0.4\textwidth]{rate} \caption{\label{fig:rate} Rates of convergence for minimizers of the Tikhonov functional. Left: ${\rm res}_\alpha$ (crosses) and $\mathcal{O}(\alpha)$ for $\alpha=10^{-n}$ and $n\in\{1,\ldots,9\}$. Right: ${\rm err}_\alpha$ (crosses) and $\mathcal{O}(\sqrt{\alpha})$.} \end{figure} As predicted by theory, we observe the asymptotic rate $\mathcal{O}(\sqrt{\alpha})$ for the error ${\rm err}_\alpha$. The convergence rate for the residuals ${\rm res}_\alpha$ is slightly less than the expected rate $\mathcal{O}(\alpha)$. \subsection{Convergence of PGN method for $\alpha$ fixed} With the second experiment, we would like to demonstrate the linear convergence of the PGN method to the minimizer of the Tikhonov functional. To do so, we compute for $\alpha=10^{-5}$ the minimizers $(\mu_\alpha,\sigma_\alpha)$ by iterating the PGN method until convergence. We then restart the iteration to create a sequence $(\mu_n,\sigma_n)$ of PGN iterates defined as in Section~\ref{sec:PGN} with $\alpha_n=\max(\frac{1}{100}\frac{1}{2^n},\alpha)$. The residuals and the errors in the $n$th iteration given by $$ {\rm res}_n^\alpha := \|BS(\mu_n,\sigma_n)-\mathcal{M}\|_{2},\qquad {\rm err}_n^\alpha = \|(\mu_n,\sigma_n)-(\mu_\alpha,\sigma_\alpha)\|_{H^1} $$ are depicted in Figure~\ref{fig:exp}. For comparison, we also display the theoretical convergence curve. \begin{figure}[ht] \includegraphics[width=0.4\textwidth]{residuals1}\quad \includegraphics[width=0.4\textwidth]{errors1} \caption{\label{fig:exp} Convergence of the PGN method for fixed $\alpha=10^{-5}$. Left: residual ${\rm res}_n^\alpha$. 
Right: linear convergence of ${\rm err}_n^\alpha$ and $(0.65)^n$ (dotted).} \end{figure} In the first iterations, $\alpha_n$ is still rather large and the iterates stay within the vicinity of the initial guess. After $\alpha_n$ has decreased sufficiently, the convergence of the error ${\rm err}_n^\alpha$ becomes linear, i.e., ${\rm err}_n^\alpha \leq C \rho^n$ for some $0<\rho<1$. The residuals do not converge to zero here, since the minimizer $(\mu_\alpha,\sigma_\alpha)$ does not solve the inverse problem \eqref{eq:ip_informal} exactly. The residuals and the errors are, however, monotonically decreasing, which highlights the stability of the method. \section{Conclusions} In this paper we investigated numerical methods for reconstructing scattering and absorption rates in stationary radiative transfer from boundary observations. For a stable solution of this inverse problem, we considered Tikhonov regularization, which leads to an optimal control problem constrained by an integro-partial differential equation. Using the compactness provided by the averaging lemma, we were able to prove the weak continuity of the parameter-to-solution mapping. This allowed us to show existence and stability of minimizers. We also established important differentiability properties which are required for the convergence of iterative minimization algorithms. We discussed the convergence of a projected Gau\ss-Newton method. Under the typical source condition, which is also required in nonlinear regularization theory, we could establish local convexity of the Tikhonov functional in the vicinity of minimizers, and thus obtained local linear convergence of the projected Gau\ss-Newton method. It would be interesting to know whether convergence of iterative minimization algorithms can be shown without a source condition.
\section{Introduction} \hspace*{0.6cm} In classical general relativity, due to the uniqueness theorem of black holes \cite{a1,1,2,3}, the asymptotically flat charged black hole solutions with zero angular momentum in four dimensions are the Reissner-Nordstrom (RN) black holes, which possess two spherical horizons. In four-dimensional anti-de Sitter (AdS) spacetime, it is well known that, besides solutions with compact horizons of arbitrary genus, there exist solutions with noncompact planar or hyperbolic horizons. Because of the development of the Anti-de Sitter/conformal field theory (AdS/CFT) correspondence \cite{Maldacena:1997re,Maldacena:1998re,Witten:1998qj,Aharony:1999ti}, it has become more important to study the physical properties of AdS black holes. \par Because an asymptotically AdS black hole has a boundary metric with a conformal structure, one can deform the boundary metric to obtain a family of black hole solutions with deforming horizons, whose curvature is not constant. There are many works in this field using both analytical and numerical methods. On the analytical side, the authors in \cite{Chen:2015zoa} constructed a family of black hole solutions with deforming horizons in four-dimensional spacetime by using the AdS C-metric \cite{levicivita1917,weyl1917,Plebanski:1976gy}. In addition, a class of solutions of four-dimensional AdS black holes with noncompact event horizons of finite area was found in \cite{Klemm:2014rda,Gnecchi:2013mja}, and black holes with bottle-shaped event horizons were found analytically in \cite{Chen:2016rjt}. With numerical methods, the authors in \cite{Markeviciute:2017jcp} obtained a family of deforming solutions, including solitons and black holes, with dipolar differential rotation boundary $\Omega(\theta)=\varepsilon \cos(\theta)$. The constant $\varepsilon$ is the boundary rotation parameter and $\theta$ is the polar angle.
When $\varepsilon>2$, the Killing vector $\partial _t$ becomes spacelike in certain regions, which are called ergoregions, and deforming AdS black holes with ergoregions may be unstable due to superradiant scattering \cite{Green:2015kur}. Because of superradiance, both solitons and black holes develop hair at $\varepsilon>2$. Motivated by this work, we also studied deforming solutions with odd multipolar \cite{Sun:2019qsf} and even multipolar \cite{Li:2019tsm} differential rotation boundary. Furthermore, in \cite{Crisford:2018qkz}, the authors numerically studied a class of vacuum solutions with a noncompact, differentially rotating boundary metric. With the AdS C-metric, the effect of changing the boundary metric on hyperbolic and compact AdS black holes was studied in \cite{Horowitz:2018coe}. Considering matter fields, the authors in \cite{a17} constructed deforming black holes in $D = 5$ minimal gauged supergravity. \par Until now, deforming AdS black holes with differential rotation boundary \cite{Markeviciute:2017jcp, Li:2019tsm,Sun:2019qsf} have only been studied in the absence of charge. It would be interesting to see whether charged deforming AdS black hole solutions exist in Einstein-Maxwell-AdS spacetime. In this paper, we numerically solve the coupled Einstein-Maxwell equations to obtain a family of charged deforming black hole solutions. These solutions have an antisymmetric rotation profile on the equatorial plane, which keeps the total angular momentum of the black hole zero. In contrast to the uncharged case, the black holes exhibit some new properties due to the charge $q$. First, there exists at least one value of the horizon radius for arbitrary temperature, while for $q=0$ no horizon exists when $T<T_{min}$. Moreover, the extrema of the temperature are determined by the charge $q$ and divide the temperature range into several regions.
In different regions of temperature, the number of horizon values differs. Specifically, in the region with a single horizon value at fixed temperature, there exist two families of solutions with the same horizon radius when the temperature is lower than the minimal extremum of temperature for the RN-AdS black hole, $T_{RN}=\frac{\sqrt{6}}{3\pi}$. Furthermore, in the region with three horizon values at fixed temperature, it is interesting to find that the two small branches share the same properties, such as horizon geometry, entropy and quasinormal modes, although their horizon radii are not equal. \par The plan of our work is as follows. In the next section, we introduce our model and numerical method. In Sec. \ref{Sec3}, we obtain numerical solutions of charged AdS black holes with a differential rotation boundary and discuss the effect of the temperature $T$ and the charge $q$ on the solutions. We also present further properties of the deforming charged AdS black holes, including horizon geometry, entropy and stability. Conclusions and outlook are given in the last section. \section{Model and numerical method}\label{Sec2} \hspace*{0.6cm} We start with the Einstein-Maxwell action in four-dimensional AdS spacetime, \begin{align} S=\frac{1}{16\pi G}\int \mathrm{d}^4x&\sqrt{-g}\left(R-2\Lambda-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}\right), \label{eq:action} \end{align} where $G$ is the gravitational constant, $\Lambda=-3/L^2$ is the cosmological constant expressed in terms of the AdS radius $L$, $g$ is the determinant of the metric and $R$ is the Ricci scalar.
\par The equations of motion for the Einstein and Maxwell fields, derived from the Lagrangian density (\ref{eq:action}), are \begin{subequations}\label{m} \begin{equation}\label{equation1} R_{\mu\nu}+\frac {3}{L^2}g_{\mu\nu}-(\frac {1}{2}F_{\mu\lambda}{F_{\nu}}^{\lambda}-\frac {1}{8}g_{\mu\nu}F_{\lambda\rho}F^{\lambda\rho})=0, \end{equation} \begin{equation}\label{equation2} \nabla_{\mu}F^{\mu\nu}=0. \end{equation} \end{subequations} The spherically symmetric solution of the equations of motion (\ref{m}) is the well-known Reissner-Nordstrom-AdS (RN-AdS) black hole, whose metric can be written as \begin{eqnarray}\label{matric1} ds^{2} &=& -\left(1-\frac{2M}{r}+\frac{q^2}{r^2}+\frac{r^2}{L^2}\right)dt^2+\left(1-\frac{2M}{r}+\frac{q^2}{r^2}+\frac{r^2}{L^2}\right)^{-1}dr^2 +r^2 d\Omega^2, \label{eq:RN} \end{eqnarray} with the gauge field \begin{equation}\label{gauge} F=dA, \;\;\; A=\frac{q}{r}dt. \end{equation} Here, $d\Omega^2$ is the standard line element on $S^2$, the constant $M$ is the black hole mass measured at infinity, and the constant $q$ is the black hole charge. The mass is related to the charge $q$ and the horizon radius $r_+$ by \begin{equation}\label{root} M=\frac{1}{2}\left(r_++\frac{q^2}{r_+}+\frac{r_+^3}{L^2} \right), \end{equation} where $r_{+}$ is the largest root of the blackening factor. The Hawking temperature $T_H$ of the RN-AdS black hole is given by \begin{eqnarray} T_H=\frac{1}{4\pi r_+} \left(1+ \frac{3 r_+^2}{L^2}-\frac{q^2}{r_+^2}\right). \end{eqnarray} Near infinity the metric is asymptotic to anti-de Sitter spacetime, and the boundary metric is \begin{equation}\label{boundary} ds_\partial^2=-dt^2+d\theta^2+\sin^2\theta d\phi^2.
\end{equation} \par In order to obtain new asymptotically anti-de Sitter solutions, the authors of \cite{Markeviciute:2017jcp} added a differential rotation to the boundary metric, \begin{equation} ds_\partial^2=-dt^2+d\theta^2+\sin^2\theta[d\phi+\Omega(\theta)dt]^2, \label{eq:boundary} \end{equation} with a dipolar differential rotation $\Omega(\theta)=\varepsilon\cos\theta$, where the constant $\varepsilon$ is the amplitude of the boundary rotation. The norm of the Killing vector $\partial_t$ is \begin{equation} \|{\partial{t}}\|^2=-1+\frac{\varepsilon^2}{4}\sin^2({2\theta}). \label{eq:timelike} \end{equation} From this expression, the norm attains its maximal value at $\theta=\frac{\pi}{4}$. We will take the same dipolar differential rotation boundary (\ref{eq:boundary}). \par To obtain a set of charged deforming black hole solutions, we use the DeTurck method \cite{Headrick:2009pv,Wiseman2012,Dias:2015nua} to solve the equations of motion (\ref{m}). By adding a gauge-fixing term, we turn equations (\ref{equation1}) into elliptic equations: \begin{equation}\label{DTequation} R_{\mu\nu}+\frac {3}{L^2}g_{\mu\nu}-(\frac {1}{2}F_{\mu\lambda}{F_{\nu}}^{\lambda}-\frac {1}{8}g_{\mu\nu}F_{\lambda\rho}F^{\lambda\rho})-\nabla_{(\mu}\xi_{\nu)}=0, \end{equation} where the DeTurck vector $\xi^\mu=g^{\nu\rho}(\Gamma^\mu_{\nu\rho}[g]-\Gamma^\mu_{\nu\rho}[\tilde{g}])$ is defined with respect to a reference metric $\tilde{g}$. Notably, the reference metric $\tilde{g}$ must be chosen to have the same boundary and horizon structure as $g$. Solving equations (\ref{equation2}) and (\ref{DTequation}) with this method, we obtain a family of charged AdS black hole solutions.
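As a quick illustration of the ergoregion condition discussed above, the following standalone Python sketch (our own check, not part of the paper's numerics) scans the norm (\ref{eq:timelike}) over $\theta$ and confirms that $\partial_t$ becomes spacelike somewhere on the boundary exactly when $\varepsilon>2$, with the maximum of the norm at $\theta=\pi/4$:

```python
import math

def killing_norm_sq(theta, eps):
    """Norm squared of the boundary Killing vector d/dt:
    ||d/dt||^2 = -1 + (eps^2 / 4) * sin^2(2*theta)."""
    return -1.0 + (eps**2 / 4.0) * math.sin(2.0 * theta)**2

# The norm is maximized at theta = pi/4, where sin(2*theta) = 1, so
# ||d/dt||^2 = -1 + eps^2/4: an ergoregion (spacelike d/dt) appears
# exactly when eps > 2.
thetas = [i * math.pi / 2000 for i in range(1001)]  # theta in [0, pi/2]
for eps in (1.9, 2.0, 2.1):
    peak = max(killing_norm_sq(t, eps) for t in thetas)
    print(eps, peak > 0)
```

Only for the last value, $\varepsilon=2.1$, does the scan report a region with positive norm.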
\section{Black hole solutions} \label{Sec3} \hspace*{0.6cm} To obtain solutions of charged deforming AdS black holes, we start with the following metric ansatz, \begin{subequations} \begin{multline} \mathrm{d}s^2=\frac{L^2}{(1-y^2)^2}\Bigg\{-y^2\tilde{\Delta}(y)U_1\mathrm{d}t^2+\frac{4\,y_+^2 U_2\,\mathrm{d}y^2}{\tilde{\Delta}(y)}+y_+^2 \Bigg[\frac{4\,U_3}{2-x^2}\left(\mathrm{d}x+x \sqrt{2-x^2}\,y\,U_4\, \mathrm{d}y\right)^2\\+(1-x^2)^2 U_5\,\left(\mathrm{d}\phi+y^2x\sqrt{2-x^2}\,U_6\,\mathrm{d}t\right)^2 \Bigg]\Bigg\}, \label{eq:ansatzbh} \end{multline} \noindent with \begin{equation} \Delta(y)=\frac{q^2(1-y^2)^2}{L^2y_+^2}+(1-y^2)^2+y_+^2 (3-3y^2+y^4)\,,\quad\text{and}\quad \tilde{\Delta}(y)= \Delta(y) \delta + y_+^2 (1 - \delta)\,, \end{equation}\label{key} \end{subequations} where the functions $U_i$, $i\in\{1,...,6\}$, depend on $x$ and $y$, the parameter $q$ is the charge of the black hole, and $y_{+}$ is the horizon radius. Here, $y$ is related to the radial coordinate $r$ by $r=Ly_+/(1-y^2)$, and $x$ parametrizes the polar angle on $S^2$ through $\sin\theta=1-x^2$. When $U_1=U_2=U_3=U_5=\delta=1$ and $U_4=U_6=0$, the line element (\ref{key}) reduces to the RN-AdS black hole. \par For this axially symmetric system, the polar-angle reflection symmetry $\theta\rightarrow\pi-\theta$ about the equatorial plane makes it convenient to restrict to the coordinate range $\theta \in [0,\pi/2]$, i.e. $x \in [0,1]$. We require the functions to satisfy the following boundary conditions on the equatorial plane $x=0$, \begin{equation} \partial_x U_i(0,y)=0, \;\;\;i=1,2,3,4,5,6, \end{equation} and set axis boundary conditions at $x=1$, where regularity requires a Dirichlet boundary condition on $U_4$, \begin{equation}\label{abc} U_4(1,y)=0, \end{equation} and Neumann boundary conditions on the other functions, \begin{equation}\label{abc2} \partial_x U_1(1,y)=\partial_x U_2(1,y)=\partial_x U_3(1,y)=\partial_x U_5(1,y)=\partial_x U_6(1,y)=0.
\end{equation} At $y=1$, we set $U_4=0$, $U_{6}=\varepsilon$ and $U_{1}=U_{2}=U_{3}=U_{5}=1$. Moreover, expanding the equations of motion near $x=1$ gives $U_3(1,y)=U_5(1,y)$. To ensure that the number of unknown functions equals the number of equations in the DeTurck method, we include the component $A_\phi$ in the gauge potential, which we choose in the form \begin{equation}\label{gauge2} A = A_t dt+ A_\phi d\phi , \end{equation} where $A_t$ and $A_\phi$ are real functions of $x$ and $y$. For the boundary conditions of the vector field, we set $A_t(x,1)=\mu$ and $A_\phi(x,1)=0$, where the constant $\mu$ is the chemical potential, which characterizes the asymptotic behavior of the Maxwell field at infinity. At $x=1$, we choose $A_t(1,y)=0$ and $A_\phi(1,y)=0$. \par The Hawking temperature of the charged deforming black hole with the ansatz (\ref{key}) takes the form: \begin{equation} T=\frac{1}{4\pi}\sqrt{-g^{tt}g^{MN}\partial_{M}g_{tt}\partial_{N}g_{tt}}\mid_{r=r_{+}}=\frac{y_{+}^4+\delta(-q^2+y_{+}^2(1+2y_{+}^2))}{4\pi y_{+}^3}. \label{temperature} \end{equation} When the charge $q=0$, the formula (\ref{temperature}) reduces to the temperature of the Schwarzschild-AdS black hole, which was also given in \cite{Markeviciute:2017jcp}. When we fix $\delta=1$, the extrema of the temperature $T$ depend on the value of the charge $q$: \begin{itemize} \item $q=0$: There is only a local minimum $T_{min}=T_S=\frac{\sqrt{3}}{2\pi}$, the minimal temperature of the Schwarzschild-AdS black hole. \item $0<q<1/6$: There are two extrema of the temperature, \begin{equation} \left\{ \begin{aligned} &T_{min}=\frac{3\sqrt{\frac{3}{2}}\left(-q^2+\frac{1}{12}\left(\sqrt{1-36q^2}+1\right)^2+\frac{1}{6}\left(\sqrt{1-36q^2}+1\right)\right)}{\pi\left(\sqrt{1-36q^2}+1\right)^{3/2}},\\ &T_{max}=\frac{3\sqrt{\frac{3}{2}}\left(-q^2+\frac{1}{12} \left(1-\sqrt{1-36 q^2}\right)^2+\frac{1}{6}\left(1-\sqrt{1-36q^2}\right)\right)}{\pi\left(1-\sqrt{1-36 q^2}\right)^{3/2}} . 
\end{aligned} \right. \end{equation} \item $q=1/6$: $T_{max}=T_{min}=T_{RN}=\frac{\sqrt{6}}{3\pi}$, the minimal extremum of the temperature for the RN-AdS black hole. \item $q>1/6$: There is no extremum of the temperature. \end{itemize} \begin{figure}[!htb] \centering \includegraphics[width=0.8\textwidth]{hu-temperature1.pdf} \caption{The temperature $T$ as a function of $y_+$ for $\delta=1$. From top to bottom, the black, blue, red and green lines describe the charge $q=0$, $\frac{1}{9}$, $\frac{1}{6}$ and $\frac{1}{4}$, respectively. The red and black horizontal dashed lines represent $T_{S}=\frac{\sqrt{3}}{2\pi}$ and $T_{RN}=\frac{\sqrt{6}}{3\pi}$. The red and black vertical lines represent $y_{+}=\frac{1}{\sqrt{3}}$ and $y_+=\frac{1}{\sqrt{6}}$.} \label{temf} \end{figure} \par Next, we analyze how the charge $q$ and the temperature $T$ determine the number of horizon values. In Fig.$\ $\ref{temf}, we plot the temperature $T$ as a function of the horizon radius $y_+$ at $\delta=1$ for several values of the charge $q$. The black, blue, red and green lines represent $q=0,\frac{1}{9}, \frac{1}{6}$ and $\frac{1}{4}$, respectively. For $q=\frac{1}{6}$, the intersection of the black horizontal and vertical dashed lines indicates the horizon radius $y_+=\frac{1}{\sqrt{6}}$ and $T_{RN}=\frac{\sqrt{6}}{3\pi}$. For $q=0$, the intersection of the red horizontal and vertical dashed lines indicates the horizon radius $y_+=\frac{1}{\sqrt{3}}$ and $T_{S}=\frac{\sqrt{3}}{2\pi}$. The number of horizon values depends on the ranges of the temperature $T$ and the charge $q$: \begin{enumerate} \item $q=0$: \begin{enumerate} \item $T<T_{S}$: There is no horizon. \item $T=T_{S}$: The two horizon values coincide at $y_+=\frac{1}{\sqrt{3}}$. \item $T>T_{S}$: There are two different horizon values. \end{enumerate} \item $0<q<1/6$: \begin{enumerate} \item $T<T_{min}$ or $T>T_{max}$: There is one horizon value. \item $T=T_{min}$ or $T=T_{max}$: There are three horizon values, two of which coincide. 
\item $T_{min}<T<T_{max}$: There are three different horizon values. \end{enumerate} \item $q=1/6$: \begin{enumerate} \item $T=T_{RN}$: The three horizon values coincide at $y_+=\frac{1}{\sqrt{6}}$. \item $T\neq T_{RN}$: There is only one horizon value. \end{enumerate} \item $q>1/6$: There is only one horizon value. \end{enumerate} \par By adjusting the parameter $\delta$, we can also obtain three horizon values below the local minimal temperature $T_{min}$. For simplicity, we fix the chemical potential $\mu=0.5$ and the AdS radius $L=1$ in our numerical calculations. \numberwithin{equation}{subsection} \par In Fig.$\ $\ref{u}, we show typical distributions of $U_4$ as functions of $x$ and $y$ for $T=0.42$, $\varepsilon=1.6$ and $\delta=1$. Fixing $q=0.07057<\frac{1}{6}$, we obtain three horizon values. The distributions of $U_4$ for the two small branches, with $y_+=0.0992$ (left) and $y_+=0.1773$ (right), are shown in the top row of Fig.$\ $\ref{u}. The bottom-left panel shows the distribution of $U_4$ for the large branch, $y_+=1.5436$. To understand how the charge $q$ influences the distribution of $U_4$, we also plot $U_4$ as a function of $y$ at the equatorial plane $x=1$ for several values of $q$. From top to bottom, the distributions of $U_4$ for the charges $q=0$, $1.7068$, $2.2684$, $2.8363$ and $3.4299$ are represented by black, red, blue, green and pink lines, respectively. Due to relation (\ref{temperature}), the horizon radius $y_+$ increases with the charge $q$ at fixed temperature. 
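The counting above follows directly from the temperature formula (\ref{temperature}). The following illustrative Python sketch (our own check, not part of the paper's numerics; $L=1$, $\delta=1$, sample charge $q=0.1$) verifies the special values $T_S$ and $T_{RN}$ and counts the horizon values at a temperature inside the window $T_{min}<T<T_{max}$:

```python
import math

def T(yp, q, delta=1.0):
    """Temperature formula (L = 1):
    T = (yp^4 + delta*(-q^2 + yp^2*(1 + 2*yp^2))) / (4*pi*yp^3)."""
    return (yp**4 + delta * (-q**2 + yp**2 * (1.0 + 2.0 * yp**2))) \
        / (4.0 * math.pi * yp**3)

# q = 0: local minimum T_S = sqrt(3)/(2 pi) at yp = 1/sqrt(3) (Schwarzschild-AdS).
assert abs(T(1 / math.sqrt(3), 0.0) - math.sqrt(3) / (2 * math.pi)) < 1e-12

# q = 1/6: the extrema merge, T_RN = sqrt(6)/(3 pi) at yp = 1/sqrt(6).
assert abs(T(1 / math.sqrt(6), 1 / 6) - math.sqrt(6) / (3 * math.pi)) < 1e-12

# 0 < q < 1/6: a horizontal line T = const with T_min < T < T_max crosses
# the curve T(yp) three times; count sign changes on a fine grid.
q, T0 = 0.1, 0.30   # for q = 0.1: T_min ~ 0.271, T_max ~ 0.349
grid = [0.01 + 3.0 * i / 200000 for i in range(200001)]
crossings = sum(1 for a, b in zip(grid, grid[1:])
                if (T(a, q) - T0) * (T(b, q) - T0) < 0)
print(crossings)  # 3
```

The same scan with $T0$ outside the window $[T_{min},T_{max}]$ yields a single crossing, matching the enumeration above.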
\begin{figure}[!hbt] \centering \begin{minipage}[t]{0.48\textwidth} \centering \includegraphics[width=6.5cm]{smallsmallQ4.pdf} \end{minipage} \begin{minipage}[t]{0.48\textwidth} \centering \includegraphics[width=6.5cm]{smallQ4.pdf} \end{minipage} \begin{minipage}[t]{0.48\textwidth} \centering \includegraphics[width=6.5cm]{largeQ4.pdf} \end{minipage} \begin{minipage}[t]{0.48\textwidth} \centering \includegraphics[width=6.5cm]{Q4boundary.pdf} \end{minipage} \caption{\emph{Top}: The distributions of $U_4$ as functions of $x$ and $y$ for the two small branches, $y_+=0.0992$ (left) and $y_+=0.1773$ (right). \emph{Bottom left}: The distribution of $U_4$ as a function of $x$ and $y$ for the large branch, $y_+=1.5436$. The three solutions of $U_4$ are given for $T=0.42$, $\varepsilon=1.6$, $\delta=1$ and $q=0.07057$. \emph{Bottom right}: The distributions of $U_4$ as functions of $y$ at the equatorial plane $x=1$ for several values of the charge $q$ with $T=0.42$ and $\varepsilon=1.6$. From top to bottom, the black, red, blue, green and pink lines describe the charge $q=0$, $1.7068$, $2.2684$, $2.8363$ and $3.4299$, respectively.} \label{u} \end{figure} \subsection{Horizon geometry}\label{31} \hspace*{0.6cm}In this subsection, we study how the black hole horizon geometry behaves as the boundary rotation parameter $\varepsilon$ and the charge $q$ increase. We use an isometric embedding into three-dimensional space \cite{c1,c2,c3,c4,c5} to investigate the geometry of a two-dimensional horizon surface in a curved space \cite{Markeviciute:2017jcp,Gibbons:2009qe}. Following the method of \cite{Markeviciute:2017jcp}, the black hole horizon is embedded into hyperbolic $H^3$ space in global coordinates: \begin{equation} ds^2_{H^3}=\frac{dR^2}{1+R^2/l^2}+R^2\left[\frac{dX^2}{1-X^2}+(1-X^2)d\phi^2\right], \end{equation} where $l$ is the radius of the hyperbolic space; we fix $l=0.73$ throughout our calculations. 
The induced metric on the black hole horizon takes the form: \begin{equation} ds^2_{H}=L^2\left[\frac{4y_{+}^2U_{3}(x,0)}{2-x^2}dx^2+y_{+}^2(1-x^2)^2U_{5}(x,0)d\phi^2\right], \label{reduce1} \end{equation} which follows from the ansatz (\ref{key}). The embedding is given by a curve with two parameters, $\{R(x),X(x)\}$, whose pullback line element reads: \begin{equation} ds_{pb}^2=\left[\frac{R'(x)^2}{1+\frac{R(x)^2}{l^2}}+\frac{R(x)^2X'(x)^2}{1-X(x)^2}\right]dx^2+R(x)^2(1-X(x)^2)d\phi^2. \label{reduce2} \end{equation} Equating this line element with the induced metric (\ref{reduce1}), we obtain the following first-order differential equation: \begin{eqnarray} 0&=&4H(x)P(x)(X(x)^2-1)^2[P(x)-l^2(X(x)^2-1)] \\ &&+4l^2P(x)X(x)(X(x)^2-1)P(x)'X(x)'-(X(x)^2-1)^2l^2P(x)^2(l^2+P(x))^2X(x)'^2, \nonumber \end{eqnarray} where $H(x)=(2-x^2)^{-1}(4y_+^2U_3(x,0))$ and $P(x)=y_+^2(1-x^2)^2U_5(x,0)$. \begin{figure}[!htb] \centering \includegraphics[width=0.9\textwidth]{42largebranch.pdf} \caption{Hyperbolic embedding of the cross section of the large black hole horizon for different values of $\varepsilon$ with $T=0.42$ and $q=0.07057$.} \label{largehorizon} \end{figure} \par In Fig.$\ $\ref{largehorizon}, we show the hyperbolic embedding of the cross section of the large black hole horizon for different values of $\varepsilon$ with charge $q=0.07057$ and temperature $T=0.42\geq T_{min}=0.2325$. From inner to outer, the black, red, green, orange and blue lines correspond to the boundary rotation parameter $\varepsilon=0.6$, $1.2$, $1.6$, $1.8$ and $1.9$, respectively. Clearly, the horizon deforms more dramatically as the boundary rotation parameter $\varepsilon$ increases. 
\begin{figure}[t] \centering \begin{minipage}[c]{0.5\textwidth} \centering \includegraphics[scale=0.26]{2585large.pdf} \end{minipage}% \begin{minipage}[c]{0.5\textwidth} \centering \includegraphics[scale=0.26]{2585smallhorizon.pdf} \end{minipage} \caption{Hyperbolic embedding of the cross sections of the three black hole horizons for different values of $\varepsilon$ at $T=0.2585$ and $q=0.07057$. \emph{Left}: The horizon geometry of the large branch, $y_+=0.9152$. \emph{Right}: The horizon geometry of the two small branches, $y_+=0.3110$ (lines) and $y_+=0.0859$ (dots).} \label{fig4} \end{figure} \par Since there is only one horizon radius when $T<T_{min}$ at $\delta=1$, we adjust $\delta<1$ to obtain three horizon values and study the deformation of the horizon at a fixed low temperature. In Fig.$\ $\ref{fig4}, we present the hyperbolic embedding of the cross sections of the three black hole horizons for different values of $\varepsilon$ with $T=0.2585$ and $q=0.07057$. In the left panel, we show the large black hole solution, $y_+=0.9152$, and find that the deformation of the horizon cross section grows as $\varepsilon$ increases, similar to the situation in Fig.$\ $\ref{largehorizon}. In the right panel, we show the results for the two small branches, $y_+=0.3110$ (lines) and $y_+=0.0859$ (dots). In contrast to the left panel, the size of the deformation of the horizon is a decreasing function of the boundary rotation parameter $\varepsilon$. For the two small branches, the horizon radius of the bigger one is nearly four times that of the smaller one, yet, interestingly, the two branches have the same embedding of the horizon geometry. \begin{figure}[!htb] \centering \includegraphics[width=0.8\textwidth]{horizonQ.pdf} \caption{Hyperbolic embedding of the cross section of the large black hole horizon for different values of the charge $q$ with $y_+=1.5$ and $\varepsilon=1.6$.
} \label{fig5} \end{figure} \par To show the effect of the charge $q$ on the deformation of the horizon, we present the hyperbolic embedding of the cross section of the large black hole horizon for different values of $q$ with $y_+=1.5$ and $\varepsilon=1.6$ in Fig.$\ $\ref{fig5}. Due to relation (\ref{temperature}), the temperature decreases as the charge $q$ increases at fixed horizon radius. From outer to inner, the red, black, orange and blue lines represent $q=0$, $1.6104$, $2.1712$ and $2.6143$, respectively. The deformation of the horizon becomes smaller as the charge $q$ increases. \subsection{Entropy} \hspace*{0.6cm}In this subsection, we discuss the entropy of the deforming charged black hole, given by \begin{equation} S=\frac{A}{4G_N}=\frac{2\pi y_+^2L^2}{G_N}\int^1_0dx\frac{1-x^2}{\sqrt{2-x^2}}\sqrt{U_3(x,0)U_5(x,0)}. \end{equation} \par In Fig.$\ $\ref{42entropy}, we show the entropy against the boundary rotation parameter $\varepsilon$ for $T=0.42$ and $q=0.07057$. The large black hole with $y_+=1$ is shown in the left panel, while in the right panel the two small branches with $y_+=0.1773$ and $y_+=0.0992$ are represented by a red line and black dots, respectively. For the large black hole, the entropy is an increasing function of the boundary rotation parameter $\varepsilon$. The entropy diverges as $\varepsilon\rightarrow2$, and we could not find charged deforming black hole solutions for $\varepsilon>2$. For the two small branches at fixed temperature, the entropy decreases as the boundary rotation parameter $\varepsilon$ increases, and solutions exist for $\varepsilon>2$. Furthermore, we also find another family of small black hole solutions, in which the entropy increases with $\varepsilon$. 
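As a consistency check of the entropy formula above (our own sketch, not part of the paper's numerics), in the round limit $U_3=U_5=1$ the integral $\int_0^1 (1-x^2)/\sqrt{2-x^2}\,dx$ equals $1/2$, so $S$ reduces to $\pi L^2 y_+^2/G_N$, one quarter of the round horizon area $A=4\pi(Ly_+)^2$:

```python
import math

def integrand(x):
    # Entropy integrand in the RN-AdS limit U_3 = U_5 = 1.
    return (1.0 - x**2) / math.sqrt(2.0 - x**2)

# Simple midpoint rule on [0, 1]; analytically the integral is 1/2,
# since (1-x^2)/sqrt(2-x^2) = sqrt(2-x^2) - 1/sqrt(2-x^2).
n = 100000
I = sum(integrand((i + 0.5) / n) for i in range(n)) / n
print(abs(I - 0.5))  # ~ 0, so S = (2*pi*yp^2*L^2/G_N)*I = pi*L^2*yp^2/G_N
```

This confirms that the normalization of the entropy integral reproduces $S=A/(4G_N)$ for the undeformed horizon.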
\par \begin{figure}[t] \centering \begin{minipage}[c]{0.5\textwidth} \centering \includegraphics[scale=0.27]{42largebranchentroy.pdf} \end{minipage}% \begin{minipage}[c]{0.5\textwidth} \centering \includegraphics[scale=0.25]{42smallentroy.pdf} \end{minipage} \caption{The entropy as a function of the boundary rotation parameter $\varepsilon$ for temperature $T=0.42$ and charge $q=0.07057$. \emph{Left}: The entropy against $\varepsilon$ for the large branch of black hole solutions, $y_+=1$. \emph{Right}: The entropy against $\varepsilon$ for the two small branches, $y_+=0.1773$ (black dots) and $y_{+}=0.0992$ (red line). The vertical red dotted lines represent $\varepsilon=2$.} \label{42entropy} \end{figure} To obtain the complete phase diagram of the entropy for $\delta=1$, we investigate the whole range of temperature. We show the entropy as a function of the boundary rotation parameter $\varepsilon$ for different values of the temperature $T$ with $\delta=1$ in Fig.$\ $\ref{deltaentropy}. In the left panel, for fixed $q=0.07057$, there are two local extrema, $T_{max}=0.4635$ and $T_{min}=0.2735$, whose entropies are represented by red and green lines, respectively. The two extrema divide the temperature into three regions: \begin{itemize} \item Region A with $T>T_{max}$: There is only one horizon value for a fixed temperature, and the entropy increases with the boundary rotation parameter $\varepsilon$. Region A is indicated by the red area. \item Region B with $T_{min}<T<T_{max}$: There are three horizon values for a fixed temperature. For the large branch, the entropy increases with $\varepsilon$. Although the two small branches have different horizon radii, they have the same entropy, which is a decreasing function of the boundary rotation parameter $\varepsilon$. 
\item Region C with $T<T_{min}$: There is only one horizon value, but we find two branches of entropy. The entropy increases with the rotation parameter $\varepsilon$ on one branch, while it is a decreasing function of $\varepsilon$ on the other. Notably, when $T\leq T_{RN}\approx0.2599$, the two branches of entropy at a given temperature join up. Region C is indicated by the blue area. \end{itemize} \begin{figure}[t] \centering \begin{minipage}[c]{0.5\textwidth} \centering \includegraphics[scale=0.27]{entropy111.pdf} \end{minipage}% \begin{minipage}[c]{0.5\textwidth} \centering \includegraphics[scale=0.24]{largeQentropy.pdf} \end{minipage} \caption{The entropy as a function of the boundary rotation parameter $\varepsilon$ for different values of the temperature $T$ with $\delta=1$. \emph{Left}: For $q=0.07057$, the local maximum and minimum of the temperature $T$ are equal to $0.4635$ and $0.2735$, represented by red and green lines, respectively. The two extrema divide the phase diagram of the entropy into three regions. \emph{Right}: For $q=1.7068$, there is always one solution, with no extremum of the temperature $T$. The vertical red dotted lines represent $\varepsilon=2$. } \label{deltaentropy} \end{figure} \par In the right panel of Fig.$\ $\ref{deltaentropy}, we fix $q=1.7068>\frac{1}{6}$. There exists only one horizon value for any temperature, but we obtain two branches of entropy. The entropy increases with the rotation parameter $\varepsilon$ on one branch, while it is a decreasing function of $\varepsilon$ on the other. Notably, when $T\leq T_{RN}$, the two branches of entropy connect, similar to region C in the left panel. \par As in Subsection \ref{31}, we adjust $\delta<1$ to obtain three horizon values and study the entropy at a fixed low temperature. In Fig.$\ $\ref{entropy}, we exhibit the entropy as a function of the boundary rotation parameter $\varepsilon$ for different values of the temperature $T$ at $q=0.07057$. 
The temperature $T=0.2735$, the minimal extremum for $\delta=1$, is represented by black lines. For a fixed temperature $T\leq T_{RN}\approx0.2599$, the large branch joins up with the two small branches, forming a set of lines. In each set, the line along which the entropy increases with $\varepsilon$ describes the large branch, and the corresponding solid and dotted lines describe the two small branches. From left to right, these sets of lines indicate $T= 0.2599, 0.2492, 0.2325, 0.2257$ and $0.2104$, respectively. As in the right panel of Fig.$\ $\ref{42entropy}, the two small branches have the same entropy. When the temperature is lower than $T_{min}$, the large branches also have solutions with $\varepsilon>2$. The entropy diverges as $\varepsilon$ approaches the maximal value for which solutions exist. \begin{figure}[!htb] \centering \includegraphics[width=0.9\textwidth]{deletaentropy.pdf} \caption{The entropy as a function of the boundary rotation parameter $\varepsilon$ for $T\leq T_{min}=0.2735$ with $q=0.07057$. The black lines indicate $T=T_{min}$ and the red vertical dashed line indicates $\varepsilon=2$.} \label{entropy} \end{figure} \subsection{Stability} \hspace*{0.6cm}In this subsection, we study the stability of the deforming charged black hole solutions. Following the method of \cite{Markeviciute:2017jcp,d1,d2}, we consider a free, massless, neutral scalar field perturbation on this background and solve the Klein-Gordon equation \begin{equation} \square\Phi=0 , \label{KG} \end{equation} decomposing the scalar field in the standard form \begin{equation} \Phi=\hat{\Phi}_{\omega,m}(x,y)e^{-i\omega t+im\phi}, \end{equation} where the constant $\omega$ is the frequency of the complex scalar field and $m$ is the azimuthal harmonic index. 
In ingoing Eddington-Finkelstein coordinates \cite{d2,Berti:2009kk}, the scalar field on the black hole ansatz (\ref{key}) can be decomposed as \begin{equation} \Phi(t,x,y,\phi)=e^{-i\omega t}e^{im \phi}y^{-i\frac{2\omega y_+}{1+3y_+^2}}(1-y^2)^3(1-x^2)^{|m|}\psi(x,y) , \end{equation} where the powers of $x$ and $y$ are chosen to make the function $\psi(x,y)$ regular at the origin. The boundary conditions are imposed as follows: \begin{equation} \left\{ \begin{aligned} &\partial_{x}\psi(x,y)=0, \quad x=\pm1,\\ &\partial_{y}\psi(x,y)=0, \quad y=0,\\ &-2iy_+\omega \psi(x,y)+(1+3y_+^2)\partial_y\psi(x,y)=0, \quad y=1. \end{aligned} \right. \end{equation} \begin{figure}[!hbt] \centering \includegraphics[width=0.8\textwidth]{SandSSQuasinormalmode.pdf} \caption{ The real part of the frequency $\omega$ against the rotation parameter $\varepsilon$ for the two small branches and different values of the angular quantum number $m$ at $T=0.42$ and $q=0.07057$. The black horizontal line represents $Re$ $\omega=0$. The black vertical line represents $\varepsilon=2$.} \label{QN} \end{figure} In Fig.$\ $\ref{QN}, we show the real part of the quasinormal frequency $\omega$ against the rotation parameter $\varepsilon$ for the two small branches and different values of the angular quantum number $m$. From top to bottom, the dotted lines represent $m=5, 8, 10, 13$ and $16$, respectively. As with the horizon geometry and entropy above, the two small branches have equal quasinormal frequencies, although the horizon radius of the bigger one is nearly twice that of the smaller one. The real part of the frequency, $Re$ $\omega$, is always positive for $m<13$. For $m\geq13$, $Re$ $\omega$ becomes negative as the rotation parameter $\varepsilon$ increases, which means we could obtain a stable deforming charged black hole solution with scalar condensation. 
\section{Conclusions and Outlook}\label{Sec4} \hspace*{0.6cm}In this paper, we studied the conformal boundary of four-dimensional static asymptotically AdS solutions in Einstein-Maxwell gravity and constructed solutions of deforming charged AdS black holes. In contrast with the uncharged case, the charge $q$ influences the extrema of the temperature $T$, which divide the temperature range into different regions according to the value of $q$. The number of horizon values depends on the region of the temperature $T$. Moreover, there is no horizon when $T<T_{min}$ for $q=0$, but for nonzero charge $q$ there is at least one horizon value at any fixed temperature. \par We also investigated physical properties of the charged deforming AdS black holes, including the deformation of the horizon, the entropy and the stability: \begin{itemize} \item Deformation of the horizon: In the region with three horizon values at fixed temperature, the deformation of the horizon of the large branch increases with the boundary rotation parameter $\varepsilon$, while that of the small branches is a decreasing function of $\varepsilon$, closely resembling the uncharged case. We also studied how the horizon deforms with the charge $q$ and found that the deformation becomes smaller as $q$ increases. \item Entropy: In the region with three horizon values at fixed temperature, as $\varepsilon$ increases, the entropy of the large branch increases, while that of the small branches decreases. There also exists another set of unstable small-branch solutions, whose entropy increases with $\varepsilon$. The entropy of the large branch and the small branches at fixed temperature join up when the temperature $T$ is lower than $T_{RN}$. 
It is worth noting that in the region with one horizon value at fixed temperature, we find two families of solutions with the same horizon radius, which have different entropy properties when the temperature $T<T_{RN}$. \item Stability: We studied the stability of scalar fields in the background of deforming charged AdS black holes and found that for angular quantum number $m\geq13$ the real part of the frequency becomes negative, which signals scalar condensation. \end{itemize} \par The most interesting finding of our study is that, in the region with three horizon values at one temperature, the two small branches at a fixed temperature have the same numerical results, including the deformation of the horizon, the entropy and the stability, even though their horizon radii differ by a sizable factor. \par So far, we have studied the horizon geometry, entropy and stability of charged AdS black holes with a differential rotation boundary. The angular momentum, energy densities and thermodynamic relations of the deforming charged black holes have not yet been studied, and we hope to investigate them in future work. We also plan to study deforming charged black holes in $f(R)$ gravity and nonlinear electrodynamics. \section*{Acknowledgement} We would like to thank Yu-Xiao Liu and Jie Yang for helpful discussions. Some computations were performed on the Shared Memory system at the Institute of Computational Physics and Complex Systems, Lanzhou University. This work was supported by the Fundamental Research Funds for the Central Universities (Grant No. lzujbky-2017-182).
\section{Introduction} \label{sec:intro} The introduction of \emph{point--to--set} correlation functions \cite{bib:biroli, bib:montsem, bib:cavagna1, bib:cavagna2, bib:frmo} has enabled important progress in understanding the growth of static correlations in supercooled liquids near the glass transition. These non-standard correlation functions measure how deeply the effect of amorphous boundary conditions penetrates within a system. To introduce them, let us consider a large ensemble of interacting particles that becomes glassy at low temperature. We assume that the liquid is trapped in a metastable state. We freeze the motion of all the particles outside a sphere of radius $R$. Then we let the particles inside the sphere move freely and eventually rearrange into a different metastable state. The effect of the external particles is to create a pinning field favouring internal configurations which best match the frozen exterior. For small radius $R$, the effect of the pinning field on the interior of the sphere is strong, and the sphere remains in the same state. On the contrary, for large radius $R$, the effect becomes weak and the sphere can be found in a different state. Roughly speaking, a \emph{point--to--set} correlation function measures the overlap between the initial state and the one reached after the rearrangement of the system. It has been found in numerical experiments that on lowering the temperature the effect of the amorphous boundary conditions propagates deeper into the region \cite{bib:cavagna1,bib:cavagna2}. \\ Standard Random First Order Transition (RFOT) theory \cite{bib:kirk,bib:biroli} assumes that the competition between an entropy-rich state with high energy and an entropy-poor state with low energy can explain the transition from high-overlap to low-overlap metastable states of the above system as the radius of the sphere is increased. As we are going to show, this mechanism has to be reconsidered. 
In order to do this, let us consider, for simplicity, an Ising-like model described by a Hamiltonian $H$. We freeze a configuration $S^{\alpha}$ in a region $A$ of the system. We study the thermodynamics considering only configurations $S$ constrained to be close to $S^{\alpha}$ in $A$: \begin{equation} Z [S^{\alpha}] = \sum_S e^{-\beta H[S]} \chi_{A}[S,S^{\alpha}] \ , \end{equation} where \begin{equation} \chi_{A}[S^1,S^2] = \left\{ \begin{array}{lll} 1 & \ \ \mathrm{if} \ \ S^1_i=S^2_i& \ \ \forall i \in A\\ 0 & \ \ \mathrm{otherwise} \end{array} \right. . \end{equation} The thermodynamic average of an observable $\mathcal{O}$ of the system is obtained by averaging with the constrained Boltzmann measure over the configurations inside the sphere and with the Boltzmann measure over the configurations $S^{\alpha}$: \begin{equation} \langle \mathcal{O} \rangle = \sum_{S^{\alpha}} \frac{e^{-\beta H[S^{\alpha}]}}{Z} \sum_{S} \chi_{A}[S,S^{\alpha}]\frac{e^{-\beta H[S]}}{Z[S^{\alpha}]} \mathcal{O}(S) \ . \end{equation} This average coincides with the usual thermodynamical one: $ \frac{1}{Z} \sum_{S} e^{-\beta H[S]} \mathcal{O}(S)$. This simple fact has deep implications: in the case in which $A$ is a sphere of radius $R$, on average, the energy per degree of freedom is independent of $R$. If, for typical choices of the position of the sphere, one finds that two thermodynamic states coexist for a well-defined value of $R$, they will have the same energy. Possible mechanisms for coexistence should therefore have a purely entropic origin \cite{bib:FranzSemerjian}. \\ In recent numerical experiments \cite{bib:cavagna3} the energy paid to put different metastable states in contact has been measured. The procedure is the following: freeze two states $\alpha$ and $\beta$, exchange a sphere of the state $\alpha$ with a sphere of the state $\beta$, and let the system evolve. 
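The identity stated above — averaging the constrained measure over equilibrium exteriors $S^{\alpha}$ reproduces the plain Boltzmann average — can be checked by brute-force enumeration on a toy system. The sketch below uses a four-spin Ising chain with nearest-neighbour couplings purely for illustration (the model, its size and the value of $\beta$ are arbitrary assumptions, not the Kac model studied in this paper):

```python
import itertools
import math

beta = 0.7
N = 4
A = [0]  # frozen region: the first spin plays the role of the exterior

def H(s):
    # toy nearest-neighbour Ising Hamiltonian on an open chain
    return -sum(s[i] * s[i + 1] for i in range(N - 1))

def O(s):
    # observable: energy per degree of freedom
    return H(s) / N

configs = list(itertools.product([-1, 1], repeat=N))
Z = sum(math.exp(-beta * H(s)) for s in configs)

# plain Boltzmann average (1/Z) sum_S e^{-beta H[S]} O(S)
plain = sum(math.exp(-beta * H(s)) * O(s) for s in configs) / Z

# average over exteriors S^alpha of the constrained average:
# sum_{S^a} e^{-beta H[S^a]}/Z * sum_{S = S^a on A} e^{-beta H[S]}/Z[S^a] O(S)
constrained = 0.0
for sa in configs:
    compatible = [s for s in configs if all(s[i] == sa[i] for i in A)]
    Zc = sum(math.exp(-beta * H(s)) for s in compatible)
    inner = sum(math.exp(-beta * H(s)) * O(s) for s in compatible) / Zc
    constrained += math.exp(-beta * H(sa)) / Z * inner

assert abs(plain - constrained) < 1e-12  # the two averages coincide exactly
```

Grouping the exteriors by their restriction to $A$ shows why the identity holds exactly: the weight $Z_a/Z$ of each constraint class cancels against the constrained normalization $Z[S^{\alpha}]$.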
Inspired by this idea, in the present work we introduce a different \emph{point--to--set} correlation function, defined as the free-energy cost of putting different metastable states at distance $l$. In order to do that, we consider a \emph{sandwich} geometry: two regions of space separated by a box of width $l$, with different metastable states frozen on opposite sides of the box, Figure \ref{fig:sandwich}. This system is well suited to being studied in the framework of a $p$-spin model with Kac interaction \cite{bib:frmo,bib:frton1,bib:frton2,bib:frton3,bib:fr1,bib:fr2}. \\ The paper is organized as follows: in Section \ref{sec:model} we introduce the model that we consider and the basic definitions; in Section \ref{sec:calc} we briefly illustrate how to obtain the free energy of the system; more details on these calculations can be found in \ref{appa} and in \ref{appb}; in Section \ref{sec:res} we present our results and in Section \ref{sec:conc} we draw our conclusions. \section{The model} \label{sec:model} We consider a finite-dimensional version of the spherical $p$-spin model, defined on a $d$-dimensional cubic lattice $\Lambda$ of linear size $L$, whose elementary degrees of freedom are spins $S_i \in \mathbb{R}$ with $i\in \Lambda$. We introduce the interaction range $\gamma^{-1}>0$ and a non-negative, rapidly decreasing function $\psi(x)$ normalized by $\int \mathrm{d}^d x\, \psi(|x|)=1$. We define the local overlap of two configurations $S^1$ and $S^2$ as: \begin{equation} Q_{S^1S^2}(i)=\gamma^d \sum_{j\in \Lambda}\psi(\gamma |i-j|)S^1_j S^2_j \ . \end{equation} We impose that configurations are subject to the local spherical constraint: $Q_{S^1S^1}(i)=1$ $\forall i \in \Lambda$. We then introduce the finite-range $p$-spin Hamiltonian: \begin{equation} H_p [S,J]= -\sum_{i_1,...,i_p} J_{i_1,...,i_p}S_{i_1}...S_{i_p} \end{equation} where the couplings $J_{i_1,...,i_p}$ are i.i.d. 
random variables with zero mean and variance: \begin{equation} \espe{}{J_{i_1,...,i_p}^2}=\gamma^{pd}\sum_{k\in \Lambda} \psi(\gamma |i_1-k|)...\psi(\gamma |i_p-k|) \ . \end{equation} $\gamma^{-1}$ is the interaction range since only variables located at vertices $i$ and $j$ such that $|i-j|<\gamma ^{-1}$ effectively interact. This also implies that the Hamiltonian is a random variable with zero mean and variance: \begin{equation} \espe{}{H[S^1,J]H[S^2,J]}=\sum_{i\in \Lambda}f(Q_{S^1S^2}(i)) \ , \end{equation} where $f(x)$ is a polynomial with positive coefficients (for example $f(x)=x^p$ for a pure $p$-spin model); in the following we consider $f(x)=\frac{1}{10}x^2+x^4$, where the quartic term ensures a regular gradient expansion of the free-energy density. We analyze the model in the Kac limit: $L, \gamma^{-1} \to \infty$ with $L \gg \gamma^{-1}$, where the model can be solved by saddle-point approximation. \\ \begin{center} \begin{figure}[htbp] \setlength{\unitlength}{1cm} \begin{picture}(14,3) \thicklines \put(1,0){\line(1,0){5}} \put(6,0){\line(0,1){3}} \put(1,3){\line(1,0){5}} \put(1,0){\line(0,1){3}} \put(2.5,0){\line(0,1){3}} \put(4.5,0){\line(0,1){3}} \put(3.5,2.5){\vector(1,0){1}} \put(3.5,2.5){\vector(-1,0){1}} \put(3.5,0.5){\vector(1,0){2.5}} \put(3.5,0.5){\vector(-1,0){2.5}} \put(3.4,0.6){$L$} \put(3.4,2.6){$l$} \put(1.7,1.5){\large{$\alpha$}} \put(5.2,1.5){\large{$\alpha$}} \put(8,0){\line(1,0){5}} \put(13,0){\line(0,1){3}} \put(8,3){\line(1,0){5}} \put(8,0){\line(0,1){3}} \put(9.5,0){\line(0,1){3}} \put(11.5,0){\line(0,1){3}} \put(10.5,2.5){\vector(1,0){1}} \put(10.5,2.5){\vector(-1,0){1}} \put(10.5,0.5){\vector(1,0){2.5}} \put(10.5,0.5){\vector(-1,0){2.5}} \put(10.4,0.6){$L$} \put(10.4,2.6){$l$} \put(8.7,1.5){\large{$\alpha$}} \put(12.2,1.5){{\large$\beta$}} \end{picture} \caption{The \emph{sandwich} geometry for a system $\alpha \alpha$ (left) and $\alpha \beta$ (right). 
The box $B(l)$ is the central region; $A^+(l)$ and $A^-(l)$ are the lateral ones.} \label{fig:sandwich} \end{figure} \end{center} The sandwich geometry is implemented by considering three regions of the lattice $\Lambda$: $A^+(l)$, $A^-(l)$ and a box $B(l)$, Figure \ref{fig:sandwich}. In order to put the same or different states on opposite sides of the box, we introduce two different systems, which we call $\alpha \alpha$ and $\alpha \beta$: \begin{itemize} \item system $\alpha \alpha$: we fix a configuration $S^{\alpha}$ drawn from the Boltzmann equilibrium measure. We consider the thermodynamics of configurations $S$ constrained to be close to $S^{\alpha}$ both in $A^+(l)$ and in $A^-(l)$; \item system $\alpha \beta$: we fix two configurations $S^{\alpha}$ and $S^{\beta}$ drawn from the Boltzmann equilibrium measure. We consider the thermodynamics of configurations $S$ constrained to be close to $S^{\alpha}$ in $A^+(l)$ and to $S^{\beta}$ in $A^-(l)$. \end{itemize} We consider a system $\alpha \beta$. Let $\mathcal{O}$ be an observable of the system and let $\bar{q}\le1$. The constrained Boltzmann measure $\langle \cdot \rangle_{\alpha \beta}(l)$ is: \begin{eqnarray} \label{eq:valmed} \langle \mathcal{O} \rangle_{\alpha \beta}(l) \equiv & \frac{1}{Z[S^{\alpha}_{A^+},S^{\beta}_{A^-}]} \int \mathrm{d} S \mathcal{O}(S) e^{-\beta H[S,J]} \nonumber \\ & \times \prod_{i \in A^-} \delta (Q_{S^{\alpha}S}(i)- \bar{q}) \prod_{i \in A^+} \delta (Q_{S^{\beta}S}(i)- \bar{q}) \end{eqnarray} where $\int$ denotes integration over configurations satisfying the local spherical constraint. The partition function is: \begin{eqnarray} Z[S^{\alpha}_{A^+},S^{\beta}_{A^-}] \equiv & \int \mathrm{d} S e^{-\beta H[S,J]} \nonumber \\ & \times \prod_{i \in A^-} \delta (Q_{S^{\alpha}S}(i)- \bar{q}) \prod_{i \in A^+} \delta (Q_{S^{\beta}S}(i)- \bar{q}) \ . 
\end{eqnarray} The symbol $\mathbb{E}$ represents the average over both the distribution of the fixed configurations $S^{\alpha}$ and $S^{\beta}$ and the disorder; the free energy of the system $F_{\alpha \beta}(l)$ is then: \begin{equation} F_{\alpha \beta}(l,T) \equiv -\frac{1}{\beta} \ \espe{}{ \ln Z[S^{\alpha}_{A^+},S^{\beta}_{A^-}] } \ . \end{equation} For a system $\alpha \alpha$, the constrained Boltzmann measure $\langle \cdot \rangle_{\alpha \alpha}(l)$ is obtained by imposing the constraint $\prod_{i \in A^+ \cup A^-} \delta (Q_{S^{\alpha}S}(i)- \bar{q})$; then $F_{\alpha \alpha}(l,T)\equiv -\frac{1}{\beta}\espe{}{ \ln Z[S^{\alpha}_{A^+ \cup A^-}] }$. \\ As we will see in the following, $F_{\alpha \beta}(l,T)$ and $F_{\alpha \alpha}(l,T)$ can be calculated in the Kac limit, $\gamma \to 0$ taken after $L \to \infty$. This allows us to measure the free-energy cost per unit area of putting different metastable states at a distance $l$: \begin{equation} \label{eq:sigma} Y(l,T) \equiv \lim_{\gamma \to 0} \lim_{L\to \infty }\frac{F_{\alpha \beta}(l,T)-F_{\alpha \alpha}(l,T)}{L^{d-1}} \ ; \end{equation} this quantity can be interpreted as an effective, distance-dependent surface tension. \section{Calculations} \label{sec:calc} In the following we consider a system $\alpha \beta$; a system $\alpha \alpha$ can be treated in the same way. In order to calculate $F_{\alpha \beta}$, the average $\mathbb{E}$ can be taken by introducing replicas along the lines of \cite{bib:fr1, bib:fr2} (more details on the calculations can be found in \ref{appb}). Integrals over spin variables are then traded for an $(m+n) \times (m+n)$ matrix order parameter $q_{ab}(i)$. We rescale positions, defining $x=i \gamma \in [-\hat{L}, \hat{L}]^d$, $\hat{L} \equiv \gamma L$, to get: \begin{equation} F_{\alpha \beta}(\hat{l}) = -\frac{1}{\beta} \ \lim_{m,n\to 0 } \int [\mathrm{d} q_{ab}] e^{-\frac{1}{\gamma^d} \mathcal{S}_{\alpha \beta}(q_{ab})} \ . 
\end{equation} The dependence on $\gamma$ is now completely explicit and, for $\gamma \to 0$, the functional integral can be performed using the saddle-point method. We look for a replica-symmetric saddle point $q^{\mathrm{RS}}_{ab}(x)$. This is characterized by three scalar functions $p_1(x)$, $p_2(x)$ and $q(x)$; $p_1$ and $p_2$ are the local overlaps between the constrained configuration and the reference configurations $S^{\alpha}$ and $S^{\beta}$ respectively, and $q$ is the local overlap of two constrained configurations when they belong to the same metastable state (see \ref{appa} for more details). Using this ansatz we obtain $\mathcal{S}_{\alpha \beta}(q_{ab}) = n \int \mathcal{L}_{\alpha \beta} \mathrm{d}^d x + O(n^2)$, where: \begin{eqnarray} \label{eqlag} \mathcal{L}_{\alpha \beta}(x) = & -\frac{\beta^2}{2} [ f(1)+ 2f((\psi \ast p_1)(x)) +2f((\psi \ast p_2)(x)) -f((\psi \ast q)(x)) ] + \nonumber \\ & + \frac{1}{2} \left[ \log(1-q(x)) - \frac{p_1^2(x)+p_2^2(x)-q(x)}{1-q(x)} \right] \end{eqnarray} with: \begin{equation} (\psi \ast q)(x) = \int \mathrm{d}^d y \psi(|y-x|)q(y) \ . \end{equation} The constraint enforcing $S$ to be close to $S^{\alpha}$ in $A^-(\hat{l})$ and to $S^{\beta}$ in $A^+(\hat{l})$ is fulfilled by setting $p_1(x)=\bar{q}$ for $x\in A^-(\hat{l})$ and $p_2(x)=\bar{q}$ for $x \in A^+(\hat{l})$. We obtain $F_{\alpha \beta}(\hat{l})$ by evaluating the fields $p_1(x)$, $p_2(x)$ and $q(x)$ at the saddle point of the action $\mathcal{S}^0_{\alpha \beta}=\int \mathrm{d} ^d x \mathcal{L}_{\alpha \beta}(x)$. The resulting free energy presents an extensive part $O(L^d)$, which is the same for a system $\alpha \alpha$ and for a system $\alpha \beta$. 
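The local overlap $Q_{S^1S^2}(i)$ defined in the model section, and the convolution $(\psi \ast q)(x)$ entering the Lagrangian above, can be illustrated numerically. The sketch below works in $d=1$ with a Gaussian kernel (an arbitrary choice of $\psi$; the lattice size and $\gamma$ are illustrative) and checks that Ising-like spins $S_j=\pm1$ satisfy the local spherical constraint $Q_{SS}(i)\approx 1$ in the bulk, since $\gamma\sum_j \psi(\gamma|i-j|)\approx\int\psi=1$:

```python
import numpy as np

gamma = 0.05  # inverse interaction range (d = 1)
L = 2000      # lattice size, chosen so that L >> 1/gamma

def psi(r):
    # normalized, rapidly decreasing kernel (Gaussian chosen for illustration)
    return np.exp(-r**2 / 2) / np.sqrt(2 * np.pi)

sites = np.arange(L)
rng = np.random.default_rng(0)
S1 = rng.choice([-1.0, 1.0], size=L)
S2 = rng.choice([-1.0, 1.0], size=L)

def Q(Sa, Sb, i):
    """Local overlap Q_{S^a S^b}(i) = gamma^d sum_j psi(gamma|i-j|) Sa_j Sb_j."""
    return gamma * np.sum(psi(gamma * np.abs(sites - i)) * Sa * Sb)

i = L // 2  # a site deep in the bulk
# S_j = +-1 satisfies the local spherical constraint Q_{SS}(i) = 1,
# up to the Riemann-sum error of the kernel normalization
assert abs(Q(S1, S1, i) - 1.0) < 1e-3
# the overlap of two independent configurations is small by comparison
q12 = Q(S1, S2, i)
```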
Then, in the calculation of the surface tension $Y(\hat{l},T)$, the extensive part of the free energy cancels and contributions come only from the sub-leading order $O(L^{d-1})$; the resulting form of the surface tension is $Y(\hat{l},T) = \hat{F}_{\alpha \beta}(\hat{l}, T)- \hat{F}_{\alpha \alpha}(\hat{l}, T)$, where $\hat{F}_{\alpha \beta}(\hat{l}, T) = \frac{1}{\beta}\int_0^{\hat{l}} \mathrm{d} x\,\mathcal{L}_{\alpha \beta}(x)$. \\ We introduce a simplification in the Lagrangians: we expand the terms of the form $f((\psi \ast q)(x))$ in gradients of $q(x)$ and truncate at second order, obtaining $f(q(x))-cf''(q(x)) (\nabla q)^2(x)$, where $c=\frac{1}{2d}\int z^2 \psi(|z|) \, \mathrm{d}^d z$ (in our running example $c=1$). We find the saddle-point fields by numerically iterating the Euler--Lagrange equations of (\ref{eqlag}). \section{Results} \label{sec:res} The system $\alpha \alpha$ has been studied in spherical geometry \cite{bib:frmo}; we verified that in the sandwich geometry the behaviour does not change with respect to the spherical one. Two critical temperatures characterize the system: $T_s \approx 0.766287$ and $T_d \approx 0.813526$. Setting the temperature of the system to $T \lesssim T_d$, we find two lengths, $\hat{l}_0(T)$ and $\hat{\xi}_d(T)$, such that, for widths of the box $\hat{l} \in [\hat{l}_0(T), \hat{\xi}_d(T)]$, the action $\mathcal{S}^0_{\alpha \alpha}$ has two local minima. One minimum is characterized by a saddle-point field $p(x)$ rapidly decaying to zero in the interior of the box; we name this the low-overlap minimum. The other minimum is characterized by a saddle-point field $p(x)$ that is everywhere large; we name this the high-overlap minimum. For $\hat{l} > \hat{\xi}_d$ ($\hat{l} < \hat{l}_0$) only the low-(high-)overlap minimum exists. $\hat{\xi}_s(T)$ is defined as the minimum value of $\hat{l}$ such that the low-overlap minimum is the global minimum of the action. 
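The truncated gradient expansion introduced above can be checked numerically in $d=1$: the first step is $(\psi\ast q)(x)\approx q(x)+c\,q''(x)$ with $c=\frac{1}{2}\int z^2\psi(|z|)\,\mathrm{d}z$, from which the $-cf''(q)(\nabla q)^2$ form follows after integration by parts under the action integral. The kernel width and the test field below are illustrative assumptions:

```python
import numpy as np

sigma = 0.5
def psi(z):
    # normalized Gaussian kernel of width sigma (illustrative choice)
    return np.exp(-z**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

c = 0.5 * sigma**2  # c = (1/2d) * int z^2 psi(|z|) d^d z  in d = 1

omega = 0.3
q   = lambda x: np.sin(omega * x)           # slowly varying test field
qpp = lambda x: -omega**2 * np.sin(omega * x)

x0 = 1.0
z = np.linspace(-6 * sigma, 6 * sigma, 4001)
dz = z[1] - z[0]
conv = np.sum(psi(z) * q(x0 + z)) * dz      # (psi * q)(x0) by quadrature

# second-order gradient expansion: (psi * q)(x) ~ q(x) + c q''(x)
assert abs(conv - (q(x0) + c * qpp(x0))) < 1e-3
```

The expansion is accurate because the field varies on a scale $1/\omega$ much larger than the kernel width $\sigma$, which is exactly the regime the Kac limit selects.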
The critical temperatures $T_s$ and $T_d$ are defined as the temperatures at which $\hat{\xi}_s(T)$ and $\hat{\xi}_d(T)$, respectively, diverge. For clarity, we present in Figure \ref{fig:enlib} the plot of the sub-extensive part of the free energy of the high-(low-)overlap minimum $\hat{F}_{\alpha \alpha}^H(\hat{l})$ ($\hat{F}_{\alpha \alpha}^L(\hat{l})$) divided by the size $\hat{l}$ for a system at a temperature $T_s <T< T_d$. $\hat{\xi}_s(T)$ is then the value of $\hat{l}$ where $\hat{F}_{\alpha \alpha}^L(\hat{l})$ and $\hat{F}_{\alpha \alpha}^H(\hat{l})$ cross, and the global free energy of a system $\alpha \alpha$ is $F_{\alpha \alpha}(\hat{l})= \min \left\{ F_{\alpha \alpha}^L(\hat{l}), F_{\alpha \alpha}^H(\hat{l})\right\}$. \\ On the other hand, in the case of a system $\alpha \beta$, the action $\mathcal{S}^0_{\alpha \beta}$ always has a single minimum. Profiles of the saddle-point field $p_1(x)$ can be seen in Figure \ref{fig:absol}. The sub-extensive part of the free energy of the unique minimum, $\hat{F}_{\alpha \beta}(\hat{l})/\hat{l}$, for a temperature $T_s< T< T_d$ is also plotted in Figure \ref{fig:enlib}. At all temperatures and values of $\hat{l}$ that we have studied, the sub-extensive part of the free energy of a system $\alpha \beta$, $\hat{F}_{\alpha \beta}(\hat{l})$, is close to the sub-extensive part of the low-overlap free energy of a system $\alpha \alpha$, $\hat{F}^{L}_{\alpha \alpha}(\hat{l})$, as can be seen in the inset of Figure \ref{fig:enlib}. \\ In Figure \ref{fig:diff} we follow the evolution of the $\hat{l}$-dependent surface tension $Y(\hat{l},T)$ for systems at different temperatures $T>T_s$. We note that the static correlation length $\hat{\xi}_s(T)$ separates two regimes. For $\hat{l}<\hat{\xi}_s(T)$, $Y(\hat{l},T)$ shows a power-law followed by a linear decrease. 
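The construction just described — $\hat{\xi}_s$ as the width where the two free-energy branches cross, and the global free energy as their pointwise minimum — can be sketched with toy curves. The linear forms below are purely illustrative assumptions, not the actual saddle-point free energies of the model:

```python
def F_high(l):
    # toy high-overlap branch (illustrative): favoured at small l
    return 0.5 * l

def F_low(l):
    # toy low-overlap branch (illustrative): pays a fixed surface cost
    return 2.0 + 0.3 * l

def crossing(f, g, lo, hi, tol=1e-10):
    """Bisection on f - g; assumes exactly one sign change on [lo, hi]."""
    d = lambda l: f(l) - g(l)
    assert d(lo) * d(hi) < 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if d(lo) * d(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

xi_s = crossing(F_low, F_high, 1.0, 20.0)      # toy branches cross at l = 10
F_global = lambda l: min(F_low(l), F_high(l))  # global branch, as in the text

assert abs(xi_s - 10.0) < 1e-6
assert F_global(5.0) == F_high(5.0) and F_global(15.0) == F_low(15.0)
```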
For $\hat{l}>\hat{\xi}_s(T)$, as we see in the inset of Figure \ref{fig:diff}, the decrease becomes exponential: \begin{equation} Y(\hat{l},T) \sim C \ e^{-\hat{l}/\tilde{l}}, \end{equation} with $\tilde{l}$ weakly dependent on the temperature and showing no evident relation to $\hat{\xi}_s$. This shows that the surface tension $Y(\hat{l},T)$ is significantly different from zero only for $\hat{l}\lesssim \hat{\xi}_s$. A similar result has been obtained in \cite{bib:moore}; in that case the interface free energy was obtained by changing the boundary conditions along one direction from periodic to anti-periodic. The case $T=T_s$ deserves particular attention. At $T_s$, the static correlation length $\hat{\xi}_s$ diverges. This means that the high-overlap minimum is the global minimum of the action $\mathcal{S}^0_{\alpha \alpha}$ for all values of $\hat{l}$. We see in Figure \ref{fig:diff} that, as $T$ approaches $T_s$, the profile of $Y(\hat{l},T)$ takes the shape of a plateau. Consequently, at the critical temperature $T_s$, in the limit $\hat{l} \to \infty$, the surface tension $Y(\hat{l},T_s)$ does not fall to zero but approaches a limiting value $Y(T_s)$. Arguably, the value $Y(T)$ remains different from zero for temperatures $T<T_s$. \\ According to phenomenological arguments \cite{bib:biroli}, the static correlation length $\hat{\xi}_s(T)$ can be interpreted as the typical size of the metastable states of a system at temperature $T$. Following this idea, in a system $\alpha \beta$ we are freezing a patchwork of metastable states of size $\hat{\xi}_s(T)$ outside the box and leaving the system free to rearrange inside the box. If the width of the box is larger than the typical metastable-state size, $\hat{l} \gg \hat{\xi}_s(T)$, the system inside the box has enough space to rearrange into many different metastable states. 
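A decay length like $\tilde{l}$ is typically extracted from such a tail by a log-linear least-squares fit. The sketch below recovers it from synthetic noiseless data; the values of $C$ and $\tilde{l}$ are illustrative, not fitted values from this paper:

```python
import numpy as np

# synthetic tail Y(l) = C exp(-l / l_tilde); parameter values are illustrative
C_true, l_tilde_true = 0.8, 3.5
l = np.linspace(12.0, 30.0, 40)   # sample only the tail regime l > xi_s
Y = C_true * np.exp(-l / l_tilde_true)

# log-linear least squares: ln Y = ln C - l / l_tilde
slope, intercept = np.polyfit(l, np.log(Y), 1)
l_tilde = -1.0 / slope
C = np.exp(intercept)

assert abs(l_tilde - l_tilde_true) < 1e-6
assert abs(C - C_true) < 1e-6
```

With noisy data the same fit applies, but restricting it to $\hat{l} > \hat{\xi}_s$ matters: including the power-law/linear regime would bias the estimated decay length.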
On the contrary, when the width of the box is smaller than the metastable-state size, $\hat{l}<\hat{\xi}_s$, there is not enough space to create metastable states in the interior, so the frozen states are in contact and ``repel'' each other. This explains why the surface tension $Y(\hat{l},T)$ is significantly different from zero only for $\hat{l} < \hat{\xi}_s(T)$, and why the overlap profiles $p_1(x)$ and $p_2(x)$ between the frozen metastable states and the interior of the box decrease faster for small boxes. At the critical temperature $T_s$ the size of the metastable states diverges. Consequently, the surface tension takes a finite value even in the limit $\hat{l} \to \infty$. \\ Other observables of the system have been considered. We studied the internal energy $U$. We verified that for a system $\alpha \alpha$ the high-overlap and the low-overlap phases have the same energy, as motivated in Section \ref{sec:intro}. In Figure \ref{fig:enint} we follow the evolution of $U_{\alpha \beta}(\hat{l})-U_{\alpha \alpha}(\hat{l})$ for different temperatures of the system. A detailed derivation of this quantity can be found in \ref{appb}. In this case, we note a power-law followed by an exponential decrease. \\ We also computed the configurational entropy $\Sigma$ as a function of the size $\hat{l}$ of the box, Figure \ref{fig:entropia}. For a system $\alpha \alpha$ only the low-overlap phase presents a configurational entropy $\Sigma_{\alpha \alpha}$ different from zero. As noticed in \cite{bib:frmo}, for $\hat{l}<\hat{l}^{1RSB}$ the replica-symmetric solution is incorrect, since it gives a negative entropy. We found that the same is true for a system $\alpha \beta$. In the inset of Figure \ref{fig:entropia} we plot the difference between the configurational entropies of the two systems. We note that this quantity is a decreasing function of the size $\hat{l}$ of the system. 
This is consistent with the observation that the system loses memory of the frozen exterior for large sizes of the box. \section{Conclusions}\label{sec:conc} In this paper we have studied a distance-dependent surface tension, defined as the free-energy cost of putting metastable states at a given distance. This has been done in the framework of a disordered microscopic model with Kac interactions that can be solved in the mean-field limit. We have found that the surface tension is significantly different from zero only for distances between metastable states smaller than the static correlation length of the system. A description of this behaviour in terms of a phenomenological droplet argument has been proposed. Other observables, such as the internal energy and the configurational entropy, have been studied. The behaviour of the configurational entropy allowed us to identify below which size the replica-symmetric ansatz becomes incorrect and a 1-RSB solution must be considered. \ack It is a pleasure to thank D. Fichera, M. Castellana and G. Biroli for interesting discussions. \begin{figure}[H] \begin{center} \includegraphics[width=7cm, angle= -90]{b125.eps} \caption{Plot of the profiles of the saddle-point field $p_1(x)$ for a system $\alpha \beta$ at temperature $T=0.8$ for different values of the width of the box $\hat{l}$. At this temperature $\hat{\xi}_s \sim 24$.} \label{fig:absol} \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=7cm, angle=270]{EnergiaLibera.eps} \caption{Main figure: Plot of the sub-extensive part of the free energy divided by the size as a function of $\hat{l}$ for a system at a temperature $T=0.7874$ of: the high-overlap minimum of a system $\alpha \alpha$, $\hat{F}_{\alpha \alpha}^H(\hat{l})/ \hat{l}$; the low-overlap minimum of a system $\alpha \alpha$, $\hat{F}_{\alpha \alpha}^L(\hat{l})/\hat{l}$; the unique minimum of a system $\alpha \beta$, $\hat{F}_{\alpha \beta}(\hat{l})/ \hat{l}$. 
The static correlation length $\hat{\xi}_s$ is indicated. On this scale $\hat{F}_{\alpha \alpha}^L(\hat{l})$ and $\hat{F}_{\alpha \beta}(\hat{l})$ are indistinguishable. Inset: the difference $\hat{F}_{\alpha \alpha}^L(\hat{l})- \hat{F}_{\alpha \beta}(\hat{l})$ in logarithmic scale. } \label{fig:enlib} \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=8cm, angle=-90]{Differenza.eps} \caption{Plot of $Y(\hat{l},T)$ for different temperatures as a function of the width of the box $\hat{l}$. We recall that $T_s \approx 0.766287$ and $T_d \approx 0.813526$.} \label{fig:diff} \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=8cm, angle= -90]{Differenza2.eps} \caption{Plot of $U_{\alpha \beta}(\hat{l})-U_{\alpha \alpha}(\hat{l})$ for different temperatures as a function of the width of the box $\hat{l}$. We recall that $T_s \approx 0.766287$ and $T_d \approx 0.813526$.} \label{fig:enint} \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=7cm,angle=-90]{entropia.eps} \caption{Main figure: the configurational entropy $\Sigma_{\alpha \alpha}(l)$ as a function of $l$ for a system $\alpha \alpha$, and $\Sigma_{\alpha \beta}(l)$ for a system $\alpha \beta$, at temperature $T=0.8$. Inset: the difference $\Sigma_{\alpha \beta}(l)-\Sigma_{\alpha \alpha}(l)$.} \label{fig:entropia} \end{center} \end{figure}
\section{Introduction} Feature detection and description have been an important and integral part of many computer vision applications related to inferring 3D scene geometry, such as image retrieval, 3D reconstruction, and localization. In the literature, the performance evaluation of features has primarily been based on datasets that contain planar surfaces combined with affine transformations, regardless of whether the features were constructed by hand (e.g., SIFT \cite{lowe2004distinctive, bay2006surf, rosten2008faster, shi1994good, harris1988combined} and their variants) or by data-driven deep learning (e.g., \cite{LIFT}, \cite{detone2018superpoint}, \cite{savinov2017quad}, \cite{alcantarilla2012kaze}, \cite{DBLP:journals/corr/abs-1805-09662}). Although this is useful for making detectors and descriptors that work well on textured planar surfaces or arbitrary scenes with small viewpoint changes, modern applications of image features, like augmented reality, autonomous robot navigation, and building large-scale city maps, often further require features to work well under large viewpoint changes on geometries significantly different from planar surfaces. When existing features are evaluated in scenarios that contain significant viewpoint changes between non-planar surfaces, researchers have found that most methods perform poorly \cite{aanaes2012interesting,moreels2007evaluation}. One prominent exception is the MagicPoint detector \cite{detone2018superpoint}. DeTone \etal introduce a network architecture called SuperPoint and two networks. The first, named MagicPoint, is a detector trained to recognize corner points using a synthetic dataset. This dataset comprises simple geometric shapes, like grids and boxes, augmented with noise. 
On this synthetic 3D dataset, it was able to achieve $0.979$ mean average precision, which is a $>0.3$ mean average precision increase over traditional corner detectors like Shi \cite{shi1994good}, FAST \cite{rosten2008faster}, and Harris \cite{harris1988combined}. The other network, which they also call SuperPoint, is both a detector and a descriptor. The detector for SuperPoint, which we call SuperPoint-COCO to differentiate it from the architecture of the neural network, was trained using homographic adaptation, a process DeTone \etal used to adapt a set of detections to be viewpoint invariant on the MS COCO dataset. This is done by simulating viewpoint changes using homographies applied to the images. As the MS COCO dataset does not have ground-truth feature detections, the detection labels are provided by the MagicPoint detector. The SuperPoint-COCO detector is not pre-trained on the synthetic dataset and is only trained on the augmented MS COCO dataset. In our tests we show that SuperPoint-COCO suffers a large performance penalty compared to the MagicPoint detector in real 3D scenes. We believe this indicates that training on 3D data is necessary to develop viewpoint invariance with regard to non-planar surfaces. To this end, we developed methods to collect 2D images of 3D data and to find the ground-truth keypoints within these images, mimicking the success of MagicPoint but with data that more closely resembles the scenes encountered in the use of these detectors. We generated a 3D dataset using the Gibson simulator \cite{gibson} with the Matterport dataset \cite{Matterport3D} to tackle the issues of precision and variety in 3D data. We developed an algorithm for computing a repeatable set of detections given a set of 3D data and a collection of other detections, and we developed a fast, accurate, and powerful way to benchmark algorithms in indoor environments using the Scannet dataset \cite{dai2017scannet}. 
We will release the source code for all tools we developed. \section{Previous Work} Because the SuperPoint architecture used for MagicPoint proved effective for learning viewpoint invariance on non-planar surfaces, we decided to use this network architecture, as well as its loss function, when selecting a network to learn our new dataset. The model is based on VGG \cite{vgg} and outputs a low-spatial-resolution (1/8 scale) grid of values, storing the extra spatial data in the depth of the layer so it can be reshaped back to a full-resolution interest map. A softmax is applied to each cell of the grid before it is reshaped, to prevent double detections. The loss is the mean of a softmax cross-entropy loss across the cells. We utilize the implementation by Rémi Pautrat and Paul-Edouard Sarlin \cite{superpointimpl}. Two papers evaluate the performance of detectors with regard to viewpoint invariance on non-planar surfaces. Moreels and Perona \cite{moreels2007evaluation} conclude that the Difference of Gaussians (DoG) detector performs almost as well as the Hessian-Affine detector. Aan{\ae}s \cite{aanaes2012interesting} utilizes the DTU Robot Dataset to evaluate several detectors, concluding that the DoG, Hessian Blob, and Harris Corner detectors are also viewpoint invariant. However, neither paper tests any data-driven detectors. The method most similar to the one we develop is LIFT \cite{LIFT}. LIFT is trained to predict the subset of features detected by SIFT that are not flagged as outliers during 3D reconstruction. However, this can result in inaccurate labels. Due to imperfect feature matching, not every good point survives 3D reconstruction. Even worse, bad points can pass this check if not enough viewpoints capture them. Lastly, the datasets are not very large, as they use the Piccadilly and Roman Forum datasets \cite{wilson2014robust}. Our network utilizes a large, dense 3D scene capture dataset. 
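The decoding step described above can be sketched in NumPy. We assume the 65-channel layout of the SuperPoint paper (one channel per pixel of an $8\times8$ cell plus a ``no interest point'' dustbin); the input resolution below is an arbitrary example:

```python
import numpy as np

H, W, cell = 240, 320, 8
rng = np.random.default_rng(0)
# raw network output: one 65-deep vector per 8x8 cell (64 pixels + dustbin)
logits = rng.standard_normal((H // cell, W // cell, cell * cell + 1))

# softmax over the depth of each cell, as described in the text
e = np.exp(logits - logits.max(axis=-1, keepdims=True))
probs = e / e.sum(axis=-1, keepdims=True)

# drop the dustbin channel and fold the depth back into an 8x8 spatial patch
patches = probs[..., :-1].reshape(H // cell, W // cell, cell, cell)
heatmap = patches.transpose(0, 2, 1, 3).reshape(H, W)

assert heatmap.shape == (240, 320)
assert heatmap.min() >= 0.0 and heatmap.max() <= 1.0
```

Because the softmax normalizes within each cell, at most one strong detection survives per $8\times8$ region, which is what prevents double detections.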
This allows us to initialize with a variety of features in addition to SIFT, and it prevents inaccuracies in labeling, as we can directly compute feature projections. LF-Net \cite{DBLP:journals/corr/abs-1805-09662} trains using the Scannet dataset. The Scannet dataset \cite{dai2017scannet} consists of videos captured by a handheld depth sensor within small indoor environments. Most of the sequences consist of several loops within a single room. LF-Net trains on 15-frame intervals, with a single frame being projected into the next, which, by our calculations, yields an average angle change of $12.5^{\circ}$. The authors show that the SuperPoint-COCO descriptors outperform both LF-Net and LIFT for indoor feature matching with wide baselines, but they do not directly evaluate the reliability of detections. Rosten \etal used a small dataset comprised of 37 images with significant viewpoint changes to find the parameters for the FAST corner detector \cite{rosten2008faster}. Savinov \etal train a network to output viewpoint-invariant detections by training it to assign the same rank to patches that correspond to the same point \cite{savinov2017quad}. They train on the DTU Robot Image Dataset \cite{aanaes2012interesting} as well as the NYUv2 dataset \cite{Silberman:ECCV12}, both of which are 3D indoor datasets. However, they train on only a handful of images (n=40) and test on fewer. In our work we train using a very large dataset of over 100,000 images from 150 different 3D building captures. Many papers apply affine transformations to images to simulate viewpoint changes. There are many datasets \cite{hpatches, mikolajczyk2005comparison, cordes2013high, fischer2014descriptor} that are composed entirely of planar objects. There are also many datasets \cite{heinly2012comparative, zitnick2011edge, winder2009picking} that image objects far in the distance. 
The combination of far distance and small rotations makes the images roughly planar, and an affine transformation between images a reasonable approximation. This assumption was previously necessary due to the lack of availability of labeled 3D data. However, with the recent surge in 3D data coming from scene-segmentation research \cite{Silberman:ECCV12, dai2017scannet, Matterport3D}, we can now train and evaluate detectors without this assumption, on scenes where it would not be appropriate. \section{Proposed Technique} \subsection{Training Set Generation} \label{sec:datasets} To generate a large amount of 3D data for training our network, we utilize the Gibson simulator, which renders photo-realistic viewpoints of scenes captured with a Matterport sensor \cite{DBLP:journals/corr/abs-1709-06158}. The benefit of using simulated data is the ability to capture thousands of viewpoints from awkward angles, which is important to the method by which we generate ground-truth labels for detection. Simulated data also has nearly perfect pose and allows for more accurate calculation of ground truth for the training process. From the 340 areas, we select 150 areas that have a significant amount of texture and/or objects filling the scenes, because many areas are completely empty, texture-less apartments. We capture images from each of the 150 areas by randomly positioning the camera within the bounds of the scene and checking whether the camera position is valid. The orientation of the camera is randomly sampled, with the yaw sampled uniformly from $[0, 2\pi)$, the pitch sampled from a Normal$(\mu=0,\sigma=\frac \pi 8)$, and the roll sampled from a Normal$(\mu=0,\sigma=\frac \pi 4)$. The validity of a camera position is determined by casting a ray in each axial direction and outward from the camera viewpoint and checking that each falls within acceptable bounds. 
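The sampling and validity test just described can be sketched as follows. The ray distances are stubbed out, since the actual raycasts depend on the Gibson scene; the bounds used (a 0.6 m lower bound on all rays, and upper bounds of 5 m for top/bottom, 20 m for the sides, and 10 m for the viewpoint ray) follow the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_orientation():
    """Yaw ~ U[0, 2pi), pitch ~ N(0, pi/8), roll ~ N(0, pi/4), as in the text."""
    yaw = rng.uniform(0.0, 2.0 * np.pi)
    pitch = rng.normal(0.0, np.pi / 8)
    roll = rng.normal(0.0, np.pi / 4)
    return yaw, pitch, roll

UPPER = {'top': 5.0, 'bottom': 5.0, 'left': 20.0, 'right': 20.0,
         'front': 20.0, 'back': 20.0, 'view': 10.0}

def position_is_valid(ray_hits):
    """ray_hits maps each ray (the axial directions plus the viewpoint ray)
    to the distance of its first intersection with the scene."""
    return all(0.6 <= d <= UPPER[name] for name, d in ray_hits.items())

yaw, pitch, roll = sample_orientation()
assert 0.0 <= yaw < 2.0 * np.pi

ok = {'top': 2.0, 'bottom': 1.5, 'left': 4.0, 'right': 6.0,
      'front': 3.0, 'back': 3.0, 'view': 7.0}
too_close = dict(ok, top=0.3)  # camera nearly touching a surface: rejected
assert position_is_valid(ok)
assert not position_is_valid(too_close)
```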
The lower bound was set to 0.6 meters for all rays to stop the camera from intersecting with walls, and the upper bound was set to 5 meters for the top and bottom rays, 20 meters for the sides, and 10 meters for the viewpoint ray, to keep viewpoints inside the model. Part of the nature of the Gibson renderer is that it fills in the gaps for pieces of data that were missing, leaving blurry artifacts that shift around in some of the images, as can be seen in Figure \ref{fig:a}. Attempts to filter out these images using blur detection were made, but the nature of the images made it too difficult to discern good images from bad ones. However, the majority of the images are of good quality, and the model has shown the ability to transfer what it learned to the real world. The bounding box around each area was used to estimate how many images should be taken for the area. However, we are more interested in the effective volume of the area, which we estimated by sampling 100 images to get an estimate of the percentage of images that are usable within the bounding box. We then multiply the volume given by the bounding box by the ratio of images we found to be usable. For each cubic meter of effective volume in an area, we captured 10 images, for a total of 407389 images extracted from the 340 areas. \begin{figure*}[htb] \includegraphics[width=\textwidth]{diagram.pdf} \caption{Full overview of the method we used to train our network to detect points that are repeatable using 3D data. The process starts with Matterport data \cite{Matterport3D}, which is rendered using the Gibson renderer. The gaps are filled using a neural network called Goggles, and thousands of valid pictures are taken using the Random View Photographer. These images are passed to a collection of detectors to get a score for each pixel; these scores are then projected onto an octomap \cite{hornung13auro}, obtained by converting the mesh to an octomap using binvox \cite{binvox}. 
This painted map is then used to generate ground truth detections for the 2D image. These labels are then used to train SuperPoint \cite{detone2018superpoint}.} \label{diagram} \end{figure*} \subsection{Ground Truth Labeling} We propose a technique that, given a set of detections $D_i\subset \mathbb{Z}\times\mathbb{Z}\times \mathbb{R}_+$ on some set of images $I_i\in\mathbb{R}^{n\times n}$ with pose $P_i\in\text{SE}(3)$, within an area that has a 3D model, can find the subset of detections within each frame that are viewpoint invariant. For any frame $I_i$, we can find the location of a detection $(I_x, I_y, c)\in D_i$ within the map by raycasting from $P_i$ in the direction $P_i\begin{bmatrix} I_x & I_y & 1 & 1 \end{bmatrix}^T$. We paint the first point hit in the map using the confidence of the detection $c$. We also track the number of images that have seen the pixel, which we call the number of views, for use in averaging later. Once this painting process is complete, we are left with a 3D map of all given detections, which can be seen in Figure \ref{diagram}. Using this map, we can then evaluate whether a detection is repeatable within the set of detections by once again iterating through each frame $I_i$ and retrieving the values for confidence and number of views within the map at each detection $(I_x, I_y, c)\in D_i$. This is done by, once again, raycasting from the pose in the direction of the pixel and retrieving the confidence value and number of views of the first point hit. We can then evaluate the repeatability of the detection based on how confident each view of the detection was about there being a feature at that point. In practice, there is quite a bit of noise involved in this process, so we have to perform additional preprocessing to reject noisy points and boost the signal of points that the original set of detections might have missed.
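A toy version of this painting and lookup loop might look as follows, assuming a hypothetical `raycast` callback that maps a pose and a pixel to the first voxel hit in the 3D model (the real pipeline performs this raycast against an octomap):

```python
from collections import defaultdict

class DetectionMap:
    """Toy stand-in for the painted 3D map: accumulates detection confidence
    and view counts per voxel key returned by `raycast`."""
    def __init__(self, raycast):
        self.raycast = raycast
        self.score = defaultdict(float)   # summed confidences per voxel
        self.views = defaultdict(int)     # number of frames that saw the voxel

    def paint(self, pose, detections):
        """detections: iterable of (ix, iy, confidence) tuples for one frame."""
        for ix, iy, c in detections:
            key = self.raycast(pose, (ix, iy))
            if key is not None:
                self.score[key] += c
                self.views[key] += 1

    def lookup(self, pose, detection):
        """Return (accumulated confidence, number of views) for a detection."""
        ix, iy, _ = detection
        key = self.raycast(pose, (ix, iy))
        return self.score.get(key, 0.0), self.views.get(key, 0)
```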
\subsubsection{Map Creation Details} Because our method is predicated on all repeatable detections already existing in the set we narrow down, we used all of the easily available detectors to help ensure that we had a comprehensive set of detections. We use SIFT \cite{lowe2004distinctive}, SURF \cite{bay2006surf}, ORB \cite{rublee2011orb}, Harris Corners, Good Features to Track (GFTT) \cite{shi1994good}, and MagicPoint \cite{detone2018superpoint}. We set the confidence of each detection to be $\frac 1 N$, where $N$ is the number of features detected on that frame, to avoid oversaturating the map with too many detections. To allow for fast raycasting within the map, we utilize an Octomap \cite{hornung13auro} and discretize the map, which was originally a mesh, to a voxel grid with a resolution of 0.01 meters to aggregate the 3D detections in space \cite{binvox}\cite{nooruddin03}. Computing the score maps is expensive, taking approximately 500 core-hours. However, it is easily parallelizable, so we utilized the Pacific Research Platform cluster computer to calculate the ground truth, allowing the calculation to be performed in 6 hours. \subsubsection{Extracting and Filtering of Points}\label{technique:eval} The problem with evaluating each of the detections is the amount of noise that can be present in the map. The most important source is a phenomenon we call backshadow: noise in the projection of 2D data into 3D space can cast a shadow-like effect on regions of 3D space far away from the region the projection was meant to affect. If each point in 3D space is evenly viewed, then backshadow causes few issues, because each point would have enough data to reject outliers. This is why the ability of simulated data to capture odd viewpoints is important.
However, even with our simulation, bias caused by the position checker occasionally produced areas with few views, and backshadow is a serious issue for points with a low number of views. As a result, when compiling the score map, we reject the use of the score map and use MagicPoint as the ground truth at any individual point where the number of views is lower than a threshold of 10. For the rest of the points, we want to utilize the map of the number of views that saw each pixel, which we call the counting map, to divide the original score map, so that we can find the average value of the score map at each pixel. However, the problem with the counting map is that it can have spiky regions, which can cause abnormal averages. To counteract this, we erode the counting map and blur it with kernels of size 9 to smooth it, and then divide the score map by it to obtain the mean score map. Once we have obtained our mean score map, we need to find the set of repeatable detections. However, using the mean score map to evaluate the original set of detections yields numerous double detections for the same point. The first problem was that detectors would often yield slightly different locations for the same detection. This led to the second problem: the discretization of the map would cause all of these points to fall under the same voxel and all be assigned a high score within the mean score map. In theory, we would want the detections for a single 3D feature to lie within the same voxel of the discretized map. However, we also need a high enough resolution to distinguish between close points. \subsubsection{Filtering Double Points} We propose the following solution to this problem. We create a set of candidate points, which we filter to remove double points, from which we select a subset using the mean score map.
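The counting-map smoothing and averaging described earlier can be sketched in a few lines. This is a deliberately naive NumPy version (real pipelines would use optimized image filters); the kernel size of 9 and view threshold of 10 are the values from the text:

```python
import numpy as np

def _min_filter(a, k):
    """Naive k x k erosion (minimum filter) with edge padding."""
    p = k // 2
    padded = np.pad(a, p, mode="edge")
    out = np.empty_like(a, dtype=float)
    h, w = a.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].min()
    return out

def _box_blur(a, k):
    """Naive k x k box blur with edge padding."""
    p = k // 2
    padded = np.pad(a, p, mode="edge")
    out = np.empty_like(a, dtype=float)
    h, w = a.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def mean_score_map(score, views, kernel=9, min_views=10):
    """Erode then blur the counting map and divide the score map by it.
    Pixels with too few views are flagged invalid; the text falls back to
    MagicPoint labels at those points."""
    smooth = _box_blur(_min_filter(views.astype(float), kernel), kernel)
    valid = views >= min_views
    mean = np.zeros_like(score, dtype=float)
    np.divide(score, np.maximum(smooth, 1e-9), out=mean, where=valid)
    return mean, valid
```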
First, we wanted to add points to the original detection set using the map, because we found the original set to be far from perfect. To obtain candidate points from the mean score map, we applied Difference of Gaussians (DoG) to the score map and extracted the peaks greater than 0.01. However, the discretization of the map at close distances would present many detections, so we prevented the map from adding to the set of detections if the detection was closer than 0.5m, which we checked by utilizing a depth map for the frame. We then compiled all of the detections into a single map of detections. Let $D_s$ be the set of detections from the mean score map and $D_a$ be the set of detections from all other detectors. Let $S$ be the mean score map. Then the map of candidate detections $C$ was set to \begin{equation} C_{x,y} = \begin{cases} 3 & \text{if } (x, y) \in D_s\cap D_a \\ 2 & \text{if } (x, y) \in D_s\setminus D_a \\ 1 & \text{if } (x, y) \in D_a\setminus D_s \\ 0 & \text{else } \end{cases} \end{equation} Now that we had a map of the priority of each detection, we could filter the double points by applying a Gaussian blur to $C*S$ and extracting the local maxima (with a radius of 2px). We multiply the priority map by the score map to bias the candidate detections toward locations with a high score, because we found that the map would often present a more accurate location for the feature than any of the feature detectors had found. These peaks are then assigned values from the score map to allow for accurate thresholding, which we set to 0.05, creating our final set of detections. \begin{figure}[htb] \centering \includegraphics[width=0.7\linewidth]{kitchen2.png} \captionof{figure}{% A room rendered within the Matterport dataset by the Gibson simulator \cite{gibson}. Note the slight amount of smearing on the table caused by Goggles filling in the data.
} \label{fig:a} \end{figure} \section{Experiments} \subsection{Test Set Selection} We did not want to use the Gibson dataset to test algorithms because of issues caused by Goggles \cite{gibson}, the algorithm used to fill holes in the dataset, as it can distort the images. However, due to our desire to test the ability of a network to handle non-planar perspective changes, we needed a corresponding testing dataset that has depth; otherwise the testing method would not be able to check where a feature point lies in 3D space. Many datasets and tests exist in the detection literature. However, as mentioned in the previous works section, most viewpoint invariance tests are based on datasets that approximate affine transformations on planar surfaces. There are two tests designed to check viewpoint invariance on non-planar surfaces, though: the test by Aan{\ae}s that utilizes the DTU Robot Dataset \cite{aanaes2012interesting}, and the test by Moreels and Perona \cite{moreels2007evaluation}. Unfortunately, Moreels and Perona \cite{moreels2007evaluation} do not make their evaluation method available, and the test developed by Aan{\ae}s \etal\cite{aanaes2012interesting} takes multiple days to weeks to evaluate a single detector. As a result, we designed our own detector repeatability test, which is based on the Scannet \cite{dai2017scannet} dataset. The Scannet dataset is comprised of videos of indoor environments captured by a mobile scanning device. The dataset consists of 707 small indoor areas, like a dorm room, each of which can feature multiple videos, for a total of 1513 RGB-D videos of indoor environments. The original purpose of the videos was to reconstruct the rooms, so each of the scans typically captures a person walking around in a circle, moving the sensor outwards to scan the room, then moving to scan other objects within the room, like sofas.
The Scannet dataset does not fulfill our requirements for the training set, because we need many different perspectives of the same objects for our algorithm to work properly. However, as a test set, it is quite good. The motion of the camera typically shows objects at angles between 0\degree and 90\degree because of the scanning motion, which we consider to be sufficient. Angles beyond 90 degrees start to introduce issues with occlusion, where the object itself might occlude the visibility of its own corners, which requires the kind of very precise depth that is hard to find in any modern dataset. The full Scannet dataset is very large, so we restrict our usage to the first 200 areas and only use every 30th frame. We selected this frame rate from a visual inspection of the data, which showed that every 30th frame represented an average of 12.5 degrees of viewpoint change. The frame rate did not seem to impact our results much, because we compare all images in an area to each other, which limits the impact of such a choice. To perform this pseudo all-to-all comparison, we first cache a map of the pairwise correspondence between the frames that describes whether images view the same area of the scene. We check whether a query and a candidate image capture the same area by using the depth from the candidate image to project points into 3D space, which we then project back into the query image to check how much of the query image contains points from the candidate. If at least 10\% of the query contains the backprojection of the candidate, the pair of images is considered to be worth processing. This results in an average of 40-90 degrees of viewpoint change, depending on the area.
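The pairwise visibility check can be sketched as follows, assuming a shared intrinsic matrix $K$ and a relative pose $(R, T)$ from candidate to query; the 10\% threshold from the text would be applied to the returned fraction. The stride subsampling is an assumption added for speed:

```python
import numpy as np

def overlap_fraction(depth_c, K, R, T, query_shape, stride=8):
    """Estimate how much of the candidate frame is visible in the query frame.

    Pixels of the candidate depth map are lifted to 3D with K^-1, moved into
    the query camera frame with (R, T), reprojected with K, and counted if
    they land inside the query image with positive depth."""
    h, w = depth_c.shape
    ys, xs = np.mgrid[0:h:stride, 0:w:stride]
    z = depth_c[ys, xs].ravel()
    pix = np.stack([xs.ravel() * z, ys.ravel() * z, z])  # homogeneous * depth
    pts3d = np.linalg.inv(K) @ pix                       # candidate camera frame
    pts_q = R @ pts3d + T.reshape(3, 1)                  # query camera frame
    proj = K @ pts_q
    ok = proj[2] > 0                                     # in front of the camera
    u = proj[0, ok] / proj[2, ok]
    v = proj[1, ok] / proj[2, ok]
    qh, qw = query_shape
    inside = (u >= 0) & (u < qw) & (v >= 0) & (v < qh)
    return inside.sum() / max(len(z), 1)
```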
However, a feature detector is probably not going to detect exactly the same point, because it is not perfect. Therefore, we want to know if the feature detector is able to detect the same point within some distance. However, choosing a distance threshold is an easy way to add a highly influential variable to testing that can obscure the actual quality of an algorithm. Instead, we compile a histogram of the distances to the closest point for each detection. The histogram has the advantage of being easy to interpret while maintaining transparency about how many points a detector is detecting. A degenerate detector, for example, may be perfectly repeatable, but would show up as a huge spike at 0 distance. The problem is that some detectors, like the Hessian Affine detector, detect around 1000 features, which is an order of magnitude more points than detectors like Good Features to Track or MagicPoint \cite{aanaes2012interesting}. Although we could just allow these detectors to detect their maximum number of points, it would cause scaling issues with the histogram and limit the interpretability of the data. Therefore, we limit the number of detections to 2000 and apply non-maximal suppression to prevent detectors from predicting double points; this allows us to fairly test other algorithms while keeping the histogram manageable. Some methods only plot a log histogram of the number of detections at each distance. To deal with the scaling of the histogram, and to make the data more interpretable, we also include a plot of the percentage of detections at each distance. To calculate this histogram, we take the detections from some image of size $240\times 320$, which we call the candidate, and back-project them into the query image. This gives us the position of the detections found in the candidate in the query image. We can then find the distance to the closest candidate detection for each detection in the query image.
To back-project the detections from the candidate to the query image, we take each detection $a = (a_x,a_y)\in D_c$ and project it into 3D space using the candidate depth map $d_c\in\mathbb{R}^{n\times m}$. We can then use the shared camera matrix $K$ and the relative rotation $R$ and translation $T$ between the two cameras to project this detection back into the query viewpoint to get the back-projected candidate detections $D_c^q$. We then check whether the detection is obscured in the query frame, as well as filter out noise in the depth, by checking that the distance to the detection is close to the depth of the query image $d_q\in\mathbb{R}^{n\times m}$. Formally, \begin{align} \text{Let } &b_z\begin{bmatrix} b_x & b_y & 1\end{bmatrix}^T = K \begin{bmatrix} R & T\end{bmatrix} \begin{bmatrix} a_x \\ a_y \\ d_{c}(a_x, a_y) \\ 1 \\ \end{bmatrix} \\ D_c^q &= \{(b_x, b_y) : |b_z - d_{q}(b_x, b_y)| < \epsilon\} \\ H_{qc}(r) &= \sum_{c\in D_q} \begin{cases} 1 & \text{if } |\min\{\|b-c\| : b \in D^q_c\} - r|<0.5\\ 0 & \text{else} \\ \end{cases} \end{align} where $H_{qc} :\mathbb{N} \to \mathbb{N}$ is the histogram that maps the integer pixel distance from the query detection to the number of queries at that distance. Unmentioned in this formal description is the fact that we store the number of unmatched detections in the last column of the histogram. For these unmatched detections, we only include the detection repeatability for points that are visible in both frames. One of the concerns with using 3D data in feature evaluation is that it has been exceptionally slow, with the DTU Robot Dataset test taking 10 seconds per pair of images. If we restrict the dataset to just comparisons within the same lighting conditions, that would be 424800 pairs, which would take approximately 50 days to evaluate. The datasets can also be huge, with the DTU Dataset taking 500 GB of storage.
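Ignoring the depth-consistency filter, the histogram $H_{qc}$ itself reduces to a nearest-neighbor search followed by rounding to an integer bin. A direct (unoptimized) version, with unmatched detections placed in the last bin as in the text:

```python
import numpy as np

def repeatability_histogram(query_dets, backproj_dets, max_dist=10):
    """H(r): for each query detection, the rounded distance to the nearest
    back-projected candidate detection. Detections whose nearest neighbor is
    farther than max_dist (or with no candidates at all) go in the last bin."""
    hist = np.zeros(max_dist + 1, dtype=int)
    b = np.asarray(backproj_dets, dtype=float)
    for q in query_dets:
        if len(b) == 0:
            hist[-1] += 1
            continue
        d = np.min(np.linalg.norm(b - np.asarray(q, dtype=float), axis=1))
        r = int(np.round(d))            # the bin with |d - r| < 0.5
        hist[min(r, max_dist)] += 1
    return hist
```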
With this in mind, we developed a C++ code base that can evaluate a detector in 10 minutes on an Intel i9-7900X with 20 threads. We find the nearest neighbor of each query detection within the candidate by convolving a binary image of the candidate detections $C$ within the query with a series of circular distance filters. Each of these filters $f_i:\mathbb{R}^{m\times n}\to \mathbb{R}^{m\times n}, i\in\{1, \ldots, 20\}$ contains a circle of radius $i$, and we set a distance map $D\in \mathbb{R}^{m\times n}$ within the query frame to be equal to the minimum value of $i$ such that the convolution with $f_i$ is nonzero at that point. Formally, \begin{align} f_i(x, y) = \begin{cases} 1 & \text{if } |\|(x, y)\| - i| < 0.5 \\ 0 & \text{else} \end{cases} \\ D(x, y) = \min\{i \mid (f_i \ast C)(x, y) \geq 1\} \end{align} \begin{figure*}[ht] \centering \includegraphics[width=\textwidth]{all_histograms.pdf} \caption{Histograms of the average percentage (top) and number (bottom) of points whose closest point in the paired image is within a given distance range. This average is computed across all pairs in our test set. Higher numbers at lower distances mean that the detector is more precise when repeating detections. The distance bin of 10+ indicates the percentage of detections that were not repeated.} \label{results:hist} \end{figure*} \subsection{Results} We compare the FAST \cite{rosten2008faster} detector with the default parameters from ORB \cite{rublee2011orb}, SURF \cite{bay2006surf}, SIFT \cite{lowe2004distinctive}, MagicPoint, and SuperPoint \cite{detone2018superpoint}, as well as our algorithm, which we label SuperPoint-Gibson. The results of this comparison are shown in Figure \ref{results:hist}. The repeatability of a detector can be expressed as the percentage of points that have a corresponding detection within a certain distance. However, we also want our detector to reliably detect as many points as possible.
To this end, we create histograms of both the average number and the percentage of points that have a corresponding point at a given distance. The 10+ distance bin can be considered points without a repeated detection. MagicPoint and our network, SuperPoint-Gibson, outperform the other algorithms, detecting both more points and a higher percentage of points at distances of 1, 2, and 3 pixels than any of the other detectors. In addition, they both predict a smaller percentage of unmatched points than the other detectors. Note that both MagicPoint and our algorithm were trained on synthetic 3D data. While MagicPoint had a precisely labeled ground truth thanks to using a variety of simple shapes for which the labels were known, our algorithm was able to surpass MagicPoint by utilizing richer 3D data to create a new set of repeatable points. Similar to how SuperPoint-COCO was trained, our method incorporates detections from the MagicPoint detector as part of its set of detection labels, but does not train on the synthetic dataset. The results from SuperPoint-COCO show that it is not enough to simply utilize the detections from a viewpoint-invariant detector to inherit the viewpoint invariance; our algorithm, however, avoids the drop in performance by training on real viewpoint changes. If we sum the bins from 0-3 pixels, we get the number of points with a corresponding feature detection within 3 pixels. Our algorithm detects an average of 10.75 repeatable features per image. This represents a $13\%$ improvement over MagicPoint, which detects an average of 9.52 repeatable features per image. The performance of MagicPoint over SuperPoint-COCO hints at the relative importance of training on real viewpoint change compared to training on real images. By incorporating real images as well as real viewpoint changes, we were able to further improve performance over SuperPoint-COCO, specifically when considering those features that can be detected reliably within 3 pixels.
\section{Conclusion and Future Work} Our main finding with respect to generating a synthetic dataset is that trying to fill in regions of images for which there is no data is detrimental to the training of feature detectors. The use of a neural network such as Goggles \cite{gibson} causes too many errors to be used in such a precise application, even to the point of objects occasionally going missing from scenes. In future work on the generation of synthetic datasets, we think that it would be best to mask out sections of terrain for which there is no data instead of filling them in from the void. With such a system in place, it might be possible to utilize lower-quality datasets, allowing the data to be diversified to include more cluttered scenes, outdoor scenes, lighting changes, and more. The testing algorithm we designed is transparent and easy to use. Although the results are not as simple as assigning a number to a detector to rank how ``good'' it is, we believe that the design of the output allows for a less biased depiction of how well each detector performs, so that its strengths can be seen. We once again confirmed the findings of previous benchmarks and showed that most detectors perform poorly with regard to viewpoint change on non-planar surfaces \cite{aanaes2012interesting,moreels2007evaluation}. We showed that training on 3D data is a solution to designing detectors that are invariant to non-planar viewpoint change. Not only did we demonstrate that a model trained with 3D data outperforms the same model trained to be invariant to homographies applied to images, but we also demonstrated a method of extending this training method to more environments. With a greater diversity of data, our algorithm for ground truth generation has the potential to train all-purpose detectors that would be able to achieve very high precision detections, allowing for precise rigid body transformation estimation and more. {\small \bibliographystyle{ieee_fullname}
\section{Maintaining a (2+$\epsilon$)-approximate matching}\label{app:mat2} In this section we adapt the algorithm by Charikar and Solomon~\cite{charikar2018fully} to get a $(2+\epsilon)$-approximate matching algorithm with $O(1)$ rounds per update, $\widetilde{O}(1)$ communication per round, and $\widetilde{O}(1)$ active machines per round. Although the algorithm from \cite{charikar2018fully} needs only small modifications to achieve the above bounds in our model, these are essential, as the original algorithm relies on the fact that it is executed sequentially. We first give an overview of the algorithm, and then show how one can resolve the issues that arise from the distributed implementation of the algorithm. \subsection{Overview of the sequential algorithm} The algorithm by Charikar and Solomon \cite{charikar2018fully} builds on the framework established by Baswana, Gupta, and Sen~\cite{baswana2011fully}, which was designed for fully-dynamic maximal matching with $O(\log n)$ amortized update time. For ease of presentation, we first very briefly describe the framework from \cite{baswana2011fully} and then the modified version in \cite{charikar2018fully}. The set of vertices is decomposed into $\log_{\gamma}n+2 \in O(\log n)$ levels. The unmatched vertices are assigned level $-1$, while the matched vertices are assigned to levels $[0, \dots, \log_{\gamma}n]$, where $\gamma = \Theta(n)$. Denote the level of a vertex $v$ by \lvl{v}. Let $v$ be a matched vertex and $e=(u,v)$ be the edge of the matching that is adjacent to $v$. Roughly speaking, the level of $v$ in the level-decomposition is the logarithm (with base $\gamma$) of the cardinality of the sampling space from which $e$ was selected uniformly at random; that is, $\lvl{v}=\ell$ implies that $e$ was selected uniformly at random among at least $\gamma^{\ell}$ edges. We refer to the cardinality of the sampling space from which an edge $e$ was selected as the \emph{support} of $e$.
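As a small illustration of the level assignment, the level implied by the support of an edge is the largest $\ell$ with $\gamma^{\ell} \le \mathrm{support}$; a direct computation (an illustrative sketch, not part of the actual algorithm):

```python
def level_of_support(support, gamma):
    """Largest level l with gamma**l <= support; level -1 for support 0
    (matching the convention that unmatched vertices sit at level -1)."""
    if support < 1:
        return -1
    level = 0
    while gamma ** (level + 1) <= support:
        level += 1
    return level
```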
Notice that as neighbors of a vertex $v$ get deleted, the support of the edge $e$ decreases, but insertions of new neighbors of $v$ do not increase the support of $e$, as they were not an option when $e$ was sampled. The aforementioned grouping of the vertices and their adjacent matched edges serves as an estimate of the number of updates needed to delete an edge of the matching at each level. That is, a matching edge at level $\ell$ is expected to be deleted after, roughly, $\gamma^{\ell}/2$ deletions of edges adjacent to $v$. Moreover, the algorithm maintains an orientation of the edges in the graph, where each edge between two vertices $u$ and $v$ is oriented from the vertex with higher level to the vertex with lower level; ties are broken suitably by the algorithm. The outgoing edges from a vertex $v$ are stored in a list $\outset{v}$, while for the incoming edges of a vertex the algorithm maintains a partition of the edges into lists based on their level; that is, the incoming edges of $v$ from level $\ell \geq \lvl{v}$ are stored in $\inset{v}{\ell}$. Notice that the more refined maintenance of the incoming edges allows vertex $v$ to traverse only the incoming edges at a specific level, while such a process for the outgoing edges requires the traversal of the whole list $\outset{v}$. At this point it is useful to define the quantity $\lowneighbors{v}{\ell}$, which represents the number of neighbors of vertex $v$ at levels $1$ through $\ell-1$. This quantity is mainly used in the algorithm in \cite{charikar2018fully}. The algorithm maintains the following invariants during its execution. \begin{itemize} \item[(i)] Any matched vertex has level at least 0. \item[(ii)] The endpoints of any matched edge are at the same level, and this level remains unchanged until the edge is deleted from the matching. \item[(iii)] All free vertices have level $-1$ and out-degree 0. (This guarantees that the matching is maximal.)
\item[(iv)] An edge $(u,v)$ with $\lvl{u} > \lvl{v}$ is oriented by the algorithm from $u$ to $v$. In the case where $\lvl{u} = \lvl{v}$, the orientation is determined suitably by the algorithm. \end{itemize} Whenever an edge is deleted from the matching, the algorithm places each endpoint of the deleted edge at a level $\ell$ such that it can pick an incident matching edge among a pool of $\gamma^{\ell}$ candidate edges. We avoid giving the details on how to update the maintained data structures after an edge insertion or deletion, as these details are out of the scope of this paper. Roughly speaking, the main idea of the analysis in \cite{baswana2011fully} builds on the fact that to remove a matching edge $e=(u,v)$ at level $\ell$, the adversary needs to delete $O(\gamma^{\ell})$ many edges, which allows the algorithm to accumulate enough potential to restore the maintained invariants by reassigning levels to $u$ and $v$ and updating the data structures $\outset{\cdot}$ and $\inset{\cdot}{\cdot}$ of $u$ and $v$ and their affected neighbors. The bottleneck of the algorithm is in maintaining the data structures $\outset{\cdot}$ and $\inset{\cdot}{\cdot}$ throughout the updates. In our model, each machine has local computational power and can send messages in batches to the neighbors of a vertex stored at the machine. This allows one to update the affected data structures in batches, by sending the appropriate information from each endpoint of the inserted/deleted edge to its neighbors, after which each vertex updates the data structures concerning itself. That is, if a vertex changes level, it can update all the relevant data structures in $O(1)$ rounds, using a number of machines and an amount of communication proportional to the number of neighbors of the vertex. The algorithm from \cite{charikar2018fully} maintains a relaxed version of the invariants that are maintained by \cite{baswana2011fully}.
As the authors themselves argue, in order for the algorithm to have a subpolynomial worst-case update time, it is necessary to be proactive with respect to deletions of matched edges. More specifically, the authors present a scenario where the adversary can force the algorithm to drastically reduce the support of many edges of the matching, and then remove many edges of the matching that have reduced support, which forces the algorithm to perform a polynomial-time computation within a few updates. Charikar and Solomon \cite{charikar2018fully} deal with such situations by designing an algorithm that ensures that at any point in time every edge of the matching at level $\ell$ is sampled uniformly at random from a relatively large sample space (i.e., of size $\Omega((1-\epsilon)\cdot \gamma^{\ell})$). That is, the algorithm maintains a relatively large support for each edge of the matching, independently of the adversarial deletions. This keeps the probability of the adversary ``hitting'' an edge of the matching at level $\ell$ low, and thus, at any point in time only a few edges might be deleted from the matching by the adversary. As hinted in the discussion of the algorithm from \cite{baswana2011fully}, a deletion of an edge from the matching at level $\ell$ can trigger an update that requires $\Omega(\gamma^{\ell})$ time in order to place the endpoints of the deleted edge at the right level and try to match them with another vertex in their neighborhood. The algorithm from \cite{charikar2018fully} uses a similar approach, with the difference that each long update is executed in small batches of operations, where each batch is executed after an edge update and performs a polylogarithmic number of operations. More specifically, each batch contains either $\Delta = O(\log^5n)$ or $\Delta' = \Delta \cdot \log n$ operations, depending on the type of update that is being performed.
In other words, a long process is simulated over polynomially many adversarial edge insertions/deletions. The period during which the algorithm remains active after an edge insertion or deletion is called an \emph{update cycle}. At a high level, the algorithm ensures a low-enough probability of deleting an edge of the matching, which, in turn, allows it to process such a deletion over many update cycles, without having to deal with many such deleted edges simultaneously, with high probability. This is achieved by proactively deleting edges of the matching that have low support and then attempting to rematch the newly free endpoints of the deleted edges; the endpoints of edges deleted by the algorithm are called \emph{temporarily free} vertices. In addition, to ensure a low-enough probability of an adversarial deletion of a matched edge, the algorithm runs several procedures that remain active throughout the sequence of edge insertions/deletions (one of which keeps removing edges with low support). These procedures are called \emph{schedulers}, and each such scheduler is responsible for ensuring different invariants that are maintained throughout the algorithm. For each level $-1,0,\dots,\log_{\gamma}n$ of the level-decomposition, the algorithm executes a copy of a scheduler of each type. Each of those copies is called a \emph{subscheduler}, and all subschedulers of the same type are managed by the same scheduler. Hence, there are $O(\log_{\gamma}n)$ subschedulers managed by a constant number of schedulers. Since long updates are executed in many small batches, it is unavoidable that at each round there exist vertices that are in the process of being updated. These vertices are called \emph{active}, and they are maintained in a list throughout the execution of the algorithm; we call this list the \emph{active list} and denote it by $\mathcal{A}$. It is shown that at any point in time there are at most $O(\log n)$ active vertices, with high probability.
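The simulation of long procedures over update cycles can be illustrated with a toy scheduler that advances each pending procedure by a fixed batch of operations per cycle. Python generators stand in for resumable procedures here; this is an illustration of the batching idea only, not the actual scheduler logic:

```python
def long_procedure(total_ops):
    """A procedure needing `total_ops` unit operations, written as a
    generator so it can be paused between batches."""
    done = 0
    while done < total_ops:
        done += 1
        yield  # one unit of work

class Subscheduler:
    """Runs pending procedures in batches of `batch` operations, one batch
    per update cycle -- a toy version of the Delta-sized batch simulation."""
    def __init__(self, batch):
        self.batch = batch
        self.pending = []   # generators of in-progress procedures

    def start(self, proc):
        self.pending.append(proc)

    def update_cycle(self):
        still = []
        for proc in self.pending:
            try:
                for _ in range(self.batch):
                    next(proc)
                still.append(proc)      # not finished yet
            except StopIteration:
                pass                    # finished within this cycle
        self.pending = still
```

For example, a procedure of 12 operations run with batch size 5 completes after three update cycles.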
The algorithm also maintains the vertices that become free due to adversarial edge deletions. Such vertices are maintained in lists based on the level of the deleted edges, i.e., the algorithm maintains a list $Q_{i}$ at each level $i$. Recall that the algorithm breaks down the execution of each process into batches of size $\Delta$ or $\Delta'=\Delta\cdot\log n$. The size of each batch depends on the procedure that initiated the execution and not on the level of the subscheduler that runs the batch; that is, the batch size is uniform across the different levels for batches handled by the same scheduler. Hence, the execution of a procedure by a subscheduler at level $\ell$, which requires $T_\ell$ time, is carried out in $T_\ell / \Hat{\Delta}$ batches, where $\Hat{\Delta}\in \{\Delta, \Delta'\}$. The level-decomposition ensures that a procedure that is executed by a subscheduler at level $\ell$ requires at least as many rounds as any process at levels $\ell' < \ell$. As a consequence, during an execution of a process at level $\ell$, possibly many processes at lower levels are executed. Before describing the schedulers that are used by the algorithm, we first review three supporting procedures. In what follows, similarly to \cite{charikar2018fully}, we assume that the length of the update sequence is limited to $O(n^2)$, and that the maintained matching has size $\Omega(\log^5 n/ \epsilon^4)$. These assumptions can be easily removed. \paragraph{The authentication process.} While updating the data structures of a vertex $v$, some of its neighbors potentially change their level multiple times. This happens because a procedure handling a vertex at level $\ell$ takes $\gamma^{\ell - \ell'}$ times more time than a procedure handling a vertex at level $\ell' < \ell$ (the exact difference depends on the type of the processes being carried out).
Hence, at the end of the execution of the process handling vertex $v$, vertex $v$ might not be updated about the level of some of its neighbors. To deal with this, the algorithm keeps track of the neighbors of $v$ that change their level, and acts upon such changes. This is implemented efficiently as follows. At the end of the execution of a procedure handling a vertex $v$, the algorithm iterates over the list of active vertices and for each active vertex $z$ the information of $v$ about $z$ is updated. Since two procedures might have very different execution times, we also need to take care of the scenario where a neighbor $w$ of $v$ enters and leaves the active list $\mathcal{A}$ before $v$ finishes its execution. However, just before $w$ leaves $\mathcal{A}$, both $v$ and $w$ are active, and hence, it suffices to scan the active list $\mathcal{A}$ and for each neighbor $z$ of $w$ that is active (which includes $v$), update their information about $w$. Since $|\mathcal{A}| = O(\log n)$, searching for all neighbors of a vertex in the list and updating their mutual information takes $O(\log^2 n)$ time, which means that it can be executed in a single batch (i.e., it need not be simulated over multiple update rounds). In our model, this can be implemented in $O(1)$ rounds using only $\widetilde{O}(1)$ machines per round. \paragraph*{Procedure $\setlevel(v,\ell)$.} This supporting procedure is responsible for setting the level of $v$ to $\ell$ and for updating all the necessary data structures of $v$ and its affected neighbors. This procedure is called by the various subschedulers to facilitate the level change that is associated with them. Notice that the level $\ell$ to which $v$ is set is not determined by the procedure itself, but by its caller. We refer to this process as the rising, or falling, of $v$ depending on whether $\lvl{v}<\ell$ or $\lvl{v}>\ell$, respectively, where $\lvl{v}$ is the level of $v$ before the call to the \setlevel{} procedure. 
This process is executed by a level $\hat{\ell}=\max\{\ell,\lvl{v}\}$ subscheduler. The procedure runs in a total of $O(\gamma^{\hat{\ell}})$ time, which is executed in batches of size $\Delta$ or $\Delta'$ (depending on the subscheduler calling it). The procedure starts by storing the old level of $v$ (that is, $\ell_v^{old} = \lvl{v}$), and setting $\lvl{v} = \ell$. Then it updates the vertices in $\outset{v}$ about the new level of $v$, that is, for each vertex $w\in \outset{v}$ such that $\lvl{w}<\ell$ it moves $v$ from $\inset{w}{\ell_v^{old}}$ to $\inset{w}{\ell}$. Next, depending on whether $v$ is rising or falling, we further need to flip the outgoing (resp., incoming) edges of $v$ with its appropriate neighbors to restore the invariants of the level-decomposition. In the case where $v$ is falling, that is $\ell < \ell^{old}_v$, for each vertex $w\in \outset{v}$ such that $\ell<\lvl{w}\leq \ell_v^{old}$ we move $w$ from $\outset{v}$ to $\inset{v}{\lvl{w}}$ and $v$ from $\inset{w}{\ell_{v}^{old}}$ to $\outset{w}$. We further need to update the value $\lowneighbors{w}{i}$, for all $w\in \outset{v}$ and all $\ell+1 \leq i \leq \ell_{v}^{old}$. We again do that by iterating through the set $\outset{v}$ and for each edge we increment all appropriate counters. The procedure is analogous for the case where $v$ is rising. Recall that the $O(\gamma^{\Hat{\ell}})$ time required by procedure \setlevel, to change the level of vertex $v$ from $\ell_{v}^{old}$ to $\ell$ where $\hat{\ell} = \max \{\ell, \ell_{v}^{old}\}$, is executed in batches of size $\Hat{\Delta}\in \{\Delta, \Delta'\}$. In our distributed implementation of the algorithm we will execute all $\Hat{\Delta}$ operations of each batch of procedure \setlevel{} in one round. 
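As an illustration, the falling case described above can be sketched sequentially as follows. This is a hedged sketch: \texttt{lvl}, \texttt{out\_set}, \texttt{in\_set} and \texttt{low\_neighbors} are our renderings of $\lvl{\cdot}$, $\outset{\cdot}$, $\inset{\cdot}{\cdot}$ and $\lowneighbors{\cdot}{\cdot}$, and, following the text, only the counters of the vertices in $\outset{v}$ are updated.

```python
# Hedged sketch of the falling case of set-level(v, ell).
# lvl[u]            : current level of u
# out_set[u]        : out-neighbors of u (levels below lvl[u])
# in_set[u][i]      : in-neighbors of u residing at level i
# low_neighbors[u][i]: number of neighbors of u strictly below level i

def set_level_fall(v, ell, lvl, out_set, in_set, low_neighbors):
    old = lvl[v]
    assert ell < old, "falling case only"
    lvl[v] = ell
    for w in list(out_set[v]):      # snapshot: the loop mutates out_set[v]
        if lvl[w] < ell:
            # w stays an out-neighbor of v; it only relearns v's level.
            in_set[w][old].discard(v)
            in_set[w][ell].add(v)
        elif ell < lvl[w] <= old:
            # Edge orientation flips: w now points down to v.
            out_set[v].discard(w)
            in_set[v][lvl[w]].add(w)
            in_set[w][old].discard(v)
            out_set[w].add(v)
        # v now lies strictly below every level in (ell, old] for w.
        for i in range(ell + 1, old + 1):
            low_neighbors[w][i] += 1
```

Since each iteration of the loop touches only the data structures of $v$ and a single neighbor $w$, the iterations are independent, which is what the distributed implementation exploits.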
To do so, we notice that all updates in the data structures of $v$ and its neighbors are independent of each other, that is, the updates made in the data structure of each neighbor $w\in \outset{v}$ do not depend on the preceding or succeeding updates to other neighbors of $v$. Hence, we can execute all of them in parallel. We can use the machinery developed in Section \ref{sec:maximal-matching} to identify to which machine each message should be delivered. In other words, the $\Hat{\Delta}$ operations that are executed by each subscheduler at each update round are performed in $O(1)$ MPC rounds. \paragraph*{Procedure \handlefree$(v)$.} This procedure is responsible for handling a temporarily free vertex $v$. The procedure first identifies the highest level $\ell$ such that $\lowneighbors{v}{\ell} \geq \gamma^\ell$ (recall that $\lowneighbors{v}{\ell}$ is the number of neighbors of $v$ at levels strictly lower than $\ell$), and then the corresponding set $S(v)$ of non-active neighbors of $v$ at levels lower than $\ell$. Then the procedure samples uniformly at random a vertex $w$ from $S(v)\setminus \mathcal{A}$ as the new mate of $v$ in the matching. To match $v$ with $w$ the procedure does the following. It first unmatches $w$ from its former mate $w'$ (if any), then $v$ and $w$ are set to level $\ell$ using calls to \setlevel$(v,\ell)$ and \setlevel$(w,\ell)$, and the edge $(w,v)$ is added to the matching. Finally, if $w$ was previously matched with $w'$, the procedure \handlefree$(w')$ is called recursively. The running time of \handlefree$(v)$ is bounded by $O(\gamma^{\lvl{v}}\log^4n)$ (see \cite{charikar2018fully} for the analysis), and it is executed in batches of size $\Hat{\Delta}$. Note that also in this case we can execute all $\Hat{\Delta}$ operations in a constant number of rounds. \paragraph*{Maintained invariants.} The algorithm in \cite{charikar2018fully} maintains the following invariants: \begin{itemize} \item[(a)] Any matched vertex has level at least 0. 
\item[(b)] The endpoints of any matched edge are of the same level, and this level remains unchanged until the edge is deleted from the matching. (This defines the level of a matched edge, which is at least 0 by item (a), as the level of its endpoints.) \item[(c)] Any vertex of level -1 is unmatched and has out-degree 0. \item[(d)] An edge $(u, v)$, such that $\lvl{u} > \lvl{v}$ and $u$ and $v$ are not temporarily free, is oriented from $u$ to $v$. \item[(e)] For any level-$\ell$ matched edge $e$ with $T_\ell/\Delta \geq 1$ and any $t$, it holds that $|S_t(e)| > (1-2\epsilon)\cdot \gamma^{\ell}$. \item[(f)] For any vertex $v$ and any level $\ell > \lvl{v}$, it holds that $\Phi_v(\ell) \leq \gamma^{\ell} \cdot O(\log^2 n)$. \label{inv:bounded-neighbors-up-to-a-level} \end{itemize} Notice that the invariants (a)--(d) are equivalent to the invariants (i)--(iv) of the algorithm from \cite{baswana2011fully}, adapted to take into account the concept of temporarily free vertices. Invariant (e) formalizes the property of maintaining large support for all edges of the matching. Next we review the four schedulers that are used to maintain the invariants (a)--(f) of the algorithm. \paragraph*{Scheduler \freeschedule.} The \freeschedule{} scheduler handles the vertices that become temporarily free due to the adversary. Such vertices reside in $\log_{\gamma}n+1$ queues $\ensuremath{\mathcal{Q}}_0, \dots, \ensuremath{\mathcal{Q}}_{\log_{\gamma}n}$ at the different levels of the level-decomposition. Each subscheduler at level $\ell$ iteratively removes a vertex $v$ from $\ensuremath{\mathcal{Q}}_\ell$ and calls \handlefree$(v)$, which runs in time $O(\gamma^{\ell})$, simulating $\Delta'$ steps with each update operation. In \cite{charikar2018fully} the subschedulers at the different levels are executed in order, from the one at the highest level to the one at the lowest level. 
In our adaptation to the DMPC model, the $\log_{\gamma} n$ \freeschedule{} subschedulers are executed in parallel. Each such subscheduler simulates $\Delta'$ operations (in fact, operations of its calls to \handlefree), and the total work by all subschedulers requires a constant number of MPC rounds. However, this parallel execution of the subschedulers at different levels creates some conflicts that do not appear in \cite{charikar2018fully}, where the subschedulers are assumed to follow a predefined order of execution. We will show how these conflicts can be resolved later on. \paragraph*{Scheduler \unmatchschedule.} The goal of the \unmatchschedule{} subscheduler at level $\ell$ is to guarantee that the size of the support of each matched edge at level $\ell$ remains between $\gamma^{\ell}\cdot(1-\epsilon)$ and $\gamma^{\ell}$ (that is, invariant (e) of the algorithm). Each subscheduler at level $\ell$ simply removes the level-$\ell$ matching edge $e=(u,v)$ with the smallest sample space, and executes \handlefree$(u)$ and \handlefree$(v)$. The computation that is triggered by a removal of a matched edge at level $\ell$ is bounded by $O(\gamma^{\ell})$, and it is executed in batches of $\Delta$ operations. Each such batch involves an exchange of information between $u$ and $v$ and $\Delta$ of their neighbors, and hence, can be executed in $O(1)$ rounds using $\widetilde{O}(1)$ machines and communication per round. \paragraph*{Scheduler \riseschedule.} Each subscheduler of this type at level $\ell$ ensures that for each vertex $w$ at level $\ell'<\ell$ it holds that $\Phi_w(\ell) \leq \gamma^{\ell} \cdot O(\log^2 n)$. Each time, the subscheduler identifies the vertex $w$ at level $\ell'<\ell$ with the highest value of $\Phi_w(\ell)$, removes the matching edge $(w,w')$ (if such an edge exists), and raises $w$ to level $\ell$ by executing \setlevel$(w,\ell)$. Finally, the subscheduler executes \handlefree$(w)$ and \handlefree$(w')$ to match both $w$ and $w'$. 
The execution of this subscheduler takes $T_\ell = O(\gamma^{\ell})$ time in batches of size $\Delta$; that is, the subscheduler is executed over $O(\gamma^{\ell} / \Delta)$ update cycles. Again, each batch of this update can be executed in a constant number of DMPC rounds. \paragraph*{Scheduler \shuffleschedule.} Each time, this scheduler at level $\ell$ picks a matching edge $e=(u,v)$ uniformly at random among the matching edges at level $\ell$, removes it from the matching, and executes \handlefree$(u)$ and \handlefree$(v)$ to try to rematch the endpoints of the removed matching edge. The role of this scheduler is mainly to facilitate the proof in \cite{charikar2018fully} showing that the adversary has low probability of deleting a matched edge (the probability depends on the level of the matched edge). While the total time required by this subscheduler is $O(\gamma^{\ell})$, it is executed in batches of size $\Delta' = \Delta\cdot \log n$, which ensures that it runs faster than the \unmatchschedule{} by a logarithmic factor, for technical reasons explained in \cite{charikar2018fully}. This scheduler runs only for the levels $\ell$ for which $\gamma^{\ell}/\Delta'>1$, as otherwise each update at level $\ell$ is executed immediately (not in multiple chunks) and hence the adversary does not have enough time to delete an edge during the execution of a subscheduler at this level. In the same way as for the other schedulers, each batch of this scheduler can be executed in $O(1)$ rounds using $\widetilde{O}(1)$ machines and communication per round. \smallskip\noindent\textbf{Handling updates.} Following the insertion of an edge $e=(u,v)$, the algorithm updates the relevant data structures in time $O(\log_{\gamma} n)$. If both $u$ and $v$ are at level $-1$, the algorithm marks $e$ as a matched edge and sets the level of $u$ and $v$ to $0$ by calling \setlevel$(u,0)$ and \setlevel$(v,0)$. 
In \cite{charikar2018fully} it is shown that this process is enough to guarantee that all the invariants are maintained, and that an edge insertion can be handled in $O(\log^4n)$ time. The above process can be simulated in $O(1)$ DMPC rounds, as all instructions involve exchanging information between $u$ and $v$ and their neighbors, as well as each vertex updating its part of the maintained data structures. To process the deletion of an edge $e=(u,v)$ we proceed as follows. If the edge does not belong to the matching, it is sufficient to update the relevant data structures, which requires only $O(\log n)$ time. On the other hand, if $e$ belongs to the matching, we first remove it from the matching and add its endpoints to $\ensuremath{\mathcal{Q}}_{\lvl{u}}$. The above process is sufficient, as the \freeschedule{} subscheduler at level $\lvl{u}$ will handle the newly free vertices $u$ and $v$ appropriately. Moreover, the process ensures that all the invariants maintained by the algorithm continue to be satisfied. As one can observe, the insertions and deletions of edges do not trigger any long update procedures (even when deleting edges of the matching!), but rather work together with the schedulers in maintaining the invariants (a)--(f) of the algorithm, which in turn ensures that the maintained matching is almost-maximal. However, as the different subschedulers at the different levels do not communicate with each other but operate independently, some issues arise if they try to process the same vertices. \subsection{Conflicts between schedulers} Here we deal with synchronization issues that arise from the fact that all subschedulers are working simultaneously at all times. These issues are called conflicts between subschedulers. We exhibit the conflicts that arise and show the modifications that need to be made in order to ensure that all invariants of the algorithm are satisfied at all times. 
Some of the modifications were already suggested in \cite{charikar2018fully}; we repeat them here for completeness. In what follows we ignore the overhead added by updating the list $\mathcal{A}$ of active vertices. \paragraph*{Sampling mates conflict.} The procedure \handlefree$(v)$ at level $\ell$, as part of a call from its subscheduler, might pick as the new mate of $v$ a vertex that is already being processed by some other subscheduler. However, this is not really an issue, as the sampling avoids such vertices by sampling from $S(v)\setminus \mathcal{A}$, and the active list $\mathcal{A}$ contains all vertices that are being processed. \paragraph*{Deleting unmatched edges conflict.} A conflict may arise when \unmatchschedule{} or \shuffleschedule{} subschedulers try to remove an edge that has already been removed from the matching. The case where the processed edge was removed in a previous round is not really a conflict, as once a subscheduler removes an edge from the matching it informs all other subschedulers in $O(1)$ rounds using $O(\log n)$ machines; the problematic case is when subschedulers from different levels try to remove the same edge from the matching. For each \unmatchschedule{} subscheduler we pick the top $2 \log n$ choices of edges to remove from the matching, and for each \shuffleschedule{} subscheduler we pick $2\log n$ random edges to remove from the matching. Next, all \unmatchschedule{} and \shuffleschedule{} subschedulers send their $2 \log n$ choices to the same machine, and that machine decides which subscheduler removes which edge from the matching: it first processes the \unmatchschedule{} subschedulers in decreasing order of their level, followed by the \shuffleschedule{} subschedulers in decreasing order of their level, and to each subscheduler it assigns the first unassigned choice in its list of $2 \log n$ choices. 
Then, the machine communicates to the subschedulers their assigned edges, and hence no conflicts occur among the different subschedulers, as each has a unique edge to delete from the matching. \paragraph*{Match an active vertex conflict.} A conflict arises if the next vertex chosen by a \freeschedule{} subscheduler at level $\ell$ from a queue $\ensuremath{\mathcal{Q}}_\ell$ is active. To deal with this issue, we delay the scheduling of all the \freeschedule{} subschedulers by at least one round (within the same update cycle) after the rest of the subschedulers, so that the latter can announce which vertices they mark active, and those vertices can be removed from the queues $\ensuremath{\mathcal{Q}}_{\ell}$. \paragraph{Raising and falling conflicts.} During the execution of a \riseschedule{} subscheduler at level $\ell$, the vertex $v$ that is picked to be raised might already be active. We do not try to prevent this type of conflict, as it is possible that we actually want to raise $v$ to level $\ell$ even though $v$ is already active, in order to satisfy invariant (f) of the algorithm. In particular, during the process where \riseschedule{} at level $\ell$ chooses a vertex $v$ to move to level $\ell$, some other procedure might be handling $v$; that is, $v$ might be in the process of being raised or lowered. Notice that, if $v$ is being raised or lowered to some level $\ell'>\ell$, then there is no need for the \riseschedule{} subscheduler to raise $v$ to $\ell$. The case where \riseschedule{} needs to raise $v$ to $\ell$ is when $\ell'<\ell$, where $\ell'$ is the destination level of $v$ in the process of raising or falling. First, we take care of the conflicts between subschedulers of type \riseschedule{}. 
Similarly to the case of the \unmatchschedule{} and \shuffleschedule{} subschedulers, we process the \riseschedule{} subschedulers in a sequence according to their levels and we assign to them (inactive and unassigned) vertices to raise, ensuring that a \riseschedule{} subscheduler at level $\ell$ does not raise the same vertex as a \riseschedule{} subscheduler at a higher level. Other than conflicts between different \riseschedule{} subschedulers, the only other scheduler that might conflict with \riseschedule{} is \handlefree. In this case we distinguish conflicts of a \riseschedule{} subscheduler with calls \handlefree$(w)$ where $w$ is being raised, and calls \handlefree$(w)$ where $w$ is being lowered. As shown in \cite{charikar2018fully}, the conflicts between a \riseschedule{} subscheduler and calls to the procedure \handlefree$(w)$ where $w$ is being raised are avoided as follows. Each level-$\ell$ \riseschedule{} subscheduler picks in advance the next vertex that it is going to raise and adds it into the set of active vertices, so that it cannot be processed by other schedulers. The vertex that is going to be raised by the next call to \riseschedule{} is called the \emph{next-in-line} vertex of the corresponding subscheduler. That is, each time a call to a \riseschedule{} subscheduler is initiated, it chooses the vertex that it is going to raise in the next call, and proceeds with raising the vertex that was chosen in the previous call. It can be shown that this mechanism avoids conflicts between \riseschedule{} and procedure \handlefree{} where \handlefree{} is processing a rising vertex. The correctness is based on the fact that the call to the level-$\ell$ \riseschedule{} subscheduler will end later than the call to the level-$\ell'$ \handlefree{} procedure, where $\ell'<\ell$. 
Moreover, because we schedule the different \riseschedule{} subschedulers in decreasing order of their level, exactly as is done in \cite{charikar2018fully}, our distributed version does not affect their analysis. Finally, we need to deal with the conflicts that arise between \riseschedule{} subschedulers and calls to procedure \handlefree$(w)$, where $w$ is in the process of falling. This is a special case on its own, and is not resolved solely by the next-in-line mechanism discussed before, as the call to \handlefree$(w)$ may have been initiated from a level $\ell'>\ell$. The first modification is that during a call to \handlefree$(w)$ we first check whether $w$ is the next-in-line vertex of any of the \riseschedule{} subschedulers at levels $j > \lvl{w}$, and if yes, we ignore the call to \handlefree$(w)$. This trick guarantees that no level-$j$ \riseschedule{} subscheduler (where $j > \lvl{w}$) attempts to raise $w$ while $w$ is falling from $\lvl{w}$ to a level $\ell$, as part of a call to \handlefree$(w)$. It is still possible that while $w$ is falling from $\lvl{w}$ to $\ell$, a level-$j$ \riseschedule{} subscheduler with $\ell<j<\lvl{w}$ attempts to raise $w$ to level $j$. The next-in-line trick does not work here, as the call to \handlefree$(w)$ requires more time than \riseschedule{}, and hence it is not guaranteed that $w$ will be the next-in-line vertex of some \riseschedule{} subscheduler at a level $\ell<j<\lvl{w}$. We deal with this by preventing any level-$j$ \riseschedule{} subscheduler from raising $w$ to level $j$ while $w$ is falling, for any $j<\lvl{w}$. Although this guarantees that no \riseschedule{} subscheduler raises a falling vertex, the fact that we prevent the subscheduler from raising a vertex might violate invariant (f), i.e., that for any vertex $v$ and any level $\ell'> \lvl{v}$, $\Phi_v(\ell') \leq \gamma^{\ell'} \cdot O(\log^2 n)$. 
To ensure that this does not happen, right after $w$ falls to level $\ell$, we immediately raise it to the highest level $\ell'$ for which invariant (f) is violated. It is shown in \cite{charikar2018fully} that this modification prevents the violation of invariant (f), and that this additional raising of $w$ can be done within the time of the scheduler that initiated the falling of $w$. \begin{theorem} A $(2+\epsilon)$-approximate matching can be maintained fully-dynamically in the dynamic MPC model in $O(1)$ rounds per update, using $\widetilde{O}(1)$ active machines per round and $\widetilde{O}(1)$ communication per round, in the worst case. \end{theorem} \begin{proof} As we argued in the description of each scheduler, we simulate the $\Delta$ or $\Delta'$ operations executed by each subscheduler in \cite{charikar2018fully} in $O(1)$ rounds, using $\widetilde{O}(1)$ active machines and $\widetilde{O}(1)$ communication. Since the work done by the different subschedulers is independent, and there are only $O(\log n)$ such subschedulers, it follows that the execution of all subschedulers in one update cycle can be carried out in $O(1)$ rounds, using $\widetilde{O}(1)$ active machines and $\widetilde{O}(1)$ communication. By the same argument, the authentication process at each update cycle for all subschedulers can be executed in the same time. Finally, with analogous reasoning, it can be shown that the modifications needed to ensure that no conflicts arise can be executed within the same asymptotic complexity. \end{proof} \section{Fully-dynamic connected components and approximate MST}\label{sec:cc} In this section we present a fully-dynamic deterministic distributed algorithm for maintaining the connected components of a graph with a constant number of rounds per edge insertion or deletion, in the worst case\footnote{Note that no constant-round algorithm for connected components is known for the static case. 
On the downside, the number of active machines per round is not bounded. We leave it as an interesting direction for future work to design an algorithm that uses a smaller number of machines per update.} At the heart of our approach we use Euler tours, which have been successfully used in the design of dynamic connectivity algorithms in the centralized model of computation, e.g., in \cite{henzinger1999randomized,holm2001poly}. Given a rooted tree $T$ of an undirected graph $G$, an \emph{Euler tour} (in short, E-tour) of $T$ is a path along $T$ that begins and ends at the root, traversing each edge exactly twice. The E-tour is represented by the sequence of the endpoints of the traversed edges, that is, if the path uses the edges $(u,v),(v,w)$, then $v$ appears twice. As an E-tour is defined on a tree $T$, we refer to the tree $T$ of an E-tour as the Euler tree (E-tree, in short) of the E-tour. The root of the E-tree appears as the first and as the last vertex of its E-tour. The length of the tour of an E-tree $T$ is $ELength_{T} = 4(|T|-1)$, since each edge is traversed twice and each traversal records both endpoints. See Figures \ref{fig:EulerTour-insertion} and \ref{fig:EulerTour-deletion} for examples. As the preprocessing shares similarities with the edge insertion, we postpone the description of the preprocessing until after describing the update procedures that restore the E-tour after an edge insertion or deletion. \begin{figure*} \vspace{-0.3cm} \begin{center} \includegraphics[trim={1cm 9cm 2cm 1cm}, clip=true, width=0.88\textwidth]{EulerTour-insertion} \end{center} \vspace{-0.6cm} \caption{(i) A forest and an E-tour of each of its trees below. The brackets represent the first and the last appearance of a vertex in the E-tour. (ii) The E-tour after setting $e$ to be the root of its tree. (iii) The E-tour after the insertion of the edge $(e,g)$.} \label{fig:EulerTour-insertion} \end{figure*} \begin{figure*}[t!] 
\vspace{-0.3cm} \begin{center} \includegraphics[trim={0.8cm 9cm 5cm 1cm}, clip=true, width=0.88\textwidth]{EulerTour-deletion} \end{center} \vspace{-0.6cm} \caption{(i) A tree and an E-tour of the tree below it. The brackets represent the first and the last appearance of a vertex in the E-tour. (ii) An intermediate step of the update of the E-tour after the deletion of the edge $(a,b)$. The red lines in the E-tour indicate the split points of the outdated E-tour. (iii) The E-tour after the deletion of the edge $(a,b)$.} \label{fig:EulerTour-deletion} \vspace{-0.2cm} \end{figure*} We assume that just before an edge update, we maintain for each connected component of the graph a spanning tree, and an E-tour of the spanning tree. Using vertex-based partitioning we distribute the edges across machines, and each vertex is aware of the ID of its component; together with each of its edges we maintain the ID of the component that it belongs to and the two indexes in the E-tour (of the tree of the component) that are associated with the edge. Moreover, we maintain with each vertex $v$ the index of its first and last appearance in the E-tour of its E-tree, which we denote by $f(v)$ and $l(v)$. We denote by $index_v$ the set of all indexes at which $v$ appears in the E-tour of $T$. Note that $|index_v| = 2\cdot d_T(v)$, where $d_T(v)$ is the degree of $v$ in the corresponding E-tree $T$. We do not explicitly store $index_v$; it is implicitly stored with each vertex as information on $v$'s edges. Therefore, when we describe updates on the indexes in $index_v$, it is actually the indexes stored at the edges that are updated. To update this information in a distributed fashion, we leverage the properties of an E-tour, which allow us to change the root of an E-tree, merge two E-trees, and split an E-tree, by simply communicating the first and last indexes of the new root, or the endpoints of the inserted/deleted edge. 
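The representation can be illustrated with a small sequential sketch (ours, with 0-based indexes and the whole tour held in one list, whereas the algorithm stores the indexes distributed on the edges):

```python
# Sketch of the E-tour representation: each edge traversal contributes
# both endpoints, so a tree T yields a tour of 4(|T|-1) entries.

def euler_tour(root, children):
    """Build the E-tour of a rooted tree given as a child-list dict."""
    tour = []
    def dfs(u):
        for c in children.get(u, []):
            tour.extend([u, c])    # traverse edge (u, c) downwards
            dfs(c)
            tour.extend([c, u])    # traverse edge (c, u) upwards
    dfs(root)
    return tour

def first_last(tour):
    """f(v), l(v): first and last index of each vertex in the tour."""
    f, l = {}, {}
    for i, v in enumerate(tour):
        f.setdefault(v, i)
        l[v] = i
    return f, l

def is_ancestor(u, v, f, l):
    # u is a strict ancestor of v iff u's appearances enclose v's.
    return f[u] < f[v] and l[u] > l[v]
```

On the tree with root $a$, children $b,c$ of $a$ and child $d$ of $b$, the tour has $4\cdot 3=12$ entries, and $b$ is recognized as an ancestor of $d$ but not of $c$.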
\smallskip\noindent\textbf{Handling updates.} The main idea for handling updates efficiently is that the E-tours of the spanning trees can be updated without requiring a lot of communication. For instance, one can change the root of an E-tree, and update all the information stored in the vertices of that tree, by sending $O(1)$-size messages to all vertices. Moreover, we can test whether a vertex $u$ is an ancestor of a vertex $v$, in their common E-tree, using only the values $f(u)$, $l(u)$, $f(v)$, and $l(v)$. The insertions and deletions of edges in the graph are handled as follows. \paragraph{$insert(x,y)$:} If $x$ and $y$ are in the same connected component, we simply add the edge to the graph. Otherwise, we proceed as follows. We first make $y$ the root of its E-tree $T_y$ (if it is not already). Let $ELength_{T_y} = 4(|T_y|-1)$ denote the length of the E-tour of $T_y$. For each vertex $w$ in $T_y$ we update each index $i \in index_w$ to be $i = ((i + ELength_{T_y}-l(y))\mod ELength_{T_y}) + 1$. These shifts of the indexes of $w$ correspond to a new E-tour starting with the edge between $y$ and its parent, where the parent is defined with respect to the previous root of $T_y$. Second, we update the indexes $i\in index_w$ of the vertices $w \in T_y$ to appear after the first appearance of $x$ in the new E-tour: for each vertex $w$ in $T_y$ we update each index $i\in index_w$ to be $i = i + f(x) + 2$. Third, we set $index_y = index_y \cup \{f(x)+2, f(x)+l(y)+3\}$ and $index_x = index_x \cup \{f(x)+1, f(x)+l(y)+4\}$, where $l(y)$ is the largest index of $y$ in the E-tour of $T_y$ after the rerooting but before the insertion of $(x,y)$. Finally, to update the indexes of the remaining vertices in $T_x$, for each $i\in index_w$ with $i>f(x)$ we set $i=i+ELength_{T_y}+4$. See Figure \ref{fig:EulerTour-insertion} for an illustration. 
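The rerooting formula of the first step can be checked on a concrete tour; the following sketch (ours, sequential, using 1-based indexes as in the formula) applies it to every position of a tour and returns the rotated tour:

```python
def reroot_index(i, E, l_y):
    # Rerooting formula (1-based): i -> ((i + E - l(y)) mod E) + 1
    return ((i + E - l_y) % E) + 1

def reroot_tour(tour, y):
    """Rotate the closed E-tour so that it starts at the last occurrence
    of y, i.e., with the edge from y to its former parent."""
    E = len(tour)
    l_y = max(i + 1 for i, v in enumerate(tour) if v == y)  # 1-based l(y)
    new = [None] * E
    for i, v in enumerate(tour):
        new[reroot_index(i + 1, E, l_y) - 1] = v
    return new
```

For example, rerooting the tour $a,b,b,d,d,b,b,a,a,c,c,a$ at $c$ yields $c,a,a,b,b,d,d,b,b,a,a,c$, which indeed starts with the edge between $c$ and its former parent $a$ and ends back at $c$.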
Notice that the only information required by each vertex $w$ to perform this update, besides $index_w$ (which is implicitly stored on the edges of $w$) and $f(w)$, is $ELength_{T_y}, f(y), l(y), f(x), l(x)$, which can be sent to all machines via a constant-size message from $x$ and $y$ to all other machines. Notice that $x$ and $y$ do not need to store $f(x),l(x)$ and $f(y),l(y), ELength_{T_y}$, respectively, as they can simply learn those by sending and receiving an appropriate message to and from all machines. Hence each insertion can be executed in $O(1)$ rounds using all machines and $O(\sqrt{N})$ total communication per round (as all communication is between $x$ or $y$ and all other machines, and consists of messages of $O(1)$ size). \paragraph{$delete(x,y)$:} If $(x,y)$ is not a tree edge in the maintained forest, we simply remove the edge from the graph. Otherwise, we first split the E-tree containing $x$ and $y$ into two E-trees, and then we reconnect them if we find an alternative edge between the two E-trees. We do that as follows. We check whether $x$ is an ancestor of $y$, or vice versa, by checking whether $f(x)<f(y)$ and $l(x)>l(y)$. Assume w.l.o.g. that $x$ is an ancestor of $y$ in $T_x$. First, we set $index_x = index_x \setminus \{ f(y)-1, l(y)+1\}$ and $index_y = index_y \setminus \{ f(y), l(y)\}$ (that is, we simply drop the edge $(x,y)$). Then, for all descendants $w$ of $y$ in $T_y$ (including $y$), for each $i\in index_w$ we set $i = i - f(y)$, where $f(y)$ is the smallest index of $y$ before the deletion of $(x,y)$. We update $|T_y|$ and allocate a new ID for the new connected component containing $y$. Second, for all vertices $w \in T_x \setminus T_y$ and all $i\in index_w$, if $i>l(y)$ we set $i=i-((l(y)-f(y)+1)+2)$, where $l(y)$ and $f(y)$ are the largest and smallest, respectively, indexes of $y$ before the deletion of $(x,y)$. 
This is to inform all vertices that appear after $l(y)$ that the subtree rooted at $y$ has been removed, and hence the E-tour simply cuts them off (the $+2$ term accounts for the two appearances of $x$ in the E-tour due to $(x,y)$). Finally, we search for an edge from a vertex $v \in T_y$ to a vertex $w\in T_x$, and if one exists we execute $insert(v,w)$. Similarly to the case of an edge insertion, all of the above operations can be executed in a constant number of rounds, as the only information that is required by the vertices are the IDs of the components of $x$ and $y$, and the values $f(x),l(x),f(y),l(y)$, which are sent to all machines. Moreover, the search for a replacement edge to reconnect the two trees of $x$ and $y$ can be done in $O(1)$ rounds, as we only need to send the IDs of the two components to all machines, and then each machine reports at most one edge between these two components to a specific machine (specified also in the initial message to all machines). \smallskip\noindent\textbf{Preprocessing.} During the preprocessing, we compute a spanning forest $\mathcal{T}$ of the input graph and an E-tour on each tree $T \in \mathcal{T}$ with arbitrary roots. We build on top of the $O(\log n)$-round randomized algorithm that computes a spanning forest of a graph by iteratively identifying small neighborhoods to contract into single vertices, reducing the number of vertices by a constant fraction at each iteration~\cite{ahn2012analyzing}. It is instructive to view the contraction process as merges of connected components that are built up throughout the execution, where initially all components are singleton vertices. We augment this algorithm to maintain a spanning tree in each component, as well as an E-tour of each spanning tree. We do this as follows. 
Using vertex-based partitioning we distribute the edges across machines, and each vertex is aware of the ID of its component; together with each of its edges we maintain the two indexes in the E-tour (of the tree of the component) that are associated with the edge, as well as the ID of the component containing the edge. At each iteration, several components might merge into one, but all such merges have a common component into which they are contracted; we call this component the \emph{central component} of the merge. Whenever two or more components merge into one, they all get the ID of the central component. Each of the components that merge into the central component uses a single edge to attach its spanning tree as a subtree of the spanning tree of the central component. Let $C_1, C_2, \dots, C_l$ be the components that merge, and w.l.o.g. let $C_1$ be the central component of the merge. Moreover, let $e_2, \dots, e_l$ be the edges that the non-central components use to connect to the central component $C_1$. Our plan is to simulate the sequence of insertions of the edges $e_2, \dots, e_l$ within a constant number of rounds. First, in parallel for each component $C_i \in \{C_2, \dots, C_l\}$ with connecting edge $e_i = (x_i,y_i)$, $x_i \in C_1$, $y_i \in C_i$, we set its root to be $y_i$. This is, essentially, the first step of the insertion of $e_i$. Second, we store the tree edges of all components $C_1,\dots,C_l$ in $O(\sqrt{N})$ machines, and we sort them based on the smaller of the indexes of their endpoints. (Sorting can be done in $O(1)$ rounds as shown in \cite{goodrich2011sorting}.) The sorting implies an order on the machines storing the ordered edges; let $M_1, \dots, M_{q}$ be these machines.
For each component $C_i$ with connecting edge $e_i=(x_i,y_i)$, $x_i \in C_1$, $y_i \in C_i$, we send the size of the E-tour of $C_i$ (which is $4(|C_i|-1)$) to the machine (among the machines $M_1, \dots, M_{q}$) storing the index $f(x_i)$, and we associate it with that index (it can be part of the message). If more than one tree connects to the same vertex, we impose a total order among them, defined by the IDs of the other endpoints of the connecting edges of the components, and for each component $C_j$ in this order, we compute the sum $\psi(C_j)$ of the sizes of the components preceding $C_j$ in that order. If a single component $C_j$ connects to a vertex, then $\psi(C_j)=0$. (The $\psi$ values are used in the final step of each iteration.) Within each machine $M_i$, $1 \leq i \leq q$, we sum the sizes that were sent to indexes stored at $M_i$ in the previous step, and we send this information to all machines $M_j$, $i<j\leq q$. (Each machine sends a message of constant size to each other machine. Hence, all messages can be sent in one round.) In tandem, we also sum the values on the messages, of the same type, that are received at machine $M_i$ from machines $M_p$, $1\leq p < i$. Now we can compute, for each index $i$ of an edge in $C_1$, the sum $\phi(i)$ of the sizes of the components that are attached as subtrees at indexes smaller than $i$ (here we also consider those components that were attached to indexes stored in $M_i$). This allows us to execute the final step of the edge-insertion procedure in parallel for all conflicting component merges. In particular, for each index $j$, we set $j=j+4 \phi(j)$. Finally, we execute the second step of the edge-insertion process. That is, for each component $C_i$, $i>1$, with connecting edge $e_i=(x_i,y_i)$, $x_i \in C_1$, $y_i \in C_i$, and each index $j$ of an edge in $C_i$, we set $j=j + f(x_i) + 4 \phi(f(x_i)) + 4 \psi(C_i) + 2$.
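To make the index arithmetic concrete, the following is a simplified, sequential sketch of the underlying E-tour splice (the list-based representation and the function names are ours, not the algorithm's distributed data structure): splicing the tour of $T_y$ right after the first appearance of $x$ shifts every later index by a fixed amount, which is exactly the kind of uniform additive update that each machine can apply locally to its own indexes.

```python
from collections import defaultdict

# Simplified, sequential illustration of the E-tour splice behind the parallel
# index updates described above.  Tours are stored as explicit vertex lists
# here; in the DMPC algorithm every vertex stores only its own indexes and
# applies the same additive shift locally.

def make_tour(edges, root):
    """Euler tour of a tree as a list of vertices: each tree edge (u, v)
    contributes a descent into v and an ascent back to u."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    tour = []
    def dfs(u, parent):
        tour.append(u)
        for w in adj[u]:
            if w != parent:
                dfs(w, u)
                tour.append(u)  # ascend back to u
    dfs(root, None)
    return tour

def splice(tour_x, x, tour_y):
    """Insert an edge from x to the root of tour_y: the tour of T_y, followed
    by one extra appearance of x, is placed right after x's first appearance.
    In this vertex-list representation every old index of tour_x after f(x)
    shifts by len(tour_y) + 1; the paper's representation yields an analogous
    fixed additive shift."""
    f_x = tour_x.index(x)  # f(x): first appearance of x
    return tour_x[:f_x + 1] + tour_y + [x] + tour_x[f_x + 1:]
```

Note that the shift applied to each index depends only on quantities known to the initiating vertices ($f(x)$ and the size of $T_y$'s tour), which is why a constant-size broadcast suffices.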
All additional steps of the base algorithm can be executed in $O(1)$ rounds, and hence the whole preprocessing can be executed in $O(\log n)$ rounds. \subsection{Extending to $(1+\epsilon)$-approximate MST} To maintain a minimum spanning tree instead of a spanning tree, we use the dynamic spanning tree algorithm with the following two changes. First, whenever an edge $(x,y)$ is added and its two endpoints are already in the same tree, we compute the edge $(u,v)$ with the maximum weight among all the edges whose both endpoints are ancestors of either $x$ or $y$ (but not of both) and we compare it to the weight of $(x,y)$ (these tests can be done efficiently using the E-tree). We only keep the edge with the minimum weight among $(u,v)$ and $(x,y)$. Second, at Step 3 of the $delete$ operation, instead of adding an arbitrary edge between the two trees, the algorithm adds the minimum-weight edge among all such edges. The preprocessing can be adjusted to compute a $(1+\epsilon)$-approximate MST by bucketing the edge weights, which introduces only an $O(\log n)$ factor in the number of rounds. In fact, it is enough to bucket the edges by weight and compute connected components by considering the edges in buckets of increasing weight, iteratively and separately. \ignore{ \section{Fully-dynamic $O(1)$-approximate matching with logarithmic round and communication bounds.} We present an $O(1)$-approximate fully-dynamic maximum matching with $O(\log \Delta)$ rounds per update and $O(\log \Delta)$ active machines per round in the worst case. \subsection*{Overview of the static algorithm} Let $\Delta$ be the maximum degree in the graph. The algorithm proceeds in $\log \Delta$ rounds. At the $i$-th round, for each node $v$ with degree larger than $\Delta/{2^{i+1}}$ we sample each edge incident to $v$ with probability $p_i = 2^{i}/{4\Delta}$. For convenience, we refer to the nodes with degree larger than $\Delta/{2^{i+1}}$ as \emph{high-degree} nodes at level $i$.
Every isolated sampled edge (edge whose endpoints have no other incident sampled edge) is added to the matching and its endpoints are removed from the graph. We remove from the graph also all high-degree nodes at level $i$. At the end of the $\log \Delta$ rounds there are no edges left in the graph. It can be shown that the above algorithm produces an $O(1)$-approximate maximum cardinality matching. In fact, at the $i$-th iteration of the algorithm the isolated sampled edges match a constant factor of the high-degree nodes. This is achieved since the sampling probability ensures that with constant probability each high-degree node is matched. Notice that this is sufficient to show that the algorithm produces a constant factor approximation since for each threshold of degrees $[\Delta/{2^{i+1}},\Delta/{2^i}]$ the nodes that were matched at previous rounds only increase the fraction of the nodes in that threshold that are matched. \subsection*{Dynamic algorithm} We build on the static algorithm described above. We execute the static algorithm on the initial graph, before any edge update occurs. In a high-level, we attempt to maintain a simulation of the static algorithm over the sequence of updates. However, our algorithm goes beyond that and attempts to match more high-degree nodes at each round by trying to match even nodes with sampled degree larger than one. We say that a node $v$ is at level $i$, if $v$ was either high-degree node at round $i$ or it was sampled at round $i$. We denote the level of a $v$ by $lvl(v)$. Moreover, as the nodes change their level throughout the updates, we need to keep track of the edges leading to nodes of the same level or of larger level; we call these the \emph{alive neighbors} of the node. Finally, at each node we also maintain a 2-approximation of the degrees of its neighbors (notice, that to maintain exact counters might require $O(\Delta)$ active machines per round to communicate the updated degree of a node to its neighbors). 
Before describing the update procedures, we first describe a supporting method which is then conveniently used during the updates. The procedure starts from an unmatched node $v$ at level $i$ and samples from its neighbors with approximate degree at most $\Delta/2^{i}$ with probability $\log \Delta 2^{i}/{4\Delta}$. The $\log \Delta$ factor is to ensure that with high probability we hit a constant fraction of nodes that are truly at level $\geq i$ or above. If at least one sampled neighbor $w$ is free, then we insert $(v,w)$ into the matching, otherwise, we select a neighbor at a level strictly higher than $lvl(v)$, we unmatch $w$ and we call recursively the procedure on the old mate of $w$. Although, $v$ is not aware of the level of its neighbors (as it depends not only on their degree, but also on whether some node with higher degree sampled an edge to that node), it can spend one round to learn the level of its sampled neighbors (constant number). \sidenote{NP: It is not clear among which edges $v$ should sample because it might be the case that too many of its many neighbors are matched to nodes from lower level. Hopefully, we can sample among the ones that are of equal or lower degree, and then argue that not too many of them are of lower level (because they we matched to someone from a higher degree). } We refer to this procedure as $resample(v)$. \paragraph*{$ResampleBelow(v)$.} \begin{itemize} \item Let $i = lvl(v)$, and let $N^{\leq i+1}(v)$ be the neighbors of $v$ with 2-approximate degree at most $2^{i+1}$. \item Sample each node from $N^{\leq i+1}(v)$ with probability ${\log \Delta 2^{i+1}}/{4\Delta}$. \item Verify the level and the status of the sampled nodes. Let $S$ be the set of nodes whose level is $\leq i$. \item Let $w$ be an unmatched node in $S$ such that $lvl(w) \leq i$. If no such node exists, let $w$ be a node in $S$ such that $lvl(w)<i$. If no such node exists, set $w=\emptyset$. 
\item If $w\not=\emptyset$, $lvl(w)=i$ and $w$ unmatched, add $(v,w)$ into the matching. If $w\not=\emptyset$, $lvl(w)=i$ and $w$ matched, do nothing. \item If $w\not=\emptyset$ and $lvl(w)<i$, then add $(v,w)$ into the matching. If $w$ was previously matched free its mate $z$ and call $Resample(z)$ \sidenote{NP: notice here that $lvl(z)<i$ as otherwise the level of $w$ would be also smaller.}. \end{itemize} Let $e=(x,y)$ be the edge to be inserted. We first update the degree of $x,y$ and then simulate the static algorithm with the updated degrees of $x$ and $y$. \paragraph*{$UpdateLevel(v)$} \begin{itemize} \item If $d'(v) > 2^{lvl(v)}$. \begin{itemize} \item Increase the level of $v$ by one, i.e., $lvl(v) = lvl(v)+1$. \item Call $ResampleBelow(v)$. \end{itemize} \item If $d'(v) < 2^{lvl(v)-1}$. \begin{itemize} \item Decrease the level of $v$ by one, i.e., $lvl(v) = lvl(v)-1$. \item For each node $w$ in the set $N^{i-1}(v)$ of neighbors of $v$ with level $i-1$, sample $w$ with probability $2^{i-1}/{4\Delta}$. \item if a sampled node is unmatched add $(v,w)$ into the matching and set $lvl(v)=lvl(v)-1$. Otherwise, call $ResampleBelow(v)$. \end{itemize} \end{itemize} \paragraph*{$AddEdge(x,y)$} \begin{itemize} \item If $lvl(x) >= lvl(y)$. \begin{itemize} \item If $x$ is matched, do nothing. \item If $x$ is unmatched, sample $e$ with probability $2^{lvl(x)}/{4\Delta}$. \item If $e$ is not sampled do nothing. \item If $e$ is sampled, $lvl(y)\leq lvl(x)$, and $y$ is unmatched add $(x,y)$ to the matching. \item If $e$ is sampled, $lvl(y)<lvl(x)$, and $y$ is matched add $(x,y)$ to the matching, call $ResampleBelow(x)$. \end{itemize} \end{itemize} \paragraph*{$DeleteEdge(x,y)$} \begin{itemize} \item If $(x,y)$ is in the matching, remove it. \item Call $updateLevel(x)$ and $updateLevel(y)$. \end{itemize} } \section{Introduction} Modern applications often require performing computations on massive amounts of data. 
Traditional models of computation, such as the RAM model or even shared-memory parallel systems, are inadequate for such computations, as the input data do not fit into the available memory of today's systems. The restrictions imposed by the limited memory of the available architectures have led to new models of computation that are more suitable for processing massive amounts of data. A model that captures the modern needs of computation at a massive scale is the Massive Parallel Computing (MPC) model, which is implemented by several well-known systems (such as MapReduce, Hadoop, or Spark). At a very high level, an MPC system consists of a collection of machines that can communicate with each other through indirect communication channels. The computation proceeds in synchronous rounds, where at each round the machines receive messages from other machines, perform local computations, and finally send appropriate messages to other machines so that the next round can start. The crucial factors in the analysis of algorithms in the MPC model are the number of rounds and the amount of communication performed per round. The MPC model is an abstraction of a framework that is widely used in practice, and it has attracted increased interest from the scientific community. An additional factor that has contributed to the interest in this model is that MPC exhibits unique characteristics that are not seen in other parallel and distributed architectures, such as its ability to perform expensive local computation on each machine at each round of the computation. Despite its resemblance to other parallel models, such as the PRAM model, the MPC model has been shown to have different algorithmic power~\cite{karloff2010model}. The ability of the MPC model to process large amounts of data, however, comes at the cost of using large volumes of resources (processing time, memory, communication links) during the course of the computation.
This need for resources strengthens the importance of efficient algorithms. Although the design of efficient algorithms for solving problems in the MPC model is of vital importance, applications often mandate the recomputation of the solution (to a given problem) after small modifications to the structure of the data. Such applications include, for instance, the dynamic structure of the Web, where new pages appear or get deleted and new links are formed or removed, the evolving nature of social networks, road networks that undergo development and construction, etc. In such scenarios, even the execution of very efficient algorithms after a few modifications of the input data might be prohibitive due to their large processing time and resource requirements. Moreover, in many scenarios, small modifications of the input data often have a very small impact on the solution, compared to the solution for the input instance prior to the modifications. These considerations have been the driving force in the study of dynamic algorithms in the traditional sequential model of computation. Dynamic algorithms maintain a solution to a given problem throughout a sequence of modifications to the input data, such as the insertion or deletion of a single element in the maintained dataset. In particular, dynamic algorithms are able to adjust the maintained solution efficiently, typically by performing very limited computation. Moreover, they often detect almost instantly that the maintained solution needs no modification to remain a valid solution for the updated input data. The update time of a dynamic algorithm in the sequential model is the time required to update the solution so that it is a valid solution for the current state of the input data. A dynamic algorithm has worst-case update time $u(N)$ if it spends at most $O(u(N))$ time after every update, and amortized update time $u(N)$ if it spends a total of $O(k\cdot u(N))$ time to process any sequence of $k$ updates.
The extensive study of dynamic algorithms has led to results that achieve a polynomial, and often exponential, speed-up compared to the recomputation of a solution from scratch using static algorithms, for a great variety of problems. For instance, computing the connected components of a graph takes $O(m+n)$ time, where $n$ and $m$ are the number of vertices and edges of the graph, respectively, while the most efficient dynamic algorithms can update the connected components after an edge insertion or an edge deletion in $O(\log n)$ amortized time~\cite{holm2001poly}, or in sub-polynomial time in the worst case~\cite{lulli2017fast}. Similarly, there exist algorithms that can maintain a maximal matching in polylogarithmic time per update in the worst case~\cite{bernstein2019adeamortization}, while recomputing one from scratch requires $O(m+n)$ time. So far, there has been very little progress on modelling dynamic parallel algorithms in modern distributed systems, despite their potential impact in modern applications with respect to speed-up and the reduced use of resources. A few dynamic algorithms exist that maintain the solution to a problem in the distributed setting. For instance, in~\cite{censorhillel2016optimal}, Censor-Hillel et al. present a dynamic algorithm for maintaining a Maximal Independent Set of a graph in the LOCAL model. Assadi et al.~\cite{Assadi2018fully} improve the message complexity by adjusting their sequential dynamic algorithm to the LOCAL model. In~\cite{ahn2018access}, Ahn and Guha study problems that can be fixed locally (i.e., within a small neighborhood of some vertex) after a small modification that has a very limited impact on the existing solution. This line of work has been primarily concerned with minimizing the number of rounds and the communication complexity. Moreover, the algorithms designed for the LOCAL model do not necessarily take into account the restricted memory size of each machine.
In this paper, we present an adaptation of the MPC model, which we call DMPC, that serves as a basis for dynamic algorithms in the MPC model. First, we impose a strict restriction on the memory available per machine, which mandates that algorithms in this model operate in any system that can store the input in its total memory. Second, we define a set of factors that determine the complexity of a DMPC algorithm. These factors consist of (i) the number of rounds per update that are executed by the algorithm, (ii) the number of machines that are active per round, and (iii) the total amount of communication per round, which refers to the sum of the sizes of all messages sent in any round. A final requirement of our model is that DMPC algorithms should provide worst-case update time guarantees. This is crucial not only because of the shared nature of the resources, but also because it is imposed by many real-world applications, in which one needs to act fast upon an update in the data, such as detecting new malicious behavior, or finding relevant information to display in response to new activity (e.g., displaying ads, friend recommendations, or products that are relevant to a purchase). Inspired by today's systems, which share their resources among many different applications at any point in time, it is necessary to design algorithms that do not require dedicated systems to operate on, and that can be executed with limited amounts of resources, such as memory, processors, and communication channels. This necessity is further strengthened by the fact that dynamic algorithms are typically required to maintain a solution to a problem over a long series of updates, which implies that the application runs for a long period of time. Our model imposes these properties through a predefined set of restrictions.
In particular, we focus on three main dimensions. \paragraph{Memory.} Dynamic algorithms in our model are required to use a very limited amount of memory in each machine. Specifically, assuming that the input is of size $N$, each machine is allowed to use only $O(\sqrt{N})$ memory. Note that this limitation does not aim at ensuring that the machines are large enough to fit $O(\sqrt{N})$ bits (a system with such weak machines would need many millions of machines just to store the data, given that even weak physical machines have several GB of memory). Rather, it aims at guaranteeing that the allocation of the machines of the model to physical machines is flexible in terms of memory, allowing the system to move machines of the model across different physical machines without affecting the execution of the algorithm. (Notice that the system can co-locate several machines of the model on a single physical machine.) \paragraph{Resource utilization and number of machines.} Our model promotes limited processing time in several ways. First, two of the factors in the evaluation of an algorithm are the number of rounds required to process each update, and the number of machines that are active in each round of the update. Notice that machines that are not used in the execution of a dynamic algorithm can process other applications that co-exist on the same physical machines. Moreover, algorithms with worst-case update time are guaranteed to finish the execution of a particular update in limited time, thus avoiding locking shared resources for long periods of time. \paragraph{Communication channels.} In our model, one of the factors that contribute to the complexity of an algorithm is the amount of communication that occurs in each round during every update. Furthermore, the number of machines that are active per round also contributes to the complexity of an algorithm (namely, the number of machines receiving or transmitting messages).
These two facts ensure that efficient algorithms in the DMPC model use limited communication. \smallskip Similarly to the sequential model, the goal of a dynamic algorithm in the DMPC model is to maintain a solution to a problem more efficiently than recomputing the solution from scratch with a static algorithm. Here, the main goal is to reduce the bounds on all three factors contributing to the complexity of an algorithm. However, algorithms reducing some of the factors, without increasing the others, may also be of interest. We initiate the study of dynamic algorithms in the DMPC model by designing algorithms for basic graph-theoretic problems. In particular, we present fully-dynamic algorithms for maintaining a maximal matching, a $\nicefrac{3}{2}$-approximate matching, a $(2+\epsilon)$-approximate matching, and the connected components of an unweighted graph, as well as a $(1+\epsilon)$-approximate Minimum Spanning Tree (MST) of a weighted graph. Finally, we show that our model can successfully exploit the techniques that were developed for dynamic algorithms in the sequential model. In particular, we present a black-box reduction that transforms any sequential dynamic algorithm with $p(S)$ preprocessing time and $u(S)$ update time into an algorithm in the dynamic MPC model which performs the preprocessing step in $O(p(S))$ rounds, uses $O(1)$ machines and $O(1)$ total communication per round, and performs each update in $O(u(S))$ rounds using $O(1)$ machines and $O(1)$ total communication per round. With this reduction, the characteristics (amortized vs. worst-case, and randomized vs. deterministic) of the DMPC algorithm are the same as those of the sequential algorithm. \smallskip\noindent\textbf{Related work in the classic MPC model.} It was known from the PRAM model how to compute a $(1+\epsilon)$-approximate matching in $O(\log n)$ rounds~\cite{lotker2008improved}. Lattanzi et al.
~\cite{lattanzi2011filtering} introduced the so-called filtering technique, which gives an algorithm for computing a 2-approximate matching in $O(1/c)$ rounds, assuming that the memory per machine is $O(n^{1+c})$, for any $c>0$. Under the same memory assumption, Ahn and Guha \cite{ahn2018access} gave an algorithm running in $O(1/(c \epsilon))$ rounds for $(1+\epsilon)$-approximate matching. Both of those algorithms run in $O(\log n)$ rounds when the memory of each machine is $\Theta(n)$, which matches the bound that was known from the PRAM model. It was only recently that Czumaj et al. \cite{Czumaj2018round} overcame the $O(\log n)$ barrier for computing an approximate matching. In particular, in \cite{Czumaj2018round} the authors presented an algorithm computing a $(1+\epsilon)$-approximate matching in $O((\log \log n)^2)$ rounds with $\widetilde{O}(n)$ memory per machine. This bound has been improved to $O(\log \log n)$ rounds, under the assumption of slightly superlinear memory per machine~\cite{ghaffari2018improved,assadi2019coresets}. Very recently, Ghaffari and Uitto~\cite{ghaffari2019sparsifying} presented an algorithm that uses only sublinear memory and can compute a $(1+\epsilon)$-approximate matching in $\widetilde{O}(\sqrt{\log \Delta})$ rounds, where $\Delta$ is the maximum degree of the graph. Another central problem in the MPC model is the computation of the connected components of a graph. This problem can be solved in $O(\log n)$ rounds \cite{lulli2017fast,lkacki2018connected}. In particular, the algorithm in \cite{lkacki2018connected} runs in $O(\log \log n)$ rounds on certain types of random graphs. In the case where each machine has $O(n^{1+c})$ memory, it is known how to compute the connected components of a graph in a constant number of rounds \cite{lattanzi2011filtering}.
Under a well-known conjecture \cite{yaroslavtsev2017massively}, it is impossible to achieve $o(\log n)$ rounds on general graphs if the space per machine is $O(n^{1-c})$ and the total space across all machines is $O(m)$. Very recently, Andoni et al.~\cite{DBLP:conf/focs/AndoniSSWZ18} presented a new algorithm that uses sublinear memory and runs in $\tilde{O}(\log D)$ parallel rounds, where $D$ is the diameter of the input graph. \smallskip\noindent\textbf{Our results.} Throughout the paper we denote by $G=(V,E)$ the input graph, and we use $n=|V|$, $m=|E|$, and $N=m+n$. All bounds that are presented in this section are worst-case update bounds. Our algorithmic results are summarized in Table \ref{tab:algorithms}. All of our algorithms use $O(N)$ memory across all machines, and hence make use of $O(\sqrt{N})$ machines. \begin{table*}[t!] \centering \caption{Algorithmic results achieved in this paper. The bounds presented in the first part of the table hold in the worst case.} \label{tab:algorithms} \renewcommand{\arraystretch}{1.2} \begin{tabular}{lcccc} \multicolumn{1}{c|}{Problem} & \#rounds & \begin{tabular}[c]{@{}c@{}}\#active \\ machines\end{tabular} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Commun.\\ per round\end{tabular}} & Comments \\ \hline \multicolumn{1}{l|}{Maximal matching} & $O(1)$ & $O(1)$ & \multicolumn{1}{c|}{$O(\sqrt{N})$} & \begin{tabular}[c]{@{}c@{}}Use of a coordinator, \\ starts from an arbitrary graph. \end{tabular} \\ \hline \multicolumn{1}{l|}{3/2-app. matching} & $O(1)$ & $O(n/\sqrt{N})$ & \multicolumn{1}{c|}{$O(\sqrt{N})$} & \begin{tabular}[c]{@{}c@{}}Use of a coordinator \end{tabular} \\ \hline \multicolumn{1}{l|}{$(2+\epsilon)$-app.
matching} & $O(1)$ & $\widetilde{O}(1)$ & \multicolumn{1}{c|}{$\widetilde{O}(1)$} & \begin{tabular}[c]{@{}c@{}} \end{tabular} \\ \hline \multicolumn{1}{l|}{Connected comps} & $O(1)$ & $O(\sqrt{N})$ & \multicolumn{1}{c|}{$O(\sqrt{N})$} & \begin{tabular}[c]{@{}c@{}} Use of Euler tours,\\starts from an arbitrary graph \end{tabular} \\ \hline \multicolumn{1}{l|}{$(1+\epsilon)$-MST} & $O(1)$ & $O(\sqrt{N})$ & \multicolumn{1}{c|}{$O(\sqrt{N})$} & \begin{tabular}[c]{@{}c@{}} The approx. factor comes\\ from the preprocessing, \\ starts from an arbitrary graph.\end{tabular} \\ \hline \multicolumn{5}{c}{Results from reduction to the centralized dynamic model} \\ \hline \multicolumn{1}{c|}{Maximal matching} & $O(1)$ & $O(1)$ & \multicolumn{1}{c|}{$O(1)$} & \begin{tabular}[c]{@{}c@{}}Amortized, randomized \end{tabular} \\ \hline \multicolumn{1}{c|}{Connected comps} & $\widetilde{O}(1)$ & $O(1)$ & \multicolumn{1}{c|}{$O(1)$} & \begin{tabular}[c]{@{}c@{}}Amortized, deterministic \end{tabular} \\ \hline \multicolumn{1}{c|}{MST} & $\widetilde{O}(1)$ & $O(1)$ & \multicolumn{1}{c|}{$O(1)$} & \begin{tabular}[c]{@{}c@{}}Amortized, deterministic \end{tabular} \end{tabular} \end{table*} \paragraph{Maximal matching.} Our first algorithm maintains a maximal matching fully-dynamically in $O(1)$ rounds per update in the worst case, while the number of machines that are active per round is $O(1)$, and the total communication per round is $O(\sqrt{N})$. The general idea of this algorithm, inspired by~\cite{neiman2016simple}, is to use vertex-partitioning across the machines and, additionally, to store at one machine the last $\sqrt{N}$ updates in a buffer, together with the changes that each of these updates generated. We call this summary of updates and the changes that they trigger the \emph{update-history}.
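A minimal single-machine sketch of such a buffer is the following (the class and the \texttt{'match'}/\texttt{'unmatch'} change records are our own illustrative names; the actual algorithm additionally handles the periodic rebuilding of the buffer and neighborhoods that span many machines):

```python
from collections import deque

# Sketch of the update-history: the coordinator keeps only the last
# B = O(sqrt(N)) updates together with the matching changes they triggered;
# this summary is shipped to the machines owning the affected endpoints,
# which replay it to refresh their (possibly stale) view of which of their
# neighbors are free.

class UpdateHistory:
    def __init__(self, capacity):
        self.buf = deque(maxlen=capacity)  # oldest entry evicted once full

    def record(self, update, changes):
        # update:  ('ins' | 'del', u, v)
        # changes: list of ('match' | 'unmatch', u, v) triggered by the update
        self.buf.append((update, changes))

    def replay(self, free):
        """Apply the buffered matching changes to a local set of free
        vertices, as a machine receiving the history would do."""
        for _, changes in self.buf:
            for op, u, v in changes:
                if op == 'match':
                    free.discard(u)
                    free.discard(v)
                else:  # 'unmatch'
                    free.add(u)
                    free.add(v)
        return free
```

The point of the bounded buffer is that a machine's local view can be at most $\sqrt{N}$ updates stale, so replaying the buffer always suffices to bring it up to date.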
Every time an update arrives (i.e., an edge insertion or an edge deletion), the update-history is sent to the machines of the endpoints that are involved in the update; each endpoint adjusts its data structures based on the update-history (that is, it updates its knowledge of which vertices among its neighbors are free), and further sends back (to the machine that maintains the update-history) any changes that the update might have triggered. The machines that maintain the endpoints of the updated edge might further communicate with one of their neighbors to get matched with them. Additional challenges arise from the fact that the neighborhood of a single vertex might not fit in a single machine. For comparison, the best static MPC algorithms compute a maximal matching in $O(\log \log n)$ rounds when the space per machine is $\widetilde{O}(n)$ \cite{ghaffari2018improved}, in $O(\sqrt{ \log n})$ rounds when the space is sublinear \cite{ghaffari2019sparsifying}, and in $O(c/\delta)$ rounds when $N\in \Omega(n^{1+c})$ and the space per machine is $\Omega(n^{1+\delta})$ \cite{lattanzi2011filtering}. These algorithms use all the machines at each round and generate $\Omega(N)$ communication per round. We note that although our algorithm has communication complexity $O(\sqrt{N})$ per round when the available memory per machine is $O(\sqrt{N})$, the communication complexity is actually proportional to the number of machines used by the system. Namely, if we allow larger memory per machine, then the communication complexity decreases significantly. Hence, in real-world systems we expect our algorithm to use limited communication per MPC round. \paragraph{3/2-approximate matching.} We further study the problem of maintaining a maximum cardinality matching beyond the factor-2 approximation given by a maximal matching.
We present an algorithm for maintaining a $3/2$-approximate matching that runs in a constant number of rounds, uses $O(\sqrt{N})$ machines per round, and generates $O(\sqrt{N})$ communication per round. The best known static algorithm for computing a $(1+\epsilon)$-approximate matching runs in $O(\log \log n)$ rounds when the memory available in each machine is $ \widetilde{O}(n)$~\cite{Czumaj2018round,ghaffari2018improved,assadi2019coresets}, or in $O(\sqrt{ \log \Delta})$ rounds when the memory available in each machine is sublinear~\cite{ghaffari2019sparsifying}, where $\Delta$ is the maximum degree of the graph. \paragraph{$(2+\epsilon)$-approximate matching.} Our algorithm for maintaining a maximal matching requires polynomial communication among the machines and the use of a coordinator machine. To overcome these restrictions, we explore the setting where we are allowed to maintain an almost maximal matching instead of a proper maximal matching. In other terms, at most an $\epsilon$ fraction of the edges of a maximal matching may be missing. In this setting, we show that we can adapt the fully-dynamic centralized algorithm of Charikar and Solomon \cite{charikar2018fully}, which has polylogarithmic worst-case update time. We note that our black-box reduction to the DMPC model yields a fully-dynamic algorithm with a polylogarithmic number of rounds. However, we show how to adapt the algorithm to run in $O(1)$ rounds per edge insertion or deletion, using $O(\textit{polylog}(n))$ active machines and total communication per round.
\footnote{We note that one could adapt the algorithm from \cite{bernstein2019adeamortization} to maintain a (proper) maximal matching with the same asymptotic bounds; however, that algorithm does not maintain a consistent matching throughout its execution, meaning that the maintained matching could be completely different between consecutive update operations, which is not a desirable property for many applications.} \paragraph{Connected components and $(1+\epsilon)$-MST.} We consider the problem of maintaining the connected components of a graph and the problem of maintaining a $(1+\epsilon)$-approximate Minimum Spanning Tree (MST) of a weighted graph. For both problems we present fully-dynamic deterministic algorithms that run in $O(1)$ rounds per update in the worst case, with $O(\sqrt{N})$ active machines and $O(\sqrt{N})$ total communication per round. Notice that, in order to maintain the connected components of a graph, it suffices to maintain a spanning forest of the graph. As is the case also for centralized algorithms, the hard case is to handle the deletion of edges of the maintained spanning forest. The main ingredient of our approach is the use of an Euler tour of a spanning tree of each connected component. This enables us to distinguish between different trees of the spanning forest, based on the tour numbers assigned to the vertices of the trees, which we use to determine whether a vertex has an edge to a particular part of a tree. Notice that, to achieve such a bound, each vertex needs to know the appearance numbers of its neighbors in the Euler tour, which one cannot afford to request at each round, as this would lead to $O(N)$ communication. We show how to leverage the properties of the Euler tour in order to avoid this expensive step.
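The subtree test that the tour numbers enable can be sketched as follows (a sequential illustration with our own helper names; $f(v)$ and $l(v)$ denote the first and last appearance of $v$ in the tour of its tree, as in the deletion procedure of Section~\ref{sec:cc}):

```python
# Sketch of the subtree test enabled by Euler-tour numbers: each vertex v
# carries the positions f(v) and l(v) of its first and last appearance in
# the tour, so ancestry reduces to interval containment and requires no
# communication beyond these two integers per vertex.

def euler_numbers(tour):
    """First and last appearance of every vertex in a tour (a vertex list)."""
    f, l = {}, {}
    for i, v in enumerate(tour):
        f.setdefault(v, i)  # keep the first position seen
        l[v] = i            # overwrite until the last position
    return f, l

def is_ancestor(f, l, x, y):
    """True iff x is a strict ancestor of y in the tree behind the tour."""
    return f[x] < f[y] and l[x] > l[y]
```

In particular, deciding whether a replacement edge leads into the detached subtree of $y$ amounts to one such interval comparison per candidate edge.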
In the static case, the best known algorithm to compute the connected components and the MST of a graph requires $O(c/\delta)$ rounds when $N\in \Omega(n^{1+c})$ and $S\in \Omega(n^{1+\delta})$ \cite{lattanzi2011filtering}. In the case where $S\in o(n)$, \cite{Chitnis:2013:FCC:2510649.2511220} presented an algorithm to compute the connected components of a graph in $O(\log n)$ rounds, with all machines active and $\Omega(N)$ communication per round. \paragraph{Bounds from the dynamic algorithms literature.} We present a reduction to dynamic algorithms in the centralized computational model. More specifically, we show that if there exists a centralized dynamic algorithm with update time $u(m,n)$ and preprocessing time $p(m,n)$ on a graph with $m$ edges and $n$ vertices, then there exists a dynamic MPC algorithm which updates the solution in $O(u(m,n))$ rounds with $O(1)$ active machines per round and $O(1)$ total communication, after $p(m,n)$ rounds of preprocessing. The characteristics of the centralized algorithm (e.g., amortized or worst-case update time, randomized or deterministic) carry over to the MPC model. This reduction, for instance, implies an amortized $\widetilde{O}(1)$-round fully-dynamic DMPC algorithm for maintaining the connected components or the minimum spanning tree (MST) of a graph \cite{holm2001poly}, and an amortized $O(1)$-round fully-dynamic DMPC algorithm for the maximal matching problem \cite{somon2016fully}. These algorithms, however, do not guarantee worst-case update time, which is important in applications. Moreover, the connected components and MST algorithms have super-constant round complexity. \smallskip\noindent\textbf{Road map.} In Section~\ref{sec:model} we introduce the DMPC model. Then, in Sections~\ref{sec:maximal-matching} and \ref{sec:mat1} we present our maximal matching and $\nicefrac{3}{2}$-approximate matching algorithms, respectively. We present our connected components and $(1+\epsilon)$-approximate MST algorithms in Section~\ref{sec:cc}. 
In Section~\ref{app:mat2}, we present our $(2+\epsilon)$-approximate matching algorithm, and finally the reduction is presented in Section~\ref{app:red}. \section{Fully-dynamic DMPC algorithm for maximal matching} \label{sec:maximal-matching} In this section we present a deterministic fully-dynamic algorithm for maintaining a maximal matching with a constant number of rounds per update and a constant worst-case number of active machines per update, when the memory of each machine is $\Omega(\sqrt{N})$ bits, where $N=m+n$ and $m$ is the maximum number of edges throughout the update sequence. The communication per round is $O(\sqrt{N})$. Recall that our model introduces additional restrictions on the design of efficient algorithms. Specifically, the memory of each machine might not even be sufficient to store the neighborhood of a single vertex, which implies that the edges incident to a single vertex may be stored in polynomially many machines. In this framework, a scan of the neighbors of a single vertex requires a polynomial number of active machines in a round. Our algorithm borrows an observation from the fully-dynamic algorithm for maximal matching of Neiman and Solomon \cite{neiman2016simple}, which has $O(\sqrt{m})$ worst-case update time and $O(n^2)$ space, or the same amortized update bound with $O(m)$ space. Specifically, Neiman and Solomon \cite{neiman2016simple} observe that a vertex either has a low degree, or has only a few neighbors with high degree. This allows us to treat vertices with large degree separately from those with relatively small degree. We call a vertex \emph{heavy} if it has a large degree and \emph{light} if it has a small degree. The degree threshold that distinguishes light from heavy vertices is set to $\sqrt{2m}$. 
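The light/heavy split can be stated as a one-line classification. The following toy sketch (names are ours) uses the $\sqrt{2m}$ threshold employed in the rest of this section:

```python
import math

def split_light_heavy(adj, m):
    """Classify the vertices of an adjacency dict `adj` (vertex -> list of
    neighbors) into light and heavy by the degree threshold sqrt(2m).
    Illustrative sketch only."""
    threshold = math.sqrt(2 * m)
    heavy = {v for v, nbrs in adj.items() if len(nbrs) > threshold}
    light = set(adj) - heavy
    return light, heavy
```

Since the sum of all degrees is $2m$, at most $\sqrt{2m}$ vertices can be heavy under this threshold, which is what keeps the heavy case manageable.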
As the memory of each machine is $\Omega(\sqrt{m})$, we can fit a light vertex together with its edges on a single machine, but for heavy vertices we can keep only up to $O(\sqrt{m})$ of their edges in a single machine. Given that each vertex knows whether it is an endpoint of a matched edge, the only non-trivial update to be handled is when an edge $e=(x,y)$ of the matching is deleted and we have to check whether there exists an edge adjacent to $x$ or $y$ that can be added to the matching. Notice that if the neighborhood of each vertex fits in a single machine, then it is trivial to bound the number of rounds needed to update the solution, as it suffices to search for free neighbors of $x$ and $y$ that can be matched to those vertices. Such a search can be done in a couple of rounds by sending a message from $x$ and $y$ to their neighbors, asking whether they are free to be matched. However, this does not immediately bound the number of active machines per round. \smallskip\noindent\textbf{Overview.} Our algorithm keeps, for each light vertex, all the edges of its adjacency list in a single machine. For every heavy vertex $v$ we keep only $\sqrt{2m}$ of its edges, which we call \emph{alive}; we call the rest of the edges of $v$ \emph{suspended}. We initially invoke an existing algorithm to compute a maximal matching in $O(\log n)$ rounds. Our algorithm always maintains a matching with the following invariant: \begin{invariant} \label{inv:heavy-matched} No heavy vertex gets unmatched throughout the execution of the algorithm\footnote{After computing the initial maximal matching some heavy vertices might be unmatched. During the update sequence, once a heavy vertex gets matched, it is not removed from the matching, unless it becomes light again.}. \end{invariant} If a new edge gets inserted to the graph, we simply check whether we can add it to the matching (i.e., whether both its endpoints are free). Now assume that an edge $(x,y)$ of the matching gets deleted. 
If both endpoints are light, then we just scan their adjacency lists (each of which lies in a single machine) to find a replacement edge for each endpoint of $(x,y)$. If $x$ is heavy, then we search the $\sqrt{2m}$ alive edges of $x$, and if we find a free neighbor we match it to $x$. If we cannot find a free neighbor of $x$, then among the (matched) $\sqrt{2m}$ alive neighbors of $x$ there must exist a neighbor $w$ with a light mate $z$ (as otherwise the sum of the degrees of the mates of the neighbors of $x$ would exceed $2m$), in which case we remove $(w,z)$ from the matching, add $(x,w)$ to the matching, and search the neighbors of the (light) vertex $z$ for a free neighbor to match to $z$. If $y$ is heavy, we proceed analogously. We build the necessary machinery in order to keep the aforementioned allocation of the adjacency lists to the available machines updated. This involves moving edges between machines whenever necessary, which introduces several challenges, since we cannot keep the information in all machines up to date with only $O(1)$ messages exchanged. On the other hand, we cannot allocate edges to an arbitrary number of machines. We deal with these issues by periodically updating the machines, taking advantage of the fact that we can send large messages from the coordinator machine. \smallskip\noindent\textbf{Initialization and bookkeeping.} Our algorithm makes use of $O(\sqrt{N})$ machines. We assume that the vertices of the graph have IDs from $1$ to $n$. Our algorithm executes the following preprocessing. First, we compute a maximal matching (this can be done in $O(\log n)$ rounds with the randomized CONGEST algorithm from~\cite{ISRAELI198677}). Together with each edge in the graph we store whether each of its endpoints is matched; if it is, we also store its mate in the matching. In a second phase, we compute the degree of each vertex (this can be done in $O(1)$ rounds for all vertices). 
We place the vertices into the machines in such a way that the whole adjacency list of each light vertex, and arbitrary $\sqrt{2m}$ edges from the adjacency list of each heavy vertex, are stored in a single machine. The remaining adjacency list of a heavy vertex is stored in separate exclusive machines (machines that store only edges of that vertex), so that as few machines as possible are used to store the adjacency list of a heavy vertex. The light vertices, on the other hand, are grouped together into machines. The machines that store heavy vertices are characterized as \emph{heavy machines}, and those storing adjacency lists of light vertices as \emph{light machines}. One of the machines acts as the coordinator, in the sense that all the queries and updates are executed through it. The coordinator machine, denoted by $M_C$, stores an update-history $\mathcal{H}$ of the last $O(\sqrt{N})$ updates in both the input and the maintained solution, i.e., which edges have been inserted into and deleted from the input in the last $\sqrt{N}$ updates, and which edges have been inserted into and deleted from the maintained maximal matching. Moreover, for each newly inserted edge in the update-history we store a binary value for each of its endpoints, indicating whether its adjacency list has been updated to include the edge. For convenience, throughout this section we say that the algorithm invokes some function without stating explicitly that all the communication goes through $M_C$. We dedicate $O(n/\sqrt{N})$ machines to store statistics about the vertices of the graph, such as their degree, whether they are matched and, if so, their mate, the machine storing their alive edges, and the last machine in the sequence of machines storing their suspended edges (we treat the machines storing suspended edges as a stack). To keep track of which machine keeps information about which vertices, we allocate vertices with consecutive IDs to a single machine, so that we only need to store the range of IDs kept in each machine. 
Hence, in $M_C$, in addition to the update-history $\mathcal{H}$, we also store for each range of vertex IDs the machine that contains their statistics. This information fits in the memory of $M_C$, as the number of machines is $O(\sqrt{N})$. Finally, $M_C$ also stores the memory available in each machine. \smallskip\noindent\textbf{Maintaining the bookkeeping.} In what follows, for the sake of simplicity, we assume that the update-history $\mathcal{H}$ is updated automatically. Further, we skip the description of the trivial updates or queries on the statistics of a vertex, such as its degree, whether it is an endpoint of a matched edge, the machine storing its alive edges, etc. All of these can be done in $O(1)$ rounds via a message through the coordinator machine $M_C$. After each update to the graph, we update the information that is stored in a machine by executing those updates in a round-robin fashion, that is, each machine is updated at least once every $O(\sqrt{N})$ updates. Recall that we use $O(\sqrt{N})$ machines. Throughout the sequence of updates we use the following set of supporting procedures to maintain a valid allocation of the vertices into machines: -- {\bf $getAlive(x):$} Returns the ID of the machine storing the alive neighbors of $x$. -- {\bf $getDegInMachine(M,x):$} Returns $x$'s degree in machine $M$. -- {\bf $getSuspended(x):$} Returns the ID of the last machine in the sequence of heavy machines storing the suspended edges of $x$. -- {\bf $fits(M, s):$} Returns $true$ if $s$ edges fit into a light machine $M$, and $false$ otherwise. -- {\bf $toFit(s):$} Returns the ID of a light machine that has enough memory to store $s$ edges, and the available space in that machine. -- {\bf $addEdge((x,y))$:} We only describe the procedure for $x$, as the case for $y$ is completely analogous. If $x$ is heavy, add $(x,y)$ to the machine $getSuspended(x)$ if it fits, or otherwise to a new machine, and set the new machine to be $getSuspended(x)$. 
If, on the other hand, $x$ is light and $(x,y)$ fits into $getAlive(x)$, we simply add $(x,y)$ to $getAlive(x)$. If $(x,y)$ does not fit in $getAlive(x)$, then call $moveEdges(x,s,M_x,toFit(s),\mathcal{H})$, where $s$ is the number of alive edges of $x$ (if $x$ becomes heavy, we mark that). If all of the remaining edges in the machine $M_x$ (of light vertices other than $x$) fit into another machine, then move them there (this is to bound the number of used machines). -- {\bf $moveEdges(x, s, M_1, M_2, \mathcal{H})$}, where $x$ is light: First, remove from machine $M_1$ the deleted edges of $x$, based on $\mathcal{H}$. Second, send from $M_1$ up to $s$ edges of $x$ to $M_2$. If the $s$ edges do not fit into $M_2$, move the neighbors of $x$ stored in $M_2$ to a machine that fits them, i.e., execute $M_{x'}=toFit(s+getDegInMachine(M_2,x))$, move the $s$ edges of $x$ in $M_1$ to $M_{x'}$, and call $moveEdges(x, getDegInMachine(M_2,x), M_2, M_{x'}, \mathcal{H})$. -- {\bf $fetchSuspended(x,s)$}, where $x$ is heavy: Moves $s$ suspended edges to the machine $M_x=getAlive(x)$. To achieve this we call \\ $moveEdges(x, s, getSuspended(x), M_x, \mathcal{H})$. While the number of edges moved to $M_x$ is $s'<s$, call $moveEdges(x, s-s',getSuspended(x), M_x, \mathcal{H})$. -- {\bf $moveSuspended(x,s,L)$}, where $x$ is heavy: Moves the set $L$ of $s$ edges of $x$ from machine $getAlive(x)$ to the machines storing the suspended edges of $x$. We first fit as many edges as possible in the machine $getSuspended(x)$, and the rest (if any) in a new machine. -- {\bf $updateVertex(x, \mathcal{H}):$} Update the neighbors of $x$ that are stored in $M_x = getAlive(x)$ based on $\mathcal{H}$. If $x$ is heavy and the number of edges from the adjacency list of $x$ in $M_x$ is $s < \sqrt{2m}$, then call $fetchSuspended(x,\sqrt{2m}-s)$. If $x$ is heavy and the set of alive edges has size $s>\sqrt{2m}$, then call $moveSuspended(x,s-\sqrt{2m},L)$, where $L$ is a set of $s-\sqrt{2m}$ edges of $x$ that does not contain the edge $(x,mate(x))$. 
If, on the other hand, $x$ is light and the set of alive edges of $x$ does not fit in $M_x$ after the update, call $moveEdges(x,s,M_x,toFit(s),\mathcal{H})$, where $s$ is the number of alive edges of $x$. If all of the remaining edges in the machine $M_x$ (of light vertices other than $x$) fit into another machine, then move them there (this is to bound the number of used machines). -- {\bf $updateMachine(M, \mathcal{H}):$} Update all adjacency lists stored in machine $M$ to reflect the changes in the update-history $\mathcal{H}$. If $M$ is a heavy machine of a vertex $x$, we proceed as in the case of $updateVertex(x, \mathcal{H})$, but now on machine $M$ rather than $getAlive(x)$. Now assume $M$ is light. First, delete the necessary edges of the light vertices stored at $M$, based on $\mathcal{H}$. If all of the remaining edges of the machine fit into another half-full machine, then move them there (this is to bound the number of used machines). \smallskip\noindent\textbf{Handling updates.} We now describe how our algorithm updates the maintained maximal matching after an edge insertion or an edge deletion. \paragraph{\bf$insert(x,y)$} First, execute $updateVertex(x)$, $updateVertex(y)$, and $addEdge((x,y))$. If both $x$ and $y$ are matched, then do nothing and return. If neither $x$ nor $y$ is matched, add $(x,y)$ to the matching and return. In the case where $x$ is matched and heavy and $y$ is unmatched and light, do nothing and return. The same happens if $y$ is matched and heavy and $x$ is unmatched. If $x$ is unmatched and heavy, search for a (matched, as the matching is maximal) neighbor $w$ of $x$ whose mate $z$ is light, remove $(w,z)$ from the matching, add $(x,w)$ to the matching, and if $z$ (which is a light vertex) has an unmatched neighbor $q$, add $(z,q)$ to the matching. If $y$ is unmatched and heavy, proceed analogously. Note that this restores Invariant \ref{inv:heavy-matched}. 
In any case, the update-history is updated to reflect all the changes caused by the insertion of $(x,y)$. \paragraph{\bf$delete(x,y)$} First, update $\mathcal{H}$ to reflect the deletion of $(x,y)$, and call $updateVertex(x)$ and $updateVertex(y)$. If $(x,y)$ is not in the matching, do nothing and return. (The edge has already been deleted from the adjacency lists via the calls to $updateVertex$.) If $(x,y)$ is in the matching, proceed as follows. First, remove $(x,y)$ from the matching. If $z\in\{x,y\}$ is heavy, search for a neighbor $w$ of $z$ whose mate $w'$ is light, remove $(w,w')$ from the matching, add $(z,w)$ to the matching, and if $w'$ (which is a light vertex) has an unmatched neighbor $q$, add $(w',q)$ to the matching. If $z\in\{x,y\}$ is light, scan the neighborhood of $z$ for an unmatched vertex $w$, and add $(z,w)$ to the matching. In any case, the update-history is updated to reflect all the changes caused by the deletion of $(x,y)$. \begin{lemma} The algorithm uses $O(\sqrt{N})$ machines. \end{lemma} \begin{proof} We show that we maintain at most twice as many machines as the optimum placement. Let $M_1, \dots, M_l$ be the machines that store the adjacency list of a heavy vertex $x$, where $M_1=getAlive(x)$. Since only $M_l$ may not be full, we use at most twice as many machines as the optimum placement for each heavy vertex. Let now $M_1, \dots, M_l$ be all the machines storing light vertices. Since with each update of a light adjacency list we check whether we can merge two light machines, it follows that there are no two machines whose edges can be stored in one. Hence, our claim holds also in this case. The lemma follows from the observation that the optimum placement of the edges requires $O(\sqrt{N})$ machines. \end{proof} \begin{lemma} Both $insert(x,y)$ and $delete(x,y)$ run in $O(1)$ rounds, activate $O(1)$ machines per round, and generate $O(\sqrt{N})$ communication per round. 
\end{lemma} \begin{proof} Recall that we manage the machines storing the suspended edges of heavy vertices as stacks, that is, we store the last machine storing the suspended edges of a vertex $x$ together with the rest of the statistics for $x$, and each machine maintains a pointer to the next machine in the sequence. Hence, we can access in $O(1)$ rounds the machine that is last in the sequence of machines maintaining the suspended edges of a vertex. The only supporting function that is not trivially executable in $O(1)$ rounds is $fetchSuspended$. Note that a call to $fetchSuspended$ makes multiple calls to $moveEdges$ to transfer suspended edges of a heavy vertex $x$. As each machine is updated every $O(\sqrt{N})$ updates, it follows that the number of edges that have been removed from the graph but are stored in machines that have not yet been updated is $O(\sqrt{N})$. As all the calls to $moveEdges$ together transfer at most $O(\sqrt{N})$ edges of $x$, and all but one of the machines storing suspended edges of $x$ are full, it follows that there is at most a constant number of calls to $moveEdges$. \end{proof} \section{Fully-dynamic 3/2-approximate maximum matching}\label{sec:mat1} The algorithm for the $3/2$-approximate matching builds on top of the algorithm for maintaining a maximal matching from Section \ref{sec:maximal-matching}. Our algorithm is an adaptation of the algorithm from \cite{neiman2016simple} to our DMPC model. Our algorithm's approximation guarantee is based on a well-known graph-theoretic connection between augmenting paths in an unweighted graph, with respect to a matching, and the approximation factor of the matching relative to the maximum cardinality matching. An augmenting path is a simple path starting and ending at a free vertex, following alternating unmatched and matched edges. 
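The elimination of short augmenting paths, which is the operation this section revolves around, can be illustrated with a small centralized sketch (a toy version with our own names; the real algorithm performs these steps across machines via the coordinator):

```python
def augment_length3(adj, mate, u):
    """Try to eliminate a length-3 augmenting path starting at the free
    vertex u: a path u - w - w' - z with (w, w') matched and z free.
    `mate` maps each matched vertex to its partner. Returns True if a
    path was found and augmented."""
    for w in adj[u]:
        wp = mate.get(w)
        if wp is None:
            continue  # w is free: matching u-w directly is a shorter fix
        for z in adj[wp]:
            if z != u and z not in mate:
                # Flip the path: remove (w, w'), add (u, w) and (w', z).
                del mate[w], mate[wp]
                mate[u], mate[w] = w, u
                mate[wp], mate[z] = z, wp
                return True
    return False
```

After a successful call, the matching has grown by one edge and neither endpoint of the former path is free, which is exactly the repair step performed after deletions below.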
Specifically, a matching that admits no augmenting paths of length at most $2k-1$ in a graph is a $(1+\frac{1}{k})$-approximate matching~\cite{hopcroft1973n}. In this section we show that it is possible to use the technique of~\cite{neiman2016simple} to design a simple DMPC algorithm for $k=2$. The additional information that the algorithm needs to maintain, compared to the algorithm from Section \ref{sec:maximal-matching}, is the number of unmatched neighbors of each vertex. We call these counters the \emph{free-neighbor} counters of the light vertices. We keep this information in the $O(n/\sqrt{N})$ machines storing the statistics about the vertices of the graph. In this algorithm, we assume that the computation starts from the empty graph. (An initialization algorithm for this problem would require eliminating all augmenting paths of length 3, but we are not aware of such an algorithm that does not require considerably more than $O(N)$ total memory.) Since the algorithm from Section \ref{sec:maximal-matching} maintains a matching where all heavy vertices are always matched, we only need to update the free-neighbor counters whenever a light vertex changes its matching status. Recall that a light vertex keeps all of its neighbors in the same machine. Therefore, we simply need to update the counters of the neighbors of the light vertex. This requires a message of size $O(\sqrt{N})$ from the light vertex $v$ that changed its status to the coordinator, and from there appropriate messages of total size $O(\sqrt{N})$ to the $O(n/\sqrt{N})$ machines storing the free-neighbor counters of the neighbors of $v$. Given that we maintain for each vertex its free-neighbor counter, we can quickly identify whether an edge update introduces augmenting paths of length $3$. The modifications to the algorithm from Section \ref{sec:maximal-matching} are as follows. 
-- In the case of the insertion of an edge $(u,v)$, if $u$ is matched but $v$ is unmatched, we check whether the mate $u'$ of $u$ has a free neighbor $w$; if this is the case, we remove $(u,u')$ from the matching and add $(w,u')$ and $(u,v)$ (the path $\langle v,u,u',w \rangle$ is an augmenting path of length 3). The only free-neighbor counters that we have to update are those of the neighbors of $w$ and $v$, as no other vertices change their status, and no new augmenting paths are introduced, as no matched vertex gets unmatched. -- If both $u$ and $v$ are free after the insertion of $(u,v)$, we add $(u,v)$ to the matching and update the free-neighbor counters of all neighbors of $u$ and $v$ (which are light vertices, as all heavy vertices are matched). -- If we delete an edge which is not in the matching, then we simply update the free-neighbor counters of its two endpoints, if necessary. -- Whenever an edge $(u,v)$ of the matching is deleted, we treat $u$ as follows. If $u$ has a free neighbor $w$, then we add $(u,w)$ to the matching and update the free-neighbor counters of the neighbors of $w$ (which is light). If $u$ is light but has no free neighbors, then we search for an augmenting path of length 3 starting from $u$. To do so, it suffices to identify a neighbor $w$ of $u$ whose mate $w'$ has a free neighbor $z\not=u$. If such a $w'$ exists, then we remove $(w,w')$ from the matching, add $(u,w)$ and $(w',z)$ to the matching, and finally update the free-neighbor counters of the neighbors of $z$ (which is light). No other vertex changes its status. If, on the other hand, $u$ is heavy, then we find an alive neighbor $w$ of $u$ with a light mate $w'$, remove $(w,w')$ from the matching, and add $(u,w)$ to it. (This can be done in $O(1)$ rounds of communication through the coordinator with the up to $n/\sqrt{N}$ machines storing the statistics of the mates of the $O(\sqrt{N})$ alive neighbors of $u$.) 
Finally, given that $w'$ is light, we proceed as before, trying to either match $w'$ or find an augmenting path of length 3 starting from $w'$; that is, we proceed similarly to the case where $u$ is light. Notice that in all cases where we have to update the free-neighbor counters of all neighbors of a vertex $v$, $v$ is a light vertex, so there are at most $O(\sqrt{N})$ counters to be updated, and thus they can be accessed in $O(1)$ rounds, using $O(n/\sqrt{N})$ active machines and $O(\sqrt{N})$ communication. Hence, given the guarantees from Section \ref{sec:maximal-matching} and the fact that we only take a constant number of actions per edge insertion or deletion, we conclude that our algorithm updates the maintained matching in $O(1)$ rounds, using $O(n/\sqrt{N})$ machines and $O(\sqrt{N})$ communication per round in the worst case. We conclude this section by proving the approximation factor of our algorithm. \begin{lemma} The algorithm described in this section correctly maintains a $3/2$-approximate matching. \end{lemma} \begin{proof} In order to guarantee the $3/2$ approximation, we need to argue that there are no augmenting paths of length $3$ (augmenting paths of length $1$ cannot exist since the matching is maximal). Such a path exists if and only if there is an edge of the matching both of whose endpoints have a free neighbor. We show that after every update made by our algorithm, we eliminate all such matched edges. That is, for each edge of the matching we ensure that at most one endpoint has a free neighbor. We proceed with a case study, assuming that our claim holds just before the update we consider. Recall that the maintained matching is always maximal, as we build on the algorithm from Section \ref{sec:maximal-matching}. The only two cases where we need to search for an augmenting path of length $3$ are when a new vertex becomes free, or when we connect a matched vertex with a free vertex. 
In the case where a vertex $u$ becomes free due to an edge deletion, our algorithm tests whether $u$ is an endpoint of a length-3 augmenting path $\langle u,w,w',z \rangle$, where $w$ is a matched neighbor of $u$, $w'$ is the mate of $w$, and $z$ is a free neighbor of $w'$; if such a path exists, it is augmented by removing $(w,w')$ from the matching and adding $(u,w)$ and $(w',z)$ to it. This does not create new augmenting paths, as $u$ and $z$ have no free neighbors and no new vertex becomes free. For the second case, where we connect a matched vertex with a free vertex, we again search for and augment possible augmenting paths of length 3. Given that all free-neighbor counters are updated every time a vertex enters or leaves the matching, our algorithm maintains a $3/2$-approximate matching. \end{proof} \section{The model}\label{sec:model} In this work we build on the model that was introduced by Karloff, Suri, and Vassilvitskii \cite{karloff2010model}, and further refined in \cite{andoni2014parallel,beame2013communication,goodrich2011sorting}. This model is commonly referred to as the \emph{Massive Parallel Computing (MPC)} model. In its abstraction, the MPC model is the following. The parallel system is composed of a set of $\mu$ machines $M_1, \dots, M_\mu$, each equipped with a memory that fits up to $S$ bits. The machines exchange messages in synchronous rounds, and each machine can send and receive messages of total size up to $S$ in each round. The input, of size $N$, is stored across the different machines in an arbitrary way. We assume that $S,\mu \in O(N^{1-\epsilon})$, for a sufficiently small $\epsilon$. The computation proceeds in rounds. In each round, each machine first receives the messages sent to it in the previous round, then processes the data stored in its memory without communicating with other machines, and finally sends messages to other machines. 
At the end of the computation, the output is stored across the different machines and is output collectively. The data output by each machine has to fit in its local memory and, hence, each machine can output at most $S$ bits. Since in each round all machines can send and receive messages of total size $S$, the total communication per round is bounded by $S \cdot \mu \in O(N^{2-2\epsilon})$. See \cite{karloff2010model} for a discussion and justification. When designing MPC algorithms, there are three parameters that need to be bounded: -- Machine Memory: in each round, the total memory used by each machine is $O(N^{1-\epsilon})$ bits. -- Total Communication: the total amount of data communicated in any round is $O(N^{2-2\epsilon})$ bits. -- Rounds: the number of rounds is $O(\log^i n)$, for a small $i\geq 0$. Several problems are known to admit constant-round algorithms, such as sorting and searching \cite{goodrich2011sorting}. \smallskip\noindent\textbf{Dynamic algorithms.} In the centralized model of computation, dynamic algorithms have been extensively studied in the past few decades. The goal of a dynamic algorithm is to maintain the solution to a problem while the input undergoes updates. The objective is to update the solution to the latest version of the input while minimizing the time spent per update. A secondary optimization quantity is the total space required throughout the whole sequence of updates. A dynamic graph algorithm is called \emph{incremental} if it allows edge insertions only, \emph{decremental} if it allows edge deletions only, and \emph{fully-dynamic} if it allows an intermixed sequence of both edge insertions and edge deletions. Most basic problems have been studied in the dynamic centralized model, and they admit efficient update times. 
Some of these problems include connectivity and minimum spanning tree \cite{holm2001poly, nanongkai2017dynamic}, approximate matching \cite{arar2018dynamic, baswana2011fully, bernstein2019adeamortization, charikar2018fully, neiman2016simple, somon2016fully}, and shortest paths \cite{Abraham2017fully, demetrescu2001fully}. \smallskip\noindent\textbf{Dynamic algorithms in the DMPC model.} Let $G=(V,E)$ be a graph with $n=|V|$ vertices and $m=|E|$ edges. In the general setting of the MPC model, where the memory of each machine is strictly sublinear in $n$, no constant-round algorithms are known even for very basic graph problems, such as maximal matching, approximate weighted matching, and connected components. Recomputing the solution for each of those problems requires $O(\log n)$ rounds, the amount of data shuffled between any two rounds can be as large as $O(N)$, all the machines are active in each round, and all machines need to communicate with each other. Therefore, it is natural to ask whether we can update the solution to these problems after a small change to the input graph, using a smaller number of rounds, fewer active machines per round, and less total communication per round. Notice that bounding the number of machines that communicate immediately implies the same bound on the active machines per round. For convenience, we call \emph{active} the machines that are involved in communication. The number of active machines also implies a bound on the amount of data sent in one round, as each machine holds information at most equal to its memory (i.e., $O(\sqrt{N})$ bits). The complexity of a dynamic algorithm in the DMPC model can be characterized by the following three factors: -- The \emph{number of rounds} required to update the solution. -- The \emph{number of machines} that are active per round. -- The \emph{total amount of data involved in the communication} per round. 
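The three measures above can be tracked with a small bookkeeping helper when experimenting with DMPC-style algorithms. This is an illustration only (the class and its methods are ours, not part of the model):

```python
from dataclasses import dataclass, field

@dataclass
class RoundStats:
    """Toy accounting of the three DMPC complexity measures for one
    update: rounds, active machines per round, communication per round."""
    rounds: int = 0
    active_machines: list = field(default_factory=list)
    communication: list = field(default_factory=list)

    def record_round(self, active, bits):
        """Record one synchronous round with `active` machines involved
        and `bits` total bits communicated."""
        self.rounds += 1
        self.active_machines.append(active)
        self.communication.append(bits)

    def summary(self):
        """(number of rounds, max active machines, max communication)."""
        return (self.rounds,
                max(self.active_machines, default=0),
                max(self.communication, default=0))
```

Worst-case guarantees such as "$O(1)$ rounds with $O(\sqrt{N})$ communication per round" then correspond to bounds on every entry of the recorded lists, not just their sums.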
An ideal algorithm in the DMPC model processes each update in a constant number of rounds, using a constant number of machines and a constant amount of total communication. While such an algorithm might not always be feasible, a dynamic algorithm should use polynomially (or even exponentially) fewer resources than its static counterpart in the MPC model. \paragraph{Use of a coordinator.} Distributed systems often host multiple jobs simultaneously, which causes different jobs to compete for resources. Additionally, systems relying on many machines to work simultaneously are prone to failures of either machines or channels of communication between the machines. Our model allows solutions where all updates are sent to a single (arbitrary, but fixed) machine that keeps additional information on the status of the maintained solution, and then coordinates the rest of the machines to perform the update, by sending them large messages containing the additional information that it stores. Examples of such algorithms are our algorithms for the maximal matching and the $3/2$-approximate matching. In practice, the use of a coordinator might create bottlenecks in the total running time, since it involves transmission of large messages, and it also makes the system vulnerable to failures (i.e., if the coordinator fails, one might not be able to recover the solution). We note that the role of the coordinator in our matching algorithms is not to simulate centralized algorithms (as we do in our reduction, which simulates centralized dynamic algorithms in the DMPC model), i.e., to perform all computation at the coordinator machine while treating the rest of the machines as memory. Instead, we treat the coordinator as a buffer of updates and changes to the solution, and we communicate this buffer to the rest of the machines on a need-to-know basis. 
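The buffer-style use of the coordinator can be sketched as follows. This is a toy centralized model (class and method names are ours) of the need-to-know forwarding described above:

```python
class Coordinator:
    """Toy coordinator that buffers graph updates and forwards to each
    machine only the buffered updates touching its vertices."""

    def __init__(self, machine_of):
        self.machine_of = machine_of  # vertex -> id of responsible machine
        self.history = []             # buffered update-history, like H

    def update(self, op, edge):
        """Buffer an update ('insert' or 'delete') of an edge (u, v)."""
        self.history.append((op, edge))

    def flush_for(self, vertex):
        """Return the responsible machine's id and the buffered updates
        that involve `vertex` (the need-to-know subset of the buffer)."""
        relevant = [(op, e) for op, e in self.history if vertex in e]
        return self.machine_of[vertex], relevant
```

In the actual algorithms, a machine's copy of its adjacency lists is brought up to date from this buffer only when the machine is touched (or in its round-robin turn), which is what keeps the per-round communication bounded.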
\paragraph{Algorithmic challenges.} The main algorithmic challenges imposed by our model are the sublinear memory (most of the algorithms known in the MPC model use memory in $\Omega(n)$) and the restriction on the number of machines used in every round. This second point is the main difference between the MPC and DMPC models and poses a set of new interesting challenges.

\section{Simulating sequential dynamic algorithms with MPC algorithms}\label{app:red}

\begin{lemma} Assume that there exists a sequential algorithm $\mathcal{SA}$ for maintaining a solution to the problem $\mathcal{P}$ with polynomial preprocessing time $p(N)$ and update time $u(N)$, where the algorithm is either deterministic or randomized and the update time is amortized or worst-case. Then there exists a DMPC algorithm $\mathcal{MRA}$ with $O(p(N))$ rounds for the preprocessing and $O(u(N))$ rounds per update, with $O(1)$ machines active per round. The DMPC algorithm is of the same type as the sequential algorithm. \end{lemma}

\begin{proof} For the reduction, we assume that the computation takes place on a single machine $M_{\mathcal{MRA}}$ and that the rest of the machines act as the main memory of the corresponding sequential algorithm. For each array-based data structure of the sequential algorithm, we allocate a single machine to keep track of how the data are distributed over the machines; i.e., the data structure allocates a minimum number of machines (up to a constant factor) and distributes the data in intervals such that a whole interval of the array lies on a single machine. For each list-based data structure, just as the sequential algorithm stores a single link to the first element of the list, we store only the machine storing the first element together with its position in the memory of that machine. Moreover, at the position of each element of the list we also store a pointer to the machine and position of the next element.
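As a concrete illustration of the address translation behind this simulation, the following Python sketch (all names are hypothetical, not part of the paper) maps a global array index to a machine and a local offset when each machine holds roughly $\sqrt{N}$ words:

```python
import math

def make_layout(n_words):
    """Return (machine_memory, n_machines) for an array of n_words words,
    with per-machine memory ~ sqrt(N) as in the DMPC model."""
    machine_memory = max(1, math.isqrt(n_words))
    return machine_memory, math.ceil(n_words / machine_memory)

def locate(index, machine_memory):
    """Translate a global array index to (machine id, local offset)."""
    return divmod(index, machine_memory)

# A 10^6-word array fits on 1000 machines of 1000 words each; fetching an
# arbitrary position touches a constant number of machines.
s, c = make_layout(1_000_000)
machine, offset = locate(123_456, s)
```

Each memory access of $\mathcal{SA}$ then becomes one fetch (and possibly one write-back) addressed by `(machine, offset)`, which takes a constant number of rounds.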
For other types of data structures we can act similarly. For instance, if a data structure is a list of dynamically reallocated array-based data structures, then we maintain the array-based data structures on as few machines as possible and allocate new machines only when necessary (this ensures that we do not use too many machines). Whenever the algorithm executed on $M_{\mathcal{MRA}}$ requests access to an arbitrary position of an array-based data structure, this memory position is fetched, and written back again if its value has changed, within a constant number of rounds and by accessing only a constant number of machines. In the case where the $\mathcal{MRA}$ algorithm requests access to an element of a list, it is required to specify a pointer to the machine and position of the element (in the beginning a pointer to the first element is specified, and as the list is scanned, the pointer to the next element is known to the algorithm). The complexity of a sequential algorithm is determined by the number of its accesses to memory and by its arithmetic operations. Since each memory access by $\mathcal{SA}$ is simulated in a constant number of rounds by $\mathcal{MRA}$ with a constant number of active machines per round, the running time of $\mathcal{SA}$ translates directly into rounds of $\mathcal{MRA}$. Therefore, the preprocessing time $p(N)$ and the update time $u(N)$ of the sequential algorithm can be simulated in $O(p(N))$ and $O(u(N))$ rounds, respectively, by the algorithm $\mathcal{MRA}$ with a constant number of machines per round. \end{proof}

\section{Discussion and Open Problems}

Although we believe that our model is a natural extension of the MPC model for dynamic algorithms, we acknowledge that the DMPC model has a few deficiencies.
The main deficiency of the model is that it allows algorithms that during an update make use of a predefined set of machines (in the sense that the same machines are used during this update independently of the nature and content of the update); for instance, algorithms that make use of a coordinator machine, as some of the algorithms presented in this paper do. Such practices might create bottlenecks in the performance of algorithms, and even make the overall system more vulnerable to failures or malicious attacks. This consideration can be taken into account by adding the following parameter to the DMPC model. Assuming that at each round the executed update is drawn uniformly at random from all possible updates, we measure the entropy of the normalized distribution of the total communicated bits among the pairs of machines at each round. The higher the value of this metric, the better the algorithm in terms of how uniformly the transmitted messages are distributed among the machines. We next further elaborate on this potential metric. Consider a particular update round $r$, where the update happening is drawn uniformly at random from the set of all possible updates that can potentially happen at round $r$. Let $\phi:[C]\times [C] \rightarrow [C^2]$, where $C$ is the total number of machines, be a mapping from pairs of machines to integers, and let $\alpha$ be the vector where $\alpha[\phi(i,j)]$ is the expected size of the message transmitted from machine $M_i$ to machine $M_j$ at round $r$, which depends on the update happening at round $r$. For instance, an algorithm using a coordinator machine $M_c$ will have $\sum_{i\not=c}\alpha[\phi(c,i)] = \sqrt{N}$, and hence $M_c$ will certainly be activated, transmitting $\sqrt{N}$ bits in expectation. Ideally, we would like the expected total communication to be equally distributed over the values $\alpha[\phi(i,j)]$.
This can be captured by the notion of entropy, defined over the normalized vector $\overline{\alpha}$, where $\sum_{i,j}\overline{\alpha}[\phi(i,j)] = 1$. The entropy $H(\overline{\alpha})$ is maximized when the distribution of the unit value over the entries of $\overline{\alpha}$ is uniform, that is, when $\overline{\alpha}[\phi(i,j)] = 1/\ell$, where $\ell$ is the length of $\overline{\alpha}$. Note that the average value of $\alpha$ is upper bounded by the bound on total communication per round required by our model. Intuitively, the measure of entropy that we consider quantifies the randomness in the set of machines that exchange messages, with respect to a random update. For instance, when an algorithm uses a coordinator machine, the communication is concentrated on the connections between the coordinator and the machines storing the updated elements (which are random), and hence the total entropy is limited. A second deficiency of our model is that the measure of total communication per round is somewhat ``coarse'', as it ignores the form of the messages exchanged. For example, consider a round that uses $O(\sqrt{N})$ total communication and $O(\sqrt{N})$ active machines: the model does not distinguish between the case where each active machine transmits a short message of size $O(1)$ to another machine and the case where one machine transmits a large message of size $O(\sqrt{N})$ and the rest of the machines transmit small messages. Notice that this second deficiency can also be addressed by the entropy-based metric introduced for the first deficiency. For the sake of simplicity of the model, we chose to avoid incorporating complicated metrics as parameters of the model in this initial paper introducing it.
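The proposed metric can be sketched as follows (a minimal illustration with an assumed machine count; not part of the paper): a coordinator-style round concentrates traffic on the $C-1$ coordinator connections and therefore scores a lower entropy than a round whose traffic is spread over all machine pairs.

```python
import math

def entropy(weights):
    """Shannon entropy (bits) of a communication vector after normalization."""
    total = sum(weights)
    probs = [w / total for w in weights if w > 0]
    return -sum(p * math.log2(p) for p in probs)

C = 8                        # number of machines (illustrative)
pairs = C * (C - 1)          # ordered pairs of distinct machines

# Coordinator-style round: all traffic on the C-1 coordinator connections.
coordinator = [1.0] * (C - 1) + [0.0] * (pairs - (C - 1))
# Uniform round: traffic spread equally over all machine pairs.
uniform = [1.0] * pairs

# The uniform pattern maximizes the entropy: log2(56) > log2(7).
assert entropy(uniform) > entropy(coordinator)
```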
However, we believe that this is an interesting direction for future work.

\paragraph{Open Problems.} In this paper we initiated the study of algorithms in the DMPC model by considering some very basic graph problems. It is natural to study more problems in this setting, as the MPC model is becoming the standard model for processing massive data sets and its limitations with respect to processing dynamically generated data are clear. In general, we think it is of great value to understand the complexity of fundamental problems in the DMPC model, both in terms of upper and lower bounds. We also believe that it would be interesting to establish connections with other models of computation, in order to develop a better understanding of the strengths and weaknesses of the DMPC model.
\section{Introduction} \label{sc_introduction}

Integrated phase I-II clinical trial designs allow accelerated drug development since they assess, within a single protocol, the safety and efficacy of a compound or a combination of drugs. With the use of traditional chemotherapy compounds, a dose limiting toxicity (DLT) is generally ascertained after one cycle of therapy for the purpose of estimating the maximum tolerated dose (MTD), and it is generally assumed that both the dose-toxicity and dose-efficacy relationships are monotonically increasing functions (see e.g., \cite{le2009dose}). This implies that the optimal dose, i.e., the dose with the most desirable benefit-risk trade-off, must be at the MTD. However, as pointed out by \cite{hoff2007targeted,li2017toxicity,lin2020adaptive}, with other types of compounds such as molecularly targeted therapies or immunotherapies, the monotonicity assumption of the dose-efficacy response may not hold given that efficacy may plateau or decrease at high dose levels, which implies that the optimal dose may not be located at the MTD. Phase I-II designs that do not assume monotonicity of the dose-efficacy relationship have been studied extensively in the last two decades. For instance, \cite{thall2004dose} proposed a Bayesian phase I-II design based on toxicity-efficacy probability trade-offs. \cite{nebiyou2005bayesian} proposed a design in which binary toxicity outcomes and a continuous biomarker expression outcome are jointly modeled. \cite{zhang2006adaptive} employed a flexible continuation-ratio model to account for potentially monotonically increasing / non-increasing / decreasing dose-efficacy profiles. \cite{houede2010utility} proposed a design for drug combinations with ordinal toxicity and efficacy outcomes in which the optimal dose combination was found through a utility function.
\cite{yuan2011bayesian} proposed a phase I-II design for late-onset efficacy with drug combinations that incorporates adaptive randomization of patients in stage II, with the intention of allocating more patients to the more efficacious dose combination levels. \cite{cai2014bayesian} proposed a two-stage phase I-II design in which the optimal dose combination is estimated by encouraging the exploration of untried dose combinations to avoid the problem of recommending suboptimal doses. \cite{guo2017bayesian} proposed a phase I-II design for molecularly targeted agents that considers different biomarker subgroups. \cite{lyu2019aaa} proposed a two-stage phase I-II design in which the optimal dose combination is obtained by maximizing a utility function. One characteristic of this design is that it allows adaptive dose insertion when the current estimate of the optimal dose combination is far from all the pre-defined (discrete) dose combination levels available in the trial. We find this feature particularly appealing in the context of molecularly targeted therapies because it allows the initial grid of discrete dose combination levels to be adapted, if necessary. Without it, we may incur a substantial loss of information, especially when the knowledge about the dose-toxicity and dose-efficacy surfaces is limited (\cite{diniz2019comparison}). However, adaptive dose insertion may be challenging in practice, since new drug formulations may be requested frequently and with short notice. We note that adaptive dose insertion is conceptually similar to having continuous dose levels administered intravenously, which has been extensively studied in both phase I and phase I-II designs by \cite{tighiouart2014,tighiouart2017bayesian,diniz2017,diniz2018,jimenez2019,tighiouart2019two,jimenez2020bayesian,jimenez2021combining,jimenez2021bayesian} in the setting of cytotoxic agents.
Our research is motivated by a published phase I-II clinical trial design that combines a MEK inhibitor and a PIK3CA inhibitor (\cite{lyu2019aaa}), considering four discrete dose levels for each compound and enrolling a total of 96 late-stage cancer patients. The primary endpoint of the study was to improve the efficacy rate from 5\% to 30\%, taking into consideration that a dose combination with an efficacy rate of 20\% or higher is considered beneficial as long as the dose is well tolerated. In this article, we propose a two-dimensional flexible cubic spline function to model the marginal distribution of the efficacy response in settings combining either two molecularly targeted agents or a cytotoxic agent with a molecularly targeted agent. In stage I, the estimated MTD is calculated using the escalation with overdose control (EWOC) principle. In stage II, instead of allocating patients directly to the standardized dose combination with the currently highest utility estimate, we follow the adaptive randomization principle, which prevents the design from becoming stuck at local optima (\cite{yuan2016bayesian}). The most desirable benefit-risk trade-off (i.e., the optimal dose combination) is calculated by maximizing a utility function. The remainder of this article is organized as follows. In section \ref{sc_method} we introduce the dose-toxicity and dose-efficacy models together with the utility function and the stage I and stage II dose finding algorithms. In section \ref{sc_simulation_study}, we present an extensive simulation study to evaluate the operating characteristics of the approach, as well as a comparison with state-of-the-art methodology in the context of this type of clinical trial. We conclude the article in section \ref{sc_discussion} with a discussion and some concluding remarks.
\section{Method} \label{sc_method}

\subsection{Probability models} Consider a phase I-II design combining $Q$ pre-specified doses $x_1 < \dots < x_Q$ of compound $X$ with $\mathcal{J}$ pre-specified doses $y_1 < \dots < y_{\mathcal{J}}$ of compound $Y$. The dose levels of each compound are standardized to fall within the interval $[0,1]$ using the transformations $f_x(x) = (x - x_1) / (x_Q - x_1)$ and $f_y(y) = (y - y_1) / (y_{\mathcal{J}} - y_1)$, with $x = x_1, \dots, x_Q$ and $y = y_1, \dots, y_{\mathcal{J}}$, resulting in $x \in [0,1]$ and $y \in [0,1]$, respectively. Let $Z_T \in \{0,1\}$ be the binary indicator of DLT, where $Z_T = 1$ represents the presence of a DLT after a predefined number of treatment cycles and $Z_T = 0$ otherwise. Let $Z_E \in \{0,1\}$ be the binary indicator of treatment response, where $Z_E = 1$ represents a positive response after a predefined number of treatment cycles and $Z_E = 0$ otherwise. The optimal dose combination, as well as all the adaptive features of the design, are based on the joint probability $P(\textbf{Z} | x, y, \boldsymbol{\Psi})$, where $\textbf{Z} = \{Z_T, Z_E\}$ is a vector containing the toxicity and efficacy binary outcomes, and $\boldsymbol{\Psi} = \{\boldsymbol{\Psi}_T, \boldsymbol{\Psi}_E\}$ is a vector containing the parameters of the marginal toxicity and efficacy models, respectively. For notational simplicity, we suppress the arguments $\boldsymbol{\Psi}_T$ and $\boldsymbol{\Psi}_E$ when this causes no confusion. Following the work of \cite{ivanova2009adaptive,cai2014bayesian,lyu2019aaa,jimenez2020bayesian,jimenez2021combining}, we model the marginal toxicity and efficacy distributions independently (i.e., we assume that toxicity and efficacy are independent).
The marginal probability of toxicity $P(Z_T = 1 | x, y)$ is modeled using the linear logistic regression model \begin{equation} \label{eq_pdlt} P(Z_T = 1 | x,y) = F(\alpha_0 + \alpha_1 x + \alpha_2 y + \eta x y), \end{equation} where $F(.)$ is the cumulative distribution function of the logistic distribution (i.e., $F(u) = 1 / (1 + e^{-u})$). Following \cite{tighiouart2017bayesian}, we reparameterize equation \eqref{eq_pdlt} in terms of parameters that clinicians can easily interpret. Let $\rho_{uv}$ denote the probability of DLT when the levels of agents $X$ and $Y$ are $u$ and $v$, respectively, with $u \in \{0, 1\}$ and $v \in \{0, 1\}$, so that $\alpha_0 = F^{-1}(\rho_{00})$, $\alpha_1 = (F^{-1}(\rho_{10}) - F^{-1}(\rho_{00}))$, and $\alpha_2 = (F^{-1}(\rho_{01}) - F^{-1}(\rho_{00}))$. The marginal probability of efficacy $P(Z_E = 1 | x, y)$ is modeled using the cubic spline \begin{equation} \label{eq_peff} P(Z_E = 1 | x, y) = F(\beta_0 + \beta_1 x + \beta_2 x^2 + \sum_{i=3}^{5} \beta_i (x - \kappa_{i-2})_{+}^{3} + \beta_6 y + \beta_7 y^2 + \sum_{j=8}^{10} \beta_j (y - \kappa_{j-4})_{+}^{3} + \beta_{11}xy), \end{equation} where $\kappa_1 = \kappa_4 = 0$. To shorten the notation, let $P(Z_T = 1 | x,y) = \pi_T(x,y)$ and $P(Z_E = 1 | x, y) = \pi_E(x,y)$.

\subsection{Prior distributions} The prior distributions of the parameters in $\pi_T(x,y)$ and $\pi_E(x,y)$ are usually elicited after consultation with the clinicians based on previous single agent and/or drug combination studies. In this article, we do not have any elicited prior distribution and therefore we use the following vague distributions: $\rho_{01} \sim \mbox{beta}(1,1)$, $\rho_{10} \sim \mbox{beta}(1,1)$, and, conditional on $(\rho_{01}, \rho_{10})$, we assume that $\rho_{00} / \min (\rho_{01}, \rho_{10}) \sim \mbox{beta}(1,1)$. The interaction parameter $\eta$ represents the synergism of the combination, which means that it has to be positive. We assign $\eta$ a vague gamma prior, for example $\eta \sim \mbox{gamma}(0.1, 0.1)$.
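The two marginal models can be evaluated directly from the interpretable parameters. The following Python sketch mirrors the reparameterized logistic model \eqref{eq_pdlt} and the cubic-spline model \eqref{eq_peff}; the parameter values in the final check are illustrative, not elicited:

```python
import math

def F(u):
    """Logistic CDF, as in the text."""
    return 1.0 / (1.0 + math.exp(-u))

def Finv(p):
    """Inverse logistic CDF (logit)."""
    return math.log(p / (1.0 - p))

def pi_T(x, y, rho00, rho10, rho01, eta):
    """Marginal probability of DLT under the reparameterized logistic model."""
    a0 = Finv(rho00)
    a1 = Finv(rho10) - Finv(rho00)
    a2 = Finv(rho01) - Finv(rho00)
    return F(a0 + a1 * x + a2 * y + eta * x * y)

def pi_E(x, y, beta, knots_x, knots_y):
    """Marginal probability of efficacy under the cubic-spline model;
    knots_x = (k1, k2, k3) with k1 = 0, knots_y = (k4, k5, k6) with k4 = 0."""
    plus3 = lambda t: max(t, 0.0) ** 3
    u = (beta[0] + beta[1] * x + beta[2] * x ** 2
         + sum(beta[3 + i] * plus3(x - knots_x[i]) for i in range(3))
         + beta[6] * y + beta[7] * y ** 2
         + sum(beta[8 + j] * plus3(y - knots_y[j]) for j in range(3))
         + beta[11] * x * y)
    return F(u)

# Illustrative check: at (x, y) = (0, 0) the DLT probability is rho00.
assert abs(pi_T(0, 0, 0.05, 0.4, 0.3, 1.0) - 0.05) < 1e-9
```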
In $\pi_E(x,y)$, we assume vague normal distributions for all the $\beta$ parameters, so that $\beta_r \sim N(0,10^2)$, $r = 1, \dots, 11$. The justification for allowing all the $\beta$ parameters to be either positive or negative is that we expect the dose-efficacy surface to be non-linear, which implies that efficacy may decrease at higher dose combination levels. We assign $\kappa_2,\kappa_3,\kappa_5,\kappa_6$ uniform distributions with the logical restriction that $\kappa_2 < \kappa_3$ and $\kappa_5 < \kappa_6$. Thus, $(\kappa_2,\kappa_3) \sim \mbox{Uniform}((\kappa_2,\kappa_3): 0 \leq \kappa_2 < \kappa_3 \leq 1)$ and $(\kappa_5,\kappa_6) \sim \mbox{Uniform}((\kappa_5,\kappa_6): 0 \leq \kappa_5 < \kappa_6 \leq 1)$. Conventionally, vague priors have enormous variances, which work well with medium to large sample sizes but may lead to numerical instability with smaller sample sizes. Ideally, the prior distributions should be vague enough to cover all the plausible values of the parameters, but not so vague as to cause stability issues.

\subsection{Likelihood and posterior distributions} Let $N$ denote the maximum sample size of the trial and $D_m = \{ (Z_{T_i}, Z_{E_i}, x_i, y_i), i = 1, \dots, m \}$ be the data (i.e., toxicity outcomes, efficacy outcomes and dose combinations) collected after enrolling $m$ patients.
The posterior distribution of the dose-toxicity model parameters is $p({\boldsymbol \Psi}_T | D_m) \propto p({\boldsymbol \Psi}_T) \times \mathcal{L}(D_m | {\boldsymbol \Psi}_T)$, and the posterior distribution of the dose-efficacy model parameters is $p({\boldsymbol \Psi}_E | D_m) \propto p({\boldsymbol \Psi}_E) \times \mathcal{L}(D_m | {\boldsymbol \Psi}_E)$, where ${\boldsymbol \Psi}_T = (\rho_{00}, \rho_{01}, \rho_{10}, \eta)$, ${\boldsymbol \Psi}_E = (\beta_0, \dots, \beta_{11}, \kappa_1, \dots, \kappa_6)$, $\mathcal{L}(D_m | {\boldsymbol \Psi}_T) = \prod_{i=1}^{m} \pi_T(x_i,y_i)^{Z_{T_i}} \times (1 -\pi_T(x_i,y_i))^{1-Z_{T_i}}$ and $\mathcal{L}(D_m | {\boldsymbol \Psi}_E) = \prod_{i=1}^{m} \pi_E(x_i,y_i)^{Z_{E_i}} \times (1 - \pi_E(x_i,y_i))^{1-Z_{E_i}}$. Bayesian computation is done using \texttt{JAGS} (\cite{plummer2003jags}) and \texttt{R} (\cite{r2019}).

\subsection{Optimal dose combination} \label{sc_bodc} As mentioned in section \ref{sc_introduction}, with molecularly targeted therapies it is not guaranteed that the optimal dose combination is located at the MTD, which means that we need to find the most desirable trade-off between risk and benefit. Ideally, we would like a dose combination with low toxicity and high efficacy (i.e., low risk and high benefit). Utility functions $U(.)$ are convenient tools that allow us to formally assess the benefit-risk trade-off between undesirable and desirable clinical outcomes. They have received considerable attention in recent years, especially with the development of targeted therapies and immunotherapies (see e.g., \cite{houede2010utility, thall2013using, thall2014optimizing, guo2015bayesian, murray2017robust,liu2018bayesian,lyu2019aaa}), and their definition can be more or less complicated, depending on the setting and the number of outcomes involved in the trade-off. For example, \cite{liu2018bayesian} considers three outcomes (toxicity, efficacy and immune response).
Once the utility function is defined, the optimal dose (combination) corresponds to the dose (combination) that maximizes the utility function based on the current parameter estimates. In section \ref{sc_simulation_study}, we provide the definition of the utility function used for the simulation study, which, in generic terms, we refer to as $U(x,y)$. However, each trial will have a different utility function based on the clinicians' criteria regarding the desirability of the benefit-risk trade-off.

\subsection{Dose-optimization algorithm} We define the MTD as any dose combination satisfying \begin{equation} \label{eq_mtd} F(\alpha_0 + \alpha_1 x + \alpha_2 y + \eta x y) = \theta_T, \end{equation} where $\theta_T$ represents the highest probability of toxicity we are willing to accept. With discrete dose levels, it is possible that none of the experimental dose levels has a probability of toxicity exactly equal to $\theta_T$, in which case the MTD would be an empty set. In this article, stage I follows the escalation with overdose control (EWOC) principle (\cite{babb1998cancer, tighiouart2005flexible, tighiouart2010dose, tighiouart2017bayesian, tighiouart2012number, shi2013escalation}), in which the posterior probability of overdosing the next cohort of patients is bounded by a feasibility bound $\alpha$. In a cohort of two patients, the first one receives a new dose of compound $X$ given the dose $y$ of compound $Y$ that was previously assigned. The other patient receives a new dose of compound $Y$ given the dose $x$ of compound $X$ that was previously assigned. The feasibility bound $\alpha$ increases from 0.25 up to 0.5 in increments of 0.05 (\cite{wheeler2017toxicity}).
In other words and based on \eqref{eq_mtd}, if $x$ is fixed, the posterior distribution of the MTD takes the form $[\tilde{y}_{\mbox{\tiny MTD}}~|~\tilde{{\boldsymbol \Psi}}_T,x] = (F^{-1}(\theta_T) - \tilde{\alpha}_0 - \tilde{\alpha}_1 \times x) / (\tilde{\alpha}_2 + \tilde{\eta} \times x)$, whereas if $y$ is fixed, it takes the form $[\tilde{x}_{\mbox{\tiny MTD}}~|~\tilde{{\boldsymbol \Psi}}_T,y] = (F^{-1}(\theta_T) - \tilde{\alpha}_0 - \tilde{\alpha}_2 \times y) / (\tilde{\alpha}_1 + \tilde{\eta} \times y)$, with $\tilde{{\boldsymbol \Psi}}_T = \{\tilde{\alpha}_0 = F^{-1}(\tilde{\rho}_{00}), \tilde{\alpha}_1 = (F^{-1}(\tilde{\rho}_{10}) - F^{-1}(\tilde{\rho}_{00})), \tilde{\alpha}_2 = (F^{-1}(\tilde{\rho}_{01}) - F^{-1}(\tilde{\rho}_{00})), \tilde{\eta}\}$ representing the current posterior distribution of the model parameters in \eqref{eq_pdlt}. Stage I will enroll a total of $N_1 = C_1 \times m_1$ patients, where $C_1$ represents the total number of cohorts in stage I, each with the same number of patients $m_1$. The stage I algorithm proceeds as follows: \begin{enumerate} \item The first cohort ($c_1 = 1$) of $m_1$ patients starts at the dose combination $(x = 0, y = 0)$. \item For cohorts $c_1 > 1$, if $c_1$ is an even number, \begin{itemize} \item[i)] Patient $2c_{1} - 1$ receives the dose combination $(x_{2c_{1}-1}, y_{2c_{1}-1} = y_{2c_{1}-3})$, where $x_{2c_{1}-1}$ represents the discrete dose level of compound $X$ that is closest, in terms of Euclidean distance, to the $\alpha$-th percentile of $[\tilde{x}_{\mbox{\tiny MTD}}~|~\tilde{{\boldsymbol \Psi}}_T,y = y_{2c_{1}-3}]$. \item[ii)] Patient $2c_{1}$ receives the dose combination $(x_{2c_{1}} = x_{2c_{1}-2}, y_{2c_{1}})$, where $y_{2c_{1}}$ represents the discrete dose level of compound $Y$ that is closest, in terms of Euclidean distance, to the $\alpha$-th percentile of $[\tilde{y}_{\mbox{\tiny MTD}}~|~\tilde{{\boldsymbol \Psi}}_T,x = x_{2c_{1}-2}]$.
\end{itemize} In contrast, if $c_1$ is an odd number, \begin{itemize} \item[i)] Patient $2c_{1} - 1$ receives the dose combination $(x_{2c_{1} - 1} = x_{2c_{1} - 3}, y_{2c_{1} - 1})$, where $y_{2c_{1} - 1}$ represents the discrete dose level of compound $Y$ that is closest to the $\alpha$-th percentile of $[\tilde{y}_{\mbox{\tiny MTD}}~|~\tilde{{\boldsymbol \Psi}}_T,x = x_{2c_{1}-3}]$. \item[ii)] Patient $2c_{1}$ receives the dose combination $(x_{2c_{1}}, y_{2c_{1}} = y_{2c_{1} - 2})$, where $x_{2c_{1}}$ represents the discrete dose level of compound $X$ that is closest to the $\alpha$-th percentile of $[\tilde{x}_{\mbox{\tiny MTD}}~|~\tilde{{\boldsymbol \Psi}}_T,y = y_{2c_{1}-2}]$. \end{itemize} \item Keep enrolling cohorts of size $m_1$ until $c_1 = C_1$. \end{enumerate} Stage II will enroll a total of $N_2 = n_2 + C_2 \times m_2$ patients, where $m_2$ represents the cohort size and $n_2$ represents an initial fixed cohort of patients in which the probability of allocating a patient to a dose combination is the same across the space of safe dose combinations. Let $N = N_1 + N_2$ be the total number of patients that the entire study will enroll. The algorithm in stage II proceeds as follows: \begin{enumerate} \item An initial cohort ($c_2 = 1$) of size $n_2$ is distributed among the dose combinations with $\widehat{\pi}_T(x,y) \leq \theta_T$ given $D_{N_1}$, so that as many dose levels as possible have at least one patient allocated to them. \item For cohorts $c_2 > 1$ of size $m_2$, we sequentially allocate patients using adaptive randomization, in which the probability of being allocated to a dose combination is proportional to the current utility estimate (i.e., $\pi_{\mbox{\tiny AR}}(x,y) = \widehat{U}(x,y) / \sum \widehat{U}(x,y)$, where $\pi_{\mbox{\tiny AR}}(x,y)$ represents the probability of being allocated to dose combination $(x,y)$). \item Keep enrolling cohorts of size $m_2$ until $c_2 = C_2$.
\end{enumerate} So far, the design has only used the initial set of discrete dose combinations. One important question is whether we should restrict the optimal dose combination recommendation to the initial grid of discrete dose combinations, or whether we should allow the design to recommend any (continuous) dose combination within the standardized space of dose combinations $[0,1] \times [0,1]$. We believe that the design is flexible enough to identify the region (or regions) in which the true utility is highest, even in settings with complex dose-efficacy surfaces. Thus, having the option to recommend a continuous dose combination improves the chances of identifying the true optimal dose combination in case it is distant from all of the existing discrete dose combination levels. Therefore, at the end of stage II, the optimal dose combination is calculated as \begin{equation} (x^{\mbox{\tiny OPT}},y^{\mbox{\tiny OPT}}) = \underset{(x,y) \in [0,1]\times [0,1]}{\mbox{arg max}} \left \{ \widehat{U}(x,y) \right \}. \end{equation} The proposed design contains two stopping rules for safety, one for stage I and a different one for stage II. During stage I, we would stop the trial if \begin{equation} P(\pi_T(x = 0,y = 0) > \theta_T + 0.1| D_m) > \delta_{\theta_1}, \end{equation} where $\delta_{\theta_1} = 0.5$. In contrast, during stage II we would stop the trial if \begin{equation} P(\Theta > \theta_T + 0.1|D_m) > \delta_{\theta_2}, \end{equation} where $\Theta$ represents the rate of DLTs across both stages of the design regardless of dose, and $\delta_{\theta_2} = 0.7$ represents the confidence level (i.e., 70\%) that a prospective trial results in an excessive DLT rate. A non-informative Jeffreys prior $\mbox{Beta}(0.5,0.5)$ is placed on the parameter $\Theta$. Stopping rules for futility and efficacy are not considered in this trial given the potential complexity of the dose-efficacy surface and the relatively small sample size.
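The EWOC step of stage I reduces, for each posterior draw of the toxicity parameters, to solving the MTD equation for one agent given the other and then taking the $\alpha$-th percentile across draws. A minimal Python sketch follows, assuming posterior samples are available (e.g., exported from JAGS); all function names and numeric values are illustrative:

```python
import math

def Finv(p):
    """Inverse logistic CDF (logit)."""
    return math.log(p / (1.0 - p))

def x_mtd_given_y(y, theta_T, rho00, rho10, rho01, eta):
    """One posterior draw of the MTD in x for fixed y, from the MTD equation."""
    a0 = Finv(rho00)
    a1 = Finv(rho10) - Finv(rho00)
    a2 = Finv(rho01) - Finv(rho00)
    return (Finv(theta_T) - a0 - a2 * y) / (a1 + eta * y)

def ewoc_dose(samples, y, theta_T, alpha, dose_grid):
    """EWOC step: alpha-th percentile of x_MTD | y over posterior samples,
    snapped to the closest discrete dose level.
    `samples` is a list of (rho00, rho10, rho01, eta) posterior draws."""
    draws = sorted(x_mtd_given_y(y, theta_T, *s) for s in samples)
    q = draws[min(len(draws) - 1, int(alpha * len(draws)))]
    return min(dose_grid, key=lambda d: abs(d - q))
```

The symmetric step for $\tilde{y}_{\mbox{\tiny MTD}}$ given $x$ swaps the roles of the two agents.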
However, if deemed necessary, they could be incorporated into the design.

\section{Simulation Studies} \label{sc_simulation_study}

In this section, we describe the performance of our approach in identifying the optimal dose combination, compare the safety of the trial and the average utility of the recommended dose combinations with those of the AAA design, and study robustness to varying sample size.

\subsection{Model and design performance} \label{sc_model_performance} We assess the performance of our design using simulation studies. Considering the setting of the motivating trial, we assume that both compounds $X$ and $Y$ have four standardized dose levels within the interval $[0,1]$. In stage I we select $m_1 = 2$ and $C_1 = 15$, yielding a total of $N_1 = 30$ patients. In stage II we select $n_2 = 12$, $m_2 = 6$ and $C_2 = 9$, yielding a total of $N_2 = 66$ patients. In order to compare the performance of our approach with the triple adaptive (AAA) Bayesian design of \cite{lyu2019aaa}, we let the toxicity upper bound $\theta_T = 0.3$, the efficacy lower bound $\theta_E = 0.2$, and the lowest acceptable utility, defined below, $U_0 = 0.1$. The utility function used in \cite{lyu2019aaa} is defined as \begin{equation} \label{eq_utility} U(x,y) = \mathbf{1} \left ( \pi_T(x,y) \leq \theta_T \right ) \times \left ( 1 - \frac{(1 - \eta_0) \times \pi_T(x,y)}{\theta_T} \right ) \times [\eta_1 \exp(\eta_2 \times \pi_{E}(x,y)) + \eta_3], \end{equation} where `$\mathbf{1}(.)$' denotes the indicator function and $\eta_0 = 0.368$, $\eta_1 = 0.385$, $\eta_2 = 1.28$ and $\eta_3 = -0.385$ are parameters, elicited by the physicians, that establish the benefit-risk trade-offs (see \cite{lyu2019aaa}, sections 2.2 and 4.1).
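This utility, together with the stage II adaptive randomization probabilities that are proportional to it, can be sketched as follows (the trade-off parameters are those stated above; the toxicity/efficacy probabilities fed in are illustrative):

```python
import math

# Trade-off parameters elicited in the motivating trial, and theta_T = 0.3.
ETA0, ETA1, ETA2, ETA3 = 0.368, 0.385, 1.28, -0.385
THETA_T = 0.3

def utility(pi_t, pi_e):
    """Utility of a dose combination with DLT probability pi_t and
    efficacy probability pi_e, per the equation above."""
    if pi_t > THETA_T:
        return 0.0  # indicator term: unsafe doses receive zero utility
    return (1 - (1 - ETA0) * pi_t / THETA_T) * (ETA1 * math.exp(ETA2 * pi_e) + ETA3)

# Stage II adaptive randomization over an illustrative dose grid:
# allocation probability proportional to the current utility estimate.
utils = [utility(0.05, 0.10), utility(0.15, 0.30), utility(0.45, 0.50)]
alloc = [u / sum(utils) for u in utils]   # pi_AR(x, y)
```

Note that the calibration makes the utility exactly zero when the efficacy probability is zero, and that an unsafe combination (here the third one, with DLT probability 0.45) receives allocation probability zero.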
This utility function has the property that, for a fixed probability of efficacy $\pi_E(x,y)$ at dose combination $(x,y)$, it decreases linearly as a function of $\pi_T(x,y)$ provided that $\pi_T(x,y) \leq \theta_T$, and that, for a fixed risk of DLT $\pi_T(x,y)$, it increases exponentially as a function of $\pi_E(x,y)$, provided that $\pi_T(x,y) \leq \theta_T$. The design is evaluated through the following operating characteristics: i) average DLT rate and percentage of trials with a DLT rate above $\theta_T$ and above $\theta_T + 0.1$, ii) probability of early stopping for safety, iii) average (true) utility of the recommended optimal dose combinations, iv) average (true) utility of the patients allocated with adaptive randomization, and v) the distribution of the recommended optimal dose combinations. We specified 6 scenarios using the marginal models defined in equations \eqref{eq_pdlt} and \eqref{eq_peff} that vary with respect to the location of the true optimal dose combination (also referred to as the target dose combination) as well as in the complexity of the surface of the utility function (i.e., uni-modal and bi-modal). Figure \ref{plot_simulation_scenarios} shows the (true) utility surfaces for these scenarios. The parameter values used to produce these scenarios are available in Table S1 of the supplementary material. Under each scenario, we simulated 2000 trials. To assess the performance of our design relative to the state-of-the-art AAA methodology of \cite{lyu2019aaa}, we compare the average, the median, and the 2.5th and 97.5th percentiles of the distribution of the (true) utilities of the recommended optimal dose combinations of the two designs. In the remainder of the manuscript, we refer to the 2.5th and 97.5th percentiles of the (empirical) distribution of the (true) utilities of the recommended optimal dose combinations as a 95\% confidence interval, which should be interpreted as a measure of precision or reliability.
Also, the (true) utilities of the recommended optimal dose combination are referred to as recommended (true) utilities. \begin{figure} \caption{Contour plots showing the utility surface from scenarios 1-6. We also display the grid of discrete dose combination levels (i.e., black dots) with their corresponding (true) utility values whenever the utility values are above zero.} \centering \vspace{0.25cm} \includegraphics[scale=0.7]{plot_simulation_scenarios.pdf} \label{plot_simulation_scenarios} \end{figure} Scenarios 1, 5 and 6 are designed to be uni-modal whereas scenarios 2, 3 and 4 are designed to be bi-modal and therefore more complex. The goal is to have scenarios with low, medium and high utility values for the target dose combination. Scenario 3 is expected to be particularly challenging. In this scenario, we placed very low utility values along the entire path that stage I is expected to follow, given the very low toxicity probability that we established throughout the entire dose-toxicity surface. At the end of stage I, the number of positive efficacy responses is expected to be very low and stage II will start with very little information regarding the potential location of the target dose combination. In these scenarios, the average DLT rate ranged between 7\% and 17\%, the proportion of trials with DLT rates above $\theta_T$ and $\theta_T + 0.1$ was equal to zero, and the proportion of trials stopped early for safety was also equal to zero. These results are displayed in Table \ref{table_safety_operating_characteristics_spline}. In terms of the distribution of the recommended (true) utilities, Table \ref{table_utility_operating_characteristics_spline} shows that the average (true) utilities were close to the (true) utilities of the target dose combinations in all scenarios, showing that our design is able to capture complex uni-modal and bi-modal dose-utility surfaces.
These average values were always above those obtained with the AAA design, with differences ranging from 0.01 to 0.06. We observe that the median (true) utilities were also very close to the (true) utility of the target dose combination in all scenarios, with practically no differences between the proposed design and the AAA design. One of the most notable differences between the proposed design and the AAA design was found in the 95\% confidence intervals of the distribution of recommended (true) utilities. The proposed design yielded, in general, narrower distributions, with notable differences with respect to the AAA design in scenarios 2, 4 and 6. We also see that the average (true) utility of the patients allocated with adaptive randomization was relatively close to the (true) utility of the target dose combination, taking into consideration that the adaptive randomization phase is still a learning part of the design. In Figure S1 of the supplementary material we display the (empirical) distribution of the recommended optimal dose combinations. We see how the design tends to recommend optimal dose combinations in the region(s) in which the (true) utility is higher. In scenario 3, we observe a larger dispersion of the recommended optimal dose combinations with respect to the other scenarios, even though the average (true) utility is fairly close to the (true) utility of the target dose combinations. This dispersion is, however, expected given that this scenario was designed to have a low number of positive efficacy outcomes. \begin{table} \caption{Safety operating characteristics from scenarios 1-6.
The table displays the average DLT rate, the percentage of trials with DLT rates above $\theta_T$ and $\theta_T+0.1$, the percentage of trials stopped following the stage I and stage II safety stopping rules from each scenario.} \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{|c|cccccc|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{}} & \multicolumn{6}{c|}{Scenario} \\ \cline{2-7} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{1} & \multicolumn{1}{c|}{2} & \multicolumn{1}{c|}{3} & \multicolumn{1}{c|}{4} & \multicolumn{1}{c|}{5} & 6 \\ \hline \makecell{Average DLT rate (\%)} & \multicolumn{1}{c|}{12} & \multicolumn{1}{c|}{17} & \multicolumn{1}{c|}{7} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{11} & 10 \\ \hline \makecell{Percentage of trials with DLT rate above $\theta_T$ (\%)} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & 0 \\ \hline \makecell{Percentage of trials with DLT rate above $\theta_T + 0.1$ (\%)} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & 0 \\ \hline \makecell{Percentage of trials stopped early for safety (\%)} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & 0 \\ \hline \end{tabular} }% \label{table_safety_operating_characteristics_spline} \end{table} \begin{table} \caption{Summary of the distribution of the (true) utility of the recommended optimal dose combinations from scenarios 1-6. 
The table displays the (true) utility of the target dose combination (TDC) as a reference, the average, median and 95\% confidence interval (i.e., percentiles 2.5\% and 97.5\%) of the distribution of (true) utility of the optimal dose combinations (ODC) recommended by the proposed design, the average, median and 95\% confidence interval (i.e., percentiles 2.5 and 97.5) of the distribution of (true) utility of the optimal dose combinations (ODC) recommended by the AAA design, and the average (true) utility of the patients allocated during the adaptive randomization (AR) phase from each scenario.} \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{|cc|cccccc|} \hline \multicolumn{2}{|c|}{\multirow{2}{*}{}} & \multicolumn{6}{c|}{Scenario} \\ \cline{3-8} \multicolumn{2}{|c|}{} & \multicolumn{1}{c|}{1} & \multicolumn{1}{c|}{2} & \multicolumn{1}{c|}{3} & \multicolumn{1}{c|}{4} & \multicolumn{1}{c|}{5} & 6 \\ \hline \multicolumn{2}{|c|}{\makecell{Utility of TDC \\ (for reference)}} & \multicolumn{1}{c|}{\textbf{0.68}} & \multicolumn{1}{c|}{\textbf{0.94}} & \multicolumn{1}{c|}{\textbf{0.72}} & \multicolumn{1}{c|}{\textbf{0.98}} & \multicolumn{1}{c|}{\textbf{0.36}} & \textbf{0.39} \\ \hline \multicolumn{1}{|c|}{\multirow{3}{*}{\makecell{Proposed \\ design}}} & Average & \multicolumn{1}{c|}{0.65} & \multicolumn{1}{c|}{0.92} & \multicolumn{1}{c|}{0.65} & \multicolumn{1}{c|}{0.93} & \multicolumn{1}{c|}{0.30} & 0.33 \\ \cline{2-8} \multicolumn{1}{|c|}{} & Median & \multicolumn{1}{c|}{0.66} & \multicolumn{1}{c|}{0.93} & \multicolumn{1}{c|}{0.72} & \multicolumn{1}{c|}{0.94} & \multicolumn{1}{c|}{0.32} & 0.34 \\ \cline{2-8} \multicolumn{1}{|c|}{} & 95\% CI & \multicolumn{1}{c|}{0.58-0.68} & \multicolumn{1}{c|}{0.81-0.94} & \multicolumn{1}{c|}{0.06-0.72} & \multicolumn{1}{c|}{0.65-0.98} & \multicolumn{1}{c|}{0.19-0.35} & 0.21-0.38 \\ \hline \multicolumn{1}{|c|}{\multirow{3}{*}{\makecell{AAA \\ design}}} & Average & \multicolumn{1}{c|}{0.62} & \multicolumn{1}{c|}{0.86} & 
\multicolumn{1}{c|}{0.60} & \multicolumn{1}{c|}{0.89} & \multicolumn{1}{c|}{0.28} & 0.32 \\ \cline{2-8} \multicolumn{1}{|c|}{} & Median & \multicolumn{1}{c|}{0.66} & \multicolumn{1}{c|}{0.93} & \multicolumn{1}{c|}{0.72} & \multicolumn{1}{c|}{0.94} & \multicolumn{1}{c|}{0.30} & 0.33 \\ \cline{2-8} \multicolumn{1}{|c|}{} & 95\% CI & \multicolumn{1}{c|}{0.55-0.66} & \multicolumn{1}{c|}{0.40-0.94} & \multicolumn{1}{c|}{0.06-0.72} & \multicolumn{1}{c|}{0.57-0.98} & \multicolumn{1}{c|}{0.16-0.33} & 0.16-0.35 \\ \hline \multicolumn{2}{|c|}{\makecell{Average utility of patients \\ allocated during AR phase}} & \multicolumn{1}{c|}{0.51} & \multicolumn{1}{c|}{0.52} & \multicolumn{1}{c|}{0.33} & \multicolumn{1}{c|}{0.78} & \multicolumn{1}{c|}{0.24} & 0.26 \\ \hline \end{tabular} }% \label{table_utility_operating_characteristics_spline} \end{table} \subsection{Robustness to deviations between the true underlying marginal models and the working marginal models} \label{sc_robustness_model_deviations} In this section we evaluated the performance of the proposed design by changing the true underlying marginal probability models. In other words, the binary toxicity and efficacy data were no longer generated using equations \eqref{eq_pdlt} and \eqref{eq_peff}. For this evaluation, we used the same marginal probability of toxicity and efficacy models from \cite{lyu2019aaa}. We note that these marginal probabilities do not include an interaction term modeling synergism between the two drugs, following the recommendations of \cite{wang2005two} and the simulation results of \cite{Mozgunov2021} for discrete dose combinations. However, omitting an interaction term will compromise the safety of the trial and reduce the precision of the estimated MTD contour for continuous dose levels, as shown in \cite{tighiouart2022}. We implemented ten out of the eleven scenarios evaluated in their publication.
We decided not to include scenario 4 (see Figure 5 in \cite{lyu2019aaa}) because the utility function was not defined for any of the dose combinations contained in the initial space of dose combinations and, in this article, we restrict the dose-finding search to the initial space of standardized dose combinations $[0,1] \times [0,1]$. In a second evaluation, we implemented the proposed design in the six scenarios displayed in Figure \ref{plot_simulation_scenarios} with lower sample sizes in stages I and II. Regarding the first assessment (i.e., deviation between the true underlying marginal models and the working marginal models), in Figure \ref{plot_simulation_scenarios_lyu} we show the (true) utility surfaces for the simulated scenarios. The parameter values used to produce these scenarios are available in Table S2 of the supplementary material. \begin{figure} \caption{Contour plots showing the (true) utility surface from scenarios 1-3 and 5-11 from \cite{lyu2019aaa}. We also display the grid of discrete dose combination levels (i.e., black dots) with their corresponding (true) utility values whenever the utility values are above zero.} \centering \vspace{0.25cm} \includegraphics[scale=0.7]{plot_simulation_scenarios_lyu.pdf} \label{plot_simulation_scenarios_lyu} \end{figure} In terms of safety, the proposed design resulted in an average DLT rate between 13\% and 24\%; see Table \ref{table_safety_operating_characteristics_lyu}. The percentage of trials with DLT rates above $\theta_T$ ranged between 0.0\% and 3.66\%, and the percentage of trials with DLT rates above $\theta_T + 0.1$ was equal to zero. Lastly, the percentage of trials stopped early for safety ranged between 0\% and 2.1\%. The safety results published by \cite{lyu2019aaa} show worse performance than our proposed design. The most notable difference is visible in scenario 3, where the MTD is located in the middle of the space of dose combinations.
In this scenario, the AAA design reported a proportion of trials with DLT rate above $\theta_T$ of 16.4\%, whereas in our design this proportion is equal to 3.66\%. We would like to highlight that, from our point of view, a proportion of trials with DLT rate above $\theta_T$ of 16.4\% is too high and more restrictive safety criteria should be applied to lower it. In terms of the distribution of recommended (true) utilities, the proposed design has an average (true) utility that is, in general, close to the (true) utility of the target dose combinations, with values that are either very similar to or higher than those of the AAA design under scenarios 1, 8, 10, and 11. The only scenario in which the proposed design seems to slightly underperform the AAA design is scenario 5. A similar behavior is observed with the medians of the recommended (true) utilities. The 95\% confidence intervals of the distributions of the (true) utilities are, in general, narrower with the proposed design, with some notable differences under scenarios 1, 2, 8, 10, and 11. These results are consistent with those observed in the main simulation study of this manuscript (see Table \ref{table_utility_operating_characteristics_spline}). We also note that, in the proposed design, the average (true) utility of the patients allocated with adaptive randomization is relatively close to the (true) utility of the target dose combination, taking into consideration that the adaptive randomization phase is still a learning part of the design. These results are displayed in Table \ref{table_utility_operating_characteristics_lyu}. In Figure S2 of the supplementary material we display the distribution of the recommended optimal dose combination in each scenario. Again, this figure shows how the proposed design correctly identifies the region in which the target dose combination is located.
We also notice that there is higher dispersion in the optimal dose combination recommendations in scenarios in which the (true) utility of the target dose combination is, overall, not very high. \begin{table} \caption{Safety operating characteristics from scenarios 1-3 and 5-11 from \cite{lyu2019aaa}. The table displays the average DLT rate, the percentage of trials with DLT rates above $\theta_T$ for the proposed and the AAA designs, the percentage of trials with DLT rates above $\theta_T + 0.1$, and the percentage of trials stopped following the stage I and stage II safety stopping rules in each scenario.} \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{|c|cccccccccc|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{}} & \multicolumn{10}{c|}{Scenario} \\ \cline{2-11} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{1} & \multicolumn{1}{c|}{2} & \multicolumn{1}{c|}{3} & \multicolumn{1}{c|}{5} & \multicolumn{1}{c|}{6} & \multicolumn{1}{c|}{7} & \multicolumn{1}{c|}{8} & \multicolumn{1}{c|}{9} & \multicolumn{1}{c|}{10} & 11 \\ \hline \makecell{Average DLT rate (\%) \\ (proposed)} & \multicolumn{1}{c|}{17} & \multicolumn{1}{c|}{16} & \multicolumn{1}{c|}{24} & \multicolumn{1}{c|}{18} & \multicolumn{1}{c|}{19} & \multicolumn{1}{c|}{17} & \multicolumn{1}{c|}{13} & \multicolumn{1}{c|}{13} & \multicolumn{1}{c|}{17} & 18 \\ \hline \makecell{Percentage of trials with \\ DLT rate above $\theta_T$ (\%) (proposed)} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{\bf{3.66}} & \multicolumn{1}{c|}{0.05} & \multicolumn{1}{c|}{0.10} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & 0 \\ \hline \makecell{Percentage of trials with \\ DLT rate above $\theta_T$ (\%) (AAA)} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0.1} & \multicolumn{1}{c|}{\bf{16.4}} & \multicolumn{1}{c|}{0.3} & \multicolumn{1}{c|}{1.3} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} &
\multicolumn{1}{c|}{0} & 0.1 \\ \hline \makecell{Percentage of trials with \\ DLT rate above $\theta_T + 0.1$ (\%) (proposed)} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & 0 \\ \hline \makecell{Percentage of trials \\ stopped early for safety (\%) (proposed)} & \multicolumn{1}{c|}{0.25} & \multicolumn{1}{c|}{0.15} & \multicolumn{1}{c|}{0.40} & \multicolumn{1}{c|}{2.10} & \multicolumn{1}{c|}{0.40} & \multicolumn{1}{c|}{0.35} & \multicolumn{1}{c|}{0.10} & \multicolumn{1}{c|}{0.05} & \multicolumn{1}{c|}{0.40} & 0.85 \\ \hline \end{tabular} } \label{table_safety_operating_characteristics_lyu} \end{table} \begin{table} \caption{Summary of the distribution of the (true) utility of the recommended optimal dose combinations from scenarios 1-3 and 5-11 from \cite{lyu2019aaa}. The table displays the (true) utility of the target dose combination (TDC) as a reference, the average, median and 95\% confidence interval (i.e., percentiles 2.5\% and 97.5\%) of the distribution of (true) utility of the optimal dose combinations (ODC) recommended by the proposed design, the average, median and 95\% confidence interval (i.e., percentiles 2.5 and 97.5) of the distribution of (true) utility of the optimal dose combinations (ODC) recommended by the AAA design, and the average (true) utility of the patients allocated during the adaptive randomization (AR) phase from each scenario.} \resizebox{\columnwidth}{!}{% \begin{tabular}{|cc|cccccccccc|} \hline \multicolumn{2}{|c|}{\multirow{2}{*}{}} & \multicolumn{10}{c|}{Scenario} \\ \cline{3-12} \multicolumn{2}{|c|}{} & \multicolumn{1}{c|}{1} & \multicolumn{1}{c|}{2} & \multicolumn{1}{c|}{3} & \multicolumn{1}{c|}{5} & \multicolumn{1}{c|}{6} & \multicolumn{1}{c|}{7} & \multicolumn{1}{c|}{8} & \multicolumn{1}{c|}{9} & \multicolumn{1}{c|}{10} & 11 \\ \hline 
\multicolumn{2}{|c|}{\makecell{Utility of TDC \\ (for reference)}} & \multicolumn{1}{c|}{\textbf{0.45}} & \multicolumn{1}{c|}{\textbf{0.55}} & \multicolumn{1}{c|}{\textbf{0.28}} & \multicolumn{1}{c|}{\textbf{0.42}} & \multicolumn{1}{c|}{\textbf{0.47}} & \multicolumn{1}{c|}{\textbf{0.07}} & \multicolumn{1}{c|}{\textbf{0.52}} & \multicolumn{1}{c|}{\textbf{0.46}} & \multicolumn{1}{c|}{\textbf{0.55}} & \textbf{0.51} \\ \hline \multicolumn{1}{|c|}{\multirow{3}{*}{\makecell{Proposed \\ design}}} & Average & \multicolumn{1}{c|}{0.42} & \multicolumn{1}{c|}{0.53} & \multicolumn{1}{c|}{0.20} & \multicolumn{1}{c|}{0.26} & \multicolumn{1}{c|}{0.38} & \multicolumn{1}{c|}{0.05} & \multicolumn{1}{c|}{0.46} & \multicolumn{1}{c|}{0.39} & \multicolumn{1}{c|}{0.52} & 0.48 \\ \cline{2-12} \multicolumn{1}{|c|}{} & Median & \multicolumn{1}{c|}{0.43} & \multicolumn{1}{c|}{0.54} & \multicolumn{1}{c|}{0.26} & \multicolumn{1}{c|}{0.29} & \multicolumn{1}{c|}{0.45} & \multicolumn{1}{c|}{0.06} & \multicolumn{1}{c|}{0.48} & \multicolumn{1}{c|}{0.42} & \multicolumn{1}{c|}{0.54} & 0.50 \\ \cline{2-12} \multicolumn{1}{|c|}{} & 95\% CI & \multicolumn{1}{c|}{0.35-0.45} & \multicolumn{1}{c|}{0.45-0.55} & \multicolumn{1}{c|}{0.00-0.28} & \multicolumn{1}{c|}{0.01-0.42} & \multicolumn{1}{c|}{0.00-0.47} & \multicolumn{1}{c|}{0.00-0.07} & \multicolumn{1}{c|}{0.21-0.52} & \multicolumn{1}{c|}{0.13-0.46} & \multicolumn{1}{c|}{0.41-0.55} & 0.40-0.51 \\ \hline \multicolumn{1}{|c|}{\multirow{3}{*}{\makecell{AAA \\ design}}} & Average & \multicolumn{1}{c|}{0.37} & \multicolumn{1}{c|}{0.53} & \multicolumn{1}{c|}{0.20} & \multicolumn{1}{c|}{0.31} & \multicolumn{1}{c|}{0.37} & \multicolumn{1}{c|}{0.04} & \multicolumn{1}{c|}{0.44} & \multicolumn{1}{c|}{0.39} & \multicolumn{1}{c|}{0.46} & 0.43 \\ \cline{2-12} \multicolumn{1}{|c|}{} & Median & \multicolumn{1}{c|}{0.34} & \multicolumn{1}{c|}{0.55} & \multicolumn{1}{c|}{0.26} & \multicolumn{1}{c|}{0.33} & \multicolumn{1}{c|}{0.46} & \multicolumn{1}{c|}{0.05} & 
\multicolumn{1}{c|}{0.45} & \multicolumn{1}{c|}{0.44} & \multicolumn{1}{c|}{0.49} & 0.44 \\ \cline{2-12} \multicolumn{1}{|c|}{} & 95\% CI & \multicolumn{1}{c|}{0.29-0.45} & \multicolumn{1}{c|}{0.29-0.55} & \multicolumn{1}{c|}{0.00-0.28} & \multicolumn{1}{c|}{0.01-0.42} & \multicolumn{1}{c|}{0.00-0.46} & \multicolumn{1}{c|}{0.00-0.07} & \multicolumn{1}{c|}{0.17-0.52} & \multicolumn{1}{c|}{0.14-0.46} & \multicolumn{1}{c|}{0.32-0.53} & 0.34-0.50 \\ \hline \multicolumn{2}{|c|}{\makecell{Average utility \\ of patients allocated \\ during AR phase}} & \multicolumn{1}{c|}{0.24} & \multicolumn{1}{c|}{0.31} & \multicolumn{1}{c|}{0.12} & \multicolumn{1}{c|}{0.16} & \multicolumn{1}{c|}{0.26} & \multicolumn{1}{c|}{0.03} & \multicolumn{1}{c|}{0.29} & \multicolumn{1}{c|}{0.24} & \multicolumn{1}{c|}{0.31} & 0.31 \\ \hline \end{tabular}% } \label{table_utility_operating_characteristics_lyu} \end{table} \subsection{Robustness to changes in the sample size} In this section, we re-assessed the proposed design in the six scenarios generated with the proposed working marginal probability models (i.e., Figure \ref{plot_simulation_scenarios}) with lower sample sizes in stages I and II. The results are available in Tables S3 and S4 of the supplementary material for sample sizes of 10, 20, and 30 in stage I; 30, 42, 54, and 66 in stage II; and cohort sizes of 6 and 12 in stage II. Overall, we observed that reducing the number of patients only in stage II yields worse performance than reducing patients only in stage I. We also observed that, for all sample sizes considered in stage I, all scenarios with 54 and 42 patients in stage II had a performance that was still as good as or better than that obtained with the AAA design; the only case in which the proposed design was slightly worse than the AAA design, in some scenarios, was when stage II used only 30 patients.
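As a quick consistency check of the sample-size arithmetic behind this comparison (a sketch; the stage totals are those stated in the design description):

```python
# Sanity check of the sample sizes discussed above. The stage totals
# N1 = 30, N2 = 66 and the reduced configuration of 20 + 42 patients
# are the values stated in the text.
full_total = 30 + 66        # stage I + stage II in the main simulation study
reduced_total = 20 + 42     # reduced configuration that still matched the AAA design
reduction = (full_total - reduced_total) / full_total

assert full_total == 96 and reduced_total == 62
print(f"overall sample-size reduction: {reduction:.2%}")  # prints 35.42%
```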
In other words, we could enroll 62 patients (i.e., 20 in stage I and 42 in stage II), which would be a 35.42\% reduction of the overall sample size, and still obtain results as good as or better than those from the AAA design. \section{Discussion} \label{sc_discussion} We proposed a Bayesian phase I-II design for dual-agent dose optimization with molecularly targeted therapies. In the presence of these types of compounds, the monotonicity assumption of the dose-efficacy response may not be appropriate given that efficacy may decrease or plateau at high dose levels, and thus the optimal dose will be a trade-off between toxicity and efficacy. For this reason, we employ a flexible cubic spline function for the marginal distribution of the efficacy response and a utility function to assess the risk-benefit trade-off between undesirable and desirable clinical outcomes. The proposed marginal dose-efficacy model has a relatively high number of parameters, which accommodates simple and complex (e.g., bi-modal) dose-efficacy surfaces. However, because in stage I dose-escalation is driven by the marginal dose-toxicity model, which has a small number of parameters, and we do not allow skipping untried dose combinations, there is no concern regarding the variability of the parameter estimation at the inception of the trial. Then, by the time the design starts using the marginal probability of efficacy to allocate subsequent cohorts of patients, there are already enough patients enrolled in the trial. Traditionally, in settings in which the assumption of monotonicity of the dose-efficacy curve holds (i.e., with cytotoxic agents), the purpose was not to have a precise estimation of the entire dose-efficacy curve (or surface) but to provide a precise estimation of the MTD in phase I trials, or the optimal dose in phase I-II trials.
However, without the assumption of monotonicity of the dose-efficacy curve (or surface), implementing dose optimization becomes harder since the target dose combination could be anywhere in the space of doses available in the clinical trial, including in regions with low probability of toxicity. In this situation, having a more precise estimation of the entire dose-efficacy curve (or surface) could be helpful since we can better locate the region(s) in which the target dose (or dose combination) is located. We were inspired by a published case study which combines a MEK inhibitor and a PIK3CA inhibitor, considering 4 discrete dose levels for each compound and a total of 96 late-stage cancer patients (\cite{lyu2019aaa}). In this setting, we first implemented our approach using 6 scenarios generated by the working marginal dose-toxicity and dose-efficacy models. These scenarios were intended to have different utility contour shapes, with one or two modes, and with low, medium and high (true) utility values for the target dose combinations. The proposed design achieved good operating characteristics, with low toxicity rates and with average recommended (true) utilities that were close to the (true) utilities of the target dose combinations. In other words, the design was, on average, able to correctly identify the region(s) of the utility surface in which the target dose combination was located. To assess model robustness to deviations from the true marginal models for toxicity and efficacy, we derived operating characteristics under the scenarios presented in \cite{lyu2019aaa}. The proposed design showed, again, low toxicity rates and average (true) utilities of the recommended optimal dose combinations that were close to the (true) utilities of the scenario-specific target dose combinations.
The performance of the proposed design was compared to that of the AAA design, a state-of-the-art design proposed by \cite{lyu2019aaa} specifically tailored for the combination of molecularly targeted therapies. For this purpose, we implemented the AAA design in the scenarios generated with the working marginal models that we proposed in this article, as well as in the scenarios generated with the marginal models from \cite{lyu2019aaa}. In terms of safety, we have observed that the proposed design is safer than the AAA design, as explained in section \ref{sc_robustness_model_deviations}. In terms of target dose combination(s) recommendation, we looked at the distribution of the (true) utilities of the recommended optimal dose combinations produced by both designs. These results showed that, in the presence of more complex (e.g., bi-modal) utility surfaces, the proposed design had better performance (i.e., higher average (true) utility of the recommended optimal dose combinations) than the AAA design. When the surfaces were less complex (e.g., uni-modal), the differences in performance between the proposed design and the AAA design became smaller. The distributions of the (true) utilities of the recommended optimal dose combinations of the proposed design, measured through the 2.5th and 97.5th percentiles, were narrower than those from the AAA design, with notable differences in some scenarios. We also evaluated the robustness of the proposed design with respect to smaller sample sizes in stages I and II. This assessment showed that the operating characteristics of the design with smaller sample sizes in both stages were still close to those obtained with the sample size used for the main simulation study. As a reference, we observed that an overall sample size reduction of 35\% still led to operating characteristics that were as good as or better than those produced by the AAA design in the evaluated scenarios.
With these results, we conclude that the number of parameters employed by the marginal probability of efficacy model is not an issue in this type of clinical trial setting. In general, we believe that the proposed design has the following advantages over the AAA design: i) it has better safety operating characteristics, ii) it has better performance, especially under complex dose-utility surfaces, iii) it is more precise in terms of optimal dose combination recommendation, and iv) it is easier to implement since it does not incorporate the adaptive dose insertion feature of the AAA design, which could be challenging in practice if many dose insertions are needed during the trial. One potential weakness of the proposed design was observed in the scenarios in which the (true) utility surface had, overall, low values. In this setting, there was a slightly higher dispersion of the optimal dose combination recommendations with respect to the other tested scenarios, which translated into slightly lower average (true) utilities. However, a higher dispersion in this situation was not completely unexpected since low utility values are usually a result of low efficacy rates, which, given the relatively high number of parameters of the marginal efficacy model, can lead to lower estimation precision. Nevertheless, because we are using vague prior distributions, we can consider the results presented in this manuscript as the baseline operating characteristics of the proposed design. By using more informative prior distributions in the marginal probability models, and/or increasing the sample size, the design's operating characteristics are expected to improve. \subsection*{Data availability} No real data was used in the development of this article. The \texttt{R} and \texttt{JAGS} scripts necessary to fully reproduce the results presented in this article are available at \newline \url{https://github.com/jjimenezm1989/Phase-I-II-design-targeted-therapies}.
\subsection*{Funding} Mourad Tighiouart is funded by the NIH National Center for Advancing Translational Sciences (NCATS) UCLA CTSI (UL1 TR001881-01), NCI P01 CA233452-02, and U01 grant CA232859-01. \subsection*{Disclaimer} Jos\'e L. Jim\'enez is employed by Novartis Pharma A.G., which provided support in the form of salary for the author but did not have any additional role in the preparation of the manuscript. Also, the views expressed in this publication are those of the authors and should not be attributed to any of the funding institutions or organisations to which the authors are affiliated. \bibliographystyle{plainnat}
\section{introduction} In addition to zero resistance, the Meissner effect is another hallmark of superconductivity. The directly measured penetration depth($\lambda$) in a weak magnetic field provides information of the gap structure, and is a characteristic length scale of a bulk superconductor. In general, $\rho_s\propto1/\lambda^2$. The number of electrons in the superconducting phase, $\rho_s$, characterizes the phase rigidity of a superconductor. In conventional Bardeen-Cooper-Schrieffer (BCS) superconductors, the penetration depth exhibits an exponential behavior at low temperatures, and the power-law behavior in $\Delta\lambda(T)\equiv \lambda(T)-\lambda(0)$ has been considered as evidence for unconventional pairing symmetry in the high-temperature superconductors~\cite{4}. Compare to cuprates, the remarkable features of iron pnictides are the nature of magnetism and the multiband character. They have triggered massive studies since their discovery~\cite{q1,q3}. In this letter we focus on its response to a weak external magnetic field. There are several ways to measure magnetic penetration depth~\cite{xu,exp2,mudiff}. In the $1111$ systems, at low temperatures, some experiments~\cite{exp5} found a power-law behavior $\lambda(T)$, while others~\cite{exp6,exp7} have found an exponential temperature dependence of $\lambda(T)$. The situation in the 122 system is also unclear: The superfluid density $\rho_s(T)$ exhibits an exponential behavior in the cleanest $\mathrm{Ba_{1-x}K_xFe_2As_2}$~\cite{Fegap2}, while measurements on $\mathrm{Ba(Fe_{1-x}Co_x)_2As_2}$ have shown a power-law behavior of $\lambda(T)$~\cite{Fegap3,exp4,exp9,jiey,aabar,lamd04} with the exponent varying from $1.6$ to $2.8$, and a two-gap scenario is suggested for $\mathrm{Ba(Fe_{1-x}Co_x)_2As_2}$ and $\mathrm{Ba_{1-x}Rb_xFe_2As_2}$~\cite{ted3,exp1}. And there are also some theoretical works~\cite{bakfe,rafael,yunkyu,ro1}. 
In this letter, we carry out systematic calculations of $\rho_s(T)$ based on a two-orbital phenomenological model~\cite{zhang}. Within this model, each unit cell accommodates two inequivalent Fe ions and results based on this model on various properties of Fe-pnictide superconductors~\cite{zhang,zhou,huang,gao1,gao2,huang2,huang3,gao3,zhou3} are in reasonable agreement with experimental measurements. When we normalize the energy parameters of the Fe-Fe nearest and next-nearest neighbors, the hopping integrals defined below are chosen as $t_{1-4}=1, 0.4, -2.0, 0.04$~\cite{zhang}, respectively. In the momentum $k$ space, the single-particle Hamiltonian matrix can be written as~\cite{gao1,huang2} \begin{eqnarray}\label{ht} H_{t,k}&=&\left( \begin{array}{cccc} a_1-\mu & a_3 & a_4 & \,0 \\ a_3 & a_1-\mu & \,0 & a_4 \\ a_4 & \,0 & a_2-\mu & a_3 \\ \,0 & a_4 & a_3 & a_2-\mu \\ \end{array} \right), \end{eqnarray} with $a_1=-2t_2\cos{(k_x+k_y)}-2t_3\cos{(k_x-k_y)}$, $a_2=-2t_3\cos{(k_x-k_y)}-2t_2\cos{(k_x+k_y})$, $a_3=-2t_4(\cos{(k_x+k_y)}+\cos{(k_x-k_y)})$, $a_4=-2t_1(\cos{k_x}+\cos{k_y})$, where $\mu$ is the chemical potential. Here we have chosen the $x$ axis along the link connecting nearest neighbor (NN) Fe ions, and the distance between NN Fe is taken as the unit of length. The pairing term $H_{\Delta,k}=\sum_{\alpha\nu \bf{k}}(\Delta_{\alpha,\bf{k}}c^{\dag}_{\alpha\nu \bf{k}\uparrow}c^{\dag}_{\alpha\nu -\bf{k}\downarrow}+H.c.)$ has only next-nearest-neighbor (NNN) intra-orbital pairing, where $\alpha$ denotes Fe $A$ or Fe $B$ in the unit cell and $\nu$ denotes the orbitals. It will lead to the $s_{\pm}$-wave pairing symmetry~\cite{Fegap2,Fegap3,Fegap4}. 
The self-consistent conditions are $\Delta_{\alpha\bf{k}}=2\sum_{\tau}\cos{\bf{k}}_{\tau}\Delta^{\alpha}_{i,i+\tau}$ and $\Delta^{\alpha}_{i,i+\tau}=\frac{V}{2}\langle c^{\alpha}_{i\nu\uparrow}c^{\alpha}_{i+\tau,\nu \downarrow}-c^{\alpha}_{i\nu\downarrow}c^{\alpha}_{i+\tau,\nu \uparrow} \rangle =\frac{V}{N_s}\sum_ {\bf{k}}\cos{\bf{k}}_{\tau}\langle c_{\alpha\nu, \bf{k}\uparrow}c_{\alpha\nu, -\bf{k}\downarrow} \rangle$, with $\tau=\bf{x}\pm \bf{y}$ and the pairing strength $V=1.2$. The interaction term includes Hund's coupling $J_{H}=1.3$ and the on-site Coulomb interaction $U$; we choose $U=3.4$ and $U=4.0$ to model two different homogeneous systems. After the mean-field treatment~\cite{zhou,huang}, $H_{int}$ can be expressed as \begin{eqnarray}\label{2} H_{int}&=&U\sum_{i\mu\sigma\neq\bar{\sigma}}\langle n_{{ i}\mu\bar{\sigma}}\rangle n_{{i}\mu\sigma}+(U-3J_H)\sum_{ i\mu\neq\nu\sigma} \langle n_{i\mu\sigma}\rangle n_{i\nu\sigma}\nonumber\\ &&+(U-2J_H)\sum_{i\mu\neq\nu\sigma\neq\bar{\sigma}} \langle n_{ i\mu\bar\sigma}\rangle n_{i\nu \sigma}. \end{eqnarray} In the presence of spin-density-wave ($\mathrm{SDW}$) order, $H_{int}$ in $k$ space can be decoupled into a diagonal term and a magnetic term.
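The structure of such a self-consistency loop can be illustrated with a deliberately simplified toy: a one-band square-lattice BCS gap equation with a constant (s-wave) form factor in place of the NNN $s_{\pm}$ factor, iterated to a fixed point in the same way as the full mean-field equations. The band, coupling, and lattice size below are illustrative choices, not the parameters of the two-orbital model.

```python
import math

def gap(T, V=2.0, mu=0.0, L=32, tol=1e-9, max_iter=400):
    """Fixed-point iteration of the toy gap equation
        Delta = (V/N) * sum_k Delta * tanh(E_k / 2T) / (2 E_k),
    with xi_k = -2(cos kx + cos ky) - mu and E_k = sqrt(xi_k^2 + Delta^2)
    on an L x L momentum grid."""
    delta = 0.5  # initial guess
    for _ in range(max_iter):
        s = 0.0
        for i in range(L):
            for j in range(L):
                xi = -2.0 * (math.cos(2 * math.pi * i / L)
                             + math.cos(2 * math.pi * j / L)) - mu
                E = math.sqrt(xi * xi + delta * delta)
                s += math.tanh(E / (2.0 * T)) / (2.0 * E)
        new = V * delta * s / (L * L)
        done = abs(new - delta) < tol
        delta = new
        if done:
            break
    return delta
```

At low $T$ the iteration converges to a finite gap, while well above the transition the only fixed point is $\Delta=0$; the full calculation iterates $\Delta^{\alpha}_{i,i+\tau}$ and $M$ jointly in the same fashion.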
Defining $\psi^{\dag}_{\bf{k}\sigma}=(c^{\dag}_{A0,\bf{k}\uparrow},c^{\dag}_{A1,\bf{k}\uparrow},c^{\dag}_{B0,\bf{k}\uparrow},c^{\dag}_{B1,\bf{k}\uparrow})$ and $\varphi^{\dag}_{\bf{k}}=(\psi^{\dag}_{\bf{k}\uparrow},\psi^{\dag}_{\bf{k+Q}\uparrow},\psi_{-\bf{k}\downarrow},\psi_{-\bf{k+Q}\downarrow})$, the Hamiltonian without the external field in $k$ space can be written as $\varphi^{\dag}_{\bf{k}} H_0 \varphi_{\bf{k}}$~\cite{gao1,huang2}, with \begin{eqnarray}\label{3} H_0&=&\left(\begin{array}{cccc} H^{\prime}_{t,\bf{\bf{k}}} & R & IH_{\Delta,\bf{\bf{k}}} & 0\\ R & H^{\prime}_{t,\bf{k+Q}} & 0 & IH_{\Delta,\bf{k+Q}}\\ IH_{\Delta,\bf{k}} & 0 & -H^{\prime}_{t,\bf{k}} &R\\ 0 & IH_{\Delta,\bf{\bf{k+Q}}} & R &-H^{\prime}_{t,\bf{\bf{k+Q}}}\\ \end{array}\right), \end{eqnarray} where $I$ is a $4\times 4$ unit matrix, $R=-\frac{M}{2}(U+J_H)H_M$, and the corresponding $H^{\prime}_{t,\bf{k}}=H_{t,\bf{k}}+\frac{n}{4}(3U-5J_H)I$, with $n=2+x$. $R$ relates to the magnetic order~\cite{gao1,huang2} through \begin{eqnarray}\label{hm} H_{M}&=&\left(\begin{array}{cc} \mathcal{I}& 0 \\ 0& \mathcal{I}\exp{\mathrm{i}\bf{Q}\cdot \textbf{R}_{AB}} \\ \end{array}\right)\;, \end{eqnarray} where $\mathcal{I}$ is a $2\times 2$ unit matrix. Due to the $\mathrm{SDW}$ order, the wave vector $\bf{k}$ is restricted to the magnetic Brillouin zone (BZ). The self-consistent condition is $M=\frac{1}{2}\sum_{\nu}(n_{A\nu\uparrow}-n_{A\nu\downarrow})=\frac{1}{2N_s}\sum_{\nu,\sigma,\bf{k}} \sigma \langle c^{\dag}_{A\nu\sigma \bf{k}}c_{A\nu\sigma \bf{k+Q}}\rangle$, and $\textbf{R}_{AB}$ is the position vector of Fe $B$ relative to Fe $A$, which sits at the origin. $N_s$ is the number of unit cells; we take $N_s=512$ to obtain the self-consistent parameters and $N_s=768$ in the calculation of $\rho_s$.
After diagonalizing $\sum_{\bf{k}}\varphi^{\dag}_{\bf{k}} H_0 \varphi_{\bf{k}}=\sum_{\bf{k}m}E_{\bf{k},m}\gamma^{\dag \bf{k}}_{m}\gamma^{\bf{k}}_{m}$ by a $16\times 16$ canonical transformation matrix $\mathbb{T}$, we can obtain all properties of the system without the external field. Our investigation of the superfluid density $\rho_s$ follows the linear response approach described in Refs.~\cite{1,2,3,4}. In the presence of a slowly varying vector potential $A_x(r,t)=A(q,\omega)e^{\mathrm{i}\bf{q}\cdot r_i-\mathrm{i}\omega t}$ along the $x$ direction, the hopping term is modified by a phase factor, $c^{\dag}_{i\sigma}c_{j\sigma}\rightarrow c^{\dag}_{i\sigma}c_{j\sigma}\exp{\mathrm{i}\frac{e}{\hbar c}\int^{r_i}_{r_j}\textbf{A}(\textbf{r},t)\cdot \mathrm{d} \textbf{r}} $. Throughout the letter we set $\hbar=c=1$. Expanding the phase factors to order $A^2$, we obtain the total Hamiltonian $H_{tot}=H_0+H^{\prime}$ with \begin{eqnarray}\label{5} H^{\prime}=-\sum_iA_x(r_i,t)[eJ^P_x(r_i)+\frac{1}{2}e^2A_x(r_i,t)K_x(r_i)]. \end{eqnarray} Here $J^P_x(r_i)$ is the particle current density along the $x$ axis and $K_x(r_i)$ is the kinetic energy density along the $x$ axis. Their expressions are \begin{eqnarray}\label{5.1} K_x(r_i)&=&-\sum_{\nu\nu^{\prime}\sigma\delta}t_{i,i+\delta}x^2_{i,i+\delta}(c^{\dag}_{i\nu\sigma}c_{i+\delta,\nu^{\prime}\sigma}+H.c.),\\ J^P_x(r_i)&=&-\mathrm{i}\sum_{\nu\nu^{\prime}\sigma\delta}t_{i,i+\delta}x_{i,i+\delta}(c^{\dag}_{i\nu\sigma}c_{i+\delta,\nu^{\prime}\sigma}-H.c.); \end{eqnarray} only $\delta=x,x\pm y$ contribute to the $x$ component, and $x_{i,i+\delta}=1$ in our coordinates. The charge current density along the $x$ axis is defined as \begin{eqnarray}\label{6} J^{Q}_x(r_i)\equiv-\frac{\delta H^{\prime}}{\delta A_x(r_i,t)}=eJ^{p}_x(r_i)+e^2K_x(r_i)A_x(r_i,t).
\end{eqnarray} The kinetic energy $K_x$ is evaluated to zeroth order in $A_x(r_i)$ and gives the diamagnetic part, while the paramagnetic part $J^{P}_{x}(r_i)$ is evaluated to first order in $A_x(r_i)$. In the interaction representation we have \begin{eqnarray}\label{10} \langle J^{P}_{x}(r_i)\rangle &=&-\mathrm{i}\int_{-\infty}^{t} \langle [J^{P}_{x}(r_i,t),H^{\prime}(t^{\prime})]_{-} \rangle_0dt^{\prime}\nonumber\\&=&-\frac{eA_x(r,t)}{N_s}\Pi_{xx}(\bf{q},\omega), \end{eqnarray} where $\langle\cdot\rangle$ denotes the expectation value with respect to $H_{tot}$, while $\langle\cdot\rangle_0$ is taken with respect to $H_0$. In the Matsubara formalism we have the current-current correlation function $\Pi_{xx}(\textbf{q},\mathrm{i}\omega)= \int^{\beta}_0 d\tau e^{\mathrm{i}\omega\tau}\Pi_{xx}({\bf{q}},\tau)$, with $\Pi_{xx}(\textbf{q},\tau)=-\langle T_{\tau} J^P_x(\textbf{q},\tau)J^P_x(-\textbf{q},0) \rangle_0=\sum_{m_1m_2}\Pi^{m_1m_2}_{xx}(\textbf{q},\tau)$, where $T_{\tau}$ is the time-ordering operator, $J^P_x(\textbf{q},\tau)=e^{\tau H_0}J^P_x(\textbf{q}) e^{-\tau H_0}$, and $J^P_x(\textbf{q})=\sum_{i}e^{-\mathrm{i}\textbf{q}\cdot \textbf{r}_i}J^P_x(r_i)=\sum_{m_1m_2}J^P_{m_1,m_2}(\textbf{q})$ is a summation over $\textbf{k}$. The calculation of $\Pi_{xx}(\textbf{q},\mathrm{i}\omega)$ is carried out in the framework of the equations of motion of the Green's function, \begin{eqnarray} \frac{d\Pi^{m_1m_2}_{xx}(\textbf{q},\tau)}{d \tau}&=&-[J^P_{m_1,m_2}(\textbf{q}),J^P_{x}(-\textbf{q})]_{-}\nonumber\\ &-&\langle T_{\tau}e^{H_0\tau}[H_0,J^P_{m_1,m_2}(\textbf{q})]_{-}e^{-H_0 \tau} J^P_{x}(-\textbf{q},0) \rangle_0 .\nonumber \end{eqnarray} A lengthy but straightforward calculation leads to \begin{eqnarray}\label{corr} \Pi_{xx}(\textbf{q},\mathrm{i}\omega) \!\!=\!\!\!\!\!\sum_{{\bf{k}}m_1m_2} \!\!
\frac{ Y_{m_1m_2}^{\bf{k},\bf{k+q}}Y_{m_2m_1}^{\bf{k+q},\bf{k}}(f(E_{{\bf{k}},m_1})-f(E_{{\bf{k+q}},m_2}) ) }{\mathrm{i}\omega+(E_{{\bf{k}},m_1}-E_{{\bf{k+q}},m_2})}, \end{eqnarray} where $f$ is the Fermi distribution function. Through analytic continuation, $\Pi_{xx}(\textbf{q},\omega)$ is obtained. At $\omega=0$, the derivative of $f$ makes an important contribution to $\Pi_{xx}(\textbf{q},\mathrm{i}\omega)$. The quantity $Y^{\textbf{k},\textbf{k+q}}_{m_1m_2}$ can be expressed as \begin{eqnarray} Y^{\textbf{k},\textbf{k+q}}_{m_1m_2}&=&\frac{2}{N_s}[t_4(\xi_4(\sin{k_{x-y}}+\sin{k_{x+y}})+\xi^{\prime}_4(\sin{k^{\textbf{Q}}_{x-y}}+\sin{k^{\textbf{Q}}_{x+y}}) )\nonumber\\ &+&t_3(\xi_2\sin{k_{x-y}}+\tilde{\xi}_2\sin{k_{x+y}}+\xi^{\prime}_2\sin{k^{\textbf{Q}}_{x-y}}+\tilde{\xi}^{\prime}_2\sin{k^{\textbf{Q}}_{x+y}})\nonumber\\ &+&t_2(\xi_2\sin{k_{x+y}}+\tilde{\xi}_2\sin{k_{x-y}}+\xi^{\prime}_2\sin{k^{\textbf{Q}}_{x+y}}+\tilde{\xi}^{\prime}_2\sin{k^{\textbf{Q}}_{x-y}})\nonumber\\ &+&t_1(\xi_1\sin{k_x}+\xi^{\prime}_1\sin{k^{\textbf{Q}}_x})], \end{eqnarray} with $\xi_1=\alpha^{\textbf{k},\textbf{k+q}}_{1,3}+\alpha^{\textbf{k+q},\textbf{k}}_{3,1} +\alpha^{\textbf{k+q},\textbf{k}}_{9,11}+\alpha^{\textbf{k},\textbf{k+q}}_{11,9}$, $\xi_2=\alpha^{\textbf{k},\textbf{k+q}}_{1,1}+\alpha^{\textbf{k},\textbf{k+q}}_{9,9}$, $\tilde{\xi}_2=\alpha^{\textbf{k},\textbf{k+q}}_{3,3}+\alpha^{\textbf{k},\textbf{k+q}}_{11,11}$, $\xi_4=\alpha^{\textbf{k},\textbf{k+q}}_{1,2}+\alpha^{\textbf{k+q},\textbf{k}}_{2,1}+\alpha^{\textbf{k+q},\textbf{k}}_{9,10}+\alpha^{\textbf{k},\textbf{k+q}}_{10,9}$, and $\alpha^{\textbf{k},\textbf{k}^{\prime}}_{ij}=\mathbb{T}^*_{i,m_1}(\textbf{k})\mathbb{T}_{j,m_2}(\textbf{k}^{\prime})+\mathbb{T}^*_{i+1,m_1}(\textbf{k})\mathbb{T}_{j+1,m_2}(\textbf{k}^{\prime})$. The corresponding $\xi^{\prime}_i$ is obtained from $\xi_i$ by changing $\alpha_{i,j}$ into $\alpha_{i+4,j+4}$. Here $k_{x\pm y}$ denotes $k_x\pm k_y$ and $k^{\textbf{Q}}_{x\pm y}=k_{x\pm y}+\textbf{Q}$.
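In a numerical evaluation of Eq.~(\ref{corr}) at $\omega=0$, the terms with $E_{{\bf k},m_1}\rightarrow E_{{\bf k+q},m_2}$ reduce to the derivative of $f$ mentioned above. The following sketch (our own illustration; the degeneracy threshold $10^{-8}$ is an arbitrary choice) shows the numerically stable branch:

```python
import math

def fermi(E, T):
    """Fermi distribution, guarded against overflow for |E|/T large."""
    x = E / T
    if x > 40.0:
        return 0.0
    if x < -40.0:
        return 1.0
    return 1.0 / (math.exp(x) + 1.0)

def fermi_ratio(E1, E2, T, eps=1e-8):
    """(f(E1) - f(E2)) / (E1 - E2); in the degenerate limit E1 -> E2
    this becomes f'(E) = -1 / (4 T cosh^2(E / 2T))."""
    if abs(E1 - E2) < eps:
        E = 0.5 * (E1 + E2)
        c = math.cosh(min(abs(E) / (2.0 * T), 300.0))
        return -1.0 / (4.0 * T * c * c)
    return (fermi(E1, T) - fermi(E2, T)) / (E1 - E2)
```

This is exactly the term that makes $D_s$ sensitive to low-energy quasiparticles: the $-f'(E)$ pieces are exponentially small for $T\ll\Delta$ but grow quickly once $T$ is comparable to the gap.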
The superfluid weight measures the ratio of the superfluid density to the effective mass, $D_s/\pi e^2=\rho_s/m^{\ast}=-\langle J^{Q}_{x}(r_i,t) \rangle/e^2A_x(r_i)$, while the Drude weight measures the ratio of the density of mobile charges to their mass~\cite{1,2,3,4}, \begin{eqnarray}\label{13} \frac{D_s}{\pi e^2}&=&\frac{1}{N}\Pi_{xx}(q_x=0,q_y\rightarrow 0,\omega=0)-\langle K_x \rangle_0,\\ \frac{D}{\pi e^2}&=&\frac{1}{N}\Pi_{xx}(q_x=0,q_y=0,\omega\rightarrow 0)-\langle K_x \rangle_0. \end{eqnarray} \begin{figure} \centering \vspace{-0.5cm} \includegraphics[width=9cm]{Graph1.eps} \vspace{-0.5cm} \caption{(color online) Panels (a), (c), and (d) plot $D_s$ (black solid line), $D$ (orange dashed line), $\Delta$ (red dotted line), and $M$ (blue dash-dot-dotted line) as functions of $x$ at different temperatures. The right scale is for $D_s$ and $D$, while the left scale is for $\Delta$ and $M$. Panel (b) plots $\lambda(0)$ as a function of $x$. The inset of panel (b) is the phase diagram in temperature $T$ and doping $x$. }\label{figDs} \end{figure} Figure~\ref{figDs} shows the variation of $D_s$, $D$, $M$, and the superconducting (SC) order parameter $\Delta=\frac{1}{4}\sum_{\alpha}(\Delta^{\alpha}_{i,i+{\bf{x+y}}}+\Delta^{\alpha}_{i,i+{\bf{x-y}}})$ as functions of $x$ at different temperatures. $D$ does not change much as the temperature varies, so we plot it only in Figs.~\ref{figDs}(c) and \ref{figDs}(d). At zero temperature we do not plot $D$ because at almost all doping levels $D_s=D$ as long as $\Delta$ is finite. Fig.~\ref{figDs}(a) shows that in the overdoped regime the superconducting gap disappears and $D_s$ drops to zero, while $D$ remains finite, just as in panels (c) and (d); hence at overdoped levels where $\Delta=0$ the system is a metal. We can see from Fig.~\ref{figDs}(a) that at $T=0$, $D_s$ increases with $x$ until it reaches the SDW boundary.
In the underdoped region $x<0.05$, most of the Fermi surfaces are gapped by the SDW~\cite{zhou,huang3} and doping is the major source of charge carriers; hence the superfluid density as well as the mobile charge density increase linearly with $x$. At larger doping, $0.05<x<0.1$, the SDW is suppressed, the gapped surfaces shrink significantly, and more intrinsic charge carriers are released to the system in addition to the doped carriers. This is the reason why the increase of $D_s=D$ with doping becomes more dramatic than the linear dependence in this region. After the SDW disappears, $\Delta$ dominates the behavior of $D_s$, which shows a flat behavior over a considerably large doping range. In panel (b) we show the variation of $\lambda(0)$ as a function of $x$ for $x\leq0.3$. We define $\rho_s(T)=D_s(T)=\lambda(T)^{-2}$ in arbitrary units. Compared to the phase diagram in the inset, we find that in the $\mathrm{SDW+SC}$ coexisting regime $\lambda(0)$ shows a sharp increase with decreasing $x$, in good agreement with experiments~\cite{exp4,exp9}. An external magnetic field couples the relevant correlation functions; hence $\rho_s$ is a nonlocal quantity, describing the stiffness of the system. Figures~\ref{figDs}(c) and \ref{figDs}(d) show that at finite $T$, $D_s$ deviates from $D$; the suppression of $D_s$ is stronger than that of $\Delta$. For the $U=4$ case, the results (not shown here) are very similar to those presented here. \begin{figure} \centering \includegraphics[width=3.0cm]{Graph2a.eps} \includegraphics[width=2.7cm]{Graph2b.eps} \includegraphics[width=2.7cm]{Graph2c.eps} \caption{(color online) Density of states at $T=0.02$ for different $x$. All these calculations are for the $U=3.4$ case. }\label{figlos} \end{figure} The temperature dependence of the superfluid density reflects the low-energy residual density of states (DOS) inside the superconducting gap.
Equation~(\ref{corr}) indicates that the difference between $D$ and $D_s$ is related to the derivative of $f$ near the Fermi surface, and can be understood as the excitation of quasiparticles $\rho_q$. Fig.~\ref{figlos} shows the DOS at $T=0.02$. For $x=0.05$ and $0.1$ the gap is considerably larger; hence $D_s$ is equal or almost equal to $D$. Although there is a gap at $x=0.2$ [see Fig.~\ref{figlos}(c)], it is small; therefore $f'(E_k)$ contributes to $D_s$, and $D_s$ deviates from $D$. \begin{figure} \centering \vspace{-0.5cm} \includegraphics[width=9.5cm]{Graph3.eps} \vspace{-1cm} \caption{(color online) Panels (a), (b), and (c) plot the renormalized superfluid density $\rho_s(T)/\rho_s(0)$ and superconducting order parameter $\Delta(T)/\Delta(0)$ as functions of the temperature $T/T_c$ at different doping levels for $U=3.4$. $T_{sdw}$ is the transition temperature for the SDW. The green dotted lines are linear-in-$T$ fitting functions. Panels ($a^{\prime}$), ($b^{\prime}$), and ($c^{\prime}$) are similar but for $U=4.0$. Panels (d) and ($d^{\prime}$) show the comparison of our results with experimental data at $x=0.08$. The blue solid line in the inset of panel ($d^{\prime}$) plots $\rho_q(T)/\rho(T_c)$ as a function of $T/T_c$ at $x=0.08$, and the red dashed line is a guide to the eye. }\label{figdepth} \end{figure} We choose three typical doping levels to show the $T/T_c$ dependence of $\rho_s(T)/\rho_s(0)$ and $\Delta(T)/\Delta(0)$ for $U=3.4$ as well as for $U=4.0$. From Fig.~\ref{figdepth} we can see that the suppression of the superfluid density is stronger than that of the superconducting order parameter in all cases. At low temperatures the curve of $\rho_s(T)/\rho_s(0)$ is flat, a characteristic of a nodeless superconducting gap. As $T$ increases, a linear-in-$T$ behavior of the superfluid density is dominant in all cases.
For the $U=3.4$ case, the linear functions $-1.55T/T_c+1.52$ and $-1.57T/T_c+1.49$ are used to fit this behavior for $x=0.1$ and $x=0.2$, respectively, as shown in Figs.~\ref{figdepth}(b) and \ref{figdepth}(c). This is consistent with the power-law behavior observed in experiments~\cite{Fegap3,exp4,exp9,jiey,aabar,lamd04}. Interestingly, our curves are in good agreement with the direct measurements of the superfluid density in films of Fe-pnictide superconductors in Ref.~\onlinecite{jiey}. We show our results together with the experimental data [see Fig.~1(a) in Ref.~\onlinecite{jiey}] in Figs.~\ref{figdepth}(d) ($U=3.4$ case) and \ref{figdepth}($d^{\prime}$) ($U=4.0$ case), and the agreement is evident. In order to understand the wide linear-$T$ range of $\rho_s(T)$, the inset in Fig.~\ref{figdepth}($d^{\prime}$) plots the renormalized $\rho_q(T)/\rho(T_c)$ as a function of $T/T_c$ at $x=0.08$; the red dashed line is a guide to the eye. We can see that the number of excited quasiparticles is exponentially small at low $T$, where superconductivity is strong, but grows linearly in $T$ within a certain temperature range before superconductivity disappears. The easy appearance of the linear-in-$T$ behavior is closely related to the anisotropic $s_{\pm}$ superconducting pairing, since in-gap (Andreev) states may be induced in this case. The ratio $2\Delta_k(0)/k_BT_c$ at optimal doping is about $4.3\; (4.5)$ for the $U=3.4\;(4.0)$ system. \begin{figure} \centering \includegraphics[width=4.1cm]{Graph4a.eps} \includegraphics[width=4.1cm]{Graph4b.eps} \caption{(color online) Panel (a) plots $\Delta \lambda(T)$ as a function of $T/T_c$ at typical selected dopings for $U=4$; the dashed lines are the corresponding fitting functions. Panel (b) is the Uemura plot of the Fe-based superconductor.
The $x$ axis is $\rho_s(0)$ at different dopings and the $y$ axis is the corresponding $T_c$.}\label{figuemu} \end{figure} Experiments always measure $\Delta\lambda(T)=\lambda(T)-\lambda(0)$, so we show the evolution of $\Delta\lambda(T)$ at selected doping concentrations for $U=4.0$ in Fig.~\ref{figuemu}(a). The results for $U=3.4$ are very similar. In the low-temperature range the curve is flat. At high temperatures, approaching the disappearance of superconductivity, there is a jump in the value of $\Delta\lambda(T)$, which we mark by the colored solid dots. We fit the evolution of $\Delta\lambda(T)$ by a power law [see Fig.~\ref{figuemu}(a)]; the corresponding fitting function is $4(T/T_c)^{3.6}$ ($2(T/T_c)^{3}$) for the data at $x=0.05$ ($x=0.1, 0.2$), which may be the reason why experiments give different exponents for different samples. Experiments have shown that the Uemura relation~\cite{Uerelation} holds~\cite{uehold} for a 1111 system but does not hold for a 122 system~\cite{ueunhold}. In Fig.~\ref{figuemu}(b) we plot $T_c$ versus $\rho_s(0)$ based on our model. The blue dashed line (red dotted line) is for the $U=3.4$ ($U=4.0$) system. It shows that at very low doping levels, about $x<0.035$ (grey point), both the $U=3.4$ and $U=4$ systems follow the same empirical linear relation (grey line). As $T_c$ approaches its maximum and $\rho_s(0)$ saturates at $x>0.08$ ($0.1$) for $U=3.4$ ($U=4.0$), the data deviate significantly from the linear relation. This is because in the very underdoped region doping is the major source of charge carriers, and the Uemura relation is valid only there. Based on a two-orbital phenomenological model, we have studied the stiffness of superconductivity in clean iron-based superconductors.
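The power-law exponents quoted for $\Delta\lambda(T)$ can be extracted by an ordinary least-squares fit of $\Delta\lambda = A\,(T/T_c)^n$ in log-log coordinates. The following sketch (our own illustration, checked on synthetic data generated from the $x=0.05$ fitting function) shows the procedure:

```python
import math

def fit_power_law(ts, dls):
    """Least-squares fit of dl = A * t**n in log-log space.
    Returns (A, n); points with t <= 0 or dl <= 0 are excluded."""
    xs, ys = [], []
    for t, dl in zip(ts, dls):
        if t > 0 and dl > 0:
            xs.append(math.log(t))
            ys.append(math.log(dl))
    m = len(xs)
    xbar = sum(xs) / m
    ybar = sum(ys) / m
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    n = sxy / sxx                 # slope = exponent
    A = math.exp(ybar - n * xbar)  # intercept = log prefactor
    return A, n

# synthetic check with the quoted x = 0.05 fitting function 4 (T/Tc)^3.6
ts = [0.05 * k for k in range(1, 16)]
dls = [4.0 * t ** 3.6 for t in ts]
A, n = fit_power_law(ts, dls)
```

On noiseless synthetic data the fit recovers the prefactor and exponent exactly; on real data the recovered exponent depends on the fitted temperature window, which is one plausible source of the sample-to-sample spread of exponents.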
At zero temperature, we find that $\lambda(0)$ shows a sharp increase as $x$ decreases in the regime of coexisting $\mathrm{SDW+SC}$ orders; the variation of $\lambda(0)$ as a function of doping is in good agreement with experiments~\cite{exp4}. As far as we know, this is a new theoretical result. At low temperatures $\rho_s(T)/\rho_s(0)$ is flat, and it then shows a linear-in-$T$ behavior before the system loses its superconductivity. This is in good agreement with direct measurements of the superfluid density in films~\cite{jiey}. The evolution of $\Delta\lambda(T)$ roughly follows a power-law behavior with different exponents at different doping levels. Only at low doping levels does the empirical linear Uemura relation hold for the iron-based superconductors. This work was supported by the Texas Center for Superconductivity at the University of Houston and by the Robert Welch Foundation under Grant No. E-1146 (H.H., Y.G., C.S.T.), by the NNSA of the U.S. DOE at LANL under Contract No. DE-AC52-06NA25396 and the U.S. Department of Energy Office of Basic Energy Sciences (J.-X.Z.), and by NSFC Grant No.~11204138 (Y.G.).
\section{Analytical form} For the Reader's convenience, we present here the analytical forms of the expressions used in the main part of the paper. For $w(t)$ given by (\ref{eq:gauss}) and $f(t)$ given by (\ref{eq:nontrivialtransmission}), we obtain for the integral (\ref{eq:conlikelihood}) \[ l(\delta t)=\intop_{-\infty}^{+\infty}\left[-\frac{\left(t+\delta t\right)^{2}}{2}\right]\left[1-\frac{e^{-\left(t-1\right)^{2}}}{10}\right]\exp\left(-\frac{t^{2}}{2}\right)dt \] \begin{equation} =\frac{\sqrt{\pi}\left[\sqrt{3}\left(7+12\,\delta t+9\,\delta t^{2}\right)-270\sqrt[3]{e}\left(1+\delta t^{2}\right)\right]}{270\sqrt{2}\sqrt[\,3]{e}}.\label{eq:analytint} \end{equation} The maximum of $l(\delta t)$ is determined by its derivative \begin{equation} l'\left(\delta t\right)=\frac{\sqrt{\pi}\left[\sqrt{3}\left(12+18\,\delta t\right)-540\sqrt[3]{e}\,\delta t\right]}{270\sqrt{2}\sqrt[\,3]{e}},\label{eq:analytder} \end{equation} and is attained for \begin{equation} \delta t=\frac{2}{30\sqrt{3}\sqrt[\,3]{e}-3}.\label{eq:analytdt} \end{equation}
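As a numerical cross-check (a sketch, not part of the derivation), (\ref{eq:analytint}) and (\ref{eq:analytdt}) can be verified by quadrature; since the second derivative of (\ref{eq:analytder}) is a negative constant, $l$ is concave and its maximizer can be bracketed by ternary search.

```python
import math

def l_numeric(dt, a=-10.0, b=10.0, n=2000):
    """Trapezoidal quadrature of the integral defining l(dt);
    the integrand decays like exp(-t^2/2), so [-10, 10] suffices."""
    h = (b - a) / n
    total = 0.0
    for i in range(n + 1):
        t = a + i * h
        val = (-(t + dt) ** 2 / 2.0) \
            * (1.0 - math.exp(-(t - 1.0) ** 2) / 10.0) \
            * math.exp(-t * t / 2.0)
        total += (0.5 if i in (0, n) else 1.0) * val
    return total * h

def l_closed(dt):
    """Closed form of l(dt), Eq. (analytint)."""
    e3 = math.exp(1.0 / 3.0)
    return (math.sqrt(math.pi)
            * (math.sqrt(3.0) * (7 + 12 * dt + 9 * dt ** 2)
               - 270 * e3 * (1 + dt ** 2))
            / (270 * math.sqrt(2.0) * e3))

# closed-form maximizer, Eq. (analytdt)
dt_star = 2.0 / (30 * math.sqrt(3.0) * math.exp(1.0 / 3.0) - 3.0)

# ternary search on the numerically integrated l (concave in dt)
lo, hi = -0.5, 0.5
for _ in range(80):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if l_numeric(m1) < l_numeric(m2):
        lo = m1
    else:
        hi = m2
dt_num = 0.5 * (lo + hi)
```

The numerically located maximizer agrees with (\ref{eq:analytdt}) to well below the quadrature tolerance.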
\section{Proofs} \subsection{Weak* compactness of \texorpdfstring{$\operatorname{PF}(\Omega)$}{PF(Omega)}} \label{app:weakstarcompact} \citet[Appendix D4]{walley1991statistical} states that the set of linear previsions, $\operatorname{PF}(\Omega)$, is compact due to the Alaoglu-Bourbaki theorem, but does not explain how this follows. For completeness, we provide an argument. First, we can observe, like \citet{walley1991statistical}, that the set is weak* closed. We use the following well-known lemma; see e.g.\@\xspace \citep[Lemma 12.3.4]{deitmar2016analysis}. \begin{lemma} Let $\mathcal{X}$ be a topological space, $K \subseteq \mathcal{X}$ be compact and $L \subseteq K$ be closed. Then $L$ is compact. \end{lemma} Thus we will show that $\operatorname{PF}(\Omega)$ is a subset of some weak* compact set in $(\linfty)^*$, so that, being weak* closed, it is in fact weak* compact by the lemma. From the Alaoglu-Bourbaki theorem (see e.g.\@\xspace \citep[p.\@\xspace 70]{holmes1975geometric}), we know that the unit ball of the dual norm is weak* compact in $(\linfty)^*$. By definition of the dual norm $\|\cdot\|^*$ of the $L^\infty$ norm $\|X\| \coloneqq \sup_{\omega \in \Omega}\{|X(\omega)|\}$, this is the following set: \begin{align*} \Def{B} &\Def{\coloneqq \left\{X^* \in (\linfty)^* \colon \|X^*\|^* \leq 1\right\}}\\ &= \left\{X^* \in (\linfty)^* \colon \sup\left\{|X^*(X)| \colon X \in L^\infty \text{ for which } \|X\| \leq 1\right\} \leq 1\right\}\\ &= \left\{X^* \in (\linfty)^* \colon \sup\left\{|X^*(X)| \colon X \in L^\infty \text{ for which } \sup_{\omega \in \Omega}\left\{|X(\omega)|\right\} \leq 1\right\} \leq 1\right\}. \end{align*} Thus, to show that $\operatorname{PF}(\Omega)$ is weak* compact, it suffices to show that $\operatorname{PF}(\Omega) \subseteq B$. That is, given some $E \in \operatorname{PF}(\Omega)$, we show that if $X \in L^\infty$ is such that $\sup_{\omega \in \Omega}\{|X(\omega)|\} \leq 1$, then $|E(X)|\leq 1$, since then the supremum over all such $X$ is also $\leq 1$.
Thus it suffices to show that $|E(X)| \leq \sup_{\omega \in \Omega} \{|X(\omega)|\}$. We know that $E(|X|) \leq \sup_{\omega \in \Omega}\{|X(\omega)|\}$, since $E$ is a coherent upper prevision, cf.\@\xspace \citet[2.6.1a]{walley1991statistical}. But we also have that $|E(X)| \leq E(|X|)$ from monotonicity and $E(0)=0$, see e.g.\@\xspace \citet[Proposition 5]{pichler2013natural}, and thus $|E(X)| \leq E(|X|) \leq \sup_{\omega \in \Omega}\{|X(\omega)|\} \leq 1$, which concludes the proof. \subsection{Properties of the Induced Risk Measure} \label{app:inducedriskmeasure} To see that $X-\overline{R}(X) \in \mathcal{D}_{{\bm{v}}{\Omega}}$: \begin{align*} X-\overline{R}(X) \in \mathcal{D}_{{\bm{v}}{\Omega}} \Longleftrightarrow &\limsup_{n\rightarrow \infty} \frac{1}{n} \sum_{i=1}^n \left((X-\overline{R}(X))\left({\bm{v}}{\Omega}(i)\right)\right) \leq 0\\ \Longleftrightarrow&\limsup_{n \rightarrow \infty} \vv{\Sigma X}(n) - \overline{R}(X) \leq 0, \quad \text{since } \overline{R}(X) \text{ is constant}\\ \Longleftrightarrow&\limsup_{n \rightarrow \infty} \vv{\Sigma X}(n) - \limsup_{n \rightarrow \infty} \vv{\Sigma X}(n) = 0 \leq 0. \end{align*} We now show that $\overline{R}(X) = \limsup_{n \rightarrow \infty} \vv{\Sigma X}(n)$ is in fact the smallest number such that the relation in \eqref{eq:rdeffromdesirable} holds. Suppose there exists $\varepsilon>0$ such that $\overline{R}(X)-\varepsilon$ (with $\overline{R}$ defined as before) makes $X-(\overline{R}(X)-\varepsilon)$ desirable, that is \[ \limsup_{n \rightarrow \infty} \vv{\Sigma X}(n) - \overline{R}(X) + \varepsilon \leq 0, \] which is a contradiction due to our choice of $\overline{R}$. \subsection{Proof of Remark~\ref{remark:gbrdef}} \label{app:gbrdefcoincidence} In the literature, the generalized Bayes rule is defined as the solution $\alpha^*$ of $\overline{R}\left(\chi_B(X-\alpha)\right)=0$. We show that $\alpha^*= \inf\left\{\alpha \in {\bm{R}} \colon \overline{R}\left(\chi_B (X - \alpha)\right) \leq 0\right\}$.
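The definition of $\overline{R}$ via running averages can be made concrete. In the following sketch (our own illustration; the block construction is chosen so that the cluster points of the running averages are known) the sequence ${\bm{v}}{\Omega}$ alternates blocks of two outcomes with doubling lengths; for the indicator $X$ of the first outcome, $\overline{R}_{{\bm{v}}{\Omega}}(X)=2/3$ while $\overline{R}_{{\bm{v}}{\Omega}}(-X)=-1/3$, exhibiting the non-linearity of the upper functional.

```python
def omega_sequence(N):
    """Deterministic 0/1 sequence: alternating blocks of 1s and 0s with
    doubling lengths 1, 2, 4, ..., so running averages oscillate forever
    (limsup frequency of 1 is 2/3, liminf is 1/3)."""
    seq = []
    val, length = 1, 1
    while len(seq) < N:
        seq.extend([val] * length)
        val = 1 - val
        length *= 2
    return seq[:N]

def upper_R(X, seq, tail=0.5):
    """Estimate R(X) = limsup of running averages of X along the
    sequence by the maximum over the last (1 - tail) fraction."""
    best = float("-inf")
    s = 0.0
    start = int(tail * len(seq))
    for n, w in enumerate(seq, 1):
        s += X(w)
        if n >= start:
            best = max(best, s / n)
    return best

seq = omega_sequence(1 << 17)
r_id = upper_R(lambda w: w, seq)    # estimates limsup frequency, 2/3
r_neg = upper_R(lambda w: -w, seq)  # estimates -liminf frequency, -1/3
```

Since $r_{id} + r_{neg} > 0$ here, the gamble $X$ carries genuine imprecision: $\overline{R}(X) > -\overline{R}(-X)$, i.e.\@\xspace the upper and lower previsions of $X$ differ.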
Of course, we get for $\alpha \coloneqq \alpha^*$ that equality holds $(=0)$. We just have to exclude the possibility that there exists $\tilde{\alpha} < \alpha^*$ so that $\overline{R}\left(\chi_B (X - \tilde{\alpha})\right) \leq 0$. Assume such an $\tilde{\alpha}$ exists, so $\overline{R}\left(\chi_B X - \chi_B \tilde{\alpha}\right) \leq 0$. Since $\chi_B X - \chi_B \tilde{\alpha} > \chi_B X - \chi_B \alpha^*$ if $\tilde{\alpha} < \alpha^*$, we get from the monotonicity of $\overline{R}$ that: \[ 0 \geq \overline{R}\left(\chi_B X - \chi_B \tilde{\alpha}\right) > \overline{R}\left(\chi_B X - \chi_B \alpha^*\right) = 0, \] a contradiction. For the strict inequality, note that if $X > Y$ then $\exists \varepsilon > 0 \colon X \geq Y + \varepsilon$ and consequently $\overline{R}(X) \geq \overline{R}(Y+\varepsilon) = \overline{R}(Y) + \varepsilon$, hence $\overline{R}(X) > \overline{R}(Y)$. Thus we have shown that $\alpha^*=\inf\left\{\alpha \in {\bm{R}} \colon \overline{R}\left(\chi_B (X - \alpha)\right) \leq 0\right\}$. The other expressions in Definition~\ref{def:gbrdef} follow by simple manipulations. \subsection{Supplement for Proof of Proposition~\ref{prop:gbrcoincidence}} \label{app:gbrlemma} \begin{lemma} \label{lemma:gbrlemma} Let ${\bm{v}}{a}\colon \mathbb{N} \rightarrow {\bm{R}}$ be a sequence and ${\bm{v}}{b} \colon \mathbb{N} \rightarrow (0,1]$ be a strictly positive sequence such that $\liminf_{n \rightarrow \infty} {\bm{v}}{b}(n) > 0$. Then: \begin{align*} \limsup_{n \rightarrow \infty} {\bm{v}}{a}(n) \leq 0 &\Longleftrightarrow \limsup_{n \rightarrow \infty} \frac{{\bm{v}}{a}(n)}{{\bm{v}}{b}(n)} \leq 0. \end{align*} \end{lemma} \begin{proof} For brevity we simply write $a_n$ and $b_n$ for the sequences ${\bm{v}}{a}(n)$ and ${\bm{v}}{b}(n)$.
We must show \begin{align*} \limsup_{n \rightarrow \infty} a_n \leq 0 &\Longleftrightarrow \limsup_{n \rightarrow \infty} \frac{a_n}{b_n} \leq 0,\\ \text{i.e.\@\xspace}\quad \lim_{n \rightarrow \infty} \left(\sup_{k \geq n} a_k\right) \leq 0 &\Longleftrightarrow \lim_{n \rightarrow \infty} \left(\sup_{k \geq n} \frac{a_k}{b_k} \right) \leq 0, \end{align*} where we know that $b_n \in (0,1]$ and furthermore $0 < \liminf_{n \rightarrow \infty} b_n \leq \limsup_{n \rightarrow \infty} b_n$. If the sequence $b_n$ actually converged, the statement would clearly be true, since we could then pull the limit of $b_n$ out of the $\limsup$. We begin by showing that $LHS \leq 0 \implies RHS \leq 0$. Our assumption is that \begin{equation} \label{eq:assumptionlhs} \forall \varepsilon>0 : \exists n_0 \in \mathbb{N} : \forall n \geq n_0: \sup_{k \geq n} a_k < \varepsilon. \end{equation} Our aim is to show that \[ \forall \varepsilon'>0 : \exists n_0' \in \mathbb{N} : \forall n \geq n_0': \sup_{k \geq n} \frac{a_k}{b_k} < \varepsilon' . \] So let some $\varepsilon'>0$ be given and fixed. We have to exhibit some $n_0'$ such that the above statement holds. Choose $\varepsilon \coloneqq \varepsilon' \cdot \lim_{n\rightarrow \infty}\inf_{k \geq n} b_k \cdot \frac{1}{\kappa}$, for an arbitrary $\kappa > 1$. Then $\varepsilon > 0 $ by our assumption that $\lim_{n\rightarrow \infty}\inf_{k \geq n} b_k > 0$ (in the application, this is the condition $\underline{P}(B)>0$). Note that $\varepsilon \leq \varepsilon'$. Then, we know that $\exists n_0(\varepsilon)$ such that $\forall n \geq n_0(\varepsilon)$: $\sup_{k \geq n} a_k < \varepsilon$. Also, we know that $\forall \kappa' > 1 : \exists n_0'' \in \mathbb{N} : \forall n \geq n_0''$: \begin{equation} \label{eqref:kapparatio} \frac{\lim_{n\rightarrow \infty}\inf_{k \geq n} b_k}{\inf_{k \geq n} b_k} \leq \kappa'.
\end{equation} This holds since the numerator is the limit of the denominator (which exists) and $\inf_{k \geq n} b_k$ is monotone increasing in $n$, so that $\lim_{n\rightarrow \infty}\inf_{k \geq n} b_k \geq \inf_{k \geq n} b_k$ for all $n \in \mathbb{N}$; hence the ratio approaches $1$ from above for large $n$. Now choose $\kappa' \coloneqq \kappa$ and $n_0' \coloneqq \max(n_0(\varepsilon), n_0''(\kappa'))$. That is, we know that then both \eqref{eqref:kapparatio} and \eqref{eq:assumptionlhs} hold. Then we want to show: \[ \sup_{k \geq n} \frac{a_k}{b_k} = \max\left(\sup_{k \geq n, a_k \geq 0} \frac{a_k}{b_k}, \sup_{k \geq n, a_k < 0} \frac{a_k}{b_k}\right) \stackrel{!}{<} \varepsilon', \] which is a legitimate decomposition of the supremum into the ``negative'' and ``nonnegative'' subsequences. For the second term ($a_k < 0$), since $b_k > 0$ we clearly have $\sup_{k \geq n, a_k < 0} \frac{a_k}{b_k} \leq 0 < \varepsilon'$. Thus we only have to consider the first term. Further observe that \[ \sup_{k \geq n, a_k \geq 0} \frac{a_k}{b_k} \leq \sup_{k \geq n, a_k \geq 0} a_k \cdot \sup_{k \geq n, a_k \geq 0} \frac{1}{b_k} = \sup_{k \geq n, a_k \geq 0} a_k \cdot \frac{1}{\inf_{k \geq n, a_k \geq 0} b_k}, \] due to the nonnegativity of the $a_k$ and a general rule for suprema/infima, which applies since $b_k$ is strictly positive. Now by assumption, \[ \sup_{k \geq n, a_k \geq 0} a_k \cdot \frac{1}{\inf_{k \geq n, a_k \geq 0} b_k} < \varepsilon' \lim_{n\rightarrow \infty}\inf_{k \geq n} b_k \frac{1}{\kappa} \frac{1}{\inf_{k \geq n, a_k \geq 0} b_k} = \varepsilon' \cdot \underbrace{\frac{\lim_{n\rightarrow \infty}\inf_{k \geq n} b_k}{\inf_{k \geq n, a_k \geq 0} b_k}}_{\leq \kappa} \cdot \frac{1}{\kappa} \leq \varepsilon'.
\] Here we used that $\inf_{k \geq n} b_k \leq \inf_{k \geq n, a_k \geq 0} b_k$ and therefore \[ \frac{\lim_{n\rightarrow \infty}\inf_{k \geq n} b_k}{\inf_{k \geq n, a_k \geq 0} b_k} \leq \frac{\lim_{n\rightarrow \infty}\inf_{k \geq n} b_k}{\inf_{k \geq n} b_k} \leq \kappa' = \kappa. \] Altogether, we have shown that \[ \sup_{k \geq n} \frac{a_k}{b_k} < \varepsilon', \] and therefore $LHS \leq 0 \implies RHS \leq 0$. It remains to show that $RHS \leq 0 \implies LHS \leq 0$. Our assumption is that \begin{equation} \label{eq:assrhsimplieslhs} \forall \varepsilon'>0 : \exists n_0' \in \mathbb{N} : \forall n \geq n_0': \sup_{k \geq n} \frac{a_k}{b_k} < \varepsilon'. \end{equation} Our aim is to show that then \[ \forall \varepsilon>0 : \exists n_0 \in \mathbb{N} : \forall n \geq n_0: \sup_{k \geq n} a_k < \varepsilon. \] So let $\varepsilon>0$ be fixed. Choose $\varepsilon' \coloneqq \varepsilon$ and set $n_0 \coloneqq n_0'$. Then we want to show that $\forall n \geq n_0$: \[ \sup_{k \geq n} a_k = \max\left(\sup_{k \geq n, a_k \geq 0} a_k, \sup_{k \geq n, a_k < 0} a_k\right) \stackrel{!}{<} \varepsilon. \] As to the second term, it is obviously negative; in particular $\sup_{k \geq n, a_k < 0} a_k < \varepsilon$. For the first term, where the $a_k$ are nonnegative, observe that then $a_k \leq \frac{a_k}{b_k}$ since $b_k \in (0,1]$; consequently we have $\forall n \geq n_0$: \[ \sup_{k \geq n, a_k \geq 0} a_k \leq \sup_{k \geq n, a_k \geq 0} \frac{a_k}{b_k} \leq \sup_{k \geq n} \frac{a_k}{b_k} < \varepsilon = \varepsilon' \] by our assumption \eqref{eq:assrhsimplieslhs}. Thus we have shown that $RHS \leq 0 \implies LHS \leq 0$. \end{proof} \subsection{Proof of Remark~\ref{remark:wronggbr}} \label{app:wronggbr} Take for example the constant gamble $X \equiv -1$ and some $B \subseteq \Omega$ with $\underline{P}(B)<1$.
Then $\sup X = -1$, but \[ \overline{R}\left(X \chi_B\right) = \limsup_{n \rightarrow \infty} \frac{1}{n} \sum_{i=1}^n \left(X \chi_B\right)\left(\vv{\Omega}(i)\right) = \limsup_{n \rightarrow \infty} - \frac{1}{n} \sum_{i=1}^n \chi_B\left(\vv{\Omega}(i)\right) = - \liminf_{n \rightarrow \infty} \frac{1}{n} \sum_{i=1}^n \chi_B\left(\vv{\Omega}(i)\right) = - \underline{P}(B), \] and $\sup X = -1 < - \underline{P}(B)$, hence \ref{item:UP1} does not hold. Thus $X \mapsto \overline{R}\left(X \chi_B\right)$ is not a coherent upper prevision on $L^\infty$ in general (it is of course for $B=\Omega$). \input{independenceviapi.tex} \section{From Cluster Points to Sequence} \label{sec:converseresult} In the previous section, we have shown how from a given sequence we can construct a coherent upper prevision from the set of cluster points $\operatorname{CP}({\bm{v}}{E})$. In this section, we show the converse, thus ``closing the loop'': given an arbitrary coherent upper prevision, we construct a sequence ${\bm{v}}{\Omega}$ such that the induced upper prevision is just the specified one. We take this to be an argument for the well-groundedness of our approach. For simplicity, we assume a finite possibility space $\Omega$. \begin{theorem} \label{theorem:converse} Let $|\Omega|<\infty$. Let $\overline{R}$ be a coherent upper prevision on $L^\infty$. There exists a sequence ${\bm{v}}{\Omega}$ such that we can write $\overline{R}$ as: \[ \overline{R}(X) = \overline{R}_{{\bm{v}}{\Omega}}(X) = \sup \left\{E(X) \colon E \in \mathcal{E}_{{\bm{v}}{\Omega}} \right\}, \quad \mathcal{E}_{{\bm{v}}{\Omega}} \coloneqq \operatorname{CP}\left({\bm{v}}{E}_{{\bm{v}}{\Omega}}\right) \quad \forall X \in L^\infty, \] where we now make the dependence on the sequence ${\bm{v}}{\Omega}$ explicit in the notation, i.e.\@\xspace ${\bm{v}}{E}_{{\bm{v}}{\Omega}}(n) = X \mapsto \frac{1}{n} \sum_{i=1}^n X\left({\bm{v}}{\Omega}(i)\right)$. 
\end{theorem} The significance of this result is that it establishes strictly frequentist \textit{semantics} for imprecise probability. It shows that to any decision maker who, in the subjectivist fashion, uses a coherent upper prevision, we can associate a sequence that yields the same upper prevision in a strictly frequentist way. We interpret this result as evidence for the naturalness, and arguably completeness, of our theory. The key to proving this is Theorem~\ref{th:CP-r-C}, for which we introduce some convenient notation. For $k\in\mathbb{N}$, let $\Def{[k]\coloneqq\{1,\ldots,k\}}$ and define the \Def{$(k-1)$-simplex} as \[ \Def{\Delta^k \coloneqq \left\{d=(d_1,\ldots,d_k) \in {\bm{R}}^k\colon \sum_{i=1}^k d_i = 1,\ d_i \geq 0 \ \forall i\in[k]\right\}.} \] It is also helpful to have a dual notation for sequences $z\colon\mathbb{N}\rightarrow [k]$, whereby we write either $\Def{z(i)}$ or $\Def{z_i}$ to mean the same thing. \begin{definition} Suppose $k\in\mathbb{N}$ and $x\colon\mathbb{N}\rightarrow [k]$. For any $n\in\mathbb{N}$ define the \Def{\emph{relative frequency of $x$ with respect to $i$ at $n$}}, $r_i^x\colon\mathbb{N}\rightarrow [0,1]$ via \[ \Def{r_i^x(n)\coloneqq \textstyle\frac{1}{n}|\{j\in[n]\colon x_j=i\}|} \] and the \Def{\emph{relative frequency of $x$ at $n$}}, $r^x\colon\mathbb{N}\rightarrow\Delta^k$ as \begin{equation} \Def{r^x(n)\coloneqq r_{[k]}^x(n)= \left(\begin{array}{c}r_1^x(n)\\ \vdots \\ r_k^x(n)\end{array}\right).} \label{eq:r-x-def} \end{equation} \end{definition} \begin{theorem}\label{th:CP-r-C} Suppose $k\in\mathbb{N}$ and $C$ is a rectifiable closed curve in $\Delta^k$. There exists $x\colon\mathbb{N}\rightarrow [k]$ such that $\operatorname{CP}(r^x)=C$. \end{theorem} The proof (which is constructive) is in Appendix~\ref{app:construction} along with an example. From this, we obtain the following Corollary (proven in Appendix~\ref{app:fromboundarytocurves}). Denote the topological boundary of a set $D$ as $\partial D$.
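For concreteness, the relative-frequency map of Equation~\eqref{eq:r-x-def} is straightforward to compute exactly. The following minimal Python sketch (the sequence is an arbitrary illustration, not one from the text) also confirms that $r^x(n)$ always lies in $\Delta^k$:

```python
from fractions import Fraction

def rel_freq(x, n, k):
    """Relative frequency vector r^x(n) in the (k-1)-simplex:
    r_i^x(n) = |{j <= n : x_j = i}| / n, computed exactly with Fractions."""
    counts = [0] * k
    for j in range(n):
        counts[x[j] - 1] += 1  # symbols are 1, ..., k
    return [Fraction(c, n) for c in counts]

# Illustrative sequence over [k] = {1, 2, 3}.
x = [1, 2, 3, 1, 1, 2, 3, 3, 3, 1]
r = rel_freq(x, len(x), 3)
assert sum(r) == 1 and all(ri >= 0 for ri in r)  # r^x(n) lies in the simplex
```

Using exact rational arithmetic avoids any floating-point ambiguity when checking membership in $\Delta^k$.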
\begin{corollary} \label{cor:CP-convexbody} Suppose $k \in \mathbb{N}$ and $D \subseteq \Delta^k$ is a non-empty convex set. There exists $x\colon\mathbb{N}\rightarrow [k]$ such that $\operatorname{CP}(r^x)=\partial D$. \end{corollary} Since we have a finite possibility space $\Omega$, we can identify each linear prevision $E$ with a point in the simplex, by assigning coordinates to its underlying finitely additive probability; in the case of ${\bm{v}}{E}_{{\bm{v}}{\Omega}}(n)$, this is the relative frequency $r^{{\bm{v}}{\Omega}}(n)$. This is formalized in the following. \begin{proposition} \label{prop:simplexcorrespondence} Let ${\bm{v}}{E}(n) : \mathbb{N} \rightarrow \operatorname{PF}(\Omega)$ be a sequence of linear previsions with underlying probabilities ${\bm{v}}{P}(n) \coloneqq A \mapsto {\bm{v}}{E}(n)(A)$. Then $E \in \operatorname{CP}\left({\bm{v}}{E}(n)\right)$ with respect to the weak* topology if and only if the sequence ${\bm{v}}{D} \colon \mathbb{N} \rightarrow \Delta^k$, ${\bm{v}}{D}(n) \coloneqq \left({\bm{v}}{P}(n)(\omega_1),\ldots, {\bm{v}}{P}(n)(\omega_k)\right)$ has as cluster point $d_E=\left(E\left(\chi_{\{\omega_1\}}\right),\ldots,E\left(\chi_{\{\omega_k\}}\right)\right)$ with respect to the topology induced by the Euclidean norm on ${\bm{R}}^k$. \end{proposition} For the proof, see Appendix~\ref{app:euclideansufficient}. Combining Corollary~\ref{cor:CP-convexbody} and Proposition~\ref{prop:simplexcorrespondence} allows us to prove Theorem~\ref{theorem:converse}. \begin{proof}[Proof of Theorem~\ref{theorem:converse}] Let $\Omega=[k]$. If $\overline{R}$ is a coherent upper prevision on $L^\infty$, we can write it as \citep[Theorem 3.6.1]{walley1991statistical}: \[ \overline{R}(X) = \sup \left\{E(X) \colon E \in \mathcal{E} \right\}, \quad \forall X \in L^\infty, \] for some weak* compact and convex set $\mathcal{E} \subseteq \operatorname{PF}(\Omega)$.
From \citep[Theorem 3.6.2]{walley1991statistical} we further know that \[ \overline{R}(X) = \sup \left\{E(X) \colon E \in \mathcal{E} \right\} = \sup \left\{E(X) \colon E \in \operatorname{ext} \mathcal{E} \right\},\quad \forall X \in L^\infty, \] where $\operatorname{ext}$ denotes the set of extreme points of $\mathcal{E}$.\footnote{A point $E \in \mathcal{E}$ is an extreme point of $\mathcal{E}$ if it cannot be written as a convex combination of any other elements in $\mathcal{E}$.} Then: \begin{align*} \overline{R}(X) &= \sup \left\{E(X) \colon E \in \mathcal{E} \right\}\\ &= \sup \left\{E(X) \colon E \in \operatorname{ext} \mathcal{E} \right\}\\ &\leq \sup \left\{E(X) \colon E \in \partial \mathcal{E} \right\}\\ &\leq \sup \left\{E(X) \colon E \in \mathcal{E} \right\} = \overline{R}(X), \end{align*} since $\operatorname{ext}\mathcal{E} \subseteq \partial \mathcal{E}$ and $\partial \mathcal{E} \subseteq \mathcal{E}$; note that $\mathcal{E}$ is closed. In summary, $\overline{R}(X) = \sup \left\{E(X) \colon E \in \partial \mathcal{E} \right\}$. Now choose $D \coloneqq \left\{\left(E(\chi_{\omega_1}),\ldots,E(\chi_{\omega_k})\right) \colon E \in \mathcal{E}\right\}$, which is a non-empty convex set in $\Delta^k$. We then obtain from Corollary~\ref{cor:CP-convexbody} a sequence ${\bm{v}}{\Omega} \colon \mathbb{N} \rightarrow [k]$ with $\operatorname{CP}(r^{{\bm{v}}{\Omega}})=\partial D$. But then it follows from Proposition~\ref{prop:simplexcorrespondence} that the sequence ${\bm{v}}{E}_{{\bm{v}}{\Omega}}$ has cluster points $\operatorname{CP}\left({\bm{v}}{E}_{{\bm{v}}{\Omega}}\right) = \partial \mathcal{E}$. Thus \[ \overline{R}(X) = \sup \left\{E(X) \colon E \in \operatorname{CP}\left({\bm{v}}{E}_{{\bm{v}}{\Omega}}\right) \right\}, \quad \forall X \in L^\infty, \] which concludes the proof. \end{proof} \citet{ivanenkobook} offers a somewhat similar result to Theorem~\ref{theorem:converse} by generalizing from sequences to \textit{sampling nets}. 
Ivanenko's \citeyearpar{ivanenkobook} main result states that ``any sampling directedness has a regularity, and any regularity is the regularity of some sampling directedness.'' \citep[Theorem 4.2]{ivanenkobook}. We provide a brief introduction to Ivanenko's setup in Appendix~\ref{app:ivanenkonets}. Our result is more parsimonious in the sense that it relies only on sequences, which are arguably more intuitive objects than such sampling nets. We observe that two sequences ${\bm{v}}{\Omega}_1$ and ${\bm{v}}{\Omega}_2$ might have different sets of cluster points $\operatorname{CP}\left({\bm{v}}{E}_{{\bm{v}}{\Omega}_1}\right)$, $\operatorname{CP}\left({\bm{v}}{E}_{{\bm{v}}{\Omega}_2}\right)$, but when their closed convex hulls coincide, the same upper probability and prevision are induced.\footnote{Assume $\overline{R}(X) \coloneqq \sup\left\{E(X) \colon E \in \mathcal{E}\right\}$. Then indeed $\overline{R}(X) = \sup\left\{E(X) \colon E \in \overline{\operatorname{co}} \mathcal{E}\right\}$, where $\overline{\operatorname{co}}$ denotes the weak* closure of the convex hull; cf.\@\xspace \citep[Section 3.6]{walley1991statistical}.} Thus, in light of the argument in Section~\ref{sec:inducedriskmeasure}, \textit{for the purpose of mass decision making}, we may consider these sequences equivalent. While in the classical case relative frequencies are the relevant description of a sequence, the statistical regularity provides an analogous description in the general case; moreover, we differentiate only ``up to the same convex hull'' for decision making. \subsection{Conditional Probability} \label{sec:unstablecondprob} Recall our sequence of unconditional finitely additive probabilities ${\bm{v}}{P}(n) \coloneqq A \mapsto \frac{1}{n} \sum_{i=1}^n \chi_A\left({\bm{v}}{\Omega}(i)\right)$. We want to define a similar sequence of \textit{conditional} finitely additive probabilities.
A very natural approach is the following: let $B \subseteq \Omega$ be such that $\vv{\Omega}(i) \in B$ for at least one $i \in \mathbb{N}$. We write $\Def{2^\Omega_{1+}}$ for the set of such events, i.e.\@\xspace events which occur at least once in the sequence. Define a sequence of conditional probabilities $\Def{{\bm{v}}{P}(\cdot|B)\colon \mathbb{N} \rightarrow \operatorname{PF}(\Omega)}$ by \begin{equation} \label{eq:defcondprobseq} \Def{{\bm{v}}{P}(\cdot|B)(n) \coloneqq \Psi\left(A \mapsto \frac{\sum_{i=1}^n (\chi_{A} \cdot \chi_B)\left({\bm{v}}{\Omega}(i)\right)}{\sum_{i=1}^n \chi_B\left({\bm{v}}{\Omega}(i)\right)}\right),} \end{equation} where we consider only those $\vv{\Omega}(i)$ which lie in $B$, and hence we adapt the relative frequencies to the occurrence of $B$. Informally, this is simply counting $|\mbox{$A$ and $B$ occurred}|/|\mbox{$B$ occurred}|$. Until $B$ occurs for the first time, the denominator will be $0$ and thus the mapping undefined (returning the falsum $\bot$); therefore, we define a wrapper function $\Def{\Psi \colon \operatorname{PF}(\Omega)\cup\{\bot\} \rightarrow \operatorname{PF}(\Omega)}$ as: \[ \Def{\Psi(P) \coloneqq \begin{cases} P & \text{ if } P \text{ is a finitely additive probability on } 2^\Omega\\ P_0 & \text{ otherwise}, \end{cases} } \] where $P_0$ is an arbitrary finitely additive probability on $2^\Omega$. This guarantees that ${\bm{v}}{P}(\cdot|B)$ is a sequence of valid finitely additive probabilities. Throughout, we demand that the event $B$ on which we condition is in $2^\Omega_{1+}$, i.e.\@\xspace occurs \emph{at least once} in the sequence. Note that this is a much weaker condition than demanding that $P(B)>0$ when $B$ is precise. Denote by $\Def{n_B}$ the smallest index such that ${\bm{v}}{\Omega}(n_B) \in B$. Note that ${\bm{v}}{P}(A|B)(n) = {\bm{v}}{P}(n)(A \cap B) / {\bm{v}}{P}(n)(B)$ for $n \geq n_B$. \begin{proposition} ${\bm{v}}{P}(\cdot|B)$ is a sequence of finitely additive probabilities.
\end{proposition} \begin{proof} For $n < n_B$, this is clear due to $\Psi$. Now let $n \geq n_B$. \ref{prop:pf1}: ${\bm{v}}{P}(\Omega|B)(n)=1$: obvious. \ref{prop:pf2}: If $A,C \subseteq \Omega$, $A \cap C = \emptyset$, then we show that ${\bm{v}}{P}(A \cup C|B)(n) = {\bm{v}}{P}(A|B)(n) + {\bm{v}}{P}(C|B)(n)$. \begin{align*} {\bm{v}}{P}(A \cup C|B)(n) &= \frac{\sum_{i=1}^n \left(\chi_{A\cup C} \cdot \chi_B\right)\left({\bm{v}}{\Omega}(i)\right)}{\sum_{i=1}^n \chi_B\left({\bm{v}}{\Omega}(i)\right)} \\ &= \frac{\sum_{i=1}^n \left(\chi_{A} \cdot \chi_B\right)\left({\bm{v}}{\Omega}(i)\right) + \sum_{i=1}^n \left(\chi_{C} \cdot \chi_B\right)\left({\bm{v}}{\Omega}(i)\right)}{\sum_{i=1}^n \chi_B\left({\bm{v}}{\Omega}(i)\right)}\\ &= {\bm{v}}{P}(n)(A\cap B) / {\bm{v}}{P}(n)(B) + {\bm{v}}{P}(n)(C\cap B) / {\bm{v}}{P}(n)(B)\\ &= {\bm{v}}{P}(A|B)(n) + {\bm{v}}{P}(C|B)(n). \end{align*} Here the second equality uses $\chi_{A \cup C} = \chi_{A} + \chi_{C}$: since $A$ and $C$ are disjoint, ${\bm{v}}{\Omega}(i)$ cannot lie in both at the same time for any $i$. \end{proof} Even though the probability is conditional, we deal with a sequence of finitely additive probabilities again. Hence, we can now essentially repeat the argument from Section~\ref{sec:ivanenkoformal}. To each ${\bm{v}}{P}(\cdot|B)(n)$, associate its uniquely corresponding linear prevision $\Def{{\bm{v}}{E}(\cdot|B)(n)}$, which is of course given by ($n \geq n_B$): \[ \Def{{\bm{v}}{E}(\cdot|B)(n) = \vv{\Sigma X}|B(n) \coloneqq X \mapsto \frac{\sum_{i=1}^n \left(X \cdot \chi_B\right)\left({\bm{v}}{\Omega}(i)\right)}{\sum_{i=1}^n \chi_B\left({\bm{v}}{\Omega}(i)\right)}.} \] It is easy to check that ${\bm{v}}{E}(\cdot|B)(n)$ is coherent. For $n<n_B$, set $\Def{{\bm{v}}{E}(\cdot|B)(n) = \operatorname{NatExt}(P_0)}$. From the weak* compactness of $\operatorname{PF}(\Omega)$, we obtain a non-empty closed set of cluster points $\operatorname{CP}({\bm{v}}{E}(\cdot|B))$.
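The counting behind Equation~\eqref{eq:defcondprobseq} can be mirrored directly in code. A minimal Python sketch (the sequence, the events, and the fallback value playing the role of $P_0$ are illustrative assumptions):

```python
from fractions import Fraction

def cond_prob(omega, n, A, B, fallback=Fraction(1, 2)):
    """P(A|B)(n) = |{i <= n : omega_i in A and B}| / |{i <= n : omega_i in B}|,
    with a fixed fallback value while the denominator is still zero
    (this models the wrapper Psi returning some P_0 before B first occurs)."""
    num = sum(1 for w in omega[:n] if w in A and w in B)
    den = sum(1 for w in omega[:n] if w in B)
    return Fraction(num, den) if den > 0 else fallback

omega = [3, 3, 1, 2, 1, 1, 2, 3, 1, 2]   # illustrative outcomes in {1, 2, 3}
A, B = {1}, {1, 2}
assert cond_prob(omega, 2, A, B) == Fraction(1, 2)   # B not seen yet: wrapper value
assert cond_prob(omega, 10, A, B) == Fraction(4, 7)  # 4 of 7 occurrences of B are in A
```

Note how only the terms with $\vv{\Omega}(i) \in B$ contribute, exactly as in the informal counting description above.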
\begin{definition} If $B \in 2^\Omega_{1+}$, we define the conditional upper prevision and the conditional upper probability as: \[ \Def{ \overline{R}(X|B) \coloneqq \sup\left\{\tilde{E}(X) \colon \tilde{E} \in \operatorname{CP}\left({\bm{v}}{E}(\cdot|B)\right)\right\}; \quad \overline{P}(A|B) \coloneqq \sup\left\{Q(A) \colon Q \in \operatorname{CP}\left({\bm{v}}{P}(\cdot|B)\right)\right\}, \quad A \subseteq \Omega.} \] \end{definition} Since $\overline{R}(\cdot|B)$ and $\overline{P}(\cdot|B)$ are expressed via an envelope representation,\footnote{An envelope representation expresses a coherent upper prevision as a supremum over a set of linear previsions.} they are automatically coherent \citep[Theorem 3.3.3]{walley1991statistical}. By similar reasoning as in Section~\ref{sec:ivanenkoformal}, we get the following representation. \begin{proposition} The conditional upper prevision (probability) can be represented as: \[ \overline{R}(X|B) = \limsup_{n \rightarrow \infty} \vv{\Sigma X}|B(n), \quad X \in L^\infty; \quad \overline{P}(A|B) = \limsup_{n \rightarrow \infty} {\bm{v}}{P}(A|B)(n), \quad A \subseteq \Omega. \] \end{proposition} Also, we obtain the corresponding lower quantities $\underline{R}(X|B) = \liminf_{n \rightarrow \infty} \vv{\Sigma X}|B(n)$ and $\underline{P}(A|B) = \liminf_{n \rightarrow \infty} {\bm{v}}{P}(A|B)(n)$. Note that these definitions also have reasonable frequentist semantics even when $B$ occurs only finitely often; then the sequence ${\bm{v}}{P}(\cdot|B)$ is eventually constant and we have $\lim_{n \rightarrow \infty} {\bm{v}}{P}(A|B)(n) = |\mbox{$A$ and $B$ occurred}|/|\mbox{$B$ occurred}|$. For instance, if $A$ and $B$ occur just once, but simultaneously, then $\overline{P}(A|B)=\underline{P}(A|B)=1$. This is an advantage over Kolmogorov's approach, where conditioning on events of measure zero is not meaningfully defined. We now further analyze the conditional upper probability and the conditional risk measure. As a warm-up, we consider the case of precise probabilities.
If for some event $A \subseteq \Omega$, we have $\overline{P}(A|B)=\underline{P}(A|B)$, we write $\Def{\tilde{P}(A|B) \coloneqq \lim_{n \rightarrow \infty} {\bm{v}}{P}(A|B)(n)}$. \begin{proposition} \label{prop:condprecise} Assume $P(B),P(A\cap B)$ exist for some $A,B \subseteq \Omega$ and $P(B)>0$. Then it holds that $\tilde{P}(A|B)=P(A|B)$, where $P(\cdot|B)$ is the conditional probability in the sense of Equation~\ref{eq:defcondmeasure}. \end{proposition} \begin{proof} \begin{align*} \tilde{P}(A|B) &= \lim_{n \rightarrow \infty} {\bm{v}}{P}(A|B)(n)\\ &= \lim_{n \rightarrow \infty} \frac{\frac{1}{n} \sum_{i=1}^n (\chi_{A} \cdot \chi_B)\left({\bm{v}}{\Omega}(i)\right)}{\frac{1}{n} \sum_{i=1}^n \chi_B\left({\bm{v}}{\Omega}(i)\right)}\\ &\overset{\ref{proof:prop34step1}}{=} \frac{\lim_{n \rightarrow \infty} \frac{1}{n} \sum_{i=1}^n (\chi_{A} \cdot \chi_B)\left({\bm{v}}{\Omega}(i)\right)}{\lim_{n \rightarrow \infty} \frac{1}{n} \sum_{i=1}^n \chi_B\left({\bm{v}}{\Omega}(i)\right)}\\ &\overset{}{=}\frac{P(A\cap B)}{P(B)}\\ &\overset{\ref{proof:prop34step2}}{=} P(A|B). \end{align*} \begin{enumerate}[nolistsep,label=(\arabic*), ref=(\arabic*)] \item \label{proof:prop34step1} The limits exist by assumption and the denominator is $>0$. \item \label{proof:prop34step2} In the sense of Equation~\ref{eq:defcondmeasure}. \end{enumerate} \end{proof} Thus, when the relative frequencies of $B$ and $A \cap B$ converge, we reproduce the classical definition of conditional probability. Now what happens under non-convergence? 
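Before turning to the non-convergent case, the convergent case of Proposition~\ref{prop:condprecise} can be checked numerically on a periodic sequence, where all relative frequencies converge. A minimal Python sketch (the period and the events are illustrative assumptions):

```python
from fractions import Fraction

period = [1, 1, 2, 3]            # illustrative period: limits P(1)=1/2, P(2)=1/4, P(3)=1/4
omega = period * 1000            # a long prefix consisting of whole periods
A, B = {1}, {1, 2}

n = len(omega)
p_AB = Fraction(sum(1 for w in omega if w in A and w in B), n)  # relative freq of A∩B
p_B = Fraction(sum(1 for w in omega if w in B), n)              # relative freq of B
# At a whole number of periods the relative frequencies equal their limits exactly,
# so the conditional relative frequency equals P(A∩B)/P(B) = (1/2)/(3/4).
assert p_AB / p_B == Fraction(2, 3)
```

Exact rational arithmetic makes the check robust; at a whole number of periods the empirical ratios coincide with their limiting values.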
\subsection{The Generalized Bayes Rule} We now relax the assumptions of Proposition~\ref{prop:condprecise} and only demand that $\underline{P}(B)>0$.\footnote{This condition is indispensable in order to make the connection to the generalized Bayes rule.} Then we observe that the conditional risk measure coincides with the \textit{generalized Bayes rule}, which is an important updating principle in imprecise probability (see e.g.\@\xspace \citep{introtoiplowerprev}). The unconditional set of desirable gambles is: \[ \Def{\mathcal{D}_{{\bm{v}}{\Omega}} \coloneqq \left\{X \in L^\infty \colon \limsup_{n \rightarrow \infty} \vv{\Sigma X} \leq 0\right\} = \left\{X \in L^\infty\colon \overline{R}(X) \leq 0\right\}.} \] \begin{definition} \label{def:gbrdef} For $\underline{P}(B)>0$, we define the \Def{\emph{conditional set of desirable gambles}} as: \[ \Def{\mathcal{D}_{{\bm{v}}{\Omega}|B} \coloneqq \left\{X \in L^\infty \colon X \chi_B \in \mathcal{D}_{{\bm{v}}{\Omega}}\right\} = \left\{X \in L^\infty\colon \limsup_{n \rightarrow \infty} {\bm{v}}{\Sigma (X\chi_B)} \leq 0\right\},} \] and a corresponding upper prevision, which we call the \Def{\emph{generalized Bayes rule}}, as: \begin{align} \label{eq:ourgbrdef} \Def{\operatorname{GBR}(X|B)} &\Def{\coloneqq \inf\left\{\alpha \in {\bm{R}} \colon X - \alpha \in \mathcal{D}_{{\bm{v}}{\Omega}|B}\right\}}\\ &= \inf\left\{\alpha \in {\bm{R}} \colon \chi_B (X - \alpha) \in \mathcal{D}_{{\bm{v}}{\Omega}}\right\}\nonumber\\ &= \inf\left\{\alpha \in {\bm{R}} \colon \overline{R}\left(\chi_B (X - \alpha)\right) \leq 0\right\}.\nonumber \end{align} \end{definition} \begin{remark} \label{remark:gbrdef} \normalfont In fact, \citet{walley1991statistical} defines the generalized Bayes rule as the solution of $\overline{R}(\chi_B(X-\alpha))=0$ for $\alpha$. 
It can be checked that this solution coincides with Definition~\ref{def:gbrdef},\footnote{The conditional set of desirable gambles is considered for instance in \citep{augustin2014introduction} and \citep{wheeler2021gentle}, but there the link to the generalized Bayes rule is not made technically clear.} see Appendix~\ref{app:gbrdefcoincidence}. \end{remark} \begin{proposition} \label{prop:gbrcoincidence} Let $\underline{P}(B)>0$. It holds that $\overline{R}(X|B)=\operatorname{GBR}(X|B)$. \end{proposition} \begin{proof} It is not hard to check that $\overline{R}(\cdot|B)$ is a coherent upper prevision on $L^\infty$, hence we can represent it as \citep[Theorem 3.8.1]{walley1991statistical}: \[ \overline{R}(X|B) = \inf\left\{\alpha \in {\bm{R}}\colon X-\alpha \in \mathcal{D}_{\overline{R}(\cdot|B)}\right\}, \quad \mbox{\ where\ \ } \Def{\mathcal{D}_{\overline{R}(\cdot|B)} \coloneqq \left\{X \in L^\infty\colon \overline{R}(X|B) \leq 0\right\}}. \] We show that $\overline{R}(X|B)=\operatorname{GBR}(X|B)$ by showing that $\mathcal{D}_{\overline{R}(\cdot|B)} = \mathcal{D}_{{\bm{v}}{\Omega}|B}$. Let $X \in L^\infty$. On the one hand, we know \begin{align} X \in \mathcal{D}_{{\bm{v}}{\Omega}|B} \Longleftrightarrow X\chi_B \in \mathcal{D}_{{\bm{v}}{\Omega}} &\Longleftrightarrow \overline{R}\left(X\chi_B\right) \leq 0\nonumber\\ \label{eq:generalizedbayesrulelimsup}&\Longleftrightarrow \limsup_{n \rightarrow \infty} \sum_{i=1}^n \frac{(X\chi_B) \left({\bm{v}}{\Omega}(i)\right)}{n} \leq 0. \end{align} On the other hand, \begin{align} \label{eq:frequencyconditionallimsup} X \in \mathcal{D}_{\overline{R}(\cdot|B)} \Longleftrightarrow \overline{R}(X|B) \leq 0 \Longleftrightarrow \limsup_{n \rightarrow \infty} \frac{\frac{1}{n} \sum_{i=1}^n (X\chi_B)\left({\bm{v}}{\Omega}(i)\right)}{\frac{1}{n} \sum_{i=1}^n \chi_B\left({\bm{v}}{\Omega}(i)\right)} \leq 0. 
\end{align} It remains to show that the two limit statements (Equation~\ref{eq:generalizedbayesrulelimsup} and Equation~\ref{eq:frequencyconditionallimsup}) are equivalent. Due to the limit operation we can neglect the terms $n=1,\ldots,n_B-1$. Define $\Def{{\bm{v}}{a}(n) \coloneqq \frac{1}{n}\sum_{i=1}^n (X\chi_B)\left({\bm{v}}{\Omega}(i)\right)}$ and $\Def{{\bm{v}}{b}(n) \coloneqq \frac{1}{n} \sum_{i=1}^n \chi_B\left({\bm{v}}{\Omega}(i)\right)}$. Then ${\bm{v}}{b}(n) \in (0,1]$ for $n \geq n_B$, and also $0 < \liminf_{n \rightarrow \infty} {\bm{v}}{b}(n)$ since $\underline{P}(B)>0$. Thus we can leverage Lemma~\ref{lemma:gbrlemma},\footnote{To rigorously apply the Lemma, we would again introduce a wrapper for the sequence ${\bm{v}}{b}(n)$ to ensure strict positivity, since finitely many terms $i=1,\ldots, n_B-1$ might be zero.} included in Appendix~\ref{app:gbrlemma}, to show: \begin{align*} \limsup_{n \rightarrow \infty} {\bm{v}}{a}(n) \leq 0 &\Longleftrightarrow \limsup_{n \rightarrow \infty} \frac{{\bm{v}}{a}(n)}{{\bm{v}}{b}(n)} \leq 0. \end{align*} \end{proof} \begin{remark} \normalfont \label{remark:wronggbr} Note that $X \mapsto \limsup_{n \rightarrow \infty} {\bm{v}}{\Sigma (X\chi_B)} = \overline{R}\left(X \chi_B\right)$ is \emph{not} in general a coherent upper prevision on $L^\infty$, as it can violate \ref{item:UP1}; see Appendix~\ref{app:wronggbr}. In general, we have $\operatorname{GBR}(X|B)\neq \overline{R}\left(X \chi_B\right)$. \end{remark} As a consequence, we can apply the classical representation result for the generalized Bayes rule. \begin{corollary} If $\underline{P}(B)>0$, the conditional risk measure can be obtained by updating each linear prevision in the set of cluster points, that is: \[ \overline{R}(X|B) = \sup\left\{E(X|B) \colon E \in \operatorname{CP}\left({\bm{v}}{E}\right)\right\}, \] where conditioning of the linear previsions is in the sense of Definition \ref{def:condlinearprev}.
\end{corollary} This follows from \citep[Theorem 6.4.2]{walley1991statistical}. Intuitively, it makes no difference whether we consider the cluster points of the sequence of conditional probabilities or whether we condition all probabilities in the set of cluster points in the classical sense. \section{Existence of Sequences with Prespecified Relative Frequency Cluster Points} \label{app:construction} In this section we prove Theorem \ref{th:CP-r-C} and thus demonstrate the existence of sequences $x\colon\mathbb{N}\rightarrow[k]$ whose corresponding relative frequencies $r^x\colon\mathbb{N}\rightarrow\Delta^k$ satisfy $\operatorname{CP}(r^x)=C$, where $C$ is an arbitrary closed rectifiable curve in $\Delta^k$. We do so constructively by providing an explicit procedure which takes a chosen $C$ and constructs a suitable $x$. We illustrate our method with some examples. \citet[p.\@\xspace 11]{mises1964mathematical} considered a binary sequence given by \[ x=\left\langle 0^{[1]}, 1^{[1]}, 0^{[2]}, 1^{[2]}, 0^{[4]}, 1^{[4]}, \ldots, 0^{[2^i]}, 1^{[2^i]}, \ldots \right\rangle \] for $i\rightarrow\infty$. (The notation $\Def{i^{[j]}}$ here means $j$ repetitions of $i$.) It is a straightforward calculation to show that the induced relative frequencies have all elements of $\left[\frac{1}{3},\frac{2}{3}\right]$ as cluster points. When $x\colon\mathbb{N}\rightarrow [k]$ with $k>2$, a much richer set of behaviors of $\operatorname{CP}(r^x)$ is possible. The question of how common sequences with non-convergent relative frequencies are is addressed in Section~\ref{sec:pathologies-or-norm}. \subsection{Sufficient to Work With Topology Induced by Euclidean Norm on the Simplex} \label{app:euclideansufficient} Let $|\Omega|=k<\infty$. A linear prevision $E$ on $L^\infty$ is in a one-to-one correspondence with a finitely additive probability $P$, which we can represent as a point in the $(k-1)$-simplex (see Lemma~\ref{lemma:pfsimplex} below).
It is convenient to then consider the cluster points of a sequence of such probabilities in the $(k-1)$-simplex, with respect to the topology induced by the Euclidean norm on ${\bm{R}}^k$. We show that the notion of such a cluster point coincides with a cluster point in $(\linfty)^*$ with respect to the weak* topology. This is the goal of this section; in particular, we prove Proposition~\ref{prop:simplexcorrespondence}. Since $|\Omega|=k<\infty$, we can represent any $X \in L^\infty$ as: \[ X = c_1 \chi_{\{\omega_1\}} + \cdots + c_k \chi_{\{\omega_k\}}, \] where $c_i=X(\omega_i)$. Similarly, any $Z \in (\linfty)^*$ can be represented as: \begin{align*} Z(X) &= Z\left(c_1 \chi_{\{\omega_1\}} + \cdots+ c_k \chi_{\{\omega_k\}}\right)\\ &= c_1 Z\left(\chi_{\{\omega_1\}}\right) + \cdots + c_k Z\left(\chi_{\{\omega_k\}}\right), \end{align*} since $Z$ is a linear functional; the coefficients $c_i$ depend only on $X$. Intuitively, $Z\left(\chi_{\{\omega_i\}}\right) = P(\omega_i)$ if $Z \in \operatorname{PF}(\Omega)$. Define $\Def{d_i \coloneqq Z\left(\chi_{\{\omega_i\}}\right)}$ $\forall i \in [k]$ and consequently define \[ \Def{\|Z\| \coloneqq \sqrt{d_1^2 + \cdots + d_k^2}.} \] For a given $Z \in (\linfty)^*$, call $\Def{d_Z\coloneqq(d_1,\ldots,d_k)} \in {\bm{R}}^k$ the \Def{coordinate representation of $Z$}. \begin{lemma} $\|\cdot\|$ is a norm on $(\linfty)^*$. \end{lemma} \begin{proof} \emph{Point-separating property:} $\|Z\|=0$ if and only if $Z=0$, where $0 \in (\linfty)^*$ is given by $0(X)=0$, $\forall X \in L^\infty$. This is easily observed, due to the similar property holding for the Euclidean norm: $\|Z\|=0$ if and only if $d_i=0$ $\forall i=1,\ldots,k$. Since $d_i =Z\left(\chi_{\{\omega_i\}}\right)$, this is the case exactly if $Z=0$. \emph{Subadditivity:} $\|Y+Z\| \leq \|Y\| + \|Z\|$.
For the input $Y+Z$ we get $d_i = (Y+Z)\left(\chi_{\{\omega_i\}}\right) = Y\left(\chi_{\{\omega_i\}}\right) + Z\left(\chi_{\{\omega_i\}}\right)$ and then subadditivity follows from the similar property for the Euclidean norm. \emph{Absolute homogeneity:} $\|\lambda Z\| = |\lambda| \|Z\|$, $\forall \lambda \in {\bm{R}}$. This follows easily. With these properties, $\|\cdot\|$ is a valid norm on $(\linfty)^*$. \end{proof} We now show that the $(k-1)$-simplex is in a one-to-one correspondence with the set of linear previsions $\operatorname{PF}(\Omega)$ via the coordinate representation. \begin{lemma} \label{lemma:pfsimplex} Let $d \in \Delta^k$. Then $Z(X) \coloneqq c_1 d_1 + ... + c_k d_k \in \operatorname{PF}(\Omega)$. Conversely, let $Z \in \operatorname{PF}(\Omega)$. Then the corresponding $d_Z \in \Delta^k$. \end{lemma} \begin{proof} Let $d \in \Delta^k$ and $Z(X) \coloneqq c_1 d_1 + \cdots + c_k d_k$. Since $\sum_{i=1}^k d_i = 1$ we have immediately that $Z\left(\chi_\Omega\right)=1$, noting that $\chi_\Omega= 1 \chi_{\{\omega_1\}} + \cdots+ 1 \chi_{\{\omega_k\}}$. Also, if $X \geq 0$, i.e.\@\xspace $c_i \geq 0$, $\forall i \in [k]$, then $Z(X) \geq 0$ since $d_i \geq 0$ $\forall i$. Thus, $Z \in \operatorname{PF}(\Omega)$. Conversely, let $Z$ be a linear prevision, i.e.\@\xspace $Z\left(\chi_\Omega\right)=1$ and $Z(X) \geq 0$ if $X \geq 0$. From $Z\left(\chi_\Omega\right)=1$ we can deduce that $\sum_{i=1}^k d_i = 1$. If $X \geq 0$, we know that $c_i \geq 0$ $\forall i \in [k]$, hence $Z(X) \geq 0$ can only be true if all $d_i \geq 0$. Thus $d_Z \in \Delta^k$. \end{proof} We here restate Proposition~\ref{prop:simplexcorrespondence} for convenience. \begin{proposition} \label{prop:equivofclusterpoints} Let ${\bm{v}}{E}(n) : \mathbb{N} \rightarrow \operatorname{PF}(\Omega)$ be a sequence of linear previsions with underlying probabilities ${\bm{v}}{P}(n) \coloneqq A \mapsto {\bm{v}}{E}(n)(A)$. 
Then $E \in \operatorname{CP}\left({\bm{v}}{E}(n)\right)$ with respect to the weak* topology if and only if the sequence ${\bm{v}}{D} \colon \mathbb{N} \rightarrow \Delta^k$, ${\bm{v}}{D}(n) \coloneqq \left({\bm{v}}{P}(n)(\omega_1),\ldots, {\bm{v}}{P}(n)(\omega_k)\right)$ has as cluster point $d_E=\left(E\left(\chi_{\{\omega_1\}}\right),\ldots,E\left(\chi_{\{\omega_k\}}\right)\right)$ with respect to the topology induced by the Euclidean norm on ${\bm{R}}^k$. \end{proposition} First note that if $E \in \operatorname{PF}(\Omega)$, then $d_E = \left(E\left(\chi_{\{\omega_1\}}\right), \ldots, E\left(\chi_{\{\omega_k\}}\right)\right) = \left(P(\omega_1),\ldots,P(\omega_k)\right)$, where $P$ is the underlying probability of $E$, and hence $\|E\| = \sqrt{P(\omega_1)^2 + \cdots + P(\omega_k)^2}$. To complete the proof, we need some further statements first. \begin{definition} A vector space $\mathcal{X}$ is called a topological vector space if the topology on $\mathcal{X}$ is such that $(x,y) \mapsto x+y$ is continuous with respect to the product topology on $\mathcal{X} \times \mathcal{X}$ and $(\lambda,x) \mapsto \lambda x$ is continuous with respect to the product topology on ${\bm{R}} \times \mathcal{X}$. We call a topology which makes $\mathcal{X}$ a topological vector space a \emph{linear topology}. \end{definition} \begin{remark} The weak* topology makes $(\linfty)^*$ a topological vector space with a Hausdorff topology,\footnote{See for instance Exercises 13 and 14 in \citet{taoweakstar}.} where vector addition and scalar product are defined pointwise: $Y+Z \coloneqq X \mapsto Y(X) + Z(X)$ $\forall X \in L^\infty$, $\lambda Z \coloneqq X \mapsto \lambda Z(X)$ $\forall X \in L^\infty$. \end{remark} \begin{remark}[well-known] \label{remark:normtvs} A vector space whose topology is induced by a norm is a topological vector space.
\end{remark} \begin{proposition} \label{prop:tvsonetopology} On every finite dimensional vector space $X$ there is a unique Hausdorff topological vector space structure. In other words, any two Hausdorff linear topologies on $X$ coincide \citep{tvs3notes}. \end{proposition} Now Proposition~\ref{prop:equivofclusterpoints} directly follows. \begin{proof}[Proof of Proposition~\ref{prop:equivofclusterpoints}] Our norm $\|\cdot\|$ makes $(\linfty)^*$ a topological vector space due to Remark~\ref{remark:normtvs}, and any topology induced by a norm is Hausdorff; but the weak* topology also makes $(\linfty)^*$ a topological vector space, and the weak* topology is Hausdorff. Since $|\Omega|=k<\infty$, the space $(\linfty)^*$ is finite-dimensional, so we can conclude from Proposition~\ref{prop:tvsonetopology} that the two topologies coincide. But then the two notions of cluster point also coincide, since being a cluster point depends only on the topology. \end{proof} Thus, for the proof of Theorem~\ref{theorem:converse}, we will work exclusively with the topology induced by the Euclidean metric restricted to $\Delta^k$. For $z\in\Delta^k$ and $\epsilon>0$ define the \Def{$\epsilon$-neighbourhood (ball)} \[ \Def{N_\epsilon(z)\coloneqq\left\{p\in\Delta^k\colon \|p-z\|<\epsilon\right\}} \] where $\|\cdot\|$ is the Euclidean norm (restricted to the simplex). Then from \citep[p.\@\xspace 430]{schechter1997handbook} we have an equivalent definition of a cluster point: \begin{definition} Say that $z\in\Delta^k$ is a \Def{\emph{cluster point}} of a sequence $r\colon\mathbb{N}\rightarrow\Delta^k$ (and denote by $\Def{\operatorname{CP}(r)}$ the \Def{\emph{set of all cluster points of $r$}}) if for all $\epsilon>0$, $\left|\{n\in\mathbb{N}\colon r(n)\in N_\epsilon(z)\}\right|=\aleph_0$, where $\Def{\aleph_0\coloneqq|\mathbb{N}|}$ is the cardinality of the natural numbers. \end{definition} \subsection{Further Notation} We will work solely with the topology on $\Delta^k$ induced by the Euclidean metric; by the argument in the previous subsection the cluster points w.r.t.
this topology coincide with those w.r.t. the weak* topology. We introduce further notation to assist in stating our algorithm. The $i$th canonical unit vector in $\Delta^k$ is denoted $\Def{e_i\coloneqq(0,\ldots,1,\dots,0)}$, where the 1 is in the $i$th position. The (relative) boundary of the simplex is \[ \Def{\partial\Delta^k\coloneqq\{(z_1,\ldots,z_k)\in\Delta^k \colon z_i=0 \text{ for some } i\in[k]\}.} \] If $p_1,p_2\in\Delta^k$ then $\Def{l(p_1,p_2)\coloneqq\{\lambda p_1+(1-\lambda) p_2\colon \lambda\in [0,1]\}}$ is the \Def{\emph{line segment connecting $p_1$ and $p_2$}}. If $C\subset\Delta^k$ is a rectifiable closed curve parametrised by $c\colon[0,1]\rightarrow\Delta^k$, its length is $\Def{\operatorname{length}(C)=\int_0^1 |c'(t)| dt}$. For $y\in\mathbb{R}$, $\Def{\lfloor y\rceil}$ is the nearest integer to $y$: $\Def{\lfloor y\rceil\coloneqq\lfloor y+\textstyle\frac{1}{2}\rfloor}$. We apply certain operations elementwise. For example, if $z=\langle z_1,\ldots,z_k\rangle\in\Delta^k$, $\iota\in\mathbb{N}^k$ and $T\in\mathbb{N}$, then $\Def{\lfloor Tz\rceil\coloneqq\langle\lfloor Tz_1\rceil,\ldots, \lfloor T z_k\rceil\rangle}$ and $\Def{\iota/T}\in\mathbb{R}^k$ is simply $\langle\iota_1/T,\ldots,\iota_k/T\rangle$. For $i<j$ in $\mathbb{N}$, the ``interval'' $\Def{[i,j]\coloneqq \{m\in \mathbb{N}\colon i\le m \le j\}}$. To avoid confusion, we will reserve ``sequence'' for the infinitely long $x\colon\mathbb{N}\rightarrow [k]$ and use ``\Def{segment}'' to denote finite-length strings $z\colon[n]\rightarrow [k]$, which we will write explicitly as $\langle z_1,\ldots, z_n\rangle$. We construct the sequence $x$ attaining the desired behavior of $r^x$ by iteratively appending a series of segments. We denote the empty segment as $\langle\,\rangle$.
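The elementwise rounding $\lfloor Tz\rceil$ is the device by which a point $z\in\Delta^k$ is approximated by integer counts out of roughly $T$ draws. A minimal Python sketch of this rounding (the target point $z$ and the scale $T$ are illustrative assumptions):

```python
import math

def round_half_up(y):
    """Nearest integer via floor(y + 1/2), matching the convention in the text."""
    return math.floor(y + 0.5)

def approx_counts(z, T):
    """Elementwise rounding: the integer count vector floor(T*z + 1/2)."""
    return [round_half_up(T * zi) for zi in z]

z = (0.2, 0.3, 0.5)  # an illustrative point of the 2-simplex
T = 7
counts = approx_counts(z, T)  # here [1, 2, 4]; in general the sum may miss T slightly
approx = [c / sum(counts) for c in counts]
err = max(abs(a - zi) for a, zi in zip(approx, z))
assert err <= 1 / T  # for this example, within 1/T per coordinate
```

For this particular $z$ and $T$ the rounded counts happen to sum to $T$ exactly; in general small corrections are needed to keep the total length exact.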
If $x^1$ and $x^2$ are two finite segments of lengths $\ell_1$ and $\ell_2$ then their \Def{concatenation} is the length $\ell_1+\ell_2$ segment $\Def{x^1 x^2\coloneqq \left\langle x_1^1,\ldots,x_{\ell_1}^1, x_1^2,\ldots, x_{\ell_2}^2\right\rangle}$. We extend the $i^{[j]}$ notation to segments: if $z=\langle z_1,\ldots,z_\ell\rangle$, then $\Def{z^{[\iota]}\coloneqq\langle z,z,\ldots, z\rangle} $ is the length $\ell\iota$ segment formed by concatenating $\iota$ copies of $z$. Given $n\in\mathbb{N}$ and a sequence $x\colon\mathbb{N}\rightarrow [k]$, the \Def{shifted sequence $x^{+n}\colon\mathbb{N}\rightarrow [k]$} is defined via $\Def{x^{+n}(i)\coloneqq x(i+n)}$ for $i\in\mathbb{N}$. \subsection{Properties of Relative Frequency Sequences} Our construction of $x$ relies upon the following elementary property of relative frequency sequences. \begin{lemma} \label{lemma:r-decomposition} Suppose $k,n,m\in\mathbb{N}$, $x\colon\mathbb{N}\rightarrow [k]$. Then \begin{equation} r^x(n+m) = \displaystyle\frac{n}{n+m} r^x(n) +\frac{m}{n+m} r^{x^{+n}}(m). \label{eq:r-decomposition} \end{equation} \end{lemma} \begin{proof} For any $i\in[k]$ we have \begin{align*} r_i^x(n+m) &= \textstyle\frac{1}{n+m}|\{j\in[n+m]\colon x(j)=i\}|\\ &= \textstyle\frac{1}{n+m}\left(|\{j\in[n]\colon x(j)=i\}| + |\{j\in[n+m]\setminus[n]\colon x(j)=i\}|\right)\\ &= \textstyle\frac{1}{n+m}\frac{n}{n} |\{j\in[n]\colon x(j)=i\}| + \frac{1}{n+m}\frac{m}{m} |\{t\in[m]\colon x(n+t)=i\}|\\ &= \textstyle\frac{n}{n+m} r_i^x(n) + \frac{m}{n+m} \frac{1}{m}|\{t\in[m]\colon x^{+n}(t)=i\}|\\ & = \textstyle\frac{n}{n+m} r_i^x(n) + \frac{m}{n+m} r_i^{x^{+n}}(m). \end{align*} Since this holds for all $i\in[k]$ we obtain \eqref{eq:r-decomposition}. \end{proof} Observe that (\ref{eq:r-decomposition}) also holds when $x\colon[n+m]\rightarrow [k]$ is a segment, in which case $x^{+n}=\langle x_{n+1},\ldots,x_{n+m}\rangle$.
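Lemma \ref{lemma:r-decomposition} is also easy to check numerically. The following sketch (Python; the helper names \texttt{rel\_freq} and \texttt{shift} are ours, not part of the paper's notation) verifies the identity in exact rational arithmetic for a small segment over $[k]$:

```python
from fractions import Fraction

def rel_freq(x, n, k):
    """r^x(n): relative frequencies of the symbols 1..k among x(1),...,x(n)."""
    return [Fraction(x[:n].count(i), n) for i in range(1, k + 1)]

def shift(x, n):
    """x^{+n}: the segment with its first n elements dropped."""
    return x[n:]

# A segment over [k] with k = 3 (symbols are 1-indexed, as in the paper).
x, k = [1, 2, 2, 3, 1, 1, 2, 3, 3, 1], 3
n, m = 6, 4

lhs = rel_freq(x, n + m, k)
rn = rel_freq(x, n, k)
rm = rel_freq(shift(x, n), m, k)
rhs = [Fraction(n, n + m) * a + Fraction(m, n + m) * b for a, b in zip(rn, rm)]
assert lhs == rhs  # eq. (r-decomposition) holds exactly
```

Because the arithmetic is exact, this confirms the identity itself rather than a floating-point approximation of it.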
Furthermore note that (\ref{eq:r-decomposition}) is a convex combination of the two points $r^x(n)$ and $r^{x^{+n}}(m)$ in $\Delta^k$ since $\frac{n}{n+m}+\frac{m}{n+m}=1$ and both coefficients are positive. These two points are (respectively) the relative frequency of $x$ at $n$, and the relative frequency of $x^{+n}$ at $m$. This latter sequence will be the piece ``added on'' at each stage of our construction and forms the basis of our piecewise linear construction of $r^x$ such that its cluster points are a given $C\subset\Delta^k$. The set of cluster points of any sequence is closed. In addition, we have \begin{lemma} \label{lem:connected} For any $k\in\mathbb{N}$ and $x\colon\mathbb{N}\rightarrow [k]$, $\operatorname{CP}(r^x)$ is a connected set. \end{lemma} This follows immediately from \citep[Lemma 2.6]{bauschke2015} upon observing that $\lim_{n\rightarrow\infty} \|r^x(n)-r^x(n+1)\|=0$, since $\|r^x(n)-r^x(n+1)\| =\|r^x(n)-\frac{n}{n+1}r^x(n)-\frac{1}{n+1} e_{x(n+1)}\|= \frac{1}{n+1}\|r^x(n)-e_{x(n+1)}\|\le \frac{2}{n+1}$. The boundedness of $r^x$ is essential for this to hold --- for unbounded sequences the set of cluster points need not be connected \citep{avsic1970limit}. \subsection{Logic of the Construction} \label{app:logicofconstruction} The idea of our construction is as follows (see Figure \ref{fig:construction} below for a visual aid). In order to satisfy the definition of cluster points, we need to return to each neighbourhood of each point in $C$ infinitely often.
To that end we iterate through an infinite sequence of generations indexed by $g$. For each $g$, we approximate $C$ by a polygonal approximation $C^g$ comprising $V^g$ separate segments. We choose the sequence $(V^g)_{g\in\mathbb{N}}$ so that $C^g$ approaches $C$ in an appropriate sense. Then for generation $g$ we append elements to $x$ to ensure the sequence of relative frequencies makes another cycle approximately following $C^g$. We control the approximation error of this process and ensure that it too decreases with increasing $g$. We now describe the construction of a single generation. Thus suppose $g$ is now fixed and suppose the current partial sequence (segment) $x$ has length $n$. We suppose (and will argue later that this is justified) that $r^x(n)$ is close to one of the vertices of $C^{g-1}$. We then choose a finer approximation $C^g$ of $C$ (since $V^g>V^{g-1}$). For each vertex $p_v^g$, $v\in[V^g]$, we append elements to $x$, resulting in a segment of length $n'$. We do this in a manner such that we move the relative frequency from $r^x(n)$ to $r^x(n')\approx p_v^g$. We do so by appending multiple copies of a segment $y$ to $x$, where the relative frequency vector of $y$ points (approximately) in the direction one needs to go from $p_{\mathrm{old}}$ to $p_{\mathrm{new}}$. This can only be done approximately because, with a finite-length segment, the set of directions in which one can move the relative frequencies is quantized. We choose the quantization fine enough to achieve the accuracy we need; this is governed by the parameter $T\in\mathbb{N}$. We then append $\tilde{\ell}$ copies of $y$ to $x$, where $\tilde{\ell}$ is the real number $\ell$ (the ideal number of repetitions needed to reach the desired point $p_{\mathrm{new}}$) rounded up to an integer. We also control the error incurred by approximating $\ell$ by $\tilde{\ell}$.
The upshot of this is that with the resulting extension to $x$ we have that $r^x(n')$ is sufficiently close to $p_{\mathrm{new}}$. We then repeat this operation for all the vertices $p_v^g$ for $v\in[V^g]$. This completes generation $g$. We show below that for each generation $g$, \emph{all} the points in the relative frequency sequence are adequately close to $C^g$, where ``adequately close'' is quantified and increases in accuracy as $g$ increases. \begin{algorithm}[t] \caption{Construction of $x$ such that $\operatorname{CP}(r^x)=C$\label{alg:C}} \begin{algorithmic}[1] \Require $C\subset\Delta^k$, a rectifiable closed curve parametrized as $c\colon[0,1]\rightarrow\Delta^k$ \Require $V\colon\mathbb{N}\rightarrow\mathbb{N} $ \Comment{Number of segments at generation $g$; as a function of $g$} \Require $T\colon\mathbb{N}\rightarrow\mathbb{N} $ \Comment{Controls quantization of angle; needs to be increasing} \State $x\gets \langle 1\rangle$ \Comment{Arbitrary initialization $x_1=1$} \State $p_{\mathrm{old}}\gets e_1$ \Comment{${p_{\mathrm{old}}}=r^{\langle 1\rangle}(1)=e_1$} \State $n\gets 1$ \Comment{$n$ is always updated to correspond to the current length of $x$} \State $g \gets 1$ \While{{true}} \Comment{Iterate over repeated generations $g$; $V$ is chosen at start of generation} \State $V\gets V(g)$ \Comment{Choose $V$ for generation $g$} \State $p_v \gets c(v/V) \ \mbox{for\ } v=0,\ldots,V$ \Comment{Vertices of $C^g\coloneqq \bigcup_{v\in[V]} l(p_{v-1},p_{v})$ } \State $v \gets 0$ \While{$v< V$} \Comment{For all segments of $C^g$} \State $T \gets T(n)$ \Comment{Quantization of angle; chosen per segment} \State $p_{\mathrm{new}} \gets p_{v+1}$ \Comment{The next vertex of $C^g$} \State $\gamma_i\gets {p_{\mathrm{old},i}}/(p_{\mathrm{old},i}- p_{\mathrm{new},i}) \mbox{\ for\ } i\in[k]$ \Comment{Will have $p_{\mathrm{old}}\approx p_v$} \State $\gamma\gets\min\{\gamma_i\colon i\in[k], \gamma_i>0\}$ \Comment{See (\ref{eq:gamma-def})} \State $p^*\gets
\gamma(p_{\mathrm{new}}-p_{\mathrm{old}}) + p_{\mathrm{old}}$ \Comment{Determine $p^*\in\partial\Delta^k$} \State $\iota\gets \lfloor Tp^*\rceil$ \Comment{Elementwise; $\iota=(\iota_1,\ldots,\iota_k)$} \State $\tilde{p}^*\gets \iota/T $ \Comment{Elementwise; quantized version of $p^*$} \State $\tilde{T}\gets \sum_{i=1}^k \iota_i$ \Comment{Will have $\tilde{T}\approx T$} \State $y\gets \langle 1^{[\iota_1]},\ldots,k^{[\iota_k]}\rangle$ \Comment{The string $y$ is thus of length $\tilde{T}$} \State $\tilde{\ell}\gets {\lceil\frac{n}{T(\gamma-1)}\rceil}$ \Comment{Integer number of repetitions of $y$ needed} \State $x\gets x\, y^{[\tilde{\ell}]}$ \Comment{Construct new $x$ by appending $z=y^{[\tilde{\ell}]}$, comprising $\tilde{\ell}$ copies of $y$} \State $n\gets n+\tilde{\ell}\tilde{T}$ \Comment{Length of $x$ now} \State $p_{\mathrm{old}}\gets r^x(n)$ \Comment{Relative frequency at current $n$} \State $v\gets v+1$ \Comment{Move onto next vertex of $C^g$} \EndWhile \State $g\gets g+1$ \Comment{Move onto next generation of the construction} \EndWhile \Comment{Procedure never terminates} \end{algorithmic} \end{algorithm} We consistently use the following terminology in describing our algorithm: \begin{description}[nolistsep] \item[generation] These are indexed by $g$ and entail an entire pass around the curve $C$, or more precisely its polygonal approximation $C^g\coloneqq \bigcup_{v\in[V^g]} l(p_{v-1},p_{v})$. \item[segment] Corresponds to a single line segment $ l(p_{v-1},p_{v})$ of the $g$th polygonal approximation. \item[piece] Corresponds to appending one copy of $y=\langle 1^{[\iota_1]},\ldots,k^{[\iota_k]}\rangle $ to $x$, which results in moving $r^x(n)$ in the direction of $\hat{p}^*$. \item[step] The appending of a single element of $y$, which will always move $r^x(n)$ towards one of the vertices of the simplex $e_i$ ($i\in[k]$).
\end{description} The end result is that we have constructed a procedure (Algorithm \ref{alg:C}) which runs indefinitely ($g$ increases without bound), and which has the property that all the relative frequencies associated with generation $g$ are within $\epsilon_g$ of $C$, where $(\epsilon_g)_{g\in\mathbb{N}}$ is a null sequence. We will thus conclude that $\operatorname{CP}(r^x)\supseteq C$. We will also argue that $\operatorname{CP}(r^x)\subseteq C$, completing the proof. \subsection{Construction of \texorpdfstring{$p^*$}{p-star} and its Approximation \texorpdfstring{$\tilde{p}^*$}{p-star tilde}} The basic idea of the construction is to exploit Lemma \ref{lemma:r-decomposition}. Suppose $n\in\mathbb{N}$ (and suppose it is ``large'') and fix $m=1$ in (\ref{eq:r-decomposition}) to obtain \begin{equation} \label{eq:r-one-step} r^x(n+1) = \textstyle\frac{n}{n+1} r^x(n) +\frac{1}{n+1} r^{x^{+n}}(1). \end{equation} Now $r^{x^{+n}}(1)=e_{x(n+1)}$ and so $r^x(n+1) = \frac{n}{n+1} r^x(n) +\frac{1}{n+1} e_{x(n+1)}$. When $n$ is large, $\frac{n}{n+1}\approx 1$ and $\frac{1}{n+1}$ is small, and so this says that appending $x(n+1)$ to the length $n$ segment $x([n])$ moves the relative frequency $r^x$ from $r^x(n)$ in the direction of $e_{x(n+1)}$ by a small amount. Observe that the \emph{only} directions in which the point $r^x(n)$ can be moved are towards the vertices of the $k$-simplex, $e_1,\ldots, e_k$. Thus if, for a fixed $n$, we had $r^x(n)={p_{\mathrm{old}}}$ and we wished to append $m$ additional elements $\Def{z}$ to $x$ to produce $xz$ such that $r^{xz}(n+m)={p_{\mathrm{new}}}$, we need to figure out a way of heading in the direction $d={p_{\mathrm{new}}}-{p_{\mathrm{old}}}$ when at each step we are constrained to move a small amount towards one of the vertices.
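Algorithm \ref{alg:C} is straightforward to simulate. The sketch below (Python; it is our illustration, not part of the paper) runs finitely many generations for a small circle inside the simplex, tracking only the symbol counts of $x$ rather than $x$ itself (which is exact, since appending $\tilde\ell$ copies of $y$ adds $\tilde\ell\,\iota$ to the counts). Two liberties are taken: we seed with a balanced prefix of length 1000, rather than $x=\langle 1\rangle$, so that $T$ is never degenerate in the early iterations, and we force $\tilde\ell\ge 1$ defensively.

```python
import math

def construct(c, n_generations, V_of_g, T_of_n, seed_counts):
    """Run finitely many generations of (a sketch of) Algorithm 1.

    Returns (counts, n, worst_last), where worst_last is the largest
    distance between r^x and the targeted vertex, measured at segment
    ends during the final generation.
    """
    counts, n = list(seed_counts), sum(seed_counts)
    worst_last = 0.0
    for g in range(1, n_generations + 1):
        V = V_of_g(g)
        verts = [c(v / V) for v in range(V + 1)]          # vertices of C^g
        for v in range(V):                                # one line segment each
            p_old = [ci / n for ci in counts]             # current r^x(n)
            p_new = verts[v + 1]
            if math.dist(p_old, p_new) < 1e-12:
                continue
            # gamma and the boundary intercept p*, eqs. (p-star-def)/(gamma-def):
            gammas = [po / (po - pn) for po, pn in zip(p_old, p_new)
                      if po != pn and po / (po - pn) > 0]
            gamma = min(gammas)
            p_star = [gamma * (pn - po) + po for po, pn in zip(p_old, p_new)]
            T = T_of_n(n)
            iota = [math.floor(T * ps + 0.5) for ps in p_star]  # quantized direction
            T_tilde = sum(iota)
            ell = max(1, math.ceil(n / (T * (gamma - 1))))      # copies of y to append
            counts = [ci + ell * ii for ci, ii in zip(counts, iota)]
            n += ell * T_tilde
            if g == n_generations:
                r = [ci / n for ci in counts]
                worst_last = max(worst_last, math.dist(r, p_new))
    return counts, n, worst_last

# Our toy target C: a small circle inside the interior of the simplex in R^3.
u = (1 / math.sqrt(2), -1 / math.sqrt(2), 0.0)
w = (1 / math.sqrt(6), 1 / math.sqrt(6), -2 / math.sqrt(6))
rho = 0.15

def circle(t):
    a, b = rho * math.cos(2 * math.pi * t), rho * math.sin(2 * math.pi * t)
    return tuple(1 / 3 + a * ui + b * wi for ui, wi in zip(u, w))

counts, n, worst_last = construct(
    circle, n_generations=8,
    V_of_g=lambda g: 4 + 2 * g,
    T_of_n=lambda m: max(10, math.isqrt(m)),
    seed_counts=[334, 333, 333])
assert worst_last < 0.05   # the final generation tracks the vertices of C^g
```

In line with the error analysis below, the tracking error in the final generation is dominated by terms of order $T/n$ and $k/T$, so it shrinks as the generations (and hence $n$) grow.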
The solution is to approximate the direction $d$ by a quantized choice that can be obtained by an integer number of elements of $[k]$. Given arbitrary ${p_{\mathrm{old}}}\ne{p_{\mathrm{new}}}\in\operatorname{relint}\Delta^k$, we define $p^*$ to be the point at which the ray starting at ${p_{\mathrm{old}}}$ and passing through ${p_{\mathrm{new}}}$ meets $\partial\Delta^k$. (If $p_\mathrm{new}\in\partial\Delta^k$ set $p^*=p_\mathrm{new}$.) This intercept with the boundary of $\Delta^k$ is given by \begin{equation}\label{eq:p-star-def} \Def{p^*\coloneqq\gamma ({p_{\mathrm{new}}}-{p_{\mathrm{old}}})+{p_{\mathrm{old}}}} \end{equation} for some $\gamma>0$. We can determine $\gamma$ as follows. The choice of $\gamma$ cannot take $p^*$ outside the simplex. Thus let $\gamma_i$ ($i\in[k]$) satisfy $\gamma_i({p_{\mathrm{new}}}_i-{p_{\mathrm{old}}}_i)+{p_{\mathrm{old}}}_i=0$. Thus $\Def{\gamma_i=\frac{{p_{\mathrm{old}}}_i}{{p_{\mathrm{old}}}_i-{p_{\mathrm{new}}}_i}}$. Any $\gamma_i<0$ points in the wrong direction and so we choose \begin{equation} \label{eq:gamma-def} \Def{\gamma\coloneqq\min\{\gamma_i\colon i\in[k]\ \mbox{and}\ \gamma_i >0\}.} \end{equation} Such a choice of $\gamma$ guarantees that $p^*\in\partial\Delta^k$. Observe that the requirement that $\gamma_i>0$ means the denominator in the definition of $\gamma_i$ is positive and less than the numerator, and thus all $\gamma_i$ which are positive exceed $1$; consequently $\gamma>1$. We can now take $p^*$ to be the direction we would like to move $r^x(n)$ towards. However our only control action is to choose a sequence $z\in[k]^m$. To that end we quantize the vector $p^*$ so that it has rational components with denominator $T\in\mathbb{N}$ (which will be chosen later). As we will shortly show, this will allow us to move (approximately) towards $p^*$.
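The computation of $\gamma$ and the boundary intercept $p^*$ from (\ref{eq:p-star-def}) and (\ref{eq:gamma-def}) can be coded directly (a sketch; the function name \texttt{boundary\_intercept} and the example points are ours):

```python
from fractions import Fraction

def boundary_intercept(p_old, p_new):
    """Return (gamma, p_star) per eqs. (p-star-def) and (gamma-def).

    Assumes p_old lies in the relative interior of the simplex and
    p_old != p_new, so at least one gamma_i is positive.
    """
    gammas = [po / (po - pn) for po, pn in zip(p_old, p_new)
              if po != pn and po / (po - pn) > 0]
    gamma = min(gammas)
    p_star = [gamma * (pn - po) + po for po, pn in zip(p_old, p_new)]
    return gamma, p_star

p_old = [Fraction(1, 2), Fraction(1, 4), Fraction(1, 4)]
p_new = [Fraction(1, 4), Fraction(1, 2), Fraction(1, 4)]
gamma, p_star = boundary_intercept(p_old, p_new)
assert gamma > 1                                # as argued in the text
assert sum(p_star) == 1 and min(p_star) == 0    # p* lies on the boundary
```

Since $\gamma$ is the smallest positive $\gamma_i$, exactly the minimizing coordinate of $p^*$ is driven to zero while the remaining coordinates stay nonnegative, which is what places $p^*$ on $\partial\Delta^k$.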
Thus let $\Def{\iota_i\coloneqq\lfloor Tp_i^*\rceil}$ for $i\in[k]$ and set \begin{equation} \label{eq:p-tilde-star-def} \Def{\tilde{p}^*\coloneqq \left(\frac{\iota_1}{T},\ldots,\frac{\iota_k}{T}\right).} \end{equation} Observe that $\tilde{p}^*$ is not guaranteed to be in $\Delta^k$ because there is no guarantee that $\sum_{i=1}^k \tilde{p}^*_i=1$. We have \begin{lemma} \label{lem:tilde-p-star-error} Let $p^*$ and $\tilde{p}^*$ be defined by (\ref{eq:p-star-def}) and (\ref{eq:p-tilde-star-def}) respectively. Then for all $i\in [k]$, $\left|\tilde{p}_i^*-p_i^*\right|\le \frac{1}{2T}$. \end{lemma} \begin{proof} $ \left|\tilde{p}_i^*-p_i^*\right|=\textstyle\frac{1}{T}\left|T\tilde{p}_i^*-Tp_i^*\right| = \textstyle\frac{1}{T}\left|\lfloor T{p}_i^*\rceil -T p_i^*\right| \le\textstyle\frac{1}{2T}, $ by definition of the rounding operator $\lfloor\cdot\rceil$. \end{proof} \subsection{Determining the Number of Steps to Take} Observe that (\ref{eq:r-decomposition}) can be written as \begin{equation} r^x(n+m) = (1-\alpha) r^x(n) +\alpha r^{x^{+n}}(m), \end{equation} where $\alpha=\frac{m}{n+m}$ and thus $1-\alpha=\frac{n}{n+m}$. This suggests that we can engineer the construction of $x$ by requiring a suitable $\alpha$ such that \[ (1-\alpha){p_{\mathrm{old}}} +\alpha p^*={p_{\mathrm{new}}}. \] Recall we want the sequence $r^x$ to move from ${p_{\mathrm{old}}}$ to ${p_{\mathrm{new}}}$, which can be achieved by taking a suitable convex combination of ${p_{\mathrm{old}}}$ and $p^*$; this corresponds to appending a suitable number of copies of $z$ to $x$, where $z$ is chosen to move $r^x(n)$ in the direction of $p^*$.
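The quantization and the bound of Lemma \ref{lem:tilde-p-star-error} can likewise be checked in exact arithmetic. The sketch below (the helper name \texttt{nearest\_int} is ours) also exhibits the point made above: the coordinates of $\tilde{p}^*$ need not sum to $1$.

```python
from fractions import Fraction
import math

def nearest_int(y):
    """The rounding operator: nearest integer to y, i.e. floor(y + 1/2)."""
    return math.floor(y + Fraction(1, 2))

T = 10
p_star = [Fraction(0), Fraction(11, 20), Fraction(9, 20)]  # a boundary point

iota = [nearest_int(T * p) for p in p_star]   # iota_i = round(T p*_i)
p_tilde = [Fraction(i, T) for i in iota]      # tilde-p*, eq. (p-tilde-star-def)

# Lemma: |tilde-p*_i - p*_i| <= 1/(2T) in every coordinate:
assert all(abs(pt - ps) <= Fraction(1, 2 * T) for pt, ps in zip(p_tilde, p_star))
# ... yet tilde-p* need not lie in the simplex:
assert sum(p_tilde) != 1
```

In this instance the per-coordinate error is exactly $\frac{1}{2T}$, so the lemma's bound is tight.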
If we substitute the definition of $p^*$ from (\ref{eq:p-star-def}) we obtain the problem: \begin{align*} & \mbox{\ Find\ } \alpha\mbox{\ such that\ } (1-\alpha){p_{\mathrm{old}}} +\alpha[\gamma({p_{\mathrm{new}}}-{p_{\mathrm{old}}})+{p_{\mathrm{old}}}]={p_{\mathrm{new}}}\\ \Leftrightarrow & \mbox{\ Find\ } \alpha\mbox{\ such that\ } (1-\alpha){p_{\mathrm{old}}} +\alpha\gamma{p_{\mathrm{new}}} -\alpha\gamma{p_{\mathrm{old}}} +\alpha{p_{\mathrm{old}}} -{p_{\mathrm{new}}} =0_k\in\mathbb{R}^k\\ \Leftrightarrow & \mbox{\ Find\ } \alpha\mbox{\ such that\ } {p_{\mathrm{old}}}(1-\alpha\gamma) +{p_{\mathrm{new}}}(\alpha\gamma-1)=0_k,\\ \end{align*} which is only true when either ${p_{\mathrm{old}}}={p_{\mathrm{new}}}$ (which is a trivial case) or when $1-\alpha\gamma=0$ and thus $\Def{\alpha\coloneqq 1/\gamma}$, which we take as a definition. Since $\gamma>1$ this implies $\alpha<1$, which is consistent with our original motivation for taking convex combinations. We will append the segment $z$ to $x$, where $z=y^{[\ell]}$ and $y=\langle 1^{[\iota_1]},\ldots,k^{[\iota_k]}\rangle$. Now if each $y$ is of length $T$ and we notionally made $\ell$ repetitions, we would have $m=\ell T$. From the definition of $\alpha$ this means \begin{equation} \label{eq:alpha-def} {\alpha=\frac{\ell T}{n+\ell T}}. \end{equation} We presume $n$ is given (at a particular stage of construction) and $T\in\mathbb{N}$ is a fixed design parameter. We can thus solve for $\ell$ to obtain \[ \ell= \frac{\alpha n}{T(1-\alpha)}=\frac{(1/\gamma) n}{T(1-1/\gamma)}=\frac{n}{T(\gamma-1)}. \] Observe that $\ell$ is not guaranteed to be an integer, a complication we will deal with later. If it were an integer, we would create $z$ by concatenating $\ell$ copies of $y$, which is of length $T$. The segment $y$ moves ${p_{\mathrm{old}}}$ towards $p^*$. By appending $\ell$ copies we should move $r^x$ to ${p_{\mathrm{new}}}$ as desired.
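The algebra above can be verified on a concrete instance (a sketch in exact rational arithmetic; the numerical values are ours, chosen so that $\gamma=2$ for the given pair ${p_{\mathrm{old}}},{p_{\mathrm{new}}}$):

```python
from fractions import Fraction

# A worked instance of the derivation: alpha = 1/gamma and ell = n/(T(gamma-1)).
p_old = [Fraction(1, 2), Fraction(1, 4), Fraction(1, 4)]
p_new = [Fraction(1, 4), Fraction(1, 2), Fraction(1, 4)]

# For this pair, (gamma-def) gives gamma = 2; p* per (p-star-def):
gamma = Fraction(2)
p_star = [gamma * (pn - po) + po for po, pn in zip(p_old, p_new)]

alpha = 1 / gamma
# The convex combination (1 - alpha) p_old + alpha p* recovers p_new exactly:
assert [(1 - alpha) * po + alpha * ps for po, ps in zip(p_old, p_star)] == p_new

# The ideal (in general non-integer) number of copies of y, each of length T:
n, T = Fraction(1000), Fraction(10)
ell = n / (T * (gamma - 1))
# Consistency with eq. (alpha-def), alpha = ell*T / (n + ell*T):
assert ell * T / (n + ell * T) == alpha
```

Here $\ell$ happens to be an integer; in general it is not, which is exactly the complication addressed by $\tilde\ell$ below.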
However we do not head exactly in the direction of $p^*$, since we worked with a quantized version $\tilde{p}^*$ instead, and we cannot always take $\ell$ copies because $\ell$ is not guaranteed to be an integer; instead we will take \[ \Def{\tilde{\ell}\coloneqq \left\lceil\frac{\alpha n}{T(1-\alpha)}\right\rceil= \left\lceil\frac{n}{T(\gamma -1)}\right\rceil} \] copies of $y$, which will move $r^x(n)$ towards $\hat{p}^*$ instead of $p^*$. We now proceed to analyse the effect of these approximations on our construction. \subsection{Analyzing the Effect of Approximations} The parameter $T\in\mathbb{N}$ is a design variable. Our construction will utilize \begin{equation} \label{eq:tilde-T-def} \Def{\tilde{T}\coloneqq\sum_{j=1}^k\iota_j}, \end{equation} where $\iota_j=\lfloor Tp_j^*\rceil$ for $j\in[k]$. Then $\tilde{T}\approx T$, a claim which we quantify below. \begin{lemma}\label{lem:T-tilde-T} Suppose $T\in\mathbb{N}$, and $\tilde{T}$ is defined as above. Then $T-\frac{k}{2}\le \tilde{T}\le T+\frac{k}{2}$. \end{lemma} \begin{proof} By definition of the rounding operator $\lfloor\cdot\rceil$, we have that \[ \left|\iota_j - Tp_j^*\right| \le \textstyle\frac{1}{2} \ \ \ \forall j\in[k]. \] Thus \begin{align*} & Tp_j^* -\frac{1}{2} \le \iota_j \le T p_j^* +\frac{1}{2}\ \ \ \ \forall j\in[k]\\ \Rightarrow\ \ & \sum_{j=1}^k\left(Tp_j^*-\frac{1}{2}\right) \le \sum_{j=1}^k \iota_j \le \sum_{j=1}^k \left(Tp_j^*+\frac{1}{2}\right)\\ \Rightarrow\ \ & T-\frac{k}{2}\le \tilde{T}\le T+\frac{k}{2}. \end{align*} \end{proof} Ideally we move $r^x(n)$ to \begin{equation} \label{eq:pnew} {p_{\mathrm{new}}}=(1-\alpha){p_{\mathrm{old}}}+\alpha p^* \end{equation} by appending $z$ (i.e.\ we hope that $r^{xz}(n+m)={p_{\mathrm{new}}}$). But in fact the segment $z$ which we will append to $x$ will move $r^x$ from ${p_{\mathrm{old}}}$ instead to \begin{equation} \label{eq:phatnew} \Def{{\hat{p}_{\mathrm{new}}}\coloneqq(1-\tilde{\alpha}){p_{\mathrm{old}}}+\tilde{\alpha}\hat{p}^*}, \end{equation} where \begin{equation} \label{eq:tilde-alpha-def} \Def{\tilde{\alpha}\coloneqq \frac{\tilde{\ell}T}{\tilde{\ell}T+n}} \end{equation} and \begin{equation} \label{eq:hat-p-star-def} \Def{\hat{p}^*\coloneqq\left(\frac{\iota_1}{\tilde{T}},\ldots,\frac{\iota_k}{\tilde{T}}\right).} \end{equation} We now determine the error incurred from these approximations. We first need the following lemma. \begin{figure} \vspace*{-2.7cm} \begin{center} \def\svgwidth{0.5\textwidth} \input{figures/drawing2.pdf_tex} \end{center} \vspace*{-4cm} \caption{Illustration of construction of the sequence $x$. The figure shows how a single segment is created. We have the desired curve $C$ (in dark green) and a polygonal approximation $C^g$ using 5 segments (thus $V^g=5$). We assume that we have already constructed the first $n$ elements of $x$ and thus we can compute $r^x(n)$. Denote this by $p_{\mathrm{old}}$, an element of the simplex, itself shown in pale green. We hope to append a segment $z$ of length $m$ to $x$ such that $r^{xz}(n+m)=p_\mathrm{new}$.
However we can only choose elements of $z$ from $[k]$, and that means that at each step we move towards one of the vertices of the simplex. We note that ideally we would move from $p_\mathrm{old}$ towards $p^*\in\partial\Delta^k$. In order to deal with the restrictions on the directions in which we can head, we quantize $p^*$ as $\tilde{p}^*=\lfloor T p^*\rceil/T$ (normalized to $\hat{p}^*$), where in this example we have chosen $T=9$. As argued in the main text this means we will be restricted to heading towards one of a fixed set of points on the boundary (marked in blue). Observe $\hat{p}^*$ is located at one of these points in the diagram, but in general it might not even be on the boundary of the simplex. We then construct $z$ to move $r^x$ towards $\hat{p}^*$ and take sufficient steps to move to $\hat{p}_{\mathrm{new}}$. This too is done in repeated steps by setting $z=y^{[\tilde{\ell}]}$, where $y$ is a shorter segment (marked by purple ticks on the line segment $l(p_\mathrm{old},\hat{p}^*)$) which moves towards $\hat{p}^*$ by a small amount. The end result is that we get $r^{xz}(n+m)=\hat{p}_\mathrm{new}$, which is contained within $N_\epsilon(p_\mathrm{new})$, an $\epsilon$-ball centered at $p_\mathrm{new}$. \label{fig:construction} } \end{figure} \begin{lemma}\label{lem:tilde-alpha-minus-alpha} Suppose $n,T\in\mathbb{N}$, $\alpha$ is defined by (\ref{eq:alpha-def}) and $\tilde{\alpha}$ is defined by (\ref{eq:tilde-alpha-def}). Then \[ \left|\alpha - \tilde{\alpha}\right| \le \frac{T}{n}.
\] \end{lemma} \begin{proof} By definition of $\tilde{\alpha}$ we have \begin{align*} |\alpha-\tilde{\alpha}| & = \left| \frac{\tilde{\ell}T}{\tilde{\ell} T +n} - \frac{\ell T}{\ell T +n} \right|\\ &= \left|\frac{\tilde{\ell} T (\ell T+n) -\ell T(\tilde{\ell} T+n)}{(\tilde{\ell} T +n)(\ell T+n)} \right|\\ &=\left|\frac{\tilde{\ell}Tn - \ell Tn}{(\tilde{\ell} T +n)(\ell T+n)} \right|.\\ \intertext{Since $(\tilde{\ell} T +n)(\ell T+n)>0$ and $\tilde{\ell}=\lceil\ell\rceil\ge\ell$, we have} (\tilde{\ell} T +n)(\ell T+n) & \ge (\ell T+n)(\ell T+n)\ge n^2\\ \intertext{and since $|\tilde{\ell}-\ell|\le 1$,} |\alpha-\tilde{\alpha}| & \le \frac{|\tilde{\ell}-\ell|\cdot Tn}{n^2} \le \frac{T}{n}. \end{align*} \end{proof} Our construction does not move $r^x(n)$ towards $\tilde{p}^*$ but rather towards an approximation of it, namely $\hat{p}^*$ defined in (\ref{eq:hat-p-star-def}). We exploit the fact that repeating a segment does not change its relative frequencies, which we state formally as \begin{lemma} Let $y=\left\langle 1^{[\iota_1]},\ldots,k^{[\iota_k]}\right\rangle$. Then $r^y(\tilde{T})=\hat{p}^*=r^{y^{[\tilde{\ell}]}}(\tilde{\ell}\tilde{T})$. \end{lemma} \begin{proof} For any $i\in[k]$ we have $r_i^y(\tilde{T})=\frac{1}{\tilde{T}}\left|\left\{j\in[\tilde{T}]\colon y_j=i\right\}\right|=\frac{1}{\tilde{T}} \iota_i$. The first equality is immediate. For the second, similarly we have $r_i^{y^{[\tilde{\ell}]}}(\tilde{\ell}\tilde{T})= \frac{1}{\tilde{\ell}\tilde{T}}\left|\left\{j\in[\tilde{\ell}\tilde{T}]\colon y_j^{[\tilde{\ell}]}=i\right\}\right|=\frac{1}{\tilde{\ell}\tilde{T}}\cdot \tilde{\ell} \iota_i = \frac{1}{\tilde{T}}\iota_i$ by definition of $y^{[\tilde{\ell}]}$. \end{proof} Since $\tilde{p}^*=\left(\frac{\iota_1}{{T}},\ldots,\frac{\iota_k}{{T}}\right)$ we have that $\tilde{p}^*=\frac{\tilde{T}}{T}\hat{p}^*$. This allows us to show: \begin{lemma}\label{lem:hat-p-tilde-p} Suppose $T\in\mathbb{N}$ and $\hat{p}^*$ is defined via (\ref{eq:hat-p-star-def}).
Then $\left\|\hat{p}^*-\tilde{p}^*\right\|\le \frac{k}{2T}$. \end{lemma} \begin{proof} We have $ \left\|\hat{p}^*-\tilde{p}^*\right\| =\left\|\hat{p}^* - \frac{\tilde{T}}{T} \hat{p}^*\right\| =\left|1-\frac{\tilde{T}}{T}\right|\cdot\|\hat{p}^*\| \le \left|1-\frac{\tilde{T}}{T}\right|. $ Suppose $\tilde{T}<T$, then $1-\frac{\tilde{T}}{T}>0$ and $ \left|1-\frac{\tilde{T}}{T}\right|=1-\frac{\tilde{T}}{T}\le 1-\frac{T-k/2}{T} = \frac{k}{2T} $ by Lemma \ref{lem:T-tilde-T}. Similarly if $\tilde{T}>T$, then $1-\frac{\tilde{T}}{T}<0$ and $ \left|1-\frac{\tilde{T}}{T}\right|=\frac{\tilde{T}}{T}-1\le \frac{T+k/2}{T}-1 = \frac{k}{2T} $ completing the proof. \end{proof} \begin{lemma} \label{lem:hpnew-pnew} Suppose $k,T\in\mathbb{N}$ and ${p_{\mathrm{new}}}$ and ${\hat{p}_{\mathrm{new}}}$ are defined as above. Then \begin{equation} \|{\hat{p}_{\mathrm{new}}}-{p_{\mathrm{new}}}\|\le \frac{4 T}{n} +\frac{k}{T}. \label{eq:hpnew-pnew} \end{equation} \end{lemma} \begin{proof} From (\ref{eq:pnew}) and (\ref{eq:phatnew}) we have \begin{align} \|{\hat{p}_{\mathrm{new}}}-{p_{\mathrm{new}}}\| &= \|(1-\tilde{\alpha}){p_{\mathrm{old}}} +\tilde{\alpha}\hat{p}^* -(1-\alpha){p_{\mathrm{old}}} -\alpha p^*\|\nonumber\\ &=\|[(1-\tilde{\alpha})-(1-\alpha)]{p_{\mathrm{old}}}+ (\tilde{\alpha}\hat{p}^*-\alpha p^*)\|\nonumber\\ & \le \|(\alpha-\tilde{\alpha}){p_{\mathrm{old}}}\| + \|\tilde{\alpha}\hat{p}^* -\alpha p^*\|\nonumber\\ &\le \sqrt{2} |\tilde{\alpha}-\alpha| + \|\tilde{\alpha}\hat{p}^* -\alpha p^*\|.\label{eq:two-term-expression} \end{align} The second term in (\ref{eq:two-term-expression}) can be bounded as follows: \begin{align} \|\tilde{\alpha}\hat{p}^* -\alpha p^*\| &= \|(\tilde{\alpha}-\alpha+\alpha)\hat{p}^* -\alpha p^*\|\nonumber\\ &=\|(\tilde{\alpha}-\alpha)\hat{p}^* + (\alpha\hat{p}^* -\alpha p^*)\|\nonumber\\ &\le \|(\tilde{\alpha}-\alpha)\hat{p}^*\| + \|\alpha\hat{p}^* -\alpha p^*\|\nonumber\\ &=|\tilde{\alpha}-\alpha|\cdot\|\hat{p}^*\| +\alpha\|\hat{p}^*-p^*\|\nonumber\\ 
&=|\tilde{\alpha}-\alpha|\cdot\|\hat{p}^*\| +\alpha\|(\hat{p}^* - \tilde{p}^*)+(\tilde{p}^* -p^*)\|\nonumber\\ &\le|\tilde{\alpha}-\alpha|\sqrt{2} +\alpha\|\hat{p}^* - \tilde{p}^*\|+\alpha\|\tilde{p}^* -p^*\|\nonumber\\ &\le \sqrt{2}|\tilde{\alpha}-\alpha| +\frac{k}{2T} +\alpha\left( \textstyle\sum_{i=1}^k \left(\tilde{p}_i^*-p_i^*\right)^2 \right)^{1/2},\nonumber\\ \intertext{by Lemma \ref{lem:hat-p-tilde-p} and the fact that $\|\hat{p}^*\|\le 1$,} &\le \sqrt{2}|\tilde{\alpha}-\alpha| + \frac{k}{2T} + \alpha \left(\textstyle\sum_{i=1}^k \left(\frac{1}{2T}\right)^2\right)^{1/2}\nonumber\\ &= \sqrt{2}|\tilde{\alpha}-\alpha| +\frac{k}{2T}+ \alpha \frac{\sqrt{k}}{2T}, \label{eq:intermediate-bound} \end{align} where we used Lemma \ref{lem:tilde-p-star-error} in the penultimate step. Since $\alpha\le 1$, combining (\ref{eq:two-term-expression}) and (\ref{eq:intermediate-bound}) we have \[ \|{\hat{p}_{\mathrm{new}}}-{p_{\mathrm{new}}}\|\le 2\sqrt{2}\left|\tilde{\alpha}-\alpha\right| + \frac{k}{2T}+ \frac{\sqrt{k}}{2T} \le 4\left|\tilde{\alpha}-\alpha\right| + \frac{k}{T}. \] Appealing to Lemma \ref{lem:tilde-alpha-minus-alpha} gives us (\ref{eq:hpnew-pnew}). \end{proof} The above arguments control the errors at the end of a \emph{piece} (and thus in a \emph{segment}). But for later purposes we need control at each \emph{step}. This follows immediately from the fact that we make small steps: \begin{lemma}\label{lem:r-within-segment} For $n\in\mathbb{N}$ and $m\in[\tilde{T}]$ and any $x\colon\mathbb{N}\rightarrow[k]$, \[ \left\|r^x(n)-r^x(n+m)\right\|\le \frac{2T+k}{n}.
\] \end{lemma} \begin{proof} By Lemma \ref{lemma:r-decomposition}, \begin{align*} \left\|r^x(n)-r^x(n+m)\right\| &=\left\|r^x(n)-\textstyle\frac{n}{n+m}r^x(n) -\frac{m}{n+m}r^{x^{+n}}\!(m)\right\|\\ &=\left\|(1-\textstyle\frac{n}{n+m}) r^x(n) - \frac{m}{n+m} r^{x^{+n}}\!(m)\right\|\\ &=\frac{m}{n+m}\left\|r^x(n)-r^{x^{+n}}\!(m)\right\|\\ &\le \frac{2m}{n+m}\\ &\le \frac{2m}{n}\\ &\le \frac{2T+k}{n}, \end{align*} where the first inequality holds since $\|r^x(n)\|,\|r^{x^{+n}}\!(m)\|\le 1$ and the last step follows from Lemma \ref{lem:T-tilde-T}. \end{proof} \subsection{Completing the Proof of Theorem \ref{th:CP-r-C}} The remaining piece of the argument concerns the piecewise linear approximation of $C$ by $C^g$. For $A,B\subseteq\Delta^k$ and $a\in\Delta^k$ define $\Def{d(a,B)\coloneqq\min\{\|a-b\|\colon b\in B\}}$ and the directed \Def{Hausdorff distance} \begin{equation} \label{eq:hausrdorff-distance-def} \Def{d(A,B)\coloneqq\max\{d(a,B)\colon a\in A\}}. \end{equation} Let us write $V^g$ (the number of vertices in the piecewise linear approximation at generation $g$) in functional form as $\Def{V(g)}$. Let $\Def{n(g)}$ denote the length of the segment of $x$ that has been constructed at the beginning of generation $g$, and let $\Def{g(n)\coloneqq \inf\{g\in\mathbb{N}\colon n\le n(g)\}}$ denote its quasi-inverse. Clearly $n(g)$ is strictly increasing in $g$, while $g(n)$ is nondecreasing, but often constant. With these definitions, we have $V=V(g)=V(g(n))$. Denote by $\Def{\tilde{C}(V)}$ the best piecewise linear approximation of $C$ with $V$ vertices, in the sense of minimizing $\Def{\psi_C(V)\coloneqq d(C,\tilde{C}(V))}$. Since every rectifiable curve $C$ has a Lipschitz continuous parametrisation, we have that $V\mapsto \psi_C(V)$ is nonincreasing in $V$ and $\lim_{V\rightarrow\infty} \psi_C(V)=0$.
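The behaviour of $\psi_C(V)$ can be illustrated numerically. The sketch below (all names are ours; a planar circle stands in for $C$, and curves are represented by dense finite samples) computes the directed distance (\ref{eq:hausrdorff-distance-def}) between a sampled curve and its inscribed polygons, and confirms that the approximation error shrinks as the number of vertices $V$ grows:

```python
import math

def d_point_set(a, B):
    """d(a, B) = min over b in B of ||a - b||, for a finite sample B."""
    return min(math.dist(a, b) for b in B)

def d_directed(A, B):
    """Directed Hausdorff distance d(A, B) = max over a in A of d(a, B)."""
    return max(d_point_set(a, B) for a in A)

# A toy closed curve: a planar unit circle (the computation is identical
# for a curve inside the simplex).
def c(t):
    return (math.cos(2 * math.pi * t), math.sin(2 * math.pi * t))

C_sample = [c(i / 400) for i in range(400)]

def polygonal(V, samples_per_edge=25):
    """Dense sample of the inscribed polygon with vertices c(v/V)."""
    pts = []
    for v in range(V):
        p, q = c(v / V), c((v + 1) / V)
        for s in range(samples_per_edge):
            lam = s / samples_per_edge
            pts.append(tuple((1 - lam) * pc + lam * qc for pc, qc in zip(p, q)))
    return pts

# psi_C(V) decreases as V grows:
errs = [d_directed(C_sample, polygonal(V)) for V in (4, 8, 16, 32)]
assert all(e1 > e2 for e1, e2 in zip(errs, errs[1:]))
```

For the circle the error at $V$ vertices is essentially the sagitta $1-\cos(\pi/V)=O(1/V^2)$, which is what the computed values reproduce.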
Thus $\lim_{g\rightarrow\infty} \psi_C(V(g))=0$ and $\lim_{n\rightarrow\infty} \psi_C(V(g(n)))=0$, although the convergence could be very slow (in $n$) and its speed will depend on the choice of $C$. Denote by $\Def{\bar{C}(n)\coloneqq \tilde{C}(V(g(n)))}$ the sequence of best possible piecewise linear approximations of $C$ indexed by $n$, and let $\Def{\bar{\psi}_C(n)\coloneqq d(C,\bar{C}(n))}$. We have thus shown: \begin{lemma}\label{lem:bar-psi} Let $C\subset\Delta^k$ be a rectifiable curve. Then \[ \lim_{n\rightarrow\infty} \bar{\psi}_C(n)=0. \] \end{lemma} We summarize what we know so far. \begin{enumerate} \item For all generations $g$, $\|{\hat{p}_{\mathrm{new}}}-{p_{\mathrm{new}}}\|\le \frac{4T}{n}+\frac{k}{T}$, where ${p_{\mathrm{new}}}=p_v$ for $v\in[V^g]$ (Lemma \ref{lem:hpnew-pnew}). This means the following. Suppose at the beginning of segment $v$ in generation $g$ we have $n=\operatorname{length}(x)$. By definition, we have $r^x(n)={p_{\mathrm{old}}}$ and $r^{xy^{[\tilde{\ell}]}}(n+\tilde{\ell}\tilde{T})={\hat{p}_{\mathrm{new}}}$. Furthermore, for $m=i\tilde{T}$, $i\in[\tilde{\ell}]$, we have $r^{xy^{[i]}}(n+i\tilde{T})\in l({p_{\mathrm{old}}},{\hat{p}_{\mathrm{new}}})$. \item Furthermore, (by Lemma \ref{lem:r-within-segment}) for all $j\in[\tilde{\ell}\tilde{T}]$, $d\left(r^{xy^{[\tilde{\ell}]}}(n+j), l({p_{\mathrm{old}}},{\hat{p}_{\mathrm{new}}})\right)\le \frac{2T+k}{n}$ --- the relative frequencies for all points in the segment are close to the line segment $l({p_{\mathrm{old}}},{\hat{p}_{\mathrm{new}}})$. \end{enumerate} Combining these facts, and appealing to the triangle inequality, we conclude that the sequence $x$ constructed by Algorithm 1 satisfies \begin{equation} \label{eq:overall-error-abstract} d(r^x(n),C)\le \frac{4T}{n} +\frac{k}{T}+\frac{2T+k}{n} +\bar{\psi}_C(n) \ \ \ \forall n\in\mathbb{N}. \end{equation} We now choose $T=T(n)$ and $V=V(g(n))$ appropriately. One choice is $T(n)=\sqrt{n}$ (rounded to an integer; we ignore the rounding below).
Equation \ref{eq:overall-error-abstract} then implies \[ d(r^x(n),C)\le \frac{4\sqrt{n}}{n} +\frac{k}{\sqrt{n}}+\frac{2\sqrt{n}+k}{n} + \bar{\psi}_C(n) \ \ \ \forall n\in\mathbb{N}, \] which implies that for all $n\in\mathbb{N}$ with $\sqrt{n}> k$, \[ d(r^x(n),C)\le \frac{4}{\sqrt{n}} +\frac{k}{\sqrt{n}}+\frac{4}{\sqrt{n}} +\bar{\psi}_C(n)= \frac{8+k}{\sqrt{n}} +\bar{\psi}_C(n), \] and thus Lemma \ref{lem:bar-psi} implies $\lim_{n\rightarrow\infty} d(r^x(n),C)=0$. Furthermore, by the generational nature of our construction, $r^x(n)$ repeatedly and approximately follows the curve $C$, getting closer in each generation. Thus for any $\epsilon>0$ and any point $y\in C$, the sequence $r^x$ enters the $\epsilon$-ball around $y$ infinitely often, whence $\operatorname{CP}(r^x)\supseteq C$. Finally, since in each generation the bounds above constrain $r^x$ more and more tightly, there cannot exist cluster points that are not in $C$; that is, $\operatorname{CP}(r^x)\subseteq C$. We have thus proved Theorem \ref{th:CP-r-C}. \subsection{Remarks on the Construction} We make a few remarks on the construction. \label{app:remarksonconstruction} \begin{enumerate}[nolistsep] \item By the definition of $\tilde{\ell}$ we are guaranteed that $\tilde{\ell}\ge 1$ for each segment and each generation. For the construction to approximate well, we need $\tilde{\ell}\gg 1$, which it inevitably will be once $n$ gets large enough. \item Recall $n(g)$ is the length of $x$ at the beginning of generation $g$. Let $L_C^g\coloneqq\operatorname{length}(C^g)>0$. Since each step by which $r^x$ moves has size less than $1/n(g)$ (see Equation \ref{eq:r-one-step}), we require at least $L_C^g\cdot n(g)$ steps for $r^x$ to traverse the whole of $C$ in generation $g$. Thus at the end of generation $g$ and the beginning of generation $g+1$ we have \[ n(g+1)\ge n(g)+ L_C^g\cdot n(g) =\lambda_C^g\cdot n(g),\] where $\lambda_C^g\coloneqq 1+L_C^g$. Furthermore, $L_C^g$ is increasing in $g$ and approaches $\operatorname{length}(C)$.
Thus for all $g$, $\lambda_C^g>1$ (and it is in fact increasing in $g$). Hence $n(g)$ grows exponentially with $g$. \item The growth of the length of $x$ is controlled in a complex manner by the nature of the curve $C$. In particular, if $C$ is very complex, then $\bar{\psi}_C$ must decay slowly. Furthermore, if $C$ has parts close to $\partial\Delta^k$, in particular if some vertices $v$ of the piecewise linear approximation $C^g$ are close to $\partial\Delta^k$, then $\tilde{\ell}$ can end up very large for that segment, meaning that the length of the sequence $x$ grows more rapidly. See Lemma \ref{lem:extreme-case} for an illustration of this observation. \end{enumerate} Finally, note that since (by Lemma \ref{lem:connected}) $\operatorname{CP}(r^x)$ must always be connected, we have in Theorem \ref{th:CP-r-C} what appears to be the most general result possible (under the restriction that $x$ takes values only in a finite set $[k]$). We do not know what the appropriate generalization is to sequences $x$ that can take values in an infinite (or uncountable) set\footnote{ Although we do not pursue this in any detail, we remark that one could design an algorithm to construct $x$ such that $\operatorname{CP}(r^x)$ is \emph{any} connected subset $D\subseteq\Delta^k$ by using our algorithm as a subroutine. The idea would be to construct a space-filling curve that fills $D$, each generation of which is a rectifiable curve. One would appeal to our algorithm for each generation, and then, once within a suitable tolerance, change the target to be the next generation of the space-filling curve. A suitable method would be to simply intersect extant families of closed space-filling curves for $k$-dimensional cubes (e.g.\@\xspace generalisations of the Moore curve) with the $(k-1)$-simplex, attaching joins on $\partial D$ where necessary.
Since (see the main body of the paper) it ends up being only the convex hull of $\operatorname{CP}(r^x)$ that matters, such an exotic construction is of little direct interest.}. \subsection{From Boundary to Curves} \label{app:fromboundarytocurves} \begin{proof}[Proof of Corollary~\ref{cor:CP-convexbody}] \citet{bronshteyn1975approximation} show that if $D$ is a convex set contained in the unit ball (w.r.t.\@\xspace the Euclidean norm) in $\mathbb{R}^n$ and $\epsilon<10^{-3}$, then there exists a set of at most $K_\epsilon \coloneqq 3\sqrt{n}(9/\epsilon)^{(n-1)/2} $ points whose convex hull is at most $\epsilon$ away from $D$. Thus for any convex $D\subseteq \Delta^k$ and any $\epsilon<10^{-3}$ there is a polyhedron $D_\epsilon$, comprising the convex hull of $K_\epsilon<\infty$ points $Q_\epsilon\coloneqq \{q_i\in\Delta^k\colon i\in[K_\epsilon]\}$, such that $d(D,D_\epsilon)<\epsilon$, where $d$ is the Hausdorff distance (\ref{eq:hausrdorff-distance-def}). Hence for any $\epsilon<10^{-3}$ there exists a closed rectifiable curve $C_\epsilon$ (constructed by linearly connecting successive points in $Q_\epsilon$) such that $d(\operatorname{co} C_\epsilon,D)<\epsilon$. One can then construct a sequence $x$ such that $\operatorname{CP}(r^x)=\partial D$ as follows. Start with some $\epsilon_0<10^{-3}$. Pick and construct $C_{\epsilon_0}$ as above. Construct $x$ according to the previous procedure (Appendices~\ref{app:logicofconstruction}--\ref{app:remarksonconstruction}) to get an entire generation of $r^x$ within $\epsilon_0$ of $C_{\epsilon_0}$. Then let $\epsilon_{1}=\epsilon_0/2$ and repeat the procedure, appending the constructed sequence. Continue iterating (halving $\epsilon$ in each phase), and in the limit one obtains $\operatorname{CP}(r^x)=\partial D$.
\end{proof} \subsection{Illustration} \label{subsec:illustration} We illustrate our construction for $k=3$, for two (identical) generations and thus a single piecewise linear (in fact polygonal) approximation $C^g$ of $C$. We take for $C$ (a scaled version of) the lemniscate of Bernoulli \citep[Chapter 12]{lockwood1961}, \citep[Section 5.3]{lawrence1972} mapped onto the 2-simplex as the space curve $\{(z_1(t),z_2(t),z_3(t))\colon t\in[0,2\pi]\}$, where \begin{align*} z_1(t)&=\frac{1}{3}+\frac{1}{12} \frac{2\cos(t)}{1+\sin^2(t)}\\ z_2(t)&=\frac{1}{3}+\frac{1}{12} \frac{2\sin(t)\cos(t)}{1+\sin^2(t)}\\ z_3(t)&=1-z_1(t)-z_2(t). \end{align*} We set $T=12$ and $V=30$ (the number of nodes in the polygonal approximation $C^g$, for both $g=1,2$; kept modest to make the figure less cluttered), and we iterated long enough to go around the lemniscate twice, which resulted in a sequence of length 85677. As the construction proceeded, $\tilde{T}$ went from 9 up to 164 and $\tilde{\ell}$ went from 1 or 2 for the first few segments up to 69 for the last (with the largest value being 104). The results can be seen in Figure \ref{fig:lem}, which plots the achieved relative frequencies at different zoom levels. The small red squares are the vertices of $C^g$.
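The parametrisation above is easy to check numerically: every point of the mapped lemniscate should have strictly positive coordinates summing to $1$, i.e.\@\xspace lie in the interior of the 2-simplex. A small Python sketch (the function name is ours):

```python
from math import sin, cos, pi

def lemniscate_point(t):
    # Bernoulli lemniscate scaled into the 2-simplex, as in the display above:
    # z1 and z2 carry the planar curve, z3 = 1 - z1 - z2 closes the coordinates.
    z1 = 1/3 + (1/12) * 2 * cos(t) / (1 + sin(t) ** 2)
    z2 = 1/3 + (1/12) * 2 * sin(t) * cos(t) / (1 + sin(t) ** 2)
    return (z1, z2, 1 - z1 - z2)

points = [lemniscate_point(2 * pi * i / 1000) for i in range(1000)]
# Each point lies in the simplex: coordinates positive, summing to 1.
assert all(abs(sum(p) - 1) < 1e-9 and min(p) > 0 for p in points)
```

The $1/12$ scaling keeps the curve well inside $\Delta^3$, so the assertion holds for all sampled $t$.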
\begin{figure} \centering \begin{subfigure}[b]{0.40\textwidth} \centering \includegraphics[width=0.99\textwidth]{lem-0.png} \caption{Overall view, showing initialisation.} \label{fig:lem-0} \end{subfigure} \begin{subfigure}[b]{0.40\textwidth} \centering \includegraphics[width=.99\textwidth]{lem-1.png} \caption{Closer view of just the region containing $C$.} \label{fig:lem-1} \end{subfigure} \begin{subfigure}[b]{0.40\textwidth} \centering \includegraphics[width=.99\textwidth]{lem-2.png} \caption{Zoom of upper right corner of Figure \ref{fig:lem-1}.} \label{fig:lem-2} \end{subfigure} \begin{subfigure}[b]{0.40\textwidth} \centering \includegraphics[width=.99\textwidth]{lem-3.png} \caption{Closer view; two generations are apparent.} \label{fig:lem-3} \end{subfigure} \begin{subfigure}[b]{0.40\textwidth} \centering \includegraphics[width=.99\textwidth]{lem-4.png} \caption{A further zoom of Figure \ref{fig:lem-3}.} \label{fig:lem-4} \end{subfigure} \begin{subfigure}[b]{0.40\textwidth} \centering \includegraphics[width=.99\textwidth]{lem-5.png} \caption{Near the centre of Figure \ref{fig:lem-1}.} \label{fig:lem-5} \end{subfigure} \begin{subfigure}[b]{0.40\textwidth} \centering \includegraphics[width=.99\textwidth]{lem-6.png} \caption{Further zoom of Figure \ref{fig:lem-5}.} \label{fig:lem-6} \end{subfigure} \begin{subfigure}[b]{0.40\textwidth} \centering \includegraphics[width=.99\textwidth]{lem-7.png} \caption{An even closer view of Figure \ref{fig:lem-6}.} \label{fig:s4} \end{subfigure} \caption{Illustration of approximation of the polygonal curve $C^g$ by the relative frequencies of the sequence constructed according to Algorithm 1. Two generations were used. Red squares are vertices of $C^g$.
See Section \ref{subsec:illustration} for details.\label{fig:lem}}\vspace*{-6pt} \end{figure} \subsection{Construction of \texorpdfstring{$x$}{x} such that \texorpdfstring{$\operatorname{co}\operatorname{CP}(r^x)=\Delta^k$}{the Convex Hull of Cluster Points is the Simplex}} \label{subsec:maximally-nonstochastic} How much of the simplex can we cover with $\operatorname{CP}(r^x)$? This question is poorly posed as (it seems) we can only ever construct one-dimensional sets that are the set of cluster points of $r^x$. However, for inducing an upper prevision, all that matters is the convex hull of $\operatorname{CP}(r^x)$. We now show that the convex hull of $\operatorname{CP}(r^x)$ can be made as large as possible, via a simpler and more explicit construction: \begin{lemma} \label{lem:extreme-case} Suppose $k\in\mathbb{N}$. There exists a sequence $x\colon\mathbb{N}\rightarrow [k]$ such that $\operatorname{co}(\operatorname{CP}(r^x))=\Delta^k$. \end{lemma} \begin{proof} As before, our proof is constructive. Recall $e_1,\ldots,e_k$ are the vertices (and extreme points) of the $(k-1)$-simplex, and $\operatorname{co}\{e_1,\ldots,e_k\}=\Delta^k$. We will construct a sequence $x$ such that for all $\epsilon>0$ and all $i\in[k]$, $r^x$ visits $N_{\epsilon}(e_i)$ infinitely often. Since $\lim_{\epsilon\rightarrow 0} \operatorname{co} \bigcup_{i\in[k]} N_\epsilon(e_i) = \Delta^k$, we will have achieved the desired result. We again make use of (\ref{eq:r-decomposition}). We will construct the sequence $x$ by adding segments ($s$) such that for each successive $s$ we drive $r^x$ closer and closer towards one of the vertices $e_i$ ($i\in[k]$). In order to do this, at each $s$ we append $m$ copies of $i$ to the current $x$.
Specifically, we construct $x$ as follows: \begin{algorithm}[h] \begin{algorithmic}[1] \Require $\phi\colon\mathbb{N}\rightarrow\mathbb{N}$ \State $x\gets \langle \rangle$ \State $s\gets1$ \While{true} \State $i \gets s \mod k$ \Comment{Cycle around the vertices of the simplex} \State $m\gets\phi(s+1)-\phi(s)$ \State $x\gets x\, i^m $ \Comment{Append $m$ copies of $i$ to $x$} \State $s\gets s+1$ \EndWhile \end{algorithmic} \end{algorithm} We need to make $m$ large enough so that the convex combination coefficient $\frac{m}{n+m}$ approaches $1$. To that end, consider an increasing function $\phi\colon\mathbb{N}\rightarrow\mathbb{N}$, which we will further restrict later. The role of $\phi$ is to control $n$ as a function of the segment number $s$; that is, $n=\phi(s)$ and thus $m=\phi(s+1)-\phi(s)$. With this choice, we have \[ \frac{n}{n+m}=\frac{\phi(s)}{\phi(s+1)}\ \ \ \mbox{and}\ \ \ \ \frac{m}{n+m}=1-\frac{\phi(s)}{\phi(s+1)}, \] and thus for all $s\in\mathbb{N}$ \begin{equation} \label{eq:r-phi-s} r^x(\phi(s+1))= \frac{\phi(s)}{\phi(s+1)} r^x(\phi(s)) + \left(1- \frac{\phi(s)}{\phi(s+1)}\right) r^{x^{+\phi(s)}}\left(\phi(s+1)-\phi(s)\right) . \end{equation} We demand that $\lim_{s\rightarrow\infty}\frac{\phi(s)}{\phi(s+1)}=0$, so that as $s$ increases, the second term in (\ref{eq:r-phi-s}) dominates.
For any $s\in\mathbb{N}$, $r^x(\phi(s))\in\Delta^k$ and thus \begin{align*} \|e_{s\operatorname{mod} k} - r^x(\phi(s+1))\| & = \left\|e_{s \operatorname{mod} k} - \frac{\phi(s)}{\phi(s+1)} r^x(\phi(s)) - \left(1- \frac{\phi(s)}{\phi(s+1)}\right) r^{x^{+\phi(s)}}\!\left(\phi(s+1)-\phi(s)\right)\right\|\\ &=\left\|\frac{\phi(s)}{\phi(s+1)} e_{s\operatorname{mod} k} - \frac{\phi(s)}{\phi(s+1)} r^x(\phi(s))\right\|\\ &=\frac{\phi(s)}{\phi(s+1)} \left\|e_{s \operatorname{mod} k}-r^x(\phi(s))\right\|\\ &\le \frac{\phi(s)}{\phi(s+1)} \cdot 2, \end{align*} where the second line follows from the fact that we constructed $x$ such that $r^{x^{+\phi(s)}}\!\left(\phi(s+1)-\phi(s)\right)=e_{s\operatorname{mod} k}$. But by assumption, $\lim_{s\rightarrow\infty}\frac{\phi(s)}{\phi(s+1)}=0$ and hence for any $\epsilon>0$ there exists $s_\epsilon$ such that for all $i\in[k]$, \[ \left|\left\{s\in\mathbb{N}\colon s>s_\epsilon,\ s\operatorname{mod} k=i,\ \left\|e_i-r^x(\phi(s+1))\right\|\le\epsilon\right\}\right|=\aleph_0. \] That is, for each $i\in[k]$, each $\epsilon$-neighbourhood of $e_i$ is visited infinitely often by the sequence $r^x$ and hence $\{e_1,\ldots,e_k\}\subseteq\operatorname{CP}(r^x)$. But since $r^x(n)\in\Delta^k$ for all $n\in\mathbb{N}$ we conclude that indeed $\operatorname{co}(\operatorname{CP}(r^x))=\operatorname{co}(\{e_1,\ldots,e_k\})=\Delta^k$ as required. \end{proof} A suitable choice of $\phi$ is $\phi(s)=\left\lceil\exp(s^\alpha)\right\rceil$ for some $\alpha>1$, in which case $\frac{\phi(s)}{\phi(s+1)}\approx \exp\left(-\alpha s^{\alpha-1}\right)$. An argument as in the proof of Theorem \ref{th:CP-r-C} would show that $S_k\coloneqq\bigcup_{i\in[k]} l\left(e_i,e_{(i+1)\operatorname{mod} k}\right)\subseteq\operatorname{CP}(r^x)$. Observe that when $k=3$, $S_k=\partial\Delta^k$, but for $k\ge 4$ this is no longer true, even though we still have $\operatorname{co}(S_k)=\Delta^k$.
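The construction in the proof is short enough to run directly. The sketch below uses $\phi(s)=s!$ (so that $\phi(s)/\phi(s+1)=1/(s+1)\rightarrow 0$) rather than $\lceil\exp(s^\alpha)\rceil$, tracks symbol counts instead of the sequence itself, and initialises the first $\phi(1)$ entries with symbol $0$; all of these are illustrative choices, not part of the proof.

```python
from math import factorial

def vertex_frequencies(k, num_segments, phi=factorial):
    # Segment s appends m = phi(s+1) - phi(s) copies of symbol s mod k, so
    # r^x(phi(s+1)) lies within 2*phi(s)/phi(s+1) of the vertex e_{s mod k}.
    counts = [0] * k
    counts[0] = phi(1)            # arbitrary initial phi(1) symbols
    n = phi(1)
    history = []
    for s in range(1, num_segments + 1):
        i = s % k                 # 0-based vertex index
        m = phi(s + 1) - phi(s)
        counts[i] += m
        n += m                    # n is now phi(s + 1)
        history.append((i, [c / n for c in counts]))
    return history

hist = vertex_frequencies(3, 8)
i_last, freqs = hist[-1]
# After segment s = 8 the frequency of symbol 8 mod 3 = 2 dominates,
# consistent with the bound 1 - freqs[2] <= 2 * phi(8)/phi(9) = 2/9.
assert i_last == 2 and freqs[2] > 0.85
```

With this $\phi$, the target vertex changes every segment while the bound $2\phi(s)/\phi(s+1)=2/(s+1)$ shrinks, so every vertex is approached infinitely often, as in the proof.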
\section{Related Work} \label{sec:relatedwork} We examine previous research at the intersection of frequentism and imprecise probability. While divergence of relative frequencies has been linked to imprecise probability before, this has almost exclusively been done in settings which are not \textit{strictly} frequentist. \citet{fine1970apparent} was one of the first authors to critically evaluate the hypothesis of statistical stability. \citet{fine1970apparent} observed that this widespread hypothesis is regarded as a ``striking instance of order in chaos'' in the statistics community, and sought to challenge its nature as an empirical ``fact''. In contrast to our approach, \citet{fine1970apparent} was concerned with finite sequences and the question of what it means for such a sequence to be random. While Fine did mention von Mises, \citet{fine1970apparent} opted for a randomness definition based on computational complexity. Intuitively, one can consider a sequence random if it cannot be generated by a short computer program (i.e.\@\xspace for a universal Turing machine). Fine then showed that statistical stability (``apparent convergence'') occurs \textit{because of}, and not in spite of, high randomness of the sequence. In contrast, a sequence for which relative frequencies diverge has low computational complexity. We consider these findings surprising, and believe that an interesting avenue for future research with respect to statistical stability lies in the comparison of the computational complexity approach to von Mises' randomness notion based on selection rules. We agree with \citet{fine1970apparent} that apparent convergence is not some law of nature, but rather a consequence of data handling.
The previously mentioned paper may be seen as a predecessor to a long line of work by Terrence Fine and collaborators \citep{fine1976computational, walley1982towards, kumar1985stationary, grize1987continuous, fine1988lower, papamarcou1991stationarity, papamarcou1991unstable, sadrolhefazi1994finite, fierens2009frequentist}; see also \citep{finehandbook} for an introduction. A central motivation behind this work was to develop a frequentist model for the puzzling case of stationary, unstable phenomena with bounded time averages. What differentiates this work from ours is that we take a \textit{strictly frequentist} approach: we explicitly define the upper probability and upper prevision from a given sequence. In contrast, the above works (with the exceptions of \citep{papamarcou1991unstable} and \citep{fierens2009frequentist}) use an imprecise probability to represent a single trial in a sequence of unlinked repetitions of an experiment, and then induce an imprecise probability via an infinite product space. This is in the spirit of, and can be understood as a generalization of, the standard frequentist approach, where one would assume that $X_1,X_2,\ldots$ form an i.i.d. sequence of random variables; here, there is both an ``individual $P$,'' as well as an induced ``aggregate $P$'' on the infinite product space, which can be used to measure an event such as convergence or divergence of relative frequencies. When a single trial is assumed to be governed by an imprecise probability, how can this be interpreted? And what is the interpretation of the mysterious ``aggregate imprecise probability''? This model falls prey to similar criticisms as we outlined in the Introduction (Section~\ref{sec:introductionfreqip}) concerning the theoretical law of large numbers.
In fact, \citet{walley1982towards} subscribed to a frequency-propensity interpretation (specifically, they were inspired by \citet{giere1973objective}), where the imprecise probability of a single trial represents its propensity, that is, its tendency or disposition to produce a certain outcome. Consequently, one obtains a propensity for compound trials in terms of an imprecise probability and thus one can ascribe a lower and upper probability to events such as divergence of relative frequencies. To us, the meaning of such a propensity is unclear. While we are not against a propensity interpretation as such, our motivation was to work with a parsimonious set of assumptions. To this end, we took the sequence as the primitive entity, without relying on an underlying ``individual'' (imprecise) probability. Closely related to our work is \citep{papamarcou1991unstable}, who were also inspired by von Mises. The authors proved that, for any set of probability measures $\mathcal{P}$ on $(\Omega,2^\Omega)$, $|\Omega|<\infty$, and any countable set of place selection rules $\mathcal{S}$,\footnote{See \citep{papamarcou1991unstable} for the definition. Intuitively, a place selection rule is causal, i.e.\@\xspace depends only on past values.} the existence of a sequence $\vv{\Omega} \colon \mathbb{N} \rightarrow \Omega$ with the following properties can be guaranteed \citep[Theorem 2.2]{papamarcou1991unstable}: \[ \forall \vv{S}_j \in \mathcal{S} : \forall A \subseteq \Omega : \limsup_{n \rightarrow \infty} \frac{\sum_{i=1}^n \chi_A\left(\vv{\Omega}(i)\right) \cdot \vv{S}_{\!\!j}(i)}{\sum_{i=1}^n \vv{S}_{\!\!j}(i)} = \sup\{P(A) \colon P \in \mathcal{P}\}. \] That is, the sequence has the specified upper probability (take $\vv{S}_{\!\!j}(i)=1$ $\forall i \in \mathbb{N}$) and this property is stable under subselection.
Note that this claim is weaker than our Proposition~\ref{theorem:converse}, where we construct a sequence for which the set of cluster points is exactly a prespecified one. Within the setup of \citep{walley1982towards}, \citet[Theorem 1, Theorem 2]{cozmanconvex} proposed an estimator for the underlying imprecise probability of the sequence. Specifically, they computed relative frequencies along a set of selection rules (though without referring to von Mises) and then took their minimum to obtain a lower probability; in a specific technical sense, this estimation succeeds. What motivated the authors to do this is an assumption on the data-generating process: at each trial, ``nature'' may select a different distribution from a set of probability measures; the trials are then independent but not identically distributed. This viewpoint also motivated \citet{fierens2009frequentist}, who restricted themselves to finite sequences. They offered the metaphor of an \textit{analytical microscope}. With more and more complex selection rules (``powerful lenses''), along which relative frequencies are computed, more and more structure of the set of probabilities comes to light. The authors also proposed a way to simulate data from a set of probability measures. \citet{cattaneo2017empirical} investigated an empirical, frequentist interpretation of imprecise probability in a similar setting, where $X_1,X_2,\ldots$ is a sequence of precise Bernoulli random variables, but $p_i \coloneqq P(X_i = 1)$ is chosen by nature and may differ from trial to trial within bounds $p_i \in [\underline{p_i},\overline{p_i}]$. The author drew the sobering conclusion that ``imprecise probabilities do not have a generally valid, clear empirical meaning, in the sense discussed in this paper''. Works which proposed extensions (modifications) of the law of large numbers to imprecise probabilities include \citep{marinacci1999limit,maccheroni2005strong,de2008weak,chen2013strong,peng2019nonlinear}.
Separate from the imprecise probability literature, \citet{gorban2017statistical} studied the phenomenon of statistical stability and its violations in depth, including theory and experimental studies. Similarly, the work of \citet{ivanenkobook}, a major motivation for our work, does not appear to be known in the imprecise probability literature. \section{Conclusion} In this work, we have extended strict frequentism to the case of possibly divergent relative frequencies and sample averages, tying together threads from \citep{mises1919grundlagen}, \citep{ivanenkobook} and \citep{walley1991statistical}. In particular, we have recovered the generalized Bayes rule from a strictly frequentist perspective by taking inspiration from von Mises' \citeyearpar{mises1919grundlagen} definition of conditional probability. Furthermore, we have established strictly frequentist \textit{semantics} for imprecise probability, by demonstrating that (under the mild assumption that $|\Omega|<\infty$) we can explicitly construct a sequence for which the relative frequencies have a prespecified set of cluster points, corresponding to the imprecise probability. The hypothesis of perfect statistical stability is typically taken for granted by practitioners of statistics, without recognizing that it is just that --- a \textit{hypothesis}; see Appendix \ref{sec:pathologies-or-norm} for an elaboration of this point. Importantly, when one blindly \textit{assumes} convergence of relative frequencies, one will not notice when it is violated --- in the practical case, when only a finite sequence is given, such a violation amounts to instability of relative frequencies even over long observation intervals \citep{gorban2017statistical}. In this work, we have rejected the assumption of stability; furthermore, in contrast to other related work, we have aimed to weaken the set of assumptions by taking the concept of a sequence as the primitive.
However, this gives rise to the critique that no finite part of a sequence has any bearing on what the limit is, as has been pointed out by other authors who attempted a frequentist understanding of imprecise probabilities (e.g.\@\xspace \citep{cattaneo2017empirical}). So what to do in practice? To form an estimate of the upper probability or the set of cluster points, an approach similar to that of \citet{cozmanconvex} might be applicable. But what can be said about the convergence rate of such an estimate? We believe that the way forward is to introduce (potentially problem-specific) \textit{randomness assumptions}, i.e.\@\xspace local irregularity assumptions. In the standard picture, this is the i.i.d. assumption. In von Mises' framework, the set of selection rules expresses randomness --- but also merely in the limit. Expressing randomness assumptions in the finite data setting might be a fruitful direction. Finally, we remark that an interesting avenue for future research may investigate the use of \textit{nets}, which generalize the concept of a sequence. Indeed, \textit{fraction-of-time probability} \citep{gardnerbook,leskow2006foundations,napolitano2022fraction,gardner2022transitioning} is a theory of probability with remarkable parallels to von Mises' \citeyearpar{mises1919grundlagen}. Instead of sequences, this theory is based on continuous time, hence on a net $\vv{\Omega} \colon {\bm{R}} \rightarrow \Omega$.\footnote{We remark that the notion of a sampling net in \citep{ivanenkobook} is a different one; that is, fraction-of-time probability does not fit this concept, although it is based on the usage of a net.} Sample averages are then given by integration instead of summation. In essence, this amounts to using a different \textit{relative measure} than the counting measure, which is implicit in the work of \citet{mises1919grundlagen}.
However, fraction-of-time probability was so far developed only for the convergent case; we expect that a construction similar to that of Section~\ref{sec:ivanenkoformal} could be used to extend it to the case of divergence. \section{Unstable Independence} Closely related to conditional probability is the concept of \textit{statistical independence}. Independence plays a central role not only in Kolmogorov's theory \citep[p.\@\xspace 37]{Durrett:2019tt}, but more generally in most probability theories (\citet{levin1980concept}; \citet[Section IIF, IIIG and VH]{fine2014theories}). Already \citet[Introduction, p.\@\xspace 6]{DeMoivre1738} nicely summarized a pre-theoretical, probabilistic notion of real-world independence: \begin{quote} Two events are independent, when they have no connexion one with the other, and that the happening of one neither forwards nor obstructs the happening of the other. \end{quote} This intuitive conception was then formalized by \citet{kolmogorov1933grundbegriffe} (translated in~\citep{kolmogorov2018foundations}) in the following classical definition. \begin{definition} Let $(\Omega,\mathcal{F},P)$ be a probability space.\footnote{Here, $\Omega$ is the possibility space, $\mathcal{F}$ is a $\sigma$-algebra and $P$ is a countably additive probability measure.} We call events $A \in \mathcal{F}$ and $B \in \mathcal{F}$ \Def{\emph{classically independent}} if $P(A\cap B) = P(A)P(B)$. \end{definition} If $P(B)>0$, then we can equivalently express this condition as $P(A|B)=P(A)$ by using the definition of conditional probability. Kolmogorov's definition is formal, and it has been questioned whether it is an adequate expression of what we mean by independence in a statistical context \citep{von2006note}. As it is stated in purely measure-theoretic terms, it is unclear whether it has reasonable frequentist semantics.
In our framework, we construct an intuitive definition of independence, where the independence of events is based on an independence notion for \textit{processes} (cf.\@\xspace \citep[pp.\@\xspace 35-39]{mises1964mathematical}). Therefore, our definition is thoroughly grounded in the frequentist setting. Furthermore, we shall generalize the independence concept to the case of possible divergence, where new subtleties come into play. We will then consider how our definitions relate to the classical case when relative frequencies converge. Assume that a sequence $\vv{\Omega}$ is given and we have constructed an upper probability $\overline{P}$ as in Section~\ref{sec:ivanenkoformal}. \begin{definition} We call an event $B \in 2^\Omega_{1+}$ \Def{\emph{irrelevant}} to another event $A \subseteq \Omega$ if: \begin{align*} \overline{P}(A|B)&=\overline{P}(A). \end{align*} \end{definition} This definition captures the concept of \textit{epistemic irrelevance} from the imprecise probability literature \citep{miranda2008survey}. Why does this definition possess reasonable frequentist semantics? Consider what $\overline{P}(A|B)$ means (see Section~\ref{sec:unstablecondprob}): we are considering a subsequence, induced by the indicator gamble $\chi_B$, that is, we condition (in an intuitive sense) on the occurrence of $B$; and on this subsequence, we then consider an unconditional upper probability. If this coincides with the original upper probability, our decision maker values $A$ just the same whether $B$ occurs or not. Thus $B$ is irrelevant for putting a value on $A$. In contrast to the classical, precise case, irrelevance is not necessarily symmetric. Hence, we define independence as follows. \begin{definition} Let $A,B \in 2^\Omega_{1+}$. We call $A$ and $B$ \Def{\emph{independent}} if $\overline{P}(A|B)=\overline{P}(A)$ and $\overline{P}(B|A)=\overline{P}(B)$. \end{definition} Thus, we have obtained a grounded concept of independence for events.
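On a finite prefix of a sequence, irrelevance can be probed numerically by comparing the relative frequency of $A$ with its relative frequency along the subsequence of trials where $B$ occurs, mirroring the conditioning-by-subsequence just described. A small Python sketch; the periodic example sequence and the function names are ours, and finite prefixes can of course only approximate the limiting upper probabilities:

```python
def rel_freq(seq, event):
    # Relative frequency of `event` (a set of outcomes) along seq.
    return sum(w in event for w in seq) / len(seq)

def cond_rel_freq(seq, A, B):
    # Relative frequency of A along the subsequence of trials where B occurs,
    # i.e. conditioning via the indicator of B, as described in the text.
    sub = [w for w in seq if w in B]
    return rel_freq(sub, A)

# Outcomes 0..3 encode two bits; the periodic sequence keeps the two bits
# statistically unlinked, so B = {0, 2} is irrelevant to A = {0, 1}:
seq = [0, 1, 2, 3] * 1000
assert rel_freq(seq, {0, 1}) == 0.5
assert cond_rel_freq(seq, {0, 1}, {0, 2}) == 0.5   # unchanged: B irrelevant to A
assert cond_rel_freq(seq, {0, 1}, {0, 1}) == 1.0   # conditioning on A itself
```

In this example the relative frequencies converge, so the check reduces to the classical notion; for divergent sequences one would compare limit superiors along increasing prefixes instead.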
How can we extend this to an irrelevance and independence concept for gambles? First, we briefly recall how this is done in the classical case. \begin{definition} \label{def:independencervsclassical} Let $(\Omega,\mathcal{F},P)$ be a probability space and fix the Borel $\sigma$-algebra $\mathcal{B}$ on ${\bm{R}}$. Given two gambles $X,Y \colon \Omega \rightarrow {\bm{R}}$, we say that they are \Def{\emph{classically independent}} if: \[ P(A \cap B) = P(A)P(B) \quad \forall A \in \sigma(X), B \in \sigma(Y), \] where the $\sigma$-algebra generated by a gamble $X$, $\sigma(X)$, is defined as the smallest $\sigma$-algebra with respect to which $X$ is measurable: \[ \sigma(X) \coloneqq \sigma\left(X^{-1}(\mathcal{B})\right), \] and $\sigma(\mathcal{H})$ is the smallest $\sigma$-algebra containing all sets $H \in \mathcal{H}$, $\mathcal{H} \subseteq 2^\Omega$. \end{definition} Thus independence of gambles is reduced to independence of events. But note that this definition inherently depends on the choice of the Borel $\sigma$-algebra on ${\bm{R}}$. In our case, this is similar: to define irrelevance and independence of gambles, we need to fix a set system on ${\bm{R}}$, but we leave the choice open in general. \begin{definition} \label{def:irrelevanceofrvs} Assume a set system $\mathcal{H} \subseteq 2^{{\bm{R}}}$ and two gambles $X,Y\colon \Omega \rightarrow {\bm{R}}$ are given. We call $Y$ irrelevant to $X$ with respect to $\mathcal{H}$ if \[ \overline{P}(X^{-1}(A)|Y^{-1}(B))=\overline{P}(X^{-1}(A)) \quad \forall A, B \in \mathcal{H} \text{ with } Y^{-1}(B) \in 2^\Omega_{1+}. \] Similarly, we call them independent when both directions hold. \end{definition} Observe that if $\mathcal{H} = \mathcal{B}$ and $\overline{P}$ were actually a precise $P$ on $\sigma(X)$ and $\sigma(Y)$, this definition would be equivalent to Definition~\ref{def:independencervsclassical} (modulo the subtlety regarding conditioning on measure zero events), due to the following.
\begin{lemma} Given set systems $\mathcal{A},\mathcal{B} \subseteq 2^\Omega$, in the precise case, the following statements are equivalent. \begin{enumerate}[nolistsep,label=\textbf{\emph{PI\arabic*.}}, ref=PI\arabic*] \item \label{prop:pi1} $P(A|B)=P(A)$ for all $A \in \mathcal{A}$ and $B \in \mathcal{B}$ with $P(B)>0$. \item \label{prop:pi2} $P(A \cap B)=P(A)P(B)$ for all $A \in \mathcal{A}$ and $B \in \mathcal{B}$. \end{enumerate} \end{lemma} \begin{proof} Obviously \ref{prop:pi2} implies \ref{prop:pi1} by the definition of conditional probability. One only has to check that when \ref{prop:pi1} holds, \ref{prop:pi2} holds even if $P(B)=0$. But if $P(B)=0$, then also $P(A \cap B)=0$ due to monotonicity of $P$ in the sense of a capacity. \end{proof} \begin{example} \normalfont Choose $\mathcal{H} \coloneqq \{(-\infty,a] \colon a \in {\bm{R}}\}$ in Definition~\ref{def:irrelevanceofrvs}. Such an $\mathcal{H}$ is called a $\Pi$-system, that is, a non-empty set system that is closed under finite intersections. This particular $\Pi$-system can in fact be used to define independence in the classical case, which is done in terms of the joint cumulative distribution function. In Appendix~\ref{app:independenceviapi}, we investigate this approach to defining independence and discuss subtle differences to the classical, countably additive case. \end{example} \section{Independence via \texorpdfstring{$\Pi$}{Pi}-Systems} \label{app:independenceviapi} In this section we discuss a useful special case of defining independence via $\Pi$-systems. This is particularly insightful as it illuminates subtle differences between the definition of irrelevance (independence) in the precise, countably additive case and the precise, finitely additive case. Consider the choice of $\mathcal{H} \coloneqq \{(-\infty,a] \colon a \in {\bm{R}}\}$ in Definition~\ref{def:irrelevanceofrvs}.
It seems natural that this choice achieves the goal of expressing independence, but we would like to leave the choice open in general. Then irrelevance of $Y$ to $X$ means: \[ \overline{P}(\{X \leq a\}|\{Y \leq b\})=\overline{P}(\{X \leq a\}), \quad \forall a, b \in {\bm{R}} \text{ with } \{Y \leq b\} \in 2^\Omega_{1+}. \] Compare this to the classical, precise setting where independence can also be defined as: \begin{align} \label{def:classicalindependence} P(\{X \leq a\}|\{Y \leq b\})&=P(\{X \leq a\}), \quad \forall a, b \in {\bm{R}}, \text{ if } P(\{Y \leq b\})>0.\\ \Longleftrightarrow\ \ P(\{X \leq a\}\cap\{Y \leq b\})&=P(\{X \leq a\})P(\{Y \leq b\}), \quad \forall a, b \in {\bm{R}}.\nonumber\\ \Longleftrightarrow\ \ F_{X,Y}(a,b) & = F_X(a)F_Y(b), \quad \forall a, b \in {\bm{R}}.\nonumber \end{align} That is, in the classical, precise case it suffices to have the joint distribution function factorize. This is formalized in the following. \begin{proposition} Let $(\Omega,\mathcal{F},P)$ be a probability space and let $X,Y$ be gambles for which classical independence as in Equation~\ref{def:classicalindependence} holds. Then $X$ and $Y$ are also independent in the following sense: \[ P\left(X^{-1}(A) \cap Y^{-1}(B)\right) = P\left(X^{-1}(A)\right)P\left(Y^{-1}(B)\right), \quad \forall A,B \in \mathcal{B}({\bm{R}}), \] where $\mathcal{B}({\bm{R}})$ is the Borel $\sigma$-algebra on ${\bm{R}}$, thereby constituting a precise special case of Definition~\ref{def:irrelevanceofrvs}\footnote{Modulo the issue of conditioning on measure zero sets.}. Furthermore, they are independent in the sense of Definition~\ref{def:independencervsclassical}. \end{proposition} Essentially, it suffices to define independence based on the set systems $\left\{X^{-1}((-\infty,a]) \colon a \in {\bm{R}}\right\}$ and $\left\{Y^{-1}((-\infty,a]) \colon a \in {\bm{R}}\right\}$, which are the pre-images of $\mathcal{H} \coloneqq \{(-\infty,a] \colon a \in {\bm{R}}\}$, to get independence on the whole generated $\sigma$-algebras.
This is based on the famous $\Pi-\lambda$ Theorem. To investigate this result in our framework, where slight differences will arise due to finite additivity, we need to talk about set systems. First, we consider the \textit{system of precision}, on which we have precise probabilities. We define the \Def{\emph{system of precision}} $\Def{\mathcal{G}}$ as the subset of $2^\Omega$ on which the relative frequencies converge: \[ \Def{\mathcal{G} \coloneqq \left\{A \subseteq \Omega\colon \overline{P}(A)=\underline{P}(A) = P(A) = \lim_{n \rightarrow \infty} \frac{1}{n} \sum_{i=1}^n \chi_A(\vv{\Omega}(i))\right\}.} \] We show that $\mathcal{G}$ always constitutes a \textit{pre-Dynkin system}, but not in general a \textit{Dynkin system}. \begin{definition} A set system $\mathcal{A} \subseteq 2^\Omega$ is called a \Def{\emph{pre-Dynkin system}} if the following conditions hold: \begin{enumerate}[nolistsep,label=\textbf{\emph{PD\arabic*.}}, ref=PD\arabic*] \item \label{pd:omega} $\Omega \in \mathcal{A}$. \item \label{pd:complement} $A \in \mathcal{A} \implies A^\complement \in \mathcal{A}$. \item \label{pd:closurefinite} If $A,B \in \mathcal{A}$ and $A \cap B =\emptyset$, then $A \cup B \in \mathcal{A}$. \end{enumerate} \end{definition} Thus a pre-Dynkin system is closed under complements and (by induction) under finite unions of disjoint sets. If condition \ref{pd:closurefinite} holds also for a countable collection of disjoint sets, i.e.\@\xspace if $A_1, A_2, \ldots \in \mathcal{A}$ and $A_i \cap A_j = \emptyset$ for all $i \neq j$ implies $\bigcup_{i=1}^\infty A_i \in \mathcal{A}$, then we speak of a \Def{\emph{Dynkin system}}. We write $\Def{\operatorname{PD}(\mathcal{H})}$ for the \Def{intersection of all pre-Dynkin systems containing a set system $\mathcal{H} \subseteq 2^\Omega$}. \begin{proposition} The system of precision $\mathcal{G}$ is a pre-Dynkin system, but not in general a Dynkin system.
\end{proposition} \begin{proof} That condition \ref{pd:omega} holds is obvious. Suppose $A \in \mathcal{G}$, i.e.\@\xspace $\lim_{n \rightarrow \infty} \frac{1}{n} \sum_{i=1}^n \chi_A\left(\vv{\Omega}(i)\right)$ exists. Then also $\lim_{n \rightarrow \infty} \frac{1}{n} \sum_{i=1}^n \left(1-\chi_A\right)\left(\vv{\Omega}(i)\right) = \lim_{n \rightarrow \infty} \frac{1}{n} \sum_{i=1}^n \left(1 - \chi_A\left(\vv{\Omega}(i)\right)\right)$ exists, and hence $A^\complement \in \mathcal{G}$, i.e.\@\xspace \ref{pd:complement} holds. Now suppose $A$ and $B$ are in $\mathcal{G}$ and $A \cap B = \emptyset$. Then $\lim_{n \rightarrow \infty} \frac{1}{n} \sum_{i=1}^n \chi_{A \cup B}\left(\vv{\Omega}(i)\right) = \lim_{n \rightarrow \infty} \frac{1}{n} \sum_{i=1}^n \left(\chi_{A}\left(\vv{\Omega}(i)\right) + \chi_B\left(\vv{\Omega}(i)\right)\right) = \lim_{n \rightarrow \infty} \frac{1}{n} \sum_{i=1}^n \chi_A\left(\vv{\Omega}(i)\right) + \lim_{n \rightarrow \infty} \frac{1}{n} \sum_{i=1}^n \chi_B\left(\vv{\Omega}(i)\right)$ since both limits exist by assumption, hence $A \cup B \in \mathcal{G}$. It remains to give a counterexample to show that closure under countable disjoint union can fail. For this we simply set $\Omega = \mathbb{N}$. Then, we construct a sequence $\vv{\Omega}$ such that there exists a countable collection of pairwise disjoint sets in the corresponding system of precision $\mathcal{G}$ whose union is not an element of $\mathcal{G}$. Let $\vv{\Omega} = \left\langle 1^{[1]}\, 2^{[1]}\, 3^{[2]}\, 4^{[2]}\, 5^{[4]}\, 6^{[4]}\, \dots \, i^{[2^{\lceil i/2\rceil - 1}]} \, \ldots \right\rangle$, where $\langle \cdot \rangle$ forms a sequence from its inputs. The notation $i^{[j]}$ here means $j$ repetitions of $i$. Every even natural number $2k$, $k \in \mathbb{N}$, occurs exactly $2^{k-1}$ times in this sequence, so $\lim_{n \rightarrow \infty} \frac{1}{n} \sum_{i=1}^n \chi_{\{ 2k\}}\left(\vv{\Omega}(i)\right) \le \lim_{n \rightarrow \infty} \frac{1}{n}\, 2^{k-1} = 0$.
Thus, $\{ 2k\} \in \mathcal{G}$ for every $k \in \mathbb{N}$. The disjoint union of all such sets, namely $\{ 2k \colon k \in \mathbb{N}\}$, however, is not in $\mathcal{G}$. To see this, we consider the sequence $(h_i)_{i \in \mathbb{N}}$ given by $h_i \coloneqq \chi_{\{ 2k \colon k \in \mathbb{N}\}}\left(\vv{\Omega}(i)\right)$, i.e.\@\xspace $\left\langle 0^{[1]}\, 1^{[1]}\, 0^{[2]}\, 1^{[2]}\, 0^{[4]}\, 1^{[4]}\, \ldots\, 0^{[2^j]}\, 1^{[2^j]}\, \ldots \right\rangle$. As noticed by \citet[p. 11]{mises1964mathematical}, this sequence has no unique frequency limit, i.e.\@\xspace $\lim_{n \rightarrow \infty} \frac{1}{n} \sum_{i=1}^n \chi_{\{ 2k \colon k \in \mathbb{N}\}}\left(\vv{\Omega}(i)\right)$ does not exist. This concludes the proof that the system of precision $\mathcal{G}$ is a pre-Dynkin system, but not in general a Dynkin system. \end{proof} Note that the system of precision need not be closed under intersection \citep{rivas2019role}. A pre-Dynkin system is closed under finite disjoint unions, but, as opposed to a Dynkin system, not in general under countable disjoint unions. A similar relation holds between a \textit{field} (also called algebra) and a $\sigma$-algebra. To avoid confusion, we stick to the name \textit{field}. \begin{definition} A set system $\mathcal{A}$ is a \Def{\emph{field}} if the following conditions hold. \begin{enumerate}[nolistsep,label=\textbf{\emph{FLD\arabic*.}}, ref=FLD\arabic*] \item \label{field:omega} $\Omega \in \mathcal{A}$. \item \label{field:complement} $A \in \mathcal{A} \implies A^\complement \in \mathcal{A}$. \item \label{field:closurefinite} If $A,B \in \mathcal{A}$, then $A \cup B \in \mathcal{A}$. \end{enumerate} A field is then also closed under finite intersections. We write $\Def{\operatorname{field}(\mathcal{H})}$ for the \Def{intersection of all fields containing the set system $\mathcal{H} \subseteq 2^\Omega$}. \end{definition} As opposed to a $\sigma$-algebra, a field is in general closed only under finite unions.
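The oscillation exploited in the counterexample above is easy to check numerically. The following is a minimal Python sketch (the helper name \texttt{blocks} is ours, not from the text): it builds the indicator sequence $0^{[1]}\,1^{[1]}\,0^{[2]}\,1^{[2]}\,\ldots$ and confirms that its running relative frequency is exactly $1/2$ at the end of every $1$-block but approaches $1/3$ at the end of every $0$-block, so no limit exists.

```python
# Running relative frequencies of the sequence 0^1 1^1 0^2 1^2 0^4 1^4 ...
# from the counterexample above (helper names are ours).

def blocks(num_pairs):
    """Concatenate 0-blocks and 1-blocks of lengths 1, 1, 2, 2, 4, 4, ..."""
    s = []
    for j in range(num_pairs):
        s += [0] * 2**j + [1] * 2**j
    return s

seq = blocks(16)                     # length 2*(2^16 - 1) = 131070
ones, freqs = 0, []
for n, x in enumerate(seq, start=1):
    ones += x
    freqs.append(ones / n)

# Exactly 1/2 after every 1-block (n = 2^(j+2) - 2) ...
half_points = [freqs[2**(j + 2) - 3] for j in range(14)]
# ... but close to 1/3 after every late 0-block (n = 3*2^j - 2).
third_points = [freqs[3 * 2**j - 3] for j in range(10, 16)]
```

The running frequency thus keeps oscillating between (roughly) $1/3$ and $1/2$ forever, which is exactly the failure of statistical stability the proof exploits.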
Finally, we need the concept of a $\Pi$-system, which is closed under finite intersections. \begin{definition} A set system $\mathcal{H}$ is called a \Def{\emph{$\Pi$-system}} if it is non-empty and $A,B \in \mathcal{H} \implies A\cap B \in \mathcal{H}$. \end{definition} \begin{example} \normalfont A prominent $\Pi$-system is given by $\mathcal{H}=\{(-\infty,a] \colon a \in {\bm{R}}\}$. \end{example} We now begin (with slight modifications) reproducing a series of results which are stated in the literature for the interplay of Dynkin systems and $\sigma$-algebras. In our case, we restate them for the interplay of pre-Dynkin systems with fields. \begin{proposition} A set system $\mathcal{A}$ is a field if and only if it is both a pre-Dynkin system and a $\Pi$-system. \end{proposition} \begin{proof} First, we show the ``only if'' direction. Using closure under complement and $\Omega \in \mathcal{A}$, the field condition ``closed under finite unions'' is equivalent to ``closed under binary intersections'', hence a field is a $\Pi$-system. It is also a pre-Dynkin system, since a field is closed under arbitrary finite unions, in particular under finite disjoint unions, as well as under complements. Next, we show the ``if'' direction, i.e.\@\xspace that $\mathcal{A}$ being a pre-Dynkin system and a $\Pi$-system implies that $\mathcal{A}$ is a field. We only have to check that it is closed under arbitrary finite unions. Consider: \[ A \cup B = \Omega \setminus \left(A^\complement \cap B^\complement\right), \] which is in $\mathcal{A}$ because $\mathcal{A}$ is closed under complements (pre-Dynkin system) and under intersections ($\Pi$-system). \end{proof} \begin{proposition} Pre-Dynkin-$\Pi$-Theorem: Let $\mathcal{H}$ be a $\Pi$-system. Then the generated field coincides with the generated pre-Dynkin system.
\end{proposition} \begin{proof} Just use the proof in \citep[p.\@\xspace 193]{williams1991probability} and replace the one occurrence of ``countable union'' with ``finite union''; consequently, we must replace ``Dynkin'' with ``pre-Dynkin'' everywhere. \end{proof} \begin{proposition} Assume precise probabilities exist on some $\Pi$-system $\mathcal{H}$, i.e.\@\xspace $\mathcal{H} \subseteq \mathcal{G}$. Then the generated pre-Dynkin system $\operatorname{PD}(\mathcal{H})$ is contained in the system of precision $\mathcal{G}$. In particular, $\mathcal{G}$ contains a field. \end{proposition} \begin{proof} Since $\mathcal{G}$ is a pre-Dynkin system containing $\mathcal{H}$, it contains the smallest such system, $\operatorname{PD}(\mathcal{H})$. By the Pre-Dynkin-$\Pi$-Theorem, $\operatorname{PD}(\mathcal{H}) = \operatorname{field}(\mathcal{H})$. \end{proof} \begin{proposition} Uniqueness Lemma: Let $\mathcal{A}$ be a $\Pi$-system on $\Omega$ whose generated field (or equivalently, pre-Dynkin system) is the field $\mathcal{F} \subseteq 2^\Omega$, and assume we have two finitely additive measures $P$, $Q$ on $\mathcal{F}$ with $P(\Omega)=Q(\Omega)=1$ and $P(A)=Q(A)$ $\forall A \in \mathcal{A}$. Then $P(A)=Q(A)$ $\forall A \in \mathcal{F}$. \end{proposition} \begin{proof} We show that $\mathcal{D} \coloneqq \{A \in \mathcal{F}\colon P(A)=Q(A)\}$ is a pre-Dynkin system. Clearly, $\Omega\in \mathcal{D}$, and we have closure under complements. We have to show that if $A,B \in \mathcal{D} \subseteq \mathcal{F}$ and $A\cap B =\emptyset$, then $A \cup B \in \mathcal{D}$, i.e.\@\xspace $P(A\cup B)=Q(A \cup B)$. But this follows from the assumptions that $P(A)=Q(A)$, $P(B)=Q(B)$ and that $P,Q$ are finitely additive measures on $\mathcal{F}$. Also, the $\Pi$-system $\mathcal{A}$ is contained in $\mathcal{D}$ by assumption. The Pre-Dynkin-$\Pi$-Theorem then gives that the field generated by $\mathcal{A}$ is contained in $\mathcal{D}$, which concludes the proof. \end{proof} \begin{example}\normalfont Let $\Pi=\{(-\infty,a] \colon a \in {\bm{R}}\}$. Consider the induced pre-Dynkin system $\operatorname{PD}(\Pi)$. We call this the Borel field by analogy, since the induced Dynkin system is the Borel $\sigma$-algebra.
\end{example} Perhaps the name ``Borel field'' is unfortunate, as there is no connection to topology anymore, unlike for the Borel $\sigma$-algebra. However, the name serves to emphasize the close relation to the latter. \begin{proposition} Assume we have precise probabilities on $\{\{X \leq x\} \colon x \in {\bm{R}}\}$ and $\{\{Y \leq y\} \colon y \in {\bm{R}}\}$, and assume that the irrelevance condition of Definition~\ref{def:irrelevanceofrvs} holds for $\mathcal{H} \coloneqq \{(-\infty,a] \colon a \in {\bm{R}}\}$: \[ P(\{X \leq x\}|\{Y \leq y\}) = P(\{X \leq x\}), \quad \forall x, y \in {\bm{R}} \text{ with } \{Y \leq y\} \in 2^\Omega_{1+}. \] Then the independence condition also holds for the Borel field, i.e.\@\xspace \[ P\left(X^{-1}(A)|Y^{-1}(B)\right)=P\left(X^{-1}(A)\right), \quad \forall A,B \in \operatorname{field}(\mathcal{H}), Y^{-1}(B) \in 2^\Omega_{1+}. \] \end{proposition} \begin{proof} Define $\mathcal{A} \coloneqq \{\{X \leq x\} \colon x \in {\bm{R}}\}$ and $\mathcal{B} \coloneqq \{\{Y \leq y\} \colon y \in {\bm{R}}\}$. By reasoning similar to \citet[Proposition 3.12]{measuretheoreticprobnotes}, we get that: \[ P(A|B) = P(A), \quad \forall A \in \operatorname{field}(\mathcal{A}), B \in \operatorname{field}(\mathcal{B}). \] To obtain the statement, it remains to show: \[ \operatorname{field}\left(X^{-1}(\mathcal{H})\right) = X^{-1}(\operatorname{field}(\mathcal{H})). \] We can follow the reasoning of \citep[p.\@\xspace 12, Lemma 1]{chowprobability}, since nothing in the argument depends on the distinction between $\sigma$-algebras and fields. This concludes the argument. \end{proof} This gives a good justification in the \textit{precise case} for defining independence via the $\Pi$-system $\mathcal{H} \coloneqq \{(-\infty,a] \colon a \in {\bm{R}}\}$. For imprecise probabilities, however, we have no such justification, and thus it is better to define independence directly on the whole Borel field.
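For a small finite $\Omega$, the closure operators $\operatorname{PD}(\cdot)$ and $\operatorname{field}(\cdot)$ can be computed by brute-force fixed-point iteration, which makes the Pre-Dynkin-$\Pi$-Theorem tangible: the two closures agree when the generator is a $\Pi$-system, and can differ otherwise. A minimal Python sketch (all helper names and the toy examples are ours):

```python
from itertools import combinations

OMEGA = frozenset(range(4))           # a small toy possibility space

def close(system, step):
    """Smallest superset of `system` (plus OMEGA) that is stable under `step`."""
    out = set(system) | {OMEGA}
    while True:
        new = step(out) - out
        if not new:
            return out
        out |= new

def pre_dynkin(system):
    def step(s):
        new = {OMEGA - a for a in s}                                  # complements
        new |= {a | b for a, b in combinations(s, 2) if not (a & b)}  # disjoint unions
        return new
    return close(system, step)

def field(system):
    def step(s):
        new = {OMEGA - a for a in s}
        new |= {a | b for a, b in combinations(s, 2)}                 # all unions
        return new
    return close(system, step)

chain = {frozenset({0}), frozenset({0, 1}), frozenset({0, 1, 2})}     # a Pi-system
overlap = {frozenset({0, 1}), frozenset({1, 2})}                      # not a Pi-system
```

On \texttt{chain}, which is closed under intersection, both closures yield the full power set of the four-point space; on \texttt{overlap}, the generated field additionally contains $\{1\}=\{0,1\}\cap\{1,2\}$, which the generated pre-Dynkin system misses.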
\subsection{Motivation} \iffalse \begin{quote} At a purely formal level, one could call probability theory the study of measure spaces with total measure one, but that would be like calling number theory the study of strings of digits which terminate \citep{taonotes}. \end{quote} \fi \vspace*{-7mm} \hfill\begin{minipage}{0.55\textwidth} \footnotesize{\it Do other statistical properties, that can not be reduced to stochasticness, exist? This question did not attract any attention until the applications of the probability theory concerned only natural sciences. The situation is definitely [changing], when one studies social phenomena: the stochasticness gets broken as soon as we [deal] with deliberate activity of people.} \hfill --- Victor Ivanenko and Valery Labkovsky \citeyearpar{ivanenko1993}\\ \end{minipage} It is now almost universally acknowledged that probability theory ought to be based on Kolmogorov's \citeyearpar{kolmogorov1933grundbegriffe} mathematical axiomatization (translated in~\citep{kolmogorov2018foundations}).\footnote{An important exception is quantum probability \citep{gudder1979stochastic, khrennikov2016probability}.} However, if probability is defined in this purely measure-theoretic fashion, what warrants its application to real-world problems of decision making under uncertainty? To those in the so-called \textit{frequentist} camp, the justification is essentially due to the \textit{law of large numbers}, which comes in both an empirical and a theoretical flavour. Our motivation comes from questioning both of these. By the empirical version of the law of large numbers (LLN), we mean not a ``law'' which can be proven to hold, but the following hypothesis, which seems to guide many scientific endeavours. Assume we have obtained data $x_1,\ldots,x_n$ as the outcomes of some experiment, which has been performed $n$ times under ``statistically identical'' conditions.
Of course, conditions in the real world can never truly be identical --- otherwise the outcomes would be constant, at least under the assumption of a deterministic universe. Thus, ``identical'' in this context must be a weaker notion: all factors which we have judged as relevant to the problem at hand have been kept constant over the repetitions.\footnote{In fact, we do not need that conditions stay exactly constant, but only that they change in a way so benign that the relative frequencies converge. That is, in the limit we should obtain a stable statistical aggregate.} The empirical ``law'' of large numbers, which \citet{gorban2017statistical} calls the \textit{hypothesis of (perfect) statistical stability}, then asserts that in the long run, relative frequencies of events and sample averages converge. These limits are then conceived of as the \textit{probability} of an event and the \textit{expectation}, respectively. Thus, even if relative frequencies can fluctuate in the finite data setting, we expect that they stabilize as more and more data is acquired. Crucially, this hypothesis of perfect statistical stability is not amenable to falsification, since we can never refute it in the finite data setting. It is a matter of faith to assume convergence of relative frequencies. On the other hand, there is now ample experimental evidence that relative frequencies can fail to stabilize even under very long observation intervals \citep[Part II]{gorban2017statistical}. We say that such phenomena display \textit{unstable (diverging)} relative frequencies. Rather than refuting the stability hypothesis, which is impossible, we question its adequacy as an idealized modeling assumption.
Thus, if probability is understood as limiting relative frequency, then the applicability of Kolmogorov's theory to empirical phenomena is limited to those which are statistically stable; the founder himself remarked: \begin{quote} Generally speaking there is no ground to believe that a random phenomenon should possess any definite probability \citep{kolmogorov1983logical}. \end{quote} Following in the footsteps of \citet{mises1964mathematical}, \citet{walley1982towards} and \citet{ivanenkobook}, our goal is to establish a broader theory, which is also applicable to ``random'' phenomena which are outside of the scope of Kolmogorov's theory by exhibiting unstable relative frequencies. One attempt to ``prove'' (or justify) the empirical law of large numbers, which in our view is doomed to fail, is to invoke the theoretical law of large numbers, which is a purely formal, mathematical statement. The strong law of large numbers states that if $X_1,X_2,\ldots$ is a sequence of independent and identically distributed (i.i.d.) random variables with finite expectation $\Def{\mathbb{E}[X] \coloneqq \mathbb{E}[X_1]=\mathbb{E}[X_2]=\cdots}$, then the sample average $\Def{\bar{X}_n \coloneqq \frac{1}{n} \sum_{i=1}^n X_i}$ converges almost surely to the expectation: \[ P\left(\lim_{n \rightarrow \infty} \bar{X}_n = \mathbb{E}[X]\right) = 1, \] where $P$ is the underlying probability measure in the sense of Kolmogorov. To interpret this statement correctly, some care is needed. It asserts that $P$ assigns measure $1$ to the set of sequences for which the sample mean converges, but not that this happens \textit{for all} sequences. Thus one would need justification for identifying ``set of measure 0'' with ``is negligible'' (``certainly does not happen''), which in particular requires a justification for $P$.
With respect to a different measure, this set might not be negligible at all \citep[p.\@\xspace 8]{schnorr2007zufalligkeit}; see also \citep{calude1999most,whenlargeisalso} for critical arguments. Moreover, the examples in \citep[Part II]{gorban2017statistical} show that sequences with seemingly non-converging relative frequencies (fluctuating substantially even for long observation intervals) are not ``rare'' in practice. In Appendix~\ref{app:pathornormal} we examine the question of how pathological or normal such sequences are in more depth. Conceptually, the underlying problem is that the probability measure $P$, which is used to measure the event $\left\{\lim_{n \rightarrow \infty} \bar{X}_n = \mathbb{E}[X]\right\}$, has no clear meaning. Of course, in the subjectivist spirit, one could interpret it as assigning a \textit{belief} in the statement that convergence takes place. But it is unclear what a frequentist interpretation of $P$ would look like. As \citet{lacazefrequentism} observed: \begin{quote} Importantly, ``almost sure convergence'' is also given a frequentist interpretation. Almost sure convergence is taken to provide a justification for assuming that the relative frequency of an attribute \textit{would} converge to the probability in actual experiments \textit{were} the experiment to be repeated indefinitely [emphasis in original]. \end{quote} But again, it is unclear on what grounds $P$ can be given this interpretation, and according to \citet{hajek2009fifteen} this leads to a regress to mysterious ``meta-probabilities''. Furthermore, the theoretical LLN requires that $P$ be countably additive, which is problematic under a frequency interpretation \citep[pp.\@\xspace 229--230]{hajek2009fifteen}. Given these complications, we opt for a different approach, namely a \textit{strictly frequentist} one.
Reaching back to Richard von Mises' \citeyearpar{mises1919grundlagen} foundational work, a strictly frequentist theory explicitly defines probability in terms of limiting relative frequencies in a sequence. Importantly, we here \textit{do not assume} that the elements of the sequence are random variables with respect to an abstract, countably additive probability measure. Instead, like von Mises, we actually take the notion of a sequence as the primitive entity in the theory. As a consequence, countable additivity does not naturally arise in this setting, and hence we do not subscribe to the frequentist interpretation of the classical strong LLN. The core motivation for our work is to drop the assumption of perfect statistical stability and instead to explicitly model the possibility of unstable (diverging) relative frequencies. Rather than merely conceding that the ``probability'' might vary over time \citep[pp.\@\xspace 27ff.]{borel1950} (which raises the question of what such ``probabilities'' mean), we follow the approach of Ivanenko \citeyearpar{ivanenkobook}, reformulate his construction of a \textit{statistical regularity} of a sequence, and discover that it is closely connected to the subjectivist theory of \textit{imprecise probability}. In essence, to each sequence we can naturally associate a \emph{set} of probability measures, which constitutes the statistical regularity that describes the cluster points of relative frequencies and consequently also those of sample averages. Since this works for \textit{any} sequence and \textit{any} event, we have thus countered a typical argument against frequentism, namely that the limit may not exist and hence probability is undefined \citep{hajek2009fifteen}. The relative frequencies induce a coherent upper probability and the sample averages induce a coherent upper prevision in the sense of \citet{walley1991statistical}.
In the convergent case, this reduces to a precise, finitely additive probability and a linear prevision, respectively. Furthermore, we obtain a natural account of conditional probability and independence in a strictly frequentist fashion; remarkably, this approach recovers the \textit{generalized Bayes rule}, arguably the most important updating principle in imprecise probability. Moreover, we demonstrate that the reverse direction works, too: given a set of probability measures, we can explicitly construct a sequence which corresponds to this set in the sense that its relative frequencies have this set as their cluster points. Thereby we establish strictly frequentist \textit{semantics} for imprecise probability: a subjective decision maker who uses a set of probability measures to represent their belief can also be understood as assuming an implicit underlying sequence and reasoning in a frequentist way thereon. \subsection{Von Mises - The Frequentist Perspective} Our approach is inspired by, and generalizes, Richard von Mises' \citeyearpar{mises1919grundlagen} axiomatization of probability theory (refined and summarized in \citep{mises1964mathematical}). In contrast to the subjectivist camp, von Mises' concern was to develop a theory for repetitive events; this gives rise to a theory of probability that is mathematical, but which can also be used to reason about the physical world. \begin{quote} The calculus of probability, i.e. the theory of probabilities, in so far as they are numerically representable, is the theory of definite observable phenomena, repetitive or mass events. Examples are found in games of chance, population statistics, Brownian motion etc. \citep[p.\@\xspace 102]{mises1981probability}. \end{quote} Hence, von Mises is not concerned with the probability of single events, which he deems meaningless, but instead always views an event as part of a larger \textit{reference class}.
Such a reference class is captured by what he terms a \textit{collective}, a disorderly sequence which exhibits both global regularity and local irregularity. \begin{definition} Formally, a \Def{\emph{collective}} is a tuple $\Def{\left(\vv{\Omega},\mathcal{A},\mathcal{S}\right)}$, which consists of the following data: \begin{enumerate}[nolistsep] \item a sequence $\Def{\vv{\Omega} \colon \mathbb{N} \rightarrow \Omega}$; \item a set of \Def{\emph{selection rules}} $\Def{\mathcal{S} \coloneqq \{\vv{S}_j \colon j \in \mathcal{J}\}}$, where for each $j$ in a countable index set $\mathcal{J}$, $\vv{S}_j \colon \mathbb{N} \rightarrow \{0,1\}$ and $\vv{S}_j(i)=1$ for infinitely many $i \in \mathbb{N}$; \item a non-empty set system $\Def{\mathcal{A} \subseteq 2^\Omega}$, where for simplicity we assume $|\mathcal{A}|<\infty$.\footnote{In fact, $\mathcal{A}$ does not necessarily have to be finite. Since an infinite domain of probabilities does not contribute much to a better understanding of the frequentist definition at this point, we restrict ourselves to the finite case here. The reader can find details in \citep{mises1964mathematical}.} \end{enumerate} These data form a collective if the following two axioms hold. \begin{enumerate}[nolistsep,label=\textbf{\emph{vM\arabic*.}}, ref=vM\arabic*] \item \label{vm1} The limiting relative frequency exists for every $A \in \mathcal{A} \subseteq 2^\Omega$: \[ \Def{P(A) \coloneqq \lim_{n \rightarrow \infty} \frac{1}{n} \sum_{i=1}^n \chi_A\left(\vv{\Omega}(i)\right)}.\footnote{The function $\Def{\chi_A}$ denotes the indicator gamble for a set $A \subseteq \Omega$, i.e.\@\xspace $\Def{\chi_A(\omega)\coloneqq 1}$ if $\omega \in A$ and $\Def{\chi_A(\omega)\coloneqq 0}$ otherwise.} \] We call this limit the \Def{\emph{probability of $A$}}.
\item \label{vm2} Each selection rule $\vv{S}_j \in \mathcal{S}$ does not change the limiting relative frequencies: \[ \lim_{n \rightarrow \infty} \frac{\sum_{i=1}^n \chi_A\left(\vv{\Omega}(i)\right) \cdot \vv{S}_j(i)}{\sum_{i=1}^n \vv{S}_j(i)} = P(A) \ \ \ \ \forall A\in\mathcal{A}. \] \end{enumerate} \end{definition} Here, we view $\vv{\Omega}$ as a sequence of elementary outcomes $\omega \in \Omega$, for some possibility space $\Omega$ on which we have a set system of events $\mathcal{A}$. Axiom \ref{vm1} explicitly defines the probability of an event in terms of the limit of its relative frequency. Demanding that this limit exists is non-trivial, since this need not be the case for an arbitrary sequence. Intuitively, \ref{vm1} expresses the hypothesis of statistical stability, which captures a global regularity of the sequence. In contrast, \ref{vm2} captures a sense of \textit{randomness} or local irregularity. Note that it actually comprises \emph{two claims}: 1) the limit exists, and 2) it coincides with the limit in \ref{vm1}. It is best understood by viewing a selection rule $\vv{S}_j \colon \mathbb{N} \rightarrow \{0,1\}$ as selecting a subsequence of the original sequence $\vv{\Omega}$ and then demanding that the limiting relative frequencies thereof coincide with those of the original sequence. Such a selection rule is called \Def{\emph{admissible}}, whereas a selection rule which would give rise to different limiting relative frequencies for at least one $A \in \mathcal{A}$ would be \Def{\emph{inadmissible}}. Why do we need axiom \ref{vm2}? Von Mises calls this the ``law of the excluded gambling system'' and it is the key to capturing the notion of randomness in his framework. Intuitively, if a selection rule is inadmissible, an adversary could use this knowledge to strategically offer a bet on the next outcome and thereby make a long-run profit, at the expense of our fictional decision maker.
A \emph{random} sequence, however, is one for which there does not exist such a betting strategy. It turns out that this requirement cannot hold in full generality: a sequence cannot be random with respect to all selection rules except in trivial cases (cf.\@\xspace Kamke's critique of von Mises' notion of randomness, nicely summarized in \citep{lambalgen1987mises}). Thus, von Mises explicitly relativizes randomness with respect to a problem-specific set of selection rules \citep[p.\@\xspace 12]{mises1964mathematical}.\footnote{This class of selection rules must necessarily be specified in advance; cf.\@\xspace \citep{shen2009}. One line of work aspires to fix the set of selection rules as all partially computable selection rules \citep{church1940concept}, but there is no compelling reason to elevate this to a universal choice; cf.\@\xspace \citep{derr2022fairness} for an elaborated critique.} A sequence which forms a collective with respect to (``is random with respect to'') one set of selection rules might not form a collective with respect to another. In our view, the role of the randomness axiom \ref{vm2} is similar to the role of more familiar randomness assumptions like the standard \textit{i.i.d.} assumption: to empower inference from finite data. In this work, however, we will be exclusively concerned with the idealized case of infinite data, since our focus is the axiom (or hypothesis) of statistical stability. We are motivated by the following question. What happens to von Mises' approach when axiom \ref{vm1} breaks down? That is, when relative frequencies of at least some events do not converge. We will show that this leads to a remarkable confluence with a theory that is thoroughly grounded in the subjectivist camp: the theory of \textit{imprecise probability}. In summary, we establish a strictly frequentist theory of imprecise probability.
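To make \ref{vm2} concrete: for the alternating sequence $1,0,1,0,\ldots$ the relative frequency of the event $\{1\}$ is $1/2$; a selection rule that picks the odd positions destroys this limit and is therefore inadmissible, while one that picks every third position preserves it. A minimal Python sketch (helper names are ours, not from the text):

```python
# Admissible vs. inadmissible selection rules on the alternating sequence
# 1,0,1,0,... (helper names are ours).
N = 6000                                                  # a multiple of 6
seq = [1 if i % 2 == 1 else 0 for i in range(1, N + 1)]   # position i is 1-indexed

def selected_freq(select):
    """Relative frequency of the event {1} along the selected subsequence."""
    picked = [x for i, x in enumerate(seq, start=1) if select(i)]
    return sum(picked) / len(picked)

full_freq = sum(seq) / N                 # 1/2

odd_rule = lambda i: i % 2 == 1          # picks positions 1,3,5,...: only ones
third_rule = lambda i: i % 3 == 0        # picks positions 3,6,9,...
```

Here \texttt{selected\_freq(odd\_rule)} is $1$ while \texttt{selected\_freq(third\_rule)} stays at $1/2$: with respect to any set of selection rules containing the first rule, this sequence does not form a collective.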
\subsection{Imprecise Probability - The Subjectivist Perspective} \label{sec:walleyintro} We briefly introduce the \textit{prima facie} unrelated, subjectivist theory of imprecise probability, or more specifically, the theory of \textit{lower and upper previsions} as put forward by \citet{walley1991statistical}. Orthodox Bayesianism models belief via the assignment of precise probabilities to propositions, or equivalently, via a linear expectation functional. In contrast, in Walley's theory, belief is interval-valued and the linear expectation is replaced by a pair of a lower and upper expectation. Hence, the theory is strictly more expressive than orthodox Bayesianism, which can be recovered as a special case. We assume an underlying possibility set $\Omega$, where $\omega \in \Omega$ is an elementary event, which includes all relevant information. We call a function $X \colon \Omega \rightarrow {\bm{R}}$, which is bounded, i.e.\@\xspace $\sup_{\omega \in \Omega} |X(\omega)| < \infty$, a \Def{\emph{gamble}} and collect all such functions in the set $L^\infty$. The set of gambles $L^\infty$ carries a vector space structure with scalar multiplication $(\lambda X)(\omega) = \lambda X(\omega)$, $\lambda \in {\bm{R}}$, and addition $(X+Y)(\omega) = X(\omega) + Y(\omega)$. For a constant gamble $c(\omega)=c \text{ } \forall \omega$ we write simply $c$. Note that Walley's theory in the general case does not require that a vector space of gambles is given, but definitions and results simplify significantly in this case. 
We interpret a gamble as assigning an uncertain loss $X(\omega)$ to each elementary event, that is, in line with the convention in insurance and machine learning, we take positive values to represent loss and negative values to represent reward.\footnote{Unfortunately, this introduces tedious sign flips when comparing results to \citet{walley1991statistical}.} We imagine a decision maker who is faced with the question of how to value a gamble $X$; the orthodox answer would be the expectation $\mathbb{E}[X]$ with respect to a subjective probability measure. \citet{walley1991statistical} proposed a betting interpretation of probability, which is inspired by \citet{de2017theory}, who identifies probability with fair betting rates. The goal is to axiomatize a functional $\overline{R} \colon L^\infty \rightarrow {\bm{R}}$, which assigns to a gamble the smallest number $\overline{R}(X)$ so that $X-\overline{R}(X)$ is a \textit{desirable} transaction to our decision maker, where she incurs the loss $X(\omega)$ but in exchange takes on the negative loss $-\overline{R}(X)$, i.e.\@\xspace receives the reward $\overline{R}(X)$. Formally: \[ \Def{\overline{R}(X) \coloneqq \inf\{\alpha \in {\bm{R}} \colon X - \alpha \in \mathcal{D} \}, \quad \mbox{\ where\ \ }\mathcal{D} \coloneqq \{X \in L^\infty\colon \overline{R}(X) \leq 0\}.} \] \citet[Section 2.5]{walley1991statistical} argued for a criterion of coherence, which any reasonable functional $\overline{R}$ should satisfy, and consequently obtained the following characterization \cite[Theorem 2.5.5]{walley1991statistical}, which we shall take here as an axiomatic \textit{definition} instead.\footnote{Here, we need the vector space assumption on the set of gambles.
We also note that \citet[pp.\@\xspace 64--65]{walley1991statistical} himself made a similar definition, but then proposed the more general coherence concept.} \begin{definition} A functional $\overline{R} \colon L^\infty \rightarrow {\bm{R}}$ is a \Def{\emph{coherent upper prevision}} if it satisfies: \begin{enumerate}[nolistsep,label=\textbf{\emph{UP\arabic*.}}, ref=UP\arabic*] \item \label{item:UP1} $\overline{R}(X) \leq \sup(X)$ \quad \:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\: (bounds) \item \label{item:UP2} $\overline{R}(\lambda X) = \lambda \overline{R}(X) \text{, } \forall \lambda \in {\bm{R}}^+$ \quad (positive homogeneity) \item \label{item:UP3} $\overline{R}(X+Y) \leq \overline{R}(X) + \overline{R}(Y)$ \quad \thinspace\thinspace (subadditivity) \end{enumerate} \end{definition} Together, these properties also imply \citep[p.\@\xspace 76]{walley1991statistical}: \begin{enumerate}[nolistsep,start=4,label=\textbf{UP\arabic*.}, ref=UP\arabic*] \item \label{item:P6} $\overline{R}(X + c) = \overline{R}(X) + c \text{, } \forall c \in {\bm{R}}$ \qquad \:\:\:\:\:\:\:\:\:\:\:\: \thinspace \thinspace (translation equivariance) \item \label{item:P7} $X(\omega) \leq Y(\omega) \text{ } \forall \omega \in \Omega \Rightarrow \overline{R}(X) \leq \overline{R}(Y)$ \quad (monotonicity) \end{enumerate} To a coherent upper prevision, we can define its conjugate \Def{\emph{lower prevision}} by: \begin{align*} \Def{\underline{R}(X)} &\Def{\coloneqq -\overline{R}(-X)}\\ &= -\inf\{\alpha \in {\bm{R}}\colon -X - \alpha \in \mathcal{D} \}\\ &= \sup\{\alpha \in {\bm{R}}\colon \alpha - X \in \mathcal{D} \}, \end{align*} which specifies the highest certain loss $\alpha$ that the decision maker is willing to shoulder in exchange for giving away the loss $X(\omega)$, i.e.\@\xspace receiving the reward $-X(\omega)$. Due to the conjugacy, it suffices to focus on the upper prevision throughout. In general, we have that $\underline{R}(X)\leq \overline{R}(X)$ for any $X \in L^\infty$. 
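For intuition, upper envelopes of finitely many linear expectations satisfy these axioms (this is the guiding example throughout). The following sketch is our own toy illustration, not taken from the text: the possibility set, the particular probability vectors and all identifiers (`credal_set`, `upper_prevision`) are assumptions made purely for demonstration, and the code checks \ref{item:UP1}--\ref{item:UP3} numerically.

```python
# A toy sketch (our own illustration, not from the text): over a hypothetical
# three-element possibility set, the upper envelope of finitely many linear
# expectations defines a coherent upper prevision; we check UP1-UP3 on it.
credal_set = [
    (0.2, 0.5, 0.3),  # each tuple: a probability vector on Omega = {0, 1, 2}
    (0.6, 0.1, 0.3),
    (0.1, 0.1, 0.8),
]

def expectation(p, X):
    """Linear prevision E_P(X) for a probability vector p and gamble X."""
    return sum(pi * xi for pi, xi in zip(p, X))

def upper_prevision(X):
    """R_bar(X) = sup of E_P(X) over the credal set (upper envelope)."""
    return max(expectation(p, X) for p in credal_set)

X, Y, lam = (1.0, -2.0, 3.0), (0.5, 4.0, -1.0), 2.5
# UP1 (bounds): R_bar(X) never exceeds sup(X)
assert upper_prevision(X) <= max(X)
# UP2 (positive homogeneity): R_bar(lam * X) = lam * R_bar(X) for lam > 0
scaled = tuple(lam * x for x in X)
assert abs(upper_prevision(scaled) - lam * upper_prevision(X)) < 1e-12
# UP3 (subadditivity): R_bar(X + Y) <= R_bar(X) + R_bar(Y)
XY = tuple(x + y for x, y in zip(X, Y))
assert upper_prevision(XY) <= upper_prevision(X) + upper_prevision(Y) + 1e-12
```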
If $\underline{R}(X)=\overline{R}(X)$ $\forall X \in L^\infty$, we say that $R\coloneqq \overline{R} = \underline{R}$ is a \textit{linear prevision}, a definition which aligns with \citet{de2017theory}. By applying an upper prevision to indicator gambles, we obtain an \Def{\emph{upper probability}} $\Def{\overline{P}(A) \coloneqq \overline{R}(\chi_A)}$, where $A \subseteq \Omega$. Correspondingly, the \Def{\emph{lower probability}} is $\Def{\underline{P}(A) \coloneqq 1-\overline{P}(A^\mathrm{C}) = \underline{R}(\chi_A)}$. In the precise case, there is a one-to-one correspondence between finitely additive probabilities and linear previsions; however, upper previsions are more expressive than upper probabilities. Finally, we remark that via the so-called \Def{\emph{natural extension}}, a coherent upper probability which is defined on some collection of events $\mathcal{A} \subseteq 2^\Omega$ can be extended to a coherent upper prevision $\Def{\operatorname{NatExt}(\overline{P})}$ on $L^\infty$, which is compatible with $\overline{P}$ in the sense that $\operatorname{NatExt}(\overline{P})(\chi_A) = \overline{P}(A)$ $\forall A \in \mathcal{A}$ (cf.\@\xspace \citep[Section 3.1]{walley1991statistical}). \section{Unstable Relative Frequencies} Assume that we have some fixed sequence ${\bm{v}}{\Omega} \colon \mathbb{N} \rightarrow \Omega$ on a possibility set $\Omega$ of elementary events, but that for some events $A \in \mathcal{A}$, where $\mathcal{A} \subseteq 2^\Omega$, the limiting relative frequencies do not exist. What can we do then? In a series of papers \citep{ivanenko1986functional,ivanenkoclassofcriterion, ivanenkomodelofnonstochasticrussian, ivanenko1993, Ivanenko2000DecisionMI, ivanenkoonregularities, ivanenko2017expected} and a monograph \citep{ivanenkobook}, Ivanenko and collaborators have developed a strictly frequentist theory of ``hyper-random phenomena'' based on ``statistical regularities''. 
In essence, they tackle mass decision making in the context of sequences with possibly divergent relative frequencies. Like von Mises, they take the notion of a sequence as the primitive, that is, without assuming an a priori probability and then invoking the law of large numbers. They explicitly recognize that ``stochasticness gets broken as soon as we deal with deliberate activity of people'' \citep{ivanenko1993}\footnote{This occurs not only because of non-equilibrium effects, but also because of feedback loops, a phenomenon that has become known as ``performativity'' \citep{mackenzie2007economists} or ``reflexivity'' \citep{soros2009}. See the epigraph at the beginning of the present paper.}. The presentation of Ivanenko's theory is obscured somewhat by the great generality with which it is presented (they work with general nets, rather than just sequences). We build heavily upon their work but restrict ourselves entirely to working with sequences. While in some sense this is a weakening, our converse result (see Section~\ref{sec:converseresult}) is actually stronger, as we show that one can achieve any ``statistical regularity'' by taking relative frequencies of sequences alone. For simplicity, we will dispense with integrals with respect to finitely additive measures in our presentation, so that fewer mathematical dependencies are involved;\footnote{The two well-known accounts of the theory of integrals with finitely additive measures are \citep{rao1983theory} and \citep{dunford1988linear}. The theory of linear previsions as laid out in \citep{walley1991statistical} appears to be an easier approach for our purposes.} instead, we work with linear previsions. Moreover, we establish\footnote{\citet{ivanenkoonregularities} mention in passing that sets of probabilities also appear in \citep{walley1991statistical}.} the connection to imprecise probability and give a different justification for the construction. 
\subsection{Ivanenko's Argument --- Informally} We begin by providing an informal summary of Ivanenko's construction of \textit{statistical regularities} on sequences. Assume we are given a fixed sequence ${\bm{v}}{\Omega}\colon \mathbb{N} \rightarrow \Omega$ of elementary events ${\bm{v}}{\Omega}(1),{\bm{v}}{\Omega}(2),\ldots$, where we may intuitively think of $\mathbb{N}$ as representing time. In contrast to von Mises, who demands the existence of relative frequency limits to define probabilities, we ask for something like a probability for \emph{all} events $A \subseteq \Omega$, even when the relative frequencies have no limit. To this end, we exploit that sequences of relative frequencies always have a non-empty set of cluster points, each of which is a finitely additive probability. Hence, a decision maker can use this set of probabilities to represent the global statistical properties of the sequence. In fact, we will see that this induces a coherent upper probability. Also, our decision maker is interested in assessing a value for each gamble $X\colon \Omega \rightarrow {\bm{R}}$, which is evaluated infinitely often over time. Here, the sequence of averages $n \mapsto \frac{1}{n} \sum_{i=1}^n X(\vv{\Omega}(i))$ is the object of interest. In the case of convergent relative frequencies, a decision maker would use the expectation to assess the risk in the limit, whereas in the general case of possible non-convergence, a different object is needed. This object turns out to be a coherent upper prevision. We provide a justification for using this upper prevision to assess the value of a gamble, which links it to imprecise probability. \subsection{Ivanenko's Argument --- Formally} \label{sec:ivanenkoformal} Let $\Def{\Omega}$ be an arbitrary (finite, countably infinite or uncountably infinite) set of outcomes and fix $\Def{{\bm{v}}{\Omega} \colon \mathbb{N} \rightarrow \Omega}$, an $\Omega$-valued sequence. 
We define a \Def{gamble $X\colon \Omega \rightarrow {\bm{R}}$} as a bounded function from $\Omega$ to ${\bm{R}}$, i.e.\@\xspace $\exists K \in {\bm{R}}\colon |X(\omega)| \leq K$ $\forall \omega \in \Omega$ and collect all such gambles in the set $L^\infty$. We assume the vector space structure on $L^\infty$ as in Section~\ref{sec:walleyintro}. The set $L^\infty$ becomes a Banach space, i.e.\@\xspace a complete normed vector space, under the supremum norm $\Def{\|X\|_{L^\infty} \coloneqq \sup_{\omega \in \Omega} |X(\omega)|}$. We denote the topological dual space of $L^\infty$ by $\Def{(\linfty)^*}$. Recall that it consists exactly of the continuous linear functionals $\phi\colon L^\infty \rightarrow {\bm{R}}$. We endow $(\linfty)^*$ with the \emph{weak*-topology}, which is the weakest topology (i.e.\@\xspace with the fewest open sets) that makes all evaluation functionals of the form $\Def{X^* \colon (\linfty)^* \rightarrow {\bm{R}}}$, $\Def{X^*(E) \coloneqq E(X)}$ for any $X \in L^\infty$ and $E \in (\linfty)^*$ continuous. Consider the following subset of $(\linfty)^*$: \[ \Def{\operatorname{PF}(\Omega) \coloneqq \left\{E \in (\linfty)^* \colon E(X) \geq 0 \text{ whenever } X \geq 0, E(\chi_\Omega)=1\right\}.} \] Due to the Alaoglu-Bourbaki theorem, this set is compact under the weak* topology, see Appendix~\ref{app:weakstarcompact}. A finitely additive probability $P\colon \mathcal{A} \rightarrow [0,1]$ on some set system $\mathcal{A}$, where $\Omega \in \mathcal{A}$, is a function such that: \begin{enumerate}[nolistsep,label=\textbf{PF\arabic*.}, ref=PF\arabic*] \item \label{prop:pf1} $P(\Omega)=1$. \item \label{prop:pf2} $P(A \cup B)=P(A)+P(B)$ whenever $A \cap B= \emptyset$ and $A,B \in \mathcal{A}$. \end{enumerate} We induce a sequence of finitely additive probabilities ${\bm{v}}{P}$ where $\Def{{\bm{v}}{P}(n) \coloneqq A \mapsto \frac{1}{n} \sum_{i=1}^n \chi_A({\bm{v}}{\Omega}(i))}$ for each $n \in \mathbb{N}$. 
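To make the construction tangible, here is a small numerical sketch, our own toy example: the two-element possibility set, the particular sequence and all identifiers (`v_Omega`, `empirical_probability`) are assumptions, not from the text. It computes the empirical probabilities ${\bm{v}}{P}(n)$ and checks \ref{prop:pf1} and \ref{prop:pf2}.

```python
from fractions import Fraction

# Toy sketch (our own): Omega = {'a', 'b'} and a fixed sequence v_Omega that
# visits 'b' on every third step.  P_n(A) = (1/n) * sum_{i=1}^n chi_A(v_Omega(i)).
def v_Omega(i):  # i = 1, 2, 3, ...
    return 'b' if i % 3 == 0 else 'a'

def empirical_probability(n, A):
    """The n-th empirical probability of the event A (exact arithmetic)."""
    hits = sum(1 for i in range(1, n + 1) if v_Omega(i) in A)
    return Fraction(hits, n)

# PF1: the whole possibility set has probability one
assert empirical_probability(7, {'a', 'b'}) == 1
# PF2: finite additivity on the disjoint events {'a'} and {'b'}
assert (empirical_probability(7, {'a'}) + empirical_probability(7, {'b'})
        == empirical_probability(7, {'a', 'b'}))
assert empirical_probability(7, {'b'}) == Fraction(2, 7)
```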
It is easy to check that indeed ${\bm{v}}{P}(n)$ is a finitely additive probability on the whole powerset $2^\Omega$ for any $n \in \mathbb{N}$. We shall call ${\bm{v}}{P}$ the \Def{\emph{sequence of empirical probabilities}}. Due to \citep[Corollary 3.2.3]{walley1991statistical}, a finitely additive probability defined on $2^\Omega$ can be uniquely extended (via natural extension) to a linear prevision $E_P \colon L^\infty \rightarrow {\bm{R}}$, so that $E_P(\chi_A) = P(A)$ $\forall A \subseteq \Omega$. Furthermore, we know from \citep[Corollary 2.8.5]{walley1991statistical}, that there is a one-to-one correspondence between elements of $\operatorname{PF}(\Omega)$ and linear previsions $E_P \colon L^\infty \rightarrow {\bm{R}}$. Hence, we associate to each empirical probability ${\bm{v}}{P}(n)$ an \Def{\emph{empirical linear prevision}} $\Def{{\bm{v}}{E}(n) \coloneqq X \mapsto \operatorname{NatExt}({\bm{v}}{P}(n))(X)}$, where $X \in L^\infty$ and we denote the natural extension by $\operatorname{NatExt}$. We thus obtain a sequence ${\bm{v}}{E} \colon \mathbb{N} \rightarrow \operatorname{PF}(\Omega)$. On the other hand, each gamble $X \in L^\infty$ induces a sequence of evaluations as $\Def{{\bm{v}}{X}\colon \mathbb{N} \rightarrow {\bm{R}}}$, where $\Def{{\bm{v}}{X}(n) \coloneqq X\left({\bm{v}}{\Omega}(n)\right)}$. For $X \in L^\infty$, we define the sequence of averages of the gamble over time as $\Def{\vv{\Sigma X} \colon \mathbb{N} \rightarrow {\bm{R}}}$, where $\Def{\vv{\Sigma X}(n) \coloneqq \frac{1}{n} \sum_{i=1}^n X\left({\bm{v}}{\Omega}(i)\right)}$. For each finite $n$, we can also view the average as a function in $X$, i.e.\@\xspace $X \mapsto \frac{1}{n} \sum_{i=1}^n X\left({\bm{v}}{\Omega}(i)\right)$. Observe that this is a coherent linear prevision and by applying it to indicator gambles $\chi_A$, we obtain ${\bm{v}}{P}(n)$. 
Hence, we know from \citep[Corollary 3.2.3]{walley1991statistical} that this linear prevision is in fact the natural extension of ${\bm{v}}{P}(n)$, i.e.\@\xspace ${\bm{v}}{E}(n) = X \mapsto \frac{1}{n} \sum_{i=1}^n X\left({\bm{v}}{\Omega}(i)\right) = X \mapsto \vv{\Sigma X}(n)$. This concludes the technical setup; we now begin reproducing Ivanenko's argument. Since $\operatorname{PF}(\Omega)$ is a compact topological space under the subspace topology induced by the weak*-topology on $(\linfty)^*$, we know that any sequence ${\bm{v}}{E}\colon \mathbb{N} \rightarrow (\linfty)^*$ has a non-empty closed set of cluster points. Recall that a point $c$ is a \Def{\emph{cluster point}} of a sequence ${\bm{v}}{S} \colon \mathbb{N} \rightarrow \mathcal{T}$, where $\mathcal{T}$ is any topological space, if: \[ \forall N \mbox{, where $N$ is any neighbourhood of } c \text{ with respect to } \mathcal{T} \text{, } \forall n_0 \in \mathbb{N}: \exists n \geq n_0: {\bm{v}}{S}(n) \in N. \] We remark that this \emph{does not} imply that those cluster points are limits of convergent subsequences.\footnote{This would hold under sequential compactness, which is not fulfilled here in general, but it is for finite $\Omega$.} We denote the \Def{\emph{set of cluster points}} as $\Def{\operatorname{CP}({\bm{v}}{E})}$. Equivalently, by applying these linear previsions to indicator gambles, we obtain the \Def{set of finitely additive probabilities} $\Def{\mathcal{P} \coloneqq \left\{A \mapsto E(\chi_A) \colon E \in \operatorname{CP}({\bm{v}}{E})\right\}}$. Due to the one-to-one relationship, we might work with either $\operatorname{CP}({\bm{v}}{E})$ or $\mathcal{P}$. Following Ivanenko, we call $\mathcal{P}$ the \Def{\emph{statistical regularity}} of the sequence ${\bm{v}}{\Omega}$; in the language of imprecise probability, it is a \Def{\emph{credal set}}. 
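The difference between cluster points and limits is easiest to see for a real-valued sequence. The following sketch is our own illustration (the sequence `S` is an assumption chosen for demonstration); in $\operatorname{PF}(\Omega)$ the situation is analogous, with compactness guaranteeing that the cluster set is non-empty.

```python
# Our own illustration: the bounded real sequence S(n) = (-1)^n * (1 + 1/n)
# has no limit, but it has exactly the two cluster points -1 and +1.
def S(n):
    return (-1) ** n * (1 + 1 / n)

tail = [S(n) for n in range(10_000, 20_000)]
# Infinitely many terms come arbitrarily close to +1 and to -1 ...
assert min(abs(t - 1) for t in tail) < 1e-3
assert min(abs(t + 1) for t in tail) < 1e-3
# ... while no point strictly between them (e.g. 0) is a cluster point:
assert all(abs(t) > 0.5 for t in tail)
```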
We further define \[ \Def{\overline{R}(X) \coloneqq \sup \left\{E(X) \colon E \in \operatorname{CP}({\bm{v}}{E}) \right\} = \sup \left\{E_P(X) \colon P \in \mathcal{P} \right\} , \quad X \in L^\infty,} \] where $E_P \coloneqq \operatorname{NatExt}(P)$, and \[ \Def{\overline{P}(A) \coloneqq \sup \left\{E(\chi_A) \colon E \in \operatorname{CP}({\bm{v}}{E}) \right\} = \sup \left\{ P(A) \colon P \in \mathcal{P} \right\}, \quad A \subseteq \Omega.} \] Observe that $\overline{R}$ is defined on all $X \in L^\infty$ and $\overline{P}$ is defined on \textit{all} subsets of $\Omega$, even if $\Omega$ is uncountably infinite, since each $P \in \mathcal{P}$ is a finitely additive probability on $2^\Omega$. We further observe that $\overline{R}$ is a coherent upper prevision on $L^\infty$ or equivalently, a coherent risk measure in the sense of \citet{artzner1999coherent}.\footnote{For the close connection of coherent upper previsions and coherent risk measures we refer to \citep{frohlich2022risk}.} Correspondingly, $\overline{P}$ is a coherent upper probability on $2^\Omega$, which is obtained by applying $\overline{R}$ to indicator functions. This follows directly from the \textit{envelope theorem} in \citep[Theorem 3.3.3]{walley1991statistical}. So far, the definition of $\overline{R}$ and $\overline{P}$ may seem arbitrary. Yet they play a special role, as we now show. \begin{proposition} The sequence of averages $\vv{\Sigma X}$ has the set of cluster points \[ \operatorname{CP}\left(\vv{\Sigma X}\right) = \left\{E(X) \colon E \in \operatorname{CP}({\bm{v}}{E})\right\} = \left\{E_P(X) \colon {P} \in \mathcal{P}\right\}, \] and therefore \[ \overline{R}(X) = \sup \operatorname{CP}\left(\vv{\Sigma X}\right) = \limsup_{n \rightarrow \infty} \vv{\Sigma X}(n). \] \end{proposition} \begin{proof} First observe that \[ {\bm{v}}{E}(n)(X) = \vv{\Sigma X}(n). 
\] We use the following result from \citep[Lemma 3]{ivanenko2017expected}.\footnote{A subtle point in the argument, which \citet{ivanenko2017expected} do not make visible, is the sequential compactness of ${\bm{R}}$, which means that for any cluster point of an ${\bm{R}}$-valued sequence we can find a subsequence converging to it.} \begin{lemma} Let $f\colon Y \rightarrow {\bm{R}}$ be a continuous function on a compact space $Y$ and ${\bm{v}}{y}$ a $Y$-valued sequence. Then $\operatorname{CP}\left(n \mapsto f\left({\bm{v}}{y}(n)\right)\right) = f\left(\operatorname{CP}\left({\bm{v}}{y}\right)\right)$. \end{lemma} On the right-hand side, the application of $f$ is to be understood as applying $f$ to each element in the set $\operatorname{CP}\left({\bm{v}}{y}\right)$. Consider now the evaluation functional $X^* \colon \operatorname{PF}(\Omega) \rightarrow {\bm{R}}$, $X^*(E) \coloneqq E(X)$, which is continuous under the weak*-topology. The application of the lemma with $f=X^*$, $Y=\operatorname{PF}(\Omega)$, ${\bm{v}}{y}={\bm{v}}{E}$ hence gives: \[ \operatorname{CP}\left(n \mapsto X^*\left({\bm{v}}{E}(n)\right)\right) = X^*\left(\operatorname{CP}\left({\bm{v}}{E}\right)\right). \] But since $X^*\left({\bm{v}}{E}(n)\right)=\vv{\Sigma X}(n)$, we obtain that $\operatorname{CP}\left(\vv{\Sigma X}\right) = X^*\left(\operatorname{CP}\left({\bm{v}}{E}\right)\right) = \left\{E(X) \colon E \in \operatorname{CP}\left({\bm{v}}{E}\right)\right\}$. \end{proof} A similar statement holds for the coherent upper probability. \begin{corollary} For any $A\subseteq \Omega$, $\displaystyle\overline{P}(A) = \limsup_{n \rightarrow \infty} \left({\bm{v}}{P}(n)(A)\right) = \limsup_{n \rightarrow \infty} \frac{1}{n} \sum_{i=1}^n \chi_A\left(\vv{\Omega}(i)\right)$. \end{corollary} \begin{proof} Just observe that ${\bm{v}}{P}(n)(A) = {\bm{v}}{\Sigma \chi_A}(n)$ and apply the previous result. 
\end{proof} Thus the limes superior of the sequence of relative frequencies induces a coherent upper probability on $2^\Omega$; similarly, the limes superior of the sequence of a gamble's averages induces a coherent risk measure on $L^\infty$. By conjugacy, the lower prevision and lower probability are: \begin{align*} \underline{R}(X) = &\inf \left\{E(X) \colon E \in \operatorname{CP}\left({\bm{v}}{E}\right) \right\} = \liminf_{n\rightarrow\infty} \vv{\Sigma X}(n), \\ \underline{P}(A) = &\inf \left\{ P(A) \colon P \in \mathcal{P} \right\} = \liminf_{n\rightarrow\infty} \frac{1}{n} \sum_{i=1}^n \chi_A\left(\vv{\Omega}(i)\right), \quad A \subseteq \Omega, \end{align*} which are obtained in a similar way using the limes inferior. Finally, when an event is \textit{precise} in the sense that $\overline{P}(A)=\underline{P}(A)$ (and thus the $\liminf$ equals the $\limsup$ and hence the limit exists), we denote the upper (lower) probability as $P(A)$ and say that the precise probability of $A$ exists. \subsection{The Induced Risk Measure} \label{sec:inducedriskmeasure} We have seen that the upper prevision $\overline{R}$, as we have just defined it, has the property that it is induced by the statistical regularity of the sequence, and at the same time corresponds to taking the supremum over the cluster points of the sequence of averages of a gamble over time. The set of cluster points is in general a complicated object, hence it is unclear why one should take the supremum to reduce it to a single number in a decision making context. Our goal in this section is to provide some intuition \textit{why} it is reasonable to use $\overline{R}$, as we defined it, to assess the risk inherent in a sequence. 
\citet{Ivanenko2000DecisionMI} argued that $\overline{R}$ is the unique object which satisfies a certain list of axioms, which are similar to those for an upper prevision, but including a so-called ``principle of guaranteed result'', which appears rather mysterious to us. Our setup is as follows. We imagine an individual decision maker, who is faced with a fixed sequence ${\bm{v}}{\Omega} \colon \mathbb{N} \rightarrow \Omega$ and various gambles $X\colon \Omega \rightarrow {\bm{R}}$. The question to the decision maker is how to value this gamble in light of the sequence, i.e.\@\xspace imagining that the gamble is evaluated at each ${\bm{v}}{\Omega}(1),{\bm{v}}{\Omega}(2),...$, infinitely often. Here, $X({\bm{v}}{\Omega}(i))$ represents a loss for positive values, and a gain for negative values. We can think of the ${\bm{v}}{\Omega}(i)$ as the states of nature, and the sequence determines which are realized and how often. Importantly, we view our decision maker as facing a \textit{mass decision}, i.e.\@\xspace the gamble will not only be evaluated once, but instead infinitely often. Which gambles are \textit{desirable} to our decision maker? Desirable gambles\footnote{We remark that throughout the paper we always mean \textit{almost desirability}, cf.\@\xspace \citep[Section 3.7]{walley1991statistical}.} are those which the decision maker would accept even without any reward for it. We argue that an appropriate set of desirable gambles is given by: \[ \Def{\mathcal{D}_{{\bm{v}}{\Omega}} \coloneqq \left\{X \in L^\infty \colon \limsup_{n \rightarrow \infty} \vv{\Sigma X} \leq 0\right\}.} \] Consider what $X \in \mathcal{D}_{{\bm{v}}{\Omega}}$ means. If the limes superior of the gamble's average sequence, i.e.\@\xspace the growth rate of the accumulated loss, is negative or zero, then we are guaranteed that there is no strictly positive accumulated loss which we will face infinitely often. 
The choice of the average as the aggregation functional is justified by the mass decision character of the setting, since we assume that our decision maker does not care about individual outcomes, but only about long-run capital. Now, given this set of desirable gambles, we seek a functional $\overline{R} \colon L^\infty \rightarrow {\bm{R}}$, so that when, at each time step $i$, the transaction $X(\vv{\Omega}(i)) - \overline{R}(X)$ takes place, this results in a desirable gamble for our decision maker. Our decision maker shoulders the loss $X(\vv{\Omega}(i))$, while at the same time asking for $-\overline{R}(X)$ in advance. Intuitively, $\overline{R}(X)$ is supposed to be the certainty equivalent of the ``uncertain'' loss $X$, in the sense that $X(\vv{\Omega}(i))$ will vary over time. Therefore we define, in correspondence with $\mathcal{D}_{{\bm{v}}{\Omega}}$, the upper and lower previsions: \begin{align} \label{eq:rdeffromdesirable} \Def{\overline{R}(X) \coloneqq} &\Def{\inf\left\{\alpha \in {\bm{R}}\colon X-\alpha \in \mathcal{D}_{{\bm{v}}{\Omega}}\right\}}\\ \Def{\underline{R}(X) \coloneqq -\overline{R}(-X) =} &\Def{\sup\left\{\alpha \in {\bm{R}}\colon \alpha - X \in \mathcal{D}_{{\bm{v}}{\Omega}}\right\}.}\nonumber \end{align} When a set of desirable gambles and an upper (lower) prevision are in this correspondence, it holds that $X-\overline{R}(X) \in \mathcal{D}_{{\bm{v}}{\Omega}}$ and furthermore $\overline{R}(X)$ is the smallest functional for which this holds. We can now observe that in fact $\overline{R}(X) = \limsup_{n \rightarrow \infty} \vv{\Sigma X}(n)$, since \[ \mathcal{D}_{{\bm{v}}{\Omega}} =\left\{X \in L^\infty \colon \overline{R}(X) \leq 0\right\} \] is the general correspondence for a set of desirable gambles and a coherent upper prevision. Hence, we have motivated the definition of $\overline{R}(X)$ in Section~\ref{sec:ivanenkoformal}. 
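As a numerical illustration (our own sketch; the doubling-block sequence and all identifiers are assumptions made for demonstration), consider a non-stochastic sequence for which $\vv{\Sigma X}$ keeps oscillating, so that $\overline{R}(X)=\limsup_n \vv{\Sigma X}(n)$ and $\underline{R}(X)=\liminf_n \vv{\Sigma X}(n)$ genuinely differ.

```python
# Our own sketch: Omega = {0, 1}; the sequence consists of alternating blocks
# of 0s and 1s whose lengths double, so relative frequencies never settle.
def build_sequence(num_blocks):
    seq, sym = [], 0
    for b in range(num_blocks):
        seq.extend([sym] * (2 ** b))  # block lengths 1, 2, 4, 8, ...
        sym = 1 - sym
    return seq

def running_averages(seq, X):
    """Sigma_X(n) = (1/n) * sum_{i=1}^n X(v_Omega(i)) for n = 1..len(seq)."""
    out, total = [], 0.0
    for n, omega in enumerate(seq, start=1):
        total += X[omega]
        out.append(total / n)
    return out

X = {0: 1.0, 1: -1.0}  # gamble: loss 1 on symbol 0, reward 1 on symbol 1
avgs = running_averages(build_sequence(16), X)
tail = avgs[len(avgs) // 2:]
upper, lower = max(tail), min(tail)  # finite-n proxies for limsup and liminf
# The averages keep oscillating (toward +1/3 and -1/3), so upper > lower:
assert upper > 0.2 and lower < -0.2
```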
It is easy to see explicitly (cf.\@\xspace Appendix~\ref{app:inducedriskmeasure}) that $X-\overline{R}(X) \in \mathcal{D}_{{\bm{v}}{\Omega}}$ and that $\overline{R}(X) = \limsup_{n \rightarrow \infty} \vv{\Sigma X}(n)$ is in fact the smallest number such that the relation in Equation~\ref{eq:rdeffromdesirable} holds. With the upper prevision at hand, we may now define a preference relation on all gambles from $L^\infty$, by stating that $X \succ Y$ (``$X$ is preferred over $Y$'') if $\overline{R}(X)\leq \overline{R}(Y)$. \section{Pathological or Normal?} \label{app:pathornormal} \vspace*{-8mm} \hfill\begin{minipage}{0.55\textwidth} \footnotesize{\it Much of the confusion about probability arises because the true depth of the law of large numbers as an extremely hard analytical assertion is not appreciated at all.} --- Detlef D\"{u}rr and Stefan Teufel \citeyearpar[p.\@\xspace 62]{durr2009} \end{minipage} \label{sec:pathologies-or-norm} When one looks at \emph{finite} sequences $x\colon[n]\rightarrow[2]$, there is a simple counting argument using the binomial theorem that illustrates that the vast majority of the $2^n$ possible sequences have roughly equal numbers of elements with values of 1 and 2. If one \emph{assumes} that an infinite sequence $x\colon\mathbb{N}\rightarrow[2]$ is generated i.i.d. then this argument can be used to prove the law of large numbers, which ensures ``most'' sequences have relative frequencies which converge. Hence the construction, as illustrated in the present paper, of sequences $x\colon\mathbb{N}\rightarrow [k]$ with \emph{divergent} relative frequencies naturally raises the question of how contrived they are. That is, are we examining a rare pathology, or something ``normal'' that we might actually encounter in the world? 
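The counting argument for finite sequences mentioned above can be reproduced in a few lines. This is our own sketch: the tolerance $\varepsilon$, the chosen values of $n$ and the function name are assumptions, not from the text.

```python
from math import comb

# Our own sketch of the finite counting argument: among the 2**n binary
# sequences, count the share whose fraction of one value is within eps of 1/2.
def fraction_roughly_balanced(n, eps=0.1):
    lo, hi = (0.5 - eps) * n, (0.5 + eps) * n
    balanced = sum(comb(n, j) for j in range(n + 1) if lo <= j <= hi)
    return balanced / 2 ** n

# The balanced share grows toward 1 as n grows (a finite-n counterpart of
# the law of large numbers):
assert fraction_roughly_balanced(10) < fraction_roughly_balanced(100)
assert fraction_roughly_balanced(400) > 0.95
```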
We will refer to sequences whose relative frequencies converge as ``\Def{stochastic sequences}'' and sequences whose relative frequencies do not converge as ``\Def{non-stochastic sequences}''\footnote{ \label{footnote:rivas} This dichotomy would appear clear-cut, but there is a subtlety: there exist sequences such that $(x_i)_{i\in\mathbb{N}}$ and $(y_i)_{i\in\mathbb{N}}$ are both stochastic, but the \emph{joint} sequence $((x_i,y_i))_{i\in\mathbb{N}}$ is \emph{non}-stochastic \citep{rivas2019role}; that is, the marginal relative frequencies converge, but the joint relative frequencies do not! }. The classical law of large numbers suggests that indeed ``almost all'' sequences are stochastic, and therefore, by such reasoning, the non-stochastic sequences with which we have concerned ourselves in the present paper are indeed pathological exceptions. In this appendix we will argue: \begin{enumerate}[nolistsep] \item This very much depends upon what one means by ``rare'' or ``almost all'' and there are many choices, and the only real argument in favour of the usual ones (which declare non-stochastic sequences rare) is familiarity --- different notions of ``typicality'' (for that is what is at issue) lead to very different conclusions. Specifically, there are choices (arguably just as ``natural'' as the familiar ones) which imply that rather than non-stochastic sequences being rare, they are in fact the norm in a very strong sense. \item Nevertheless, none of the mathematical nuances of the previous point allow one to conclude \emph{anything} about the empirical prevalence of stochastic or non-stochastic sequences in the world. 
Indeed, no purely mathematical reasoning allows one to draw such conclusions, unless one wishes to appeal to some conception of a Kantian ``synthetic a priori.'' \end{enumerate} We will first explore what can be said from a purely mathematical perspective, illustrating that there is a surprising amount of freedom of choice in precisely posing the problem, and that the choices are consequential. Then in Subsection \ref{subsec:real-measured-sequences} we examine the question of prevalence of non-stochastic sequences actually in the world. \subsection{The Mathematical Argument --- The Choices to be Made} The classical Law of Large Numbers says ``almost all sequences'' are stochastic. But the ``almost all'' claim comes from the mapping of sequences to real numbers in $[0,1]$ and then making a claim that ``almost all'' numbers correspond to stochastic sequences. Thus there are at least three choices being made here: \begin{description}[nolistsep] \item[Mapping from Sequences to Real Numbers] The choice of mapping from sequences to real numbers, to enable the use of some notion of typicality on $[0,1]$ to gauge how common stochastic sequences are. \item[Notion of Typicality] The notion of typicality to be used (e.g.\@\xspace cardinality, Hausdorff dimension, category, or measure). \item[Specific Index of Typicality] Within the above choice of notion of typicality, the particular choice of typicality index, e.g.\@\xspace the measure or topology that underpins the notion of typicality. \end{description} The choices for the classical law of large numbers are 1) $k$-ary positional representation; 2) a $\sigma$-additive measure on $[0,1]$; 3) the Lebesgue measure. As we shall summarize below, each of these three choices substantially affects the theoretical preponderance of non-stochastic sequences. 
That there are alternate choices that lead to the unusual conclusion that non-stochastic sequences are ``typical'' has been known for some time: ``This result may be interpreted to mean that the category analogue of the strong law of large numbers is false'' \citep[p.\@\xspace 85]{oxtoby1980measure}; see also \citep{mendez1981law}. The significance of this fact has been stressed recently \citep{whenlargeisalso,cisewski2018standards}. And it has been observed that the introduction of alternate topologies can change whether sequences are stochastic \citep{khrennikov2013p}. However, the strongest results arise in number theory, motivated by the notion of a ``normal number.'' \subsection{Notions of Typicality --- Cardinality, Dimension, Comeagreness, and Measure} Let $\Def{\mathcal{S}}$ (resp.~$\Def{\mathcal{N}}$) denote the set of stochastic (resp.~non-stochastic) sequences $\mathbb{N}\rightarrow [k]$. That is, $\Def{\mathcal{S}\coloneqq\{x\colon\mathbb{N}\rightarrow[k]\colon \lim_{n\rightarrow \infty} r^x(n)\text{ exists}\}}$ and $\Def{\mathcal{N}\coloneqq [k]^\mathbb{N}\setminus\mathcal{S}}$. (For simplicity, and alignment with Appendix~\ref{app:construction}, we restrict ourselves to sequences whose codomain is $[k]$.) In order to make a claim regarding the relative preponderance of stochastic versus non-stochastic sequences, they are often mapped onto the unit interval.\footnote{The only attempts to judge the relative sizes of $\mathcal{S}$ and $\mathcal{N}$ which do not rely on such a mapping are described in the first and third of the cases listed below, and rely upon imposing a topology directly on the set of sequences $[k]^\mathbb{N}$.} In such cases, the question of the relative preponderance of classes of sequences is reduced to a question concerning the relative preponderance of the corresponding subsets of $[0,1]$. The question then arises of how to measure the size of such subsets. Unlike in the finite case mentioned above, merely counting (i.e. 
determining the cardinality of the respective subsets) is hardly adequate, as it is easy to argue that $|\mathcal{S}|=|\mathcal{N}|=2^{\aleph_0}$. There are three notions that have been used to compare the size of $\mathcal{S}$ and $\mathcal{N}$: \begin{description}[nolistsep] \item[Measure] A countably additive measure, usually the Lebesgue measure on $[0,1]$. \item[Meagre / Comeagre] A subset $S$ of a topological space $X$ is \Def{meagre} if it is a countable union of nowhere dense sets (i.e. sets whose closure has empty interior). A set $S$ is \Def{comeagre} (residual) if $X\setminus S$ is meagre. \item[Dimension] A variety of fractal dimensions, such as the Hausdorff dimension, have also been used to judge the size of non-stochastic sequences (and the numbers they induce); however, for space considerations, we omit discussion of these results\footnote{ See for example \citep{eggleston1949fractional, olsen2004extremely, gu2011effective, bishop2017fractals, albeverio2017non}. }. \end{description} Some of the results obtained in the literature are summarized below. The object is not to state them in an entirely formal manner, or even to describe them in their full generality. Rather we simply wish to show the diversity of conclusions available by tweaking the three choices enumerated above. If no representation is mentioned, the usual $k$-ary positional representation is used, whereby $\tilde{x}\in[0,1]$ is constructed from $x\colon\mathbb{N}\rightarrow [k]$ via $\Def{\tilde{x}\coloneqq\sum_{i\in\mathbb{N}} (x_i-1) k^{-i}}$ (the $(x_i-1)$ term is required because our sequences map to $[k]=\{1,\ldots,k\}$). 
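The positional map just defined is easy to compute on finite prefixes; the truncation error of a length-$m$ prefix is at most $k^{-m}$. The following sketch is our own illustration (the function name is an assumption).

```python
# Our own sketch of the k-ary positional map: a finite prefix of a sequence
# x : N -> [k] = {1, ..., k} is sent to sum_i (x_i - 1) * k**(-i).
def to_unit_interval(prefix, k):
    return sum((xi - 1) * k ** (-i) for i, xi in enumerate(prefix, start=1))

# With k = 2, the alternating sequence 1, 2, 1, 2, ... yields the binary
# expansion 0.0101..., whose value converges to 1/3:
assert abs(to_unit_interval([1, 2] * 20, k=2) - 1 / 3) < 1e-9
# The constant sequence 1, 1, 1, ... maps to the left endpoint of [0, 1]:
assert to_unit_interval([1] * 10, k=2) == 0.0
```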
Obviously every $x\in[k]^\mathbb{N}$ maps to some $\tilde{x}\in [0,1]$; and every $z\in [0,1]$ corresponds to at least one $x\in [k]^\mathbb{N}$ (recalling we have to handle the situation that, when $k=10$ for example, $0.4\overline{9}=0.5\overline{0}$, where $\overline{i}$ means that $i$ is repeated infinitely, and thus there are two sequences $x_1,x_2\in[k]^\mathbb{N}$ such that $1/2=\tilde{x}_1=\tilde{x}_2$).\footnote{This non-uniqueness of the representation will not affect the results below because $\{\tilde{x}_1=\tilde{x}_2\in[0,1]\colon x_1\ne x_2\}\subseteq\mathbb{Q}\cap[0,1]$ and is thus of cardinality at most $\aleph_0$, whereas $|[k]^\mathbb{N}|=|[0,1]|=2^{\aleph_0}$.} Let $\Def{\tilde{\mathcal{S}}\coloneqq\{\tilde{x}\in [0,1]\colon x\in \mathcal{S}\}}$ and $\Def{\tilde{\mathcal{N}}\coloneqq\{\tilde{x}\in [0,1]\colon x\in \mathcal{N}\}}$. \begin{description}[nolistsep] \item[Most (Lebesgue measure) sequences are stochastic] This is the classical strong law of large numbers. If $\mu_{\mathrm{leb}}$ denotes the Lebesgue measure on $[0,1]$, then the claim is that $\mu_{\mathrm{leb}}(\tilde{\mathcal{S}})=1$. \item[Most (comeagre) sequences are non-stochastic] Let $X_i=[2]$ and $X=\bigtimes_{i\in\mathbb{N}} X_i$ equipped with the product topology. As a set $X\cong [2]^\mathbb{N}$. Then $\mathcal{N}\subset X$ is comeagre \citep{oxtoby1980measure}. \item[Most (comeagre) sequences are stochastic] With different choices of topology, the opposite conclusion holds --- there are topologies such that $\mathcal{S}$ is comeagre \citep{calude2003topological}. \item[Most (comeagre) sequences are extremely non-stochastic] Let $\Def{\tilde{\mathcal{N}}^\star}$ denote the subset of $[0,1]$ of $\tilde{x}$ corresponding to $x\in[k]^\mathbb{N}$ which satisfy $ \forall i \in [k],\ \liminf_{n\rightarrow\infty} r_i^x(n)=0 \text{ and } \limsup_{n\rightarrow\infty}r_i^x(n)=1. 
$ These sequences are (justifiably) called \Def{extremely non-stochastic}; the sequence constructed in Subsection \ref{subsec:maximally-nonstochastic} is an example. Then the set $\tilde{\mathcal{N}}^\star$ is comeagre in the usual topology of real numbers \citep{calude1999most}; confer \citep[section 7.3]{calude2002information}. \item[Most (comeagre) sequences are perversely non-stochastic] Denote the set of \Def{perversely nonstochastic sequences} $\Def{\mathcal{N}^{\star\star}\coloneqq\{x\in[k]^\mathbb{N}\colon \operatorname{CP}(r^x)=\Delta^k\}}$. Observe $\mathcal{N}^{\star\star}\subset\mathcal{N}^\star$. Let $\tilde{\mathcal{N}}^{\star\star}\coloneqq\{\tilde{x}\in [0,1] \colon x\in\mathcal{N}^{\star\star}\}$ (what \citet{olsen2004extremely} calls ``extremely non-normal numbers'', but we use ``extremely'' for the larger set $\tilde{\mathcal{N}}^\star$). Then $\tilde{\mathcal{N}}^{\star\star}$ is comeagre (in the usual topology of real numbers) \citep{aveni2022most,olsen2004extremely}. An even stronger result holds. Let $A$ denote a (not necessarily uniform) finite averaging operator and let $\Def{\mathcal{N}^{\star\star\star}\coloneqq \{x\in[k]^\mathbb{N}\colon \operatorname{CP}(A(r^x))=\Delta^k\}}$ and $\Def{\tilde{\mathcal{N}}^{\star\star\star}\coloneqq\{ \tilde{x}\in [0,1] \colon x\in\mathcal{N}^{\star\star\star}\}}$. Observe $\mathcal{N}^{\star\star\star}\subset\mathcal{N}^{\star\star}$ and $\tilde{\mathcal{N}}^{\star\star\star}\subset \tilde{\mathcal{N}}^{\star\star}$. Then $\tilde{\mathcal{N}}^{\star\star\star}$ is also comeagre \citep{stylianou2020typical}! \item[Most (Lebesgue measure) sequences are non-stochastic] There exist a range of representations of real numbers called $Q^*$-representations ($Q^*$ is a $k\times\infty$ matrix valued parameter of the representation); see \citep[Section 4]{albeverio2005topological} for details. 
Let $\Def{\tilde{x}^{Q^*}}$ denote the $Q^*$-representation of a sequence $x\in[k]^\mathbb{N}$, and $\Def{\tilde{\mathcal{S}}^{Q^*}\coloneqq\left\{\tilde{x}^{Q^*}\colon x\in\mathcal{S}\right\}}$ and $\Def{\tilde{\mathcal{N}}^{Q^*}\coloneqq\left\{\tilde{x}^{Q^*}\colon x\in\mathcal{N}\right\}}$. Then there exist $Q^*$ such that $\mu_\mathrm{leb}\left(\tilde{\mathcal{N}}^{Q^*}\right)=1$ \citep[p.\@\xspace 627]{albeverio2005topological}. Thus if the size of $\mathcal{N}$ is judged via certain $Q^*$ representations, Lebesgue almost all sequences are non-stochastic! \end{description} An obvious conclusion to draw from the above examples is that in answering the question of the preponderance of non-stochastic sequences, one can get essentially whatever answer one wants by choosing a range of different precise formulations of the question. At the very least, this should make us skeptical of any purely mathematical attempts to reason whether one might expect to encounter non-stochastic sequences in practice --- the topic to which we now turn. \subsection{Typical Real Sequences} \label{subsec:real-measured-sequences} \vspace*{-8mm} \hfill\begin{minipage}{0.45\textwidth} \footnotesize{\it The laws of large numbers cannot be applied for describing the statistical stabilization of frequencies in sampling experiments.} \hfill --- Andrei Khrennikov \citeyearpar[p.\@\xspace 20]{khrennikov2009interpretations} \end{minipage} What do the above points imply about the likelihood one will encounter stochastic or non-stochastic sequences when performing real measurements? \emph{Nothing}. This is not to say that in actuality we will often encounter non-stochastic sequences. Rather our point is that no amount of purely theoretical reasoning will be able to tell us in advance how ``likely'' it is to do so. What is at issue is whether stochastic sequences are in fact ``typical'' in our world. 
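Extremely non-stochastic sequences of the kind discussed above are, moreover, easy to exhibit concretely. The following sketch is an illustrative construction (not necessarily the one of Subsection \ref{subsec:maximally-nonstochastic}), assuming, as above, that $r_i^x(n)$ denotes the relative frequency of symbol $i$ among the first $n$ terms: the sequence consists of blocks of a single symbol whose lengths grow so fast that each block swamps the entire past, so $r_1^x(n)$ oscillates arbitrarily close to both $0$ and $1$.

```python
# Sketch: a binary sequence over {1, 2} built from single-symbol blocks.
# Block j has length j * (total length so far), so after block j the
# current symbol has relative frequency at least j / (j + 1).  Symbols
# alternate between blocks, hence r_1 oscillates towards liminf 0 and
# limsup 1.  We track counts instead of materializing the huge sequence.

def r1_at_block_ends(num_blocks):
    total, count_1 = 1, 1          # block 0: a single symbol 1
    freqs = [count_1 / total]
    for j in range(1, num_blocks):
        block_len = j * total       # the new block dwarfs the past
        if j % 2 == 0:              # even-indexed blocks are symbol 1
            count_1 += block_len
        total += block_len
        freqs.append(count_1 / total)
    return freqs

freqs = r1_at_block_ends(25)
print(max(freqs), min(freqs))       # close to 1 and close to 0
```

Letting the number of blocks grow without bound, the same scheme gives $\liminf_n r_1^x(n)=0$ and $\limsup_n r_1^x(n)=1$, i.e. extreme non-stochasticity for $k=2$.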
Perhaps the most surprising thing about the mathematical results summarized above is the extent to which different notions of typicality affect the conclusions. This raises the question of whether some notions of typicality are more justified when wishing to consider real sequences that have been measured in the world. In the study of physics (especially aspects of physics that are apparently intrinsically statistical) such questions have been raised, and below we briefly summarize what is known. Traditionally, ``probability'' is considered as a primitive, and notions of typicality are derived from it in terms of their ``probability'' of occurring. And the above examples illustrate that attempts to argue for the Lebesgue measure having a privileged role as the ``right'' notion of typicality are barking up the wrong tree; confer \citep{pitowsky2012typicality}. But this will not do for our question. Typicality is a more fundamental notion \citep{galvan2006bohmian,galvan2007typicality} --- arguably the ``mother of all'' notions of probability \citep{goldstein2012typicality}. Typicality is at the core of questions of non-stochastic randomness in physics, and thus (consistent with the perspective of the present paper) leads to non-additive measures of typicality \citep{galvan2022non}, essentially a measure of typicality inspired by a coherent upper probability, which allows the extension to notions of mutual typicality necessary to reason about situations such as that referred to in footnote \ref{footnote:rivas}. In fact, typicality plays an even stronger role than answering questions regarding the preponderance of non-stochastic sequences. 
As \citet[p.\@\xspace 36]{durr2021typicality} observe ``the notion of typicality is necessary to understand what the statistical predictions of a physical theory really mean.'' They note that the usual appeal to the law of large numbers misses the point because its conclusion (convergence of relative frequencies) is true only \emph{if} one sees typical sequences: ``What needs to be explained is why we only see typical sequences! That's actually the deep question underlying the meaning of probability theory from its very beginning \ldots'' \citep[p.\@\xspace 37]{durr2021typicality}. In classical mechanics, appeal to Liouville's theorem suggests an ``invariant measure'' as being a natural choice; in the quantum realm, there is an analogous choice (invariant to Bohmian flow) \citep[p.\@\xspace 41]{durr2021typicality}. But these situations are rather special from the perspective of a statistician. The situation is well summarized by \citet[p.\@\xspace 130]{durr2001bohmian}: ``What is typicality? It is a notion for defining the smallness of sets of (mathematically inevitable) exceptions and thus permitting the formulation of law of large numbers type statements. Smallness is usually defined in terms of a measure. What determines the measure? In physics, the physical theory.'' Confer \citep[Chapter 4]{durr2009} who observe that from a \emph{scientific} perspective (where one wants to make claims about the world) establishing the pre-conditions for the law of large numbers to hold is ``exceedingly difficult''\footnote{The example they give is for the Galton board, or quincunx, a device often appealed to in order to teach the reality of the central limit theorem --- an even stronger claim than the law of large numbers. The irony is that it is rarely checked empirically. And when it has been, it has been found to be untrue! \citep[Figure 8]{bagnold1983nature}.}. 
Very well one might say, but the arguments in favor of typicality of non-stochastic sequences given above all rely on topological arguments, or unusual encodings of sequences to numbers. What is the justification for topological notions of typicality when considering sequences of measurements obtained from the world? \citet[p.\@\xspace 270]{sklar2000topology} has actually argued that the topological perspective might offer a foundation with \emph{fewer} opportunities for claims of arbitrariness than measure theoretical approaches. See also \citep[p.\@\xspace 185]{sklar1995physics} and the discussion in \citep[Chapter 4]{guttmann1999concept} which reframes the problem away from typicality to viewing the whole question from an approximation perspective where the notion of smallness of sets is naturally one of meagreness. Our point is that even within the restricted realm of physics, there are compelling arguments at least not to take the measure-based notion of typicality for granted. Once that is accepted, non-stochastic sequences seem less unusual. \subsection{Violations of the Law of Large Numbers} A typical universe is in equilibrium; but ``our universe is atypical or in non-equilibrium'' \citep[p.\@\xspace 81]{durr2009} and ``what renders knowledge at all possible is nonequilibrium'' \citep[p.\@\xspace 886]{durr1992quantum}, so we should not be surprised if it is not ``typical''. And indeed that is what we see when we look: ``The so-called law of large numbers is also invalid for social systems with finite elements during transition'' \citep{chen1991nonequilibrium}. \citet{gorban2011statistical,gorban2017statistical,gorban2018randomness} has documented many examples of real phenomena failing to be statistically stable. Such failures are held to explain departures from ``normal'' distributions \citep{philip1987some}. But more importantly, they mean we should not expect even convergence of relative frequencies in non-equilibrium situations. 
Such was the conclusion of Prigogine in his ground-breaking studies of non-equilibrium thermodynamics where he spoke of a ``breakdown of the `law of large numbers'{}'' \citep[p.\@\xspace 9 and 228]{Nicolos1977}; see also \citep[p.\@\xspace 781]{prigogine1978time}, \citep[p.\@\xspace 180]{prigogine1984order} and \citep[p.\@\xspace 131]{prigogine1980}. And more recently, studies of the use of machine learning systems ``in the wild'' have recognized that non-stochasticity is not so exotic after all \citep{katsikopoulos2021classification}. Thus perhaps it is time to downgrade this ``law'' of nature. \subsection{Repeal of the Law of Large Numbers} \vspace*{-8mm} \hfill\begin{minipage}{0.45\textwidth} \footnotesize{\it A typical universe shows statistical regularities as we perceive them in a long run of coin tosses. It looks as if objective chance is at work, while in truth it is not. There is no chance. That is the basis of mathematical probability theory.} \hfill --- Detlef D\"{u}rr and Stefan Teufel \citeyearpar[p.\@\xspace 64]{durr2009}. \end{minipage} \citet{desrosieres1998politics} in his history of statistical reasoning has observed the awe with which stable frequencies were viewed when they were first encountered; the effect being interpreted as a hidden divine order\footnote{``I know of scarcely anything so apt to impress the imagination as the wonderful form of cosmic order expressed by the `Law of Frequency of Error.' The law would have been personified by the Greeks and deified, if they had known of it. It reigns with serenity and in complete self-effacement amidst the wildest confusion. The huger the mob, and the greater the apparent anarchy, the more perfect is its sway. It is the supreme law of Unreason. Whenever a large sample of chaotic elements are taken in hand and marshalled in the order of their magnitude, an unsuspected and most beautiful form of regularity proves to have been latent all along.'' \citep[p.\@\xspace 66]{galton1889natural}. 
See also \citep{rose2016end} for a recent discussion on statistical normality. }. And indeed in many practical situations, stable frequencies do arise. But that does not mean we should take such situations as the only ones that can occur. We may well legitimately call them ``normal.'' But we can better understand the normal by studying the pathological \citep[p.\@\xspace 19--20]{canguilhem1978normal}. Ironically in his attempt to clarify the notion of ``normal'' \citet[p.\@\xspace 103]{canguilhem1978normal} considered whether ``normal'' was simply ``average'' and concluded ``the concepts of norm and average must be considered as two different concepts''. As we have seen, averages can indeed be far from normal, and potentially quite often. Perhaps we have been misled by the strange name given to the famous theorem we are considering: by calling it a ``law'' we are inheriting a lot of baggage as to what we mean by that, baggage that has been traced to notions of divine origin \citep{zilsel1942genesis}\footnote{The contrary views regarding the historical origin of the notion of a scientific law \citep{milton1981origin, ruby1986origins,weinert1995laws} do not contradict our point.} of lawfulness. And we hanker after lawfulness: \begin{quote} We \ldots naturally hope that the world is orderly. We like it that way... All of us \ldots find this idea sustaining. It controls confusion, it makes the world seem more intelligible. But suppose the world should happen in fact to be not very intelligible? Or suppose merely that we do not know it to be so? Might it not then be our duty to admit these distressing facts? \citep[p.\@\xspace 199]{midgley2013science} \end{quote} Perhaps the theory of imprecise probabilities presented in this paper which we have grounded in the instability of relative frequencies may help us to admit this ``distressing fact.'' It does suggest to us that the law of large numbers, while a fine and true theorem, as a ``law'' might be in need of repealing. 
\section{Ivanenko's Sampling Nets} \label{app:ivanenkonets} Ivanenko seeks to abstract away from sequences and hence defines the notion of a \textit{sampling net}. First, we recall the standard definition of a net in topology, which generalizes sequences in an important way. A directed set $(\Lambda, \leq)$ consists of an arbitrary set $\Lambda$ and a direction $\leq$ on it, which satisfies the following properties: \begin{enumerate}[label=\textbf{DIR\arabic*.}, ref=DIR\arabic*] \item If $\lambda \in \Lambda$, then $\lambda \leq \lambda$ \quad (reflexivity). \item If $\lambda_1, \lambda_2, \lambda_3 \in \Lambda$ and $\lambda_1 \leq \lambda_2$ and $\lambda_2 \leq \lambda_3$, then $\lambda_1 \leq \lambda_3$ \quad (transitivity). \item If $\lambda_1,\lambda_2 \in \Lambda$, then $\exists \lambda_3 \in \Lambda$ such that $\lambda_1 \leq \lambda_3$ and $\lambda_2 \leq \lambda_3$ \quad (upper bound). \end{enumerate} That is, $\leq$ is a pre-order in which any two elements have a common upper bound. A \textit{net}\footnote{Ivanenko calls this ``directedness''.} is a function $\varphi\colon \Lambda \rightarrow \mathcal{Y}$, where the domain is a directed set $(\Lambda,\leq)$. Fix a topology $\tau$ on $\mathcal{Y}$. We say that a point $T \in \mathcal{Y}$ is a cluster point of the net $\varphi$, $T \in \operatorname{CP}(\varphi)$, if: \[ \forall N \text{ neighbourhood of } T \text{ with respect to } \tau,\ \forall \lambda_0 \in \Lambda\ \exists \lambda_1 \geq \lambda_0 \colon \varphi(\lambda_1) \in N. \] As an example, a sequence is a net, where $\Lambda = \mathbb{N}$ and $\leq$ is the familiar order on the natural numbers. Define the \textit{space of samples from $\Omega$} as: \[ \Omega^{(\infty)} \coloneqq \bigcup_{n=1}^\infty \Omega^n, \quad \Omega^n \coloneqq \underbrace{\Omega \times \cdots \times \Omega}_{n \text{ times}}. 
\] \citet{ivanenkobook} then calls a net $\varphi\colon \Lambda \rightarrow \Omega^{(\infty)}$, which takes values in the space of samples from $\Omega$, a \textit{sampling directedness} or \textit{sampling net} (e.g.\@\xspace in \citet{ivanenkoonregularities}). To such a net, Ivanenko associates a net $n_\varphi \colon (\Lambda,\leq) \rightarrow \mathbb{N}$ of ``frequency counts'': \[ n_\varphi(\lambda) \coloneqq n \text{ such that } \varphi(\lambda) \in \Omega^n. \] Furthermore, a corresponding net of relative frequencies $P_\varphi\colon (\Lambda,\leq) \rightarrow \operatorname{PF}(\Omega)$ can be defined as follows: \[ P_\varphi(\lambda) \coloneqq A \mapsto \frac{1}{n_\varphi(\lambda)} \sum_{i=1}^{n_\varphi(\lambda)} \chi_A(\varphi_i(\lambda)), \] where $A \subseteq \Omega$ and $\varphi(\lambda)=(\varphi_1(\lambda),\ldots,\varphi_{n_\varphi(\lambda)}(\lambda)) \in \Omega^{n_\varphi(\lambda)}$. The non-empty closed set of cluster points of $P_\varphi$ Ivanenko calls the \textit{statistical regularity} of the sampling net $\varphi$. The main result in \citep{ivanenkobook} is then the following. Call any non-empty weak* closed subset $\mathcal{P} \subseteq \operatorname{PF}(\Omega)$ a \textit{regularity}. \begin{proposition}[\protect{\citet[Theorem 4.2]{ivanenkobook}}] \label{prop:ivanenkomaintheorem} Any sampling net has a regularity, and any regularity is the regularity of some sampling net [..]. \end{proposition} Thus, the concept of a sampling net stands in a satisfying one-to-one correspondence with that of a weak* closed set of linear previsions (in \citep{ivanenkobook}, finitely additive probabilities). Nonetheless, we remain skeptical about the utility of this concept and raise the question: what is the \textit{meaning} of a sampling net? \citet{ivanenko2017expected} give an intuition: take for example $\Lambda = {\bm{R}}^+$ with the familiar order $\leq$. 
Then, we could interpret $P_\varphi(t)(A)$ as the relative frequency of hits in $A$ among the observations $(\omega_1,\ldots,\omega_{n_\varphi(t)})$ that are performed at time $t$. Importantly, at any time $t$ we could record a totally different number of observations, since that number is itself given by the net $n_\varphi$. And to obtain the ``relative frequencies'' at time $t$, we consider only data which was observed at time $t$ and completely neglect the past. Contrast this with the case of a sequence: at each time step, we make exactly one observation, and to compute the relative frequencies at time $n$, we use \textit{the complete past}. Moreover, in the above example, the directed set ${\bm{R}}^+$ with the familiar order $\leq$ was easily intuited. However, Ivanenko's proof for the direction ``to any regularity there exists a corresponding sampling net'' is non-constructive in the sense that he uses the exotic directed set \[ \Lambda \coloneqq {\bm{R}}^+ \times (L^\infty)^{(\infty)} \times P, \] where $P \subseteq \operatorname{PF}(\Omega)$ is the regularity in question and \[ (L^\infty)^{(\infty)} \coloneqq \bigcup_{n=1}^\infty (L^\infty)^n, \quad (L^\infty)^n \coloneqq \underbrace{L^\infty \times \cdots \times L^\infty}_{n \text{ times}}. \] It is not clear to us what a realistic interpretation for such a sampling net $\varphi \colon \Lambda \rightarrow \Omega^{(\infty)}$ would look like. \section{Introduction} \input{introduction.tex} \input{ivanenko} \input{backwards} \section{Unstable Conditional Probability} An interesting aspect of our strictly frequentist approach is that there is a natural way of introducing conditional probability for events $A,B \subseteq \Omega$, which is the same for the case of converging or diverging relative frequencies. Furthermore, this approach generalizes directly to gambles. We will observe that this, perhaps surprisingly, yields the \textit{generalized Bayes rule}. In the precise case, the standard Bayes rule is recovered. 
Recall that for a countably or finitely additive probability, we can define conditional probability as: \begin{equation} \label{eq:defcondmeasure} \Def{Q(A|B) \coloneqq \frac{Q(A\cap B)}{Q(B)}, \quad A,B \subseteq \Omega, \text{ if } Q(B)>0.} \end{equation} Important here is the condition that $Q(B)>0$. Conditioning on events of measure zero may create trouble; in that case, Kolmogorov simply allows the conditional probability to be arbitrary. This is rather unfortunate, as there arguably are settings where one would like to condition on events of measure zero. As a prerequisite, given a linear prevision $E \in \operatorname{PF}(\Omega)$, we define the conditional linear prevision as: \begin{equation} \label{def:condlinearprev} \Def{E(X|B) \coloneqq \frac{E\left(\chi_B X\right)}{E\left(\chi_B\right)}, \quad \text{ if } E\left(\chi_B\right)>0.} \end{equation} The application to indicator gambles then recovers conditional probability. As long as $E\left(\chi_B\right)>0$, it is immaterial whether we condition the linear prevision, or instead condition at the level of its underlying probability and then naturally extend it; confer \citep[Corollary 2.8.5]{walley1991statistical}. Roughly in line with Kolmogorov's conditional probability, von Mises started from the following intuitive, frequentist view: the probability of an event $A$ conditioned on an event $B$ is the frequency of the occurrence of the event $A$ given that $B$ happens. In what follows, we build upon this idea, which von Mises called the ``partition operation'' \citep[p.\@\xspace 22]{mises1964mathematical}. \input{conditional.tex} \input{independence.tex} \input{discussionfreqapproach.tex} \subsubsection*{Acknowledgments} This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy -- EXC number 2064/1 -- Project number 390727645. 
The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Christian Fröhlich and Rabanus Derr. Robert Williamson thanks Jingni Yang for a series of discussions over several years about Ivanenko's work which provided much inspiration for the present work.
\section{Introduction} Set partitions avoiding $k$-crossings and $k$-nestings have been extensively studied from the points of view of both Combinatorics and Mathematical Biology; see~\cite{chen,chen2,kr0} and the references therein. The bijection between partitions and vacillating (resp.~hesitating) tableaux due to Chen, Deng, Du, Stanley and Yan~\cite{chen} is now a fundamental tool for analyzing classical (resp.~enhanced) $k$-crossings and $k$-nestings. In particular, these two bijections were applied by Bousquet-M\'elou and Xin~\cite{bx} to enumerate set partitions avoiding classical or enhanced 3-crossings. After their work, the sequence $\{C_3(n)\}_{n\geq1}$ (resp.~$\{E_3(n)\}_{n\geq1}$) where $C_3(n)$ (resp.~$E_3(n)$) is the number of partitions of $[n]:=\{1,2,\ldots,n\}$ avoiding classical (resp.~enhanced) $3$-crossings has been registered as \href{https://oeis.org/A108304}{A108304} (resp.~\href{https://oeis.org/A108307}{A108307}) in OEIS: \begin{align*} \{C_3(n)\}_{n\geq1}&=\{1,2,5,15,52,202,859,3930,\ldots\},\\ \{E_3(n)\}_{n\geq1}&=\{1,2,5,15,51,191,772,3320,\ldots\}. \end{align*} The main purpose of this paper is to show that the sequence $\{E_3(n)\}_{n\geq1}$ also enumerates inversion sequences with no weakly decreasing subsequence of length $3$. As we will see, this implies the following intriguing identity between $\{C_3(n)\}_{n\geq1}$ and $\{E_3(n)\}_{n\geq1}$: \begin{equation}\label{eq:3cros} C_3(n+1)=\sum_{i=0}^n{n\choose i}E_3(i), \end{equation} where we use the convention $E_3(0)=1$. It is convenient to recall some necessary definitions. For each $n\geq1$, let $\operatorname{{\bf I}}_n$ be the set of {\em inversion sequences} of length $n$ defined as $$\operatorname{{\bf I}}_n:=\{(e_1,e_2,\ldots,e_n): 0\leq e_i<i\}.$$ An inversion sequence $e\in\operatorname{{\bf I}}_n$ is said to be {\em$(\geq,\geq,-)$-avoiding} if there does not exist $i<j<k$ such that $e_i\geq e_j\geq e_k$. 
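The avoidance condition just defined and identity \eqref{eq:3cros} both lend themselves to a quick brute-force sanity check against the initial terms of A108304 and A108307 quoted above (a sketch):

```python
from itertools import product
from math import comb

def avoids(e):
    """No i < j < k with e_i >= e_j >= e_k (weakly decreasing triple)."""
    n = len(e)
    return not any(e[i] >= e[j] >= e[k]
                   for i in range(n) for j in range(i + 1, n)
                   for k in range(j + 1, n))

def count_avoiding(n):
    """Number of (>=,>=,-)-avoiding inversion sequences of length n."""
    return sum(avoids(e) for e in product(*(range(i) for i in range(1, n + 1))))

print([count_avoiding(n) for n in range(1, 6)])  # [1, 2, 5, 15, 51]

# Identity (1): C_3(n+1) = sum_i binom(n, i) * E_3(i), checked on the
# initial terms listed above, with the convention E_3(0) = 1.
C3 = [1, 2, 5, 15, 52, 202, 859, 3930]      # C_3(1), ..., C_3(8)
E3 = [1, 1, 2, 5, 15, 51, 191, 772, 3320]   # E_3(0), ..., E_3(8)
for n in range(8):
    assert C3[n] == sum(comb(n, i) * E3[i] for i in range(n + 1))
```

The printed counts agree with the terms $1,2,5,15,51$ of A108307, as the main theorem of this paper asserts.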
The set of all $(\geq,\geq,-)$-avoiding inversion sequences in $\operatorname{{\bf I}}_n$ is denoted by $\operatorname{{\bf I}}_n(\geq,\geq,-)$. For example, we have $$\operatorname{{\bf I}}_3(\geq,\geq,-)=\{(0,0,1),(0,0,2),(0,1,0),(0,1,1),(0,1,2)\}.$$ Recently, Martinez and Savage~\cite{ms} studied this class of restricted inversion sequences and suspected the following connection with enhanced $3$-noncrossing partitions. \begin{conjecture}[Yan~\cite{yan} \& Martinez--Savage~\cite{ms}]\label{conj:yan} The cardinality of $\operatorname{{\bf I}}_n(\geq,\geq,-)$ is $E_3(n)$. \end{conjecture} In fact, this conjecture had already been proposed by Yan~\cite{yan} several years earlier, in the course of proving a conjecture of Duncan and Steingr\'imsson~\cite{ds}. We notice that in~\cite{yan} there is an interesting bijection between $(\geq,\geq,-)$-avoiding inversion sequences and $210$-avoiding primitive ascent sequences as we review below. Recall that a sequence of integers $x=x_1x_2\cdots x_n$ is called an {\em ascent sequence} if it satisfies $x_1=0$ and for all $2\leq i\leq n$, $0\leq x_i\leq \mathrm{asc}(x_1x_2\cdots x_{i-1})+1$, where $$\mathrm{asc}(x_1x_2\cdots x_{i-1})=|\{j\in[i-2]:x_j<x_{j+1}\}| $$ is the ascent number of $x_1x_2\cdots x_{i-1}$. Such an ascent sequence $x$ is said to be \begin{itemize} \item {\em $210$-avoiding}, if $x$ has no decreasing subsequence of length $3$; \item {\em primitive}, if $x_i\neq x_{i+1}$ for all $i\in[n-1]$. \end{itemize} Denote by $\mathcal{A}_n(210)$ and $\mathcal{PA}_n(210)$ the set of all $210$-avoiding ordinary and primitive ascent sequences of length $n$, respectively. For example, we have $$ \mathcal{A}_3(210)=\{000,001,010,011,012\}\text{ and }\mathcal{PA}_4(210)=\{0101,0102,0120,0121,0123\}. $$ Via an intermediate structure of growth diagrams for $01$-fillings of Ferrers shapes, Yan~\cite{yan} proved combinatorially the following equinumerosity, which was first conjectured in~\cite[Conjecture~3.3]{ds}. 
\begin{theorem}[Main result of Yan~\cite{yan}] \label{thm:yan} The cardinality of $\mathcal{A}_n(210)$ is $C_3(n)$. \end{theorem} In the course of her combinatorial proof of Theorem~\ref{thm:yan}, she also showed that the mapping $\phi:\mathcal{PA}_{n+1}(210)\rightarrow\operatorname{{\bf I}}_n(\geq,\geq,-)$ defined for each $x\in\mathcal{PA}_{n+1}(210)$ by $$ \phi(x)=(e_1,e_2,\ldots,e_n),\,\,\text{where $e_i=i-1+x_{i+1}-\mathrm{asc}(x_1x_2\cdots x_{i+1})$}, $$ is a bijection. Therefore, Conjecture~\ref{conj:yan} is equivalent to $|\mathcal{PA}_{n+1}(210)|=E_3(n)$, as was originally suggested in~\cite[Remark~3.6]{yan}. The rest of this paper is laid out as follows. In Section~\ref{sec:tree}, we develop the generating tree for $\cup_{n\geq1}\operatorname{{\bf I}}_n(\geq,\geq,-)$ and obtain a resulting functional equation. In Section~\ref{sec:proof}, we solve this functional equation via the obstinate kernel method~\cite{bo} and then apply the Lagrange inversion formula and Zeilberger's algorithm to finish the proof of Conjecture~\ref{conj:yan}. In Section~\ref{sec:yan}, we show how Conjecture~\ref{conj:yan}, together with the results in~\cite{bx}, provides an alternative approach to Theorem~\ref{thm:yan}. An extension of~\eqref{eq:3cros} to $k$-noncrossing partitions is also conjectured. In Section~\ref{sec:aw}, we apply a technique similar to that of Section~\ref{sec:tree} to enumerate another interesting class of restricted inversion sequences introduced by Adams-Watters~\cite{aw}. Surprisingly, the resulting functional equation is difficult enough that we do not know how to solve it. Finally, we conclude the paper with some further remarks. \section{The generating tree for $(\geq,\geq,-)$-avoiding inversion sequences} \label{sec:tree} A {\em left-to-right maximum} of an inversion sequence $(e_1,e_2,\ldots,e_n)$ is an entry $e_i$ satisfying $e_i>e_j$ for any $j<i$. 
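As a small sanity check, Yan's map $\phi$ defined above can be verified by brute force: generate all ascent sequences, keep the $210$-avoiding primitive ones, and compare the image of $\phi$ with $\operatorname{{\bf I}}_n(\geq,\geq,-)$ (a sketch for small $n$):

```python
from itertools import product

def asc(x):
    """Ascent number of the sequence x."""
    return sum(x[j] < x[j + 1] for j in range(len(x) - 1))

def ascent_sequences(n):
    """All ascent sequences of length n, built entry by entry."""
    seqs = [(0,)]
    for _ in range(n - 1):
        seqs = [s + (v,) for s in seqs for v in range(asc(s) + 2)]
    return seqs

def avoids_210(x):
    """No strictly decreasing subsequence of length 3."""
    n = len(x)
    return not any(x[i] > x[j] > x[k]
                   for i in range(n) for j in range(i + 1, n)
                   for k in range(j + 1, n))

def primitive(x):
    return all(x[i] != x[i + 1] for i in range(len(x) - 1))

def phi(x):
    """Yan's map: e_i = i - 1 + x_{i+1} - asc(x_1 ... x_{i+1})."""
    return tuple(i - 1 + x[i] - asc(x[:i + 1]) for i in range(1, len(x)))

def inv_avoiding(n):
    """I_n(>=,>=,-): inversion sequences with no weakly decreasing triple."""
    return {e for e in product(*(range(i) for i in range(1, n + 1)))
            if not any(e[i] >= e[j] >= e[k]
                       for i in range(n) for j in range(i + 1, n)
                       for k in range(j + 1, n))}

for n in (3, 4):
    PA = [x for x in ascent_sequences(n + 1) if avoids_210(x) and primitive(x)]
    images = {phi(x) for x in PA}
    assert len(images) == len(PA)        # phi is injective here
    assert images == inv_avoiding(n)     # and onto I_n(>=,>=,-)
```

For $n=3$ this recovers exactly the bijection between $\mathcal{PA}_4(210)=\{0101,0102,0120,0121,0123\}$ and $\operatorname{{\bf I}}_3(\geq,\geq,-)$ listed earlier.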
Similar to $321$-avoiding permutations, $(\geq,\geq,-)$-avoiding inversion sequences have the following important characterization proved by Martinez--Savage~\cite{ms}. \begin{proposition}[See~\cite{ms}, Observation~7]\label{decr} An inversion sequence is $(\geq,\geq,-)$-avoiding if and only if both the subsequence formed by its left-to-right maxima and the one formed by the remaining entries are strictly increasing. \end{proposition} For each $e\in\operatorname{{\bf I}}_n(\geq,\geq,-)$, introduce the {\em parameters} $(p,q)$ of $e$, where $$p=\alpha(e)-\beta(e) \quad\text{and}\quad q=n-\alpha(e) $$ with $\alpha(e)=\operatorname{max}\{e_1,e_2,\ldots,e_n\}$ and $\beta(e)$ is the greatest integer in the set $$\{e_i: e_i\text{ is not a left-to-right maximum}\}\cup\{-1\}.$$ For example, the parameters of $(0,1,2)$ are $(3,1)$, while the parameters of $(0,1,1)$ are $(0,2)$. We have the following rewriting rule for $(\geq,\geq,-)$-avoiding inversion sequences. \begin{lemma}\label{lem:cross} Let $e\in\operatorname{{\bf I}}_n(\geq,\geq,-)$ be an inversion sequence with parameters $(p,q)$. Exactly $p+q$ inversion sequences in $\operatorname{{\bf I}}_{n+1}(\geq,\geq,-)$ become $e$ when their last entries are removed, and their parameters are respectively: \begin{align*} &(p-1,q+1), (p-2,q+1),\ldots, (0,q+1)\\ &(p+1,q), (p+2,q-1),\ldots, (p+q,1). \end{align*} The order in which the parameters are listed corresponds to the inversion sequences with last entries from $\beta(e)+1$ to $n$. \end{lemma} \begin{proof} In view of Proposition~\ref{decr}, the vector $f:=(e_1,e_2,\ldots,e_n,b)$ is an inversion sequence in $\operatorname{{\bf I}}_{n+1}(\geq,\geq,-)$ if and only if $\beta(e)<b\leq n$. We distinguish two cases: \begin{itemize} \item If $\beta(e)<b\leq\alpha(e)$, then $\alpha(f)=\alpha(e)$ and $\beta(f)=b$. These contribute the parameters $(p-1,q+1), (p-2,q+1),\ldots, (0,q+1)$. \item If $\alpha(e)<b\leq n$, then $\alpha(f)=b$ and $\beta(f)=\beta(e)$. 
This case contributes the parameters $(p+1,q), (p+2,q-1),\ldots, (p+q,1)$. \end{itemize} These two cases together give the rewriting rule for $(\geq,\geq,-)$-avoiding inversion sequences. Note that $p=\alpha(e)-\beta(e)$ may be $0$, i.e. $\alpha(e)=\beta(e)$, and in this situation the first case is empty. \end{proof} Using the above lemma, we construct a {\em generating tree} (actually an infinite rooted tree) for $(\geq,\geq,-)$-avoiding inversion sequences by representing each element as its parameters as follows: the root is $(1,1)$ and the children of a vertex labelled $(p,q)$ are those generated according to the rewriting rule in Lemma~\ref{lem:cross}. See Fig.~\ref{tree} for the first few levels of this generating tree. Note that the number of vertices at the $n$-th level of this tree is the cardinality of $\operatorname{{\bf I}}_n(\geq,\geq,-)$. \begin{figure} \setlength{\unitlength}{1mm} \begin{picture}(120,38)\setlength{\unitlength}{1mm} \thinlines \put(50,35){$(1,1)$} \put(54,33.5){\line(-5,-1){25}}\put(55,33.5){\line(5,-1){25}} \put(24,25){\small{$(0,2)$}}\put(76,25){\small{$(2,1)$}} \put(27.5,23.5){\line(-4,-1){20}}\put(28.5,23.5){\line(1,-1){5}} \put(3.5,15){\footnotesize{$(1,2)$}}\put(30,15){\footnotesize{$(2,1)$}} \put(6,13.5){\line(-2,-1){14}}\put(7,13.5){\line(-1,-2){3.5}}\put(8,13.5){\line(1,-1.5){5}} \put(-12,4){\tiny{$(0,3)$}}\put(0,4){\tiny{$(2,2)$}}\put(10,4){\tiny{$(3,1)$}} \put(-9,3){\line(-1,-1){4}}\put(-9,3){\line(0,-1){4}}\put(-9,3){\line(1,-1){4}} \put(3,3){\line(-1,-4){1}}\put(3,3){\line(-3,-2){6}}\put(3,3){\line(1,-4){1}}\put(3,3){\line(1,-1){4}} \put(13,3){\line(-1,-1){4}}\put(13,3){\line(-1,-4){1}}\put(13,3){\line(1,-4){1}}\put(13,3){\line(3,-2){6}} \put(32.5,13.5){\line(-1,-1){7}}\put(33.5,13.5){\line(1,-3){2.4}}\put(34.5,13.5){\line(3,-2){10}} \put(22,4){\tiny{$(1,2)$}}\put(33,4){\tiny{$(0,2)$}}\put(42,4){\tiny{$(3,1)$}} \put(25,3){\line(-1,-1){4}}\put(25,3){\line(0,-1){4}}\put(25,3){\line(1,-1){4}} 
\put(36,3){\line(-1,-1){4}}\put(36,3){\line(1,-2){2}} \put(45,3){\line(-1,-4){1}}\put(45,3){\line(-1,-1){4}}\put(45,3){\line(1,-4){1}}\put(45,3){\line(1,-1){4}} \put(80,23.5){\line(-2,-1){10}}\put(81,23.5){\line(1,-1){5}}\put(82,23.5){\line(5,-1){25}} \put(66,15){\footnotesize{$(1,2)$}}\put(83,15){\footnotesize{$(0,2)$}}\put(104,15){\footnotesize{$(3,1)$}} \put(69,13.5){\line(-2,-1){14}}\put(70,13.5){\line(-1,-2){3.5}}\put(71,13.5){\line(1,-1.5){5}} \put(52,4){\tiny{$(0,3)$}}\put(63,4){\tiny{$(2,2)$}}\put(73,4){\tiny{$(3,1)$}} \put(55,3){\line(-1,-1){4}}\put(55,3){\line(0,-1){4}}\put(55,3){\line(1,-1){4}} \put(66,3){\line(-1,-4){1}}\put(66,3){\line(-1,-1){4}}\put(66,3){\line(1,-4){1}}\put(66,3){\line(1,-1){4}} \put(76,3){\line(-1,-4){1}}\put(76,3){\line(-1,-1){4}}\put(76,3){\line(1,-4){1}}\put(76,3){\line(1,-1){4}} \put(86,13.5){\line(-1,-3){2.3}}\put(87,13.5){\line(2,-3){4.5}} \put(81,4){\tiny{$(1,2)$}}\put(89,4){\tiny{$(2,1)$}} \put(84,3){\line(-1,-2){2}}\put(84,3){\line(1,-2){2}}\put(84,3){\line(1,-1){4}} \put(92,3){\line(-1,-2){2}}\put(92,3){\line(1,-2){2}}\put(92,3){\line(1,-1){4}} \put(108,13.5){\line(-1,-1){7}}\put(108,13.5){\line(0.2,-1){1.4}}\put(108,13.5){\line(1.5,-1){10}}\put(108,13.5){\line(3,-1){21}} \put(98,4){\tiny{$(2,2)$}}\put(106,4){\tiny{$(1,2)$}}\put(115,4){\tiny{$(0,2)$}}\put(126,4){\tiny{$(4,1)$}} \put(101,3){\line(-1,-4){1}}\put(101,3){\line(-1,-1){4}}\put(101,3){\line(1,-4){1}}\put(101,3){\line(1,-1){4}} \put(109,3){\line(-1,-2){2}}\put(109,3){\line(1,-2){2}}\put(109,3){\line(1,-1){4}} \put(118,3){\line(1,-1){4}}\put(118,3){\line(-1,-2){2}} \put(129,3){\line(-1,-1){4}}\put(129,3){\line(1,-2){2}}\put(129,3){\line(-1,-2){2}}\put(129,3){\line(1,-1){4}}\put(129,3){\line(2,-1){8}} \end{picture} \caption{First few levels of the generating tree for $\cup_{n\geq1}\operatorname{{\bf I}}_n(\geq,\geq,-)$.} \label{tree} \end{figure} Define the formal power series $E(t;u,v)=E(u,v):=\sum_{p\geq0,q\geq1}E_{p,q}(t)u^pv^q$, where $E_{p,q}(t)$ is the size 
generating function for the $(\geq,\geq,-)$-inversion sequences with parameters $(p,q)$. We can turn this generating tree into a functional equation as follows. \begin{proposition}We have the following functional equation for $E(u,v)$: \begin{equation}\label{eq:cross} \biggl(1+\frac{tv}{1-u}+\frac{tv}{1-v/u}\biggr)E(u,v)=tuv+\frac{tv}{1-u}E(1,v)+\frac{tv}{1-v/u}E(u,u). \end{equation} \end{proposition} \begin{proof} In the generating tree for $\cup_{n\geq1}\operatorname{{\bf I}}_n(\geq,\geq,-)$, each vertex other than the root $(1,1)$ can be generated by a unique parent. Thus, we have \begin{align*} E(u,v)&=tuv+t\sum_{p\geq0,q\geq1}E_{p,q}(t)\biggl(v^{q+1}\sum_{i=0}^{p-1}u^i+\sum_{i=0}^{q-1}u^{p+1+i}v^{q-i}\biggr)\\ &=tuv+t\sum_{p\geq0,q\geq1}E_{p,q}(t)\biggl(\frac{1-u^p}{1-u}v^{q+1}+u^{p+1}v^q\frac{1-(u/v)^q}{1-u/v}\biggr)\\ &=tuv+\frac{tv}{1-u}(E(1,v)-E(u,v))+\frac{tuv}{v-u}(E(u,v)-E(u,u)), \end{align*} which is equivalent to~\eqref{eq:cross}. \end{proof} \begin{remark} It should be noted that the kernel $1+\frac{tv}{1-u}+\frac{tv}{1-v/u}$ of~\eqref{eq:cross} is exactly the same as that of the functional equation for {\em Baxter inversion sequences} in~\cite[Proposition~4.4.]{kl}. \end{remark} \section{Proof of Conjecture~\ref{conj:yan}} \label{sec:proof} In this section, we will prove Conjecture~\ref{conj:yan} by solving~\eqref{eq:cross}. It is convenient to set $v=uw$ in~\eqref{eq:cross}. The equation then becomes $$ \biggl(1+\frac{tuw}{1-u}+\frac{tuw}{1-w}\biggr)E(u,wu)=tu^2w+\frac{tuw}{1-u}E(1,uw)+\frac{tuw}{1-w}E(u,u). $$ Further setting $u=1+x$ and $w=1+y$ above we get \begin{multline}\label{eq2:cross} \frac{xy-t(1+x)(1+y)(x+y)}{t(1+x)(1+y)}E(1+x,(1+x)(1+y))\\ =xy(1+x)-yE(1,(1+x)(1+y))-\widetilde{E}(x), \end{multline} where $\widetilde{E}(x)=xE(1+x,1+x)$. We are going to apply the {\em obstinate kernel method} developed by Bousquet-M\'elou~\cite{bo} to this equation. 
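Before doing so, we record a quick numerical check of the rewriting rule behind~\eqref{eq:cross} (a Python sketch; the function names are ours, and the children of a label $(p,q)$ are read off from the two sums in the proof above):

```python
from collections import Counter

def children(p, q):
    # The two cases of the rewriting rule for a vertex labelled (p, q):
    # (i, q+1) for 0 <= i <= p-1, and (p+1+i, q-i) for 0 <= i <= q-1.
    return [(i, q + 1) for i in range(p)] + [(p + 1 + i, q - i) for i in range(q)]

def level_sizes(n):
    # Sizes of levels 1..n of the generating tree, i.e. |I_k(>=,>=,-)| for k <= n.
    level, sizes = Counter({(1, 1): 1}), []
    for _ in range(n):
        sizes.append(sum(level.values()))
        nxt = Counter()
        for (p, q), mult in level.items():
            for child in children(p, q):
                nxt[child] += mult
        level = nxt
    return sizes

print(level_sizes(5))  # [1, 2, 5, 15, 51]
```

Iterating the rule reproduces the level sizes $1,2,5,15,51,\ldots$ of the generating tree in Fig.~\ref{tree}.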
The numerator $$ K(x,y)=xy-t(1+x)(1+y)(x+y) $$ of the coefficient of $E(1+x,(1+x)(1+y))$ in~\eqref{eq2:cross} is called the {\em kernel} of~\eqref{eq2:cross}. Observe that $K(x,y)$ is also the kernel of the functional equation in~\cite[Corollary~3]{bo} for {\em Baxter permutations}. It was shown in~\cite[Figure~3]{bo} that the three pairs $(x,Y), (\bar{x}Y,Y)$ and $(\bar{x}Y,\bar{x})$ are roots of the kernel $K(x,y)$ and can be legally substituted for $(x,y)$ in~\eqref{eq2:cross}, where $$ \bar{x}:=\frac{1}{x}\quad\text{and}\quad Y=\frac{1-t(1+x)(1+\bar{x})-\sqrt{1-2t(1+x)(1+\bar{x})-t^2(1-x^2)(1-\bar{x}^2)}}{2t(1+\bar{x})}. $$ Note that the kernel $K(x,y)$ is symmetric in $x$ and $y$ and so the dual pairs $(Y,x), (Y,\bar{x}Y)$ and $(\bar{x},\bar{x}Y)$ are also roots of $K(x,y)$ which can be legally substituted for $(x,y)$ in~\eqref{eq2:cross}. Substituting the pairs $(x,Y)$ and $(Y,x)$ for $(x,y)$ in~\eqref{eq2:cross} yields $$ \begin{cases} \,\,xY(1+x)-YE(1,(1+x)(1+Y))-\widetilde{E}(x)=0, \\ \,\,Yx(1+Y)-xE(1,(1+x)(1+Y))-\widetilde{E}(Y)=0. \end{cases} $$ Eliminating $E(1,(1+x)(1+Y))$ we get \begin{equation}\label{eq:1} Y\widetilde{E}(Y)-x\widetilde{E}(x)=xY(Y(1+Y)-x(1+x)). \end{equation} Similarly, substitute $(\bar{x}Y,Y),(Y,\bar{x}Y)$ and $(\bar{x}Y,\bar{x}),(\bar{x},\bar{x}Y)$ into~\eqref{eq2:cross} and after some computation we get two equations, which together with~\eqref{eq:1} give the system of equations: $$ \begin{cases} \,\,Y\widetilde{E}(Y)-x\widetilde{E}(x)=xY(Y(1+Y)-x(1+x)), \\ \,\,Y\widetilde{E}(Y)-\bar{x}Y\widetilde{E}(\bar{x}Y)=\bar{x}Y^2(Y(1+Y)-\bar{x}Y(1+\bar{x}Y)), \\ \,\,\bar{x}\widetilde{E}(\bar{x})-\bar{x}Y\widetilde{E}(\bar{x}Y)=\bar{x}^2Y(\bar{x}(1+\bar{x})-\bar{x}Y(1+\bar{x}Y)). 
\end{cases}
$$
By eliminating $\widetilde{E}(Y)$ and $\widetilde{E}(\bar{x}Y)$, we get a relation between $\widetilde{E}(x)$ and $\widetilde{E}(\bar{x})$:
\begin{equation}\label{eq:main}
\bar{x}\widetilde{E}(x)-\bar{x}^3\widetilde{E}(\bar{x})=R(x,Y),
\end{equation}
where
\begin{equation}\label{def:R}
R(x,Y)=Y(x+1-\bar{x}^{5}-\bar{x}^{6})+Y^2(\bar{x}^{5}-\bar{x})+Y^3(\bar{x}^{3}+\bar{x}^{6}-\bar{x}-\bar{x}^{4})+Y^4(\bar{x}^{3}-\bar{x}^{5})
\end{equation}
is a formal power series in $t$. Since, in the left-hand side of~\eqref{eq:main},
\begin{itemize}
\item $\bar{x}\widetilde{E}(x)=E(1+x,1+x)$ is a power series in $t$ with polynomial coefficients in $x$,
\item $\bar{x}^3\widetilde{E}(\bar{x})$ is a power series in $t$ with polynomial coefficients in $\bar{x}$ whose lowest power of $\bar{x}$ is $4$,
\end{itemize}
we have the following result.
\begin{theorem} Let $Y=Y(t;x)$ be the unique formal power series in $t$ such that
\begin{equation}\label{eq:Y}
Y=t(1+\bar{x})(1+Y)(x+Y).
\end{equation}
The series solution $E(u,v)$ of~\eqref{eq:cross} satisfies
\begin{equation}\label{eq:ER}
E(1+x,1+x)=\mathop{\mathrm{PT}}\limits_{x} R(x,Y),
\end{equation}
where $R(x,Y)$ is defined in~\eqref{def:R} and the operator $\mathop{\mathrm{PT}}\limits_{x}$ extracts non-negative powers of $x$ in series of $\mathbb{Q}[x,\bar{x}][[t]]$.
\end{theorem}
Now we can apply the {\em Lagrange inversion formula} and {\em Zeilberger's algorithm} to finish the proof of Conjecture~\ref{conj:yan}.
\begin{proof}[{\bf Proof of Conjecture~\ref{conj:yan}}]
Let $E(n)=|\operatorname{{\bf I}}_n(\geq,\geq,-)|$. It follows from~\eqref{eq:ER} that
\begin{multline}\label{eq:En}
E(n)=[x^{-1}t^n]Y+[x^0t^n]Y-[x^5t^n]Y-[x^6t^n]Y+[x^5t^n]Y^2-[x^1t^n]Y^2\\
+[x^3t^n]Y^3+[x^6t^n]Y^3-[x^1t^n]Y^3-[x^4t^n]Y^3+[x^3t^n]Y^4-[x^5t^n]Y^4.
\end{multline}
Applying the Lagrange inversion formula~\cite[Theorem~5.4.2]{st2} to~\eqref{eq:Y} gives:
\begin{align*}
[x^mt^n]Y^k&=\frac{k}{n}[x^mt^{n-k}]((x+t)(1+t)(1+\bar{x}))^n\\
&=\frac{k}{n}\sum_{i=0}^{n-k}{n\choose i}{n\choose k+i}{n\choose m+i}
\end{align*}
for all $k,m\in\mathbb{Z}$ and $n\in\mathbb{N}$. Substituting this into~\eqref{eq:En} we can express $E(n)$ as $E(n)=\sum_{i=0}^{n-1}E(n,i)$, where
\begin{multline*}
E(n,i)=\frac{1}{n}{n\choose i}\left\{{n\choose i+1}\left[{n+1\choose i}-{n+1\choose i+6}\right]+2{n\choose i+2}\left[{n\choose i+5}-{n\choose i+1}\right]\right.\\
\left.+3{n\choose i+3}\left[{n\choose i+3}+{n\choose i+6}-{n\choose i+1}-{n\choose i+4}\right]+4{n\choose i+4}\left[{n\choose i+3}-{n\choose i+5}\right]\right\}.
\end{multline*}
Applying Zeilberger's algorithm~\cite{pwz} (or creative telescoping) with $E(n,i)$ above as input, the Maple command {\tt ZeilbergerRecurrence(E(n,i),n,i,E,0..n-1)} gives the P-recursion: for $n\geq1$,
\begin{equation}\label{recu:En}
a_nE(n)+b_nE(n+1)+c_nE(n+2)-d_nE(n+3)=0,
\end{equation}
where
\begin{align*}
a_n&=8(3n+13)(n+3)(n+2)(n+1),\\
b_n&=3(n+3)(n+2)(15n^2+153n+376),\\
c_n&=6(n+7)(3n^3+38n^2+156n+212),\\
d_n&=(3n+10)(n+9)(n+8)(n+7).
\end{align*}
The initial conditions are $E(1)=1, E(2)=2$ and $E(3)=5$. On the other hand, Bousquet-M\'elou and Xin~\cite[Proposition~2]{bx} showed that the number $E_3(n)$ satisfies the P-recursion: $E_3(0)=E_3(1)=1$, and for $n\geq0$,
\begin{equation}\label{recu:E3}
8(n+3)(n+1)E_3(n)+(7n^2+53n+88)E_3(n+1)-(n+8)(n+7)E_3(n+2)=0.
\end{equation}
It is then routine to check that the sequence defined by the above three-term recursion also satisfies the four-term recursion in~\eqref{recu:En} obtained via Zeilberger's algorithm. More precisely, applying to~\eqref{recu:E3} the operator
$$
(3n+13)(n+2)+(3n+10)(n+7)N,
$$
where $N$ is the shift operator replacing $n$ by $n+1$, yields a four-term recursion for $E_3(n)$ which is exactly the same as that for $E(n)$ in~\eqref{recu:En}.
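As an independent sanity check (a Python sketch, not part of the formal argument; all names are ours), one can evaluate $E(n)$ directly from the binomial double sum above, compute $E_3(n)$ from the three-term recursion, and verify both the agreement of the two sequences and the four-term recursion:

```python
from math import comb

def E(n):
    # n * E(n) = sum_i n * E(n, i); sum first, then divide exactly by n
    total = 0
    for i in range(n):
        total += comb(n, i) * (
            comb(n, i + 1) * (comb(n + 1, i) - comb(n + 1, i + 6))
            + 2 * comb(n, i + 2) * (comb(n, i + 5) - comb(n, i + 1))
            + 3 * comb(n, i + 3) * (comb(n, i + 3) + comb(n, i + 6)
                                    - comb(n, i + 1) - comb(n, i + 4))
            + 4 * comb(n, i + 4) * (comb(n, i + 3) - comb(n, i + 5)))
    return total // n

def E3_list(N):
    # E_3(0), ..., E_3(N) from the three-term recursion of Bousquet-Melou and Xin
    e = [1, 1]
    for n in range(N - 1):
        num = 8 * (n + 3) * (n + 1) * e[n] + (7 * n * n + 53 * n + 88) * e[n + 1]
        e.append(num // ((n + 8) * (n + 7)))
    return e[:N + 1]

e3 = E3_list(12)
assert all(E(n) == e3[n] for n in range(1, 12))
for n in range(1, 9):   # the four-term recursion for E(n)
    a = 8 * (3 * n + 13) * (n + 3) * (n + 2) * (n + 1)
    b = 3 * (n + 3) * (n + 2) * (15 * n * n + 153 * n + 376)
    c = 6 * (n + 7) * (3 * n ** 3 + 38 * n * n + 156 * n + 212)
    d = (3 * n + 10) * (n + 9) * (n + 8) * (n + 7)
    assert a * E(n) + b * E(n + 1) + c * E(n + 2) == d * E(n + 3)
print([E(n) for n in range(1, 8)])  # [1, 2, 5, 15, 51, 191, 772]
```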
This completes the proof of Conjecture~\ref{conj:yan}, since both sequences share the same initial values.
\end{proof}
Since our proof of Conjecture~\ref{conj:yan} uses formal power series heavily, it is natural to ask for a bijective proof.
\section{A new approach to Yan's result and a conjecture}
\label{sec:yan}
Let $x=x_1x_2\cdots x_{n+1}$ be a $210$-avoiding ascent sequence of length $n+1$. It is apparent that the ascent sequence $x$ can be written uniquely as $\tilde{x}_1^{c_1} \tilde{x}_2^{c_2}\cdots \tilde{x}_{i+1}^{c_{i+1}}$, where $\tilde{x}:=\tilde{x}_1\tilde{x}_2\cdots\tilde{x}_{i+1}$ is a $210$-avoiding primitive ascent sequence of length $i+1$ and $c_1+c_2+\cdots+c_{i+1}=n+1$ is an $(i+1)$-composition of $n+1$. For instance, the ascent sequence $0110212224\in\mathcal{A}_{10}(210)$ can be written as $0^11^20^12^11^12^34^1$, so that $\tilde{x}=0102124\in\mathcal{PA}_7(210)$ and the corresponding $7$-composition is $1+2+1+1+1+3+1=10$. Since the number of $(i+1)$-compositions of $n+1$ is ${n\choose i}$, the above decomposition gives the identity:
$$
|\mathcal{A}_{n+1}(210)|=\sum_{i=0}^n{n\choose i}|\mathcal{PA}_{i+1}(210)|=\sum_{i=0}^n{n\choose i}E_3(i),
$$
where the second equality follows from $|\mathcal{PA}_{i+1}(210)|=E_3(i)$ (by Conjecture~\ref{conj:yan}). Therefore, Theorem~\ref{thm:yan} is equivalent to identity~\eqref{eq:3cros}. In the following, we will show how to deduce~\eqref{eq:3cros} from the results in~\cite{bx}, which provides a new approach to Theorem~\ref{thm:yan}.
Let $\mathcal{C}(t)=\sum_{n\geq1}\sum_{i=0}^{n-1}{n-1\choose i}E_3(i)t^n$ and $\mathcal{E}(t)=\sum_{n\geq0}E_3(n)t^n$. It then follows that
\begin{equation}\label{eq:CE}
\mathcal{C}(t)=\sum_{i\geq0}E_3(i)t^{i+1}\sum_{m\geq0}{m+i\choose i}t^{m}=\sum_{i\geq0}E_3(i)\biggl(\frac{t}{1-t}\biggr)^{i+1}=z\mathcal{E}(z),
\end{equation}
where $z=\frac{t}{1-t}$.
As was shown in~\cite[Proposition~2]{bx}, the generating function $\mathcal{E}(t)$ satisfies:
$$
t^2(1+t)(1-8t)\frac{d^2}{dt^2}\mathcal{E}(t)+2t(6-23t-20t^2)\frac{d}{dt}\mathcal{E}(t)+6(5-7t-4t^2)\mathcal{E}(t)=30.
$$
Thus, if we denote $\mathcal{F}(t)=t\mathcal{E}(t)$, then
\begin{equation}\label{eq:FF}
t^2(1+t)(1-8t)\frac{d^2}{dt^2}\mathcal{F}(t)+2t(5-16t-12t^2)\frac{d}{dt}\mathcal{F}(t)+(20-10t)\mathcal{F}(t)=30t.
\end{equation}
By~\eqref{eq:CE}, we have $\mathcal{F}(t)=\mathcal{C}(x)$ with $x=\frac{t}{1+t}$. Substituting $\mathcal{F}(t)=\mathcal{C}(x)$ into~\eqref{eq:FF} and using the chain rule, we get
$$
x^2(1-9x)(1-x)\frac{d^2}{dx^2}\mathcal{C}(x)+2x(5-27x+18x^2)\frac{d}{dx}\mathcal{C}(x)+10(2-3x)\mathcal{C}(x)=30x
$$
after some manipulation. Comparing with~\cite[Proposition~1]{bx}, we conclude that $\mathcal{C}(t)=\sum_{n\geq1}C_3(n)t^n$, which is equivalent to~\eqref{eq:3cros}, as desired.
\subsection{Extension of~\eqref{eq:3cros} to $k$-noncrossing partitions: a conjecture}
\begin{figure}
\begin{center}
\begin{tikzpicture}
\SetVertexNormal
\SetGraphUnit{1.3}
\tikzset{VertexStyle/.append style={inner sep=0pt,minimum size=5mm}}
\Vertex{1}
\EA(1){2}\EA(2){3}\EA(3){4}\EA(4){5}\EA(5){6}\EA(6){7}\EA(7){8}
\tikzset{EdgeStyle/.append style = {bend left = 60}}
\textcolor{red}{\Edge(1)(3)\Edge(2)(5)}
\Edge(5)(6) \Edge(3)(7)\Edge(6)(8)
\end{tikzpicture}
\end{center}
\caption{The arc diagram of $\{\{1,3,7\},\{2,5,6,8\},\{4\}\}$.\label{arc}}
\end{figure}
Any partition $P$ of $[n]$ can be identified with its {\em arc diagram}, defined as follows:
\begin{itemize}
\item put the nodes $1,2,\ldots,n$ on a horizontal line in increasing order;
\item then draw an arc from $i$ to $j$, $i<j$, whenever $i$ and $j$ belong to the same block of $P$ and this block contains no $l$ satisfying $i<l<j$.
\end{itemize}
See Fig.~\ref{arc} for the arc diagram of $\{\{1,3,7\},\{2,5,6,8\},\{4\}\}$.
For any $k\geq2$, a {\em$k$-crossing} (resp.~an {\em enhanced $k$-crossing}) of $P$ is a $k$-subset $(i_1,j_1), (i_2,j_2),\ldots, (i_k,j_k)$ of arcs in the arc diagram of $P$ such that
$$
i_1<i_2<\cdots<i_k<j_1<j_2<\cdots<j_k\,\,\text{(resp.~$i_1<i_2<\cdots<\textcolor{blue}{i_k\leq j_1}<j_2<\cdots<j_k$)}.
$$
For instance, the partition in Fig.~\ref{arc} has no $3$-crossing but contains one enhanced $3$-crossing, which is formed by the arcs $(1,3), (2,5), (3,7)$. Let $C_k(n)$ (resp.~$E_k(n)$) be the number of partitions of $[n]$ avoiding classical (resp.~enhanced) $k$-crossings. It is known that $C_2(n)=C_n:=\frac{1}{n+1}{2n\choose n}$, the $n$th {\em Catalan number}, and $E_2(n)=\sum\limits_{i=0}^{\lfloor n/2\rfloor}{n\choose 2i}C_i$ is the $n$th {\em Motzkin number}~\cite[Exercise~6.38]{st2}. The Catalan numbers are also related to Motzkin numbers by (cf.~\cite{deng})
\begin{equation}
\label{ca:mz}
C_2(n+1)=\sum_{i=0}^n{n\choose i}E_2(i).
\end{equation}
In other words, the binomial transform of the Motzkin numbers is the sequence of Catalan numbers. In view of identities~\eqref{ca:mz} and~\eqref{eq:3cros}, the following conjecture is tempting.
\begin{conjecture}\label{conj:lin}
Fix $k\geq2$. The following identity holds:
\begin{equation}\label{eq:kimlin}
C_k(n+1)=\sum_{i=0}^n{n\choose i}E_k(i).
\end{equation}
\end{conjecture}
It would be interesting to see if the bijections of Chen et al.~\cite{chen} or Krattenthaler~\cite{kr0} could help to prove this conjecture. If this conjecture is true, then $\mathcal{C}_k(t)$ is $D$-finite if and only if $\mathcal{E}_k(t)$ is (see~\cite[Theorem~6.4.10]{st2}), where
$$
\mathcal{C}_k(t)=\sum_{n\geq1}C_k(n)t^n\quad\text{and}\quad \mathcal{E}_k(t)=\sum_{n\geq1}E_k(n)t^n.
$$
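For $k=2$, identity~\eqref{eq:kimlin} reduces to~\eqref{ca:mz}, which is easy to confirm numerically using the formulas for $C_2(n)$ and $E_2(n)$ quoted above (a minimal Python sketch; the function names are ours):

```python
from math import comb

def C2(n):
    # C_2(n) = n-th Catalan number
    return comb(2 * n, n) // (n + 1)

def E2(n):
    # E_2(n) = n-th Motzkin number
    return sum(comb(n, 2 * i) * C2(i) for i in range(n // 2 + 1))

# C_2(n+1) = sum_i binom(n,i) E_2(i): the k = 2 case of the conjecture
for n in range(12):
    assert C2(n + 1) == sum(comb(n, i) * E2(i) for i in range(n + 1))
print([E2(n) for n in range(7)])  # [1, 1, 2, 4, 9, 21, 51]
```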
\section{Adams-Watters' restricted inversion sequences}%
\label{sec:aw}
An inversion sequence $e=(e_1,e_2,\ldots,e_n)\in\operatorname{{\bf I}}_n$ is called an {\em $\mathcal{AW}$-inversion sequence} (here $\mathcal{AW}$ stands for Adams-Watters) if for every $2<i\leq n$, we have $e_i\leq \operatorname{max}\{e_{i-2},e_{i-1}\}+1$. Let $\operatorname{{\bf I}}_n(\mathcal{AW})$ denote the set of $\mathcal{AW}$-inversion sequences of length $n$. For example, we have
$$
\operatorname{{\bf I}}_3(\mathcal{AW})=\{(0,0,0),(0,0,1),(0,1,0),(0,1,1),(0,1,2)\}.
$$
The $\mathcal{AW}$-inversion sequences were introduced by Adams-Watters~\cite{aw} (see also~\href{https://oeis.org/A108307}{A108307} in OEIS), who also conjectured that $|\operatorname{{\bf I}}_n(\mathcal{AW})|=E_3(n)$. Unfortunately, this is not true, as
$$
\{|\operatorname{{\bf I}}_n(\mathcal{AW})|\}_{n\geq1}=\{1,2,5,15,51,191,773,3336,\ldots\}
$$
and this sequence now appears as \href{https://oeis.org/A275605}{A275605} in OEIS. We will show in the following how to get a functional equation for the generating function of a two-variable extension of this sequence.
In order to get a rewriting rule for $\mathcal{AW}$-inversion sequences, we introduce the {\em parameters} $(p,q)$ for each $e\in\operatorname{{\bf I}}_n(\mathcal{AW})$ by
$$
p=e_n+1 \quad\text{and}\quad q=\operatorname{max}\{e_{n-1},e_{n}\}+1-e_n.
$$
For example, the parameters of $(0,1,2,3,2)\in\operatorname{{\bf I}}_5(\mathcal{AW})$ are $(3,2)$. The following result can be checked routinely.
\begin{lemma}\label{lem:AW}
Let $e\in\operatorname{{\bf I}}_n(\mathcal{AW})$ be an inversion sequence with parameters $(p,q)$. Exactly $p+q$ inversion sequences in $\operatorname{{\bf I}}_{n+1}(\mathcal{AW})$ become $e$ when their last entries are removed, and their parameters are, respectively:
\begin{align*}
&(1,p), (2,p-1),\ldots, (p,1)\\
&(p+1,1), (p+2,1),\ldots, (p+q,1).
\end{align*}
The order in which the parameters are listed corresponds to the inversion sequences with last entries from $0$ to $\operatorname{max}\{e_{n-1},e_{n}\}+1$.
\end{lemma}
Define the formal power series $F(t;u,v)=F(u,v):=\sum_{p,q\geq1}F_{p,q}(t)u^pv^q$, where $F_{p,q}(t)$ is the size generating function for the $\mathcal{AW}$-inversion sequences with parameters $(p,q)$. We can translate Lemma~\ref{lem:AW} into the following functional equation.
\begin{proposition}We have the following functional equation for $F(u,v)$:
\begin{equation}\label{eq:AW}
F(u,v)=tuv+\frac{tuv}{v-u}(F(v,1)-F(u,1))+\frac{tuv}{1-u}(F(u,1)-F(u,u)).
\end{equation}
Equivalently, if we write $F(u,v)=\sum_{n\geq1}f_n(u,v)t^n$, then $f_1(u,v)=uv$ and for $n\geq2$,
\begin{equation}\label{rec:AW}
f_n(u,v)=\frac{uv}{v-u}(f_{n-1}(v,1)-f_{n-1}(u,1))+\frac{uv}{1-u}(f_{n-1}(u,1)-f_{n-1}(u,u)).
\end{equation}
\end{proposition}
Although we have not been able to solve~\eqref{eq:AW}, we note that recursion~\eqref{rec:AW} can be applied to compute $|\operatorname{{\bf I}}_n(\mathcal{AW})|=f_n(1,1)$ easily.
\section{Final remarks}
Fix a positive integer $k$. The definition of $\mathcal{AW}$-inversion sequences can be generalized to {\em $k$-$\mathcal{AW}$-inversion sequences} by requiring
$$
e_i\leq \operatorname{max}\{e_{i-1},e_{i-2},\ldots,e_{i-k}\}+1
$$
for an inversion sequence $e=(e_1,e_2,\ldots,e_n)$ and every $1\leq i\leq n$, where we take the convention $e_m=0$ whenever $m$ is nonpositive. It is apparent that $k$-$\mathcal{AW}$-inversion sequences of length $n$ are enumerated by
\begin{itemize}
\item the $n$th Catalan number $C_n$, when $k=1$;
\item the $n$th Bell number $B_n$, when $k=n-1$. Note that in this case, the $k$-$\mathcal{AW}$-inversion sequences are known as {\em restricted growth functions}, which are used to encode set partitions.
\end{itemize}
The $2$-$\mathcal{AW}$-inversion sequences are just the $\mathcal{AW}$-inversion sequences we have investigated here.
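The counts of $\mathcal{AW}$-inversion sequences quoted in the previous section are easily confirmed by direct enumeration (a Python sketch; the function name is ours, and recall that an inversion sequence satisfies $0\leq e_i\leq i-1$):

```python
def count_AW(n):
    # Depth-first enumeration of AW-inversion sequences of length n;
    # only the last two entries are needed to extend a sequence.
    def extend(i, e1, e2):          # e1 = e_{i-1}, e2 = e_{i-2}
        if i > n:
            return 1
        bound = i - 1 if i <= 2 else min(i - 1, max(e1, e2) + 1)
        return sum(extend(i + 1, v, e1) for v in range(bound + 1))
    return extend(1, 0, 0)

print([count_AW(n) for n in range(1, 8)])  # [1, 2, 5, 15, 51, 191, 773]
```

In particular, $|\operatorname{{\bf I}}_7(\mathcal{AW})|=773$, while the recursion of Bousquet-M\'elou and Xin gives $E_3(7)=772$; this is where the conjectured equality first fails.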
But even for the enumeration of this special case, we have obtained no explicit formula.
The longest decreasing and increasing subsequences and their variants in permutations have already been studied from various aspects; see the interesting survey written by Stanley~\cite{st}. We expect similar studies on inversion sequences and ascent sequences to be fruitful. In particular, our results suggest that inversion sequences with no weakly $k$-decreasing subsequence and ascent sequences, primitive or ordinary, avoiding strictly $k$-decreasing subsequences for $k>3$ may be worth further investigation.
\subsection*{Recent developments}
Since a preliminary version of this paper was posted on arXiv, there have been two interesting developments. Via $01$-fillings of triangular shapes, Yan~\cite{yan2} constructed a bijection between enhanced $3$-nonnesting partitions and $(\geq,\geq,-)$-avoiding inversion sequences, thereby providing a bijective proof of Conjecture~\ref{conj:yan}. Very recently, Kim and the author~\cite{kl2} obtained two different combinatorial proofs of Conjecture~\ref{conj:lin}, one of which even proves a refinement of~\eqref{eq:kimlin}, taking the number of blocks into account.
\section*{Acknowledgement}
The author is grateful to Shaoshi Chen for his help on Zeilberger's algorithm. He would also like to thank the referees for their corrections and suggestions to improve the presentation. This work was done while the author was a Postdoc at CAMP, National Institute for Mathematical Sciences. The author's research was supported by the National Science Foundation of China grant 11501244 and the Austrian Science Foundation FWF, START grant Y463 and SFB grant F50.
\section{The case of a general finite-index subgroup of genus zero} \label{app:mathy} The purpose of this appendix is to give a proof of Theorem \ref{thm:main} for a general finite-index subgroup of genus zero. The difference to the case considered in section \ref{sec:mero_sec} is that now there might be elliptic points or irregular cusps, whose local analytic structure is more complicated. Throughout this appendix, we keep the notation of section \ref{sec:mero_sec}. \subsection{The case \texorpdfstring{$k\leq 1$}{}} \begin{proposition} \label{prop:kleq1} For $k\leq 1$, we have \[ \mathcal{Q}\mathcal{M}_k(\Gamma,R_S)=\delta(\mathcal{Q}\mathcal{M}_{k-2}(\Gamma,R_S))\oplus\mathcal{M}_k(\Gamma,R_S). \] In other words, the following statements are true. \begin{itemize} \item[(i)] We have \[ \delta(\mathcal{Q}\mathcal{M}_{k-2}(\Gamma,R_S))\cap \mathcal{M}_k(\Gamma,R_S)=\{0\}, \] as subspaces of $\mathcal{Q}\mathcal{M}_k(\Gamma,R_S)$. \item[(ii)] We have \[ \mathcal{Q}\mathcal{M}_k(\Gamma,R_S)=\delta(\mathcal{Q}\mathcal{M}_{k-2}(\Gamma,R_S))+\mathcal{M}_k(\Gamma,R_S). \] \end{itemize} \end{proposition} \begin{proof} To prove (i), we need to show that, if $f\in \delta(\mathcal{Q}\mathcal{M}_{k-2}(\Gamma,R_S))\cap \mathcal{M}_k(\Gamma,R_S)$, then $f=0$. The proof is essentially the same as in ref.~\cite[Theorem 6.1]{matthes2021iterated}, so we will omit some details. Let $g\in \mathcal{Q}\mathcal{M}_{k-2}(\Gamma,R_S)$ be such that $\delta(g)=f$, and denote by $g_0,\ldots,g_p$ the coefficient functions of $g$. The coefficient functions of $f$ are then given by \[ f_r=\delta(g_r)+\frac{k-2-r+1}{12}g_{r-1}, \qquad 0\leq r\leq p+1, \] with the convention that $g_{-1}=g_{p+1}\equiv 0$. On the other hand, since $f$ is modular, we have $f_r=0$ for $1\leq r\leq p+1$. In particular, for $r=p+1$, we have $\frac{k-2-p}{12}g_p=0$, hence $g_p=0$ (here, we use that $k\leq 1$). 
By recursion on $p$, the same argument yields that $g_r=0$ for all $0\leq r\leq p$, so that $g=0$, by uniqueness of the coefficient functions. For the proof of (ii), we need to show that every $f\in \mathcal{Q}\mathcal{M}_k(\Gamma,R_S)$ can be written as $f=\delta(g)+h$, for some $g\in \mathcal{Q}\mathcal{M}_{k-2}(\Gamma,R_S)$ and some $h\in \mathcal{M}_k(\Gamma,R_S)$. Let $f_0,\ldots,f_p$ denote the coefficient functions of $f$ and assume without loss of generality that $f_p\neq 0$. We can write (cf.~ref.~\cite[Theorem 4.1]{Royer}) \[ f=\sum_{r=0}^p \overline{f}_r\cdot E_2^r, \] for uniquely determined $\overline{f}_r\in \mathcal{M}_{k-2r}(\Gamma,R_S)$, where $E_2$ denotes the normalized, holomorphic Eisenstein series of weight two, and the integer $p$ is, by definition, the \emph{depth} of $f$. Moreover, we have $f_p=\overline{f}_p$. We now prove the desired statement by induction on $p$, the case $p=0$ being trivial (take $g=0$ and $h=f=\overline{f}_0$). In the general case, a direct computation shows that the meromorphic quasi-modular form \[ \overline{f}_p\cdot E_2^p-\frac{12}{k-p-1}\delta(\overline{f}_p\cdot E_2^{p-1}) \] has depth $\leq p-1$, and we conclude by the induction hypothesis. \end{proof} \subsection{The case \texorpdfstring{$k\geq 2$}{}} \begin{proposition} For $k\geq 2$, we have \[ (\delta(\mathcal{Q}\mathcal{M}_{k-2}(\Gamma,R_S))+\mathcal{M}_k(\Gamma,R_S))\oplus \mathcal{M}_{2-k}(\Gamma,R_S)E_2^{k-1}=\mathcal{Q}\mathcal{M}_k(\Gamma,R_S). \] \end{proposition} \begin{proof} We begin by proving that \[ (\delta(\mathcal{Q}\mathcal{M}_{k-2}(\Gamma,R_S))+\mathcal{M}_k(\Gamma,R_S))\cap \mathcal{M}_{2-k}(\Gamma,R_S)E_2^{k-1}=\{0\}. \] Let $g\in \mathcal{Q}\mathcal{M}_{k-2}(\Gamma,R_S)$ with coefficient functions $g_0,\ldots,g_p$ and assume that $f+\delta(g)=h\cdot E_2^{k-1}$, for some $f\in \mathcal{M}_k(\Gamma,R_S)$ and $h\in \mathcal{M}_{2-k}(\Gamma,R_S)$. 
Then \[ \delta(g_r)+\frac{k-2-r+1}{12}g_{r-1}=0, \qquad \mbox{for all }r\geq k, \] which shows that $g$ necessarily has depth $\leq k-2$. Moreover, since $g$ has weight $k-2$, one can show that $\delta(g)$ has depth $\leq k-2$. On the other hand, $h\cdot E_2^{k-1}$ has depth $k-1$, unless $h=0$, so that the equality $f+\delta(g)=h\cdot E_2^{k-1}$ yields that $h=0$. Therefore, also $g=0$, as was to be shown. We next show that \[ \delta(\mathcal{Q}\mathcal{M}_{k-2}(\Gamma,R_S))+\mathcal{M}_k(\Gamma,R_S)+\mathcal{M}_{2-k}(\Gamma,R_S)E_2^{k-1}=\mathcal{Q}\mathcal{M}_k(\Gamma,R_S). \] Let $f\in \mathcal{Q}\mathcal{M}_k(\Gamma,R_S)$ with coefficient functions $f_0,\ldots,f_p$, such that $f_p\neq 0$, and write $f=\sum_{r=0}^p\overline{f}_r\cdot E_2^r$, for some $\overline{f}_r\in \mathcal{M}_{k-2r}$. As remarked above, we have $f_p=\overline{f}_p$. If $p\neq 0,k-1$, then the same argument as in the proof of Proposition \ref{prop:kleq1} shows that $\overline{f}_p\cdot E_2^p -\frac{12}{k-p-1}\delta(\overline{f}_p\cdot E_2^{p-1})$ has depth $\leq p-1$. The desired statement now follows by descending induction on $r$. \end{proof} \begin{proposition} For $k\geq 2$, we have \[ \delta(\mathcal{Q}\mathcal{M}_{k-2}(\Gamma,R_S))\cap \mathcal{M}_k(\Gamma,R_S)=\delta^{k-1}(\mathcal{M}_{2-k}(\Gamma,R_S)). \] \end{proposition} \begin{proof} Again, the proof is essentially the same as in ref.~\cite[Theorem 6.1]{matthes2021iterated}. Let $g\in \mathcal{Q}\mathcal{M}_{k-2}(\Gamma,R_S)$ be such that $\delta(g)=f$, and denote by $g_0,\ldots,g_p$ the coefficient functions of $g$. We may assume without loss of generality that $g_p\neq 0$. As in the case $k<2$, we have \[ f_r=\delta(g_r)+\frac{k-2-r+1}{12}g_{r-1}, \qquad 0\leq r\leq p+1, \] and $f_r=0$, for $1\leq r\leq p+1$. In particular, the equality $f_{p+1}=0$ implies that $p=k-2$. Likewise, by recursion on $r$, the equality $f_r=0$ shows that $\delta(g_r)=-\frac{k-r-1}{12}g_{r-1}$, for all $1\leq r\leq p$. 
The desired statement now follows from $g=g_0$ and $g_p\in \mathcal{M}_{2-k}(\Gamma,R_S)$, which are both general facts about the coefficient functions of quasi-modular forms (cf.~ref.~\cite{Royer}). \end{proof} In order to conclude the proof of Theorem \ref{thm:main} for arbitrary genus zero subgroups in the case $k\geq 2$, it now suffices to prove the following analogue of Theorem \ref{thm:bijection}. \begin{thm} \label{theorem} There is a direct sum decomposition \[ \mathcal{M}_k(\Gamma,R_S)=\delta^{k-1}(\mathcal{M}_{2-k}(\Gamma,R_S))\oplus \widetilde{\mathcal{M}}_k(\Gamma,R_{s_0}). \] \end{thm} \subsection{Divisors of meromorphic modular forms in negative weight} The key ingredient for the proof of Theorem \ref{theorem} is a formula for the degree of the divisor \[ \left\lfloor\operatorname{div} f\right\rfloor:=\sum_{P\in X_\Gamma}\left\lfloor \nu_P(f) \right\rfloor \cdot [P] \in \operatorname{Div}(X_\Gamma), \] where $0\neq f \in \mathcal{M}_{2-k}(\Gamma)$. Since the vanishing order of $f$ at an elliptic point or irregular cusp might be half- or third-integral, the divisor $\left\lfloor\operatorname{div} f\right\rfloor$ is in general different from $\operatorname{div}(f)=\sum_{P\in X_\Gamma}\nu_P(f) \cdot (P)$. \begin{proposition} \label{proposition} We have \[ \deg \left\lfloor\operatorname{div} f\right\rfloor=\begin{cases} 0 & k=2\\ -1-\dim S_k(\Gamma)&k\geq 3. \end{cases} \] \end{proposition} \begin{proof} The valence formula yields that \begin{equation} \label{equation} \deg \operatorname{div}(f)=\frac{(2-k)d_\Gamma}{12}=-(2-k)-\left(\frac{k}{4}-\frac 12\right)\varepsilon_2-\left(\frac{k}{3}-\frac 23\right)\varepsilon_3-\left( \frac{k}{2}-1 \right)\varepsilon_\infty, \end{equation} where in the second equality we have also used that $X_\Gamma$ has genus zero. This proves the statement for $k=2$, since in that case we necessarily have $\operatorname{div}(f)=\left\lfloor\operatorname{div} f\right\rfloor$. 
If $k\geq 3$, we first need a lemma which provides some arithmetic information about the vanishing order of $f$ at an elliptic point or an irregular cusp. \begin{lemma} \label{lemma} Let $k$ be an integer and $0\neq g\in \mathcal{M}_k(\Gamma)$. Then \[ \nu_P(g) \equiv \begin{cases} \frac{k}{4} \mod \mathbb Z & \mbox{if $P$ is elliptic of order two,} \\ \frac{k}{3} \mod \mathbb Z & \mbox{if $P$ is elliptic of order three,}\\ \frac{k}{2} \mod \mathbb Z & \mbox{if $P$ is an irregular cusp}. \end{cases} \] \end{lemma} \begin{proof}[Proof of Lemma \ref{lemma}.] The statement is trivial for $k=0$, since $g$ is then a holomorphic function on $X_\Gamma$ and therefore $\nu_P(g)\in \mathbb Z$. In general, if $\Gamma=\operatorname{SL}_2(\mathbb Z)$, then the desired assertion follows immediately by comparing both sides of the valence formula (where $i$ and $\rho$ denote the elliptic points defined in eq.~\eqref{eq:eps_infty_to_dGamma} above) \[ \nu_{[i]}(g)+\nu_{[\rho]}(g)+\sum_{P\in X_\Gamma\setminus \{[i],[\rho]\}}\nu_P(g)=\frac{k}{12}. \] For general $\Gamma$, if $k$ is even, then we can write $g=g'g''$ where $g'\in \mathcal{M}_0(\Gamma)$ and $g''\in \mathcal{M}_k(\operatorname{SL}_2(\mathbb Z))$, and the desired statement follows from the above, as every elliptic point is $\operatorname{SL}_2(\mathbb Z)$-equivalent to either $i$ or $\rho$, and every cusp is $\operatorname{SL}_2(\mathbb Z)$-equivalent to $\infty$. If $\Gamma$ is arbitrary and $k$ is odd, then the existence of a non-zero meromorphic modular form of weight $k$ implies that $-I\notin\Gamma$, hence that there are no elliptic points of order two. On the other hand, if $P$ is elliptic of order three, then $\nu_P(g)=\frac{1}{2}\nu_P(g^2) \equiv \frac{k}{3} \mod \mathbb Z$, by what was just established in the case of even weights. This proves the statement for elliptic points. Finally, if $P$ is an irregular cusp of width $h$, then $g(\tau+h)=(-1)^kg(\tau)$. 
On the other hand, the Fourier expansion $\sum_{m=n}^\infty a_me^{\pi im\tau/h}$ of $g$ at $P$ then satisfies $(-1)^ma_m=(-1)^ka_m$, so that $a_m=0$ unless $m\equiv k \bmod 2$, and the result follows.
\end{proof}
We now return to the proof of Proposition \ref{proposition}. If $k$ is even, then combining \eqref{equation} with Lemma \ref{lemma} yields that
\[
\deg \left\lfloor\operatorname{div} f\right\rfloor=-(2-k)-\left\lfloor \frac{k}{4} \right\rfloor\varepsilon_2-\left\lfloor \frac{k}{3} \right\rfloor\varepsilon_3-\left(\frac{k}{2}-1\right)\varepsilon_\infty=-1-\dim S_k(\Gamma),
\]
proving the desired statement in that case. If $k$ is odd, then a similar argument yields that
\[
\deg \left\lfloor\operatorname{div} f\right\rfloor=-(2-k)-\left\lfloor \frac{k}{3} \right\rfloor\varepsilon_3-\left(\frac{k}{2}-1\right)\varepsilon^{\rm reg}_\infty-\left(\frac{k}{2}-\frac 12\right)\varepsilon^{\rm irr}_\infty=-1-\dim S_k(\Gamma),
\]
where $\varepsilon^{\rm reg}_\infty$ (respectively, $\varepsilon^{\rm irr}_\infty$) denotes the number of regular (respectively, irregular) cusps. This ends the proof of Proposition \ref{proposition}.
\end{proof}
\begin{rmk}
If $\Gamma$ is an arbitrary finite-index subgroup of $\operatorname{SL}_2(\mathbb Z)$, not necessarily of genus zero, and $0\neq f\in \mathcal{M}_{2-k}(\Gamma)$, then essentially the same proof yields that
\[
\deg \left\lfloor\operatorname{div} f\right\rfloor=
\begin{cases}
0, & k=2,\\
g-1-\dim S_k(\Gamma), & k\geq 3,
\end{cases}
\]
where $g$ denotes the genus of $X_\Gamma$. We expect that this formula is well known to the experts, but we did not find it in the literature.
\end{rmk}
\subsection{Proof of \texorpdfstring{Theorem \ref{theorem}}{}}
Theorem \ref{theorem} follows by combining the assertions in the next two propositions.
\begin{proposition}
We have
\[
\delta^{k-1}(\mathcal{M}_{2-k}(\Gamma,R_S))\cap \widetilde{\mathcal{M}}_k(\Gamma,R_s)=\{0\}.
\]
\end{proposition}
\begin{proof}
Let $f\in \delta^{k-1}(\mathcal{M}_{2-k}(\Gamma,R_S))\cap \widetilde{\mathcal{M}}_k(\Gamma,R_s)$, so that in particular $f=\delta^{k-1}(g)$, for some $g\in \mathcal{M}_{2-k}(\Gamma,R_S)$. If $P\in X_\Gamma\setminus S_\Gamma$ were such that $\nu_P(g)<0$, then $\nu_P(f)<\frac{1-k}{h_P}$, which contradicts $f\in \widetilde{\mathcal{M}}_k(\Gamma,R_s)$. Therefore $\nu_P(g)\geq 0$ and a similar argument shows that $\nu_s(g)\geq 0$, for all $s\in S_\Gamma\setminus\{s_0\}$. We now distinguish between the cases $k=2$ and $k\geq 3$. If $k=2$, then $f\in \widetilde{\mathcal{M}}_k(\Gamma,R_s)$ implies that $\nu_{s_0}(f)\geq 0$, which in turn yields that $\nu_{s_0}(g)\geq 0$. Therefore the meromorphic function $g$ has no poles on $X_\Gamma$, hence must be constant, and we conclude that $f=\delta(g)=0$. If $k\geq 3$ and $g\neq 0$, then Proposition \ref{proposition} now implies that
\[
\left\lfloor \nu_{s_0}(g)\right\rfloor=-1-\dim S_k(\Gamma)-\sum_{P \in X_\Gamma\setminus \{s_0\}}\left\lfloor \nu_{P}(g)\right\rfloor \leq -1-\dim S_k(\Gamma).
\]
In particular, $\nu_{s_0}(g)<0$, and hence $\nu_{s_0}(f)=\nu_{s_0}(g) \leq -1-\dim S_k(\Gamma)$, which contradicts $f\in \widetilde{\mathcal{M}}_k(\Gamma,R_s)$. Therefore we must have $g=0$, hence also $f=0$, ending the proof.
\end{proof}
\begin{proposition}
We have
\[
\delta^{k-1}(\mathcal{M}_{2-k}(\Gamma,R_S))+\widetilde{\mathcal{M}}_k(\Gamma,R_s)=\mathcal{M}_k(\Gamma,R_S).
\]
\end{proposition}
\begin{proof}
Let $f\in \mathcal{M}_k(\Gamma,R_S)$. If $f\in \widetilde{\mathcal{M}}_k(\Gamma,R_s)$, there is nothing to prove. Otherwise, one of the conditions (i)--(iii) in Theorem \ref{theorem} must be violated. We shall prove that it is always possible to add an element of $\delta^{k-1}(\mathcal{M}_{2-k}(\Gamma,R_S))$ to $f$ such that the result is contained in $\widetilde{\mathcal{M}}_k(\Gamma,R_s)$, which clearly implies the desired result.
Assume first that there exists $P\in X_\Gamma\setminus S_\Gamma$ such that $\nu_P(f)<\frac{1-k}{h_P}$ and choose $0\neq g \in \mathcal{M}_{2-k}(\Gamma,R_S)$. There exists $\varphi \in \mathcal{M}_0(\Gamma,R_S)$ such that $\nu_P(\varphi)=\nu_P(f)+\frac{k-1}{h_P}-\nu_P(g)$ (the right-hand side is an integer by Lemma \ref{lemma}) and such that $\nu_Q(\varphi)>|\nu_Q(g)|$, for all $Q\in X_\Gamma \setminus \{P,s_0\}$. Then $\nu_P(\delta^{k-1}(\varphi\cdot g))=\nu_P(f)$, hence there exists $\alpha\in \mathbb C$ such that $\nu_P(f-\alpha\delta^{k-1}(\varphi\cdot g))>\nu_P(f)$ and $\nu_Q(f-\alpha\delta^{k-1}(\varphi\cdot g))\geq \nu_Q(f)$ for all $Q\in X_\Gamma\setminus \{s_0\}$. Repeating this step a finite number of times, we may thus ensure that, up to adding an element of $\delta^{k-1}(\mathcal{M}_{2-k}(\Gamma,R_S))$, we have $\nu_P(f)\geq \frac{1-k}{h_P}$, for all $P\in X_\Gamma\setminus S_\Gamma$. A similar argument shows that we may also assume that $\nu_{s}(f)\geq 0$ for all $s\in S_\Gamma\setminus \{s_0\}$. Now assume that $\lfloor\nu_{s_0}(f)\rfloor<-\dim S_k(\Gamma)$ and choose $0\neq g\in \mathcal{M}_{2-k}(\Gamma,R_S)$. Up to possibly multiplying $g$ by a suitable modular function that only has a pole at $s_0$, we may assume that $\nu_P(g)\geq 0$, for all $P\in X_\Gamma\setminus \{s_0\}$. Moreover, by Proposition \ref{proposition} and since $X_\Gamma$ has genus zero, there exists $\varphi \in \mathcal{M}_0(\Gamma)$ such that $\nu_P(\varphi\cdot g)\geq 0$, for all $P\in X_\Gamma \setminus \{s_0\}$, and $\nu_{s_0}(\varphi\cdot g)=\nu_{s_0}(f)$. As before, one can now show that, up to adding an element of $\delta^{k-1}(\mathcal{M}_{2-k}(\Gamma,R_S))$, we have $\lfloor\nu_{s_0}(f)\rfloor \geq -\dim S_k(\Gamma)$, proving the proposition.
\end{proof} \section{Banana integrals and iterated integrals of meromorphic modular forms} \label{sec:bananameromorphic} The analysis of the monodromy group of the sunrise and banana integrals implies via Theorem~\ref{thm:section2} that both integrals can be expressed through all orders in $\epsilon$ in terms of iterated integrals of meromorphic modular forms for the congruence subgroup $\Gamma_1(6)$, which is a neat subgroup in the sense of Definition~\ref{defi:neat}. In the remainder of this section we make this statement concrete, and we derive a form of the differential equation satisfied by the master integrals for the sunrise and banana families that involves the basis of meromorphic modular forms defined in Theorem~\ref{thm:main} only. The strategy of section~\ref{sec:FIs_and_deqs} for solving the first-order systems then implies that all iterated integrals that appear in the solution, to all orders in the dimensional regulator $\epsilon$, only involve the basis of meromorphic modular forms implied by Theorem~\ref{thm:main}. Note that once this form of the differential equation is known, it is straightforward to solve it explicitly. In particular, the initial condition is known to all orders in $\epsilon$ in terms of $\Gamma$ functions for banana integrals of arbitrary loop order~\cite{Bonisch:2021yfw}. \subsection{The iterated integrals for the sunrise integral} We start by discussing the case of the two-loop equal-mass sunrise integral. This case is in principle well known, and we will show that we can recover the results of ref.~\cite{Adams:2018yfj}. However, we discuss this case in some detail, as it allows us to set our conventions and to point out differences with respect to the three-loop equal-mass banana integral, which will be discussed in section~\ref{sec:ban_iterated}.
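Concretely, iterated integrals of (meromorphic) modular forms are conveniently manipulated through their $q$-expansions: since $d\tau = \frac{dq}{2\pi i\,q}$ with $q=e^{2\pi i\tau}$, a form $f(\tau)=\sum_{n\ge 1}a_n q^n$ can be integrated term by term from the cusp $\tau=i\infty$. The following sketch uses purely illustrative toy coefficients (not the actual forms or the code used for the checks in this paper) and validates the term-by-term primitive against a direct numerical integration along the imaginary axis:

```python
import cmath
import math

def eval_qexp(coeffs, tau):
    """Evaluate f(tau) = sum_{n>=1} coeffs[n-1] * q^n with q = exp(2*pi*i*tau)."""
    q = cmath.exp(2j * math.pi * tau)
    return sum(a * q**n for n, a in enumerate(coeffs, start=1))

def primitive_qexp(coeffs):
    """Coefficients of F(tau) = int_{i*inf}^{tau} f(tau') dtau',
    obtained term by term: q^n -> q^n / (2*pi*i*n)."""
    return [a / (2j * math.pi * n) for n, a in enumerate(coeffs, start=1)]

# toy "cusp form" f(tau) = q - 2 q^2 (illustrative coefficients only)
f = [1.0, -2.0]
F = primitive_qexp(f)

# cross-check against a direct integral along the imaginary axis,
# truncating the path at tau = i*T where the integrand is negligible
T, M = 20.0, 50000
h = (T - 1.0) / M
direct = 0.0 + 0.0j
for k in range(M):
    s = (k + 0.5) * h          # midpoint rule on the path tau = i*(T - s)
    direct += eval_qexp(f, 1j * (T - s)) * (-1j) * h
```

The same term-by-term rule, applied repeatedly, is what underlies the numerical evaluation of the iterated integrals appearing below.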
There are two master integrals $(\mathcal{S}_1(\epsilon;t), \mathcal{S}_2(\epsilon;t))$ for the two-loop equal-mass sunrise integral, which are accompanied by a tadpole integral (which is constant in our normalisation: $I^{\textsf{sun}}_{2,2,0,0,0}(p^2,m^2;2-2\epsilon)=1$)~\cite{Remiddi:2016gno}: \begin{equation}\bsp\label{eq:sunrise_MIs} \mathcal{S}_1(\epsilon;t) &\, = -I^{\textsf{sun}}_{1,1,1,0,0}(p^2,m^2;2-2\epsilon)\,,\\ \mathcal{S}_2(\epsilon;t) &\, = -\left[\frac{1}{3}(t^2-6t+21)-12\epsilon(t-1)\right]\,I^{\textsf{sun}}_{1,1,1,0,0}(p^2,m^2;2-2\epsilon)\\ &\,\phantom{=}-2(t-1)(t-9)\,I^{\textsf{sun}}_{2,1,1,0,0}(p^2,m^2;2-2\epsilon)\,, \esp\end{equation} where the variable $t$ is defined in eq.~\eqref{eq:t_def}. Both master integrals are finite for $\epsilon=0$ and satisfy the differential equation: \begin{equation}\label{eq:sun_GM} \partial_t\begin{pmatrix}\mathcal{S}_1(\epsilon;t) \\ \mathcal{S}_2(\epsilon;t)\end{pmatrix} = \Big[B^\textsf{sun}(t) -2 \epsilon D^\textsf{sun}(t)\Big]\begin{pmatrix}\mathcal{S}_1(\epsilon;t) \\ \mathcal{S}_2(\epsilon;t)\end{pmatrix} + \begin{pmatrix}0\\1\end{pmatrix}\,. \end{equation} Explicit expressions for the matrices $B^\textsf{sun}(t)$ and $D^\textsf{sun}(t)$ are collected in appendix~\ref{app:sun}. In the next step, we want to introduce a modular parametrisation and apply the results of section~\ref{sec:mero_sec}. In order to do this, the Hauptmodul needs to be normalised as in eq.~\eqref{eqn:Hauptmodul}. This is, however, not the case for the Hauptmodul for $\Gamma_1(6)$ defined in eq.~\eqref{eq:t_def}. We define: \begin{equation}\label{eq:t_to_xi} \xi(\tau) = \frac{9}{t(\tau)} = \frac{\eta(2\tau)^8\eta(3\tau)^4}{\eta(\tau)^4\eta(6\tau)^8} = \frac{1}{q} + \mathcal{O}(q^0)\,.
\end{equation} Similarly, we define \begin{equation} \aleph_1(\tau) = \frac{\sqrt{3}}{2\pi\,\xi(\tau)}\,\Psi_1(t(\tau)) = \frac{\eta(\tau)\eta(6\tau)^6}{\eta(2\tau)^2\eta(3\tau)^3} = q + \mathcal{O}(q^2)\,, \end{equation} in agreement with the normalisation in eq.~\eqref{eq:Hk_normalisation} for $k=1$.\footnote{Since $\Gamma_1(6)$ is neat, we must have $h=1$ in eq.~\eqref{eq:Hk_normalisation}. Moreover, $\Gamma_1(6)$ has four cusps, so eq.~\eqref{eq:eps_infty_to_dGamma} implies $d_{\Gamma_1(6)}=1$.} The Jacobian of the change of variables from $\xi$ to $\tau$ is \begin{equation} d\xi = -2\pi i \xi(\tau)(\xi(\tau)-1)(\xi(\tau)-9)\,\aleph_1(\tau)^2\,d\tau\,. \end{equation} Then, letting \begin{equation} \begin{pmatrix}\mathcal{S}_1(\epsilon;t) \\ \mathcal{S}_2(\epsilon;t)\end{pmatrix} = \frac{1}{\epsilon^2(2\pi i)^2\,\xi(\xi-1)(\xi-9)}\,W^{\textsf{sun}}(\tau)\begin{pmatrix}\widetilde{\mathcal{S}}_1(\epsilon;t) \\ \widetilde{\mathcal{S}}_2(\epsilon;t)\end{pmatrix}\,, \end{equation} with \begin{equation} W^{\textsf{sun}}(\tau) = \begin{pmatrix} (2\pi i)^2 \xi(\xi-1)(\xi-9)\,\aleph_1(\tau) & 0 \\ \frac{\pi^2}{3}\left[11 \xi^2-54\xi+27+6\epsilon(\xi+3)^2\right]\,\aleph_1(\tau)-\frac{G_2(\tau)}{\aleph_1(\tau)} & -\frac{2\pi i \epsilon}{\aleph_1(\tau)} \end{pmatrix}\,, \end{equation} we find \begin{equation} \partial_{\tau}\begin{pmatrix}\widetilde{\mathcal{S}}_1(\epsilon;t) \\ \widetilde{\mathcal{S}}_2(\epsilon;t)\end{pmatrix} = \epsilon\,\widetilde{D}^\textsf{sun}(\tau)\,\begin{pmatrix}\widetilde{\mathcal{S}}_1(\epsilon;t) \\ \widetilde{\mathcal{S}}_2(\epsilon;t)\end{pmatrix}+108\pi^2 \epsilon\,(\xi-1)(\xi-9)\,\aleph_1(\tau)^3\,\begin{pmatrix}0\\1\end{pmatrix}\,, \end{equation} with \begin{equation} \widetilde{D}^\textsf{sun}(\tau) = \begin{pmatrix}i\pi (\xi^2+10\xi-27) \,\aleph_1(\tau)^2 & 1 \\ -\pi^2\,(\xi+3)^4\,\aleph_1(\tau)^4 & i\pi (\xi^2+10\xi-27) \,\aleph_1(\tau)^2 \end{pmatrix}\,.
\end{equation} In the previous equations we used the shorthand $\xi=\xi(\tau)$ to keep the notation as light as possible. Let us comment on the form of the differential equation. We observe that the differential equation only involves (holomorphic) modular forms of weights up to 4 for $\Gamma_1(6)$. We also observe that $\epsilon$ factorises from the matrix multiplying the homogeneous part, so that the differential equation is in canonical form. As a consequence, the master integrals in the basis $(\widetilde{\mathcal{S}}_1(\epsilon;t), \widetilde{\mathcal{S}}_2(\epsilon;t))$ can be expressed in terms of iterated integrals of modular forms for $\Gamma_1(6)$, which are pure functions of uniform weight~\cite{ArkaniHamed:2010gh} according to the definition of ref.~\cite{Broedel:2018qkq}. The initial condition can be fixed to all orders in $\epsilon$ in terms of zeta values. These results are not new, and they agree with the findings of ref.~\cite{Adams:2018yfj}. The change of basis from $(\widetilde{\mathcal{S}}_1(\epsilon;t), \widetilde{\mathcal{S}}_2(\epsilon;t))$ to $({\mathcal{S}}_1(\epsilon;t), {\mathcal{S}}_2(\epsilon;t))$ involves a matrix whose entries are rational in $\epsilon$ and meromorphic quasi-modular forms for $\Gamma_1(6)$ with poles at most at the cusps. More precisely, for $i\ge j$, we have \begin{equation}\bsp \widetilde{D}^\textsf{sun}(\tau)_{ij} &\,\in M_{2(1+i-j)}(\Gamma_1(6))\,,\\ W^{\textsf{sun}}(\tau)_{ij} &\,\in \mathcal{Q}\mathcal{M}_{3-2j}^{\le (i-1)}(\Gamma_1(6),S_{\Gamma_1(6)})(\epsilon)\,. \esp\end{equation} \subsection{The iterated integrals for the three-loop banana integral} \label{sec:ban_iterated} We now extend the discussion of the previous subsection to the three-loop equal-mass banana integrals.
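The banana case is built on the same eta quotients $\xi(\tau)$ and $\aleph_1(\tau)$ as before. As a quick cross-check of the leading $q$-expansions quoted in the previous subsection, one can expand $\eta(m\tau)=q^{m/24}\prod_{n\ge1}(1-q^{mn})$ with a few lines of truncated power-series arithmetic (an illustrative sketch, not the code used in this paper; the fractional $q$-prefactors are tracked by hand in the comments):

```python
N = 12  # truncation order in q

def mul(a, b):
    """Multiply two power series truncated at q^N."""
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    c[i + j] += ai * bj
    return c

def inv(a):
    """Series inverse, assuming a[0] == 1."""
    b = [0] * N
    b[0] = 1
    for n in range(1, N):
        b[n] = -sum(a[k] * b[n - k] for k in range(1, n + 1))
    return b

def eta_series(m):
    """prod_{n>=1} (1 - q^{m n}) truncated at q^N; the overall q^{m/24}
    prefactor is tracked separately by hand."""
    s = [1] + [0] * (N - 1)
    n = 1
    while m * n < N:
        f = [1] + [0] * (N - 1)
        f[m * n] = -1
        s = mul(s, f)
        n += 1
    return s

def power(a, e):
    s = [1] + [0] * (N - 1)
    for _ in range(e):
        s = mul(s, a)
    return s

# xi = eta(2t)^8 eta(3t)^4 / (eta(t)^4 eta(6t)^8): total q-prefactor
# (2*8 + 3*4 - 1*4 - 6*8)/24 = -1, so xi = q^{-1} * xi_ser
xi_ser = mul(mul(power(eta_series(2), 8), power(eta_series(3), 4)),
             inv(mul(power(eta_series(1), 4), power(eta_series(6), 8))))

# aleph_1 = eta(t) eta(6t)^6 / (eta(2t)^2 eta(3t)^3): total q-prefactor
# (1*1 + 6*6 - 2*2 - 3*3)/24 = +1, so aleph_1 = q * al_ser
al_ser = mul(mul(eta_series(1), power(eta_series(6), 6)),
             inv(mul(power(eta_series(2), 2), power(eta_series(3), 3))))
```

Since both truncated series start with constant term $1$, this confirms $\xi = 1/q + \mathcal{O}(q^0)$ and $\aleph_1 = q + \mathcal{O}(q^2)$.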
We choose three master integrals as~\cite{Primo:2017ipr} \eqs{ \label{eq:fints} \mathcal{I}_1(\epsilon;x) &= (1 + 2 \epsilon ) (1 + 3 \epsilon)I_{1,1,1,1,0,0,0,0,0}(p^2,1;2-2\epsilon)\, ,\\ \mathcal{I}_2(\epsilon;x) &= (1 + 2 \epsilon ) I_{2,1,1,1,0,0,0,0,0}(p^2,1;2-2\epsilon)\, ,\\ \mathcal{I}_3(\epsilon;x) &= I_{2,2,1,1,0,0,0,0,0}(p^2,1;2-2\epsilon)\,. } All three master integrals are finite at $\epsilon=0$. The fourth master integral is the three-loop tadpole integral with squared propagators (which in our normalisation again evaluates to unity, $I_{2,2,2,0,0,0,0,0,0}(p^2,1;2-2\epsilon) = 1$). The three master integrals in eq.~\eqref{eq:fints} satisfy the inhomogeneous equation~\cite{Primo:2017ipr} \begin{equation}\label{eq:banana_DEQ} \partial_x \begin{pmatrix}\cI_1(\eps;x)\\\cI_2(\eps;x)\\\cI_3(\eps;x)\end{pmatrix} = \Big[B^\textsf{ban}(x) + \epsilon D^\textsf{ban}(x)\Big]\begin{pmatrix}\cI_1(\eps;x)\\\cI_2(\eps;x)\\\cI_3(\eps;x)\end{pmatrix} + \begin{pmatrix}0\\0\\-\frac{1}{2(4x-1)}\end{pmatrix}\,. \end{equation} The explicit expressions of the matrices can be found in appendix~\ref{app:ban}. We change variables from $x$ to $t$ according to eq.~\eqref{eq:change_of_vars}, followed by the change of variables in eq.~\eqref{eq:t_to_xi}. We introduce a new basis according to \begin{equation} \begin{pmatrix}\cI_1(\eps;x)\\\cI_2(\eps;x)\\\cI_3(\eps;x)\end{pmatrix} = \frac{(1+2\epsilon)(1+3\epsilon)}{\epsilon^2}\,W^\textsf{ban}(\tau)\begin{pmatrix}\widetilde{\mathcal{I}}_1(\epsilon;\tau)\\\widetilde{\mathcal{I}}_2(\epsilon;\tau)\\\widetilde{\mathcal{I}}_3(\epsilon;\tau)\end{pmatrix}\,. 
\end{equation} The non-vanishing entries of $W^\textsf{ban}(\tau)$ are: \begin{align} \nonumber W^{\textsf{ban}}&(\tau)_{11} = (2\pi i)^2\, \xi\,\aleph_1(\tau)^2\,,\\ W^{\textsf{ban}}&(\tau)_{21} = \frac{\xi }{2 \left(\xi^2-9\right) (1+3 \epsilon )}\,G_2(\tau )\\ \nonumber&\,+\frac{ \left(\xi^2-12 \xi+27\right) (\xi+3)^2+6\epsilon \left(\xi^4+20 \xi^3-90 \xi^2+180 \xi+81\right)}{6 \left(\xi^2-9\right)^2 (1+3 \epsilon )}\,\pi ^2 \xi\,\aleph_1(\tau )^2\,,\\ \nonumber W^{\textsf{ban}}&(\tau)_{22} = \frac{\pi\,\epsilon\,\xi}{2(1+3\epsilon)\,(\xi^2-9)}\,,\\ \nonumber W^{\textsf{ban}}&(\tau)_{31} = -\frac{\xi }{24 \pi ^2 (\xi-9) (\xi-1) (\xi+3)^2 (1+2 \epsilon ) (1+3 \epsilon)}\,\frac{G_2(\tau )^2}{ \aleph_1(\tau )^2}\\ \nonumber&-\frac{\xi^4+6 \xi^3-540 \xi^2 +162 \xi+243+6\epsilon \left(\xi^4+16 \xi^3-306 \xi^2+144 \xi+81\right) }{36 (\xi-9) (\xi-3) (\xi-1) (\xi+3)^3 (1+2 \epsilon) (1+3 \epsilon)}\,\xi\,G_2(\tau )\\ \nonumber&-\frac{\pi ^2 \xi \aleph_1(\tau )^2 }{216 (\xi-9) (\xi-3)^2 (\xi-1) (\xi+3)^4 (1+2 \epsilon) (1+3 \epsilon)}\\ \nonumber&\quad\times\Big[(\xi^5+3 \xi^4+1062 \xi^3-3726 \xi^2+729 \xi+2187) (\xi+3)^3\\ \nonumber&\quad\phantom{\times}+12\epsilon (\xi^8+10 \xi^7+1386 \xi^6-18126 \xi^5+82188 \xi^4-194562 \xi^3\\ \nonumber&\quad \phantom{\times}+78246 \xi^2+39366 \xi+19683) +36 \epsilon ^2 (\xi^8+40 \xi^7+860 \xi^6-17064 \xi^5+68454 \xi^4\\ \nonumber&\quad\phantom{\times} -153576 \xi^3+69660 \xi^2+29160 \xi+6561) \Big]\,,\\ \nonumber W^{\textsf{ban}}&(\tau)_{32} = -\frac{\xi \epsilon }{12 \pi (\xi-9) (\xi-1) (\xi+3)^2 (1+2 \epsilon ) (1+3 \epsilon )}\,\frac{G_2(\tau )}{ \aleph_1(\tau )^2}\\ \nonumber&-\frac{\pi \xi \epsilon \left[\xi^4+6 \xi^3-540 \xi^2 +162 \xi+243+6\epsilon\, \left(\xi^4+16 \xi^3-306 \xi^2+144 \xi+81\right) \right]}{36 (\xi-9) (\xi-3) (\xi-1) (\xi+3)^3 (1+2 \epsilon ) (1+3 \epsilon )}\,,\\ \nonumber W^{\textsf{ban}}&(\tau)_{33} = -\frac{\xi \epsilon ^2}{2 (\xi-9) (\xi-1) (\xi+3)^2 (1+2 \epsilon) (1+3 \epsilon) \aleph_1(\tau )^2}\,. 
\end{align} Note that the entries of $W^{\textsf{ban}}(\tau)$ are again rational in $\epsilon$ and meromorphic quasi-modular forms for $\Gamma_1(6)$: \begin{equation} W^{\textsf{ban}}(\tau)_{ij} \in \mathcal{Q}\mathcal{M}_{4-2j}^{\le(i-1)}(\Gamma_1(6),S_{\Gamma_1(6)}\cup \{[\tau_{\pm 3}]\})(\epsilon)\,, \qquad \xi(\tau_{\pm 3}) = \pm 3\,. \end{equation} Unlike the case of the two-loop sunrise integral, we now have poles not only at the MUM-points $\xi\in \{0,1,9,\infty\}$, but also at $\xi = \pm 3$. These poles arise from the singularities of the differential operator in eq.~\eqref{eq:L_ban_3} which are not MUM-points, i.e., $x\in\{1/4,1\}$. The vector $(\widetilde{\mathcal{I}}_1(\epsilon;\tau),\widetilde{\mathcal{I}}_2(\epsilon;\tau),\widetilde{\mathcal{I}}_3(\epsilon;\tau))$ satisfies the differential equation: \begin{equation}\bsp\label{eq:ban_final} \partial_{\tau}\begin{pmatrix}\widetilde{\mathcal{I}}_1(\epsilon;\tau)\\\widetilde{\mathcal{I}}_2(\epsilon;\tau)\\\widetilde{\mathcal{I}}_3(\epsilon;\tau)\end{pmatrix} &\,=i\,\epsilon\,\widetilde{D}^{\textsf{ban}}(\epsilon;\tau)\begin{pmatrix}\widetilde{\mathcal{I}}_1(\epsilon;\tau)\\\widetilde{\mathcal{I}}_2(\epsilon;\tau)\\\widetilde{\mathcal{I}}_3(\epsilon;\tau)\end{pmatrix} + 8\pi i (\xi-1)(\xi-9)(\xi^2-9)\,\aleph_1(\tau)^4\begin{pmatrix}0\\0\\1\end{pmatrix} \,, \esp\end{equation} with \begin{equation} \widetilde{D}^{\textsf{ban}}(\epsilon;\tau) = \begin{pmatrix} d_2(\tau) & -1 &0\\ d_4(\tau) & d_2(\tau)& -6 \\ \frac{1-4\epsilon^2}{\epsilon^2}\,\,d_6(\tau) & \frac{1}{6}d_4(\tau) & d_2(\tau) \end{pmatrix}\,, \end{equation} where we defined: \begin{equation}\bsp d_2(\tau) &\,=\frac{4\pi\,(\xi^4-10\xi^3+18\xi^2-90\xi+81)}{\xi^2-9}\,\aleph_1(\tau)^2\,,\\ d_4(\tau) &\,=-\frac{2\pi^2\,(\xi^2-18\xi+9)^2\,(\xi^4-12\xi^3+102\xi^2-108\xi+81)}{(\xi^2-9)^2}\,\aleph_1(\tau)^4\,,\\ d_6(\tau) &\,=\frac{8\pi^3\,\xi\,(\xi^2-18\xi+9)^3\,(\xi^4-12\xi^3+38\xi^2-108\xi+81)}{3(\xi^2-9)^3}\,\aleph_1(\tau)^6\,.
\esp\end{equation} The structure of the differential equation is particularly simple, and the functions $d_k(\tau)$ are meromorphic modular forms of weight $k$: \begin{equation} d_k(\tau)\in \widetilde{\mathcal{M}}_{k}(\Gamma_1(6), \{[\tau_{\pm3}]\})\,. \end{equation} The appearance of the additional poles at $\xi = \pm3$ can again be traced back to the singularities at $x\in\{1/4,1\}$, which are not MUM-points, and so they do not map to cusps when passing to the variable $\tau$. We note that in order to arrive at this simple form, the algorithm of section~\ref{sec:neat_proof}, which allows every meromorphic quasi-modular form to be decomposed according to Theorem~\ref{thm:main}, plays a crucial role. The differential equation can easily be solved to arbitrary orders in $\epsilon$ in terms of iterated integrals of meromorphic modular forms for $\Gamma_1(6)$. The initial condition is known to all orders in $\epsilon$ from refs.~\cite{Broedel:2019kmn,Bonisch:2021yfw}. We have explicitly computed all master integrals through $\mathcal{O}(\epsilon^2)$, and we have checked numerically that our results are correct by comparing the numerical evaluation of the iterated integrals in terms of $q$-expansions to a direct numerical evaluation of the banana integrals from Mellin--Barnes integrals in the Euclidean region. The results are lengthy and not very illuminating, and they are available from the authors upon request. Let us conclude by making an important observation. Despite all the structural similarities between $\widetilde{D}^{\textsf{sun}}(\tau)$ and $\widetilde{D}^{\textsf{ban}}(\tau)$, the differential equation~\eqref{eq:ban_final} is \emph{not} in canonical form, because the entry in the lower left corner of $\widetilde{D}^{\textsf{ban}}(\tau)$ is not independent of $\epsilon$! This is not entirely surprising: Canonical differential equations are expected to be closely related to the concept of pure functions~\cite{Henn:2013pwa}.
Pure functions in turn are expected to have only logarithmic singularities~\cite{ArkaniHamed:2010gh,Broedel:2018qkq}. We see, however, that $d_4(\tau)$ and $d_6(\tau)$ have double and triple poles at $\xi=\pm3$. More generally, we see that, as soon as we consider poles that do not lie at the cusps, the basis of meromorphic modular forms obtained from Theorem~\ref{thm:main} will generically lead to functions with higher-order poles, and there is in general no way to preserve modularity and only have logarithmic singularities. It is possible to achieve an alternative decomposition which leads to a basis of quasi-modular forms with at most simple poles. More precisely, for $k\ge 2$ we have a decomposition: \begin{equation}\label{eq:dec_QM} \mathcal{Q}\mathcal{M}_{k-2}(\Gamma,R_S) = \delta\mathcal{Q}\mathcal{M}_{k-2}(\Gamma,R_S) \oplus \mathcal{M}_{2-k}(\Gamma,R_S)\,G_2^{k-1}\oplus \widetilde{\mathcal{Q}\mathcal{M}}_k(\Gamma,R_{\infty})\,, \end{equation} where we defined (cf.~eqs.~\eqref{eq:Mtilde_def} and~\eqref{eq:Shat_def}): \begin{equation}\bsp\widetilde{\mathcal{Q}\mathcal{M}}_k(\Gamma,R_{\infty})&\, := M_k(\Gamma) \cup \widehat{\mathcal{S}}_k(\Gamma)\cup\widehat{\mathcal{Q}\mathcal{M}}_k(\Gamma,R_{\infty})\,\\ \widehat{\mathcal{Q}\mathcal{M}}_k(\Gamma,R_{\infty})&\,:= \bigoplus_{\substack{P\in R \\ 0\le m<k-1}}\!\!\!\! \mathbb{C}\,\frac{\aleph_{k-2m}\,G_2^m}{\xi-P}\,. \esp\end{equation} The difference between the sets $\widehat{\mathcal{M}}_k(\Gamma,R_{\infty})$ and $\widehat{\mathcal{Q}\mathcal{M}}_k(\Gamma,R_{\infty})$ is that the former only contains meromorphic modular forms, but with poles of higher order, and the latter only contains quasi-modular forms of higher depth, but with at most simple poles. The proof of the decomposition in eq.~\eqref{eq:dec_QM} (for neat subgroups of genus zero) is similar to the proof in section~\ref{sec:neat_proof}.
The form of the differential equation in this basis, however, is extremely complicated (and even further away from being canonical). For the future, it would be interesting to investigate if it is possible to define a canonical basis for the equal-mass three-loop banana integrals. This may involve introducing integration kernels that are primitives of modular forms with only simple poles, but at the expense of losing modularity, similar to the case of elliptic polylogarithms~\cite{Broedel:2017kkb} (see also ref.~\cite{matthesfonseca}). \section{The differential equations for the sunrise and banana integrals} \label{app:sunban} \subsection{The differential equations for the sunrise integrals} \label{app:sun} The matrices appearing in the differential equation in eq.~\eqref{eq:sun_GM} are \begin{equation}\bsp B^\textsf{sun}(t) &\, = \frac{1}{6t(t-1)(t-9)}\begin{pmatrix} 3 (3+14t-t^2) & -9 \\ (t+3) (t^3-15 t^2+75 t+3) & 3 (3+14t-t^2) \end{pmatrix}\,,\\ D^\textsf{sun}(t) &\, =\frac{1}{6t(t-1)(t-9)}\begin{pmatrix} 6 (t-1) t & 0 \\ (t+3) (t^3-9 t^2+63 t+9) & 3 (t-9) (t+1) \end{pmatrix}\,. \esp\end{equation} A basis of maximal cuts for the two-loop equal-mass sunrise integral in $d=2$ dimensions, i.e., a basis for the solution space of the differential operator in eq.~\eqref{eq:sunriseDO} is \begin{equation} \begin{split} \label{eq:psi1_def} \Psi_1(t) & = \frac{4}{[(3-\sqrt{t})(1+\sqrt{t})^3]^{1/2}}\,{\EK}\left(\frac{t_{14}(t)t_{23}(t)}{t_{13}(t)t_{24}(t)}\right)\,,\\ \Psi_2(t) & = \frac{4 i}{[(3-\sqrt{t})(1+\sqrt{t})^3]^{1/2}}\,{\EK}\left(\frac{t_{12}(t)t_{34}(t)}{t_{13}(t)t_{24}(t)}\right)\,, \end{split} \end{equation} with $t_{ij}(t) = t_i(t)-t_j(t)$ and \begin{equation}\label{eq:SR_t_i_def} t_1(t) = -4\,,\quad t_2(t) = -(1+\sqrt{t})^2\,,\quad t_3(t) = -(1-\sqrt{t})^2\,, \quad t_4(t)=0\,, \end{equation} and $\EK(\lambda)$ denotes the complete elliptic integral of the first kind: \begin{equation} \EK(\lambda) = \int_0^1\frac{dt}{\sqrt{t(1-t)(1-\lambda t)}}\,.
\end{equation} \subsection{The differential equations for the banana integrals} \label{app:ban} The matrices $B^\textsf{ban}(x)$ and $D^\textsf{ban}(x)$ entering the differential equation~\eqref{eq:banana_DEQ} are: \begin{align} B^\textsf{ban}(x) &= \begin{pmatrix} \frac{1}{x} & \frac{4}{x} & 0 \\ \frac{1}{4(1-x)} & \frac{1}{x}+\frac{2}{1-x} & \frac{3}{x}+\frac{3}{1-x} \\ -\frac{1}{8(1-x)} + \frac{1}{8(1-4x)} \,\,&\,\, -\frac{1}{1-x} + \frac{3}{2(1-4x)} \,\,&\,\, \frac{1}{x}+\frac{6}{1-4x}-\frac{3}{2(1-x)} \end{pmatrix}\,,\\ D^\textsf{ban}(x) &= \begin{pmatrix} \frac{3}{x} & \frac{12}{x} & 0 \\ \frac{1}{1-x} & \frac{2}{x}+\frac{6}{1-x} & \frac{6}{x}+\frac{6}{1-x} \\ -\frac{1}{2(1-x)} + \frac{1}{2(1-4x)} \,\,&\,\, -\frac{3}{1-x}+\frac{9}{2(1-4x)} \,\,&\,\, \frac{1}{x}+\frac{12}{1-4x}-\frac{3}{1-x} \end{pmatrix}\,. \end{align} A basis of the solution space for the differential operator $\mathcal{L}_x^{\mathsf{ban},(3)}$ in eq.~\eqref{eq:L_ban_3} can then be chosen as (for $0\le t\le 1$) \begin{equation} \begin{split}\label{eq:H1_to_Psi1} I_1(x(t)) &\,=\frac{1}{3}\,t\,(\Psi_1(t)+\Psi_2(t))\,(\Psi_1(t)+3\Psi_2(t))\,,\\ J_1(x(t)) &\,=\frac{i}{3}\,t\,\Psi_1(t)\,(\Psi_1(t)+\Psi_2(t))\,,\\ H_1(x(t)) &\,= -\frac{1}{3}\,t\,\Psi_1(t)^2\,, \end{split} \end{equation} where $\Psi_1(t)$ and $\Psi_2(t)$ are the maximal cuts of the sunrise integral, and $x(t)$ is defined in eq.~\eqref{eq:change_of_vars}. \section{Conclusion} \label{sec:conclusions} In this paper we have considered a class of differential equations which can be solved to all orders in $\epsilon$ in terms of iterated integrals of meromorphic modular forms. We have described these differential equations in detail, and we have argued that the type of modular forms required is related to the monodromy group of the associated homogeneous differential equation.
On the mathematical side, one of the main results of this paper is a generalisation of the main theorems for the full modular group $\mathrm{SL}_2(\bZ)$ of ref.~\cite{matthes2021iterated} to arbitrary genus-zero subgroups of finite index. In particular, we have provided an explicit decomposition of the space of meromorphic modular forms into a direct sum of two spaces. The first space collects all those meromorphic modular forms which can be written as derivatives of other functions, and which are thus irrelevant when considering integrals. We provide an explicit basis for the second space (at least in the case of so-called neat subgroups), and, using a classical result due to Chen, we show that the resulting iterated integrals are independent. On the physics side, we have clarified by explicit calculations how the monodromy groups of the associated homogeneous differential equations determine the type of modular forms that can arise. In particular, this gives another argument why the congruence subgroup associated to the two-loop equal-mass sunrise integral should have level 6, rather than 12 (see refs.~\cite{Adams:2017ejb,Frellesvig:2021vdl}). Finally, we have provided, for the first time, a complete description of the higher orders in $\epsilon$ for all master integrals for the three-loop equal-mass banana family. The results, which involve iterated integrals of meromorphic modular forms, are rather lengthy, and they are available from the authors upon request. In some sense, the differential equations and iterated integrals considered here can be interpreted as one of the simplest generalisations of MPLs: while MPLs arise from iterated integrations of rational functions, our integrals arise from iterated integrations of rational functions multiplied by solutions of a second-order linear differential operator that admits a modular parametrisation. 
Moreover, similar to the case of MPLs, we can identify classes of differential equations which can always be solved in terms of these functions in an algorithmic way. Note that there are natural generalisations of the class of Feynman integrals to which our construction applies, like those where the maximal cuts are rational functions, but the inhomogeneity involves iterated integrals of meromorphic modular forms. This is for example the case for the two-loop kite integral or some integrals contributing to the three-loop $\rho$ parameter~\cite{Adams:2016xah,Remiddi:2016gno,Adams:2018yfj,Abreu:2019fgk}. There are still some open questions. First, it often happens for Feynman integrals that one needs to consider square roots in addition to modular forms (cf., e.g., ref.~\cite{Aglietti:2004tq}). If all square roots can be rationalised, one can reduce the complexity again to the situation of rational functions. In the setup of modular forms, however, if the branch points of the square root are not aligned with the cusps of the modular curve, it is not clear that the functions obtained by rationalising the square roots will fall within the class of meromorphic modular forms considered here. Second, we have shown that for the three-loop banana integrals, the differential equation is very compact when expressed in terms of the basis of meromorphic modular forms defined in section~\ref{sec:mero_sec}, but it is not in canonical form. For the future, it would be interesting to understand if and how a canonical form for this differential equation can be obtained, and what the resulting concept of pure functions would be. We leave these questions for future work. \section{Differential equations and modular parametrisations} \label{sec:DEQs} \subsection{Feynman integrals and differential equations} \label{sec:FIs_and_deqs} The goal of this paper is to study a certain class of Feynman integrals and to characterize the functions necessary for their evaluation.
We work in dimensional regularisation in $d=d_0-2\epsilon$ dimensions, where $d_0$ is an even integer. The Feynman integrals to be considered depend on a single dimensionless variable $x$ or -- equivalently -- two dimensionful scales. It is well known that using integration-by-parts identities~\cite{Chetyrkin:1981qh,Tkachov:1981wb}, all Feynman integrals that share the same set of propagators raised to different integer powers can be expressed as linear combinations of a small set of so-called \emph{master integrals}. Those master integrals satisfy a system of first-order linear differential equations of the form~\cite{Kotikov:1990kg,Kotikov:1991hm,Kotikov:1991pm,Gehrmann:1999as,Henn:2013pwa} \begin{equation}\label{eq:DEQ_generic} \partial_x\mathcal{I}(x,\epsilon) = A(x,\epsilon)\mathcal{I}(x,\epsilon) + \mathcal{N}(x,\epsilon)\,, \end{equation} where $\mathcal{I}(x,\epsilon) = (I(x,\epsilon),\partial_xI(x,\epsilon),\ldots,\partial_x^{r-1}I(x,\epsilon))^T$ is the vector of independent master integrals depending on the maximal set of propagators in the family. $\mathcal{N}(x,\epsilon)$ is an inhomogeneous term stemming from integrals with fewer propagators, which we assume to be known and expressible to all orders in the dimensional regulator $\epsilon$ as a linear combination, with rational functions in $x$ as coefficients, of multiple polylogarithms (MPLs), defined by: \begin{equation}\label{eq:MPL_def} G(a_1,\ldots,a_n;x) = \int_0^x\frac{dt}{t-a_1}\,G(a_2,\ldots,a_n;t)\,, \end{equation} where the $a_i$ are complex constants that are independent of $x$. The entries of $A(x,\epsilon)$ are rational functions in $x$ and $\epsilon$.
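For orientation, the recursion in eq.~\eqref{eq:MPL_def} translates directly into a (deliberately naive, purely illustrative) numerical evaluator, valid when no $a_i$ lies on the integration path and the innermost index is non-zero:

```python
import math

def mpl(a, x, steps=4000):
    """Naive numerical evaluation of G(a_1,...,a_n;x) by iterating the
    defining integral innermost-first on a uniform grid (illustrative only;
    assumes no a_i lies on the segment [0, x] and a[-1] != 0)."""
    ts = [x * j / steps for j in range(steps + 1)]
    g = [1.0] * (steps + 1)          # empty-word case: G(;t) = 1
    for ai in reversed(a):           # innermost letter first
        integrand = [g[j] / (ts[j] - ai) for j in range(steps + 1)]
        cum = [0.0] * (steps + 1)
        for j in range(1, steps + 1):
            # cumulative trapezoidal rule for int_0^{t_j} g(s)/(s - a_i) ds
            cum[j] = cum[j - 1] + 0.5 * (integrand[j - 1] + integrand[j]) * (ts[j] - ts[j - 1])
        g = cum
    return g[steps]
```

For instance, $G(-1;1)=\log 2$ and $G(-1,-1;1)=\tfrac12\log^2 2$ are reproduced to the accuracy of the trapezoidal rule.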
The differential equation~\eqref{eq:DEQ_generic} is equivalent to the inhomogeneous differential equation: \begin{equation}\label{eq:DEQ_generic_higher} \mathcal{L}_{x,\epsilon}^{(r)}I(x,\epsilon) = N(x,\epsilon)\,, \end{equation} where $\mathcal{L}_{x,\epsilon}^{(r)}$ is a differential operator of degree $r$ whose coefficients are rational functions in $x$ and $\epsilon$. In order to solve the differential equation~\eqref{eq:DEQ_generic}, we first note that it is always possible to choose the basis of master integrals such that the matrix $A(x,\epsilon)$ is finite as $\epsilon\to0$ (see, e.g., refs.~\cite{Chetyrkin:2006dh,Lee:2019wwn}). In that case we can change the basis of master integrals according to \begin{equation}\label{eq:change_IJ} \mathcal{I}(x,\epsilon) = W_r(x)\mathcal{J}(x,\epsilon)\,, \end{equation} where $W_r(x)$ is the Wronskian matrix of the homogeneous part of eq.~\eqref{eq:DEQ_generic_higher} at $\epsilon=0$, \begin{equation}\label{eqn:DEgen} \mathcal{L}_x^{(r)}u(x) = 0\,, \end{equation} where $\mathcal{L}_x^{(r)}= \mathcal{L}_{x,\epsilon=0}^{(r)}$. Let us write \begin{equation}\label{eq:cL_with_coefficients} \mathcal{L}^{(r)}_x = \sum_{j=0}^r a_j(x)\partial_x^j\,, \end{equation} with $a_j(x)$ being rational functions and $a_r(x)=1$. If we denote the solution space of $\mathcal{L}_x^{(r)}$ by \begin{equation}\label{eq:sol_space} \textrm{Sol}(\mathcal{L}_x^{(r)}) = \bigoplus_{s=1}^r\mathbb{C}\psi_s(x)\,, \end{equation} then the Wronskian is $W_r(x) = (\psi_s^{(p-1)}(x))_{1\le p,s\le r}$, where $\psi_s^{(p)}(x) := \partial_x^p\psi_s(x)$. The Wronskian is in fact the matrix for a basis of maximal cuts for $\mathcal{I}(x,\epsilon)$~\cite{Primo:2016ebd,Frellesvig:2017aai,Harley:2017qut,Bosma:2017ens}. 
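As a toy illustration of the Wronskian matrix (a sketch unrelated to the Feynman operators above): for the Euler operator $\mathcal{L}^{(2)}_x = \partial_x^2 + \frac{1}{x}\partial_x - \frac{1}{x^2}$ the solution space is spanned by $\psi_1(x)=x$ and $\psi_2(x)=1/x$, and Abel's identity $\partial_x\det W_2 = -a_1(x)\det W_2$ fixes $\det W_2(x)\propto 1/x$:

```python
def wronskian(x):
    """W_2(x) = (psi_s^{(p-1)}(x)) for psi_1 = x, psi_2 = 1/x,
    the solutions of u'' + u'/x - u/x^2 = 0."""
    return [[x, 1.0 / x],
            [1.0, -1.0 / x**2]]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# det W_2(x) = -2/x, in agreement with Abel's identity with a_1(x) = 1/x
```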
After the change of basis in eq.~\eqref{eq:change_IJ}, the differential equation for $\mathcal{J}(x,\epsilon)$ takes the form \begin{equation}\bsp \partial_x\mathcal{J}(x,\epsilon) &\,= W_r(x)^{-1}\big(A(x,\epsilon)-A(x,0)\big)W_r(x)\,\mathcal{J}(x,\epsilon) + W_r(x)^{-1}\mathcal{N}(x,\epsilon)\\ &\, =\epsilon \widetilde{A}(x,\epsilon)\,\mathcal{J}(x,\epsilon) + \widetilde{\mathcal{N}}(x,\epsilon)\,. \esp\end{equation} The solution to the homogeneous part of the above system can be written as a path-ordered exponential: \begin{equation}\label{eq:path_ordered} \mathcal{J}(x,\epsilon) = \mathbb{P}\,\exp\left[\epsilon\int_{x_0}^xdx'\,\widetilde{A}(x',\epsilon)\right]\mathcal{J}(x_0,\epsilon)\,. \end{equation} The path-ordered exponential can easily be expanded into a series in $\epsilon$, and the coefficients of this expansion involve iterated integrals over one-forms multiplied by polynomials in the entries of the Wronskian. We see that in our setting where the $\epsilon$-expansion of the differential operator $\mathcal{L}_{x,\epsilon}^{(r)}$ and the inhomogeneity $\mathcal{N}(x,\epsilon)$ involve rational functions and MPLs only, the class of iterated integrals needed to express $\mathcal{J}(x,\epsilon)$ is determined by the solution space of $\mathcal{L}_x^{(r)}$ in eq.~\eqref{eq:sol_space}. It is an interesting question when these iterated integrals can be expressed in terms of other classes of special functions studied in the literature. For example, in the case where $\widetilde{A}(x,\epsilon)$ is rational in $x$, these iterated integrals can be evaluated in terms of MPLs. In general, however, little is known about these iterated integrals. The main goal of this paper is to discuss a certain class of differential equations where the resulting iterated integrals can be completely classified and an explicit basis can be constructed algorithmically. Before we describe this class of differential equations, we need to review some general material on linear differential equations.
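To see the $\epsilon$-expansion of the path-ordered exponential at work in a toy example (illustrative only; the matrix below is not one of the Feynman systems): for the nilpotent choice where $\widetilde{A}(x)_{12}=1/x$ and all other entries vanish, all products $\widetilde{A}(x_1)\widetilde{A}(x_2)$ are zero, the series terminates after the first iterated integral, and $\mathcal{J}(x)$ differs from the identity only by $\epsilon\log(x/x_0)$ in the upper-right entry; a step-by-step (RK4) integration of $\partial_x \mathcal{J} = \epsilon\widetilde{A}(x)\mathcal{J}$ reproduces this:

```python
import math

EPS, X0, X1, STEPS = 0.3, 1.0, 2.0, 1000

def a_tilde(x):
    # nilpotent toy matrix: A(x1) @ A(x2) = 0, so the path-ordered
    # exponential truncates after the single iterated integral
    return [[0.0, 1.0 / x], [0.0, 0.0]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def lincomb(*terms):
    # linear combination of 2x2 matrices given as (coeff, matrix) pairs
    out = [[0.0, 0.0], [0.0, 0.0]]
    for c, m in terms:
        for i in range(2):
            for j in range(2):
                out[i][j] += c * m[i][j]
    return out

# RK4 integration of dJ/dx = EPS * A(x) J, with J(X0) = identity
J = [[1.0, 0.0], [0.0, 1.0]]
h = (X1 - X0) / STEPS
x = X0
for _ in range(STEPS):
    k1 = matmul(a_tilde(x), J)
    k2 = matmul(a_tilde(x + h / 2), lincomb((1.0, J), (EPS * h / 2, k1)))
    k3 = matmul(a_tilde(x + h / 2), lincomb((1.0, J), (EPS * h / 2, k2)))
    k4 = matmul(a_tilde(x + h), lincomb((1.0, J), (EPS * h, k3)))
    J = lincomb((1.0, J), (EPS * h / 6, k1), (EPS * h / 3, k2),
                (EPS * h / 3, k3), (EPS * h / 6, k4))
    x += h

exact = EPS * math.log(X1 / X0)   # the single surviving iterated integral
```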
\subsection{Linear differential operators and their monodromy group} \label{sec:frobenius_review} A point $x_0$ is called a \textit{singular} point of eq.~\eqref{eqn:DEgen} if one of the coefficient functions $a_j(x)$ in eq.~\eqref{eq:cL_with_coefficients} has a pole at $x_0$. The point at infinity is called \emph{singular} if after a change of variables $x\to 1/y$ in eq.~\eqref{eqn:DEgen} the transformed equation has a pole at $y=0$ in one of the coefficient functions. Points which are not singular are called \textit{ordinary} points of the differential equation. A singular point $x_i$ is called \textit{regular} if the coefficients $a_{r-j}(x)$ have a pole of order at most $j$ at $x_i$. If all singular points of eq.~\eqref{eqn:DEgen} are regular, the equation is called \textit{Fuchsian}. In the following we only discuss Fuchsian differential equations, and we denote the (finite) set of singular points by $\Sigma:=\{x_0,\ldots,x_{q-1}\}\subset \mathbb{P}^1_{\mathbb{C}}$, and use the notation $X:=\mathbb{P}^1_{\mathbb{C}}\setminus\Sigma$. The differential operators obtained from Feynman integrals are expected to be of Fuchsian type. For every point $y_0\in\mathbb{P}^1_{\mathbb{C}}$, the \textit{Frobenius method} can be used to construct a series representation of $r$ independent local solutions of a Fuchsian differential equation in a neighbourhood of this point. The starting point is the \textit{indicial polynomial} of a point, which can be obtained as follows: the differential equation~\eqref{eqn:DEgen} is equivalent to $\widetilde{\mathcal{L}}_x^{(r)}u(x)=0$, where $\widetilde{\mathcal{L}}_x^{(r)}$ has the form \begin{equation} \widetilde{\mathcal{L}}^{(r)}_x = \sum_{j=0}^r \tilde{a}_j(x)\theta_x^j\,,\qquad \theta_x = x\partial_x\,, \end{equation} where the $\tilde{a}_j(x)$ are polynomials that are assumed not to have a common zero. Note that the singular points are precisely the zeroes of $\tilde{a}_r(x)$.
The indicial polynomial of $\widetilde{\mathcal{L}}^{(r)}_x$ at $y_0=0$ is then $P_0(s) = \sum_{j=0}^r \tilde{a}_j(0)s^j$. The roots $s_i$ of $P_0(s)$ are called the \emph{indicials} or \emph{local exponents} at $0$. The indicial polynomial $P_{y_0}(s)$ and the local exponents at another point $y_0$ can be obtained by changing variables to $y=x-y_0$ (or $y=1/x$ if $y_0=\infty$). The local exponents characterise the solution space locally close to the point $y_0$ in the form of convergent power series. More precisely, if $y_0\in X = \mathbb{P}^1_{\mathbb{C}}\setminus \Sigma$ is an ordinary point, then $P_{y_0}(s)$ has degree $r$, and so there are precisely $r$ local exponents $s_1,\ldots,s_r$ (counted with multiplicity). Correspondingly, there are $r$ linearly independent power series solutions $\phi_i(y_0;x)$, $i\in\{1,\ldots,r\}$, to eq.~\eqref{eqn:DEgen} of the form \begin{equation} \label{eqn:powerseriessolutions} \phi_i(y_0;x) = \sum_{n\ge 0}c_{i,n}(x-y_0)^{s_i+n}\,, \qquad c_{i,0}=1 \,. \end{equation} Note that this representation holds for $y_0\neq\infty$; if $y_0=\infty$, the expansion parameter is $1/x$. If $y_0\in\Sigma$ is a singular point, the degree of the indicial polynomial is less than $r$, and so there are fewer than $r$ local exponents (even when counted with multiplicity) and thus fewer than $r$ local solutions of the form \eqref{eqn:powerseriessolutions}. Without loss of generality we assume $y_0=x_0\in\Sigma$. The missing solutions generically exhibit a logarithmic behaviour as one approaches $x_0$. In particular, in the case of a single local exponent $s_1$, there is a single power series solution, and a tower of $(r-1)$ logarithmic solutions (we only consider the case $x_0\neq\infty$) \begin{equation}\bsp\label{eq:Frob_log_solution} \phi_i(x_0;x) &\, = (x-x_0)^{s_1}\sum_{k=1}^{i} \frac{1}{(k-1)!}\log^{k-1}(x-x_0)\, \sigma_{k}(x_0;x)\,, \esp\end{equation} where the $\sigma_{k}(x_0;x)$ are holomorphic at $x=x_0$.
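A compact illustration of the Frobenius recursion at such a point (a standard textbook example, not one of the operators in this paper): for $\theta_x^2 u + x^2 u = 0$, i.e.\ Bessel's equation of order zero, the indicial polynomial at $x=0$ is $P_0(s)=s^2$ with the single (double) root $s_1=0$, so there is one power-series solution, with coefficients fixed by $n^2 c_n = -c_{n-2}$, plus one logarithmic solution:

```python
from fractions import Fraction
from math import factorial

def frobenius_j0(nmax):
    """Power-series coefficients c_n of the holomorphic Frobenius solution
    of theta^2 u + x^2 u = 0 at x = 0: n^2 c_n + c_{n-2} = 0, c_0 = 1."""
    c = [Fraction(0)] * (nmax + 1)
    c[0] = Fraction(1)
    for n in range(2, nmax + 1):
        c[n] = -c[n - 2] / n**2
    return c

c = frobenius_j0(8)
# the recursion reproduces the Bessel series J_0(x) = sum_k (-1)^k (x/2)^{2k} / (k!)^2
```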
A singular point with such a hierarchical logarithmic structure of solutions with $s_1$ an integer is called a point of \emph{maximal unipotent monodromy} (MUM-point). The power series obtained from the Frobenius method have finite radius of convergence: the solutions $\phi(y_0;x):=(\phi_i(y_0;x))_{1\le i\le r}$ converge in a disc whose radius is the distance to the nearest singular point. It is possible to analytically continue the basis of solutions $\phi(y_0;x)$ to all points in $X$. We can cover $\mathbb{P}^1_{\mathbb{C}}$ by a finite set of open discs $D_{y_k}$ centered at $y_k\in\mathbb{P}^1_{\mathbb{C}}$ such that $\phi(y_k;x)$ converges inside $D_{y_k}$. Since $\phi(y_k;x)$ and $\phi(y_l;x)$ are bases of the same solution space for each value of $x$ in the overlapping region $D_{y_k}\cap D_{y_l}$, they are related by a \emph{matching matrix}, which can be found from the following equation: \begin{equation} \label{eqn:calcmatchingmatrices} \phi(y_k;x)= \begin{pmatrix}\phi_r(y_k;x)\\\vdots\\\phi_1(y_k;x)\\\end{pmatrix}=R_{y_k,y_l}\phi(y_l;x)=R_{y_k,y_l}\begin{pmatrix}\phi_r(y_l;x)\\\vdots\\\phi_1(y_l;x)\\\end{pmatrix}\,. \end{equation} Note that the matching matrix $R_{y_k,y_l}$ must be constant. Practically, it can be found by numerically evaluating each component of the above equation for several numerical points in the overlapping region. This allows one to determine the matching matrices at least numerically to high precision by taking enough orders in the expansion. In some cases, one may even be able to determine the entries analytically by solving for them in an ansatz for the matrix $R_{y_k,y_l}$; a precise numerical evaluation allows one to identify the corresponding analytic expressions in many situations. \paragraph{The monodromy group.} The Frobenius method allows one to construct a basis of solutions locally for each point $y_0\in \mathbb{P}_{\mathbb{C}}^1$. The local solutions can be analytically continued to a global basis of solutions defined for all $x\in X$. 
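The numerical matching procedure behind eq.~\eqref{eqn:calcmatchingmatrices} can be illustrated with a deliberately simple toy example: two bases of the same two-dimensional function space (the basis functions below are ad-hoc choices, not solutions of a Feynman-type equation), evaluated at two sample points of the overlap region, determine the constant matrix by a linear solve:

```python
import numpy as np

# Toy version of the matching: two bases of the same two-dimensional
# function space, evaluated at sample points of the overlap region
# (the basis functions are ad-hoc choices for illustration only)
def phi_k(x):
    return np.array([x**2, x])               # "basis around y_k"

def phi_l(x):
    return np.array([x**2 + x, x**2 - x])    # "basis around y_l"

xs = [0.3, 0.4]                              # sample points in the overlap
K = np.column_stack([phi_k(x) for x in xs])
Lmat = np.column_stack([phi_l(x) for x in xs])
R = K @ np.linalg.inv(Lmat)                  # constant matching matrix
print(np.round(R, 12))                       # [[0.5, 0.5], [0.5, -0.5]]
```

In an actual application the columns would be truncated Frobenius series evaluated to high precision, and more sample points can be used to monitor the truncation error.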
We can also take a point $x\in X$ and a closed loop $\gamma$ starting and ending at $x$ and analytically continue the solution $\phi(y_0;x)$ along $\gamma$. Clearly, if $\gamma$ does not encircle any singular point, Cauchy's theorem implies that the value of $\phi(y_0;x)$ must be the same before and after analytic continuation. One can now ask how the vector of solutions $\phi(y_0;x)$ is altered when transported along a small loop $\gamma$ encircling a singular point $x_k$. Let us denote by $\phi_{\circlearrowleft}(y_0;x)$ the value of $\phi(y_0;x)$ after analytic continuation along $\gamma$. Since $\phi_{\circlearrowleft}(y_0;x)$ must still satisfy the differential equation even after analytic continuation, it must be expressible in the original basis $\phi(y_0;x)$, and so there must be a constant $r\times r$ matrix $\rho_{y_0}(\gamma)$ -- called the \emph{monodromy matrix} -- such that \begin{equation} \phi_{\circlearrowleft}(y_0;x) = \rho_{y_0}(\gamma)\phi(y_0;x)\,. \end{equation} The subscript on $\rho$ denotes the local basis in which the monodromy is expressed. Changing the local basis from $y_0$ to $y_1$ amounts to conjugating the monodromy matrix by the matching matrix from eq.~\eqref{eqn:calcmatchingmatrices}: \begin{equation} \rho_{y_0}(\gamma) =R_{y_0,y_1}\rho_{y_1}(\gamma)R_{y_0,y_1}^{-1}\,. \end{equation} \begin{figure} \begin{center}\includegraphics{basic.pdf}\end{center} \caption{Paths for the analytic continuation and the calculation of the monodromies for a differential operator with $q$ regular singular points, one of which lies at zero and one at infinity. The (blue) reference point $x_\mathsf{ref}$ has been conveniently chosen in the (green) disc $D_{x_0}$ around $x_0=0$.} \label{fig1} \end{figure} Let us now explain how we can find the monodromy matrix for a collection of loops $\gamma_{x_k}$ encircling the singular points $x_k$ in the counter-clockwise direction, but no other singular points (see figure~\ref{fig1}). 
We focus for now on the singular point $x_0$. We can fix a reference point $x_\mathsf{ref} \in D_{x_0}$, and we can also choose the loop $\gamma_{x_0}$ to lie entirely inside $D_{x_0}$. The effect of the analytic continuation on $\phi(x_0;x_\mathsf{ref})$ is easy to describe. Indeed, consider for example the local solution in eq.~\eqref{eq:Frob_log_solution}. Since $\sigma_k(x_0;x)$ is holomorphic at $x_0$, its value does not change when it is analytically continued along $\gamma_{x_0}$. So, only the logarithms $\log(x-x_0)$ and the non-integer powers $(x-x_0)^{s_1}$ are affected. Hence, we find \begin{equation}\bsp \phi_{i,\circlearrowleft}&(x_0;x_\mathsf{ref}) =\\ & =e^{2\pi i s_1}(x_{\mathsf{ref}}-x_0)^{s_1}\sum_{k=1}^{i} \frac{1}{(k-1)!}\left[\log(x_{\mathsf{ref}}-x_0) + 2\pi i\right]^{k-1}\, \sigma_{k}(x_0;x_\mathsf{ref})\,. \esp\end{equation} In this way, we can work out the entries of the \emph{local} monodromy matrices $\rho_{x_0}(\gamma_{x_0})$ for each singular point $x_0$. For a singular point $x_k\neq x_0$, we can decompose the loop $\gamma_{x_k}$ based at $x_{\mathsf{ref}}\in D_{x_0}$ into a segment from $x_{\mathsf{ref}}$ to a new reference point $\tilde{x}_{\mathsf{ref}}\in D_{x_k}$, followed by a loop $\tilde{\gamma}_{x_k}$ based at $\tilde{x}_{\mathsf{ref}}$ around $x_k$ and lying entirely inside $D_{x_k}$, and finally we return along the first segment from $\tilde{x}_{\mathsf{ref}}$ to ${x}_{\mathsf{ref}}$. Correspondingly, we can then express the monodromy matrix as \begin{equation}\label{eqn:monodromytranslation} \rho_{x_0}(\gamma_{x_k}) = R_{x_0,x_k}\rho_{x_k}(\tilde{\gamma}_{x_k})R_{x_0,x_k}^{-1}\,, \end{equation} and the local monodromy matrix $\rho_{x_k}(\tilde{\gamma}_{x_k})$ can be determined as described previously. Following this procedure, we can associate a monodromy matrix to every singular point $x_k\in\Sigma$. 
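The effect of the local continuation can also be checked numerically: the following sketch transports $f(x)=(x-x_0)^{s}$ along a counter-clockwise unit loop by tracking the argument continuously, and recovers the factor $e^{2\pi i s}$ derived above (here with the hypothetical non-integer exponent $s=1/2$):

```python
import numpy as np

# Transport f(x) = (x - x0)^s along a counter-clockwise unit loop around x0
# by tracking the argument continuously (no principal-branch jumps);
# s = 1/2 is a hypothetical non-integer local exponent
s = 0.5
theta = np.linspace(0.0, 2.0*np.pi, 2001)
f = np.exp(s*1j*theta)        # f on the loop x - x0 = e^{i*theta}
ratio = f[-1]/f[0]
print(ratio)                  # close to exp(2*pi*i*s) = -1 for s = 1/2
```

The same phase-tracking idea is what a numerical analytic continuation along a chain of overlapping discs implements in practice.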
The set of global monodromy matrices around all singularities but one\footnote{The reason for the monodromy group being generated by one generator fewer than the number of poles is the following: a loop enclosing no singularity will lead to a trivial monodromy, which is represented as the unit matrix. Accordingly, the appropriately ordered product of monodromy matrices with respect to all poles should yield the unit matrix, as the corresponding contour can be deformed into the trivial loop.} expressed in the basis of the reference neighbourhood will then generate the \emph{monodromy group}. \paragraph{Mathematical interpretation.} The differential operator $\mathcal{L}^{(r)}_x$ determines a \mbox{rank-$r$} vector bundle over $X= \mathbb{P}_{\mathbb{C}}^1\setminus \Sigma$, i.e., for each $x\in X$ the fiber $V_x$ over $x$ is an \mbox{$r$-dimensional} complex vector space, and the solution $\phi(y_0;x)$ is a basis of $V_x$ (because the solutions are linearly independent if $x$ is not a singular point). Let $\gamma$ be a closed loop in $X$ based at $x\in X$. The analytic continuation of $\phi(y_0;x)$ along $\gamma$ does not depend on the details of the path. More precisely, the result of the analytic continuation depends only on the homotopy class of $\gamma$ in $X$. Accordingly, it is sufficient to consider the fundamental group $\pi_1(X,x)$. If we fix the basis of solutions $\phi(y_0;x)$, analytic continuation provides a group homomorphism: \begin{equation}\bsp\label{eq:monodromy_rep} \rho_{y_0} : \pi_1(X,x) &\,\to \textrm{GL}(V_x)\simeq \textrm{GL}_r(\mathbb{C})\,\\ \gamma&\,\mapsto \rho_{y_0}(\gamma)\,. \esp\end{equation} In other words, we can interpret analytic continuation as a representation of the fundamental group of loops in $X$ based at $x$ on the fiber $V_x$, called the \emph{monodromy representation}. The monodromy group is then the image of $\pi_1(X,x)$ in $\textrm{GL}_r(\mathbb{C})$ under $\rho_{y_0}$. 
In the case of the punctured Riemann sphere $X= \mathbb{P}_{\mathbb{C}}^1\setminus \{x_0,\ldots,x_{q-1}\}$ the structure of the fundamental group is easy to describe: it is the free group generated by the loops $\gamma_{x_k}$, $0\le k<q-1$. Hence, we see that the monodromy group is generated by the matrices $\rho_{y_0}(\gamma_{x_k})$ with $0\le k<q-1$, which are precisely the matrices we have constructed earlier in this section. \subsection{A class of differential equations allowing for a modular parametrisation} \label{ssec:ClassModularParametrization} \label{sec:modular_DEQs} After the general review in the previous subsection, we are now going to describe the class of differential equations we want to discuss in the remainder of this paper. Our starting point is a differential equation of the form~\eqref{eq:DEQ_generic_higher} satisfying the assumptions from section~\ref{sec:FIs_and_deqs}, that is, to all orders in the $\epsilon$-expansion $\mathcal{L}_{x,\epsilon}^{(r)}$ and $N(x,\epsilon)$ only involve rational functions and MPLs. Here, we would like to make the following additional assumptions: \begin{enumerate} \item The operator $\mathcal{L}_x^{(r)}$ is the $(r-1)^{\textrm{th}}$ symmetric power of a degree-two operator $\tilde{\mathcal{L}}_x^{(2)}$. That is, if the solution space of $\tilde{\mathcal{L}}_x^{(2)}$ is \begin{equation} \textrm{Sol}(\tilde{\mathcal{L}}_x^{(2)}) = \mathbb{C}\,\psi_1(x)\oplus \mathbb{C}\,\psi_2(x)\,, \end{equation} then the solution space of $\mathcal{L}_x$ reads \begin{equation}\label{eq:Sol_L_x} \textrm{Sol}({\mathcal{L}}_x^{(r)}) = \bigoplus_{a+b=r-1}\mathbb{C}\,\psi_1(x)^a\psi_2(x)^b\,. \end{equation} \item We make the following assumptions about $\tilde{\mathcal{L}}_x^{(2)}$. First, we assume that all singular points of $\tilde{\mathcal{L}}_x^{(2)}$ are MUM-points. We denote the holomorphic and logarithmically-divergent solutions at $x=x_0$ by $\psi_1(x)$ and $\psi_2(x)$ respectively. 
Second, its monodromy group, which we will call $\Gamma_2$ in the following, is conjugate to a subgroup of $\mathrm{SL}_2(\bZ)$ of finite index, i.e., there exists $\gamma \in \operatorname{SL}_2(\mathbb C)$ such that $\gamma\Gamma_2\gamma^{-1}$ is a subgroup of $\mathrm{SL}_2(\bZ)$ of finite index. \end{enumerate} Note that these assumptions imply that the determinant of the Wronskian matrix, \begin{equation}\label{eq:Det_def} D(x) := \det \left(\begin{smallmatrix} \psi_1(x) & \psi_2(x) \\ \psi_1'(x) & \psi_2'(x) \end{smallmatrix}\right)\,,\qquad \psi_a'(x) = \partial_x\psi_a(x)\,, \end{equation} is a rational function of $x$. While it may seem that these assumptions are rather restrictive, differential equations of this type cover several cases of interesting Feynman integrals. For example, they cover the case of (several) Feynman integrals associated to one-parameter families of elliptic curves ($n=2$) and K3 surfaces~\cite{Doran:1998hm} ($n=3$) where the subtopologies can be expressed in terms of MPLs without additional non-rationalisable square roots. This includes in particular the case of the equal-mass two- and three-loop banana integrals, which are going to be discussed explicitly in section~\ref{sec:bananameromorphic}. In the remainder of this section, we present a characterisation of the space of functions that is needed to express the result. \paragraph{The modular parametrisation for $\tilde{\mathcal{L}}_x^{(2)}$.} Let us first discuss the structure of the solution space $\textrm{Sol}(\tilde{\mathcal{L}}_x^{(2)})$. We assume that $x_0=0$ is a MUM-point, and $\psi_1(x)=\phi_1(0;x)$ is holomorphic at $x=0$, while $\psi_2(x)=\phi_2(0;x)$ is logarithmically divergent. We define \begin{equation}\label{eq:tau_def_generic} \tau := \frac{\psi_2(x)}{\psi_1(x)}\,,\qquad q:= e^{2\pi i \tau}\,. 
\end{equation} We can always choose a basis of $\textrm{Sol}(\tilde{\mathcal{L}}_x^{(2)})$ such that $\Im\tau>0$ for $x\in X=\mathbb{P}^1_{\mathbb{C}}\setminus\Sigma$, and so $\tau\in \mathfrak{H}:=\{\tau\in\mathbb{C}:\Im\tau>0\}$. We see that the change of variable from $x$ to $q$ is holomorphic at $x=0$. It can be inverted (at least locally, as a power series) to express $x$ in terms of $q$. This series will converge for $|q|<1$, or equivalently, for all $\tau\in\mathfrak{H}$. It may, however, diverge whenever $x$ approaches a singular point of the differential equation. Let us analyse how the monodromy group $\Gamma_2$ acts in the variable $\tau$. Consider $\gamma\in\pi_1(X,x)$. We know that if we analytically continue $\psi(x) = (\psi_2(x),\psi_1(x))^T$ along $\gamma$, then the solution changes to $\psi_{\circlearrowleft}(x) = \tilde{\rho}_0(\gamma)\psi(x) = \left(\begin{smallmatrix}a& b\\c&d \end{smallmatrix}\right)\psi(x)$, for some $\left(\begin{smallmatrix}a& b\\c&d \end{smallmatrix}\right)\in \Gamma_2\subseteq \mathrm{SL}_2(\bZ)$. It is then easy to see that the monodromy group acts on $\tau$ via M\"obius transformations: \begin{equation}\label{eq:Moebiusaction} \tau_{\circlearrowleft} = \frac{a\tau+b}{c\tau+d} =: \gamma\cdot \tau\,. \end{equation} Clearly, $x$ should not change under analytic continuation (because $x$ is a rational function, and thus free of branch cuts), and so $x(\tau)$ must be invariant under the action of the monodromy group: \begin{equation}\label{eq:modular_functions_def} x\left( \frac{a\tau+b}{c\tau+d}\right) = x(\tau) \,, \textrm{ for all } \left(\begin{smallmatrix}a& b\\c&d \end{smallmatrix}\right)\in \Gamma_2\,. \end{equation} A (meromorphic) function from $\mathfrak{H}$ to $\mathbb{C}$ that satisfies eq.~\eqref{eq:modular_functions_def} is called a \emph{modular function} for $\Gamma_2$. 
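The inversion of the change of variables can be carried out order by order on a computer. As an illustration (using the first terms of the local Frobenius solutions of the sunrise operator at its MUM-point, cf.~eq.~\eqref{eqn:sunzerolocsolall}), the following \texttt{sympy} sketch inverts $q(t)=t\,e^{h(t)}$ by a fixed-point iteration:

```python
import sympy as sp

t, q = sp.symbols('t q')
P = 4  # work modulo q^4

# first terms of the Frobenius solutions at the MUM-point t = 0
# of the sunrise operator (holomorphic part phi1, log-free part of phi2)
phi1  = 1 + t/3 + sp.Rational(5, 27)*t**2 + sp.Rational(31, 243)*t**3
sigma = sp.Rational(4, 9)*t + sp.Rational(26, 81)*t**2 + sp.Rational(526, 2187)*t**3

# 2*pi*i*tau = phi2/phi1 = log(t) + h(t),  so  q = e^{2*pi*i*tau} = t*e^{h(t)}
h = sp.series(sigma/phi1, t, 0, P).removeO()

# invert q(t) = t*exp(h(t)) by the fixed-point iteration t -> q*exp(-h(t)),
# which gains one order in q per step
tq = q
for _ in range(P):
    tq = sp.expand(sp.series(q*sp.exp(-h.subs(t, tq)), q, 0, P).removeO())
print(tq)  # the inverse series t(q) = q - 4*q**2/9 + 10*q**3/81 (mod q^4)
```

The truncation order $P$ and the purely series-based iteration are choices made for this sketch; in practice one works to much higher order to reach the desired numerical precision.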
If we define $h_1(\tau) := \psi_1(x(\tau))$, then $h_1$ changes under analytic continuation according to: \begin{equation}\bsp\label{eq:h1_transform} h_1\left( \frac{a\tau+b}{c\tau+d}\right) &\,= h_1(\tau)_{\circlearrowleft} = \psi_1(x(\tau))_{\circlearrowleft} \\ &\,= c\,\psi_2(x(\tau))+d\, \psi_1(x(\tau)) = (c\tau+d)\,h_1(\tau)\,. \esp\end{equation} A holomorphic function from $\mathfrak{H}^* := \mathfrak{H}\cup \mathbb{P}^1_{\mathbb{Q}}$ to $\mathbb{C}$ that satisfies eq.~\eqref{eq:h1_transform} is called a \emph{modular form of weight 1} for $\Gamma_2$ (see section~\ref{sec:modular_review}). We see that whenever $\Gamma_2\subseteq \mathrm{SL}_2(\bZ)$, the differential equation $\tilde\mathcal{L}_x^{(2)}u(x)=0$ admits a \emph{modular parametrisation}, by which we mean that there is a modular function $x(\tau)$ and a modular form $h_1(\tau)$ of weight 1 for $\Gamma_2$ such that \begin{equation} \textrm{Sol}(\tilde\mathcal{L}_x^{(2)}) = h_1(\tau)\big(\mathbb{C} \oplus \mathbb{C}\tau\big)\,. \end{equation} \paragraph{Mathematical interpretation.} The solutions of $\tilde\mathcal{L}_x^{(2)}$ define multivalued holomorphic functions on $X$. We can ask on which surface these functions become single-valued holomorphic functions. This can be realised when expressing the solutions in the new variable $\tau\in\mathfrak{H}$. The monodromy group $\Gamma_2\subset \textrm{GL}_2(\mathbb{C})$ associated to the differential operator acts on $\mathfrak{H}$ via M\"obius transformations. We can identify the space on which $\psi_1(x(\tau))=h_1(\tau)$ is holomorphic and single-valued with $\mathfrak{H}$. Let us mention, however, that the action of $\Gamma_2$ on $\mathfrak{H}$ factors through its projection $\overline{\Gamma}_2$ to $\textrm{PGL}_2(\mathbb{C})$, where we have identified matrices that only differ by a non-zero multiplicative constant. 
Indeed, it is easy to see that $\left(\begin{smallmatrix}a & b\\ c& d\end{smallmatrix}\right) \in \textrm{GL}_2(\mathbb{C})$ and $\lambda\left(\begin{smallmatrix}a & b\\ c& d\end{smallmatrix}\right) \in \textrm{GL}_2(\mathbb{C})$ lead to the same M\"obius transformation in eq.~\eqref{eq:Moebiusaction}, for all $\lambda\in \mathbb{C}^*$. The action on $h_1(\tau)$, however, may be sensitive to $\lambda$. Different points $\tau$ in $\mathfrak{H}$ can correspond to the same value of $x$ in our original space $X$, and the points that are identified are precisely those related by the action of the monodromy group $\Gamma_2$. It is thus natural to consider the space $Y_{\Gamma_2}=\Gamma_2\backslash\mathfrak{H}$. The function $x(\tau)$ defines a holomorphic map from $\mathfrak{H}$ to $X$, and it induces a bijection between $Y_{\Gamma_2}$ and $X$. The punctured Riemann sphere can be compactified to $\overline{X}\simeq \mathbb{P}^1_{\mathbb{C}}$ by adding the singularities. Similarly, we can compactify $Y_{\Gamma_2}$ to the space $X_{\Gamma_2} = \Gamma_2\backslash \mathfrak{H}^*$, with $\mathfrak{H}^*:=\mathfrak{H}\cup\mathbb{P}^1_\mathbb{Q}$ the extended upper half-plane. The pre-images of the singular points, i.e., the orbits $\Gamma_2\backslash \mathbb{P}^1_{\mathbb{Q}}$, are called the \emph{cusps} of $X_{\Gamma_2}$ (see section~\ref{sec:modular_review}). Let us finish this interlude by mentioning that $X_{\Gamma_2}$ and $Y_{\Gamma_2}$ are not manifolds, but \emph{orbifolds}. Loosely speaking, an $n$-dimensional manifold is a topological space that locally `looks like' $\mathbb{R}^n$. Similarly, an $n$-dimensional orbifold locally looks like a quotient $\Gamma_2\backslash \mathbb{R}^n$. This has a bearing on how we choose coordinates on $X_{\Gamma_2}$ and $Y_{\Gamma_2}$. Indeed, the chosen coordinate in a neighbourhood of $\tau\in\mathfrak{H}^*$ will depend on whether $\tau$ has a non-trivial stabilizer $(\Gamma_2)_{\tau} = \{\gamma\in\Gamma_2: \gamma\cdot\tau=\tau\}$. 
We will discuss this in more detail in section~\ref{sec:modular_review}. \paragraph{The modular parametrisation for ${\mathcal{L}}_x^{(r)}$.} Since the solution spaces of $\tilde{\mathcal{L}}_x^{(2)}$ and ${\mathcal{L}}_x^{(r)}$ are related, it is not surprising that all the symmetric powers of $\tilde{\mathcal{L}}_x^{(2)}$ will also admit a modular parametrisation. If we define $\tau$ again by eq.~\eqref{eq:tau_def_generic}, we have \begin{equation} \textrm{Sol}(\mathcal{L}_x^{(r)}) = h_1(\tau)^{r-1}\,\bigoplus_{s=0}^{r-1}\mathbb{C}\tau^s\,. \end{equation} Since the elements of $\textrm{Sol}(\mathcal{L}_x^{(r)})$ are the maximal cuts of the Feynman integral $I(x,0)$ in $d=d_0$ dimensions, we see that the maximal cuts are linear combinations of a modular form of weight $r-1$ for $\Gamma_2$, multiplied by additional powers of $\tau$. More generally, $h_1(\tau)^{r-1}$ is also a modular form of weight $r-1$ for the monodromy group of $\mathcal{L}_x^{(r)}$. The maximal cuts of the other master integrals are obtained by differentiation. Using \begin{equation} \label{eqn:inverseJacobian} \partial_x = \frac{\mathcal{D}(\tau)}{h_1(\tau)^2}\,\partial_{\tau}\,, \qquad \mathcal{D}(\tau):=D(x(\tau))\,, \end{equation} we see that the maximal cuts of the other master integrals also involve the derivatives of $h_1(\tau)$ (with respect to $\tau$). As we will see in the next section, the latter are no longer modular forms, but they give rise to so-called quasi-modular forms. Let us now return to the original inhomogeneous differential equation. To solve this equation in terms of iterated integrals, we can turn it into a first-order inhomogeneous system for the vector $\mathcal{I}(x,\epsilon)$ and proceed similarly to section~\ref{sec:FIs_and_deqs}. 
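The symmetric-power structure of the solution space is easy to verify in a toy case. For the second-order operator $\partial_x^2+1$ (an illustrative stand-in, not a Feynman operator) the symmetric square is $\partial_x^3+4\partial_x$, following the classical formula $\partial^3+4q\,\partial+2q'$ for $\partial^2+q$, and its solution space is indeed spanned by the products of the two second-order solutions:

```python
import sympy as sp

x = sp.symbols('x')
psi1, psi2 = sp.cos(x), sp.sin(x)   # toy: solutions of u'' + u = 0

# the symmetric square of d^2/dx^2 + 1 is d^3/dx^3 + 4 d/dx
# (classical formula d^3 + 4q d + 2q' for d^2 + q, here with q = 1)
L3 = lambda u: sp.diff(u, x, 3) + 4*sp.diff(u, x)

# its solution space is spanned by psi1^2, psi1*psi2, psi2^2
for u in (psi1**2, psi1*psi2, psi2**2):
    print(sp.simplify(L3(u)))  # 0 for each product
```

The analogous check for the $(r-1)^{\textrm{th}}$ symmetric power with non-trivial coefficient functions proceeds identically, only with longer expressions.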
The entries of the Wronskian matrix of $\mathcal{L}_x^{(r)}$ can be expressed in terms of $\psi_1(x)$ and $\psi_2(x)$: \begin{align} &W_r(x)_{ij} = \\ \nonumber&\,=\binom{r-1}{i-1}^{-1}\,\sum_{k=0}^{j-1}\binom{r-j}{i-k-1}\binom{j-1}{k} \psi_1(x)^{r-i-j+k+1}\,\psi_2(x)^{j-k-1}\psi_1'(x)^{i-k-1}\,\psi_2'(x)^{k}\,, \end{align} with determinant \begin{equation} \det W_r(x) = D(x)^{r(r-1)/2}\,\prod_{k=1}^{r-1}\frac{k!}{k^k}\,. \end{equation} Note that $\det W_r(x)$ is rational whenever $D(x)$ is. The iterated integrals that arise from expanding the path-ordered exponential in eq.~\eqref{eq:path_ordered} will involve differential one-forms of the form \begin{equation}\label{eq:diff_form_sample} dx\,R(x)\,\psi_1(x)^{\alpha}\,\psi_2(x)^{\beta}\psi_1'(x)^{\gamma}\,\psi_2'(x)^{\delta}\,, \end{equation} where $R(x)$ is a rational function and $\alpha$, $\beta$, $\gamma$, $\delta$ are non-negative integers. For applications, one is often interested in knowing a basis of differential forms from which all the iterated integrals can be built by integration, together with the associated special functions. In the case $\alpha=\beta=\gamma=\delta=0$, the answer to this question is well known, and the corresponding basis of special functions is given by the multiple polylogarithms in eq.~\eqref{eq:MPL_def}. In the case where at least one of the exponents is non-zero, we can change variables to $\tau$. The Jacobian is (cf.~eq.~\eqref{eqn:inverseJacobian}) \begin{equation}\label{eq:jacobian} dx = \frac{h_1(\tau)^2}{\mathcal{D}(\tau)}\,d\tau\,. \end{equation} Since $D(x)$ is a rational function, we can eliminate $\psi_2'(x)$. We can also eliminate $\psi_2(x)$ in favour of $\psi_1(x)$ and $\tau$. Hence, it is sufficient to consider differential forms of the form \begin{equation}\label{eq:diff_form} d\tau\,R(x(\tau))\,h_1(\tau)^m\,h_1'(\tau)^s\,\tau^p\,, \end{equation} where $m,s,p\in\mathbb{Z}$, with $s,p$ non-negative. 
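The closed formula for $W_r(x)_{ij}$ and its determinant can be checked symbolically; the following \texttt{sympy} sketch does so for $r=3$, treating $\psi_{1,2}$ and their derivatives as independent symbols:

```python
import sympy as sp

p1, p2, d1, d2 = sp.symbols('psi1 psi2 psi1p psi2p')  # psi_a and psi_a'
r = 3  # check the symmetric-square case

def entry(i, j):  # W_r(x)_{ij} as in the text, with symbols for the psi's
    return sp.binomial(r - 1, i - 1)**-1 * sum(
        sp.binomial(r - j, i - k - 1)*sp.binomial(j - 1, k)
        * p1**(r - i - j + k + 1) * p2**(j - k - 1) * d1**(i - k - 1) * d2**k
        for k in range(j))

W = sp.Matrix(r, r, lambda i, j: entry(i + 1, j + 1))
D = p1*d2 - p2*d1   # Wronskian determinant of the second-order operator
pref = sp.Integer(1)
for k in range(1, r):
    pref *= sp.Rational(sp.factorial(k), k**k)
print(sp.expand(W.det() - D**(r*(r - 1)//2)*pref))  # 0
```

The same check goes through for higher $r$ at the cost of larger intermediate expressions.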
We can write \begin{equation} \frac{1}{p!}\tau^p = \int_{i\infty}^\tau d\tau_1\int_{i\infty}^{\tau_1}d\tau_2\cdots \int_{i\infty}^{\tau_{p-1}}d\tau_p\,, \end{equation} where the divergence at $i\infty$ is regulated by interpreting the lower integration boundary as a tangential base-point~\cite{Brown:mmv}. It is therefore sufficient to consider differential forms with $p=0$. One of the main tasks of the remainder of this paper is to answer this question for the iterated integrals built from the differential forms in eq.~\eqref{eq:diff_form_sample}. More precisely, we will give a constructive proof of the following result. \begin{thm}\label{thm:section2} With assumptions and notations as in section \ref{ssec:ClassModularParametrization}, at every order in $\epsilon$, the solution of the differential equation~\eqref{eq:DEQ_generic_higher} can be written as a $\mathbb{C}$-linear combination of iterated integrals of meromorphic modular forms for the monodromy group $\Gamma_2$. \end{thm} We will give the constructive proof of Theorem~\ref{thm:section2} in section~\ref{sec:mero_sec}, where we will also completely classify the relevant iterated integrals and give an explicit basis. \section{Introduction} Feynman integrals are a cornerstone of perturbative computations in Quantum Field Theory, and so it is important to have a good knowledge of the mathematics underlying them, including efficient techniques for their computation and a solid understanding of the space of functions needed to express them. The simplest class of functions that arises in Feynman integral computations is that of multiple polylogarithms (MPLs)~\cite{Lappo:1927,GoncharovMixedTate,Goncharov:1998kja} (see also refs.~\cite{Remiddi:1999ew,Gehrmann:2000zt,Ablinger:2011te}). 
The success of MPLs in Feynman integral computations can to a large extent be traced back to the fact that their algebraic properties are well understood (see, e.g., ref.~\cite{Duhr:2014woa}), and there are several efficient public implementations for their numerical evaluation~\cite{Gehrmann:2001pz,Gehrmann:2001jv,Buehler:2011ev,Vollinga:2004sn,Frellesvig:2016ske,Ablinger:2018sat,Naterop:2019xaf}. Moreover, it is well known that Feynman integrals satisfy systems of coupled first-order differential equations~\cite{Kotikov:1990kg,Kotikov:1991hm,Kotikov:1991pm,Gehrmann:1999as}, and MPLs are closely connected to the concepts of pure functions~\cite{ArkaniHamed:2010gh} and canonical differential equations~\cite{Henn:2013pwa}. It is fair to say that, whenever one can find a system of canonical differential equations that can be solved in terms of MPLs, the problem can be considered solved. However, it was realised already early on that MPLs do not suffice to express solutions to higher-loop Feynman diagrams \cite{Sabry,Broadhurst:1987ei,Bauberger:1994by,Bauberger:1994hx,Laporta:2004rb,Kniehl:2005bc,Aglietti:2007as,Czakon:2008ii,Brown:2010bw,MullerStach:2011ru,CaronHuot:2012ab,Huang:2013kh,Brown:2013hda,Nandan:2013ip,Ablinger:2017bjx}, though no analytic results in terms of a well-defined class of functions were available. The situation changed less than a decade ago, when it was shown that the two-loop sunrise integral can be expressed in terms of so-called elliptic dilogarithms~\cite{Bloch:2013tra,Adams:2015ydq,Adams:2016xah,Adams:2014vja,Adams:2013nia,Adams:2013kgc,Adams:2015gva,Broedel:2017siw}. The elliptic dilogarithm is a special case of elliptic multiple polylogarithms~\cite{BeilinsonLevin,LevinRacinet,BrownLevin,Broedel:2017kkb}, which also play a prominent role in the context of string amplitudes at one loop, cf.~e.g., refs.~\cite{Broedel:2014vla,Broedel:2015hia,Broedel:2017jdo}. 
Soon after, it was realised that in the equal-mass case the two-loop sunrise integral can also be expressed in terms of iterated integrals of modular forms~\cite{Adams:2017ejb,Broedel:2018iwv}. This class of functions is also of interest in pure mathematics~\cite{ManinModular,Brown:mmv,Matthes:QuasiModular,Brown:mmv2,matthes2021iterated}, and it is understood how to manipulate and evaluate these integrals efficiently~\cite{Duhr:2019rrs,Walden:2020odh}. More generally, it was suggested that modularity is an important feature of Feynman integrals associated to families of elliptic curves~\cite{Weinzierl:2020fyx}. Despite all this progress in understanding Feynman integrals that do not evaluate to MPLs, there are still many questions left unanswered, and no general and algorithmic solution to evaluate and manipulate them is known, contrary to the case of ordinary MPLs. For example, while the importance of iterated integrals of modular forms is by now well established, the reason why modular forms appear in the first place, and of which type, is not completely settled in the literature, and there was even an argument in the literature as to which congruence subgroup to attach to the two-loop sunrise integral~\cite{Adams:2017ejb,Frellesvig:2021vdl}. Also, the link between differential equations and the appearance of these functions is not completely understood (though there are indications that the concepts of pure functions and canonical forms known from MPLs carry over to Feynman integrals associated to families of elliptic curves~\cite{Broedel:2018qkq,Adams:2018yfj,Bogner:2014mha}). Finally, and probably most importantly, holomorphic modular forms are not sufficient to cover even the simplest cases of Feynman integrals depending on one variable. 
Indeed, it is known that, while in general higher-loop analogues of the sunrise integral -- the so-called $l$-loop banana integrals -- are associated to families of Calabi-Yau $(l-1)$-folds~\cite{Bloch:2014qca,Bloch:2016izu,Bourjaily:2018ycu,Bourjaily:2018yfy,Klemm:2019dbm,Bonisch:2020qmm,Bonisch:2021yfw}, the three-loop equal-mass banana integral in $D=2$ dimensions can be expressed in terms of the same class of functions as the two-loop equal-mass sunrise integral~\cite{Bloch:2014qca,Bloch:2016izu,Broedel:2019kmn}. However, if higher terms in the $\epsilon$-expansion in dimensional regularisation are considered, new classes of iterated integrals are required, which cannot be expressed in terms of modular forms alone. The goal of this paper is to describe the (arguably) simplest class of differential equations beyond MPLs for which the space of solutions can be explicitly described, to all orders in the $\epsilon$-expansion. The relevance of this class of differential equations for Feynman integrals stems from the fact that they cover in particular the two- and three-loop equal-mass banana integrals. Their solution space can be described in terms of iterated integrals of meromorphic modular forms, introduced and studied by one of us in the context of the full modular group $\mathrm{SL}_2(\bZ)$~\cite{matthes2021iterated}. For Feynman integrals, however, modular forms for the full modular group are insufficient. We extend the results of ref.~\cite{matthes2021iterated} to arbitrary finite-index subgroups of genus zero, and we provide in particular a basis for the algebra of iterated integrals they span. Our construction also naturally provides an identification of the type of modular forms required, namely those associated to the monodromy group of the associated homogeneous differential operator. This explains in particular the origin and the type of iterated integrals of modular forms encountered in Feynman integral computations. 
As an application of our formalism, we provide for the first time complete analytic results for all master integrals of the three-loop equal-mass banana integrals in dimensional regularisation, including the higher orders in the $\epsilon$-expansion, and we see the explicit appearance of iterated integrals of meromorphic modular forms. The article is organised as follows: in section \ref{sec:DEQs} we review general material on Feynman integrals and the differential equations they satisfy, and we describe the class of differential operators that we consider. In section~\ref{sec:modular} we review modular and quasi-modular forms. In section~\ref{sec:mero_sec} we present the main results of this paper: we consider iterated integrals of meromorphic modular forms and present the main theorems. In section~\ref{sec:sunban} we calculate the monodromy groups for the equal-mass two- and three-loop banana integrals, while section~\ref{sec:bananameromorphic} is devoted to expressing the higher orders in the $\epsilon$-expansion of the three-loop banana integrals in terms of iterated integrals of meromorphic modular forms. We include several appendices. 
In appendix~\ref{app:mathy} we present a rigorous mathematical proof of the main theorem from section~\ref{sec:mero_sec}, and in appendix~\ref{app:sunban} we collect formulas related to the sunrise and banana integrals. \section{The monodromy groups of the equal-mass sunrise and banana integrals} \label{sec:sunban} In the remainder of this paper we will illustrate the abstract mathematical concepts on two very concrete families of Feynman integrals, namely the equal-mass two-loop sunrise and three-loop banana integrals, defined by: \begin{align} \label{eq:sunrise-family} I^{\textsf{sun}}&_{a_1,\dots,a_5}(p^2,m^2;d)=\\ \nonumber& = \int \prod_{i=1}^2 \mathfrak{D}^d \ell_i \frac{(\ell_1\cdot p)^{a_4}(\ell_2\cdot p)^{a_5}}{[\ell_1^2-m^2]^{a_1}[\ell_2^2-m^2]^{a_2}[(\ell_1-\ell_2-p)^2-m^2]^{a_3}}\,,\\ % \label{eq:banana-family} I^{\textsf{ban}}&_{a_1,\dots,a_9}(p^2,m^2;d)=\\ \nonumber& = \int \prod_{i=1}^3 \mathfrak{D}^d \ell_i \frac{(\ell_3^2)^{a_5}(\ell_1\cdot p)^{a_6}(\ell_2\cdot p)^{a_7}(\ell_3\cdot p)^{a_8}(\ell_1\cdot\ell_2)^{a_9}}{[\ell_1^2-m^2]^{a_1}[\ell_2^2-m^2]^{a_2}[(\ell_1-\ell_3)^2-m^2]^{a_3}[(\ell_2-\ell_3-p)^2-m^2]^{a_4}}\,, \end{align} where the $a_i$ are non-negative integers, $m^2>0$, and $p^2$ is real. We work in dimensional regularisation in $d=2-2\epsilon$ dimensions. The integration measure reads \begin{equation} \label{eqn:intemeasure} \int\mathfrak{D}^d\ell=\frac{1}{\Gamma\left( 2-\frac{d}{2} \right)}\int\frac{d^d\ell}{i\pi^{d/2}}\,. \end{equation} We follow refs.~\cite{Remiddi:2016gno,Primo:2017ipr} for the choice of master integrals and the differential equations (see also section~\ref{sec:bananameromorphic} and appendix~\ref{app:sunban}). In this section we focus on identifying the maximal cuts of these integrals as modular forms for the congruence subgroup $\Gamma_1(6)$, and in the next section we see how iterated integrals of meromorphic modular forms arise. 
This gives another way to resolve the debate in the literature whether the two-loop sunrise integral is associated with modular forms for $\Gamma_1(12)$ or $\Gamma_1(6)$; see, e.g., refs.~\cite{Adams:2017ejb,Frellesvig:2021vdl}. While the discussion in this section focuses on these specific Feynman integrals, it is easy to transpose the discussion to other differential operators of degree two or three. This may then provide a roadmap to identify the modular forms obtained from maximal cuts of one-parameter families of Feynman integrals that are not described by the same Picard-Fuchs operators as the examples considered here. \subsection{The sunrise family} \label{ssec:sunrisefamily} \paragraph{The monodromy group.} The maximal cuts of the integral $I^{\textsf{sun}}_{1,1,1,0,0}(p^2,m^2;2)$ are annihilated by the second-order differential operator~\cite{Laporta:2004rb}: \begin{align} \label{eq:sunriseDO} \mathcal{L}^\textsf{sun}_t&:=\partial_t^2 + \left(\frac{1}{t-9}+\frac{1}{t-1}+\frac{1}{t}\right)\partial_t + \left(\frac{1}{12(t-9)}+\frac{1}{4(t-1)}-\frac{1}{3t}\right)\,, \end{align} where we defined \begin{equation}\label{eq:tt_def} t := \frac{p^2}{m^2}\,. \end{equation} It is well-known that $\mathcal{L}^\textsf{sun}_t$ is the Picard-Fuchs operator describing a family of elliptic curves. Consequently, its solutions can be expressed in terms of elliptic integrals of the first kind (see appendix~\ref{app:sun}). In the following we explicitly construct the solutions using the Frobenius method reviewed in section~\ref{sec:frobenius_review} in order to outline the general strategy. While we only perform the calculations for the differential operator $\mathcal{L}^\textsf{sun}_t$, the different steps can be applied very generally to second-order differential operators describing one-parameter families of elliptic curves. 
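In practice, the local power series solutions can be obtained by a linear-algebra version of the Frobenius recursion: one inserts a truncated ansatz into $\mathcal{L}^\textsf{sun}_t u(t)=0$, clears denominators, and solves for the coefficients order by order. A short \texttt{sympy} sketch for the holomorphic solution at $t=0$:

```python
import sympy as sp

t = sp.symbols('t')
N = 5
c = sp.symbols('c0:5')                    # coefficients of the ansatz
u = sum(c[n]*t**n for n in range(N))      # truncated holomorphic ansatz at t=0

# coefficient functions of the sunrise operator L^sun_t
p = 1/(t - 9) + 1/(t - 1) + 1/t
q = sp.Rational(1, 12)/(t - 9) + sp.Rational(1, 4)/(t - 1) - sp.Rational(1, 3)/t

# apply the operator to the ansatz and clear denominators
num = sp.expand(sp.numer(sp.together(sp.diff(u, t, 2) + p*sp.diff(u, t) + q*u)))

# demanding that the coefficients of t^0,...,t^3 vanish (with c0 = 1)
# fixes c1,...,c4
eqs = [num.coeff(t, k).subs(c[0], 1) for k in range(N - 1)]
sol = sp.solve(eqs, list(c[1:]))
print([sol[ck] for ck in c[1:]])          # [1/3, 5/27, 31/243, 71/729]
```

The resulting coefficients reproduce the series $\phi_1(t_0;t)$ quoted in eq.~\eqref{eqn:sunzerolocsolall}; the logarithmic partner is obtained analogously by adding a $\phi_1(t_0;t)\log t$ piece to the ansatz.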
\begin{figure}[h] \begin{center}\includegraphics{sunmonodromy.pdf}\end{center} \caption{Geometry associated to the sunrise differential operator $\mathcal{L}^\textsf{sun}_t$ in eq.~\eqref{eq:sunriseDO}. The coefficient functions have poles at $(t_0,\ldots,t_3)=(0,1,9,\infty)$. The corresponding radii of convergence are shaded in green.} \label{fig:sunmonodromy} \end{figure} The coefficients in $\mathcal{L}^\textsf{sun}_t$ have poles at $(t_0,\ldots,t_3)=(0,1,9,\infty)$, all of which are regular singular points. One can show that all of these points are MUM-points~\cite{Bonisch:2020qmm}, and close to each singular point we can choose a basis of solutions that consists of one holomorphic and one logarithmically-divergent function. For the singular point $t_0$ at the origin, the Frobenius method delivers the two power series solutions whose first terms in the expansion read: \begin{equation}\bsp \label{eqn:sunzerolocsolall} \phi_2(t_0;t)&=\frac{4 t}{9} + \frac{26t^2}{81} + \frac{526t^3}{2187} + \frac{1253t^4}{6561} +\mathcal{O}(t^5) + \phi_1(t_0;t)\log t\,,\\ \phi_1(t_0;t)&=1 + \frac{t}{3} + \frac{5t^2}{27} + \frac{31t^3}{243} + \frac{71t^4}{729} +\mathcal{O}(t^5)\,. \esp\end{equation} It is easy to check that these two local solutions can be used to express the functions $\Psi_1(t)$ and $\Psi_2(t)$ in eq.~\eqref{eq:psi1_def}: \begin{equation}\bsp\label{eq:phi_to_psi} \Psi_1(t)&\, = \frac{2\pi}{\sqrt{3}}\,\phi_1(0;t)\,,\\ \Psi_2(t)&\, =-\frac{i}{\sqrt{3}}\,\phi_2(0;t)-\frac{2i}{\sqrt{3}}\,\log(3)\,\phi_1(0;t)\,. \esp\end{equation} Given the form of the solutions in eq.~\eqref{eqn:sunzerolocsolall}, it is not difficult to derive a representation of the local monodromy: the logarithm is shifted by $2\pi i$ when transported around the pole at $t=t_0=0$ counterclockwise.
Accordingly, one finds \begin{align} \label{eqn:locmonzeropre} \begin{pmatrix}\phi_2(t_0;t) \\ \phi_1(t_0;t)\end{pmatrix}_{\circlearrowleft}=\begin{pmatrix}1 & -2\pi i \\ 0 & 1 \end{pmatrix}\begin{pmatrix}\phi_2(t_0;t) \\ \phi_1(t_0;t)\end{pmatrix}. \end{align} Repeating the calculation for $t_1=1$ and $t_2=9$ shows that the structure of the solutions equals that in eq.~\eqref{eqn:sunzerolocsolall}; only the coefficients are different. This is expected, since all singular points are MUM-points. Accordingly, all local monodromy matrices are equal: \begin{align} \label{eqn:locmonzeron} \rho_{0}(\gamma_{0})=\rho_{1}(\gamma_{1})=\rho_{9}(\gamma_{9})=\rho_{\infty}(\gamma_{\infty})=\begin{pmatrix}1 & -2\pi i \\ 0 & 1 \end{pmatrix}. \end{align} Matching the local solutions according to eq.~\eqref{eqn:calcmatchingmatrices} leads to: \begin{subequations} \begin{align} R_{0,1}&= \left( \begin{array}{cc} -\frac{3 \sqrt{3} \log (3)}{2 \pi } & \frac{24 \sqrt{2} \log (3)-\sqrt{3} \pi ^3}{2 \pi ^2}+\frac{3}{2} i \sqrt{3} \log (3) \\ -\frac{3 \sqrt{3}}{4 \pi } & \frac{6 \sqrt{2}}{\pi ^2}+\frac{3 i \sqrt{3}}{4} \\ \end{array} \right),\\ R_{1,9}&= \left( \begin{array}{cc} -\frac{8 \sqrt{2}}{3 \pi ^2} & \frac{1}{3} \sqrt{\frac{2}{3}} \pi \log ^2(3)+\frac{8 i \sqrt{2}}{3 \pi } \\ -\frac{1}{\sqrt{3} \pi } & \frac{8 i \sqrt{6}+\pi ^2+\sqrt{2} \pi ^2 \log ^2(3)}{24 \sqrt{2}} \\ \end{array} \right)\,,\\ R_{9,\infty}&= \left( \begin{array}{cc} -\frac{3 \pi^2 +3\sqrt{2} \pi^2 \log ^2(3)}{4 \sqrt{2}} & -4 \sqrt{3} \pi \\ -\frac{6 \sqrt{3}}{\pi } & 0 \\ \end{array} \right) \,.
\end{align} \end{subequations} Using eq.~\eqref{eqn:monodromytranslation} one can straightforwardly calculate the monodromy matrices in the basis of solutions $\phi(0;t)$: \begin{align} \rho_0(\gamma_0)&=\left( \begin{array}{cc} 1 & -2 i \pi \\ 0 & 1 \\ \end{array} \right)\,,\qquad \rho_0(\gamma_9)=\left( \begin{array}{cc} 1-\frac{6 i \log (3)}{\pi } & \frac{12 i \log ^2(3)}{\pi } \\ -\frac{3 i}{\pi } & 1+\frac{6 i \log (3)}{\pi } \\ \end{array} \right),\\ \nonumber \rho_0(\gamma_1)&=\left( \begin{array}{cc} 7-\frac{18 i \log (3)}{\pi } & -\frac{4 i (\pi -3 i \log (3))^2}{\pi } \\ -\frac{9 i}{\pi } & -5+\frac{18 i \log (3)}{\pi } \\ \end{array} \right), \rho_0(\gamma_\infty)=\left( \begin{array}{cc} 7-\frac{12 i \log (3)}{\pi } & -\frac{6 i (\pi -2 i \log (3))^2}{\pi } \\ -\frac{6 i}{\pi } & -5+\frac{12 i \log (3)}{\pi } \\ \end{array} \right)\,. \end{align} This form of the monodromy matrices is not very enlightening. However, we can choose a basis of solutions such that the monodromy matrices have integer entries. Using a little algebra one can show that conjugation with the following matrix will bring all monodromy matrices into integral form: \begin{align} \label{eq:sunrise_a_p_rot} a\,\left( \begin{array}{cc} 1 & -2 \log (3)+2{\pi i}\,p \\ 0 & 2 i \pi\,s \\ \end{array} \right)\,,\qquad p\in\mathbb{Z},\,s=\pm 1. \end{align} While the scaling parameter $a$ will drop out in the similarity transformation, choosing different parameters $p$ and $s$ will lead to different choices of generators.
Setting $p=0$ and $s=-1$ for example leads to the following four matrices: \begin{equation}\bsp \label{eqn:sunresult} \tilde{\rho}_0(\gamma_0)=\left( \begin{array}{cc} 1 & 1 \\ 0 & 1 \\ \end{array} \right)\!,&\qquad \tilde{\rho}_0(\gamma_1)=\left( \begin{array}{cc} 1 & 0 \\ -6 & 1 \\ \end{array} \right)\!,\\ \tilde{\rho}_0(\gamma_9)=\left( \begin{array}{cc} 7 & 2 \\ -18 & -5 \\ \end{array} \right)\!, &\qquad \tilde{\rho}_0(\gamma_\infty)=\left( \begin{array}{cc} 7 & 3 \\ -12 & -5 \\ \end{array} \right)\,. \esp\end{equation} Combining the four matrices according to the succession of poles in figure~\ref{fig:sunmonodromy}, one finds $\tilde{\rho}_0(\gamma_{\infty})\tilde{\rho}_0(\gamma_9)\tilde{\rho}_0(\gamma_1)\tilde{\rho}_0(\gamma_0)=(\begin{smallmatrix}1 & 0\\0 & 1\end{smallmatrix})$. Note that the change of basis in eq.~\eqref{eq:phi_to_psi} corresponds to $(a,s,p) = (-i/\sqrt{3},1,0)$. This shows that the monodromy matrices in the basis $(\Psi_2(t),\Psi_1(t))$ have integer entries. The matrices $\tilde{\rho}_0(\gamma_{i})$, $i\in\{0,1,9\}$, generate the monodromy group of $\mathcal{L}_2^{\mathsf{sun}}$. We can see that \begin{equation} \tilde{\rho}_0(\gamma_{i}) = \left(\begin{matrix}1 & \ast \\ 0 &1\end{matrix}\right)\!\!\!\! \mod 6\,. \end{equation} Since this relation holds for the generators, it must hold for all elements of the monodromy group, and so we see that the monodromy group must be a subgroup of $\Gamma_1(6)$ (cf.~eq.~\eqref{eq:Gamam1(N)_def}). We can show that the converse is also true.
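Both the congruence pattern and the product relation are easy to verify by direct matrix arithmetic. A minimal Python sketch (the matrices are copied from eq.~\eqref{eqn:sunresult}, and the membership test implements the condition of eq.~\eqref{eq:Gamam1(N)_def}):

```python
# generators of the sunrise monodromy group, eq. (sunresult)
g0, g1 = [[1, 1], [0, 1]], [[1, 0], [-6, 1]]
g9, ginf = [[7, 2], [-18, -5]], [[7, 3], [-12, -5]]

def mmul(A, B):
    """Product of 2x2 integer matrices."""
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def in_gamma1(M, N=6):
    """det M = 1 and M congruent to (1 *; 0 1) mod N."""
    (a, b), (c, d) = M
    return a*d - b*c == 1 and a % N == 1 and c % N == 0 and d % N == 1

members = all(in_gamma1(M) for M in (g0, g1, g9, ginf))

# transporting around all four poles in succession gives the identity
loop = mmul(ginf, mmul(g9, mmul(g1, g0)))
```

Note that Python's `%` operator returns non-negative representatives, so for instance `(-5) % 6 == 1`, matching the congruence $-5\equiv 1 \bmod 6$.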
A short crosscheck shows that the generators for $\Gamma_1(6)$ delivered\footnote{The SAGE command to get a $2{\times}2$ matrix representation of a minimal set of generators of $\Gamma_1(6)$ reads \texttt{Gamma1(6).generators()} and delivers $(\begin{smallmatrix}1 & 1 \\ 0 & 1\end{smallmatrix}),\,(\begin{smallmatrix}-5 & 1 \\ -6 & 1\end{smallmatrix})$ and $(\begin{smallmatrix}7 & -3 \\ 12 & -5\end{smallmatrix})$.} by SAGE~\cite{SAGE} indeed generate the matrices in eq.~\eqref{eqn:sunresult}. Checking furthermore the independence of the three matrices, and noting that $\Gamma_1(6)$ is generated by three elements, we conclude that the monodromy group of $\mathcal{L}_2^{\mathsf{sun}}$ is indeed $\Gamma_1(6)$. \paragraph{The modular forms for the sunrise graph.} Having identified the monodromy group of the two-loop sunrise integral with the congruence subgroup $\Gamma_1(6)$, it follows from the general discussion in section~\ref{sec:DEQs} that we expect the maximal cuts to define modular forms of weight 1 for $\Gamma_1(6)$. This agrees with the findings of refs.~\cite{Bloch:2013tra,Adams:2017ejb}. To make this explicit, we introduce a modular parametrisation and define the new variables $\tau$ and $q$ by (cf. eq.~\eqref{eq:tau_def_generic}): \begin{equation}\bsp\label{eq:tau_def} \tau = \frac{\Psi_2(t)}{\Psi_1(t)} &\,= \log(t/9) + \frac{4t}{9} + \frac{14t^2}{81} + \mathcal{O}(t^3)\,,\\ q &\, =e^{2\pi i\tau} = \frac{t}{9} + \frac{4t^2}{81} + \mathcal{O}(t^3)\,. \esp\end{equation} Note that we have chosen $\Psi_1(t)$ and $\Psi_2(t)$ such that $\Im\tau>0$ for $t\in X^{\mathsf{sun}}=\mathbb{P}^1_{\mathbb{C}}\setminus\{0,1,9,\infty\}$, and so $\tau\in \mathfrak{H}$. The change of variable from $t$ to $q$ is holomorphic at $t=0$ and can be inverted to express $t$ in terms of $q$.
The result is the well-known expression for the Hauptmodul of $\Gamma_1(6)$ in terms of Dedekind's eta function~\cite{Maier,Bloch:2013tra}: \begin{equation}\label{eq:t_def} t(\tau) = 9\,\frac{\eta(\tau)^4\eta(6\tau)^8}{\eta(2\tau)^8\eta(3\tau)^4} = 9\,q+\mathcal{O}(q^2)\,, \end{equation} where $\eta(\tau)$ denotes Dedekind's eta function: \begin{equation} \eta(\tau) = e^{i\pi\tau/12}\,\prod_{n=1}^{\infty}(1-e^{2\pi in\tau})\,. \end{equation} It is easy to check (e.g., by comparing $q$-expansions with the basis of modular forms given by {\tt Sage}) that the function $h_1(\tau) := \Psi_1(t(\tau))$ defines a modular form of weight 1 for $\Gamma_1(6)$, as expected. In fact, it itself admits an expression in terms of Dedekind eta functions~\cite{Maier,Bloch:2013tra,Adams:2017ejb}: \begin{equation} h_1(\tau) = \frac{2\pi}{\sqrt{3}}\,\frac{\eta(2\tau)^6\eta(3\tau)}{\eta(\tau)^3\eta(6\tau)^2}\,. \end{equation} Note that this is another way to see that the congruence subgroup naturally associated to the two-loop equal-mass sunrise integrals is $\Gamma_1(6)$ rather than $\Gamma_1(12)$, in agreement with the analysis in the literature~\cite{Adams:2017ejb,Frellesvig:2021vdl}. Moreover, we emphasise that nothing in our analysis is specific to the sunrise integral, and the exact same reasoning can be applied to other second-order differential operators describing families of elliptic curves, in particular those that appear in Feynman integral computations. \subsection{The banana family} \paragraph{The third-order operator for the banana graph.} We now repeat the computations of the previous section in the case of the three-loop equal-mass banana integral. Our goal is to determine the monodromy group of the differential operator in eq.~\eqref{eq:L_ban_3}.
The calculation is slightly more complicated than in the two-loop case because the differential operator that annihilates the maximal cuts of $I^{\textsf{ban}}_{1,1,1,1,0,0,0,0,0}(p^2,m^2;2)$ is of order three~\cite{Bloch:2014qca,Bloch:2016izu,Primo:2017ipr}: \begin{equation}\label{eq:L_ban_3} \mathcal{L}^{\textsf{ban},(3)}_x = \partial_x^3+\frac{3(8x-5)}{2(x-1)(4x-1)}\partial_x^2+\frac{4x^2-2x+1}{(x-1)(4x-1)x^2}\partial_x+\frac{1}{x^3(4x-1)}\,, \end{equation} with \begin{equation} x = \frac{4m^2}{p^2}\,. \end{equation} In general, finding the kernel of a high-order operator can be a monumental task, and no closed form for the solution is necessarily known. The kernel of $\mathcal{L}^{\textsf{ban},(3)}_x$ can be determined by noting that it is~\cite{Primo:2017ipr,joyce} the symmetric square of (see subsection~\ref{ssec:ClassModularParametrization}) \begin{equation}\label{eq:L2x} \mathcal{L}^{\textsf{ban},(2)}_x = \partial_x^2 + \frac{8x-5}{2(x-1)(4x-1)}\partial_x-\frac{2x-1}{4x^2(x-1)(4x-1)}\,. \end{equation} The fact that $\mathcal{L}^{\textsf{ban},(3)}_x$ is a symmetric square has a geometric origin: The $l$-loop equal-mass banana integral is associated to a one-parameter family of Calabi-Yau $(l-1)$-folds~\cite{Bloch:2014qca,Bloch:2016izu,Klemm:2019dbm,Bonisch:2020qmm,Bonisch:2021yfw}, and the maximal cuts of the $l$-loop equal-mass banana integral are annihilated by the Picard-Fuchs operator for this family, which has degree $l$. It is expected that the degree-three Picard-Fuchs operator of a one-parameter family of Calabi-Yau two-folds (also known as K3 surfaces) is always the symmetric square of a Picard-Fuchs operator describing a one-parameter family of elliptic curves, cf., e.g., ref.~\cite{Doran:1998hm}. If we want to apply the results of section~\ref{sec:DEQs}, in particular Theorem~\ref{thm:section2}, we need all singular points of the second-order operator to be MUM-points. 
This, however, is not the case here: only the singularities at $x=0$ and $x=\infty$ are MUM-points. We therefore perform the change of variables \begin{equation}\label{eq:change_of_vars} x(t)=\frac{-4\,t}{(t-1)(t-9)}\,. \end{equation} After this change of variables, one can see that $\textrm{Sol}(\mathcal{L}^{\textsf{ban},(2)}_x)$ is spanned by $\sqrt{t}\Psi_1(t)$ and $\sqrt{t}\Psi_2(t)$, with $(\Psi_2(t),\Psi_1(t))$ defined in eq.~\eqref{eq:phi_to_psi}, i.e., they form a basis for $\textrm{Sol}(\mathcal{L}_t^{\textsf{sun}})$. It follows that \begin{equation} \textrm{Sol}(\mathcal{L}^{\textsf{ban},(3)}_x) = \mathbb{C}\,t\Psi_1(t)^2 \oplus \mathbb{C}\,t\Psi_1(t)\Psi_2(t) \oplus\mathbb{C}\,t\Psi_2(t)^2\,. \end{equation} In other words, we see that the solution space has the structure of the solution space of a symmetric square (up to the overall factor of $t$). The change of variables in eq.~\eqref{eq:change_of_vars} is 2-to-1, and the four MUM-points $t\in \{0,1,9,\infty\}$ of $\mathcal{L}_t^{\textsf{sun}}$ are mapped to the two MUM-points $x\in \{0,\infty\}$ of $\mathcal{L}^{\textsf{ban},(2)}_x$. The upshot is that after the change of variables in eq.~\eqref{eq:change_of_vars}, Theorem~\ref{thm:section2} applies, and we expect the three-loop equal-mass banana integrals to be expressible in terms of iterated integrals of meromorphic modular forms for $\Gamma_1(6)$. We will investigate this in detail in section~\ref{sec:bananameromorphic}. In the remainder of this section we analyse the monodromy group associated to the banana integral in more detail. \paragraph{The monodromy group.} There are four regular singular points, and the coefficient functions in $\mathcal{L}_x^{\mathsf{ban},(3)}$ have poles at $(x_0,x_1,x_2,x_3)=(0,1/4,1,\infty)$ (see figure~\ref{fig:banana}). \begin{figure} \begin{center}\includegraphics{banmonodromy.pdf}\end{center} \caption{Geometry of the banana differential operator in eq.~\eqref{eq:L_ban_3}.
The coefficient functions have poles at $(x_0,x_1,x_2,x_3)=(0,1/4,1,\infty)$. The corresponding radii of convergence are shaded in green.} \label{fig:banana} \end{figure} The singular point $x_0=0$ is a MUM-point, and the Frobenius method delivers three solutions, which read: \begin{equation}\bsp \label{eqn:banzerolocsolall} \chi_3(x_0;x)&=x \left(\frac{9 x^2}{4}+\frac{135 x^3}{16}+\frac{7089 x^4}{256}+ \mathcal{O}(x^5)\right)+2 \chi_2(x_0;x) \log (x)\\ &\,+\chi_1(x_0;x) \log ^2(x)\,,\\ \chi_2(x_0;x)&=x \left(\frac{3 x}{2}+\frac{57 x^2}{16}+\frac{73 x^3}{8}+\frac{13081 x^4}{512} + \mathcal{O}(x^5)\right)+\chi_1(x_0;x) \log (x)\,,\\ \chi_1(x_0;x)&=x \left(1 + x + \frac{7 x^2}{4} + 4 x^3 + \frac{679 x^4}{64} + \mathcal{O}(x^5)\right)\,. \esp\end{equation} This basis is related to the basis of ref.~\cite{Primo:2017ipr} (see also eq.~\eqref{eq:H1_to_Psi1}) via a constant rotation \begin{align} \label{eqn:Brot} \begin{pmatrix}\chi_3(x_0;x)\\\chi_2(x_0;x)\\\chi_1(x_0;x)\end{pmatrix} = B \begin{pmatrix}I_1(x)\\J_1(x)\\H_1(x)\end{pmatrix}, \end{align} with \begin{align} B=\left( \begin{array}{ccc} \frac{4}{3} & -\frac{8 \log (2)}{\pi }+\frac{4 i}{3} & -1+\frac{4 \log ^2(2)}{\pi ^2}-\frac{4 i \log (2)}{\pi } \\ 0 & -\frac{2}{\pi } & \frac{2 \log (2)}{\pi ^2}-\frac{i}{\pi } \\ 0 & 0 & \frac{1}{\pi ^2} \\ \end{array} \right)\,. \end{align} In eq.~\eqref{eq:H1_to_Psi1} we also show how the functions $(H_1(x), I_1(x), J_1(x))$ are related to the maximal cuts of the two-loop sunrise integral. The hierarchy of logarithms in eq.~\eqref{eqn:banzerolocsolall} allows us to read off the monodromy matrix:\footnote{Unlike for the sunrise above, this time we are not going to normalise the matrix right away, for reasons that will become clear below.} \begin{align} \label{eqn:locmonzero} \rho_0(\gamma_{0})= \left( \begin{array}{ccc} 1 & -4 i \pi & -4 \pi ^2 \\ 0 & 1 & -2 i \pi \\ 0 & 0 & 1 \\ \end{array} \right)\,.
\end{align} The structure of the local solutions around the poles at $x_1=1/4$ and $x_2=1$ is different. For the singular point $x_1=1/4$, the Frobenius method delivers the following functions: \begin{equation}\bsp \label{eqn:banquarterlocsol} \chi_3(x_1;x)&=1+4 \left(x-\nicefrac{1}{4}\right)+\frac{64}{45} \left(x-\nicefrac{1}{4}\right)^3-\frac{512}{945} \left(x-\nicefrac{1}{4}\right)^4+\mathcal{O}\left(\left(x-\nicefrac{1}{4}\right)^5\right)\,,\\ \chi_2(x_1;x)&= \sqrt{x-\nicefrac{1}{4}}\left(1+2 \left(x-\nicefrac{1}{4}\right)-2 \left(x-\nicefrac{1}{4}\right)^2+\frac{548}{105} \left(x-\nicefrac{1}{4}\right)^3\right.\\ &\qquad\qquad\qquad\qquad\qquad\qquad\left.-\frac{1306}{105} \left(x-\nicefrac{1}{4}\right)^4+\mathcal{O}\left(\left(x-\nicefrac{1}{4}\right)^5\right)\right)\,,\\ \chi_1(x_1;x)&= \left(x-\nicefrac{1}{4}\right)\left(1+\frac{4}{3} \left(x-\nicefrac{1}{4}\right)-\frac{16}{9} \left(x-\nicefrac{1}{4}\right)^2+\frac{1088}{189} \left(x-\nicefrac{1}{4}\right)^3\right.\\ &\qquad\qquad\qquad\qquad\qquad\qquad\left. -\frac{8704}{567} \left(x-\nicefrac{1}{4}\right)^4+\mathcal{O}\left(\left(x-\nicefrac{1}{4}\right)^5\right)\right)\,. \esp\end{equation} The structure of the solution space close to $x_2=1$ is similar. We see that the local exponents for $x_1$ and $x_2$ are $0,1/2,1$. Hence, the singular points $x_1$ and $x_2$ are not MUM-points. However, the corresponding monodromy matrices can also be read off immediately in this case: while the power series in $\chi_3$ and $\chi_1$ have trivial monodromy, the square root in $\chi_2(x_1;x)$ acquires a minus sign when transported around the singularity. Thus one finds: \begin{align} \label{eqn:locmonquarterone} \rho_{\nicefrac{1}{4}}(\gamma_{\nicefrac{1}{4}})=\rho_1(\gamma_{1})=\begin{pmatrix}1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1\end{pmatrix}. \end{align} The singular point $x_3=\infty$ is a MUM-point.
Substituting $x\to\frac{1}{y}$ in $\mathcal{L}^{\textsf{ban},(3)}_x$ leads to the differential operator \begin{equation} \mathcal{L}^{\textsf{ban},(3)}_{1/y} = -y^3 \partial^3_y-\frac{3 (y (4 y-15)+8) y^2 }{2 (y-4) (y-1)}\partial^2_y-\frac{(y (7 y-17)+4) y }{(y-4) (y-1)}\partial_y-\frac{y}{y-4}\,. \end{equation} Since $y=0$ is a MUM-point, the structure of the solutions has a logarithmic hierarchy again, and the monodromy matrix equals the one in eq.~\eqref{eqn:locmonzero}: $\rho_\infty(\gamma_{\infty})=\rho_0(\gamma_0)$. Again, the matching matrices can be calculated using eq.~\eqref{eqn:calcmatchingmatrices}. However, while it was comparably easy to infer the analytic values of the matrix entries in the sunrise case, the banana matrices require rather high orders in the expansion of the solutions. Using expansions up to order 120 and the PSLQ~\cite{PSLQ} algorithm, as well as eq.~\eqref{eqn:monodromytranslation}, one obtains the monodromy matrices ($\mathrm{L}_2=\log(2)$): \begin{small} \begin{align} \label{eqn:banresult} \rho_0(\gamma_0)&= \left( \begin{array}{ccc} 1 & -4 i \pi & -4 \pi ^2 \\ 0 & 1 & -2 i \pi \\ 0 & 0 & 1 \\ \end{array} \right)\!,\\ \nonumber \rho_0(\gamma_{\nicefrac{1}{4}}) & =\frac{1}{\pi^2} \left( \begin{array}{ccc} 12 \mathrm{L}_2^2 & 4 \mathrm{L}_2 \left(\pi ^2-12 \mathrm{L}_2^2\right) & \frac{1}{3} \left(\pi ^2-12 \mathrm{L}_2^2\right)^2 \\ 6 \mathrm{L}_2 & \pi ^2-24 \mathrm{L}_2^2 & -2 \mathrm{L}_2 \left(\pi ^2-12 \mathrm{L}_2^2\right) \\ 3 & -12 \mathrm{L}_2 & 12 \mathrm{L}_2^2 \\ \end{array} \right)\!,\,\\ \nonumber \rho_0(\gamma_1)&=\frac{1}{\pi^2} \left( \begin{array}{ccc} 3 (4 \mathrm{L}_2{+}i \pi )^2 & -4 (4 \mathrm{L}_2{+}i \pi ) \left(12 \mathrm{L}_2^2{+}6 i \pi \mathrm{L}_2{-}\pi ^2\right) & \frac{4}{3} \left(-12 \mathrm{L}_2^2-6 i \pi \mathrm{L}_2+\pi ^2\right)^2 \\ 6 (4 \mathrm{L}_2{+}i \pi ) & -96 \mathrm{L}_2^2-48 i \pi \mathrm{L}_2+7 \pi ^2 & 2 (4 \mathrm{L}_2{+}i \pi ) \left(12 \mathrm{L}_2^2{+}6 i \pi \mathrm{L}_2{-}\pi
^2\right) \\ 12 & -12 (4 \mathrm{L}_2+i \pi ) & 3 (4 \mathrm{L}_2+i \pi )^2 \\ \end{array} \right)\!,\,\\ \nonumber \rho_0(\gamma_\infty)&=\frac{1}{\pi^2} \left( \begin{array}{ccc} 4 (2 \pi -3 i \mathrm{L}_2)^2 & -12 i (\pi -2 i \mathrm{L}_2)^2 (2 \pi -3 i\mathrm{L}_2) & -9 (\pi -2 i \mathrm{L}_2)^4 \\ -18 \mathrm{L}_2-12 i \pi & 72\mathrm{L}_2^2+72 i \pi \mathrm{L}_2-17 \pi ^2 & 6 (3 \mathrm{L}_2+i \pi ) (\pi -2 i\mathrm{L}_2)^2 \\ -9 & 12 (3 \mathrm{L}_2+i \pi ) & 4 (\pi -3 i\mathrm{L}_2)^2 \\ \end{array} \right). \end{align} \end{small} We can check that $\rho_0(\gamma_\infty)=(\rho_0(\gamma_1)\rho_0(\gamma_{\nicefrac{1}{4}})\rho_0(\gamma_0))^{-1}$. These matrices generate the monodromy group associated to the banana differential operator $\mathcal{L}^{\textsf{ban},(3)}_x$ as a subgroup of $\textrm{GL}_3(\mathbb{C})$. It is possible to choose a basis for the solution space so that the monodromy matrices have integer entries. However, this will not be needed in our case. Instead, we want to use the fact that $\mathcal{L}^{\textsf{ban},(3)}_x$ is a symmetric square to identify the image of the monodromy group in $\textrm{GL}_3(\mathbb{C})$ as arising from a group of $2\times2$ matrices. More precisely, we are looking for a set of $2\times2$ matrices such that the action of the monodromy matrix $\rho_0(\gamma_0)$ on a $3$-dimensional solution vector equals the action of the $2\times 2$-representation on the solution vector $\begin{pmatrix}\Psi_2\\\Psi_1\end{pmatrix}$. Since the relation between the two- and three-dimensional solution spaces is most transparent from eq.~\eqref{eq:H1_to_Psi1}, we prefer to work here with the basis of the solution space of $\mathcal{L}_x^{\mathsf{ban},(3)}$ from eq.~\eqref{eq:H1_to_Psi1}.
Due to the change of basis we need to conjugate the monodromy matrices in eq.~\eqref{eqn:banresult} with the matrix $B$ from eq.~\eqref{eqn:Brot}, resulting in \begin{align} \label{eqn:BMon} &\tilde{\rho}_0(\gamma_0)=\left( \begin{array}{ccc} 1 & 6 i & -5 \\ 0 & 1 & i \\ 0 & 0 & 1 \\ \end{array} \right)\!,\quad\tilde{\rho}_0(\gamma_{\nicefrac{1}{4}})=\left( \begin{array}{ccc} 1 & 0 & 0 \\ -2 i & 3 & 2 i \\ 4 & 4 i & -3 \\ \end{array} \right)\!,\\ &\tilde{\rho}_0(\gamma_1)=\left( \begin{array}{ccc} -3 & -10 i & 7 \\ -12 i & 31 & 21 i \\ 16 & 40 i & -27 \\ \end{array} \right)\,.\nonumber \end{align} The comparison is made component by component; here, for example, is the equation for the third component: \begin{equation} \left(\underbrace{B^{-1}\rho_0(\gamma_0) B}_{\tilde{\rho}_0(\gamma_0)} \begin{pmatrix}I_1(x)\\J_1(x)\\H_1(x)\end{pmatrix}\right)_{\!\!3}\stackrel{!}{=}-\frac{1}{2}\,t\,\left(\Psi_1(t)^2\right)_{\circlearrowleft}\,, \end{equation} where we use the following ansatz for the monodromy matrices: \begin{equation} \label{eqn:tbtansatz} \begin{pmatrix}\Psi_2(t)\\\Psi_1(t)\end{pmatrix}_{\circlearrowleft} = \begin{pmatrix}c_{11} & c_{12} \\ c_{21} & c_{22}\end{pmatrix}\begin{pmatrix}\Psi_2(t)\\\Psi_1(t)\end{pmatrix}\,. \end{equation} Plugging in a couple of numerical values for $t$, chosen so as to place $x$ in the corresponding region, allows one to determine the values $c_{ij}$ in the ansatz in eq.~\eqref{eqn:tbtansatz} for each generator. Finally, one finds the following representations for the generators of the monodromy group: \begin{equation}\bsp \label{eqn:banresult2} \mathcal{R}_0:=\rho_0^\mathsf{2\times2}(\gamma_0) &=\begin{pmatrix}1 & -1 \\ 0 & 1 \end{pmatrix},\quad \mathcal{R}_{\nicefrac{1}{4}}:=\rho_0^\mathsf{2\times2}(\gamma_{\nicefrac{1}{4}}) =-i{\sqrt{3}}\begin{pmatrix}1 & 2/3 \\ -2 & -1 \end{pmatrix},\\ \mathcal{R}_1:=\rho_0^\mathsf{2\times2}(\gamma_1) &=-i{\sqrt{3}}\begin{pmatrix}1 & 1/3 \\ -4 & -1 \end{pmatrix}\,.
\esp\end{equation} These three matrices generate a subgroup $\Gamma^{\mathsf{ban},(2)}$ of $\textrm{GL}_2(\mathbb{C})$, which is closely related to the monodromy group $\Gamma^{\mathsf{ban},(3)}\subset\textrm{GL}_3(\mathbb{C})$ generated by the matrices $\tilde{\rho}_0(\gamma_0)$, $\tilde{\rho}_0(\gamma_{\nicefrac{1}{4}})$ and $\tilde{\rho}_0(\gamma_1)$ in eq.~\eqref{eqn:BMon}. More precisely, consider the map $\sigma: \textrm{GL}_2(\mathbb{C}) \to \textrm{GL}_3(\mathbb{C})$ defined by \begin{equation} \sigma\left(\begin{smallmatrix}a& b\\c& d\end{smallmatrix}\right) = \frac{1}{3}\,\left( \begin{smallmatrix} (a+c) (3 a+c) & 2 i \left(6 a^2-9 a b+8 a c-6 a d-6 b c+2 c^2-3 c d\right) & -3 (3 a-3 b+c-d) (a-b+c-d) \\ i c (a+c) & -4 a c+3 a d+3 b c-4 c^2+6 c d & -3 i (c-d) (a-b+c-d) \\ -c^2 & -2 i c (2 c-3 d) & 3 (c-d)^2 \\ \end{smallmatrix} \right)\,. \end{equation} One can show that $\sigma$ is a group homomorphism with kernel $\textrm{Ker }\sigma = \mathbb{Z}_2$ such that \begin{equation} \tilde\rho_0 = \sigma\circ \rho_0^{2\times2}\,. \end{equation} Together with $-\mathds{1}\notin\Gamma^{\mathsf{ban},(2)}$, it follows that $\sigma(\Gamma^{\mathsf{ban},(2)}) = \Gamma^{\mathsf{ban},(3)}$, and so $\Gamma^{\mathsf{ban},(2)}$ and $\Gamma^{\mathsf{ban},(3)}$ are isomorphic. Let us discuss the structure of the group $\Gamma^{\mathsf{ban},(2)}$ in a bit more detail. First, one can check (e.g., by comparing to {\tt Sage}) that $\mathcal{R}_0$, $\mathcal{R}_0^{-1}\mathcal{R}_{1}\mathcal{R}_{\nicefrac{1}{4}}\mathcal{R}_0$, $\mathcal{R}_0^{-1}\mathcal{R}_{\nicefrac{1}{4}}\mathcal{R}_0\mathcal{R}_{\nicefrac{1}{4}}\mathcal{R}_0$ and $-\mathds{1}$ are generators of $\Gamma_0(6)$. Note that $\mathcal{R}_1$ and $\mathcal{R}_{\nicefrac{1}{4}}$ are self-inverse. 
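The stated properties of $\sigma$ and of the generators can be verified symbolically. A SymPy sketch (an illustration of the checks, not a proof; the matrices are those of eqs.~\eqref{eqn:banresult2} and \eqref{eqn:BMon}):

```python
import sympy as sp

s3 = sp.sqrt(3)

def sigma(M):
    """The GL2 -> GL3 map defined in the text."""
    a, b, c, d = M[0, 0], M[0, 1], M[1, 0], M[1, 1]
    return sp.Matrix([
        [(a + c)*(3*a + c),
         2*sp.I*(6*a**2 - 9*a*b + 8*a*c - 6*a*d - 6*b*c + 2*c**2 - 3*c*d),
         -3*(3*a - 3*b + c - d)*(a - b + c - d)],
        [sp.I*c*(a + c),
         -4*a*c + 3*a*d + 3*b*c - 4*c**2 + 6*c*d,
         -3*sp.I*(c - d)*(a - b + c - d)],
        [-c**2, -2*sp.I*c*(2*c - 3*d), 3*(c - d)**2],
    ]) / 3

ex = lambda M: M.applyfunc(sp.expand)

# generators of Gamma^{ban,(2)}, eq. (banresult2)
R0 = sp.Matrix([[1, -1], [0, 1]])
R14 = -sp.I*s3*sp.Matrix([[1, sp.Rational(2, 3)], [-2, -1]])
R1 = -sp.I*s3*sp.Matrix([[1, sp.Rational(1, 3)], [-4, -1]])

inv_ok = ex(R14**2) == sp.eye(2) and ex(R1**2) == sp.eye(2)   # self-inverse
ker_ok = ex(sigma(-sp.eye(2))) == sp.eye(3)                    # -1 in Ker(sigma)
hom_ok = ex(sigma(R1*R14) - sigma(R1)*sigma(R14)) == sp.zeros(3, 3)
# sigma maps R_0 to the 3x3 monodromy matrix of eq. (BMon)
img_ok = ex(sigma(R0)) == sp.Matrix([[1, 6*sp.I, -5], [0, 1, sp.I], [0, 0, 1]])
```

Replacing the pair $(\mathcal{R}_1,\mathcal{R}_{\nicefrac{1}{4}})$ by other products of generators provides further spot checks of the multiplicativity of $\sigma$.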
We thus see that, while $\Gamma^{\mathsf{ban},(2)}$ does not contain $\Gamma_0(6)$ as a subgroup, it does contain\footnote{The notation $\overline{\Gamma}$ has been defined at the end of subsection~\ref{ssec:sunrisefamily}.} $\overline{\Gamma_0(6)}$. Moreover, one can easily check that \begin{equation}\label{eq:Rel_to_Verrill} \overline{\Gamma^{\mathsf{ban},(2)}}\simeq \overline{\Gamma_0(6)^{+3}}\,, \end{equation} with \begin{equation}\bsp \Gamma_0(6)^{+3} &\,= \left\{\left(\begin{smallmatrix} a& b\\ 6c &d\end{smallmatrix}\right),\sqrt{3}\left(\begin{smallmatrix} a& b/3\\ 2c &d\end{smallmatrix}\right)\in\textrm{SL}_2(\mathbb{R})\big| a,b,c,d\in\mathbb{Z}\right\}\\ &\,=\Gamma_0(6) \cup (i\mathcal{R}_{\nicefrac{1}{4}})\Gamma_0(6)\,. \esp\end{equation} Next, let us discuss how modular forms and modular functions make an appearance here. We define (cf.~eq.~\eqref{eq:tau_def}): \begin{equation}\bsp \tau &\,= i\frac{J_1(x)}{H_1(x)}-1 = \frac{\Psi_2(t)}{\Psi_1(t)}\,. \esp\end{equation} We can invert this relation to express $x$ in terms of $\tau$~\cite{verrill1996}: \begin{equation} x(\tau) = -4\,\left(\frac{\eta(2\tau)\eta(6\tau)}{\eta(\tau)\eta(3\tau)}\right)^6\,. \end{equation} We also define:\footnote{Our definition of $\varpi(\tau)$ differs from the one used by Verrill in ref.~\cite{verrill1996} by a factor $(2\pi i)^2$, i.e., $\varpi^{\textrm{our}}(\tau) = (2\pi i)^2\varpi^{\textrm{Ver.}}(\tau)$.} \begin{equation} \varpi(\tau) = H_1(x(\tau)) = \frac{\eta(2\tau)^4\eta(6\tau)^4}{\eta(\tau)^2\eta(3\tau)^2}\,. \end{equation} Let us discuss the modular properties of $x(\tau)$ and $\varpi(\tau)$.
One finds that $x(\tau)$ is a modular function and $\varpi(\tau)$ is a modular form of weight two for $\Gamma_0(6)$. Moreover, we find \begin{equation} x(\mathcal{R}_{\nicefrac{1}{4}}\cdot \tau) = x(\mathcal{R}_{1}\cdot \tau) = x(\tau)\,, \end{equation} which shows that $x(\tau)$ is a modular function for the monodromy group $\Gamma^{\mathsf{ban},(3)}\simeq\Gamma^{\mathsf{ban},(2)}$, as expected. In addition, since $\Gamma^{\mathsf{ban},(2)}$ acts via M\"obius transformations through $\overline{\Gamma^{\mathsf{ban},(2)}}$, eq.~\eqref{eq:Rel_to_Verrill} implies that $x(\tau)$ is also a modular function for $\Gamma_0(6)^{+3}$. Similarly, we find: \begin{equation}\bsp\label{eq:modular_varpi} \varpi(\mathcal{R}_{\nicefrac{1}{4}}\cdot \tau) &\,= -3(2\tau+1)^2\,\varpi(\tau)\,,\\ \varpi(\mathcal{R}_{1}\cdot \tau) &\,= -3(4\tau+1)^2\,\varpi(\tau)\,. \esp\end{equation} Accordingly, $\varpi(\tau)$ is a modular form of weight two for the monodromy group $\Gamma^{\mathsf{ban},(3)}\simeq\Gamma^{\mathsf{ban},(2)}$, again as expected. However, $\varpi(\tau)$ is not a modular form for $\Gamma_0(6)^{+3}$, which would require the factor of automorphy to be $+3(c\tau+d)^2$ in eq.~\eqref{eq:modular_varpi}. \subsection*{Acknowledgments} This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No.~724638). JB is grateful to Pietro Longhi for discussions. Furthermore, the authors would like to thank Helena Verrill for correspondence. \section{Review of (quasi-)modular forms and their iterated integrals} \label{sec:modular} The previous section has shown that there are certain classes of Feynman integrals whose differential equations admit a modular parametrisation.
This is to say that their maximal cuts in $D=d_0$ dimensions are linear combinations of derivatives of modular forms multiplied by powers of $\tau$, and the higher orders in $\epsilon$ of the maximal cuts and the full uncut integral can be expressed in terms of iterated integrals of such functions. The aim of this section is to briefly review the theory of holomorphic modular forms and their iterated integrals. In the next section we will extend this to include iterated integrals of meromorphic modular forms. \subsection{The modular group $\mathrm{SL}_2(\bZ)$ and its subgroups} \label{sec:modular_review} We start by reviewing some general facts about (certain) subgroups of the modular group $\mathrm{SL}_2(\bZ)$. For a review, see ref.~\cite{diamond2005first}, and references therein. Let $\Gamma$ denote a subgroup of $\mathrm{SL}_2(\bZ)$ of finite index, i.e., the quotient $\Gamma\backslash\mathrm{SL}_2(\bZ)$ is finite (which means, intuitively, that we can cover $\mathrm{SL}_2(\bZ)$ by a finite number of copies of $\Gamma$). In the following we denote the index of $\Gamma$ in $\mathrm{SL}_2(\bZ)$ by \begin{equation} [\mathrm{SL}_2(\bZ):\Gamma] = \left|\Gamma\backslash\mathrm{SL}_2(\bZ)\right| < \infty\,. \end{equation} An important example of finite-index subgroups are the \emph{congruence subgroups of level $N$}, with $N$ a positive integer, defined as those subgroups $\Gamma$ that contain the principal congruence subgroup $\Gamma(N) = \{\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right) \in \mathrm{SL}_2(\bZ) \, :\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right) = \left(\begin{smallmatrix}1&0\\0&1\end{smallmatrix}\right)\bmod N\}$.
An important example of congruence subgroups are the groups \begin{equation}\label{eq:Gamam1(N)_def} \Gamma_1(N) := \{\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right) \in \mathrm{SL}_2(\bZ) \, :\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right) = \left(\begin{smallmatrix}1&*\\0&1\end{smallmatrix}\right)\bmod N\}\,. \end{equation} In the following we will keep the discussion general, and we do not restrict ourselves to congruence subgroups, unless specified otherwise. The modular group and its subgroups naturally act on the extended upper half-plane $\mathfrak{H}^* = \mathfrak{H}\cup \mathbb{P}^1_\mathbb{Q}$ by M\"obius transformations via \begin{equation} \label{eq:modulartrafo} \gamma \cdot \tau = \frac{a\tau + b}{c\tau + d},\quad \gamma = \left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)\in\mathrm{SL}_2(\bZ). \end{equation} $\Gamma$ acts separately on $\mathfrak{H}$ and $\mathbb{P}^1_\mathbb{Q}$, and decomposes $\mathbb{P}^1_\mathbb{Q}$ into disjoint orbits, called \emph{cusps of} $\Gamma$:\footnote{By abuse of language, one also often calls the elements of $\mathbb{P}^1(\mathbb{Q})$ cusps.} \begin{equation} S_\Gamma:=\Gamma\backslash\mathbb{P}^1_\mathbb{Q}\,. \end{equation} The number of cusps of $\Gamma$ is always finite and we denote it by $\epsilon_{\infty}(\Gamma) := \#S_{\Gamma} < \infty$. The stabilizer of a cusp $s\in\mathbb{P}^1_\mathbb{Q}$ is generated by an element of the form $\pm T^{h} = \pm\left(\begin{smallmatrix} 1& h\\0&1\end{smallmatrix}\right)$, for some integer $h$ called the \emph{width} of the cusp. In case the stabilizer of the cusp $s$ contains an element $-T^h\in\Gamma_{s}$, the cusp is called \emph{irregular}, otherwise it is \emph{regular}. The numbers of regular and irregular cusps are denoted by $\epsilon_r(\Gamma)$ and $\epsilon_i(\Gamma)$ respectively. Note that for every cusp $s\in \mathbb{P}^1_\mathbb{Q}$ there exists a $\gamma\in\mathrm{SL}_2(\bZ)$ such that $\gamma \cdot s=i\infty$.
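The last statement is constructive: if $s=p/q$ in lowest terms, any B\'ezout relation $xp+yq=1$ yields $\gamma=\left(\begin{smallmatrix}x & y\\ -q & p\end{smallmatrix}\right)\in\mathrm{SL}_2(\bZ)$ with $\gamma\cdot s = i\infty$. A small Python sketch of this construction (our illustration):

```python
from math import gcd

def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b)*y

def cusp_to_infinity(p, q):
    """A matrix (a, b; c, d) in SL2(Z) sending the cusp p/q to i*infinity."""
    assert gcd(p, q) == 1, "p/q must be in lowest terms"
    _, x, y = ext_gcd(p, q)          # x*p + y*q == 1
    return x, y, -q, p               # det = x*p + y*q = 1

# example: gamma.(1/2) = (a*1 + b*2)/(c*1 + d*2) = 1/0, i.e. i*infinity
a, b, c, d = cusp_to_infinity(1, 2)
```

The determinant is exactly the B\'ezout relation, so the construction works for any cusp in lowest terms.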
A point $\tau\in\mathfrak{H}$ is called an \emph{elliptic point for $\Gamma$} if $\tau$ has a non-trivial stabilizer group in $\Gamma$: \begin{equation} \Gamma_{\tau} := \{\gamma\in\Gamma: \gamma\cdot \tau = \tau\}\,. \end{equation} One can show that $\Gamma_\tau$ is always a finite cyclic group. If $\Gamma_{\tau}$ is cyclic of order $n$, then $\tau$ is called an elliptic point of order $n$. $\mathrm{SL}_2(\bZ)=\Gamma(1)$ has exactly two elliptic points, $i$ and $\rho := e^{2\pi i/3}$, in its fundamental domain $\mathcal{D}_1\cup \mathcal{D}_2\cup \mathcal{D}_3$, with \begin{equation}\bsp \mathcal{D}_1 &\,:= \Big\{\tau\in \mathfrak{H}: |\tau|>1\textrm{ and } |\Re\tau|<{\frac{1}{2}}\Big\}\,,\\ \mathcal{D}_2 &\,:= \Big\{\tau\in \mathfrak{H}: |\tau|\ge1\textrm{ and } \Re\tau={-\frac{1}{2}}\Big\}\,,\\ \mathcal{D}_3 &\,:= \Big\{\tau\in \mathfrak{H}: |\tau|=1\textrm{ and } {-\frac{1}{2}}<\Re\tau\le 0\Big\}\,. \esp\end{equation} They are of order two and three respectively, \begin{equation} \Gamma_i \simeq \mathbb{Z}/2\mathbb{Z} \text{~~~and~~~} \Gamma_{\rho} \simeq \mathbb{Z}/3\mathbb{Z}\,. \end{equation} Every elliptic point is $\mathrm{SL}_2(\bZ)$-equivalent to either $i$ or $\rho$. The numbers of elliptic points of order two and three of $\Gamma$ will be denoted by $\epsilon_2(\Gamma)$ and $\epsilon_3(\Gamma)$ respectively. The principal congruence subgroups $\Gamma(N)$ have no elliptic points for $N>1$. The subgroups $\Gamma_1(N)$ have no elliptic points for $N>3$, while $\Gamma_1(3)$ has no elliptic points of order two and $\Gamma_1(2)$ has no elliptic points of order three. \subsection{Modular curves} The space of orbits $X_{\Gamma} := \Gamma\backslash\mathfrak{H}^*$ can be equipped with the structure of a compact Riemann surface, called the \emph{modular curve for $\Gamma$}. 
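The statements about the elliptic points of $\mathrm{SL}_2(\bZ)$ can be checked numerically; in this toy verification of ours, $S$ fixes $i$ and $ST$ fixes $\rho$:

```python
# Toy numerical check: the elements S and ST of SL_2(Z) fix the elliptic
# points i and rho = exp(2*pi*i/3), respectively.
import cmath

def mobius(mat, tau):
    (a, b), (c, d) = mat
    return (a * tau + b) / (c * tau + d)

S  = ((0, -1), (1, 0))   # tau -> -1/tau
ST = ((0, -1), (1, 1))   # tau -> -1/(tau + 1)

rho = cmath.exp(2j * cmath.pi / 3)

assert abs(mobius(S, 1j) - 1j) < 1e-12
assert abs(mobius(ST, rho) - rho) < 1e-12
```

(As matrices, $S$ and $ST$ have orders four and six; the orders two and three quoted above are understood modulo $\pm\mathbb{1}$.)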
The genus of $\Gamma$ is defined as the genus of $X_{\Gamma}$ and is related to the numbers of cusps and elliptic points of $\Gamma$: \begin{equation}\label{eq:genus} g = 1+d_{\Gamma}-\frac{\epsilon_2(\Gamma)}{4}-\frac{\epsilon_3(\Gamma)}{3}-\frac{\epsilon_{\infty}(\Gamma)}{2}\,, \end{equation} where we introduced the shorthand $d_{\Gamma} := \frac{[\mathrm{SL}_2(\bZ):\{\pm1\}\Gamma]}{12}$. In the remainder of this paper we are only concerned with the case where $\Gamma$ has genus zero. It is known that $\Gamma_1(N)$ has genus zero for $N\le10$ and $N=12$, and that $\Gamma(N)$ has genus zero for $N\le5$. In particular, the group $\Gamma_1(6)$ relevant to the equal-mass sunrise and banana graphs has genus zero. A complete list of all genus zero subgroups can be found in refs.~\cite{YifanYang,allgenus0}. In the following it will be important to know how we can define local coordinate charts on the Riemann surface $X_{\Gamma}$. We recall that $X_{\Gamma}$ is an orbifold, and the points of $X_{\Gamma}$ are equivalence classes $[\tau] = \{\gamma\cdot \tau:\gamma\in\Gamma\}$. Let $P=[\tau_0]\in X_{\Gamma}$. To define a local coordinate $z$ such that $z(P)=0$ in a neighbourhood of $P$, we need to distinguish three cases: \begin{itemize} \item If $\tau_0$ is an elliptic point of order $h$, a local coordinate is defined by $z=(\tau-\tau_0)^h$. \item If $\tau_0$ is a cusp of width $h$ and $\gamma\in\mathrm{SL}_2(\bZ)$ is such that $\gamma\cdot \tau_0=i\infty$, a local coordinate is defined by $z= e^{2\pi i(\gamma\cdot\tau)/h'}$, with $h'=h$ if $\tau_0$ is a regular cusp, and $h'=2h$ otherwise. \item If $\tau_0$ is neither a cusp nor an elliptic point, $z=\tau-\tau_0$ is a good local coordinate. 
\end{itemize} The field of meromorphic functions on $X_{\Gamma}$ is isomorphic to the field $\mathcal{M}_0(\Gamma)$ of modular functions, i.e., meromorphic functions $f:\mathfrak{H}^*\to \mathbb{C}$ that satisfy \begin{equation}\label{eq:modular_function} f\left(\frac{a\tau+b}{c\tau+d}\right) = f(\tau)\,,\qquad \forall \left(\begin{smallmatrix} a& b\\c& d\end{smallmatrix}\right)\in \Gamma\,. \end{equation} For every meromorphic function $f$, we denote by $\nu_P(f)\in \mathbb{Z}$ the \emph{order of vanishing at $P$}, i.e., $\nu_P(f)>0$ ($<0$) if $f$ has a zero (pole) of order $|\nu_P(f)|$ at $P$. If $z$ denotes the local coordinate introduced above, we have $f(\tau) = A\,z^{\nu_P(f)} + \mathcal{O}(z^{\nu_P(f)+1})$, with $A\neq 0$. If $X_{\Gamma}$ has genus zero, the field of meromorphic functions on $X_{\Gamma}$ has a single generator, $\mathcal{M}_0({\Gamma})\simeq \mathbb{C}(\xi)$, for some $\xi\in \mathcal{M}_0({\Gamma})$ called a \emph{Hauptmodul}. Every modular function is a rational function in the Hauptmodul $\xi$. If $h$ is the width of the infinite cusp, then we can choose the Hauptmodul to have the $q$-expansion~\cite{YifanYang} \begin{equation} \label{eqn:Hauptmodul} \xi(\tau) = q^{-1/h} + \sum_{n\ge 0} a_n\,q^{n/h}\,,\qquad q= e^{2\pi i\tau}\,. \end{equation} In the following we always assume that such a Hauptmodul $\xi$ has been fixed. \subsection{Review of (quasi-)modular forms} \label{ssec:mfcs} \subsubsection{Meromorphic modular forms} Let $k$ be an integer and $\Gamma\subseteq \mathrm{SL}_2(\bZ)$ a finite-index subgroup. We define the weight-$k$ action of $\Gamma$ on a function $f:\mathfrak{H}^*\to \mathbb{C}$ by \begin{equation} f[\gamma]_k(\tau) := (c\tau+d)^{-k}\,f(\gamma\cdot\tau)\,,\qquad \gamma=\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)\in\Gamma\,. 
\end{equation} A weakly modular form of weight $k$ for $\Gamma$ is a function that is invariant under this $\Gamma$-action, \begin{equation} \label{eq:defmf} f[\gamma]_k(\tau) = f(\tau)\,,\qquad \forall \gamma\in\Gamma\,. \end{equation} A \emph{meromorphic modular form of weight $k$ for $\Gamma$} is a weakly modular form $f$ of weight $k$ for $\Gamma$ that is meromorphic on $\mathfrak{H}$ and at every cusp, i.e., it admits a $q$-expansion of the form \begin{equation} f[\gamma]_k(\tau) = \sum_{n\ge n_0}a_n\,q^{n/h}\,, \qquad \forall\gamma=\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)\in\mathrm{SL}_2(\bZ)\,, \end{equation} where $h$ is the width of the cusp $\frac{a}{c}$. We denote the $\mathbb{C}$-vector space of meromorphic modular forms of weight $k$ for $\Gamma$ by $\mathcal{M}_k(\Gamma)$, and we write $\mathcal{M}(\Gamma) := \bigoplus_k\mathcal{M}_k(\Gamma)$. In particular, $\mathcal{M}_0(\Gamma)$ is the field of modular functions for $\Gamma$ (see eq.~\eqref{eq:modular_function}). Holomorphic modular forms are defined in an analogous manner. The $\mathbb{C}$-vector space of holomorphic modular forms of weight $k$ for $\Gamma$ is denoted by $M_k(\Gamma)$, and we define $M(\Gamma) := \bigoplus_kM_k(\Gamma)$. Note that $M_k(\Gamma)$ is always finite-dimensional, and $\dim_{\mathbb{C}} M_k(\Gamma)=0$ for $k<0$, while $M_0(\Gamma)=\mathbb{C}$. In the following we refer to holomorphic modular forms simply as modular forms. A (meromorphic) \emph{cusp form} is a (meromorphic) modular form for which $a_0=0$ at every cusp. We denote the vector space of (meromorphic) cusp forms of weight $k$ by $S_k(\Gamma)$ ($\mathcal{S}_k(\Gamma)$). The space of cusp forms $S(\Gamma)$ is an ideal in the ring $M(\Gamma)$. The quotient of $M_k(\Gamma)$ by $S_k(\Gamma)$ is the space of Eisenstein series $E_k(\Gamma)$, and there is a direct sum decomposition \begin{equation} M_k(\Gamma) = E_k(\Gamma) \oplus S_k(\Gamma)\,. 
\end{equation} \subsubsection{Meromorphic quasi-modular forms} In general, the derivative of a (meromorphic) modular form is no longer a modular form, and we therefore need to introduce a more general class of functions. A \emph{meromorphic quasi-modular form of weight $k$ and depth $p$ for $\Gamma$} is a function $f:\mathfrak{H}^*\to\mathbb{C}$ that is meromorphic on $\mathfrak{H}$ and at the cusps, and that transforms according to \begin{equation}\label{eq:quasi_modular} f[\gamma]_k(\tau) = \sum_{r=0}^{p}f_r(\tau)\left(\frac{c}{c\tau+d}\right)^r\,,\quad\gamma=\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)\in\Gamma\,, \end{equation} where the $f_0,\ldots,f_p$ are meromorphic functions, with $f_p\neq 0$. Note that eq.~\eqref{eq:quasi_modular} for $\gamma=\operatorname{id}$ implies that $f_0=f$. The $\mathbb{C}$-vector space of meromorphic quasi-modular forms of weight $k$ and depth at most $p$ is denoted by $\mathcal{Q}\mathcal{M}_k^{\le p}(\Gamma)$. Quasi-modular forms of depth zero are precisely the modular forms. Holomorphic quasi-modular forms are defined in an analogous way, and the corresponding (finite-dimensional) vector space is denoted by $QM_k^{\le p}(\Gamma)$. Note that $\dim_{\mathbb{C}}QM_k^{\le p}(\Gamma)=0$ for $k\le 0$ and $2p>k$. We also use the notations \begin{equation} QM_k(\Gamma) := \bigcup_{p\ge 0}QM_k^{\le p}(\Gamma) \textrm{~~and~~} QM(\Gamma) := \bigoplus_{k}QM_k(\Gamma)\,. \end{equation} The vector spaces $\mathcal{Q}\mathcal{M}_k(\Gamma)$ and $\mathcal{Q}\mathcal{M}(\Gamma)$ are defined in a similar fashion. The algebra of all (meromorphic) quasi-modular forms is closed under differentiation. We use the notation $\delta:=\frac{1}{2\pi i} \partial_{\tau}= q\,\partial_q$. If $f$ is a quasi-modular form of weight $k$ and depth at most $p$, then $\delta f$ has weight $k+2$ and depth at most $p+1$. The Eisenstein series $G_2(\tau)$ of weight two is the prime example of a quasi-modular form of depth 1. 
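The closure under $\delta$ can be checked explicitly in $q$-expansions. The snippet below (our own sanity check) uses the normalised series $E_2 = 1-24\sum_{n\ge1}\sigma_1(n)q^n$ (proportional to $G_2$ in standard conventions) and $E_4 = 1+240\sum_{n\ge1}\sigma_3(n)q^n$, and verifies the classical Ramanujan identity $\delta E_2 = (E_2^2-E_4)/12$, which exhibits $\delta E_2$ as a quasi-modular form of weight $4$ and depth at most $2$:

```python
# Sanity check of delta E_2 = (E_2^2 - E_4)/12 on truncated q-expansions.
from fractions import Fraction

N = 20  # truncation order in q

def sigma(n, k):
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

E2 = [Fraction(1)] + [Fraction(-24 * sigma(n, 1)) for n in range(1, N + 1)]
E4 = [Fraction(1)] + [Fraction(240 * sigma(n, 3)) for n in range(1, N + 1)]

def mul(a, b):
    # truncated product of power series given as coefficient lists
    c = [Fraction(0)] * (N + 1)
    for i in range(N + 1):
        for j in range(N + 1 - i):
            c[i + j] += a[i] * b[j]
    return c

delta_E2 = [n * E2[n] for n in range(N + 1)]          # delta = q d/dq
rhs = [(x - y) / 12 for x, y in zip(mul(E2, E2), E4)]
assert delta_E2 == rhs
```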
The Eisenstein series are defined as\footnote{For $k=1$ the series is not absolutely convergent. Here we assume the standard summation convention for $G_2(\tau)$, cf., e.g., ref.~\cite{diamond2005first}} \begin{equation} G_{2k}(\tau) = \sum_{(m,n)\in\mathbb{Z}^2\setminus(0,0)}\frac{1}{(m\tau+n)^{2k}}\,. \end{equation} For $k\neq 1$, $G_{2k}(\tau)$ is a modular form of weight $2k$ for $\mathrm{SL}_2(\bZ)$. For $k=1$, we have \begin{equation}\label{eq:G2_transform} G_2[\gamma]_2(\tau) = (c\tau+d)^{-2} G_2(\gamma\cdot\tau)=G_2(\tau)-\frac{1}{4\pi i}\frac{c}{c\tau+d}\,, \end{equation} for every $\gamma= \left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)\in\mathrm{SL}_2(\bZ)$. Hence $G_{2}(\tau)$ defines a (holomorphic) quasi-modular form of weight 2 and depth 1 for $\mathrm{SL}_2(\bZ)$. In fact, one can show that every meromorphic quasi-modular form of depth $p$ can be written as a polynomial of degree $p$ in $G_2(\tau)$: \begin{equation} \mathcal{Q}\mathcal{M}(\Gamma) = \mathcal{M}(\Gamma)[G_2(\tau)]\,. \end{equation} \subsection{Iterated integrals of holomorphic quasi-modular forms} The previous discussion makes it clear that the functions $f(\tau) := R(x(\tau))\,h_1(\tau)^m\,h_1'(\tau)^s$ in eq.~\eqref{eq:diff_form} (with $p=0$) are a meromorphic quasi-modular forms of weight $m+3s$ and depth at most $s$. Hence, we see that the iterated integrals encountered at the end of section~\ref{sec:DEQs} are iterated integrals of meromorphic quasi-modular forms for the monodromy group $\Gamma_2$. In the remainder of this section we give a short review of iterated integrals of holomorphic quasi-modular forms, following refs.~\cite{ManinModular,Brown:mmv}. In the next section we present the extension to the meromorphic case. 
Let $h_1,\ldots, h_k$ be meromorphic quasi-modular forms for $\Gamma\subseteq \mathrm{SL}_2(\bZ)$. We define their iterated integral by~\cite{ManinModular,Brown:mmv} \begin{equation}\label{eq:II_def} I(h_1,\ldots, h_k;\tau) := \int_{i\infty}^\tau d{\tau_1}\,h_1(\tau_1)\int_{i\infty}^{\tau_1} d{\tau_2}\,h_2(\tau_2)\int_{i\infty}^{\tau_2}\cdots\int_{i\infty}^{\tau _{k-1}}d{\tau_k}\,h_k(\tau_k)\,. \end{equation} At this point we have to mention that this definition requires a regularisation of the divergence at $\tau=i\infty$, already in the holomorphic case (at least for Eisenstein series). For the holomorphic case we follow ref.~\cite{Brown:mmv}, and interpret the lower integration boundary as a tangential base point. In the meromorphic case, the regularisation requires the use of tools from renormalisation theory, see ref.~\cite{matthes2021iterated}. We refer to refs.~\cite{Brown:mmv,matthes2021iterated} for a detailed discussion. The iterated integrals in eq.~\eqref{eq:II_def} are not necessarily independent, even if the $h_1,\ldots, h_k$ are linearly independent in $\mathcal{Q}\mathcal{M}(\Gamma)$. Rather, we have to identify a set of quasi-modular forms that are linearly independent up to total derivatives, i.e., modulo $\delta \mathcal{Q}\mathcal{M}(\Gamma)$ (see also the discussion in section~\ref{sec:main_thm}). Said differently, we would like to know which quasi-modular forms can be expressed as derivatives of (quasi-)modular forms. This question can be answered completely in the holomorphic case: we have the decomposition~\cite{ZagierModular,AMBP_2012__19_2_297_0} \begin{equation}\label{eq:holomorphic_decomposition} QM_k(\Gamma) = \left\{\begin{array}{ll} \mathbb{C}\,,& k = 0\,,\\ M_1(\Gamma)\,,& k=1\,,\\ M_2(\Gamma)\oplus\mathbb{C}\,G_2(\tau)\,,&k=2\,,\\ M_k(\Gamma)\oplus\delta QM_{k-2}(\Gamma)\,,&k\ge3\,. \end{array}\right. 
\end{equation} Note that the sums are direct, i.e., every holomorphic quasi-modular form of weight $k>2$ can be written modulo derivatives as a holomorphic modular form, and this decomposition is unique. The decomposition can be performed in an algorithmic way, cf.~ref.~\cite{AMBP_2012__19_2_297_0}. In other words, modulo derivatives, $QM(\Gamma)$ is generated as a vector space by $M(\Gamma)$ and $G_2(\tau)$. Consequently, in the holomorphic case it is sufficient to consider iterated integrals of modular forms and $G_2(\tau)$~\cite{Matthes:QuasiModular}. The analogue of the decomposition in eq.~\eqref{eq:holomorphic_decomposition} in the meromorphic case for general subgroups $\Gamma$ is currently still unknown, and results are only available for \emph{weakly holomorphic modular forms} (i.e., with poles at most at the cusps)~\cite{MR2407067} and for meromorphic quasi-modular forms for the whole modular group, $\Gamma=\mathrm{SL}_2(\bZ)$~\cite{matthes2021iterated}. One of the main results of this paper is the generalisation of eq.~\eqref{eq:holomorphic_decomposition} and the results of ref.~\cite{matthes2021iterated} to arbitrary subgroups of genus zero. \section{Iterated integrals of meromorphic modular forms} \label{sec:mero_sec} \subsection{A decomposition theorem for meromorphic quasi-modular forms} \label{sec:main_thm} In this section we state and prove the generalisation of eq.~\eqref{eq:holomorphic_decomposition} for all genus-zero subgroups. The special case $\Gamma=\mathrm{SL}_2(\bZ)$ was proved by one of us in ref.~\cite{matthes2021iterated}, and the proof presented here is a generalisation of that proof. Before we state the main theorem in this section, we need to introduce some notation. Let $R\subset X_{\Gamma}\setminus S_{\Gamma}$ be a finite set of points which are not cusps, let $s_0\in S_{\Gamma}$ be a cusp of $X_{\Gamma}$, and set $R_{s_0} := R\cup \{s_0\}$ and $R_S:=R\cup S_{\Gamma}$. 
We define $\mathcal{M}_k(\Gamma,R_{s_0})$ to be the sub-vector space of $\mathcal{M}_k(\Gamma)$ consisting of all meromorphic modular forms of weight $k$ for $\Gamma$ with poles at most at points in $R_{s_0}$. \begin{defi}\label{defi:Mtilde} Define $\widetilde{\mathcal{M}}_k(\Gamma,R_{s_0})$ to be the subset of those $f\in\mathcal{M}_k(\Gamma,R_{s_0})$ which satisfy: \begin{enumerate} \item $\nu_P(f)\ge\frac{1-k}{h_P}$, for all $P\in R$; \item $\nu_s(f)\ge 0$, for all $s\in S_{\Gamma}\setminus\{s_0\}$; \item $\lfloor\nu_{s_0} (f)\rfloor \ge -\dim S_k(\Gamma)$. \end{enumerate} \end{defi} \noindent In the previous definition $\lfloor x \rfloor$ is the floor function, i.e., the largest integer less than or equal to $x\in \mathbb{R}$. \begin{thm}\label{thm:main} Let $\Gamma\subseteq \mathrm{SL}_2(\bZ)$ have genus zero, and let $R_{s_0}$ and $R_S$ be as defined above. We have a decomposition \begin{equation}\nonumber \mathcal{Q}\mathcal{M}_k(\Gamma,R_S) = \left\{\begin{array}{ll} \delta\mathcal{Q}\mathcal{M}_{k-2}(\Gamma,R_S) \oplus \mathcal{M}_{k}(\Gamma,R_S)\,, & \text{ for $k<2$}\,,\\ \displaystyle \delta\mathcal{Q}\mathcal{M}_{k-2}(\Gamma,R_S) \oplus \mathcal{M}_{2-k}(\Gamma,R_S)\,G_2^{k-1}\oplus \widetilde{\mathcal{M}}_k(\Gamma,R_{s_0})\,,&\text{ for $k\ge 2$}\,. \end{array}\right. \end{equation} \end{thm} The proof is presented in appendix~\ref{app:mathy}. Theorem~\ref{thm:main} generalises the result of ref.~\cite{matthes2021iterated} to arbitrary subgroups of genus zero. In section~\ref{sec:neat_proof} we sketch the proof for a subset of subgroups of genus zero, the so-called \emph{neat} subgroups (see Definition~\ref{defi:neat}). The proof of section~\ref{sec:neat_proof} is constructive, and allows one to perform the decomposition in Theorem~\ref{thm:main} explicitly for neat subgroups. We expect that this case covers most of the applications to Feynman integrals. Before we discuss the proof for neat subgroups, however, we review some consequences of Theorem~\ref{thm:main}. 
\paragraph{Proof of Theorem~\ref{thm:section2}.} We now show that the decomposition in Theorem~\ref{thm:main} immediately leads to a proof of Theorem~\ref{thm:section2}. We have already argued that the class of differential equations considered in section~\ref{sec:modular_DEQs} leads to iterated integrals involving the one-forms in eq.~\eqref{eq:diff_form} with $p=0$, and the functions $f(\tau) := R(x(\tau))\,h_1(\tau)^m\,h'_1(\tau)^s$ are quasi-modular forms of weight $k:=3s+m$ and depth at most $s$ for the monodromy group $\Gamma_2$. Let $f$ have poles at most at the cusps and at some finite set of points $R\subset \mathfrak{H}$, i.e., $f\in \mathcal{Q}\mathcal{M}_{k}^{\le s}(\Gamma_2,R_S)$. Theorem~\ref{thm:main} then implies that, for some fixed choice of cusp $s_{0}\in S_{\Gamma_2}$: \begin{itemize} \item If $k<2$, there are $h\in \mathcal{M}_{k}(\Gamma_2,R_S)$ and $g\in\mathcal{Q}\mathcal{M}_{k-2}(\Gamma_2,R_S)$ such that $f = h+\delta g$. \item If $k\ge 2$, there are modular forms $\tilde{h}\in \widetilde{\mathcal{M}}_{k}(\Gamma_2,R_{s_0})$, ${h}\in {\mathcal{M}}_{2-k}(\Gamma_2,R_S)$ and a quasi-modular form $g\in\mathcal{Q}\mathcal{M}_{k-2}(\Gamma_2,R_S)$ such that $f = \tilde{h}+{h}\,G_2^{k-1}+\delta g$. \end{itemize} The derivatives $\delta g$ can be trivially integrated away, and we see that we only need to consider the meromorphic modular form $\tilde{h}$ or the quasi-modular form ${h}\,G_2^{k-1}$, the latter being characterised by the fact that it has the maximally allowed depth, $s=k-1$. In order to show that Theorem~\ref{thm:section2} holds, we need to show that these quasi-modular forms with maximally allowed depth $s=k-1$ do not arise from our class of differential equations. To see this, we start from eq.~\eqref{eq:diff_form_sample} with $\alpha$, $\beta$, $\gamma$, $\delta$ positive integers, change variables from $x$ to $\tau$, and trace the powers of $G_2$ that are produced along the way. 
The Jacobian is given in eq.~\eqref{eq:jacobian}. Moreover, we use eq.~\eqref{eqn:inverseJacobian} to obtain \begin{equation} \psi'_1(x) = h_1'(\tau)\,\partial_x\tau = h_1'(\tau)\,\frac{\mathcal{D}(\tau)}{h_1(\tau)^2}\,, \end{equation} and \begin{equation} \psi_2'(x) = \frac{\psi_2(x)\psi_1'(x)+D(x)}{\psi_1(x)} = \frac{\mathcal{D}(\tau)}{h_1(\tau)^2}\,\left(\tau\,{h_1'(\tau)}+{h_1(\tau)}\right)\,. \end{equation} Since $h_1$ is a modular form of weight 1, $h'_1$ is a quasi-modular form of weight 3 and depth at most 1, i.e., there are $A_1\in M_1(\Gamma_2)$ and $A_3\in M_3(\Gamma_2)$ such that $h_1' = A_1\,G_2 + A_3$. This gives: \begin{equation}\bsp dx\,&\psi_1(x)^{\alpha}\,\psi_2(x)^{\beta}\,\psi_1'(x)^{\gamma}\,\psi_2'(x)^{\delta} = \\ &\,=d\tau\,\mathcal{D}(\tau)^{\gamma+\delta-2}\,h_1(\tau)^{2+\alpha+\beta-2\gamma-2\delta}\,\tau^{\beta}\,(A_1\,G_2 + A_3)^{\gamma}\\ &\,\quad\times(\tau\,A_1(\tau)\,G_2(\tau)+\tau\,A_3(\tau)+h_1(\tau))^\delta\\ &\,=d\tau\,\mathcal{D}(\tau)^{\gamma+\delta-2}\,h_1(\tau)^{2+\alpha+\beta-2\gamma-2\delta}\,\tau^{\beta}\\ &\,\quad\times \sum_{p=0}^{\gamma}\sum_{q=0}^{\delta}\binom{\gamma}{p}\binom{\delta}{q}A_1(\tau)^{p+q}\,G_2(\tau)^{p+q}\,\tau^q\,A_3(\tau)^{\gamma-p}\,(\tau\,A_3(\tau)+h_1(\tau))^{\delta-q}\,. \esp\end{equation} The term with the highest power of $G_2$ is: \begin{equation} d\tau\,\mathcal{D}(\tau)^{\gamma+\delta-2}\,h_1(\tau)^{2+\alpha+\beta-2\gamma-2\delta}\,\tau^{\beta}\,A_1(\tau)^{\gamma+\delta}\,G_2(\tau)^{\gamma+\delta}\,\tau^\delta\,. \end{equation} It has depth $s=\gamma+\delta$ and weight $ k =\alpha+\beta+\gamma+\delta+2 = \alpha+\beta+s+2$. Since $\alpha$ and $\beta$ are positive integers, we have $k \ge s+2 > s+1$, and so we never reach the maximally allowed depth $s=k-1$. To finish the proof of Theorem~\ref{thm:section2}, we need to comment on the inhomogeneous term $N(x,\epsilon)$ in eq.~\eqref{eq:DEQ_generic_higher}. 
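The bookkeeping of the highest power of $G_2$ above can be spot-checked numerically (our own check): substituting generic integer values for $A_1$, $A_3$, $\tau$ and $h_1$ and expanding $(A_1G_2+A_3)^{\gamma}(\tau A_1G_2+\tau A_3+h_1)^{\delta}$ as a polynomial in $G_2$, the top term is $A_1^{\gamma+\delta}\,\tau^{\delta}\,G_2^{\gamma+\delta}$:

```python
# Spot-check of the leading power of G2 in the expansion above, with
# generic integer values substituted for the G2-independent quantities.

def pmul(p, q):
    # product of polynomials in G2, given as coefficient lists
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

A1, A3, tau, h1 = 2, 3, 5, 7  # generic sample values

for gamma, delta in [(2, 1), (1, 3), (0, 2)]:
    poly = [1]  # poly[r] = coefficient of G2^r
    for _ in range(gamma):
        poly = pmul(poly, [A3, A1])                   # A3 + A1*G2
    for _ in range(delta):
        poly = pmul(poly, [tau * A3 + h1, tau * A1])  # tau*A3 + h1 + tau*A1*G2
    assert len(poly) - 1 == gamma + delta
    assert poly[-1] == A1 ** (gamma + delta) * tau ** delta
```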
By assumption, the $\epsilon$-expansion of $N(x,\epsilon)$ involves at every order only sums of rational functions of $x$ multiplied by MPLs of the form $G(a_1,\ldots, a_n;x)$, with $a_i$ independent of $x$. It is easy to see that MPLs of this form can always be written as iterated integrals of modular forms for $\Gamma_2$, because \begin{equation} \frac{dx}{x-a_i} = \frac{h_1(\tau)^2\,d\tau}{\mathcal{D}(\tau)\,(x(\tau) - x(\tau_i))}\,,\textrm{~~with~~} x(\tau_i) = a_i\,. \end{equation} This finishes the proof of Theorem~\ref{thm:section2}. \paragraph{Linear independence for iterated integrals.} We have seen how Theorem~\ref{thm:main} leads to a proof of Theorem~\ref{thm:section2}, which characterises the iterated integrals that arise as solutions to a certain class of differential equations. In applications one is usually also interested in having a minimal set of iterated integrals, i.e., a basis of linearly-independent iterated integrals. We now show how Theorem \ref{thm:main} yields the desired linear independence result. The crucial mathematical ingredient is a linear independence criterion for iterated integrals, which is very general and not at all limited to meromorphic modular forms. We first describe this criterion in a special case which, while being far from the most general possible statement, is sufficiently general to exhibit all essential features of the general case (for details, see ref.~\cite{DDMS}). Suppose that $\mathcal{F}=\{f_i\}_{i\in I}$ is a family of meromorphic functions on the upper half-plane, and let $K$ be a differential subfield of the field of meromorphic functions on $\mathfrak{H}$, which contains all $f_i$. Here, `differential subfield' means that $K$ is a subfield which is closed under differentiation of meromorphic functions. The following theorem is a variant of a classical result due to Chen~\cite[Theorem 4.2]{ChenSymbol}. \begin{thm}[{\cite[Theorem 2.1]{DDMS}}] \label{thm:lindep} The following assertions are equivalent. 
\begin{itemize} \item[(i)] The family of all iterated integrals (viewed as functions of $\tau$) of the form \[ \int^\tau_{i\infty}d\tau_1\,f_{i_1}(\tau_1)\int_{i\infty}^{\tau_1}d\tau_2\,f_{i_2}(\tau_2)\ldots \int_{i\infty}^{\tau_{n-1}}d\tau_{n}\,f_{i_n}(\tau_n) \, , \] for all $n\geq 0$ and all $f_i \in \mathcal{F}$, is $K$-linearly independent. \item[(ii)] The family $\mathcal{F}$ is $\mathbb C$-linearly independent and we have \[ \partial_{\tau}(K)\cap \operatorname{Span}_\mathbb C\mathcal{F}=\{0\} \, , \] where $\operatorname{Span}_\mathbb C\mathcal{F}$ denotes the vector space of all $\mathbb C$-linear combinations of functions in $\mathcal{F}$. \end{itemize} \end{thm} Let us apply this theorem to our setting. Here $K:= \mathcal{M}(\Gamma_2,R_S)(G_2)$ is the field of fractions of $\mathcal{Q}\mathcal{M}(\Gamma_2, R_S)$, i.e., the field whose elements are ratios of quasi-modular forms, or equivalently ratios of polynomials in $G_2$ with coefficients that are meromorphic modular forms. $K$ is a differential subfield, because quasi-modular forms are closed under differentiation. Clearly, we have $\mathcal{Q}\mathcal{M}(\Gamma_2, R_S)\subset K$. For $\mathcal{F}$ we choose \begin{equation} \mathcal{F} = \bigcup_{k\in\mathbb{Z}}\mathcal{F}_k\,, \end{equation} with $\mathcal{F}_k := \{f_1^{(k)},\ldots, f_{p_k}^{(k)}\}$ a $\mathbb{C}$-linearly independent set of modular forms from $\mathcal{M}_k(\Gamma_2,R_S)$ for $k<2$ and from $\widetilde{\mathcal{M}}_k(\Gamma_2,R_{s_0})$ for $k\ge 2$ and some fixed choice of cusp $s_0\in S_{\Gamma_2}$ (see section~\ref{sec:neat_proof} for how to construct explicit bases for these vector spaces). Since the sums in Theorem~\ref{thm:main} are direct, it is easy to see that we have $\partial_{\tau}(K)\cap\textrm{Span}_{\mathbb{C}}\mathcal{F}=\{0\}$, and so Theorem~\ref{thm:lindep} implies that the corresponding iterated integrals are $K$-linearly independent. 
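A degenerate toy example (ours, ignoring the regularisation of the base point) illustrates why the intersection condition in (ii) matters: take $\mathcal{F}=\{1\}$. For $K=\mathbb{C}(\tau)$ we have

```latex
\partial_{\tau}\big(\mathbb{C}(\tau)\big)\cap \operatorname{Span}_{\mathbb{C}}\{1\}
  = \mathbb{C}\neq\{0\}\,,
```

and indeed the length-one iterated integral of $f=1$ is $\tau\in K$, so the iterated integrals are $K$-linearly dependent. For $K=\mathbb{C}$, by contrast, the intersection is $\{0\}$, and the iterated integrals $\tau^n/n!$ are $\mathbb{C}$-linearly independent, as guaranteed by Theorem~\ref{thm:lindep}.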
\subsection{Sketch of the proof for neat subgroups} \label{sec:neat_proof} We now return to the proof of Theorem~\ref{thm:main} for a special class of subgroups. \begin{defi}\label{defi:neat} A subgroup $\Gamma\subseteq\mathrm{SL}_2(\bZ)$ is called \emph{neat} if $\left(\begin{smallmatrix}-1 & 0\\0&-1\end{smallmatrix}\right)\notin\Gamma$ and $\Gamma$ has neither elliptic points nor irregular cusps, $\epsilon_2(\Gamma)=\epsilon_3(\Gamma)=\epsilon_i(\Gamma)=0$. \end{defi} One can show that \begin{enumerate} \item $\Gamma_1(N)$ is neat and has genus zero for $N\in\{5,\ldots,10,12\}$; \item $\Gamma(N)$ is neat and has genus zero for $N\in\{3,4,5\}$. \end{enumerate} In particular, the congruence subgroup $\Gamma_1(6)$ relevant to the banana graph is neat and has genus zero. For $k<2$, the proof of Theorem~\ref{thm:main} is identical to the proof for $\Gamma=\mathrm{SL}_2(\bZ)$ considered in ref.~\cite{matthes2021iterated}, and we do not consider it here. In order to see that Theorem~\ref{thm:main} holds also for $k\ge 2$, we first note that for every $f\in\mathcal{Q}\mathcal{M}_k(\Gamma,R_{s_0})$ there are meromorphic modular forms $h_1\in \mathcal{M}_k(\Gamma,R_{s_0})$ and $h_2\in \mathcal{M}_{2-k}(\Gamma,R_{s_0})$ and a meromorphic quasi-modular form $g\in \mathcal{Q}\mathcal{M}_{k-2}(\Gamma,R_{s_0})$ such that $f = h_1 + h_2\,G_2^{k-1}+\delta g$. This decomposition holds for all subgroups $\Gamma$, and does not require $\Gamma$ to be neat. It is a direct consequence of the algorithms of ref.~\cite{AMBP_2012__19_2_297_0}. Theorem~\ref{thm:main} then follows from the following claim: For every meromorphic modular form $f\in\mathcal{M}_k(\Gamma,R_{s_0})$ of weight $k\ge2$ there is $h\in \widetilde{\mathcal{M}}_{k}(\Gamma,R_{s_0})$ and $g\in\mathcal{Q}\mathcal{M}_{k-2}(\Gamma,R_{s_0})$ such that \begin{equation}\label{eq:decomp_1} f=h+\delta g\,. 
\end{equation} In the remainder of this section we show how to construct $h$ and $g$ explicitly for neat subgroups of genus zero. Before doing so, we review some mathematical tools required to achieve this decomposition. \paragraph{Bol's identity.} A complication when trying to decompose a meromorphic modular form $f$ into an element $h\in \widetilde{\mathcal{M}}_{k}(\Gamma,R_{s_0})$ up to a total derivative is the fact that in general derivatives of modular forms are themselves not modular, but only quasi-modular. However, an important result due to Bol~\cite{Bol} states that we recover again a modular form if we take enough derivatives. More precisely, Bol's identity states that for $k\ge 2$ there is a linear map \begin{equation}\label{eq:Bol} \delta^{k-1}: {\mathcal{M}}_{2-k}(\Gamma) \to {\mathcal{S}}_{k}(\Gamma)\,. \end{equation} In other words, if $k\ge 2$ and $f\in {\mathcal{M}}_{2-k}(\Gamma)$, then in general $\delta f$ will not be a modular form, i.e., $\delta f\notin {\mathcal{M}}_{4-k}(\Gamma)$, but the $(k-1)^{\textrm{th}}$ derivative will be a modular form of weight $k$, $\delta^{k-1} f\in {\mathcal{M}}_{k}(\Gamma)$ (and in fact, it will even be a cusp form). Note that eq.~\eqref{eq:Bol} remains true if we replace ${\mathcal{M}}_{2-k}(\Gamma)$ and ${\mathcal{S}}_{k}(\Gamma)$ by ${\mathcal{M}}_{2-k}(\Gamma,R_{s_0})$ and ${\mathcal{S}}_{k}(\Gamma,R_{s_0})$ respectively. The main idea to achieve the decomposition in eq.~\eqref{eq:decomp_1} then goes as follows: Assume we are given $f\in \mathcal{M}_{k}(\Gamma,R_{s_0})$ with a pole of order $m>0$ at a point $P\in R_{s_0}$, and assume the order of the pole is too high for $f$ to lie in $\widetilde{\mathcal{M}}_{k}(\Gamma,R_{s_0})$. We will show how to construct $\tilde{g}\in\mathcal{M}_{2-k}(\Gamma,R_{s_0})$ such that $f-\delta^{k-1}\tilde{g}$ has a pole of order at most $m-1$ at $P$. Applying this approach recursively, we obtain the decomposition in eq.~\eqref{eq:decomp_1}. 
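The $k=2$ case of eq.~\eqref{eq:Bol} can be checked explicitly for $\Gamma=\mathrm{SL}_2(\bZ)$ (our own computation, relying on the classical identity $\delta j = -E_4^2E_6/\Delta$, with $j=E_4^3/\Delta$, $\Delta=(E_4^3-E_6^2)/1728$ and the usual normalised Eisenstein series): the derivative of the weight-0 modular function $j$ is a weight-2 meromorphic modular form whose constant Fourier coefficient vanishes, i.e., a "cusp form" in the meromorphic sense used here.

```python
# Check of delta j = -E4^2*E6/Delta on truncated (Laurent) q-expansions.
from fractions import Fraction

N = 10  # truncation order

def sigma(n, k):
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

E4 = [Fraction(1)] + [Fraction(240 * sigma(n, 3)) for n in range(1, N + 1)]
E6 = [Fraction(1)] + [Fraction(-504 * sigma(n, 5)) for n in range(1, N + 1)]

def mul(a, b):
    c = [Fraction(0)] * (N + 1)
    for i in range(N + 1):
        for j in range(N + 1 - i):
            c[i + j] += a[i] * b[j]
    return c

def inv(u):  # power-series inverse, assuming u[0] = 1
    v = [Fraction(1)] + [Fraction(0)] * N
    for n in range(1, N + 1):
        v[n] = -sum(u[k] * v[n - k] for k in range(1, n + 1))
    return v

Delta = [(x - y) / 1728 for x, y in zip(mul(mul(E4, E4), E4), mul(E6, E6))]
u = Delta[1:] + [Fraction(0)]          # Delta = q*u with u[0] = 1
assert Delta[0] == 0 and u[0] == 1

J = mul(mul(mul(E4, E4), E4), inv(u))  # j = q^{-1}*J: J[n] = coeff of q^{n-1}
B = mul(mul(E4, E4), mul(E6, inv(u)))  # E4^2*E6/Delta = q^{-1}*B

# delta j has Laurent coefficient (n-1)*J[n] at order q^{n-1}
for n in range(N):                     # orders q^{-1}, ..., q^{N-2}
    assert (n - 1) * J[n] == -B[n]
```

In particular, $B_1=0$: the constant term of $-E_4^2E_6/\Delta$ vanishes, as it must for the image of $\delta$.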
\paragraph{Divisors and the valence formula.} From the previous discussion it becomes clear that an important step in achieving the decomposition in eq.~\eqref{eq:decomp_1} is the construction of $\tilde{g}\in\mathcal{M}_{2-k}(\Gamma,R_{s_0})$ with prescribed poles. An important tool to understand meromorphic functions (or, more generally, meromorphic sections of line bundles) on a Riemann surface are divisors, which we review in this section. The material in this section is well-known in the mathematics literature, but probably less so in the context of Feynman integrals. A \emph{divisor} on $X_{\Gamma}$ is an element of the free abelian group $\text{Div}(X_{\Gamma})$ generated by the points of $X_{\Gamma}$ (weighted by their order $h_P$), i.e., a divisor is an expression of the form $D=\sum_{P\in X_{\Gamma}}\frac{n_P}{h_P} [P]$, where the $n_P$ are integers, and only finitely many of the $n_P$ are non-zero. We can use divisors to encode the information on the zeroes and poles of a meromorphic function or modular form. More precisely, if $0\neq f\in \mathcal{M}_k(\Gamma)$, we can associate a divisor to it, defined by \begin{equation} (f) = \sum_{s\in S_\Gamma} \frac{\nu_s(f)}{h_s}\,[s] + \sum_{P\in X_{\Gamma}\setminus S_\Gamma} \frac{\nu_P(f)}{\#\overline{\Gamma}_P}\,[P]\,, \end{equation} where $h_s=2$ if $s$ is irregular and $h_s=1$ otherwise, and $\overline{\Gamma}_P$ is the projection of ${\Gamma}_P$ to $\mathrm{PSL}_2(\bZ)$ (i.e., we have identified $\gamma\in \mathrm{SL}_2(\bZ)$ and $-\gamma\in\mathrm{SL}_2(\bZ)$). Note that we have the obvious relation $(fg)=(f)+(g)$. To every divisor $D=\sum_{P\in X_{\Gamma}}\frac{n_P}{h_P} [P]$ we can associate its \emph{degree} $\deg D = \sum_{P\in X_{\Gamma}}\frac{n_P}{h_P}$. Since every meromorphic function on a compact Riemann surface must have the same number of zeroes and poles (counted with multiplicity), we must have $\deg(f)=0$ for all $f\in \mathcal{M}_0(\Gamma)$. 
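As a classical illustration (our own example, for $\Gamma=\mathrm{SL}_2(\bZ)$): the modular function $j-1728 = E_6^2/\Delta$ has a double zero in $\tau$ at the elliptic point $i$ (since $E_6$ vanishes there simply), which is weighted by $\#\overline{\Gamma}_i=2$, while $\Delta$ is non-vanishing on $\mathfrak{H}$ and produces a simple pole at the cusp. Hence

```latex
\big(j-1728\big) = \frac{2}{2}\,[i] - [\infty] = [i]-[\infty]\,,
\qquad \deg\big(j-1728\big) = 1-1 = 0\,,
```

in agreement with $\deg(f)=0$ for modular functions.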
For meromorphic modular forms of weight $k$, the degree of the associated divisor is no longer zero, but it is given by the \emph{valence formula}: \begin{equation}\label{eq:valence_formula} \deg(f) = k\,d_{\Gamma}\,. \end{equation} \paragraph{Modular forms for neat subgroups of genus zero.} From now on we focus on the case where $\Gamma$ is neat and has genus zero. Equation~\eqref{eq:genus} then implies \begin{equation}\label{eq:eps_infty_to_dGamma} \epsilon_{\infty}(\Gamma) =2(1+ d_{\Gamma})\,. \end{equation} As we will now show, the spaces of meromorphic modular forms for neat subgroups can be described very explicitly. \begin{lemma} Let $\Gamma$ be a neat subgroup of genus zero. Then there exists $\aleph_k \in \mathcal{M}_k(\Gamma)$ such that $\nu_\infty(\aleph_k)=k\,d_{\Gamma}$ and $\nu_P(\aleph_k)=0$ otherwise. In particular, for $k>0$, $\aleph_k$ is a modular form of weight $k$ for $\Gamma$. \end{lemma} \begin{proof} If $k=0$, we simply choose $\aleph_0=1$, and for $k<0$ we set $\aleph_k = 1/\aleph_{-k}$. So it is sufficient to discuss $k>0$. Since $\dim_{\mathbb{C}}\mathcal{M}_k(\Gamma)\neq 0$,\footnote{This can be seen by thinking of (meromorphic) modular forms of weight $k$ for the group $\Gamma$ as (meromorphic) sections of a certain line bundle (the $k$-th power of the Hodge bundle) on the modular curve $X_\Gamma$. It then follows from the Riemann--Roch formula that every line bundle on a compact Riemann surface admits a meromorphic section. See also ref.~\cite{diamond2005first}, the discussion after Theorem 3.6.1, for a detailed proof.} it contains a meromorphic modular form $h$ with divisor \begin{equation} (h) = \sum_{P\in X_{\Gamma}}n_P\,[P] = kd_{\Gamma}\,[\infty] + D \,, \end{equation} where we used the fact that $h_P=1$ for neat subgroups, and we defined \begin{equation} D := \left(n_{\infty}-kd_{\Gamma}\right)[\infty] + \sum_{\substack{P\in X_{\Gamma}\\ P\neq \infty}}n_P\,[P]\,. 
\end{equation} The valence formula implies $\deg D=0$, and so there is a meromorphic function $f\in\mathcal{M}_0(\Gamma)$ such that $(f)=D$. Since $\Gamma$ has genus zero, every meromorphic function is a rational function in the Hauptmodul $\xi$, and it is sufficient to pick \begin{equation} f := \prod_{\substack{P\in X_{\Gamma}\\ P\neq \infty}}(\xi-P)^{n_P}\,. \end{equation} It is now easy to check that $\aleph_k := \frac{1}{f}\,h$ has the desired property. The fact that $\aleph_k$ is holomorphic follows from $\nu_P(\aleph_k)\ge 0$ for all $P\in X_{\Gamma}$. \end{proof} Note that $\aleph_k$ is unique, up to overall normalisation. Indeed, assume that $\aleph_k^{(1)}$ and $\aleph_k^{(2)}$ both satisfy the condition, then $(\aleph_k^{(1)}/\aleph_k^{(2)}) = (\aleph_k^{(1)}) - (\aleph_k^{(2)}) = 0$, and so there is $\alpha \in \mathbb{C}$ such that $\aleph_k^{(1)} = \alpha \aleph_k^{(2)}$. We assume from now on that the normalisation of $\aleph_k$ is chosen such that at the infinite cusp we have the $q$-expansion ($h$ is the width at the infinite cusp): \begin{equation}\label{eq:Hk_normalisation} \aleph_k(\tau) = q^{kd_{\Gamma}/h}\left[1 +\sum_{n\ge 1} a_n\,q^{n/h}\right]\,,\qquad q=e^{2\pi i\tau}\,. \end{equation} We can use $\aleph_k$ to give an explicit representation of the spaces of modular forms of weight $k$ in terms of rational functions, \begin{equation}\bsp \mathcal{M}_k(\Gamma) &\,= \aleph_k\cdot \mathbb{C}(\xi)\,, \esp\end{equation} with holomorphic modular forms of weight $k$ corresponding to polynomials of degree at most $k\,d_{\Gamma}$: \begin{equation}\bsp M_k(\Gamma) &\,= \aleph_k\cdot \mathbb{C}[\xi]_{\le kd_{\Gamma}}\,, \esp\end{equation} where $\mathbb{C}[X]_{\le m}$ denotes the vector space of polynomials of degree at most $m$. We can use this representation to write down a generating set for $\mathcal{M}_k(\Gamma)$. 
For $P\in X_{\Gamma}$ and $m\in\mathbb{Z}_{>0}$, we define: \begin{equation}\bsp u_{P,m}(\tau) &\,= \left\{\begin{array}{ll} (\xi(\tau)-P)^{-m}\,, & \text{ if }P\neq\infty\,,\\ \xi(\tau)^m\,,& \text{ if }P=\infty\,, \end{array}\right.\\ u_{\infty,0}(\tau) &\,= 1\,. \esp\end{equation} It is an easy exercise (based on partial fractioning) to show that every rational function has a unique representation as a finite linear combination of the $u_{P,m}$. As a consequence, the meromorphic modular forms $U_{k,P,m} := \aleph_k\,u_{P,m}$ are a generating set for $\mathcal{M}_k(\Gamma)$. In particular, a basis for $M_k(\Gamma)$ is $\{\aleph_k\,u_{\infty,m}:0\le m\le kd_{\Gamma}\}$. Moreover, we can use this generating set to write down an explicit basis for $\widetilde{\mathcal{M}}_k(\Gamma,R_{\infty})$ in definition~\ref{defi:Mtilde}. For $k\ge 2$, we have \begin{equation}\label{eq:Mtilde_def} \widetilde{\mathcal{M}}_k(\Gamma,R_{\infty}) := M_k(\Gamma) \oplus \widehat{\mathcal{S}}_k(\Gamma)\oplus\widehat{\mathcal{M}}_k(\Gamma,R)\,, \end{equation} where we defined \begin{equation}\bsp\label{eq:Shat_def} \widehat{\mathcal{M}}_k(\Gamma,R):=\bigoplus_{\substack{P\in R\setminus \{\infty\} \\ 1\le m\le k-1}}\!\!\!\! \mathbb{C}\,U_{k,P,m} = \bigoplus_{\substack{P\in R\setminus \{\infty\} \\ 1\le m\le k-1}}\!\!\!\! \mathbb{C}\,\frac{\aleph_{k}}{(\xi-P)^m}\,,\\ \widehat{\mathcal{S}}_k(\Gamma):=\bigoplus_{kd_{\Gamma}< m< 2d_{\Gamma}\,(k-1)}\!\!\!\!\!\!\!\!\!\! \mathbb{C}\,U_{k,\infty,m}=\bigoplus_{kd_{\Gamma}< m< 2d_{\Gamma}\,(k-1)} \!\!\!\!\!\!\!\!\mathbb{C}\,\aleph_k\,\xi^m\,. \esp\end{equation} Note that $\dim_{\mathbb{C}}\widehat{\mathcal{S}}_k(\Gamma) = \dim_{\mathbb{C}}S_k(\Gamma)$. \paragraph{Sketch of the proof of Theorem~\ref{thm:main} for neat subgroups.} We now show how we can use Bol's identity to construct for each $f=U_{k,P,m}$ a function $\tilde{g}$ such that the decomposition in eq.~\eqref{eq:decomp_1} holds. 
We will make repeated use of the following result: \begin{lemma}\label{lem:valence} Let $\Gamma$ be neat and of genus zero, and let $f\in \mathcal{M}_{2-k}(\Gamma)$ for $k\ge 2$. \begin{enumerate} \item If $P\in X_{\Gamma}\setminus S_{\Gamma}$, then $\nu_P(\delta^{k-1}f) \ge 0$ or $\nu_P(\delta^{k-1}f) =1-k+\nu_P(f)$. \item If $s\in S_{\Gamma}$, then $\nu_s(\delta^{k-1}f) \ge 0$ or $\nu_s(\delta^{k-1}f)=\nu_s(f)$. \end{enumerate} \end{lemma} \begin{proof} It is sufficient to prove the claim for the elements of the generating set, $U_{2-k,P,m} = \aleph_{2-k} u_{P,m} = \aleph_{k-2}^{-1}u_{P,m}$. We need to show that if $\delta^{k-1}f$ has a pole at $P$, i.e., $\nu_P(\delta^{k-1}f)<0$, then it satisfies the claim of the lemma. Note that if $\nu_P(\delta^{k-1}f)<0$, then also $\nu_P(f)<0$. Let $P\in X_{\Gamma}\setminus S_{\Gamma}$. We have $\nu_P(U_{2-k,P,m}) = -m<0$. Then, if $\tau_P$ is such that $t(\tau_P)=P$, $U_{2-k,P,m}$ admits a Laurent expansion of the form \begin{equation} U_{2-k,P,m}(\tau) = \frac{\alpha}{(\tau-\tau_P)^m} + \mathcal{O}\left(\frac{1}{(\tau-\tau_P)^{m-1}}\right)\,,\qquad \alpha\in\mathbb{C}\setminus\{0\} \end{equation} and so \begin{equation}\bsp \delta^{k-1}U_{2-k,P,m}(\tau)&\, =(2\pi i)^{1-k}\partial_{\tau}^{k-1}U_{2-k,P,m}(\tau)\\ &\, = \frac{(m)_{k-1}\,\alpha}{(-2\pi i)^{k-1}\,(\tau-\tau_P)^{m+k-1}}+ \mathcal{O}\left(\frac{1}{(\tau-\tau_P)^{m+k-2}}\right)\,, \esp\end{equation} where $(a)_n = a(a+1)\cdots(a+n-1)$ denotes the Pochhammer symbol. Hence $\nu_P(\delta^{k-1}U_{2-k,P,m}) = 1-k-m = 1-k+\nu_P(U_{2-k,P,m})$. Let $s\in S_{\Gamma}$, $s\neq [\infty]$, and suppose $\nu_s(U_{2-k,P,m}) = -m<0$. 
If $q$ is a local coordinate in a neighbourhood of the cusp $s$, $U_{2-k,P,m}$ admits the Fourier expansion: \begin{equation} U_{2-k,P,m}(q)=\frac{\alpha}{q^m} + \mathcal{O}(q^{-m+1})\,,\qquad \alpha\in\mathbb{C}\setminus\{0\}\,, \end{equation} and so \begin{equation} \delta^{k-1}U_{2-k,P,m}(q) = (q\partial_q)^{k-1}U_{2-k,P,m}(q) = \frac{(-m)^{k-1}\,\alpha}{q^m} + \mathcal{O}(q^{-m+1})\,. \end{equation} Hence, $\nu_s(\delta^{k-1}U_{2-k,P,m}) = \nu_s(U_{2-k,P,m})$. If $s=[\infty]$, then $\nu_{\infty}(U_{2-k,\infty,m}) = -m-\nu_{\infty}(\aleph_{k-2}) = -m - (k-2)d_{\Gamma}$. By the same argument as in the case $s\neq[\infty]$, we conclude $\nu_{\infty}(\delta^{k-1}U_{2-k,\infty,m}) = -m - (k-2)d_{\Gamma} = \nu_{\infty}(U_{2-k,\infty,m})$. \end{proof} We are now in a position to prove the decomposition in eq.~\eqref{eq:decomp_1}. The proof is constructive, and allows one to recursively construct the functions $h$ and $\tilde{g}$ in eq.~\eqref{eq:decomp_1}. The decomposition in eq.~\eqref{eq:decomp_1} is equivalent to the following result: \begin{thm}\label{thm:bijection} For $k\ge 2$, there is a decomposition \begin{equation} \mathcal{M}_k(\Gamma,R_S)=\widetilde{\mathcal{M}}_k(\Gamma,R_\infty)\oplus \delta^{k-1}\mathcal{M}_{2-k}(\Gamma,R_S)\,. \end{equation} \end{thm} \begin{proof} It is sufficient to consider the case $\#R=1$. We first show surjectivity. For this it is sufficient to show that all those $U_{k,P,m}$ not in $\widetilde{\mathcal{M}}_k(\Gamma,R_{\infty})$ do not define independent classes modulo objects that lie in the image of Bol's identity, i.e., these classes can be expressed as linear combinations in $\widetilde{\mathcal{M}}_k(\Gamma,R_{\infty})$, modulo $\delta^{k-1}\mathcal{M}_{2-k}(\Gamma,R_S)$. Let $s\in S_{\Gamma}$, $s\neq [\infty]$ and $m>0$. Let $q$ be a local coordinate around $s$. 
Then there are $\alpha_1,\alpha_2\in\mathbb{C}\setminus\{0\}$ such that \begin{equation}\bsp U_{k,s,m}(q) &\,= \frac{\alpha_1}{q^m} + \mathcal{O}(q^{-m+1})\,,\\ U_{2-k,s,m}(q) &\,= \frac{\alpha_2}{q^m} + \mathcal{O}(q^{-m+1})\,. \esp\end{equation} Lemma~\ref{lem:valence} implies \begin{equation} U_{k,s,m}(q) - \frac{\alpha_1}{\alpha_2}\,(-m)^{1-k}\,\delta^{k-1}U_{2-k,s,m}(q) = \mathcal{O}(q^{-m+1})\,. \end{equation} Applying this identity recursively, we arrive at the conclusion that \begin{equation} U_{k,s,m} = 0\!\!\! \mod \delta^{k-1}\mathcal{M}_{2-k}(\Gamma,R_S)\,,\quad \text{for all } m>0\,. \end{equation} For the infinite cusp, we know from the proof of Lemma~\ref{lem:valence} that $\nu_\infty(\delta^{k-1}U_{2-k,\infty,m'}) = \nu_\infty(U_{2-k,\infty,m'}) = -m'-(k-2)d_{\Gamma}$, for all $m'>0$. Hence, for all $m\ge 2d_{\Gamma}(k-1)$ there is $m' = m-2d_{\Gamma}(k-1)\ge 0$, and we can pick a local coordinate $q$ at the infinite cusp such that there are $\alpha_1,\alpha_2\neq0$ with \begin{equation}\bsp U_{k,\infty,m}(q) &\,= \frac{\alpha_1}{q^{m-kd_{\Gamma}}} + \mathcal{O}(q^{-m+kd_{\Gamma}+1})\,,\\ U_{2-k,\infty,m'}(q) &\,= \frac{\alpha_2}{q^{m'-(2-k)d_{\Gamma}}} + \mathcal{O}(q^{-m'+(2-k)d_{\Gamma}+1})= \frac{\alpha_2}{q^{m-kd_{\Gamma}}} + \mathcal{O}(q^{-m+kd_{\Gamma}+1})\,. \esp\end{equation} Lemma~\ref{lem:valence} then implies \begin{equation} U_{k,\infty,m}(q) - \frac{\alpha_1}{\alpha_2}\,(kd_{\Gamma}-m)^{1-k}\,\delta^{k-1}U_{2-k,\infty,m'}(q) = \mathcal{O}(q^{-m+kd_{\Gamma}+1})\,. \end{equation} It follows that $U_{k,\infty,m}$ for $m\ge 2d_{\Gamma}(k-1)$ does not define an independent class modulo total derivatives. Finally, let $R=\{P\}$, and $m>k-1$. We can take $m'=m-k+1>0$, and Lemma~\ref{lem:valence} implies $\nu_P(\delta^{k-1}U_{2-k,P,m'}) = 1-k-m'=-m$. 
Hence, with $\tau_P$ such that $t(\tau_P)=P$, there are $\alpha_1,\alpha_2\neq0$ such that \begin{equation}\bsp U_{k,P,m}(\tau) &\,= \frac{\alpha_1}{(\tau-\tau_P)^m} + \mathcal{O}((\tau-\tau_P)^{-m+1})\,,\\ \delta^{k-1}U_{2-k,P,m'}(\tau) &\,= \frac{\alpha_2}{(\tau-\tau_P)^{m'+k-1}} + \mathcal{O}((\tau-\tau_P)^{-m'-k+2})\\ &\, = \frac{\alpha_2}{(\tau-\tau_P)^m} + \mathcal{O}((\tau-\tau_P)^{-m+1})\,. \esp\end{equation} Hence \begin{equation} U_{k,P,m}(\tau)-\frac{\alpha_1}{\alpha_2}\delta^{k-1}U_{2-k,P,m'}(\tau) = \mathcal{O}((\tau-\tau_P)^{-m+1})\,, \end{equation} and so $U_{k,P,m}$ for $m>k-1$ does not define an independent class modulo total derivatives. This finishes the proof of surjectivity. Let us now show injectivity (for $R=\{P\}$). Consider the following general linear combination of elements from $\widetilde{\mathcal{M}}_k(\Gamma,R_{\infty})$: \begin{equation}\bsp f&\,:=\sum_{m=1}^{k-1}\alpha_m\,U_{k,P,m} + \sum_{n=0}^{2d_{\Gamma}(k-1)-1}\beta_n\,U_{k,\infty,n}\\ &\,=\aleph_k\sum_{m=1}^{k-1}\frac{\alpha_m}{(\xi-P)^m}+ \aleph_k\sum_{n=0}^{2d_{\Gamma}(k-1)-1}\beta_n\xi^n\,. \esp\end{equation} We need to show that whenever there is $g\in \mathcal{M}_{2-k}(\Gamma,R_S)$ such that $f=\delta^{k-1}g$, then necessarily $\alpha_m=0$ for $1\le m<k$ and $\beta_n=0$ for $0\le n<2d_{\Gamma}(k-1)$. Let us start by showing that the coefficients $\alpha_m$ must vanish. To see this, assume that $\alpha_m\neq 0$ for some $1\le m<k$. Then $f$ has a pole at $\xi=P$, i.e., $0>\nu_P(f)>-k$. Hence, $g$ must also have a pole at $\xi=P$, i.e., $\nu_P(g)<0$. Lemma~\ref{lem:valence} then implies $\nu_P(f) = \nu_P(\delta^{k-1}g) = 1-k+\nu_P(g)\le -k$, which is a contradiction. Hence $\alpha_m=0$ for all $1\le m<k$. Next, let us assume that $\beta_n\neq0$ for some $kd_{\Gamma}<n<2d_{\Gamma}(k-1)$. Then $f$ has a pole at the infinite cusp, with $0>\nu_{\infty}(f) \ge \nu_{\infty}(U_{k,\infty,2d_{\Gamma}(k-1)-1}) = (2-k)d_{\Gamma}+1$. Then $g$ must also have a pole. 
The order of the pole is bounded by \begin{equation} \nu_{\infty}(f) = \nu_{\infty}(\delta^{k-1}g) \le \nu_\infty(\delta^{k-1}U_{2-k,\infty,0}) = (2-k)d_{\Gamma}< \nu_{\infty}(f)\,, \end{equation} which is a contradiction. Hence $\beta_n=0$ for all $kd_{\Gamma}<n<2d_{\Gamma}(k-1)$. It follows that $f$ must be a holomorphic modular form of weight $k$, but $M_k(\Gamma)\cap \delta^{k-1}(\mathcal{M}_{2-k}(\Gamma)) = 0$ for $k\ge2$, and therefore $f=0$. \end{proof}
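The pole-order bookkeeping used throughout this proof reduces to elementary calculus on monomials. As a sanity check (ours, not part of the text; the helper names are hypothetical), the following Python sketch tracks a monomial $c\,x^{e}$ through repeated derivatives and verifies, on the level of leading terms, the two valuation shifts of Lemma~\ref{lem:valence}:

```python
# Sanity check (ours) of the pole-order bookkeeping in the lemma above.
# A monomial c*x^e is tracked as the pair (coeff, exponent).

def d_dx(term, times):
    """Apply d/dx repeatedly to c*x^e and return the new (coeff, exponent)."""
    c, e = term
    for _ in range(times):
        c *= e
        e -= 1
    return c, e

def q_d_dq(term, times):
    """Apply the Euler operator q*d/dq repeatedly to c*q^e."""
    c, e = term
    for _ in range(times):
        c *= e
    return c, e

k, m = 5, 3  # an arbitrary weight and pole order

# Interior point: the (k-1)-st derivative of (tau-tau_P)^(-m) has a pole of
# order m+k-1, i.e. the valuation shifts by 1-k (item 1 of the lemma).
c, e = d_dx((1, -m), k - 1)
assert e == -m - (k - 1)
expected = (-1) ** (k - 1)
for j in range(m, m + k - 1):
    expected *= j          # coefficient (-1)^(k-1) * m(m+1)...(m+k-2)
assert c == expected

# Cusp: (q d/dq)^(k-1) q^(-m) = (-m)^(k-1) q^(-m), so the valuation is
# unchanged (item 2 of the lemma).
c, e = q_d_dq((1, -m), k - 1)
assert e == -m and c == (-m) ** (k - 1)
```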
\section{Introduction} The advent of high precision experiments dedicated to measuring the radiation polarization on cosmological scales or exploring the more local properties of our Galaxy leads us to revisit the statistical properties of estimators related to the polarization amplitude. Polarimeters decompose the incoming monochromatic plane wave radiation into its $(I,Q,U)$ Stokes components \citep{Chandrasekar1950} in the linear case. According to the scanning strategy of the instrument, repeated measurements are conducted and combined, which, owing to the Central Limit Theorem, ensures that the Stokes parameters follow a Gaussian distribution. However the construction of physical models is most naturally performed in polar coordinates, {\rm i.e.}\xspace using the normalized polarization amplitude (or degree) and angle. More precisely, astrophysicists are interested in the \bsq{true} degree of polarization $p_0=\tsqrt{q_0^2+u_0^2}$, and angle $\psi_0=\tfrac{1}{2}\atan \tfrac{u_0}{q_0}$, where $q_0=Q_0/I_0$, $u_0=U_0/I_0$, and the subscript \bsq{0} emphasizes that we are considering true quantities. Working with amplitude and angle data helps in assessing the underlying physical processes and deserves some statistical attention. Unlike in the angular case, where the naive estimate $\hat \psi=\tfrac{1}{2}\atan\tfrac{u}{q}$ is unbiased \citep{Vinokur1965}, getting a \bsq{correct} point-estimate for the amplitude from a \textit{single} $(q,u)$ measurement is more involved. The naive estimate $p=\sqrt{q^2+u^2}$ is indeed strongly biased at low SNR, since it does not correct for the power of the experimental noise. Working instead on $p^2$, one can remove this bias \citeg{Gudbjartsson1995}, but the resultant distribution, a non-central $\chi^2$ one, is extremely skewed for low SNR and the unbiasing induces many negative values. 
It is sometimes believed that the Maximum Likelihood (ML) estimator is the optimal solution since it is known to reach the minimum variance bound. But this is valid only \textit{asymptotically}, {\rm i.e.}\xspace in the limit of a large number of samples. There is only one case where the ML estimator is optimal for finite samples: when the parent distribution is of the exponential form \citeg{James2007}, which is not the case here, at least in the low SNR regime. When combining several measurements it however still remains a good solution \citep{Taludkar1991,Sijbers1998}. An estimator often used in cosmology is based on the most-probable value \citep{Wardle1974}. Its properties, together with those of a set of other standard estimators, were reviewed in \citet{Simmons1985}. All these estimators are however \textit{discontinuous}: their distribution is a mixture of a discrete peak at zero and a positive tail. While statistically valid, this is in practice very undesirable. Their bias and risk are small because they are computed in an \textit{ensemble average} sense. But an ergodicity argument cannot be invoked, since the user generally works on a single realization of the sky. In practice, when applying these estimators, for instance to a pixelized map, the user ends up with a large number of zeros and does not know how to treat them. Bayesian estimators that are continuous were proposed by \citet{Quinn2012}. However, as we will see in Sect. \ref{sec:mas}, their distribution is very skewed and has a cutoff value. The aim of this work is to cure these issues and provide a polarization amplitude estimator from a bi-variate normally distributed $(q,u)$ measurement that is continuous and lies in the whole positive region. We will particularly take care of the overall shape of the estimator distribution, not only its first two moments as characterized by the bias and risk. 
Previous works focused on a $[q,u]$ covariance matrix proportional to the identity, $C=\sigma^2\mathbf{1}$, what we will call the \textit{canonical} case. Given the extreme sensitivity of the current and planned experiments, we will also consider the case of a general covariance matrix, {\rm i.e.}\xspace including some ellipticity ($\ensuremath{\sigma_q} \ne \ensuremath{\sigma_u}$) and correlation ($\rho$): \begin{equation} C= \begin{pmatrix} \ensuremath{\sigma_q}^2 & \rho \ensuremath{\sigma_q} \ensuremath{\sigma_u}\\ \rho \ensuremath{\sigma_q} \ensuremath{\sigma_u} & \ensuremath{\sigma_u}^2 \end{pmatrix}. \end{equation} In Sect.~\ref{sec:asymptotic}, we will first review the asymptotic properties of the naive estimator in the canonical case of a $[q,u]$ covariance matrix proportional to the identity, {\rm i.e.}\xspace $\ensuremath{\sigma_q}=\ensuremath{\sigma_u}=\sigma,\rho=0$. This will allow us to retrieve the asymptotic estimator and cure its discontinuity while still keeping rapid convergence to the asymptotic limit. We will characterize our estimator in Sect. \ref{sec:perf} not only with its first order moments but with its full distribution, for which we will provide an analytic approximation. When building confidence intervals in Sect. \ref{sec:cl}, we will cure the classical problem of regions lying in the unphysical region by applying the Feldman-Cousins prescription. It will allow us to obtain physical intervals without ever being \bsq{conservative} (as defined in Sect.~\ref{sec:cl}). An analytic description of the interval will be given for our estimator. Then in Sect. \ref{sec:general} we will consider the case of a general $[q,u]$ covariance matrix, before concluding that our estimator can be used efficiently to provide reliable (Gaussian) estimates in regions of SNR above 2, and conversely to construct polarization masks for regions with a low statistical significance. 
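For later reference, drawing $(q,u)$ samples with the general covariance matrix $C$ above is straightforward. The following Python sketch (ours, not from the paper; the function name is hypothetical) writes out the $2\times2$ Cholesky factorisation of $C$ explicitly and checks the sample covariance against the input parameters:

```python
# Sketch (ours): drawing (q,u) samples with the general covariance matrix C,
# including ellipticity (sq != su) and correlation rho, via the explicit
# 2x2 Cholesky factorisation: q = sq*z1, u = su*(rho*z1 + sqrt(1-rho^2)*z2).
import math
import random

def sample_qu(q0, u0, sq, su, rho, n, seed=0):
    rng = random.Random(seed)
    qs, us = [], []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        qs.append(q0 + sq * z1)
        us.append(u0 + su * (rho * z1 + math.sqrt(1 - rho**2) * z2))
    return qs, us

qs, us = sample_qu(0.1, 0.2, sq=0.05, su=0.08, rho=0.3, n=200_000)
n = len(qs)
mq, mu = sum(qs) / n, sum(us) / n
cqq = sum((q - mq) ** 2 for q in qs) / n
cuu = sum((u - mu) ** 2 for u in us) / n
cqu = sum((q - mq) * (u - mu) for q, u in zip(qs, us)) / n
# The sample covariance reproduces C up to Monte-Carlo noise.
assert abs(cqq - 0.05**2) < 1e-4
assert abs(cuu - 0.08**2) < 1e-4
assert abs(cqu - 0.3 * 0.05 * 0.08) < 1e-4
```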
\section{Asymptotic properties of the amplitude distribution} \label{sec:asymptotic} \subsection{Approximations to the Rice distribution} \label{sec:canon} We begin by revisiting the asymptotic properties of the amplitude distribution in the case where the $(q,u)$ Stokes parameters are drawn from a Gaussian centred around the true values $(q_0,u_0)$ and with a simple covariance matrix proportional to the identity ($\ensuremath{\sigma_q}=\ensuremath{\sigma_u}=\sigma$). The change of $(q,u)$ variables into polar coordinates \footnote{Throughout the text we will work with the angular polar coordinates $\phi$, keeping in mind that the polarization angle, which is a spin-2 quantity, is defined by $\psi=\phi/2$. The \atan function is classically generalized to span the whole $[-\pi,\pi]$ range.} \begin{equation} \label{eq:polar} \begin{split} p&=\sqrt{q^2+u^2},\\ \phi&=\atan\dfrac{u}{q}, \end{split} \end{equation} leads to the bi-variate polar distribution: \begin{equation} \label{eq:rice2d} f_{p,\phi}(p,\phi)= \dfrac{p}{2\pi\sigma^2} e^{-\dfrac{p^2+p_0^2}{2\sigma^2}}e^{\dfrac{p p_0\cos(\phi-\phi_0)}{\sigma^2}}, \end{equation} where we have introduced the true polar values: \begin{equation} \begin{split} p_0&=\sqrt{q_0^2+u_0^2},\\ \phi_0&=\atan\dfrac{u_0}{q_0}. \end{split} \end{equation} Our aim is then to estimate the true amplitude $p_0$ and angle $\phi_0$. Marginalization over the angle leads to the Rice distribution \citep{Rice1945}, which no longer depends on the true $\phi_0$ value: \begin{equation} \label{eq:rice} \ensuremath{f_p}\xspace(p)=\dfrac{p}{\sigma^2} e^{-\dfrac{p^2+p_0^2}{2\sigma^2}} \Izero{\dfrac{p p_0}{\sigma^2}}, \end{equation} where $I_0$ denotes the modified Bessel function of order 0. 
Its moments can be computed exactly using \citet{GradshteynRyzhik2007} Eq.~(6.631), $I_0(z)=J_0(i z)$ and the connection between Kummer's confluent hypergeometric function (noted $_1F_1$ or $M$) and the Laguerre polynomials \lag{k} \citep[][Eq.~(18.11.2)]{NIST2010}, which gives: \begin{align} \label{eq:ricemom} \E{p}&=\sqrt{\dfrac{\pi}{2}} \sigma \lag{\frac{1}{2}}\left(-\dfrac{p_0^2}{2\sigma^2}\right),\\ \E{p^2}&=2\sigma^2 + p_0^2, \end{align} where the half-order Laguerre polynomial $\lag{\frac{1}{2}}$ can be conveniently computed from: \begin{equation} \label{eq:laghalf} \lag{\frac{1}{2}}(z)=e^{z/2}\left( (1-z)I_0(-z/2)-z I_1(-z/2) \right). \end{equation} The moments allow us to build the variance $\E{p^2}-\E{p}^2$ and the risk $=\E{(p-p_0)^2}$, which depends on the true $p_0$ value. For a large SNR, {\rm i.e.}\xspace when $\epsilon\equiv\dfrac{\sigma}{p_0} \to 0$, the leading order expansion of the mean is: \begin{align} \label{eq:mean} \E{p}&=p_0(1+\epsilon^2/2) +\bigO{\epsilon^4},\nonumber \\ &=p_0+\dfrac{\sigma^2}{2p_0}+\bigO{\epsilon^4}, \end{align} while, to same order, the variance is: \begin{equation} \label{eq:variance} V(p)=\sigma^2+\bigO{\epsilon^4}. \end{equation} The mean and variance both involve the Gaussian variance $\sigma^2$. To avoid confusion in the following, we will denote its first meaning as a (non-linear) \bsq{noise-bias} and call it $b^2$. It is often claimed \citeg{Gudbjartsson1995,SijbersThesis1998,Cardenas2008} that the Rice distribution converges asymptotically to a Gaussian: \begin{equation} \label{eq:gaussapprox} f_p\to \mathcal{N}(\sqrt{p_0^2+\sigma^2},\sigma^2), \end{equation} where $\mathcal{N}(\mu,\sigma^2)$ denotes a Gaussian distribution of mean $\mu$ and variance $\sigma^2$. \begin{figure} \centering \includegraphics[width=.45\textwidth]{geo_simple} \caption{\label{fig:geo_simple} Illustration of the mean and variance of the amplitude distribution in the canonical case from a sampling point of view. 
$(q,u)$ samples are drawn according to a Gaussian of mean $(q_0,u_0)$ and variance $\sigma$. The circle represents the 1-$\sigma$ iso-probability contour. One considers the distance to the origin of samples located uniformly on that circle. In the asymptotic case, {\rm i.e.}\xspace when the circle is far from the origin, the distance distribution is (almost) symmetric around the value corresponding to that of the $M$ point, which is \textit{orthogonal} to the direction towards the circle centre. The mean value there is ${\tsqrt{p_0^2+\sigma^2}}$. The distribution lies in the $p_0\pm\sigma$ range and has a variance of $\sigma$ estimated \textit{along} the direction to the centre. By considering the angular distribution of the samples, one also finds that it is centred on $\phi_0$ ({\rm i.e.}\xspace unbiased) and has a deviation of $\tfrac{\sigma}{p_0}$, as confirmed by a direct calculation \citep{Vinokur1965}. This construction is only approximate, but captures the essentials of the mean and variance computations. } \end{figure} The origin of these values for the mean and variance can be understood from the simple geometric construction of Fig.~\ref{fig:geo_simple}. That the distribution converges to a Gaussian one is, as far as we know, not justified in the literature so we re-examine that statement in some detail. For a large argument, the modified Bessel function converges to \citep[][Eq.~(10.40.1)]{NIST2010}: \begin{equation} \label{eq:I0expansion} \Izero{z} \to \dfrac{e^{z}}{\sqrt{2\pi z}}, \end{equation} and then the Rice distribution to: \begin{equation} \label{eq:riceapprox} f_p \rightarrow \sqrt{\dfrac{p}{p_0}}{\cal N}(p_0,\sigma^2). \end{equation} This approximation is valid for a SNR above about 1 (see Fig.~\ref{fig:riceapprox}). \begin{figure} \centering \includegraphics[width=.5\textwidth]{riceapprox} \caption{Approximations to the Rice distribution for $p_0/\sigma=2$ (solid lines) and $p_0/\sigma=1$ (dashed lines). 
The black curves correspond to the exact Rice scaled distribution, the red ones to the traditional Gaussian approximation (\refeq{gaussapprox}), and the green ones to our \refeq{riceapprox} approximation.} \label{fig:riceapprox} \end{figure} This distribution then converges to a Gaussian, for a SNR larger than about 2, as shown in Fig.~\ref{fig:riceapprox}. The reason can be understood by making the change of variable $p^\prime=\tfrac{p-p_0}{\sigma}$ and expanding the square-root to first order in $\epsilon$: the distribution of the scaled variable tends to \begin{equation} f_{p^\prime} \rightarrow \mathcal{N}(0,1)+\dfrac{\sigma}{2 p_0}p^\prime\mathcal{N}(0,1), \end{equation} which exhibits a corrective term to a pure Gaussian that gets smaller with $\epsilon=\dfrac{\sigma}{p_0}$. It can then be verified that this approximation indeed leads to the two moments of \refeq{mean} and \refeq{variance}. The first order effect of the corrective term can thus be captured in a bias of the Gaussian mean, which converges to \refeq{mean}. Up to \textit{first order} this is indeed the Taylor expansion of $\sqrt{p_0^2+\sigma^2}$. However the next order term in this expansion is negative ($-\tfrac{1}{8}\sigma^4/p_0^3$), while the one from the exact mean expression is positive ($+\tfrac{1}{8}\sigma^4/p_0^3$). It is therefore more correct to use simply $p_0+\tfrac{\sigma^2}{2p_0}$ for the Gaussian mean. What we have learned so far is that the ${\cal N}(\sqrt{p_0^2+\sigma^2},\sigma^2)$ Rice approximation is a first-order asymptotic expansion valid for $p_0/\sigma\gtrsim 2$. A slightly better approximation is obtained from the first-order expansion of the mean, $\mathcal{N}(p_0+\tfrac{\sigma^2}{2p_0},\sigma^2)$, and a yet better one by $\sqrt{\tfrac{p}{p_0}}{\cal N}(p_0,\sigma^2)$, which is valid above $p_0/\sigma\gtrsim 1$. 
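The exact moments in \refeq{ricemom} and the closed form \refeq{laghalf} can be cross-checked numerically. The following Python sketch (ours, not part of the text; the power-series implementation of $I_0$, $I_1$ and all helper names are assumptions) compares the Laguerre-polynomial expression for $\E{p}$ with a direct Monte-Carlo estimate:

```python
# Numerical cross-check (ours) of the exact Rice moments and of the
# half-order Laguerre closed form; I0 and I1 are computed by power series.
import math
import random

def bessel_i(nu, x, terms=40):
    """Modified Bessel function I_nu(x), integer nu >= 0, via its series."""
    return sum((x / 2) ** (2 * k + nu) / (math.factorial(k) * math.factorial(k + nu))
               for k in range(terms))

def laguerre_half(z):
    """Half-order Laguerre polynomial L_{1/2}(z), via the closed form."""
    return math.exp(z / 2) * ((1 - z) * bessel_i(0, -z / 2) - z * bessel_i(1, -z / 2))

p0, sigma, n = 1.5, 1.0, 400_000
rng = random.Random(42)
samples = [math.hypot(p0 + sigma * rng.gauss(0, 1), sigma * rng.gauss(0, 1))
           for _ in range(n)]            # phi_0 = 0 without loss of generality

mean_mc = sum(samples) / n
mean2_mc = sum(p * p for p in samples) / n
mean_exact = math.sqrt(math.pi / 2) * sigma * laguerre_half(-p0**2 / (2 * sigma**2))

assert abs(mean_mc - mean_exact) < 0.01               # E[p]
assert abs(mean2_mc - (2 * sigma**2 + p0**2)) < 0.03  # E[p^2] = 2 sigma^2 + p0^2
```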
\subsection{Modified ASymptotic estimator (MAS)} \label{sec:mas} We now address the question of building an estimator of the true $p_0$ value with \bsq{good} properties, which is a somewhat subjective notion. We feel, however, that an essential property is convergence to the true value as fast as possible with the SNR, but also that the estimator distribution has a \bsq{reasonable} shape (this will be clarified later). Keeping in mind that building a perfectly unbiased estimator for a very low $p_0$ is mathematically impossible (see Appendix \ref{app:A}), we will focus on the asymptotic approximations to the Rice distribution. To avoid confusion in the following, we will add an index \bsq{i} to the measurement, even though we are considering a single sample. We are looking for a \bsq{satisfactory} estimator given a single sample $p_i=\sqrt{q_i^2+u_i^2}$. From the standard Rice approximation $\mathcal{N}(\sqrt{p_0^2+\sigma^2},\sigma^2)$, the maximum likelihood estimator in this case is straightforwardly: \begin{equation} \label{eq:as} \hat p_{AS}=\sqrt{p_i^2-\sigma^2}. \end{equation} Using our slightly more precise approximation $\mathcal{N}(p_0+\tfrac{\sigma^2}{2p_0},\sigma^2)$ one obtains: \begin{equation} \label{eq:as2} \hat p_{AS^\prime}=\half (p_i+\sqrt{p_i^2-2\sigma^2}), \end{equation} which is also the ML estimator using our most precise approximation $\sqrt{\tfrac{p}{p_0}}\mathcal{N}(p_0,\sigma^2)$. In this form we encounter the problem of dealing with negative values under the square-root, as discussed in the introduction. We show how to build a simple continuous analytic estimator that extends over the whole positive region and converges rapidly to the asymptotic limit. The first order expansion of both \refeq{as} and \refeq{as2} is \begin{equation} \label{eq:mas1} \hat p=p_i-\dfrac{\sigma^2}{2p_i}, \end{equation} which is also the most probable estimator of our $\sqrt{\tfrac{p}{p_0}}\mathcal{N}(p_0,\sigma^2)$ approximation. 
This estimator diverges for low values. We want to modify it based on the following requirements: \begin{enumerate} \item the transformation must be smooth, in order to avoid Jacobian peak effects, \item it must converge to the asymptotic result (\refeq{mas1}) for a SNR around 2, \item the samples must always remain positive, \item the estimator distribution transforms smoothly to an unbiased Gaussian as the SNR increases. \end{enumerate} We then consider transformations of the form: \begin{equation} \hat p =p_i- \sigma^2\frac{1-e^{-\lambda p_i^2 / \sigma^2}}{2p_i}, \end{equation} where $\lambda>0$ is to be discussed, which preserves the correct asymptotic limit while converging linearly to 0 for low values: \begin{equation} \label{eq:lim} \hat p =\left(1-\dfrac{\lambda}{2}\right) p_i +\bigO{p_i^2}. \end{equation} In order to fulfill requirement (ii) we need $\lambda \ge 1$. On the other hand, $\lambda$ should not exceed $2$, since otherwise the derivative around 0 would become negative (see \refeq{lim}) and we would fail requirement (iii). For $\lambda$ around 2, the estimator distribution is peaked at 0 and similar to an exponential. When transforming to a Gaussian with the SNR, it develops an intermediate minimum that complicates its overall shape. In contrast, for $\lambda$ around 1, the distribution transforms from a Rayleigh-like one to a Gaussian one without introducing a secondary extremum, which is similar to the Rice case and will be further discussed in Sect. \ref{sec:distrib}. Given the marginal gain of using $\lambda=2$ and its induced complexity on the distribution, we consider $\lambda=1$ as our optimal solution. We then propose the following Modified ASymptotic (MAS) estimator: \begin{equation} \label{eq:mas} \ensuremath{\hat p_\mathrm{MAS}}\xspace=p_i- \sigma^2\frac{1-e^{-p_i^2 / \sigma^2}}{2p_i}. \end{equation} \begin{figure} \centering \includegraphics[width=.5\textwidth]{curves} \caption{\label{fig:curve} Transformation curve of the MAS estimator (in red). 
We also show some other classical estimator curves: in light-blue, the Asymptotic (\refeq{as}), in blue the Most Probable \citep{Wardle1974} and in black the Maximum Likelihood \citep{Simmons1985}. They are discontinuous and the latter two non-analytic. Also shown in green is the curve of the posterior-mean Bayesian estimator \citep{Quinn2012} with a uniform prior on $p_0/\sigma$. The dashed line represents the naive estimator.} \end{figure} We show on Fig.~\ref{fig:curve} its transformation curve, together with some other classical estimators, demonstrating how it extrapolates smoothly from the asymptotic regime down to 0. This figure reveals that: \begin{itemize} \item the Most Probable estimator \citep{Wardle1974} has essentially the same properties as the simple asymptotic one of \refeq{as}; \item these two, together with the ML one \citep{Simmons1985}, are discontinuous, {\rm i.e.}\xspace have a non-differentiable transform at one point which leads to a set of discrete samples at 0; \item the one-dimensional posterior-mean Bayesian estimator with a uniform prior in $p_0$ is lower-bounded at $\tsqrt{\tfrac{2}{\pi}}\,\sigma\simeq 0.8\,\sigma$, which can be verified from its expression that is analytic in the moderate SNR regime: \footnote{The analytic computation is performed after a change of variable into the scaled (SNR) variable and letting $1/\sigma \to \infty$. The result holds up to very high polarization values.} \begin{align} \hat p_\mathrm{mean}&=\dfrac{\int_0^1 p_0 f_p(p|p_0) dp_0}{\int_0^1 f_p(p|p_0) dp_0} \nonumber \\ & \simeq \left [ \tfrac{1}{\sigma} \sqrt{\tfrac{\pi}{2}} e^{-\tfrac{p^2}{4\sigma^2}} \Izero{\tfrac{p^2}{4\sigma^2}} \right]^{-1}. 
\end{align} Furthermore, such curves that have a null derivative at low SNR, which is the case of all Bayesian estimators presented in \citet{Quinn2012}, lead to extremely skewed distributions at low SNR, as can be inferred from transforming samples drawn from a Rayleigh-type distribution along the $p/\sigma$ axis. \item all these estimators but the naive one have the correct asymptotic limit (which is \refeq{mean}) and differ by the way they behave at low values. \end{itemize} \section{Performance of the MAS estimator} \label{sec:perf} \subsection{Distribution} \label{sec:distrib} We study the distribution of the MAS estimator \refeq{mas}, in the canonical case, using Monte-Carlo simulations. For a given $p_0$ value, we draw $10^{6}$ normally distributed $(q_i,u_i)$ samples centred on $q_0=p_0 \cos \phi_0, u_0=p_0 \sin\phi_0$, where $\phi_0$ is drawn from a uniform distribution on $[-\pi,\pi]$. We then compute $p_i=\sqrt{q_i^2+u_i^2}$, transform the samples according to \refeq{mas} and project them into a histogram in order to obtain the probability density function. Fig.~\ref{fig:pmas_distrib} shows some distributions for increasing $p_0$ values, which exhibit how they change smoothly from Rayleigh-like at low SNR to Gaussian as soon as $p_0/\sigma \gtrsim 2$. We work out in the following an analytic description of its distribution, which is useful for implementing a likelihood function. Using the scaled variables $p\leftarrow \tfrac{p}{\sigma}, p_0\leftarrow \tfrac{p_0}{\sigma}$, the MAS transformation reads, dropping the \bsq{i} subscript: \begin{equation} \hat p=p-\dfrac{1-e^{-p^2}}{2p}. \end{equation} The standard rules of random variable transformation require inverting this equation, which does not have an exact analytic expression. We note however that in the asymptotic limit, the exponential can be neglected and the inverse is $p =\tfrac{1}{2} (\hat p+\sqrt{\hat p^2+2})$. 
From a numerical comparison to the exact inverse, we find it sufficient to complement this expression with an exponential term. We obtain the following approximate inverse relation: \begin{equation} \label{eq:pmas_inv} p\simeq g(\hat p)=\half (\hat p+\sqrt{\hat p^2+2})(1-e^{-a \hat p}), \end{equation} with $a=3.17$. This approximation is valid over the whole positive range, with errors below the percent level. The distribution of the $\hat p$ estimator is then obtained from the transformation of the Rice distribution \ensuremath{f_p}\xspace (\refeq{rice}) as: \begin{align} \label{eq:pmas_distrib} f_{\hat p}(p)&=g^\prime(p) \ensuremath{f_p}\xspace(g(p)) \nonumber \\ & \simeq \dfrac{(p + \sqrt{p^2 + 2})(a\sqrt{p^2 + 2} + e^{a p} -1)}{2e^{ap}\sqrt{p^2 + 2}} \nonumber \\ & \quad \times \ensuremath{f_p}\xspace\left(\half (p+\sqrt{p^2+2})(1-e^{-a p})\right), \end{align} and the complete distribution is given by $f_{\hat p}\left(\tfrac{p}{\sigma}\right)/\sigma$. This analytic approximation is excellent, as shown for some examples on Fig.~\ref{fig:pmas_distrib}. \begin{figure} \centering \includegraphics[width=.5\textwidth]{pmas_distrib} \caption{\label{fig:pmas_distrib} MAS estimator distribution in the canonical case, as obtained from the Monte-Carlo simulations, for several $p_0/\sigma$ values (shown as the vertical line). From left to right and top to bottom $p_0=0,0.5,1,1.5,2,3$. The analytic approximation discussed in the text (\refeq{pmas_distrib}) is superimposed in red.} \end{figure} \subsection{Bias and risk} The first two orders of the estimator statistics are characterized by the normalized bias $E[\ensuremath{\hat p_\mathrm{MAS}}\xspace-p_0]/\sigma$ and risk $E[(\ensuremath{\hat p_\mathrm{MAS}}\xspace-p_0)^2]/\sigma^2$, which we estimate using Monte-Carlo simulations. They are shown on Fig.~\ref{fig:bias_risk}. For an SNR as low as 2, the estimator is essentially unbiased and has a $\sigma^2$ risk.
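Both the percent-level accuracy of \refeq{pmas_inv} and the near-unbiasedness quoted above are easy to check numerically. The following Python sketch (the grids, seed, sample size and tolerances are our own illustrative choices, not from the text) applies the scaled MAS transformation and its approximate inverse:

```python
import math
import random

def mas(p):
    """Scaled MAS transformation: hat p = p - (1 - exp(-p^2)) / (2 p)."""
    return p - (1.0 - math.exp(-p * p)) / (2.0 * p)

def g(phat, a=3.17):
    """Approximate inverse of the MAS transform, eq. (pmas_inv)."""
    return 0.5 * (phat + math.sqrt(phat * phat + 2.0)) * (1.0 - math.exp(-a * phat))

# round-trip accuracy |g(mas(p)) - p| of the approximate inverse on p in [0.5, 10]
inv_err = max(abs(g(mas(0.5 + 0.05 * i)) - (0.5 + 0.05 * i)) for i in range(191))

# Monte-Carlo bias check in the canonical case (sigma = 1) at p0 / sigma = 3
rng = random.Random(12345)
p0, n, total = 3.0, 200_000, 0.0
for _ in range(n):
    phi0 = rng.uniform(-math.pi, math.pi)
    q = rng.gauss(p0 * math.cos(phi0), 1.0)
    u = rng.gauss(p0 * math.sin(phi0), 1.0)
    total += mas(math.hypot(q, u))
bias = total / n - p0
```

The round-trip error stays below the percent level over the tested range, and the Monte-Carlo bias at $p_0/\sigma=3$ is compatible with zero at the percent level.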
\begin{figure} \centering \includegraphics[width=.5\textwidth]{bias_risk} \caption{\label{fig:bias_risk} Estimate of the normalized bias (top) and risk (bottom) of the modified asymptotic estimator (MAS) in the canonical case, as obtained from Monte-Carlo simulations.} \end{figure} \subsection{Confidence intervals} \label{sec:cl} We emphasize that the characterization of estimators in terms of their mean and risk may lead to over-simplification and misunderstandings in a community accustomed to considering a number with an \bsq{error} as originating from a Gaussian distribution. Instead, providing a confidence interval at some given significance level $\alpha$ is more complete, since it is independent of the shape of the estimator distribution. The construction of a classical confidence interval is an old and well defined statistical procedure \citep{Neyman1937}. It does not, however, uniquely specify the acceptance region: for a fixed $p_0$ value, the requirement $Pr(\hat p\in [\ensuremath{p_\mathrm{min}},\ensuremath{p_\mathrm{max}}]|p_0)=\alpha$ is insufficient to fix the interval, and one must choose an additional free criterion. A common choice is to use the central confidence interval: \begin{equation} \label{eq:central} Pr(\hat p < \ensuremath{p_\mathrm{min}})=Pr(\hat p > \ensuremath{p_\mathrm{max}}) =\dfrac{1-\alpha}{2}. \end{equation} It may however lead to the situation of providing an \bsq{empty-set} $\{0\}$ or, equivalently, an interval lying entirely in the unphysical region (see Fig.~\ref{fig:rice_cl}), which is statistically valid, but uncomfortable to an analyst. One solution is to enlarge the interval, through some arbitrary construction, to provide a \bsq{conservative} one, a procedure already used for the naive estimator \citep{Simmons1985}. Here we rather advocate using the Feldman-Cousins prescription \citep{FC}, which, for the free criterion, uses an ordering of the likelihood ratios.
The authors showed that the problem of empty-sets relates to intervals failing a goodness-of-fit test. Their procedure naturally decouples this test from the construction of the interval, effectively removing the empty-set issue without ever being conservative. We show in the following how to perform it in our case. We consider some estimator $\hat p$ for which we can compute the distribution $\hat f(p | p_0)$, possibly via Monte-Carlo simulation. We first pre-compute its maximum-likelihood curve, {\rm i.e.}\xspace the $p_0$ value for which $\hat f(p | p_0)$ is maximum. We then \textit{scan} $p_0$ values, and at each step: \begin{enumerate} \item compute the likelihood ratio curve as $R(p)=\dfrac{\hat f(p|p_0)}{\hat f(p|p_{ML})}$, where $p_{ML}(p)$ is obtained from our pre-computation, \item solve numerically the system $\begin{cases} R(\ensuremath{p_\mathrm{min}})=R(\ensuremath{p_\mathrm{max}}) \\ \int_{\ensuremath{p_\mathrm{min}}}^{\ensuremath{p_\mathrm{max}}} dp \hat f(p|p_0)=\alpha\\ \end{cases}$, \item report the $[\ensuremath{p_\mathrm{min}},\ensuremath{p_\mathrm{max}}]$ interval for this $p_0$ value horizontally on a graph known as the \bsq{confidence belt} \citep[e.g.][see also Fig.~\ref{fig:rice_cl}]{PDG2012}. \end{enumerate} The standard Neyman inversion then allows one, for a given $p/\sigma$ sample, to read off its $\alpha$-level confidence interval on the vertical axis. We show on Fig.~\ref{fig:rice_cl} the result of this computation at the $\alpha=0.90$ level for the Rice distribution ({\rm i.e.}\xspace the naive estimator) and compare the limits obtained to the classical central intervals. The empty-set at low $p/\sigma$ values indeed disappears and the user can now report a confidence interval for any measured value, without ever being conservative. Asymptotically ($p/\sigma \gtrsim 3.5$) both constructions agree, but we obtain tighter constraints in the intermediate region $p/\sigma\in[2.2,3.5]$.
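The scan above can be implemented on a discretized probability table: accepting measurement bins in decreasing likelihood-ratio order until the coverage reaches $\alpha$ is equivalent, up to the bin width, to solving the system of step (ii). A minimal Python sketch (the grid sizes and the naive-estimator/Rice example are our own choices):

```python
import numpy as np

def rice_pdf(p, p0):
    """Rice density f_p(p | p0) in scaled units (sigma = 1)."""
    return p * np.exp(-(p ** 2 + p0 ** 2) / 2.0) * np.i0(p * p0)

# probability table: rows are hypothesis p0 values, columns are measured p bins
p_grid = np.linspace(1e-3, 8.0, 1600)
p0_grid = np.linspace(0.0, 6.0, 601)
prob = rice_pdf(p_grid[None, :], p0_grid[:, None]) * (p_grid[1] - p_grid[0])
prob /= prob.sum(axis=1, keepdims=True)

# pre-computed maximum-likelihood curve: best physical p0 row for each p bin
f_ml = prob[prob.argmax(axis=0), np.arange(p_grid.size)]

def fc_acceptance(i0, alpha=0.90):
    """Feldman-Cousins acceptance region in p for the hypothesis row i0."""
    order = np.argsort(-(prob[i0] / f_ml))      # highest likelihood ratio first
    cum = np.cumsum(prob[i0][order])
    accepted = order[: np.searchsorted(cum, alpha) + 1]
    return p_grid[accepted].min(), p_grid[accepted].max(), prob[i0][accepted].sum()

lo, hi, cov = fc_acceptance(0)                  # hypothesis p0 = 0
```

For the $p_0=0$ hypothesis the acceptance region extends down to $p=0$, so no measured value ever returns an empty set, and its upper edge sits near the $90\%$ Rayleigh quantile $\sqrt{2\ln 10}\simeq 2.15$.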
\begin{figure} \centering \includegraphics[width=.5\textwidth]{rice_cl} \caption{\label{fig:rice_cl} Construction of a 90\% CL interval for the naive estimator in the canonical case, using the central confidence region (black lines) or the Feldman-Cousins prescription (red lines). For a measured sample value $p/\sigma$ one reads off the associated confidence interval on the vertical axis. For low values ($p/\sigma < 0.29$) the central interval lies entirely inside the unphysical negative region. This is cured by applying the Feldman-Cousins prescription.} \end{figure} Using the Feldman-Cousins prescription, we then build the confidence belts of the MAS estimator at the 0.68, 0.90 and 0.95 confidence levels. They are shown on Fig.~\ref{fig:pmas_cl}. For convenience we provide the following analytic approximations to the scaled upper and lower limits at the $\alpha$ significance level: \begin{equation} \label{eq:approxcls} \begin{split} \ensuremath{p_\mathrm{min}}^\alpha&=\ensuremath{\hat p_\mathrm{MAS}}\xspace -p_\alpha(1+\beta e^{-\gamma \ensuremath{\hat p_\mathrm{MAS}}\xspace}\sin(\omega \ensuremath{\hat p_\mathrm{MAS}}\xspace+\phi)), \\ \ensuremath{p_\mathrm{max}}^\alpha&=\ensuremath{\hat p_\mathrm{MAS}}\xspace+p_\alpha(1-\beta e^{-\gamma \ensuremath{\hat p_\mathrm{MAS}}\xspace}), \end{split} \end{equation} where $p_\alpha=\sqrt{2} \mathrm{Erf}^{-1}(\alpha)$ is the $\alpha$-point of the Gaussian distribution that is reached asymptotically, and the parameters are given in Table \ref{tab:clparams} for the three significance levels. \begin{figure} \centering \includegraphics[width=.5\textwidth]{pmas_cl} \caption{\label{fig:pmas_cl} Confidence belts of the normalized MAS estimator, using the Feldman-Cousins prescription, for 0.68 (blue dots) , 0.90 (red dots) and 0.95 (green dots) confidence levels. The dashed lines correspond to the Gaussian intervals that are reached asymptotically. The solid lines correspond to the analytic description provided in the text. 
For a given $\ensuremath{\hat p_\mathrm{MAS}}\xspace/\sigma$ value, the corresponding confidence interval is read vertically.} \end{figure} \begin{table} \centering \begin{tabular}{ccccccc} \hline\hline Bound & $\alpha$ & $p_\alpha$ & $\beta$ & $\gamma$ & $\omega$ & $\phi$ \\ \hline $\ensuremath{p_\mathrm{min}}$ & 0.68 & 1 & 0.72 & 0.60 &-0.83 &4.41 \\ $\ensuremath{p_\mathrm{max}}$ & 0.68 & 1 & 0.97 & 2.01 & - & - \\ \hline $\ensuremath{p_\mathrm{min}}$ & 0.90 & 1.64 & 0.88 & 0.68 & 2.03 & -0.76 \\ $\ensuremath{p_\mathrm{max}}$ & 0.90 & 1.64 & 0.31 & 2.25 & - & - \\ \hline $\ensuremath{p_\mathrm{min}}$ & 0.95 & 1.95 & 0.56 & 0.48 & 1.79 & -1.03 \\ $\ensuremath{p_\mathrm{max}}$ & 0.95 & 1.95 & 0.22 & 2.54 & - & -\\ \hline \end{tabular} \caption{\label{tab:clparams} Parameters of the analytic approximation to the $\alpha$ level confidence intervals \refeq{approxcls} for the MAS normalized estimator.} \end{table} \section{The case of a general covariance matrix} \label{sec:general} We now address the issue of generalizing the MAS estimator to an arbitrary Stokes parameter covariance matrix. We consider, however, that the intensity measurement $I$ is essentially decoupled from $(Q,U)$, as is generally the case in real-life experiments, and therefore only consider the bi-variate $[q,u]$ covariance matrix. \subsection{Noise-bias and variance} \label{sec:nocor} We ask the following question: for a general $[q,u]$ covariance matrix, what are the asymptotic equivalents of the noise-bias and variance of the $p=\sqrt{q^2+u^2}$ distribution? In the uncorrelated case ($\rho=0$), we formally demonstrate in Appendix \ref{app:B} that the first two $p$ moments in the asymptotic regime give: \begin{align} \label{eq:b2theo} \E{p}&=p_0+\dfrac{b^2}{2p_0}; \quad b^2=\ensuremath{\sigma_u}^2\cos^2\phi_0+\ensuremath{\sigma_q}^2\sin^2\phi_0,\\ \sigma^2_p&=\ensuremath{\sigma_q}^2\cos^2\phi_0+\ensuremath{\sigma_u}^2\sin^2\phi_0.
\end{align} Unlike in the canonical case (see \refeq{mean} and \refeq{variance}) the non-linear \bsq{noise-bias} $b^2$ is now different from the variance. As in the canonical case, these formulas can be understood using the simple geometric construction of Fig.~\ref{fig:geo_ellipse}. \begin{figure} \centering \includegraphics[width=.45\textwidth]{geo_ellipse} \caption{\label{fig:geo_ellipse} Same construction as on Fig.~\ref{fig:geo_simple} in the (uncorrelated) elliptical case, $\ensuremath{\sigma_q} \ne \ensuremath{\sigma_u}, \rho=0$. The ellipse denotes the 1-$\sigma$ iso-probability $(q,u)$ contour and one considers the distribution of the distance to the origin of points located on it. The variance is computed along the centre direction and gets some contribution from the $\sigma_q\cos\phi_0$ and $\sigma_u\sin\phi_0$ projections, while the noise-bias has contributions from the orthogonal combinations $\sigma_q\sin\phi_0$ and $\sigma_u\cos\phi_0$. In the correlated case, one just needs to rotate the ellipse by the $\theta$ (\refeq{theta}) angle and re-compute the semi-axis lengths (\refeq{sigrot}). } \end{figure} We do not know what the true $\phi_0$ angle is. We can either marginalize over this unknown angle or estimate it for each sample. If we marginalize over the unknown angle $\phi_0$, we obtain the \textit{variance arithmetic mean} for both the noise-bias and the variance: \begin{equation} \label{eq:ari} \sigma_a^2=\half(\ensuremath{\sigma_q}^2+\ensuremath{\sigma_u}^2). 
\end{equation} In the second approach, we use the fact that $\phi_i=\atan\tfrac{u_i}{q_i}$ is an asymptotically unbiased estimator of the angle, even in the elliptical case, and replace the true angle by it to obtain the \textit{variable bias}: \begin{align} \label{eq:varbias} b_i^2& =\ensuremath{\sigma_u}^2\cos^2\phi_i+\ensuremath{\sigma_q}^2\sin^2\phi_i, \nonumber \\ &=\dfrac{q_i^2\sigma_u^2+u_i^2\sigma_q^2}{q_i^2+u_i^2}, \end{align} and similarly the \textit{variable variance}: \begin{align} \label{eq:varvar} \sigma_i^2&=\ensuremath{\sigma_q}^2\cos^2\phi_i+\ensuremath{\sigma_u}^2\sin^2\phi_i \nonumber \\ &=\dfrac{u_i^2\sigma_u^2+q_i^2\sigma_q^2}{q_i^2+u_i^2}. \end{align} In the presence of a non-null correlation coefficient $\rho$ (and $\sigma_q\ne\sigma_u$), the principal axes of the iso-probability ellipse are rotated by the angle ({\rm e.g.}\xspace \cite*{Aalo2007}): \begin{equation} \label{eq:theta} \theta=\half\atan\dfrac{2\rho\ensuremath{\sigma_q}\ensuremath{\sigma_u}}{\ensuremath{\sigma_q}^2-\ensuremath{\sigma_u}^2}, \end{equation} and the squared semi-diameters along the principal axes are the eigenvalues of the covariance matrix: \begin{equation} \label{eq:sigrot} \begin{split} \ensuremath{\sigma_q}^{\prime2}&=\ensuremath{\sigma_q}^2\cos^2\theta+\ensuremath{\sigma_u}^2\sin^2\theta+\rho\ensuremath{\sigma_q}\ensuremath{\sigma_u}\sin2\theta, \\ \ensuremath{\sigma_u}^{\prime2}&=\ensuremath{\sigma_q}^2\sin^2\theta+\ensuremath{\sigma_u}^2\cos^2\theta-\rho\ensuremath{\sigma_q}\ensuremath{\sigma_u}\sin2\theta. \end{split} \end{equation} Relying on Fig.~\ref{fig:geo_ellipse}, the variance is computed along the $\phi_0$ direction and the bias along the orthogonal one, in this case after a rotation of the principal axes by $\theta$. This result can also be established more formally using computations along the lines of Appendix \ref{app:B}, by diagonalizing the covariance matrix in the exponential argument of the Gaussian.
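For reference, a minimal implementation of \refeq{theta}, \refeq{sigrot}, \refeq{varbiasrho} and \refeq{varvarrho} is sketched below (our own code; we use the two-argument arctangent so that the rotation angle remains well defined when $\ensuremath{\sigma_q}=\ensuremath{\sigma_u}$):

```python
import math

def principal_axes(sq, su, rho):
    """Rotation angle (eq. theta) and rotated variances (eq. sigrot)."""
    theta = 0.5 * math.atan2(2.0 * rho * sq * su, sq ** 2 - su ** 2)
    c, s = math.cos(theta), math.sin(theta)
    sq2 = sq ** 2 * c ** 2 + su ** 2 * s ** 2 + rho * sq * su * math.sin(2 * theta)
    su2 = sq ** 2 * s ** 2 + su ** 2 * c ** 2 - rho * sq * su * math.sin(2 * theta)
    return theta, sq2, su2

def variable_bias_var(q, u, sq, su, rho):
    """Per-sample noise-bias b_i^2 (eq. varbiasrho) and variance sigma_i^2 (eq. varvarrho)."""
    theta, sq2, su2 = principal_axes(sq, su, rho)
    d = math.atan2(u, q) - theta
    c2, s2 = math.cos(d) ** 2, math.sin(d) ** 2
    return su2 * c2 + sq2 * s2, sq2 * c2 + su2 * s2
```

Note that $b_i^2+\sigma_i^2=\ensuremath{\sigma_q}^{\prime2}+\ensuremath{\sigma_u}^{\prime2}=2\sigma_a^2$ for any sample, consistent with \refeq{aricor}.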
The results depend however very loosely on the correlation value, since $\ensuremath{\sigma_q}^{\prime}~(\ensuremath{\sigma_u}^{\prime})$ is essentially a rotation of $\ensuremath{\sigma_q}~(\ensuremath{\sigma_u})$. For values of $\rho \lesssim 0.5$ one can safely neglect it and use the previous results. The marginalized result with a correlation gives back the variance arithmetic mean, since: \begin{equation} \label{eq:aricor} \half (\ensuremath{\sigma_q}^{\prime2}+\ensuremath{\sigma_u}^{\prime2})=\half(\ensuremath{\sigma_q}^2+\ensuremath{\sigma_u}^2)=\sigma_a^2, \end{equation} and the variable estimates from $\phi_i=\atan\dfrac{u_i}{q_i}$ are: \begin{align} \label{eq:varbiasrho} b_i^2&=\ensuremath{\sigma_u}^{\prime2}\cos^2(\phi_i-\theta)+ \ensuremath{\sigma_q}^{\prime2}\sin^2(\phi_i-\theta),\\ \label{eq:varvarrho} \sigma_i^2&=\ensuremath{\sigma_q}^{\prime2}\cos^2(\phi_i-\theta)+ \ensuremath{\sigma_u}^{\prime2}\sin^2(\phi_i-\theta). \end{align} We test the validity of these estimates in a highly elliptic and correlated case: $\ensuremath{\sigma_q}=1,\ensuremath{\sigma_u}=2,\rho=0.7$. Results are presented on Fig.~\ref{fig:mom1_cor} for the bias and Fig.~\ref{fig:mom2_cor} for the variance, for several true polarization angles. \begin{figure} \centering \includegraphics[width=.5\textwidth]{mom1_cor} \caption{\label{fig:mom1_cor} Validation of the Rice equivalent noise-bias in the elliptic case $\sigma_q=1,\sigma_u=2, \rho=0.7$, for several polarization angles: upper left for a uniform angle distribution, upper right for $\phi_0=0\mbox{$^{\circ}$}$, lower left $\phi_0=40\mbox{$^{\circ}$}$, lower right $\phi_0=80\mbox{$^{\circ}$}$.
In each case, the expectation value of the complete distribution $E[p]$ is obtained from Monte-Carlo simulation and is compared to the Rice expectation value (\refeq{mean}) using for the $\sigma$ term, either the variance arithmetic mean (red line, \refeq{ari}) or the mean of the variable noise estimate (blue line, \refeq{varbiasrho}). In this latter case the shaded blue region shows the $1\sigma$ variation of the estimates. } \end{figure} \begin{figure} \centering \includegraphics[width=.5\textwidth]{mom2_cor} \caption{ \label{fig:mom2_cor} Same as Fig.~\ref{fig:mom1_cor} but for the Rice equivalent variance in the same elliptic case $\sigma_q=1,\sigma_u=2, \rho=0.7$. Upper left is for a uniform angle distribution, upper right for $\phi_0=0\mbox{$^{\circ}$}$, lower left $\phi_0=40\mbox{$^{\circ}$}$, lower right $\phi_0=80\mbox{$^{\circ}$}$. We compare the empirical variance $V[p]$, obtained from Monte-Carlo simulations, to the variance of the Rice distribution (\refeq{variance}) using for the $\sigma$ term, either the variance arithmetic mean (red line, \refeq{ari}), or the mean of the variable noise estimate (blue line, \refeq{varvarrho}). In this latter case the shaded blue region shows the $1\sigma$ variation of the estimates.} \end{figure} The variable noise-bias is found to match very precisely and rapidly the empirical expectation value, while the variance arithmetic mean may slightly over- or under-estimate the asymptotic values, depending on the underlying true angle. For the Rice-equivalent variance, the variable variance is reasonable in the whole $p_0$ range, while the arithmetic mean may lead to a severe asymptotic discrepancy for some angles. 
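The comparison shown in these figures can be reproduced with a short Monte-Carlo. The sketch below (parameter values, sample size and seed are our own choices) checks, for $\ensuremath{\sigma_q}=1,\ensuremath{\sigma_u}=2,\rho=0.7$, $\phi_0=40\mbox{$^{\circ}$}$ in the asymptotic regime, that the empirical mean of $p$ matches $p_0+\langle b_i^2\rangle/2p_0$ with the variable noise-bias of \refeq{varbiasrho}:

```python
import math
import random

sq, su, rho = 1.0, 2.0, 0.7
p0, phi0, n = 10.0, math.radians(40.0), 200_000

# principal-axis rotation, eqs. (theta) and (sigrot)
theta = 0.5 * math.atan2(2.0 * rho * sq * su, sq ** 2 - su ** 2)
s2t = math.sin(2.0 * theta)
sq2 = sq ** 2 * math.cos(theta) ** 2 + su ** 2 * math.sin(theta) ** 2 + rho * sq * su * s2t
su2 = sq ** 2 * math.sin(theta) ** 2 + su ** 2 * math.cos(theta) ** 2 - rho * sq * su * s2t

rng = random.Random(7)
q0, u0 = p0 * math.cos(phi0), p0 * math.sin(phi0)
sum_p = sum_b2 = 0.0
for _ in range(n):
    x, y = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    q = q0 + sq * x
    u = u0 + su * (rho * x + math.sqrt(1.0 - rho * rho) * y)   # correlated noise
    sum_p += math.hypot(q, u)
    d = math.atan2(u, q) - theta
    sum_b2 += su2 * math.cos(d) ** 2 + sq2 * math.sin(d) ** 2  # eq. (varbiasrho)
mean_p, mean_b2 = sum_p / n, sum_b2 / n
```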
\subsection{Generalized MAS estimator} Since our aim is to build an estimator which becomes unbiased as fast as possible with the SNR, we generalize the MAS estimator to: \begin{equation} \label{eq:pmas2} \ensuremath{\hat p_\mathrm{MAS}}\xspace=p_i- b_i^2\frac{1-e^{-p_i^2 /b_i^2}}{2p_i}, \end{equation} where the noise-bias $b_i$ is computed on a sample-by-sample basis, either from \refeq{varbias} for the uncorrelated case, or \refeq{varbiasrho} for the (strongly) correlated one. We re-consider its bias and risk on Fig.~\ref{fig:bias_risk_scan} in the highly elliptic regime, for several $\phi_0$ angles. \begin{figure} \centering \includegraphics[width=.5\textwidth]{bias_risk_scan} \caption{\label{fig:bias_risk_scan} Bias and risk, normalized by the variance arithmetic mean $\sigma_a$, of the generalized MAS estimator in the $\ensuremath{\sigma_q}=1,\ensuremath{\sigma_u}=2,\rho=0.7$ case, as obtained from Monte-Carlo simulations. The true $\phi_0$ angle is varied according to the following color code: 0\mbox{$^{\circ}$}(black), 30\mbox{$^{\circ}$}(blue), 60\mbox{$^{\circ}$}(red), 90\mbox{$^{\circ}$}(green). The dashed lines show the variance estimates $\sigma_i^2$ from \refeq{varvarrho}.} \end{figure} As could have been anticipated from the results of the previous section, even in this rather extreme case, the bias is insensitive to the true angle and is very similar to the canonical case, {\rm i.e.}\xspace essentially unbiased above an SNR of 2. The risk now depends on the true angle, but since the estimator has no bias in this region, its risk is equivalent to its variance and our \refeq{varvarrho} estimate provides a reasonable asymptotic description. \section{Conclusion} \label{sec:conclusion} We have developed and characterized an estimator of the polarization amplitude that enjoys several desirable properties.
Its distribution lies in the positive region, is continuous, and transforms smoothly with the SNR from a Rayleigh-like to a Gaussian one, the latter being essentially reached above an SNR of 2. We revisited the construction of confidence intervals and efficiently solved the empty-set (or unphysical) region problem encountered at low SNR using the Feldman-Cousins prescription. We provided analytic approximations to the 0.68, 0.90 and 0.95 confidence level regions. We have generalized the estimator to the case of a global covariance matrix, and shown that its bias is universal, {\rm i.e.}\xspace independent of the true $\phi_0$ angle. We provided an analytic estimate of the variance of the estimator that can be used to assess the risk in the large SNR region. Given its very simple analytic form, the estimator can be applied efficiently on large data-sets, in particular for providing Gaussian-like point-estimates in regions of reasonably large SNR values, and conversely to build masks identifying regions not bearing enough statistical significance. This can be performed using the following procedure: \begin{enumerate} \item compute \ensuremath{\hat p_\mathrm{MAS}}\xspace from \refeq{pmas2} and the variance arithmetic mean $\sigma_a$ from \refeq{ari} for all data pixels. \item according to the Sect. \ref{sec:cl} results, an SNR above 2 at the 90\% CL is obtained by keeping samples satisfying $\tfrac{\ensuremath{\hat p_\mathrm{MAS}}\xspace}{\sigma_a} > 3.8$. This is used to build a mask, which can possibly be spatially smoothed. \item for the rest of the data, point-estimates can be given safely, since we have shown that in this regime the estimator is unbiased and essentially Gaussian. One can compute the estimator variance using \refeq{varvarrho} and consider it as its associated \bsq{error}. \end{enumerate} For values within the mask, reporting a point-estimate is unsafe and one should instead report a confidence interval, such as the ones given in Sect.
\ref{sec:cl}, or a full likelihood function. This work was oriented towards estimating the polarization amplitude but is obviously much more general. It is perhaps surprising that such a fundamental question as characterizing the amplitude of a vector or the modulus of a complex number from its normally distributed Cartesian components did not receive more attention. Part of the reason may lie in defining the question precisely: what is a \bsq{good} estimator? We have tried to answer it in a user-oriented way. \section*{Acknowledgments} We thank Jason L. Quinn for an efficient and in-depth refereeing of the manuscript. \bibliographystyle{mn2e}
\section{Introduction} In recent years, the concept of topological band theory has been extended to nonelectronic systems such as magnons \cite{alex1, alex0, alex2,alex5,alex4, sol1,sol,sol2, alex5a, alex6, sol4, mok,su,fyl,lifa} and phonons \cite{pho,pho1,pho2,pho3,pho4,pho5}. In the former, the Dzyaloshinskii-Moriya interaction (DMI) \cite{dm} is the primary source of topological magnon bands and magnon edge states \cite{lifa,alex4}, as well as the thermal magnon Hall effect \cite{alex0,alex1}. These systems are dubbed topological magnon insulators \cite{lifa} in analogy to topological insulators in fermionic systems with spin-orbit coupling (SOC) \cite{guo,kane,kane1}. However, as magnons are charge-neutral quasiparticles, the perfectly conducting edge states are believed to be useful for dissipationless transport applicable to magnon spintronics. Our conception of the DMI being the primary source of topological effects in magnonic systems has been firmly established by the recent experimental realization of a topological magnon insulator in the collinear Kagom\'e ferromagnet Cu(1-3, bdc) \cite{alex5a}. In these systems the DMI comes naturally because the Kagom\'e lattice lacks an inversion center. In reality, however, there are more frustrated Kagom\'e magnets than collinear ferromagnets. The physics associated with the former has no analogy with the latter. The former are considered candidates for quantum spin liquid physics due to an extensive classical degeneracy that prevents magnetic ordering down to the lowest accessible temperatures. However, recent experimental syntheses have shown that the effects of SOC or DMI are not negligible in most Kagom\'e antiferromagnets. The DMI appears as a perturbation to the Heisenberg spin exchange. One of the striking features of this perturbation in frustrated Kagom\'e systems is that it induces magnetic ordering with a $\bold q=0$ propagation wavevector. Thus, it suppresses the spin liquid phase of Kagom\'e antiferromagnets up to a critical value \cite{sup1, men3}.
Various experimentally accessible frustrated Kagom\'e antiferromagnets show evidence of coplanar/noncollinear $\bold q=0$ magnetic ordering at specific temperatures. The famous one is the iron jarosite KFe$_3$(OH)$_{6}$(SO$_{4}$)$_2$ \cite{sup1a,sup2}. The unanswered question is how topological effects arise in these systems. From the experimental perspective, it has been previously shown that iron jarosite possesses a finite spin scalar chirality induced by an out-of-plane magnetic field, but no topological properties were measured \cite{sup1a}. In a recent Letter \cite{me}, we have provided evidence of magnon Hall effect and thermal Hall conductivity $\kappa_{xy}$ for this Kagom\'e material, which originates from non-coplanar spin texture and survives in the absence of DMI and magnetic ordering. Topological magnon insulators with magnon edge modes are another area of recent development \cite{alex5a, lifa, alex4}. In this report, we complete our analysis of topological magnon effects in geometrically frustrated Kagom\'e antiferromagnets by providing evidence of topological magnon edge modes. The main purpose of this report is to relate finite thermal Hall conductivity to topologically protected magnon edge states. We consider three different models, viz: $(i)$ bilayer Kagom\'e ferromagnets coupled antiferromagnetically, or layered antiferromagnets; $(ii)$ the model Hamiltonian for iron jarosite KFe$_3$(OH)$_{6}$(SO$_{4}$)$_2$ with DMI and second-nearest-neighbour interaction \cite{sup2}; $(iii)$ the XXZ model for the Kagom\'e antiferromagnet without the DMI. The arrangement of this paper is as follows. In Sec. II we introduce the three models, analyze the magnon tight-binding models, and present the distinctive magnon bands. Sec. III introduces the concept of topologically protected magnon edge modes and fictitious Chern numbers. We relate these concepts to the experimentally accessible thermal magnon Hall effect. In Sec.
IV we conclude and discuss potential methods to realize topological magnon bands in geometrically frustrated Kagom\'e antiferromagnets with/without DMI. \section{ Model Hamiltonian } \subsection{Model I} In various frustrated Kagom\'e magnets, a strong out-of-plane magnetic field is sufficient to circumvent frustrated interactions and leads to magnetic ordering. A good example is the bilayer frustrated Kagom\'e magnet Ca$_{10}$Cr$_7$O$_{28}$ \cite{Balz}, which shows evidence of ferromagnetic alignment at a magnetic field of magnitude $h\sim 11~\text{Tesla}$. The frustrated Kagom\'e volborthite Cu$_3$V$_2$O$_7$(OH)$_2$ $\cdot$2H$_2$O also shows evidence of magnetic ordering at several field values \cite{Yo,Yo1}. In the ordered regimes, the magnetic excitations are indeed magnons. Since many Kagom\'e magnets naturally come in bilayer form, we first consider bilayer Kagom\'e magnets with non-negligible interlayer coupling. We assume that the top layer is placed right above the bottom layer, forming an AAA-stacked pattern. The Hamiltonian is given by \begin{align} \mathcal H&= \sum_{\langle i, j\rangle\tau} \left( \mathcal{J}{\bf S}_{i}^\tau\cdot{\bf S}_{j}^\tau + \boldsymbol{\mathcal D}_{ij}\cdot{\bf S}_{i}^\tau\times{\bf S}_{j}^\tau\right) -h\hat{\bold z}\cdot\sum_{i\tau} {\bf S}_{i}^\tau\label{h}\\&\nonumber + \mathcal J_t\sum_{i}{\bf S}_{i}^t\cdot{\bf S}_{i}^b, \end{align} where ${\bf S}_{i}$ is the spin moment at site $i$, $\tau$ labels the top ($t$) and bottom ($b$) layers respectively, and $h$ is an external magnetic field in units of $g\mu_B$. We consider the case of ferromagnetic intra-layer exchange $\mathcal J<0$ and antiferromagnetic interlayer coupling $\mathcal J_t>0$ with an out-of-plane magnetic field $h$. At zero magnetic field, the spins on the top and bottom layers lie in opposite directions on the $x$-$y$ Kagom\'e planes, and the interlayer coupling is antiferromagnetic.
A nonzero magnetic field is expected to introduce canting along the direction of the field. Hence, the ground state is no longer collinear. In the classical limit, the spin operators can be represented as classical vectors, written as $\bold{S}_\tau= S\bold{n}_\tau$, where $\bold{n}_\tau=\left(\sin\chi\cos\theta_\tau, \sin\chi\sin\theta_\tau,\cos\chi \right)$ is a unit vector. The classical energy reads \begin{align} e_{cl}=-|\mathcal J|+\frac{\mathcal J_t}{2}\cos2\chi- h\cos\chi, \end{align} where $e_{cl}=E_{cl}/6NS^2$, $N$ is the number of sites per unit cell, and the magnetic field is rescaled in units of $S$. Minimizing the classical energy yields the canting angle $\cos\chi= h/h_s$, with $h_s= 2 \mathcal J_t$ the saturation field. We see that both the out-of-plane DM vector ($\boldsymbol{\mathcal D}_{ij}=\mathcal D\hat{\bold z}$) and the in-plane DM vector ($\boldsymbol{\mathcal D}_{ij}=\mathcal D\hat{\bold x}$) do not contribute to the classical energy, due to the ferromagnetic ordering on each layer. For the magnon excitations above the classical ground state, the basic procedure involves rotating the spins from the laboratory frame to the local frame by the spin orientation angles $\theta_\tau$ about the $z$-axis. Due to the field-induced canting, we perform another rotation about the $y$-axis with the canting angle $\chi$. The total rotation matrix is given by \begin{align} \mathcal{R}_z(\theta_\tau)\cdot\mathcal{R}_y(\chi) =\begin{pmatrix} \cos\theta_\tau\cos\chi & -\sin\theta_\tau & \cos\theta_\tau\sin\chi\\ \sin\theta_\tau\cos\chi & \cos\theta_\tau &\sin\theta_\tau\sin\chi\\ -\sin\chi & 0 &\cos\chi \end{pmatrix}, \label{rot} \end{align} where $\theta_\tau$ labels the spin orientation angles on each layer, with $\theta_t=\pi$ for the top layer, $\theta_b=0$ for the bottom layer, and $\chi$ the field canting angle.
Hence, \begin{eqnarray} \bold{S}_i=\mathcal{R}_z(\theta_\tau)\cdot\mathcal{R}_y(\chi)\cdot\bold S_i^\prime,\end{eqnarray} which can be written explicitly as \begin{align} &S_{i\tau}^x=\pm S_{i\tau}^{\prime x}\cos\chi \pm S_{i\tau}^{\prime z}\sin\chi,\label{trans}\nonumber\\& S_{i\tau}^y=\pm S_{i\tau}^{\prime y},\\&\nonumber S_{i\tau}^z=- S_{i\tau}^{\prime x}\sin\chi + S_{i\tau}^{\prime z}\cos\chi, \end{align} where $-(+)$ applies to the layers $t(b)$ respectively. This rotation does not affect the ferromagnetic $\mathcal J$-term on each layer. A crucial distinguishing feature of this system is that both the out-of-plane DMI ($\boldsymbol{\mathcal D}_{ij}=\mathcal D\hat{\bold z}$) and the in-plane DMI ($\boldsymbol{\mathcal D}_{ij}=\mathcal D\hat{\bold x}$) contribute. This is a result of the field-induced canting of the system. It should be noted that this is not the case in previously studied Kagom\'e ferromagnets \cite{alex1, alex0, alex2,alex5,alex4, sol1,sol,sol2, alex5a, alex6, lifa}. To linear order in the $1/S$ expansion we have \begin{align} &\mathcal H_{DM,z}= \mathcal D\cos\chi\sum_{\langle i, j\rangle\tau}\hat{\bold z}\cdot {\bf S}^{\prime\tau}_{i}\times {\bf S}_{j}^{\prime\tau} +\mathcal{O}(1/S),\\& \mathcal H_{DM,x}= \sigma \mathcal D\sin\chi\sum_{\langle i, j\rangle\tau}\hat{\bold z}\cdot {\bf S}_{i}^{\prime\tau}\times {\bf S}_{j}^{\prime\tau} +\mathcal{O}(1/S), \end{align} where $\sigma=\mp$ for the top and bottom layers, respectively. We see that the latter case has opposite signs on the two layers. Next, we study the excitations above the classical ground state by using the Holstein-Primakoff spin bosonic operators for the rotated prime coordinates: $S_{i,\tau}^{\prime x}=\sqrt{S/2}(b_{i,\tau}^\dagger+b_{i,\tau})$, $S_{i,\tau}^{\prime y}=i\sqrt{S/2}(b_{i,\tau}^\dagger-b_{i,\tau})$, and $S_{i,\tau}^{\prime z}=S-b_{i,\tau}^\dagger b_{i,\tau}$.
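Both the canting condition $\cos\chi=h/h_s$ and the composite rotation \refeq{rot} can be verified numerically; the following brute-force Python sketch (with illustrative parameter values of our own choosing) does so:

```python
import math

def Rz(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def Ry(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# composite rotation of eq. (rot) for the top layer (theta_t = pi), some canting
theta, chi = math.pi, 0.6
R = matmul(Rz(theta), Ry(chi))
R_explicit = [
    [math.cos(theta) * math.cos(chi), -math.sin(theta), math.cos(theta) * math.sin(chi)],
    [math.sin(theta) * math.cos(chi), math.cos(theta), math.sin(theta) * math.sin(chi)],
    [-math.sin(chi), 0.0, math.cos(chi)],
]
rot_err = max(abs(R[i][j] - R_explicit[i][j]) for i in range(3) for j in range(3))

# classical energy e_cl(chi); its minimizer should satisfy cos(chi) = h / (2 Jt)
J, Jt, h = 1.0, 0.11, 0.15          # h below the saturation field h_s = 2 Jt
def e_cl(c):
    return -abs(J) + 0.5 * Jt * math.cos(2.0 * c) - h * math.cos(c)
chi_min = min((i * math.pi / 20000 for i in range(20001)), key=e_cl)
chi_err = abs(chi_min - math.acos(h / (2.0 * Jt)))
```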
The Hamiltonian maps to a magnon tight binding model\begin{align} \mathcal H_{SW}&=v_0\sum_{i\tau} n_{i\tau} -v_D\sum_{\langle ij\rangle\tau} \left( e^{-i \phi_{ij}}b_{i\tau}^\dagger b_{j\tau} +h.c.\right) \label{hpp3}\\&\nonumber-v_t\sum_{i\in \tau, j\in \tau^\prime}\bigg[(n_{i\tau}+ n_{j\tau^\prime})\cos2\chi \\&\nonumber+( b_{i\tau}^\dagger b_{j\tau^\prime}+ h.c.)\cos^2\chi-( b_{i\tau}^\dagger b_{j\tau^\prime}^\dagger+ h.c.)\sin^2\chi\bigg], \end{align} where $n_{i\tau}=b_{i\tau}^\dagger b_{i\tau}$ is the occupation number, $v_0= 4v_{J} +h\cos\chi,~v_t=\mathcal J_t S,~v_J=|\mathcal J|S$, and $v_D= S\sqrt{\mathcal J^2 +\mathcal D_{x,z}^2}$ with $\mathcal D_{z}=\mathcal D\cos\chi$, $\mathcal D_{x}=\sigma\mathcal D\sin\chi$. The fictitious magnetic flux on each triangle of the Kagom\'e lattice is given by $\phi=\arctan (\mathcal D_{x,z}/|\mathcal J|)$. Using the vectors $\Psi_\bold{k}^\dagger=(\psi_\bold{k}^\dagger, ~\psi_{-\bold{k}})$, with $\psi^\dagger_{\bold k}= (b_{\bold{k}A}^{\dagger},\thinspace b_{\bold{k} B}^{\dagger},b_{\bold{k} C}^{\dagger},\thinspace b_{\bold{k} A^\prime}^{\dagger},\thinspace b_{\bold{k} B^\prime}^{\dagger},\thinspace b_{\bold{k} C^\prime}^{\dagger})$, the momentum space Hamiltonian is given by $\mathcal H_{SW}=\frac{1}{2}\sum_{\bold k}\Psi^\dagger_{\bold k}\cdot \boldsymbol{\mathcal{H}}_{AFM}(\bold k)\cdot\Psi_{\bold k},$ where \begin{align} \boldsymbol{\mathcal{H}}_{AFM}(\bold k)=\left( \begin{array}{cc} \bold A(\bold{k},\phi)& \boldsymbol{{B}}\\ \boldsymbol{{B}}& \bold A^*(-\bold{k},\phi) \end{array} \right), \end{align} \begin{align} \boldsymbol A(\bold{k},\phi)&= \begin{pmatrix} \bold a(\bold{k},\phi)& \boldsymbol{{b}}\\ \boldsymbol{{b}} &\bold a(\bold{k}, \phi) \end{pmatrix},\thinspace \boldsymbol{{B}}&= \begin{pmatrix} 0&\bold c\\ \bold c&0 \end{pmatrix}, \label{A4} \end{align} and $ \bold a(\bold k, \phi)= \tilde v_0 \bold{I}_{3\times 3} -2v_D{\bf \Lambda}(\bold{k},\phi) $, \begin{align} {\bf \Lambda}(\bold{k},\phi)&= 
\begin{pmatrix} 0&\cos k_1 e^{-i\phi}&\cos k_3 e^{i\phi}\\ \cos k_1 e^{i\phi}&0&\cos k_2 e^{-i\phi}\\ \cos k_3 e^{-i\phi}&\cos k_2 e^{i\phi} &0\\ \end{pmatrix}, \end{align} \begin{align} \boldsymbol{b}=-v_t\cos^2\chi\bold{I}_{3\times 3}, \thinspace \bold c =v_t\sin^2\chi\bold{I}_{3\times 3}, \end{align} where $\bold{I}_{3\times 3}$ is a $3\times 3$ identity matrix, $\tilde v_0= 4v_J +h\cos\chi - v_t\cos 2\chi= 4v_J +v_t$, $k_i=\bold k\cdot \bold e_i$, $\bold e_1=-(1/2,~\sqrt 3/2),~\bold e_2=(1,0),~\bold e_3=(-1/2,~\sqrt 3/2)$. At the saturation field $h=h_s,~\chi=0$, we obtain ferromagnetically coupled layers applicable to Cu(1-3, bdc)\cite{alex6, alex5a} assuming strong interlayer coupling. \begin{figure} \centering \includegraphics[width=3in]{TMI_Kagome_band} \caption{Color online. Magnon bands of bilayer Kagom\'e antiferromagnets with $\boldsymbol{\mathcal D}=\mathcal D\bold{\hat z}$ at two values of magnetic field. $\mathcal D/\mathcal J=0.2,~\mathcal J_t/\mathcal J=0.11$. } \label{band} \end{figure} \begin{figure} \centering \includegraphics[width=3in]{TMI_Kagome_band1} \caption{Color online. Magnon bands of bilayer Kagom\'e antiferromagnets with $\boldsymbol{\mathcal D}=\mathcal D\bold{\hat x}$ at two values of magnetic field. $\mathcal D/\mathcal J=0.2,~\mathcal J_t/\mathcal J=0.11$. } \label{band1} \end{figure} Plotted in Figs.~\ref{band} and \ref{band1} are the magnon bands along the Brillouin zone paths in Fig.~\ref{neel}(c), with the parameter values of Ca$_{10}$Cr$_7$O$_{28}$, $\mathcal J=0.76~\text{meV},~ \mathcal J_t/\mathcal J=0.11$ \cite{Balz} and the DM value $\mathcal D/\mathcal J=0.2$. For $\boldsymbol{\mathcal D}_{ij}=\mathcal D\hat{\bold z}$, the DMI does not contribute at zero field because the spins are along the $x$-$y$ Kagom\'e plane. The resulting magnon bands are doubly degenerate between $S_z \to S_x=\pm S$. 
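As a cross-check of the block structure of $\boldsymbol{\mathcal{H}}_{AFM}(\bold k)$ above, one can assemble the matrix numerically and verify that it is Hermitian. The sketch below uses our own helper names and arbitrary parameter values; only the matrix definitions are taken from the text:

```python
import numpy as np

# lattice vectors e_i quoted in the text
E_VECS = [np.array([-0.5, -np.sqrt(3)/2]),
          np.array([ 1.0,  0.0]),
          np.array([-0.5,  np.sqrt(3)/2])]

def Lam(k, phi):
    """The 3x3 matrix Lambda(k, phi) defined above (Hermitian for real k)."""
    k1, k2, k3 = (float(np.dot(k, e)) for e in E_VECS)
    p = np.exp(1j * phi)
    return np.array([[0, np.cos(k1)/p, np.cos(k3)*p],
                     [np.cos(k1)*p, 0, np.cos(k2)/p],
                     [np.cos(k3)/p, np.cos(k2)*p, 0]])

def H_AFM(k, phi, v0t, vD, vt, chi):
    """Assemble the 12x12 Bogoliubov-de Gennes matrix H_AFM(k)."""
    I3 = np.eye(3)
    a = lambda kk: v0t*I3 - 2*vD*Lam(kk, phi)       # a(k, phi)
    b = -vt*np.cos(chi)**2 * I3                     # interlayer normal block
    c =  vt*np.sin(chi)**2 * I3                     # interlayer anomalous block
    A = lambda kk: np.block([[a(kk), b], [b, a(kk)]])
    B = np.block([[0*I3, c], [c, 0*I3]])
    return np.block([[A(k), B], [B, np.conj(A(-k))]])

H = H_AFM(np.array([0.3, -1.1]), phi=0.2, v0t=4.4, vD=1.0, vt=0.11, chi=0.5)
```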
As the magnetic field increases, each spin acquires a component along the $z$-axis; hence the degeneracy of the bands between $S_x=\pm S$ is lifted and the DMI results in a gap opening at ${\bf K}$. At the saturation point $h=h_s$ (not shown) each layer is fully polarized along the field ($z$) direction, and again the DMI leads to gapped non-degenerate magnon bands. For the in-plane DMI $\boldsymbol{\mathcal D}_{ij}=\mathcal D\hat{\bold x}$ the situation is different. The degeneracy persists at zero field, but since the spins lie in the $xy$ plane the DMI leads to gapped magnon bands. The degeneracy is always lifted at nonzero magnetic field, but in this case there is a staggered flux emanating from the two layers and the bands cross each other at $\pm {\bf K}$. At the saturation point $h=h_s$ the spins are fully polarized along the $z$-axis, the in-plane DMI does not contribute, and the non-degenerate magnon bands are gapless (not shown). \begin{figure} \centering \includegraphics[width=3in]{XXZ_Neel} \caption{Color online. $(a)$~ The zero field coplanar $120^\circ$ N\'eel order on the Kagom\'e lattice corresponding to the $\bold q=0$ ground state of Kagom\'e antiferromagnets. Solid lines connect NN sites and dashed lines connect NNN sites. $(b)$~ Field-induced non-coplanar out-of-plane spin canting with nonzero spin scalar chirality $\kappa$, where $\phi$ is the field-induced fictitious flux. } \label{neel} \end{figure} \subsection{Model II} In the previous section, we studied one of the possible field-induced magnetically ordered phases in geometrically frustrated bilayer Kagom\'e magnets. In this section, we study another magnetically ordered phase which has been realized experimentally in various frustrated Kagom\'e magnets. Without loss of generality, we focus on the ideal Kagom\'e material KFe$_3$(OH)$_{6}$(SO$_{4}$)$_2$ \cite{sup1a,sup2}.
The Hamiltonian is given by \begin{align} \mathcal H&= \sum_{ i, j} \mathcal J_{ij}{\bf S}_{i}\cdot{\bf S}_{j} + \sum_{\langle i, j\rangle} \boldsymbol{\mathcal D}_{ij}\cdot{\bf S}_{i}\times{\bf S}_{j}-h\hat{\bold z}\cdot\sum_i \bold{S}_{i} \label{apen1}, \end{align} where $\mathcal J_{ij}=\mathcal J>0 $ and $\mathcal J_2>0$ are the isotropic antiferromagnetic couplings for nearest-neighbour (NN) and next-nearest-neighbour (NNN) sites respectively, $\boldsymbol{\mathcal D}_{ij}=(0,0, \mp \mathcal D_z)$, where $-/+$ denotes the directions of the out-of-plane DMI in the up/down triangles of the Kagom\'e lattice, and $h$ is the out-of-plane magnetic field in units of $g\mu_B$. For the iron jarosite KFe$_3$(OH)$_{6}$(SO$_{4}$)$_2$ the interlayer exchange interaction can be neglected for two reasons. First, it is very small compared to $\mathcal J $ and $\mathcal J_2$. Second, a single crystal of iron jarosite can be synthesized \cite{sup2}. We have also neglected the in-plane DM vector because it neither stabilizes magnetic ordering nor induces topological effects. Therefore, it cannot change the results obtained here. \begin{figure} \centering \includegraphics[width=3in]{TMI_Kagome_band2} \caption{Color online. Magnon bands of iron jarosite KFe$_3$(OH)$_{6}$(SO$_{4}$)$_2$ with $\boldsymbol{\mathcal D}=\mathcal D\bold{\hat z}$ at two values of magnetic field. $\mathcal D/\mathcal J=0.06,~\mathcal J_2/\mathcal J=0.03$.} \label{band2} \end{figure} \begin{figure} \centering \includegraphics[width=3in]{TMI_Kagome_band3} \caption{Color online. Magnon bands of XXZ Kagom\'e antiferromagnets without DMI at two values of magnetic field. $\delta=0.4$.} \label{band3} \end{figure} The alternating out-of-plane DMI between the up and down triangles of the Kagom\'e lattice is fictitious in that only one ground state is selected for each sign with positive chirality $(\mathcal D_z >0)$ or negative chirality $(\mathcal D_z < 0)$.
At zero magnetic field, the out-of-plane DMI induces a coplanar 120$^\circ$ N\'eel order on the $x$-$y$ Kagom\'e plane with positive chirality \cite{sup1,sup1a, sup2} as shown in Fig.~\ref{neel}(a). For nonzero out-of-plane field, inelastic neutron scattering experiments on the ideal Kagom\'e material KFe$_3$(OH)$_{6}$(SO$_{4}$)$_2$ \cite{sup1,sup1a,sup2} have uncovered a non-coplanar spin texture with nonzero spin chirality $\kappa=\sum{\bf S}_i\cdot\left({\bf S}_j\times {\bf S}_k\right)$ \cite{sup1a} as shown in Fig.~\ref{neel}(b). However, the topological effects of this non-collinear system have not been studied either theoretically or experimentally. In a recent Letter \cite{me}, we showed for the first time that this material possesses a finite thermal Hall conductivity in this non-collinear regime. Thus, the iron jarosite KFe$_3$(OH)$_{6}$(SO$_{4}$)$_2$ is a possible candidate for investigating the thermal Hall effect of magnons, which is accessible by using inelastic neutron scattering. In the present study, we analyze the finite thermal Hall conductivity in terms of topological magnon edge states. We consider the $\bold q=0$ ground state with positive chirality $\boldsymbol{\mathcal D}_{ij}=(0,0, -\mathcal D_z)$ and $\mathcal D_z>0$. The basic procedure is similar to the bilayer system studied above. The rotation matrix is the same as in Eq.~\ref{rot}, but with the orientation angles $\theta_A=0,~\theta_B=2\pi/3,~\theta_C=-2\pi/3$. The classical energy is given by \begin{align} e_{cl}= \tilde {\mathcal J}\left( -1 + 3\cos^2\chi\right) -\sqrt{3}\mathcal D_z\sin^2\chi-h\cos\chi, \end{align} where $e_{cl}=E_{cl}/3NS^2$, $\tilde {\mathcal J}=\mathcal J +\mathcal J_2$, and the magnetic field is rescaled in units of $S$. Minimizing $e_{cl}$ yields the canting angle $\cos\chi = h/h_s$, with $h_s=(6\tilde {\mathcal J}+2\sqrt{3}\mathcal D_z)$. We see that the classical energy depends on the DMI, which contributes to the stability of the $\bold q=0$ ground state.
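The canting angle quoted above can be recovered by direct minimization of $e_{cl}$. A minimal numerical sketch (illustrative parameter values in units of $\mathcal J$, jarosite-like $\mathcal J_2/\mathcal J=0.03$ and $\mathcal D_z/\mathcal J=0.06$):

```python
import numpy as np

def e_cl(chi, Jt, Dz, h):
    # classical energy e_cl = Jt(-1 + 3 cos^2 chi) - sqrt(3) Dz sin^2 chi - h cos chi
    return Jt*(-1 + 3*np.cos(chi)**2) - np.sqrt(3)*Dz*np.sin(chi)**2 - h*np.cos(chi)

Jt, Dz = 1.03, 0.06            # J~ = J + J2 and D_z in units of J (illustrative)
hs = 6*Jt + 2*np.sqrt(3)*Dz    # saturation field quoted in the text
h  = 0.4*hs                    # field below saturation

# brute-force minimization over the canting angle chi in [0, pi/2]
chis = np.linspace(0.0, np.pi/2, 200001)
chi_min = chis[np.argmin(e_cl(chis, Jt, Dz, h))]
```

The minimizer satisfies $\cos\chi_{\min}=h/h_s$ to within the grid resolution, confirming the stated result.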
Besides, $\mathcal J_2>0$ can also stabilize the coplanar magnetic structure in the absence of the DMI. The topological excitations above the classical ground state are mediated by a field-induced scalar chirality $\kappa_{\chi}=\cos\chi\sum{\bf S}_i\cdot\left({\bf S}_j\times {\bf S}_k\right)$ defined as the solid angle subtended by three neighbouring spins. As mentioned previously, the scalar spin chirality originates from a non-coplanar spin texture and does not require the presence of DMI or magnetic ordering. It is also the basis of chiral spin liquid states, which suggests that topological effects may persist in the spin liquid regime of frustrated Kagom\'e magnets. This model does not have an analogy to collinear ferromagnets \cite{alex1, alex0, alex2,alex5,alex4, sol1,sol,sol2, alex5a, alex6, mok,su,fyl,lifa} or the bilayer collinear system of Model I. It also differs significantly from field-induced topological magnons on the bipartite frustrated honeycomb lattice because $\kappa_{\chi}=0$ \cite{sol4}. In fact, the bipartite honeycomb antiferromagnets fall into the class of Model I as they are doubly degenerate at zero magnetic field and require an explicit DMI. The magnon tight binding Hamiltonian for Model II is given by \begin{align} &\mathcal H_{SW}= S\sum_{\bold{k},\alpha,\beta; 1,2}2\left( \mathcal{M}_{\alpha\beta}^0\delta_{\alpha\beta} +\mathcal{M}_{\alpha\beta; 1,2}\right) b_{\bold{k} \alpha}^\dagger b_{\bold{k} \beta}\label{main}\\&\nonumber +\mathcal{M}_{\alpha\beta; 1,2}^{\prime} \left( b_{\bold{k} \alpha}^\dagger b_{-\bold{k} \beta}^\dagger +b_{\bold{k} \alpha} b_{-\bold{k} \beta}\right), \end{align} where $\alpha,\beta=A,B,C$ and the coefficients are given by $ \boldsymbol{\mathcal{M}_0}=\zeta\bold{I}_{3\times 3},$ with $\zeta=( \tilde {\mathcal J} +\sqrt{3}\mathcal D_z)$.
\begin{align} &\boldsymbol{\mathcal{M}}_{1,2}= \Delta_{1,2}\begin{pmatrix} 0& \gamma_{AB}^{1,2}e^{-i\phi_{1,2}}&\gamma_{CA}^{1,2}e^{i\phi_{1,2}} \\ \gamma_{AB}^{*1,2} e^{i\phi_{1,2}}& 0&\gamma_{BC}^{1,2}e^{-i\phi_{1,2}}\\ \gamma_{CA}^{*1,2} e^{-i\phi_{1,2}}& \gamma_{BC}^{*1,2} e^{i\phi_{1,2}} & 0 \\ \end{pmatrix};\\ &\boldsymbol{\mathcal{M}}_{1,2}^\prime=\Delta_{1,2}^\prime\begin{pmatrix} 0& \gamma_{AB}^{1,2}&\gamma_{CA}^{1,2} \\ \gamma_{AB}^{*1,2} & 0&\gamma_{BC}^{1,2}\\ \gamma_{CA}^{*1,2} & \gamma_{BC}^{*1,2} & 0 \\ \end{pmatrix}; \end{align} where $\gamma_{AB}^1=\cos k_1,~ ~\gamma_{BC}^1=\cos k_2, \gamma_{CA}^1=\cos k_3;~\gamma_{AB}^2=\cos p_1,~ ~\gamma_{BC}^2=\cos p_2, \gamma_{CA}^2=\cos p_3$ and $p_i=\bold{p}\cdot\bold{e}_i^\prime,~\bold e_1^\prime=\bold e_3-\bold e_2,~\bold e_2^\prime=\bold e_1-\bold e_3,~\bold e_3^\prime=\bold e_2-\bold e_1$. The fictitious magnetic fluxes are given by $\phi_{1,2}=\tan^{-1}\left(\Delta_{1,2}^M/\Delta_{1,2}^R\right)$, and $\Delta_{1,2}=\sqrt{(\Delta_{1,2}^R)^2+(\Delta_{1,2}^M)^2}$, where \begin{align} &\Delta^R_{1}= \mathcal J\left( -\frac{1}{2} +\frac{3}{4}\sin^2\chi\right)-\frac{\sqrt 3 \mathcal D_z}{2}\left( 1-\frac{\sin^2\chi}{2}\right),\\&\Delta_1^M=\frac{\cos\chi}{2}\left( -\sqrt{3}\mathcal J +\mathcal D_z\right),\\& \Delta_{1}^{\prime}=\frac{\sin^2\chi}{4}\left( 3\mathcal J+\sqrt 3\mathcal D_z\right), \\&\Delta^{R(M)}_{2}=\Delta^{R(M)}_{1}(\mathcal D_z\to 0, \mathcal J\to \mathcal J_2),\\& \Delta_2^\prime= \Delta_1^\prime (\mathcal D_z\to 0, \mathcal J\to \mathcal J_2). 
\end{align} Using the vector notation \begin{eqnarray} \Psi^\dagger_\bold{k}= (b_{\bold{k} A}^{\dagger},\thinspace b_{\bold{k} B}^{\dagger},\thinspace b_{\bold{k} C}^{\dagger}, \thinspace b_{-\bold{k} A},\thinspace b_{-\bold{k} B},\thinspace b_{-\bold{k} C} ),\end{eqnarray} the Hamiltonian can be written as \begin{eqnarray} \mathcal H_{SW}=\mathcal{E}_0+ S\sum_{\bold{k}} \Psi^\dagger_\bold{k} \boldsymbol{ \mathcal{H}}(\bold{k})\Psi_\bold{k}, \end{eqnarray} where \begin{align} \boldsymbol{\mathcal{H}}(\bold{k})=\mathbb{I}_{2\times 2}\otimes\left(\boldsymbol{\mathcal M_0}+\boldsymbol{\mathcal M}\right) +\sigma_x\otimes \boldsymbol{\mathcal{M}}^\prime, \end{align} and $\mathcal{E}_0$ is a constant. $\mathbb{I}_{2\times 2}$ is the $2\times 2$ identity matrix and $\sigma_x$ is a Pauli matrix. $\boldsymbol{\mathcal M}=\boldsymbol{\mathcal M}_1+\boldsymbol{\mathcal M}_2$, and similarly for $\boldsymbol{\mathcal M}^\prime$. The eigenvalues of this Hamiltonian cannot be obtained analytically, in contrast to the zero field case, $\chi=\pi/2$, with coplanar 120$^\circ$ N\'eel order on the $x$-$y$ Kagom\'e plane. It is important to note that both fluxes do not vanish at zero DMI. This means that the topological effects do not originate from the DMI, in stark contrast to ferromagnets \cite{alex1, alex0, alex2,alex5,alex4, sol1,sol,sol2, alex5a, alex6, su, lifa,mok}. As shown in Fig.~\ref{band2}, the magnon bands are not topological at zero magnetic field even in the presence of DMI. \subsection{Model III} The final model we shall consider is the XXZ Kagom\'e antiferromagnet without DMI subject to an out-of-plane magnetic field. The Hamiltonian is governed by \begin{align} \mathcal H&= \mathcal J\sum_{ \langle i, j\rangle}\left( {\bf S}_{i}\cdot{\bf S}_{j}-\delta S_{i}^zS_{j}^z\right) -h\hat{\bold z}\cdot\sum_i \bold{S}_{i}\label{eq1}, \end{align} where $\mathcal J>0$ and $0\leq \delta\leq 1$ is the easy-plane anisotropy, and $h$ is the magnetic field in units of $g\mu_B$.
At zero field, the easy-plane anisotropy favours the positive chirality $\bold q=0$ ground state \cite{sup1,sup1a, sup2,sup3}. The canting angle is determined from the classical energy \begin{align} e_{cl}= \mathcal J[ -1+ \left( 3-2\delta\right)\cos^2\chi]-h\cos\chi, \end{align} where $\cos\chi = h/h_s$ and $h_s=2\mathcal J(3-2\delta)$. In this system, topological magnon bands originate from $\kappa_{\chi}$ as in Model II. The fictitious magnetic flux encountered by a propagating magnon is given by $\tan\phi=\Delta_M/\Delta_R$, with $\Delta=\sqrt{\Delta_R^2+\Delta_M^2}$, and \begin{align} &\Delta_R= \bigg[ -\frac{1}{2} +\frac{1}{2}\left(\frac{3}{2}-\delta\right)\sin^2\chi\bigg],\\&\Delta_M=-\frac{\sqrt{3} }{2}\cos\chi,~ \Delta^\prime= \frac{\sin^2\chi}{2}\left(\frac{3}{2}-\delta\right). \end{align} In Fig.~\ref{band3} we show the magnon bands for $\delta=0.4$. At zero field, the magnon bands are very similar to those of Model II. We see that the flat mode acquires a small dispersion at nonzero field and the magnon bands are gapped at all points in the Brillouin zone. \subsection{Finite thermal Hall conductivity at zero DMI} As mentioned in the text, the DMI does not provide any topological effects for the noncollinear $\bold q=0$ spin configuration on the Kagom\'e lattice. This means that topological effects persist at zero DMI and nonzero out-of-plane magnetic field via an induced spin scalar chirality $\mathcal H_{\chi}\sim\cos\chi \sum {\bold S}_i\cdot\left( \bold S_{j}\times\bold S_{k}\right)$, where $\cos\chi=h/h_s$ with $h_s= 6(\mathcal J+\mathcal J_2)$. However, the DMI is usually present on the Kagom\'e lattice due to the lack of an inversion center. In this section, we show that the anomalous magnon Hall effect is present at zero DMI in contrast to collinear ferromagnets. Figures~\ref{band4} and \ref{N_thd} show the magnon bands and the corresponding $\kappa_{xy}$ respectively for zero DMI. \begin{figure}[!] \centering \includegraphics[width=3.75in]{SM_band} \caption{Color online.
Magnon bands of noncollinear $\bold q=0$ kagom\'e antiferromagnet with zero DMI at three field values. The parameters are $\mathcal D_z/\mathcal J=0.0,~\mathcal J_2/\mathcal J=0.3$.} \label{band4} \end{figure} \begin{figure}[!] \centering \includegraphics[width=3in]{SM_TH} \caption{Color online. Low-temperature dependence of $\kappa_{xy}$ for the bands in Fig.~\ref{band4} at three field values.} \label{N_thd} \end{figure} \section{Topological Magnon Edge Modes} The Hamiltonians for insulating antiferromagnets are diagonalized by the generalized Bogoliubov transformation $\Psi_\bold{k}= \mathcal{P}_\bold{k} Q_\bold{k}$, where $\mathcal{P}_\bold{k}$ is a $2N\times 2N$ paraunitary matrix and $Q^\dagger_\bold{k}= (\mathcal{Q}_\bold{k}^\dagger,\thinspace \mathcal{Q}_{-\bold{k}})$ with $ \mathcal{Q}_\bold{k}^\dagger=(\beta_{\bold{k} A}^{\dagger}\thinspace \beta_{\bold{k} B}^{\dagger}\thinspace \beta_{\bold{k} C}^{\dagger})$ being the quasiparticle operators. The matrix $\mathcal{P}_\bold{k}$ satisfies the relations, \begin{align} &\mathcal{P}_\bold{k}^\dagger \boldsymbol{\mathcal{H}}(\bold{k}) \mathcal{P}_\bold{k}=\mathcal{E}_\bold{k}\label{eqn1}\\ &\mathcal{P}_\bold{k}^\dagger \boldsymbol{\tau}_3 \mathcal{P}_\bold{k}= \boldsymbol{\tau}_3, \label{eqna} \end{align} where $\mathcal{E}_\bold{k}= \text{diag}(\epsilon_{\bold{k}\alpha},~\epsilon_{-\bold{k}\alpha})$, $ \boldsymbol{\tau}_3= \text{diag}( \mathbf{I}_{N\times N}, -\mathbf{I}_{N\times N} )$, and $\epsilon_{\bold{k}\alpha}$ are the energy eigenvalues. From Eq.~\ref{eqna} we get $\mathcal{P}_\bold{k}^\dagger= \boldsymbol{\tau}_3 \mathcal{P}_\bold{k}^{-1} \boldsymbol{\tau}_3$, and Eq.~\ref{eqn1} is equivalent to saying that we need to diagonalize the Hamiltonian $\boldsymbol{\mathcal{H}}^\prime(\bold{k})= \boldsymbol{\tau}_3\boldsymbol{\mathcal{H}}(\bold{k}),$ whose eigenvalues are given by $ \boldsymbol{\tau}_3\mathcal E_\bold{k}$ and the columns of $\mathcal P_\bold{k}$ are the corresponding eigenvectors. 
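The paraunitary diagonalization described above can be illustrated on a toy one-mode bosonic BdG Hamiltonian (a minimal sketch with our own function name, not the $6\times 6$ magnon Hamiltonian of the text): the eigenvalues of $\boldsymbol{\tau}_3\boldsymbol{\mathcal H}$ give the Bogoliubov energies, and the $\tau_3$-normalized eigenvectors satisfy $\mathcal P^\dagger\boldsymbol{\tau}_3\mathcal P=\boldsymbol{\tau}_3$:

```python
import numpy as np

def paraunitary_diagonalize(H):
    """Diagonalize a bosonic BdG Hamiltonian H (2N x 2N, Hermitian, positive
    definite) via tau3*H, normalizing the eigenvectors so that
    P^dagger tau3 P = tau3."""
    n2 = H.shape[0]; N = n2 // 2
    tau3 = np.diag([1.0]*N + [-1.0]*N)
    w, v = np.linalg.eig(tau3 @ H)
    order = np.argsort(-w.real)          # positive branch first
    w, v = w[order].real, v[:, order]
    P = np.zeros_like(v)
    for j in range(n2):
        norm = v[:, j].conj() @ tau3 @ v[:, j]
        P[:, j] = v[:, j] / np.sqrt(abs(norm.real))
    return w, P

# toy example: single mode with pairing, H = [[w0, g], [g, w0]], w0 > |g|
w0, g = 1.0, 0.4
H = np.array([[w0, g], [g, w0]], dtype=complex)
E, P = paraunitary_diagonalize(H)
```

For this example the Bogoliubov energy is $\sqrt{w_0^2-g^2}$, and $\mathcal P^\dagger\boldsymbol{\mathcal H}\mathcal P$ is diagonal with both entries positive, matching $\boldsymbol{\tau}_3\mathcal E_\bold{k}$ as the eigenvalue structure of $\boldsymbol{\tau}_3\boldsymbol{\mathcal H}$.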
The paraunitary operator defines a Berry curvature given by \begin{align} \Omega_{ij;\alpha}(\bold{k})=-2\text{Im}[ \boldsymbol{\tau}_3 (\partial_{k_i}\mathcal P_{\bold{k}\alpha}^\dagger) \boldsymbol{\tau}_3(\partial_{k_j}\mathcal P_{\bold{k}\alpha})]_{\alpha\alpha}, \label{bc1} \end{align} where $i,j\in\lbrace x,y\rbrace$ and $\mathcal P_{\bold{k}\alpha}$ denote the columns of $\mathcal P_{\bold{k}}$. This form of the Berry curvature simply extracts the diagonal components, which are the most important. If the explicit analytical form of $\mathcal P_{\bold{k}\alpha}$ is known, as in honeycomb-lattice hardcore bosons \cite{sol2}, the Berry curvature can be computed directly from Eq.~\ref{bc1}. Unfortunately, this is not the case in the present models. From Eq.~\ref{eqn1} the Berry curvature can be written alternatively as \begin{align} \Omega_{ij;\alpha}(\bold k)=-2\sum_{\alpha^\prime\neq \alpha}\frac{\text{Im}[ \braket{\mathcal{P}_{\bold{k}\alpha}|v_i|\mathcal{P}_{\bold{k}\alpha^\prime}}\braket{\mathcal{P}_{\bold{k}\alpha^\prime}|v_j|\mathcal{P}_{\bold{k}\alpha}}]}{\left(\epsilon_{\bold{k}\alpha}-\epsilon_{\bold{k}\alpha^\prime}\right)^2}, \label{chern2} \end{align} where $\bold v=\partial\boldsymbol{\mathcal{H}}^\prime(\bold{k})/\partial \bold k$ defines the velocity operators. The present form can be computed once the eigenvalues and eigenvectors of the Hamiltonian are obtained numerically. Similar to fermionic systems, the Chern number can still be defined for bosonic systems as the integration of the Berry curvature over the first Brillouin zone, \begin{equation} \mathcal{C}_\alpha= \frac{1}{2\pi}\int_{{BZ}} dk_idk_j~ \Omega_{ij;\alpha}(\bold k). \label{chenn} \end{equation} Indeed, topologically protected magnon edge states are characterized by nonzero Chern numbers. However, the Chern numbers are fictitious because the notions of completely filled bands and Fermi energy do not apply to bosonic (magnonic) systems.
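Eq.~(\ref{chenn}) is evaluated numerically in practice. As an illustration of the procedure (using a standard two-band lattice model rather than the magnon Hamiltonians above), a gauge-invariant lattice discretization of the Berry curvature (the Fukui--Hatsugai--Suzuki link method) yields integer Chern numbers:

```python
import numpy as np

def chern_lower_band(m, N=60):
    """Chern number of the lower band of the two-band model
    H(k) = sin(kx) sx + sin(ky) sy + (m + cos kx + cos ky) sz,
    computed with the lattice field-strength (link-variable) method."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    ks = np.linspace(0, 2*np.pi, N, endpoint=False)
    u = np.empty((N, N, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            H = np.sin(kx)*sx + np.sin(ky)*sy + (m + np.cos(kx) + np.cos(ky))*sz
            _, v = np.linalg.eigh(H)
            u[i, j] = v[:, 0]                  # lower-band eigenvector
    def link(a, b):
        z = np.vdot(a, b)
        return z / abs(z)                      # U(1) link variable
    F = 0.0
    for i in range(N):
        for j in range(N):
            # Berry phase around one plaquette of the discretized BZ
            U1 = link(u[i, j], u[(i+1) % N, j])
            U2 = link(u[(i+1) % N, j], u[(i+1) % N, (j+1) % N])
            U3 = link(u[(i+1) % N, (j+1) % N], u[i, (j+1) % N])
            U4 = link(u[i, (j+1) % N], u[i, j])
            F += np.angle(U1*U2*U3*U4)
    return int(round(F / (2*np.pi)))
```

The method is gauge invariant, so no smooth choice of eigenvector phases is needed; in the bosonic case the same plaquette construction is applied to the $\tau_3$-normalized paraunitary eigenvectors.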
\begin{figure} \centering \includegraphics[width=3in]{Edge1} \caption{Color online. Magnon edge states of bilayer Kagom\'e antiferromagnets (Model I) for a strip geometry with $\boldsymbol{\mathcal D}_{ij}=\mathcal D\hat{\bold z}$. The parameters are $\mathcal D/\mathcal J=0.2,~\mathcal J_t/\mathcal J=0.11$.} \label{mo1} \end{figure} \begin{figure} \centering \includegraphics[width=3in]{Edge} \caption{Color online. Magnon edge states of bilayer Kagom\'e antiferromagnets (Model I) for a strip geometry with $\boldsymbol{\mathcal D}_{ij}=\mathcal D\hat{\bold x}$. The parameters are $\mathcal D/\mathcal J=0.2,~\mathcal J_t/\mathcal J=0.11$.} \label{mo2} \end{figure} \begin{figure} \centering \includegraphics[width=1\linewidth]{AFM_edge} \caption{Color online. Magnon edge states of Kagom\'e antiferromagnets for Model II with the parameter values of jarosite KFe$_3$(OH)$_{6}$(SO$_{4}$)$_2$, $\mathcal D_z/\mathcal J=0.06,~\mathcal J_2/\mathcal J=0.03$. } \label{mo3} \end{figure} \begin{figure} \centering \includegraphics[width=1\linewidth]{XXZ_ed} \caption{Color online. Magnon edge states of Kagom\'e antiferromagnets for Model III with $\delta=0.4$. } \label{mo4} \end{figure} The experimentally accessible magnon Hall effect has been analyzed previously in Ref.~\cite{me}. Now, we complete this study by providing evidence of a topological magnon insulator with Chern-number-protected magnon edge modes. We have solved for the magnon edge modes using a strip geometry with open boundary conditions along the $y$ direction and infinite extent along the $x$ direction \cite{guo}. First, let us consider Model I with $\boldsymbol{\mathcal D}_{ij}=\mathcal D\hat{\bold z}$ shown in Fig.~\ref{mo1}. In this case the magnon bulk bands are degenerate between the $S_z\to S_x=\pm S$ sectors at $h=0~ (\chi=\pi/2)$ and the DMI vanishes in the noninteracting limit as the spins are along the $x$-$y$ Kagom\'e plane at zero field.
This results in gapless magnon bulk bands at $\pm{\bf K}$ and ${\bf \Gamma}$ with a single edge mode connecting these points, and $\kappa_{xy}$ vanishes as expected. For $h<h_s$ the spins are non-collinear and the degeneracy between $S_z\to S_x=\pm S$ is lifted because each spin has a component along the $z$-axis. The DMI opens a small gap between the bands. We see that pairs of gapless magnon edge states appear in the vicinity of the bulk gap, signifying the strong topology of the system and yielding nonzero $\kappa_{xy}$ \cite{me}. At the saturation field $h=h_s$ the spins are collinear along the $z$-axis, corresponding to a ferromagnetically coupled bilayer ferromagnet. The DMI again leads to gapped magnon bulk bands with counter-propagating gapless edge states, again with nonzero $\kappa_{xy}$. For the in-plane DMI $\boldsymbol{\mathcal D}_{ij}=\mathcal D{\hat{\bold x}}$ shown in Fig.~\ref{mo2}, the situation is different. There is no magnon Hall effect and $\kappa_{xy}$ vanishes in all regimes \cite{me}, but there is a topological magnon insulator, as we now explain. Indeed, we have degenerate magnon bulk bands between the $S_z\to S_x=\pm S$ sectors at zero field, but the DMI has a profound effect since the spins are along the $x$-$y$ Kagom\'e plane. As shown in Fig.~\ref{mo2}(a) there is a pair of edge modes for each spin sector, and they are related by time-reversal symmetry. This is evidence of a topological magnon insulator. However, $\kappa_{xy}$ vanishes as a consequence of the time-reversal symmetry between the degenerate spin sectors. In fact, this system is a magnonic counterpart of the fermionic topological insulator with imaginary second-nearest-neighbour SOC between electron spin-up and spin-down \cite{guo}. For $h<h_s$ the bands cross at ${\pm\bf K}$ as shown above due to the staggered flux configurations, and the edge modes are not topologically protected, as we have confirmed by computing the Berry curvatures and the Chern numbers.
For this reason $\kappa_{xy}$ again vanishes. At the saturation field $h=h_s$ the in-plane DMI disappears in the noninteracting limit and the system is topologically trivial with vanishing $\kappa_{xy}$. The key observation in the layer antiferromagnetic system is that although topologically protected edge states are present at zero magnetic field, $\kappa_{xy}$ is suppressed by the antiferromagnetic coupling. Now, let us consider Model II, which corresponds to the Kagom\'e material KFe$_3$(OH)$_{6}$(SO$_{4}$)$_2$ \cite{sup1,sup1a,sup2}. As shown above, this model differs significantly from Model I due to the presence of spin scalar chirality, which survives in the absence of DMI and magnetic ordering. The associated magnon edge modes are depicted in Fig.~\ref{mo3}. At zero field $h=0$ it is evident that there are no protected chiral edge modes. This shows that the system is topologically trivial and $\kappa_{xy}$ vanishes for any value of the DMI. In the presence of a magnetic field perpendicular to the Kagom\'e plane there is an induced noncoplanar spin texture which provides spin scalar chirality \cite{sup1a}. Figures~\ref{mo3}(b) and (c) show that the system is topologically nontrivial in this regime with protected gapless edge states, which yield a nonzero Chern number and finite $\kappa_{xy}$ even without the presence of DMI \cite{me}. Indeed, the presence of spin scalar chirality is the basis of chiral spin liquid physics, therefore it would not be surprising if the nontrivial topology of this system persisted in the spin liquid phase of the frustrated Kagom\'e magnets. Model III explicitly ignores the DMI, and the easy-plane anisotropy stabilizes the $\bold q=0$ magnetic ordering. This system is analogous to Model II as shown in Fig.~\ref{mo4}. This is because the presence of the DMI does not have any topological effects on the frustrated Kagom\'e lattice, unlike in insulating Kagom\'e ferromagnets.
The layer antiferromagnetic system and the frustrated system have similarities and differences. In both systems $\kappa_{xy}$ vanishes at zero field, which can be attributed to a zero net magnetic moment. In other words, the degeneracy at zero field between the $S_z\to S_x=\pm S$ sectors in the layer antiferromagnetic system yields a zero net magnetic moment, and for the coplanar/non-collinear system at zero field we have $\sum_{\Delta}{\bf S}_{\Delta}=0$ on each triangular plaquette, which also yields a zero net magnetic moment. However, the origin of finite $\kappa_{xy}$ is different in the two systems. For the layer antiferromagnets, topological magnon bands are induced by the DMI, whereas in the frustrated system with coplanar/non-collinear ordering the concept of topological magnon bands originates from the field-induced spin scalar chirality, which is nonzero even in the absence of DMI and magnetic ordering $\langle {\bf S}_j\rangle=0$. \section{Discussion and Conclusion} It is natural to ask about the importance of this investigation and whether such nontrivial topological effects can be experimentally realized in insulating antiferromagnets. Recently, a topological magnon insulator has been realized in the Kagom\'e ferromagnet Cu(1-3, bdc) \cite{alex5a}. This material is also the first Kagom\'e ferromagnet that shows the magnon Hall effect with finite $\kappa_{xy}$ \cite{alex6}. A recent experiment has reported a finite $\kappa_{xy}$ in the frustrated Kagom\'e volborthite Cu$_3$V$_2$O$_7$(OH)$_2$ $\cdot$2H$_2$O in the presence of an out-of-plane magnetic field $h=15~\text{Tesla}$ \cite{wat}. This result is attributed to spin excitations in the spin liquid regime. As mentioned previously, the Kagom\'e volborthite is known to exhibit different magnetic-field-induced ordered phases for $h< 15 ~\text{Tesla}$ \cite{Yo,Yo1}, and the frustrated Kagom\'e compound Ca$_{10}$Cr$_7$O$_{28}$ \cite{Balz} also exhibits ferromagnetic ordered states for $h\sim 11 ~\text{Tesla}$.
This suggests that the observed low temperature dependence of $\kappa_{xy}$ in Kagom\'e volborthite might not be due to spin excitations in the spin liquid regime, but to magnon excitations in the field-induced ordered phases. The iron jarosite KFe$_3$(OH)$_{6}$(SO$_{4}$)$_2$ \cite{sup1a,sup2} is an ideal Kagom\'e antiferromagnet with a $\bold q=0$ ground state and nonzero field-induced spin scalar chirality \cite{sup1a}. This is a perfect material to search for topologically nontrivial excitations with finite $\kappa_{xy}$ as described in the present study and Ref.~\cite{me}. At the moment, inelastic neutron scattering cannot probe the magnon edge modes because it is a bulk-sensitive method; the edge modes are a consequence of the magnon bulk topology that gives rise to finite $\kappa_{xy}$. The magnon edge modes can instead be probed by edge-sensitive methods such as light \cite{luuk} or electronic \cite{kha} scattering methods. This is not an impossible task in principle, and we believe it will be measured in the near future. \section*{Acknowledgement} Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research and Innovation.
\section{Introduction} In contrast to concentration quenching, aggregation-induced enhanced emission (AIEE) yields strong luminescence in the aggregation phases \cite{Hong2009_4332,Hong2011_5361}. The restriction of intramolecular motions is generally accepted as the reason behind AIEE, that is, the restriction of intramolecular rotations or vibrations arising from the physical constraint in the aggregation phases blocks non-radiative transition pathways \cite{Mei2014_5429}. Hexaphenylsilole (HPS) is one of the AIEE dyes, and the restricted rotation of the side phenyl ring is found to be a key factor for AIEE \cite{Chen2003_1535,Yu2005_6335,Zhao2015_5347}. This is corroborated by quantum mechanical and molecular mechanical (QM/MM) calculations where the Huang--Rhys factors and reorganization energies at low-frequency modes are reduced in the solid phase compared to the gas phase owing to the packing effect \cite{Zhang2014_9094}. In addition, HPS, having a bulky shape, does not form a cofacial configuration in the solid phase \cite{Hong2009_4332,Hong2011_5361}, which is considered to prevent concentration quenching. Cofacial aggregation is regarded as one of the reasons behind concentration quenching. This is because a destructive alignment of the transition dipole moments renders the lowest excited state symmetry-forbidden \cite{Cornil2001_1053}. Nevertheless, some dyes have been reported to be emissive in spite of the formation of cofacial aggregations \cite{Rosch2006_7184,Yoon2010_13675,Yao2011_834,Wang2014_8723, Basak2015_30398,Lucenti2017_1894,Qian2017_83,Ryu2017_8870}. In general, according to Kasha's rule, emissions have been attributed to the lowest excited states \cite{Kasha1950_14}. However, if all the radiative and non-radiative transitions from a higher excited state to all the lower excited states are suppressed, fluorescence from the higher excited state can be expected to occur against Kasha's rule.
Sato {\it et al.} have already reported that, in fluorescent dopants employed in organic light-emitting diodes (OLEDs), radiative and non-radiative transitions from a triplet excited state T$_n$ ($n>1$) to all the lower triplet excited states can be suppressed due to the pseudo-degenerate electronic states \cite{Sato2015_189,Sato2017_4820,Pu2019_2541}. The pseudo-degeneracy leads to cancellation of the overlap density between the excited states, which generates T$_n$ excitons with long lifetimes. This enables the fluorescence via higher triplets (FvHT) mechanism for OLEDs, that is fluorescence utilizing the reverse intersystem crossing (RISC) from T$_n$ to singlets. FvHT is different from thermally activated delayed fluorescence (TADF) in that TADF undergoes the thermally activated RISC from T$_1$ to S$_1$ by decreasing the energy difference, $\Delta E_{{\rm S_1-T_1}}$ \cite{Endo2009_4802,Adachi2014_060101}. The FvHT mechanism is proposed to explain the mechanism of electroluminescence in some OLED dopants with large $\Delta E_{{\rm S_1-T_1}}$, where T$_1$ excitons cannot overcome the energy difference thermally \cite{Hu2014_2064,Sato2017_4820}. A cyano-substituted 1,2-bis(pyridylphenyl)ethene (CNPPE) (Fig.~\ref{FIG1}) has been reported to exhibit the AIEE behavior in solid phase \cite{Nishio2014_686}. The rate constants of the non-radiative transitions are decreased from $> 1.0\times10^{10}$ s$^{-1}$ in CH$_2$Cl$_2$ solution to $5.0\times10^7$ s$^{-1}$ in solid phase. The rate constants of the radiative transitions are comparable between CH$_2$Cl$_2$ solution ($> 2.0\times10^7$ s$^{-1}$) and solid phase ($1.3\times10^8$ s$^{-1}$). Accordingly, the fluorescence quantum yield is increased from 0.002 to 0.72 by aggregation. Since CNPPE forms cofacial configurations in solid phase, the occurrence of concentration quenching is predicted. Some cofacial CNPPE molecules have $C_i$ symmetry in the crystal structure. 
This suggests the possibility of pseudo-degenerate electronic states delocalized over the cofacial molecules. In this case, fluorescence from singlets higher than S$_1$ is expected to occur against Kasha's rule because the transitions between the excited states can be suppressed as in the case of the FvHT mechanism. The electronic states delocalized over molecules may decrease the vibronic coupling constants (VCCs) \cite{Shizu2013_215}. These indicate that the internal conversion can be more strongly suppressed in the solid phase than in the solution phase as long as excimer formation occurs in the solid phase. This mechanism is different from what is proposed in styrylbenzenes, i.e., di- \cite{Shi2017_23166}, tri- \cite{Garzon2017_4720,Domiguez2020_1}, and tetra(styryl)benzene \cite{Domiguez2020_1}, which asserts that the origin of the AIEE behavior is attributed to blocking the trans-cis photoisomerization of the stilbene unit in the solid phase. In this study, we investigated the role of pseudo-degeneracy in the appearance of AIEE considering CNPPE as an example. Vibronic coupling density (VCD) analyses were performed to elucidate the local picture of VCC \cite{Sato2008_758,Sato2009_99,Sato2013_012010}. To explain the results obtained by the TD-DFT calculations, we discuss the electron density difference and overlap density in the pseudo-degenerate electronic system based on the Hubbard model. \begin{figure}[h] \centering \includegraphics[width=0.8\hsize]{./FIG1.eps} \caption{ Chemical structure of cyano-substituted 1,2-bis(pyridylphenyl)ethene (CNPPE). } \label{FIG1} \end{figure} \section{Theory} We consider the internal conversion from an initial vibronic state $\ket{\Phi_{mi} ({\bf r}, {\bf Q})}$ associated with electronic $m$ and vibrational $mi$ states to a final vibronic state $\ket{\Phi_{nj} ({\bf r}, {\bf Q})}$.
Here, ${\bf r}=({\bf r}_1, \cdots, {\bf r}_i, \cdots, {\bf r}_N)$ is a set of $N$ electronic coordinates, and ${\bf Q} = (Q_1, \cdots, Q_{\alpha}, \cdots, Q_{M})$ is a set of $M$ mass-weighted normal coordinates. Within the crude adiabatic approximation \cite{Fischer1984,Azumi1977_315}, the vibronic states are represented as the product of vibrational and electronic states fixed at the nuclear configuration ${\bf R}_0$: $\ket{\Phi_{mi} ({\bf r}, {\bf Q})} = \ket{\chi_{mi} ({\bf Q})} \ket{\Psi_m ({\bf r}; {\bf R}_0)}$. ${\bf R}_0$ is chosen as the equilibrium nuclear configuration of the ground or excited optimized structures. The rate constant of the internal conversion from electronic state $m$ to $n$ is expressed as \cite{Uejima2014_14244} \begin{equation} k_{n \leftarrow m}^{{\rm IC}} (T) = \frac{2\pi}{\hbar} \sum_{\alpha} |V_{mn,\alpha}|^2 \sum_{ij} P_{mi} (T) |\braket{\chi_{mi}|Q_{\alpha}|\chi_{nj}}|^2 \delta (E_{mi} - E_{nj}), \end{equation} where $P_{mi}(T)$ is the Boltzmann distribution function of the initial vibronic state at temperature $T$, and $E_{mi}$ and $E_{nj}$ are the eigenenergies of $\ket{\Phi_{mi} ({\bf r}, {\bf Q})}$ and $\ket{\Phi_{nj} ({\bf r}, {\bf Q})}$, respectively. $E_{mi}$ is represented as the sum of the electronic energy $E_m$ and the vibrational energy. $V_{mn, \alpha}$ is the off-diagonal VCC given by \begin{equation} V_{mn, \alpha} = \left< \Psi_m ({\bf r}; {\bf R}_0) \left| \left( \frac{\partial \hat{H} ({\bf r}, {\bf R}) }{\partial Q_{\alpha}} \right)_{{\bf R}_0} \right| \Psi_n ({\bf r}; {\bf R}_0) \right>, \end{equation} where $\hat{H} ({\bf r}, {\bf R})$ is the molecular Hamiltonian, and ${\bf R}$ is the set of nuclear coordinates. $V_{n,\alpha}:=V_{nn,\alpha}$ is called the diagonal VCC. 
Ignoring the Duschinsky effect, i.e., assuming that the vibrational modes do not change upon excitation, the matrix element of the vibrational states is written as \begin{equation} \braket{\chi_{mi}|Q_{\alpha}|\chi_{nj}} = \braket{n_{mi,\alpha}|Q_{\alpha}|n_{nj, \alpha}} \prod_{\beta \neq \alpha} \braket{n_{mi,\beta}|n_{nj,\beta}}, \end{equation} where $\ket{n_{mi,\alpha}}$ is a vibrational state of a single mode. The Franck--Condon (FC) overlap integral is expressed as \cite{Hutchisson1930_410} \begin{eqnarray} & & \braket{n_{mi,\alpha}|n_{nj,\alpha}} = \sqrt{\frac{ n_{mi, \alpha}! n_{nj, \alpha}!}{2^{n_{mi,\alpha}+n_{nj,\alpha}}}} e^{- \frac{1}{4} g_{n,\alpha}^2} \notag \\ & \times & \sum_{l=0}^{ {\rm min} [n_{mi,\alpha}, n_{nj,\alpha}]} (-1)^{n_{mi,\alpha}-l} 2^l \frac{ g_{n,\alpha}^{n_{mi,\alpha}+n_{nj,\alpha}-2l} }{ l! (n_{mi,\alpha}-l)! (n_{nj,\alpha}-l)!}. \end{eqnarray} Here, $g_{n,\alpha}$ is the dimensionless diagonal VCC (the Huang--Rhys factor): \begin{equation} g_{n,\alpha} = \frac{ V_{n,\alpha} }{ \sqrt{\hbar \omega_{n,\alpha}^3}}, \end{equation} where $\omega_{n,\alpha}$ is the angular frequency of vibrational mode $\alpha$. In general, the rate constant of an internal conversion is small when the diagonal and off-diagonal VCCs are small \cite{Uejima2014_14244}. In particular, the dependence of the rate constant on the diagonal VCC is strong, and the reduction of the diagonal VCCs significantly contributes to the suppression of the internal conversion \cite{Uejima2014_14244}. 
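The Hutchisson expression above is easy to mis-transcribe, so a direct numerical check is useful. The following minimal sketch (our illustration, not part of the original work) implements the overlap integral for two displaced harmonic oscillators of equal frequency, with `g` the dimensionless shift (Huang--Rhys factor $g_{n,\alpha}$) defined above:

```python
import math

def fc_overlap(n, m, g):
    """Franck-Condon overlap <n|m> between vibrational states of two displaced
    harmonic oscillators of equal frequency (Hutchisson's formula); g is the
    dimensionless shift (Huang-Rhys factor) defined in the text."""
    pref = math.sqrt(math.factorial(n) * math.factorial(m) / 2.0 ** (n + m))
    pref *= math.exp(-g * g / 4.0)
    total = 0.0
    for l in range(min(n, m) + 1):
        total += ((-1) ** (n - l) * 2.0 ** l * g ** (n + m - 2 * l)
                  / (math.factorial(l) * math.factorial(n - l)
                     * math.factorial(m - l)))
    return pref * total
```

At zero displacement the formula reproduces the orthonormality of the oscillator states, and for $n=0$ it reduces to the Poisson-distributed intensities $|\braket{0|m}|^2 = e^{-g^2/2}(g^2/2)^m/m!$, which sum to unity.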
Within a single-mode approximation at 0 K, where the initial state is the vibrational ground state, the rate constant of the internal conversion is reduced to \cite{Uejima2014_14244} \begin{equation} k_{n \leftarrow m}^{{\rm IC}} (T) = \frac{2\pi}{\hbar} |V_{mn,\alpha}|^2 \sum_j |\braket{\chi_{mi}|Q_{\alpha}|\chi_{nj}}|^2 \int_{E_{{\rm min}}}^{E_{{\rm max}}} \frac{1}{\sqrt{2 \pi} \sigma} e^{ -\frac{(E - (E_{mi}-E_{nj}))^2}{2\sigma^2} } dE, \label{Eq:IC2} \end{equation} where the delta function is replaced with a Gaussian function that represents the density of states with linewidth $\sigma$. In addition to the internal conversion, vibrational relaxation from the FC to the adiabatic (AD) state is a non-radiative process that should be suppressed to obtain efficient emission \cite{Uejima2014_14244}. Within the crude adiabatic representation assuming the harmonic approximation, the reorganization energy due to vibrational relaxation is evaluated by \begin{equation} \Delta E = \sum_{\alpha} \frac{V_{n,\alpha}^2}{2 \omega_{n,\alpha}^2}. \label{Eq:DeltaE} \end{equation} Thus, the reduction of the diagonal VCCs leads to a small reorganization energy, which depends on the square of the diagonal VCCs. The VCD is expressed as the density form of the VCC \cite{Sato2008_758,Sato2009_99,Sato2013_012010}: \begin{equation} V_{mn, \alpha} = \int d \textbf{\textit{x}} \ \eta_{mn,\alpha} (\textbf{\textit{x}}), \end{equation} where $\textbf{\textit{x}}=(x,y,z)$ is a three-dimensional coordinate. The diagonal VCD $\eta_{n, \alpha} (\textbf{\textit{x}}) := \eta_{nn,\alpha} (\textbf{\textit{x}})$ is defined by \begin{equation} \eta_{n,\alpha} (\textbf{\textit{x}}) = \Delta \rho_{nm} (\textbf{\textit{x}}) \times v_{\alpha} (\textbf{\textit{x}}). 
\end{equation} Here, $\Delta \rho_{nm} (\textbf{\textit{x}})$ is the electron density difference between $\ket{\Psi_n ({\bf r}; {\bf R}_0)}$ and the reference state $\ket{\Psi_m ({\bf r}; {\bf R}_0)}$: \begin{equation} \Delta \rho_{nm} (\textbf{\textit{x}}) = \braket{\Psi_n ({\bf r};{\bf R}_0) | \hat{\rho}(\textbf{\textit{x}}) | \Psi_n ({\bf r}; {\bf R}_0)} - \braket{\Psi_m ({\bf r};{\bf R}_0) | \hat{\rho}(\textbf{\textit{x}}) | \Psi_m ({\bf r}; {\bf R}_0)}. \end{equation} $\ket{\Psi_m ({\bf r}; {\bf R}_0)}$ is taken as the electronic state in the equilibrium nuclear configuration. $\hat{\rho}(\textbf{\textit{x}})$ is the electron density operator defined by \begin{equation} \hat{\rho} (\textbf{\textit{x}}) = \sum_{ij} \sum_{\sigma \tau} \hat{c}_{i \sigma}^{\dagger} \hat{c}_{j \tau} \psi_{i \sigma}^{*} (\textbf{\textit{x}}) \psi_{j \tau} (\textbf{\textit{x}}), \end{equation} where $\psi_{i \sigma}^{*} (\textbf{\textit{x}})$ and $\psi_{j \tau} (\textbf{\textit{x}})$ are spatial orbitals, and $\hat{c}_{i \sigma}^{\dagger}$ and $\hat{c}_{j \tau}$ are creation and annihilation operators, respectively. $i$ and $j$ refer to the orbital indices, and $\sigma$ and $\tau$ refer to the spin indices. $v_{\alpha} (\textbf{\textit{x}})$ is the potential derivative expressed as \begin{equation} v_{\alpha} (\textbf{\textit{x}}) = \left( \frac{\partial u (\textbf{\textit{x}})}{\partial Q_{\alpha}} \right)_{{\bf R}_0}, \end{equation} where $u(\textbf{\textit{x}})$ is the electron-nucleus potential acting on a single electron. 
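Before turning to the off-diagonal case, we note that Eq.~(\ref{Eq:DeltaE}) is straightforward to evaluate once the diagonal VCCs and frequencies are known. The following minimal sketch (our illustration, not the authors' code; the single retained mode and the unit conversions are our assumptions) evaluates one mode's contribution using the monomer values quoted later in the text ($V_{n,\alpha}=8.42\times10^{-4}$ a.u. at $1692$ cm$^{-1}$):

```python
CM1_TO_HARTREE = 4.5563353e-6   # 1 cm^-1 in hartree
HARTREE_TO_EV = 27.211386       # 1 hartree in eV

def reorganization_energy(vccs, omegas_cm):
    """Reorganization energy sum_a V_a^2 / (2 w_a^2) (the formula above),
    with diagonal VCCs in atomic units and frequencies in cm^-1;
    returns the result in hartree."""
    return sum(v * v / (2.0 * (w * CM1_TO_HARTREE) ** 2)
               for v, w in zip(vccs, omegas_cm))

# Single-mode contribution for the monomer's maximum-coupling mode
# (V = 8.42e-4 a.u. at 1692 cm^-1); the full sum runs over all modes.
dE_eV = reorganization_energy([8.42e-4], [1692.0]) * HARTREE_TO_EV
print(f"single-mode contribution: {dE_eV:.3f} eV")
```

Within this sketch, this single mode already accounts for roughly 0.16 eV, a sizable fraction of the 0.582 eV total quoted later for Monomer Model.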
The off-diagonal VCD is defined by \begin{equation} \eta_{mn,\alpha} (\textbf{\textit{x}}) = \rho_{mn} (\textbf{\textit{x}}) \times v_{\alpha} (\textbf{\textit{x}}), \end{equation} where $\rho_{mn} (\textbf{\textit{x}})$ is the overlap density between $\ket{\Psi_m({\bf r}; {\bf R}_0)}$ and $\ket{\Psi_n({\bf r}; {\bf R}_0)}$: \begin{equation} \rho_{mn} (\textbf{\textit{x}}) = \braket{\Psi_m({\bf r}; {\bf R}_0) | \hat{\rho} (\textbf{\textit{x}}) | \Psi_n({\bf r}; {\bf R}_0)}. \end{equation} The VCD enables us to understand the vibronic couplings (VCs) arising from the electronic factor $\Delta \rho_{nm} (\textbf{\textit{x}})$ or $\rho_{mn} (\textbf{\textit{x}})$ and the vibrational factor $v_{\alpha} (\textbf{\textit{x}})$. It should be noted that the disappearance of $\Delta \rho_{nm} (\textbf{\textit{x}})$ and $\rho_{mn} (\textbf{\textit{x}})$ gives rise to the suppression of the internal conversions via the reduction of $V_{n,\alpha}$ and $V_{mn,\alpha}$, respectively. Since the transition dipole moment also depends on $\rho_{mn} (\textbf{\textit{x}})$ \cite{Uejima2014_14244}, the disappearance of $\rho_{mn} (\textbf{\textit{x}})$ suppresses both the radiative and non-radiative transitions. \section{Methods of calculations} \begin{figure}[h] \centering \includegraphics[width=0.6\hsize]{./FIG2.eps} \caption{ (a) Crystal structure of CNPPE solid \cite{Nishio2014_686}. (b) Dimer Model \textbf{1} for the CNPPE solid where centered cofacial molecules with $C_i$ symmetry are selected as the QM region and the surrounding 16 molecules are selected as the MM region. } \label{FIG2} \end{figure} Figure~\ref{FIG2} (a) shows the crystal structure of the CNPPE solid \cite{Nishio2014_686}. 
We modeled the CNPPE solid as a dimer with a cofacial configuration (Fig.~\ref{FIG2} (b)), Dimer Model \textbf{1}, where the cofacial dimer was calculated by the QM method and the surrounding 16 molecules were calculated by the MM method based on the ONIOM (our own $n$-layered integrated molecular orbital and molecular mechanics) approach \cite{Svensson1996_19357,Chung2015_5678}. Dimer Model \textbf{1} has $C_i$ symmetry. The other dimer models, Dimer Model \textbf{2} and Dimer Model \textbf{3}, which are aligned in different directions and have $C_1$ symmetry (see Fig. S1 in the Supplementary Information), were also investigated. The ground and excited states of the QM region were computed at the M06-2X/6-31G(d,p) and TD-M06-2X/6-31G(d,p) levels of theory, respectively, whereas the MM region was computed using the universal force field (UFF). The M06-2X functional was employed to incorporate dispersion interactions between adjacent molecules \cite{Zhao2008_215}. The coordinates of the MM region were fixed during the geometry optimizations and vibrational analyses for the ground and excited states. The CNPPE in CH$_2$Cl$_2$ solution was modeled as a single molecule with $C_1$ symmetry. The ground and excited states of a single molecule in solution, Monomer Model, were computed at the M06-2X/6-31G(d,p) and TD-M06-2X/6-31G(d,p) levels of theory, respectively, including the solvent effect through the polarizable continuum model (PCM) \cite{Tomasi2005_2999}. The above calculations were carried out using Gaussian 09 \cite{Frisch2013D,Frisch2013E}. The VCCs and VCD were calculated using our own code. \section{Results and discussion} \subsection{Vibronic coupling constants (VCCs) and vibronic coupling density (VCD)} There are a few possibilities for selecting dimers from the crystal structure. The total energies of the excited states of the three types of dimer models were compared (Fig. S2). 
The S$_1$ and S$_2$ states of Dimer Model \textbf{1} are energetically more stable than those of the other dimer models, and fluorescence from Dimer Model \textbf{1} is expected. Therefore, we concentrate on Dimer Model \textbf{1}. Table~\ref{TABLE1} lists the excited states of Dimer Model \textbf{1} at the S$_0$ and S$_2$ optimized structures. From the selection rule of the electric dipole transition, S$_1$ ($A_g$) is symmetry-forbidden and S$_2$ ($A_u$) is symmetry-allowed (Laporte rule). Although, according to Kasha's rule, emission does not occur from the second excited state, fluorescence from S$_2$ is possible if all the transitions from S$_2$ to S$_1$ are suppressed. \begin{table}[h] \small \caption{\label{TABLE1} Excited states of Dimer Model \textbf{1} at the S$_0$ and S$_2$ optimized structures. $f$ denotes the oscillator strength. } \begin{tabular*}{0.48\textwidth}{@{\extracolsep{\fill}}cccccc} \hline & & \multicolumn{2}{c}{Excitation Energy} & & Major Configuration \\ \cline{3-4} & State & eV & nm & $f$ & (CI coefficient)\\ \hline @S$_0$ & S$_1$(A$_g$) & 3.7359 & 331.88 & 0.0000 & HO-1 $\rightarrow$ LU+1 (0.306)\\ & & & & & HO $\rightarrow$ LU (0.622) \\ & S$_2$(A$_u$) & 3.9052 & 317.49 & 2.2807 & HO-1 $\rightarrow$ LU (0.366)\\ & & & & & HO $\rightarrow$ LU+1 (0.585) \\ @S$_2$ & S$_1$(A$_g$) & 3.1397 & 394.89 & 0.0000 & HO-1 $\rightarrow$ LU+1 (0.212)\\ & & & & & HO $\rightarrow$ LU (0.664) \\ & S$_2$(A$_u$) & 3.3074 & 374.86 & 1.8099 & HO-1 $\rightarrow$ LU (0.215)\\ & & & & & HO $\rightarrow$ LU+1 (0.663)\\ \hline \end{tabular*} \end{table} \begin{figure}[h] \centering \includegraphics[width=0.8\hsize]{./FIG3.eps} \caption{ Diagonal VCCs of (a) S$_0$@S$_2$ and (b) S$_1$@S$_2$ as well as off-diagonal VCCs of (c) S$_0$@S$_2$ $\leftarrow$ S$_2$@S$_2$ and (d) S$_1$@S$_2$ $\leftarrow$ S$_2$@S$_2$ for Dimer Model \textbf{1}. 
} \label{FIG3} \end{figure} Figures~\ref{FIG3} (a) and (b) show the diagonal VCCs of S$_0$@S$_2$ and S$_1$@S$_2$, respectively. Furthermore, Figs.~\ref{FIG3} (c) and (d) show the off-diagonal VCCs of S$_0$@S$_2$ $\leftarrow$ S$_2$@S$_2$ and S$_1$@S$_2$ $\leftarrow$ S$_2$@S$_2$, respectively. The diagonal VCCs of S$_1$@S$_2$ are extremely small; the largest VCC, that of vibrational mode 233, is $0.79 \times 10^{-4}$ a.u. In addition, both the off-diagonal VCCs of S$_1$@S$_2$ $\leftarrow$ S$_2$@S$_2$ and S$_0$@S$_2$ $\leftarrow$ S$_2$@S$_2$ are small. These results indicate that the internal conversions from S$_2$ to S$_0$ as well as to S$_1$ are suppressed, while the radiative transition between S$_2$ and S$_0$ is enabled by the large oscillator strength (see Table \ref{TABLE1}). In particular, because of the extremely small diagonal VCCs of S$_1$@S$_2$, the internal conversion from S$_2$ to S$_1$ is suppressed, thereby enabling the fluorescence from S$_2$. \begin{figure*}[h] \centering \includegraphics[width=0.8\hsize]{./FIG4.eps} \caption{ (a) Frontier orbitals and (b) orbital levels of Dimer Model \textbf{1} at the S$_2$ optimized structure. $X_1$ and $X_2$ are the constituent molecules of Dimer Model \textbf{1}. Isosurface values of the frontier orbitals are $3.0\times 10^{-2}$ a.u. } \label{FIG4} \end{figure*} Figures~\ref{FIG4} (a) and (b) present the frontier orbitals and orbital levels at the S$_2$ optimized structure, respectively. The adiabatic wave functions are delocalized over the molecules, thereby indicating the excimer formation in solid phase. Herein, an excimer is defined as an excited state delocalized over a dimer. The AD S$_2$ is an excimer state without an intermolecular charge transfer character. The relaxed S$_1$ also forms an excimer state similar to S$_2$. In the present case, the delocalized electronic states are obtained because Dimer Model \textbf{1} belongs to $C_i$ symmetry even in the AD state. 
The NHOMO and HOMO as well as the LUMO and NLUMO of the excimer are pseudo-degenerate, and are approximately expressed as \begin{eqnarray} \psi_{{\rm NHO}} &\approx& \frac{1}{\sqrt{2}} ( \phi_{{\rm HO}} (X_1) + \phi_{{\rm HO}} (X_2)), \\ \psi_{{\rm HO}} &\approx& \frac{1}{\sqrt{2}} ( \phi_{{\rm HO}} (X_1) - \phi_{{\rm HO}} (X_2)), \\ \psi_{{\rm LU}} &\approx& \frac{1}{\sqrt{2}} ( \phi_{{\rm LU}} (X_1) - \phi_{{\rm LU}} (X_2)), \\ \psi_{{\rm NLU}} &\approx& \frac{1}{\sqrt{2}} ( \phi_{{\rm LU}} (X_1) + \phi_{{\rm LU}} (X_2)), \end{eqnarray} where $\phi_{{\rm HO/LU}} (X_1/X_2)$ denotes the HOMO/LUMO of molecule $X_1$/$X_2$ comprising the model. $\phi_{{\rm HO/LU}} (X_2)$ is obtained from $\phi_{{\rm HO/LU}} (X_1)$ by a symmetry operation. The frontier orbitals of the excimer are thus represented as the in-phase and out-of-phase linear combinations of the HOMOs and LUMOs of the constituents. \begin{figure*}[h] \centering \includegraphics[width=0.8\hsize]{./FIG5.eps} \caption{ Electron density differences of (a) S$_1$@S$_2$-S$_0$@S$_2$, (b) S$_2$@S$_2$-S$_0$@S$_2$, and (c) S$_2$@S$_2$-S$_1$@S$_2$ of Dimer Model \textbf{1}. Isosurface values are $1.0\times 10^{-3}$ a.u. Overlap densities of (d) S$_2$@S$_2$-S$_0$@S$_2$, and (e) S$_2$@S$_2$-S$_1$@S$_2$ of Dimer Model \textbf{1}. Isosurface values are $2.0\times 10^{-3}$ a.u. } \label{FIG5} \end{figure*} S$_1$ mainly consists of the HOMO-LUMO and NHOMO-NLUMO excited configurations, whereas S$_2$ mainly consists of the HOMO-NLUMO and NHOMO-LUMO excited configurations (Table~\ref{TABLE1}). Since the NHOMO/HOMO and LUMO/NLUMO are pseudo-degenerate, S$_1$ and S$_2$ are pseudo-degenerate. Figures~\ref{FIG5} (a) and (b) show the electron density differences of S$_1$@S$_2$-S$_0$@S$_2$, $\Delta \rho_{10}$, and S$_2$@S$_2$-S$_0$@S$_2$, $\Delta \rho_{20}$, respectively. $\Delta \rho_{20}$ and $\Delta \rho_{10}$ exhibit similar distributions. 
In contrast, the electron density difference of S$_2$@S$_2$-S$_1$@S$_2$, $\Delta \rho_{21}$ (Fig.~\ref{FIG5} (c)), exhibits an extremely small distribution. This leads to the small diagonal VCD of S$_1$@S$_2$, resulting in the small diagonal VCCs of S$_1$@S$_2$. Figures~\ref{FIG5} (d) and (e) show the overlap densities of S$_2$@S$_2$-S$_0$@S$_2$, $\rho_{20}$, and S$_2$@S$_2$-S$_1$@S$_2$, $\rho_{21}$, respectively. $\rho_{21}$ exhibits a smaller distribution than that of $\rho_{20}$. The small $\rho_{21}$ contributes to the small off-diagonal VCCs of S$_1$@S$_2$ $\leftarrow$ S$_2$@S$_2$. From the distributions of the overlap densities, one might expect the off-diagonal VCCs of S$_0$@S$_2$ $\leftarrow$ S$_2$@S$_2$ to be much larger than those of S$_1$@S$_2$ $\leftarrow$ S$_2$@S$_2$. However, the values of these off-diagonal VCCs are comparable (Figs.~\ref{FIG3} (c) and (d)) because $\rho_{20}$ is symmetrically localized on the atoms while $\rho_{21}$ is localized on the bonds. In general, an overlap density that is symmetrically localized on atoms couples weakly to a potential derivative, resulting in small off-diagonal VCCs \cite{Uejima2014_14244}. Therefore, the off-diagonal VCCs of S$_0$@S$_2$ $\leftarrow$ S$_2$@S$_2$ are not large. Consequently, the small electron density difference and overlap density between the pseudo-degenerate S$_1$ and S$_2$ excited states lead to the extremely small diagonal VCCs of S$_1$@S$_2$ and the small off-diagonal VCCs of S$_1$@S$_2$$\leftarrow$S$_2$@S$_2$. We discuss the mechanism by which the electron density difference and overlap density vanish in the pseudo-degenerate electronic system using the Hubbard model in Sec.~\ref{SEC4-2}. Below, we compare the VCs of Dimer Model \textbf{1} in solid phase with those of Monomer Model in solution phase. 
The reducible representation of the vibrational modes for the monomer, which belongs to $C_1$ symmetry, contains only the $A$ irreducible representation, and the number of vibrational modes is \begin{equation} \Gamma_{{\rm vib}} (C_1) = 129 A, \end{equation} where all vibrational modes are vibronic active modes that provide non-zero diagonal and off-diagonal VCCs. On the other hand, the reducible representation for the dimer, which belongs to $C_i$ symmetry, is decomposed as \begin{equation} \Gamma_{{\rm vib}} (C_i) = 132 A_g + 132 A_u, \end{equation} where $A_g$ and $A_u$ are the totally and non-totally symmetric modes, respectively. The vibronic active modes for the diagonal VCs are the $A_g$ modes, and those for the off-diagonal VCs are the $A_u$ modes. Therefore, the number of vibronic active modes is almost the same in the monomer and the dimer, although the total number of vibrational modes is larger in the dimer. The numbers of irreducible representations of vibrational modes in excimers with $C_i$, $C_2$, and $C_s$ symmetry are summarized in Section S2. \begin{figure}[h] \centering \includegraphics[width=0.8\hsize]{./FIG6.eps} \caption{ Diagonal VCCs (a) of Monomer Model in the FC S$_1$ state and (b) of Dimer Model \textbf{1} in the FC S$_2$ state. (c) Off-diagonal VCCs of S$_0$@S$_1$ $\leftarrow$ S$_1$@S$_1$ of Monomer Model. } \label{FIG6} \end{figure} Figures~\ref{FIG6} (a) and (b) show the diagonal VCCs of Monomer Model in the FC S$_1$ state and Dimer Model \textbf{1} in the FC S$_2$ state, respectively. The diagonal VCCs are greatly reduced by the excimer formation; the largest VCC, that of mode $109$ in Monomer Model, is $8.42\times10^{-4}$ a.u., whereas that of mode $230$ in Dimer Model \textbf{1} is $5.27\times10^{-4}$ a.u. 
This result indicates that the internal conversion from S$_2$ to S$_0$ in Dimer Model \textbf{1} is suppressed in comparison with that from S$_1$ to S$_0$ in Monomer Model because the rate constant of the internal conversion is strongly correlated with the diagonal VCCs \cite{Uejima2014_14244}. The off-diagonal VCCs of S$_0$@S$_1$ $\leftarrow$ S$_1$@S$_1$ for Monomer Model are presented in Fig.~\ref{FIG6} (c). The off-diagonal VCCs of Dimer Model \textbf{1} (Fig.~\ref{FIG3} (c)) are smaller than those of Monomer Model. For example, the off-diagonal VCC of mode $49$ in Monomer Model is $4.49\times10^{-4}$ a.u. and that of mode $111$ in Dimer Model \textbf{1}, corresponding to mode $49$ in Monomer Model, is $0.89 \times 10^{-4}$ a.u. The reduction of the off-diagonal VCCs in Dimer Model \textbf{1} also contributes to the suppression of the internal conversion. The off-diagonal VCCs of the maximum coupling modes are comparable between Monomer Model ($5.30\times10^{-4}$ a.u.) and Dimer Model \textbf{1} ($5.33\times10^{-4}$ a.u.). In contrast, the diagonal VCC of the maximum coupling mode at $1692$ cm$^{-1}$ for Monomer Model ($8.42\times10^{-4}$ a.u.) is larger than that at $1701$ cm$^{-1}$ for Dimer Model \textbf{1} ($5.27\times10^{-4}$ a.u.). The difference of the electronic energies between S$_1$ and S$_0$ is 3.14 eV for Monomer Model, and that between S$_2$ and S$_0$ is 3.51 eV for Dimer Model \textbf{1}. Assuming that $V_{mn,\alpha}=5.00\times10^{-4}$ a.u., $\omega_{\alpha}=1700$ cm$^{-1}$, $E_m-E_n=3.0$ eV, $\sigma=500$ cm$^{-1}$, $E_{{\rm min}}=-11500$ cm$^{-1}$, and $E_{{\rm max}}=11500$ cm$^{-1}$ in Eq.~(\ref{Eq:IC2}), the rate constants of the internal conversion, considering only the maximum coupling mode of the diagonal VCC, are estimated to be $1.569\times10^{10}$ s$^{-1}$ for Monomer Model and $5.651\times10^7$ s$^{-1}$ for Dimer Model \textbf{1}. The ratio of the rate constant of Monomer Model to that of Dimer Model \textbf{1} is of the order of $10^2$. 
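The structure of this estimate can be sketched numerically. The following is our simplified, single-promoting-mode reading of Eq.~(\ref{Eq:IC2}) at 0 K, assuming equal-frequency displaced oscillators, a ladder-operator form for $\braket{\chi_{m0}|Q_\alpha|\chi_{nj}}$, and the parameters quoted above; it is intended only to reproduce the qualitative ordering (the monomer rate far exceeds the dimer rate), not the exact values, since the full treatment differs in its Franck--Condon factors and broadening details.

```python
import math

CM1 = 4.5563353e-6     # 1 cm^-1 in hartree
EV = 1.0 / 27.211386   # 1 eV in hartree
ATU = 2.4188843e-17    # atomic unit of time in seconds

def fc0(k, g):
    """<0|k> for equal-frequency displaced oscillators, shift d = g/sqrt(2)."""
    d = g / math.sqrt(2.0)
    return math.exp(-d * d / 2.0) * d ** k / math.sqrt(math.factorial(k))

def k_ic(v_offdiag, v_diag, omega_cm, gap_ev,
         sigma_cm=500.0, window_cm=11500.0, jmax=60):
    """Single-mode 0 K internal-conversion rate (s^-1) with a Gaussian
    density of states integrated over [-window, +window] as in the text."""
    w = omega_cm * CM1
    g = v_diag / math.sqrt(w ** 3)      # Huang-Rhys factor of the text
    gap, sig, win = gap_ev * EV, sigma_cm * CM1, window_cm * CM1
    total = 0.0
    for j in range(jmax):
        # <0|Q|j> via ladder operators acting in the final-state well
        m = math.sqrt(1.0 / (2.0 * w)) * (
            (math.sqrt(j) * fc0(j - 1, g) if j > 0 else 0.0)
            + math.sqrt(j + 1) * fc0(j + 1, g))
        mu = gap - j * w                # centre of the broadened final state
        weight = 0.5 * (math.erf((win - mu) / (math.sqrt(2.0) * sig))
                        - math.erf((-win - mu) / (math.sqrt(2.0) * sig)))
        total += m * m * weight
    return 2.0 * math.pi * v_offdiag ** 2 * total / ATU

# V_mn = 5.00e-4 a.u., 1700 cm^-1, 3.0 eV gap as in the text; the diagonal
# VCCs of the maximum coupling modes distinguish monomer and dimer.
k_mono = k_ic(5.00e-4, 8.42e-4, 1700.0, 3.0)
k_dimer = k_ic(5.00e-4, 5.27e-4, 1700.0, 3.0)
```

Within this sketch the monomer rate exceeds the dimer rate by several orders of magnitude; the suppression comes entirely from the smaller Huang--Rhys factor of the dimer, in line with the argument of the text.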
These values are in good agreement with the experimental ones ($>1.0\times10^{10}$ s$^{-1}$ in CH$_2$Cl$_2$ solution and $5.0\times10^7$ s$^{-1}$ in solid phase) \cite{Nishio2014_686}. Therefore, the reduction of the diagonal VCCs mainly contributes to the suppression of the internal conversion in solid phase. It should be noted, however, that the rate constant depends on the broadening of the density of states $\sigma$ (Table S4) \cite{Zhang2019_264}. The broadening of the density of states arises from the interactions with the surrounding environment and from vibronic couplings. Since the evaluation of these effects is beyond the scope of the present study, $\sigma$ was treated as an external parameter. We previously adopted values of $\sigma$ ranging from 300 to 1200 cm$^{-1}$ to compute absorption spectra that reproduce the experimental results \cite{Uejima2014_80,Shigemitsu2012_9100}. We employed $\sigma=500$ cm$^{-1}$ in estimating the rate constants and obtained good agreement with the experimental values. Even when a different $\sigma$ is used, the ratio of the rate constants of Monomer Model to Dimer Model \textbf{1} does not drastically change (Table S4). \begin{figure*}[h] \centering \includegraphics[width=0.8\hsize]{./FIG7.eps} \caption{ (a) Electron density difference, $\Delta \rho_{nm} (\textbf{\textit{x}})$, (b) potential derivative, $v_{\alpha} (\textbf{\textit{x}})$, and (c) diagonal VCD, $\eta_{n, \alpha} (\textbf{\textit{x}})$, of Monomer Model in the FC S$_1$ state for the maximum coupling mode (mode 109). (d) $\Delta \rho_{nm} (\textbf{\textit{x}})$, (e) $v_{\alpha} (\textbf{\textit{x}})$, and (f) $\eta_{n, \alpha} (\textbf{\textit{x}})$ of Dimer Model \textbf{1} in the FC S$_2$ state for the maximum coupling mode (mode 230). Isosurface values of $\Delta \rho_{nm}(\textbf{\textit{x}})$, $v_{\alpha} (\textbf{\textit{x}})$, and $\eta_{n,\alpha}(\textbf{\textit{x}})$ are $2.0\times10^{-3}$, $1.0\times10^{-2}$, and $1.0\times10^{-5}$ a.u., respectively. 
} \label{FIG7} \end{figure*} To determine the origin of the VCCs, VCD analyses were performed. Figure~\ref{FIG7} shows the results of the diagonal VCD analyses for the maximum coupling modes of Monomer Model and Dimer Model \textbf{1}. $\Delta \rho_{nm} (\textbf{\textit{x}})$ of Monomer Model strongly couples with $v_{\alpha}(\textbf{\textit{x}})$ of mode 109, leading to a large $\eta_{n,\alpha} (\textbf{\textit{x}})$ that is particularly localized on the C=C bond. Thus, the spatial integration of $\eta_{n,\alpha} (\textbf{\textit{x}})$ of mode 109 provides the largest VCC. $\Delta \rho_{nm} (\textbf{\textit{x}})$, $v_{\alpha} (\textbf{\textit{x}})$, and $\eta_{n,\alpha} (\textbf{\textit{x}})$ of Dimer Model \textbf{1} are delocalized over the molecules. In general, delocalized $\Delta \rho_{nm} (\textbf{\textit{x}})$ and $v_{\alpha} (\textbf{\textit{x}})$ yield smaller diagonal VCCs than localized ones \cite{Shizu2013_215}. $\Delta \rho_{nm} (\textbf{\textit{x}})$ on $X_1$ (or $X_2$) exhibits a distribution similar to that of Monomer Model. The value of $\Delta \rho_{nm} (\textbf{\textit{x}})$ on $X_1$ is one half of that of Monomer Model because the electron density difference, whose spatial integration is zero by definition, is shared equally between the two molecules. In addition, $v_{\alpha}(\textbf{\textit{x}})$ on $X_1$ is $1/\sqrt{2}$ times that of Monomer Model owing to the normalization condition of the vibrational modes. Since $\eta_{n,\alpha}(\textbf{\textit{x}})$ is expressed as the product of $\Delta \rho_{nm} (\textbf{\textit{x}})$ and $v_{\alpha}(\textbf{\textit{x}})$, the diagonal VCD on $X_1$ is $1/(2\sqrt{2})$ times that of Monomer Model. Therefore, summing the contributions of the two molecules, the diagonal VCCs of Dimer Model \textbf{1}, obtained by the spatial integration of the diagonal VCD, are $2 \times 1/(2\sqrt{2}) = 1/\sqrt{2}$ times those of Monomer Model. 
The ratio of the largest diagonal VCC of Dimer Model \textbf{1} to that of Monomer Model is $0.626$, which is approximately equal to $1/\sqrt{2} \approx 0.707$; the deviation occurs because the structures of $X_1$ and the monomer are not exactly the same. In Dimer Model \textbf{1}, the reduction of the diagonal VCCs results from the delocalized electronic states. In contrast, in Dimer Model \textbf{2} the electronic states are localized on a single molecule in the adiabatic excited states (Fig. S3), which causes the properties of the excited states to be similar to those of the monomer. Therefore, the reduction of the diagonal VCCs is not expected in Dimer Model \textbf{2}. The electronic states are delocalized over the molecules in Dimer Model \textbf{3} (Fig. S4), suggesting small diagonal VCCs as in the case of Dimer Model \textbf{1}. \begin{figure}[h] \centering \includegraphics[width=0.4\hsize]{./FIG8.eps} \caption{ (a) Effective mode for the excimer formation of Dimer Model \textbf{1} in the FC S$_2$ state. This is an intramolecular vibration. (b) Vibrational mode 111 of Dimer Model \textbf{1} in the AD S$_2$ state; the off-diagonal VCC of this mode is decreased due to the packing effect. This is an intermolecular vibration. } \label{FIG8} \end{figure} The driving force of the excimer formation is provided by the diagonal VCCs. Figure~\ref{FIG8} (a) shows the effective mode for the excimer formation of Dimer Model \textbf{1} in the FC S$_2$ state. The effective mode is defined by the sum of normal modes weighted by the diagonal VCCs: \begin{equation} \xi = \sum_{\alpha} \frac{V_{n,\alpha}}{\sqrt{ \sum_{\alpha} |V_{n,\alpha}|^2}} Q_{\alpha}, \end{equation} which is the steepest-descent direction of the vibrational relaxation \cite{Sato2012_257}. The effective mode is an intramolecular vibration rather than an intermolecular one. This indicates that the excimer formation is induced by the intramolecular vibration. 
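The effective-mode construction above is a simple weighted sum; the sketch below (illustrative only, with a toy VCC vector of our choosing) normalizes an arbitrary set of diagonal VCCs into the weights multiplying each $Q_\alpha$.

```python
import math

def effective_mode_weights(diag_vccs):
    """Weights V_a / sqrt(sum_a V_a^2) of each normal coordinate Q_a in the
    effective mode xi defined in the text; the weight vector has unit norm."""
    norm = math.sqrt(sum(v * v for v in diag_vccs))
    return [v / norm for v in diag_vccs]

# Toy VCC vector (a.u.); the largest-coupling mode dominates the direction
# of the steepest descent, and the signs of the couplings are preserved.
weights = effective_mode_weights([8.0e-4, 2.0e-4, -1.0e-4])
```

Because the weight vector is unit-normalized, $\xi$ inherits the normalization of the normal coordinates, and the mode with the largest diagonal VCC dominates the relaxation direction.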
\begin{figure}[h] \centering \includegraphics[width=0.8\hsize]{./FIG9.eps} \caption{ (a) Potential derivative, $v_{\alpha} (\textbf{\textit{x}})$, and (b) off-diagonal VCD, $\eta_{mn, \alpha} (\textbf{\textit{x}})$, of S$_1$@S$_1$-S$_0$@S$_1$ of Monomer Model. (c) $v_{\alpha} (\textbf{\textit{x}})$ and (d) $\eta_{mn, \alpha} (\textbf{\textit{x}})$ of S$_2$@S$_2$-S$_0$@S$_2$ of Dimer Model \textbf{1}. Isosurface values of $v_{\alpha} (\textbf{\textit{x}})$ and $\eta_{mn,\alpha}(\textbf{\textit{x}})$ are $1.0\times10^{-2}$ and $1.0\times10^{-5}$ a.u., respectively. } \label{FIG9} \end{figure} Figure~\ref{FIG9} shows the results of the off-diagonal VCD analyses of mode $49$ for Monomer Model and mode $111$ for Dimer Model \textbf{1}. $v_{49} (\textbf{\textit{x}})$ of Monomer Model is distributed over the stilbene unit. In contrast, $v_{111} (\textbf{\textit{x}})$ of Dimer Model \textbf{1} is localized on one side of the stilbene unit. This is because vibrational mode $111$ is an intermolecular vibration (Fig.~\ref{FIG8} (b)), i.e., a torsional mode of the stilbene unit, which is restricted by the surrounding molecules in solid phase. In other words, the localization of $v_{\alpha} (\textbf{\textit{x}})$ arises from the packing effect. As a result, $\eta_{mn,\alpha} (\textbf{\textit{x}})$ of Dimer Model \textbf{1} is also localized on one side of the stilbene unit, giving small off-diagonal VCCs. The degree of localization of $v_{\alpha}(\textbf{\textit{x}})$ is expected to depend on the type of vibrational mode. The off-diagonal VCC of mode $51$ in Monomer Model is $4.98\times10^{-4}$ a.u. (Fig.~\ref{FIG6} (c)), which is comparable to that of mode $113$ in Dimer Model \textbf{1}, $5.34\times10^{-4}$ a.u. (Fig.~\ref{FIG3} (c)). Figure S5 shows $v_{51}(\textbf{\textit{x}})$ of Monomer Model and $v_{113} (\textbf{\textit{x}})$ of Dimer Model \textbf{1}. 
$v_{113}(\textbf{\textit{x}})$ of Dimer Model \textbf{1} is distributed over the stilbene unit rather than localized on one side of it, in a similar manner to $v_{51} (\textbf{\textit{x}})$ of Monomer Model. This is because vibrational mode 113 is an intramolecular vibration that is not easily affected by the surrounding molecules (Fig. S5). Therefore, the off-diagonal VCCs corresponding to these modes are not reduced by the packing effect. The packing effect in solid phase is thus visualized by the potential derivative. The small diagonal VCCs contribute to the suppression not only of the internal conversion but also of the vibrational relaxation \cite{Uejima2014_14244}. The reorganization energies due to vibrational relaxation were calculated to be $\Delta E$ = 0.582 eV for Monomer Model and $\Delta E$ = 0.275 eV for Dimer Model \textbf{1}. For Monomer Model, since the potential energy surfaces of the low-frequency torsional modes were not well approximated by harmonic potentials, the reorganization energies for these vibrational modes were evaluated from the potential energy surfaces \cite{Uejima2014_14244} instead of using Eq.~(\ref{Eq:DeltaE}) (Fig. S6). The reorganization energy of Dimer Model \textbf{1} is smaller than that of Monomer Model, and suppressed vibrational relaxation is expected in Dimer Model \textbf{1}. Furthermore, we calculated the electronic states of Decamer Model for the CNPPE solid (Fig. S7). Decamer Model has a large oscillator strength in the FC S$_8$ state (Table S7), and the S$_8$ exciton is expected to be generated by absorption. The electron density difference between the FC S$_8$ and S$_0$ states is delocalized as in the case of the combination of Dimer Models \textbf{1} and \textbf{2} (Fig. S8). The electronic states of Dimer Model \textbf{2} are localized on a single molecule after vibrational relaxation. Therefore, Decamer Model can be reduced to Dimer Model \textbf{1} in the adiabatic excited states. 
Thus, the modeling of the CNPPE solid using Dimer Model \textbf{1} is considered to be reasonable. We also performed calculations of Dimer Model \textbf{1} using the long-range corrected $\omega$B97X-D functional \cite{Chai2008_6615} in order to check the robustness of the results. S$_2$ ($A_u$), consisting of the HOMO$-$1 $\rightarrow$ LUMO and HOMO $\rightarrow$ LUMO+1 excited configurations, is symmetry-allowed according to the Laporte rule (Table S8). In addition, the frontier orbitals in the AD S$_2$ state are delocalized over the molecules (Fig. S9), and the excited states do not have a charge transfer character. Therefore, the same conclusions as with the M06-2X functional can be deduced. \subsection{Hubbard model in a pseudo-degenerate electronic system\label{SEC4-2}} \begin{figure}[h] \centering \includegraphics[width=0.4\hsize]{./FIG10.eps} \caption{ (a) Hubbard model of a system consisting of $X_1$ and $X_2$. $\epsilon$ is the energy gap between the HOMO and LUMO. $t_1$, $t_2$, and $t_3$ are the hopping integrals for the HOMO-HOMO, LUMO-LUMO, and HOMO-LUMO, respectively, between $X_1$ and $X_2$. $U_1$ and $U_2$ are the Coulomb interactions for the HOMO-HOMO and HOMO-LUMO, respectively, within $X_1$ or $X_2$. (b) Electronic ground and excited configurations assuming a single excitation. } \label{FIG10} \end{figure} In this section, we discuss the mechanism by which the electron density difference and overlap density disappear, based on the Hubbard model. We consider an excited electronic structure of a system consisting of the identical molecules $X_1$ and $X_2$ (Fig.~\ref{FIG10} (a)). Each molecule is not necessarily symmetric, but the dimer is assumed to have a symmetry such as $C_i$, $C_2$, or $C_s$. For simplicity, only the HOMOs and LUMOs of $X_1$ and $X_2$ are considered. The energy gap between the HOMO and LUMO is expressed as $\epsilon$. The hopping integrals for the HOMO-HOMO, LUMO-LUMO, and HOMO-LUMO between $X_1$ and $X_2$ are denoted by $t_1$, $t_2$, and $t_3$, respectively. 
The Coulomb interactions for the HOMO-HOMO and HOMO-LUMO within $X_1$ or $X_2$ are denoted by $U_1$ and $U_2$, respectively. Considering only a single excitation, as in the TD-DFT calculations, there are 9 electronic configurations (Fig.~\ref{FIG10} (b)), where $\ket{\Phi_0}$ is the ground configuration, $\ket{\Phi_1}$, $\ket{\Phi_2}$, $\ket{\Phi_5}$, and $\ket{\Phi_6}$ are locally-excited configurations, and $\ket{\Phi_3}$, $\ket{\Phi_4}$, $\ket{\Phi_7}$, and $\ket{\Phi_8}$ are charge-transfer configurations. $t_1$, $t_2$, and $t_3$ are considered to be small compared with $U_1$ and $U_2$, because the former are interactions between the spatially separated molecules $X_1$ and $X_2$, while the latter are interactions within $X_1$ or $X_2$. The model Hamiltonian for the basis defined in Fig.~\ref{FIG10} (b) is given by the symmetric matrix (only the upper triangle is shown)
\begin{equation}
\scalebox{0.55}{$
\begin{pmatrix}
2U_1 & 0 & 0 & t_3 & t_3 & 0 & 0 & t_3 & t_3 \\
& \epsilon+U_1+U_2 & 0 & t_2 & t_1 & 0 & 0 & 0 & 0 \\
& & \epsilon+U_1+U_2 & t_1 & t_2 & 0 & 0 & 0 & 0 \\
& & & \epsilon+U_1+2U_2 & 0 & 0 & 0 & 0 & 0 \\
& & & & \epsilon+U_1+2U_2 & 0 & 0 & 0 & 0 \\
& & & & & \epsilon+U_1+U_2 & 0 & t_2 & t_1 \\
& & & & & & \epsilon+U_1+U_2 & t_1 & t_2 \\
& & & & & & &\epsilon+U_1+2U_2 & 0 \\
& & & & & & & & \epsilon+U_1+2U_2 \\
\end{pmatrix}.
$}
\end{equation}
Employing the Rayleigh--Schr\"odinger perturbation theory with $t_1$, $t_2$, and $t_3$ as perturbations, the wavefunctions of the excited states are expressed as
\begin{eqnarray}
\ket{\Psi^s_1} & = & \frac{1}{2} ( \ket{\Phi_1} + \ket{\Phi_5} + \ket{\Phi_2} + \ket{\Phi_6} ) \notag \\ & - & \frac{t_1+t_2}{2 U_2} ( \ket{\Phi_3} + \ket{\Phi_7} + \ket{\Phi_4} + \ket{\Phi_8} ), \\
\ket{\Psi^s_2} & = & \frac{1}{2} (\ket{\Phi_3} + \ket{\Phi_7} + \ket{\Phi_4} + \ket{\Phi_8}) \notag \\ & + & \frac{t_1+t_2}{2 U_2} ( \ket{\Phi_1} + \ket{\Phi_5} + \ket{\Phi_2} + \ket{\Phi_6} ), \\
\ket{\Psi^a_1} & = & \frac{1}{2} ( \ket{\Phi_1} + \ket{\Phi_5} - \ket{\Phi_2} - \ket{\Phi_6} ) \notag \\ & - & \frac{t_2-t_1}{2 U_2} ( \ket{\Phi_3} + \ket{\Phi_7} - \ket{\Phi_4} - \ket{\Phi_8} ), \\
\ket{\Psi^a_2} & = & \frac{1}{2} (\ket{\Phi_3} + \ket{\Phi_7} - \ket{\Phi_4} - \ket{\Phi_8}) \notag \\ & + & \frac{t_2-t_1}{2 U_2} ( \ket{\Phi_1} + \ket{\Phi_5} - \ket{\Phi_2} - \ket{\Phi_6} ),
\end{eqnarray}
where $\ket{\Psi^s_1}$ and $\ket{\Psi^s_2}$ are symmetric and $\ket{\Psi^a_1}$ and $\ket{\Psi^a_2}$ are antisymmetric electronic states. $\ket{\Psi^s_1}$ and $\ket{\Psi^s_2}$ belong to different symmetries from $\ket{\Psi^a_1}$ and $\ket{\Psi^a_2}$. The ground state is expressed as
\begin{equation}
\ket{\Psi_0} = \ket{\Phi_0} - \frac{t_3}{\epsilon-U_1+2U_2} ( \ket{\Phi_3} + \ket{\Phi_7} + \ket{\Phi_4} + \ket{\Phi_8}),
\end{equation}
which, unlike the excited states, depends on $t_3$; the energy denominator $\epsilon-U_1+2U_2$ is the gap between the charge-transfer configurations and the ground configuration. The perturbative energies of the ground and excited states are given by
\begin{eqnarray}
E_0 & = & 2U_1-\frac{4 t_3^2}{\epsilon-U_1+2U_2}, \\
E_1^s & = & \epsilon+U_1+U_2-\frac{(t_1+t_2)^2}{U_2}, \\
E_2^s & = & \epsilon+U_1+2U_2+\frac{(t_1+t_2)^2}{U_2},\\
E_1^a & = & \epsilon+U_1+U_2-\frac{(t_2-t_1)^2}{U_2}, \\
E_2^a & = & \epsilon+U_1+2U_2+\frac{(t_2-t_1)^2}{U_2}.
\end{eqnarray}
The energy difference between $E_1^s$ and $E_1^a$, as well as between $E_2^s$ and $E_2^a$, is $|4 t_1 t_2/U_2|$. Since $4t_1 t_2/U_2$ is assumed to be small, $\ket{\Psi^s_1}$ and $\ket{\Psi^a_1}$, as well as $\ket{\Psi^s_2}$ and $\ket{\Psi^a_2}$, are pseudo-degenerate. The ordering of the excited states is $E_1^s \lesssim E_1^a < E_2^a \lesssim E_2^s$ for $t_1 t_2 > 0$ and $E_1^a \lesssim E_1^s < E_2^s \lesssim E_2^a$ for $t_1 t_2 < 0$; i.e., the symmetry of the excited states changes according to the sign of the product $t_1 t_2$, which is determined by the alignment of the molecules. The excited states become degenerate for $t_1 t_2 = 0$. When $t_1$, $t_2$, and $t_3$ are no longer small compared to $U_1$ and $U_2$, that is, when the perturbation theory is not valid, the degeneracy can occur even in the case of $t_1 \neq 0$ or $t_2 \neq 0$ (see Fig. S10). When the system has $C_i$ symmetry and $t_1 t_2 > 0$, S$_1$ is symmetry-forbidden ($A_g$) and S$_2$ is symmetry-allowed ($A_u$). This is the case for Dimer Model \textbf{1} of the CNPPE solid. The electron density differences between S$_1$ ($\ket{\Psi_1^s}$) and S$_0$ ($\ket{\Psi_0}$) and between S$_2$ ($\ket{\Psi_1^a}$) and S$_0$ are given by
\begin{eqnarray}
\Delta \rho_{20} & = & \Delta \rho_{10} \notag \\ & = & \left\{ ( |\phi_{{\rm LU}} (X_1)|^2 - |\phi_{{\rm HO}} (X_1)|^2) - ( |\phi_{{\rm LU}} (X_2)|^2 - |\phi_{{\rm HO}} (X_2)|^2) \right\}. \notag \\
\end{eqnarray}
Thus, both $\Delta \rho_{20}$ and $\Delta \rho_{10}$ are expressed as the difference between the LUMO and HOMO densities of $X_1$ and $X_2$. In contrast, the electron density difference between S$_2$ and S$_1$ is
\begin{equation}
\Delta \rho_{21} = \Delta \rho_{20} - \Delta \rho_{10} = 0.
\end{equation}
Therefore, $\Delta \rho_{21}$ vanishes because $\Delta \rho_{20}$ and $\Delta \rho_{10}$ are the same, owing to the pseudo-degeneracy of S$_1$ and S$_2$.
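The perturbative energies above can be cross-checked by diagonalizing the $9\times 9$ model Hamiltonian numerically. The NumPy sketch below uses illustrative parameter values (not fitted to CNPPE) with $t_i \ll U_i$. Note that in this 9-configuration basis each excited level appears twice, once for each of the two equivalent excitation branches $\Phi_1$--$\Phi_4$ and $\Phi_5$--$\Phi_8$, so the two lowest distinct excited levels are $E[1] \approx E_1^s$ and $E[3] \approx E_1^a$:

```python
import numpy as np

# Illustrative parameters (hypothetical values, chosen so that t_i << U_i
# as the perturbative treatment assumes).
eps, U1, U2 = 3.0, 1.0, 1.0
t1, t2, t3 = 0.05, 0.04, 0.03

# Diagonal energies of |Phi_0> ... |Phi_8>
diag = [2*U1,
        eps+U1+U2, eps+U1+U2, eps+U1+2*U2, eps+U1+2*U2,
        eps+U1+U2, eps+U1+U2, eps+U1+2*U2, eps+U1+2*U2]
H = np.diag(np.array(diag, dtype=float))

# Hopping terms (upper triangle of the model Hamiltonian in the text)
for i, j, t in [(0, 3, t3), (0, 4, t3), (0, 7, t3), (0, 8, t3),
                (1, 3, t2), (1, 4, t1), (2, 3, t1), (2, 4, t2),
                (5, 7, t2), (5, 8, t1), (6, 7, t1), (6, 8, t2)]:
    H[i, j] = H[j, i] = t

E = np.linalg.eigvalsh(H)  # exact eigenvalues, ascending

# Perturbative predictions from the text
E0_pt  = 2*U1 - 4*t3**2 / (eps - U1 + 2*U2)
E1s_pt = eps + U1 + U2 - (t1 + t2)**2 / U2
E1a_pt = eps + U1 + U2 - (t2 - t1)**2 / U2

print(E[0] - E0_pt)              # ~0: ground-state energy agrees
print(E[3] - E[1] - 4*t1*t2/U2)  # ~0: pseudo-degenerate splitting ~ 4 t1 t2 / U2
```

With these values the exact splitting agrees with $4t_1t_2/U_2$ up to the neglected higher-order terms, and for $t_1 t_2 > 0$ the symmetric level indeed lies below the antisymmetric one.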
The overlap densities between S$_2$ and S$_0$ and between S$_2$ and S$_1$ are given by
\begin{eqnarray}
\rho_{20} & = & \frac{1}{\sqrt{2}} \left( \phi_{{\rm HO}} (X_1) \phi_{{\rm LU}}(X_1) - \phi_{{\rm HO}} (X_2) \phi_{{\rm LU}}(X_2) \right), \\
\rho_{21} & = & \frac{1}{2} \left\{ ( |\phi_{{\rm LU}} (X_1)|^2 - |\phi_{{\rm HO}} (X_1)|^2) - ( |\phi_{{\rm LU}} (X_2)|^2 - |\phi_{{\rm HO}} (X_2)|^2) \right\} \notag \\ & \approx & 0. \label{Eq:RHOS2-S1}
\end{eqnarray}
$\rho_{20}$ is expressed as a non-vanishing product of the HOMOs and LUMOs. In contrast, $\rho_{21}$ is cancelled because it is expressed as the difference between the squares of the HOMOs and LUMOs. The other pairs of electron density differences and overlap densities in the pseudo-degenerate electronic system are summarized in Table S9 and Table S10, respectively. Kasha originally discussed the spectroscopic properties of aggregates (H- and J-aggregation) using the long-range Coulomb couplings alone \cite{Kasha1963_55}. Spano \textit{et al.} extended Kasha's model by adding short-range, charge-transfer-mediated excitonic couplings \cite{Hestand2017_341,Hestand2018_7069}. This model explains that, when the short-range couplings, or intermolecular hopping integrals, are dominant, the excited-state properties are determined by the relative phases of the orbital overlap between adjacent molecules, and fluorescence can occur even in H-aggregation. In the present system, however, the intermolecular hopping integrals are not dominant enough for the short-range coupling model to apply.

\section{Concluding remarks}
The origin of AIEE in the CNPPE solid was investigated by the ONIOM method using TD-DFT calculations.
The pseudo-degeneracy arising from the excimer formation in the solid phase gives rise to vanishing electron density difference and overlap density between S$_1$ and S$_2$, which suggests that fluorescence from the second excited state is possible, against Kasha's rule, because the transitions from the second to the first excited state are suppressed. The electronic states delocalized over the molecules reduce the diagonal VCCs in the solid phase to approximately $1/\sqrt{2}$ times those in the solution phase. In addition, the packing effect in the solid phase reduces the off-diagonal VCCs of the intermolecular vibrations. These results indicate that the internal conversions from the excited to the ground states are more suppressed in the solid phase than in the solution phase. The molecular orientation, which determines the relative signs of the intermolecular hopping integrals, plays an important role in determining the pseudo-degenerate excited-state properties. When the product of the hopping integrals for the HOMOs and LUMOs is negative, the first excited state is fluorescent (J-type aggregation). In contrast, when the product of the hopping integrals is positive, the first excited state is dark and the second one is fluorescent (H-type aggregation). Whether the first or the second excited state is fluorescent, AIEE can occur as long as the excimer forms in the aggregation phases, because the excimer formation gives rise to a decrease of the diagonal VCCs. In this study, we discussed a dimer with $C_i$ site symmetry. Other cyano-substituted compounds exhibiting AIEE, such as cyano-substituted bis(4-bromophenyl)-fumaronitrile, bis(3-trifluoromethylphenyl)fumaronitrile, bis(4-methoxyphenyl)-fumaronitrile \cite{Yeh2004_6455}, and cyano-substituted oligo(\textit{para}-phenylene vinylene) (CN-DPDSB) \cite{Li2007_231}, also have $C_i$ site symmetry in their crystal structures, which suggests that AIEE may occur due to the pseudo-degeneracy in these compounds.
It should be noted, however, that other symmetries, including $C_1$, could also allow the appearance of AIEE if pseudo-degenerate electronic states are generated in the aggregation phases. A molecule which forms excimers with delocalized excited electronic states can be fluorescent in the aggregation phases, even if the molecule is not fluorescent in an isolated state, such as in solution or in vacuum. Accordingly, we can obtain the following design principle for AIEE: \textit{ a candidate molecule for AIEE should have pseudo-degenerate adiabatic electronic states in the aggregation phases originating from the excimer formation. }

\begin{acknowledgement}
This study was supported by JSPS KAKENHI Grant Number JP17H05259 in Scientific Research on Innovative Areas ``Photosynergetics'' and by the Element Strategy Initiative of MEXT, Grant Number JPMXP0112101003. Numerical calculations were partly performed at the Supercomputer System, Institute for Chemical Research, Kyoto University, the Academic Center for Computing and Media Studies (ACCMS), Kyoto University, and the Research Center for Computational Science, Okazaki.
\end{acknowledgement}

\begin{suppinfo}
\end{suppinfo}

\def\subsubsection{\@startsection{subsubsection}{3}{10pt}{-1.25ex plus -1ex minus -.1ex}{0ex plus 0ex}{\normalsize\bf}}
\def\paragraph{\@startsection{paragraph}{4}{10pt}{-1.25ex plus -1ex minus -.1ex}{0ex plus 0ex}{\normalsize\textit}}
\renewcommand\@biblabel[1]{#1}
\renewcommand\@makefntext[1]{\noindent\makebox[0pt][r]{\@thefnmark\,}#1}
\makeatother
\renewcommand{\figurename}{\small{Fig.}~}
\sectionfont{\large}
\subsectionfont{\normalsize}
\fancyfoot{}
\fancyfoot[RO]{\footnotesize{\sffamily{1--\pageref{LastPage} ~\textbar \hspace{2pt}\thepage}}}
\fancyhead{}
\renewcommand{\headrulewidth}{1pt}
\renewcommand{\footrulewidth}{1pt}
\setlength{\arrayrulewidth}{1pt}
\setlength{\columnsep}{6.5mm}
\setlength\bibsep{1pt}
\renewcommand{\baselinestretch}{1.67}
\renewcommand{\thefigure}{S\arabic{figure}}
\renewcommand{\thetable}{S\arabic{table}}
\renewcommand{\thesection}{S\arabic{section}}

\author{
\bf Wataru Ota,\textit{$^{a,b}$} Ken Takahashi,\textit{$^{c}$} Kenji Higashiguchi,\textit{$^{d}$} \\
\bf Kenji Matsuda,\textit{$^{d}$} and Tohru Sato,$^{\ast}$\textit{$^{a,b,e}$}
}
\title{
\Large \bf
\flushleft{ Supplementary Information\newline }
\begin{center}
Origin of Aggregation-Induced Enhanced Emission:\\
Role of Pseudo-Degenerate Electronic States \\
of Excimers Formed in Aggregation Phases
\end{center}
}
\begin{document}
\maketitle
\footnotetext{ \it $^{a}$~ Fukui Institute for Fundamental Chemistry, Kyoto University, Sakyo-ku, Kyoto 606-8103, Japan }
\footnotetext{ \it $^{b}$~ Department of Molecular Engineering, Graduate School of Engineering, Kyoto University, Nishikyo-ku, Kyoto 615-8510, Japan }
\footnotetext{ \it $^{c}$~ Undergraduate School of Industrial Chemistry, Faculty of Engineering, Kyoto University, Nishikyo-ku, Kyoto 615-8510, Japan }
\footnotetext{ \it $^{d}$~ Department of Synthetic Chemistry and Biological Chemistry, Kyoto University, Nishikyo-ku, Kyoto 615-8510, Japan }
\footnotetext{ \it $^{e}$~ Unit of Elements Strategy Initiative for Catalysts \& Batteries, Kyoto University, Nishikyo-ku, Kyoto 615-8510, Japan }
\footnotetext{ \it E-mail: [email protected] }
\clearpage
\tableofcontents
\clearpage

\section{Dimer Models in Solid Phase}
\vfill
\begin{figure}[!h]
\centering
\begin{tabular}{c}
\includegraphics[scale=0.20]{./FIGS1.eps}
\end{tabular}
\caption{ Dimer Model (a) \textbf{1}, (b) \textbf{2}, and (c) \textbf{3} for the CNPPE solid. In Dimer Model \textbf{2}, a dimer aligned in the $ca$-plane was selected as the QM region and the surrounding 38 molecules were selected as the MM region. In Dimer Model \textbf{3}, a dimer aligned in the $b$-direction was selected as the QM region and the surrounding 29 molecules were selected as the MM region. The QM region was computed at the M06-2X/6-31G(d,p) level of theory and the MM region was computed using the UFF. }
\label{FIGS1}
\end{figure}
\vfill
\begin{figure}[!h]
\centering
\begin{tabular}{c}
\includegraphics[scale=0.60]{./FIGS2.eps}
\end{tabular}
\caption{ Total energy of the excited states of Dimer Model \textbf{1}, \textbf{2}, and \textbf{3} in the QM region. The energy reference is the total energy of S$_0$@S$_0$ of Dimer Model \textbf{1}, -2253.41783105 a.u. }
\label{FIGS2}
\end{figure}
\vfill
\clearpage

\section{Irreducible Representations of Excimers}
\subsection{$C_i$ symmetry}
Table~\ref{TABLES1} presents the character table of the $C_i$ point group \cite{Atkins2010}. In the $C_i$ point group, there are two cases: an atom does not exist at the inversion center, or an atom exists at the inversion center.
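Both cases follow from the standard reduction formula applied to the characters in Table~S1 ($\chi(E) = 3N$ and $\chi(i) = -3N_i$, with $N_i$ the number of atoms fixed at the inversion center), after removing rotations ($A_g$) and translations ($A_u$). A minimal Python sketch with illustrative atom counts:

```python
# Reduction of the vibrational representation in the C_i point group.
# chi(E) = 3N, chi(i) = -3*N_i, where N_i (0 or 1) counts atoms at the
# inversion center; rotations (A_g) and translations (A_u) are subtracted.
def ci_vib_modes(N, N_i):
    chi_E, chi_i = 3 * N, -3 * N_i
    n_Ag = (chi_E + chi_i) // 2 - 3   # remove R_x, R_y, R_z
    n_Au = (chi_E - chi_i) // 2 - 3   # remove x, y, z
    return n_Ag, n_Au

print(ci_vib_modes(10, 0))  # no atom at center: (12, 12) = ((3N-6)/2, (3N-6)/2)
print(ci_vib_modes(11, 1))  # atom at center:    (12, 15) = ((3N-9)/2, (3N-3)/2)
```

In both cases the totals sum to $3N-6$, as they must for a nonlinear molecule (note that a $C_i$ molecule pairs its off-center atoms, so $N$ is even when $N_i = 0$ and odd when $N_i = 1$).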
When an atom does not exist at the inversion center, the reducible representation of the vibrational modes is decomposed as
\begin{equation}
\Gamma_{{\rm vib}} = \frac{3N-6}{2} A_g + \frac{3N-6}{2} A_u,
\end{equation}
where $N$ denotes the number of atoms. Half of the vibrational modes belong to $A_g$ and the other half to $A_u$. On the other hand, when an atom exists at the inversion center, the decomposition becomes
\begin{equation}
\Gamma_{{\rm vib}} = \frac{3N-9}{2} A_g + \frac{3N-3}{2} A_u.
\end{equation}
The numbers of vibrational modes belonging to $A_g$ and $A_u$ are different.
\begin{table}[!h]
\centering
\caption{ \label{TABLES1} Character table of the $C_i$ point group. }
\vspace*{-\intextsep}
\begin{tabular}{ccccc}
\hline\hline
$\bm{C_i}$ & $E$ & $i$ & & $h$=2 \\ \hline
$A_g$ & 1 & 1 & $R_x, R_y, R_z$ & $x^2, y^2, z^2, xy, yz, zx$ \\
$A_u$ & 1 & -1 & $x, y, z$ & \\
\hline\hline
\end{tabular}
\end{table}
\clearpage
\subsection{$C_2$ symmetry}
Table~\ref{TABLES2} presents the character table of the $C_2$ point group \cite{Atkins2010}. In the $C_2$ point group, there are two cases: no atoms exist on the rotational axis, or some atoms exist on the rotational axis. When no atoms exist on the rotational axis, the reducible representation of the vibrational modes is decomposed as
\begin{equation}
\Gamma_{{\rm vib}} = \frac{3N-4}{2} A + \frac{3N-8}{2} B.
\end{equation}
The numbers of vibrational modes belonging to $A$ and $B$ are not the same. When $N_a$ atoms exist on the rotational axis, the decomposition becomes
\begin{equation}
\Gamma_{{\rm vib}} = \frac{3N-N_a-4}{2} A + \frac{3N+N_a-8}{2} B.
\end{equation}
The number of vibrational modes belonging to $B$ increases with $N_a$.
\begin{table}[!h]
\centering
\caption{ \label{TABLES2} Character table of the $C_2$ point group.
}
\vspace*{-\intextsep}
\begin{tabular}{ccccc}
\hline\hline
$\bm{C_2}$ & $E$ & $C_2$ & & $h$=2 \\ \hline
$A$ & 1 & 1 & $z, R_z$ & $x^2, y^2, z^2, xy$ \\
$B$ & 1 & -1 & $x, y, R_x, R_y$ & $yz, zx$ \\
\hline\hline
\end{tabular}
\vspace*{-\intextsep}
\end{table}
\clearpage
\subsection{$C_s$ symmetry}
Table~\ref{TABLES3} presents the character table of the $C_s$ point group \cite{Atkins2010}. In the $C_s$ point group, there are two cases: no atoms exist on the mirror plane, or some atoms exist on the mirror plane. When no atoms exist on the mirror plane, the reducible representation of the vibrational modes is decomposed as
\begin{equation}
\Gamma_{{\rm vib}} = \frac{3N-6}{2} A' + \frac{3N-6}{2} A''.
\end{equation}
Half of the vibrational modes belong to $A'$ and the other half to $A''$. When $N_p$ atoms exist on the mirror plane, the decomposition becomes
\begin{equation}
\Gamma_{{\rm vib}} = \frac{3N+N_p-6}{2} A' + \frac{3N-N_p-6}{2} A''.
\end{equation}
The number of vibrational modes belonging to $A'$ increases with $N_p$.
\begin{table}[!h]
\centering
\caption{ \label{TABLES3} Character table of the $C_s$ point group. }
\vspace*{-\intextsep}
\begin{tabular}{ccccc}
\hline\hline
$\bm{C_s}$ & $E$ & $\sigma_h$ & & $h$=2 \\ \hline
$A'$ & 1 & 1 & $x, y, R_z$ & $x^2, y^2, z^2, xy$ \\
$A''$ & 1 & -1 & $z, R_x, R_y$ & $yz, zx$ \\
\hline\hline
\end{tabular}
\vspace*{-\intextsep}
\end{table}
\clearpage

\section{ Rate Constant of Internal Conversion for Monomer Model and Dimer Model \textbf{1} }
~\\
\begin{table}[!h]
\centering
\vspace*{-\intextsep}
\caption{\label{TABLES4} Dependence of $k_{n \leftarrow m}^{{\rm IC}} (T)$ within a single-mode approximation on the linewidth of the Gaussian function $\sigma$. $V_{mn,\alpha}=5\times10^{-4}$ a.u., $\omega_{\alpha}=1700$ cm$^{-1}$, $E_m - E_n=3.0$ eV, $E_{{\rm min}}=-11500$ cm$^{-1}$, and $E_{{\rm max}}=11500$ cm$^{-1}$. $V_{n,\alpha}$ for the maximum-coupling mode of Monomer Model is $8.42\times10^{-4}$ a.u.
and that of Dimer Model \textbf{1} is $5.27\times10^{-4}$ a.u. The ratios of $k_{n \leftarrow m}^{{\rm IC}} (T)$ of Monomer Model to Dimer Model \textbf{1} are also given. }
\scalebox{0.8}{
\begin{tabular}{cccc}
\hline\hline
& \multicolumn{2}{c}{$k_{n\leftarrow m}^{{\rm IC}}$ (s$^{-1}$)} \\ \cline{2-3}
$\sigma$ (cm$^{-1}$) & Monomer Model & Dimer Model \textbf{1} & Ratio \\ \hline
300 & $1.195\times 10^{10}$ & $2.998\times 10^7$ & 388\\
400 & $1.335\times 10^{10}$ & $3.985\times 10^7$ & 335\\
500 & $1.569\times 10^{10}$ & $5.651\times 10^7$ & 277\\
600 & $1.832\times 10^{10}$ & $7.552\times 10^7$ & 242\\
700 & $2.092\times 10^{10}$ & $9.525\times 10^7$ & 219\\
\hline\hline
\end{tabular}
}
\end{table}
\clearpage

\section{Electron Density Differences of Dimer Model \textbf{2} and \textbf{3}}
\begin{table}[!h]
\centering
\vspace*{-\intextsep}
\caption{ Excited states of Dimer Model \textbf{2} at the S$_0$ and S$_1$ optimized structures. \label{TABLES5} }
\scalebox{.7}{
\begin{tabular}{cccccc}
\hline\hline
& State & \multicolumn{2}{c}{Excitation Energy} & $f$ & Major Configuration \\ \cline{3-4}
& & eV & nm & & (CI Coefficient)\\ \hline
@S$_0$ & S$_1$ ($A$) & 3.8692 & 320.44 & 3.2829 & HO-1 $\rightarrow$ LU (0.398), HO $\rightarrow$ LU+1 (0.565) \\
& S$_2$ ($A$) & 3.9294 & 315.53 & 0.0977 & HO-1 $\rightarrow$ LU (0.565), HO $\rightarrow$ LU+1 (-0.399) \\
@S$_1$ & S$_1$ ($A$) & 3.1122 & 398.39 & 1.8360 & HO $\rightarrow$ LU (0.700) \\
& S$_2$ ($A$) & 3.9107 & 317.04 & 1.5560 & HO-1 $\rightarrow$ LU+1 (0.791)\\
\hline\hline
\end{tabular}
}
\end{table}
\vfill
\begin{figure}[!h]
\centering
\begin{tabular}{c}
\includegraphics[scale=0.30]{./FIGS3.eps}
\end{tabular}
\caption{ Electron density differences of (a) S$_1$@S$_1$-S$_0$@S$_1$ and (b) S$_2$@S$_1$-S$_0$@S$_1$ for Dimer Model \textbf{2}. Isosurface values are $1.0\times10^{-3}$ a.u.
}
\label{FIGS3}
\end{figure}
\vfill
\begin{table}[!h]
\centering
\vspace*{-\intextsep}
\caption{ Excited states of Dimer Model \textbf{3} at the S$_0$ and S$_2$ optimized structures. \label{TABLES6} }
\scalebox{.7}{
\begin{tabular}{cccccc}
\hline\hline
& State & \multicolumn{2}{c}{Excitation Energy} & $f$ & Major Configuration \\ \cline{3-4}
& & eV & nm & & (CI Coefficient)\\ \hline
@S$_0$ & S$_{1}$ ($A$) & 3.7862 & 327.46 & 0.1146 & HO-1 $\rightarrow$ LU ( 0.57769), HO $\rightarrow$ LU+1 (-0.37639)\\
& S$_{2}$ ($A$) & 3.9346 & 315.12 & 2.6588 & HO-1 $\rightarrow$ LU ( 0.38099), HO $\rightarrow$ LU+1 ( 0.57373)\\
@S$_2$ & S$_{1}$ ($A$) & 3.3696 & 367.95 & 0.0005 & HO-1 $\rightarrow$ LU ( 0.49586), HO $\rightarrow$ LU+1 (-0.48388)\\
& S$_{2}$ ($A$) & 3.5310 & 351.13 & 2.8119 & HO-1 $\rightarrow$ LU ( 0.48848), HO $\rightarrow$ LU+1 ( 0.49548)\\
\hline\hline
\end{tabular}
}
\end{table}
\vfill
\begin{figure}[!h]
\centering
\begin{tabular}{c}
\includegraphics[scale=0.35]{./FIGS4.eps}
\end{tabular}
\caption{ Electron density differences of (a) S$_1$@S$_2$-S$_0$@S$_2$ and (b) S$_2$@S$_2$-S$_0$@S$_2$ for Dimer Model \textbf{3}. Isosurface values are $1.0\times10^{-3}$ a.u. }
\label{FIGS4}
\end{figure}
\vfill
\clearpage

\section{Potential Derivatives of Monomer Model and Dimer Model \textbf{1}}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.20]{./FIGS5.eps}
\caption{ Potential derivatives of (a) mode 49 for Monomer Model and (b) mode 111 for Dimer Model \textbf{1}, whose off-diagonal VCCs are reduced by the packing effect. Potential derivatives of (c) mode 51 for Monomer Model and (d) mode 113 for Dimer Model \textbf{1}, whose off-diagonal VCCs are not reduced by the packing effect. The isosurface value for Monomer Model is $1\times10^{-2}$ a.u. and that for Dimer Model \textbf{1} is $1/\sqrt{2}\times10^{-2}$ a.u.
}
\label{FIGS5}
\end{figure}
\clearpage

\section{Potential Energy Surface of Monomer Model}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.5]{./FIGS6.eps}
\caption{ Potential energy surfaces of S$_0$ and S$_1$ of Monomer Model along (a) mode 1 and (b) mode 3. $q_{\alpha}$ denotes the Cartesian displacement from the S$_0$ optimized structure. The potential energy surfaces along these modes are not well approximated by harmonic potentials. The reorganization energies of modes 1 and 3 including the anharmonicity effect were calculated to be 0.028 and 0.067 eV, respectively. }
\label{FIGS6}
\end{figure}
\clearpage

\section{ Decamer Model in Solid Phase }
~\\
\vfill
\begin{figure}[!h]
\centering
\includegraphics[scale=0.4]{./FIGS7.eps}
\caption{ Decamer Model for the CNPPE solid. The central decamer is selected as the QM region (M06-2X/3-21G) and the surrounding 60 molecules are selected as the MM region (UFF). }
\label{FIGS7}
\end{figure}
\vfill
\begin{figure}[!h]
\centering
\includegraphics[scale=0.4]{./FIGS8.eps}
\caption{ Electron density difference of S$_8$@S$_0$-S$_0$@S$_0$ for Decamer Model. Isosurface value is $3.0\times10^{-4}$ a.u. }
\label{FIGS8}
\end{figure}
\vfill
\clearpage
\begin{table}[!h]
\centering
\caption{ \label{TABLES7} Excited states of Decamer Model at the S$_0$ optimized structure.
}
\scalebox{.8}{
\begin{tabular}{ccccc}
\hline\hline
State & \multicolumn{2}{c}{Excitation Energy} & $f$ & Major Configuration \\ \cline{2-3}
& eV & nm & & (CI Coefficient)\\ \hline
S$_1$ ($A_u$) & 3.7447 & 331.09 & 0.0052 & HO-4 $\rightarrow$ LU (0.2715), HO-4 $\rightarrow$ LU+5 (-0.2389) \\
& & & & HO-3 $\rightarrow$ LU+1 (-0.2715), HO-3 $\rightarrow$ LU+4 (-0.2389) \\
S$_2$ ($A_g$) & 3.7447 & 331.09 & 0.0000 & HO-4 $\rightarrow$ LU+1 (0.2717), HO-4 $\rightarrow$ LU+4 (0.2392) \\
& & & & HO-3 $\rightarrow$ LU (-0.2717), HO-3 $\rightarrow$ LU+5 (0.2392) \\
S$_3$ ($A_g$) & 3.7775 & 328.21 & 0.0000 & HO-4 $\rightarrow$ LU+1 (0.2197), HO-3 $\rightarrow$ LU (-0.2199) \\
& & & & HO-2 $\rightarrow$ LU+3 (0.2583), HO-1 $\rightarrow$ LU+2 (0.2583) \\
S$_4$ ($A_u$) & 3.7775 & 328.21 & 0.0618 & HO-4 $\rightarrow$ LU (-0.2204), HO-3 $\rightarrow$ LU+1 (0.2204) \\
& & & & HO-2 $\rightarrow$ LU+2 (0.2580), HO-1 $\rightarrow$ LU+3 (0.2580) \\
S$_5$ ($A_u$) & 3.8860 & 319.06 & 0.7377 & HO-4 $\rightarrow$ LU+2 (-0.2070), HO-4 $\rightarrow$ LU+5 (0.2771) \\
& & & & HO-3 $\rightarrow$ LU+3 (-0.2070), HO-3 $\rightarrow$ LU+4 (0.2771) \\
& & & & HO-2 $\rightarrow$ LU+6 (-0.2052), HO-1 $\rightarrow$ LU+7 (0.2052) \\
S$_6$ ($A_g$) & 3.8871 & 318.96 & 0.0000 & HO-4 $\rightarrow$ LU+3 (-0.2030), HO-4 $\rightarrow$ LU+4 (0.2690) \\
& & & & HO-3 $\rightarrow$ LU+2 (-0.2030), HO-3 $\rightarrow$ LU+5 (0.2690) \\
& & & & HO-2 $\rightarrow$ LU+7 (0.2208), HO-1 $\rightarrow$ LU+6 (-0.2208) \\
S$_7$ ($A_g$) & 3.9259 & 315.81 & 0.0000 & HO-5 $\rightarrow$ LU+9 (-0.2354), HO $\rightarrow$ LU+8 (0.6484) \\
S$_8$ ($A_u$) & 3.9942 & 310.41 & 9.0385 & HO-2 $\rightarrow$ LU+6 (-0.3474), HO-1 $\rightarrow$ LU+7 (0.3474) \\
& & & & HO $\rightarrow$ LU+9 (-0.2498) \\
S$_{9}$ ($A_g$) & 4.0290 & 307.73 & 0.0000 & HO-2 $\rightarrow$ LU+7 (-0.3649), HO-1 $\rightarrow$ LU+6 ( 0.3649)\\
S$_{10}$ ($A_u$) & 4.1224 & 300.76 & 1.1521 & HO $\rightarrow$ LU+9 (-0.5282)\\
\hline\hline
\end{tabular}
}
\end{table}
\clearpage

\section{ Dimer Model \textbf{1} Calculated by $\omega$B97X-D Functional }
~\\
\vfill
\begin{table}[!h]
\vspace*{-\intextsep}
\centering
\caption{ \label{TABLES8} Excited states of Dimer Model \textbf{1} at the S$_0$ and S$_2$ optimized structures calculated at the $\omega$B97X-D/6-31G(d,p) level of theory. $f$ denotes the oscillator strength. }
\scalebox{.8}{
\begin{tabular}{cccccc}
\hline\hline
& & \multicolumn{2}{c}{Excitation Energy} & & Major Configuration \\ \cline{3-4}
& State & eV & nm & $f$ & (CI coefficient)\\ \hline
@S$_0$ & S$_1$(A$_g$) & 3.8193 & 324.63 & 0.0000 & HO-1 $\rightarrow$ LU+1 (-0.379)\\
& & & & & HO $\rightarrow$ LU (0.564) \\
& S$_2$(A$_u$) & 3.9453 & 314.26 & 2.3698 & HO-1 $\rightarrow$ LU (-0.406)\\
& & & & & HO $\rightarrow$ LU+1 (0.542) \\
@S$_2$ & S$_1$(A$_g$) & 3.3319 & 372.12 & 0.0000 & HO-1 $\rightarrow$ LU+1 (0.347)\\
& & & & & HO $\rightarrow$ LU (0.592) \\
& S$_2$(A$_u$) & 3.4697 & 357.33 & 2.3091 & HO-1 $\rightarrow$ LU (0.379)\\
& & & & & HO $\rightarrow$ LU+1 (0.570)\\
\hline\hline
\end{tabular}
}
\vspace*{-\intextsep}
\end{table}
~\\
\vfill
\begin{figure}[!h]
\vspace*{-\intextsep}
\centering
\includegraphics[scale=0.6]{./FIGS9.eps}
\caption{ (a) Frontier orbitals and (b) orbital levels of Dimer Model \textbf{1} at the S$_2$ optimized structure calculated at the $\omega$B97X-D/6-31G(d,p) level of theory. $X_1$ and $X_2$ are the constituent molecules of Dimer Model \textbf{1}. Isosurface values of the frontier orbitals are $3.0\times 10^{-2}$ a.u. }
\label{FIGS9}
\vspace*{-\intextsep}
\end{figure}
\vfill
~\\
\clearpage

\section{Hubbard Model of a Pseudo-Degenerate Electronic System}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.45]{./FIGS10.eps}
\caption{ Energy difference between the first and second excited states, $E^s_1-E_1^a$, calculated by numerically diagonalizing the Hubbard Hamiltonian. $\epsilon$, $U_1$, and $U_2$ were set to $3.0$, $1.0$, and $1.0$, respectively.
}
\label{FIGS10}
\end{figure}
\clearpage
In this section, we describe the electron density differences and overlap densities in the pseudo-degenerate electronic system using the Hubbard model. The electron density in the ground state is denoted by $\rho_0$. The orbital overlap densities are defined by
\begin{gather}
p_1 = |\phi_{{\rm HO}} (X_1)|^2, \ \ \ p_2 = |\phi_{{\rm HO}} (X_2)|^2, \ \ \ q_1 = |\phi_{{\rm LU}} (X_1)|^2, \ \ \ q_2 = |\phi_{{\rm LU}} (X_2)|^2, \\
r_1 = \phi_{{\rm HO}} (X_1) \phi_{{\rm LU}} (X_1), \ \ \ r_2 = \phi_{{\rm HO}} (X_2) \phi_{{\rm LU}} (X_2), \\
s_1 = \phi_{{\rm HO}} (X_1) \phi_{{\rm HO}} (X_2), \ \ \ s_2 = \phi_{{\rm LU}} (X_1) \phi_{{\rm LU}} (X_2), \ \ \ s_3 = \phi_{{\rm HO}} (X_1) \phi_{{\rm LU}} (X_2) = \phi_{{\rm LU}} (X_1) \phi_{{\rm HO}} (X_2),
\end{gather}
where $\phi_{{\rm HO/LU}} (X_1/X_2)$ represent the HOMO/LUMO of $X_1$/$X_2$. The overlaps between the spatially separated molecules $X_1$ and $X_2$, namely $s_1$, $s_2$, and $s_3$, are assumed to be 0. Tables~\ref{TABLES9} and \ref{TABLES10} present the electron density differences and overlap densities, respectively, in the pseudo-degenerate electronic system.
\vfill
\begin{table}[h!]
\vspace*{-\intextsep}
\centering
\caption{ \label{TABLES9} Electron density differences in the pseudo-degenerate system. $ \alpha = \frac{1}{2} ( (\rho_0-p_1+q_1)+(\rho_0-p_2+q_2)). $ For Dimer Model \textbf{1} of the CNPPE solid, S$_1$ corresponds to $\ket{\Psi_1^s}$, and S$_2$ to $\ket{\Psi_1^a}$. Therefore, the electron density difference between S$_1$ and S$_2$ is 0.
}
\begin{tabular}{cccccc}
\hline\hline
& $\Psi_0$ & $\Psi_1^s$ & $\Psi_2^s$ & $\Psi_1^a$ & $\Psi_2^a$ \\ \hline
$\Psi_0$ & $0$ & $\rho_0-\alpha$ & $\rho_0-\alpha$ & $\rho_0-\alpha$ & $\rho_0-\alpha$ \\
$\Psi_1^s$ & & $0$ & $0$ & $0$ & $0$ \\
$\Psi_2^s$ & & & $0$ & $0$ & $0$ \\
$\Psi_1^a$ & & & & $0$ & $0$ \\
$\Psi_2^a$ & & & & & $0$ \\
\hline\hline
\end{tabular}
\vspace*{-\intextsep}
\end{table}
\vfill
\begin{table}[!h]
\vspace*{-\intextsep}
\centering
\caption{ \label{TABLES10} Overlap densities in the pseudo-degenerate system. $ \alpha = \frac{1}{2} ( (\rho_0-p_1+q_1)+(\rho_0-p_2+q_2)),\ \beta_1 = \frac{1}{2} (q_1-p_1-q_2+p_2) \approx 0,\ \beta_2 = \frac{1}{2} (q_2-p_1-q_1+p_2),\ \gamma_1 = \frac{1}{\sqrt{2}} (r_1+r_2),\ \gamma_2 = \frac{1}{\sqrt{2}} (r_1-r_2). $ For Dimer Model \textbf{1} of the CNPPE solid, S$_1$ corresponds to $\ket{\Psi_1^s}$, and S$_2$ to $\ket{\Psi_1^a}$. Therefore, the overlap density between S$_1$ and S$_2$ is given by $\beta_1 \approx 0$. }
\begin{tabular}{cccccc}
\hline\hline
& $\Psi_0$ & $\Psi_1^s$ & $\Psi_2^s$ & $\Psi_1^a$ & $\Psi_2^a$ \\ \hline
$\Psi_0$ & $\rho_0$ & $\gamma_1$ & $\frac{t_1+t_2}{U_2}\gamma_1-\frac{t_3}{\epsilon-U_1+2U_2}\alpha$ & $\gamma_2$ & $\frac{t_2-t_1}{U_2}\gamma_2-\frac{t_3}{\epsilon-U_1+2U_2}\beta_2$\\
$\Psi_1^s$ & & $\alpha$ & $0$ & $\beta_1$($\approx$ 0) & $\frac{t_2-t_1}{U_2}\beta_1-\frac{t_1+t_2}{U_2}\beta_2$ \\
$\Psi_2^s$ & & & $\alpha$ & $\frac{t_1+t_2}{U_2}\beta_1-\frac{t_2-t_1}{U_2}\beta_2$ & $\beta_2$ \\
$\Psi_1^a$ & & & & $\alpha$ & $0$ \\
$\Psi_2^a$ & & & & & $\alpha$ \\
\hline\hline
\end{tabular}
\vspace*{-\intextsep}
\end{table}
\vfill
\clearpage
\section{Introduction} Multimodal data modeling, which combines information from different sources, is increasingly attracting attention in computer vision~\cite{barnard2003matching,blei2003modeling,socher2010connecting,jia2011learning,putthividhy2010topic, guillaumin2010multimodal,rasiwasia2010new}. One of the leading approaches is based on topic modeling, the most popular model being latent Dirichlet allocation or LDA~\cite{blei2003latent}. LDA is a generative model for documents that originates from the natural language processing community, but has had great success in computer vision~\cite{blei2003latent, wang2009simultaneous}. LDA models a document as a multinomial distribution over topics, where a topic is itself a multinomial distribution over words. While the distribution over topics is specific to each document, the topic-dependent distributions over words are shared across all documents. Topic models can thus extract a meaningful, semantic representation from a document by inferring its latent distribution over topics from the words it contains. In the context of computer vision, LDA can be used by first extracting so-called ``visual words'' from images, converting the images into visual-word documents, and training an LDA topic model on the bags of visual words. \begin{figure}[t] \begin{center} \includegraphics[width=0.65\linewidth]{supdocnade_framed.pdf} \end{center} \caption{Illustration of a single hidden layer SupDocNADE model for multimodal image data. Visual words, annotation words and class label $y$ are modeled as $p({\bf v},y) = p(y|{\bf v}) \prod_i p(v_i|v_1,\dots,v_{i-1})$. All conditionals $p(y|{\bf v})$ and $ p(v_i|v_1,\dots,v_{i-1})$ are modeled using neural networks with shared weights. Each predictive word conditional $p(v_i|v_1,\dots,v_{i-1})$ (denoted ${\hat v}_i$ for brevity) follows a tree decomposition where each leaf is a possible word.
At test time, the annotation words are not used (indicated by a dotted box) to compute the image's topic feature representation.} \label{fig:supdocnade} \end{figure} To deal with multimodal data, some variants of LDA have been proposed recently~\cite{blei2003modeling, putthividhy2010topic, jia2011learning, wang2009simultaneous}. For instance, Correspondence LDA (Corr-LDA)~\cite{blei2003modeling} was proposed to discover the relationship between the image and annotation modalities, by assuming that each image topic must have a corresponding text topic. Multimodal LDA~\cite{putthividhy2010topic} generalizes Corr-LDA by learning a regression module relating the topics from the different modalities. The Multimodal Document Random Field Model (MDRF)~\cite{jia2011learning} was also proposed to deal with multimodal data; it learns cross-modality similarities from a document corpus containing multinomial data. Besides the annotation words, the class label modality can also be embedded into LDA, as in supervised LDA (sLDA)~\cite{blei2007supervised, wang2009simultaneous}. By jointly modeling the image visual words, annotation words and class labels, the discriminative power of the learned image representations can thus be improved. At the heart of most topic models is a generative story in which the image's latent representation is generated first and the visual words are subsequently produced from this representation. The appeal of this approach is that the task of extracting the representation from observations is easily framed as a probabilistic inference problem, for which many general purpose solutions exist. The disadvantage, however, is that as a model becomes more sophisticated, inference becomes less trivial and more computationally expensive. In LDA for instance, inference of the distribution over topics does not have a closed-form solution and must be approximated, either using variational approximate inference or MCMC sampling.
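The LDA generative story just described fits in a few lines of sampling code. The following is a hedged sketch with toy dimensions; all sizes and Dirichlet hyperparameters here are illustrative, not values from any of the cited works:

```python
import numpy as np

rng = np.random.default_rng(1)
K, V, doc_len = 3, 10, 8                      # topics, vocabulary, words (toy sizes)
phi = rng.dirichlet(np.full(V, 0.1), size=K)  # topic-word multinomials (shared)

theta = rng.dirichlet(np.full(K, 0.5))        # document-specific topic mixture
words = []
for _ in range(doc_len):
    z = rng.choice(K, p=theta)                # latent topic for this word
    words.append(int(rng.choice(V, p=phi[z])))  # (visual) word drawn from that topic
print(words)                                  # a bag of visual words
```

Inference then runs this story in reverse: given `words`, recover the posterior over `theta`, which is exactly the step that requires variational approximations or MCMC.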
Yet, the model is actually relatively simple, making certain simplifying independence assumptions such as the conditional independence of the visual words given the image's latent distribution over topics. Another approach to modeling the statistical structure of words is through the use of distributed representations modeled by artificial neurons. In the realm of document modeling, \citet{salakhutdinov2009replicated} proposed a so-called Replicated Softmax (RS) model for bags of words. The RS model was later used for multimodal data modeling~\cite{srivastava2012multimodal}, where pairs of images and text annotations were modeled jointly within a deep Boltzmann machine (DBM)~\cite{srivastava2013discriminative}. This deep learning approach to the generative modeling of multimodal data achieved state-of-the-art performance on the MIR Flickr data set~\cite{HuiskesM2008}. On the other hand, it also shares with LDA and its different extensions the reliance on a stochastic latent representation of the data, requiring variational approximations and MCMC sampling at training and test time. Another state-of-the-art neural-network-based approach to multimodal data modeling is the Multimodal Deep Recurrent Neural Network (MDRNN)~\cite{sohn2014improved}, which aims at predicting missing data modalities from the observed ones by minimizing the variation of information rather than maximizing likelihood. Recently, an alternative generative modeling approach for documents was proposed in \citet{larochelle2012neural}. In this work, a Document Neural Autoregressive Distribution Estimator (DocNADE) is proposed, which models directly the joint distribution of the words in a document by decomposing it as a product of conditional distributions (through the probability chain rule) and modeling each conditional using a neural network. Hence, DocNADE does not incorporate any latent random variables over which potentially expensive inference must be performed.
Instead, a document representation can be computed efficiently in a simple feed-forward fashion, using the value of the neural network's hidden layer. \citet{larochelle2012neural} also show that DocNADE is a better generative model of text documents than LDA and the RS model, and can extract a useful representation for text information retrieval. In this paper, we consider the application of DocNADE to multimodal data in computer vision. More specifically, we first propose a supervised variant of DocNADE (SupDocNADE), which can be used to model the joint distribution over an image's visual words, annotation words and class label. The model is illustrated in Figure~\ref{fig:supdocnade}. We investigate how to successfully incorporate spatial information about the visual words and highlight the importance of calibrating the generative and discriminative components of the training objective. Our results confirm that this approach can outperform other topic models, such as the supervised variant of LDA. Moreover, we propose a deep extension of SupDocNADE, which learns a deep and discriminative representation of pairs of images and annotation words. The deep version of SupDocNADE, illustrated in Figure~\ref{fig:supdeepdocnade}, outperforms its shallow counterpart and achieves state-of-the-art performance on the challenging MIR Flickr data set. \begin{figure}[t] \begin{center} \includegraphics[width=0.65\linewidth]{DeepDocNADE_diagram_framed.pdf} \end{center} \caption{Illustration of the deep extension of Supervised DocNADE (SupDeepDocNADE) model. During training, the input $\bf v$ (visual and annotation words) is first shuffled randomly based on an ordering $o$ and then randomly split into two parts, $\mathbf{v}_{o_{<d}}$ and $\mathbf{v}_{o_{\geq d}}$. Then we compute each of the conditionals in Equation~\ref{eqn:DocNADE_est} and use backpropagation to optimize the parameters of the model.
To deal with the imbalance between the visual and annotation words, the histograms of $\mathbf{v}_{o_{<d}}$ and $\mathbf{v}_{o_{\geq d}}$ are weighted by $\omega \left(\rho\right)$. At test time, all the words in $\bf v$ are fed to the model to compute a discriminative deep representation. Besides the visual and annotation words, global features $\bf f$ are also leveraged by the model. } \label{fig:supdeepdocnade} \end{figure} \section{Related Work} \label{relaged works} As previously mentioned, multimodal data is often modeled using extensions of the basic LDA topic model, such as Corr-LDA~\cite{blei2003modeling}, Multimodal LDA~\cite{putthividhy2010topic} and MDRF~\cite{jia2011learning}. In this paper, we focus on learning a joint representation from three different modalities: \textit{image visual words, annotations}, and \textit{class labels}. The class label describes the image globally with a single descriptive label (such as {\it coast}, {\it outdoor}, {\it inside city}, etc.), while the annotations focus on tagging the local content within the image. \citet{wang2009simultaneous} proposed a supervised LDA formulation to tackle this problem. \citet{wang2011max} opted instead for a maximum margin formulation of LDA (MMLDA). Our work also belongs to this line of research, extending topic models to a supervised variant: our first contribution in this paper is thus to extend a different topic model, DocNADE, to this context of multimodal data modeling. What distinguishes DocNADE from other topic models is its reliance on an autoregressive neural network architecture. Recently, deep neural networks have increasingly been used for the probabilistic modeling of images and text (see~\cite{bengio2012representation} for a review). The work of \citet{srivastava2012multimodal} on DBMs and \citet{sohn2014improved} on MDRNN are good recent examples.
\citet{ngiam2011multimodal} also proposed deep autoencoder networks for multimodal learning, though this approach was recently shown to be outperformed by DBMs~\cite{srivastava2013discriminative} and MDRNN~\cite{sohn2014improved}. Although DocNADE shows favorable performance over other topic models, the lack of an efficient deep formulation reduces its ability to model multimodal data, especially compared with the deep neural network based models~\cite{ngiam2011multimodal,srivastava2012multimodal,srivastava2013discriminative}. Thus, the second contribution of this paper is to propose an efficient deep version of DocNADE and its supervised variant. As we'll see, the deep version of our DocNADE model will outperform the DBM approach of \citet{srivastava2013discriminative}. \section{Document NADE} \label{sec: DocNADE intro} In this section, we describe the original DocNADE model. In \citet{larochelle2012neural}, DocNADE was used to model text documents, made of words belonging to some predefined vocabulary. To model image data, we assume that images have first been converted into a bag of visual words. A standard approach is to learn a vocabulary of visual words by performing $K$-means clustering on SIFT descriptors densely extracted from all training images. See Section~\ref{experiment:conf} for more details about this procedure. From that point on, any image can thus be represented as a bag of visual words ${\bf v}=[v_1,v_2,\ldots,v_{D_{\bf v}}]$, where each $v_i$ is the index of the closest $K$-means cluster to the $i^{\rm th}$ SIFT descriptor extracted from the image and $D_{\bf v}$ is the number of extracted descriptors for image ${\bf v}$.
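As an illustration, this descriptor-to-visual-word conversion can be sketched in a few lines of pure Python. This is a minimal sketch, not the paper's implementation: the function name \texttt{bag\_of\_visual\_words} is our own, and the cluster centers are assumed to come from a $K$-means run on the training descriptors.

```python
from collections import Counter

def bag_of_visual_words(descriptors, centroids):
    """Map each local descriptor to the index of its nearest centroid
    (its 'visual word') and return the word sequence and its counts.

    descriptors: list of feature vectors (lists of floats)
    centroids:   list of K cluster centers learned beforehand (e.g. by K-means)
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    words = [min(range(len(centroids)),
                 key=lambda k: sq_dist(d, centroids[k]))
             for d in descriptors]
    return words, Counter(words)
```

The returned word sequence plays the role of ${\bf v}=[v_1,\ldots,v_{D_{\bf v}}]$ above, with $D_{\bf v}$ equal to the number of descriptors.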
DocNADE models the joint probability of the visual words $p({\bf v})$ by rewriting it as \begin{equation} p\left({\bf v}\right)=\prod_{i=1}^{D_{\bf v}} p\left ( v_i| {\bf v}_{<i}\right ) \label{eqn:prob_chain_rule} \end{equation} and modeling instead each conditional $p( v_i| {\bf v}_{<i} )$, where $\mathbf{v}_{<i}$ is the subvector containing all $v_j$ such that $j<i$\footnote{ We use a random ordering of the visual words in Equation~\ref{eqn:prob_chain_rule} for each image, and we find it works well in practice. See the discussion in Section~\ref{sec:SupDocNADE} for more details. }. Notice that Equation~\ref{eqn:prob_chain_rule} is true for any distribution, by the probability chain rule. Hence, the main assumption made by DocNADE is in the form of the conditionals. Specifically, DocNADE assumes that each conditional can be modeled and learned by a feedforward neural network. One possibility would be to model $ p( v_i| \mathbf{v}_{<i}) $ with the following architecture: \begin{eqnarray} &&\hspace{-1.5cm}\mathbf{h}_i\left ( \mathbf{v}_{<i} \right ) = {\bf g}\left( \mathbf{c}+\sum_{k<i}\mathbf{W}_{:,v_k} \right ) \label{eqn:docnade_hidden}\\ &&\hspace{-1.5cm} p\left ( v_i=w|\mathbf{v}_{<i} \right ) = \frac{\exp\left ( b_w +\mathbf{V}_{w,:}\mathbf{h}_i\left ( \mathbf{v}_{<i} \right )\right )}{\sum_{w'}\exp\left ( b_{w'} +\mathbf{V}_{w',:}\mathbf{h}_i\left ( \mathbf{v}_{<i} \right )\right )}\label{eqn:docnade_softmax} \end{eqnarray} where ${\bf g}(\cdot)$ is an element-wise non-linear activation function, $\mathbf{W} \in \mathbb{R}^{H \times Q}$ and $\mathbf{V} \in \mathbb{R}^{Q \times H}$ are the connection parameter matrices, $\mathbf{c} \in \mathbb{R}^H$ and $\mathbf{b} \in \mathbb{R}^Q$ are bias parameter vectors, and $H,Q$ are the number of hidden units (topics) and the vocabulary size, respectively. Computing the distribution $p( v_i=w|\mathbf{v}_{<i} )$ of Equation~\ref{eqn:docnade_softmax} requires time linear in $Q$.
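To make this architecture concrete, here is a minimal pure-Python sketch of Equations~\ref{eqn:docnade_hidden} and \ref{eqn:docnade_softmax}. The function name and the data layout (each row of \texttt{W} holding the column of $\mathbf{W}$ for one word) are our own choices, and a ReLU stands in for ${\bf g}$:

```python
import math

def docnade_conditional(prefix, W, V, b, c):
    """Naive computation of p(v_i = . | v_<i):
    h = g(c + sum_{k<i} W[:, v_k]),  p = softmax(b + V h).
    W[w] holds the column of W for word w; g is a ReLU here.
    The softmax cost is linear in the vocabulary size Q = len(b)."""
    H = len(c)
    act = list(c)
    for v in prefix:                       # accumulate columns of previous words
        for j in range(H):
            act[j] += W[v][j]
    h = [max(0.0, a) for a in act]         # element-wise non-linearity g
    scores = [b[w] + sum(V[w][j] * h[j] for j in range(H))
              for w in range(len(b))]
    m = max(scores)                        # stabilized softmax
    exps = [math.exp(s - m) for s in scores]
    Z = sum(exps)
    return [e / Z for e in exps]
```

The linear-in-$Q$ cost is visible in the \texttt{scores} loop, which touches every vocabulary entry; the next paragraphs explain how the binary-tree decomposition avoids it.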
In practice, this is too expensive, since it must be computed for each of the $D_{\bf v}$ visual words $v_i$. To address this issue, \citet{larochelle2012neural} propose to use a balanced binary tree to decompose the computation of the conditionals and obtain a complexity logarithmic in $Q$. This is achieved by randomly assigning all visual words to a different leaf in a binary tree. Given this tree, the probability of a word is modeled as the probability of reaching its associated leaf from the root. \citet{larochelle2012neural} model each left/right transition probability in the binary tree using a set of binary logistic regressors taking the hidden layer $\mathbf{h}_{i}({\bf v}_{<i})$ as input. The probability of a given word can then be obtained by multiplying the probabilities of each left/right choice along the associated tree path. Specifically, let $\mathbf{l}\left(v_i\right) $ be the sequence of tree nodes on the path from the root to the leaf of $v_i$ and let $\pi \left(v_i\right)$ be the sequence of binary left/right choices at the internal nodes along that path. For example, $l\left(v_i\right)_1 $ will always be the root node of the binary tree, and $\pi \left(v_i\right)_1$ will be $0$ if the leaf of word $v_i$ is in the left subtree or $1$ otherwise. Let $\mathbf{V} \in \mathbb{R}^{T\times H} $ now be the matrix containing the logistic regression weights and $\mathbf{b} \in \mathbb{R}^T$ be a vector containing the biases, where $T$ is the number of inner nodes in the binary tree and $H$ is the number of hidden units.
The probability $ p( v_i=w|\mathbf{v}_{<i} )$ is now modeled as \begin{equation} p( v_i=w|\mathbf{v}_{<i}) = \prod_{k=1}^{|\pi\left(v_i\right)|} p(\pi\left(v_i\right)_k|\mathbf{v}_{<i})~, \label{eqn:docnade_tree} \end{equation} where \begin{equation} p(\pi\left(v_i\right)_k=1|\mathbf{v}_{<i})=\textup{ sigm}\left( b_{l\left(v_i\right)_k} +\mathbf{V}_{l\left(v_i\right)_k,:}\mathbf{h}_i\left ( \mathbf{v}_{<i} \right )\right) \label{eqn:docnade_tree_lr} \end{equation} are the internal node logistic regression outputs and $\textup{ sigm}(x) = 1/(1+\exp(-x))$ is the sigmoid function. By using a balanced tree, we are guaranteed that computing Equation~\ref{eqn:docnade_tree} involves only $O(\log_2 Q)$ logistic regression outputs. One could attempt to optimize the organization of the words within the tree, but a random assignment of the words to leaves works well in practice \cite{larochelle2012neural}. Thus, by combining Equations~\ref{eqn:docnade_hidden}, \ref{eqn:docnade_tree} and \ref{eqn:docnade_tree_lr}, we can compute the probability $p\left( {\bf v} \right)=\prod_{i=1}^{D_{\bf v}} p\left ( v_i|{\bf v}_{<i}\right ) $ for any document under DocNADE. To train the parameters $\theta = \lbrace{{\bf W},{\bf V},{\bf b},{\bf c}\rbrace}$ of DocNADE, we simply optimize the average negative log-likelihood of the training set documents using stochastic gradient descent. Equations~\ref{eqn:docnade_tree} and \ref{eqn:docnade_tree_lr} indicate that the conditional probability of each word $v_i$ requires computing the position-dependent hidden layer $\mathbf{h}_i\left( \mathbf{v}_{<i} \right )$, which extracts a representation out of the bag of previous visual words $\mathbf{v}_{<i}$. Since computing $\mathbf{h}_i\left( \mathbf{v}_{<i} \right )$ is in $O(H D_{\bf v})$ on average, and there are $D_{\bf v}$ hidden layers $\mathbf{h}_i\left( \mathbf{v}_{<i} \right )$ to compute, a naive procedure for computing all hidden layers would be in $O(H D_{\bf v}^2)$.
However, noticing that \begin{eqnarray} {\mathbf{h}_{i+1}\left ( \mathbf{v}_{<i+1} \right )} & =& {\bf g}\left( \mathbf{c}+\sum_{k<i+1}\mathbf{W}_{:,v_k} \right ) \\ &=& {\bf g}\left( \mathbf{W}_{:,v_i}+\mathbf{c}+\sum_{k<i}\mathbf{W}_{:,v_k} \right ) \end{eqnarray} and exploiting the fact that the weight matrix $\mathbf{W}$ is the same across all conditionals, the linear transformation $\mathbf{c}+\sum_{k<i}\mathbf{W}_{:,v_k}$ can be reused from the computation of the previous hidden layer $\mathbf{h}_{i}( \mathbf{v}_{<i})$ to compute $\mathbf{h}_{i+1}( \mathbf{v}_{<i+1})$. With this procedure, computing all hidden layers $\mathbf{h}_{i}( \mathbf{v}_{<i} ) $ sequentially from $i=1$ to $i=D_{\bf v}$ takes only $O(H D_{\bf v})$ time. Finally, since the computational complexity of each of the $O(\log_2 Q)$ logistic regressions in Equation~\ref{eqn:docnade_tree} is $O(H)$, the total complexity of computing all conditionals $ p( v_i=w|\mathbf{v}_{<i} )$ is $O(\log_2(Q) H D_{\bf v})$. In practice, the document length $D_{\bf v}$ and the number of hidden units $H$ tend to be small, while $\log_2(Q)$ will be small even for large vocabularies. Thus DocNADE can be used and trained efficiently. Once the model is trained, a latent representation can be extracted from a new document $\mathbf{v^{\ast}}$ as follows: \begin{equation} \mathbf{h}_y\left ( \mathbf{v}^{\ast} \right ) = {\bf g}\left( \mathbf{c}+\sum_{i=1}^{D_{\bf v}}\mathbf{W}_{:,v^{\ast}_i} \right )~. \end{equation} This representation could be fed to a standard classifier to perform any supervised computer vision task. The index $y$ is used to highlight that it is the representation used to predict the class label $y$ of the image. \section{SupDocNADE for Multimodal Data} \label{SupDocNADE intro} In this section, we describe the approach of this paper, inspired by DocNADE, to learn jointly from multimodal data.
Here, we will concentrate on the single layer version of our model and discuss its deep extension later, in Section~\ref{sec:SupDeepDocNADE}. First, we describe a supervised extension of DocNADE (SupDocNADE), which incorporates the class label modality into training to learn more discriminative hidden features for classification. Then we describe how we exploit the spatial position information of the visual words. Finally, we describe how to jointly model the text annotation modality with SupDocNADE. \subsection{Supervised DocNADE} \label{sec:SupDocNADE} It has been observed that learning image feature representations using unsupervised topic models such as LDA can perform worse than training a classifier directly on the visual words themselves, using an appropriate kernel such as a pyramid kernel~\cite{lazebnik2006beyond}. One reason is that the unsupervised topic features are trained to explain as much of the entire statistical structure of images as possible and might not model well the particular discriminative structure we are after in our computer vision task. This issue has been addressed in the literature by devising supervised variants of LDA, such as Supervised LDA or sLDA~\cite{blei2007supervised}. DocNADE also being an unsupervised topic model, we propose here a supervised variant of DocNADE, SupDocNADE, in an attempt to make the learned image representation more discriminative for the purpose of image classification. Specifically, given an image $ {\bf v}=[ v_1,v_2,\ldots,v_{D_{\bf v}}]$ and its class label $y\in \{1,\dots,C\}$, SupDocNADE models the full joint distribution as \begin{equation} p( {\bf v}, y)=p(y|{\bf v})\prod_{i=1}^{D_{\bf v}} p\left ( v_i| {\bf v}_{<i}\right ) ~~. \label{eqn:supdocnade} \end{equation} As in DocNADE, each conditional is modeled by a neural network. We use the same architecture for $p\left ( v_i| {\bf v}_{<i}\right )$ as in regular DocNADE. We now only need to define the model for $p(y|{\bf v})$. 
Since $\mathbf{h}_y\left ( \mathbf{v} \right )$ is the image representation that we'll use to perform classification, we propose to model $p\left ( y| {\bf v}\right )$ as a multiclass logistic regression output computed from $\mathbf{h}_y\left ( \mathbf{v} \right )$: \begin{equation} p\left( y|{\bf v}\right) = {\rm softmax}\left( \mathbf{d} + \mathbf{U}\mathbf{h}_y\left (\mathbf{v} \right) \right)_y\label{eqn:h_class} \end{equation} where ${\rm softmax}({\bf a})_i = \exp(a_i) / \sum_{j=1}^C \exp(a_j)$, $ \mathbf{d} \in \mathbb{R}^C $ is the bias parameter vector in the supervised layer and $ \mathbf{U} \in \mathbb{R}^{C \times H} $ is the connection matrix between hidden layer $\mathbf{h}_y $ and the class label. Put differently, $p\left ( y| {\bf v}\right )$ is modeled as a regular multiclass neural network, taking as input the bag of visual words ${\bf v}$. The crucial difference however with a regular neural network is that some of its parameters (namely the hidden unit parameters ${\bf W}$ and ${\bf c}$) are also used to model the visual word conditionals $p\left ( v_i| {\bf v}_{<i}\right )$. Maximum likelihood training of this model is performed by minimizing the negative log-likelihood \begin{equation} -\log p\left( {\bf v},y\right) = - \log p\left ( y| {\bf v} \right) +\label{eqn:objectfunc} \sum_{i=1}^{D_{\bf v}} -\log p( v_i | {\bf v}_{<i}) \end{equation} averaged over all training images. This is known as generative learning~\cite{bouchard2004tradeoff}. The first term is a purely discriminative term, while the second is unsupervised and can be understood as a regularizer, that encourages a solution which also explains the unsupervised statistical structure within the visual words. In practice, this regularizer can bias the solution too strongly away from a more discriminative solution that generalizes well. 
Hence, similarly to previous work on hybrid generative/discriminative learning, we propose instead to weight the importance of the generative term \begin{equation} L({\bf v},y;\theta) = - \log p\left ( y| {\bf v} \right) + \lambda \sum_{i=1}^{D_{\bf v}} -\log p( v_i | {\bf v}_{<i}) \label{eqn:objectfunc_hybrid} \end{equation} where $\lambda$ is treated as a regularization hyper-parameter. Optimizing the training set average of Equation~\ref{eqn:objectfunc_hybrid} is performed by stochastic gradient descent, using backpropagation to compute the parameter derivatives. As in regular DocNADE, computation of the training objective and its gradient requires that we define an ordering of the visual words. Though we could have defined an arbitrary path across the image to order the words (e.g.\ from left to right, top to bottom in the image), we follow~\citet{larochelle2012neural} and randomly permute the words before every stochastic gradient update. The implication is that the model is effectively trained to be a good inference model of {\it any} conditional $p( v_i | {\bf v}_{<i})$, for any ordering of the words in ${\bf v}$. This again helps fight overfitting and better regularizes our model. One could thus think of SupDocNADE as learning from a sequence of \textit{random} fixations performed in a visual scene. In our experiments, we used the rectified linear function as the activation function \begin{equation} {\bf g}({\bf a}) = \max(0, {\bf a}) = [\max(0,a_1),\dots,\max(0,a_H)] \end{equation} which often outperforms other activation functions~\cite{glorot2011deep} and has been shown to work well for image data~\cite{nair2010rectified}. Since this is a piece-wise linear function, the (sub-)gradient with respect to its input, needed by backpropagation to compute the parameter gradients, is simply \begin{equation} {\bf 1}_{({\bf g}({\bf a})>0)}= [1_{(g(a_1)>0)},\dots,1_{(g(a_H)>0)}] \end{equation} where $1_{P}$ is 1 if $P$ is true and 0 otherwise.
Algorithms~\ref{alg:fprop}~and~\ref{alg:bprop} give pseudocodes for efficiently computing the joint distribution $p\left({\bf v},y \right)$ and the parameter gradients of Equation~\ref{eqn:objectfunc_hybrid} required for stochastic gradient descent training. \begin{algorithm}[t] \caption{ Computing $p\left({\bf v},y \right)$ using SupDocNADE} \begin{algorithmic} \STATE {\bf Input:} bag of words representation ${\bf v}$, target $y$ \STATE {\bf Output:} $p\left({\bf v},y \right)$ \STATE $\mathbf{act}\gets \mathbf{c}$ \STATE $p\left(\mathbf{v} \right) \gets 1$ \FOR{$i$ from $1$ to $D_{\bf v}$} \STATE $\mathbf{h}_i \gets$ ${\bf g}\left( \mathbf{act}\right)$ \STATE $p\left(v_i|\mathbf{v}_{<i}\right)\gets 1$ \FOR{$m$ from 1 to $|\pi \left(v_i\right)|$} \STATE $p\left(v_i|\mathbf{v}_{<i}\right) \gets p\left(v_i|\mathbf{v}_{<i}\right) p\left(\pi\left(v_i\right)_m|\mathbf{v}_{<i}\right)$ \ENDFOR \STATE $p\left(\mathbf{v} \right) \gets p\left(\mathbf{v} \right)p\left(v_i|\mathbf{v}_{<i}\right)$ \STATE $\mathbf{act}\gets \mathbf{act} + \mathbf{W}_{:,v_i}$ \ENDFOR \STATE $\mathbf{h}_y\left (\mathbf{v} \right ) \gets {\bf g}\left(\mathbf{act}\right) $ \STATE $p\left( y|\mathbf{v}\right) \gets \textup{softmax} \left( \mathbf{d} + \mathbf{U}\mathbf{h}_y\left (\mathbf{v} \right )\right)_{y}$ \STATE $p\left({\bf v},y \right) \gets p\left(\mathbf{v} \right)p\left( y|\mathbf{v}\right) $ \end{algorithmic} \label{alg:fprop} \end{algorithm} \begin{algorithm}[t] \caption{ Computing SupDocNADE training gradients} \begin{algorithmic} \STATE {\bf Input:} training vector ${\bf v}$, target $y$,\\ \hspace{10mm} unsupervised learning weight $\lambda$ \STATE {\bf Output:} gradients of Equation~\ref{eqn:objectfunc_hybrid} w.r.t.
parameters \STATE $f\left(\mathbf{v}\right) \gets \textup{softmax} \left( \mathbf{d} + \mathbf{U}\mathbf{h}_y\left (\mathbf{v} \right )\right)$ \STATE $\delta \mathbf{d} \gets \left(f \left(\mathbf{v}\right)-1_y\right)$ \STATE $\delta \mathbf{act} \gets (\mathbf{U}^\intercal \delta \mathbf{d}) \circ 1_{{\bf h}_y > 0}$ \STATE $\delta \mathbf{U} \gets \delta \mathbf{d}~{\mathbf{h}_y^{\intercal}}$ \STATE $\delta \mathbf{c} \gets 0$, $\delta \mathbf{b} \gets 0$, $\delta \mathbf{V} \gets 0$, $\delta \mathbf{W} \gets 0$ \FOR{$i$ from $D_{\bf v}$ to $1$} \STATE $\delta\mathbf{h}_i \gets 0$ \FOR {$m$ from $1$ to $|\pi\left(v_i\right)|$} \STATE $ \delta t \gets \lambda \left(p\left(\pi\left(v_i\right)_m|\mathbf{v}_{<i}\right)-\pi\left(v_i\right)_m\right)$ \STATE $\delta b_{l\left(v_i\right)_m} \gets \delta b_{l\left(v_i\right)_m}+\delta t$ \STATE $\delta \mathbf{V}_{l\left(v_i\right)_m,:} \gets \delta \mathbf{V}_{l\left(v_i\right)_m,:}+ \delta t~\mathbf{h}_i^\intercal$ \STATE $\delta \mathbf{h}_i \gets \delta \mathbf{h}_i + \delta t~\mathbf{V}_{l\left(v_i\right)_m,:}^\intercal$ \ENDFOR \STATE $\delta \mathbf{act} \gets \delta \mathbf{act}+ \delta \mathbf{h}_i\circ 1_{{\bf h}_i > 0}$ \STATE $\delta \mathbf{c} \gets \delta \mathbf{c} + \delta \mathbf{h}_i\circ 1_{{\bf h}_i > 0}$ \STATE $\delta \mathbf{W}_{:,v_i} \gets \delta \mathbf{W}_{:,v_i} + \delta \mathbf{act}$ \ENDFOR \end{algorithmic} \label{alg:bprop} \end{algorithm} \subsection{Dealing with Multiple Regions} \label{sec:multiple regions} Spatial information plays an important role in understanding an image. For example, the sky will often appear in the top part of the image, while a car will most often appear at the bottom. A lot of previous work has exploited this intuition successfully.
For example, in the seminal work on spatial pyramids~\cite{lazebnik2006beyond}, it is shown that extracting different visual word histograms over distinct regions instead of a single image-wide histogram can yield substantial gains in performance. We follow a similar approach, whereby we model both the presence of the visual words and the identity of the region they appear in. Specifically, let's assume the image is divided into several distinct regions ${\cal R} = \lbrace{ R_1,R_2, \ldots, R_M \rbrace }$, where $M$ is the number of regions. The image can now be represented as \begin{eqnarray} {\bf v}^{\cal R}& =& [ v^{\cal R}_1,v^{\cal R}_2, \ldots, v^{\cal R}_{D_{\bf v}} ] \\ & =& [ \left( v_1,r_1 \right),\left( v_2, r_2 \right),\ldots,\left(v_{D_{\bf v}},r_{D_{\bf v}} \right) ]\nonumber \end{eqnarray} where $r_i \in {\cal R}$ is the region from which the visual word $v_i$ was extracted. To model the joint distribution over these visual words, we decompose it as $p({\bf v}^{\cal R}) = \prod_i p((v_i,r_i) | {\bf v}^{\cal R}_{<i})$ and treat each of the $Q\times M$ possible visual word/region pairs as a distinct word. One implication of this is that the binary tree of visual words must be larger so as to have a leaf for each possible visual word/region pair. Fortunately, since computations grow logarithmically with the size of the tree, this is not a problem and we can still deal with a large number of regions. \subsection{Dealing with Annotations} \label{sec: anno} So far, we've described how to model the visual word and class label modalities. In this section, we describe how we also model the annotation word modality with SupDocNADE. Specifically, let ${\cal A}$ be the predefined vocabulary of all annotation words. We denote the annotation of a given image as $ {\bf a}= [ a_1,a_2, \ldots ,a_L ] $ where $a_i \in {\cal A}$, with $L$ being the number of words in the annotation.
Thus, the image with its annotation can be represented as a mixed bag of visual and annotation words: \begin{eqnarray} {\bf v}^{\cal A} & = & [ v_1^{\cal A},\ldots,v_{D_{\bf v}}^{\cal A} , v_{D_{\bf v}+1}^{\cal A}, \ldots, v_{D_{\bf v}+L}^{\cal A}] \\ & = & [ v^{\cal R}_1,\ldots,v^{\cal R}_{D_{\bf v}} , a_1, \ldots ,a_L ]~~. \nonumber \end{eqnarray} To embed the annotation words into the SupDocNADE framework, we treat each annotation word the same way we treat visual words. Specifically, we use a joint indexing of all visual and annotation words and use a larger binary word tree, augmented with leaves for the annotation words. By training SupDocNADE on this joint image/annotation representation ${\bf v}^{\cal A}$, it can learn the relationship between the class labels, the spatially-embedded visual words and the annotation words. At test time, the annotation words are not given and we wish to predict them. To achieve this, we compute the document representation ${\bf h}_y({\bf v}^{\cal R})$ based only on the visual words and compute, for each possible annotation word $a \in {\cal A}$, the probability that it would be the next observed word, $p(v_i^{\cal A} = a |{\bf v}^{\cal A} = {\bf v}^{\cal R})$, based on the tree decomposition as in Equation~\ref{eqn:docnade_tree}. In other words, we only compute the probability of paths that reach a leaf corresponding to an annotation word (not a visual word). We then rank the annotation words in ${\cal A}$ in decreasing order of their probability and select the top 5 words as our predicted annotation. \section{Deep Extension of SupDocNADE} \label{sec:SupDeepDocNADE} Although SupDocNADE achieved better performance than other topic models in our previous work \cite{zhengtopic}, the lack of an efficient deep formulation of SupDocNADE reduces its capability of modeling multimodal data, especially compared with other models based on deep neural networks~\cite{srivastava2013discriminative, srivastava2012multimodal}.
Recently, \citet{uria2013deep} proposed an efficient deep extension of the original NADE model~\cite{larochelle2011neural} for binary vector observations, from which DocNADE was derived. We take inspiration from \citet{uria2013deep} and propose SupDeepDocNADE, i.e.\ a supervised deep autoregressive neural topic model for multimodal data modeling. In this section, we introduce the deep extension of DocNADE (DeepDocNADE) and then describe how to incorporate supervised information into its training. We also discuss how to deal with the imbalance between the number of visual words and annotation words, in order to obtain good performance. Before we start the discussion, we note that in the remainder of this section the notation $\bf v$ includes both the visual words and the annotation words of an image, as discussed in Section~\ref{sec: anno}. \subsection{DocNADE revisited} \label{sec: docnade_revisit} We first revisit the training procedure of DocNADE. We will concentrate on the unsupervised version of DocNADE for now and discuss the supervised case later. In Section~\ref{sec:SupDocNADE} we mentioned that words are randomly permuted before every stochastic gradient update, to make DocNADE a good inference model for any ordering of the words. As \citet{uria2013deep} notice, we can think of the use of many orderings as the instantiation of many different DocNADE models, one for each distinct ordering. From that point of view, by training a single set of parameters (connection matrices and biases) on all these orderings, we are effectively employing a parameter sharing strategy across these models and the training process can be interpreted as training a factorial number of DocNADE models simultaneously. We will now make the notion of ordering more explicit in our notation.
Following \citet{uria2013deep}, we now denote $p\left({\bf v}|{\bf \theta}, o\right)$ as the joint distribution of the DocNADE model over the words of an image given the parameters $\mathbf{\theta}$ and ordering $o$. We will also note $p\left(v_{o_{d}}|{\bf v}_{o_{<d}}, {\bf \theta}, o\right)$ as the conditional distribution described in Equation~\ref{eqn:docnade_softmax} or \ref{eqn:docnade_tree}, where ${\bf v}_{o_{<d}}$ is the subvector of the previous $d-1$ words extracted from an ordered word vector ${\bf v}_{o}$, and $v_{o_{d}}$ is the $d^{\rm th}$ word of ${\bf v}_{o}$. Notice that the ordering $o$ is now treated explicitly as a random variable. Thus, training DocNADE on stochastically sampled orderings corresponds, in expectation, to minimizing the negative log-likelihood $-\log p\left({\bf v}|{\bf \theta}, o\right) $ across {\it all possible orderings}, for each training example ${\bf v}$: \begin{equation} L\left({\bf v};{\bf \theta}\right) = \underset{o\in \mathcal{O}}{\mathbb{E}}-\log p\left({\bf v}|{\bf \theta}, o\right) \label{eqn: DocNADE_ord} \end{equation} where $\mathcal{O}$ is the set of all orderings. Applying DocNADE's autoregressive expression for the conditionals in Equation~\ref{eqn:prob_chain_rule}, Equation~\ref{eqn: DocNADE_ord} can be rewritten as: \begin{equation} L\left({\bf v};{\bf \theta}\right) = \underset{o\in \mathcal{O}}{\mathbb{E}}\sum_{d} -\log p\left(v_{o_{d}}|{\bf v}_{o_{<d}}, {\bf \theta}, o\right) \label{eqn: DocNADE_autoreg} \end{equation} By moving the expectation over orderings, $\underset{o\in \mathcal{O}}{\mathbb{E}}$, inside the summation over the conditionals, the expectation can be split into three parts\footnote{ The split is done in a modality-agnostic way, i.e.
the visual words and annotation words are mixed together and are treated equally when training the model.}: one over $o_{<d}$, standing for the first $d-1$ indices in the ordering $o$; one over $o_d$, which is the $d^{\rm th}$ index of the ordering $o$; and one over $o_{>d}$, standing for the remaining indices of the ordering. Hence, the loss function can be rewritten as: \begin{equation} L\left({\bf v};{\bf \theta}\right) = \sum_{d}\underset{o_{<d}}{\mathbb{E}} \underset{o_{d}}{\mathbb{E}}\underset{o_{>d}}{\mathbb{E}} -\log p\left(v_{o_{d}}|{\bf v}_{o_{<d}}, {\bf \theta}, o_{<d}, o_d, o_{>d}\right) \label{eqn: DocNADE_split} \end{equation} Noting that the value of each conditional does not depend on $o_{>d}$, Equation~\ref{eqn: DocNADE_split} can then be simplified as: \begin{equation} L\left({\bf v};{\bf \theta}\right) = \sum_{d}\underset{o_{<d}}{\mathbb{E}} \underset{o_{d}}{\mathbb{E}} -\log p\left(v_{o_{d}}|{\bf v}_{o_{<d}}, {\bf \theta}, o_{<d}, o_d\right)~. \label{eqn: DocNADE_split_simplified} \end{equation} In practice, Equation~\ref{eqn: DocNADE_split_simplified} still sums over a number of terms too large to be computed exhaustively. For training, we thus use a stochastic estimate and replace the expectations/sums over $d$ and $o_{<d}$ with samples. On the other hand, the innermost expectation over $o_{d}$ can be computed cheaply. Indeed, for a given value of $d$ and $o_{<d}$, all terms $p\left(v_{o_{d}}|{\bf v}_{o_{<d}}, {\bf \theta}, o_{<d}, o_d\right)$ require the computation of the same hidden layer representation ${\bf h}_d \left({\bf v}_{o_{<d}}\right)$ from the subvector ${\bf v}_{o_{<d}}$.
Therefore, $L\left({\bf v},{\bf \theta}\right)$ can be estimated by: \begin{equation} \hat{L}\left({\bf v},{\bf \theta}\right) = \frac{D_{\bf v}}{D_{\bf v}-d+1}\sum_{o_d}-\log p\left(v_{o_{d}}|{\bf v}_{o_{<d}}, {\bf \theta}, o_{<d}, o_d\right)\label{eqn:DocNADE_est} \end{equation} where $D_{\bf v}$ is the number of words (including both visual and annotation words) in $\mathbf{v}$. In words, Equation~\ref{eqn:DocNADE_est} measures the ability of the model to predict, from a fixed and random context of $d-1$ words ${\bf v}_{o_{<d}}$, any of the remaining words in the image/annotation. From this, training of DocNADE can be performed by stochastic gradient descent. For a given training example ${\bf v}$, a training update is performed as follows\footnote{In our experiments, both visual words and annotation words are represented in bag-of-words (BoW) fashion. As shown in Section~\ref{sec: DeepDocNADE}, the training procedure thus amounts to generating a word vector $\mathbf{v}$ from the BoW, shuffling and splitting the word vector $\mathbf{v}$, and then regenerating the histograms $\mathbf{x}\left({\bf v}_{o_{<d}}\right)$ and $\mathbf{x}\left({\bf v}_{o_{\geq d}}\right)$, which is inefficient when processing samples in a mini-batch fashion. Hence, in practice, we split the original histogram $\mathbf{x}\left({\bf v}\right)$ directly, by uniformly sampling, for each individual word, how many of its occurrences are put on the left of the split (the others are put on the right). This is not strictly equivalent to the procedure described in this paper, but it works well in practice. }: \begin{itemize} \item[$1)$.] Shuffle ${\bf v}$ to specify an ordering $o$; \item[$2)$.] Sample $d$ uniformly from $\left[0, D_{\bf v}\right]$, which separates ${\bf v}$ into two parts: $\mathbf{v}_{o_{<d}}$ as inputs and $\mathbf{v}_{o_{\geq d}}$ as outputs; \item[$3)$.] Compute each of the conditionals in Equation~\ref{eqn:DocNADE_est}, where $o_d \in \mathbf{v}_{o_{\geq d}}$; \item[$4)$.]
Compute and sum the gradients for each of the conditionals in Equation~\ref{eqn:DocNADE_est}, and rescale by $\frac{D_{\bf v}}{D_{\bf v}-d+1}$. \end{itemize} It should be noticed that, since the number of words in an image/annotation pair can vary across examples, the value of $D_{\bf v}$ will vary between updates, unlike in \citet{uria2013deep}, which models binary vectors of fixed size. We can contrast this procedure with the one described in Section~\ref{sec:SupDocNADE}, which prescribed a stochastic estimation with respect to the possible orderings of the words and an exhaustive sum in predicting all the words in the sequence. Here, we have the opposite: the estimate is stochastic in that it predicts only a subset of the words, but (partially) exhaustive in that it implicitly sums the gradient contributions over several orderings sharing the same permutation up to position $d$. \subsection{Deep Document NADE} \label{sec: DeepDocNADE} As shown in Section~\ref{sec: docnade_revisit}, training of DocNADE can be performed by randomly splitting the words $\mathbf{v}$ into two parts, $\mathbf{v}_{o_{<d}}$ and $\mathbf{v}_{o_{\geq d}}$, and applying stochastic gradient descent to the loss function of Equation~\ref{eqn:DocNADE_est}. Thus, the training procedure now corresponds to that of a neural network, with $\mathbf{v}_{o_{<d}}$ as the input and $\mathbf{v}_{o_{\geq d}}$ as the output's target. The advantage of this approach is that DocNADE can more easily be extended to a deep version this way, which we will refer to as DeepDocNADE.
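Before turning to the deep variant, the four-step stochastic update above can be sketched in numpy. This is a minimal toy sketch under stated assumptions: tiny hypothetical parameter shapes, randomly initialized weights, a plain softmax output in place of the binary tree, and a ReLU activation standing in for $\mathbf{g}$.

```python
import numpy as np

rng = np.random.default_rng(0)
Q, H = 50, 16                       # toy vocabulary size and hidden layer size
W = rng.normal(0, 0.1, (H, Q))      # input connections (hypothetical init)
c = np.zeros(H)                     # hidden bias
V = rng.normal(0, 0.1, (Q, H))      # softmax output weights
b = np.zeros(Q)                     # softmax output bias

def loss_estimate(v, rng):
    """One stochastic estimate of L(v; theta): shuffle v, sample a split
    point d, and score every word past the split using the single shared
    hidden representation h(v_{o<d})."""
    D = len(v)
    o = rng.permutation(D)                   # step 1: pick an ordering
    d = int(rng.integers(0, D + 1))          # step 2: sample the split point
    past, future = v[o[:d]], v[o[d:]]
    x = np.bincount(past, minlength=Q)       # histogram x(v_{o<d})
    h = np.maximum(0.0, c + W @ x)           # shared hidden layer (ReLU)
    logits = b + V @ h
    m = logits.max()
    logp = logits - m - np.log(np.exp(logits - m).sum())  # stable log-softmax
    # steps 3-4: sum -log p over the future words and rescale
    return D / (D - d + 1) * -logp[future].sum()

v = rng.integers(0, Q, size=20)              # a toy document of 20 word indices
est = loss_estimate(v, rng)
print(est >= 0.0)                            # sums of negative log-probs are >= 0
```

In an actual implementation the gradient of this estimate would be backpropagated through `W`, `c`, `V` and `b`; the sketch only computes the loss value.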
Indeed, as mentioned in the previous section, all conditionals $p\left(v_{o_{d}}|\mathbf{v}_{o_{<d}}, \theta,o_{<d}, o_{d}\right)$ in the summation of Equation~\ref{eqn:DocNADE_est} require the computation of a single hidden layer representation: \begin{eqnarray} \mathbf{h}_d^{\left(1\right)}\left({\bf v}_{o_{<d}}\right) &=& {\bf g}\left( \mathbf{c}^{\left(1\right)}+\sum_{k<d}\mathbf{W}^{\left(1\right)}_{:,v_{o_k}} \right)\\ &=& {\bf g}\left({\bf c}^{\left(1\right)} + {\bf W}^{\left(1\right)}\mathbf{x}\left({\bf v}_{o_{<d}}\right) \right) \label{eqn: deepdocnade_h1} \end{eqnarray} where $\mathbf{x}\left({\bf v}_{o_{<d}}\right)$ is the histogram vector representation of the word sequence ${\bf v}_{o_{<d}}$ and where the exponent $(1)$ is used to index the first hidden layer and its parameters. So, unlike in the original training procedure for DocNADE, a training update now requires the computation of a single hidden layer, instead of $D_{\bf v}$ hidden layers. This way, adding more hidden layers only has an additive, instead of multiplicative, effect on the complexity of each training update. Hidden layers are added as in regular deep feedforward neural networks, as follows: \begin{equation} \mathbf{h}^{\left(n\right)} = {\bf g}\left({\bf c}^{\left(n\right)} + {\bf W}^{\left(n\right)}\mathbf{h}^{\left(n-1\right)} \right) \label{eqn: deepdocnade_h} \end{equation} where ${\bf W}^{\left(n\right)}$ and ${\bf c}^{\left(n\right)}$ are the connection matrix and bias for hidden layer $\mathbf{h}^{\left(n\right)}$, $n=1,\ldots, N$, where $N$ is the number of hidden layers. To compute the conditional $p\left(v_{o_{d}}|{\bf v}_{o_{<d}}, {\bf \theta}, o_{<d}, o_d\right)$ in Equation~\ref{eqn:DocNADE_est} after obtaining the hidden representation $\mathbf{h}^{\left(N\right)}$, the binary tree introduced in Section~\ref{sec: DocNADE intro} could be used for an efficient implementation.
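The layer stacking just described can be sketched as follows (illustrative layer sizes, randomly initialized parameters, and $\mathbf{g}$ taken to be the rectified linear function):

```python
import numpy as np

rng = np.random.default_rng(1)
Q = 100                                      # toy vocabulary size
sizes = [Q, 64, 64, 64]                      # input size followed by N = 3 hidden layers
Ws = [rng.normal(0, 0.05, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
cs = [np.zeros(m) for m in sizes[1:]]

def hidden_representation(x):
    """h^(1) = g(c^(1) + W^(1) x), then h^(n) = g(c^(n) + W^(n) h^(n-1))."""
    h = x.astype(float)
    for W, c in zip(Ws, cs):
        h = np.maximum(0.0, c + W @ h)       # g: rectified linear
    return h

x = np.bincount(rng.integers(0, Q, size=30), minlength=Q)  # histogram x(v_{o<d})
h_top = hidden_representation(x)
print(h_top.shape)                           # (64,)
```

Note that, per training update, this forward pass runs exactly once, regardless of how many future words are being predicted.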
However, in cases where the histogram $\mathbf{x}\left({\bf v}_{o_{\geq d}}\right)$ of future words is not sparse, the binary tree output model might not be the most efficient approach. For example, suppose that $\mathbf{x}\left({\bf v}_{o_{\geq d}}\right)$ is full (has no zero entries) and that the vocabulary size is $Q$; then the computation of Equation~\ref{eqn:DocNADE_est} via the binary tree is in $O\left(Q\log_2 Q\right)$, since it has to compute $O\left(\log Q\right)$ logistic regressions for each of the $Q$ words in $\mathbf{x}\left({\bf v}_{o_{\geq d}}\right)$. In this specific scenario, however, going back to a softmax model of the conditionals is preferable. Indeed, since all conditionals in Equation~\ref{eqn:DocNADE_est} share the same hidden representation $\mathbf{h}^{\left(N\right)}$, and thus the normalization term in the softmax is the same for all future words, computing them is only in $O\left(Q\right)$. Another advantage of the softmax over the binary tree is that the softmax is more amenable to an efficient GPU implementation, which also speeds up the training process. In the end, for the experiments with the deep extension of DocNADE in this paper, we opted for the softmax model, as we have found it to be more efficient. We emphasize, however, that the binary tree is still the most efficient option for the loss function of Equation~\ref{eqn:objectfunc_hybrid}, or when the histogram of future words $\mathbf{x}\left({\bf v}_{o_{\geq d}}\right)$ is sparse. \subsection{Supervised Deep Document NADE} Deep Document NADE can also be extended to a supervised variant, referred to as SupDeepDocNADE, following the formulation in Section~\ref{sec:SupDocNADE}.
Specifically, to add the supervised information into DeepDocNADE, the negative log-likelihood function in Equation~\ref{eqn: DocNADE_ord} could be extended as follows: \begin{eqnarray} L\left({\bf v},y;{\bf \theta}\right) &=& \underset{o\in \mathcal{O}}{\mathbb{E}}-\log p\left({\bf v}, y|{\bf \theta}, o\right) \\ &=& \underset{o\in \mathcal{O}}{\mathbb{E}}-\log p\left(y| {\bf v}, {\bf \theta}\right)-\log p\left({\bf v}|{\bf \theta}, o\right) \label{eqn: supdocnade_ord} \end{eqnarray} Since $p\left(y| {\bf v}, {\bf \theta}\right)$ is independent of $o$, Equation~\ref{eqn: supdocnade_ord} can be rewritten as: \begin{equation} L\left({\bf v},y;{\bf \theta}\right)=-\log p\left(y| {\bf v}, {\bf \theta}\right)-\underset{o\in \mathcal{O}}{\mathbb{E}}\log p\left({\bf v}|{\bf \theta}, o\right) \label{eqn: supdocnade_split} \end{equation} Then $L\left({\bf v},y;{\bf \theta}\right)$ can be approximated by sampling ${\bf v}$, $d$ and $o_{<d}$ as follows: \begin{eqnarray} \hat{L}\left({\bf v},y;{\bf \theta}\right) &=& -\log p\left(y| {\bf v}, {\bf \theta}\right)\label{eqn:SupDocNADE_est}\\ \nonumber &-&\frac{D_{\bf v}}{D_{\bf v}-d+1}\sum_{o_d}\log p\left(v_{o_{d}}|{\bf v}_{o_{<d}}, {\bf \theta}, o_{<d}, o_d\right) \end{eqnarray} Similar to Equation~\ref{eqn:objectfunc_hybrid}, the first term in Equation~\ref{eqn:SupDocNADE_est} is supervised, while the second term is unsupervised and can be interpreted as a regularizer. 
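As a toy numeric illustration of this estimator (all probabilities below are made-up values, not model outputs):

```python
import numpy as np

D_v, d = 20, 8                         # document length and sampled split point
log_p_y = np.log(0.7)                  # hypothetical log p(y | v, theta)
# hypothetical log p(v_{o_d} | v_{o<d}, theta, o_{<d}, o_d) for each future word
log_p_words = np.log(np.full(D_v - d, 0.05))

# supervised term plus the rescaled unsupervised term
loss = -log_p_y - D_v / (D_v - d + 1) * log_p_words.sum()
print(round(float(loss), 2))           # 55.66
```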
Thus, we can also weight the importance of the unsupervised part by a hyperparameter $\lambda$ and obtain a hybrid cost function: \begin{eqnarray} \hat{L}\left({\bf v},y;{\bf \theta}\right) &=& -\log p\left(y| {\bf v}, {\bf \theta}\right)\label{eqn:SupdeepDocNADE_hybrid}\\ \nonumber &-&\lambda\frac{D_{\bf v}}{D_{\bf v}-d+1}\sum_{o_d}\log p\left(v_{o_{d}}|{\bf v}_{o_{<d}}, {\bf \theta}, o_{<d}, o_d\right) \end{eqnarray} Equation~\ref{eqn:SupdeepDocNADE_hybrid} can then be used as the per-example loss and optimized over the training set using stochastic gradient descent. \subsection{Weighting the Annotation Words} \label{sec:weight_anno} As mentioned in Section~\ref{sec: anno}, the annotation words can be embedded into the framework of SupDocNADE by treating them the same way we treat visual words. In practice, however, the number of visual words can be much larger than the number of annotation words. For example, in the MIR Flickr data set, with the experimental setup of \citet{srivastava2012multimodal}, the average number of visual words for an image is about $69~011$, which is much larger than the average number of annotation words for an image ($5.15$). This imbalance between visual words and annotation words can cause problems. For example, the contribution of the annotation words to the hidden representation is so small that it might be drowned out by the contribution of the huge amount of visual words, and the gradients coming from the annotation words might also be too small to have any meaningful effect in increasing the conditional probabilities of the annotation words. To deal with this problem, we propose to weight the annotation words in the histograms $\mathbf{x}\left({\bf v}_{o_{<d}}\right)$ and $\mathbf{x}\left({\bf v}_{o_{\geq d}}\right)$.
More specifically, let $\omega\left(\rho\right) \in \mathbb{R}^{Q}$ be a vector containing $Q$ components, where $Q$ is the vocabulary size (including both visual and annotation words), each component corresponding to a word (either visual or annotation). The components corresponding to visual words are set to $1$ and the components corresponding to annotation words are set to $\rho$. Then the new histograms $\mathbf{\tilde{x}}\left({\bf v}_{o_{<d}}\right)$ and $\mathbf{\tilde{x}}\left({\bf v}_{o_{\geq d}}\right)$ are computed as \begin{eqnarray} \mathbf{\tilde{x}}\left({\bf v}_{o_{<d}}\right) &=& \mathbf{x}\left({\bf v}_{o_{<d}}\right) \odot \omega\left(\rho\right)\\ \mathbf{\tilde{x}}\left({\bf v}_{o_{\geq d}}\right) &=& \mathbf{x}\left({\bf v}_{o_{\geq d}}\right) \odot \omega\left(\rho\right) \end{eqnarray} where $\odot$ denotes element-wise multiplication. Moreover, the hybrid cost function of Equation~\ref{eqn:SupdeepDocNADE_hybrid} is rewritten as: \begin{eqnarray} &&\hat{L}\left({\bf v},y;{\bf \theta}\right) = -\log p\left(y| {\bf v}, {\bf \theta}\right)\label{eqn:SupdeepDocNADE_hybri_weight}\\ \nonumber &&~~~~-\frac{\lambda D_{\bf v}}{D_{\bf v}-d+1}\sum_{o_d}\Phi_{o_{d}}\left(\rho\right)\log\tilde{p}\left(v_{o_{d}}|{\bf v}_{o_{<d}}, {\bf \theta}, o_{<d}, o_d\right) \end{eqnarray} where $\tilde{p}\left(v_{o_{d}}|{\bf v}_{o_{<d}}, {\bf \theta}, o_{<d}, o_d\right)$ is the conditional probability obtained by replacing $\mathbf{x}\left({\bf v}_{o_{<d}}\right)$ with $\mathbf{\tilde{x}}\left({\bf v}_{o_{<d}}\right)$ in Equation~\ref{eqn: deepdocnade_h1}, and $\Phi_{o_{d}}\left(\rho\right)$ is a function that assigns weight $\rho$ if $o_{d}$ is an annotation word, and $1$ otherwise. By weighting the annotation words in the histograms, the model pays more attention to the annotation words, reducing the problem caused by the imbalance between visual and annotation words. In practice, the weight $\rho$ is a hyper-parameter and is selected by cross-validation.
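The reweighting just described can be sketched as follows (a toy vocabulary where, by assumption, indices 7--9 stand for annotation words and the rest for visual words):

```python
import numpy as np

Q = 10                                   # toy vocabulary size
is_annotation = np.arange(Q) >= 7        # assumed: words 7-9 are annotation words
rho = 100.0                              # hypothetical annotation weight

def omega(rho):
    """omega(rho): 1 for visual-word components, rho for annotation words."""
    return np.where(is_annotation, rho, 1.0)

x = np.bincount([0, 0, 3, 7, 9], minlength=Q).astype(float)  # raw histogram
x_tilde = x * omega(rho)                 # element-wise product with omega(rho)
print(x_tilde[0], x_tilde[7])            # 2.0 100.0
```

Visual-word counts pass through unchanged, while each annotation-word count is scaled by $\rho$.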
As we will see in Section~\ref{sec:SupDeepDocNADE annoweight}, weighting annotation words more heavily can significantly improve performance. \subsection{Exploiting Global Image Features} \label{sec:global_features} Besides the spatial information and annotations, which are embedded into the framework of DocNADE in Section~\ref{sec:multiple regions} and Section~\ref{sec: anno}, bottom-up global features, such as Gist~\cite{oliva2001modeling} and MPEG-7 descriptors~\cite{manjunath2001color}, can also play an important role in multimodal data modeling \citep{srivastava2012multimodal}. Global features can, among other things, complement the local information extracted from patch-based visual words. In this section, we describe how to embed such features into the framework of our model. Specifically, let $\mathbf{f}\in \mathbb{R}^{N_f}$ be the global feature vector extracted from an image, where $N_f$ is the length of the global feature vector. One possibility for embedding $\mathbf{f}$ into the model is to condition the hidden representation on the global features $\mathbf{f}$ as follows: \begin{equation} \mathbf{h}^{\left(1\right)} = {\bf g}\left({\bf c}^{\left(1\right)} + {\bf W}^{\left(1\right)}\mathbf{x}\left({\bf v}_{o_{<d}}\right) +\mathbf{P}\mathbf{f} \right) \label{eqn:global_feature} \end{equation} where $\mathbf{P}$ is a connection matrix specific to the global features. This can be understood as a hidden layer whose hidden unit biases are conditioned on the image's global feature vector $\mathbf{f}$. Thus, the whole model is conditioned not only on the previous words but also on the global features $\mathbf{f}$. \section{Experiments and Results} \label{experiment} In this section, we compare the performance of our model with that of other models for multimodal data modeling.
Specifically, we first test the ability of the single-hidden-layer SupDocNADE to learn from multimodal data on two real-world data sets that are widely used in research on topic models. Then we test the performance of SupDeepDocNADE on the large-scale Multimedia Information Retrieval (MIR) Flickr data set and show that SupDeepDocNADE achieves state-of-the-art performance. The code to download the data sets and for SupDocNADE and SupDeepDocNADE is available at \url{https://sites.google.com/site/zhengyin1126/home/supdeepdocnade}. \subsection{Experiments for SupDocNADE} \label{sec:expsupdocnade} To test the ability of the single-hidden-layer SupDocNADE to learn from multimodal data, we measured its performance on simultaneous image classification and annotation tasks. We tested our model on 2 real-world data sets: a subset of the LabelMe data set~\cite{russell2008labelme} and the UIUC-Sports data set~\cite{li2007and}. LabelMe and UIUC-Sports come with annotations and are popular classification and annotation benchmarks. We performed extensive quantitative comparisons of SupDocNADE with the original DocNADE model and supervised LDA (sLDA)\footnote{We mention that \cite{wang2009simultaneous} has shown that sLDA performs better than Corr-LDA~\cite{blei2003modeling}. Moreover, \cite{jia2011learning} found that Multimodal LDA~\cite{putthividhy2010topic} did not improve on the performance of Corr-LDA. Finally, sLDA distinguishes itself from the other models in that it also supports the class label modality and has code available online. Hence, we compare directly with sLDA only.}~\cite{blei2007supervised,wang2009simultaneous}. We also provide some comparisons with MMLDA~\cite{wang2011max} and a Spatial Pyramid Matching (SPM) approach~\cite{lazebnik2006beyond}.
\subsubsection{Data Set Descriptions} Following \citet{wang2009simultaneous}, we constructed our LabelMe data set using the online tool to obtain images of size $256 \times 256$ pixels from the following 8 classes: {\it highway}, {\it inside city}, {\it coast}, {\it forest}, {\it tall building}, {\it street}, {\it open country} and {\it mountain}. For each class, 200 images were randomly selected and split evenly into training and test sets, yielding a total of 1600 images. The UIUC-Sports data set contains 1792 images, classified into 8 classes: {\it badminton} (313 images), {\it bocce} (137 images), {\it croquet} (330 images), {\it polo} (183 images), {\it rockclimbing} (194 images), {\it rowing} (255 images), {\it sailing} (190 images) and {\it snowboarding} (190 images). Following previous work, the maximum side of each image was resized to 400 pixels, while maintaining the aspect ratio. We randomly split the images of each class evenly into training and test sets. For both the LabelMe and UIUC-Sports data sets, we removed the annotation words occurring less than 3 times, as in \citet{wang2009simultaneous}. \subsubsection{Experimental Setup for SupDocNADE} \label{experiment:conf} Following \citet{wang2009simultaneous}, 128-dimensional, densely extracted SIFT features were used to extract the visual words. The step and patch size of the dense SIFT extraction were set to 8 and 16, respectively. The dense SIFT features from the training set were quantized into 240 clusters, to construct our visual word vocabulary, using $K$-means. We divided each image into a $2 \times 2 $ grid to extract the spatial position information, as described in Section~\ref{sec:multiple regions}. This produced $2\times 2\times 240=960$ different visual word/region pairs. We use classification accuracy to evaluate the performance of image classification and the average F\textup{-measure} of the top 5 predicted annotations to evaluate the annotation performance, as in previous work.
The F\textup{-measure} of an image is defined as \begin{equation} F\textup{-measure} = \frac{2\times \textup{Precision}\times \textup{Recall}}{ \textup{Precision}+\textup{Recall}} \end{equation} where recall is the percentage of correctly predicted annotations out of all ground-truth annotations for an image, while precision is the percentage of correctly predicted annotations out of all predicted annotations\footnote{When there are repeated words in the ground-truth annotations, the repeated terms were removed to calculate the F\textup{-measure}.}. We used 5 random train/test splits to estimate the average accuracy and F\textup{-measure}. Image classification with SupDocNADE is performed by feeding the learned document representations to an RBF kernel SVM. In our experiments, all hyper-parameters (the learning rate, the unsupervised learning weight $\lambda$ in SupDocNADE, and $C$ and $\gamma$ in the RBF kernel SVM) were chosen by cross-validation. We emphasize that, again following \citet{wang2009simultaneous}, the annotation words are not available at test time and all methods predict an image's class based solely on its bag of visual words. \subsubsection{Quantitative Comparison} In this section, we describe our quantitative comparison between SupDocNADE, DocNADE and sLDA. We used the implementation of sLDA available at \url{http://www.cs.cmu.edu/~chongw/slda/} in our comparison, to which we fed the same visual (with spatial regions) and annotation words as for DocNADE and SupDocNADE. \begin{figure*}[t] \includegraphics[width=0.24\linewidth]{labelme_compare_slda_crop.pdf} \includegraphics[width=0.24\linewidth]{uiuc_compare_slda_crop.pdf} \includegraphics[width=0.24\linewidth]{labelme_compare_position_crop.pdf} \includegraphics[width=0.24\linewidth]{uiuc_compare_position_crop.pdf} \caption{Classification performance comparison on LabelMe (1st and 3rd panels) and UIUC-Sports (2nd and 4th panels). On the left, we compare the classification performance of SupDocNADE, DocNADE and sLDA.
On the right, we compare the performance of different variants of SupDocNADE. The label ``{\it $\lambda$ varies}'' means that the unsupervised weight $\lambda$ in Equation~\ref{eqn:objectfunc_hybrid} is chosen by cross-validation.} \label{fig:labelme_and_uiuc_comp} \end{figure*} \begin{table}[t] \caption{Performance comparison of SupDocNADE with different models on the LabelMe and UIUC-Sports data sets.} \begin{center} \begin{tabular}{@{}l|l@{\hspace{1mm}}l@{\hspace{1mm}}l@{\hspace{1mm}}l@{}} \toprule & \multicolumn{2}{c}{LabelMe} & \multicolumn{2}{@{}c@{}}{UIUC-Sports} \\ \multicolumn{1}{c|}{Model} & Accuracy$\%$ & F\textup{-measure}$\%$ & Accuracy$\%$& F\textup{-measure}$\%$ \\ \midrule SPM~\cite{lazebnik2006beyond} &$80.88$&$43.68 $&$72.33 $&$41.78 $ \\ MMLDA~\cite{wang2011max} &$81.47 ^{\dagger}$&$\mathbf{46.64} ^{\dagger \ast}$&$74.65 ^{\dagger}$&$44.51 ^{\dagger}$ \\ sLDA ~\cite{wang2009simultaneous} &$81.87 $&$38.7 ^{\dagger}$&$76.87 $&$35.0 ^{\dagger}$ \\ DocNADE &$81.97 $&$43.32 $&$74.23 $&$46.38 $\\ \midrule SupDocNADE &$\mathbf{83.43} $&$43.87 $&$\mathbf{77.29} $&$\mathbf{46.95} $\\ \bottomrule \end{tabular} \begin{minipage}{0.48\textwidth} {\small $\dagger$: Taken from the original paper. \\ $\ast$: MMLDA performs classification and annotation separately and does not learn jointly from all 3 modalities. } \end{minipage} \end{center} \label{table:comparison} \end{table} The classification results are illustrated in Figure~\ref{fig:labelme_and_uiuc_comp}. We observe that SupDocNADE outperforms DocNADE and sLDA. Tuning the trade-off between generative and discriminative learning and exploiting position information is usually beneficial. There is just one exception: on LabelMe, with 200 hidden topic units, using a $1\times 1$ grid slightly outperforms a $2\times 2$ grid. As for image annotation, we computed the performance of our model with 200 topics.
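The per-image F-measure used in these annotation comparisons can be sketched as follows (repeated ground-truth words are removed, per the earlier footnote; the word lists are made-up examples):

```python
def f_measure(predicted, ground_truth):
    """F-measure of the top predicted annotations against the ground truth."""
    truth = set(ground_truth)                # drop repeated ground-truth words
    correct = sum(1 for w in predicted if w in truth)
    precision = correct / len(predicted)
    recall = correct / len(truth)
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# toy example: 3 of 5 predictions correct, 4 distinct ground-truth words
score = f_measure(["sky", "tree", "car", "road", "person"],
                  ["sky", "tree", "road", "building"])
print(score)
```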
As shown in Table~\ref{table:comparison}, SupDocNADE obtains an $F$\textup{-measure} of $43.87\%$ and $46.95\%$ on the LabelMe and UIUC-Sports data sets, respectively. This is slightly superior to regular DocNADE. Since code for performing image annotation using sLDA is not publicly available, we compare directly with the results found in the corresponding paper~\cite{wang2009simultaneous}. \citet{wang2009simultaneous} report $F$\textup{-measures} of $38.7\%$ and $35.0\%$ for sLDA, which is below SupDocNADE by a large margin. We also compare with MMLDA~\cite{wang2011max}, which has been applied to image classification and annotation separately. The reported classification accuracy for MMLDA is lower than that of SupDocNADE, as shown in Table~\ref{table:comparison}. The annotation performance reported in \citet{wang2011max} is better than that of SupDocNADE on LabelMe but worse on UIUC-Sports. We highlight that MMLDA does not deal with the class label and annotation word modalities {\it jointly}, the different modalities being treated separately. The spatial pyramid approach of \citet{lazebnik2006beyond} can also be adapted to perform both image classification and annotation. We used the code from \citet{lazebnik2006beyond} to generate two-layer SPM representations with a vocabulary size of 240, which is the same configuration as used by the other models. For image classification, an SVM with the Histogram Intersection Kernel (HIK) is adopted as the classifier, as in \citet{lazebnik2006beyond}. For annotation, we used a $k$ nearest neighbor (KNN) prediction of the annotation words for the test images. Specifically, the top 5 most frequent annotation words among the $k$ nearest images (based on the SPM representation with HIK similarity) in the training set were selected as the prediction of a test image's annotation words. The number $k$ was selected by cross-validation, for each of the 5 random splits.
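This KNN annotation baseline can be sketched as follows (a toy dot-product similarity stands in for the HIK similarity on SPM representations, and the representations and annotations are made-up):

```python
import numpy as np
from collections import Counter

def knn_annotations(test_repr, train_reprs, train_annotations, k=3, top=5):
    """Predict the `top` most frequent annotation words among the k most
    similar training images (dot product stands in for HIK similarity)."""
    sims = train_reprs @ test_repr
    nearest = np.argsort(-sims)[:k]          # indices of the k nearest images
    counts = Counter(w for i in nearest for w in train_annotations[i])
    return [w for w, _ in counts.most_common(top)]

train = np.eye(4)                            # 4 toy training representations
annos = [["sky", "sea"], ["sky", "boat"], ["tree"], ["car"]]
preds = knn_annotations(np.array([1.0, 0.0, 0.0, 0.0]), train, annos)
print(preds)                                 # the most frequent word comes first
```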
As shown in Table~\ref{table:comparison}, SPM achieves a classification accuracy of $80.88\%$ and $72.33\%$ for LabelMe and UIUC-Sports, which is lower than SupDocNADE. As for annotation, the $F$\textup{-measure} of SPM is also lower than SupDocNADE, with $43.68\%$ and $41.78\%$ for LabelMe and UIUC-Sports, respectively. Figure~\ref{fig:figure_results} illustrates examples of correct and incorrect predictions made by SupDocNADE on the LabelMe data set. \begin{figure}[t] \begin{center} \includegraphics[width=.75\linewidth]{labelme_demo_framed.pdf} \end{center} \caption{Predicted class and annotation by SupDocNADE on LabelMe data set. We list some correctly (top row) and incorrectly (bottom row) classified images. The predicted (in blue) and ground-truth (in black) class labels and annotation words are presented under each image.} \label{fig:figure_results} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=0.75\linewidth]{label_visualword_annoword.pdf} \end{center} \caption{Visualization of learned representations. Class labels are colored in red. For each class, we list 4 visual words (each represented by 16 image patches) and 5 annotation words that are strongly associated with each class. See Sec.~\ref{sec:visualization of representation} for more details. } \label{fig:label_visual_anno} \end{figure} \subsubsection{Visualization of Learned Representations} \label{sec:visualization of representation} Since topic models are often used to interpret and explore the semantic structure of image data, we looked at how we could observe the structure learned by SupDocNADE. We extracted the visual/annotation words that were most strongly associated with certain class labels within SupDocNADE as follows. Given a class label {\it street}, which corresponds to a column $\mathbf{U}_{:,i}$ in matrix $\mathbf{U}$, we selected the top 3 topics (hidden units) having the largest connection weight in $\mathbf{U}_{:,i}$. 
Then, we averaged the columns of matrix $\mathbf{W}$ corresponding to these 3 hidden topics and selected the visual/annotation words with the largest averaged connection weight. The results of this procedure for the classes {\it street}, {\it sailing}, {\it forest} and {\it highway} are illustrated in Figure~\ref{fig:label_visual_anno}. To visualize the visual words, we show 16 image patches belonging to each visual word's cluster, as extracted by $K$-means. The learned associations are intuitive: for example, the class {\it street} is associated with the annotation words ``{\it building}'', ``{\it buildings}'', ``{\it window}'', ``{\it person walking}'' and ``{\it sky}'', while the visual words showcase parts of buildings and windows. \subsection{Experiments for SupDeepDocNADE} We now test the performance of SupDeepDocNADE, the deep extension of SupDocNADE, on the large-scale MIR Flickr data set~\cite{HuiskesM2008}. MIR Flickr is a challenging benchmark for multimodal data modeling. In this section, we show that SupDeepDocNADE achieves state-of-the-art performance on the MIR Flickr data set over strong baselines: the DBM approach of~\citet{srivastava2013discriminative}, MDRNN~\cite{sohn2014improved}, TagProp~\cite{guillaumin2010multimodal} and the multiple kernel learning approach of \citet{verbeek2010image}. \subsubsection{MIR Flickr Data Set} \label{sec:MIR intro} The MIR Flickr data set contains $1$ million real images collected from the image hosting website Flickr. The social tags of each image are also collected and used as annotations in our experiments. Among the $1$ million images, $25~000$ images are labeled into $38$ classes, such as {\it sky, bird, people, animals, car, etc.}, giving us a subset of labeled images. Each image in the labeled subset can have multiple class labels. In our experiments, we used $15~000$ images for training and $10~000$ images for testing.
The remaining $975~000$ images do not have labels and thus were used for the unsupervised pretraining of SupDeepDocNADE (see next section). The most frequent $2000$ tags were collected for the annotation vocabulary, following previous work~\cite{srivastava2012multimodal,srivastava2013discriminative}. The average number of annotations for an image is $5.15$. In the whole data set, $128~501$ images do not have annotations, out of which $4551$ images are in the labeled subset. \subsubsection{Experimental Setup for SupDeepDocNADE} \label{sec:SupDeepDocNADE config} In order to compare directly with the DBM approach of \citet{srivastava2013discriminative}, we use the same experimental configuration. Specifically, the images in MIR Flickr are first rescaled so that the maximum side of each image is $480$ pixels, keeping the aspect ratio. Then, $128$-dimensional SIFT features are densely sampled on these images to extract the visual words. Following \citet{srivastava2013discriminative}, we used $4$ different patch sizes, namely $4$, $6$, $8$ and $10$ pixels, with the patch step fixed to $3$ pixels. The SIFT features from the unlabeled images were quantized into 2000 clusters, which are used as the visual word vocabulary. Thus, the image modality is represented by the bag of visual words over this vocabulary. As preliminary experiments suggested that spatial information (see Section~\ref{sec:multiple regions}) was not useful on the Flickr data set, we opted not to use it here. Similarly, the text modality for SupDeepDocNADE is represented using the annotation vocabulary, built from the most frequent $2000$ tags, as mentioned in Section~\ref{sec:MIR intro}. The visual words and annotation words are combined together and treated as the input of SupDeepDocNADE.
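The construction of the visual-word histogram from local descriptors can be sketched as follows (tiny illustrative sizes and random stand-ins; the actual vocabulary has 2000 centroids learned by $K$-means on SIFT descriptors from the unlabeled images):

```python
import numpy as np

def quantize(descriptors, centroids):
    """Map each local descriptor to its nearest centroid (its visual word)
    and return the bag-of-visual-words histogram."""
    # squared Euclidean distances between all descriptors and all centroids
    d2 = ((descriptors[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    words = d2.argmin(axis=1)
    return np.bincount(words, minlength=len(centroids))

rng = np.random.default_rng(2)
centroids = rng.normal(size=(50, 128))     # toy stand-in for K-means centroids
descriptors = rng.normal(size=(300, 128))  # dense SIFT descriptors of one image
hist = quantize(descriptors, centroids)
print(hist.sum())                          # 300: one visual word per descriptor
```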
As for the global features (Section~\ref{sec:global_features}), a combination of Gist~\cite{oliva2001modeling} and MPEG-7 descriptors~\cite{manjunath2001color} (EHD, HTD, CSD, CLD, SCD) is adopted in our experiments, as in \citet{srivastava2013discriminative}. The length of the global feature vector is $1857$. We used an architecture with $3$ hidden layers in our experiments, the size of each hidden layer being $2048$. Note that the DBM~\cite{srivastava2012multimodal,srivastava2013discriminative} also uses $3$ hidden layers with $2048$ hidden units per layer, so our comparison with the DBM is fair. The activation function for the hidden units is the rectified linear function. We used a softmax output layer instead of a binary tree to compute the conditionals $p\left(v_{o_{d}}|\mathbf{v}_{o_{<d}}, \theta,o_{<d},o_{d}\right)$ for SupDeepDocNADE, as discussed in Section~\ref{sec: DeepDocNADE}. For the prediction of class labels, since images in MIR Flickr can have multiple labels, we used a sigmoid output layer instead of the softmax to compute the probability that an image belongs to a specific class $c_i$: \begin{equation} p\left(c_i=1|\mathbf{v},\mathbf{\theta}\right)= {\rm sigmoid}\left(d_{c_i}+\mathbf{U}_{c_i,:}\mathbf{h}^{\left(N\right)}\right) \end{equation} where $\mathbf{h}^{\left(N\right)}$ is the hidden representation of the top layer. As a result, the supervised part of the cost in Equation~\ref{eqn:SupdeepDocNADE_hybrid} is replaced by the cross-entropy $\sum_{i=1}^{C} - c_i\log p\left(c_i=1|\mathbf{v},\mathbf{\theta}\right)-\left(1- c_i\right)\log p\left(c_i=0|\mathbf{v},\mathbf{\theta}\right)$, where $C$ is the number of classes. In all experiments, the unlabeled images are used for unsupervised pretraining. This is achieved by first training a DeepDocNADE model, without any output layer predicting class labels.
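This multi-label sigmoid output and its cross-entropy can be sketched as follows (randomly initialized parameters and a made-up hidden representation; only the shapes match our setup):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def multilabel_cross_entropy(h_top, U, d_bias, targets):
    """Per-class p(c_i = 1 | v) via a sigmoid output layer, and the summed
    cross-entropy that replaces the softmax supervised term."""
    p = sigmoid(d_bias + U @ h_top)
    return -np.sum(targets * np.log(p) + (1 - targets) * np.log(1 - p))

rng = np.random.default_rng(3)
C, H = 38, 2048                          # 38 MIR Flickr classes, 2048 top units
U = rng.normal(0, 0.01, (C, H))
d_bias = np.zeros(C)
h_top = np.maximum(0.0, rng.normal(size=H))  # hypothetical top-layer activations
y = np.zeros(C)
y[[2, 5]] = 1.0                          # an image carrying two class labels
loss = multilabel_cross_entropy(h_top, U, d_bias, y)
print(loss > 0.0)                        # cross-entropy is strictly positive
```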
The result of this training is then used to initialize the parameters of a SupDeepDocNADE model, which is finetuned on the labeled training set based on the loss of Equation~\ref{eqn:SupdeepDocNADE_hybri_weight}. Once training is finalized, the hidden representation from the top hidden layer after observing all words (both visual words and annotation words) of an image is fed to a linear SVM~\cite{fan2008liblinear} to compute the confidence of the image belonging to each class. The average precision (AP) for each class is obtained based on these confidences, where AP is the area under the precision-recall curve. After that, the mean average precision (MAP) over all classes is computed and used as the metric to measure the performance of the model. We used the same $5$ training/validation/test set splits on the labeled subset of MIR Flickr as \citet{srivastava2013discriminative} and report the average performance over the $5$ splits. To initialize the connection matrices, we followed the recommendation of \citet{glorot2010understanding} and used a uniform distribution: \begin{equation} \Theta \sim U\left[-\frac{\sqrt{6}}{\sqrt{l_{\Theta}+w_{\Theta}}}, \frac{\sqrt{6}}{\sqrt{l_{\Theta}+w_{\Theta}}}\right] \end{equation} where $\Theta \in \lbrace \mathbf{W}, \mathbf{U}, \mathbf{V} \rbrace$ is a connection matrix, $l_{\Theta}$ and $w_{\Theta}$ are the number of rows and columns of matrix $\Theta$, respectively, and $U$ is the uniform distribution. In practice, we have also found it useful to normalize the input histograms $\mathbf{\tilde{x}}\left({\bf v}_{o_{<d}}\right)$ for each image, by rescaling them to have unit variance. The hyper-parameters (learning rate, unsupervised weight $\lambda$, the parameters of the linear SVM, etc.) were chosen by cross-validation. To prevent overfitting, dropout~\cite{hinton2012improving} is adopted during training, with a dropout rate of $0.5$ for all hidden layers.
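The initialization above can be sketched as:

```python
import numpy as np

def glorot_uniform(n_rows, n_cols, rng):
    """Sample a matrix from U[-b, b] with b = sqrt(6)/sqrt(n_rows + n_cols)."""
    bound = np.sqrt(6.0) / np.sqrt(n_rows + n_cols)
    return rng.uniform(-bound, bound, size=(n_rows, n_cols))

rng = np.random.default_rng(4)
W1 = glorot_uniform(2048, 2000, rng)       # e.g. a 2048 x 2000 first-layer matrix
bound = np.sqrt(6.0) / np.sqrt(2048 + 2000)
print(np.abs(W1).max() <= bound)           # True: all entries lie within the bound
```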
We also maintained an exponentially decaying average of the parameter values throughout the gradient descent training procedure and used the averaged parameters at test time. This corresponds to Polyak averaging~\citep{swersky2010tutorial}, but where the linear average is replaced by a weighting that puts more emphasis on recent parameter values. The annotation weight was fixed to $12~000$, which is approximately the ratio of the average number of visual words to the average number of annotation words in the data set. We will investigate the impact of the annotation weight on the performance in Section~\ref{sec:SupDeepDocNADE annoweight}. \begin{table}[t] \caption{Performance comparison on MIR Flickr data set.} \begin{center} \begin{tabular}{@{}l|l@{}} \toprule \multicolumn{1}{c|}{Model} & MAP\\ \midrule TF-IDF &$0.384 \pm 0.004$ \\ Multiple Kernel Learning SVMs~\cite{guillaumin2010multimodal} &$0.623$ \\ TagProp~\cite{verbeek2010image} &$0.640$ \\ Multimodal DBM~\cite{srivastava2013discriminative} &$0.651\pm 0.005$ \\ MDRNN~\cite{sohn2014improved} &$0.686\pm 0.003$ \\ \midrule SupDeepDocNADE (1 hidden layer, 625 epochs pretraining) &$0.654\pm 0.004$\\ SupDeepDocNADE (2 hidden layers, 625 epochs pretraining) &$0.671\pm 0.006$\\ SupDeepDocNADE (3 hidden layers, 625 epochs pretraining) &$0.670\pm 0.005$\\ \midrule SupDeepDocNADE (2 hidden layers, 2325 epochs pretraining) &$0.682\pm 0.005$\\ SupDeepDocNADE (3 hidden layers, 2325 epochs pretraining) &$0.686\pm 0.005$\\ \midrule SupDeepDocNADE (2 hidden layers, 4125 epochs pretraining) &$0.684\pm 0.005$\\ SupDeepDocNADE (3 hidden layers, 4125 epochs pretraining) &$\mathbf{0.691}\pm 0.005$\\ \bottomrule \end{tabular} \end{center} \label{table:comparison_DBM} \end{table} \begin{figure}[h] \begin{center} \includegraphics[width=0.7\linewidth]{map_pretrain.pdf} \end{center} \caption{Performance of SupDeepDocNADE w.r.t.\ the number of epochs pretrained on
unlabeled data. } \label{fig:map_pretrain} \end{figure} \subsubsection{Comparison with other baselines} \label{sec:compare with DBM} Table~\ref{table:comparison_DBM} presents a comparison of the performance of SupDeepDocNADE with the DBM approach of \citet{srivastava2013discriminative} and the MDRNN of \citet{sohn2014improved}, as well as other strong baselines, in terms of MAP. We also provide the simple and popular TF-IDF baseline in Table~\ref{table:comparison_DBM} to make the comparison more complete. The TF-IDF baseline uses only the bag-of-words representations of images, without global features. We feed the TF-IDF representations to a linear SVM to obtain confidences of an image belonging to each class and then compute the MAP, as for SupDeepDocNADE. We can see that SupDeepDocNADE achieves the best performance among all methods. More specifically, we first pretrained the model for $625$ epochs on the unlabeled data with $1$, $2$ and $3$ hidden layers. The results in Table~\ref{table:comparison_DBM} show that SupDeepDocNADE outperforms the DBM baseline by a large margin. Moreover, with $625$ epochs of pretraining, SupDeepDocNADE with $2$ and $3$ hidden layers performs better than with only $1$ hidden layer. We then pretrained the model for more epochs on the unlabeled data ($2325$ epochs). As shown in Table~\ref{table:comparison_DBM}, with more pretraining epochs, the deeper model ($3$ hidden layers) performs even better. This confirms that the use of a deep architecture is beneficial. When the number of pretraining epochs reaches $4125$, the SupDeepDocNADE model with $3$ hidden layers achieves a MAP of $0.691$, which outperforms all the strong baselines and increases the performance gap with the 2-hidden-layer model.
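The TF-IDF baseline above can be sketched as below; this is one common TF-IDF variant (raw term frequency times log inverse document frequency) chosen for illustration, since the exact weighting is not specified here:

```python
import numpy as np

def tf_idf(counts):
    """TF-IDF transform of a document-by-word count matrix."""
    counts = np.asarray(counts, dtype=float)
    n_docs = counts.shape[0]
    df = np.count_nonzero(counts > 0, axis=0)   # document frequency per word
    idf = np.log(n_docs / np.maximum(df, 1))    # inverse document frequency
    return counts * idf                         # term frequency times idf
```

The resulting rows would then be fed to a linear SVM, one binary classifier per class, exactly as described for the baseline.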
From Table~\ref{table:comparison_DBM} we can also see that the performance of the 2-layer SupDeepDocNADE does not improve as much as that of the 3-layer SupDeepDocNADE when the number of pretraining epochs increases from $2325$ to $4125$. Figure~\ref{fig:map_pretrain} shows the performance of SupDeepDocNADE w.r.t.\ the number of pretraining epochs. We can see from Figure~\ref{fig:map_pretrain} that with more epochs of pretraining, the performance of the 3-layer SupDeepDocNADE increases faster than that of the 2-layer model, which indicates that the 3-layer SupDeepDocNADE has a larger capacity than the 2-layer model, and that this capacity can be exploited with more pretraining. Figure~\ref{fig:map_pretrain} also suggests that the performance of SupDeepDocNADE could exceed $0.691$ with even more pretraining epochs. Figure~\ref{fig:failure_analysis} illustrates some failed predictions of SupDeepDocNADE, where the reasons for failure are shown on the left side of each row. One reason for failure is that the local texture/color is ambiguous or misleading. For example, in the first image of the top row, the blue color in the upper part of the wall misleads the model into predicting ``sky'' with a confidence of $0.995$. Another type of failure, shown in the middle row of Figure~\ref{fig:failure_analysis}, is caused by images that are abstract illustrations of the class. For instance, the model fails to recognize the bird, car and tree in the images of the middle row, respectively, as these images are merely abstract illustrations of these concepts. The third reason, illustrated in the bottom row, is that the class occupies only a small portion of the image, making it more likely to be ignored. For example, the female face on the stamp in the first image of the bottom row is too small to be recognized by the model. Note that we have only illustrated some failure cases; there might be other kinds of failures.
In practice, we also find that some images are not correctly labeled, which might also cause some failures. Having established that SupDeepDocNADE achieves state-of-the-art performance on the MIR Flickr data set and discussed some failure cases, we now explore some of its properties in more detail in the following sections. \begin{figure}[t] \begin{center} \includegraphics[width=0.65\linewidth]{deepdocnade_failures.pdf} \end{center} \caption{Illustration of some failure cases of SupDeepDocNADE. The reasons for failure are listed on the left side of each row. For each reason, we list $3$ examples. The number below each image is the confidence of either the wrongly predicted class (top row) or the ground truth class (middle and bottom rows). The maximum confidence is $1$ and the minimum is $0$.} \label{fig:failure_analysis} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=0.6\linewidth]{AnnotationW.pdf} \end{center} \caption{Comparison between different annotation weights. } \label{fig:anno_weight} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=0.7\linewidth]{anno2.pdf} \end{center} \caption{Illustration of text generated from images by SupDeepDocNADE. The input for this task is the image modality only and the output is the generated text. We give the ground truth annotations in the second column and the top $8$ words generated by SupDeepDocNADE in the third column. If there are no ground truth annotations, the corresponding entry is left blank. We can see that SupDeepDocNADE can generate meaningful annotations from images.} \label{fig:predict_anno} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=0.9\linewidth]{MultimodalRetrieve-part.pdf} \end{center} \caption{Illustration of multimodal retrieval results for SupDeepDocNADE. Both the query input and the retrieved results contain image and text modalities. The annotations (text modality) are shown under the image.
The query input is shown in the first column, and the $4$ most similar image/annotation pairs according to SupDeepDocNADE are shown in the following columns, ranked by similarity from left to right. } \label{fig:multimodal_retrieve} \end{figure} \subsubsection{The Impact of the Annotation Weight} \label{sec:SupDeepDocNADE annoweight} We previously proposed to weight the annotation words differently in order to deal with the imbalance between the numbers of visual and annotation words. In this part, we investigate the influence of the annotation weight on the performance. Specifically, we set the annotation weight to $\left\lbrace 1,4000,8000,12~000,16~000 \right\rbrace$, and show the performance for each of these values. Note that when the annotation weight equals $1$, there is no compensation for the imbalance between visual words and annotation words. The other experimental configurations are the same as in Section~\ref{sec:SupDeepDocNADE config}. Figure~\ref{fig:anno_weight} shows the performance comparison between different annotation weights. As expected, SupDeepDocNADE performs poorly when the annotation weight equals $1$. As the annotation weight is increased, the performance improves. Among the chosen annotation weights, $12~000$ performs best, achieving a MAP of $0.671$. The other annotation weights also achieve good performance compared with the DBM model~\citep{srivastava2013discriminative}: MAPs of $0.658$, $0.669$ and $0.670$ for annotation weights of $4000$, $8000$ and $16~000$, respectively. \subsubsection{Visualization of the Retrieval Results} \label{sec:SupDeepDocNADE demo} Since SupDeepDocNADE is used for multimodal data modeling, we illustrate here some results for multimodal data retrieval tasks.
More specifically, we show some qualitative results in two multimodal data retrieval scenarios: multimodal data query and generation of text from images.\\ \textbf{Multimodal Data Query:} Given a query corresponding to an image/annotation pair, the task is to retrieve other similar pairs from a collection, using the hidden representation learned by SupDeepDocNADE. In this task, the cosine similarity is adopted as the similarity metric. In this experiment, each query corresponds to an individual test example and the collection corresponds to the rest of the test set. Figure~\ref{fig:multimodal_retrieve} illustrates the retrieval results for the multimodal data query task, where we show the $4$ most similar images to the query input in the test set.\\ \textbf{Generating Text from Image:} As SupDeepDocNADE learns the relationship between the image and text modalities, we test its ability to generate text from given images. This task is implemented by feeding SupDeepDocNADE only the bag of visual words and selecting the annotation words according to their probability of being the next word, as in Section~\ref{sec: anno}. Figure~\ref{fig:predict_anno} illustrates the ground truth annotations and the $8$ most probable annotations generated by SupDeepDocNADE. We can see that SupDeepDocNADE generates meaningful text from the image modality, which shows that it effectively learned the statistical structure linking the two modalities. \section{Conclusion and Discussion} \label{conclusion} In this paper, we proposed SupDocNADE, a supervised extension of DocNADE, which can learn jointly from visual words, annotations and class labels. Moreover, we proposed a deep extension of SupDocNADE which outperforms its shallow version and can be trained efficiently. Although SupDocNADE and SupDeepDocNADE are the same in nature, SupDeepDocNADE differs from the single-layer version in its training procedure.
Specifically, the training of SupDeepDocNADE is performed over a subset of the words by summing the gradients over several orderings sharing the same permutation up to a randomly selected position $d$, whereas the single-layer version does the opposite and exploits a single randomly selected ordering but updates the conditionals on all the words. Like all topic models, our model is trained to model the distribution of the bag-of-words representations of images and can extract a meaningful representation from them. Unlike most topic models, however, the image representation is not modeled as a latent random variable, but instead as the hidden layer of a neural autoregressive network. A distinctive advantage of SupDocNADE is that it does not require any iterative, approximate inference procedure to compute an image's representation. Our experiments confirm that SupDocNADE is a competitive approach for multimodal data modeling and that SupDeepDocNADE achieves state-of-the-art performance on the challenging multimodal data benchmark MIR Flickr. {\small \bibliographystyle{IEEEtranNAT}
\section*{Introduction} The Euclidean 3-sphere $\bbS^3$ admits compact embedded minimal surfaces of any genus \cite{Law:S3, KarPS, KapY}. For simple examples like a great 2-sphere or the (minimal) Clifford torus there is a smooth deformation through constant mean curvature (\cmc) surfaces with the same topology, which can be expressed in terms of changing radii. For other minimal tori, as well as higher genus compact embedded minimal surfaces in $\bbS^3$, it might be useful to have a general deformation technique. We do this for the case of \cmc tori, and present a smooth topology-preserving deformation for \cmc tori in $\bbS^3$ in Theorem~\ref{th:nocommon}. By this theorem the moduli space of \cmc tori in $\bbS^3$ is locally one-dimensional. To get an idea of the global structure of the moduli space we turn to equivariant \cmc tori in $\bbS^3$ \cite{HsiL, Uhl:equi, BurK}, which are either flat or a truncation of some member of an associated family of a Delaunay surface \cite{DoCD}. We show that the moduli space of equivariant \cmc tori in $\bbS^3$ is a connected infinite graph whose edges are parameterized by the mean curvature, and by flowing through this moduli space of equivariant \cmc tori we classify the minimal, the embedded, and the Alexandrov embedded tori therein. Amongst harmonic maps \cite{Poh, Uhl, Uhl:connection} there is an important class consisting of harmonic maps of finite type \cite{BurFPP, hit:tor, PinS}. To a harmonic map of finite type there corresponds an associated algebraic curve whose compactification is called the spectral curve. The genus of this curve is called the spectral genus, and is denoted by $g$. The crucial fact that makes it possible to adapt the Whitham deformation technique \cite{MaOs,Kri_whi,GriS1} to the case of \cmc tori in $\bbS^3$ is that a \cmc torus in $\bbS^3$ always has finite spectral genus \cite{hit:tor, PinS}. The spectral curve of a \cmc torus is a double cover of the Riemann sphere with $2g+2$ branch points.
During the deformation the spectral curve changes in such a way that two branch points remain fixed, while the other $2g$ branch points may move around. The closing conditions involve a choice of two double points on the real part of the spectral curve: we call these the \sym points, and to ensure that the topology of the surface stays intact during the deformation, the flow of the \sym points has to be controlled. In the smooth deformation family of the Clifford torus there is a $\bbZ$-family which allows a deformation into absolute cohomogeneity one rotational embedded \cmc tori. Such a deformation is possible when in addition to the two \sym points there is a further double point on the real part of the spectral curve. By opening up this additional double point and moving the resulting two branch points off the real part, the spectral curve becomes a double cover of the Riemann sphere branched now at four points: it has spectral genus $g=1$ and is the spectral curve of a Delaunay surface, and the corresponding \cmc torus is a truncation of a Delaunay surface in $\bbS^3$. We show that at the end of the flow the new branch points pairwise coalesce with the two fixed branch points. Hence in the limit the coalescing pairs of branch points disappear, and the limit curve is an unbranched double cover of the sphere: the spectral curve of a bouquet of spheres. In the rotational case, see also \cite{HynPM}, our deformation corresponds to pinching the neck of a Delaunay surface, starting at a flat torus and continuing through to a bouquet of spheres. Thus the connected component of the Clifford torus is an infinite comb: the spine ($g=0$) consists of embedded flat \cmc tori parameterized by the mean curvature, and each tooth ($g=1$), consisting of embedded Delaunay tori, ends in a bouquet of spheres. By considering covers of Clifford tori, the moduli space of rotational \cmc tori is a $\bbZ^2$--family of such combs.
It turns out that each bouquet of spheres occurs exactly twice in this moduli space (Theorem~\ref{thm:sphere-bouquet}), so that we may glue the two families together there. Thus the moduli space of rotational \cmc tori in $\bbS^3$ is an infinite connected graph. A similar picture emerges in the non-rotational case. In each isogeny class there is a sequence of $g=0$ tori that can be deformed into $g=1$ tori. In the non-rotational case a $g=1$ deformation family stays away from bouquets of spheres, and we prove that every $g=1$ deformation family begins and ends at a $g=0$ torus (Theorem~\ref{thm:moduli-space}). Combined, the above results show that every deformation family of absolute cohomogeneity one \cmc tori ends at a flat \cmc torus. The classification of absolute cohomogeneity one \cmc tori is thus reduced to that of spectral curves of flat \cmc tori with a double point on the real part; this initial data is classified and interpreted geometrically. We classify the equivariant minimal tori, as well as the embedded and Alexandrov embedded equivariant \cmc tori, and prove that the minimal Clifford torus is the only embedded minimal equivariant torus in the 3-sphere. We also show that the spectral curve of an equivariant \cmc torus has no double points off the real part (Theorem~\ref{th:no_equi_bub}), which implies that there cannot be a Bianchi-B\"acklund transform of an equivariant \cmc torus into a \cmc torus. We conclude the paper by proving that the moduli space of equivariant \cmc tori in $\bbS^3$ is connected. Throughout the text we provide graphics of some of the surfaces under discussion. More images and videos of deformation families can be viewed at the website \cite{Sch:gallery}. \section{Spectral curve} \label{sec:spectral_curve} We start by recalling the description of {\sc{cmc}} tori in terms of spectral curves and abelian differentials \cite{Bob:tor,hit:tor, McI:tor,KilS_osaka}.
We then adapt a deformation technique from \cite{GriS1} and prove that any generic \cmc torus in $\bbS^3$ lies in a smooth family of \cmc tori in $\bbS^3$. Let $\curve$ be a hyperelliptic Riemann surface with a meromorphic function $\lambda$ of degree two and with branch points over $\lambda = 0 \,(y^+)$ and $\lambda = \infty \,(y^-)$. Then $\curve$ is the \emph{spectral curve} of an immersed {\sc{cmc}} torus in $\mathbb{S}^3$ if and only if the following four conditions hold: \begin{enumerate} \item Besides the hyperelliptic involution $\sigma$, the surface $\curve$ has two further anti-holomorphic involutions $\eta$ and $\varrho = \eta \circ \sigma = \sigma \circ \eta$, such that $\eta$ has no fixed points and $\eta(y^+) = y^-$. \item There exist two non-zero holomorphic functions $\mu_1,\,\mu_2$ on $\curve\setminus\{y^+,\,y^-\}$ such that for $i=1,\,2$ \begin{align*} \mu_i \circ \sigma &=\mu_i^{-1}& \mu_i \circ \eta &=\bar{\mu}_i& \mu_i \circ \varrho &=\bar{\mu}_i^{-1}. \end{align*} \item The forms $d \ln \mu_i$ are meromorphic differentials of the second kind with double poles at $y^\pm$. The singular parts of these two differentials at $y^+$ respectively $y^-$ are linearly independent. \item There are four fixed points $y_1,\,y_2 = \sigma(y_1),\,y_3,\,y_4 = \sigma(y_3)$ of $\varrho$, such that the functions $\mu_1$ and $\mu_2$ are either $1$ or $-1$ there. \end{enumerate} \begin{definition} Given the spectral curve of an immersed {\sc{cmc}} torus in $\mathbb{S}^3$, let $\lambda_1\in\mathbb{S}^1$ denote the value $\lambda(y_1) = \lambda(y_2)$, and $\lambda_2\in\mathbb{S}^1$ the value $\lambda(y_3) = \lambda(y_4)$ at the four fixed points of $\varrho$ where $\mu_1$ and $\mu_2$ are either $1$ or $-1$. We call $\lambda_1$ and $\lambda_2$ the \sym points.
\end{definition} For $\mi = \sqrt{-1}$, the mean curvature of the corresponding \cmc torus in terms of the \sym points is \begin{equation} \label{eq:cmc-I-II} H = \mi\,\frac{\lambda_2+\lambda_1}{\lambda_2-\lambda_1}\spaceperiod \end{equation} We shall describe spectral curves of \cmc tori in $\mathbb{S}^3$ via hyperelliptic surfaces of the form \begin{align*} \NU^2 = \lambda\,a(\lambda) \end{align*} where $a \in \bbC^g[\lambda]$ is a polynomial of degree $g$ and $$ \bar{\lambda}^{-2g}\bar{a}(\lambda) = a(\bar{\lambda}^{-1}) \quad\mbox{ and } \lambda^{-g}a(\lambda)\geq 0\quad\mbox{ for }|\lambda|=1. $$ The involutions of the spectral curve are \begin{equation} \label{eq:involutions} \begin{split} \sigma\,(\lambda,\,\NU) &=(\lambda,\,-\NU)\,,\\ \eta\,(\lambda,\,\NU) &= (\bar{\lambda}^{-1},\, -\bar{\NU}\bar{\lambda}^{-g-1})\,,\\ \varrho\,(\lambda,\,\NU) &= (\bar{\lambda}^{-1},\,\bar{\NU}\bar{\lambda}^{-g-1})\,. \end{split} \end{equation} Making use of a rotation of $\lambda$ and a rescaling of $\NU$ we may assume that $a$ is a polynomial with highest coefficient one. The meromorphic differentials $d\ln\mu_i$ have the form \begin{equation} \label{eq:b_i} d\ln\mu_i\coloneq\pi\frac{b_i(\lambda)}{\NU}\frac{d\lambda}{\lambda} \end{equation} with polynomials $b_i \in \bbC^{g+1}[\lambda]$ of degree $g+1$ and $\bar{\lambda}^{-g-1}\bar{b}_i(\lambda)=b_i(\bar{\lambda}^{-1})$. We next describe a one-parameter family of deformations of the spectral curve, depending on a deformation parameter $t$. We view all functions on the corresponding spectral curves as functions of the variables $\lambda$ and $t$. Since the path integrals of the differentials $d \ln \mu_i$ along all cycles in $H_1(\curve,\,\bbZ)$ are integer multiples of $2\pi\mi$, these integrals do not depend on the deformation parameter $t$. Further, see \cite{Mir} (ch.3, Prop.
1.10), a meromorphic function $f$ on a hyperelliptic Riemann surface given by $\NU^2 = h(\lambda)$ is of the form $f(\lambda) = r(\lambda) + \NU \,s(\lambda)$ with rational functions $r,\,s$. Hence both $\del_{\,t} \ln \mu_i$ are global meromorphic functions on $\curve$ whose only possible poles are at the branch points of $\curve$. More precisely, these meromorphic functions are of the form \begin{equation} \label{eq:c_i} \del_{\,t} \ln \mu_i =\pi\frac{\mi \,c_i(\lambda)}{\NU} \end{equation} with polynomials $c_i\in \bbC^{g+1}[\lambda]$ of degree $g+1$ and $\bar{\lambda}^{-g-1}\bar{c}_i(\lambda)=c_i(\bar{\lambda}^{-1})$. Integrability $\del^{\,2}_{\,t\lambda} \ln \mu_i = \del^{\,2}_{\,\lambda t} \ln \mu_i$ reads $\del_{\,t} \left( \lambda^{-1}\NU^{-1}b_i \right) = \del_{\,\lambda} \NU^{-1}\mi c_i$, which yields \begin{equation}\label{eq:adot} 2 a(\lambda)\dot{b}_i(\lambda)-\dot{a}(\lambda)b_i(\lambda)= \mi(2\lambda a(\lambda)c_i'(\lambda)- \lambda a'(\lambda)c_i(\lambda)-a(\lambda)c_i(\lambda)). \end{equation} Here dash and dot denote the derivatives with respect to $\lambda$ and $t$, respectively. The differential $$ \Omega = (\del_{\,t} \ln \mu_1)\,d\ln \mu_2 - (\del_{\,t} \ln \mu_2 ) \,d\ln \mu_1 $$ is a meromorphic $1$-form on $\curve$ whose only poles, of order at most three, are at $\lambda = 0,\,\infty$, and which has roots at the \sym points $\lambda_1,\,\lambda_2$. Further, since $\eta^*\bar{\Omega} = \Omega$, $\varrho^*\bar{\Omega} = \Omega$ and $\sigma^*\Omega = \Omega$, we conclude that \begin{equation*} \Omega = \pi^2 \cc_1\frac{(\lambda - \lambda_1)(\lambda -\lambda_2)} {\lambda\sqrt{\lambda_1\,\lambda_2}} \frac{d\lambda}{\mi\lambda} \end{equation*} with a real function $\cc_1 = \cc_1(t)$.
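Before proceeding, the computation leading to \eqref{eq:adot} can be double-checked symbolically. The following sympy sketch is our own verification, not part of the paper: it confirms that clearing the common factor $2\lambda a\NU$ from the integrability condition leaves exactly the stated polynomial identity.

```python
import sympy as sp

lam, t = sp.symbols('lambda t', positive=True)
a = sp.Function('a')(lam, t)
b = sp.Function('b')(lam, t)
c = sp.Function('c')(lam, t)
nu = sp.sqrt(lam * a)                      # nu^2 = lambda * a(lambda)

lhs = sp.diff(b / (lam * nu), t)           # d/dt of b_i / (lambda nu)
rhs = sp.diff(sp.I * c / nu, lam)          # d/dlambda of i c_i / nu

# right-hand side of eq. (adot), written with sympy derivatives
claimed = (2*a*sp.diff(b, t) - sp.diff(a, t)*b
           - sp.I*(2*lam*a*sp.diff(c, lam)
                   - lam*sp.diff(a, lam)*c - a*c))

# clearing the common factor 2 lambda a nu should leave exactly eq. (adot)
residual = sp.simplify(2*lam*a*nu*(lhs - rhs) - claimed)
assert residual == 0
```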
Using \eqref{eq:b_i} and \eqref{eq:c_i} we obtain \begin{equation} \label{eq:c1c2} b_1(\lambda)c_2(\lambda)-b_2(\lambda)c_1(\lambda)= \cc_1 a(\lambda)\frac{(\lambda-\lambda_1)(\lambda-\lambda_2)} {\sqrt{\lambda_1\lambda_2}}\,, \end{equation} and combining the above gives \begin{equation} \label{eq:diff_ode} (\del_{\,t} \ln \mu_1)\,d\ln \mu_2 - (\del_{\,t} \ln \mu_2 ) \,d\ln \mu_1 = \pi^2 \cc_1\frac{(\lambda - \lambda_1)(\lambda -\lambda_2)} {\lambda \,\sqrt{\lambda_1\,\lambda_2}} \frac{d\lambda}{\mi\lambda}\,. \end{equation} To preserve the topology during the flow, the values of $\ln\mu_i$ should be fixed at the two sym points $\lambda=\lambda_1$ and $\lambda=\lambda_2$. Consequently $\del_{\,t} \ln\mu_i |_{\lambda = \lambda_j} = \bigl( (\del_\lambda \ln\mu_i) \,\dot{\lambda} + \del_{\,t} \ln\mu_i \bigr)|_{\lambda = \lambda_j} =0$, which gives \begin{equation} \label{eq:sym-point-deformation} \dot{\lambda}_j = -\left.\frac{\del_{\,t}\ln\mu_i} {\del_\lambda\ln\mu_i} \right|_{\lambda=\lambda_j}\,. \end{equation} \begin{theorem} \label{th:nocommon} Let $\curve$ be a genus $g$ spectral curve of a {\sc{cmc}} torus in $\bbS^3$. If the two differentials $d \ln \mu_i$ for $i=1,\,2$ have no common roots, then $\curve$ is contained in a unique smooth one dimensional family of spectral curves of {\sc{cmc}} tori in $\mathbb{S}^3$. \end{theorem} \begin{proof} If the roots of $b_1$ and $b_2$ are pairwise distinct, then the $2g+2$ values of equation \eqref{eq:c1c2} at these roots uniquely determine the values of $c_1$ and $c_2$ there. Therefore the equation \eqref{eq:c1c2} determines the polynomials $c_1$ and $c_2$ uniquely up to a real multiple of $b_1$ and $b_2$. The choice $c_1=b_1$ and $c_2=b_2$ in \eqref{eq:c_i} and \eqref{eq:adot} corresponds to a rotation of $\lambda$. For given $c_1$ and $c_2$ the equations \eqref{eq:adot} determine uniquely the derivatives of all roots of the polynomials $a$ and $b_i$. 
The condition $\bar{\lambda}^{-2g}\bar{a}(\lambda)=a(\bar{\lambda}^{-1})$ together with the assumption that the highest (and lowest) coefficient of the polynomial $a$ has absolute value one determines the polynomial $a$ in terms of its roots up to multiplication with $\pm 1$. We conclude that these conditions on $a$ determine $\dot{a}$, $\dot{b}_1$ and $\dot{b}_2$ in terms of $c_1$ and $c_2$. We remark that due to equation \eqref{eq:c1c2} the solutions $\dot{a}$ of both equations \eqref{eq:adot} coincide. Finally, the condition that the highest (and lowest) coefficient of $a$ is equal to one fixes the rotations. Therefore there exist unique solutions $c_1$ and $c_2$ of \eqref{eq:c1c2}. \end{proof} We expect that one can pass through common zeroes of the $b_i$'s, thereby making the deformation global, but this will be considered elsewhere. Below we will restrict to the cases of spectral genera $g=0$, where this is trivially true, and $g=1$, where the roots of $b_1$ and $b_2$ turn out to always be distinct throughout the flow (Corollary~\ref{th:no_common_zeroes}). \section{Flows of equivariant cmc tori in the 3-sphere} The double cover of the isometry group of $\bbS^3 \cong \mathrm{SU}_2$ is $\mathrm{SU}_2 \times \mathrm{SU}_2$ via the action $P \mapsto F\,P\,G^{-1}$. A surface is \emph{equivariant} if it is preserved set-wise by a one-parameter family of isometries. To an equivariant surface we associate two \emph{axes}: these are geodesics which are fixed set-wise by the one-parameter family of isometries. An equivariant surface is rotational precisely when one of its axes is fixed point-wise. Equivariant {\sc{cmc}} surfaces have spectral genus zero or one \cite{BurK}. By a theorem of do Carmo and Dajczer \cite{DoCD}, an equivariant {\sc{cmc}} surface is a member of an associated family of a Delaunay surface, and so is determined up to isometry by an elliptic modulus and its \sym points.
Hence an equivariant \cmc surface in the 3-sphere is parameterized by its elliptic modulus, its mean curvature, and its associated family parameter. We will express the differential equations \eqref{eq:diff_ode} and \eqref{eq:sym-point-deformation} in terms of the three coordinates \begin{equation}\label{eq:coor0} (\jq,\,\jk,\,\jh) \in [-1,\,1]^3 \end{equation} where $\jq$ is the elliptic modulus, and $\jk,\,\jh$ are defined in terms of the \sym points as \begin{equation}\label{eq:coor} \jk \coloneq \frac{1}{2} \left(\sqrt{\lambda_1\lambda_2}\, + \, 1/\sqrt{\lambda_1\lambda_2}\,\,\right) \quad \mbox{ and } \quad \jh \coloneq \frac{1}{2}\left(\sqrt{\frac{\lambda_1}{\lambda_2}}\, + \, \sqrt{\frac{\lambda_2}{\lambda_1}} \,\,\right)\,. \end{equation} The mean curvature $H$ in \eqref{eq:cmc-I-II} can then be expressed as \begin{equation} \label{eq:H_and_h} H = \frac{\jh}{\sqrt{1-\jh^2}}\,. \end{equation} The following identities will be used below: \begin{equation} \label{eq:sqrt_q_formula} \begin{split} 2\sqrt{\jk^2 - 1} &= \sqrt{\lambda_1\lambda_2}\, - \, 1/\sqrt{\lambda_1\lambda_2} \,, \quad 2\sqrt{\jh^2 - 1} = \sqrt{\lambda_1/\lambda_2}\, - \, \sqrt{\lambda_2/\lambda_1} \,,\\ 4(\jk^2 - \jh^2) &= \left( \lambda_1 -\lambda_1^{-1} \right) \left(\lambda_2 - \lambda_2^{-1} \right)\,, \quad 4\jk\sqrt{\jk^2 -1} = \lambda_1\lambda_2 - \lambda_1^{-1}\lambda_2^{-1} \\ 4\jh\sqrt{\jk^2-1} &= \lambda_1 - \lambda_1^{-1} + \lambda_2 - \lambda_2^{-1}\,, \quad 4\jk\sqrt{\jh^2-1} = \lambda_1 - \lambda_1^{-1}-\lambda_2+\lambda_2^{-1} \\ 4\jh\sqrt{\jh^2-1} &= \lambda_1\lambda_2^{-1} - \lambda_1^{-1}\lambda_2\,. 
\end{split} \end{equation} The derivatives of $\jk,\,\jh$ with respect to the flow parameter are \begin{equation} \label{eq:qdot_hdot} \dot{\jk} = \frac{\sqrt{\jk^2-1}}{2} \left(\frac{\dot{\lambda}_1}{\lambda_1}+ \frac{\dot{\lambda}_2}{\lambda_2}\right)\,, \qquad \dot{\jh} = \frac{\sqrt{\jh^2-1}}{2} \left(\frac{\dot{\lambda}_1}{\lambda_1}- \frac{\dot{\lambda}_2}{\lambda_2}\right)\,. \end{equation} \subsection{Spectral genus zero.} In the spectral genus zero case, $a \equiv 1$, the spectral curve is $\nu^2 = \lambda$, and the elliptic modulus $\jq \equiv \pm 1$. The functions $b_i$ in \eqref{eq:b_i} for $i=1,\,2$ are polynomial in $\lambda$ of degree one, and from $\bar{\lambda}^{-1}\bar{b}_i(\lambda)=b_i(\bar{\lambda}^{-1})$ we conclude that $b_i = \beta_i\,\lambda + \bar{\beta}_i$ for some smooth complex valued functions $t \mapsto \beta_i(t)$. Thus $d\ln\mu_i$ integrates to \begin{equation*} \ln \mu_j = \frac{2 \pi (\beta_j\,\lambda - \bar{\beta}_j)}{\nu}\, , \end{equation*} and consequently the functions $c_i,\,i=1,\,2$ in \eqref{eq:c_i} are $c_i = 2\mi (\dot{\bar\beta}_i - \dot\beta_i\lambda)$. We may fix one of the \sym points, and assume without loss of generality that $\lambda_1 \equiv 1$ during the flow. Then $\left.\partial_{\,t}\ln\mu_j\right|_{\lambda =1} =0$, which is equivalent to $c_i(\lambda=1) =0$, or equivalently $\dot{\beta}_i \in \bbR$. Thus equation \eqref{eq:diff_ode} reads \begin{equation*} 2\mi\dot{\beta}_2(\beta_1\lambda+\bar{\beta}_1) - 2\mi\dot{\beta}_1(\beta_2\lambda+\bar{\beta}_2) = \frac{C_1}{\sqrt{\lambda_2}}\,(\lambda_2 -\lambda)\,. 
\end{equation*} Evaluating this at $\lambda = -\bar{\beta}_i/\beta_i$ gives \begin{equation*} \dot{\beta}_i = \frac{C_1\,(\beta_i \lambda_2^{1/2} + \bar{\beta}_i \lambda_2^{-1/2})}{2\mi\,(\bar{\beta}_1\beta_2 - \beta_1\bar{\beta}_2)} \end{equation*} From \eqref{eq:sym-point-deformation} we get \begin{equation*} \frac{\dot{\lambda}_2}{\lambda_2} = \frac{c_1}{\mi b_1} = \frac{2\dot{\beta}_1(1-\lambda_2)}{\beta_1\lambda_2 +\bar{\beta}_1} = C_1 \frac{\lambda_2^{1/2}-\lambda_2^{-1/2}}{\mi (\beta_1\bar{\beta}_2 - \bar{\beta}_1\beta_2)} \end{equation*} Set $C_1 \coloneq \mi (\beta_1\bar{\beta}_2 - \bar{\beta}_1\beta_2)$. Then together with \eqref{eq:sqrt_q_formula} and the fact that $\jk=\jh$ when $\lambda_1 \equiv 1$, the flow equations \eqref{eq:diff_ode} and \eqref{eq:sym-point-deformation} reduce to the single equation $\dot{\jh} = \jh^2 - 1$. The solution to this equation is given by $\jh(t) = -\tanh(t + C)$ with some constant of integration $C\in\bbR$. Consequently, the argument of the \sym point $\mathrm{arg}[\lambda_2] = 2 \arccos(-\tanh(t))$ can vary over all of the interval $(0,\,2\pi)$, and is strictly monotonic. Hence also the mean curvature $H$ by equation \eqref{eq:H_and_h} is strictly monotonic, and given by $H(t) = -\sinh(t)$. In summary, we have proven the following \begin{theorem} \label{th:cliff_fam} Every flat \cmc torus in the 3-sphere lies in a smooth $\bbR$--family of flat \cmc tori. Each family is parameterized by the mean curvature. \end{theorem} \subsection{Spectral genus one.} In the spectral genus one case, the meromorphic differentials $d\ln\mu_i$ are linear combinations of the derivative of a meromorphic function and an elliptic integral. We write these meromorphic differentials in terms of Jacobi's elliptic functions. We define an elliptic curve by \begin{equation}\label{eq:nu} 4\NU^2=(\lambda-\jq)(\lambda^{-1}-\jq). \end{equation} Here $\jq\in[-1,1]$ is the real modulus. 
Then \begin{equation} \label{eq:dnu} d\NU = \jq\frac{\lambda^{-1} - \lambda}{8\NU}\frac{d\lambda}{\lambda}\,, \end{equation} and the only roots of $d\NU$ are at $\lambda = \jq,\,\jq^{-1}$. In addition to the three involutions in \eqref{eq:involutions}, the elliptic curve \eqref{eq:nu} has a further holomorphic involution \begin{equation}\label{eq:involtau} \tau \,(\lambda,\,\NU) = (\lambda^{-1},\NU)\,. \end{equation} We decompose $\frac{\ln\mu_i}{\pi\mi}$ into its symmetric and skew symmetric parts with respect to $\tau$. The symmetric part is a real multiple of the single-valued meromorphic function $\NU$, and the skew symmetric part is a real multiple of a multi-valued function $\OMEGA$ with real periods. The first homology group of the elliptic curve \eqref{eq:nu} is generated by a cycle around the two branch points at $\lambda=\jq^{\pm 1}$ and the cycle $\bbS^1=\{\lambda\mid|\lambda|=1\}$. The first cycle is symmetric with respect to $\tau$ and the second skew symmetric. Therefore the integral of $d\OMEGA$ along the first cycle vanishes. We normalize $\OMEGA$ so that the integral of $d\OMEGA$ along the second cycle equals $2$. Together with the first-order poles of $\OMEGA$ at $y^{\pm}$ and the skew symmetry with respect to $\tau$, this normalization determines $\OMEGA$ uniquely. Further, since $\int_{\bbS^1} d\ln\mu_i \in 2\pi\mi\bbZ$, we conclude that the functions $\ln\mu_i$ are linear combinations of $\NU$ and $\OMEGA$, and are given by \begin{equation}\label{eq:lnmu} \ln\mu_1 = \pi\mi\,(x_1\,\NU + p_{10}\,\OMEGA)\,,\quad \ln\mu_2 = \pi\mi\,(x_2\,\NU + p_{20}\,\OMEGA)\,\quad \mbox{for }\, p_{10},\,p_{20} \in \bbZ\,. \end{equation} We shall express $\OMEGA$ as a linear combination of complete elliptic integrals of the first and second kind.
First we shall relate the curve \eqref{eq:nu} to the elliptic curve $\curve$ in Legendre's form with elliptic modulus $\tfrac{1-\jq}{1+\jq}$ given by \begin{equation*} \jy^2=\left(1-\jx^2\right)\left(1-(\tfrac{1-\jq}{1+\jq})^2\jx^2\right). \end{equation*} Let $\JK$ and $\JE$ denote the complete elliptic integrals of the first and second kind \begin{equation*} \JK(\jq) \coloneq\int_0^1\tfrac{d\jx}{\sqrt{(1-\jx^2)(1-\jq^2\jx^2)}}\,,\quad \JE(\jq) \coloneq\int_0^1\sqrt{\frac{1-\jq^2\jx^2}{1-\jx^2}}\,d\jx\,. \end{equation*} Since $\JE$ has first-order poles at the two points over $\jx=\infty$, these two points correspond to $y^{\pm}$. The involution $\tau$ corresponds to $(\jx,\jy)\mapsto(\jx,-\jy)$. Therefore we set \begin{align*} \lambda&= \frac{1+(\tfrac{1-\jq}{1+\jq})^2(1-2\jx^2)-2\frac{1-\jq}{1+\jq}\jy} {1-(\tfrac{1-\jq}{1+\jq})^2}\,,& \lambda^{-1}&= \frac{1+(\tfrac{1-\jq}{1+\jq})^2(1-2\jx^2)+2\frac{1-\jq}{1+\jq}\jy} {1-(\tfrac{1-\jq}{1+\jq})^2}\,,& \NU&=\frac{\tfrac{1-\jq}{1+\jq}\jx}{1+\tfrac{1-\jq}{1+\jq}}. \end{align*} The integrals of the meromorphic differentials $d\ln\mu_i$ along all cycles of $\curve$ are purely imaginary. Define \begin{equation} \label{eq:omega-defn} d\OMEGA\coloneq\left(\JE(\tfrac{1-\jq}{1+\jq})-\JK(\tfrac{1-\jq}{1+\jq}) \left(1-(\tfrac{1-\jq}{1+\jq})^2\jx^2\right)\right) \frac{2d\jx}{\pi\mi\jy}. \end{equation} The cycle around the two branch points $\lambda=\jq^{\pm 1}$ corresponds to the real period and the cycle $\bbS^1$ to the imaginary period of Jacobi's elliptic functions. The integral along the real period vanishes, and due to Legendre's relation \cite[17.3.13]{AbrS} the integral along the imaginary period is equal to $2$. In summary, we have found a linear combination of elliptic integrals of the first and second kind which obeys the conditions that characterize $\OMEGA$.
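For later use we recall Legendre's relation \cite[17.3.13]{AbrS}; writing $m\in(0,1)$ for a generic modulus, it reads \begin{equation*} \JE(m)\,\JK(\sqrt{1-m^2}) + \JE(\sqrt{1-m^2})\,\JK(m) - \JK(m)\,\JK(\sqrt{1-m^2}) = \tfrac{\pi}{2}\,. \end{equation*}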
In terms of the complete elliptic integrals $\JK'=\JK'(\jq)=\JK(\sqrt{1-\jq^2})$ and $\JE'=\JE'(\jq)=\JE(\sqrt{1-\jq^2})$, the functions $\lambda$ and $\NU$, and the formulas \cite[17.3.29 and 17.3.30]{AbrS} \begin{equation*} \JE(\tfrac{1-\jq}{1+\jq}) = (1+\tfrac{1-\jq}{1+\jq})\JE'+(1-\tfrac{1-\jq}{1+\jq})\JK'\,,\quad \JK(\tfrac{1-\jq}{1+\jq}) = \frac{2}{1+\tfrac{1-\jq}{1+\jq}}\JK'\,, \end{equation*} the differential $d\OMEGA$ in \eqref{eq:omega-defn} simplifies to \begin{equation}\label{eq:omega} d\OMEGA=\frac{2\JE'-\jq\JK'(\lambda+\lambda^{-1})}{4\pi\NU}\frac{d\lambda}{\mi\lambda}\,. \end{equation} \begin{proposition}\label{th:alpha_less_beta} The complete elliptic integrals $\JK'$ and $\JE'$ satisfy \begin{equation*} 1\leq\frac{2\JE'}{1+\jq^2}<\JK'<\frac{\JE'}{|\jq|}\quad \mbox{for}\quad 0<|\jq|<1. \end{equation*} \end{proposition} \begin{proof} Assume first that $\jq\in(0,1)$. Since $\tau^*d\OMEGA = -d\OMEGA$, we have that $$ \int_{\lambda=\jq}^{\lambda=\jq^{-1}}d\OMEGA = 2\int_{\lambda=\jq}^{\lambda=1}d\OMEGA = 0\,. $$ Hence the function $2\JE'-\jq\JK'(\lambda+\lambda^{-1})$ has a real root at some $r \in (\jq,\,1)$, and thus also at $r^{-1} \in (1,\,\jq^{-1})$. Then $2< r+r^{-1} < \jq +\jq^{-1}$, which together with $2\JE'-\jq\JK'(r + r^{-1})=0$ gives $2\JE' > 2\jq\JK'$ and $2\JE'<\JK'(1+\jq^2)$. Finally, from $2>\jq^2 + 1$ and $\JE' \geq 1$ \cite[13.8.11]{Bat2}, we obtain $2\JE' \geq 1+\jq^2$. For $\jq\in(-1,\,0)$, the function $2\JE'-\jq\JK'(\lambda+\lambda^{-1})$ has a pair of reciprocal real roots $s,\,s^{-1}$ with $-1<s<\jq$, and analogous arguments prove the assertion in this case. \end{proof} \begin{corollary}\label{th:no_common_zeroes} The differentials $d\NU$ and $d\OMEGA$ have no common zeroes.
\end{corollary} \begin{proof} While $d\NU$ in \eqref{eq:dnu} has roots only at $\lambda = \jq,\,\jq^{-1}$, we saw in the proof of Proposition \ref{th:alpha_less_beta} that $d\OMEGA$ has roots only at $\lambda\in(\jq,\,1) \cup (1,\,\jq^{-1})$ when $\jq\in(0,1)$, and at $\lambda\in(-1,\,\jq) \cup (\jq^{-1},\,-1)$ when $\jq\in(-1,\,0)$. \end{proof} \begin{theorem} \label{thm:flow} Every spectral genus one \cmc torus in $\bbS^3$ lies on an integral curve of the vector field \begin{equation}\label{eq:torus-flow} \left( \begin{array}{c} \dot{\jq} \\ \dot{\jk} \\ \dot{\jh} \end{array} \right) = \left( \begin{array}{c} \jq\,(\JE'\,\jk - \jq\,\JK'\,\jh) \\ \frac{1-\jk^2}{1-\jq^2}\,((1 + \jq^2)\JE'-2\jq^2\JK') \\ \jq\frac{1-\jh^2}{1-\jq^2}\,(2\JE'-(1+\jq^2)\JK') \end{array} \right)\,. \end{equation} The vector field $(\XzeroDot,\,\XoneDot,\,\XtwoDot)$ is analytic on the set $\calD \coloneq \{(\Xzero,\,\Xone,\,\Xtwo)\in\bbR^3\suchthat \Xzero\not=0\}$, and its zero set is $\{\Xzero^2=1\}\cap\{\Xone=\Xzero\Xtwo\}\cap\calD$. \end{theorem} \begin{proof} We shall calculate the differential equations \eqref{eq:diff_ode} and \eqref{eq:sym-point-deformation} in terms of the coordinates \eqref{eq:coor0}. From \eqref{eq:nu} we compute the derivative of $\NU$ with respect to $t$, and obtain \begin{equation} \label{eq:nu dot} \dot{\NU} =\dot{\jq}\,\frac{2\jq - \lambda - \lambda^{-1}}{8\NU}\,. \end{equation} In agreement with the skew-symmetry of $\OMEGA$ with respect to $\tau$ we set \begin{equation*} \dot{\OMEGA}=\cc_2\frac{\lambda-\lambda^{-1}}{4\pi\mi\NU}. \end{equation*} From \eqref{eq:lnmu} we obtain $\del_{\,t}\ln\mu_j = \pi\mi\,(\dot{x}_j\,\NU + x_j\dot{\NU} + p_{j0}\,\dot{\OMEGA})$ and $d\ln\mu_j = \pi\mi\,(x_j\,d\NU + p_{j0}\,d\OMEGA)$ for $j = 1,\,2$.
Putting all this together, the differential $\Omega = (\del_{\,t} \ln \mu_1)\,d\ln \mu_2 - (\del_{\,t} \ln \mu_2 ) \,d\ln \mu_1$ reads \begin{equation*} \begin{split} \Omega &= \pi^2\bigl( (\dot{x}_1x_2-\dot{x}_2x_1)\,\NU\,\NU' + (\dot{x}_2 p_{10} - \dot{x}_1 p_{20})\,\NU\,\OMEGA' + (p_{10} x_2 - p_{20} x_1) (\dot{\NU}\,\OMEGA' - \NU'\dot{\OMEGA}) \bigr) \,\,\mbox{ with } \\ \NU\,\NU' &= \frac{\jq\,(\lambda^{-1} - \lambda)}{8\,\lambda}\,,\qquad \NU\,\OMEGA' = \frac{2\,\JE' - \jq\,(\lambda + \lambda^{-1})\,\JK'}{4\,\pi\,\mi\,\lambda}\,,\\ \dot{\NU}\,\OMEGA' - \NU'\dot{\OMEGA} &= \frac{\cc_2\jq(\lambda - \lambda^{-1})^2 + \dot{\jq}\,(\lambda - 2\,\jq + \lambda^{-1})(\jq\,(\lambda+\lambda^{-1})\,\JK' - 2\,\JE')}{8\,\pi\,\mi\,\lambda\,(\lambda-\jq)(\lambda^{-1} - \jq)}\,. \end{split} \end{equation*} Since $\dot{\NU}\,\OMEGA' - \NU'\dot{\OMEGA}$ has simple poles at $\lambda = \jq,\,\jq^{-1}$ while $\Omega$ does not, the numerator of $\dot{\NU}\,\OMEGA' - \NU'\dot{\OMEGA}$ must vanish at $\lambda = \jq,\,\jq^{-1}$, giving \begin{equation*} \cc_2 = \dot{\jq}\,\frac{(\jq^2 + 1)\,\JK' - 2\,\JE'}{\jq^2 - 1}\,. \end{equation*} The unique solution of equation \eqref{eq:diff_ode} can now be computed, and is given by \begin{equation*} \begin{split} &\dot{\jq} = \frac{4\pi \cc_1(\jq^2 - 1)(\jq\,\jh\,\JK' - \jk\,\JE')} {(x_1 p_{20} - x_2 p_{10} )(\JE'^2-\jq^2\JK'^2)}\,,\qquad \dot{x}_1x_2-\dot{x}_2x_1 = \frac{8\cc_1\sqrt{1-\jk^2}}{\jq}\,,\\ &\dot{x}_1p_{20}-\dot{x}_2p_{10} = \frac{4\pi \cc_1(\jq\jk(\JE'-\JK') + \jh(\JE'-\jq^2\JK'))}{\JE'^2-\jq^2\JK'^2}\,. \end{split} \end{equation*} Since $\dot{\OMEGA} = \partial_{\jq} \OMEGA\, \dot{\jq}$, the formulas above imply that \begin{equation} \label{eq:omega'} \frac{\del\OMEGA}{\del\jq} = \frac{(1+\jq^2)\JK'-2\JE'}{4\pi\mi(1-\jq^2)\NU}\, (\lambda-\lambda^{-1})\,.
\end{equation} Now setting \begin{equation*} \cc_1 \coloneq -\frac{\jq\,(x_1 p_{20} - x_2 p_{10})(\JE'^2-\jq^2\JK'^2)} {4\pi(\jq^2 - 1)} \end{equation*} gives $\dot{\jq} = \jq(\JE'\jk-\jq\JK'\jh)$ as required. The deformation equations of the \sym points \eqref{eq:sym-point-deformation} read \begin{equation*} \begin{split} \dot{\lambda}_j\,(x_1\,\NU' + p_{10}\,\OMEGA') &= -(\dot{x}_1\,\NU + x_1\,\dot\NU + p_{10}\,\dot\OMEGA ) \\ \dot{\lambda}_j\,(x_2\,\NU' + p_{20}\,\OMEGA') &= -(\dot{x}_2\, \NU + x_2\,\dot\NU + p_{20}\,\dot\OMEGA ) \end{split} \end{equation*} Multiplying the first equation by $p_{20}$ and the second equation by $p_{10}$ and subtracting gives \begin{equation*} \dot{\lambda}_j = -\left.\frac{(\dot{x}_1\,p_{20} - \dot{x}_2\,p_{10})\,\NU + (x_1\,p_{20} - x_2\,p_{10})\,\dot{\NU}}{(x_1\,p_{20} - x_2\,p_{10})\,\NU'} \right|_{\lambda = \lambda_j}\,. \end{equation*} Using the above formulae as well as \eqref{eq:sqrt_q_formula} we obtain \begin{equation*} \begin{split} \frac{\dot{\lambda}_1}{\lambda_1} + \frac{\dot{\lambda}_2}{\lambda_2}&= \frac{2\sqrt{\jk^2-1}}{\jh^2 - \jk^2} \left( \frac{\jq\jk(\JE'-\JK')+\jh(\JE'-\jq^2\JK')}{1-\jq^2}(2\jq\jk-\jh(\jq^2+1)) + (\jk\JE'-\jh\jq\JE')(\jk-\jh\jq) \right) \\ &= \frac{2\sqrt{(\jk^2-1)}}{\jq^2 - 1} \left( (1+\jq^2)\JE' - 2\jq^2\JK' \right)\,, \\ \frac{\dot{\lambda}_1}{\lambda_1} - \frac{\dot{\lambda}_2}{\lambda_2}&= \frac{2\sqrt{\jh^2-1}}{\jh^2 - \jk^2} \left( \frac{\jq\jk(\JE'-\JK')+\jh(\JE'-\jq^2\JK')}{1-\jq^2}((\jq^2+1)\jk - 2\jh\jq) + (\jk\JE'-\jh\jq\JE')(\jq\jk-\jh) \right) \\ &= \frac{2\sqrt{\jh^2-1}}{\jq^2 - 1}\,\jq \left( 2\JE'-(1+ \jq^2) \JK' \right) \end{split} \end{equation*} and putting these into \eqref{eq:qdot_hdot} gives the equations for $\dot\jk$ and $\dot\jh$, and concludes the proof of \eqref{eq:torus-flow}. The elliptic integrals $\JK'$ and $\JE'$ are analytic on $\jq\in\Rstar$, and at $\jq=\pm1$ equal to $\frac{\pi}{2}$. 
Therefore the right hand sides of \eqref{eq:torus-flow} extend analytically to $\jq\in\Rstar$. Due to Proposition \ref{th:alpha_less_beta}, for $\jk,\jh\in(-1,1)$ we have \begin{align}\label{eq:monotonicity} \dot{\jk}>0&\mbox{ for }0<|\jq|<1\,,& \dot{\jh}>0&\mbox{ for }-1<\jq<0\,,& \dot{\jh}<0&\mbox{ for }0<\jq<1\,. \end{align} By the properties of $\JK'$ and $\JE'$ the vector field $(\XzeroDot,\,\XoneDot,\,\XtwoDot)$ is analytic in $\Xzero$ on $\Rstar$ and has simple zeros at $\Xzero=\pm 1$. Thus the vector field is analytic on $\calD$. The zero set statement follows from the fact that on $\{\Xzero\not=0\}$, the functions $(1+\jq^2)\JE'-2\jq^2\JK'$ and $2\JE'-(1+\jq^2)\JK'$ have zeros only at $\jq =\pm 1$, and $\JE'=\jq\JK'$ holds only at $\jq =1$. That $\jq =\pm 1$ is a simple root follows from the series expansions at $\jq =1$ (similarly at $\jq =-1$), given by \begin{align*} \JK'(\jq) &= \pi \left( \tfrac{1}{2} -\tfrac{1}{4}(\jq - 1) + \tfrac{5}{32}(\jq-1)^2 - \tfrac{7}{64}(\jq-1)^3 + \mathrm{O}(\jq-1)^4 \right)\,, \\ \JE'(\jq) &= \pi \left( \tfrac{1}{2} + \tfrac{1}{4}(\jq - 1) + \tfrac{1}{32}(\jq-1)^2 - \tfrac{1}{64}(\jq-1)^3 + \mathrm{O}(\jq-1)^4 \right)\,. \end{align*} \end{proof} \begin{figure}[t] \centering \includegraphics[width=5.4cm]{torus213.eps}\hspace{0.0cm} \includegraphics[width=5.4cm]{torus214.eps}\hspace{0.0cm} \includegraphics[width=5.4cm]{torus215.eps}\hspace{0.0cm} \caption{ \label{fig:twizzled2} Equivariant $(2,\,1,\,n)$ \cmc tori ($n=3,\,4,\,5$). By Proposition~\ref{th:minlobes}, there are no twizzled tori with one or two major lobes. } \end{figure} \subsection{Global behaviour of the integral curves} The proof of the following Proposition~\ref{prop:torus-of-revolution} is deferred to section \ref{sec:frame}. \begin{proposition} \label{prop:torus-of-revolution} An equivariant \cmc torus in $\mathbb{S}^3$ is rotational if and only if the \sym points are reciprocal. 
\end{proposition} Special values of $\Xone$ and $\Xtwo$ include \begin{equation*} \begin{aligned} \Xone^2=1 &\iff \lambda_1=\lambda_2^{-1}\ (\text{rotational})\spacecomma& \Xtwo^2=1 &\iff \lambda_1 = \lambda_2\ (H=\infty)\spacecomma\\ \Xone=0 &\iff \lambda_1 = -\lambda_2^{-1}\spacecomma& \Xtwo=0 &\iff \lambda_1 = -\lambda_2\ (H=0)\spacecomma\\ \Xone=\Xtwo &\iff \lambda_1=1\text{ or }\lambda_2 = 1\spacecomma& \Xone=-\Xtwo &\iff \lambda_1=-1\text{ or }\lambda_2 = -1 \spaceperiod \end{aligned} \end{equation*} By the deformation equations \eqref{eq:torus-flow}, if $\jk^2(t_0) =1$ for some $t_0$, then $\jk^2 \equiv 1$ throughout the flow. Hence rotational tori stay rotational during the flow. We begin the qualitative analysis of the flow by first considering equivariant \cmc tori which are not rotational ($\Xone^2<1$): we call these \emph{twizzled} tori. The tori of revolution ($\Xone^2=1$) are treated subsequently in Proposition~\ref{prop:moduli-tori-of-revolution}. The flow will be investigated in the open solid cuboid \begin{equation*} \calB \coloneq \{(\jq,\jk,\jh)\in(-1,1)^3\suchthat\Xzero\not=0\}\spaceperiod \end{equation*} Due to \eqref{eq:monotonicity}, the function $\jk-\sign(\jq)\jh$ is strictly monotonic on $\calB$, where $\sign(\jq)$ is locally constant. \begin{proposition} \label{prop:levelset} Define the set \begin{equation*} \calL_c \coloneq \{ (\Xzero,\,\Xone,\,\Xtwo)\in\calB\suchthat (1-\Xone^2)(1-\Xtwo^2) =c(\tfrac{1+\jq^2}{2\jq}-\Xone\Xtwo)^2\} \spacecomma\interspace c\in\bbR_+ \spaceperiod \end{equation*} \begin{enumerate} \item\label{item:levelset1} Then every integral curve in $\calB$ lies in $\calL_c\cap\calB$ for some $c\in(0,\,1)$.
\item\label{item:levelset2} The following uniform estimates hold on the integral curve through $(\jq_0,\jk_0,\jh_0)\in\calL_c$: \begin{equation*} |\jq| \geq \frac{\sqrt{c}}{2+2\sqrt{c}} \quad \mbox{ and } \quad \min\{1-\jk^2,1-\jh^2\} \geq c\,(\min\{1-|\jk_0|,1-|\jh_0|\})^2 \end{equation*} \item \label{prop:dq-sign} The function $\XzeroDot$ has at most one zero along any integral curve in $\calB$. \end{enumerate} \end{proposition} \begin{proof} (1) From~\eqref{eq:torus-flow} we compute $\calW_t[ (1-\Xone^2)(1-\Xtwo^2),\, (\frac{1+\jq^2}{2\jq}-\Xone \Xtwo)^2 ] = 0$, where $\calW_t[X,\,Y] \coloneq X \dot Y- \dot X Y$ is the Wronskian with respect to the flow parameter $t$. Since $(1-\Xone^2)(1-\Xtwo^2)$ and $(\frac{1+\jq^2}{2\jq}-\Xone \Xtwo)^2$ are strictly positive in $\calB$, every integral curve in $\calB$ lies in $\calL_c$ for some $c\in\bbR_+$. In $\calB$, we have $(1-\Xone^2)(1-\Xtwo^2) \le {(1-\Xone\Xtwo)}^2 < (\frac{1+\jq^2}{2\jq}- \Xone \Xtwo)^2$, so $c\in(0,\,1)$. (2) For $(\jq,\jk,\jh)\in\calL_c$ the first inequality follows from \begin{equation*}\frac{1}{|\jq|}\leq\frac{1+\jq^2}{|\jq|} \leq 2\sqrt{\frac{(1-\jk^2)(1-\jh^2)}{c}}+2|\jk\jh|\leq \frac{2+2\sqrt{c}}{\sqrt{c}}. \end{equation*} On each integral curve $\jq$ is either positive or negative. The second inequality follows from \begin{equation*} \min\{1-\jk^2,1-\jh^2\}\geq(1-\jk^2)(1-\jh^2)\geq c(1-\sign(\jq)\jk\jh)^2\geq c(\max\{1-|\jk|,1-|\jh|\})^2. \end{equation*} In fact, due to \eqref{eq:monotonicity} either $\sign(\jq)\jk\jh\leq0$ or one of the functions $1-|\jk|$ and $1-|\jh|$ is increasing and the other one decreasing. Furthermore, if at some point $t$ with $\sign(\jq)\jk\jh>0$ one of these functions is increasing, it stays increasing for all $t_0\leq t$, and if it is decreasing, it stays decreasing for all $t_0\geq t$, since the derivative of these functions can change sign only at the maximal value $1$.
(3) If $\XzeroDot(t_0)=0$ for some value of the flow parameter $t=t_0$, then due to \eqref{eq:torus-flow} and \eqref{eq:monotonicity} \begin{equation*} \sign(\Xzero)\XzeroDdot= \sign(\Xzero) \tfrac{d}{dt}\Xzero(\JE'\Xone-\Xzero\JK'\Xtwo)= |\Xzero|\,(\JE'\XoneDot-\Xzero\JK'\XtwoDot)>0 \end{equation*} for $t=t_0$. Thus $\sign(\Xzero)\XzeroDot$ is increasing at each of its zeros, and hence can have at most one zero. \end{proof} The next result shows that one endpoint of each integral curve corresponds to a flat \cmc torus. \begin{proposition} \label{prop:moduli-space} Every maximal integral curve of \eqref{eq:torus-flow} in $\calB$ is defined on a finite interval and passes from a point in $\{\Xzero=\pm1\}\cap\{\Xone-\Xzero\Xtwo<0\}\cap\boundary\calB$ to a point in $\{\Xzero=\pm1\}\cap\{\Xone-\Xzero\Xtwo>0\}\cap\boundary\calB$ with the same sign of $\Xzero$. Furthermore, the mean curvature $t \mapsto H(t)$ is a diffeomorphism of the maximal interval of definition onto a finite interval. \end{proposition} \begin{proof} Due to \eqref{eq:monotonicity} both functions $\Xone$ and $\Xtwo$ are strictly monotonic. Since $\XoneDot$ and $\XtwoDot$ have no roots in $\calB$, the integral curve must hit the boundary of $\calB$ at the (finite or infinite) end points of the maximal interval of definition. Due to Proposition~\ref{prop:levelset}~\eqref{item:levelset2}, $\Xzero$ takes the same value $\pm 1$ at both endpoints and the vector field~\eqref{eq:torus-flow} does not vanish there. Hence the maximal interval of definition is finite. Due to Proposition~\ref{prop:levelset} (3) the sign of $\XzeroDot$ changes once and is proportional to $\Xone-\sign(\Xzero)\Xtwo$ at the end points. Due to \eqref{eq:monotonicity} $\Xone-\sign(\Xzero)\Xtwo$ is strictly increasing and the first claim follows.
Since $\Xtwo\mapsto\Xtwo/\sqrt{1-\Xtwo^2}$ is strictly increasing on $(-1,\,1)$, the mean curvature \eqref{eq:H_and_h} is strictly monotonic, and by Proposition~\ref{prop:levelset}~\eqref{item:levelset2} it is bounded. \end{proof} \subsection{Global behaviour of the integral curves on the boundary} The boundary $\boundary\calB$ consists of the three parts with $\jq=\pm1$ or $\jq=0$, $\jk=\pm 1$ and $\jh=\pm 1$. The first set contains the flat \cmc tori and the bouquets of spheres, which we consider later. The third set corresponds to the infinite mean curvature limit, that is \cmc tori in $\mathbb{R}^3$. In this limit the two \sym points coalesce and the differentials $d\ln\mu_i$ have zeroes there. Since $d\nu$ and $d\omega$ do not have common zeroes by Corollary \ref{th:no_common_zeroes}, there are no such examples with spectral genus zero or one. Therefore we treat only $\jk^2=1$ and $\jh\in(-1,1)$. By Proposition~\ref{prop:torus-of-revolution} the tori of revolution appear in the two-dimensional boundary $\{\Xone^2=1\}$ of the moduli space of equivariant \cmc surfaces in $\mathbb{S}^3$. Since $\dot\Xone \equiv 0$ along $\Xone^2 =1$, tori of revolution stay tori of revolution throughout the flow. The flow~\eqref{eq:torus-flow} thus describes one parameter families of tori of revolution. Later in Theorem~\ref{thm:rev-mean-curvature} we describe the range of mean curvature for the families in the flow, and exhibit those that contain a minimal torus of revolution. A consequence of the next result is that spectral genus $g=1$ tori of revolution lie in 1-parameter families with one endpoint at a flat \cmc torus and the other endpoint at a sphere bouquet. In the process we also show that the mean curvature stays bounded during the flow. Since we often need to evaluate the functions $\NU$ and $\OMEGA$ at the two \sym points we set \begin{equation} \label{eq:nu-omega-k} \NU_k = \NU(\lambda_k) \mbox{ and } \OMEGA_k = \OMEGA(\lambda_k) \mbox{ for } k=1,\,2\,. 
\end{equation} \begin{proposition} \label{prop:moduli-tori-of-revolution} On the integral curve of \eqref{eq:torus-flow} through $(\jq_0,\jk_0,\jh_0)\in\boundary\calB$ with $\jk_0=\pm1$ and $\jq_0,\jh_0\in(-1,1)$ the function $\jk$ is identically equal to $\pm1$ and $1-\jh^2$ is bounded away from zero. The maximal interval of definition is of the form $(-\infty,\,t_{\max})$ with $$ \lim_{t\to -\infty}\jq=0 \quad \mbox{ and } \quad \lim_{t\uparrow t_{\max}}\jq=\pm1 \spaceperiod $$ The mean curvature $t\mapsto H(t)$ is a diffeomorphism from $(-\infty,\,t_{\max})$ onto a finite interval. \end{proposition} \begin{proof} By \eqref{eq:torus-flow} $\Xone \equiv \Xone_0$ is constant throughout the flow. Let $\lambda_i = e^{2\mi\theta_i}$ for $i=1,\,2$ be the two \sym points. If $|\theta_2-\theta_1|$ is bounded away from zero, then $1-\Xtwo^2$ is bounded away from zero. For $\jk=\pm1$ the values of $\NU$ at the \sym points coincide. By \eqref{eq:sym-point-deformation} we have that $\omega_2 - \omega_1 = \int_{\theta_1}^{\theta_2}d\omega$ is constant throughout the flow. We claim that \begin{equation*} |d\omega | \leq 2|d\theta |\quad \mbox{ for }\theta\in\mathbb{R}\mbox{ and }\jq\in[-1,1]. \end{equation*} We consider the two cases $\jq\cos(2\theta)\geq 0$ and $\jq\cos(2\theta)\leq 0$ separately.
In the case $\jq\cos(2\theta)\geq 0$ we use $\JE'\leq\frac{\pi}{2}$ \cite[13.8.11]{Bat2}, Proposition \ref{th:alpha_less_beta} and \eqref{eq:nu} to obtain $$ |d\omega| = \left| \frac{\JE'-\jq\JK'\cos(2\theta)}{\pi\NU}\right|\,|d\theta | \leq\left| \frac{\JE'(1-\tfrac{2\jq}{1+\jq^2}\cos(2\theta))} {\pi\NU}\right|\,|d\theta| \leq\left|\frac{2\NU}{1+\jq^2}\right|\,|d\theta| \leq2|d\theta| \spaceperiod $$ When $\jq\cos(2\theta)\leq 0$, then again by $\JE'\leq\frac{\pi}{2}$ \cite[13.8.11]{Bat2}, Proposition \ref{th:alpha_less_beta}, and $2|\NU | \geq \sqrt{1+\jq^2} \geq 1$ from \eqref{eq:nu} we get $$ |d\omega| = \left| \frac{\JE'-\jq\JK'\cos(2\theta)}{\pi\NU}\right|\,|d\theta | \leq\left| \frac{4\JE'}{\pi\sqrt{1+\jq^2}}\right|\,|d\theta| \leq2|d\theta| \spaceperiod $$ Thus $|\theta_2-\theta_1|\geq\half|\omega_2-\omega_1|$. Due to \eqref{eq:omega} and Proposition \ref{th:alpha_less_beta}, the differential $d\OMEGA$ has no root between $\theta_1$ and $\theta_2$. Therefore $|\theta_2-\theta_1|$ and $1-\jh^2$ are bounded away from zero. Due to Proposition \ref{th:alpha_less_beta}, the function $\jq^2$ is strictly increasing for $0<|\jq|<1$. Therefore the maximal integral curve passes from $\jq=0$ to $\jq=\pm1$. The vector field is of order $\Order(\jq)$ at the left end point and nonzero at the right end point. Therefore the maximal interval of definition has the form $(-\infty,\,t_{\max})$. Since $\jh$ is strictly monotonic, and $1-\jh^2$ is bounded away from zero, the mean curvature \eqref{eq:H_and_h} is a diffeomorphism from $(-\infty,\,t_{\max})$ onto a finite interval. \end{proof} \subsection{Regularity of the spectral curve} A corollary of Theorem~\ref{th:no_equi_bub} below is that equivariant \cmc tori cannot be dressed to tori by simple factors \cite{KilSS,TerU}. While there exist large families of \cmc cylinders which can be dressed to cylinders with bubbletons \cite{SteW:bub}, it is still an open question, raised by Bobenko \cite{Bob:tor}, whether there are \cmc tori with bubbletons.
A \emph{double point} is a point on the spectral curve at which both logarithms $\mu_1$ and $\mu_2$ are unimodular, but the spectral curve is not branched. \begin{theorem} \label{th:no_equi_bub} The spectral curve of an equivariant {\sc{cmc}} torus has no double points in $\Cstar\setminus\bbS^1$. \end{theorem} \begin{proof} At a double point both logarithms $\mu_1$ and $\mu_2$ are unimodular. Therefore it suffices to prove that the set of all $\lambda\in\Cstar$ at which $\mu_1$ and $\mu_2$ are unimodular is the set $\bbS^1\cup\{\jq,\,\jq^{-1}\}$. This set coincides with the subset of $(\lambda,\,\nu)\in\curve$ such that $\nu$ and $\omega$ are real. Due to \eqref{eq:nu} and \eqref{eq:omega} we have for $\jq\in(0,1]$: $\nu\in\bbR$ if and only if $\lambda\in\bbS^1\cup[\jq,\,\jq^{-1}]\cup\bbR_-$, and $\lambda\in\bbR$ and $\omega\in\bbR$ if and only if $\lambda\in[0,\,\jq]\cup[\jq^{-1},\,\infty)$. For $\jq\in[-1,\,0)$: $\nu\in\bbR$ if and only if $\lambda\in\bbS^1\cup[\jq^{-1},\,\jq]\cup\bbR_+$; $\lambda\in\bbR$ and $\omega\in\bbR$ if and only if $\lambda\in(-\infty,\,\jq^{-1}]\cup[\jq,\,0]$. With $\jq=\pm1$ we cover the genus zero case. \end{proof} \section{Equivariant extended frames} \label{sec:frame} To obtain a more geometric description of the above deformation we compute the frames of the surfaces. We do this in a slightly broader context by studying equivariant maps $\bbC^2 \to \matSL{2}{\bbC}$ with complex mean curvature as in~\cite{DorKP}. This generalized framework for complex equivariant surfaces is then applicable not only to \cmc immersions into 3-dimensional space forms, but also to pseudospherical surfaces in $\mathbb{R}^3$, \cmc surfaces in Minkowski space, and integrable surfaces in many other spaces, by performing an appropriate reduction.
Let $\Sigma\subset\bbC^2$ be an open and simply-connected domain with complex coordinates $(z,\,w)$, and define $\ast$ on $1$-forms on $\Sigma$ by $\ast dz = \mi dz$, $\ast dw = -\mi dw$, extended complex linearly. Let $\matsl{2}{\bbC}=\Diagonal\oplus\OffDiagonal$ be a Cartan decomposition. For a $1$-form $\alpha$ on $\Sigma$, we will write $\alpha = \alpha_\Diagonal + \alpha_\OffDiagonal$ with $\alpha_\Diagonal\in\Diagonal$ and $\alpha_\OffDiagonal\in\OffDiagonal$. If we equip the matrix Lie algebra $\matsl{2}{\bbC}$ with a multiple of the Killing form $\DOTC{X}{Y} \coloneq -\half\trace(XY)$, then we may choose a basis $e_0,\,e_1,\,e_2$ for $\matsl{2}{\bbC}$ satisfying $e_0\in\Diagonal$, $\DOTC{e_0}{e_0}=1$ and $[e_0,\,e_1] = e_2$, $[e_1,\,e_2] = e_0$ and $[e_2,\,e_0] = e_1$, and set $$ \epsilon_1 = \half(e_1-\mi e_2) \spacecomma\interspace \epsilon_2 = \half(e_1+\mi e_2)\,. $$ Let $f:\Sigma \to \matSL{2}{\bbC}$ be a conformal immersion such that for the smooth function $v:\Sigma\to\bbC^\ast$ we have $\DOTC{f_z}{f_z} = 0 = \DOTC{f_w}{f_w}$ and $2\DOTC{f_z}{f_w} = v^2$. The remaining invariants of $f$ are smooth functions $H,\,Q,\,R:\Sigma\to\bbC$ such that $2\DOTC{f_{zz}}{N} = Q$, $2\DOTC{f_{zw}}{N} = v^2H$ and $2\DOTC{f_{ww}}{N} = R$, and the normal map is $N = -\mi v^{-2}[f_z,\,f_w]$.
Further, there exists a unique pair of maps $F,\,G:\Sigma\to\matSL{2}{\bbC}$ which frame $f$ in the sense that \begin{equation} \label{eq:framing} f = FG^{-1}\spacecomma\interspace f_z = vF\epsilon_1G^{-1}\spacecomma\interspace f_w = vF\epsilon_2G^{-1}\spacecomma\interspace N = Fe_0G^{-1}\spaceperiod \end{equation} Then $\alpha_{+1} \coloneq F^{-1}dF$ and $\alpha_{-1} \coloneq G^{-1}dG$ are smooth $1$-forms on $\Sigma$ with values in $\matsl{2}{\bbC}$ given by \begin{equation} \label{eq:maurer-cartan-general} \begin{aligned} \alpha_{\sigma} &= A_{\sigma} dz + B_{\sigma} dw \quad \mbox{where } \sigma\in\{\pm 1\} \quad \mbox{and}\\ 2\mi A_{\sigma} &\coloneq v^{-1}v_z e_0 + v (H+\mi\sigma) \ep_1 - v^{-1}Q\ep_2 \spacecomma\\ 2\mi B_{\sigma} &\coloneq -v^{-1}v_w e_0 + v^{-1}R \ep_1 - v(H-\mi\sigma)\ep_2 \spaceperiod \end{aligned} \end{equation} Integrability $2 d\alpha_\sigma+[\alpha_\sigma\wedge\alpha_\sigma]=0$ is equivalent to the Gauss-Codazzi equations \begin{subequations} \label{eq:gauss-codazzi} \begin{gather} \label{eq:gauss} {(\log v)}_{zw} - v^2(H^2+\sigma^{-2}) + v^{-2}QR=0\spacecomma\\ \label{eq:codazzi1} v^2 H_w = R_z\spacecomma\interspace v^2 H_z=Q_w\spaceperiod \end{gather} \end{subequations} Conversely, given smooth functions $v,\,H,\,Q,\,R:\Sigma\to\bbC$ with $v$ non-vanishing, satisfying the integrability equations~\eqref{eq:gauss-codazzi}, a conformal immersion $f$ with these invariants can be recovered by integrating \eqref{eq:maurer-cartan-general} to obtain maps $F,\,G:\Sigma\to\matSL{2}{\bbC}$ which are unique up to transforms $(F,\,G)\mapsto(AF,\,BG)$, $A,\,B\in\matSL{2}{\bbC}$. By the form of $\alpha_\sigma$, the maps $F$ and $G$ frame $f \coloneq FG^{-1}$ in the sense of~\eqref{eq:framing}, and $f$ has the specified invariants. The map $f$ is unique up to $f\mapsto AfB^{-1}$, $A,\,B\in\matSL{2}{\bbC}$.
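Note that the forms in \eqref{eq:maurer-cartan-general} are consistent with the framing \eqref{eq:framing}: since $A_{+1}-A_{-1}=v\,\ep_1$ and $B_{+1}-B_{-1}=v\,\ep_2$, differentiating $f=FG^{-1}$ gives \begin{equation*} df = F\,(\alpha_{+1}-\alpha_{-1})\,G^{-1} = vF\epsilon_1G^{-1}\,dz + vF\epsilon_2G^{-1}\,dw \spacecomma \end{equation*} which recovers $f_z$ and $f_w$ in \eqref{eq:framing}.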
\subsection{Equivariance} Let us first introduce new coordinates $(x,\,y)$ on $\Sigma$ such that \begin{equation} \label{eq:xyzw} z=x + \mi\,y\, ,\qquad w = x -\mi\,y\spaceperiod \end{equation} We say a smooth map $F=F(x,\,y):\bbC^2\to\matSL{2}{\bbC}$ is $\bbC$-equivariant, or simply equivariant, if it is of the form $F(x,\,y) = \exp(x\,A)\,P(y)$ for some $A\in\matsl{2}{\bbC}$ and smooth map $P:\bbC\to\matSL{2}{\bbC}$. What characterizes an equivariant map is that $F^{-1}dF = P^{-1}A\,P\,dx + P^{-1}P'\,dy$ is $x$-independent~\cite{BurK}. We first compute the logarithmic derivative of an equivariant map, and then integrate this to obtain the equivariant map itself. \begin{theorem} \label{thm:equivariant-frame} {\rm{(1)}} The $x$-independent solutions $\alpha = \alpha_\mathfrak{k} + \alpha_\mathfrak{p}$ of \begin{equation} \label{eq:harmonic-map-equations} \begin{split} d\alpha_\mathfrak{k}&+ \half[\alpha_\mathfrak{p}\wedge\alpha_\mathfrak{p}] = 0 \spacecomma\\ d\alpha_\mathfrak{p}&+ [\alpha_\mathfrak{k}\wedge\alpha_\mathfrak{p}] = 0 \spacecomma\\ d(\ast\alpha_\mathfrak{p})&+ [\alpha_\mathfrak{k}\wedge(\ast\alpha_\mathfrak{p})] = 0 \end{split} \end{equation} are $\alpha = g^{-1}\Omega\,g + g^{-1}dg$, where $g$ is a smooth $x$-independent map from $\Sigma$ into the Lie group of $\Diagonal$, and $\Omega = \Omega_1 \,dz + \Omega_2 \,dw$, where \begin{equation} \label{eq:equivariant-harmonic-maurer-cartan} \begin{aligned} 2\mi\Omega_1 &\coloneq -\tfrac{\mi}{2} \Metric^{-1}\Metric' e_0 + a_2 \Metric \epsilon_1 - b_1 \Metric^{-1} \epsilon_2 \spacecomma\\ 2\mi\Omega_2 &\coloneq -\tfrac{\mi}{2} \Metric^{-1}\Metric' e_0 + b_2 \Metric^{-1}\epsilon_1 - a_1 \Metric\epsilon_2 \spacecomma \end{aligned} \end{equation} with $a_1,\,a_2,\,b_1,\,b_2,\,\Vconst\in\bbC$, and $\Metric=\Metric(y):\bbC\to\bbC$ is defined by \begin{equation} \label{eq:metric} \begin{aligned} &{(\Metric^{-1}\Metric')}^2 + (a_1 \Metric + b_1 \Metric^{-1})(a_2 \Metric + b_2 \Metric^{-1}) = 4\Vconst^2 \spacecomma\\
&{(\Metric^{-1}\Metric')}' + a_1 a_2 \Metric^2 - b_1 b_2 \Metric^{-2} = 0 \spaceperiod \end{aligned} \end{equation} {\rm{(2)}} A solution to $dF=F\Omega$ is given by \begin{equation} \begin{gathered} \label{eq:frame-parts} \FEQ(x,\,y) \coloneq \exp\left((x\,\Vconst + \half\,\AngleZero) e_0\right) \exp\left( \half \AngleOne e_1 \right) \exp\left( \half \AngleTwo e_0 \right) \spacecomma \\ \mbox{ {\rm{with}} } \interspace \AngleZero \coloneq 2\mi\Vconst \int_0^y (J_1(t)-J_2(t))dt\spacecomma \\ \AngleOne \coloneq \arccos(-\half\Vconst^{-1}\Metric^{-1}\Metric') \spacecomma \interspace \AngleTwo \coloneq \tfrac{\mi}{2} \log (X_1^{-1}X_2) \spacecomma \\ X_1 \coloneq a_1 \Metric + b_1 \Metric^{-1} \spacecomma \interspace X_2 \coloneq a_2 \Metric + b_2 \Metric^{-1} \spacecomma J_1 \coloneq b_1\Metric^{-1}X_1^{-1} \spacecomma \interspace J_2 \coloneq b_2\Metric^{-1}X_2^{-1} \spaceperiod \end{gathered} \end{equation} \end{theorem} \begin{remark} The second equation in~\eqref{eq:metric} is the derivative of the first, and eliminates spurious constant solutions which would appear if only the first equation were present. \end{remark} \begin{proof} (1) Write $\alpha_\Diagonal = ae_0\, dx + be_0\, dy$ for some functions $a=a(y)$ and $b=b(y)$, and let $g = \exp(-(\int^y b(t)dt) e_0)$. Then $dg\, g^{-1} = -b e_0 dy$, so the form $g^{-1}\alpha g +g^{-1}dg = a e_0 dx + g^{-1}\alpha_\OffDiagonal g$ is a multiple of $dx = \tfrac{1}{2}(dz+dw)$. Then $\alpha_\Diagonal = \tfrac{\mi}{4}f(dz+dw)e_0$ for some function $f=f(y)$. Let $v = \exp(\int^y f(t)dt)$, so $\alpha_\Diagonal = \tfrac{\mi}{4}v^{-1}v' e_0(dz+dw)$, where prime is differentiation with respect to $y$. Then there exist functions $a_1,\,b_1,\,a_2,\,b_2$ of $y$ such that $2\alpha_\OffDiagonal = (a_2 v \epsilon_1 - b_1 v^{-1}\epsilon_2)\,dz + (b_2 v \epsilon_1 - a_1 v^{-1}\epsilon_2)\,dw$. 
By a calculation, \eqref{eq:harmonic-map-equations} is equivalent to the second equation of~\eqref{eq:metric}, along with $a_1'=b_1'=a_2'=b_2'=0$. Hence $\alpha$ is of the required form, which concludes the proof of (1). To prove (2), we have $\Omega = \Omega_x dx + \Omega_y dy$ in~\eqref{eq:equivariant-harmonic-maurer-cartan}, where $2\mi\Omega_x \coloneq -\mi\Metric^{-1}\Metric' e_0 +(a_2 \Metric+b_2 \Metric^{-1})\ep_1 -(a_1 \Metric + b_1 \Metric^{-1})\ep_2$ and $2\Omega_y \coloneq (a_2 \Metric - b_2 \Metric^{-1})\ep_1 + (a_1 \Metric-b_1 \Metric^{-1})\ep_2$. To compute $F^{-1}dF$, we will first show that $P(y) \coloneq \exp\left( \half \AngleZero e_0 \right) \exp\left( \half \AngleOne e_1 \right) \exp\left( \half \AngleTwo e_0 \right)$ satisfies \begin{subequations} \label{eq:finalframe-P} \begin{align} \label{eq:finalframe-P1a-X} \Vconst P^{-1}e_0P &= \Omega_x \spacecomma \\ \label{eq:finalframe-P2a-X} P^{-1}P' &= \Omega_y \spaceperiod \end{align} \end{subequations} We have $P^{-1}e_0P = \cos\AngleOne e_0 + \mi e^{-\mi\AngleTwo} \sin\AngleOne\ep_1 - \mi e^{\mi\AngleTwo}\sin\AngleOne \ep_2$ and $\cos\AngleOne = -\half\Vconst^{-1}\Metric^{-1}\Metric'$, $\sin\AngleOne = -\half\Vconst^{-1}X_1^{\frac{1}{2}}X_2^{\frac{1}{2}}$, $e^{\mi\AngleTwo} = X_1^{\frac{1}{2}}X_2^{-\frac{1}{2}}$ and thus $2\mi\Vconst P^{-1}e_0P = -\mi\Metric^{-1}\Metric'e_0 + X_2 \ep_1 - X_1 \ep_2 = 2\mi(\Omega_1+\Omega_2) = 2\mi\Omega_x$, proving~\eqref{eq:finalframe-P1a-X}. Now $2P^{-1} P' = (\AngleZero'\cos\AngleOne + \AngleTwo')e_0 +e^{-\mi\AngleTwo}(\mi\AngleZero'\sin\AngleOne + \AngleOne')\ep_1 -e^{\mi\AngleTwo}(\mi\AngleZero'\sin\AngleOne - \AngleOne')\ep_2$.
With $\cc_5 \coloneq a_1 b_2 - a_2 b_1$ we have \begin{equation} \label{eq:angle-deriv} \AngleZero' = -\frac{2\mi\Vconst \cc_5}{X_1X_2} \spacecomma\quad \AngleOne' = \frac{a_1 a_2 v^2 - b_1 b_2 v^{-2}}{\sqrt{X_1}\sqrt{X_2}} \spacecomma\quad \AngleTwo' = -\frac{\mi \cc_5 v'}{v X_1 X_2} \spaceperiod \end{equation} Hence $2P^{-1} P' = (a_2 \Metric - b_2 \Metric^{-1})\ep_1 + (a_1 \Metric - b_1 \Metric^{-1})\ep_2 = 2\Omega_y$, proving~\eqref{eq:finalframe-P2a-X}. Thus $F^{-1}dF = F^{-1} F_x dx + F^{-1} F_y dy = \Vconst P^{-1}e_0 P dx + P^{-1} P' dy = \Omega_x dx + \Omega_y dy = \Omega$. \end{proof} \begin{example} \label{sec:vacuum} The vacuum is the case in which the function $\Metric$ in~\eqref{eq:metric} is constant. By the second equation in~\eqref{eq:metric}, $\Metric \equiv v_0$ with $v_0^4 \coloneq (b_1 b_2)/(a_1 a_2)$. The potential for the vacuum is $\Omega_{\mathrm{V}} \coloneq \Omega_z dz + \Omega_w dw$, where $2\mi\Omega_z \coloneq a_2 v_0\epsilon_1 - b_1 v_0^{-1} \epsilon_2$ and $2\mi\Omega_w \coloneq b_2 v_0^{-1}\epsilon_1 - a_1 v_0\epsilon_2$. Since $[\Omega_z,\,\Omega_w]=0$, the extended frame of the vacuum is $F = \exp(\Omega_z z + \Omega_w w)$ with eigenvalues $\exp\bigl(\pm\tfrac{\mi}{2}(rz+sw)\bigr)$ where $r,\,s\in\bbC$ are determined by $r^2=a_2b_1$, $s^2=a_1b_2$, $rs = a_1a_2v_0^2$. This differs from the frame in Theorem~\ref{thm:equivariant-frame} by left multiplication by a $z$- and $w$-independent element of $\matSL{2}{\bbC}$. \end{example} \begin{figure}[t] \centering \includegraphics[width=8.15cm]{torus-13lobe.eps} \includegraphics[width=8.15cm]{torus-13lobe2.eps} \caption{ \label{fig:13lobe} Two views of an equivariant \cmc $(2,\,1,\,13)$ torus in $\bbS^3$. } \end{figure} To describe equivariant \cmc immersions into $\bbS^3 = \matSU{2}{}$ we specialize the above formulas. We first make the reduction in \eqref{eq:xyzw} that $w = \bar{z}$, which is equivalent to $x,\,y \in \bbR$.
Given an extended frame $F_\lambda :\bbR^2 \to \matSU{2}{}$ and two distinct \sym points $\lambda_1,\,\lambda_2 \in \bbS^1$, \begin{equation} \label{eq:sym_s3} f \coloneq F_{\lambda_1}F^{-1}_{\lambda_2} \end{equation} is a conformal immersion $f:\bbR^2 \to \bbS^3$ with constant mean curvature $H$ given in \eqref{eq:cmc-I-II}. For the translation $\tau_\gamma: \bbC \to \bbC,\,p\mapsto p + \gamma$ we write $\tau_\gamma^* f = f \circ \tau_\gamma$. If $F_\lambda^{-1}dF_\lambda$ is periodic with period $\gamma$, then we define the monodromy $M_\lambda$ of $F_\lambda$ with respect to $\gamma$ as \begin{equation} \label{eq:monodromy} M_\lambda(\gamma) = \tau_\gamma^*F_\lambda \, F^{-1}_\lambda\,. \end{equation} The closing condition $\tau_\gamma^* f = f$ with respect to a translation is equivalent to \begin{equation} \label{eq:closing-s3} M_{\lambda_1}(\gamma)= M_{\lambda_2}(\gamma) = \id \OR M_{\lambda_1}(\gamma)= M_{\lambda_2}(\gamma) = -\id\spaceperiod \end{equation} If $\mu_1$ and $\mu_2 = \mu_1^{-1}$ denote the eigenvalues of the monodromy, then \eqref{eq:closing-s3} reads $\mu_j(\lambda_k) = \pm 1$, or equivalently that there exist four integers $p_{jk} \in \bbZ$ such that for $j,\,k=1,\,2$ we have \begin{equation}\label{eq:pjk} \ln \mu_j(\lambda_k) = \mi\pi\,p_{jk} \quad \mbox{ and } \quad p_{j1}-p_{j2}\in 2\bbZ \spaceperiod \end{equation} The torus is embedded if and only if the \emph{winding numbers} $p_{jk}$ all have absolute value equal to one. \subsection{Flat cmc tori in $\bbS^3$} Integrating $\Omega$~\eqref{eq:equivariant-harmonic-maurer-cartan} with $(a_1,\,b_1,\,a_2,\,b_2) \coloneq (\lambda,\,1,\,\lambda^{-1},\,1),\, v \equiv 1$ yields an extended frame $F_\lambda$ of a flat \cmc surface in $\bbS^3$.
By Theorem~\ref{thm:equivariant-frame} and Example \ref{sec:vacuum}, the extended frame of any flat \cmc surface (up to isometry and conformal change of coordinates) is \begin{equation} \label{eq:flat-frame} F_\lambda = \exp \bigl( \pi\,\mi \left( (z \lambda^{-1} +\ol{z})\epsilon_1 - (z+\ol{z}\lambda) \epsilon_2 \right) \bigr)\spaceperiod \end{equation} Then also (up to isometry and conformal change of coordinates) any flat \cmc immersion $f:\bbR^2 \to \bbS^3$ is of the form $f = F_{\lambda_1}F_{\lambda_2}^{-1}$ with extended frame $F_\lambda$ as in \eqref{eq:flat-frame} and two distinct \sym points $\lambda_1,\,\lambda_2 \in \bbS^1$. The eigenvalues of $F_\lambda$ are $\mu^{\pm 1}$ with \begin{equation} \label{eq:eigenvalues} \mu(z,\,\lambda) = \exp( \pi\,\mi (z\lambda^{-\frac{1}{2}}+ \ol{z}\lambda^{\frac{1}{2}}))\,. \end{equation} Define \begin{equation} \DOTR{x}{y} \coloneq \tfrac{1}{2}\,(x\,\ol y + y\,\ol x ) = \Re(x\ol y) \spaceperiod \end{equation} Then by equation \eqref{eq:pjk} the immersion $f$ factors through the lattice $\Gamma = \gamma_1\,\bbZ + \gamma_2\,\bbZ$ if and only if \begin{equation} \label{eq:closing-dot} \DOTR{\gamma_j}{\lambda_k^{1/2}}\in\bbZ \AND \DOTR{\gamma_j}{\lambda_1^{1/2} - \lambda_2^{1/2}}\in 2\bbZ\,. \end{equation} The dual of a lattice $\Gamma$ in $\bbC$ is the lattice $\Gamma^\ast = \{\kappa\in\bbC \mid \DOTR{\kappa}{\gamma}\in\bbZ \text{ for all }\gamma\in\Gamma\}$. \begin{proposition} \label{prop:flat-torus} \mbox{} {\rm{(i)}} A flat \cmc immersion $f = F_{\lambda_1}F^{-1}_{\lambda_2}$ with extended frame $F_\lambda$~\eqref{eq:flat-frame} is closed with respect to a lattice $\Gamma\subset\bbC$ if and only if $\,\Gamma \subseteq \Lambda^\ast$, where $\Lambda \coloneq \kappa_1\bbZ+\kappa_2\bbZ$ and $$ \kappa_1\coloneq \half(\sql{1}+\sql{2}) \AND \kappa_2\coloneq \half(\sql{1}-\sql{2})\,. $$ {\rm{(ii)}} The torus is rectangular and embedded if and only if $\,\Gamma=\Lambda^\ast$. 
{\rm{(iii)}} Every flat \cmc torus is isogenic to a rectangular embedded flat \cmc torus. \end{proposition} \begin{proof} (i) Since $\lambda_k^{1/2}=\kappa_1\pm\kappa_2$ the condition \eqref{eq:closing-dot} is equivalent to $\,\Gamma \subseteq \Lambda^\ast$. (ii) The torus is embedded if and only if $\DOTR{\gamma_j}{\lambda_k^{1/2}}\in \{ \pm 1\}$, and rectangular if and only if $\gamma_1/\gamma_2 \in \mi\,\bbR$. The corresponding periods $\gamma_1$ and $\gamma_2$ are dual to $\kappa_1$ and $\kappa_2$ and generate $\Lambda^\ast$. Clearly $\Lambda$, and consequently $\Lambda^\ast$, are rectangular, since $\kappa_1/\kappa_2 \in \mi\,\bbR$. (iii) From (i) we know that the lattice $\Gamma$ of any torus is a sublattice of $\Lambda^\ast$, which by (ii) is the lattice of the embedded rectangular torus. Hence there is an isogeny taking $\Lambda^\ast$ to $\Gamma$. \end{proof} \begin{proposition} \label{th:clifford} The lattice of an embedded flat \cmc torus is square if and only if the mean curvature is zero. Swapping the \sym points does not affect the period lattice of a flat \cmc torus. \end{proposition} \begin{proof} Solving the four equations \eqref{eq:pjk} for the periods gives \begin{equation} \label{eq:periods_general} \gamma_1 = \frac{\lambda_1^{1/2}\lambda_2\,p_{11} - \lambda_1\lambda_2^{1/2}\,p_{12}} {\lambda_2 - \lambda_1}\,,\quad \gamma_2 = \frac{\lambda_1^{1/2}\lambda_2\,p_{21} - \lambda_1\lambda_2^{1/2}\,p_{22}} {\lambda_2 - \lambda_1}\,. \end{equation} In particular, setting $p_{11} = p_{12} = p_{21} = -p_{22} = 1$, a computation shows that the generators of a lattice of an embedded flat \cmc torus satisfy $$ \frac{|\gamma_1|}{|\gamma_2|} = \left| \pm\sqrt{1+H^2} + H \right|\,. $$ Thus $|\gamma_1|=|\gamma_2|$ if and only if the mean curvature is zero. Setting $\lambda_1 = \mi,\,\lambda_2 = -\mi$ we obtain the generators $\gamma_1 = 1/\sqrt{2}$ and $\gamma_2 = \mi /\sqrt{2}$ of the square lattice of the Clifford torus.
Swapping the \sym points $\lambda_1 \leftrightarrow \lambda_2$ results in the integers $p_{jk}$ swapping second indices: $p_{11} \leftrightarrow p_{12},\,p_{21} \leftrightarrow p_{22}$. This does not change the periods $\gamma_1,\,\gamma_2$ in \eqref{eq:periods_general}. \end{proof} \subsection{Spectral genus one} Consider the $1$-form $\Omega = \Omega_1 \,dz + \Omega_2 \,d\bar{z}$, where $\Omega_j$ are given in equation~\eqref{eq:equivariant-harmonic-maurer-cartan} with $\NU$ as in \eqref{eq:nu} for some $0<|\Bpoint|\leq 1$ and $(a_1,\,b_1,\,a_2,\,b_2)\coloneq (\lambda,\,-\Bpoint,\,\lambda^{-1},\,-\olBpoint)$. Integrating the $1$-form $\Omega$ with these choices then yields an $\bbR$-equivariant extended frame $F_\lambda(x,\,y) = \exp(x\,A_\lambda)\,P_\lambda(y)$ for smooth maps $\lambda \mapsto A_\lambda,\,\bbS^1 \to \matsu{2}{}$ and $(\lambda,\,y) \mapsto P_\lambda(y),\,\bbS^1 \times \bbR \to \matSU{2}{}$. Now let $\lambda_1,\,\lambda_2 \in \mathbb{S}^1$ be two distinct \sym points. The immersion $f:\bbC \to \matSU{2}{}$ in \eqref{eq:sym_s3} is then a conformal equivariant {\sc{cmc}} immersion with mean curvature \eqref{eq:cmc-I-II}. Equivariance reduces the Gauss equation~\eqref{eq:metric} to $(\Metric')^2 + (\Metric^2 - 1)(\Metric^2 - \Bpoint^2) = 0$. A solution to this equation is the \emph{square root of the conformal factor} $\Metric:\bbR\to\bbR_+$ given by $\Metric (y) = \jacobiDN(y,\,1-\Bpoint^2)$, where $\jacobiDN(\cdot,\,m)$ is the Jacobi elliptic function with parameter $m$. All other solutions are of the form $\Metric(y+y_0)$, $y_0\in\bbR$. The function $\Metric$ is even and has no zeros on $\bbR$. The range of $\Metric$ is $[\min(1,\,\abs{\Bpoint}),\,\max(1,\,\abs{\Bpoint})]$. The function satisfies $\Metric(0)=1$ and $\Metric'(0)=0$, and has limits $\lim_{\abs{\Bpoint}\to 1}\Metric(y)=1$ and $\lim_{\Bpoint\to 0}\Metric(y)=\sech y$. The period of $\Metric$ depends on the parameter $0<|\Bpoint|\leq1$ and is equal to $2\JK'(\Bpoint)$.
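These properties of $\Metric$ can be confirmed numerically. The following Python sketch (ours, not part of the text) assumes SciPy's convention that the second argument of \texttt{ellipj} and \texttt{ellipk} is the parameter $m=1-\Bpoint^2$, and writes \texttt{b} for $\Bpoint$; it checks the value at $0$, the period $2\JK'(\Bpoint)$, the range, and the reduced Gauss equation:

```python
import numpy as np
from scipy.special import ellipj, ellipk

b = 0.6
m = 1.0 - b**2                 # parameter of dn(., m)

def v(y):
    """Square root of the conformal factor: v(y) = dn(y, 1 - b^2)."""
    return ellipj(y, m)[2]     # ellipj returns (sn, cn, dn, ph)

period = 2.0*ellipk(m)         # the period 2 K'(b) of v

assert abs(v(0.0) - 1.0) < 1e-12             # v(0) = 1
assert abs(v(1.3 + period) - v(1.3)) < 1e-9  # v has period 2 K'(b)

ys = np.linspace(0.0, period, 400)
vals = np.array([v(t) for t in ys])
assert vals.min() >= b - 1e-9 and vals.max() <= 1.0 + 1e-9   # range [b, 1]

# reduced Gauss equation (v')^2 + (v^2 - 1)(v^2 - b^2) = 0, using dn' = -m sn cn
sn, cn, dn, _ = ellipj(1.3, m)
vp = -m*sn*cn
assert abs(vp**2 + (dn**2 - 1.0)*(dn**2 - b**2)) < 1e-10
```

In the degenerate limit $\Bpoint\to 0$ one has $m\to 1$ and $\jacobiDN(y,\,1)=\sech y$, in agreement with the limit stated above.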
Now a straightforward calculation shows that the fundamental forms of such an equivariant conformal \cmc immersion are \begin{subequations} \label{eq:cmc-equi-I-II} \begin{gather} -\frac{ {(\lambda_2-\lambda_1)}^2 }{4\lambda_1\lambda_2}\, \Metric^2\,(dx^2 + dy^2) \spacecomma\\ (v^2 H + \Real(Q))\,dx^2 - 2\Imag(Q)\,dx\, dy + (v^2 H - \Real(Q))\,dy^2 \spacecomma \end{gather} \end{subequations} with $H$ as in \eqref{eq:cmc-I-II} and Hopf differential $Q \,dz^2$ with $Q\coloneq \tfrac{\mi}{4} \jq(\lambda_2^{-1}-\lambda_1^{-1})$. \begin{proposition} A period of an equivariant extended frame is of the form \begin{equation} \label{eq:period_gamma} \gamma = x\,\pi + 2\mi\, p\,\JK' \end{equation} where $x\in\bbR$, $p\in\bbZ$, and $\Period = 2\JK'$ is the period of $\Metric$. The monodromy \eqref{eq:monodromy} with respect to such a period is \begin{equation} \label{eq:equi-monodromy} M_\lambda (\gamma) = \exp(\pi(\,x\,\NU + p\,\OMEGA\,)\,e_0 ) \spaceperiod \end{equation} \end{proposition} \begin{proof} The imaginary part of a frame period has to be a period of the square root of the conformal factor $\Metric:\bbR\to\bbR_+,\,y \mapsto \jacobiDN(y,\,1-\Bpoint^2)$. Since $\Metric$ has period $2\JK'(\Bpoint)$, the imaginary part of $\gamma$ in \eqref{eq:period_gamma} has to be of the form $2\, p\,\JK'$ for some $p \in \bbZ$. From \eqref{eq:frame-parts} the extended frame of an equivariant \cmc torus is of the form $\FEQ(x,\,y) \coloneq \exp\left((x\,\Vconst + \half\,\AngleZero) e_0\right) \exp\left( \half \AngleOne e_1 \right) \exp\left( \half \AngleTwo e_0 \right)$. The middle and right factors are periodic in $y$ and do not depend on $x$.
Hence both these factors have trivial monodromy if $y \in 2\,\JK'\bbZ$, and the monodromy with respect to a translation by $\gamma = x\,\pi + 2\mi\, p\,\JK'$ is $$ \FEQ(x\pi,\,2p\JK') = \exp( (\pi\,x\,\NU + \half\,\AngleZero(2p\JK'))\,e_0 ) \spaceperiod $$ Clearly $\AngleZero(2p\JK')=p\AngleZero(2\JK')$, so it suffices to show that $\AngleZero(2\JK',\,\jq,\,\lambda) = 2\pi \omega(\jq,\,\lambda)$ to conclude the proof. Let $\lambda=e^{\mi\theta}$; then a calculation yields \begin{align} \label{eq:dJ1} d(\NU J_1) &= -\frac{\tfrac{d}{d\theta}(-\jq(\lambda+\lambda^{-1}))} {4\NU\jq(\lambda^{-1}-\lambda)} \left( \frac{-\jq\lambda^{-1}}{J_1} - \frac{1}{2} \frac{d^2}{dt^2}\log J_1(t) \right)\,d\theta \spacecomma \\ \label{eq:dJ2} d(\NU J_2) &=\frac{\tfrac{d}{d\theta}(-\jq(\lambda+\lambda^{-1}))} {4\NU\jq(\lambda^{-1}-\lambda)} \left( \frac{-\jq\lambda}{J_2} - \frac{1}{2} \frac{d^2}{dt^2}\log J_2(t) \right)\,d\theta \spaceperiod \end{align} Since $J_1$ and $J_2$ are functions of $\Metric$, they are also periodic with period $\Period$, so \begin{equation*} \int_0^{\Period} \frac{d^2}{dt^2}\log J_k dt = \left.\frac{\tfrac{d}{dt}J_k}{J_k}\right|_{0}^{\Period} = 0\spaceperiod \end{equation*} Subtracting~\eqref{eq:dJ2} from~\eqref{eq:dJ1} and integrating over the interval $[0,\,2\JK']$ gives \begin{equation*} d\AngleZero = \frac{1}{4\NU} \left( 4\JE' - \Period\jq\,(\lambda+\lambda^{-1}) \right)\,d\theta = 2\pi\,d\OMEGA \spaceperiod \end{equation*} Further, since \begin{equation*} \AngleZero(2\JK',\,\jq,\,\lambda) = 2\mi \NU \int_0^{2\JK'} \frac{\jq}{v(\lambda^{-1}v - \jq v^{-1})} - \frac{\jq}{v(\lambda v - \jq v^{-1})} \,dt \end{equation*} and $\NU(\lambda^{-1}) = \NU(\lambda)$, it follows that $\AngleZero(2\JK',\,\jq,\,\lambda^{-1}) = - \AngleZero(2\JK',\,\jq,\,\lambda)$. Thus $\AngleZero$ shares the properties of $\OMEGA$ which determine it uniquely. Hence $\AngleZero(2\JK',\,\jq,\,\lambda) = 2\pi \omega(\jq,\,\lambda)$.
\end{proof} By \eqref{eq:equi-monodromy} we have $\tau_{\gamma} f = M_{\lambda_1}(\gamma)\,f\,M_{\lambda_2}^{-1}(\gamma) = \exp(\pi( x\,\NU_1 + p\,\OMEGA_1)\,e_0 )\,f\, \exp( -\pi( x\,\NU_2 + p\,\OMEGA_2)\,e_0 )$, so translation by a period $\gamma$ \eqref{eq:period_gamma} induces an ambient isometry. The \emph{equivariant action} is the action of the 1-parameter group $\GK$ of isometries of $\bbS^3$ defined by \begin{equation} \label{eq:equi_action} \GK = \{ g_x \in \mathrm{Iso}(\mathbb{S}^3) \mid g_x(p) = \exp(x\,\Vconst_1\,e_0)\, p \, \exp(-x\,\Vconst_2\,e_0)\,, x \in \bbR \}\,. \end{equation} Since $\NU_1\not=0\not=\NU_2$, the centralizer of the equivariant action \eqref{eq:equi_action} \begin{equation}\label{eq:commut} \hat{\GK} = \{ g \in \mathrm{Iso}(\mathbb{S}^3) \mid gk = kg \mbox{ for all } k\in\GK\} \end{equation} in the group $\mathrm{Iso}(\bbS^3)$ of orientation-preserving isometries of $\bbS^3$ is a two-dimensional torus. With the exception of two geodesics, the orbits of $\hat{\GK}$ are two-dimensional embedded tori. These geodesics, which we call the \emph{axes} of the surface, are linked, and are situated so that every geodesic 2-sphere through one is orthogonal to the other. Every orbit of the equivariant action~\eqref{eq:equi_action}, with the exception of the two axes, is an $(m,\,n)$-torus knot in the corresponding orbit of $\hat{\GK}$, with \begin{equation} \label{eq:pq_nu} \frac{m}{n}=\frac{\NU_1-\NU_2}{\NU_1+\NU_2}\,. \end{equation} If we identify $$ \bbS^3=\matSU{2}{} = \left\{ \left. \begin{pmatrix} a & b\\ -\bar{b} & \bar{a} \end{pmatrix} \,\, \right| \,\, |a|^2+|b|^2 =1 \right\} \quad \mbox{and choose} \quad e_0 = \begin{pmatrix} \mi & 0 \\ 0 & -\mi \end{pmatrix} \spacecomma $$ then the equivariant action extends to an action on $(a,b)\in\bbC^2$ given by $R(s,\,t)(a,\,b)=(e^{\mi s}a,\,e^{\mi t}b)$, called the \emph{extended action}.
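The matrix identity behind the extended action can be checked directly: with $e_0=\operatorname{diag}(\mi,\,-\mi)$, conjugating a point of $\matSU{2}{}$ as in \eqref{eq:equi_action} rotates its entries $a$ and $b$ by the stated angles. The following Python sketch (ours, with sample values for $x$, $\NU_1$, $\NU_2$) verifies this numerically:

```python
import numpy as np

def su2(a, b):
    """A point of S^3 = SU(2), written as [[a, b], [-conj(b), conj(a)]]."""
    return np.array([[a, b], [-np.conj(b), np.conj(a)]])

def exp_e0(t):
    """exp(t*e_0) for e_0 = diag(i, -i)."""
    return np.diag([np.exp(1j*t), np.exp(-1j*t)])

a, b = 0.6, 0.8j               # |a|^2 + |b|^2 = 1
nu1, nu2, x = 0.7, -1.3, 2.1   # sample values

# equivariant action: g_x(p) = exp(x nu1 e_0) p exp(-x nu2 e_0)
g = exp_e0(x*nu1) @ su2(a, b) @ exp_e0(-x*nu2)

# extended action: R(s, t)(a, b) = (e^{i s} a, e^{i t} b)
s, t = x*(nu1 - nu2), x*(nu1 + nu2)
assert np.allclose(g, su2(np.exp(1j*s)*a, np.exp(1j*t)*b))
```

In particular, the orbit of a point with $a\ne 0\ne b$ winds in the two circle directions at the angular rates $\NU_1-\NU_2$ and $\NU_1+\NU_2$, which is the source of the knot ratio \eqref{eq:pq_nu}.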
In particular, the translation $\tau_\gamma$ by a period $\gamma$ \eqref{eq:period_gamma} induces the ambient isometry $R(s,\,t)$ with $s= x\,(\NU_1-\NU_2) + p\,(\OMEGA_1-\OMEGA_2)$ and $t= x\,(\NU_1+\NU_2) + p\,(\OMEGA_1+\OMEGA_2)$. \begin{proof}[Proof of Proposition \ref{prop:torus-of-revolution}] For a flat \cmc torus this can always be achieved since we have not used the freedom of the M\"obius transformation. A spectral genus one torus is a surface of revolution if and only if the equivariant action is the rotation around a geodesic, or equivalently if \eqref{eq:equi_action} fixes one geodesic of $\bbS^3$ pointwise. The generator of the extended action has eigenvalues $\mi(\Vconst_1\pm\Vconst_2)$. Thus there exists a zero eigenvalue if and only if $\Vconst_2=\pm\Vconst_1$, which is equivalent to $\lambda_2 = \lambda_1^{-1}$. \end{proof} \begin{proposition} \label{prop:closing-conditions} A spectral genus one \cmc surface in $\bbS^3$ is closed along two independent periods if and only if there exists a $T\in\bbZ^3\setminus\{0\}$ with $T\cdot X = 0 = T\cdot Y$ for $X\coloneq \left( 0,\,\NU_1 ,\,\NU_2 \right)$ and $Y\coloneq \left(1,\,\OMEGA_1,\,\OMEGA_2 \right)$. \end{proposition} \begin{proof} Suppose we have two $\bbR$-linearly independent periods $\gamma_j = x_j\pi + 2\mi p_{j0}\JK' \in \bbC^\times$ for some $x_j\in\bbR$ and $p_{j0}\in\bbZ$, $j=1,\,2$. Let $M_\lambda(\gamma_j)$ be the respective frame monodromies with eigenvalues $\mu_j^{\pm 1}$. Then there exist four further integers $p_{jk} \in \bbZ$ as in \eqref{eq:pjk} for $j,\,k=1,\,2$.
Hence $p_{jk} = x_j \, \NU_k + p_{j0}\,\OMEGA_k$, and we write this system as \begin{equation} \label{eq:closing2} \begin{pmatrix} x_1 & p_{10} \\ x_2 & p_{20} \end{pmatrix} \begin{pmatrix} X\\Y \end{pmatrix} = \begin{pmatrix} p_{10} & p_{11} & p_{12}\\ p_{20} & p_{21} & p_{22} \end{pmatrix} \coloneq \begin{pmatrix} P\\Q \end{pmatrix} \spaceperiod \end{equation} Then $T\coloneq P\times Q$ lies in $\bbZ^3\setminus\{0\}$ and satisfies $T\cdot X=0$ and $T\cdot Y=0$. Conversely, suppose that there exists $T\in \bbZ^3\setminus\{0\}$ satisfying $T\cdot X=0$ and $T\cdot Y=0$. Let $\{P,\,Q\}$ be a basis for the lattice $\Lambda=\{R\in\bbZ^3\suchthat T\cdot R=0\}$. Then there exist $x_1,\,x_2\in\bbR$ and $p_{10},\,p_{20}\in\bbZ$ such that~\eqref{eq:closing2} holds. Then $\gamma_j = x_j\pi + 2\mi\, p_{j0}\JK'\in\Cstar$ generate a lattice with respect to which the surface is doubly periodic. \end{proof} \begin{remark}\rm{ The closing conditions in Proposition~\ref{prop:closing-conditions} can be used to describe the intersection of the zero sets of two functions on the parameter space $(\jq,\,\jk,\,\jh)$. The curve forming the intersection of the two level sets is then an integral curve of the vector field~\eqref{eq:torus-flow}. For $X,\,Y$ as in Proposition~\ref{prop:closing-conditions}, there exists $s=(s_0,\,s_1,\,s_2)\in\bbZ^3$ such that $s\cdot X=0$ and $s\cdot Y=0$. The closing conditions are thus $F=0,\,G=0$, where $F\coloneq s\cdot X$ and $G\coloneq s\cdot Y$. If we set $\lambda_k= e^{2\mi\theta_k}$, then the system of implicit flow equations is \begin{equation*} \begin{pmatrix} \frac{\del F}{\del\Bpoint} & \frac{\del F}{\del\theta_1} & \frac{\del F}{\del\theta_2}\\ \frac{\del G}{\del\Bpoint} & \frac{\del G}{\del\theta_1} & \frac{\del G}{\del\theta_2} \end{pmatrix} \begin{pmatrix} \dot\Bpoint \\ \dot \theta_1 \\ \dot \theta_2 \end{pmatrix} = 0\spacecomma \end{equation*} and we next compute the coefficient matrix on the left-hand side.
Up to scale, $(\dot \Bpoint,\, \dot \theta_1,\, \dot \theta_2)$ is the cross product of the rows of this matrix. Hence \begin{equation*} \begin{pmatrix} \frac{\del F}{\del \Bpoint} & \frac{\del F}{\del \theta_1} & \frac{\del F}{\del \theta_2}\\ \frac{\del G}{\del \Bpoint} & \frac{\del G}{\del \theta_1} & \frac{\del G}{\del \theta_2} \end{pmatrix} = \begin{pmatrix} s_1\frac{\del\NU_1}{\del \Bpoint} + s_2\frac{\del\NU_2}{\del \Bpoint} & s_1\frac{\del\NU_1}{\del\theta_1} & s_2\frac{\del\NU_2}{\del\theta_2} \\ s_1\frac{\del\OMEGA_1}{\del \Bpoint} + s_2\frac{\del\OMEGA_2}{\del \Bpoint} & s_1\frac{\del\OMEGA_1}{\del\theta_1} & s_2\frac{\del\OMEGA_2}{\del\theta_2} \end{pmatrix}\spaceperiod \end{equation*} Since $F=s_1\NU_1+s_2\NU_2=0$, this matrix is a scalar multiple of \begin{equation*} \begin{pmatrix} V_1\\V_2 \end{pmatrix} \coloneq \begin{pmatrix} \NU_2\frac{\del\NU_1}{\del \Bpoint} -\NU_1\frac{\del\NU_2}{\del \Bpoint} & \NU_2\frac{\del\NU_1}{\del\theta_1} & -\NU_1\frac{\del\NU_2}{\del\theta_2} \\ \NU_2\frac{\del\OMEGA_1}{\del \Bpoint} -\NU_1\frac{\del\OMEGA_2}{\del \Bpoint} & \NU_2\frac{\del\OMEGA_1}{\del\theta_1} & -\NU_1\frac{\del\OMEGA_2}{\del\theta_2} \end{pmatrix}\spaceperiod \end{equation*} Hence $(\dot \Bpoint,\,\dot\theta_1,\,\dot\theta_2)$ is a scalar multiple of $V_1\cross V_2$. The derivatives of $\NU$ and $\OMEGA$ with respect to $\Bpoint$ and $\theta$, where $2\mi\,\theta \coloneq \ln\lambda$, were computed in \eqref{eq:omega}, \eqref{eq:nu dot} and \eqref{eq:omega'}.
A calculation yields that $V_1\cross V_2$ is a scalar multiple of \begin{equation*} \begin{pmatrix} 2\Bpoint(\JE'\cos(\theta_1+\theta_2)- \Bpoint\JK'\cos(\theta_1-\theta_2) ) \\ -\frac{(1+\jq^2)\JE'-2\JK'}{1-\jq^2} \sin\bigl(\theta_1+\theta_2\bigr) +\frac{2\JE'-(1+\jq^2)\JK'}{1-\jq^2} \sin\bigl(\theta_1-\theta_2\bigr) \\ -\frac{(1+\jq^2)\JE'-2\JK'}{1-\jq^2} \sin\bigl(\theta_1+\theta_2\bigr) -\frac{2\JE'-(1+\jq^2)\JK'}{1-\jq^2} \sin\bigl(\theta_1-\theta_2\bigr) \end{pmatrix} \spaceperiod \end{equation*} Changing variables from $(\Bpoint,\,\theta_1,\,\theta_2)$ to $(\Xzero,\,\Xone,\,\Xtwo)$, and rescaling, yields~\eqref{eq:torus-flow}.} \end{remark} \begin{figure}[t] \centering \includegraphics[width=5.35cm]{torus1-0-13.eps} \includegraphics[width=5.35cm]{torus1-0-13-inv.eps} \includegraphics[width=5.35cm]{torus12-0-13.eps} \caption{ \label{fig:thirteenlobe} A sampling of $(k,\,0,\,13)$ \cmc tori of revolution in $\bbS^3$. } \end{figure} \section{The moduli space of equivariant \cmc tori in $\mathbb{S}^3$} We next determine those flat \cmc tori which allow a bifurcation into genus $g=1$ \cmc tori. These are precisely those flat \cmc tori whose spectral curves have a double point on the unit circle. We will show in Theorem~\ref{thm:moduli-space} that the spectral genus one tori lie in flow families which begin at a flat \cmc torus with a double point on $\bbS^1$. These flat \cmc tori hence serve as initial conditions for the spectral genus $g=1$ flow~\eqref{eq:torus-flow}. The purpose of this section is to classify the flat \cmc tori in $\bbS^3$ with a double point on $\bbS^1$. We will show that three integers determine a primitive flat \cmc torus with a double point on $\bbS^1$ up to the $\bbS^1$ action and complex conjugation on the spectral curve.
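The parameterization we use can first be illustrated numerically. The following Python sketch (ours; the triple is an arbitrary sample) computes the vector $s$ appearing in \eqref{eq:phi} below and checks that $s$ is real, that $\maptau(s)<0$, and that $\phi$ takes its value in the region $\calR$:

```python
import numpy as np

# a sample triple of pairwise distinct unimodular numbers (lambda_0, lambda_1, lambda_2)
lam = np.exp(2j*np.pi*np.array([0.10, 0.37, 0.62]))
m = np.sqrt(lam)                        # a choice of square roots
s = 1j*np.cross(m, np.conj(m))          # s = i m x conj(m)

assert np.allclose(s.imag, 0.0)         # s is real
s0, s1, s2 = s.real

# tau(s) = -(s0+s1+s2)(-s0+s1+s2)(s0-s1+s2)(s0+s1-s2) is negative off the diagonal
tau = -(s0+s1+s2)*(-s0+s1+s2)*(s0-s1+s2)*(s0+s1-s2)
assert tau < 0

# phi(lambda) = (|s0|/max, min/max) lies in R = {(t0, t1) : 0 <= t1 < t0 < 1}
lo, hi = sorted([abs(s1 + s2), abs(s1 - s2)])
t0, t1 = abs(s0)/hi, lo/hi
assert 0 <= t1 < t0 < 1
```

The choice of square roots does not matter here: a sign change in an entry of \texttt{m} changes two signs of $s$, which leaves $\maptau(s)$ and the value of $\phi$ unchanged.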
The classification proceeds by parameterizing the configuration space of three marked points on $\bbS^1$ as follows: Let $\Delta \subset \bbT^3 \coloneq \bbS^1 \times \bbS^1 \times \bbS^1$ denote the set of all triples of unimodular numbers in which two of the three entries coincide. Let $\calT= (\bbT^3 \setminus\Delta)/\thicksim$, where we identify triples obtained by the three transformations: \begin{itemize} \item[(A)] Rotation $(\lambda_0,\,\lambda_1,\,\lambda_2)\mapsto\delta(\lambda_0,\,\lambda_1,\,\lambda_2)$ with $\delta\in\bbS^1$. \item[(B)] Inversion $(\lambda_0,\,\lambda_1,\,\lambda_2)\mapsto (\lambda_0^{-1},\,\lambda_1^{-1},\,\lambda_2^{-1})$. \item[(C)] Swapping of the \sym points $(\lambda_0,\,\lambda_1,\,\lambda_2)\mapsto(\lambda_0,\,\lambda_2,\,\lambda_1)$. \end{itemize} Now we define a mapping \begin{equation}\label{eq:phi} \begin{split} \phi:\,(\lambda_0,\,\lambda_1,\,\lambda_2)&\mapsto \left(\frac{|s_0|}{\max\{|s_1+s_2|,\,|s_1-s_2|\}},\, \frac{\min\{|s_1+s_2|,\,|s_1-s_2|\}}{\max\{|s_1+s_2|,\,|s_1-s_2|\}} \right)\quad \mbox{ where }\\ (s_0,\,s_1,\,s_2)&= \mi(\lambda_0^{1/2},\,\lambda_1^{1/2},\,\lambda_2^{1/2})\times (\lambda_0^{-1/2},\,\lambda_1^{-1/2},\,\lambda_2^{-1/2}) \spaceperiod \end{split} \end{equation} \begin{proposition} \label{thm:tau} The map~\eqref{eq:phi} induces a bijection from $\calT$ onto $\calR=\{(t_0,\,t_1)\in [0,\,1)^2 \mid t_1<t_0\}$. \end{proposition} \begin{proof} For $(\lambda_0,\lambda_1,\lambda_2)\in\bbT^3$ choose square roots and set $m \coloneq (\lambda_0^{1/2},\,\lambda_1^{1/2},\,\lambda_2^{1/2})$. Then $s = \mi m\cross \ol m$ and $\ol s = -\mi \ol m\cross m =\mi m\cross\ol m = s$, so $s$ is real. If $s=0$, then $\ol m$ is contained in the $\bbS^1$ orbit of $m$, and $\lambda_0=\lambda_1=\lambda_2$, so $(\lambda_0,\,\lambda_1,\,\lambda_2)\in\Delta$, contrary to assumption. This shows $s\ne 0$, so $s$ defines a point of $\bbR\bbP^2$.
For the map $\maptau: \bbR^3 \to \bbR$ defined by \begin{align*} \label{eq:tau} \maptau(s_0,\,s_1,\,s_2)& = -(s_0+s_1+s_2)(-s_0+s_1+s_2)(s_0-s_1+s_2)(s_0+s_1-s_2)\\ &=(\lambda_0^{1/2}\lambda_1^{-1/2}-\lambda_1^{1/2}\lambda_0^{-1/2})^2 (\lambda_0^{1/2}\lambda_2^{-1/2}-\lambda_2^{1/2}\lambda_0^{-1/2})^2 (\lambda_1^{1/2}\lambda_2^{-1/2}-\lambda_2^{1/2}\lambda_1^{-1/2})^2 \end{align*} we have $\maptau(s) \leq 0$ since each of the three squared factors is non-positive. Furthermore, $\maptau(s)=0$ is equivalent to $(\lambda_0,\,\lambda_1,\,\lambda_2)\in\Delta$. Hence we have $\maptau(s)<0$. Define \begin{itemize} \item[(A')] Rotation $(\lambda_0^{1/2},\,\lambda_1^{1/2},\,\lambda_2^{1/2})\mapsto \delta(\lambda_0^{1/2},\,\lambda_1^{1/2},\,\lambda_2^{1/2})$ with $\delta\in\bbS^1$. \item[(B')] Inversion $(\lambda_0^{1/2},\,\lambda_1^{1/2},\,\lambda_2^{1/2})\mapsto (\lambda_0^{-1/2},\,\lambda_1^{-1/2},\,\lambda_2^{-1/2})$. \item[(C')] Swapping of the \sym points $(\lambda_0^{1/2},\,\lambda_1^{1/2},\,\lambda_2^{1/2})\mapsto (\lambda_0^{1/2},\,\lambda_2^{1/2},\,\lambda_1^{1/2})$. \item[(D')] Changing the signs of the entries of $(\lambda_0^{1/2},\,\lambda_1^{1/2},\,\lambda_2^{1/2})$ independently. \end{itemize} The transformations (A') and (B') do not change $s\in\bbR\bbP^2$, (C') negates signs and swaps $s_1$ and $s_2$, and (D') negates two signs in $s$ for each sign change. If $s_j =0$ for at least one $j\in\{0,\,1,\,2\}$ then $\maptau(s_0,\,s_1,\,s_2) \geq 0$. Hence if $s\in\bbR\bbP^2$, then $\maptau(s)<0$ implies $s_j\ne 0$ for all $j\in\{0,\,1,\,2\}$. Then $s=\mi m \times \ol m$ in $\bbR\bbP^2$ if and only if $s \cdot m = 0$ and $s \cdot \ol m = 0$.
These are equivalent to \begin{equation*} \lambda_1 - 2A_1 \lambda_1^{1/2}\lambda_0^{1/2} + \lambda_0 = 0\spacecomma\interspace \lambda_2 - 2A_2 \lambda_2^{1/2}\lambda_0^{1/2} + \lambda_0 = 0\spacecomma\interspace s_0 \lambda_0^{1/2} + s_1 \lambda_1^{1/2} + s_2 \lambda_2^{1/2} = 0\spacecomma \end{equation*} where \begin{equation*} A_1 = \frac{-s_0^2-s_1^2+s_2^2}{2s_0s_1} \AND A_2 = \frac{-s_0^2+s_1^2-s_2^2}{2s_0s_2} \spacecomma \mbox{ so that } A_k^2 = 1+\frac{\maptau(s)}{{(2s_0s_k)}^2} \mbox{ for } k=1,\,2 \spaceperiod \end{equation*} Thus $s\in\bbR\bbP^2$ with $\maptau(s)<0$ determines the following elements of $\bbT^3$ uniquely up to (A') and (B'): \begin{equation} \label{eq:t-vector} (\lambda_0^{1/2},\,\lambda_1^{1/2},\,\lambda_2^{1/2}) = \left( 1,\, \frac{-s_0^2-s_1^2+s_2^2 \pm \sqrt{\maptau(s)}}{2s_0s_1},\ \frac{-s_0^2+s_1^2-s_2^2 \mp \sqrt{\maptau(s)}}{2s_0s_2} \right)\spaceperiod \end{equation} Furthermore the numbers \begin{equation}\label{eq:integers-flat} (\TurnZero,\,\TurnOne,\,\TurnTwo)=\tfrac{1}{2}\, (|s_0|,\,\min\{|s_1+s_2|,\,|s_1-s_2|\},\, \max\{|s_1+s_2|,\,|s_1-s_2|\}) \end{equation} determine $(s_0,\,s_1,\,s_2)$ up to the transformations (C') and (D'). Due to $\maptau(s) = (s_0^2-(s_1-s_2)^2)(s_0^2-(s_1+s_2)^2) = 16(\TurnZero^2-\TurnOne^2)(\TurnZero^2-\TurnTwo^2)$ the condition $\maptau(s)<0$ is equivalent to $0\leq\TurnOne<\TurnZero<\TurnTwo$. This shows that $\phi$ maps $\bbT^3\setminus\Delta$ onto $\calR$, and that the pre-image of each point is uniquely determined up to (A)-(C). \end{proof} The {\emph{spectral data}} of a flat \cmc torus with a double point is a triple $(\lambda_0,\,\lambda_1,\,\lambda_2)\in\calT$ of values of the spectral parameter at the double point and the two \sym points. We identify triples obtained by the transformations~(A)-(C). Hence the set of spectral data is isomorphic to $\calR \cap \bbQ^2$. The \emph{turning number} of a plane curve is the degree of its Gauss map; we take the turning number to be unsigned.
The \emph{total turning number} of a collection of immersed curves is the sum of their turning numbers. For twizzled surfaces, a \emph{profile curve set} of the surface with respect to one of its axes $A$ is constructed as follows: Let $\bbS^2_A$ be a geodesic 2-sphere containing $A$. The axis $A$ divides $\bbS^2_A$ into two hemispheres. Then a profile curve set of the torus with respect to $A$ is the intersection of the surface with $\bbS^2_A$ or with one of the hemispheres of $\bbS^2_A$. By the equivariant action, all profile curve sets associated with an axis are isometric. For surfaces of revolution, there is a profile curve set with respect to the axis which is not the revolution axis. \begin{figure}[t] \centering \includegraphics[width=8.15cm]{torus213-minimal.eps} \includegraphics[width=8.15cm]{torus316-minimal.eps} \caption{ \label{fig:minimal} The minimal $(2,\,1,\,3)$ and $(3,\,1,\,6)$ tori. Stereographically projected from $\bbS^3$ to $\bbE^3$. } \end{figure} \begin{theorem} \label{thm:flat-torus-integers} \mbox{} {\rm{(1)}} \label{item:integer-inequality} The set of spectral data of flat \cmc tori with a double point on $\bbS^1$ up to transformations~(A)-(C) is in one-to-one correspondence with integer triples \begin{equation}\label{eq:triple} (\TurnZero,\,\TurnOne,\,\TurnTwo)\in\bbZ^3 \mbox{ with } \gcd(\TurnZero,\,\TurnOne,\,\TurnTwo)=1 \mbox{ satisfying } 0 \le \TurnOne < \TurnZero < \TurnTwo\,. \end{equation} {\rm{(2)}} \label{item:flat-torus-covering} Let $\bbT^2$ be a $(\TurnZero,\,\TurnOne,\,\TurnTwo)$ flat \cmc torus. Then $\bbT^2$ covers its underlying flat embedded rectangular \cmc torus $\TurnZero$ times. Each profile curve set $\calC_k$ ($k=1,\,2$) of $\bbT^2$ has total turning number $\Turn_0$. The set is a union of $\gcd(\TurnZero,\,\Turn_k)$ coinciding circles, and each circle is wrapped $\Turn_0/\gcd(\TurnZero,\,\Turn_k)$ times. {\rm{(3)}} \label{item:flat-torus-rev} The case $\TurnOne=0$ occurs if and only if the torus is rotational. 
\end{theorem} \begin{proof} Let $m = (\lambda_0^{1/2},\,\lambda_1^{1/2},\,\lambda_2^{1/2})$ be the square roots of the spectral data of a flat \cmc torus, and assume the torus has period lattice $\Gamma = \gamma_1\bbZ + \gamma_2 \bbZ$. We can frame the torus by an extended frame \eqref{eq:flat-frame}, and then the logarithmic eigenvalues of the monodromy with respect to these periods are $\ln \mu(\gamma_j,\,\lambda_k)$ as in \eqref{eq:eigenvalues}. Then there exist six integers $p_{jk} \in \bbZ$ such that $\ln \mu(\gamma_j,\,\lambda_k) = \pi\mi p_{jk}$ for $j=1,\,2$ and $k=0,\,1,\,2$. As in Proposition~\ref{prop:flat-torus}, let $\Lambda= \kappa_1\bbZ + \kappa_2\bbZ$, and let $\Lambda^\ast = \gamma^*_1\bbZ + \gamma^*_2\bbZ$ be the lattice of the underlying embedded rectangular torus, with dual basis $\DOTR{\gamma^*_j}{\kappa_k} = \delta_{jk}$. Since $\Gamma \subset \Lambda^\ast$, we can expand \begin{equation} \label{eq:flat-2.1} \begin{pmatrix} \gamma_1 \\ \gamma_2 \end{pmatrix} = \begin{pmatrix} \DOTR{\gamma_1}{\kappa_1} & \DOTR{\gamma_1}{\kappa_2}\\ \DOTR{\gamma_2}{\kappa_1} & \DOTR{\gamma_2}{\kappa_2} \end{pmatrix} \,\begin{pmatrix} \gamma^*_1 \\ \gamma^*_2 \end{pmatrix} \,. \end{equation} Since $p_{jk} = \DOTR{\gamma_j}{\lambda_k^{1/2}}$, the determinant of the above change of basis is \begin{equation} \label{eq:det_is_l0} \left|\,\det\left(\DOTR{\gamma_j}{\kappa_k}\right)\, \right| = \tfrac{1}{2}\left| \,p_{12}p_{21} - p_{11}p_{22} \,\right|\,. \end{equation} (1) Define $(\tilde{s}_0,\, \tilde{s}_1,\,\tilde{s}_2) \coloneq (p_{10},\,p_{11},\,p_{12}) \times (p_{20},\,p_{21},\,p_{22})$. By construction $(\tilde{s}_0,\,\tilde{s}_1,\,\tilde{s}_2) \in \bbZ^3$. A computation gives $(\tilde{s}_0,\,\tilde{s}_1,\,\tilde{s}_2) = (\overline{\gamma}_1\gamma_2- \gamma_1\overline{\gamma}_2) \,m \times \ol m$. Note that $\overline{\gamma}_1\gamma_2 - \gamma_1\overline{\gamma}_2 \in \mi \bbR$, and is never zero, since $\gamma_1,\,\gamma_2$ are not collinear.
From \eqref{eq:closing-dot} we additionally have that $p_{11}-p_{12} \in 2\,\bbZ$ and $p_{21}-p_{22} \in 2\,\bbZ$. Hence it follows that $\tilde{s}_0,\,\tilde{s}_1 + \tilde{s}_2,\,\tilde{s}_1 - \tilde{s}_2 \in 2\,\bbZ$. There exists a unique representative $(s_0,\,s_1,\,s_2)\in\bbZ^3$ of $(\tilde{s}_0,\,\tilde{s}_1,\,\tilde{s}_2)\in\bbR\bbP^2$ with $\gcd(s_0,\,s_1 + s_2,\,s_1 - s_2) =2$. The corresponding numbers $(\TurnZero,\,\TurnOne,\,\TurnTwo)$ defined in \eqref{eq:integers-flat} then obey \eqref{eq:triple}. We thus have a map from spectral data to integer triples obeying~\eqref{eq:triple}, and by Proposition~\ref{thm:tau} this map is bijective. (2) The vector $s= (s_0,\,s_1,\,s_2)$ determines a lattice $\Gamma_s\subset\Lambda^\ast$ defined by \begin{equation}\label{eq:sublattice} \Gamma_s=\left\{ \tfrac{p_1+p_2}{2}\gamma_1^\ast+\tfrac{p_1-p_2}{2}\gamma_2^\ast\mid (p_0,\tfrac{p_1+p_2}{2},\tfrac{p_1-p_2}{2})\in\mathbb{Z}^3 \mbox{ with }s_0p_0+s_1p_1+s_2p_2=0\right\} \spaceperiod \end{equation} The lattice $\Gamma_s$ contains all the periods of the torus with respect to which the logarithmic eigenvalues of the monodromy do not change their values at the double point. The lattice $\Gamma_s$ does not change if $s$ is multiplied by a nonzero integer or if the sign of $s_0$ is changed: two collinear $s$ correspond to the same lattice $\Gamma_s$, and switching the sign of $s_0$ leaves $\Gamma_s$ unchanged. Due to \eqref{eq:integers-flat} the integers $s$ are determined by $(\TurnZero,\,\TurnOne,\,\TurnTwo)$ up to transformations (C') and (D').
Therefore the lattices $\Gamma_s$ of the flat \cmc tori with triple $(\TurnZero,\,\TurnOne,\,\TurnTwo)$ satisfying \eqref{eq:triple} are those $\Gamma_s$ with one of the following vectors $s$ given by \begin{equation}\label{eq:fours} \begin{split} &(2\TurnZero,\TurnOne+\TurnTwo,\TurnOne-\TurnTwo)\,,\qquad (2\TurnZero,\TurnOne+\TurnTwo,\TurnTwo-\TurnOne)\,,\\ &(2\TurnZero,\TurnOne-\TurnTwo,\TurnOne+\TurnTwo)\,,\qquad (2\TurnZero,\TurnTwo-\TurnOne,\TurnOne+\TurnTwo)\,. \end{split} \end{equation} The transformations (C') and (D') act on sublattices $\Upsilon\subset\Lambda^\ast$ as follows: \begin{itemize} \item[(C'')] $C''\Upsilon = \left\{ n_1\gamma_1^\ast+n_2\gamma_2^\ast \mid n_1\gamma_1^\ast - n_2\gamma_2^\ast\in\Upsilon \right\}$, \item[(D'')] $D''\Upsilon = \left\{ n_1\gamma_1^\ast+n_2\gamma_2^\ast \mid n_2\gamma_1^\ast + n_1\gamma_2^\ast\in\Upsilon \right\}$. \end{itemize} Since $2p_0\TurnZero+p_1(\TurnOne+\TurnTwo)+p_2(\TurnOne-\TurnTwo)= 2\bigl(p_0\TurnZero+\tfrac{p_1+p_2}{2}\TurnOne+\tfrac{p_1-p_2}{2}\TurnTwo\bigr)$, the lattices corresponding to the four vectors $s$ in \eqref{eq:fours} are respectively equal to \begin{equation}\label{eq:sublattices} \begin{split} &\Gamma_{[\TurnZero,\,\TurnOne,\,\TurnTwo]}= \{n_1\gamma_1^\ast+n_2\gamma_2^\ast\mid n_1\TurnOne+n_2\TurnTwo\in\TurnZero\,\bbZ\,\}\,,\quad C''\;\Gamma_{[\TurnZero,\,\TurnOne,\,\TurnTwo]}\,,\\ &D''\;\Gamma_{[\TurnZero,\,\TurnOne,\,\TurnTwo]}\, \quad \mbox{ and } \quad D''\; C''\;\Gamma_{[\TurnZero,\,\TurnOne,\,\TurnTwo]}\,. \end{split} \end{equation} For flat \cmc tori the two-dimensional group $\hat{K}$~\eqref{eq:commut} acts on the torus. Since the axes do not depend on the subgroup $K\subset\hat{K}$~\eqref{eq:equi_action}, the two subgroups corresponding to the rotations of the embedded torus act on a geodesic sphere containing one of the axes. Hence the smallest periods in $\Gamma_s\cap\gamma_1^\ast\,\bbZ$ and $\Gamma_s\cap\gamma_2^\ast\,\bbZ$ represent components of the profile curve sets.
These wrapping numbers are $\TurnZero/\gcd(\TurnZero,\TurnOne)$ and $\TurnZero/\gcd(\TurnZero,\TurnTwo)$. For each of the two profile curve sets, the number of components times the corresponding wrapping number is equal to $|\Lambda^\ast/\Gamma_s|$. But $\TurnZero =|\Lambda^\ast/\Gamma_s|$ by \eqref{eq:det_is_l0}. Hence the total turning number is equal to $\TurnZero$. Moreover, the corresponding embedded torus is covered $\TurnZero$ times. (3) Clearly $\TurnOne=0$ holds if and only if $s_1 = \pm s_2$. We may assume that $\lambda_0 = 1$ and then by \eqref{eq:t-vector} this holds if and only if $\lambda_2 = \lambda_1^{-1}$. By Proposition \ref{prop:torus-of-revolution} we conclude that $\TurnOne=0$ holds if and only if the torus is rotational. \end{proof} \begin{proposition} \label{prop:involution} The family of non-rotational spectral genus 1 tori starting at the $(\TurnZero,\,\TurnOne,\,\TurnTwo)$ flat \cmc torus ends at the $(\TurnOne+\TurnTwo-\TurnZero,\,\TurnOne,\,\TurnTwo)$ flat \cmc torus. \end{proposition} \begin{proof} Due to Proposition~\ref{prop:levelset}~\eqref{item:levelset2} the difference $(\cos(2\theta_1)-\cos(2\theta_2))^2=4(1-\jk^2)(1-\jh^2)$ is uniformly bounded away from zero. After possibly swapping the \sym points we may assume $\sign(\Xzero)\cos(2\theta_1)<\sign(\Xzero)\cos(2\theta_2)$ during the flow. We remark that $\sign(\Xzero)$ is constant throughout the flow. In particular, $\lambda_1$ cannot pass through $\lambda_1=\sign(\Xzero)$ and $\lambda_2$ cannot pass through $\lambda_2=-\sign(\Xzero)$. Due to Proposition~\ref{prop:moduli-space} the function $\Xone-\sign(\Xzero)\Xtwo$ changes sign, and the flow passes through a root of this function. This implies that $\lambda_2$ has to pass through $\lambda_2=\sign(\jq)$. We want to describe how the final six integers in \eqref{eq:closing2} are related to the corresponding initial six integers. Since $\OMEGA$ is multi-valued on the fixed point set $\bbS^1$ of $\varrho$, we cut this circle at $\lambda=\sign(\Xzero)$.
At the end points of the flow with $\Xzero=\pm1$ this point is a double point. This is a good choice for the cut, since at the end points of the flow the \sym points cannot sit there. The multi-valued function $\OMEGA$ is single valued on $\bbS^1\setminus\{\sign(\Xzero)\}$. The difference between the two boundary values of $\OMEGA$ on this interval is equal to $\pm2$ due to Legendre's relation \cite[17.3.13]{AbrS}. Writing $p_{jk} = x_j \, \NU_k + p_{j0}\,\OMEGA_k$ with respect to the periods $\gamma_j = x_j \pi + 2\mi p_{j0}\JK'$ as in \eqref{eq:closing2}, the final and initial integers are related by \begin{equation*} \left.\begin{pmatrix} p_{10} & p_{11} & p_{12}\\ p_{20} & p_{21} & p_{22} \end{pmatrix}\right|_{t=t_{\max}}= \left.\begin{pmatrix} p_{10} & p_{11} & p_{12}\\ p_{20} & p_{21} & p_{22} \end{pmatrix}\right|_{t=t_{\min}}\pm 2 \begin{pmatrix} 0 & 0 & p_{10}\\ 0 & 0 & p_{20} \end{pmatrix} \spaceperiod \end{equation*} We remark that due to our choice these integers change their values only when $\lambda_2$ passes through $\lambda_2=\sign(\Xzero)$. Now let $(\NU_1,\,\OMEGA_1)$ and $(\NU_2,\,\OMEGA_2)$ denote the values of $(\NU,\,\OMEGA)$ at the \sym points. After possibly independent hyperelliptic involutions, we have $0<\NU_2<\NU_1$ in agreement with our choice of the \sym points. Note that $(s_0,\,s_1,\,s_2) = (p_{10},\,p_{11},\,p_{12}) \times (p_{20},\,p_{21},\,p_{22}) = (x_1 p_{20}-x_2 p_{10})\,(0,\,\NU_1,\,\NU_2) \times (1,\,\OMEGA_1,\,\OMEGA_2)$. Since the integers change only when $\lambda_2$ passes through $\lambda_2=\sign(\Xzero)$, we can calculate the change of the $s_j$ in terms of the change of the values of $\NU$ and $\OMEGA$ at $\lambda_2=\sign(\Xzero)$. At the beginning and end of the flow we have $\left. (s_0,\,s_1,\,s_2)\right|_{t=t_{\min}} = (0,\,\NU_1,\,\NU_2) \times (1,\,\OMEGA_1,\,\OMEGA_2)$ and $\left. (s_0,\,s_1,\,s_2) \right|_{t=t_{\max}} = (0,\,\NU_1,\,\NU_2) \times (1,\,\OMEGA_1,\,\OMEGA_2 \pm 2)$.
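As an aside (not part of the proof), the effect of the shift $\OMEGA_2\mapsto\OMEGA_2\pm2$ on $s$ can be checked numerically: only the first entry of the cross product changes, by $\pm2\NU_1 = \pm\bigl((\NU_1-\NU_2)+(\NU_1+\NU_2)\bigr)$, which in the normalisation of \eqref{eq:integers nu omega} is $\pm(\TurnOne+\TurnTwo)$. The sample values below are illustrative placeholders.

```python
def cross(a, b):
    # cross product of two 3-vectors
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def s_vector(nu1, nu2, om1, om2):
    # s = (0, nu_1, nu_2) x (1, Omega_1, Omega_2) as in the proof
    return cross((0.0, nu1, nu2), (1.0, om1, om2))

# illustrative sample values with 0 < nu_2 < nu_1
nu1, nu2, om1, om2 = 5.0, 3.0, 0.7, 1.9
s_min = s_vector(nu1, nu2, om1, om2)
s_max = s_vector(nu1, nu2, om1, om2 + 2.0)  # Omega_2 shifted by +2

# only the first entry changes, and it changes by 2*nu_1
assert s_max[1:] == s_min[1:]
assert abs((s_max[0] - s_min[0]) - 2.0*nu1) < 1e-12
```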
Due to $0<\NU_2<\NU_1$ we have \begin{equation}\label{eq:integers nu omega} (\TurnZero,\,\TurnOne,\,\TurnTwo) = |\,x_1 p_{20} - x_2 p_{10}\,|\,(|\NU_1\OMEGA_2-\NU_2\OMEGA_1|,\, \NU_1-\NU_2,\,\NU_1+\NU_2)\,. \end{equation} Therefore the final values are $(|\TurnZero \pm(\TurnOne+\TurnTwo)|,\,\TurnOne,\,\TurnTwo)$ in terms of the initial values. The inequality $\TurnZero<\TurnTwo$ excludes the plus sign, and concludes the proof. \end{proof} \subsection{Formulae} In a few places we will need formulae for equivariant \cmc tori and their profile curves in terms of the extended frame \eqref{eq:frame-parts}. Identifying the unit quaternions with $\bbi \coloneq \exp((\pi/2) e_0)$, $\bbj \coloneq \exp((\pi/2) e_1)$, $\bbk \coloneq \exp((\pi/2)e_2)$, and identifying $\bbi = \sqrt{-1}$, a computation gives \begin{gather} \label{eq:twizzled-profile} f = \alpha_1\alpha_2^{-1}\beta_1\beta_2^{-1} (\gamma_1\gamma_2^{-1}c_1c_2+\gamma_1^{-1}\gamma_2 s_1s_2) + \alpha_1\alpha_2\beta_1\beta_2 (\gamma_1^{-1}\gamma_2 s_1 c_2 - \gamma_1\gamma_2^{-1}c_1 s_2)\bbj \spacecomma\\ \alpha = \exp(\mi x\Vconst) \spacecomma\interspace \beta = \exp( \tfrac{\mi}{2}\AngleZero) \spacecomma\interspace \gamma = \exp(\tfrac{\mi}{2}\AngleTwo) \spacecomma\\ c = \cos(\half\AngleOne) \spacecomma\interspace s = \sin(\half\AngleOne) \spacecomma\\ \alpha_k = \alpha(\lambda_k) \spacecomma\quad \beta_k = \beta(\lambda_k) \spacecomma\quad \gamma_k = \gamma(\lambda_k) \spacecomma\quad c_k = c(\lambda_k) \spacecomma\quad s_k = s(\lambda_k) \spacecomma \quad k=1,\,2 \spaceperiod \end{gather} With $\tau$~\eqref{eq:involtau}, we have $\tau^\ast\NU = \NU$, $\tau^\ast\chi_0 = -\chi_0$, $\tau^\ast\chi_1 = \chi_1$, $\tau^\ast\chi_2 = -\chi_2$. 
Applying these symmetries to the formula~\eqref{eq:twizzled-profile} for the immersion at $y=0$ shows that the profile curve of an equivariant \cmc surface of revolution in $\mathbb{S}^3$ is $f_0 =\exp(\mi\AngleZero)(\cos\AngleTwo + \mi\cos\AngleOne\sin\AngleTwo)+\mi\sin\AngleOne\sin\AngleTwo\bbj$. More explicitly, \begin{gather} \label{eq:rev-profile-curve2} f_0 =\exp(\mi\AngleZero)(g_1+ \mi g_2) + g_{0}\bbk \spacecomma \\ g_0 \coloneq\half\NU^{-1}\Metric\sin(2\theta_1) \spacecomma\interspace g_1 \coloneq \cc^{-1}(\Metric\cos(2\theta_1) - \Metric^{-1}\jq) \spacecomma\interspace g_2\coloneq \half \cc^{-1}\NU^{-1}\Metric'\sin(2\theta_1)\spacecomma \\ \cc^2 \coloneq \Metric^{-1}\Metric' -4\Vconst^2 = \Metric^2-2\Bpoint\cos(2\theta_1)+\Bpoint^2\Metric^{-2} \spaceperiod \end{gather} \subsection{Sphere bouquets} \label{sec:sphere-bouquet} \begin{figure}[t] \centering \includegraphics[height=5cm]{sphere.eps} \caption{ \label{fig:sphere} The profile curves of the two five-lobed sphere bouquets $(1,\,5)$ and $(2,\,5)$, stereographically projected to $\bbE^2$. The gray central circles are the axes of revolution. } \end{figure} For $m,\,n\in\bbZ$ with $1 \le m < n$ and $\gcd(m,\,n)=1$, the $(m,\,n)$ \emph{sphere bouquet} is constructed as follows. With $\alpha=\exp(2\pi\mi m/n)\in\Cstar$, let $C$ be the union of the $n$ circles through $\alpha^k$ and $\alpha^{k+1}$ perpendicular to $\bbS^1$ ($k=0,\dots,n-1$). Inverse stereographic projection of $C$ to a geodesic 2-sphere and revolution about the image of $\bbS^1$ as axis produces the $(m,\,n)$ sphere bouquet, consisting of $n$ pairwise tangent spheres forming a necklace. Since the radius of each circle is $r = \pi m/n$, the mean curvature of the $(m,\,n)$ sphere bouquet is $\cot(\pi m/n)$. Since the $(m,\,n)$ sphere bouquet is the same as the $(n-m,\,n)$ sphere bouquet, for $n>2$ the number of distinct sphere bouquets with $n$ spheres is half the number of generators of $\bbZ_n$.
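The number of generators of $\bbZ_n$ is Euler's totient $\varphi(n)$, so for $n>2$ there are $\varphi(n)/2$ distinct bouquets with $n$ spheres. A short script confirms this count under the identification $(m,\,n)\sim(n-m,\,n)$ (an illustrative aside; the helper names are ad hoc):

```python
from math import gcd, cos, sin, pi

def distinct_bouquets(n):
    """Count (m, n) sphere bouquets up to the identification (m, n) ~ (n-m, n)."""
    reps = set()
    for m in range(1, n):
        if gcd(m, n) == 1:
            reps.add(min(m, n - m))
    return len(reps)

def generators(n):
    """Number of generators of Z_n, i.e. Euler's totient of n."""
    return sum(1 for m in range(1, n) if gcd(m, n) == 1)

def mean_curvature(m, n):
    """Mean curvature cot(pi*m/n) of the (m, n) sphere bouquet."""
    return cos(pi*m/n) / sin(pi*m/n)

# for n > 2 the count is half the number of generators of Z_n
for n in range(3, 50):
    assert 2 * distinct_bouquets(n) == generators(n)
```

For example, `distinct_bouquets(5)` returns $2$, matching the two five-lobed bouquets $(1,\,5)$ and $(2,\,5)$ of Figure~\ref{fig:sphere}.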
The $(1,\,2)$ sphere bouquet is a special case because its two spheres coincide. In the rotational case we write $(\TurnZero,\,0,\,\TurnTwo)=(\TurnZero,\,\TurnTwo)$ for brevity. \begin{lemma} \label{thm:sphere-bouquet} The $(\TurnZero,\,\TurnTwo)$ family of rotational tori converges to the $(\TurnZero,\,\TurnTwo)$ sphere bouquet as $\Bpoint\to 0$. \end{lemma} \begin{proof} For rotational tori we have $(\lambda_2,\,\NU_2,\,\OMEGA_2) = (\lambda_1^{-1},\,-\NU_1,\,-\OMEGA_1)$. By Proposition~\ref{prop:closing-conditions} there exists $(s_0,\,s_1,\,s_2)\in\bbZ^3$ that is perpendicular to $(0,\,\NU_1,\,\NU_2)$ and $(1,\,\OMEGA_1,\,\OMEGA_2)$. Then $s_2=-s_1$ and $s_0+2s_1\OMEGA_1=0$. Hence $\TurnOne=0$ and $\TurnZero/\TurnTwo = \half\abs{s_0/s_1} = \abs{\OMEGA_1}$. Define $\theta_1:\bbR \to \bbR$ by $2\mi\,\theta_1\coloneq \ln\lambda_1$. The limiting profile curve as $\jq\to 0$ can be computed from the profile curve for tori of revolution $f_0$ in~\eqref{eq:rev-profile-curve2}. Let $\jq(t)$, $\theta_1(t)$ vary with the flow parameter $t\in[t_{\min},\,t_{\max}]$, and assume without loss of generality that $\lim_{t\to t_{\min}}\jq(t)=0$. From $\lim_{\jq\to 0}\OMEGA = 1- 2\theta/\pi$ we conclude that $\theta_0 = \lim_{t\to t_{\min}}\theta_1(t) =\tfrac{\pi}{2}(1 - \TurnZero/\TurnTwo)$. With $\lim_{\jq\to 0}\Metric = \sech x$ and $\lim_{\jq\to 0}{2\Vconst}=1$ we have \begin{equation*} \lim_{\Bpoint\to 0}g_0 = \sin(2\theta_0) \sech x \spacecomma\interspace \lim_{\Bpoint\to 0}g_1 = \cos(2\theta_0) \spacecomma\interspace \lim_{\Bpoint\to 0}g_2 = -\sin(2\theta_0) \tanh x \spaceperiod \end{equation*} Since the integrand in $\AngleZero$ goes to $0$ as $\jq\to 0$, then $\lim_{\jq\to 0}\AngleZero = 0$. The limiting profile curve as $\jq\to 0$ is thus $\gamma_0(x) \coloneq \lim_{\jq\to 0}f_0 = (\cos(2\theta_0)-\mi\sin(2\theta_0)\tanh x)+(\sin(2\theta_0)\sech x)\bbj$. 
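That $\gamma_0$ traces a circle can also be confirmed numerically: written in real coordinates, its first component is constant while the remaining two have constant norm, so the curve lies in the intersection of the unit sphere with a plane (an illustrative check; the sample parameters are placeholders):

```python
from math import cos, sin, tanh, cosh, pi

def gamma0(x, a):
    """Limiting profile curve with a = 2*theta_0, written as the point
    (Re, Im, j-component) of the unit 2-sphere."""
    sech = 1.0 / cosh(x)
    return (cos(a), -sin(a) * tanh(x), sin(a) * sech)

a = 2 * (pi / 2) * (1 - 1.0 / 3.0)   # 2*theta_0 for the sample (l_0, l_2) = (1, 3)
for x in [-3.0, -1.0, 0.0, 0.5, 2.0]:
    u, v, w = gamma0(x, a)
    # the curve lies on the unit sphere ...
    assert abs(u*u + v*v + w*w - 1.0) < 1e-12
    # ... with constant first coordinate, hence it traces a circle
    assert abs((v*v + w*w) - sin(a)**2) < 1e-12
```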
Since $\cos(2\theta_0)-\mi\sin(2\theta_0)\tanh x$ traces out a straight line segment in $\bbC$, $\gamma_0(x)$ traces out a circle. The discrete rotational symmetry of the surface implies that the limiting set as $\jq\to 0$ is a sphere bouquet. The limiting points are $\gamma_0(\pm\infty)=(e^{\mp 2\mi\theta_0},\,0)$ on $\bbS^1$. The angle between the radii is $4\theta_0 = 2\pi(1-\TurnZero/\TurnTwo)$. Hence the limiting curve is the $(\TurnTwo-\TurnZero,\,\TurnTwo)$ sphere bouquet, which is the same as the $(\TurnZero,\,\TurnTwo)$ sphere bouquet. Note that in the case $(\TurnZero,\,\TurnTwo)=(1,\,2)$ the circle is a geodesic. \end{proof} We bring together Proposition~\ref{prop:moduli-space}, Proposition~\ref{prop:moduli-tori-of-revolution} and Proposition~\ref{prop:involution} in the following theorem. \begin{theorem} \label{thm:moduli-space} Spectral genus 1 \cmc tori lie in 1-parameter families with monotonic mean curvature. The family starting at the $(\TurnZero,\,\TurnOne,\,\TurnTwo)$ flat \cmc torus ends at the $(\TurnOne+\TurnTwo-\TurnZero,\,\TurnOne,\,\TurnTwo)$ flat \cmc torus. \end{theorem} \begin{proof} By Propositions~\ref{prop:moduli-space} and \ref{prop:moduli-tori-of-revolution} the mean curvature is monotonic. By Proposition~\ref{prop:moduli-space}, every flow starts and ends at a flat \cmc torus with a double point on $\bbS^1$. The integers associated to these two endpoint flat \cmc tori are related by Proposition~\ref{prop:involution}. As shown in Lemma~\ref{thm:sphere-bouquet}, the two flows ending at the $(\TurnZero,\,0,\,\TurnTwo)$ and $(\TurnTwo-\TurnZero,\,0,\,\TurnTwo)$ flat \cmc tori at $\Bpoint=1$ start at the same sphere bouquet at $\Bpoint=0$. Because only tori of revolution flow to sphere bouquets, we conclude that every sphere bouquet is the limit of these two flows and no others. While the flow is singular at $\Bpoint=0$, Proposition~\ref{prop:involution} nevertheless holds for the family constructed by gluing these two families together along the sphere bouquet.
\end{proof} \section{Geometry} \label{sec:torus} \subsection{The torus knot and symmetry group} \label{sec:symmetry} Every orbit of the equivariant action~\eqref{eq:equi_action} with the exception of the two axes is a $(p,\,q)$-torus knot in the corresponding orbit of $\hat{\GK}$. Due to \eqref{eq:pq_nu} and \eqref{eq:integers nu omega} this implies that the orbit of a point on a $(\TurnZero,\,\TurnOne,\,\TurnTwo)$ torus is generically a torus knot in the corresponding orbit of $\hat{\GK}$. If a $(\TurnZero,\,\TurnOne,\,\TurnTwo)$ torus does not meet the axes, then the linking numbers of the $\GK$ orbit of a point on the torus and the two axes are $\left( \tfrac{\TurnOne}{\gcd(\TurnOne,\TurnTwo)},\, \tfrac{\TurnTwo}{\gcd(\TurnOne,\TurnTwo)}\right)$. \begin{proposition} \label{thm:symmetry} With $n\coloneq\gcd(\TurnOne,\,\TurnTwo)$, the symmetry group of an $(\TurnZero,\,\TurnOne,\,\TurnTwo)$ cohomogeneity one \cmc torus is a semidirect product of $\bbS^1\times\bbZ_n$ and $\bbZ_2$ if it is twizzled, and a semidirect product of $\bbS^1\times\bbZ_n$ and $\bbZ_2\times\bbZ_2$ if it is a torus of revolution. \end{proposition} \begin{proof} Let $T\coloneq (t_0,\,t_1,\,t_2)\in\bbZ^3$ with $\gcd(t_0,\,t_1,\,t_2)=1$ and $t_0\ne 0$, and let $n\coloneq \gcd(t_1,\,t_2)$. We first show that $\calZ\coloneq \{ n_0\in\bbZ \suchthat T\cdot(n_0,\,n_1,\,n_2) = 0 \text{ for some $n_1,\,n_2\in\bbZ$} \} = n\bbZ$. If $T \cdot (n_0,\,n_1,\,n_2)=0$, then since $n\divides t_1$ and $n\divides t_2$, we have $n\divides(t_0n_0)$. Since $\gcd(t_0,\,t_1,\,t_2)=1$, we have $\gcd(t_0,\,n)=1$. Hence $n\divides n_0$. Thus $n$ divides every element of $\calZ$. Since $\gcd(t_1/n,\,t_2/n)=1$, by the Euclidean algorithm, there exist $x_1,\,x_2\in\bbZ$ such that $n + t_1x_1+t_2x_2 = 0$. Hence with $N=(n,\,t_0x_1,\,t_0x_2)$ we have $T\cdot N=t_0(n + t_1x_1+t_2x_2)=0$. This shows that $n\in\calZ$. Hence $\calZ = n\bbZ$.
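As an aside, the identity $\calZ = n\bbZ$ can be checked numerically for small triples; the helper below (an illustrative ad hoc function) encodes the solvability criterion that $t_1n_1+t_2n_2 = -t_0n_0$ has integer solutions if and only if $\gcd(t_1,\,t_2)$ divides $t_0n_0$, which is B\'ezout's identity.

```python
from math import gcd

def calZ(T, bound=60):
    """All n0 with |n0| <= bound such that T.(n0, n1, n2) = 0 has an
    integer solution (n1, n2); by Bezout this holds iff
    gcd(t1, t2) divides t0*n0."""
    t0, t1, t2 = T
    g = gcd(t1, t2)
    return {n0 for n0 in range(-bound, bound + 1) if (t0 * n0) % g == 0}

# For gcd(t0, t1, t2) = 1 and n = gcd(t1, t2), the proof shows calZ = n*Z
for T in [(5, 6, 9), (7, 10, 15), (3, 4, 8)]:
    n = gcd(T[1], T[2])
    assert gcd(T[0], n) == 1   # forced by gcd(t0, t1, t2) = 1
    assert calZ(T) == {n0 for n0 in range(-60, 61) if n0 % n == 0}
```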
There exists a basis $\gamma_1,\,\gamma_2\in\Cstar$ for the torus lattice so that $\DOTR{\gamma_1}{\sql{0}}=0$ and $p_{20} \coloneq \DOTR{\gamma_2}{\sql{0}}=\gcd(\TurnOne,\,\TurnTwo)$, where $\sql{0}=\mi$. Then $\gamma_1\in\bbR$ and $\gamma_2 = \pi x_2 + 2\mi p_{20}\JK'$ for some $x_2\in\bbR$. By~\eqref{eq:cmc-equi-I-II}, the first fundamental form is preserved if and only if $\Metric$ is preserved. The symmetry group thus contains the three conformal automorphisms $z\mapsto z + t\gamma_1,\ t\in\bbR$, $z\mapsto z + \gamma_2/n_2$ and $z\mapsto -z$. For tori of revolution, the \sym points satisfy $\lambda_2 = \lambda_1^{-1}$ by Proposition~\ref{prop:torus-of-revolution}. Hence the coefficient $\half Q$ of the Hopf differential in~\eqref{eq:cmc-equi-I-II} is real. Since the mean curvature $H$ is real, the second fundamental form is preserved under complex conjugation. Hence in this case there is a further anti-conformal automorphism $z\mapsto \ol{z}$. \end{proof} \subsection{Lobe counts} The two lobe counts are the orders of the two orientation-preserving cyclic subgroups of the symmetry group which fix one or the other axis pointwise. \begin{proposition} \label{thm:profile-curve-symmetry} The lobe counts of a twizzled $(\TurnZero,\,\TurnOne,\,\TurnTwo)$ torus are $\TurnOne$ and $\TurnTwo$, and for a rotational torus it is $\TurnTwo$. \end{proposition} \begin{proof} Let $\hat{\GK}$ be the two-dimensional torus \eqref{eq:commut}. Let $G$ be the subgroup of the orientation-preserving isometry group of the torus which fixes one axis of the equivariant action pointwise. It is a closed subgroup of the group of rotations about this axis, which is isomorphic to $\mathbb{S}^1$. Closed subgroups of $\mathbb{S}^1$ are either finite or all of $\mathbb{S}^1$, and since we are not considering surfaces of revolution, the group $G$ is finite, and thus cyclic. Let $H \subset G$ be the subgroup which fixes every orbit of the equivariant action setwise. We compute $\ord(G/H)$ and $\ord(H)$.
From the proof of Proposition~\ref{thm:symmetry} we conclude that $n_2 = \ord(G/H) = \gcd(\ell_1,\,\ell_2)$, since $H=G \cap \GK$. Now $G\subset \{ (1,\,e^{\mi s}) \mid s \in \bbR \}$ and $\GK=\{(e^{\mi\ell_1 t},\,e^{\mi\ell_2 t}) \mid t \in \bbR\}$. The intersection $\{(1,\,e^{\mi s}) \mid s \in \bbR \}\cap \GK$ is cyclic of order $\ell_1/n$, generated by $(1,\,e^{2\pi\mi n/\ell_1}) \in G$, and its order coincides with the order of $H = \GK \cap G$. Hence $\ord(G) = \ord(G/H)\,\ord(H) = \ell_1$. Similarly for the other axis. This proves the claim for the twizzled case. For the rotational case a similar argument holds, and concludes the proof. \end{proof} In view of Proposition~\ref{thm:profile-curve-symmetry} we call $\TurnOne$ and $\TurnTwo$ respectively the \emph{minor and major lobe counts}. The tori shown in Figure~\ref{fig:twizzled2} have major and minor lobe counts $n$ and $1$ respectively. By Theorem~\ref{thm:moduli-space} and Theorem~\ref{thm:flat-torus-integers} we have \begin{proposition}\label{th:minlobes} The major lobe count of a non-rotational spectral genus 1 \cmc torus is at least 3, and that of a spectral genus 1 \cmc torus of revolution is at least 2. \end{proposition} \subsection{Profile curve sets} \begin{figure}[t] \centering {\includegraphics[width=5cm]{twizzled-profile0.eps}}\hspace{\PSPACE} {\includegraphics[width=5cm]{twizzled-profile1.eps}}\hspace{\PSPACE} {\includegraphics[width=5cm]{twizzled-profile2.eps}} \caption{ \label{fig:profile-twizzled} Profile curves of the $(\TurnZero,\,\TurnOne,\,\TurnTwo) = (2,\,1,\,5)$ twizzled torus family as the torus flows through its axis. The turning number of the inner profile curve jumps from $\TurnZero=2$ to $\TurnOne+\TurnTwo-\TurnZero=4$. Figure~\ref{fig:twizzled2} shows a $5$-lobed torus in the family of which these are cross-sections.
} \end{figure} The discrepancy between the two endpoints of the $g=1$ flow in Proposition~\ref{prop:involution} reflects the fact that at two points during the flow, the corresponding torus intersects one and then the other of its axes. At each of these two tori, one of the torus knots degenerates to a circle. The combinatorics of the profile curve sets of equivariant tori are almost invariant during the flow: they are invariant on two disjoint intervals. When the torus intersects its axis, the connectivity and turning numbers of the profile curve set jump as described in Lemma~\ref{thm:twizzled-profile-curve}. This phenomenon is depicted in Figure~\ref{fig:profile-twizzled}. \begin{lemma} \label{thm:twizzled-profile-curve} If a profile curve set is immersed then $(\Xtwo\Bpoint - \Xone)(\Xtwo - \Bpoint\Xone) \neq 0$. The total turning number of each profile curve set of a non-flat twizzled $(\TurnZero,\,\TurnOne,\,\TurnTwo)$ \cmc torus is $\TurnZero$ or $\TurnOne+\TurnTwo-\TurnZero$. \end{lemma} \begin{proof} Claim 1: If a profile curve set is not immersed, then $(\Xtwo\Bpoint - \Xone)(\Xtwo - \Bpoint\Xone) = 0$. To prove the claim, let $f$ be the immersion of the torus as in \eqref{eq:twizzled-profile}. Writing $f=f_1+f_2\bbj$, the two profile curve sets are defined implicitly by $\Real f_1=0$ and $\Real f_2=0$ respectively. The profile curves are singular wherever $\Real f_1$, ${(\Real f_1)}_x$ and ${(\Real f_1)}_y$ all vanish or $\Real f_2$, ${(\Real f_2)}_x$ and ${(\Real f_2)}_y$ all vanish. The function $f_k$ decouples into $f_k = \phi_k(x)\psi_k(y)$, where $\gamma=\gamma_1\gamma_2^{-1}$ and $\phi_1 = \alpha_1\alpha_2^{-1}$ and $\psi_1 = \beta_1\beta_2^{-1} (\gamma c_1 c_2+\gamma^{-1}s_1s_2)$, and $\phi_2 = \alpha_1\alpha_2$ and $\psi_2 = \beta_1 \beta_2 (\gamma^{-1} s_1 c_2 - \gamma c_1s_2)$.
Then \begin{equation*} 2\Real f_k = \phi_k\psi_k + \ol\phi_k\ol\psi_k \spacecomma\interspace 2{(\Real f_k)}_x = {(\phi_k)}_x\psi_k + \ol{(\phi_k)}_x \ol\psi_k \spacecomma\interspace k=1,\,2 \spaceperiod \end{equation*} For $k=1,\,2$, since ${(\phi_k)}_x$ never vanishes, it follows that $\Real f_k$ and ${(\Real f_k)}_x$ vanish if and only if $\psi_k$ vanishes. The additional condition that ${(\Real f_k)}_y$ vanishes is ignored; it specifies for which values of $x$, if any, the curve fails to be immersed. Since $\beta_1$ and $\beta_2$ are unimodular and $c_j$ and $s_j$ are real, this occurs if $\gamma^4=1$ and either $c_1^2c_2^2 - s_1^2 s_2^2=0$ or $s_1^2c_2^2 - c_1^2 s_2^2=0$. We have \begin{align*} 2(c_1^2c_2^2 - s_1^2 s_2^2) &= \cos(\AngleOne(\lambda_2)) + \cos(\AngleOne(\lambda_1)) = \Metric^{-1}\Metric'(\half (\Vconst^{-1}(\lambda_2)+\Vconst^{-1}(\lambda_1))) \spacecomma \\ 2(s_1^2c_2^2 - c_1^2 s_2^2) &= \cos(\AngleOne(\lambda_2)) - \cos(\AngleOne(\lambda_1)) = \Metric^{-1}\Metric'(\half( \Vconst^{-1}(\lambda_2)-\Vconst^{-1}(\lambda_1))) \spaceperiod \end{align*} The zero set of each of these expressions is the zero set of $\Metric'$. At the zeros of $\Metric'$, $\Metric=1$ or $\Metric=\Bpoint$. A computation shows that the zero set of $\gamma^4-1$ is the zero set of $\Xtwo\Metric^2-\Xone\Bpoint$. Hence if the curve is not immersed, then either $\Metric=\jq$ and $\Xtwo\Bpoint=\Xone$, or else $\Metric=1$ and $\Xtwo=\Xone\Bpoint$. This proves claim 1.
Let \begin{gather*} \calI_1=\{\Xone-\Xzero\Xtwo<0\} \spacecomma\quad \widehat\calI_1=\{\Xone-\Xzero\Xtwo>0\} \spacecomma\quad \calI_2=\{\Xtwo-\Xzero\Xone>0\} \spacecomma\quad \widehat\calI_2=\{\Xtwo-\Xzero\Xone<0\} \spacecomma\\ (\tilde\TurnZero,\,\tilde\TurnOne,\,\tilde\TurnTwo) = (\TurnOne+\TurnTwo-\TurnZero,\,\TurnOne,\,\TurnTwo) \spacecomma\interspace c_{k} = \gcd(\TurnZero,\,\Turn_k) \spacecomma \interspace \hat c_{k} = \gcd(\tilde\TurnZero,\,\Turn_k) \spacecomma\interspace k=1,\,2 \spaceperiod \end{gather*} Claim 2: on $\calI_k$ (respectively $\widehat\calI_k$), $\calC_k$ has $c_k$ (respectively $\hat c_k$) components. Each component of $\calC_k$ has turning number $\TurnZero/c_k$ (respectively $\tilde\TurnZero/\hat c_k$). The total turning number of $\calC_k$ is thus $\TurnZero$ on $\calI_k$ and $\tilde\TurnZero$ on $\widehat\calI_k$. By Theorem~\ref{thm:flat-torus-integers} (2), at the flat \cmc torus at the beginning (respectively end) of the flow, the profile curve sets are $\TurnZero$ (respectively $\TurnOne+\TurnTwo-\TurnZero$) wrapped circles. Since turning numbers of the immersed profile curves are homotopy invariants, the total turning number of $\calC_1$ is preserved in $\calI_1$ and $\widehat\calI_1$. Similarly the total turning number of $\calC_2$ is preserved in $\calI_2$ and $\widehat\calI_2$. \end{proof} Lemma~\ref{thm:twizzled-profile-curve} simplifies in the case of tori of revolution. By Theorem~\ref{thm:flat-torus-integers}, the profile curve at the flat \cmc torus is an $\TurnZero$-wrapped circle, with turning number $\TurnZero$. Since the flow $\Xzero\in(0,\,1]$ induces a regular homotopy of the profile curve, by the Whitney-Graustein theorem every profile curve in the flow has turning number $\TurnZero$. Figure~\ref{fig:profile-3lobe} illustrates the profile curves of the $3$-lobed tori of revolution.
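The component and wrapping counts of Theorem~\ref{thm:flat-torus-integers}~(2) can be illustrated numerically for the lattice $\Gamma_{[\TurnZero,\,\TurnOne,\,\TurnTwo]}$ of \eqref{eq:sublattices}, identifying $\Lambda^\ast$ with $\bbZ^2$ via the basis $\gamma_1^\ast,\,\gamma_2^\ast$ (an illustrative sketch; the function names are ad hoc):

```python
from math import gcd

def lattice_index(l0, l1, l2):
    """Index of Gamma = {(n1, n2) : n1*l1 + n2*l2 = 0 mod l0} in Z^2."""
    count = sum(1 for n1 in range(l0) for n2 in range(l0)
                if (n1*l1 + n2*l2) % l0 == 0)
    return l0*l0 // count

def wrapping(l0, lk):
    # smallest n > 0 with n*lk = 0 mod l0, i.e. the wrapping number l0/gcd(l0, lk)
    return l0 // gcd(l0, lk)

# sample triples with gcd(l0, l1, l2) = 1 and 0 <= l1 < l0 < l2
for (l0, l1, l2) in [(2, 1, 3), (3, 1, 6), (5, 2, 7)]:
    assert lattice_index(l0, l1, l2) == l0
    for lk in (l1, l2):
        # (number of components) x (wrapping number) = |Lambda*/Gamma_s| = l0
        assert gcd(l0, lk)*wrapping(l0, lk) == l0
```

For the $(3,\,1,\,6)$ torus of Figure~\ref{fig:minimal}, for instance, this gives a profile curve set of $\gcd(3,\,6)=3$ circles, each singly wrapped.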
\begin{figure}[t] \centering \frame{\includegraphics[width=3.35cm]{torusprofile123a.eps}} \frame{\includegraphics[width=3.35cm]{torusprofile123b.eps}} \frame{\includegraphics[width=3.7cm]{torusprofile123c.eps}} \frame{\includegraphics[width=3.7cm]{torusprofile123d.eps}} \frame{\includegraphics[width=3.45cm]{torusprofile123e.eps}} \frame{\includegraphics[width=3.78cm]{torusprofile123f.eps}} \frame{\includegraphics[width=3.45cm]{torusprofile123g.eps}} \frame{\includegraphics[width=3.45cm]{torusprofile123h.eps}} \caption{ \label{fig:profile-3lobe} The flow through the $3$-lobed tori of revolution. Starting at a singly-wrapped flat \cmc torus, the inner profile curve of the embedded $(1,\,3)$ tori flows to the $(1,\,3)$ sphere bouquet in the third frame. It continues through the non-embedded $(2,\,3)$ tori, crosses itself in the fifth frame, and ends at a doubly-wrapped flat \cmc torus. As the inner profile curve passes through the origin, and the outer curve through infinity, their turning numbers remain fixed but their winding numbers jump. The curves are stereographically projected to $\bbE^2$; the central gray circle is the axis of revolution. } \end{figure} \subsection{Embeddedness} We show that twizzled \cmc tori are never embedded, and classify embedded \cmc tori of revolution. As a corollary of Lemma~\ref{thm:twizzled-profile-curve} we have \begin{corollary} \label{cor:twizzled-nonembedded} A non-rotational spectral genus one \cmc torus in $\bbS^3$ is never embedded. \end{corollary} \begin{proof} By Lemma~\ref{thm:twizzled-profile-curve}, the profile curve sets of a $(\TurnZero,\,\TurnOne,\,\TurnTwo)$ twizzled \cmc torus have total turning number $\TurnZero$ or $\TurnOne+\TurnTwo-\TurnZero$. By Theorem~\ref{thm:flat-torus-integers}, each of these turning numbers is strictly bigger than $1$. Hence the profile curve sets are not embedded. To show the surface is not embedded assume first that $(\Xtwo\Bpoint - \Xone)(\Xtwo - \Bpoint\Xone) \neq 0$. 
Then the profile curve is immersed by Lemma~\ref{thm:twizzled-profile-curve}. If $f$ were embedded, then by the inverse function theorem the profile curve would also be embedded, which is impossible since its turning number is at least two by Lemma~\ref{thm:twizzled-profile-curve}. This gives the contradiction. Since $(\Xtwo\Bpoint - \Xone)(\Xtwo - \Bpoint\Xone)$ vanishes only at isolated points during the flow and embeddedness is an open condition, the surface is also not embedded at these zeroes. \end{proof} \begin{theorem} \label{thm:embedded} A $(\TurnZero,\,\TurnTwo)$ torus of revolution is embedded if and only if $\TurnZero=1$. \end{theorem} \begin{proof} A surface of revolution in $\bbS^3$ is embedded if and only if its profile curve is embedded and does not meet the revolution axis. To show the embeddedness, we show that the curvature of the orthographic projection of the profile curve $f_0 =\exp(\mi\AngleZero)(g_1 + \mi g_2) + g_{0}\bbk$ in \eqref{eq:rev-profile-curve2} is strictly positive. Write $\exp(\mi\chi_0)(g_1+\mi g_2) = r\exp(\mi\psi)$ and $s=g_0$ so the profile curve is $f_0 = re^{\mi\psi} + s\bbk$. To compute $\psi'$, note that $\psi = \AngleZero + \arg(g_1+\mi g_2)$. The expression for $\AngleZero'$ in~\eqref{eq:angle-deriv} yields after a calculation \begin{equation} \label{eq:theta-prime} \psi' = \AngleZero' + \frac{g_1g_2'-g_1'g_2}{g_1^2+g_2^2} = \frac{2\Vconst \sin(2\theta_1) (\Metric^2\cos(2\theta_1) - \jq)} {\Metric^2\sin^2(2\theta_1) -4\Vconst^2} \spacecomma \end{equation} where $2\mi\theta_1 = \ln\lambda_1$. The curvature of the plane curve $re^{\mi\psi}$ is $\kappa =8 \cc^{-3}\Vconst^2 \jq\Metric^{-1} $. Note that the plane curve $re^{\mi\psi}$ is an orthographic projection of the hemisphere to $\bbR^2$, not stereographic; the curvature of the stereographic projection may change sign, as seen in Figure~\ref{fig:profile-3lobe}.
We next show that the profile curve does not meet the revolution axis for $\Bpoint\in(0,\,1]$. Since the range of $\Metric$ is $[\Bpoint,\,1]$, the range of $s$ is $[\half\Vconst^{-1}\Bpoint\sin(2\theta_1),\, \half\Vconst^{-1}\sin(2\theta_1)]$. Hence $s>0$, because $\Vconst>0$, $\Bpoint>0$ and $\sin(2\theta_1)>0$. Hence $\abs{r}<1$, so the profile curve does not meet the axis of revolution. If the profile curve is embedded, then its turning number is $1$. But by the discussion after Lemma~\ref{thm:twizzled-profile-curve}, its turning number is $\TurnZero$. Hence $\TurnZero=1$. Conversely, assume $\TurnZero=1$, so its turning number is $1$. The curvature $\kappa$ of the orthographic projection $re^{\mi\psi}$ of the profile curve to $\bbC$ computed above is strictly positive, so the profile curve is convex and hence embedded (see e.g.\ \cite[5-7, Proposition 1]{DoC:dg1}). \end{proof} \begin{theorem} An equivariant {\sc{cmc}} torus in $\bbS^3$ is Alexandrov embedded if and only if it is a surface of revolution and singly wrapped with respect to the rotational period. \end{theorem} \begin{proof} A rotational torus is Alexandrov embedded if and only if there exists an immersion $\bbS^1 \times [0,\,1] \to \bbS^2_+$ into a hemisphere $\bbS^2_+$ such that $\bbS^1 \times \{0\}$ is mapped to the equator of $\bbS^2_+$ and $\bbS^1 \times \{1 \}$ is mapped onto a profile curve of the torus. The resulting 3-manifold, obtained by rotating the strip, is then the Alexandrov embedding. Every flat \cmc torus is a covering of an embedded flat \cmc torus. Hence the 3-manifold is always a solid torus, and thus has fundamental group $\bbZ$. The compact coverings correspond to proper subgroups of $\bbZ$. Hence flat Alexandrov embedded \cmc tori have to be singly wrapped. This condition is stable under continuous deformations which stay away from bouquets of spheres.
\end{proof} \begin{figure}[t] \centering \includegraphics[width=4cm]{torus102.eps} \includegraphics[width=4cm]{torus103.eps} \includegraphics[width=4cm]{torus104.eps} \includegraphics[width=4cm]{torus105.eps} \caption{ \label{fig:revtorus} Embedded $(1,\,n)$ \cmc tori of revolution in $\bbS^3$, with $n=2,\,3,\,4,\,5$. } \end{figure} \subsection{Mean curvature and minimal tori} There are infinitely many minimal tori in $\bbS^3$ \cite{Car:tor}. There are in fact already infinitely many minimal equivariant ones. Lemma~\ref{thm:mean-curvature} shows the existence of infinitely many minimal twizzled tori. For example the $(\TurnZero,\,\TurnOne,\,\TurnTwo)=(n,\,n-k,\,n+k)$ flow family with $0<k<n$ is a fixed point of the involution of Proposition~\ref{prop:involution}. The flow starts and ends at the same flat \cmc torus with opposite mean curvature, and hence it contains a minimal torus. A (non-minimal) example from the $(\TurnZero,\,\TurnOne,\,\TurnTwo)=(2,\,1,\,3)$ family is shown in Figure~\ref{fig:twizzled2}. \begin{lemma} \label{thm:mean-curvature} A spectral genus 1 flow family with endpoints $(\TurnZero,\,\TurnOne,\,\TurnTwo)$ and $(\TurnOne+\TurnTwo-\TurnZero,\,\TurnOne,\,\TurnTwo)$ contains exactly one minimal torus if $(\TurnOne^2+\TurnTwo^2)^{1/2} \ge \sqrt{2} \max\{\TurnZero,\,\TurnOne+\TurnTwo-\TurnZero\}$ and no minimal tori otherwise. \end{lemma} \begin{proof} Consider the flow from a flat \cmc torus to another flat \cmc torus through spectral genus 1 tori as described in Theorem~\ref{thm:moduli-space}. Since the mean curvature \eqref{eq:H_and_h} is monotonic, the flow contains a minimal torus if and only if the mean curvatures of the flat \cmc tori have opposite signs, or if one of these flat \cmc tori is minimal.
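As an illustrative aside (not part of the proof), this sign condition can be checked numerically against the criterion of the lemma, using the endpoint mean curvatures $H_0$ and $H_1$ of \eqref{eq:mean-curvature-interval}; the helper names below are ad hoc:

```python
from math import sqrt

def H0(l0, l1, l2):
    # endpoint mean curvature as in eq. (mean-curvature-interval)
    return (l1*l1 + l2*l2 - 2*l0*l0) / (2*sqrt((l2*l2 - l0*l0)*(l0*l0 - l1*l1)))

def H1(l0, l1, l2):
    # mean curvature at the other endpoint, with l0 replaced by l1 + l2 - l0
    h = l1 + l2 - l0
    return -(l1*l1 + l2*l2 - 2*h*h) / (2*sqrt((l2*l2 - h*h)*(h*h - l1*l1)))

def contains_minimal(l0, l1, l2):
    # the criterion of the lemma, squared to stay in integer arithmetic
    return l1*l1 + l2*l2 >= 2*max(l0, l1 + l2 - l0)**2

# Since H is monotonic along the flow, a minimal torus occurs exactly when
# H0 and H1 do not have the same strict sign.
for l0 in range(1, 20):
    for l1 in range(0, l0):
        for l2 in range(l0 + 1, 25):
            assert (H0(l0, l1, l2)*H1(l0, l1, l2) <= 0) == contains_minimal(l0, l1, l2)
```

For the $(2,\,1,\,3)$ family, for instance, `contains_minimal` returns true, consistent with the minimal torus of Figure~\ref{fig:minimal}.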
By a calculation, $\TurnOne = \TurnZero\min\{X,\,Y\}$, $\TurnTwo = \TurnZero\max\{X,\,Y\}$, where $X = \sqrt{(1+\Xone)/(1-\Xtwo)}$ and $Y = \sqrt{(1-\Xone)/(1+\Xtwo)}$, and then \begin{equation} \label{eq:mean-curvature-with-sign} H = \sign(\Xone+\Xtwo)\tfrac{\TurnOne^2+\TurnTwo^2-2\TurnZero^2} {2\sqrt{(\TurnTwo^2-\TurnZero^2)(\TurnZero^2-\TurnOne^2)}} \spaceperiod \end{equation} Since $\Xone+\Xtwo$ is negative at the beginning of the flow and positive at the end (Theorem~\ref{thm:moduli-space}), by Proposition~\ref{prop:involution} the mean curvatures $H_0$ and $H_1$ of the flat \cmc tori at the beginning and end of the flow are respectively \begin{equation} \label{eq:mean-curvature-interval} H_0 \coloneq \tfrac{\TurnOne^2+\TurnTwo^2-2\TurnZero^2} {2\sqrt{(\TurnTwo^2-\TurnZero^2)(\TurnZero^2-\TurnOne^2)}} \spacecomma\interspace H_1 \coloneq -\tfrac{\TurnOne^2+\TurnTwo^2-2\hat\TurnZero^2} {2\sqrt{(\TurnTwo^2-\hat\TurnZero^2)(\hat\TurnZero^2-\TurnOne^2)}} \end{equation} where $\hat\TurnZero\coloneq \TurnOne+\TurnTwo-\TurnZero$. The flow contains a minimal torus if and only if $\TurnOne^2+\TurnTwo^2-2\TurnZero^2$ and $\TurnOne^2+\TurnTwo^2-2(\TurnOne+\TurnTwo-\TurnZero)^2$ have the same sign, or either is $0$. Since the sum of these two integers is equal to $4(\TurnTwo-\TurnZero)(\TurnZero-\TurnOne)>0$, they are not both negative. The condition that they have the same sign, or either is $0$, is then equivalent to the asserted inequality. \end{proof} Consider the $(\TurnZero,\,\TurnTwo)$ family of tori of revolution, and let $\Ratio \coloneq \TurnZero/\TurnTwo$. The mean curvature for the flat \cmc tori is chosen to be positive for $\Ratio\in(0,\,1/\sqrt{2})$ and negative for $\Ratio\in(1/\sqrt{2},\,1)$. \begin{lemma} \label{thm:rev-mean-curvature} For spectral genus 1 \cmc tori of revolution, the mean curvature decreases monotonically from $H_0 = (1-2\Ratio^2)/(2\Ratio\sqrt{1-\Ratio^2})$ at the flat \cmc torus to $H_s = \cot\pi\Ratio$ at the sphere bouquet.
This family contains exactly one minimal torus if $\Ratio\in(\frac{1}{2},\,1/\sqrt{2}]$, and no minimal tori otherwise. \end{lemma} \begin{proof} By~\eqref{eq:mean-curvature-interval}, the mean curvature of the flat \cmc torus at the end of the flow ($\jq=\pm1$) is $H_0$ as in the assertion. By Lemma~\ref{thm:sphere-bouquet}, the $(\TurnZero,\,\TurnTwo)$ family of tori of revolution converges to the $(\TurnZero,\,\TurnTwo)$ sphere bouquet as $\jq\to 0$. The limiting sphere bouquet has mean curvature $H_s = \cot\pi\Ratio$. The mean curvature is monotonic by Theorem~\ref{thm:moduli-space} and hence has the specified range. The family contains a minimal torus if and only if the mean curvatures at the endpoints of the flow have opposite signs, or the flat \cmc torus is minimal. This occurs if and only if $\Ratio\in (\frac{1}{2},\,1/\sqrt{2}]$. \end{proof} \begin{figure}[t] \centering \includegraphics[width=5.4cm]{torus205.eps} \includegraphics[width=5.4cm]{torus305.eps} \includegraphics[width=5.4cm]{torus405.eps} \caption{ \label{fig:revtorusfive} Alexandrov embedded five-lobed $(k,\,5)$ \cmc tori of revolution in $\bbS^3$. The turning number of the inner profile curve is $k = 2,\,3,\,4$. } \end{figure} \begin{corollary} The Clifford torus is the only minimal embedded rotational torus in the 3-sphere. \end{corollary} By combining the above results we have shown that amongst the infinitely many minimal equivariant tori in the 3-sphere, only one is embedded \cite{HsiL}. \begin{theorem} The Clifford torus is the only embedded minimal equivariant torus in $\bbS^3$. \end{theorem} \section{Connectedness of the moduli space} Let $M$ denote the set of \cmc immersions from the oriented 2-torus $\bbT^2$ into the oriented 3-sphere $\bbS^3$. We define an equivalence relation by identifying two maps in $M$ if they differ by an orientation preserving diffeomorphism of $\bbT^2$ and an orientation preserving isometry of $\bbS^3$, and set $\mathcal{M} = M/\sim$.
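Before turning to the connectedness argument, the sign analyses behind Lemma~\ref{thm:mean-curvature} and Lemma~\ref{thm:rev-mean-curvature} are elementary enough to check by machine. The following Python sketch is our own illustration (the function names are ours, not from the text): it tests, over small integer triples, that a zero of the monotonic mean curvature between $H_0$ and $H_1$ occurs exactly when the squared form of the asserted inequality $\TurnOne^2+\TurnTwo^2 \ge 2\max\{\TurnZero,\,\TurnOne+\TurnTwo-\TurnZero\}^2$ holds, and samples the rotational criterion $\Ratio\in(\frac{1}{2},\,1/\sqrt{2}]$.

```python
import math

def contains_minimal(n0, n1, n2):
    # Numerators of H_0 and -H_1 from the proof of the lemma; the monotonic
    # flow crosses H = 0 iff both are >= 0.  They are never both negative,
    # since their sum is 4*(n2 - n0)*(n0 - n1) > 0.
    a = n1**2 + n2**2 - 2 * n0**2
    b = n1**2 + n2**2 - 2 * (n1 + n2 - n0)**2
    return a >= 0 and b >= 0

def criterion(n0, n1, n2):
    # Squared form of the asserted inequality, in exact integer arithmetic.
    return n1**2 + n2**2 >= 2 * max(n0, n1 + n2 - n0)**2

def rev_contains_minimal(rho):
    # Rotational case: H decreases from H0 (flat cmc torus) to Hs (sphere
    # bouquet, not itself a torus), so a minimal torus exists iff Hs < 0 <= H0.
    H0 = (1 - 2 * rho**2) / (2 * rho * math.sqrt(1 - rho**2))
    Hs = 1 / math.tan(math.pi * rho)
    return Hs < 0 <= H0

# check the equivalence over coprime triples 0 <= n1 < n0 < n2
for n2 in range(2, 30):
    for n0 in range(1, n2):
        for n1 in range(0, n0):
            if math.gcd(n0, math.gcd(n1, n2)) == 1:
                assert contains_minimal(n0, n1, n2) == criterion(n0, n1, n2)

# sample the rotational criterion on a grid of ratios in (0, 1)
for k in range(1, 1000):
    rho = k / 1000
    assert rev_contains_minimal(rho) == (0.5 < rho <= 1 / math.sqrt(2))
```

The integer comparison avoids floating-point ties at boundary triples such as $(\TurnZero,\,\TurnOne,\,\TurnTwo)=(5,\,1,\,7)$, where $\TurnOne^2+\TurnTwo^2=2\TurnZero^2$ exactly.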
We denote the spectral genus zero maps by $\mathcal{M}_0 \subset \mathcal{M}$, that is \[ \mathcal{M}_0 = \left\{ \mbox{ equivalence classes of flat \cmc tori in } \bbS^3\,\, \right\}\,. \] Thus $\mathcal{M}_0$ consists of infinitely many $\bbR$-families of flat \cmc tori. Even though, by Proposition~\ref{prop:flat-torus}~(iii), each of these families is a finite cover of the family of the underlying embedded rectangular torus, we need the full diversity of $\mathcal{M}_0$ to bifurcate into all possible spectral genus one \cmc tori. The spectral genus one \cmc tori will be denoted by $\mathcal{M}_1 \subset \mathcal{M}$, that is \begin{equation*} \mathcal{M}_1 = \left\{ \mbox{ equivalence classes of spectral genus one \cmc tori in } \bbS^3\,\, \right\} \,. \end{equation*} Since deformation families of rotational tori in $\mathcal{M}_1$ flow into bouquets of spheres, we take the closure of $\mathcal{M}_1$ by supplementing it with the limiting bouquets of spheres, and set \begin{equation*} \overline{\mathcal{M}}_1 = \mathcal{M}_1 \cup \left\{ \mbox{ equivalence classes of sphere bouquets in } \bbS^3\,\, \right\}\,. \end{equation*} The aim of this section is to show that this \emph{completed} moduli space of equivariant \cmc tori $\mathcal{M}_0 \cup \overline{\mathcal{M}}_1$ is connected (Theorem~\ref{th:connected}). We have already seen that the moduli space of equivariant \cmc tori in the 3-sphere is a graph, which consists of: \begin{enumerate} \item Edges of families of spectral genus zero tori; \item Edges of families of spectral genus one tori; \item `Bifurcation' vertices in $\mathcal{M}_0$ that connect with other vertices in $\mathcal{M}_0$ via spectral genus one edges by Theorem~\ref{thm:moduli-space}. \end{enumerate} By Proposition~\ref{prop:flat-torus}~(iii) any element in $\mathcal{M}_0$ is isogenic to the unique (up to isomorphism) embedding of a rectangular lattice with the same mean curvature.
Hence each edge in $\mathcal{M}_0$ contains a unique minimal torus, obtained via an isogeny from the Clifford torus. If we identify two isogenies which differ only by an isomorphism of the domain, then we have a one-to-one correspondence between isomorphism classes of isogenies and co-finite sublattices of $\Lambda^\ast$. Hence we can identify the connected components of $\mathcal{M}_0$ with co-finite sublattices of $\Lambda^\ast$. We say that two such sublattices are connected if the corresponding genus zero edges are connected in $\mathcal{M}_0\cup\overline{\mathcal{M}}_1$. We associated to the bifurcation vertices the triples $(\TurnZero,\,\TurnOne,\,\TurnTwo) \in \bbZ^3$ with $\gcd(\TurnZero,\,\TurnOne,\,\TurnTwo)=1$ and $0\leq\TurnOne < \TurnZero < \TurnTwo$. In the proof of Theorem~\ref{thm:flat-torus-integers}~(2) we showed that the triples~\eqref{eq:triple} correspond to the lattices~\eqref{eq:sublattices}. The genus one edges described in Theorem~\ref{thm:moduli-space} yield four isomorphisms of each of these lattices onto one of those lattices corresponding to the triples $(\TurnOne+\TurnTwo-\TurnZero,\,\TurnOne,\,\TurnTwo)$. Furthermore, the genus zero edge corresponding to a sublattice of one of the former lattices is connected by an isogenic genus one edge with the corresponding sublattice of one of the latter lattices. In general we do not use these isomorphisms of the lattices~\eqref{eq:sublattices} onto those lattices corresponding to $(\TurnOne+\TurnTwo-\TurnZero,\,\TurnOne,\,\TurnTwo)$; we shall make use of them only in the case of embedded tori with $\TurnZero=1$ and $\TurnOne = 0$. In this case rotation periods are preserved, and up to the transformations $C'$ and $D'$ these isomorphisms are of the form \begin{equation}\label{eq:isomorphism} \Lambda^\ast\to p\,\gamma_1^\ast\bbZ \oplus q\,\gamma_2^\ast\bbZ, \quad n_1\gamma_1^\ast+n_2\gamma_2^\ast\mapsto p\,n_1\gamma_1^\ast\oplus q\,n_2\gamma_2^\ast\quad\mbox{ with }p=1\mbox{ or }q=1.
\end{equation} \begin{proposition} \label{th:lattices1} \mbox{} {\rm{(i)}} The edge of a lattice $\Gamma \subset \Lambda^\ast$ contains a vertex with triple $(\TurnZero,\,\TurnOne,\,\TurnTwo)$ if and only if $\Gamma$ is a sublattice of one of the lattices in \eqref{eq:sublattices}. {\rm{(ii)}} For any $\TurnZero,\,\TurnOne,\,\TurnTwo$ we have that $\Gamma_{[\TurnZero,\,\TurnOne,\,\TurnTwo]} = \Lambda^\ast$ if and only if $\TurnZero = 1$. {\rm{(iii)}} The unique edge of embedded rotational tori in $\mathcal{M}_0$ contains only the vertices with triples of the form $(\TurnZero,\TurnOne,\TurnTwo)=(1,0,\TurnTwo)$ with $\TurnTwo \geq 2$. \end{proposition} \begin{proof} (i) In the proof of Theorem~\ref{thm:flat-torus-integers}~(2) we determined the lattices that correspond to a triple~\eqref{eq:triple}. They are given in~\eqref{eq:sublattices}. (ii) If $\TurnZero = 1$, then $n_1\TurnOne + n_2\TurnTwo \in \bbZ$ holds for all $(n_1,\,n_2) \in \bbZ^2$. Hence $\Gamma_{[1,\,\TurnOne,\,\TurnTwo]} = \Lambda^\ast$. Conversely, if $\Gamma_{[\TurnZero,\,\TurnOne,\,\TurnTwo]} = \Lambda^\ast$, then $n_1\TurnOne + n_2\TurnTwo \in \TurnZero\bbZ$ holds for all $(n_1,\,n_2) \in \bbZ^2$. In particular $\TurnOne,\,\TurnTwo \in\TurnZero \bbZ$, which implies $\TurnZero =1$ so as not to contradict the assumption $\gcd(\TurnZero,\,\TurnOne,\,\TurnTwo) =1$. (iii) By Theorem~\ref{thm:flat-torus-integers} we have that $\TurnOne =0$ in the rotational case. By Theorem~\ref{thm:embedded} the $(\TurnZero,\,\TurnTwo)$ torus of revolution is embedded if and only if $\TurnZero=1$. \end{proof} \begin{lemma}\label{rotational connected} The moduli space of rotational \cmc tori in the 3-sphere, supplemented by bouquets of spheres, is connected.
\end{lemma} \begin{proof} We will show that any edge of rotational tori in $\mathcal{M}_0$ is connected to the edge of embedded rotational tori, or equivalently that any lattice $p\,\gamma_1^\ast\bbZ + q\,\gamma_2^\ast\bbZ$ of a rotational torus is connected to the $\Lambda^\ast$ lattice. By Proposition~\ref{th:lattices1}~(iii) the edge of embedded rotational tori in $\mathcal{M}_0$ contains all the vertices $(1,\,0,\,\TurnTwo)$ with $\TurnTwo \geq 2$. This edge is connected with all the edges that contain the vertices $(\TurnTwo-1,\,0,\,\TurnTwo)$ with $\TurnTwo \geq 2$. Hence the lattice $\Gamma_{[1,\,0,\,\TurnTwo]}=\Lambda^\ast$ is connected to the lattice $\Gamma_{[\TurnTwo -1,0,\TurnTwo]}= \gamma_1^\ast\bbZ+(\TurnTwo-1)\gamma_2^\ast\bbZ$. Furthermore, the lattice $D\;\Gamma_{[1,\,0,\,\TurnTwo]} = \Lambda^\ast$ is connected to the lattice $D\;\Gamma_{[\TurnTwo -1,0,\TurnTwo]}= (\TurnTwo-1)\gamma_1^\ast\bbZ + \gamma_2^\ast\bbZ$. Sublattices $\Gamma=p\,\gamma_1^\ast\bbZ + \gamma_2^\ast \bbZ\subset\Gamma_{[1,0,\TurnTwo]}$ are connected along genus one edges isogenic to the former genus one edges with $p\,\gamma_1^\ast\bbZ + (\TurnTwo-1)\,\gamma_2^\ast\bbZ \subset\Gamma_{[\TurnTwo -1,0,\TurnTwo]}$ by the isomorphism \eqref{eq:isomorphism}. \end{proof} In the following we shall combine deformations to pass between bifurcation vertices in $\mathcal{M}_0$. There are three different types of deformations: \ding{192} The deformation through spectral genus one, possibly also passing through bouquets of spheres as described in Theorem~\ref{thm:moduli-space}. In this case we write \[ (\TurnZero,\,\TurnOne,\,\TurnTwo) \xrightarrow{\text{\ding{192}}} (\TurnTwo + \TurnOne - \TurnZero,\,\TurnOne,\,\TurnTwo).\] \ding{193} The deformation along an edge of flat \cmc tori, passing from one `bifurcation' vertex to another one: Suppose $(\TurnZero,\,\TurnOne,\,\TurnTwo) \in \bbZ^3$ is such that $0<\TurnOne$ and $\TurnTwo<2\TurnZero$.
Then from $n_1\TurnOne+n_2\TurnTwo \in \TurnZero \bbZ$ we get that also $n_1(\TurnOne+\TurnZero)+n_2(\TurnTwo-\TurnZero)\in\TurnZero\bbZ$. In this case the transformation $(\TurnZero,\TurnOne,\TurnTwo)\mapsto (\TurnZero,\TurnTwo-\TurnZero,\TurnZero+\TurnOne)$ acts on the corresponding lattices as the transformation~(D''), which interchanges the four lattices~\eqref{eq:sublattices}. In such a case we write \[ (\TurnZero,\TurnOne,\TurnTwo)\xrightarrow{\text{\ding{193}}} (\TurnZero,\TurnTwo-\TurnZero,\TurnZero+\TurnOne).\] \ding{194} If $\TurnOne$ is odd, then we have \begin{equation} \label{eq:turnone_odd} n_1\gamma_1^\ast+n_2\gamma_2^\ast\in\Gamma_{[\TurnZero,\TurnOne,\TurnTwo]} \Longleftrightarrow 2n_1\gamma_1^\ast+n_2\gamma_2^\ast\in \Gamma_{[2\TurnZero,\TurnOne,2\TurnTwo]}. \end{equation} Due to Lemma~\ref{rotational connected} both genus zero edges corresponding to the lattices $2\bbZ\gamma_1^\ast+\bbZ\gamma_2^\ast$ and $\bbZ\gamma_1^\ast+2\bbZ\gamma_2^\ast$ are connected with the edge corresponding to $\Lambda^\ast$. If we apply an isogeny to these families, we obtain with \eqref{eq:isomorphism} a deformation of a genus zero edge corresponding to the lattice $\Gamma\subset\Lambda^\ast$ to another genus zero edge corresponding to the lattices $\{2n_1\gamma_1^\ast+n_2\gamma_2^\ast\mid n_1\gamma_1^\ast+n_2\gamma_2^\ast\in\Gamma\}$ and $\{n_1\gamma_1^\ast+2n_2\gamma_2^\ast\mid n_1\gamma_1^\ast+n_2\gamma_2^\ast\in\Gamma\}$, respectively. In combination with \eqref{eq:turnone_odd} we write \begin{equation*} (\TurnZero,\TurnOne,\TurnTwo)\xrightarrow{\text{\ding{194}}} (2\TurnZero,\TurnOne,2\TurnTwo). \end{equation*} \begin{lemma} \label{th:z2lattice} For co-prime integers $0\leq\TurnOne<\TurnZero<\TurnTwo$ the lattices~\eqref{eq:sublattices} are connected with $\Lambda^\ast$. \end{lemma} \begin{proof} We shall show that it is possible to successively reduce $\TurnOne$ until $\TurnOne = 0$.
We can add to $\TurnTwo$ multiples of $\TurnZero$ without changing the lattices \eqref{eq:sublattices}. We pick the smallest of all possible $\TurnTwo$ and obtain $\TurnTwo\leq2\TurnZero$. In case of equality, the lattices \eqref{eq:sublattices} are of the form $p\bbZ\gamma_1^\ast+q\bbZ\gamma_2^\ast$. In this case Lemma~\ref{rotational connected} connects $\Gamma$ with $\Lambda^\ast$. Therefore we may assume $\TurnTwo<2\TurnZero$. In the deformation \ding{192} we pick the smaller of the first entries $\TurnZero$ and $\TurnTwo+\TurnOne-\TurnZero$. Hence we can assume that $2\TurnZero\leq\TurnOne+\TurnTwo$, and now we have the following sequence of deformations \begin{equation*} \label{eq:step1} (\TurnZero,\TurnOne,\TurnTwo)\xrightarrow{\text{\ding{193}}} (\TurnZero,\TurnTwo-\TurnZero,\TurnZero+\TurnOne) \xrightarrow{\text{\ding{192}}} (\TurnTwo+\TurnOne-\TurnZero,\TurnTwo-\TurnZero,\TurnZero+\TurnOne) \xrightarrow{\text{\ding{193}}} (\TurnTwo+\TurnOne-\TurnZero, 2\TurnZero-\TurnTwo,\TurnOne+2\TurnTwo-2\TurnZero). \end{equation*} If $2\TurnZero<\TurnOne+\TurnTwo$ then $2\TurnZero-\TurnTwo<\TurnOne$, so $\TurnOne$ has decreased by this deformation. If $2\TurnZero = \TurnOne + \TurnTwo$, then we distinguish two cases: If $\TurnOne$ and $\TurnZero - \TurnOne$ were both even, then $\TurnTwo$ would be even, contradicting that $\gcd(\TurnZero,\,\TurnOne,\,\TurnTwo) =1$. Hence we just need to consider the two cases that $\TurnZero - \TurnOne$ is odd and that $\TurnOne$ is odd.
If $\TurnZero-\TurnOne$ is odd, then \[ (\TurnZero,\TurnOne,2\TurnZero-\TurnOne)\xrightarrow{\text{\ding{193}}} (\TurnZero,\TurnZero-\TurnOne,\TurnZero+\TurnOne) \xrightarrow{\text{\ding{194}}} (2\TurnZero,\TurnZero-\TurnOne,2\TurnZero+2\TurnOne) \xrightarrow{\text{\ding{192}}} (\TurnZero+\TurnOne,\TurnZero-\TurnOne,2\TurnZero+2\TurnOne).\] If $\TurnOne$ is odd, then \[ (\TurnZero,\TurnOne,2\TurnZero-\TurnOne)\xrightarrow{\text{\ding{194}}} (2\TurnZero,\TurnOne,4\TurnZero-2\TurnOne)\xrightarrow{\text{\ding{192}}} (2\TurnZero-\TurnOne,\TurnOne,4\TurnZero-2\TurnOne).\] Obviously for $\TurnTwo=2\TurnZero$ the lattices~\eqref{eq:sublattices} are of the form $p\bbZ\gamma_1^\ast+q\bbZ\gamma_2^\ast$. Hence in all cases either the lattices \eqref{eq:sublattices} are connected with $\Lambda^\ast$ or $\TurnOne$ is reduced. By repeating the above procedure finitely many times we eventually achieve $\TurnOne=0$, which we have already dealt with in Lemma~\ref{rotational connected}. \end{proof} \begin{lemma} \label{th:lattice-lemma} For every co-finite sublattice $\Gamma \varsubsetneq \Lambda^*$ there exists a triple $(\TurnZero,\,\TurnOne,\,\TurnTwo) \in \bbZ^3$ with $\gcd(\TurnZero,\,\TurnOne,\,\TurnTwo)=1$, $0\leq\TurnOne < \TurnZero < \TurnTwo$ and $\TurnZero >1$, such that $\Gamma$ is contained in a lattice of \eqref{eq:sublattices}. \end{lemma} \begin{proof} Since $\Gamma \neq\Lambda^\ast$, there exists $\gamma_1\in\Lambda^\ast$ with $\gamma_1\notin\Gamma$ such that $\gamma_1$ is not a multiple of another element in $\Lambda^\ast$. Let $\gamma_2\in\Lambda^\ast$ be such that $\Lambda^\ast\cong\gamma_1\bbZ\oplus\gamma_2\bbZ$. Since $\Lambda^\ast/\Gamma$ is finite, there exist smallest integers $p,q\in\bbZ$ with $p\geq 2,\,q\geq 0$ such that $\gamma_1p\in\Gamma$ and $\gamma_1q\oplus\gamma_2\in\Gamma$. Consider the homomorphism $g :\Lambda^\ast\to\bbZ,\, \gamma_1l\oplus\gamma_2m\mapsto l-qm$.
By definition of $p,q$ we have that $\Gamma\subset\gamma_1p\bbZ\oplus(\gamma_1q+\gamma_2)\bbZ$, so that $g$ maps $\Gamma$ to a sublattice of $p\bbZ$. Every such homomorphism is of the form $a\gamma_1^\ast+b\gamma_2^\ast\in\Lambda^\ast \mapsto\TurnOne a+\TurnTwo b$ with $\TurnOne,\TurnTwo\in\bbZ$. By adding appropriate multiples of $\TurnZero=p$ to $\TurnOne,\TurnTwo$ we can achieve $0\leq\TurnOne<\TurnZero<\TurnTwo$. \end{proof} \begin{theorem} \label{th:connected} The completed moduli space of equivariant \cmc tori in the 3-sphere is connected. \end{theorem} \begin{proof} If $\Gamma \varsubsetneq \Lambda^*$ is a co-finite sublattice, then by Lemma~\ref{th:lattice-lemma} the corresponding genus zero edge contains a bifurcation vertex with triple $(\TurnZero,\,\TurnOne,\,\TurnTwo)$ and integers $0\leq\TurnOne<\TurnZero<\TurnTwo$. By Lemma~\ref{th:z2lattice} the corresponding lattices \eqref{eq:sublattices} are connected to the lattice $\Lambda^\ast$. An isogeny of this path connects the edge corresponding to $\Gamma$ with a genus zero edge corresponding to $\Gamma'$ with $|\Lambda^\ast/\Gamma'|<|\Lambda^\ast/\Gamma|$. Repeating this argument we can successively reduce the order until $|\Lambda^\ast/\Gamma'|= 1$. \end{proof} \bibliographystyle{amsplain} \def\cydot{\leavevmode\raise.4ex\hbox{.}} \def\cprime{$'$} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}} \providecommand{\href}[2]{#2}
\section{Introduction} From the viewpoint of differential geometry, a {\it straight line} is a geometric curve with curvature $\kappa(s)=0$. A {\it plane curve} is a family of geometric curves with torsion $\tau(s)=0$. A helix is a geometric curve with non-vanishing constant curvature $\kappa$ and non-vanishing constant torsion $\tau$ \cite{barros}. The helix may be called a {\it circular helix} or {\it $W$-curve} \cite{ilarslan}. It is known that the straight line ($\kappa(s)=0$) and the circle ($\kappa(s)=a,\,\tau(s)=0$) are degenerate examples of helices \cite{kuhn}. In fact, the circular helix is the simplest three-dimensional spiral \cite{camci}. A curve of constant slope or {\it general helix} in Euclidean 3-space $\hbox{\bf E}^3$ is defined by the property that the tangent makes a constant angle with a fixed straight line called the axis of the general helix. A classical result stated by Lancret in 1802 and first proved by de Saint Venant in 1845 (see \cite{struik} for details) says that: {\it A necessary and sufficient condition that a curve be a general helix is that the function $$f=\dfrac{\tau}{\kappa}$$ is constant along the curve, where $\kappa$ and $\tau$ denote the curvature and the torsion, respectively}. General helices or {\it inclined curves} are well known curves in classical differential geometry of space curves, and we refer the reader to \cite{ali1, ali2, gluck, mont2, turgut} for recent works on this type of curves. In 2004, Izumiya and Takeuchi \cite{izumi} introduced the concept of a {\it slant helix} by requiring that the normal lines make a constant angle with a fixed straight line. They characterized slant helices by the property that the {\it geodesic curvature} of the principal image of the principal normal indicatrix $$ \sigma=\frac{\kappa^2}{(\kappa^2+\tau^2)^{3/2}}\Big(\frac{\tau}{\kappa}\Big)' $$ is a constant function.
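Lancret's criterion can be verified on an explicit example. The following Python sketch is a numerical illustration of our own (the function names are ours, not from any cited source): it builds the unit-speed circular helix with prescribed constant curvature $\kappa$ and torsion $\tau$ and checks that its tangent makes a constant angle $\phi$ with the $z$-axis, with $\cot\phi=\tau/\kappa$.

```python
import math

def helix_tangent(kappa, tau, s):
    # Unit-speed circular helix with curvature kappa and torsion tau:
    # alpha(s) = (r cos(w s), r sin(w s), h w s), where w = sqrt(kappa^2 + tau^2),
    # r = kappa / w^2 and h = tau / w^2.  Returns the tangent T(s) = alpha'(s).
    w = math.hypot(kappa, tau)
    r, h = kappa / w**2, tau / w**2
    return (-r * w * math.sin(w * s), r * w * math.cos(w * s), h * w)

kappa, tau = 2.0, 3.0
phi = math.acos(tau / math.hypot(kappa, tau))   # angle between T and the z-axis
for s in [0.0, 0.7, 1.9, 5.3]:
    T = helix_tangent(kappa, tau, s)
    assert abs(math.hypot(*T) - 1) < 1e-12      # unit speed
    assert abs(T[2] - math.cos(phi)) < 1e-12    # the angle with the axis is constant
assert abs(1 / math.tan(phi) - tau / kappa) < 1e-12   # Lancret: cot(phi) = tau/kappa
```

The third component of the tangent is independent of $s$, which is precisely the statement that the helix is a curve of constant slope.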
Kula and Yayli \cite{kula1} have studied spherical images of the tangent indicatrix and binormal indicatrix of a slant helix, and they showed that these spherical images are spherical helices. Recently, Kula et al. \cite{kula2} investigated the relation between a general helix and a slant helix. Moreover, they obtained some differential equations which are characterizations for a space curve to be a slant helix. A family of curves with constant curvature but non-constant torsion is called Salkowski curves, and a family of curves with constant torsion but non-constant curvature is called anti-Salkowski curves \cite{salkow}. Monterde \cite{mont1} studied some characterizations of these curves and he proved that their principal normal vector makes a constant angle with a fixed straight line, so Salkowski and anti-Salkowski curves are important examples of slant helices. A unit speed curve of {\it constant precession} in Euclidean 3-space $\hbox{\bf E}^3$ is defined by the property that its (Frenet) Darboux vector $$ W=\tau\,\hbox{\bf T}+\kappa\,\hbox{\bf B} $$ revolves about a fixed line in space with constant angle and constant speed. A curve of constant precession is characterized by having $$ \kappa=\frac{\mu}{m}\sin[\mu\,s],\,\,\,\,\,\,\,\,\,\,\tau=\frac{\mu}{m}\cos[\mu\,s] $$ or $$ \kappa=\frac{\mu}{m}\cos[\mu\,s],\,\,\,\,\,\,\,\,\,\,\tau=\frac{\mu}{m}\sin[\mu\,s] $$ where $\mu$ and $m$ are constants. This curve lies on a circular one-sheeted hyperboloid $$ x^2+y^2-m^2\,z^2=4m^2. $$ The curve of constant precession is closed if and only if $n=\frac{m}{\sqrt{1+m^2}}$ is rational \cite{scofield}. Kula and Yayli \cite{kula1} proved that the geodesic curvature of the spherical image of the principal normal indicatrix of a curve of constant precession is a constant function equal to $-m$. So, one can say that curves of constant precession are important examples of slant helices.
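The constant value $-m$ of the geodesic curvature $\sigma$ for a curve of constant precession can be recovered directly from the intrinsic equations above, without the explicit parameterization. The following numerical sketch (our own check, with arbitrarily chosen $\mu$ and $m$) evaluates $\sigma=\frac{\kappa^2}{(\kappa^2+\tau^2)^{3/2}}\big(\frac{\tau}{\kappa}\big)'$ with $\kappa=\frac{\mu}{m}\sin[\mu s]$, $\tau=\frac{\mu}{m}\cos[\mu s]$, using a central difference for the derivative.

```python
import math

mu, m = 2.0, 0.5   # arbitrary constants of the precession

kappa = lambda s: (mu / m) * math.sin(mu * s)
tau   = lambda s: (mu / m) * math.cos(mu * s)

def sigma(s, h=1e-6):
    # sigma = kappa^2 / (kappa^2 + tau^2)^(3/2) * (tau/kappa)'
    f = lambda t: tau(t) / kappa(t)
    df = (f(s + h) - f(s - h)) / (2 * h)   # central difference for (tau/kappa)'
    k, t = kappa(s), tau(s)
    return k**2 / (k**2 + t**2) ** 1.5 * df

# sample points with sin(mu*s) bounded away from zero, so kappa does not vanish
for s in [0.3, 0.8, 1.2]:
    assert abs(sigma(s) - (-m)) < 1e-5
```

Here $(\tau/\kappa)'=-\mu/\sin^2[\mu s]$ and $\kappa^2+\tau^2=\mu^2/m^2$, so the factors of $\sin^2[\mu s]$ cancel and $\sigma\equiv-m$, in agreement with the cited result.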
In this work, we define a new curve which we call a {\it $k$-slant helix} and we introduce some characterizations of this curve. Furthermore, we give some necessary and sufficient conditions for the $k$-slant helix. We hope these results will be helpful to mathematicians who are specialized in mathematical modeling as well as other applications of interest. \section{Preliminaries } In Euclidean space $\hbox{\bf E}^3$, it is well known that to each unit speed curve with at least four continuous derivatives one can associate three mutually orthogonal unit vector fields $\hbox{\bf T}$, $\hbox{\bf N}$ and $\hbox{\bf B}$, respectively the tangent, the principal normal and the binormal vector fields \cite{hacis}. We consider the usual metric in Euclidean 3-space $\hbox{\bf E}^3$, that is, $$ \langle,\rangle=dx_1^2+dx_2^2+dx_3^2, $$ where $(x_1,x_2,x_3)$ is a rectangular coordinate system of $\hbox{\bf E}^3$. Let $\psi:I\subset\hbox{\bb R}\rightarrow\hbox{\bf E}^3$, $\psi=\psi(s)$, be an arbitrary curve in $\hbox{\bf E}^3$. The curve $\psi$ is said to be of unit speed (or parameterized by the arc-length) if $\langle\psi'(s),\psi'(s)\rangle=1$ for any $s\in I$. In particular, if $\psi'(s)\not=0$ for any $s$, then it is possible to re-parameterize $\psi$, that is, $\alpha=\psi(\phi(s))$, so that $\alpha$ is parameterized by the arc-length. Thus, we will assume throughout this work that $\psi$ is a unit speed curve. Let $\{\hbox{\bf T}(s),\hbox{\bf N}(s),\hbox{\bf B}(s)\}$ be the moving frame along $\psi$, where the vectors $\hbox{\bf T}, \hbox{\bf N}$ and $\hbox{\bf B}$ are mutually orthogonal vectors satisfying $\langle\hbox{\bf T},\hbox{\bf T}\rangle=\langle\hbox{\bf N},\hbox{\bf N}\rangle=\langle\hbox{\bf B},\hbox{\bf B}\rangle=1$.
The Frenet equations for $\psi$ are given by (\cite{struik,turgut}) \begin{equation}\label{u1} \left[ \begin{array}{c} \hbox{\bf T}'(s) \\ \hbox{\bf N}'(s) \\ \hbox{\bf B}'(s) \\ \end{array} \right]=\left[ \begin{array}{ccc} 0 & \kappa(s) & 0 \\ -\kappa(s) & 0 & \tau(s) \\ 0 & -\tau(s) & 0 \\ \end{array} \right]\left[ \begin{array}{c} \hbox{\bf T}(s) \\ \hbox{\bf N}(s) \\ \hbox{\bf B}(s) \\ \end{array} \right]. \end{equation} If $\tau(s)=0$ for all $s\in I$, then $\hbox{\bf B}(s)$ is a constant vector $V$ and the curve $\psi$ lies in a $2$-dimensional affine subspace orthogonal to $V$, which is isometric to the Euclidean $2$-space $\hbox{\bf E}^2$. \section{New representation of spherical indicatrices} In this section we introduce a {\it new representation} of spherical indicatrices of regular curves in Euclidean 3-space $\hbox{\bf E}^3$ as follows: \begin{definition}\label{df-01} Let $\psi$ be a unit speed regular curve in Euclidean 3-space with Frenet vectors $\hbox{\bf T}$, $\hbox{\bf N}$ and $\hbox{\bf B}$. The unit tangent vectors along the curve $\psi(s)$ generate a curve $\psi_{\mathbf{t}}=\hbox{\bf T}$ on the sphere of radius $1$ about the origin. The curve $\psi_{\mathbf{t}}$ is called the spherical indicatrix of $\hbox{\bf T}$ or, more commonly, the tangent indicatrix of the curve $\psi$. If $\psi=\psi(s)$ is a natural representation of the curve $\psi$, then $\psi_{\mathbf{t}}(s)=\hbox{\bf T}(s)$ will be a representation of $\psi_{\mathbf{t}}$. Similarly, one can consider the principal normal indicatrix $\psi_{\mathbf{n}}=\hbox{\bf N}(s)$ and the binormal indicatrix $\psi_{\mathbf{b}}=\hbox{\bf B}(s)$.
\end{definition} \begin{lemma}\label{lm-01} If the Frenet frame of the tangent indicatrix $\psi_{\mathbf{t}}=\hbox{\bf T}$ of a space curve $\psi$ is $\{\hbox{\bf T}_{\mathbf{t}},\hbox{\bf N}_{\mathbf{t}},\hbox{\bf B}_{\mathbf{t}}\}$, then we have the Frenet formula: \begin{equation}\label{u2} \left[ \begin{array}{c} \hbox{\bf T}^{\,'}_{\mathbf{t}}(s_{\mathbf{t}})\\ \hbox{\bf N}^{\,'}_{\mathbf{t}}(s_{\mathbf{t}})\\ \hbox{\bf B}^{\,'}_{\mathbf{t}}(s_{\mathbf{t}})\\ \end{array} \right]=\left[ \begin{array}{ccc} 0 & \kappa_{\mathbf{t}} & 0 \\ -\kappa_{\mathbf{t}} & 0 & \tau_{\mathbf{t}} \\ 0 & -\tau_{\mathbf{t}} & 0 \\ \end{array} \right]\left[ \begin{array}{c} \hbox{\bf T}_{\mathbf{t}}(s_{\mathbf{t}})\\ \hbox{\bf N}_{\mathbf{t}}(s_{\mathbf{t}})\\ \hbox{\bf B}_{\mathbf{t}}(s_{\mathbf{t}})\\ \end{array} \right], \end{equation} where \begin{equation}\label{u3} \hbox{\bf T}_{\mathbf{t}}=\hbox{\bf N},\,\,\,\,\,\hbox{\bf N}_{\mathbf{t}}=\frac{-\hbox{\bf T}+f\,\hbox{\bf B}}{\sqrt{1+f^2}},\,\,\,\,\, \hbox{\bf B}_{\mathbf{t}}=\frac{f\,\hbox{\bf T}+\hbox{\bf B}}{\sqrt{1+f^2}}, \end{equation} and \begin{equation}\label{u4} s_{\mathbf{t}}=\int\kappa(s)ds,\,\,\,\,\,\kappa_{\mathbf{t}}=\sqrt{1+f^2},\,\,\,\,\,\tau_{\mathbf{t}}=\sigma\sqrt{1+f^2}, \end{equation} where \begin{equation}\label{u41} f=\frac{\tau(s)}{\kappa(s)} \end{equation} and \begin{equation}\label{u5} \sigma=\frac{f'(s)}{\kappa(s)\Big(1+f^2(s)\Big)^{3/2}} \end{equation} is the geodesic curvature of the principal image of the principal normal indicatrix of the curve $\psi$; here $s_{\mathbf{t}}$ is a natural representation of the tangent indicatrix of the curve $\psi$ and equals the total curvature of the curve $\psi$, and $\kappa_{\mathbf{t}}$ and $\tau_{\mathbf{t}}$ are the curvature and torsion of $\psi_{\mathbf{t}}$. \end{lemma} Therefore we can see that: \begin{equation}\label{u6} \frac{\tau_{\mathbf{t}}}{\kappa_{\mathbf{t}}}=\sigma.
\end{equation} \begin{lemma}\label{lm-02} If the Frenet frame of the principal normal indicatrix $\psi_{\mathbf{n}}=\hbox{\bf N}$ of a space curve $\psi$ is $\{\hbox{\bf T}_{\mathbf{n}},\hbox{\bf N}_{\mathbf{n}},\hbox{\bf B}_{\mathbf{n}}\}$, then we have the Frenet formula: \begin{equation}\label{u7} \left[ \begin{array}{c} \hbox{\bf T}^{\,'}_{\mathbf{n}}(s_{\mathbf{n}})\\ \hbox{\bf N}^{\,'}_{\mathbf{n}}(s_{\mathbf{n}})\\ \hbox{\bf B}^{\,'}_{\mathbf{n}}(s_{\mathbf{n}})\\ \end{array} \right]=\left[ \begin{array}{ccc} 0 & \kappa_{\mathbf{n}} & 0 \\ -\kappa_{\mathbf{n}} & 0 & \tau_{\mathbf{n}} \\ 0 & -\tau_{\mathbf{n}} & 0 \\ \end{array} \right]\left[ \begin{array}{c} \hbox{\bf T}_{\mathbf{n}}(s_{\mathbf{n}})\\ \hbox{\bf N}_{\mathbf{n}}(s_{\mathbf{n}})\\ \hbox{\bf B}_{\mathbf{n}}(s_{\mathbf{n}})\\ \end{array} \right], \end{equation} where \begin{equation}\label{u8} \left\{ \begin{array}{ll} \hbox{\bf T}_{\mathbf{n}}=\frac{-\hbox{\bf T}+f\,\hbox{\bf B}}{\sqrt{1+f^2}},\\ \hbox{\bf N}_{\mathbf{n}}=\frac{\sigma}{\sqrt{1+\sigma^2}} \Big[\frac{f\,\hbox{\bf T}+\hbox{\bf B}}{\sqrt{1+f^2}}-\frac{\hbox{\bf N}}{\sigma}\Big],\\ \hbox{\bf B}_{\mathbf{n}}=\frac{1}{\sqrt{1+\sigma^2}}\Big[\frac{f\,\hbox{\bf T}+\hbox{\bf B}}{\sqrt{1+f^2}}+\sigma\,\hbox{\bf N}\Big], \end{array} \right. \end{equation} and \begin{equation}\label{u9} s_{\mathbf{n}}=\int\kappa(s)\sqrt{1+f^2(s)}\,ds,\,\,\,\,\,\kappa_{\mathbf{n}}=\sqrt{1+\sigma^2},\,\,\,\,\, \tau_{\mathbf{n}}=\Gamma\sqrt{1+\sigma^2}, \end{equation} where \begin{equation}\label{u10} \Gamma=\frac{\sigma'(s)}{\kappa(s)\sqrt{1+f^2(s)}\Big(1+\sigma^2(s)\Big)^{3/2}}, \end{equation} $s_{\mathbf{n}}$ is a natural representation of the principal normal indicatrix of the curve $\psi$, and $\kappa_{\mathbf{n}}$ and $\tau_{\mathbf{n}}$ are the curvature and torsion of $\psi_{\mathbf{n}}$. \end{lemma} Therefore we have: \begin{equation}\label{u11} \frac{\tau_{\mathbf{n}}}{\kappa_{\mathbf{n}}}=\Gamma.
\end{equation} \begin{lemma}\label{lm-03} If the Frenet frame of the binormal indicatrix $\psi_{\mathbf{b}}=\hbox{\bf B}$ of a space curve $\psi$ is $\{\hbox{\bf T}_{\mathbf{b}},\hbox{\bf N}_{\mathbf{b}},\hbox{\bf B}_{\mathbf{b}}\}$, then we have the Frenet formula: \begin{equation}\label{u12} \left[ \begin{array}{c} \hbox{\bf T}^{\,'}_{\mathbf{b}}(s_{\mathbf{b}})\\ \hbox{\bf N}^{\,'}_{\mathbf{b}}(s_{\mathbf{b}})\\ \hbox{\bf B}^{\,'}_{\mathbf{b}}(s_{\mathbf{b}})\\ \end{array} \right]=\left[ \begin{array}{ccc} 0 & \kappa_{\mathbf{b}} & 0 \\ -\kappa_{\mathbf{b}} & 0 & \tau_{\mathbf{b}} \\ 0 & -\tau_{\mathbf{b}} & 0 \\ \end{array} \right]\left[ \begin{array}{c} \hbox{\bf T}_{\mathbf{b}}(s_{\mathbf{b}})\\ \hbox{\bf N}_{\mathbf{b}}(s_{\mathbf{b}})\\ \hbox{\bf B}_{\mathbf{b}}(s_{\mathbf{b}})\\ \end{array} \right], \end{equation} where \begin{equation}\label{u13} \hbox{\bf T}_{\mathbf{b}}=-\hbox{\bf N},\,\,\,\,\,\hbox{\bf N}_{\mathbf{b}}=\frac{\hbox{\bf T}-f\,\hbox{\bf B}}{\sqrt{1+f^2}},\,\,\,\,\, \hbox{\bf B}_{\mathbf{b}}=\frac{f\,\hbox{\bf T}+\hbox{\bf B}}{\sqrt{1+f^2}}, \end{equation} and \begin{equation}\label{u14} s_{\mathbf{b}}=\int\tau(s)ds,\,\,\,\,\,\kappa_{\mathbf{b}}=\frac{\sqrt{1+f^2}}{f},\,\,\,\,\, \tau_{\mathbf{b}}=-\frac{\sigma\sqrt{1+f^2}}{f}, \end{equation} where $s_{\mathbf{b}}$ is a natural representation of the binormal indicatrix of the curve $\psi$ and equals the total torsion of the curve $\psi$, and $\kappa_{\mathbf{b}}$ and $\tau_{\mathbf{b}}$ are the curvature and torsion of $\psi_{\mathbf{b}}$. \end{lemma} Therefore we obtain: \begin{equation}\label{u15} \frac{\tau_{\mathbf{b}}}{\kappa_{\mathbf{b}}}=-\sigma. \end{equation} \section{$k$-slant helix and its characterizations} In this section we generalize the concepts of the general helix and the slant helix to a new curve which we call a $k$-slant helix.
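Before turning to the generalization, the formulas (\ref{u4}) of Lemma~\ref{lm-01} admit a quick numerical sanity check on the circular helix with $\kappa=\tau=1$: there $f\equiv1$ and $\sigma\equiv0$, so the tangent indicatrix should have curvature $\kappa_{\mathbf{t}}=\sqrt{1+f^2}=\sqrt{2}$ and torsion $\tau_{\mathbf{t}}=0$. The Python sketch below is our own construction; it evaluates the curvature of the indicatrix by central differences.

```python
import math

a, w = 1 / math.sqrt(2), math.sqrt(2)   # helix with kappa = tau = 1

def T(s):
    # tangent indicatrix of the unit-speed helix with kappa = tau = 1:
    # a circle of radius 1/sqrt(2) at height 1/sqrt(2) on the unit sphere
    return (-a * math.sin(w * s), a * math.cos(w * s), a)

def curvature(curve, s, h=1e-4):
    # space-curve curvature |c' x c''| / |c'|^3 via central differences
    p0, p1, p2 = curve(s - h), curve(s), curve(s + h)
    d1 = [(x2 - x0) / (2 * h) for x0, x2 in zip(p0, p2)]
    d2 = [(x0 - 2 * x1 + x2) / h**2 for x0, x1, x2 in zip(p0, p1, p2)]
    cx = (d1[1] * d2[2] - d1[2] * d2[1],
          d1[2] * d2[0] - d1[0] * d2[2],
          d1[0] * d2[1] - d1[1] * d2[0])
    n1 = math.sqrt(sum(x * x for x in d1))
    return math.sqrt(sum(x * x for x in cx)) / n1**3

f = 1.0   # f = tau/kappa is constant, so sigma = 0 and kappa_t = sqrt(1 + f^2)
for s in [0.2, 1.1, 2.5]:
    assert abs(curvature(T, s) - math.sqrt(1 + f**2)) < 1e-4
```

The indicatrix here is a planar circle, so its torsion vanishes, consistent with $\tau_{\mathbf{t}}=\sigma\sqrt{1+f^2}=0$.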
\begin{definition}\label{df-02} Let $\psi=\psi(s)$ be a natural representation of a unit speed regular curve in Euclidean 3-space with Frenet apparatus $\{\kappa,\tau,\hbox{\bf T},\hbox{\bf N},\hbox{\bf B}\}$. A curve $\psi$ is called a $k$-slant helix if the unit vector \begin{equation}\label{u151} \psi_{k+1}=\frac{\psi'_{k}(s)}{\|\psi'_{k}(s)\|} \end{equation} makes a constant angle with a fixed direction, where $\psi_0=\psi(s)$ and $\psi_1=\frac{\psi'_{0}(s)}{\|\psi'_{0}(s)\|}$. \end{definition} From the above definition we can see that: {\bf (1):} The {\it $0$-slant helix} is the curve whose unit vector \begin{equation}\label{u16} \psi_{1}=\frac{\psi'_{0}(s)}{\|\psi'_{0}(s)\|}=\frac{\psi'(s)}{\|\psi'(s)\|}=\hbox{\bf T}(s), \end{equation} (which is the tangent vector of the curve $\psi$) makes a constant angle with a fixed direction. Thus the $0$-slant helix is the general helix. By using the Frenet frame (\ref{u1}), it is easy to prove the following two well-known lemmas: \begin{lemma}\label{lm-04} Let $\psi:I \rightarrow\hbox{\bf E}^3$ be a curve that is parameterized by arclength with intrinsic equations $\kappa(s)\neq 0$ and $\tau(s)\neq 0$. The curve $\psi$ is a $0$-slant helix or general helix (the vector $\psi_1$ makes a constant angle, $\phi$, with a fixed straight line in the space) if and only if the function $f(s)=\frac{\tau}{\kappa}=\cot[\phi]$ is constant. \end{lemma} \begin{lemma}\label{lm-05} Let $\psi:I \rightarrow\hbox{\bf E}^3$ be a curve that is parameterized by arclength with intrinsic equations $\kappa(s)\neq 0$ and $\tau(s)\neq 0$. The curve $\psi$ is a $0$-slant helix or general helix if and only if the binormal vector $\hbox{\bf B}$ makes a constant angle with a fixed direction.
\end{lemma} {\bf (2):} The {\it $1$-slant helix} is the curve whose unit vector \begin{equation}\label{u161} \psi_{2}=\frac{\psi'_{1}(s)}{\|\psi'_{1}(s)\|}=\frac{\hbox{\bf T}'(s)}{\|\hbox{\bf T}'(s)\|}=\hbox{\bf N}(s), \end{equation} (which is the principal normal vector of the curve $\psi$) makes a constant angle with a fixed direction. Thus the $1$-slant helix is the slant helix. Using the Frenet frame (\ref{u2}) of the tangent indicatrix of the curve $\psi$, it is easy to prove the following two lemmas. The first lemma is introduced in \cite{ali3, bukcu, izumi, kula1, kula2}. Here, we state this lemma and introduce {\it a new representation and a simple proof} using the spherical tangent indicatrix of the curve. The second lemma is new. \begin{lemma}\label{lm-06} Let $\psi:I\rightarrow\hbox{\bf E}^3$ be a curve that is parameterized by arclength with intrinsic equations $\kappa(s)\neq0$ and $\tau(s)\neq0$. The curve $\psi$ is a $1$-slant helix or slant helix (the vector $\psi_2$ makes a constant angle, $\phi$, with a fixed straight line in the space) if and only if the function $\sigma(s)=\frac{\tau_{\mathbf{t}}}{\kappa_{\mathbf{t}}}=\cot[\phi]$. \end{lemma} {\bf Proof:} $(\Rightarrow)$ Let $\mathbf{d}$ be the fixed unit vector that makes a constant angle, $\phi$, with the vector $\psi_2=\hbox{\bf N}=\hbox{\bf T}_{\mathbf{t}}$. Therefore \begin{equation}\label{u20} \langle\hbox{\bf T}_{\mathbf{t}},\mathbf{d}\rangle=\cos[\phi]. \end{equation} Differentiating the equation (\ref{u20}) with respect to the variable $s_{\mathbf{t}}$ and using Frenet equations (\ref{u2}), we get \begin{equation}\label{u21} \kappa_{\mathbf{t}}\langle\hbox{\bf N}_{\mathbf{t}},\mathbf{d}\rangle=0. \end{equation} Because $\kappa_{\mathbf{t}}=\sqrt{1+f^2}\neq0$, we have \begin{equation}\label{u22} \langle\hbox{\bf N}_{\mathbf{t}},\mathbf{d}\rangle=0.
\end{equation} From the above equation, the vector $\mathbf{d}$ is perpendicular to the vector $\hbox{\bf N}_{\mathbf{t}}$, and so the vector $\mathbf{d}$ lies in the plane spanned by the vectors $\hbox{\bf T}_{\mathbf{t}}$ and $\hbox{\bf B}_{\mathbf{t}}$. Therefore the vector $\mathbf{d}$ makes constant angles with the two vectors $\hbox{\bf T}_{\mathbf{t}}$ and $\hbox{\bf B}_{\mathbf{t}}$. Hence, the vector $\mathbf{d}$ can be written in the following form: \begin{equation}\label{u23} \mathbf{d}=\cos[\phi]\hbox{\bf T}_{\mathbf{t}}+\sin[\phi]\hbox{\bf B}_{\mathbf{t}}. \end{equation} If we differentiate equation (\ref{u23}), we have \begin{equation}\label{u24} 0=(\cos[\phi]\kappa_{\mathbf{t}}-\sin[\phi]\tau_{\mathbf{t}})\hbox{\bf N}_{\mathbf{t}}, \end{equation} which leads to $\sigma(s)=\frac{\tau_{\mathbf{t}}}{\kappa_{\mathbf{t}}}=\cot[\phi]$. $(\Leftarrow)$ Suppose $\sigma=\cot[\phi]$, i.e., $\tau_{\mathbf{t}}=\cot[\phi]\kappa_{\mathbf{t}}$, and let us consider the vector \begin{equation}\label{u25} \mathbf{d}=\cos[\phi]\hbox{\bf T}_{\mathbf{t}}+\sin[\phi]\hbox{\bf B}_{\mathbf{t}}. \end{equation} From the Frenet formula (\ref{u2}), it is easy to prove that the vector $\mathbf{d}$ is constant and $\langle\hbox{\bf T}_{\mathbf{t}},\mathbf{d}\rangle=\cos[\phi]$. This concludes the proof of lemma (\ref{lm-06}). \begin{lemma}\label{lm-07} Let $\psi:I \rightarrow\hbox{\bf E}^3$ be a curve that is parameterized by arclength with intrinsic equations $\kappa(s)\neq 0$ and $\tau(s)\neq 0$. The curve $\psi$ is a $1$-slant helix or slant helix if and only if the unit Darboux (modified Darboux \cite{koend}) vector field $\hbox{\bf B}_{\mathbf{t}}=\frac{f\hbox{\bf T}+\hbox{\bf B}}{\sqrt{1+f^2}}$ of $\psi$ makes a constant angle with a fixed direction. \end{lemma} {\bf Proof:} $(\Rightarrow)$ The proof of the necessary condition is the same as in the above lemma.
$(\Leftarrow)$ Let $\mathbf{d}$ be the fixed unit vector that makes a constant angle, $\frac{\pi}{2}-\phi$, with the vector $\hbox{\bf B}_{\mathbf{t}}=\frac{f\hbox{\bf T}+\hbox{\bf B}}{\sqrt{1+f^2}}$. Therefore \begin{equation}\label{u26} \langle\hbox{\bf B}_{\mathbf{t}},\mathbf{d}\rangle=\sin[\phi]. \end{equation} Differentiating the equation (\ref{u26}) with respect to the variable $s_{\mathbf{t}}$ and using Frenet equations (\ref{u2}), we get \begin{equation}\label{u27} -\tau_{\mathbf{t}}\langle\hbox{\bf N}_{\mathbf{t}},\mathbf{d}\rangle=0. \end{equation} Because $\tau_{\mathbf{t}}=\sigma\sqrt{1+f^2}\neq0$, we have \begin{equation}\label{u28} \langle\hbox{\bf N}_{\mathbf{t}},\mathbf{d}\rangle=0. \end{equation} From the above equation, the vector $\mathbf{d}$ is perpendicular to the vector $\hbox{\bf N}_{\mathbf{t}}$, and so the vector $\mathbf{d}$ lies in the plane spanned by the vectors $\hbox{\bf B}_{\mathbf{t}}$ and $\hbox{\bf T}_{\mathbf{t}}$. Therefore the vector $\mathbf{d}$ makes constant angles with the two vectors $\hbox{\bf B}_{\mathbf{t}}$ and $\hbox{\bf T}_{\mathbf{t}}$. This concludes the proof of lemma (\ref{lm-07}). {\bf (3):} The {\it $2$-slant helix} is the curve whose unit vector \begin{equation}\label{u29} \psi_{3}=\frac{\psi'_{2}(s)}{\|\psi'_{2}(s)\|}=\frac{\hbox{\bf N}'(s)}{\|\hbox{\bf N}'(s)\|}=\frac{-\hbox{\bf T}+f\hbox{\bf B}}{\sqrt{1+f^2}}, \end{equation} makes a constant angle with a fixed direction. Thus the $2$-slant helix is a new special curve which we call the {\it slant-slant helix}. Using the Frenet frame (\ref{u8}) of the principal normal indicatrix of the curve $\psi$, it is easy to prove the following two new lemmas. \begin{lemma}\label{lm-08} Let $\psi:I\rightarrow\hbox{\bf E}^3$ be a curve that is parameterized by arclength with intrinsic equations $\kappa(s)\neq0$ and $\tau(s)\neq0$.
The curve $\psi$ is a $2$-slant helix or slant-slant helix (the vector $\psi_3$ makes a constant angle, $\phi$, with a fixed straight line in the space) if and only if the function $\Gamma(s)=\frac{\tau_{\mathbf{n}}}{\kappa_{\mathbf{n}}}=\cot[\phi]$. \end{lemma} The proof of the above lemma (using the Frenet frame (\ref{u8})) is similar to the proof of lemma (\ref{lm-06}) (using the Frenet frame (\ref{u2})). \begin{lemma}\label{lm-09} Let $\psi:I \rightarrow\hbox{\bf E}^3$ be a curve that is parameterized by arclength with intrinsic equations $\kappa(s)\neq 0$ and $\tau(s)\neq 0$. The curve $\psi$ is a $2$-slant helix or slant-slant helix if and only if the vector $\hbox{\bf B}_{\mathbf{n}}=\frac{1}{\sqrt{1+\sigma^2}}\Big[\frac{f\,\hbox{\bf T}+\hbox{\bf B}}{\sqrt{1+f^2}}+\sigma\,\hbox{\bf N}\Big]$ makes a constant angle with a fixed direction. \end{lemma} The proof of the above lemma (using the Frenet frame (\ref{u8})) is similar to the proof of lemma (\ref{lm-07}) (using the Frenet frame (\ref{u2})). {\bf (4):} The {\it $3$-slant helix} is the curve whose unit vector \begin{equation}\label{u30} \psi_{4}=\frac{\psi'_{3}(s)}{\|\psi'_{3}(s)\|}=\frac{\sigma}{\sqrt{1+\sigma^2}} \Big[\frac{f\,\hbox{\bf T}+\hbox{\bf B}}{\sqrt{1+f^2}}-\frac{\hbox{\bf N}}{\sigma}\Big], \end{equation} makes a constant angle with a fixed direction. Thus the $3$-slant helix is a new special curve which we call the {\it slant-slant-slant helix}. \begin{lemma}\label{lm-10} Let $\psi:I\rightarrow\hbox{\bf E}^3$ be a curve that is parameterized by arclength with intrinsic equations $\kappa(s)\neq0$ and $\tau(s)\neq0$. The curve $\psi$ is a $3$-slant helix or slant-slant-slant helix (the vector $\psi_4$ makes a constant angle, $\phi$, with a fixed straight line in the space) if and only if the function \begin{equation}\label{u301} \Lambda=\frac{\Gamma'(s)}{\kappa(s)\sqrt{1+f^2(s)}\sqrt{1+\sigma^2(s)}\Big(1+\Gamma^2(s)\Big)^{3/2}}=\cot[\phi].
\end{equation} \end{lemma} {\bf Proof:} $(\Rightarrow)$ Let $\textbf{d}$ be the fixed unit vector that makes a constant angle, $\phi$, with the vector $\psi_{4}=\hbox{\bf N}_{\mathbf{n}}$. Therefore \begin{equation}\label{u31} \langle\hbox{\bf N}_{\mathbf{n}},\textbf{d}\rangle=\cos[\phi]. \end{equation} Differentiating the equation (\ref{u31}) with respect to the variable $s_{\mathbf{n}}=\int\kappa(s)\sqrt{1+f^2(s)}ds$ and using the Frenet equations (\ref{u8}), we get \begin{equation}\label{u32} \langle-\kappa_{\mathbf{n}}\hbox{\bf T}_{\mathbf{n}}+\tau_{\mathbf{n}}\hbox{\bf B}_{\mathbf{n}},\textbf{d}\rangle=0. \end{equation} Therefore, $$ \langle\hbox{\bf T}_{\mathbf{n}},\textbf{d}\rangle=\frac{\tau_{\mathbf{n}}}{\kappa_{\mathbf{n}}}\langle\hbox{\bf B}_{\mathbf{n}},\textbf{d}\rangle= \Gamma\langle\hbox{\bf B}_{\mathbf{n}},\textbf{d}\rangle. $$ If we put $\langle\hbox{\bf B}_{\mathbf{n}},\textbf{d}\rangle=g(s)$, we can write $$ \textbf{d}=\Gamma\,g\,\hbox{\bf T}_{\mathbf{n}}+\cos[\phi]\hbox{\bf N}_{\mathbf{n}}+g\,\hbox{\bf B}_{\mathbf{n}}. $$ Since $\textbf{d}$ is a unit vector, we get $g=\pm \frac{\sin[\phi]}{\sqrt{1+\Gamma^2}}$. Therefore, the vector $\textbf{d}$ can be written as \begin{equation}\label{u33} \textbf{d}=\pm\,\frac{\Gamma\,\sin[\phi]}{\sqrt{1+\Gamma^2}}\,\hbox{\bf T}_{\mathbf{n}}+\cos[\phi]\,\hbox{\bf N}_{\mathbf{n}} \pm\frac{\sin[\phi]}{\sqrt{1+\Gamma^2}}\,\hbox{\bf B}_{\mathbf{n}}. \end{equation} The equation (\ref{u32}) can be written in the form: \begin{equation}\label{u34} \langle-\hbox{\bf T}_{\mathbf{n}}+\Gamma\,\hbox{\bf B}_{\mathbf{n}},\textbf{d}\rangle=0. \end{equation} If we differentiate the equation (\ref{u34}) with respect to $s_{\mathbf{n}}$, we obtain \begin{equation}\label{u35} \langle \dot{\Gamma}\,\hbox{\bf B}_{\mathbf{n}}+(1+\Gamma^2)\sqrt{1+\sigma^2}\hbox{\bf N}_{\mathbf{n}},\textbf{d}\rangle=0, \end{equation} where the dot denotes differentiation with respect to $s_{\mathbf{n}}$.
If we put the vector $\mathbf{d}$ from equation (\ref{u33}) in the equation (\ref{u35}), we obtain the following condition $$ \frac{\dot{\Gamma}}{\sqrt{1+\sigma^2}(1+\Gamma^2)^{3/2}}=\pm\,\cot[\phi]. $$ Finally, since $s_{\mathbf{n}}=\int\kappa(s)\sqrt{1+f^2(s)}ds$ and $\dot{\Gamma}=\frac{\Gamma'(s)}{\kappa(s)\sqrt{1+f^2(s)}}$, we obtain the desired result. $(\Leftarrow)$ Suppose that $\frac{\dot{\Gamma}}{\sqrt{1+\sigma^2}(1+\Gamma^2)^{3/2}}=\pm\,\cot[\phi]$, where the dot denotes differentiation with respect to $s_{\mathbf{n}}$. Let us consider the vector $$ \textbf{d}=\pm\,\cos[\phi]\Big(\frac{\Gamma\,\tan[\phi]}{\sqrt{1+\Gamma^2}}\,\hbox{\bf T}_{\mathbf{n}}\pm\hbox{\bf N}_{\mathbf{n}} +\frac{\tan[\phi]}{\sqrt{1+\Gamma^2}}\,\hbox{\bf B}_{\mathbf{n}}\Big). $$ We will prove that the vector $\textbf{d}$ is a constant vector. Indeed, applying the Frenet formula (\ref{u8}), $$ \dot{\textbf{d}}=\pm\sqrt{1+\sigma^2}\cos[\phi]\Big(\pm\hbox{\bf T}_{\mathbf{n}}+\frac{\Gamma\tan[\phi]}{\sqrt{1+\Gamma^2}}\,\hbox{\bf N}_{\mathbf{n}} \mp\hbox{\bf T}_{\mathbf{n}}\pm\Gamma\hbox{\bf B}_{\mathbf{n}}\mp\Gamma\hbox{\bf B}_{\mathbf{n}} -\frac{\Gamma\tan[\phi]}{\sqrt{1+\Gamma^2}}\,\hbox{\bf N}_{\mathbf{n}}\Big)=0. $$ Therefore, the vector $\textbf{d}$ is constant and $\langle\hbox{\bf N}_{\mathbf{n}},\textbf{d}\rangle=\cos[\phi]$. This concludes the proof of lemma (\ref{lm-10}). From the discussion above, we can see that: {\bf (i):} The function $f(s)$ is equal to the ratio of the torsion $(\tau=\tau_0)$ and the curvature $(\kappa=\kappa_0)$ of the curve $\psi=\psi_0$, and may be denoted $\sigma_0(s)=f(s)=\frac{\tau_0(s)}{\kappa_0(s)}$. {\bf (ii):} The function $\sigma(s)$ is equal to the ratio of the torsion $(\tau_{\mathbf{t}}=\tau_1)$ and the curvature $(\kappa_{\mathbf{t}}=\kappa_1)$ of the tangent indicatrix $\hbox{\bf T}=\psi_1$ of the curve $\psi$, and may be denoted $\sigma_1(s)=\sigma(s)=\frac{\tau_1(s)}{\kappa_1(s)}$.
{\bf (iii):} The function $\Gamma(s)$ is equal to the ratio of the torsion $(\tau_{\mathbf{n}}=\tau_2)$ and the curvature $(\kappa_{\mathbf{n}}=\kappa_2)$ of the principal normal indicatrix $\hbox{\bf N}=\psi_2$ of the curve $\psi$, and may be denoted $\sigma_2(s)=\Gamma(s)=\frac{\tau_2(s)}{\kappa_2(s)}$. We expect that the function $\Lambda(s)$ is equal to the ratio of the torsion $\tau_3$ and the curvature $\kappa_3$ of the spherical image of $\psi_3$, and may be denoted $\sigma_3(s)=\Lambda(s)=\frac{\tau_3(s)}{\kappa_3(s)}$. Hence, we can state (the proof is classical) the following lemma: \begin{lemma}\label{lm-11} If the Frenet frame of the spherical image of $\psi_3=\frac{-\hbox{\bf T}+f\hbox{\bf B}}{\sqrt{1+f^2}}$ of the curve $\psi$ is $\{\hbox{\bf T}_3,\hbox{\bf N}_3,\hbox{\bf B}_3\}$, then we have the Frenet formulas: \begin{equation}\label{u36} \left[ \begin{array}{c} \hbox{\bf T}^{\,'}_3(s_3)\\ \hbox{\bf N}^{\,'}_3(s_3)\\ \hbox{\bf B}^{\,'}_3(s_3)\\ \end{array} \right]=\left[ \begin{array}{ccc} 0 & \kappa_3 & 0 \\ -\kappa_3 & 0 & \tau_3 \\ 0 & -\tau_3 & 0 \\ \end{array} \right]\left[ \begin{array}{c} \hbox{\bf T}_3(s_3)\\ \hbox{\bf N}_3(s_3)\\ \hbox{\bf B}_3(s_3)\\ \end{array} \right], \end{equation} where \begin{equation}\label{u37} \left\{ \begin{array}{ll} \hbox{\bf T}_3=\frac{\sigma}{\sqrt{1+\sigma^2}} \Big[\frac{f\,\hbox{\bf T}+\hbox{\bf B}}{\sqrt{1+f^2}}-\frac{\hbox{\bf N}}{\sigma}\Big],\\ \hbox{\bf N}_3=\frac{1}{\sqrt{1+\sigma^2}\sqrt{1+\Gamma^2}}\Big[ \frac{\Gamma\big(f\hbox{\bf T}+\hbox{\bf B}\big)+\sqrt{1+\sigma^2}\big(\hbox{\bf T}-f\hbox{\bf B}\big)}{\sqrt{1+f^2}}+\sigma\Gamma\hbox{\bf N}\Big],\\ \hbox{\bf B}_3=\frac{1}{\sqrt{1+\sigma^2}\sqrt{1+\Gamma^2}}\Big[ \frac{f\hbox{\bf T}+\hbox{\bf B}-\Gamma\sqrt{1+\sigma^2}\big(\hbox{\bf T}-f\hbox{\bf B}\big)}{\sqrt{1+f^2}}+\sigma\hbox{\bf N}\Big], \end{array} \right.
\end{equation} and \begin{equation}\label{u38} s_3=\int\kappa(s)\sqrt{1+f^2(s)}\sqrt{1+\sigma^2(s)}\,ds,\,\,\,\,\,\kappa_3=\sqrt{1+\Gamma^2},\,\,\,\,\, \tau_3=\Lambda\sqrt{1+\Gamma^2}, \end{equation} where $s_3$ is the natural representation of the spherical image of $\psi_3$ of the curve $\psi$, and $\kappa_3$ and $\tau_3$ are the curvature and torsion of this curve. \end{lemma} Therefore it is easy to see that: \begin{equation}\label{u39} \frac{\tau_3}{\kappa_3}=\Lambda=\sigma_3. \end{equation} Using the Frenet frame (\ref{u37}) of the spherical image of $\psi_3$ of the curve $\psi$, it is easy to prove the following new lemma. \begin{lemma}\label{lm-12} Let $\psi:I \rightarrow\hbox{\bf E}^3$ be a curve that is parameterized by arclength with intrinsic equations $\kappa(s)\neq 0$ and $\tau(s)\neq 0$. The curve $\psi$ is a $3$-slant helix or slant-slant-slant helix if and only if the vector $\hbox{\bf B}_3=\frac{1}{\sqrt{1+\sigma^2}\sqrt{1+\Gamma^2}}\Big[ \frac{f\hbox{\bf T}+\hbox{\bf B}-\Gamma\sqrt{1+\sigma^2}\big(\hbox{\bf T}-f\hbox{\bf B}\big)}{\sqrt{1+f^2}}+\sigma\hbox{\bf N}\Big]$ makes a constant angle with a fixed direction. \end{lemma} The proof of the above lemma (using the Frenet frame (\ref{u37})) is similar to the proof of lemma (\ref{lm-07}) (using the Frenet frame (\ref{u2})).
\section{General results} From the above discussions, we can introduce important lemmas for the $k$-slant helix in general form as follows: \begin{lemma}\label{lm-13} If the Frenet frame of the spherical image of $\psi_{k}$ of the curve $\psi$ is $\{\hbox{\bf T}_k,\hbox{\bf N}_k,\hbox{\bf B}_k\}$, then we have the Frenet formulas: \begin{equation}\label{u40} \left[ \begin{array}{c} \hbox{\bf T}^{\,'}_k(s_k)\\ \hbox{\bf N}^{\,'}_k(s_k)\\ \hbox{\bf B}^{\,'}_k(s_k)\\ \end{array} \right]=\left[ \begin{array}{ccc} 0 & \kappa_k & 0 \\ -\kappa_k & 0 & \tau_k \\ 0 & -\tau_k & 0 \\ \end{array} \right]\left[ \begin{array}{c} \hbox{\bf T}_k(s_k)\\ \hbox{\bf N}_k(s_k)\\ \hbox{\bf B}_k(s_k)\\ \end{array} \right], \end{equation} where \begin{equation}\label{u41} \hbox{\bf T}_k=\psi_{k+1},\,\,\,\,\,\hbox{\bf N}_k=\psi_{k+2},\,\,\,\,\, \hbox{\bf B}_k=\frac{\psi_{k+1}\times\psi_{k+2}}{\|\psi_{k+1}\times\psi_{k+2}\|}, \end{equation} and \begin{equation}\label{u42} \left\{ \begin{array}{ll} s_k=\int\kappa(s)\sqrt{1+\sigma_0^2(s)}\sqrt{1+\sigma_1^2(s)}\,...\sqrt{1+\sigma_{k-1}^2(s)}\,ds,\\ \kappa_k=\sqrt{1+\sigma_{k-1}^2},\\ \tau_k=\sigma_k\sqrt{1+\sigma_{k-1}^2}, \end{array} \right. \end{equation} where \begin{equation}\label{u43} \sigma_k=\frac{\sigma'_{k-1}}{\kappa(s)\sqrt{1+\sigma_0^2(s)}\sqrt{1+\sigma_1^2(s)}\,...\,\Big(1+\sigma_{k-1}^2(s)\Big)^{3/2}}, \end{equation} $s_k$ is the natural representation of the spherical image of $\psi_k$, and $\kappa_k$ and $\tau_k$ are the curvature and torsion of this curve. \end{lemma} From the above lemma we have $\frac{\tau_k}{\kappa_k}=\sigma_k$, which leads to the following lemma: \begin{lemma}\label{lm-131} Let $\psi:I\rightarrow\hbox{\bf E}^3$ be a $k$-slant helix. The spherical image of $\psi_{k}$ is a spherical helix.
\end{lemma} \begin{lemma}\label{lm-14} Let $\psi:I\rightarrow\hbox{\bf E}^3$ be a curve that is parameterized by arclength with intrinsic equations $\kappa(s)\neq0$ and $\tau(s)\neq0$. The curve $\psi$ is a $k$-slant helix (the vector $\psi_{k+1}$ makes a constant angle, $\phi$, with a fixed straight line in the space) if and only if the function \begin{equation}\label{u44} \sigma_k=\cot[\phi]. \end{equation} \end{lemma} \begin{lemma}\label{lm-15} Let $\psi:I \rightarrow\hbox{\bf E}^3$ be a curve that is parameterized by arclength with intrinsic equations $\kappa(s)\neq 0$ and $\tau(s)\neq 0$. The curve $\psi$ is a $k$-slant helix if and only if the vector $\hbox{\bf B}_k=\frac{\psi_{k+1}\times\psi_{k+2}}{\|\psi_{k+1}\times\psi_{k+2}\|}$ makes a constant angle with a fixed direction. \end{lemma}
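The recursion (\ref{u43}) defining $\sigma_k$ is straightforward to evaluate. As an illustrative sketch (our own example; the parameters $a,b$ are chosen for concreteness), the following computes $\sigma_0,\sigma_1,\sigma_2$ by central finite differences for the unit-speed circular helix with curvature $\kappa=a/(a^2+b^2)$ and torsion $\tau=b/(a^2+b^2)$; since $\sigma_0=\tau/\kappa$ is constant, every higher $\sigma_k$ vanishes, so the circular helix is a $k$-slant helix for each $k\geq 1$ with angle $\phi=\pi/2$.

```python
import math

a, b = 1.0, 2.0
c2 = a * a + b * b
kappa = a / c2     # curvature of the unit-speed circular helix
tau = b / c2       # torsion

def sigma(k):
    # sigma_0 = tau/kappa; for k >= 1, sigma_k follows eq. (u43),
    # with the derivative sigma'_{k-1} taken by a central finite difference.
    if k == 0:
        return lambda s: tau / kappa
    prev = sigma(k - 1)
    def sk(s, h=1e-4):
        d = (prev(s + h) - prev(s - h)) / (2 * h)   # sigma'_{k-1}(s)
        denom = kappa
        for i in range(k - 1):
            denom *= math.sqrt(1 + sigma(i)(s) ** 2)
        denom *= (1 + prev(s) ** 2) ** 1.5
        return d / denom
    return sk

print(sigma(0)(0.5), sigma(1)(0.5), sigma(2)(0.5))   # 2.0 0.0 0.0
```

For non-constant intrinsic equations $\kappa(s),\tau(s)$ the same recursion applies, though nested finite differences lose accuracy quickly, so a symbolic evaluation of (\ref{u43}) would be preferable there.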
\section{Introduction} Quandles are algebraic structures whose axioms are motivated by the three Reidemeister moves in classical knot theory. They were introduced independently by Joyce \cite{Joyce} and Matveev \cite{Matveev} in the early 1980's. Since then they have been used to construct invariants of knots in $3$-space and knotted surfaces in $4$-space (see for example \cite{EN} for more on quandles). However, the notion of a quandle can be traced back to the 1940's in the work of Mituhisa Takasaki \cite{Takasaki}. Joyce and Matveev introduced the notion of the fundamental quandle of a knot, leading to their main theorem which states that two knots $K$ and $K'$ are equivalent (up to reversal and mirror image) if and only if the fundamental quandles $Q(K)$ and $Q(K')$ are isomorphic. This theorem turns the topological problem of classification of knots into the algebraic problem of classification of quandles. Recently, there have been many works on classifying some families of quandles, such as connected quandles, Alexander quandles, medial quandles, etc. Quandles have also been investigated recently from other points of view, such as relations to Lie algebras \cite{CCES1, CCES2}, Hopf algebras \cite{AG, CCES2}, quasigroups and Moufang loops \cite{Elhamdadi}, ring theory \cite{EFT}, the Yang-Baxter equation and its homology \cite{CES}, and topology \cite{EM}. Orderability of quandles appeared in \cite{BPS2, DDHPV}, where it was shown for example that conjugation quandles (respectively core quandles) of bi-orderable groups are right orderable (respectively left orderable) quandles. The set of left orderings on a group $G$ can be given a topology, and this space is called the space of left orderings. In \cite{Sikora} it was shown that the space of left orderings of a group is compact. The analogue of this theorem was proved for magmas in general (and quandles in particular) in \cite{DDHPV}.
Recently, the notions of circular orderability and left-orderability have been intensively studied in low dimensional topology, particularly for the fundamental groups of $3$-manifolds (see \cite{BaC1}, \cite{BaC2}, \cite{BS}, \cite{BRW} and \cite{Cal}). In \cite{BaC1} the authors have shown that a compact, connected, $P^2$-irreducible $3$-manifold has a left circularly orderable fundamental group if and only if there exists a finite cyclic cover with left-orderable fundamental group. A consequence of this result is that, if a $3$-manifold admits a cooriented taut foliation, then it has a finite cyclic cover with left-orderable fundamental group, and this gives a characterization of a topological property of a $3$-manifold (admitting a taut foliation) in terms of an algebraic property of its fundamental group (left-orderability and circular orderability). This may also be a motivation to study some algebraic properties of quandles (circular orderability and orderability). In this article we introduce the notion of circular orderability for quandles. We show that the set of all right (respectively left) circular orderings of a quandle is a compact topological space. Let $Q$ be a quandle; we denote by $RCO(Q)$ (respectively $LCO(Q)$) the set of right circular orderings on $Q$ (respectively the set of left circular orderings on $Q$). \begin{theorem} The space $RCO(Q)$ (respectively $LCO(Q)$) is compact. \end{theorem} We also show that the space of right (respectively left) orderings \cite{DDHPV} of a quandle embeds in its space of right (respectively left) circular orderings. \begin{theorem} If $Q$ is a quandle, then the space of right (respectively left) orderings of $Q$ is a subspace of its space of right (respectively left) circular orderings. \end{theorem} Given a group with a circular ordering which is both right and left invariant, we prove that the conjugation quandle is right circularly orderable and, more strongly, we show that the conjugation quandle is right orderable.
We also give examples of quandles that are not left circularly orderable and examples of quandles that are neither left nor right circularly orderable. The following is the organization of this article. In Section~\ref{Review}, we recall the basics of quandles and also orderability. Section~\ref{Circular} introduces the notion of circular orderability of quandles, and shows that a right (respectively left) orderable quandle is right (respectively left) circularly orderable. In this section, it is also shown that the conjugation quandle of a bi-orderable group is a right orderable quandle, and we give examples of non-left circularly orderable quandles, and explicit examples of quandles of small cardinalities that are neither left nor right circularly orderable. Section~\ref{Space} deals with the topology of the set of right (respectively left) circular orderings on quandles. Precisely, we introduce a topology on the set of right (respectively left) circular orderings on quandles and prove that it is compact. \section{Review of quandles and orderability}\label{Review} A {\it quandle} is a non-empty set $Q$ with a binary operation $*$ which satisfies the following conditions: \begin{enumerate} \item For any $s\in Q$, $s*s=s$; \item For any $s_1, s_2\in Q$, there exists a unique $s_3\in Q$ such that $s_3*s_2=s_1$; and \item For any $s_1, s_2, s_3\in Q$, $(s_1*s_2)*s_3=(s_1*s_3)*(s_2*s_3)$. \end{enumerate} Let us consider the right multiplications $R_s: Q \rightarrow Q$ given by $t \mapsto t * s$. Then axioms (2) and (3) state that each $R_s$ is an automorphism of the quandle $Q$, and axiom (1) states that $s$ is a fixed point of $R_s$. An element $e$ of a quandle $Q$ is called a {\it stabilizer element} if $s*e=s$ for any $s\in Q$. A quandle $Q$ is called {\it trivial} if all its elements are stabilizer elements. A trivial quandle can have an arbitrary number of elements; this is one difference between a group and a quandle.
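For finite quandles, the three axioms above can be checked mechanically. The sketch below (our own illustration, not part of the original text) verifies them in Python for the Takasaki quandle on $\mathbb{Z}_3$ (that is, $x*y=2y-x \bmod 3$) and for a trivial quandle, and shows that an arbitrary binary operation generally fails them:

```python
from itertools import product

def is_quandle(Q, star):
    # Axiom (1): idempotency s*s = s
    if any(star(s, s) != s for s in Q):
        return False
    # Axiom (2): each right multiplication R_s is a bijection of Q
    for s in Q:
        if len({star(t, s) for t in Q}) != len(Q):
            return False
    # Axiom (3): right self-distributivity
    for s1, s2, s3 in product(Q, repeat=3):
        if star(star(s1, s2), s3) != star(star(s1, s3), star(s2, s3)):
            return False
    return True

Z3 = [0, 1, 2]
dihedral = lambda x, y: (2 * y - x) % 3      # Takasaki quandle on Z_3
trivial = lambda x, y: x                     # trivial quandle
not_a_quandle = lambda x, y: (x + y) % 3     # fails idempotency

print(is_quandle(Z3, dihedral), is_quandle(Z3, trivial), is_quandle(Z3, not_a_quandle))
# True True False
```

The same checker applies verbatim to any finite operation table, e.g. the three-element quandle with Cayley table given later in this paper.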
By the conditions $(2)$ and $(3)$ of the definition of a quandle, for any $b\in Q$ there exists an automorphism $R_b:Q\longrightarrow Q$ defined by $R_b(s)=s*b$, for any $s\in Q$. We call $id_Q$ the identity automorphism of $Q$. A quandle is called {\it involutary} if $R_s\circ R_s=id_Q$ for any $s\in Q$. By the conditions $(2)$ and $(3)$ of the definition of a quandle, there exists a dual binary operation $*^{-1}$ defined by $s*^{-1}r=t$ if $s=t*r$ for any $r, s$ and $t\in Q$. Hence, we also have that $(s*r)*^{-1}r=s$ and $(s*^{-1}r)*r=s$ for any $r, s\in Q$; this is called right cancellation. Therefore, by the condition $(1)$ of the definition of a quandle and right cancellation, we have that $s*^{-1}s=s$ for any $s\in Q$. A quandle is called {\it latin} if for any $s\in Q$ the map $L_s:Q\longrightarrow Q$ defined by $L_s(t)=s*t$ is a bijection. A quandle is called {\it semi-latin} if the map $L_s$ is injective for any $s\in Q$. A subquandle of a quandle $Q$ is a subset $X\subset Q$ which is closed under the quandle operation. The following are some examples of quandles. \begin{itemize} \item Any set with the binary operation $x*y=x$ is a quandle, called a \emph{trivial} quandle. \item Let $G$ be a group. The set $S=G$ with the binary operation $*$ defined by $g*h=h^{-1}gh$ is a quandle, denoted Conj$(G)$, and called the \emph{conjugation} quandle of $G$. \item The set $T=G$, where $G$ is a group, with the binary operation $*$ defined by $g*h=hg^{-1}h$ is a quandle, denoted by Core$(G)$, and called the \emph{core} quandle of $G$. If furthermore $G$ is an abelian group, then this quandle is called a \emph{Takasaki} quandle \cite{Takasaki}. \item If $\phi$ is an automorphism of a group $G$, then the set $R=G$ with the binary operation $*$ defined by $g*h=\phi(gh^{-1})h$ is a quandle, denoted by Aut$(G, \phi)$, and called the \emph{generalized Alexander} quandle of $G$ with respect to $\phi$.
\end{itemize} Recall that a group $G$ is called \emph{left-orderable} (respectively right-orderable) if there exists a strict total ordering $<$ on $G$ such that $g<h$ implies $fg<fh$ (respectively $gf<hf$) for all $f, g, h \in G$; the relation $<$ is called a \emph{left-ordering} (respectively \emph{right-ordering}) of $G$. A left-orderable (respectively right-orderable) quandle was defined in the same fashion in \cite{BPS}. A quandle $(Q,*)$ is called left-orderable (respectively right-orderable) if there exists a strict total ordering $\prec$ on $Q$ such that $s\prec t$ implies $r*s\prec r*t$ (respectively $s*r\prec t*r$) for all $r, s, t \in Q$; the relation $\prec$ is called a \emph{left-ordering} (respectively right-ordering) on $Q$. The notion of a homomorphism between two ordered quandles is natural. Let $(X,*, \prec_X )$ and $(Y,\diamond, \prec_Y)$ be two right ordered quandles. A map $f:X \rightarrow Y$ is called an order preserving homomorphism of right ordered quandles if $f$ is a quandle homomorphism (that is, $f(x*y)=f(x) \diamond f(y)$ for all $x,y \in X$) and $x \prec_X y$ implies $f(x) \prec_Y f(y)$. \begin{example} We use quandle structures defined in \cite{CES} over the real line $\mathbb{R}$ to give explicit structures of right and left orderable quandles. Consider the real line $\mathbb{R}$ with its natural ordering and with quandle operation $x*y=\alpha x +(1-\alpha)y$, where $\alpha \neq 0,1$. \begin{enumerate} \item If $\alpha >0$, then the quandle $\mathbb{R} $ is right orderable. \item If $\alpha <1$, then the quandle $\mathbb{R} $ is left orderable. \item Thus, if $0< \alpha <1$, then the quandle $\mathbb{R} $ is bi-orderable.
\end{enumerate} \end{example} \section{Circular orderability of quandles}\label{Circular} A {\it circular ordering} on a set $T$ is a map $c: T\times T\times T \rightarrow \{ -1, 0, 1\}$ satisfying: \begin{enumerate} \item If $(t_1, t_2, t_3) \in T^3$ then $c(t_1, t_2, t_3) = 0$ if and only if $t_1, t_2, t_3$ are not all distinct; \item For all $t_1, t_2, t_3, t_4 \in T$ we have \[ c(t_1, t_2, t_3) - c(t_1, t_2, t_4) + c(t_1, t_3, t_4)-c(t_2, t_3, t_4) = 0. \] \end{enumerate} A set $T$ with a circular ordering is called {\it circularly orderable}. A group $G$ is called {\it left-circularly orderable} (respectively {\it right-circularly orderable}) if it admits a circular ordering $c: G\times G\times G \rightarrow \{ -1, 0, 1\}$ such that $c(g_1,g_2, g_3)=c(gg_1,gg_2,gg_3)$ (respectively $c(g_1,g_2, g_3)=c(g_1g,g_2g,g_3g)$) for any $g, g_1, g_2$ and $g_3\in G$. In this case we say that the circular ordering $c$ is left-invariant (respectively right-invariant). We say that a quandle $Q$ is {\it left-circularly orderable} (respectively {\it right-circularly orderable}) if it admits a circular ordering $c: Q\times Q\times Q \rightarrow \{ -1, 0, 1\}$ such that $c(s_1,s_2, s_3)=c(s*s_1,s*s_2,s*s_3)$ (respectively $c(s_1,s_2, s_3)=c(s_1*s,s_2*s,s_3*s)$) for any $s, s_1, s_2$ and $s_3\in Q$. \begin{lemma}\label{ordering} A left (respectively right) orderable quandle is also left (respectively right) circularly orderable. \end{lemma} \begin{proof} Let $\prec$ be a left ordering on a quandle $Q$ and define the map $c_{\prec}:Q^3 \rightarrow \{ -1, 0, 1\}$ by $$(1)\;\;\;\; c_{\prec}(s_1,s_2,s_3)=\left\{\begin{array}{cl} 1 & \text{if $s_1\prec s_2\prec s_3$ or $s_2\prec s_3\prec s_1$ or $s_3\prec s_1\prec s_2$} \\ -1 & \text{if $s_1\prec s_3\prec s_2$ or $s_3\prec s_2\prec s_1$ or $s_2\prec s_1\prec s_3$ } \\ 0 & \text{otherwise.} \end{array} \right.$$ By definition, $c_{\prec}$ is a circular ordering.
Now, it is left to check that $c_{\prec}$ is invariant under left-multiplication. Since $Q$ is left orderable, multiplying on the left by any element $s\in Q$ will not change the inequalities of $(1)$. Similarly, if a quandle $Q$ is right orderable, then the same proof shows that $Q$ is right circularly orderable. \end{proof} \begin{proposition} Let $G$ be a circularly orderable group with a circular ordering $c$ which is left and right-invariant. Then the quandle $\mathrm{Conj}(G)$ is a right circularly orderable quandle. \end{proposition} \begin{proof} Let $d: \mathrm{Conj}(G)\times \mathrm{Conj}(G)\times \mathrm{Conj}(G) \rightarrow \{ -1, 0, 1\}$ be the map defined by $d(g_1, g_2, g_3)=c(g_1, g_2, g_3)$ for any $g_1, g_2$ and $g_3\in G$. Since $d(g_1, g_2, g_3)=c(g_1, g_2, g_3)$ for all $g_1, g_2, g_3\in G$, the map $d$ is a circular ordering on $\mathrm{Conj}(G)$ as a set. It remains to show the right-invariance condition. Let $g, g_1, g_2$ and $g_3\in G$; we have that $d(g_1*g, g_2*g, g_3*g)=d(g^{-1}g_1g, g^{-1}g_2g, g^{-1}g_3g)=c(g^{-1}g_1g, g^{-1}g_2g, g^{-1}g_3g)=c(g_1, g_2, g_3)$. Hence, $\mathrm{Conj}(G)$ is a right circularly orderable quandle. \end{proof} Recall that a group $G$ is called bi-circularly orderable (respectively bi-orderable) if it admits a circular ordering (respectively an ordering) which is left and right-invariant. Similarly, we say that a quandle $Q$ is bi-circularly orderable if it admits a circular ordering which is left and right-invariant. The notion of bi-circular orderability for groups has been studied since the $1950$'s, and in $1959$ Swierczkowski gave in \cite{Sw} a complete classification of these types of groups. \begin{theorem}{\cite{Sw}}\label{bio} If a group $G$ is bi-circularly orderable, then there exists a bi-orderable group $\Gamma$ such that $G$ is a subgroup of $\Gamma\times \mathbb{S}^1$.
Moreover, the bi-circular order on $G$ is determined by the natural bi-circular order on $\Gamma\times \mathbb{S}^1$. \end{theorem} By the natural bi-circular order it is meant that if one equips $\Gamma \times \mathbb{S}^1$ with the lexicographic bi-circular ordering, then the bi-circular ordering on $G$ is just the restriction of this lexicographic circular ordering on $\Gamma \times \mathbb{S}^1$ to the subgroup $G$. Goursat's Lemma gives a classification of all the subgroups of $\Gamma \times \mathbb{S}^1$. Hence, Theorem \ref{bio} gives infinitely many examples of bi-circularly orderable groups that are not bi-orderable. Let $\{Q_i, *_i\}_{i\in I}$ be a family of quandles. Then the Cartesian product $Q=\prod_{i \in I} Q_i$ with the operation $(x_i)\star (y_i)=(x_i*_iy_i)$ is a quandle called the {\it quandle product}. \begin{proposition} The conjugation quandle of any bi-circularly orderable group $G$ is right orderable. \end{proposition} \begin{proof} Let $H$ be any bi-orderable group. We have $\mathrm{Conj}(H\times \mathbb{S}^1)=\mathrm{Conj}(H)\times \mathrm{Conj}(\mathbb{S}^1)$, and since $\mathbb{S}^1$ is commutative, $\mathrm{Conj}(\mathbb{S}^1)$ is a trivial quandle and hence right orderable. Since $H$ is bi-orderable, $\mathrm{Conj}(H)$ is right orderable. Hence, the quandle $\mathrm{Conj}(H\times \mathbb{S}^1)=\mathrm{Conj}(H)\times \mathrm{Conj}(\mathbb{S}^1)$ is right orderable by \cite[Proposition 4.4]{RSS}. Therefore, the result follows from Theorem \ref{bio}. \end{proof} \begin{lemma} If $G$ is a nontrivial group with at least three elements, then the quandle $\mathrm{Conj}(G)$ is not left circularly orderable. \end{lemma} \begin{proof} Let $G$ be any nontrivial group with at least three elements. Consider the conjugation quandle $\mathrm{Conj}(G)$ with operation $x*y=y^{-1}xy$. By contradiction, let $c:\mathrm{Conj}(G)^3 \rightarrow \{ -1, 0, 1\}$ be a left circular ordering.
Then $c(s_1,s_2,s_3)=c(e*s_1,e*s_2, e*s_3)=c(e,e,e)=0$, where $e$ is the identity element of $G$. Thus the function $c$ is the constant zero map, implying that $\mathrm{Conj}(G)$ cannot have a left circular ordering. \end{proof} \begin{example} The trivial quandle with \emph{two} elements is bi-circularly orderable, since the trivial circular ordering which is identically zero is both left and right invariant. Since this quandle is not left orderable, this shows that the notion of circular orderability for quandles is different from the notion of orderability. \end{example} \begin{example} Consider the three element quandle $X=\{1,2,3\}$ with orbit decomposition $\{1,2\} \sqcup \{3\}$. Its Cayley table is given by $\begin{bmatrix} 1 & 1 & 2 \\ 2 & 2 & 1 \\ 3& 3 & 3 \end{bmatrix}.$ The $(i,j)$-entry of this matrix corresponds to the element $i*j$ in the quandle. We show that this quandle is neither left nor right circularly orderable.\\ \noindent (I) Assume that $c$ is a left-invariant circular ordering on $X$. Then for any pairwise distinct elements $x,y,z \in X$, we have $c(x,y,z)=c(3*x,3*y,3*z)=c(3,3,3)=0,$ and thus the quandle $X$ cannot have a left-invariant circular ordering.\\ (II) Assume that $c$ is a right-invariant circular ordering on $X$. Then we have $c(1,2,3)=c(1*3,2*3,3*3)=c(2, 1, 3)$. Since $c$ is a circular ordering, it satisfies condition $(2)$ of the definition, so \[ c(1, 2, 1) - c(1, 2, 3) + c(1, 1, 3)-c(2, 1, 3) = 0. \] Hence, $c(1,2,3)=-c(2,1,3)$, which contradicts the fact that $$c(1,2,3)=c(1*3,2*3,3*3)=c(2, 1, 3).$$ \end{example} \begin{example} Consider the dihedral quandle $X=\mathbb{Z}_3$, where $x*y=2y-x=2y+2x$. We show that this quandle is neither left nor right circularly orderable. The quandle operation satisfies $x*y=y*x$ for all $x,y \in X$, so left circular orderings and right circular orderings coincide. By contradiction, assume that there exists a right circular ordering $c$ on $X$, and let $x,y,z \in X$. 
If at least two of $x,y,z$ are equal then by definition $c(x,y,z)=0$. Now assume that $x,y,z$ are pairwise distinct. Thus the product of any two of them gives the third element. Then $c(x,y,z)=c(x*x,y*x,z*x)=c(x,z,y)$, $c(x,y,z)=c(x*y,y*y,z*y)=c(z,y,x)$ and $c(x,y,z)=c(x*z,y*z,z*z)=c(y,x,z)$. We thus obtain $c(x,y,z)=c(x,z,y)=c(z,y,x)=c(y,x,z)$, and consequently each of the $6$ permutations of the set $\{x,y,z\}$ leaves the value of $c(x,y,z)$ unchanged. Since $c$ is a circular ordering, it satisfies condition $(2)$ of the definition, so \[ c(x, y, x) - c(x, y, z) + c(x, x, z)-c(y, x, z) = 0. \] Hence, $c(x,y,z)=-c(y,x,z)$, which contradicts the fact that $$c(x,y,z)=c(y,x,z).$$ Therefore, the dihedral quandle $X=\mathbb{Z}_3$ with quandle operation $x*y=2y-x=2y+2x$ is neither left nor right circularly orderable. \end{example} \section{The space of circular orderings of quandles}\label{Space} In this section we study the set of circular orderings of a quandle. Let $Q$ be a quandle; we denote by $RCO(Q)$ (respectively $LCO(Q)$) the set of right circular orderings (respectively the set of left circular orderings) on $Q$. Since a right ordering (respectively left ordering) on a quandle gives a right circular ordering (respectively left circular ordering) by Lemma \ref{ordering}, the set of right orderings $RO(Q)$ (respectively left orderings $LO(Q)$) on a quandle $Q$ can be seen as a subset of the set of right circular orderings $RCO(Q)$ (respectively left circular orderings $LCO(Q)$). The sets of left and right orderings on a quandle were already studied in \cite{DDHPV}, where it was shown that for any magma $\mathcal{M}$, the set of left orderings $LO(\mathcal{M})$ (respectively right orderings $RO(\mathcal{M})$) is a compact topological space. Recall that a magma is a set $\mathcal{M}$ with a binary operation $\cdot :\mathcal{M}\times\mathcal{M}\longrightarrow\mathcal{M}$; in particular, a quandle is a magma. 
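When $Q$ is finite, the sets $RCO(Q)$ and $LCO(Q)$ are finite and can be enumerated by exhaustive search. The following sketch (a brute-force check using the circular-ordering conditions as stated in the text) confirms the dihedral quandle example above: $RCO(\mathbb{Z}_3)$ is empty.

```python
from itertools import product

# Dihedral quandle on Z_3 from the example above: x * y = 2y - x (mod 3)
def op(x, y):
    return (2 * y - x) % 3

Q = [0, 1, 2]
triples = list(product(Q, repeat=3))
distinct = [t for t in triples if len(set(t)) == 3]  # the 6 injective triples

def satisfies_cocycle(c):
    # condition (2): c(a,b,x) - c(a,b,y) + c(a,x,y) - c(b,x,y) = 0
    return all(c[(a, b, x)] - c[(a, b, y)] + c[(a, x, y)] - c[(b, x, y)] == 0
               for a, b, x, y in product(Q, repeat=4))

def right_invariant(c):
    # c(a*g, b*g, x*g) = c(a, b, x) for all g
    return all(c[(op(a, g), op(b, g), op(x, g))] == c[(a, b, x)]
               for g, a, b, x in product(Q, repeat=4))

# Candidate circular orderings: 0 on degenerate triples, +-1 elsewhere.
rco = []
for values in product([-1, 1], repeat=len(distinct)):
    c = dict.fromkeys(triples, 0)
    c.update(zip(distinct, values))
    if satisfies_cocycle(c) and right_invariant(c):
        rco.append(c)

print(len(rco))  # 0: no right circular ordering on the dihedral quandle Z_3
```

The same search with `right_invariant` dropped does find circular orderings of the underlying set, so it is the invariance requirement that fails, exactly as in the proof above.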
We topologize the set $RCO(Q)$ (respectively $LCO(Q)$) as a subspace of the space $\{-1, 0, 1\}^{Q^3}$ of all maps from $Q^3$ to $\{-1, 0, 1\}$ with the Tychonoff topology. We define $$\Gamma(Q):=\{S=(x, y, z)\in Q^3\; |\; x=y\; {\rm or} \; y=z\; {\rm or}\; x=z\},$$ $$R_S=\{c\in RCO(Q)\;|\; c(S)=1\}$$ and $$L_S=\{c\in LCO(Q)\;|\; c(S)=1\}$$ for any $S\in Q^3\setminus \Gamma(Q)$. \begin{lemma} The set $\{R_S\}_{S\in Q^3\setminus\Gamma(Q)}$ (respectively $\{L_S\}_{S\in Q^3\setminus\Gamma(Q)}$) is a subbasis for the topology on $RCO(Q)$ (respectively $LCO(Q)$). \end{lemma} \begin{proof} The space $\{-1, 0, 1\}^{Q^3}$ is a Cantor set with subbasis $\{V_{S, -1}, V_{S, 0}, V_{S, 1}\}_{S\in Q^3}$, where $$V_{S, -1}=\{f:Q^3\rightarrow \{-1, 0, 1\}\; |\; f(S)=-1\},$$ $$V_{S, 0}=\{f:Q^3\rightarrow \{-1, 0, 1\}\; |\; f(S)=0\}$$ and $$V_{S, 1}=\{f:Q^3\rightarrow \{-1, 0, 1\}\; |\; f(S)=1\}.$$ Since $RCO(Q)\cap V_{S, 0}=RCO(Q)$ (respectively $LCO(Q)\cap V_{S, 0}=LCO(Q)$) for any $S\in \Gamma(Q)$ and $RCO(Q)\cap V_{S, 0}=\emptyset$ (respectively $LCO(Q)\cap V_{S, 0}=\emptyset$) for any $S\in Q^3\setminus\Gamma(Q)$, we can discard all sets of the form $RCO(Q)\cap V_{S, 0}$ (respectively $LCO(Q)\cap V_{S, 0}$). Since $RCO(Q)\cap V_{S, -1}=RCO(Q)\cap V_{\tau.S, 1}$ (respectively $LCO(Q)\cap V_{S, -1}=LCO(Q)\cap V_{\tau.S, 1}$) for any transposition $\tau$, we can also discard all sets of the form $RCO(Q)\cap V_{S, -1}$ (respectively $LCO(Q)\cap V_{S, -1}$). \end{proof} \begin{theorem}\label{compact} The space $RCO(Q)$ (respectively $LCO(Q)$) is compact. 
\end{theorem} \begin{proof} We have that \begin{multline*}RCO(Q)=\{c: Q^3 \rightarrow \{ -1, 0, 1\} \;|\; {\rm if}\; (t_1, t_2, t_3) \in Q^3\; {\rm then}\; c(t_1, t_2, t_3) = 0\\ {\rm if\; and\; only\; if}\; (t_1, t_2, t_3)\in \Gamma(Q);\\ {\rm for\; all}\; (t_1, t_2, t_3, t_4) \in Q^4\;\\ {\rm we\; have}\; c(t_1, t_2, t_3) - c(t_1, t_2, t_4) + c(t_1, t_3, t_4)-c(t_2, t_3, t_4) = 0;\\ {\rm and }\; c(t_1*t, t_2*t, t_3*t)=c(t_1,t_2,t_3)\; {\rm for\; any}\; (t, t_1,t_2,t_3)\in Q^4\}\end{multline*} Let \begin{align*} A:&=\{c: Q^3 \rightarrow \{ -1, 0, 1\} \;|\; {\rm if}\; S=(t_1, t_2, t_3) \in Q^3\; {\rm then}\; c(t_1, t_2, t_3) = 0\; {\rm if\; and\; only\; if}\;\\ &S\in \Gamma(Q)\}\\ &= (\bigcap_{S\in \Gamma(Q)}V_{S, 0})\cap(\bigcap_{S\in Q^3\setminus\Gamma(Q)}(V_{S,-1}\cup V_{S,1})), \end{align*} \begin{align*} B:&=\{c: Q^3 \rightarrow \{ -1, 0, 1\} \;|\;{\rm for\; all}\; W=(t_1, t_2, t_3, t_4) \in Q^4\; {\rm we\; have}\\ &d_W(c)=c(t_1, t_2, t_3) - c(t_1, t_2, t_4) + c(t_1, t_3, t_4)-c(t_2, t_3, t_4) = 0\}\\ &=\bigcap_{W\in Q^4}\{c\in \{-1, 0, 1\}^{Q^3}\;|\; d_W(c)=0\} \end{align*} where for any $W=(t_1, t_2, t_3, t_4)\in Q^4$ we define $d_W: \{-1, 0, 1\}^{Q^3}\longrightarrow \mathbb{Z}$ by $$d_W(c)=c(t_1, t_2, t_3) - c(t_1, t_2, t_4) + c(t_1, t_3, t_4)-c(t_2, t_3, t_4)$$ and \begin{align*} C:&=\{c: Q^3 \rightarrow \{ -1, 0, 1\} \;|\; c(t_1*t, t_2*t, t_3*t)=c(t_1,t_2,t_3)\; {\rm for\; any}\; (t, t_1,t_2,t_3)\in Q^4\}\\ &=\bigcap_{t\in Q, (t_1, t_2, t_3)\in Q^3}\{c\in \{-1, 0, 1\}^{Q^3}\;|\; c(t_1*t, t_2*t, t_3*t)=c(t_1,t_2,t_3)\}. \end{align*} Since $RCO(Q)=A\cap B\cap C$ and the subsets $V_{S, 0}$, $V_{S,-1}$, $V_{S,1}$, $\{c\in \{-1, 0, 1\}^{Q^3}\;|\; d_W(c)=0\}$ and $\{c\in \{-1, 0, 1\}^{Q^3}\;|\; c(t_1*t, t_2*t, t_3*t)=c(t_1,t_2,t_3)\}$ are closed, $RCO(Q)$ is closed in $\{-1, 0, 1\}^{Q^3}$. Therefore, $RCO(Q)$ is compact. A similar argument shows that $LCO(Q)$ is compact. 
\end{proof} Given a quandle $Q$, recall that a topology on $LO(Q)$ is constructed in \cite{DDHPV} by choosing as a subbasis the collection $\{V_{(a, b)}\}_{(a,b)\in Q\times Q\setminus \Delta}$, where $\Delta=\{(a, a)\in Q\times Q\;|\; a\in Q\}$ and $V_{(a,b)}=\{<\in LO(Q)\;|\; a<b\}$. Similarly, a topology on $RO(Q)$ is constructed by choosing as a subbasis the collection $\{V_{(a, b)}\}_{(a,b)\in Q\times Q\setminus \Delta}$, where $V_{(a,b)}=\{<\in RO(Q)\;|\; a<b\}$. \begin{proposition} \label{embedding} The inclusion map $i:LO(Q) \rightarrow LCO(Q)$ (respectively $j:RO(Q) \rightarrow RCO(Q)$) given by \[ < \mapsto c_< \] is an embedding. \end{proposition} \begin{proof} Recall that the collection $\{L_S\}_{S\in Q^3\setminus\Gamma(Q)}$ (respectively $\{R_S\}_{S\in Q^3\setminus\Gamma(Q)}$) is a subbasis for the topology on $LCO(Q)$ (respectively $RCO(Q)$), where $$R_S=\{c\in RCO(Q)\;|\; c(S)=1\}$$ and $$L_S=\{c\in LCO(Q)\;|\; c(S)=1\}$$ for any $S=(x, y, z)\in Q^3\setminus \Gamma(Q)$. For any $S=(x, y, z)\in Q^3\setminus \Gamma(Q)$, we have that \begin{align*} i^{-1}(L_S)&=\{<\in LO(Q)\;|\; c_<(S)=1\}\\ &=\{<\in LO(Q)\;|\; x<y<z\; {\rm or}\; y<z<x\; {\rm or} \; z<x<y\}\\ &=(V_{(x,y)}\cap V_{(y,z)})\bigcup (V_{(y,z)}\cap V_{(z,x)})\bigcup (V_{(z,x)}\cap V_{(x,y)}) \end{align*} and \begin{align*} j^{-1}(R_S)&=\{<\in RO(Q)\;|\; c_<(S)=1\}\\ &=\{<\in RO(Q)\;|\; x<y<z\; {\rm or}\; y<z<x\; {\rm or} \; z<x<y\}\\ &=(V_{(x,y)}\cap V_{(y,z)})\bigcup (V_{(y,z)}\cap V_{(z,x)})\bigcup (V_{(z,x)}\cap V_{(x,y)}). \end{align*} Therefore, both of the injective maps $i$ and $j$ are continuous. Since both $LO(Q)$ and $RO(Q)$ are compact and $\{-1, 0, 1\}^{Q^3}$ is Hausdorff, the maps $i$ and $j$ are embeddings. \end{proof} \begin{corollary} The inclusion map $k:BO(Q)= LO(Q)\cap RO(Q)\longrightarrow LCO(Q)$ (respectively $k':BO(Q)= LO(Q)\cap RO(Q)\longrightarrow RCO(Q)$) is an embedding. 
\end{corollary} We end this section by asking a few questions about the notion of circular orderability for quandles. \begin{question} For exactly which quandles $Q$ is the space $RCO(Q)$ (respectively $LCO(Q)$) finite? \end{question} \begin{question} Is there a characterization of right circular orderability (respectively left circular orderability) of a quandle in terms of quandle actions? \end{question} \begin{question} For exactly which quandles $Q$ are the embeddings $i:LO(Q) \rightarrow LCO(Q)$ and $j:RO(Q) \rightarrow RCO(Q)$ homeomorphisms? \end{question}
\section{Introduction} A {\it partition} is any nonincreasing sequence of positive integers, and the partition function $p(n)$ counts the number of partitions of size $n$. Euler established the beautiful fact that its generating function is given by the infinite product \begin{equation}\label{GenFcn} P(q)=\sum_{n=0}^{\infty} p(n)q^n=\prod_{n=1}^{\infty}\frac{1}{1-q^n}=1+q+2q^2+3q^3+5q^4+\dots. \end{equation} Ramanujan elegantly made use of this infinite product to prove some of the first deep theorems about the partition function. Indeed, he used it to prove \cite{Ramanujan} his well-known congruences \begin{displaymath} \begin{split} p(5n+4)&\equiv 0\pmod{5},\\ p(7n+5)&\equiv 0\pmod{7},\\ p(11n+6)&\equiv 0\pmod{11}. \end{split} \end{displaymath} These congruences have inspired the entire field of partition congruences \cite{AhlgrenOno}. Although partitions are simple to define and the $p(n)$ congruences above are quite beautiful, the numbers $p(n)$ turn out to be notoriously difficult to compute. The following table underscores the nature of this problem by exhibiting the astronomical rate of growth of $p(n)$. \renewcommand{\arraystretch}{1} \begin{table}[h] \begin{center} \scalebox{0.9}{\begin{tabular}{|c|c|}\hline $n$ & $p(n)$ \\\hline $10$ & $42$ \\\hline $20$ & $627$ \\\hline $40$ & $37,338$ \\\hline $80$ & $15,796,476$ \\\hline $\vdots$ & $\vdots$ \\\hline \end{tabular}} \end{center} \caption{Values of $p(n)$} \label{table_2} \end{table} Hardy and Ramanujan \cite{HardyRamanujan} stunned the mathematical community with the proof of their asymptotic formula. \begin{theorem}{\text {\rm (Hardy-Ramanujan, 1918)}}\label{Asymptotic} As $n\rightarrow +\infty,$ we have $$ p(n)\sim \frac{1}{4n\sqrt{3}}\cdot e^{\pi \sqrt{2n/3}}. $$ \end{theorem} In fact, Hardy and Ramanujan proved a much stronger result than this asymptotic. They showed that $p(n)$ can be approximated by means of a divergent series, sharpening the asymptotic in Theorem~\ref{Asymptotic}. 
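Both the table of values above and the quality of the Hardy-Ramanujan asymptotic are easy to check numerically. The sketch below computes $p(n)$ exactly with Euler's pentagonal number recurrence (a standard consequence of \eqref{GenFcn}, not derived in the text) and compares it with the main term of Theorem \ref{Asymptotic}.

```python
import math
from functools import lru_cache

# Euler's pentagonal number recurrence:
#   p(n) = sum_{k>=1} (-1)^{k+1} [ p(n - k(3k-1)/2) + p(n - k(3k+1)/2) ]
@lru_cache(maxsize=None)
def p(n):
    if n < 0:
        return 0
    if n == 0:
        return 1
    total, k = 0, 1
    while k * (3 * k - 1) // 2 <= n:
        sign = 1 if k % 2 else -1
        total += sign * (p(n - k * (3 * k - 1) // 2)
                         + p(n - k * (3 * k + 1) // 2))
        k += 1
    return total

def hardy_ramanujan(n):
    # main term of the Hardy-Ramanujan asymptotic
    return math.exp(math.pi * math.sqrt(2 * n / 3)) / (4 * n * math.sqrt(3))

print([p(n) for n in (10, 20, 40, 80)])  # [42, 627, 37338, 15796476]
print(p(200) / hardy_ramanujan(200))     # ratio tends to 1 as n grows
```

The printed ratio illustrates the slow $O(n^{-1/2})$ convergence of the main term alone, which is exactly why the finer expansions below are needed.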
Specifically, they showed that there are explicit terms $E_j(n)$ such that, for some constant $C$, \begin{equation}\label{pofnHRExp} p(n)=\sum_{j=1}^{C\cdot\sqrt n}E_j(n)+O(n^{-\frac14}) . \end{equation} Here, $E_1(n)\sim\frac{1}{4n\sqrt{3}}\cdot e^{\pi \sqrt{2n/3}}$ is the main term, and the later terms have similar asymptotics but for exponentials with smaller multiples of $\sqrt n$. However, the series $\sum_{j\geq 1}E_j(n)$ actually diverges, and so this result falls short of a proper series expansion for $p(n)$. Twenty years later Rademacher perfected \cite{RademacherPn} this idea and obtained a series expansion which does converge, thus giving an {\it exact} formula for $p(n)$. The flavor of both expansions is that they are expressible as sums of Bessel functions times Kloosterman sums. However, Rademacher utilized a different approach which gave different Bessel functions. Although the result is asymptotically the same at each stage, the savings are sufficient to make the sum over all $j$ converge. As we shall see, Rademacher used the same method as Hardy and Ramanujan, namely, their ``circle method,'' though he modified the details in the exact path of integration, which led to this improvement. This led to the following formula, where $I_{\frac32}(\cdot)$ is the usual Bessel function, and $A_k(n)$ is the Kloosterman sum \begin{equation}\label{KloostermanPn} A_k(n):=\frac12\sqrt{\frac k{12}}\sum_{\substack{d\pmod{24k} \\ d^2\equiv-24n+1\pmod{24k}}}\left(\frac{12}{d}\right)e^{2\pi i \cdot \frac{d}{12k}} . \end{equation} \begin{theorem} {\text {\rm (Rademacher \cite{RademacherPn})}}\label{Exact} For any natural number $n$, we have \[ p(n)=\frac{2\pi}{(24n-1)^{\frac34}}\sum_{k\geq1}\frac{A_k(n)}{k}I_{\frac32}\left(\frac{\pi\sqrt{24n-1}}{6k}\right) . 
\] \end{theorem} The shape of Rademacher's formula for $p(n)$ would later be understood to arise naturally from the method of {\it Poincar\'e series} by the work of Petersson, Rademacher, and others (for example, see the exposition in \cite{OurBook}). These are natural modular forms which are built as averages over the translates of suitable special functions under the action of the modular group. In the case of $p(n)$, the generating function is essentially a weight $-1/2$ modular form. In the case of half-integral weight Poincar\'e series, formulas such as Rademacher's, understood via the modern theory of Poincar\'e series, can be used to give finite, exact formulas for coefficients of modular forms. One of the first important examples of this phenomenon was observed by Zagier \cite{Zagier} in his work on traces of singular moduli (see also \cite{BringmannOno}). This idea also applies to $p(n)$. Namely, Rademacher's exact formula for $p(n)$ can be reformulated as a finite sum of values of a single (non-holomorphic) modular function. This fact was first observed by Bringmann and one of the authors \cite{BringmannOnoPn}, and the phenomenon relies on the fact that (\ref{KloostermanPn}) can be reformulated as a sum over equivalence classes of discriminant $-24n+1$ positive definite integral binary quadratic forms. This observation was refined by Bruinier and one of the authors \cite{BruinierOno} to prove a much stronger statement. To make this precise, we let $\eta(\tau):=q^{1/24}\prod_{n=1}^{\infty}(1-q^n)$ ($q:=e^{2\pi i \tau}$ throughout) be Dedekind's weight 1/2 modular form. Furthermore, we let $E_2(\tau):=1-24\sum_{n=1}^{\infty} \sum_{d\mid n}dq^n$ be the usual weight 2 quasimodular Eisenstein series, and we let $F(\tau)$ be the weight $-2$ meromorphic modular form $$ F(\tau):=\frac{1}{2}\cdot \frac{E_2(\tau)-2E_2(2\tau)-3E_2(3\tau)+6E_2(6\tau)}{\eta(\tau)^2 \eta(2\tau)^2 \eta(3\tau)^2\eta(6\tau)^2} =q^{-1}-10-29q-\dots. 
$$ Using the convention that $\tau=x+iy$, with $x, y\in \R$, we define the weight 2 weak Maass form \begin{equation} \mathcal{P}(\tau):=-\left(\frac{1}{2\pi i}\cdot \frac{d}{d\tau}+\frac{1}{2\pi y}\right) F(\tau)= \left(1-\frac{1}{2\pi y}\right)q^{-1}+\frac{5}{\pi y}+\left(29+\frac{29}{2\pi y}\right)q+\dots. \end{equation} The finite algebraic formula for $p(n)$ is given in terms of the {\it singular moduli} for $\mathcal{P}(\tau)$, the values of this weak Maass form at CM points. More precisely, we use discriminant $-24n+1=b^2-4ac$ positive definite integral binary quadratic forms $$ Q(x,y)=ax^2+bxy+cy^2, $$ with the property that $6\mid a$. The congruence subgroup $\Gamma_0(6)$ acts on these forms, and we let $\mathcal{Q}_n$ be the (finitely many) equivalence classes with $a>0$ and $b\equiv 1\pmod{12}$. If $Q(x,y)$ is such a form, then we let $\alpha_Q$ be the unique point in the upper half-plane for which $Q(\alpha_Q,1)=0$. By the theory of complex multiplication, these singular moduli are algebraic, and they generate ring class field extensions of $\Q(\sqrt{-24n+1})$. We then define their trace by \begin{equation} \Tr(n):=\sum_{Q\in \mathcal{Q}_n} \mathcal{P}(\alpha_Q). \end{equation} In terms of this notation, we have the following pleasing theorem. \begin{theorem}{\text {\rm (Bruinier-Ono \cite{BruinierOno}, 2013)}}\label{Finite} If $n$ is a positive integer, then we have $$ p(n)=\frac{1}{24n-1}\cdot \Tr(n). $$ The numbers $\mathcal{P}(\alpha_Q)$, as $Q(x,y)$ varies over the finitely many classes in $\mathcal{Q}_n$, form a multiset of algebraic numbers which is the union of Galois orbits for the discriminant $-24n+1$ ring class field. Moreover, for each $Q\in \mathcal{Q}_n$ we have that $(24n-1)\mathcal{P}(\alpha_Q)$ is an algebraic integer. \end{theorem} \begin{remark} Larson and one of the authors {\text {\rm \cite{LarsonRolen}}} established the precise integrality properties of the values $\mathcal{P}(\alpha_Q)$. 
\end{remark} The proofs of Theorems~\ref{Asymptotic}, \ref{Exact} and \ref{Finite} depend critically on the fact that the generating function in (\ref{GenFcn}), where $q:=e^{2\pi i \tau}$, satisfies $q^{-1/24}P(q)=1/\eta(\tau)$. It is now well understood that the circle method can be applied to modular forms whose poles are supported at cusps. Moreover, for those modular forms that have non-positive weight, the method typically offers exact formulas as in Theorem~\ref{Exact}. Here we describe recent developments in topology in which modular forms that can be written as infinite products resembling \eqref{GenFcn} arise as generating functions of topological invariants. Thus, by applying the circle method of Hardy and Ramanujan, as perfected by Rademacher, we obtain exact formulas that provide insight into the distribution of these topological invariants. To discuss this connection between topology and Ramanujan's legacy, we recall the foundational importance of \emph{topological invariants}. One of the broad goals of topology is to determine whether two particular spaces have the same topological, differentiable, or complex analytic structure. When this is the case, one can often find an isomorphism between the two spaces, identifying them in a way that respects this structure. It is, however, generally more difficult to prove that two spaces are fundamentally distinct. \textit{Topological invariants} assign numbers, groups, or other mathematical objects to spaces in such a way that isomorphic spaces yield the same output. In this way, invariants are useful for distinguishing dissimilar spaces. Here the spaces we are concerned with are complex manifolds. An important class of invariants known as the \textit{Hodge numbers} $h^{s,t}$ belongs to manifolds of this type. For any $n$-dimensional complex manifold $M$ and any $0 \leq s,t \leq n$, the Hodge number $h^{s,t}(M)$ gives the dimension of a certain vector space of differential forms on $M$. 
For the manifolds we will be concerned with, important topological invariants such as the Betti numbers and signature arise as linear combinations of the Hodge numbers (see \cite{Wells}). There are many ways of constructing new spaces from old, and when we study topology we want to understand how invariants interact with these constructions. In algebraic geometry, the $n$-th \textit{Hilbert scheme} of a projective variety $S$ is a projective variety $\mathrm{Hilb}^n(S)$ that can be thought of as a smoothed version of the $n$-th symmetric product of $S$ (for example, see \cite{Hilbert_schemes}). The $n$-th symmetric product of a manifold $M$ admits a simple combinatorial interpretation: outside of a negligible subset, the symmetric product is the collection of subsets of $M$ of size $n$ assembled as a manifold in its own right. Interestingly, the Hodge numbers of a complex projective surface $S$ determine the Hodge numbers of $\mathrm{Hilb}^n(S)$ for all $n$ in a very pleasing combinatorial way. This statement is captured in the following beautiful theorem of G\"ottsche \cite{Gottsche}. \begin{theorem}[G\"ottsche]\label{thm:gottsche} If $S$ is a smooth projective complex surface, then we have that \begin{equation} \sum_{\substack{n\geq 0\\0\leq s,t \leq 2n}}(-1)^{s+t}h^{s,t}(\mathrm{Hilb}^n(S))x^{s-n}y^{t-n}q^n =\prod_{n =1}^\infty \frac{\prod_{s +t \mathrm{\ odd}} (1- x^{s-1}y^{t-1}q^n)^{h^{s,t}(S)}}{\prod_{s +t \mathrm{\ even}} (1- x^{s-1}y^{t-1}q^n)^{h^{s,t}(S)}}. \label{eq:goettsche} \end{equation} \end{theorem} The fortunate feature of this formula is that the Hodge numbers $h^{s,t}(\mathrm{Hilb}^n(S))$ are prescribed by the infinite product in (\ref{eq:goettsche}), which can be specialized to obtain modular forms. In these cases we will use the circle method to obtain exact formulas, as well as asymptotic and distributional information, for these Hodge numbers for a certain class of complex projective surfaces. 
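To see Theorem \ref{thm:gottsche} in action, specialize \eqref{eq:goettsche} at $x=y=1$ for a K3 surface: the odd Hodge numbers vanish and the even ones sum to $\chi(S)=24$, so the right-hand side collapses to $\prod_{n\geq1}(1-q^n)^{-24}$, whose $q^n$ coefficient is the Euler characteristic $\chi(\mathrm{Hilb}^n(S))$. A minimal sketch computing these coefficients by repeated series division:

```python
# Goettsche's formula at x = y = 1 for a K3 surface: the product collapses
# to prod_{n>=1} (1 - q^n)^(-24); the q^n coefficient is chi(Hilb^n(S)).
N = 5
coeffs = [1] + [0] * N  # truncated power series, constant term 1

for n in range(1, N + 1):
    for _ in range(24):              # multiply by 1/(1 - q^n), 24 times
        for m in range(n, N + 1):    # in-place: c[m] += c[m - n]
            coeffs[m] += coeffs[m - n]

print(coeffs)  # [1, 24, 324, 3200, 25650, 176256]
```

In particular the coefficient of $q^1$ recovers $\chi(\mathrm{Hilb}^1(S))=\chi(S)=24$, and the coefficient of $q^2$ gives $\chi(\mathrm{Hilb}^2(S))=324$.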
This work generalizes previous work by some of the authors \cite{reu_paper}. We pursue this task in the same spirit that led Ramanujan to bring forward new information about the partition numbers using the modularity of $\eta(\tau)$. Indeed, the process of taking symmetric powers of surfaces is inherently combinatorial, and in the spirit of Ramanujan gives rise precisely to an infinite family of topologically inspired ``partition problems'' encoded in (\ref{eq:goettsche}). Hence, we expect the circle method to apply. In other words, the role of infinite products in partition theory offers a glimpse of G\"ottsche's theorem as a device which mirrors the assembly required to build $\mathrm{Hilb}^n(S)$. Although the combinatorial object we are studying, $\mathrm{Hilb}^n(S)$, arises in a different field of mathematics than partitions, we find that it is Ramanujan's insight and his novel use of modular forms that illuminates the path to obtaining new and interesting information about these spaces. Towards this end, we begin by collecting the Hodge numbers for a $d$-dimensional complex manifold $M$ in a generating function called the \emph{Hodge polynomial}: \begin{equation*} \chi_{\mathrm{Hodge}}(M)(x,y) := x^{-d/2}y^{-d/2}\sum_{s,t}h^{s,t}(M)(-x)^s(-y)^t. \end{equation*} Henceforth we refer to the generating function for the Hodge polynomials of Hilbert schemes of a smooth projective complex surface $S$ as \begin{equation}\label{eq:def_z} Z_S(x,y;\tau):=\sum_{n = 0}^\infty \chi_{\mathrm{Hodge}}(\mathrm{Hilb}^n(S))(x,y)q^n. \end{equation} By specializing \eqref{eq:def_z} appropriately, we obtain generating functions for a variety of topological invariants (see \cite{reu_paper}). 
In order to study the distributional properties of Hodge numbers, we consider \begin{equation}\label{eq:def_gamma} \gamma_{S}(r_1,\ell_1, r_2, \ell_2;n):= \sum_{\substack{{t \equiv r_1} \mod{\ell_1}\\ {s \equiv r_2 } \mod{\ell_2} }} (-1)^{s+t}h^{s+n,t+n}(\mathrm{Hilb}^n(S)), \end{equation} and compile the generating function \begin{equation}\label{eq:def_C} C_S(r_1,\ell_1, r_2, \ell_2;\tau) := \sum_{n \geq 0} \gamma_S(r_1,\ell_1, r_2, \ell_2;n) q^n. \end{equation} We would like to determine when the Hodge numbers of a surface $S$ are equidistributed. We define such an \emph{equidistribution} as follows: \begin{definition} Let $S$ be a smooth projective complex surface. We say that $S$ has $(\ell_1,\ell_2)$-equidistribution if for some $\mathcal{R} \subseteq \Z/{\ell_1}\Z \times \Z/{\ell_2}\Z$ we have, as $n\to\infty$, $$ \gamma_S(r_1,\ell_1,r_2,\ell_2;n) \sim \gamma_S(r_1',\ell_1,r_2',\ell_2;n) $$ for all $(r_1,r_2),(r_1',r_2') \in \mathcal{R}$, and $$\gamma_S(r_1,\ell_1,r_2,\ell_2;n) = 0$$ for all $(r_1,r_2) \not\in \mathcal{R}$ and all $n>0$. \end{definition} Since the generating function in \eqref{eq:def_C} arises as a linear combination of specializations of \eqref{eq:def_z} according to \begin{equation} \label{eq:roots} C_S(r_1,\ell_1, r_2, \ell_2;\tau) = \frac{1}{\ell_1\ell_2} \sum_{\substack{ j_1 \mod{\ell_1} \\ j_2\mod{\ell_2}}} \zeta^{-j_2r_2}_{\ell_2} \zeta^{-j_1r_1}_{\ell_1} Z_S(\zeta_{\ell_2}^{j_2}, \zeta^{j_1}_{\ell_1};\tau), \end{equation} where $\zeta_\ell$ is a primitive $\ell^{th}$ root of unity, determining the behavior of specializations of \eqref{eq:def_z} is useful for studying the distribution of Hodge numbers. 
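Identity \eqref{eq:roots} is the familiar roots-of-unity filter, which isolates the coefficients lying in fixed residue classes. A minimal one-variable sketch of the same mechanism (the polynomial below is an arbitrary illustration, not one of the $Z_S$):

```python
import cmath

def filter_residue(coeffs, r, ell):
    # (1/ell) * sum_j zeta^{-j r} f(zeta^j) picks out the sum of coeffs[k]
    # over k congruent to r (mod ell), by orthogonality of roots of unity.
    total = 0j
    for j in range(ell):
        zeta_j = cmath.exp(2j * cmath.pi * j / ell)
        f_at_zeta = sum(c * zeta_j ** k for k, c in enumerate(coeffs))
        total += cmath.exp(-2j * cmath.pi * j * r / ell) * f_at_zeta
    return (total / ell).real

coeffs = [5, 1, 4, 1, 3]  # f(x) = 5 + x + 4x^2 + x^3 + 3x^4 (arbitrary example)
print(round(filter_residue(coeffs, 0, 2)))  # 12  (= 5 + 4 + 3)
print(round(filter_residue(coeffs, 1, 2)))  # 2   (= 1 + 1)
```

Equation \eqref{eq:roots} is the two-variable version of this filter, applied in each of the variables $x$ and $y$ of $Z_S$ simultaneously.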
We can express these functions $$ Z_S(\zeta_{\ell_2}^{r_2}, \zeta^{r_1}_{\ell_1};\tau) =: \sum _{n \geq 0} \xi_S(r_1,\ell_1,r_2,\ell_2;n) q^n $$ in terms of $\eta(\tau)$ and \emph{generalized Dedekind} $\eta$ \emph{functions}, which are defined on page 187 of \cite{Schoeneberg} as $$ \eta_{\left(u,v,N\right)}\left(\tau\right): = \alpha_N\left(u,v\right)e^{\pi i P_2\left(u/N\right)\tau} \prod_{\substack{m>0 \\ m \equiv u \mod{N}}}\left(1- \zeta_N^{v}e^{2 \pi i \tau m/N}\right) \prod_{\substack{m>0 \\ m \equiv -u \mod{N}}}\left(1- \zeta_N^{-v}e^{2 \pi i \tau m/N}\right),$$ where $\alpha_N(u,v)$ is given by $$ \alpha_N(u,v):= \begin{cases} \left(1 - \zeta_N^{-v}\right)e^{\pi i P_1\left( \frac{v}{N}\right)} & u \equiv 0 \text{ and } v \not \equiv 0\ \pmod N, \\ \hfil 1 & \text{otherwise.} \end{cases} $$ Here, $P_1(x)=\{x\}-1/2$ and $P_2(x):= \{x\}^2 -\{x\} + 1/6$. By page 200 of \cite{Schoeneberg}, for $u,v,N \in \Z$ with $(u, v) \not\equiv (0,0) \mod N$, the function $\eta_{(u,v,N)}^{N_1}(\tau)$ is a modular function on $\Gamma(N)$, where $N_1 = 12 N^2 / \gcd(6,N)$. The explicit transformation law for $\eta_{(u,v,N)}(\tau)$ on all of $\mathrm{SL}_2(\mathbb{Z})$ is shown on page 198 of \cite{Schoeneberg}. We exploit this transformation law in Section \ref{ssc:prf_sketch} to find an exact formula for the $\xi_S(r_1,\ell_1,r_2,\ell_2;n)$ for suitable surfaces. To make this precise, suppose that $M$ is a $d$-dimensional complex manifold. We let $\chi(M)$ denote its Euler characteristic, and we let $\sigma(M)$ denote the signature of the intersection pairing on $H^d(M)$. Then we obtain the following exact formulas, where $L:=\mathrm{lcm}(\ell_1,\ell_2)$, $H:=H(\iota_2)$ is given by \eqref{eq:Hdefn}, $a_j$ is a Fourier coefficient defined in \eqref{eq:Zstar}, $B_k$ is a Kloosterman sum defined in \eqref{eq:Kloosterman_def}, and $I^*$ is a scaled modified Bessel function of the first kind defined in \eqref{eq:I_def}. 
\begin{theorem}\label{thm:exact_formulas} Let $S$ be a smooth projective complex surface such that $\chi(S) \geq 0$ and $\chi(S) \geq \sigma(S)$. Then, \begin{equation} \xi_S(r_1,\ell_1,r_2,\ell_2;n)= 2\pi \alpha \sum_{\substack{\iota_1 \mod{L} \\ \iota_2 \mod{L}}} \sum_{j<-LH} \sum_{\substack{k=1 \\ k \equiv \iota_2 \mod{L}}}^\infty \frac{\alpha'a_j}{k^G}B_k(j,L,\iota_1;n) I^*(\iota_1,\iota_2,j,k;n). \label{eq:thm5} \end{equation} \end{theorem} \begin{remark} Theorem \ref{thm:exact_formulas} holds for a large class of surfaces $S$. In particular, it holds for all but finitely many Hodge structures in each birational equivalence class. In addition, the Enriques-Kodaira Classification Theorem implies that for surfaces of non-general type, the only minimal models which do not satisfy the hypotheses of Theorem \ref{thm:exact_formulas} are ruled surfaces of genus $g$, where $g \geq 2$ (see \cite{reu_paper}). \end{remark} \begin{remark} By the equality $$\gamma_{S}(t-n,2n+ 1, s-n, 2n+1;n)= (-1)^{s+t}h^{s,t}(\mathrm{Hilb}^n(S)),$$ Theorem \ref{thm:exact_formulas} and Equation (\ref{eq:roots}) together give exact formulas for $h^{s,t}(\mathrm{Hilb}^n(S))$ for surfaces $S$ such that $\chi(S) \geq 0$ and $\chi(S) \geq \sigma(S)$. \end{remark} Building on work in \cite{manschot_rolon}, from which it follows that for a K3 surface $S$ we have $\gamma_S(r, \ell, 0, 1; n) \sim \gamma_S(r', \ell, 0, 1; n)$ as $n \to \infty$, and on \cite{reu_paper}, which described equidistribution in the case $\ell_1 = \ell_2 = 2$, we use the asymptotics derived from the exact formula in Theorem \ref{thm:exact_formulas} to make the following statement about the equidistribution of Hodge numbers for appropriate surfaces: \begin{theorem}\label{thm:equidistribution} Let $S$ be a smooth projective complex surface such that $\chi(S) \geq \sigma(S)$. 
Then $S$ has $(\ell_1,\ell_2)$-equidistribution if and only if one or more of the following holds: \begin{enumerate} \item $h^{1,0} = 0$, $h^{2,0}=0$, $\mathcal{R} = \{ (r_1,r_2) \ | \ r_1 \equiv r_2 \mod \gcd(\ell_1,\ell_2)\}$, \item $h^{1,0} = 0$, $h^{2,0} >0$, $\mathcal{R} = \{ (r_1,r_2) \ | \ r_1 \equiv r_2 \mod \gcd(\ell_1,\ell_2,2)\}$, \item $\chi(S) + \sigma(S) = 0$, $\chi(S) \neq 0$, $\min\{\ell_1,\ell_2\} = 1$, $\mathcal{R} = \{(0,0)\}$, \item $\chi(S) + \sigma(S) = 0$, $\chi(S) = 0$, $\min\{\ell_1,\ell_2\} = 1$, $\mathcal{R} = \emptyset$, \item $\chi(S) \neq 0$, $\ell_1=\ell_2=1$, $\mathcal{R} = \{(0,0)\}$, \item $\chi(S) = 0$, $\ell_1=\ell_2=1$, $\mathcal{R} = \emptyset$, \item $h^{1,0} >0$, $\chi(S) + \sigma(S) >0$, $\mathcal{R} = \Z/{\ell_1}\Z \times \Z/{\ell_2}\Z$, and $\Lambda(0,0) < \Lambda(j_1/\ell_1,j_2/\ell_2)$ for all $(j_1,j_2) \not \equiv (0,0)$, where $$ \Lambda(x,y) := h^{1,0} \left( P_2\left(x\right) + P_2\left(y\right)\right) - h^{0,0} P_2\left( x+y\right) - h^{2,0} P_2\left( x-y\right). $$ \end{enumerate} Case (7) occurs whenever $\min\{\ell_1,\ell_2\} = 1$, and holds for only finitely many $(\ell_1,\ell_2)$ such that $\min\{\ell_1,\ell_2\}> 1$. In addition, case (7) only occurs when $\gcd(\ell_1,\ell_2)=1$. \end{theorem} \section{The partition function} Here we describe the history and main idea of the circle method for the partition function. This will use the modularity of the generating function for the partition function. We should point out that though the circle method is especially convenient to apply in such cases, many important applications of the circle method are possible in non-modular situations, for example as explored by Hardy and Littlewood \cite{Vaughan} in their study of Waring's problem. The basic idea is simple. Recall from \eqref{GenFcn} that the generating function for $p(n)$ satisfies a product form first discovered by Euler: \[ P(q)=\prod_{n=1}^{\infty}\frac{1}{1-q^n} . 
\] Cauchy's residue theorem can then be used to isolate any of the coefficients of this expansion. Specifically, if we divide $P(q)$ by $q^{n+1}$, then as a function of $q$, $P(q)/q^{n+1}$ has residue $p(n)$ at $q=0$, and so \[ p(n)=\frac1{2\pi i}\int_C\frac{P(q)}{q^{n+1}}dq , \] where $C$ is any simple closed path around the origin contained in the open unit disk, traversed counterclockwise. Though this idea is straightforward in principle, great care must be taken in choosing a suitable path $C$ so that the integral can be closely estimated. To determine a ``good'' path, one must first consider the location of the poles of $P(q)$. Again, this is furnished by Euler's product formula \eqref{GenFcn}, which shows that the poles lie exactly at the roots of unity. This justifies the earlier claim that we may take any path which does not cross this wall of singularities, and gives a first indication of how to estimate this integral. We would like to split the integral into a ``main term'' and an error term, and so it is important to study where most of the contribution of the integral will come from. If we choose a path which approaches roots of unity quite closely, then the majority of the integral will come from the parts of the path near these poles. However, not all poles will contribute equally to the size of the integral. An analysis of the generating function $P(q)$ shows that ``near'' primitive $j$-th order roots of unity, the size is much smaller the larger $j$ is. Thus, the main contribution is from the pole at $q=1$, secondary terms come from the pole at $q=-1$, and the third order of contribution comes from the third order roots of unity. In fact, the explanation of the terms in the expansion \eqref{pofnHRExp} is that the function $E_j(n)$ is an approximation of the behavior of a suitable Cauchy integral near the primitive $j$-th order roots of unity. 
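The residue extraction just described can also be carried out numerically: sample the Cauchy integral at equally spaced points on a circle $|q|=r$ inside the unit disk, with $P(q)$ evaluated by a truncated Euler product. The radius and truncation parameters below are arbitrary illustrative choices.

```python
import cmath

def P(q, terms=200):
    # truncated Euler product for the partition generating function
    val, qn = 1 + 0j, 1 + 0j
    for _ in range(terms):
        qn *= q
        val /= 1 - qn
    return val

def p_contour(n, radius=0.5, samples=1024):
    # p(n) = (1/2 pi i) * integral of P(q)/q^{n+1} dq over |q| = radius,
    # discretized by the trapezoid rule (exact up to aliasing and truncation)
    total = 0j
    for j in range(samples):
        q = radius * cmath.exp(2j * cmath.pi * j / samples)
        total += P(q) * q ** (-n)
    return (total / samples).real

print(round(p_contour(5)), round(p_contour(10)))  # 7 42
```

For this fixed small radius the discretization error is tiny; the analytic difficulties described in the text only arise when one pushes the contour toward the wall of poles on the unit circle to extract asymptotics.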
As an illustration of this numerical phenomenon, consider the following table of values of $P(q)$ near roots of unity, where the columns correspond to the different roots of unity $\zeta$, and where the values of $t$ in the rows correspond to evaluating $|P(\zeta\cdot e^{-t})|$. \begin{table}[h] \small \begin{center} \scalebox{0.9}{\begin{tabular}{|c|c|c|c|c|}\hline & $\zeta=1$&$\zeta=-1$ &$\zeta=-\frac12\pm\frac{\sqrt{-3}}{2}$ &$\pm i$ \\\hline t=0.5 & 7.4& 0.87&0.68 &0.66 \\ \hline t=0.3 & 51.3& 1.2&0.68 &0.60 \\ \hline t=0.1 & $1.7\cdot 10^6$&10.8 &1.3 &0.70 \\ \hline t=0.01 & $1.1\cdot 10^{70}$&$4.1\cdot 10^{16}$ &$6.0\cdot 10^6$ & 2325.4 \\ \hline \end{tabular}} \end{center} \caption{Illustrative values of $P(q)$ near roots of unity} \label{table_eta_values} \end{table} The exact description of $P(q)$ near roots of unity is afforded by the modular transformation properties of the Dedekind-eta function, for which $$ P(q)=\frac{q^{\frac{1}{24}}}{\eta(\tau)} . $$ Essentially, $\eta(\tau)$ is a level one modular form of weight $1/2$, with a multiplier system consisting of $24$-th roots of unity. That is, for any $\gamma=\left(\begin{smallmatrix}a & b\\ c& d\end{smallmatrix}\right)\in\operatorname{SL}_2(\mathbb Z),$ \[ \eta(\gamma\cdot\tau)=\omega_\gamma(c\tau+d)^{\frac12}\eta(\tau), \] where $\omega_\gamma^{24}=1$. Specifically, the numbers $\omega_\gamma$ are determined by the values at the matrices \[ T:=\begin{pmatrix} 1 & 1\\ 0 & 1\end{pmatrix}, \quad\quad\quad S:=\begin{pmatrix}0 & -1\\ 1 & 0\end{pmatrix} \] as follows: \[ \eta(\tau+1)=e^{\frac{2\pi i}{24}}\eta(\tau), \quad\quad\quad \eta\left(-\frac{1}{\tau}\right)=\sqrt{-i\tau}\eta(\tau) . \] Using matrices to ``connect'' any root of unity to the point at infinity, and dealing with the elementary factor $q^{\frac{1}{24}}$ yields the desired expansions near the cusps. 
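The magnitudes in the table above can be reproduced directly from a truncated Euler product; the short sketch below (the truncation order is our choice) evaluates $|P(\zeta e^{-t})|$ near $\zeta=1$ and $\zeta=-1$:

```python
import cmath

def P(q, trunc=400):
    # Truncated Euler product P(q) = prod_{n=1}^{trunc} 1/(1 - q^n);
    # for |q| = e^{-t} with t >= 0.1 the omitted tail is negligible.
    val = 1.0 + 0.0j
    for n in range(1, trunc + 1):
        val /= (1 - q**n)
    return val

# |P(zeta * e^{-t})| blows up fastest near zeta = 1, more slowly near -1
for t in (0.5, 0.3, 0.1):
    q1 = cmath.exp(-t)                 # approach to zeta = 1
    qm1 = -cmath.exp(-t)               # approach to zeta = -1
    print(t, abs(P(q1)), abs(P(qm1)))
```

The printed values match the orders of magnitude in the first two columns of the table.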
For instance, as $q$ approaches $1$ radially from within the unit disk, that is $q=e^{2\pi i \tau}$ with $\tau$ tending to zero along the imaginary axis, we have \[P(q)\sim\sqrt{-i\tau}e^{\frac{\pi i}{12\tau}}.\] Such calculations suffice to estimate the values of $P(q)$ along any desired path. Now, we must discuss which path one should choose. We will follow Rademacher's choice, which yields his exact formula above. The first natural choice, as suggested by the name of the method, would be to let $C$ be a circle centered at the origin. In the upper half plane, that is as a function of $\tau$, this corresponds to choosing a horizontal path with endpoints 1 unit apart. Rademacher replaced this path by an increasingly large number of mutually tangent circles, the well-known {\it Ford circles}. For each rational number there is a single Ford circle, tangent to the real line at that rational. For each cutoff $N$, Rademacher considered the subset of these circles which are tangent to the rational numbers with denominator less than or equal to $N$. The exact description is beautifully expressed using the theory of Farey fractions. Explicitly, these circles are precisely the image of the line $\operatorname{Im}(\tau)=1$ in the upper half plane under the action of $\operatorname{SL}_2(\mathbb Z)$. The figure below depicts the Ford circles centered around fractions in $[0,1]$ with denominator at most $N=4$. For each $N$, Rademacher's path traverses the arcs of these circles, starting at $i$, moving along each arc until the next included Ford circle is intersected, until finally arriving at $i+1$. In the figure for $N=4$, the path travels along the solid circular arcs. Rademacher then expressed the Cauchy integral as a sum over each of these paths, and, under a suitable change of variables, used the modular properties of the eta function to expand the values of $P(q)$ on each of these arcs. 
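Concretely, the Ford circle attached to a reduced fraction $h/k$ has center $h/k+i/(2k^2)$ and radius $1/(2k^2)$, and the circles at consecutive Farey fractions are mutually tangent. A short sketch (helper names are ours) verifies this exactly in rational arithmetic:

```python
from fractions import Fraction

def farey(N):
    # Farey fractions in [0, 1] with denominator at most N, in order.
    return sorted({Fraction(h, k) for k in range(1, N + 1)
                   for h in range(0, k + 1)})

def ford_circle(frac):
    # Center (x, y) and radius of the Ford circle tangent to the real
    # line at h/k: center h/k + i/(2k^2), radius 1/(2k^2).
    k = frac.denominator
    r = Fraction(1, 2 * k * k)
    return (frac, r), r

# Consecutive Farey fractions give mutually tangent Ford circles:
seq = farey(4)
for a, b in zip(seq, seq[1:]):
    (xa, ya), ra = ford_circle(a)
    (xb, yb), rb = ford_circle(b)
    dist2 = (xa - xb) ** 2 + (ya - yb) ** 2
    assert dist2 == (ra + rb) ** 2     # tangency, exact in rationals
print([str(f) for f in seq])  # ['0', '1/4', '1/3', '1/2', '2/3', '3/4', '1']
```

The tangency follows from the Farey neighbor relation $|hk'-h'k|=1$, which makes the squared distance between centers collapse to $(r+r')^2$.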
These expansions of $P(q)$ can then naturally be split up into a main term and an error term, the main features being that the main terms become integrals which can be evaluated exactly as Bessel functions, and the error terms can be explicitly bounded. After the work of Rademacher, it became apparent that his series were in fact Poincar\'e series in disguise. In particular, there are special functions which, when averaged under the slash operator over the modular group, have Fourier expansions which give the same expansions. Such Poincar\'e series always form a basis of the space of all weakly holomorphic modular forms up to cusp forms (as one can explicitly match them to any principal part and constant term, which determines a modular form up to a cusp form). For forms of negative weight such as $1/\eta(\tau)$ this procedure always yields an exact formula, since there are no nonzero cusp forms of negative weight. More details on the method of Poincar\'e series and how it can be applied to general weakly holomorphic and mock modular forms can be found in \cite{OurBook}. 
\begin{figure} \resizebox{13.0cm}{!}{\begin{tikzpicture} \begin{scope}[thick,font=\scriptsize] \draw [->] (-0.5,0) -- (11,0) node [above left] {$\Re\{z\}$}; \draw [->] (0,-0.5) -- (0,11) node [below right] {$\Im\{z\}$}; \draw [thin] (10,-0.25) -- (10, 0.25); \node at (10,-0.5) {$1$}; \draw [thin] (-0.25, 10) -- (0.25, 10); \node at (-0.5, 10) {$i$}; \draw [thin] (5,-0.15) -- (5, 0.15); \node at (5,-0.5) {$1/2$}; \draw [thin] (10/3,-0.15) -- (10/3, 0.15); \node at (10/3,-0.5) {$1/3$}; \draw [thin] (20/3,-0.15) -- (20/3, 0.15); \node at (20/3,-0.5) {$2/3$}; \draw [thin] (2.5,-0.15) -- (2.5, 0.15); \node at (2.5,-0.5) {$1/4$}; \draw [thin] (7.5,-0.15) -- (7.5, 0.15); \node at (7.5,-0.5) {$3/4$}; \draw [blue,domain=-62:90] plot ({0+5*cos(\x)}, {5+5*sin(\x)}); \draw [dotted,blue,domain=-90:-62] plot ({0+5*cos(\x)}, {5+5*sin(\x)}); \draw [blue,domain=90:242] plot ({10+5*cos(\x)}, {5+5*sin(\x)}); \draw [dotted,blue,domain=242:270] plot ({10+5*cos(\x)}, {5+5*sin(\x)}); \draw [blue,domain=-23:203] plot ({5+1.25*cos(\x)},{1.25+1.25*sin(\x)}); \draw [dotted,blue,domain=203:337] plot ({5+1.25*cos(\x)}, {1.25+1.25*sin(\x)}); \draw [blue,domain=23:190] plot ({10/3+(5/9)*cos(\x)},{(5/9)+(5/9)*sin(\x)}); \draw [dotted,blue,domain=127:383] plot ({10/3+(5/9)*cos(\x)},{(5/9)+(5/9)*sin(\x)}); \draw [blue,domain=-10:157] plot ({20/3+(5/9)*cos(\x)},{(5/9)+(5/9)*sin(\x)}); \draw [dotted,blue,domain=157:350] plot ({20/3+(5/9)*cos(\x)},{(5/9)+(5/9)*sin(\x)}); \draw [blue,domain=10:120] plot ({10/4+(10/32)*cos(\x)},{(10/32)+(10/32)*sin(\x)}); \draw [dotted,blue,domain=120:370] plot ({10/4+(10/32)*cos(\x)},{(10/32)+(10/32)*sin(\x)}); \draw [blue,domain=60:170] plot ({30/4+(10/32)*cos(\x)},{(10/32)+(10/32)*sin(\x)}); \draw [dotted,blue,domain=170:420] plot ({30/4+(10/32)*cos(\x)},{(10/32)+(10/32)*sin(\x)}); \end{scope} \end{tikzpicture}} \caption{Ford Circles for $N=4$} \end{figure} \section{Hodge numbers for Hilbert schemes of surfaces} We now apply the circle method to obtain exact formulas 
for Hodge numbers for Hilbert schemes. In the next two subsections we sketch the proofs of Theorems~\ref{thm:exact_formulas} and \ref{thm:equidistribution}, which generalize earlier work by the authors contained in \cite{reu_paper}. In the last subsection, we illustrate these results in the case of $S=\C\mathbb{P}^2$. \subsection{Sketch of the proof of Theorem \ref{thm:exact_formulas}: Exact formulas}\label{ssc:prf_sketch} For the sake of simplicity, we will only consider the case $r_2 = 0$, $\ell_2=1$, as the case $r_2 \neq 0$ follows \textit{mutatis mutandis}. \begin{proof}[Sketch of the proof of Theorem~\ref{thm:exact_formulas}] By Cauchy's residue theorem, we obtain the following integral expression for the coefficient of $q^n$ inside $Z_S(\zeta_\ell^r,1;q)$: \[\xi_S(r,\ell,0,1;n)=\frac{1}{2\pi i}\int_C\frac{Z_S(\zeta_{\ell}^r,1;q)}{q^{n+1}}dq,\] where we choose $C$ to be a circle of radius $e^{-2\pi N^{-2}}$. While Rademacher estimated his contour integral by decomposing the line segment in the upper half plane into arcs of Ford circles, we decompose $C$ into Farey arcs $\Xi_{h,k}$ as in \cite{Rademacher}, yielding \[\xi_S(r,\ell,0,1;n)=\sum_{\substack{0\leq h<k\leq N\\\text{gcd}(h,k)=1}}\frac{1}{2\pi i}\int_{\Xi_{h,k}}\frac{Z_S\left(\zeta_{\ell}^{r},1;q\right)}{q^{n+1}}dq.\] We note that with this path of integration, we recover Rademacher's formula for $p(n)$ in Theorem \ref{Exact}. Both methods make use of the same transformation formula and approximate the contribution from the cusp representatives with the same Bessel functions, but differ in the process of bounding errors. 
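The Farey arcs $\Xi_{h,k}$ are delimited by the mediants of consecutive Farey fractions, so consecutive arcs abut and together dissect the whole circle. A small sketch of this standard dissection (function names are ours):

```python
from fractions import Fraction

def farey(N):
    # Farey fractions in [0, 1] with denominator at most N, in order.
    return sorted({Fraction(h, k) for k in range(1, N + 1)
                   for h in range(0, k + 1)})

def farey_arcs(N):
    # Mediant endpoints of the Farey dissection of [0, 1]: the arc
    # around h/k runs between its mediants with the two neighbors.
    seq = farey(N)
    arcs = []
    for i, f in enumerate(seq):
        lo = f if i == 0 else Fraction(
            f.numerator + seq[i - 1].numerator,
            f.denominator + seq[i - 1].denominator)
        hi = f if i == len(seq) - 1 else Fraction(
            f.numerator + seq[i + 1].numerator,
            f.denominator + seq[i + 1].denominator)
        arcs.append((lo, hi))
    return arcs

# Consecutive arcs share their mediant endpoint, dissecting [0, 1]:
arcs = farey_arcs(4)
assert all(a[1] == b[0] for a, b in zip(arcs, arcs[1:]))
assert arcs[0][0] == 0 and arcs[-1][1] == 1
```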
Since we can express our generating function as the modular form \begin{equation} \label{eq:Z_specialized} Z_S(\zeta_{\ell}^{r},1;\tau)= \dfrac{ \alpha q^{\frac{\chi\left(S\right)}{24}} }{\eta_{\left(0,r, \ell\right)}\left(\tau\right)^{(\chi(S) + \sigma(S))/4 } \eta \left(\tau\right)^{(\chi(S) -\sigma(S))/2} }, \end{equation} where $\alpha$ depends only on $r$ and $\ell$, by \cite{Schoeneberg} we have for $(h,k)=1$, $hh' \equiv -1 \ \mod{k}$, and $\mathrm{Re}(z) >0$, $$ Z_S\left(\zeta_{\ell}^{r}, 1; \frac{iz +h}{k}\right) = \omega (h,k) \alpha \alpha' \cdot z^{-G} \cdot {\mathrm{exp}}\left(-\frac{2 \pi}{k} \left( \frac{\chi\left(S\right)}{24}z + \frac{H}{z}\right) \right) \cdot Z^*\left(\frac{iz ^{-1} + h'}{k} \right), $$ where $$\omega(h,k) := {\mathrm{exp}}\left( -\pi i/4\cdot \left(2\left(\chi(S)-\sigma(S)\right) \cdot s(h,k) + \left(\chi(S)+\sigma(S)\right) \cdot s_{{ (r,\ell)}}(h,k) \right) \right) $$ is a root of unity built from the following generalized Dedekind sum, $$s_{{ (r,\ell)}}(h,k) = \sum_{\lambda \mod{k}}\left(\left( \frac{ \lambda }{k}\right)\right)\left(\left( \frac{h\lambda}{k} + \frac{ r}{\ell}\right)\right), $$ $s(h,k)=s_{(0,1)}(h,k)$, the constant $\alpha'$ depends on $h$ and $k$ mod $\ell$, $H{:=H(k)}$ is the order of the zero at the cusp $h/k$, {given by} \begin{equation}\label{eq:Hdefn} \frac{1}{2} \left(h^{1,0} \left( P_2\left(\frac{k r_1}{\ell_1}\right) + P_2\left(\frac{k r_2}{\ell_2}\right)\right) - h^{0,0} P_2\left(\iota_2 \left( \frac{r_1}{\ell_1} + \frac{r_2}{\ell_2}\right)\right) - h^{2,0} P_2\left(\iota_2 \left( \frac{r_1}{\ell_1} - \frac{r_2}{\ell_2}\right)\right) - \frac{h^{1,1}}{12}\right), \end{equation} $G$ is the weight of $Z_S$, and \begin{equation}\label{eq:Zstar} Z^*(\tau)=\sum_{j\geq 0}a_je^{2\pi i\tau j/\ell}. \end{equation} {The modular transformation law for generalized Dedekind $\eta$ functions found in Chapter 8 of \cite{Schoeneberg} allows us to write $Z^*(\tau)$ as a quotient of infinite products. 
Using this, one can calculate $a_j$ via a simple product expansion. } Since $Z_S(\zeta_\ell^r, 1; \tau)$ is modular on $\Gamma(\ell)$, we deal with the multiple cusps in the spirit of Poincar\'e series by summing over all the Farey arcs near a given cusp, defining \[S(\iota_1, \iota_2,N;n) := \sum_{\substack{0\leq h<k\leq N\\ \mathrm{gcd}(h,k)=1 \\(h,k) \equiv (\iota_1, \iota_2) { \ \mod{\ell}}}}\frac{1}{2\pi i}\int_{\Xi_{h,k}}\frac{Z_S\left(\zeta_{\ell}^{r},1;q\right)}{q^{n+1}}dq,\] from which it immediately follows that \begin{equation}\label{eq:stratify_s} { \xi_S(r,\ell,0,1;n)}= \sum_{\substack{\iota_1 \mod{\ell} \\ \iota_2 \mod{\ell}}} S(\iota_1, \iota_2,N;n). \end{equation} Once we apply the transformation formula, substitute the series expansion for $Z^*((iz^{-1}+h')/k)$, and then make the variable transformation $w=N^{-2}-i\theta$, we obtain \begin{align*} S(\iota_1, \iota_2,N;n) = & \alpha\alpha'\sum_{j=0}^\infty \sum_{\substack{k=1 \\ k \equiv \iota_2 \mod{\ell}}}^N \sum_{\substack{0\leq h<k\\ \mathrm{gcd}(h,k)=1 \\h \equiv \iota_1 \mod{\ell}}} B_{h,k}(j,\ell;n) \\ &\cdot \frac{a_j }{k^G} \int_{\vartheta_{h,k}'}^{\vartheta_{h,k}''}w^{-G} {\mathrm{exp}}\left[ w\left(2\pi n-\frac{\pi\chi(S)}{12}\right) +\frac{1}{w}\left(-\frac{2\pi H}{k^2}-\frac{2\pi j}{k^2\ell}\right) \right] d\theta. \end{align*} % Here, \begin{equation} \label{eq:Kloosterman_def} B_k(j,\ell,\iota_1;n) :=\sum_{\substack{0\leq h<k\\ \mathrm{gcd}(h,k)=1 \\h \equiv \iota_1 \mod{\ell}}}B_{h,k}(j,\ell,n) :=\sum_{\substack{0\leq h<k\\ \mathrm{gcd}(h,k)=1 \\h \equiv \iota_1 \mod{\ell}}} \omega(h,k)\cdot{\mathrm{exp}}\left[-\frac{2\pi inh}{k}+\frac{2\pi ih'j}{k\ell}\right] \end{equation} is a Kloosterman sum. 
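The generalized Dedekind sum above can be computed directly from the sawtooth function $((x)) = x - \lfloor x\rfloor - 1/2$ for $x\notin\Z$ and $((x))=0$ otherwise. A small sketch (function names are ours), which also checks the classical reciprocity law for $s(h,k)=s_{(0,1)}(h,k)$:

```python
from fractions import Fraction

def saw(x):
    # Sawtooth ((x)): x - floor(x) - 1/2 for non-integral x, else 0.
    x = Fraction(x)
    if x.denominator == 1:
        return Fraction(0)
    return x - (x.numerator // x.denominator) - Fraction(1, 2)

def dedekind_sum(h, k, r=0, ell=1):
    # s_{(r,ell)}(h, k) = sum over lam mod k of ((lam/k))((h*lam/k + r/ell))
    return sum(saw(Fraction(lam, k)) *
               saw(Fraction(h * lam, k) + Fraction(r, ell))
               for lam in range(k))

# Classical reciprocity: s(h,k) + s(k,h) = -1/4 + (h/k + k/h + 1/(hk))/12
h, k = 3, 7
lhs = dedekind_sum(h, k) + dedekind_sum(k, h)
rhs = Fraction(-1, 4) + (Fraction(h, k) + Fraction(k, h)
                         + Fraction(1, h * k)) / 12
assert lhs == rhs
```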
Integrating as in \cite{reu_paper}, we find that \begin{equation}\label{eq:exact_formula_S} S(\iota_1, \iota_2,N;n)=2\pi \alpha\alpha' \sum_{j<-\ell H} \sum_{\substack{k=1 \\ k \equiv \iota_2 \mod{\ell}}}^N \frac{B_k(j,\ell,\iota_1;n)a_j}{k^G}I^*(\iota_1,\iota_2,j,k;n) +O(N^{-\delta}) \end{equation} for some $\delta>0$, where we define the scaled modified Bessel function of the first kind \begin{equation}\label{eq:I_def} I^*(\iota_1,\iota_2,j,k;n) := \left[2\pi n-\frac{\pi\chi(S)}{12}\right]^{(G-1)/2} \left[-\frac{2\pi H}{k^2}-\frac{2\pi j}{k^2 \ell}\right]^{(1-G)/2}I_v(s), \end{equation} where $v := 1 - G$ and $s := 2\sqrt{\left[2\pi n-\frac{\pi\chi(S)}{12}\right]\left[-\frac{2\pi H}{k^2}-\frac{2\pi j}{k^2\ell}\right]}$. Thus, by (\ref{eq:stratify_s}), we see that taking $N\to\infty$ in \eqref{eq:exact_formula_S} yields an exact formula for $\xi_S(r,\ell,0,1;n)$, proving Theorem \ref{thm:exact_formulas}. \end{proof} \begin{remark} Our use of $S(\iota_1, \iota_2, N; n)$ to provide an exact formula for $\xi_S(r, \ell, 0, 1; n)$ in \eqref{eq:stratify_s} hints at the relationship between the circle method and Poincar\'e series. In the method of Poincar\'e series, one averages functions over a group action to get modular forms with prescribed principal part at a given cusp and then uses Poisson summation to get exact formulas for the coefficients of their Fourier expansions in terms of an infinite sum of products of Kloosterman sums and Bessel functions. Our application of the circle method provides a concrete way of seeing how terms in this sum arise from the contribution of the integral near each cusp representative for the cusp under consideration. 
\end{remark} \begin{remark} The proof of Theorem \ref{thm:exact_formulas} differs most from the proof of the corresponding statement in {\text {\rm \cite{reu_paper}}} in the establishment of the Kloosterman sum bound \begin{equation} \label{eq:Ksum_bound} B_k(j,\ell,\iota_1,n) = O(n^{1/3}k^{2/3+\varepsilon}), \end{equation} where $h'$ is restricted to an interval $0 \leq \sigma_1 \leq h' < \sigma_2 \leq k$ and $\chi(S) = \sigma(S)$. The proof is a modification of the method originally proposed by Lehner in {\text {\rm \cite{Lehner}}}. \iffalse We describe the case $\ell \nmid k$, as the $\ell \mid k$ case is similar. One must first follow the method on pages 644-646 in \cite{Lehner}, making use of 2.4 from \cite{Rademacher_Ded_sums} to see that $12k\ell s_{(r,\ell)}(h,k)$ is an integer and \begin{align} \label{eq:mod1} 12k\ell s_{(r,\ell)}(h,k) & \equiv 6k(rk -c) & & \mod \ell, \\ \label{eq:mod4} 12k\ell s_{(r,\ell)}(h,k) & \equiv 6k(rk -c) & & \mod 3\ell \text{ if 3} \nmid k, \\ \label{eq:mod5} 12k\ell s_{(r,\ell)}(h,k) & \equiv 3k((2r-\ell)(k-1) +2(r-c)) & & \mod 4\ell \text{ if } 2 \nmid k, \end{align} where $c = \{rk, \ell\}$. One then follows the argument on page 646 of {\text {\rm \cite{Lehner}}} to evaluate the sum $$ \sum_{\lambda \mod k} \left(\left( \frac{h \lambda}{k} + \frac{r}{\ell}\right)\right)^2 $$ in two different ways; once the identity $$ \sum_{\lambda \mod k} \left[ \frac{h \lambda}{k} + \frac{r}{\ell} \right] = \frac{(h-1)(k-1)}{2} + \frac{rk - c}{\ell}. $$ is proven, one obtains $$ 12 k\ell s_{(r,\ell)}(h,k) \equiv u(\ell,k)h + v(r ,\ell,k)H' + R(k)\ \mod gk, $$ where $g$ retains the definition from \cite{Lehner}, $hH' \equiv -1 \mod gk$, and $u$ is a polynomial in $k$. The proof of (\ref{eq:Ksum_bound}) then follows {\text {\rm \cite{Lehner}}} with only a few modifications to account for the $\ell$ in the denominator of the exponential term in (\ref{eq:Kloosterman_def}). 
\fi \end{remark} \subsection{Sketch of the proof of Theorem~\ref{thm:equidistribution}: Equidistribution} Our main tool for proving Theorem \ref{thm:equidistribution} is an asymptotic formula for $\xi_S(r_1,\ell_1,r_2,\ell_2;n)$. We obtain this using Theorem \ref{thm:exact_formulas} by the same process as in the proof of Corollary 7.2 in \cite{reu_paper}. This formula shows that $\min_k\{\Lambda(kj_1/\ell_1,kj_2/\ell_2)/k^2\}$ determines the dominant exponential term in the asymptotic description of $\xi(j_1,\ell_1,j_2,\ell_2;n)$, so that $\xi(j_1,\ell_1,j_2,\ell_2;n)$ dominates $\xi(j_1',\ell_1,j_2',\ell_2;n)$ asymptotically if $\min_k\{\Lambda(kj_1/\ell_1,kj_2/\ell_2)/k^2\} < \min_k\{\Lambda(kj_1'/\ell_1,kj_2'/\ell_2)/k^2\}$. If equality holds, a sequence dominates asymptotically if its corresponding modular form is of strictly larger weight. If these modular forms are of the same weight, then $$\xi(j_1,\ell_1,j_2,\ell_2;n) \sim \alpha\xi(j_1',\ell_1,j_2',\ell_2;n) $$ for some $\alpha >0$. Thus, one can see by (\ref{eq:roots}) that if $\Lambda(0,0) < \Lambda(j_1/\ell_1,j_2/\ell_2)$ for all $(j_1,j_2) \not \equiv (0,0)$, then we have $(\ell_1,\ell_2)$-equidistribution for $\mathcal{R} = \Z/ \ell_1 \Z \times \Z/\ell_2\Z$. This, along with manipulations of (\ref{eq:roots}), allows one to prove that $S$ has $(\ell_1,\ell_2)$-equidistribution in all five cases in Theorem \ref{thm:equidistribution}. If $\xi(0,\ell_1,0,\ell_2) = o(\xi(j_1,\ell_1,j_2,\ell_2))$ for some $(j_1,j_2) \not \equiv (0,0)$, then $S$ does not have $(\ell_1,\ell_2)$-equidistribution. To prove this claim, choose $(j_1,j_2)$ so that $\xi(j_1,\ell_1,j_2,\ell_2;n) \neq o( \xi(j_1',\ell_1,j_2',\ell_2;n))$ for all $(j_1',j_2')$. If equidistribution held, then we would have $$ C_S(0,1,0,1) \sim \alpha^* C_S(0,\ell_1,0,\ell_2) $$ for some $\alpha^* >0$, which is false by our assumption that $\xi(0,\ell_1,0,\ell_2) = o(\xi(j_1,\ell_1,j_2,\ell_2))$. 
This fact allows us to prove that $S$ does not have $(\ell_1,\ell_2)$-equidistribution in the cases not included in Theorem \ref{thm:equidistribution}. The cases with $\chi(S) <0$ require Theorem 15.1 in \cite{OurBook}. The most difficult case is where $\min\{\Lambda(j_1/\ell_1,j_2/\ell_2)\} = \Lambda(j_1/\ell_1,j_2/\ell_2) = \Lambda(0,0)$ for some $(j_1,j_2) \not \equiv (0,0)$, where one must prove that the weight of $Z_S(\zeta_{\ell_1}^{j_1},\zeta_{\ell_2}^{j_2};\tau)$ is greater than that of $Z_S(1,1;\tau)$. To accomplish this task, one must first make use of the equality $h^{0,0} = 1$ to show that in this case we must have $\gcd(\ell_1,\ell_2) = 1$. One must then prove that in this case $h^{1,0} >0$ and $\chi(S) + \sigma(S) >0$, and thus conclude that $\Lambda(j_1/\ell_1,j_2/\ell_2)$ never attains its minimum for $j_1 \equiv 0$, $j_2 \not \equiv 0$. It follows that $\ell_2 \neq 1$. Also, one can now produce a uniform description of the weight of all $Z_S(\zeta_{\ell_1}^{j_1},\zeta_{\ell_2}^{j_2};\tau)$ such that $\Lambda(j_1/\ell_1,j_2/\ell_2) = \Lambda(0,0)$. If this weight is less than or equal to that of $Z_S(1,1;\tau)$, then for all $(x,y) \in [1/3,2/3] \times [2/5,3/5]$, we have $\Lambda(x,y)< \Lambda(0,0)$. It follows that there is some $(j_1,j_2) \not \equiv (0,0)$ such that $\Lambda(j_1/\ell_1,j_2/\ell_2) < \Lambda(0,0)$, contradicting our initial assumption. Therefore the weight of $Z_S(\zeta_{\ell_1}^{j_1},\zeta_{\ell_2}^{j_2};\tau)$ is greater than that of $Z_S(1,1;\tau)$. The final statement of Theorem \ref{thm:equidistribution} follows from the fact that $\Lambda(0,0) - \Lambda(1/2,1/2) = h^{1,0}/2$. \subsection{The case of $S=\C\mathbb{P}^2$} We illustrate Theorem \ref{thm:exact_formulas} and Theorem \ref{thm:equidistribution} with numerics where $S = \C\mathbb{P}^2$. 
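Before turning to the tables, recall that $\Lambda$ is built from the periodized second Bernoulli polynomial $P_2(x)=\{x\}^2-\{x\}+1/6$. The identity $\Lambda(0,0)-\Lambda(1/2,1/2)=h^{1,0}/2$ used above can be checked in exact arithmetic; the ranges of Hodge numbers below are illustrative placeholders:

```python
from fractions import Fraction

def P2(x):
    # Periodized second Bernoulli polynomial: B_2({x}) = {x}^2 - {x} + 1/6.
    x = Fraction(x)
    frac = x - (x.numerator // x.denominator)   # fractional part {x}
    return frac * frac - frac + Fraction(1, 6)

def Lam(x, y, h10, h00=1, h20=0):
    # Lambda(x,y) = h^{1,0}(P2(x)+P2(y)) - h^{0,0}P2(x+y) - h^{2,0}P2(x-y)
    return (h10 * (P2(x) + P2(y))
            - h00 * P2(Fraction(x) + Fraction(y))
            - h20 * P2(Fraction(x) - Fraction(y)))

# Lambda(0,0) - Lambda(1/2,1/2) = h^{1,0}/2, whatever h^{2,0} is:
half = Fraction(1, 2)
for h10 in range(5):
    for h20 in range(3):
        diff = Lam(0, 0, h10, 1, h20) - Lam(half, half, h10, 1, h20)
        assert diff == Fraction(h10, 2)
```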
For the purposes of illustrating Theorem \ref{thm:exact_formulas}, we consider \begin{equation*} Z_S(\zeta_3, -1;\tau) = 1 + 2 q + 4 q^2 + 7 q^3 + 12 q^4 + 20 q^5 + \cdots. \end{equation*} While Theorem~\ref{thm:exact_formulas} furnishes an infinite sum in $k$, in Tables \ref{table_2} and \ref{table_75} we approximate our exact formula by summing $k$ up to $N$, where $N$ is $2$ and $75$, respectively. \renewcommand{\arraystretch}{1} \begin{table}[h] \begin{center} \scalebox{0.9}{\begin{tabular}{|c|c|c|c|c|c|}\hline $n$ & $1$ &$2$ & $3$ & $4$ & $5$ \\\hline $\xi_{2,S}(1,3,1,2)$ & $1.9374...$ & $3.8920...$ & $7.0204...$ & $12.1616...$ & $20.0159...$ \\\hline \end{tabular}} \end{center} \caption{Approximate values in Theorem \ref{thm:exact_formulas}, $N=2$} \label{table_2} \end{table} \begin{table}[h] \small \begin{center} \scalebox{0.9}{\begin{tabular}{|c|c|c|c|c|c|}\hline $n$ & $1$ &$2$ & $3$ & $4$ & $5$ \\\hline $\xi_{75,S}(1,3,1,2)$ & $1.9989...$ & $4.0005...$ & $6.9995...$ & $12.0010...$ & $19.9995...$ \\\hline \end{tabular}} \end{center} \caption{Approximate values in Theorem \ref{thm:exact_formulas}, $N=75$} \label{table_75} \end{table} Tables \ref{table_equidistribution} and \ref{table_zeros} show the asymptotic equidistribution of the Hodge numbers of $\C\mathbb{P}^2$, which falls into case (1) of Theorem \ref{thm:equidistribution}. For this purpose, we define the proportions $$\Theta^{r_1,r_2}_{\ell_1, \ell_2, S}(n): = \frac{\gamma_S(r_1,\ell_1, r_2, \ell_2;n)}{\sum_{\substack{ j_1 \mod{\ell_1} \\ j_2 \mod{\ell_2}}} \gamma_S(j_1,\ell_1, j_2, \ell_2;n)}.$$ In Table \ref{table_equidistribution}, where $\gcd(\ell_1,\ell_2) = \gcd(3, 2) = 1$, we see asymptotic equidistribution, while in Table \ref{table_zeros}, where $\gcd(\ell_1,\ell_2) = \gcd(2, 4) = 2$, we get asymptotic equidistribution when $r_1 \equiv r_2 \ \mod{2}$ and $0$ otherwise. 
These numerics also suggest many underlying equalities that exist amongst the $\gamma_S(r_1, \ell_1, r_2, \ell_2)$ for different values of $(r_1, r_2)$ as a result of the symmetries of $Z_S(\zeta_{\ell_1}^{r_1}, \zeta_{\ell_2}^{r_2}; \tau)$. \renewcommand{\arraystretch}{1.5} \begin{table}[h] \small \begin{center} \scalebox{0.9}{\begin{tabular}{|c|c|c|c|c|c|}\hline $n$&$5$ & $10$ &$15$ & $20$ &$25$ \\\hline $\Theta^{0,0}_{3,2,S}(n)$ & $ 0.2222... $ & $0.1886... $ & $0.1752... $ & $0.1708... $ &$0.1687... $ \\\hline $\Theta^{0,1}_{3,2,S}(n)$ & $ 0.1111... $ & $0.1446... $ & $0.1582... $ & $0.1624... $ &$0.1646... $ \\\hline $\Theta^{1,0}_{3,2,S}(n)$ & $ 0.1296... $ & $0.1571... $ & $0.1619... $ & $0.1646... $ &$0.1655... $ \\\hline $\Theta^{1,1}_{3,2,S}(n)$ & $ 0.2037... $ & $0.1761... $ & $0.1712... $ & $0.1686... $ &$0.1677... $ \\\hline $\Theta^{2,0}_{3,2,S}(n)$ & $ 0.1296... $ & $0.1571... $ & $0.1619... $ & $0.1646... $ &$0.1655... $ \\\hline $\Theta^{2,1}_{3,2,S}(n)$ & $ 0.2037... $ & $0.1761... $ & $0.1712... $ & $0.1686... $ &$0.1677... $ \\\hline \end{tabular}} \end{center} \caption{Comparative asymptotic properties of $\gamma_{S}(r_1, 3, r_2, 2; n)$ } \label{table_equidistribution} \end{table} \begin{table}[H] \small \begin{center} \scalebox{0.9}{\begin{tabular}{|c|c|c|c|c|c|}\hline $n$&$5$ & $10$ &$15$ & $20$ &$25$ \\\hline $\Theta^{0,0}_{2,4,S}(n)$ & $0.2592...$ & $ 0.2545... $ & $0.2503... $ & $0.2503... $ & $0.2500... $ \\\hline $\Theta^{0,2}_{2,4,S}(n)$ & $ 0.2222... $ & $0.2484... $ & $0.2488... $ & $0.2498... $ &$0.2498... $ \\\hline $\Theta^{1,1}_{2,4,S}(n)$ & $ 0.2592... $ & $0.2484... $ & $0.2503... $ & $0.2498... $ &$0.2500... $ \\\hline $\Theta^{1,3}_{2,4,S}(n)$ & $ 0.2592... $ & $0.2484... $ & $0.2503... $ & $0.2498... $ &$0.2500... 
$ \\\hline $\Theta^{r_1,r_2}_{2,4,S}(n)$ & $ 0 $ & $ 0 $ & $0 $ & $0 $ &$0 $ \\\hline \end{tabular}} \end{center} \caption{Comparative asymptotic properties of $\gamma_{S}(r_1, 2, r_2, 4; n)$ } \label{table_zeros} \end{table}
\section{Introduction} Electron-phonon coupling in molecular systems is at the heart of several important physical phenomena, including the mobility of carriers in organic electronic devices, \cite{Gosar66,Coropceanu07,Fratini09,Ortmann09} the dissociation of excitons at the donor/acceptor interface in organic photovoltaic cells, \cite{Tamura08} or the superconducting transition in molecular solids, from the most famous fulleride case \cite{Hebard91,Gunnarsson97,Ganin08} to the recent alkali doped picene. \cite{Mitsuhashi10} Even though the interplay between phonon-mediation and electronic correlations is still being discussed to better rationalize the superconducting transition all across the fullerides family, \cite{Tosatti} the magnitude of the electron-phonon coupling (EPC) in $C_{60}$ has been the subject of numerous theoretical and experimental studies since the early 90s in order to evaluate in particular the effective phonon-mediated attractive potential $V^{ep}$ central to the BCS theory. Of particular relevance for electron-doped fullerenes, the coupling to the lowest unoccupied molecular orbital (LUMO) was explored extensively on the basis of various theoretical approaches, \cite{Gunnarsson97} from earlier combinations of semi-empirical and density functional theory (DFT) calculations \cite{Varma91,Schluter92,Mazin92} to fully first-principles DFT studies. \cite{deCoulomb92,Faulhaber93,Antropov93,Breda98,Manini01,Saito02,Janssen10,Iwahara10} Concerning the contribution of the $H_g$ vibrational modes, values from 38 meV to 68 meV were calculated within DFT and (semi)local functionals such as the local density approximation (LDA), while the $A_g$ modes were found to provide a much smaller contribution, consistently below about 10 meV. 
These calculated energies are significantly lower than the available experimental values extracted from photoemission (PES) experiments on isolated fullerenes, with $V^{ep}$ found to extend from 96~meV to 147~meV for the $H_g$ modes contribution, \cite{Gunnarsson95,Wang05,Hands08} and from 107~meV to 158~meV including both $A_g$ and $H_g$ contributions. \cite{Gunnarsson95,Wang05} In recent work, \cite{Saito02,Janssen10,Iwahara10} the EPC in $C_{60}$ was revisited using DFT and hybrid functionals. An important outcome of these studies was a significant increase of $V^{ep}$ with increasing percentage of exact exchange within modified B3LYP or PBE functionals. This clearly indicates that in such systems, not only the electronic excitation energies, but also the EPC constants, are very sensitive to the choice of the exchange-correlation functional. Despite the overall better agreement with experiments when hybrid functionals are used, it is unclear which amount of exact exchange should be used for the fullerenes, or for any finite or extended system in general. Further, the evaluation of the coupling constant to individual energy levels such as the $t_{1u}$ state in $C_{60}$ relies on the identification of the Kohn-Sham eigenstates and eigenvalues with proper electronic quasiparticle states and excitation energies. From a pragmatic point of view, the amount of exact exchange in e.g. B3LYP is adjusted to reproduce ground-state properties of a set of molecular systems, \cite{B3LYP} but this does not guarantee that the Kohn-Sham eigenvalues correctly reproduce quasiparticle energies. For example, the $C_{60}$ Kohn-Sham HOMO-LUMO gap is 2.8 eV within the DFT-B3LYP approach. \cite{ShucklaC60,ZhangC60} This is better than the 1.6 eV obtained within DFT-PBE, \cite{PBE} but still significantly smaller than the 4.9 eV experimental gap in the gas phase. 
\cite{NIST} In the present work, we study the electron-phonon coupling in the $C_{60}$ fullerene using the first-principles $GW$ approach, which provides well-defined and accurate quasiparticle energies within a parameter-free many-body perturbation theory framework. We focus on the threefold-degenerate $t_{1u}$ lowest unoccupied molecular orbital (LUMO), which forms the conducting states in electron-doped fullerides and thus determines the superconducting properties. We find that the electron-phonon potential $V^{ep}$ is increased by as much as 48$\%$ as compared to DFT-LDA calculations, bridging the gap with experimental data. In particular, the contribution from the $H_g$ modes is now found to be within 4$\%$ of the two most recent experimental estimates. The present results may invite a reassessment of previous DFT calculations of the electron-phonon coupling constants involved e.g. in the study of superconductivity in molecular or extended systems. \section{Methodology} In the $GW$ quasiparticle formalism, \cite{Hedin65,Strinati80,Hybertsen86,Godby88,Onida02} for which decades of expertise exist in the case of bulk systems, \cite{Onida02,Aulbur95} the exchange-correlation potential is described by a non-local energy-dependent self-energy $\Sigma({\bf{r,r'}}|E)$ which can be expressed as follows: \begin{eqnarray*} \Sigma^{GW}({\bf{r,r'}}|E) &=& { i \over 2\pi } \int d\omega \; G({\bf{r,r'}}|E+\omega) W({\bf{r,r'}}|\omega) \\ G({\bf{r,r'}}|\omega) &=& \sum_n \phi_n({\bf{r}}) \phi_n^*({\bf r'}) / (\omega - {\varepsilon}_n \pm i\delta) \\ W({\bf{r,r'}}|\omega) &=& \int d{\bf r''} \; V^C({\bf{r,r''}}) \epsilon^{-1}({\bf{r'',r'}}|\omega) \label{sigma} \end{eqnarray*} \noindent where $G$ is the time-ordered Green's function \cite{infinitesimal} and $W$ the dynamically screened Coulomb potential built from the bare Coulomb potential $V^C$ and the non-local inverse dielectric matrix $\epsilon^{-1}$ at finite frequency. 
For extended solids, the ``starting'' $({\varepsilon}_n,\phi_n)$ eigenstates used to build $G$ and the dielectric response are traditionally obtained from a ground-state DFT calculation with (semi)local functionals such as LDA or PBE. In the case of isolated molecules, the $GW$ approach was recently thoroughly validated on a large set of small molecules \cite{Rostgaard10} and larger organic ones such as fullerenes, porphyrins \cite{Blase11} or DNA/RNA nucleobases. \cite{Faber11} Excellent agreement with experiment for the ionization energies and electronic affinities was obtained through a simple self-consistency on the eigenvalues with DFT-LDA eigenstates used as the starting point. \cite{Blase11,Faber11,starting} In this latter approach, labeled $GW$ in what follows, the HOMO-LUMO gap of gas phase $C_{60}$ was calculated to be 4.91 eV, \cite{Blase11} in much better agreement with experiment than the DFT-B3LYP Kohn-Sham value. Our calculations are based on a recently developed \cite{Blase11,Blase04} gaussian-basis implementation of the $GW$ formalism (the {\sc{Fiesta}} code) with explicit treatment of dynamical correlations through contour deformation techniques. We start from DFT-LDA eigenstates calculated with the {\sc Siesta} package \cite{Siesta} and a double-zeta plus polarization (DZP) basis \cite{DZP} for the description of the valence orbitals, combined with standard norm-conserving pseudopotentials. \cite{TM} As shown below, the resulting electron-phonon coupling potentials are very similar to those obtained with all-electron calculations, \cite{Antropov93,Saito02,Janssen10} at least at the DFT level for which several studies are available. 
While $GW$ calculations exploiting DFT eigenstates generated with pseudopotentials represent the most common approach, \cite{Hybertsen86,Godby88,Onida02} a specific aspect of the present gaussian-basis implementation is that the auxiliary basis described below has been optimized \cite{Kaczmarski10,Blase11,Faber11} to project onto the products of occupied/unoccupied pseudized eigenstates. The needed two-point operators, such as the dynamical and non-local free-electron susceptibility $\chi_0(\textbf{r},\textbf{r'}|\omega)$, the screened Coulomb potential $W(\textbf{r},\textbf{r'}|\omega)$ and the self-energy operator $\Sigma^{GW}(\textbf{r},\textbf{r'}|\omega)$, are expressed on an auxiliary even-tempered gaussian basis consisting of 4 gaussians for each (\textit{s,p,d})-channel, with localization decay constants (0.2,0.5,1.25,3.2) a.u. Such a basis was thoroughly tested in the $GW$ study of a retinal chromophore, \cite{Kaczmarski10} of fullerenes, porphyrins or phthalocyanines, \cite{Blase11} and of DNA/RNA nucleobases. \cite{Faber11,productbasis} For numerical accuracy when calculating the correlation contribution to the self-energy, we first evaluate $\Sigma_c^{GW}(E)$ on a coarse energy grid to get a first estimate of the quasiparticle energy, and then recalculate $\Sigma_c^{GW}(E)$ on a fine energy grid around this energy to refine our calculated correlation contribution. We verify as well that performing the imaginary-axis integration needed in the contour deformation technique (see Ref.~\onlinecite{Blase11}) with 12 gaussian points yields results within 0.1 meV of a calculation using 20 gaussian points. Following the results of Ref.~\onlinecite{Janssen10}, we use the relaxed structure and phonon eigenmodes generated within the DFT-B3LYP approach and a 6-311G(d) basis. \cite{modes} This approach was shown to yield the best eigenfrequencies as compared to Raman experiments. 
\cite{notegw,gaussian} The EPC matrix elements are evaluated using a direct frozen-phonon technique. Namely, we deform the molecule along the $\vec{\textsl e}_{\nu}$ vibrational eigenvectors with typical amplitudes of 0.05~\AA\ and compute the slope $\; (\vec{\textsl e}_{\nu} \cdot \vec{\nabla}) \varepsilon_i$ of the variation of the DFT Kohn-Sham eigenvalues, and further of the $GW$ quasiparticle energies, with respect to the deformation amplitude. We verify that we remain in the linear regime, as confirmed by the small error on the regression coefficient within the fitting procedure. Group-theory analysis shows that the ($t_{1u} \otimes t_{1u}$) direct product projects only onto the non-degenerate $A_g$ modes and the five-fold $H_g$ vibrational modes, significantly reducing the number of matrix elements to be calculated. Still, ten modes can contribute to the coupling, so that a very large number of $GW$ calculations are needed. To conclude this methodology section, we note that in the present frozen-phonon approach, where atoms are explicitly displaced, the calculated electron-phonon coupling potentials may be subject to errors related to the use of localized bases (``Pulay errors''). This issue was previously explored at the DFT level by comparing localized-basis and planewave calculations, showing small differences (see Supplementary Materials, Ref.~\onlinecite{Janssen10}). In the present case of $GW$ calculations, we verify below that increasing the size of both the DFT and auxiliary bases, and taking more diffuse auxiliary orbitals, hardly changes the value of the coupling constants, again suggesting small errors related to the use of finite atom-centered bases.
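The slope-extraction step just described amounts to an ordinary least-squares fit of each level $\varepsilon_i$ against the deformation amplitude. The following minimal Python sketch illustrates the idea with made-up, perfectly linear toy data (the amplitudes, energies and slope value are illustrative assumptions, not numbers from this work):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit y ~ a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Toy frozen-phonon data: eigenvalues sampled at a few deformation
# amplitudes u along a vibrational eigenvector (arbitrary units).
u = [-0.05, -0.025, 0.0, 0.025, 0.05]
eps = [1.6 + 2.0 * x for x in u]        # assumed slope of 2.0

# The fitted slope plays the role of (e_nu . grad) eps_i in the text;
# a near-zero residual would confirm the linear regime.
slope, intercept = linear_fit(u, eps)
```

In practice one would repeat such a fit for each of the three $t_{1u}$ levels and each contributing mode.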
\section{Results} Our results are gathered in Table I where we provide the EPC potential contribution $V^{ep}_{\nu}$ for each of the ten relevant modes, including their degeneracy, namely: \begin{eqnarray*} V^{ep}_{\nu} = { g_{\nu} \over M \omega_{\nu}^2 } \sum_{i,j=1}^{3} { | < \phi_i | (\vec{\textsl e}_{\nu} \cdot \vec{\nabla}) V^{SCF} | \phi_j > |^2 \over g_{t1u}^2 } \end{eqnarray*} \noindent where $(\vec{\textsl e}_{\nu} \cdot \vec{\nabla}) V^{SCF}$ is the normalized variation of the self-consistent potential under distortion of the molecule along the vibrational mode with index ($\nu$), degeneracy $g_{\nu}$ and frequency $\omega_{\nu}$. The $(i,j)$ indices run over the $t_{1u}$ manifold with $g_{t1u}$=3 degeneracy. The above formula is the molecular limit \cite{Schluter92,Antropov93} of the central definition used in \textit{ab initio} studies of phonon-mediated superconductivity in extended solids. In the present frozen-phonon approach, \cite{Antropov93,Breda98,Manini01,Saito02,Janssen10} the explicit deformation of the molecule diagonalizes the eigenstates with respect to the perturbation, leaving only the intraband transitions which, thanks to Hellmann-Feynman theorem, can be calculated through the variation of the corresponding energy levels, namely: $$ \sum_{i,j=1}^{3} |< \phi_i | (\vec{\textsl e}_{\nu} \cdot \vec{\nabla}) V^{SCF} | \phi_j > |^2 = \sum_{i=1}^{3} | (\vec{\textsl e}_{\nu} \cdot \vec{\nabla}) \varepsilon_i |^2 $$ \noindent with derivatives calculated through finite differences. This connects EPC matrix elements and the variation of the electronic energy levels with respect to vibrational displacements. This approach is similar to that of Refs.~\onlinecite{Antropov93,Breda98,Manini01,Saito02,Janssen10} but we use both the $GW$ quasiparticle energies and the DFT Kohn-Sham eigenvalues, allowing direct comparison. 
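With the Hellmann-Feynman simplification above, each mode's contribution to the coupling potential reduces to a simple combination of the squared level slopes. A minimal sketch of the molecular-limit formula follows; all numerical values and the function name are illustrative assumptions, not data from Table I:

```python
def v_ep_mode(level_slopes, omega, g_nu, mass, g_t1u=3):
    """Molecular-limit EPC contribution of one mode (toy units):
    V^ep_nu = g_nu / (M * omega^2) * sum_i |(e_nu . grad) eps_i|^2 / g_t1u^2,
    where the matrix-element sum has been replaced by the squared
    slopes of the t_1u energy levels (Hellmann-Feynman identity)."""
    return g_nu / (mass * omega ** 2) \
        * sum(s ** 2 for s in level_slopes) / g_t1u ** 2

# Illustrative slopes for the three t_1u levels of one H_g-like mode:
v = v_ep_mode(level_slopes=[0.3, 0.1, -0.4], omega=1.0, g_nu=5, mass=2.0)
# Summing v_ep_mode over all contributing modes would give V^ep.
```
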
As an internal accuracy test, following early group symmetry analysis, \cite{Schluter92} the trace of an $H_g$ perturbation is zero in the ($t_{1u}$) subspace, namely: $\; \sum_{i=1}^3 (\vec{\textsl e}_{\nu} \cdot \vec{\nabla}) \varepsilon_i = 0$, a condition which is well verified within our DFT and $GW$ calculations. Our LDA data yield a total 73.4 meV coupling, in good agreement with the 75.8 meV all-electron PBE value of Ref.~\onlinecite{Janssen10}. Comparing to other similar calculations, namely extracting the coupling coefficient from the evolution of the DFT-LDA Kohn-Sham eigenvalues under molecular distortion, our 65 meV value for the $H_g$ modes contribution is also very close to the 68 meV value by Antropov and coworkers, \cite{Antropov93} within a full-potential framework, and the 67 meV obtained with an all-electron gaussian basis. \cite{Saito02} Consistently with early Raman analysis, \cite{Gunnarsson97} all studies agree on the predominance of the two high energy $H_g(8)$ and $H_g(7)$ tangential modes, but contributions at lower energy such as from the $H_g(2)$ and $H_g(3)$ radial modes are found to be important as well. 
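The $H_g$ trace sum rule used above as an internal accuracy test can be checked mechanically. A small sketch with hypothetical slope triples (the tolerance and the values are assumptions for illustration):

```python
def hg_trace_ok(level_slopes, tol=1e-6):
    """Check the H_g sum rule over the t_1u manifold:
    sum_i (e_nu . grad) eps_i = 0, within a numerical tolerance."""
    return abs(sum(level_slopes)) < tol

# The first hypothetical triple respects the sum rule, the second does not.
ok = hg_trace_ok([0.3, 0.1, -0.4])
bad = hg_trace_ok([0.3, 0.1, 0.4])
```
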
\begin{table*} \begin{tabular}{c|c|c|c|c|c|c|c|c|c} \hline \hline Mode & \multicolumn{6}{c}{Theory} & \multicolumn{3}{c} {Experiments} \\ \hline & $\omega$ (cm$^{-1}$) & LDA & B3LYP & Hybrids & $G_0W_0$(LDA) & $GW$ & Iwahara$^a$ & Hands$^b$ & Gunnarsson$^c $ \\ \hline $A_g(1)$ & 496 & 0.5 & 1.2 & 1.2 - 1.7 & 1.2 & 1.0 (107$\%$) & & & \\ $A_g(2)$ & 1492 & 7.7 & 10.9 & 10.5 - 12.9 & 13.6 & 15.0 (93$\%$) & & & \\ $H_g(1)$ & 265 & 5.1 & 5.8 & 5.3 - 6.0 & 4.4 & 6.4 (27$\%$) & & & \\ $H_g(2)$ & 435 & 9.9 & 10.8 & 10.8 - 13.8 & 15.3 & 11.2 (14$\%$) & & & \\ $H_g(3)$ & 721 & 9.1 & 11.9 & 11.0 - 16.7 & 12.3 & 13.9 (53$\%$) & & & \\ $H_g(4)$ & 785 & 4.2 & 5.2 & 4.2 - 5.3 & 4.7 & 5.6 (36$\%$) & & & \\ $H_g(5)$ & 1123 & 4.2 & 5.0 & 5.0 - 6.7 & 4.2 & 5.2 (23$\%$) & & & \\ $H_g(6)$ & 1265 & 2.1 & 2.1 & 2.0 - 4.2 & 2.3 & 2.3 (9$\%$) & & & \\ $H_g(7)$ & 1442 & 16.9 & 23.0 & 23.0 - 27.7 & 20.0 & 27.6 (63$\%$) & & & \\ $H_g(8)$ & 1608 & 13.7 & 17.7 & 17.0 - 19.3 & 15.6 & 20.4 (49$\%$) & & & \\ \hline Total $A_g$ & - & 8.2 & 12.2 & 12.1 - 14.3 & 14.8 & 16.0 (95$\%$) & 10.6 & - & 11.3 \\ Total $H_g$ & - & 65.2 & 81.5 & 80.0 - 96.3 & 78.8 & 92.6 (42$\%$) & 96.2 & 96.5 & 147.0 \\ \hline Total & - & 73.4 & 93.7 & 93.7 - 110.7 & 93.6 & 108.6 (48$\%$) & 106.8 & - & 158.3 \\ \hline \hline \end{tabular} \caption{Calculated electron-phonon coupling contributions to V$^{ep} $ for the $A_g$ and $H_g$ modes (meV) calculated within LDA, B3LYP (Ref.~ \onlinecite{Janssen10}), DFT with various hybrid functionals at the same 6-311G(d) level (column Hybrids with data from Refs.~ \onlinecite{Saito02,Janssen10,Iwahara10}), non-self-consistent $G_0W_0$(LDA) and $GW$. The percentage of increase as compared to LDA is indicated in parenthesis. The experimental data are compiled in the three last columns. \\ $^a$ Ref.~\onlinecite{Iwahara10}, Table V. 
\\ $^b$ Ref.~\onlinecite{Hands08} \\ $^c$ Ref.~\onlinecite{Gunnarsson95} \\ } \label{table} \end{table*} The central result of the present study is the dramatic 48$\%$ increase of the total coupling potential within the $GW$ approach as compared to LDA calculations. $V^{ep}$ is indeed found to increase from 73.4 meV (LDA) to 108.6 meV ($GW$). For the $H_g$ modes, the calculated $GW$ value of 92.8 meV agrees well with the two most recent 96.2 meV and 96.5 meV independent experimental estimates of Ref.~\onlinecite{Iwahara10} (Table V) and Ref.~\onlinecite{Hands08}, respectively. Further, the total $GW$ coupling of 108.6 meV is in close agreement with the latest 106.7 meV experimental value. \cite{Wang05,Iwahara10,convergency} The present results clearly question the accuracy of the EPC calculated within DFT and (semi)local functionals. In view of the remarkable agreement with experiment obtained with the present parameter-free $GW$ formalism, one can hope that this approach will improve our understanding of phonon-mediated processes in general. \section{Discussion} We can now comment on the recent studies performed with hybrid functionals. Since both the experimental and $GW$ total coupling potentials fall within the rather large $[$93,111$]$ meV energy range obtained by changing the amount of exact exchange from 20$\%$ to 30$\%$ in the hybrid DFT approaches, \cite{Saito02,Janssen10,Iwahara10} one could certainly build a functional yielding excellent agreement with experiment for this specific $C_{60}$ system. However, the amount of exact exchange needed may clearly vary from one system to another (see below). It is further instructive to compare the various approaches mode by mode. Considering e.g. the $A_g(2)$ and $H_g(8)$ modes, which show large coupling, it appears that the largest amount of exact exchange tested so far (30$\%$) is not enough to reach the $GW$ results.
In contrast, the $GW$ values for the $H_g(n=2,3,5,6)$ modes are well within the hybrid functionals range. This suggests that even for a given single molecule, it seems difficult to optimize the amount of exact exchange so as to reproduce the $GW$ results mode by mode. This observation leads to emphasizing the importance of the non-local and dynamical correlation part of the $GW$ self-energy. In our approach where only the energy levels are updated, but not the wavefunctions, the differences between DFT-LDA and $GW$ results can only stem from the replacement of the exchange-correlation functional by the $GW$ self-energy. An interesting observation is that a non-self-consistent $G_0W_0$ calculation starting from LDA eigenstates (see column 6 of Table I) leads to a coupling constant which is still significantly larger than the DFT-LDA value, but smaller than the $GW$ one, and very similar to that of the hybrid B3LYP functional. As emphasized in recent work, \cite{Rostgaard10,Blase11,Faber11} in the case of molecular systems, the significantly too small starting LDA gap leads to a large overscreening in the standard $G_0W_0$(LDA) approach. In the present C$_{60}$ case, the $G_0W_0$(LDA) gap is found to be 4.4 eV, much better than the 1.6 eV LDA value, but still smaller than the 4.9 eV experimental and $GW$ values. Qualitatively, this overscreening certainly softens the variations of the ionic and electronic potential seen by the electrons upon lattice distortion. Recently, \cite{Lazzeri08} the EPC matrix elements in graphene for the ${\Gamma}$-$E_{2g}$ and $K$-$A_{1}'$ phonon modes were studied within a non-self-consistent $G_0W_0$(LDA) approach. As compared to DFT-LDA calculations, the square of the deformation potentials, labeled ${\langle}D^2_{\Gamma}{\rangle}$ and ${\langle}D^2_K{\rangle}$, were shown to increase by 41$\%$ and 114$\%$ respectively. 
\cite{dopedgraphene} This is consistent with our own results, suggesting that EPC matrix elements are significantly affected by the $GW$ correction in both finite and extended systems. A further important outcome of that study was that, in graphene, the DFT-B3LYP approach yields significantly too large coupling constants as compared to experiment, in contrast with the present case of fullerenes, where the DFT-B3LYP calculations underestimate the coupling. This certainly points to the difficulty of obtaining hybrid functionals which are accurate for both finite and extended systems. \section{Conclusion} In conclusion, we have used a first-principles $GW$ approach to study the electron-phonon coupling strength in the $C_{60}$ fullerene, focusing on the $t_{1u}$ LUMO state of interest for the superconducting transition in the fullerides. It is found that within $GW$, the electron-phonon potential $V^{ep}$ increases by 48$\%$ as compared to the value calculated within DFT and (semi)local functionals such as LDA or PBE. The calculated 93 meV $GW$ value for the $H_g$-mode contribution comes within 4$\%$ of the two most recent experimental estimates. This demonstrates that the present parameter-free approach allows a precise determination of the electron-phonon coupling potential in one of the most studied molecular systems. Beyond the important case of the fullerenes, the present results call for a reinvestigation of previous DFT-based calculations of the electron-phonon coupling in organic systems, and possibly as well in ``covalent superconducting systems'' such as MgB$_2$ or doped diamond. Similarly, the important phonon-induced renormalization of the electron and hole band widths in organic semiconductors may deserve further inspection beyond previous DFT calculations. \textbf{Acknowledgements.} C.F. is indebted to the EU Erasmus program for funding. The authors acknowledge numerous suggestions from V. Olevano and C. Attaccalite.
Calculations have been performed on the CIMENT platform (Grenoble) thanks to the Nanostar RTRA project and at IDRIS, Orsay (project 100063). M.C. and J.L.J. would like to acknowledge the support of NSERC and FQRNT.
\section{Closing Remarks} \label{Section:Conclusion} We believe there is not a unique or even a right or wrong way of defining the features and answering the questions raised in this paper. Instead, we envision researchers investigating and implementing different query languages for the Web of Data. If this is the case, another question arises: How do we evaluate and meaningfully compare different approaches? To summarize, what should a query language for the Web of Data be? We do not know yet! However, we hope to have an answer to this question in the next 10 years. \section{Features} \label{Section:Features} \noindent {\bf Scope:} According to the current SPARQL standard, the scope of a query is a predefined RDF dataset. A query language for the Web of Data should not have such a fixed scope; instead, it should take advantage of the openness and the unbounded nature of the Web. A basis for defining the scope of queries in this context is a model of the Web of Data. We ask ourselves: \begin{itemize} \addtolength{\itemsep}{-0.5\baselineskip} \item What characteristics of the Web of Data are relevant for a data model that can be used as the foundation for a query language? \item How would such a model deal with the dynamic nature of the Web? \item Should such a model capture different approaches of exposing datasets on the Web? \item How can the scope of queries be restricted to a particular, declaratively defined portion of the Web of Data? \end{itemize} \noindent {\bf Language Expressiveness:} The expressiveness of a query language is characterized by the type of questions that can be asked using the language. However, adding expressive power usually increases the computational complexity of a query language. This issue becomes even more important in the context of computing queries at Web scale. Hence, developing a query language for the Web of Data comprises the challenge of finding a trade-off between expressiveness and complexity.
Since the answer to this problem may be different, depending on the usage scenario, we foresee the emergence of multiple approaches. We ask ourselves: \begin{itemize}\addtolength{\itemsep}{-0.5\baselineskip} \item Should the language be concerned with record linkage and semantically overlapping vocabularies? Should the language deal with entity and vocabulary mappings? \item What operators should the language support? Which are unsuitable (e.g.~negation)? \item Could unsuitable operators be included by enabling users to declaratively bound the scope for them? How can such a bounded scope be declared in the queries (e.g.~based on namespaces, based on specific SPARQL endpoints, etc)? \item Should the query language consider the topology of the Web and allow users to specify path expressions for explicitly guiding link traversal based data discovery? \item Can provenance requirements be expressed in the language? \item Should the language be concerned with trustworthiness of data (or other criteria of information quality)? Could we make quality requirements explicit in queries? \end{itemize} \balance \noindent {\bf Query Results:} Queries are executed over collections of data in order to compute results that answer the questions expressed by the queries. In the context of the Web of Data it is not obvious what such an answer should be, because the data collection is unbounded and uncontrolled. Furthermore, some data (and thus query results) may not be considered trustworthy by certain users. On the other hand, personalized query semantics may emerge and query results could be influenced by the query history and behavior of a user's friends. Thus, depending on the use cases we expect different types of results for the same query. Hence, we foresee multiple approaches for defining what a query result is. 
We ask ourselves: \begin{itemize}\addtolength{\itemsep}{-0.5\baselineskip} \item Should query results be assessed based on soundness and completeness, precision and recall, a combination thereof, or even something else? \item If a query is executed multiple times, should the results be incremental? \item Should query semantics be monotonic or non-monotonic? \item Should query results depend on social aspects? \item Can parts of the Web of Data conceptually be locked during query execution? If not, what should the result be if the execution of a query uses data that might have already been altered by the time the execution terminates? \item Should query results include their provenance? \item Should query results be associated with trustworthiness scores? \end{itemize} \noindent {\bf Implementation Aspects:} Declarative queries can be computed in multiple ways, applying different execution strategies. Different query execution plans may be formed by combining alternative data access paths and join algorithms. Query optimizers estimate costs for such plans in order to determine the most suitable plan. Such costs usually depend on I/O and statistics about the queried data. In the context of the Web of Data, such information may not be available or even relevant. On the other hand, new criteria become relevant (e.g.~network latency). Additionally, query semantics may require the discovery and exploitation of vocabulary mappings or entity mappings. Depending on the expressiveness of the query language, determining suitable query plans may become much more complex than it is in traditional query optimization scenarios. We foresee a significant focus on adaptive query processing instead of traditional optimize-then-execute, due to the lack of control on the Web of Data. We ask ourselves: \begin{itemize}\addtolength{\itemsep}{-0.5\baselineskip} \item What is a logical and a physical execution plan for querying the Web of Data? 
\item Can optimization strategies developed in the database community be applied? \item How can a cost model be defined? What should it depend on? \item What type of statistics could be used to optimize queries? \item Can a query be optimized based on query plans from my friends' query engines? \item How can discovered data and intermediate results be indexed or cached? \item How can vocabulary mappings be found and used efficiently? \item How can entity mappings (owl:sameAs links) be found and used efficiently? \end{itemize} \section{Introduction} \label{Section:Intro} As of today, 2011, the Web of Data is composed of RDF based datasets that are exposed on the World Wide Web i) in adherence to the Linked Data principles, ii) using the SPARQL protocol \emph{or} iii) as static RDF documents. The Web of Data is in constant growth. We foresee that it will become more wild, uncontrollable and infinite. It will have no boundaries and will grow faster than it can be crawled. It will be gigantic ... and we want to query it! What do we mean by ``querying the Web of Data''? From the current Web search paradigm, it could mean that we first crawl the data, index it, and then search based on keywords over the indexed data. From a data warehouse perspective, it could mean to copy relevant datasets into a local RDF database and execute queries over the local collection. Another approach would be to execute declarative queries on the fly over the Web of Data itself. In this vision paper, we focus on the latter because it entails new challenges and open questions, which we will describe in this paper. The main question is, what should a declarative language for such an approach be? Since the Web of Data is based on the RDF model and SPARQL is the standard query language for RDF, it seems natural to ask: Is SPARQL suitable as a declarative language to query the Web of Data? 
The semantics of SPARQL, as given in the current standard, considers a single, fixed, a priori defined RDF dataset. It has been defined in the context of databases and logic, and query results are assessed based on the concept of soundness and completeness. However, to query the Web of Data, we would need a query language that considers an unbounded, distributed collection of RDF data which cannot be assumed to be known completely. What characteristics of the Web of Data should be considered in order to define such a query language? In the remainder of this paper, we first present related work and argue that research on querying the Web of Data is still in its infancy. We then provide an initial set of general features that we envision should be considered in order to define a query language for the Web of Data. Furthermore, for each of these features, we pose questions that have not been addressed before in the context of querying the Web of Data. We believe that addressing these questions and studying these features may guide the next 10 years of research on the Web of Data. \section{Related work} \label{Section:RelWork} Research on querying the World Wide Web started in the mid 1990s~\cite{Florescu98:DBTechniquesForWWW}. It is important to note that the Web at that time only consisted of linked hypertext documents. Most of the research was based on developing models to represent the Web (e.g.~\cite{Mendelzon98:FormalModelsOfWebQueries,Abiteboul00:QueriesAndComputationOnTheWebArticle,Kleinberg99:WebAsGraph}) and approaches for (declarative) queries over the Web (e.g.~\cite{Konopnicki95:WWWQuerySystemW3QS,Mendelzon97:QueryingTheWWW,Spertus00:StructuredQueryLanguageForTheWeb}). To the best of our knowledge, the last paper published on this topic appeared in 2002~\cite{Spielmann02:DistributedWebQuerying}. Four years later, Tim Berners-Lee proposed the Linked Data principles, which kick-started the Linked Open Data project.
This project helped to bootstrap the Web of Data as it exists today. The first paper that explicitly focused on querying the Web of Data was published in 2009~\cite{Hartig09:QueryingTheWebOfLD}. Since then, further papers have been published (e.g.~\cite{Bouquet09:QueryingWebOfData,Harth10:DataSummariesForLDQueryProcessing,Ladwig10:LinkedDataQueryProcessingStrategies,Hartig10:DBPerspectiveOnConsumingLD,Schwarte11:FedX,Acosta11:ANAPSID}). We believe that these works are only the beginning of a new area of research, for which we aim to provide inspiration with this paper. Other fields that should be considered relevant in this context are distributed databases, uncertain and probabilistic databases, data stream management, and the Deep Web.
\section{Introduction} Let $F$ be an algebraically closed field, $G$ be a group and $V$ and $W$ be irreducible $FG$-representations. A natural question to ask is when the tensor product $V\otimes W$ is irreducible. This is always the case if $V$ or $W$ is 1-dimensional, so the interesting cases are those where neither $V$ nor $W$ is 1-dimensional but $V\otimes W$ is irreducible, in which case we say that $V\otimes W$ is a non-trivial irreducible tensor product. One motivation for this question comes from the Aschbacher-Scott classification of maximal subgroups of finite classical groups, see \cite{a,as}. Irreducible tensor products of symmetric groups have been fully classified in \cite{bk,gj,gk,m1,z1}. For alternating groups, apart from some cases in characteristic $2$, non-trivial tensor products have been classified in \cite{bk3,bk2,m2,m3,z1}. For covering groups of symmetric and alternating groups however only partial results are known, namely the characteristic 0 case for $\widetilde{\sf S}_n$, see \cite{b2,bk4}, as well as some reduction results obtained in \cite{kt} for $\widetilde{\sf S}_n$ and $\widetilde{\sf A}_n$ in characteristic $\geq 5$. In this paper we will consider the case where $G=\widetilde{\sf S}_n$ or $\widetilde{\sf A}_n$ is a covering group of a symmetric or alternating group and completely classify non-trivial irreducible tensor products in characteristic $\not=2$. By definition there exists $z\in\widetilde{\sf A}_n\subseteq\widetilde{\sf S}_n$ with $z$ of order 2 and central in $\widetilde{\sf S}_n$ such that ${\sf S}_n\cong \widetilde{\sf S}_n/\langle z\rangle$ and ${\sf A}_n\cong \widetilde{\sf A}_n/\langle z\rangle$. Since $z$ is central of order 2, irreducible representations of $\widetilde{\sf S}_n$ and $\widetilde{\sf A}_n$ are of two types, depending on whether $z$ acts as $1$ or $-1$. Let $V$ be an irreducible representation of $\widetilde{\sf S}_n$ or $\widetilde{\sf A}_n$.
If $z$ acts as $1$ on $V$ then $V$ may be viewed also as a representation of ${\sf S}_n$ or ${\sf A}_n$ by factoring through $\langle z\rangle$ (and $V$ is irreducible also as an ${\sf S}_n$- or ${\sf A}_n$-representation). On the other hand if $V$ is an (irreducible) representation of ${\sf S}_n$ or ${\sf A}_n$ then we may lift $V$ to an (irreducible) representation of $\widetilde{\sf S}_n$ or $\widetilde{\sf A}_n$ on which $z$ acts trivially. If on the other hand $z$ acts as $-1$ on $V$ then we say, when $p\not=2$, that $V$ is a spin representation. Thus, for $p\not=2$, when considering tensor products $V\otimes W$ of two irreducible representations $V$ and $W$ of $\widetilde{\sf S}_n$ or $\widetilde{\sf A}_n$ three cases need to be considered: (i) neither $V$ nor $W$ is a spin representation, (ii) $V$ is not a spin representation, while $W$ is a spin representation and (iii) both $V$ and $W$ are spin representations. In case (i) $V\otimes W$ is irreducible as a $\widetilde{\sf S}_n$- or $\widetilde{\sf A}_n$-representation if and only if it is irreducible as a ${\sf S}_n$- or ${\sf A}_n$-representation, so this case is already covered by \cite{bk3,bk,bk2,gk,m2,m3,z1}. So only cases (ii) and (iii) will be considered in this paper. As can be seen from Theorems \ref{TS} and \ref{TA} irreducible tensor products of two spin representations only occur for $n$ small, however there exist infinite families of irreducible tensor products of a spin representation and a non-spin representations (see also \cite{b2,bk4,kt} for partial results). Note that if $p=2$ then $z$ acts trivially on any irreducible representation of $\widetilde{\sf S}_n$ or $\widetilde{\sf A}_n$. So in this case classifying irreducible tensor products of $\widetilde{\sf S}_n$ or $\widetilde{\sf A}_n$ is equivalent to classifying irreducible tensor products for ${\sf S}_n$ or ${\sf A}_n$. So this case will not be considered in this paper. 
For ${\sf S}_n$ this problem has already been completely solved in \cite{bk,gj,gk,m1}. For ${\sf A}_n$ partial results, including a complete analysis when neither $V$ nor $W$ is basic spin, can be found in \cite{m3}. For $n=6$ or $7$, irreducible tensor products of representations of the triple covers can be easily classified by looking at the character tables in \cite{Atl}, so they will not be considered here. It is well known that, in characteristic $p$, irreducible representations of symmetric groups are indexed by $p$-regular partitions. Given a $p$-regular partition $\lambda\in{\mathscr {P}}_p(n)$ of $n$, let $D^\lambda$ be the corresponding irreducible representation of ${\sf S}_n$. For any $\lambda\in{\mathscr {P}}_p(n)$ let $\lambda^{\tt M}\in{\mathscr {P}}_p(n)$ be the Mullineux dual of $\lambda$, that is the partition with $D^{\lambda^{\tt M}}\cong D^\lambda\otimes\mathbf{\mathrm{sgn}}$, where $\mathbf{\mathrm{sgn}}$ is the sign representation of ${\sf S}_n$. For $p\geq 3$ it is also well known that $D^\lambda{\downarrow}_{{\sf A}_n}$ is irreducible if and only if $\lambda\not=\lambda^{\tt M}$. In this case we will write $E^\lambda$ for $D^\lambda{\downarrow}_{{\sf A}_n}$. Note that $E^\lambda\cong E^{\lambda^{\tt M}}$. On the other hand if $\lambda=\lambda^{\tt M}$ we have that $D^\lambda{\downarrow}_{{\sf A}_n}\cong E^\lambda_+\oplus E^\lambda_-$ with $E^\lambda_\pm$ non-isomorphic irreducible representations of ${\sf A}_n$. Further, any irreducible representation of ${\sf A}_n$ is either of the form $E^\lambda$ or of the form $E^\lambda_\pm$ for some $\lambda\in{\mathscr {P}}_p(n)$. As mentioned above the modules $D^\lambda$ (resp. $E^\lambda_{(\pm)}$) can also be viewed as representations of $\widetilde{\sf S}_n$ (resp. $\widetilde{\sf A}_n$). In positive characteristic $p\geq 3$, irreducible spin representations of symmetric and alternating groups have been described in \cite{BK,BK2}.
There it has been proved that if ${\mathscr {RP}}_p(n)$ is the set of $p$-restricted $p$-strict partitions of $n$, that is partitions $\lambda$ with $1-\delta_{p\mid\lambda_i}\leq\lambda_i-\lambda_{i+1}\leq p-\delta_{p\mid n}$, then (pairs of) spin irreducible representations of $\widetilde{\sf S}_n$ or $\widetilde{\sf A}_n$ are indexed by elements of ${\mathscr {RP}}_p(n)$. More precisely, for any $\lambda\in{\mathscr {RP}}_p(n)$ there either exists an irreducible spin representation $D(\lambda,0)$ of $\widetilde{\sf S}_n$ or there exist two non-isomorphic representations $D(\lambda,\pm)$ of $\widetilde{\sf S}_n$. In either case we have that $D(\lambda,\epsilon)\cong D(\lambda,-\epsilon)\otimes\mathbf{\mathrm{sgn}}$, so that in the first case $D(\lambda,0){\downarrow}_{\tilde{\sf A}_n}\cong E(\lambda,+)\oplus E(\lambda,-)$ with $E(\lambda,\pm)$ non-isomorphic irreducible spin representations of $\widetilde{\sf A}_n$, while in the second case $D(\lambda,\pm){\downarrow}_{\tilde{\sf A}_n}\cong E(\lambda,0)$ with $E(\lambda,0)$ irreducible. Again, any irreducible spin representation of $\widetilde{\sf S}_n$ or $\widetilde{\sf A}_n$ is of one of these forms. For $n\geq 1$ write $n=dp+e$ with $0\leq e<p$. Define $\beta_n:=(p^d,e)$ if $e>0$ or $\beta_n:=(p^{d-1},p-1,1)$ if $e=0$. Irreducible spin representations indexed by $\beta_n$ are called basic spin modules and will play a special role in this paper. Such representations are the composition factors of the reduction modulo $p$ of basic spin modules in characteristic 0, see \cite{BK3,Wales}. Given $\lambda\in{\mathscr {P}}_p(n)$ write $\lambda=(a_1^{b_1},\ldots,a_h^{b_h})$ with $a_1>\ldots>a_h\geq 1$ and $b_i\geq 1$. We say that $\lambda$ is JS if $a_i-a_{i+1}+b_i+b_{i+1}\equiv 0\!\mod p$ for $1\leq i<h$. It has been proved (see \cite{JS,k2}) that $\lambda\in{\mathscr {P}}_p(n)$ is JS if and only if $D^\lambda{\downarrow}_{{\sf S}_{n-1}}$ is irreducible.
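The combinatorial definitions above (the basic spin partition $\beta_n$ and the JS condition) are straightforward to make concrete. A small Python sketch, with function names of our own choosing:

```python
from itertools import groupby

def beta(n, p):
    """beta_n = (p^d, e) if e > 0, else (p^(d-1), p-1, 1), where n = d*p + e."""
    d, e = divmod(n, p)
    return (p,) * d + (e,) if e > 0 else (p,) * (d - 1) + (p - 1, 1)

def is_JS(lam, p):
    """For lam = (a_1^{b_1}, ..., a_h^{b_h}) with a_1 > ... > a_h,
    lam is JS iff a_i - a_{i+1} + b_i + b_{i+1} == 0 (mod p) for 1 <= i < h."""
    parts = [(a, len(list(g))) for a, g in groupby(lam)]
    return all((parts[i][0] - parts[i + 1][0]
                + parts[i][1] + parts[i + 1][1]) % p == 0
               for i in range(len(parts) - 1))
```

For instance, `beta(6, 3)` recovers $(3,2,1)$, and a one-part partition is trivially JS since the condition is vacuous.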
For any $a\geq 1$ let $a=bp+c$ with $1\leq c\leq p$ and define $\mathrm{res}(a):=\min\{c-1,p-c\}$. For $\lambda\in{\mathscr {RP}}_p(n)$ we say that $\lambda=(\lambda_1,\ldots,\lambda_h)$ is JS(0) if $\lambda_h=1$ and $\mathrm{res}(\lambda_i)=\mathrm{res}(\lambda_{i+1}+1)$ for $1\leq i<h$. In view of \cite{BK,p} it can be checked that $\lambda\in{\mathscr {RP}}_p(n)$ is JS(0) if and only if $D(\lambda,\epsilon){\downarrow}_{\tilde{\sf S}_{n-1}}$ and $E(\lambda,\epsilon'){\downarrow}_{\tilde{\sf A}_{n-1}}$ are both irreducible. An equivalent characterisation is that $D(\lambda,0){\downarrow}_{\tilde{\sf S}_{n-1}}$ is irreducible if $\lambda$ indexes only one spin representation of $\widetilde{\sf S}_n$, or that $E(\lambda,0){\downarrow}_{\tilde{\sf A}_{n-1}}$ is irreducible if $\lambda$ indexes two spin representations of $\widetilde{\sf S}_n$. Before stating our main results we list here a few irreducible tensor products of representations of $\widetilde{\sf S}_n$ or $\widetilde{\sf A}_n$. As will be seen in the main theorems, any other irreducible tensor product is part of an infinite family of irreducible tensor products. In rows 4 and 5, $\chi_V$ is the character of $V$, $\chi_W$ the character of $W$ and $\widetilde{(1,2,3,4,5)}$ the lift of order 5 of the 5-cycle $(1,2,3,4,5)$.
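Similarly, the residue function and the JS(0) condition just defined can be sketched directly (again with function names of our own choosing):

```python
def res(a, p):
    """Write a = b*p + c with 1 <= c <= p; then res(a) = min(c - 1, p - c)."""
    c = a % p or p
    return min(c - 1, p - c)

def is_JS0(lam, p):
    """lam in RP_p(n) is JS(0) iff its last part is 1 and
    res(lam_i) == res(lam_{i+1} + 1) for 1 <= i < h."""
    return lam[-1] == 1 and all(res(lam[i], p) == res(lam[i + 1] + 1, p)
                                for i in range(len(lam) - 1))
```
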
{\small \[\begin{array}{|c|c|c|c|c|c|} \hline G&V&W&V\otimes W&p&\text{further assumptions}\\ \hline\hline \rlap{$\phantom{E^{(3^3)}_\pm}$}\widetilde{\sf S}_6&D((3,2,1),\pm)&D(\beta_6,\pm)&D^{(3,2,1)}&p\geq 7&\\ \hline \rlap{$\phantom{E^{(3^3)}_\pm}$}\widetilde {\sf A}_5&E(\beta_5,+)&E(\beta_5,-)&E^{(4,1)}&p\not=5&\\ \hline \rlap{$\phantom{E^{(3^3)}_\pm}$}\widetilde {\sf A}_6&E(\beta_6,+)&E(\beta_6,-)&E^{(5,1)}&p=3&\\ \hline \rlap{$\phantom{\widetilde{E^{(3^3)}_\pm}}$}\widetilde {\sf A}_5&E^{(3,1^2)}_\pm&E(\beta_5,\pm)&E((4,1),0)&p\not=5&\chi_V\chi_W\widetilde{(1,2,3,4,5)}=1\\ \hline \rlap{$\phantom{\widetilde{E^{(3^3)}_\pm}}$}\widetilde {\sf A}_6&E^{(4,1^2)}_\pm&E(\beta_6,\pm)&E((4,2),\pm)&p=3&\chi_V\chi_W\widetilde{(1,2,3,4,5)}=1\\ \hline \rlap{$\phantom{\widetilde{E^{(3^3)}_\pm}}$}\widetilde {\sf A}_6&E^{(4,1^2)}_+&E^{(4,1^2)}_-&E^{(4,2)}&p=3&\\ \hline \rlap{$\phantom{\widetilde{E^{(3^3)}_\pm}}$}\widetilde {\sf A}_9&E^{(3^3)}_\pm&E(\beta_9,\pm)&E((5,3,1),\pm)&p\geq 7&\\ \hline \end{array}\] \vspace{1mm} \centerline{\sc Table I} } In the next theorems, as well as in the remainder of the paper, if $\alpha$ and $\beta$ are partitions, let $\alpha+\beta:=(\alpha_1+\beta_1,\alpha_2+\beta_2,\ldots)$ and $\alpha\cup\beta$ be the partition obtained by rearranging the parts of $(\alpha,\beta)=(\alpha_1,\alpha_2,\ldots,\beta_1,\beta_2,\ldots)$. The next two theorems completely characterise irreducible tensor products of representations of covering groups of symmetric and alternating groups respectively. Parts of the theorems can be recovered from the previously mentioned references, but we still state the theorems in complete form. \begin{theor}\label{TS} Let $p\geq 3$ and $V,W$ be irreducible $F\widetilde{\sf S}_n$-representations which are not 1-dimensional.
Then $V\otimes W$ is irreducible if and only if one of the following holds up to exchange of $V$ and $W$: \begin{enumerate}[(i)] \item $n\not\equiv 0\!\mod p$, $V\in\{D^{(n-1,1)},D^{(n-1,1)^{\tt M}}\}$, $W\cong D(\lambda,\pm)$ with $\lambda\in{\mathscr {RP}}_p(n)$ a $\text{JS}(0)$-partition, in which case $V\otimes W\cong D(\nu,0)$ where $\nu=(\lambda\setminus A)\cup B$ with $A$ the bottom removable node of $\lambda$ and $B$ the top addable node of $\lambda$, \item $n\not\equiv 0,\pm 2\!\mod p$ is even, $V\cong D^\lambda$ where $\lambda\in{\mathscr {P}}_p(n)$ is a JS-partition with $\min\{h(\lambda),h(\lambda^{\tt M})\}=2$, and $W$ is basic spin, in which case, assuming $h(\lambda)=2$, if $\lambda_1\not=\lambda_2$ then $V\otimes W\cong D(\beta_{\lambda_1}+\beta_{\lambda_2},0)$, while if $\lambda_1=\lambda_2$ then $V\otimes W\cong D(\beta_{n/2+1}\cup\beta_{n/2-1},0)$, \item $V$ and $W$ are as in row 1 of Table I. \end{enumerate} \end{theor} \begin{theor}\label{TA} Let $p\geq 3$ and $V,W$ be irreducible $F\widetilde {\sf A}_n$-representations which are not 1-dimensional.
Then $V\otimes W$ is irreducible if and only if one of the following holds up to exchange of $V$ and $W$: \begin{enumerate}[(i)] \item $n\not\equiv 0\!\mod p$, $V\cong E^{(n-1,1)}$, $W\cong E^\lambda_\pm$ with $\lambda\in{\mathscr {P}}_p(n)$ a JS-partition satisfying $\lambda=\lambda^{\tt M}$, in which case $V\otimes W\cong E^\nu$ where $\nu=(\lambda\setminus A)\cup B$ with $A$ the top removable node of $\lambda$ and $B$ either of the two bottom addable nodes of $\lambda$, \item $n\not\equiv 0\!\mod p$, $V\cong E^{(n-1,1)}$, $W\cong E(\lambda,\pm)$ with $\lambda\in{\mathscr {RP}}_p(n)$ a $\text{JS}(0)$-partition, in which case $V\otimes W\cong E(\nu,0)$ where $\nu=(\lambda\setminus A)\cup B$ with $A$ the bottom removable node of $\lambda$ and $B$ the top addable node of $\lambda$, \item $n\not\equiv 0,\pm 2\!\mod p$ is odd, $V\cong E^\lambda$ where $\lambda\in{\mathscr {P}}_p(n)$ is a JS-partition with $\min\{h(\lambda),h(\lambda^{\tt M})\}=2$, and $W$ is basic spin, in which case, assuming $h(\lambda)=2$, if $\lambda_1\not=\lambda_2+p-2$ then $V\otimes W\cong E(\beta_{\lambda_1}+\beta_{\lambda_2},0)$, while if $\lambda_1=\lambda_2+p-2$ then $V\otimes W\cong E(\beta_{\lambda_1}\cup\beta_{\lambda_2},0)$, \item $V$ and $W$ are as in rows 2-7 of Table I. \end{enumerate} \end{theor} Although in cases (ii) of Theorem \ref{TS} and (iii) of Theorem \ref{TA} we only describe $V\otimes W$ when $h(\lambda)=2$, in the other case a description can be easily obtained, since $D^{\lambda^{\tt M}}\cong D^\lambda\otimes \mathbf{\mathrm{sgn}}$ and $E^{\lambda^{\tt M}}\cong E^\lambda$. In the next section we will introduce notation that will be used in the paper and state some well-known or easy results. In Section \ref{s3} we will study endomorphism rings $\mathrm{End}_F(V)$ for general classes of modules $V$ of $\widetilde{\sf S}_n$ or $\widetilde{\sf A}_n$.
In Section \ref{s4} we will study the structure of certain permutation modules; this will allow us, in Section \ref{s5}, to extend these results to some special classes of modules, or at least to obtain similar results. In both Sections \ref{s3} and \ref{s5} we will often use branching results to obtain information on $\mathrm{End}_F(V)$. In Section \ref{s6} we will study tensor products with certain special classes of modules, using branching results or known results in characteristic 0 together with knowledge of decomposition matrices. Finally in Section \ref{s7} we will prove Theorems \ref{TS} and \ref{TA}. \section{Notation and basic results} Throughout the paper we will only consider representations in odd characteristic $p$. \subsection{Covering groups} Let $\widetilde{\sf S}_n$ be either of the two double covers of ${\sf S}_n$ and $z$ be the non-trivial central element of $\widetilde{\sf S}_n$ (which has order 2). There exists a short exact sequence \[1\rightarrow\langle z\rangle\rightarrow\widetilde{\sf S}_n\xrightarrow{\pi}{\sf S}_n\rightarrow 1.\] For any group $G\leq{\sf S}_n$ define $\widetilde G:=\pi^{-1}G\leq\widetilde{\sf S}_n$. In particular $\widetilde {\sf A}_n$ is the double cover of ${\sf A}_n$. Further, for elements $g\in {\sf S}_n$ let $\widetilde g\in\widetilde{\sf S}_n$ be a (fixed) element in $\pi^{-1}\{g\}$, so that $\pi^{-1}\{g\}=\{\widetilde g,z\widetilde g\}$. If $g$ has odd order, one of the elements in $\pi^{-1}\{g\}$ has order $\mathrm{ord}(g)$, while the other has order $2\mathrm{ord}(g)$. In this case choose $\widetilde g$ to have the same order as $g$. As noted in the introduction, the irreducible representations of $F\widetilde{\sf S}_n$ (resp. $F\widetilde {\sf A}_n$) are given by the irreducible representations of $F{\sf S}_n$ (resp. $F{\sf A}_n$), on which $z$ acts trivially, and the spin irreducible representations, on which $z$ acts as $-1$.
Note that it does not matter which double cover of the symmetric group ${\sf S}_n$ we consider, since the group algebras of the two double covers of ${\sf S}_n$ are isomorphic. \subsection{Representations of symmetric and alternating groups} As noted in the introduction irreducible representations of ${\sf S}_n$ or ${\sf A}_n$ are indexed by elements of ${\mathscr {P}}_p(n)$, that is $p$-regular partitions of $n$. We write ${\mathscr P}^A_p(n)$ for the set of partitions $\lambda\in{\mathscr {P}}_p(n)$ with $\lambda=\lambda^{\tt M}$, that is partitions $\lambda$ for which $D^\lambda{\downarrow}_{{\sf A}_n}$ splits. Given a partition $\lambda\in{\mathscr {P}}_p(n)$ define normal, good, conormal and cogood nodes of $\lambda$ as in \cite[\S11.1]{KBook}. It can be easily seen from the definition that $\lambda$ is JS if and only if it has only one normal node. If $(a,b)$ is a node, let $(b-a)\!\mod p$ be the residue of $(a,b)$. For any partition $\lambda$ let the content of $\lambda$ be the tuple $(a_0,\ldots,a_{p-1})$, where $a_i$ is the number of nodes of $\lambda$ of residue $i$ for each $0\leq i<p$. It is well known that if $\lambda,\mu\in{\mathscr {P}}_p(n)$, then $D^\lambda$ and $D^\mu$ are in the same block if and only if $\lambda$ and $\mu$ have the same $p$-core. It can be checked that, equivalently, $D^\lambda$ and $D^\mu$ are in the same block if and only if $\lambda$ and $\mu$ have the same content, so that we may speak of the content of a block or of a block with a certain content (such a block is unique if it exists). Let $V$ be an $F{\sf S}_n$-module in a block $B$ with content $(a_0,\ldots,a_{p-1})$. For any residue $i$, we define $e_iV$ to be the projection of $V{\downarrow}_{{\sf S}_{n-1}}$ to the block with content $(a_0,\ldots,a_{i-1},a_i-1,a_{i+1},\ldots,a_{p-1})$ and $f_iV$ to be the projection of $V{\uparrow}^{{\sf S}_{n+1}}$ to the block with content $(a_0,\ldots,a_{i-1},a_i+1,a_{i+1},\ldots,a_{p-1})$.
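The residue and content combinatorics underlying these block projections are easy to compute directly. A small Python illustration (the encoding of partitions as tuples and the function names are ours):

```python
def content(la, p):
    """Content (a_0, ..., a_{p-1}) of a partition la: a_i counts the
    nodes (row, col), 1-based, whose residue (col - row) mod p is i."""
    cnt = [0] * p
    for row, part in enumerate(la, start=1):
        for col in range(1, part + 1):
            cnt[(col - row) % p] += 1
    return tuple(cnt)


def same_block(la, mu, p):
    """D^la and D^mu lie in the same block iff their contents agree."""
    return content(la, p) == content(mu, p)
```

For example, at $p=3$ all three partitions of $3$ have content $(1,1,1)$ and so lie in a single block, while $(2)$ and $(1,1)$ have contents $(1,1,0)$ and $(1,0,1)$ and do not.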
We then extend the definition of $e_iV$ and $f_iV$ to arbitrary $F{\sf S}_n$-modules additively to obtain functors $$ e_i:\mod{F{\sf S}_n}\to \mod{F{\sf S}_{n-1}},\quad f_i:\mod{F{\sf S}_n}\to \mod{F{\sf S}_{n+1}}. $$ More generally, for any $r\geq 1$ let $$e_i^{(r)}:\mod{F {\sf S}_n}\rightarrow \mod{F {\sf S}_{n-r}},\quad f_i^{(r)}:\mod{F {\sf S}_n}\rightarrow\mod{F {\sf S}_{n+r}},$$ be the divided power functors, see \cite[\S11.2]{KBook}. The following is well-known, see for example \cite[Lemma 8.2.2(ii), Theorems 8.3.2(i), 11.2.7, 11.2.8]{KBook}: \begin{lemma}\label{Lemma45} For any residue $i$ and any $r\geq 1$, the functors $e_i^{(r)}$ and $f_i^{(r)}$ are biadjoint and commute with duality. Further, for any $F {\sf S}_n$-module $V$ we have \[V{\downarrow}_{{\sf S}_{n-1}}\cong e_0V\oplus\ldots\oplus e_{p-1}V\hspace{24pt}\text{and}\hspace{24pt}V{\uparrow}^{{\sf S}_{n+1}}\cong f_0V\oplus\ldots\oplus f_{p-1}V.\] \end{lemma} For any partition $\lambda\in{\mathscr {P}}_p(n)$ and any residue $i$, let $\epsilon_i(\lambda)$ be the number of $i$-normal nodes and $\phi_i(\lambda)$ be the number of $i$-conormal nodes. If $\epsilon_i(\lambda)>0$ let $\widetilde e_i\lambda\in{\mathscr {P}}_p(n-1)$ be the partition obtained from $\lambda$ by removing the bottom $i$-normal node, while if $\phi_i(\lambda)>0$ let $\widetilde f_i\lambda\in{\mathscr {P}}_p(n+1)$ be the partition obtained from $\lambda$ by adding the top $i$-conormal node (see \cite[\S 11.1]{KBook}). The following two results hold by \cite[Theorems E(iv), E'(iv)]{BrK1}, \cite[Theorems 11.2.10, 11.2.11]{KBook} and \cite[Theorem 1.4]{KDec}. \begin{lemma}\label{Lemma39} Let $\lambda\in{\mathscr {P}}_p(n)$.
Then for any residue $i$ and any $r\geq 1$: \begin{enumerate} \item[{\rm (i)}] $e_i^rD^\lambda\cong(e_i^{(r)}D^\lambda)^{\oplus r!}$; \item[{\rm (ii)}] $e_i^{(r)}D^\lambda\not=0$ if and only if $r\leq \epsilon_i(\lambda)$, in which case $e_i^{(r)}D^\lambda$ is a self-dual indecomposable module with socle and head both isomorphic to $D^{\widetilde e_i^r\lambda}$. \item[{\rm (iii)}] $[e_i^{(r)}D^\lambda:D^{\widetilde e_i^r\lambda}]=\binom{\epsilon_i(\lambda)}{r}=\dim\mathrm{End}_{{\sf S}_{n-r}}(e_i^{(r)}D^\lambda)$; \item[{\rm (iv)}] if $D^\mu$ is a composition factor of $e_i^{(r)}D^\lambda$ then $\epsilon_i(\mu)\leq \epsilon_i(\lambda)-r$, with equality holding if and only if $\mu=\widetilde e_i^r\lambda$; \item[{\rm (v)}] $\dim\mathrm{End}_{{\sf S}_{n-1}}(D^\lambda{\downarrow}_{{\sf S}_{n-1}})=\sum_{j\in I}\epsilon_j(\lambda)$. \item[{\rm (vi)}] Let $A$ be a removable node of $\lambda$ such that $\lambda_A$ is $p$-regular. Then $D^{\lambda_A}$ is a composition factor of $e_i D^\lambda$ if and only if $A$ is $i$-normal, in which case $[e_i D^\lambda:D^{\lambda_A}]$ is one more than the number of $i$-normal nodes for $\lambda$ above $A$. \end{enumerate} \end{lemma} \begin{lemma}\label{Lemma40} Let $\lambda\in{\mathscr {P}}_p(n)$. Then for any residue $i$ and any $r\geq 1$: \begin{enumerate} \item[{\rm (i)}] $f_i^rD^\lambda\cong(f_i^{(r)}D^\lambda)^{\oplus r!}$; \item[{\rm (ii)}] $f_i^{(r)}D^\lambda\not=0$ if and only if $r\leq \phi_i(\lambda)$, in which case $f_i^{(r)}D^\lambda$ is a self-dual indecomposable module with socle and head both isomorphic to $D^{\widetilde f_i^r\lambda}$. \item[{\rm (iii)}] $[f_i^{(r)}D^\lambda:D^{\widetilde f_i^r\lambda}]=\binom{\phi_i(\lambda)}{r}=\dim\mathrm{End}_{{\sf S}_{n+r}}(f_i^{(r)}D^\lambda)$; \item[{\rm (iv)}] if $D^\mu$ is a composition factor of $f_i^{(r)}D^\lambda$ then $\phi_i(\mu)\leq \phi_i(\lambda)-r$, with equality holding if and only if $\mu=\widetilde f_i^r\lambda$. 
\item[{\rm (v)}] $\dim\mathrm{End}_{{\sf S}_{n+1}}(D^\lambda{\uparrow}^{{\sf S}_{n+1}})=\sum_{j\in I}\phi_j(\lambda)$. \item[{\rm (vi)}] Let $B$ be an addable node for $\lambda$ such that $\lambda^B$ is $p$-regular. Then $D^{\lambda^B}$ is a composition factor of $f_i D^\lambda$ if and only if $B$ is $i$-conormal, in which case $[f_i D^\lambda:D^{\lambda^B}]$ is one more than the number of $i$-conormal nodes for $\lambda$ below~$B$. \end{enumerate} \end{lemma} The next lemma compares the functors $e_ie_j$ and $e_je_i$ for different residues $i$ and $j$. \begin{lemma}\label{l22}{\cite[Lemma 4.8]{m1}} Let $\lambda\vdash n$ be $p$-regular. For $i\not=j$ we have that \[\dim\mathrm{Hom}_{{\sf S}_{n-2}}(e_je_iD^\lambda,e_ie_jD^\lambda)\geq\epsilon_i(\lambda)\epsilon_j(\lambda).\] \end{lemma} When considering (co)good or (co)normal nodes and the Mullineux map we have the following result: \begin{lemma}\label{l17}{\cite[Theorem 4.7]{k4}} For any partition $\lambda$ and for any residue $i$, \[\epsilon_i(\lambda)=\epsilon_{-i}(\lambda^{\tt M})\hspace{12pt}\mbox{and}\hspace{12pt}\phi_i(\lambda)=\phi_{-i}(\lambda^{\tt M}).\] If $\epsilon_i(\lambda)>0$ then $\widetilde{e}_i(\lambda)^{\tt M}=\widetilde{e}_{-i}(\lambda^{\tt M})$, while if $\phi_i(\lambda)>0$ then $\widetilde{f}_i(\lambda)^{\tt M}=\widetilde{f}_{-i}(\lambda^{\tt M})$. \end{lemma} \subsection{Spin representations} As noted in the introduction spin irreducible representations of $\widetilde{\sf S}_n$ and $\widetilde{\sf A}_n$ are indexed by elements of ${\mathscr {RP}}_p(n)$, that is $p$-restricted $p$-strict partitions of $n$ (for $p=0$ this is just the set of partitions into distinct parts). For $\lambda\in{\mathscr {RP}}_p(n)$ let $h(\lambda)$ be the number of parts of $\lambda$ and $h_{p'}(\lambda)$ be the number of parts of $\lambda$ which are not divisible by $p$. If $n-h_{p'}(\lambda)$ is even let $a(\lambda):=0$, while if $n-h_{p'}(\lambda)$ is odd let $a(\lambda):=1$.
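The quantities $h(\lambda)$, $h_{p'}(\lambda)$ and $a(\lambda)$ just introduced can be sketched as follows (a Python encoding of ours, with partitions as tuples of parts):

```python
def h(la):
    """Number of parts of la."""
    return len(la)


def h_p_prime(la, p):
    """Number of parts of la not divisible by p."""
    return sum(1 for part in la if part % p != 0)


def a(la, p):
    """a(la) = 0 if n - h_{p'}(la) is even, 1 if odd, where n = |la|."""
    return (sum(la) - h_p_prime(la, p)) % 2
```

For example $a((3,2,1))=1$ at $p=7$, in accordance with the pair $D((3,2,1),\pm)$ of $\widetilde{\sf S}_6$-representations appearing in row 1 of Table I.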
In \cite{BK,BK2} it has been proved that if $a(\lambda)=0$ then $\lambda$ indexes one spin irreducible representation of $\widetilde{\sf S}_n$ and two of $\widetilde{\sf A}_n$, while if $a(\lambda)=1$ then $\lambda$ indexes two spin irreducible representations of $\widetilde{\sf S}_n$ and one of $\widetilde{\sf A}_n$. So \begin{align*} &\{D(\lambda,0)|\lambda\in{\mathscr {RP}}_p(n)\text{ with }a(\lambda)=0\}\\ &\cup\{D(\lambda,+),D(\lambda,-)|\lambda\in{\mathscr {RP}}_p(n)\text{ with }a(\lambda)=1\} \end{align*} is a complete set of spin irreducible $F\widetilde{\sf S}_n$-representations up to isomorphism and \begin{align*} &\{E(\lambda,+),E(\lambda,-)|\lambda\in{\mathscr {RP}}_p(n)\text{ with }a(\lambda)=0\}\\ &\cup\{E(\lambda,0)|\lambda\in{\mathscr {RP}}_p(n)\text{ with }a(\lambda)=1\} \end{align*} is a complete set of spin irreducible $F\widetilde {\sf A}_n$-representations up to isomorphism. When $a(\lambda)=1$ it is often easier to work with $D(\lambda,+)\oplus D(\lambda,-)$ instead of working with $D(\lambda,+)$ and $D(\lambda,-)$ separately. For this reason we define the irreducible supermodule \[D(\lambda):=\left\{\begin{array}{ll} D(\lambda,0),&a(\lambda)=0,\\ D(\lambda,+)\oplus D(\lambda,-),&a(\lambda)=1. \end{array}\right.\] Similarly we define $E(\lambda)$. We say that $D(\lambda)$ is of type M if $a(\lambda)=0$ or of type Q if $a(\lambda)=1$. Note that $\dim\mathrm{End}_{\widetilde{\sf S}_n}(D(\lambda))=1+a(\lambda)$. When considering spin modules of ${\sf S}_n$ in characteristic 0 we will also write $S(\lambda)$ for $D(\lambda)$ and similarly $S(\lambda,0)$ or $S(\lambda,\pm)$. Given supermodules $V$ of $\widetilde{\sf S}_{\mu}$ and $W$ of $\widetilde{\sf S}_{\nu}$ (with $\mu,\nu$ compositions) we can consider their ``outer'' tensor product $V\boxtimes W$ as a supermodule of $\widetilde{\sf S}_{\mu,\nu}$. Outer tensor products of irreducible supermodules are not always simple as supermodules (see for example \cite[Section 2-b]{BK}).
If $V$ and $W$ are irreducible supermodules, then there exists an irreducible supermodule $V\circledast W$ such that: \begin{enumerate}[-] \item if both $V$ and $W$ are of type M then $V\boxtimes W\cong V\circledast W$ is of type M, \item if one of $V$ and $W$ is of type M and the other is of type Q then $V\boxtimes W\cong V\circledast W$ is of type Q, \item if both $V$ and $W$ are of type Q then $V\boxtimes W\cong (V\circledast W)^{\oplus 2}$ with $V\circledast W$ of type M. \end{enumerate} For partitions $\lambda^j\in{\mathscr {RP}}_p(n_j)$, this can then be extended to define simple supermodules $D(\lambda^1)\circledast\cdots\circledast D(\lambda^h)$. We will write $D(\lambda^1,\ldots,\lambda^h)$ for $D(\lambda^1)\circledast\cdots\circledast D(\lambda^h)$ and $D(\lambda^1,\ldots,\lambda^h,0)$ or $D(\lambda^1,\ldots,\lambda^h,\pm)$ for its simple components (as module). For supermodules $V$ and $W$, we write $V\simeq W$ if there exists an even isomorphism $V\to W$ (see \cite[\S2-b]{BK}). In particular if $V\simeq W$ then $V\cong W$ as modules. There are branching rules for spin irreducible supermodules which are similar to branching rules for irreducible representations of symmetric groups. We start by defining residues of nodes. The residue of the node $(a,b)$ is given by $\mathrm{res}(b)$, where $\mathrm{res}(b)$ is defined as in the introduction. So the residue of any node is an integer $i$ with $0\leq i\leq\ell$, where $\ell=\ell_p=(p-1)/2$, and on any row residues are given by \[0,1,\ldots,\ell-1,\ell,\ell-1,\ldots,1,0,0,1,\ldots,\ell-1,\ell,\ell-1,\ldots,1,0,\ldots.\] Again define the content of a partition $\lambda$ to be $(a_0,\ldots,a_\ell)$ if for every $0\leq i\leq\ell$ we have that $\lambda$ has $a_i$ nodes of residue $i$. Normal nodes (and conormal, good and cogood nodes) can be defined also for $p$-restricted $p$-strict partitions, see for example \cite[Section 9-a]{BK}. Let $\lambda\in{\mathscr {RP}}_p(n)$.
For $0\leq i\leq (p-1)/2$ let $\epsilon_i(\lambda)$ be the number of $i$-normal nodes of $\lambda$ and $\phi_i(\lambda)$ be the number of $i$-conormal nodes of $\lambda$. If $\epsilon_i(\lambda)>0$ let $\widetilde e_i\lambda$ be obtained from $\lambda$ by removing the $i$-good node of $\lambda$. Similarly, if $\phi_i(\lambda)>0$ let $\widetilde f_i\lambda$ be obtained from $\lambda$ by adding the $i$-cogood node of $\lambda$. We say that $\lambda\in{\mathscr {RP}}_p(n)$ is JS if it has only one normal node. As will be seen for example in Lemmas \ref{Lemma39s} and \ref{L15}, the residue of the normal node will play an important role (in particular whether or not the unique normal node has residue 0). If $\lambda$ is JS and its normal node has residue $i$ we say that $\lambda$ is $\text{JS}(i)$ (or write $\lambda\in\text{JS}(i)$). For $i=0$ a combinatorial description of JS(0) partitions has been given in the introduction. Directly from the definitions we have the following. \begin{lemma}\label{Lef} Let $\lambda\in{\mathscr {RP}}_p(n)$ and $0\leq i\leq(p-1)/2$. \begin{enumerate}[-] \item if $\epsilon_i(\lambda)>0$ then $\phi_i(\widetilde e_i\lambda)>0$ and $\widetilde f_i\widetilde e_i\lambda=\lambda$. Further if $i=0$ then $a(\widetilde e_i\lambda)=a(\lambda)$, while if $i>0$ then $a(\widetilde e_i\lambda)=1-a(\lambda)$; \item if $\phi_i(\lambda)>0$ then $\epsilon_i(\widetilde f_i\lambda)>0$ and $\widetilde e_i\widetilde f_i\lambda=\lambda$. Further if $i=0$ then $a(\widetilde f_i\lambda)=a(\lambda)$, while if $i>0$ then $a(\widetilde f_i\lambda)=1-a(\lambda)$. \end{enumerate} \end{lemma} It can be checked that $D(\lambda,\delta)$ and $D(\mu,\epsilon)$ are in the same block if and only if $\lambda$ and $\mu$ have the same content (unless possibly if $\lambda=\mu$ is a $p$-bar core, in which case the blocks have weight 0).
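The spin residue function $\mathrm{res}$ from the introduction and the combinatorial JS(0) test can be sketched as follows (a Python encoding of ours):

```python
def res(a, p):
    """Spin residue of column a: write a = b*p + c with 1 <= c <= p,
    then res(a) = min(c - 1, p - c)."""
    c = a % p or p
    return min(c - 1, p - c)


def is_JS0(la, p):
    """JS(0) test from the introduction: la_h = 1 and
    res(la_i) = res(la_{i+1} + 1) for 1 <= i < h."""
    return la[-1] == 1 and all(res(la[i], p) == res(la[i + 1] + 1, p)
                               for i in range(len(la) - 1))
```

For instance at $p=3$ (so $\ell=1$) the residues along a row read $0,1,0,0,1,0,\ldots$, and the partition $(2,1)$ is JS(0).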
If $M$ is a spin module of $\widetilde{\sf S}_n$ contained in the block(s) with content $(a_0,\ldots,a_\ell)$ and $i$ is a residue, let $\mathrm{Res}_i M$ be the block(s) component(s) of $M{\downarrow}_{\widetilde{\sf S}_{n-1}}$ corresponding to the blocks with content $(a_0,\ldots,a_{i-1},a_i-1,a_{i+1},\ldots,a_\ell)$. Define similarly $\mathrm{Ind}_i M$ as the block(s) component(s) of $M{\uparrow}^{\widetilde{\sf S}_{n+1}}$ corresponding to the blocks with content $(a_0,\ldots,a_{i-1},a_i+1,a_{i+1},\ldots,a_\ell)$. This can then be extended to arbitrary spin modules. Often the modules $\mathrm{Res}_iD(\lambda)$ and $\mathrm{Ind}_iD(\lambda)$ are not indecomposable as supermodules. However there exist modules $e_iD(\lambda)$ and $f_iD(\lambda)$ for which the following holds, see \cite[Theorems 9.13, 9.14]{BK} and \cite[Theorem A]{ksh}: \begin{lemma}\label{Lemma39s} Let $\lambda\in{\mathscr {RP}}_p(n)$ and $0\leq i\leq\ell$. Then: \begin{enumerate} \item $\mathrm{Res}_i D(\lambda)\cong e_iD(\lambda)^{\oplus 1+a(\lambda)\delta_{i>0}}$; \item $D(\lambda){\downarrow}_{\widetilde{\sf S}_{n-1}}\cong e_0D(\lambda)\oplus\bigoplus_{j=1}^\ell e_jD(\lambda)^{\oplus 1+a(\lambda)}$; \item $e_iD(\lambda)\not=0$ if and only if $\epsilon_i(\lambda)>0$, in which case $e_iD(\lambda)$ is a self-dual indecomposable supermodule with socle and head both isomorphic to $D(\widetilde e_i\lambda)$; \item $[e_iD(\lambda):D(\widetilde e_i\lambda)]=\epsilon_i(\lambda)$; \item if $D(\mu)$ is a composition factor of $e_iD(\lambda)$ then $\epsilon_i(\mu)\leq \epsilon_i(\lambda)-1$, with equality holding if and only if $\mu=\widetilde e_i\lambda$; \item $\mathrm{End}_{\widetilde{\sf S}_{n-1}}(e_i D(\lambda))\simeq\mathrm{End}_{\widetilde{\sf S}_{n-1}}(D(\widetilde e_i\lambda))^{\oplus \epsilon_i(\lambda)}$; \item $\mathrm{Hom}_{\widetilde{\sf S}_{n-1}}(e_i D(\lambda),e_iD(\nu))=0$ if $\nu\in{\mathscr {RP}}_p(n)$ with $\nu\not=\lambda$; \item if $A$ is an $i$-normal node of $\lambda$ such that
$\lambda\setminus A\in{\mathscr {RP}}_p(n-1)$, then $D(\lambda\setminus A)$ is a composition factor of $e_i D(\lambda)$. \end{enumerate} \end{lemma} \begin{lemma}\label{Lemma40s} Let $\lambda\in{\mathscr {RP}}_p(n)$, $0\leq i\leq(p-1)/2$. Then: \begin{enumerate} \item $\mathrm{Ind}_i D(\lambda)\cong f_iD(\lambda)^{\oplus 1+a(\lambda)\delta_{i>0}}$; \item $D(\lambda){\uparrow}^{\widetilde{\sf S}_{n+1}}\cong f_0D(\lambda)\oplus\bigoplus_{j=1}^\ell f_jD(\lambda)^{\oplus 1+a(\lambda)}$; \item $f_iD(\lambda)\not=0$ if and only if $\phi_i(\lambda)>0$, in which case $f_iD(\lambda)$ is a self-dual indecomposable supermodule with socle and head both isomorphic to $D(\widetilde f_i\lambda)$; \item $[f_iD(\lambda):D(\widetilde f_i\lambda)]=\phi_i(\lambda)$; \item if $D(\mu)$ is a composition factor of $f_iD(\lambda)$ then $\phi_i(\mu)\leq \phi_i(\lambda)-1$, with equality holding if and only if $\mu=\widetilde f_i\lambda$; \item $\mathrm{End}_{\widetilde{\sf S}_{n+1}}(f_i D(\lambda))\simeq(\mathrm{End}_{\widetilde{\sf S}_{n+1}}(D(\widetilde f_i\lambda)))^{\oplus \phi_i(\lambda)}$; \item $\mathrm{Hom}_{\widetilde{\sf S}_{n+1}}(f_i D(\lambda),f_iD(\nu))=0$ if $\nu\in{\mathscr {RP}}_p(n)$ with $\nu\not=\lambda$; \item if $B$ is an $i$-conormal node of $\lambda$ such that $\lambda\cup B\in{\mathscr {RP}}_p(n+1)$, then $D(\lambda\cup B)$ is a composition factor of $f_i D(\lambda)$. \end{enumerate} \end{lemma} When considering restrictions to $\widetilde{\sf S}_{n-r}$ there exist divided power modules $e_i^{(r)}D(\lambda)$ with $e_i^{(1)}D(\lambda)\cong e_iD(\lambda)$ such that the following holds, see \cite[Lemma 22.3.15]{KBook} for the first part and use Lemma \ref{Lemma39s} to obtain the other two (there also exist divided power $F\widetilde{\sf S}_{n+r}$-modules $f_i^{(r)}D(\lambda)$ with corresponding properties, though these will not be needed in this paper): \begin{lemma}\label{Lemma39sr} Let $\lambda\in{\mathscr {RP}}_p(n)$, $0\leq i\leq(p-1)/2$.
Then: \begin{enumerate} \item $\mathrm{Res}_i^r D(\lambda)\cong(e_i^{(r)}D(\lambda))^{\oplus 2^{\delta_{i>0}\lfloor (r+a(\lambda))/2\rfloor}r!}$; \item $e_i^{(r)}D(\lambda)\not=0$ if and only if $\epsilon_i(\lambda)\geq r$; \item $[e_i^{(r)}D(\lambda):D(\widetilde e_i^r\lambda)]=\binom{\epsilon_i(\lambda)}{r}$. \end{enumerate} \end{lemma} Further by \cite[Lemma 19.1.1, Theorems 22.2.2, 22.2.3]{KBook}: \begin{lemma}\label{Lefd} The functors $e_i$ and $f_i$ are biadjoint and commute with duality. \end{lemma} Comparing the number of normal and conormal nodes, we obtain the following lemma, which holds by Lemmas \ref{Lef}, \ref{Lemma39s} and \ref{Lemma40s}: \begin{lemma}\label{L15} Let $\lambda\in{\mathscr {RP}}_p(n)$. Then \begin{align*} \dim\mathrm{End}_{\widetilde{\sf S}_{n-1}}(D(\lambda){\downarrow}_{\widetilde{\sf S}_{n-1}})&=(\epsilon_0(\lambda)+2\sum_{i\geq 1}\epsilon_i(\lambda))\dim\mathrm{End}_{\widetilde{\sf S}_n}(D(\lambda)),\\ \dim\mathrm{End}_{\widetilde{\sf S}_{n+1}}(D(\lambda){\uparrow}^{\widetilde{\sf S}_{n+1}})&=(\phi_0(\lambda)+2\sum_{i\geq 1}\phi_i(\lambda))\dim\mathrm{End}_{\widetilde{\sf S}_n}(D(\lambda)). \end{align*} \end{lemma} Further, by the same lemmas, the following holds about the module $D(\lambda){\downarrow}_{\widetilde{\sf S}_{n-1}}{\uparrow}^{\widetilde{\sf S}_n}$: \begin{lemma}\label{L9} Let $\lambda\in{\mathscr {RP}}_p(n)$. Then \[[\mathrm{Ind}_i\mathrm{Res}_i D(\lambda):D(\lambda)]=\epsilon_i(\lambda)(\phi_i(\lambda)+1)(1+\delta_{i>0}).\] In particular \[[D(\lambda)\otimes M_1:D(\lambda)]=\epsilon_0(\lambda)(\phi_0(\lambda)+1)+2\sum_{i\geq 1}\epsilon_i(\lambda)(\phi_i(\lambda)+1).\] \end{lemma} By the Mackey induction-restriction theorem we have that $M{\uparrow}^{\widetilde{\sf S}_{n+1}}{\downarrow}_{\widetilde{\sf S}_n}\cong M\oplus M{\downarrow}_{\widetilde{\sf S}_{n-1}}{\uparrow}^{\widetilde{\sf S}_n}$ for any module $M$ of $F\widetilde{\sf S}_n$.
The next two lemmas then follow (for the first one use also Lemma \ref{L15}): \begin{lemma}\label{L10} Let $\lambda\in{\mathscr {RP}}_p(n)$. Then \[\epsilon_0(\lambda)+2\sum_{i\geq 1}\epsilon_i(\lambda)+1=\phi_0(\lambda)+2\sum_{i\geq 1}\phi_i(\lambda).\] In particular $\epsilon_0(\lambda)+\phi_0(\lambda)$ is odd. \end{lemma} \begin{lemma}\label{L051218_3} If $i\not=j$ and $M$ is any spin module of $\widetilde{\sf S}_{n}$ then $\mathrm{Ind}_j\mathrm{Res}_i M\cong \mathrm{Res}_i\mathrm{Ind}_j M$. \end{lemma} The next results consider normal nodes of different residues. \begin{lemma}\label{L081218_2} Let $\lambda\in{\mathscr {RP}}_p(n)$. If $i\not=j$ and $\epsilon_i(\lambda),\epsilon_j(\lambda)>0$ then $\mathrm{Hom}_{\widetilde{\sf S}_{n-2}}(e_i D(\widetilde e_j\lambda),e_j D(\widetilde e_i\lambda))\not=0$. \end{lemma} \begin{proof} Using Lemmas \ref{Lemma39s}, \ref{Lemma40s} and \ref{L051218_3} we have that there exists $c>0$ such that \begin{align*} \dim\mathrm{Hom}_{\widetilde{\sf S}_{n-2}}(e_i D(\widetilde e_j\lambda),e_j D(\widetilde e_i\lambda))&=c\dim\mathrm{Hom}_{\widetilde{\sf S}_{n-1}}(\mathrm{Ind}_j\mathrm{Res}_i D(\widetilde e_j\lambda),D(\widetilde e_i\lambda))\\ &=c\dim\mathrm{Hom}_{\widetilde{\sf S}_{n-1}}(\mathrm{Res}_i\mathrm{Ind}_jD(\widetilde e_j\lambda),D(\widetilde e_i\lambda))\\ &=c\dim\mathrm{Hom}_{\widetilde{\sf S}_n}(\mathrm{Ind}_j D(\widetilde e_j\lambda),\mathrm{Ind}_iD(\widetilde e_i\lambda)) \end{align*} from which the lemma follows, since both $\mathrm{Ind}_j D(\widetilde e_j\lambda)$ and $\mathrm{Ind}_i D(\widetilde e_i\lambda)$ contain $D(\lambda)$ in their head and socle by Lemmas \ref{Lef} and \ref{Lemma40s}. \end{proof} \begin{lemma}\label{L051218_4} If $\lambda\in{\mathscr {RP}}_p(n)$ and $\epsilon_i(\lambda)>0$ then $\epsilon_j(\widetilde e_i\lambda)\geq\epsilon_j(\lambda)$ for $j\not=i$. \end{lemma} \begin{proof}We may assume that $\epsilon_j(\lambda)>0$.
Then from Lemmas \ref{Lef}, \ref{Lemma40s} and \ref{L051218_3}, \[0\not= \mathrm{Res}_i^{\epsilon_i(\lambda)} D(\lambda)\subseteq \mathrm{Res}_i^{\epsilon_i(\lambda)} \mathrm{Ind}_j D(\widetilde e_j\lambda)\cong \mathrm{Ind}_j\mathrm{Res}_i^{\epsilon_i(\lambda)} D(\widetilde e_j\lambda).\] In particular $\mathrm{Res}_i^{\epsilon_i(\lambda)} D(\widetilde e_j\lambda)\not=0$ from which the lemma follows by Lemma \ref{Lemma39sr}. \end{proof} \subsection{Reduction modulo $p$} We now consider some results about reduction modulo $p$ of spin representations in characteristic 0. If $\mu\in{\mathscr {RP}}_0(n)$, let $\mu^R\in{\mathscr {RP}}_p(n)$ be as defined in \cite{BK3}. The main known result is the following, see \cite[Theorem 10.8]{BK2}, \cite[Theorem 10.4]{BK4} and \cite[Theorem 4.4]{BK3}: \begin{lemma}\label{L54} Let $\mu\in{\mathscr {RP}}_0(n)$ and $\nu\in{\mathscr {RP}}_p(n)$. If $[S(\mu):D(\nu)]>0$ then $\nu\unlhd \mu^R$. Further \[[S(\mu):D(\mu^R)]=2^{(h_p(\mu)+a(\mu)-a(\mu^R))/2}.\] \end{lemma} Let $n=ap+b$ with $0\leq b<p$. The spin irreducible representations of $\widetilde{\sf S}_n$ and $\widetilde {\sf A}_n$ indexed by the partition \[\beta_n:=\left\{\begin{array}{ll} (p^a,b),&b\not=0,\\ (p^{a-1},p-1,1),&b=0 \end{array}\right.\] are called basic spin modules. Basic spin modules in characteristic $p$ are exactly the composition factors of the reduction modulo $p$ of the basic spin modules in characteristic 0, which are indexed by $(n)\in{\mathscr {RP}}_0(n)$. The following holds by \cite[Table III]{Wales} and Lemma \ref{L54}: \begin{lemma}\label{LBS}\label{l1} Let $p\geq 3$.
Then \begin{enumerate}[-] \item if $p\nmid n$ and $2\nmid n$ then $S((n),0)\cong D(\beta_n,0)$, \item if $p\nmid n$ and $2\mid n$ then $S((n),\pm)\cong D(\beta_n,\pm)$, \item if $p\mid n$ and $2\nmid n$ then $S((n),0)\cong D(\beta_n,+)\oplus D(\beta_n,-)$, \item if $p\mid n$ and $2\mid n$ then $S((n),\pm)\cong D(\beta_n,0)$. \end{enumerate} \end{lemma} The next lemma shows that there are cases where it is easy to compute $\mu^R$ using the partitions $\beta_{\mu_i}$. \begin{lemma}\label{L55} Let $p\geq 3$ and $\mu\in{\mathscr {RP}}_0(n)$. Then $\mu^R\unlhd\sum\beta_{\mu_i}$ with equality holding if and only if $\mu_i\geq\mu_{i+1}+p$ for $1\leq i<h(\mu)$. Further if $\mu_i\geq\mu_{i+1}+p+\delta_{p\mid\mu_{i+1}}$ for $1\leq i<h(\mu)$ and $\nu\in{\mathscr {RP}}_0(n)$ with $\nu\not\!\!\unlhd\mu$ then $[S(\nu):D(\mu^R)]=0$. \end{lemma} \begin{proof} Let \[\bar\mu:=\cup\{(j,p(i-1)+k)|(j,k)\in\beta_{\mu_i}\},\] so that the first $p$ columns of $\bar\mu$ correspond to $\beta_{\mu_1}$, the second $p$ columns to $\beta_{\mu_2}$ and so on. Note that $\bar\mu$ is not necessarily (the Young diagram of) a partition, but $\bar\mu$ and $\sum\beta_{\mu_i}$ always have the same number of nodes on any row. Further $\mu$ and $\bar\mu$ have the same number of nodes on any ladder. It then easily follows from the definition of $\mu^R$ that $\mu^R\unlhd \sum\beta_{\mu_i}$. Assume next that $\mu_i<\mu_{i+1}+p$ for some $1\leq i<h(\mu)$. Let $(j,k)$ be the good node of $\beta_{\mu_{i+1}}$. Then $(j+1,p(i-1)+k)\not\in\bar\mu$ and \[(\bar\mu\setminus(j,pi+k))\cup(j+1,p(i-1)+k)\] has the same number of nodes as $\mu$ on each ladder. It then follows that $\mu^R\not=\sum\beta_{\mu_i}$ in this case. Assume now that $\mu_i\geq\mu_{i+1}+p$ for $1\leq i<h(\mu)$. Let $A$ be the set of all $1\leq r<h(\mu)$ with $p\mid\mu_r$ and $\mu_r=\mu_{r+1}+p$.
Then \[\sum\beta_{\mu_i}=(\bar\mu\setminus\{(h(\beta_{\mu_{r+1}}),pr+1)|r\in A\})\cup\{(h(\beta_{\mu_{r+1}}),pr)|r\in A\}\] (that is $\sum\beta_{\mu_i}$ is obtained from $\bar\mu$ by moving the last node in the $(r+1)$-th set of $p$ columns one node to the left for all $r\in A$). So $\sum\beta_{\mu_i}$ and $\bar\mu$ have the same number of nodes on any ladder. Further, by the assumption that $\mu_i\geq\mu_{i+1}+p$, it can be checked that $\sum\beta_{\mu_i}\in{\mathscr {RP}}_p(n)$ and so $\mu^R=\sum\beta_{\mu_i}$. Finally assume that $\mu_i\geq\mu_{i+1}+p+\delta_{p\mid\mu_{i+1}}$. Note that in this case $\bar\mu=\sum\beta_{\mu_i}=\mu^R$ by the last paragraph. Let $\nu\in{\mathscr {RP}}_0(n)$ with $\nu\not\!\!\unlhd\mu$. By Lemma \ref{L54} and the above it is enough to prove that $\sum\beta_{\mu_i}\not\!\!\unlhd\sum\beta_{\nu_i}$. Pick $r$ with $\nu_1+\ldots+\nu_r>\mu_1+\ldots+\mu_r$ and define $\bar\nu$ similarly to $\bar\mu$. Then the first $rp$ columns of $\bar\nu$ contain more nodes than the first $rp$ columns of $\bar\mu$. In particular the first $rp$ columns of $\sum\beta_{\nu_i}$ contain more nodes than the first $rp$ columns of $\sum\beta_{\mu_i}$ and so $(\sum\beta_{\mu_i})'\not\!\unrhd(\sum\beta_{\nu_i})'$, that is $\sum\beta_{\mu_i}\not\!\!\unlhd\sum\beta_{\nu_i}$. \end{proof} \subsection{Module structure} Often we will need to consider the structure of certain modules. We write \[M\sim N_1|\ldots|N_h\] if $M$ has a filtration with subquotients $N_j$ counted from the bottom and \[M\sim (N_{1,1}|\ldots|N_{1,h_1})\,\,\oplus\,\,\ldots\,\,\oplus\,\,(N_{k,1}|\ldots|N_{k,h_k})\] if $M\cong M_1\oplus\ldots\oplus M_k$ with $M_j\sim N_{j,1}|\ldots|N_{j,h_j}$.
If $V_1,\ldots,V_h$ are simple we will also write \[M\cong V_1|\ldots|V_h\] if $M$ is uniserial with composition factors $V_j$ counted from the bottom and \[M\cong (V_{1,1}|\ldots|V_{1,h_1})\,\,\oplus\,\,\ldots\,\,\oplus\,\,(V_{k,1}|\ldots|V_{k,h_k})\] if $M\cong M_1\oplus\ldots\oplus M_k$ with $M_j\cong V_{j,1}|\ldots|V_{j,h_j}$. Further for groups $G,H$ and modules $A$ of $FG$ and $B$ of $FH$ we will write $A\boxtimes B$ for the corresponding module of $F(G\times H)$. \subsection{Permutation modules} In this subsection we will consider the structure of certain permutation modules and prove some results connecting such permutation modules and the endomorphism ring $\mathrm{End}_F(V)$, for $V$ an $\widetilde{\sf S}_n$- or $\widetilde {\sf A}_n$-module. For $\alpha\in{\mathscr {P}}(n)$ a partition of $n$ let $S^\alpha$ be the reduction modulo $p$ of the Specht module indexed by $\alpha$ (which can be viewed as an $\widetilde{\sf S}_n$-module). Further let ${\sf S}_\alpha$ be the Young subgroup ${\sf S}_{\alpha_1}\times{\sf S}_{\alpha_2}\times\cdots\leq{\sf S}_n$ and define $M^\alpha:=\mathbf{1}{\uparrow}_{\widetilde{\sf S}_\alpha}^{\widetilde{\sf S}_n}$ to be the corresponding permutation module. It is well known (see for example \cite{JamesBook}) that $S^\alpha\subseteq M^\alpha$. It can be easily checked that if $\alpha\not=(1^n)$ then $M^\alpha{\downarrow}_{\widetilde {\sf A}_n}\cong \mathbf{1}{\uparrow}_{\widetilde {\sf A}_{\alpha}}^{\widetilde {\sf A}_n}$, where ${\sf A}_\alpha={\sf S}_\alpha\cap {\sf A}_n$. The next lemma holds by Frobenius reciprocity and the definition of $M^\alpha$.
\begin{lemma}\label{l2} For any $F\widetilde{\sf S}_n$-module $V$ and any $\alpha\in{\mathscr {P}}(n)$ we have that \[\dim\mathrm{Hom}_{\widetilde{\sf S}_n}(M^\alpha,\mathrm{End}_F(V))=\dim\mathrm{End}_{\widetilde{\sf S}_\alpha}(V{\downarrow}_{\widetilde{\sf S}_\alpha}).\] Similarly for any $F\widetilde {\sf A}_n$-module $W$ and any $(1^n)\not=\alpha\in{\mathscr {P}}(n)$ we have that \[\dim\mathrm{Hom}_{\widetilde {\sf A}_n}(M^\alpha,\mathrm{End}_F(W))=\dim\mathrm{End}_{\widetilde {\sf A}_\alpha}(W{\downarrow}_{\widetilde {\sf A}_\alpha}).\] \end{lemma} We will also use {\em Young modules $Y^\alpha$} which can be defined using the following well-known facts contained for example in \cite{JamesArcata} and \cite[\S4.6]{Martin}: \begin{lemma} \label{LYoung} There exist indecomposable $F {\sf S}_n$-modules $Y^\alpha$ for $\alpha\in{\mathscr {P}}(n)$ such that $M^\alpha\cong Y^\alpha\,\oplus\, \bigoplus_{\beta\rhd\alpha}(Y^\beta)^{\oplus m_{\beta,\alpha}}$ for some $m_{\beta,\alpha}\geq 0$. Moreover, $Y^\alpha$ can be characterized as the unique indecomposable direct summand of $M^\alpha$ such that $S^\alpha\subseteq Y^\alpha$. Finally, we have $(Y^\alpha)^*\cong Y^\alpha$ for all $\alpha\in{\mathscr {P}}(n)$. \end{lemma} In order to prove that $V\otimes W$ is irreducible, we will usually prove that $\mathrm{Hom}_G(\mathrm{End}_F(V),\mathrm{End}_F(W))$ is not 1-dimensional by studying the modules $\mathrm{End}_F(V)$ and $\mathrm{End}_F(W)$ separately. This will in many cases be done with the next lemma, which is equivalent to \cite[Lemma 4.2]{m2} (for covering groups instead of symmetric and alternating groups). \begin{lemma}\label{l15} Let $G\in\{\widetilde{\sf S}_n,\widetilde {\sf A}_n\}$ and $B$ and $C$ be $FG$-modules.
For $\alpha\in{\mathscr {P}}(n)$ let $b_\alpha$ and $c_\alpha$ be such that there exist $\phi^\alpha_1,\ldots,\phi^\alpha_{b_\alpha}\in\mathrm{Hom}_G(M^\alpha,B^*)$ with $\phi^\alpha_1|_{S^\alpha},\ldots,\phi^\alpha_{b_\alpha}|_{S^\alpha}$ linearly independent and that similarly there exist $\psi^\alpha_1,\ldots,\psi^\alpha_{c_\alpha}\in\mathrm{Hom}_G(M^\alpha,C)$ with $\psi^\alpha_1|_{S^\alpha},\ldots,\psi^\alpha_{c_\alpha}|_{S^\alpha}$ linearly independent. Then \[\dim\mathrm{Hom}_G(B,C)\geq\sum_{\alpha\in D}b_\alpha c_\alpha,\] where $D={\mathscr {P}}_p(n)$ if $G=\widetilde{\sf S}_n$ or $D=\{\alpha\in{\mathscr {P}}_p(n)|\alpha>\alpha^{\tt M}\}$ if $G=\widetilde {\sf A}_n$. \end{lemma} Since we will often consider permutation modules $M^{(n-m,\mu)}$ for certain fixed partitions $\mu\in{\mathscr {P}}(m)$ (with $m$ small), we will write $M_{\mu_1,\mu_2,\ldots}$ (or $M_\mu$) for the module $M^{(n-m,\mu)}$. Similarly we will write $D_\mu$, $S_\mu$ and $Y_\mu$ (when they are defined). \subsection{Hooks} We now consider the structure of the reduction modulo $p$ of Specht modules indexed by hook partitions. Such modules have a quite easy structure, since $p\not=2$. For $0\leq k\leq n-1-\delta_{p\mid n}$ define \[\overline{D}_{n,k}=\overline{D}_k:=\left\{\begin{array}{ll} D^{(n-k,(k)^{\tt M})},&k<n(p-1)/p,\\ D^{((k+1)^{\tt M},n-k-1)},&k\geq n(p-1)/p\text{ and }p\nmid n,\\ D^{((k+2)^{\tt M},n-k-2)},&k\geq n(p-1)/p\text{ and }p\mid n. \end{array}\right.\] Note that for $k<p$ we then have that $\overline{D}_k=D_{1^k}$ (unless $k=p-1=n-1$). Define ${\mathscr {H}}_p(n):=\{(a,(b)^{\tt M}),((c)^{\tt M},d)\}\cap{\mathscr {P}}_p(n)$, so that ${\mathscr {H}}_p(n)$ is the set of partitions labeling the modules $\overline{D}_k$. The next lemma holds by \cite[p. 52]{JamesR} and \cite[Theorem 2]{Peel}. \begin{lemma}\label{LH} Let $p\geq 3$.
Then for $0\leq k\leq n-1$: \begin{enumerate}[-] \item if $p\nmid n$ then $S_{1^k}\cong \overline{D}_k$, \item if $p\mid n$ then $S_{1^k}\cong \overline{D}_{k-1}|\overline{D}_k$, where $\overline{D}_{-1}=\overline{D}_{n-1}=0$. \end{enumerate} \end{lemma} The following properties then easily follow: \begin{lemma}\label{L20} Let $c=1$ if $p\nmid n$ or $c=2$ if $p\mid n$. Then $\overline{D}_k\cong \overline{D}_{n-c-k}\otimes\mathbf{\mathrm{sgn}}$ for each $0\leq k\leq n-c$. In particular $\overline{D}_k\cong \overline{D}_k\otimes\mathbf{\mathrm{sgn}}$ if and only if $k=(n-c)/2$. \end{lemma} \begin{lemma}\label{L35} Let $p\geq 3$. Then $\lambda\in{\mathscr {H}}_p(n)$ if and only if $\lambda^{\tt M}\in{\mathscr {H}}_p(n)$. \end{lemma} If $k\not=(n-1-\delta_{p\mid n})/2$ we will then write $\overline{E}_k$ for $\overline{D}_k{\downarrow}_{{\sf A}_n}$. On the other hand if $k=(n-1-\delta_{p\mid n})/2$ we will then write $\overline{E}_{k,\pm}$ for the composition factors of $\overline{D}_k{\downarrow}_{{\sf A}_n}$. When working with $\widetilde{\sf S}_n$ and $\widetilde {\sf A}_n$ at the same time, we will often write $\overline{D}_k$ also to indicate its restriction to $\widetilde {\sf A}_n$. \section{Special homomorphisms}\label{s3} In this section we will prove that for certain large classes of modules $V$ there exist homomorphisms $\psi\in\mathrm{Hom}_G(M,\mathrm{End}_F(V))$ with $M=M_\mu$ or $S_\mu$ which do not vanish on $S_\mu$. \subsection{Definition of homomorphisms}\label{ssh} We now define certain special elements $x_\mu$. Using these elements we will then define the homomorphisms that will play a role in this section. After having proved some branching rules in \S\ref{sbr}, we will then prove in \S\ref{shr} that these homomorphisms do not vanish on $S_\mu$ for large classes of modules $V$.
For $k\geq 3$ odd let $C_k^+$ and $C_k^-$ be the conjugacy classes in $\widetilde {\sf A}_n$ of $\widetilde{(1,2,3,\ldots,k)}$ and $\widetilde{(2,1,3,\ldots,k)}$ respectively (so that $C_k^\pm$ are the two conjugacy classes in $\widetilde {\sf A}_n$ consisting of the odd order lifts of $k$-cycles). Define \begin{align*} x_3:=&\sum_{g\in{\sf S}_{\{1,4\}}\times{\sf S}_{\{2,5\}}\times{\sf S}_{\{3,6\}}}\mathbf{\mathrm{sgn}}(g)\widetilde g(\widetilde{(1,2,3)}+\widetilde{(1,3,2)})(\widetilde g)^{-1},\\ x_{3,1^2}:=&\sum_{g\in {\sf S}_{4,2^2}}\sum_{h\in{\sf S}_{\{2,6,8\}}}\mathbf{\mathrm{sgn}}(g)\widetilde g\widetilde h\widetilde{(2,6,8,3,4)}(\widetilde h)^{-1}(\widetilde g)^{-1},\\ x_{1^k}:=&\sum_{g\in C_k^+}\widetilde g-\sum_{g\in C_k^-}\widetilde g, \end{align*} where $x_{1^k}$ is defined only for $k\geq 3$ odd. \begin{lemma}\label{L6} Let $n\geq 6$, $G\in\{\widetilde{\sf S}_n,\widetilde {\sf A}_n\}$ and $V$ be an $FG$-module. If $x_3 V\not=0$ then there exists $\psi\in\mathrm{Hom}_G(M_3,\mathrm{End}_F(V))$ which does not vanish on $S_3$. \end{lemma} \begin{proof} See the proof of \cite[Theorem 7.2]{kt}. \end{proof} \begin{lemma}\label{M311} Let $n\geq 8$, $G\in\{\widetilde{\sf S}_n,\widetilde {\sf A}_n\}$ and $V$ be an $FG$-module. If $x_{3,1^2}V\not=0$ then there exists $\psi\in\mathrm{Hom}_G(M_{3,1^2},\mathrm{End}_F(V))$ which does not vanish on $S_{3,1^2}$. \end{lemma} \begin{proof} Let $\{v_{\{a,b,c\},d,e}\}$ be the standard basis of $M_{3,1^2}$. Define $\psi:M_{3,1^2}\to\mathrm{End}_F(V)$ through \begin{align*} \psi(v_{\{a,b,c\},d,e})(w)=\sum_{h\in{\sf S}_{\{a,b,c\}}}\widetilde h\widetilde{(a,b,c,d,e)}(\widetilde h)^{-1}w \end{align*} for $w\in V$. Then $\psi\in\mathrm{Hom}_G(M_{3,1^2},\mathrm{End}_F(V))$. Further if $t$ is the element of the standard basis of $S_{3,1^2}$ corresponding to \[\begin{array}{cccccc} 1&5&7&9&\cdots&n\\ 2&6&8\\ 3\\ 4 \end{array}\] we have that $\psi(t)$ is just multiplication with $x_{3,1^2}$.
\end{proof} \begin{lemma}\label{L1a} Let $G\in\{\widetilde{\sf S}_n,\widetilde {\sf A}_n\}$ and $V$ be an $FG$-module. If $k\geq 3$ is odd, $n>k$ and $x_{1^k}V\not=0$ then there exists $0\not=\psi\in\mathrm{Hom}_G(S_{1^k},\mathrm{End}_F(V))$. If $p\nmid k$ then $\psi$ extends to $\phi\in\mathrm{Hom}_G(M_{1^k},\mathrm{End}_F(V))$. \end{lemma} \begin{proof} Let $\{v_{b_1,\ldots,b_k}:1\leq b_j\leq n\text{ pairwise distinct}\}$ and $\{e_{b_1,\ldots,b_k}:2\leq b_1<\ldots<b_k\leq n\}$ be the standard bases of $M_{1^k}$ and $S_{1^k}$ respectively. For $w\in V$ define \begin{align*} \overline\phi(v_{b_1,\ldots,b_k})(w)&:=\widetilde{(b_1,\ldots,b_k)}w,\\ \psi(e_{b_1,\ldots,b_k})(w)&:=\sum_{g\in C_{b_1,\ldots,b_k}^+}gw-\sum_{g\in C_{b_1,\ldots,b_k}^-}gw, \end{align*} with $C_{b_1,\ldots,b_k}^+$ and $C_{b_1,\ldots,b_k}^-$ the conjugacy classes of $\widetilde{(b_1,b_2,b_3,\ldots,b_k)}$ and $\widetilde{(b_2,b_1,b_3,\ldots,b_k)}$ in $\widetilde {\sf A}_{\{1,b_1,\ldots,b_k\}}$. Then $\overline\phi\in\mathrm{Hom}_G(M_{1^k},\mathrm{End}_F(V))$ and $\psi\in\mathrm{Hom}_G(S_{1^k},\mathrm{End}_F(V))$. Since $\psi(e_{2,\ldots,k+1})$ is given by multiplication with $\pm x_{1^k}$, the first part of the lemma follows. The second part follows from $\overline\phi|_{S_{1^k}}=k\psi$. \end{proof} \subsection{Branching recognition}\label{sbr} In order to check that, in most cases, if $V$ is an irreducible representation of $\widetilde{\sf S}_n$ or $\widetilde {\sf A}_n$ then $x_\mu V\not=0$ (for $x_\mu$ one of the elements defined in the previous subsection), we will prove that $x_\mu W\not=0$, for $W$ a composition factor of $V{\downarrow}_{\widetilde{\sf S}_m}$ or $V{\downarrow}_{\widetilde {\sf A}_m}$ with $m$ small (depending on $\mu$). In order to do this, we will prove in this section that the restrictions $V{\downarrow}_{\widetilde{\sf S}_m}$ and $V{\downarrow}_{\widetilde {\sf A}_m}$ often contain modules indexed by partitions with similar properties to the partition indexing $V$.
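For orientation, recall the classical characteristic $0$ prototype of such branching arguments, which is standard (see for example \cite{JamesBook}): for any partition $\lambda$ of $n$ one has, in characteristic $0$, \[S^\lambda{\downarrow}_{{\sf S}_{n-1}}\cong\bigoplus_{A}S^{\lambda\setminus A},\] the sum running over the removable nodes $A$ of $\lambda$; for instance $S^{(3,2)}{\downarrow}_{{\sf S}_4}\cong S^{(3,1)}\oplus S^{(2,2)}$. In characteristic $p$ the restriction $D^\lambda{\downarrow}_{{\sf S}_{n-1}}$ is in general no longer semisimple, and we will instead extract composition factors of restrictions by removing normal and good nodes.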
\begin{lemma}{\cite[Lemma 2.4]{kt}}\label{L2.4} Let $p\geq 3$, $n\geq 6$ and $\lambda\in{\mathscr {RP}}_p(n)\setminus\{\beta_n\}$. Then there exists $\mu\in{\mathscr {RP}}_p(n-1)\setminus\{\beta_{n-1}\}$ such that $D(\mu)$ is a composition factor of $D(\lambda){\downarrow}_{\widetilde{\sf S}_{n-1}}$. \end{lemma} \begin{lemma} Let $p=3$, $n\geq 9$ and $\lambda=(\lambda_1,\lambda_2)$ with $\lambda_1\geq\lambda_2+2\geq 5$. Then there exists $\mu=(\mu_1,\mu_2)$ with $\mu_1\geq\mu_2+2\geq 5$ such that $D^\mu$ is a composition factor of $D^\lambda{\downarrow}_{{\sf S}_{n-1}}$. \end{lemma} \begin{proof} If $\lambda_1\geq\lambda_2+3$ then $D^{(\lambda_1-1,\lambda_2)}$ is a composition factor of $D^\lambda{\downarrow}_{{\sf S}_{n-1}}$ by Lemma \ref{Lemma39}. If $\lambda_1=\lambda_2+2$ then $\lambda_2\geq 3$ and $D^{(\lambda_1,\lambda_2-1)}$ is a composition factor of $D^\lambda{\downarrow}_{{\sf S}_{n-1}}$ by the same lemma. \end{proof} \begin{lemma}\label{L34} Let $p\geq 3$, $n\geq 7$ and $\lambda\in{\mathscr {P}}_p(n)\setminus{\mathscr {H}}_p(n)$. Then there exists a composition factor of $D^\lambda{\downarrow}_{{\sf S}_{n-1}}$ of the form $D^\mu$ with $\mu\in{\mathscr {P}}_p(n-1)\setminus{\mathscr {H}}_p(n-1)$. Assume now that $n\geq 10$. If further $h(\lambda),h(\lambda^{\tt M})\geq 3$, then there exists $\mu\in{\mathscr {P}}_p(n-1)\setminus{\mathscr {H}}_p(n-1)$ with $h(\mu),h(\mu^{\tt M})\geq 3$ and such that $D^\mu$ is a composition factor of $D^\lambda{\downarrow}_{{\sf S}_{n-1}}$. \end{lemma} \begin{proof} Throughout the proof we will use Lemma \ref{Lemma39} without further reference to it. By Lemma \ref{L35} we have that ${\mathscr {H}}_p(n)$ is fixed under the Mullineux map. So the lemma holds for $\lambda$ if and only if it holds for $\lambda^{\tt M}$.
We may assume (up to taking $\lambda^{\tt M}$) that $\lambda$ has a good node $A$ such that $\mu=\lambda\setminus A\in{\mathscr {H}}_p(n-1)$ or that $n\geq 10$, $\lambda=(\lambda_1,\lambda_2,1)$, $h(\lambda^{\tt M})\geq 3$, $(3,1)$ is good, while $(1,\lambda_1)$ and $(2,\lambda_2)$ are not good. {\bf Case 1:} $n\geq 10$, $\lambda=(\lambda_1,\lambda_2,1)$, $h(\lambda^{\tt M})\geq 3$, $(3,1)$ is good, while $(1,\lambda_1)$ and $(2,\lambda_2)$ are not good. {\bf Case 1.1:} $p=3$. In this case we may assume that $\lambda_1\geq\lambda_2+2\geq 5$, since else $\lambda\in{\mathscr {H}}_p(n)$. If $\lambda_1\geq\lambda_2+3$ then let $B:=(1,\lambda_1)$. If $\lambda_1=\lambda_2+2$ then $\lambda_2\geq 4$; in this case let $B:=(2,\lambda_2)$. Then $B$ is normal in $\lambda$, $\lambda\setminus B\not\in{\mathscr {H}}_p(n-1)$ and it can be easily checked that $h(\lambda\setminus B)=3$ and $h((\lambda\setminus B)^{\tt M})=5$. {\bf Case 1.2:} $p\geq 5$. In this case we may assume that $\lambda_2\geq 2$. Further $\lambda_1\geq 5$. If $\lambda_1>\lambda_2$ let $B:=(1,\lambda_1)$. If $\lambda_1=\lambda_2$ let $B:=(2,\lambda_2)$. Then $B$ is normal, $\lambda\setminus B\not\in{\mathscr {H}}_p(n-1)$ and $h(\lambda\setminus B)=3$. Since $(\lambda\setminus B)_1\geq 4$, the $p$-rim of $\lambda\setminus B$ contains at least $\min\{p+\delta_{p=5},(\lambda\setminus B)_1+2\}\geq 6$ nodes. So $h((\lambda\setminus B)^{\tt M})\geq 3$. {\bf Case 2:} $\mu=(n-1)$. Then $\lambda\in\{(n),(n-1,1)\}$, so $\lambda\in{\mathscr {H}}_p(n)$, contradicting the assumptions. {\bf Case 3:} $\mu=(n-1)^{\tt M}$. In this case $(n)$ can be obtained from $\lambda^{\tt M}$ by removing a good node by Lemma \ref{l17}. So this case follows from case 2. {\bf Case 4:} $\mu=(n-k-1,1^k)$ with $1\leq k\leq p-2$. Then $\lambda\in\{(n-k,1^k),(n-k-1,1^{k+1}),(n-k-1,2,1^{k-1})\}$, so we may assume that $\lambda=(n-k-1,2,1^{k-1})$. Let $B:=(1,n-k-1)$. {\bf Case 4.1:} $n-k\geq p+3$ or $n-k=p+1$.
In this case $B$ is normal in $\lambda$ and $\lambda\setminus B\not\in{\mathscr {H}}_p(n-1)$. Further the first columns of the Mullineux symbols of $\lambda$ and $\lambda\setminus B$ are equal. Thus $h(\lambda)=h(\lambda\setminus B)$ and $h(\lambda^{\tt M})=h((\lambda\setminus B)^{\tt M})$ and then the lemma holds. {\bf Case 4.2:} $n-k=p+2$. In this case $n\leq 2p$ and so $p\geq 5$. Again $B$ is normal and $\lambda\setminus B\not\in{\mathscr {H}}_p(n-1)$. Further $h(\lambda\setminus B)=h(\lambda)$ and, since the first column of the Mullineux symbol of $\lambda\setminus B$ is $\binom{p+k-1}{k+1}$, we have that $h((\lambda\setminus B)^{\tt M})\geq p-2\geq 3$. So the lemma holds. {\bf Case 4.3:} $4\leq n-k\leq p$. We may assume that $(n-k,k)\not=(4,p-3)$, since else $\lambda\setminus B=(n-1)^{\tt M}$, which was already covered in case 3 (since $B$ is normal). In this case $n\leq 2p-2$ and so $p\geq 5$. Then $B$ is normal and $\lambda\setminus B\not\in{\mathscr {H}}_p(n-1)$. Further $h(\lambda\setminus B)=h(\lambda)$ and $h((\lambda\setminus B)^{\tt M})\geq n-k-1\geq 3$. So the lemma holds. {\bf Case 4.4:} $n-k=3$. In this case $\lambda=(2^2,1^{k-1})$. If $k=p-2$ then $\lambda\in{\mathscr {H}}_p(n)$, so, since $n\geq 7$, we may assume that $4\leq k\leq p-3$ (so in particular $p\geq 7$). In this case $\lambda^{\tt M}=(k+1,2)$, so we only have to prove the first part of the lemma, which follows from $C:=(1,k+1)$ being normal in $\lambda^{\tt M}$ and from $\lambda^{\tt M}\setminus C\not\in{\mathscr {H}}_p(n-1)$. {\bf Case 5:} $h(\mu)=p$. By Lemmas \ref{l17} and \ref{L20} we may assume that $\mu=(c,(d)^{\tt M})$ with $c+d=n-1$ and $c>d\geq p-1$ (otherwise $\mu^{\tt M}$ is of this form or of one of the forms considered in cases 2--4). In this case $\lambda\in\{(c+1,(d)^{\tt M}),(c,(d+1)^{\tt M}),(c,(d)^{\tt M},1),(c,(d)^{\tt M})\cup (2,(d)^{\tt M}_1+1)\}$.
We may assume that $d\geq p$ and $\lambda=(c,(d)^{\tt M},1)$ or that $(p-1)\nmid d$ and $\lambda=(c,(d)^{\tt M})\cup (2,(d)^{\tt M}_1+1)$. From $c>d\geq p$ and since $c+d=n-1$ we have $p\leq d\leq(n-2)/2$ and so \begin{align*} \lambda_1-\lambda_2&\geq c-(\lceil\frac{d}{p-1}\rceil+1)\\ &=(n-d-1)-(\lceil\frac{d}{p-1}\rceil+1)\\ &\geq n-2-\lceil\frac{pd}{p-1}\rceil\\ &\geq \lfloor(n-2)\frac{p-2}{2(p-1)}\rfloor\\ &\geq \lfloor \frac{p(p-2)}{p-1}\rfloor\\ &=\lfloor\frac{(p-1)^2-1}{p-1}\rfloor\\ &=p-2. \end{align*} {\bf Case 5.1:} $p\geq 5$. Let $B:=(1,c)$. Then $B$ is normal, $\lambda\setminus B\not\in{\mathscr {H}}_p(n-1)$ and $h(\lambda\setminus B)=h(\lambda)$. Further the $p$-rim of $\lambda\setminus B$ contains at least \[\min\{(\lambda\setminus B)_1-(\lambda\setminus B)_2+1,p\}+h(\lambda\setminus B)-1\geq h(\lambda\setminus B)+p-2\] nodes. So the first column of the Mullineux symbol of $\lambda\setminus B$ is $\binom{\geq h(\lambda\setminus B)+p-2}{h(\lambda\setminus B)}$ and then $h((\lambda\setminus B)^{\tt M})\geq p-2\geq 3$. {\bf Case 5.2:} $p=3$. Again let $B:=(1,c)$. If $h(\lambda)=3$ then the $p$-rim contains at least $\min\{(\lambda\setminus B)_1-(\lambda\setminus B)_2+1,p\}+2$ nodes. If $h(\lambda)=4$ then the $p$-rim contains at least $\min\{(\lambda\setminus B)_1-(\lambda\setminus B)_2+1,p\}+3$ nodes. If $\lambda_1-\lambda_2=(\lambda\setminus B)_1-(\lambda\setminus B)_2+1\geq 3$ we can then argue in either case as in case 5.1. So assume that $\lambda_1-\lambda_2\leq 2$. Then $c-(d+1)/2-1\leq\lambda_1-\lambda_2\leq 2$. Since $c\geq n/2$ and $d\leq (n-2)/2$ it follows that $n/2\leq c\leq n/4+3$. So $n\leq 12$ and it can then be easily checked that $\lambda\in\{(4,2,1^2),(5,3,1),(6,4,2)\}$. In the first case $D^{(3,2,1^2)}$ gives a composition factor of $D^\lambda{\downarrow}_{{\sf S}_{n-1}}$ as wanted, in the second case $D^{(5,3)}$, in the third case $D^{(6,4,1)}$. 
\end{proof} \subsection{Endomorphism rings}\label{shr} We are now ready to study the endomorphism rings $\mathrm{End}_F(V)$ for $V$ a simple $F\widetilde{\sf S}_n$- or $F\widetilde {\sf A}_n$-module indexed by certain (large) families of partitions. We will use the elements $x_\mu$ defined at the beginning of \S\ref{ssh}. \begin{lemma}\label{LM3} Let $p\geq 3$, $n\geq 6$ and $\lambda\in{\mathscr {P}}_p(n)$. If $h(\lambda),h(\lambda^{\tt M})\geq 3$ and $V$ is a simple $F{\sf S}_n$- or $F {\sf A}_n$-module indexed by $\lambda$ then there exists $\phi:M_3\to\mathrm{End}_F(V)$ which does not vanish on $S_3$. \end{lemma} \begin{proof} Let $\{v_{\{a,b,c\}}\}$ be the standard basis of $M_3$ and define $\phi_3(v_{\{a,b,c\}})(w):=(a,b,c)w+(a,c,b)w$ for any $w\in V$. If $V\cong D^\lambda$ (and then also if $V\cong E^\lambda$) we have that $\phi_3$ does not vanish on $S_3$ by \cite[Propositions 3.6, 3.8]{bk5} if $p\geq 5$ (using that in this case $M_3\sim S_3|M_2$ by \cite[Lemmas 3.1, 3.2]{bk5}) or by \cite[Corollary 6.7]{kmt} if $p=3$. So we may assume that $V\cong E^\lambda_\pm$. Since $D^\lambda{\downarrow}_{{\sf A}_n}\cong E^\lambda_+\oplus E^\lambda_-$ and $\phi_3$ is defined through multiplication with elements in ${\sf A}_n$, there exists $\epsilon\in\{\pm\}$ such that $\phi_3:M_3\to\mathrm{End}_F(E^\lambda_\epsilon)$ does not vanish on $S_3$. Since $E^\lambda_+\cong (E^\lambda_-)^\sigma$ for $\sigma\in{\sf S}_n\setminus {\sf A}_n$, the result holds also for $E^\lambda_{-\epsilon}$. \end{proof} \begin{lemma}\label{LM3S} Let $p\geq 3$, $n\geq 6$ and $\lambda\in{\mathscr {RP}}_p(n)\setminus\{\beta_n\}$. If $V$ is a simple $F\widetilde{\sf S}_n$- or $F\widetilde {\sf A}_n$-module indexed by $\lambda$ then there exists $\phi:M_3\to\mathrm{End}_F(V)$ which does not vanish on $S_3$. \end{lemma} \begin{proof} For $p\geq 5$ this holds by \cite[Theorem 7.2]{kt} (and its proof). So we may assume that $p=3$. From Lemma \ref{L6} it is enough to prove that $x_3V\not=0$.
From Lemma \ref{L2.4} there exists a composition factor of $V{\downarrow}_{\widetilde {\sf A}_6}$ of the form $E((4,2),\pm)$. So it is enough to prove that $x_3E((4,2),\pm)\not=0$. Let $W((6),0)$ be the reduction modulo 3 of the basic spin module of $\widetilde {\sf A}_6$ in characteristic 0 and $W((4,2),\pm)$ be the reduction modulo 3 of the simple spin modules of $\widetilde {\sf A}_6$ indexed by $(4,2)$ in characteristic 0. Let $\chi^{(6),0}$ and $\chi^{(4,2),\pm}$ be the characters of $W((6),0)$ and $W((4,2),\pm)$ respectively. Using decomposition matrices and Lemma \ref{L54} it can be checked that the characters of $E((4,2),\pm)$ (over the field $F$) are $\chi^\pm=\chi^{(4,2),\pm}-\chi^{(6),0}$. Let $y:=\widetilde{(1,5,2,3)(4,6)}$. In order to prove that $x_3E((4,2),\pm)\not=0$ it is enough to prove that $\chi^\pm(x_3y)\not=0$. It can be computed that $x_3y$ is given by \begin{align*} &z^{\ldots}\widetilde{(1,5,3,2)(4,6)}+z^{\ldots}\widetilde{(1,5)(4,6)}+z^{\ldots}\widetilde{(1,3)(2,4,6,5)}+z^{\ldots}\widetilde{(1,4,6,3)(2,5)}\\ &+\widetilde{(1,5,6,2,3)}+z\widetilde{(1,5,4,2,3)}+\widetilde{(1,6,4)(2,3,5)}+\widetilde{(2,3,6,4,5)}\\ &-z\widetilde{(1,5,3)(2,4,6)}-z\widetilde{(1,5,4,6,3)}-z^{\ldots}\widetilde{(1,3,5,2)(4,6)}-z^{\ldots}\widetilde{(2,5)(4,6)}\\ &-z^{\ldots}\widetilde{(1,5,6,4)(2,3)}-z^{\ldots}\widetilde{(1,5)(2,3,6,4)}-z\widetilde{(1,6,5,2,3)}-\widetilde{(1,4,5,2,3)}. \end{align*} Further it can be computed that the lifts of $(1,5,3,2)(4,6)$, $(1,3)(2,4,6,5)$, $(1,3,5,2)(4,6)$ and $(1,5)(2,3,6,4)$ appearing in $x_3y$ are conjugate under $\widetilde {\sf A}_6$, as are those of $(1,4,6,3)(2,5)$ and $(1,5,6,4)(2,3)$. Since all lifts of elements of the form $(a,b)(c,d)$ are conjugate in $\widetilde {\sf A}_6$, it then follows that \[\chi^{\pm}(x_3y)=2\chi^\pm(\widetilde{(1,2,3,4,5)})+2\chi^\pm(\widetilde{(1,2,3)(4,5,6)})\equiv 2\!\mod 3,\] so the lemma holds.
\end{proof} \begin{lemma}\label{L52} Let $p=3$, $n\geq 8$, $G\in\{\widetilde {\sf S}_n,\widetilde {\sf A}_n\}$ and $\lambda=(\lambda_1,\lambda_2)$ with $\lambda_1\geq\lambda_2+2\geq 5$. Let $V$ be an irreducible $FG$-module indexed by $\lambda$. Then there exists $\psi\in\mathrm{Hom}_G(M_{3,1^2},\mathrm{End}_F(V))$ which does not vanish on $S_{3,1^2}$. \end{lemma} \begin{proof} By \cite[Lemma 1.8]{ks2}, $\lambda\not=\lambda^{\tt M}$, so $V\cong D^\lambda$ or $E^\lambda$. So it is enough to prove the lemma for $G={\sf S}_n$. From Lemma \ref{M311} it is enough to prove that $x_{3,1^2}D^\lambda\not=0$. Throughout this proof we will consider $x_{3,1^2}$ as an element of $F{\sf S}_8$ instead of $F\widetilde{\sf S}_8$ by sending $\widetilde g$ to $g$. Note that by Lemma \ref{Lemma39} and \cite[Tables]{JamesBook}, $D^{(5,3)}\cong S^{(5,3)}$ is a composition factor of $D^\lambda{\downarrow}_{{\sf S}_8}$. Let $\chi$ be the character of $S^{(5,3)}$. Let $y:=(2,6,8,3,4)$. In order to prove that $x_{3,1^2}D^\lambda\not=0$ it is enough to prove that $\chi(yx_{3,1^2})\not=0$. Note that $yx_{3,1^2}=X_+-X_-$ where \begin{align*} X_+&=y\sum_{g\in {\sf A}_{4,2^2}}\sum_{h\in{\sf S}_{\{2,6,8\}}}gh{(2,6,8,3,4)}h^{-1}g^{-1},\\ X_-&=y\sum_{g\in {\sf S}_{4,2^2}\setminus {\sf A}_{4,2^2}}\sum_{h\in{\sf S}_{\{2,6,8\}}}gh{(2,6,8,3,4)}h^{-1}g^{-1}. \end{align*} It can be computed that the number of elements appearing in $X_\pm$ corresponding to each conjugacy class of ${\sf S}_8$ is as follows ($X_\pm\in F {\sf A}_8$ so that not all conjugacy classes have to be considered): \[\begin{array}{l|c|c|c|c|c|c} \mbox{cycle type}&(1^8)&(2^2,1^4)&(2^4)&(3,1^5)&(3,2^2,1)&(3^2,1^2)\\ \hline X_+&0&18&0&15&32&11\\ X_-&2&13&0&10&12&46 \end{array}\] \[\begin{array}{l|c|c|c|c|c|c} \mbox{cycle type}&(4,2,1^2)&(4^2)&(5,1^3)&(5,3)&(6,2)&(7,1)\\ \hline X_+&27&4&53&22&8&98\\ X_-&67&24&36&12&18&48, \end{array}\] from which it easily follows that $\chi(yx_{3,1^2})\equiv 2\!\mod 3$.
\end{proof} \begin{lemma}\label{L51} Let $p=3$, $n\geq 8$ and $\lambda\in{\mathscr {RP}}_3(n)\setminus\{\beta_n\}$. If $V$ is an irreducible spin representation of $G\in\{\widetilde{\sf S}_n,\widetilde {\sf A}_n\}$ indexed by $\lambda$ then there exists $\psi\in\mathrm{Hom}_G(M_{3,1^2},\mathrm{End}_F(V))$ which does not vanish on $S_{3,1^2}$. \end{lemma} \begin{proof} Assume first that $G=\widetilde{\sf S}_n$. By Lemma \ref{L2.4} there exists a composition factor of $D(\lambda,\delta){\downarrow}_{\widetilde{\sf S}_8}$ of the form $D(\mu,\epsilon)$ with $\mu\in\{(5,2,1),(4,3,1)\}$. Let $\chi$ be the character of $D(\mu,\epsilon)$ and $\chi^{(8),\pm}$, $\chi^{(6,2),0}$ and $\chi^{(7,1),0}$ be the characters of the reduction modulo 3 of the simple spin modules in characteristic 0 indexed by the corresponding partitions. Then $\chi\in\{1/2\chi^{(6,2),0}-\chi^{(8),\pm},\chi^{(7,1),0}\}$ using decomposition matrices and Lemma \ref{L54}. In order to prove the lemma for $\widetilde{\sf S}_n$ it is enough by Lemma \ref{M311} to prove that $x_{3,1^2}D(\mu,\epsilon)\not=0$. Let $y:=\widetilde{(2,6,8,3,4)}$ and \begin{align*} X_+=&y\sum_{g\in {\sf A}_{4,2^2}}\sum_{h\in{\sf S}_{\{2,6,8\}}}\widetilde g\widetilde h\widetilde{(2,6,8,3,4)}(\widetilde h)^{-1}(\widetilde g)^{-1},\\ X_-=&y\sum_{g\in {\sf S}_{4,2^2}\setminus {\sf A}_{4,2^2}}\sum_{h\in{\sf S}_{\{2,6,8\}}}\widetilde g\widetilde h\widetilde{(2,6,8,3,4)}(\widetilde h)^{-1}(\widetilde g)^{-1}. \end{align*} Note that $yx_{3,1^2}=X_+-X_-$.
It can be computed that the number of elements appearing in $X_\pm$ corresponding to each conjugacy class of $\widetilde{\sf S}_8$ is as follows: \[\begin{array}{l|c|c|c|c|c|c} \mbox{cycle type}&(1^8)&(1^8)&(3,1^5)&(3,1^5)&(3^2,1^2)&(3^2,1^2)\\ \mbox{order of el.}&1&2&3&6&3&6\\ \hline X_+&0&0&4&11&7&4\\ X_-&2&0&4&6&32&14 \end{array}\] \[\begin{array}{l|c|c|c|c|c|c|c} \mbox{cycle type}&(5,1^3)&(5,1^3)&(5,3)&(5,3)&(7,1)&(7,1)&\mbox{others}\\ \mbox{order of el.}&5&10&15&30&7&14&\\ \hline X_+&11&42&22&0&62&36&89\\ X_-&9&27&12&0&42&6&134. \end{array}\] Since $X_\pm\in F\widetilde {\sf A}_8$, it easily follows that $\chi(yx_{3,1^2})\equiv 1\!\mod 3$. So the lemma holds for $\widetilde{\sf S}_n$. Assume now that $G=\widetilde {\sf A}_n$. If $V\cong E(\lambda,0)$ then $V\cong D(\lambda,\pm){\downarrow}_{\widetilde {\sf A}_n}$. So in this case the lemma holds by the previous part. If $V\cong E(\lambda,\pm)$ the lemma can be proved similarly to Lemma \ref{LM3}. \end{proof} \begin{lemma}\label{L3} Let $p\geq 3$, $n\geq 6$, $\lambda\in{\mathscr {P}}_p(n)\setminus{\mathscr {H}}_p(n)$ and $G\in\{{\sf S}_n,{\sf A}_n\}$. Let $V$ be an irreducible $FG$-module indexed by $\lambda$. Then there exists a non-zero $\psi\in\mathrm{Hom}_G(S_{1^3},\mathrm{End}_F(V))$. If $p\not=3$ then $\psi$ extends to $\phi\in\mathrm{Hom}_G(M_{1^3},\mathrm{End}_F(V))$. \end{lemma} \begin{proof} By Lemma \ref{L1a} it is enough to prove that $x_{1^3}V\not=0$. We will consider $x_{1^3}$ as an element of $F {\sf A}_n$. By Lemma \ref{L34} it is enough to prove that $x_{1^3}E\not=0$ for all irreducible modules $E$ of ${\sf A}_6$ indexed by $\mu\in{\mathscr {P}}_p(6)\setminus{\mathscr {H}}_p(6)$. So we may assume that $E\in\{E^{(4,2)},E^{(3^2)},E^{(3,2,1)}_\pm\}$ if $p>5$, $E\in\{E^{(4,2)},E^{(3^2)}\}$ if $p=5$ or $E=E^{(4,2)}$ if $p=3$. Note that $x_{1^3}E\not=0$ if and only if $x_{1^3}(1,2,3)E\not=0$.
It can be computed that $\pm x_{1^3}(1,2,3)$ is equal to \[(1,3)(2,4)+(1,2)(3,4)+(1,4)(2,3)+1-(1,4,3)-(1,2,4)-(2,3,4)-(1,3,2).\] If $\chi$ is the character of $E$ it then follows that $\chi(x_{1^3}(1,2,3))=\pm 12\not\equiv 0\!\mod p$ if $p\geq 5$. So assume that $p=3$. It can be computed that $\pm x_{1^3}(2,6,3,5,4)$ is equal to \begin{align*} &(2,5,4,6,3)+(1,2,6,3)(4,5)+(1,6,3,5,4)+(1,5,4,2)(3,6)\\ &-(4,6)(4,5)-(1,5,4)(2,6,3)-(1,2)(3,5,4,6)-(1,6,3)(2,5,4) \end{align*} and so $\chi(x_{1^3}(2,6,3,5,4))=\pm 2\not\equiv 0\!\mod 3$. The lemma then follows. \end{proof} \begin{lemma}\label{L4} Let $p\geq 3$, $n\geq 4$, $G\in\{\widetilde {\sf S}_n,\widetilde {\sf A}_n\}$ and $\lambda\in{\mathscr {RP}}_p(n)\setminus\{\beta_n\}$. If $V$ is an irreducible spin representation of $G$ indexed by $\lambda$ then there exists a non-zero $\psi\in\mathrm{Hom}_G(S_{1^3},\mathrm{End}_F(V))$. If $p\not=3$ then $\psi$ extends to $\phi\in\mathrm{Hom}_G(M_{1^3},\mathrm{End}_F(V))$. \end{lemma} \begin{proof} From \cite[Lemma 2.4]{kt} we have that if $m\geq 6$ and $\mu\in{\mathscr {RP}}_p(m)\setminus\{\beta_m\}$, then $D(\mu){\downarrow}_{\widetilde{\sf S}_{m-1}}$ has a composition factor which is not basic spin. Assume first that $p\geq 5$. In this case it can then be easily checked that $V{\downarrow}_{\widetilde {\sf A}_4}$ has a composition factor $E((3,1),\pm)$. Let $g:=\widetilde{(1,2,3)}$. Up to exchange of $C_{3}^\pm$ we have that \begin{align*} gx_{1^3}=&1+z^{\ldots}\widetilde{(1,2)(3,4)}+z^{\ldots}\widetilde{(1,3)(2,4)}+z^{\ldots}\widetilde{(1,4)(2,3)}\\ &-z\widetilde{(1,4,3)}-z\widetilde{(1,2,4)}-z\widetilde{(2,3,4)}-\widetilde{(1,3,2)}. \end{align*} If $\chi$ is the character of $E((3,1))$ we then have that $\chi(gx_{1^3})=\pm 6$.
It then follows from Lemma \ref{L1a} that there exists a non-zero $\psi\in\mathrm{Hom}_G(S_{1^3},\mathrm{End}_F(V))$ (if $G=\widetilde {\sf A}_n$ and $V=E(\lambda,\pm)$, then $E((3,1),\pm)$ is a composition factor of $E(\lambda,+){\downarrow}_{\widetilde {\sf A}_4}$ if and only if $E((3,1),\mp)$ is a composition factor of $E(\lambda,-){\downarrow}_{\widetilde {\sf A}_4}$ and there exists a non-zero $\psi_+\in\mathrm{Hom}_{\widetilde {\sf A}_n}(S_{1^3},\mathrm{End}_F(E(\lambda,+)))$ if and only if there exists a non-zero $\psi_-\in\mathrm{Hom}_{\widetilde {\sf A}_n}(S_{1^3},\mathrm{End}_F(E(\lambda,-)))$). Assume now that $p=3$. Then $n\geq 5$ and $V{\downarrow}_{\widetilde {\sf A}_5}$ has a composition factor $E((4,1),0)$. Let $g:=\widetilde{(1,2)(4,5)}$. Then, up to exchange of $C_{3}^\pm$, \begin{align*} gx_{1^3}=&\widetilde{(1,2,3,5,4)}+\widetilde{(1,5,4,3,2)}+z\widetilde{(2,5,4)}+z^{\ldots}\widetilde{(1,3)(4,5)}\\ &-z\widetilde{(1,2,5,4,3)}-z\widetilde{(1,3,5,4,2)}-\widetilde{(1,5,4)}-z^{\ldots}\widetilde{(2,3)(4,5)}. \end{align*} If $\chi$ is the character of $E((4,1),0)$ then $\chi(gx_{1^3})=\pm 4$, from which the lemma follows also in this case by Lemma \ref{L1a}. \end{proof} \begin{lemma}\label{L36} Let $n\geq 6$ and $G\in\{{\sf S}_n,{\sf A}_n\}$. Assume that $p\geq 5$ and $\lambda\in{\mathscr {P}}_p(n)\setminus{\mathscr {H}}_p(n)$ with $h(\lambda),h(\lambda^{\tt M})\geq 3$ or that $p=3$ and $\lambda\in{\mathscr {P}}_3(n)$ with $h(\lambda),h(\lambda^{\tt M})\geq 4$. Let $V$ be an irreducible $FG$-module indexed by $\lambda$. Then there exists a non-zero $\psi\in\mathrm{Hom}_G(S_{1^5},\mathrm{End}_F(V))$. If $p\not=5$ then $\psi$ extends to $\phi\in\mathrm{Hom}_G(M_{1^5},\mathrm{End}_F(V))$. \end{lemma} \begin{proof} If $p\geq 5$ and $n\geq 9$ then by Lemma \ref{L34} there exists $\mu\in{\mathscr {P}}_p(9)\setminus{\mathscr {H}}_p(9)$ with $h(\mu),h(\mu^{\tt M})\geq 3$ such that $D^\mu$ is a composition factor of $D^\lambda{\downarrow}_{{\sf S}_9}$.
It can then be easily checked using Lemma \ref{Lemma39} and decomposition matrices that if $p\geq 7$ then $D^{(3,2,1)}$ is a composition factor of $D^\lambda{\downarrow}_{{\sf S}_6}$, while if $p=5$ then $n\geq 7$ and $E^{(4,2,1)}$ is a composition factor of $D^\lambda{\downarrow}_{{\sf A}_7}$. If $p=3$ then $n\geq 8$ and by \cite[Lemma 4.13]{m3}, $D^{(4,2,1^2)}$ is a composition factor of $D^\lambda{\downarrow}_{{\sf S}_8}$. Consider $x_{1^5}$ and $C_{5}^\pm$ upon projection to ${\sf A}_n$. If $p\geq 7$ let $g:=(1,2,3,4,5)$. Then, up to exchange of $C_{5}^\pm$, we have that the number of elements of $C_{5}^{\pm}g$ in each conjugacy class of ${\sf S}_6$ is as follows: \[\begin{array}{l|c|c|c|c|c|c|c} \text{cycle type}&(1^6)&(2^2,1^2)&(3,1^3)&(3^2)&(4,2)&(5,1)&\text{others}\\ \hline C_{5}^+g&1&10&5&5&20&31&0\\ C_{5}^-g&0&10&10&10&20&22&0. \end{array}\] If $\chi$ is the character of $D^{(3,2,1)}$ it then follows that $\chi(x_{1^5}g)=\pm 45$. If $p=5$ let $g:=(2,7,4)(3,6,5)$. Then, up to exchange of $C_{5}^\pm$, we have that the number of elements of $C_{5}^{\pm}g$ in each conjugacy class of ${\sf S}_7$ is as follows: \[\begin{array}{l|c|c|c|c|c|c|c|c} \hspace{-0.5pt}\text{cycle type}\hspace{-0.5pt}&\hspace{-0.5pt}(2^2,1^3)\hspace{-0.5pt}&\hspace{-0.5pt}(3,1^4)\hspace{-0.5pt}&\hspace{-0.5pt}(3,2^2)\hspace{-0.5pt}&\hspace{-0.5pt}(3^2,1)\hspace{-0.5pt}&\hspace{-0.5pt}(4,2,1)\hspace{-0.5pt}&\hspace{-0.5pt}(5,1^2)\hspace{-0.5pt}&\hspace{-0.5pt}(7)\hspace{-0.5pt}&\hspace{-0.5pt}\text{others}\hspace{-0.5pt}\\ \hline C_{5}^+g&0&3&6&3&27&9&24&0\\ C_{5}^-g&3&0&12&6&9&18&24&0. \end{array}\] If $\chi$ is the character of $E^{(4,2,1)}$ it then follows that $\chi(x_{1^5}g)=\pm 9$. If $p=3$ and $h(\lambda),h(\lambda^{\tt M})\geq 4$ let $g=(1,2,3,4,5)(6,7,8)$. 
Then, up to exchange of $C_{5}^\pm$, we have that the number of elements of $C_{5}^{\pm}g$ in each conjugacy class of ${\sf S}_8$ is as follows: \[\begin{array}{l|c|c|c|c|c} \text{cycle type}&(3,1^5)&(3,2^2,1)&(3^2,1^2)&(4,2,1^2)&(4^2)\\ \hline C_{5}^+g&0&5&5&5&10\\ C_{5}^-g&1&0&5&10&5 \end{array}\] \[\begin{array}{l|c|c|c|c|c} \text{cycle type}&(5,1^3)&(5,3)&(6,2)&(7,1)&\text{others}\\ \hline C_{5}^+g&5&12&10&20&0\\ C_{5}^-g&0&11&15&25&0. \end{array}\] If $\chi$ is the character of $D^{(4,2,1^2)}$ it can be easily checked that $\chi(x_{1^5}g)=\pm 5$. For ${\sf S}_n$ the lemma then follows. For ${\sf A}_n$ it follows similarly to the proof of Lemma \ref{LM3}. \end{proof} \begin{lemma}\label{L14} Let $p\geq 3$, $n\geq 6$, $G\in\{\widetilde{\sf S}_n,\widetilde {\sf A}_n\}$ and $\lambda\in{\mathscr {RP}}_p(n)\setminus\{\beta_n\}$ with $\lambda_1\geq 5$. If $V$ is an irreducible spin representation of $G$ indexed by $\lambda$, then there exists a non-zero $\psi\in\mathrm{Hom}_G(S_{1^5},\mathrm{End}_F(V))$. If $p\not=5$ then $\psi$ extends to $\phi\in\mathrm{Hom}_G(M_{1^5},\mathrm{End}_F(V))$. \end{lemma} \begin{proof} If $p\geq 7$ and $n\geq 11$ then by Lemma \ref{L2.4} there exists $\mu\in{\mathscr {RP}}_p(11)\setminus\{\beta_{11}\}$ such that $D(\mu)$ is a composition factor of $D(\lambda){\downarrow}_{\widetilde{\sf S}_{11}}$. It can then be easily checked using Lemma \ref{Lemma39s} that $D(\lambda){\downarrow}_{\widetilde{\sf S}_6}$ has a composition factor $D((5,1),0)$. Let $g=\widetilde{(1,2,3,4,5)}$.
Up to exchange of $C_{5}^\pm$, we have that the number of elements of $C_{5}^\pm g$ in each conjugacy class of $\widetilde {\sf S}_6$ is as follows: \[\begin{array}{l|c|c|c|c|c|c|c|c|c} \hspace{-0.2pt}\text{cycle type}\hspace{-0.2pt}&\hspace{-0.2pt}(1^6)\hspace{-0.2pt}&\hspace{-0.2pt}(1^6)\hspace{-0.2pt}&\hspace{-0.2pt}(3,1^3)\hspace{-0.2pt}&\hspace{-0.2pt}(3,1^3)\hspace{-0.2pt}&\hspace{-0.2pt}(3^2)\hspace{-0.2pt}&\hspace{-0.2pt}(3^2)\hspace{-0.2pt}&\hspace{-0.2pt}(5,1)\hspace{-0.2pt}&\hspace{-0.2pt}(5,1)\hspace{-0.2pt}&\hspace{-0.2pt}\text{others}\hspace{-0.2pt}\\ \hspace{-0.2pt}\text{order of el.}\hspace{-0.2pt}&1&2&3&6&3&6&5&10&\\ \hline C_{5}^+g&0&0&5&5&5&5&2&20&30\\ C_{5}^-g&1&0&0&5&0&5&11&20&30. \end{array}\] Let $\chi$ be the character of $D((5,1),0)$. Then $\chi(x_{1^5}g)=\pm 45\not=0$. Assume next that $p=3$, or that $p=5$ and $\lambda_1\geq 6$. Then $n\geq 7$. If $p=3$ then $E((5,2),0)$ is a composition factor of $V{\downarrow}_{\widetilde {\sf A}_7}$ by Lemma \ref{Lemma39s}, by repeatedly removing the bottom normal node for which the obtained partition is in ${\mathscr {RP}}_p(m)$. If $p=5$ and $\lambda_1\geq 6$ then similarly $E((6,1),0)$ is a composition factor of $V{\downarrow}_{\widetilde {\sf A}_7}$. Let $g=\widetilde{(2,3)(4,5,6,7)}$. Up to exchange of $C_{5}^\pm$ and choice of $g$, we have that the number of elements of $C_{5}^\pm g$ in each conjugacy class of $\widetilde {\sf S}_7$ is as follows: \[\begin{array}{l|c|c|c|c|c|c} \text{cycle type}&(1^7)&(1^7)&(3,1^4)&(3,1^4)&(3^2,1)&(3^2,1)\\ \text{order of el.}&1&2&3&6&3&6\\ \hline C_{5}^+g&0&0&1&0&5&4\\ C_{5}^-g&0&0&0&1&4&5 \end{array}\] \[\begin{array}{l|c|c|c|c|c} \text{cycle type}&(5,1^2)&(5,1^2)&(7)&(7)&\text{others}\\ \text{order of el.}&5&10&7&14&\\ \hline C_{5}^+g&6&5&14&10&27\\ C_{5}^-g&5&6&10&14&27. \end{array}\] If $p=3$ and $\chi$ is the character of $E((5,2),0)$ then $\chi(x_{1^5}g)=\pm 10$. If $p=5$ and $\chi$ is the character of $E((6,1),0)$ then $\chi(x_{1^5}g)=\pm 18$.
Last assume that $p=5$ and $\lambda_1=5$. Then $n\geq 8$. If $n\geq 11$ and $D(\lambda){\downarrow}_{\widetilde{\sf S}_{11}}$ has a composition factor $D(\mu)$ with $\mu_1\geq 6$ we can apply the previous paragraph. So we may assume this is not the case. Then by Lemma \ref{L2.4}, if $n\geq 11$ then $D(\lambda){\downarrow}_{\widetilde{\sf S}_{11}}$ has a composition factor $D((5,3,2,1))$ or $D((5,4,2))$. It can then be checked (also when $n\leq 10$) that $D((5,2,1),0)$ is a composition factor of $D(\lambda){\downarrow}_{\widetilde{\sf S}_8}$. Let $g:=\widetilde{(2,3)(4,5,7)(6,8)}$. Up to exchange of $C_{5}^\pm$ and choice of $g$, we have that the number of elements of $C_{5}^\pm g$ in each conjugacy class of $\widetilde {\sf S}_8$ is as follows: \[\begin{array}{l|c|c|c|c|c|c} \text{cycle type}&(1^8)&(1^8)&(3,1^5)&(3,1^5)&(3^2,1^2)&(3^2,1^2)\\ \text{order of el.}&1&2&3&6&3&6\\ \hline C_{5}^+g&0&0&0&0&2&0\\ C_{5}^-g&0&0&0&0&0&2 \end{array}\] \[\begin{array}{l|c|c|c|c|c|c|c} \text{cycle type}&(5,1^3)&(5,1^3)&(5,3)&(5,3)&(7,1)&(7,1)&\text{others}\\ \text{order of el.}&5&10&15&30&7&14&\\ \hline C_{5}^+g&0&2&2&8&10&12&36\\ C_{5}^-g&2&0&8&2&12&10&36. \end{array}\] If $\chi$ is the character of $D((5,2,1),0)$ then $\chi(x_{1^5}g)=\pm 4$ (using decomposition matrices and Lemma \ref{L54} it can be checked that $D((5,2,1),0)$ is the reduction modulo 5 of either module indexed by $(5,2,1)$ in characteristic 0). The lemma then follows for $\widetilde{\sf S}_n$. For $\widetilde {\sf A}_n$ it follows similarly to the proof of Lemma \ref{LM3}. \end{proof} In the next section we will study the structure of certain permutation modules. In \S\ref{s2r} to \S\ref{sbs} we will then study in more detail most of the classes of modules for which some of the results in this section do not apply, and obtain similar results on the endomorphism rings of those modules. These results will then be used in \S\ref{snat} to \S\ref{sbssbs} to study tensor products of certain special classes of modules.
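The following consistency check on the tables used in this section is not part of the arguments, but may help in verifying them; it uses only the fact, which can be read off from any single row, that $|C_{5}^{\pm}|=72$ (consistent, for instance, with $C_{5}^{\pm}$ being the two ${\sf A}_6$-classes of $5$-cycles, as the $5$-cycles in ${\sf S}_6$ form a class of $6\cdot 4!=144$ elements which splits in ${\sf A}_6$ into two classes of $72$). Each row of a table distributes the $72$ elements of the coset $C_{5}^{\pm}g$ among conjugacy classes, so its entries must sum to $72$. For example, for the first table in the proof of Lemma \ref{L36},
\[1+10+5+5+20+31=72=0+10+10+10+20+22,\]
and the same total is obtained for every row of every $C_{5}^{\pm}$ table in this section.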
\section{Permutation modules}\label{s4} In order to extend the results obtained in the previous section to some of the classes of modules which were not considered, we will need to study permutation modules in more detail and then study branching of the relevant modules in detail. We start here by considering the structure of certain permutation modules. The following three lemmas on the structure of $M^\lambda$ for certain two-row partitions $\lambda$ follow easily from \cite[17.17,24.15]{JamesBook} and \cite[6.1.21,2.7.41]{jk}. \begin{lemma}\label{Mk} Let $1\leq k<p$. Then $M_k\sim S_k|M_{k-1}$. \end{lemma} \begin{lemma}\label{M1} Let $p\geq 3$ and $n\geq 2$. If $p\nmid n$ then $M_1\cong D_0\oplus D_1$, while if $p\mid n$ then $M_1\cong D_0|D_1|D_0$. \end{lemma} \begin{lemma}\label{L160817_0}\label{L160817_1}\label{L160817_2} Let $p=3$ and $n\geq 4$. Then \[M_2\cong\left\{\begin{array}{ll} M_1\oplus D_2,&n\equiv 0\!\mod 3,\\ D_1\oplus (D_0|D_2|D_0),&n\equiv 1\!\mod 3,\\ D_0\oplus (D_1|D_2|D_1),&n\equiv 2\!\mod 3. \end{array}\right.\] \end{lemma} We will also need information about the structure of certain permutation modules corresponding to subgroups ${\sf S}_{n-k}$. \begin{lemma}\label{L17} Let $p\geq 3$ and $n\not\equiv 0\!\mod p$. If $n\geq 2$ then \[M_1\cong D_1\oplus M_0.\] If $n\geq 4$ then \[M_{1^2}\oplus M_0\cong D_{1^2}\oplus M_2\oplus M_1.\] If $p\geq 5$ and $n\geq 6$ then \[M_{1^3}\oplus M_3\oplus M_2\oplus M_1\cong D_{1^3}\oplus M_{2,1}^{\oplus 2}\oplus M_{1^2}\oplus M_0.\] \end{lemma} \begin{proof} From Lemma \ref{LH} we have that in each of the above cases $D_{1^k}\cong S_{1^k}\subseteq M_{1^k}$. Since $[M_{1^k}:D_{1^k}]=1$ and $M_{1^k}$ is self-dual, it follows that $D_{1^k}\cong S_{1^k}$ is a direct summand of $M_{1^k}$. The lemma then follows by comparing composition factors (for example using Specht filtrations) and Lemma \ref{LYoung}, since if $\lambda\unrhd (n-k,1^k)$ and $k<p$ then $\lambda\in{\mathscr {P}}_p(n)$.
\end{proof} \begin{lemma}\label{L16} Let $p\geq 3$ and $n\equiv 0\!\mod p$. If $n\geq 2$ then \[M_1\cong Y_1\] and if $n\geq 4$ \[M_{1^2}\cong M_2\oplus Y_2.\] If $p\geq 5$ and $n\geq 6$ then \[M_{1^3}\oplus M_3\cong M_{2,1}^{\oplus 2}\oplus Y_3\] and if $n\geq 8$ \[M_{1^4}\oplus M_{2^2}\oplus M_{3,1}^{\oplus 2}\cong M_{2,1^2}^{\oplus 2}\oplus M_4\oplus Y_4.\] If $p=3$ and $n\geq 6$ then \[M_{1^3}\oplus M_1\cong M_{2,1}\oplus M_{1^2}\oplus Y_3'.\] In each of the above cases $Y_k$ or $Y_k'$ is indecomposable with simple head and socle isomorphic to $D_{1^{k-1}}$ and \begin{align*} Y_1&\cong \overbrace{D_0|D_1}^{S_1}|\overbrace{D_0}^{S_0},\\ Y_2&\sim \overbrace{D_1|D_{1^2}}^{S_{1^2}}|\overbrace{D_0|D_1}^{S_1},\\ Y_3&\sim \overbrace{D_{1^2}|D_{1^3}}^{S_{1^3}}|\overbrace{D_1|D_{1^2}}^{S_{1^2}},\\ Y_3'&\sim S_{1^3}|S_{2,1}|S_{1^2},\\ Y_4&\sim \overbrace{D_{1^3}|D_{1^4}}^{S_{1^4}}|\overbrace{D_{1^2}|D_{1^3}}^{S_{1^3}}. \end{align*} \end{lemma} \begin{proof} Note that $M_{1^k}=M^{(n-k,1^{k-1})}{\uparrow}^{{\sf S}_n}$. In particular in each of the above cases since $(n-k,1^{k-1})\in{\mathscr {P}}_p(n-1)$ from Lemmas \ref{Lemma45} and \ref{LH} and self-duality of $M^{(n-k,1^{k-1})}$ we have that $D^{(n-k,1^{k-1})}\cong S^{(n-k,1^{k-1})}$ and that $e_{-k}D^{(n-k,1^{k-1})}{\uparrow}^{{\sf S}_n}$ is a direct summand of $M_{1^k}$. Let $Y_k$ or $Y_k'$ be this direct summand. Then $Y_k$ or $Y_k'$ has simple head and socle isomorphic to $D_{1^{k-1}}$ by Lemma \ref{Lemma39} and it has the right Specht filtration by \cite[Corollary 17.14]{JamesBook} and block decomposition. Structure of hook Specht modules can be obtained by Lemma \ref{LH}. The lemma then follows by comparing composition factors (for example using Specht filtrations) and Lemma \ref{LYoung}, since $\lambda\in{\mathscr {P}}_p(n)$ if $\lambda\rhd (n-k,1^k)$ and $k\leq p$. 
\end{proof} \section{More on endomorphism rings}\label{s5} In this section we study branching for certain classes of modules in order to extend in many cases results from \S\ref{shr} to families of modules which were not considered there. We divide this section according to different classes of modules. \subsection{Partitions with two or three rows}\label{s2r} \begin{lemma}\label{L7a} Let $p=3$, $n\geq 7$, $G\in\{{\sf S}_n,{\sf A}_n\}$ and $\lambda=(n-2,2)$. Let $V$ be an irreducible $FG$-module indexed by $\lambda$. If $n\not\equiv 2\!\mod 3$ then there exists $\psi\in\mathrm{Hom}_G(M_{3},\mathrm{End}_F(V))$ which does not vanish on $S_{3}$. \end{lemma} \begin{proof} By \cite[Lemma 1.8]{ks2}, $\lambda\not=\lambda^{\tt M}$, so $V\cong D^\lambda=D_2$ or $E^\lambda$. So it is enough to prove the lemma for ${\sf S}_n$. From \cite[Lemma 6.5]{m3} it is enough to prove that \[\dim\mathrm{End}_{{\sf S}_{n-3,3}}(D_2{\downarrow}_{{\sf S}_{n-3,3}})>\dim\mathrm{End}_{{\sf S}_{n-2,2}}(D_2{\downarrow}_{{\sf S}_{n-2,2}}).\] Note that the assumption on $n$ is equivalent to $(n-2,2)$ not being a JS-partition. If the two removable nodes have different residue this holds by \cite[Lemma 6.7]{m3}. So we may assume that the removable nodes have the same residue, in which case $n\equiv 0\!\mod 3$.
From the Mackey induction-reduction theorem we have that \begin{align*} M_1{\downarrow}_{{\sf S}_{n-2,2}}&\cong 1{\uparrow}_{{\sf S}_{n-2,1^2}}^{{\sf S}_{n-2,2}}\oplus 1{\uparrow}_{{\sf S}_{n-3,1,2}}^{{\sf S}_{n-2,2}}\\ &\cong (M^{(n-2)}\boxtimes M^{(1^2)})\oplus (M^{(n-3,1)}\boxtimes M^{(2)}),\\ M_1{\downarrow}_{{\sf S}_{n-3,3}}&\cong 1{\uparrow}_{{\sf S}_{n-3,2,1}}^{{\sf S}_{n-3,3}}\oplus 1{\uparrow}_{{\sf S}_{n-4,1,3}}^{{\sf S}_{n-3,3}}\\ &\cong (M^{(n-3)}\boxtimes M^{(2,1)})\oplus (M^{(n-4,1)}\boxtimes M^{(3)}),\\ M_2{\downarrow}_{{\sf S}_{n-2,2}}&\cong 1\oplus 1{\uparrow}_{{\sf S}_{n-3,1^3}}^{{\sf S}_{n-2,2}}\oplus 1{\uparrow}_{{\sf S}_{n-4,2^2}}^{{\sf S}_{n-2,2}}\\ &\cong (M^{(n-2)}\boxtimes M^{(2)})\oplus (M^{(n-3,1)}\boxtimes M^{(1^2)})\oplus (M^{(n-4,2)}\boxtimes M^{(2)}),\\ M_2{\downarrow}_{{\sf S}_{n-3,3}}&\cong 1{\uparrow}_{{\sf S}_{n-3,2,1}}^{{\sf S}_{n-3,3}}\oplus 1{\uparrow}_{{\sf S}_{n-4,1,2,1}}^{{\sf S}_{n-3,3}}\oplus 1{\uparrow}_{{\sf S}_{n-5,2,3}}^{{\sf S}_{n-3,3}}\\ &\cong (M^{(n-3)}\boxtimes M^{(2,1)})\oplus (M^{(n-4,1)}\boxtimes M^{(2,1)})\oplus (M^{(n-5,2)}\boxtimes M^{(3)}). \end{align*} From Lemma \ref{L160817_0} we have that $M_2\cong M_1\oplus D_2$. Comparing $M_2{\downarrow}_H$ and $M_1{\downarrow}_H$ for $H\in\{{\sf S}_{n-2,2},{\sf S}_{n-3,3}\}$ using Lemmas \ref{M1} and \ref{L160817_2}, it follows that \begin{align*} D_2{\downarrow}_{{\sf S}_{n-2,2}}&\hspace{-1pt}\!\cong\! (D^{(n-3,1)}\!\boxtimes\! (D^{(2)}\!\oplus\! D^{(1^2)})\hspace{-1pt})\!\oplus\! (\hspace{-1pt}(D^{(n-2)}|D^{(n-4,2)}|D^{(n-2)})\!\boxtimes\! D^{(2)}),\\ D_2{\downarrow}_{{\sf S}_{n-3,3}}&\hspace{-1pt}\!\sim\! (D^{(n-5,2)}\!\boxtimes\! D^{(3)})\!\oplus\! (\hspace{-1pt}(D^{(n-3)}|D^{(n-4,1)}|D^{(n-3)})\!\boxtimes\! (D^{(3)}|D^{(2,1)}|D^{(3)})\hspace{-1pt}).
\end{align*} It then follows that \[\dim\mathrm{End}_{{\sf S}_{n-3,3}}(D_2{\downarrow}_{{\sf S}_{n-3,3}})=5>4=\dim\mathrm{End}_{{\sf S}_{n-2,2}}(D_2{\downarrow}_{{\sf S}_{n-2,2}}).\] \end{proof} \begin{lemma}\label{L39} Let $p\geq 3$, $n\geq 6$ with $n\not\equiv 0\!\mod p$ and $\lambda\in{\mathscr {P}}_p(n)\setminus{\mathscr {H}}_p(n)$. If $\lambda$ is not JS then $\overline{D}_0\oplus \overline{D}_1\oplus \overline{D}_3\subseteq\mathrm{End}_F(D^\lambda)$. \end{lemma} \begin{proof} Clearly $\overline{D}_0\cong D_0\subseteq\mathrm{End}_F(D^\lambda)$. From Lemmas \ref{Lemma39} and \ref{l2} we have that \[\dim\mathrm{Hom}_{{\sf S}_n}(M_1,\mathrm{End}_F(D^\lambda))=\dim\mathrm{End}_{{\sf S}_{n-1}}(D^\lambda)\geq 2.\] From Lemma \ref{L17} we then have that $\overline{D}_1\cong D_1\subseteq\mathrm{End}_F(D^\lambda)$. From Lemma \ref{LH} we have that $\overline{D}_3\cong S_{1^3}$. So $\overline{D}_3\subseteq \mathrm{End}_F(D^\lambda)$ by Lemma \ref{L3}. \end{proof} \begin{lemma}\label{L41} Let $p\geq 5$, $n\geq 6$ with $n\equiv 0\!\mod p$ and $\lambda\in{\mathscr {P}}_p(n)$. If $h(\lambda)=2$ and $\lambda_1-\lambda_2\not\equiv 0,-1,-2\!\mod p$ then $\overline{D}_0\oplus \overline{D}_2\subseteq\mathrm{End}_F(D^\lambda)$ or $\overline{D}_0\oplus \overline{D}_1\oplus \overline{D}_3\subseteq\mathrm{End}_F(D^\lambda)$. \end{lemma} \begin{proof} Notice that by Lemma \ref{Lemma39} and considering branching in characteristic 0, \begin{align*} D^\lambda{\downarrow}_{{\sf S}_{n-2,2}}\cong\,& (D^{(\lambda_1-2,\lambda_2)}\boxtimes D^{(2)})\oplus (D^{(\lambda_1-1,\lambda_2-1)}\boxtimes D^{(2)})\\ &\oplus (D^{(\lambda_1-1,\lambda_2-1)}\boxtimes D^{(1^2)})\oplus (D^{(\lambda_1,\lambda_2-2)}\boxtimes D^{(2)})^{\oplus a}, \end{align*} with $a=1$ if $\lambda_2\geq 2$ and $\lambda_1-\lambda_2\not\equiv -3\!\mod p$, and $a=0$ otherwise. From Lemmas \ref{l2} and \ref{L16} it follows that $\overline{D}_1$ or $\overline{D}_2$ is contained in $\mathrm{End}_F(D^\lambda)$.
From Lemmas \ref{LH} and \ref{L3} we also have that $\overline{D}_2$ or $\overline{D}_3$ is contained in $\mathrm{End}_F(D^\lambda)$. The lemma follows. \end{proof} \begin{lemma}\label{L43}\label{L42} Let $p\geq 3$, $n\geq 6$ with $n\equiv 0\!\mod p$ and $\lambda\in{\mathscr {P}}_p(n)$. If $h(\lambda)=2$, $\lambda_1>\lambda_2\geq 2$ and $\lambda_1-\lambda_2\equiv 0$ or $-1\!\mod p$, then $\overline{D}_0\oplus \overline{D}_2\subseteq\mathrm{End}_F(D^\lambda)$ or $\overline{D}_0\oplus \overline{D}_1\oplus \overline{D}_3\subseteq\mathrm{End}_F(D^\lambda)$. \end{lemma} \begin{proof} We will use Lemma \ref{Lemma39} without further reference to it. We may assume that $\overline{D}_2\not\subseteq \mathrm{End}_F(D^\lambda)$. From Lemmas \ref{LH} and \ref{L3} we then have that $\overline{D}_3\subseteq \mathrm{End}_F(D^\lambda)$. So it is enough to prove that $\overline{D}_1\cong D_1\subseteq\mathrm{End}_F(D^\lambda)$. From Lemmas \ref{l2} and \ref{L16} it is enough to prove that \[\dim\mathrm{End}_{{\sf S}_{n-2}}(D^\lambda{\downarrow}_{{\sf S}_{n-2}})-\dim\mathrm{End}_{{\sf S}_{n-2,2}}(D^\lambda{\downarrow}_{{\sf S}_{n-2,2}})\geq 2.\] {\bf Case 1:} $\lambda_1-\lambda_2\equiv 0\!\mod p$. Note that $\lambda_1\equiv\lambda_2\equiv 0\!\mod p$. So \[D^\lambda{\downarrow}_{{\sf S}_{n-2}}\cong D^{(\lambda_1-1,\lambda_2-1)}\oplus D^{(\lambda_1,\lambda_2-2)}\oplus e_{-2}D^{(\lambda_1-1,\lambda_2)},\] with $e_{-2}D^{(\lambda_1-1,\lambda_2)}$ indecomposable with simple head and socle and \[e_{-2}D^{(\lambda_1-1,\lambda_2)}\sim D^{(\lambda_1-1,\lambda_2-1)}|A|D^{(\lambda_1-1,\lambda_2-1)}\] with $[A:D^{(\lambda_1-1,\lambda_2-1)}]=0$. Further $D^{(\lambda_1-1,\lambda_2-1)}\boxtimes D^{(1^2)}$ is a composition factor of $D^\lambda{\downarrow}_{{\sf S}_{n-2,2}}$ with multiplicity 1 by \cite[Lemma 1.11]{bk5}.
So by self-duality of $D^\lambda{\downarrow}_{{\sf S}_{n-2,2}}$ (or block decomposition) it follows that \[D^\lambda{\downarrow}_{{\sf S}_{n-2,2}}\cong (D^{(\lambda_1-1,\lambda_2-1)}\boxtimes D^{(1^2)})\oplus (D^{(\lambda_1,\lambda_2-2)}\boxtimes D^\nu)\oplus B,\] with $\nu\in\{(2),(1^2)\}$ and $B$ indecomposable with simple head and socle isomorphic to $D^{(\lambda_1-1,\lambda_2-1)}\boxtimes D^{(2)}$ and no other such composition factor. It then follows that \[\dim\mathrm{End}_{{\sf S}_{n-2}}(D^\lambda{\downarrow}_{{\sf S}_{n-2}})-\dim\mathrm{End}_{{\sf S}_{n-2,2}}(D^\lambda{\downarrow}_{{\sf S}_{n-2,2}})=6-4=2.\] {\bf Case 2:} $\lambda_1-\lambda_2\equiv -1\!\mod p$. In this case $\lambda_1\equiv (p-1)/2\!\mod p$ and $\lambda_2\equiv (p+1)/2\!\mod p$. So both removable nodes have the same residue. Then by \cite[Lemma 4.2]{m2} we have that \[D^\lambda{\downarrow}_{{\sf S}_{n-2,2}}\cong (D^{(\lambda_1-1,\lambda_2-1)}\boxtimes D^{(2)})\oplus(D^{(\lambda_1-1,\lambda_2-1)}\boxtimes D^{(1^2)})\oplus B\] for a certain module $B$ and then \[D^\lambda{\downarrow}_{{\sf S}_{n-2}}\cong (D^{(\lambda_1-1,\lambda_2-1)})^{\oplus 2}\oplus B'\] with $B'\cong B{\downarrow}_{{\sf S}_{n-2}}$. It then follows that \[\dim\mathrm{End}_{{\sf S}_{n-2}}(D^\lambda{\downarrow}_{{\sf S}_{n-2}})-\dim\mathrm{End}_{{\sf S}_{n-2,2}}(D^\lambda{\downarrow}_{{\sf S}_{n-2,2}})\geq 2.\] \end{proof} \begin{lemma}\label{L44} Let $p\geq 5$, $n\geq 8$, $\lambda=(\lambda_1,\lambda_2)\in{\mathscr {P}}_p(n)$ with $\lambda_2\geq 2$. If $\lambda$ is JS then \[\dim\mathrm{Hom}_{{\sf S}_n}(\widetilde{D}_k,\mathrm{End}_F(D^\lambda))=\left\{\begin{array}{ll} 1,&k\in\{0,3\},\\ 0,&\text{else}. \end{array}\right.\] \end{lemma} \begin{proof} We will use Lemma \ref{Lemma39} throughout the proof without further reference to it.
If $h(\mu)\geq 5$ then $D^\mu\not\subseteq \mathrm{End}_F(D^\lambda)$, since $\lambda$ has only 2 rows (note that any composition factor of $D^\lambda$ is also a composition factor of $S^\lambda\otimes M^\lambda\cong S^\lambda{\downarrow}_{{\sf S}_{\lambda_1,\lambda_2}}{\uparrow}^{{\sf S}_n}$). Since $\lambda\not=\lambda^{\tt M}$ by \cite[Lemma 1.8]{ks2}, we have that $D^{(n)^{\tt M}}\not\subseteq \mathrm{End}_F(D^\lambda)$. So we only need to check the lemma for $k\leq 3$. For $k=0$ the lemma clearly holds. If $\lambda_1>\lambda_2$ then $\lambda_1\geq\lambda_2+3$ since $\lambda$ is JS. If $\lambda_1=\lambda_2$ then $\lambda_2\geq 4$ since $n\geq 8$. It can be checked that $D^{(2)}$, $D^{(1^2)}$, $D^{(2,1)}$, $D^{(3,1)}$ and $D^{(2^2)}$ are composition factors of $D^\lambda{\downarrow}_{{\sf S}_k}$ for the corresponding $k$. Comparing dimensions and multiplicities as well as $D^\lambda{\downarrow}_{{\sf S}_{n-k,k}}{\downarrow}_{{\sf S}_{n-k-1,1,k}}$ and $D^\lambda{\downarrow}_{{\sf S}_{n-k-1,k+1}}{\downarrow}_{{\sf S}_{n-k-1,1,k}}$ we have that if $\lambda_1>\lambda_2$ then \begin{align*} D^\lambda{\downarrow}_{{\sf S}_{n-1}}\cong\,& D^{(\lambda_1-1,\lambda_2)},\\ D^\lambda{\downarrow}_{{\sf S}_{n-2,2}}\cong\,& (D^{(\lambda_1-2,\lambda_2)}\boxtimes D^{(2)})\oplus (D^{(\lambda_1-1,\lambda_2-1)}\boxtimes D^{(1^2)}),\\ D^\lambda{\downarrow}_{{\sf S}_{n-3,3}}\cong\,& (D^{(\lambda_1-3,\lambda_2)}\boxtimes D^{(3)})\oplus (D^{(\lambda_1-2,\lambda_2-1)}\boxtimes D^{(2,1)}),\\ D^\lambda{\downarrow}_{{\sf S}_{n-4,4}}\cong\,& (D^{(\lambda_1-4,\lambda_2)}\boxtimes D^{(4)})^{\oplus \delta_{\lambda_1\geq\lambda_2+4}}\oplus (D^{(\lambda_1-3,\lambda_2-1)}\boxtimes D^{(3,1)})\\ &\oplus (D^{(\lambda_1-2,\lambda_2-2)}\boxtimes D^{(2^2)}), \end{align*} while if $\lambda_1=\lambda_2$ then \begin{align*} D^\lambda{\downarrow}_{{\sf S}_{n-1}}\cong\,& D^{(\lambda_1,\lambda_2-1)},\\ D^\lambda{\downarrow}_{{\sf S}_{n-2,2}}\cong\,& (D^{(\lambda_1,\lambda_2-2)}\boxtimes D^{(2)})\oplus
(D^{(\lambda_1-1,\lambda_2-1)}\boxtimes D^{(1^2)}),\\ D^\lambda{\downarrow}_{{\sf S}_{n-3,3}}\cong\,& (D^{(\lambda_1,\lambda_2-3)}\boxtimes D^{(3)})\oplus (D^{(\lambda_1-1,\lambda_2-2)}\boxtimes D^{(2,1)}),\\ D^\lambda{\downarrow}_{{\sf S}_{n-4,4}}\cong\,& (D^{(\lambda_1,\lambda_2-4)}\boxtimes D^{(4)})^{\oplus \delta_{p=5}}\oplus (D^{(\lambda_1-1,\lambda_2-3)}\boxtimes D^{(3,1)})\\ &\oplus (D^{(\lambda_1-2,\lambda_2-2)}\boxtimes D^{(2^2)}) \end{align*} (in the first case as well as some parts in the second case this also follows from \cite[Lemma 1.11]{bk5} and by comparing multiplicities and dimensions). From Lemma \ref{L3} we have that $\overline{D}_2$ or $\overline{D}_3$ is contained in $\mathrm{End}_F(D^\lambda)$. The lemma then follows from Lemma \ref{L17} or \ref{L16} together with Lemma \ref{l2}. \end{proof} \begin{lemma}\label{L56} Let $p=3$, $n\geq 6$ with $n\equiv 0\!\mod 3$ and $\lambda=(\lambda_1,\lambda_2,\lambda_3)\in{\mathscr {P}}_3(n)$ with $\lambda_1>\lambda_2>\lambda_3\geq 1$. If $\lambda$ is not JS then $\overline{D}_0\oplus \overline{D}_1\oplus \overline{D}_2\subseteq\mathrm{End}_F(D^\lambda)$ or $\overline{D}_0\oplus \overline{D}_1\oplus \overline{D}_3\subseteq\mathrm{End}_F(D^\lambda)$. \end{lemma} \begin{proof} In view of Lemmas \ref{LH} and \ref{L3} we have that $\overline{D}_2$ or $\overline{D}_3$ is contained in $\mathrm{End}_F(D^\lambda)$. Thus it is enough to prove that $\overline{D}_1\cong D_1\subseteq\mathrm{End}_F(D^\lambda)$. By assumption $\lambda_1-\lambda_2\equiv\lambda_2-\lambda_3\!\mod 3$ and we may assume that $\lambda_1-\lambda_2\not\equiv 1\!\mod 3$. If $\lambda_1-\lambda_2\equiv 2\!\mod 3$ then $\lambda$ has 3 normal nodes. So $\dim\mathrm{End}_{{\sf S}_{n-1}}(D^\lambda)=3$ by Lemma \ref{Lemma39}. It then follows from Lemmas \ref{l2} and \ref{L16} that $D_1\subseteq\mathrm{End}_F(D^\lambda)$. So assume now that $\lambda_1-\lambda_2\equiv 0\!\mod 3$.
In this case if $i$ is the residue of $(1,\lambda_1)$ then $\epsilon_i(\lambda)=1$, $\epsilon_{i-1}(\lambda)=1$, $\phi_i(\lambda)\geq 1$ and $\phi_{i-1}(\lambda)\geq 1$. So, by Lemmas \ref{Lemma39} and \ref{Lemma40}, \[D^\lambda\otimes M_1\cong f_ie_iD^\lambda\oplus f_{i-1}e_{i-1}D^\lambda\oplus M\sim (D^\lambda|\ldots|D^\lambda)\oplus (D^\lambda|\ldots|D^\lambda)\oplus M\] for a certain module $M$. It then follows from Lemma \ref{L16} that also in this case $D_1\subseteq\mathrm{End}_F(D^\lambda)$. \end{proof} \subsection{Spin representations}\label{sr} The results obtained in this section will only be used to obtain reduction to tensor products with the natural module and with basic spin for $p=3$. However we prove them in general, since the proofs in the general case are not more complicated or much longer. \begin{lemma}\label{L071218_3} Let $A$ be a superalgebra and $M$ be an $A$-supermodule with $\mathrm{hd}(M)\cong D$ simple of type Q. If $\mathrm{End}_A(M)\simeq\mathrm{End}_A(D)^{[M:D]}$ then $M$ admits an odd involution. \end{lemma} \begin{proof} Note that $\mathrm{End}_A(\mathrm{rad} M)\simeq\mathrm{End}_A(D)^{[M:D]-1}$, since $\mathrm{hd}(M)\cong D$ and $\mathrm{End}_A(M)\simeq\mathrm{End}_A(D)^{[M:D]}$. The lemma then follows since $D$ is of type Q. \end{proof} \begin{lemma}\label{L071218_4} Let $A$ and $B$ be superalgebras, $M$ be an $A$-supermodule and $N$ be a $B$-supermodule. If both $M$ and $N$ admit an odd involution then there exists an $A\otimes B$-supermodule $L$ such that $M\boxtimes N\cong L^{\oplus 2}$. \end{lemma} \begin{proof} As in \cite[Section 2-b]{BK} (in order for the argument to work it is not required that $M$ and $N$ are simple). \end{proof} \begin{lemma}\label{L101218_2} Let $\nu\in{\mathscr {RP}}_p(n-1)$. If $\epsilon_i(\nu)>0$ then the following hold. \begin{enumerate} \item If $D(\widetilde e_i\nu)$ is of type M then $e_iD(\nu){\uparrow}^{\widetilde{\sf S}_{n-2,2}}\cong e_iD(\nu)\boxtimes D((2))=: e_iD(\nu)\circledast D((2))$.
\item If $D(\widetilde e_i\nu)$ is of type Q then $e_iD(\nu){\uparrow}^{\widetilde{\sf S}_{n-2,2}}\cong e_iD(\nu)\boxtimes D((2))\cong (e_iD(\nu)\circledast D((2)))^{\oplus 2}$ for a certain module $e_iD(\nu)\circledast D((2))$. \end{enumerate} Further $e_iD(\nu)\circledast D((2))$ has simple head and socle isomorphic to $D(\widetilde e_i\nu,(2))$ and \[\dim\mathrm{End}_{\widetilde{\sf S}_{n-2,2}}(e_iD(\nu)\circledast D((2)))=\epsilon_i(\nu)\dim\mathrm{End}(D(\widetilde e_i\nu,(2))).\] \end{lemma} \begin{proof} (i) clearly holds. (ii) follows from Lemma \ref{L071218_4}, since by Lemmas \ref{Lemma39s} and \ref{L071218_3} there exists an odd involution for $e_iD(\nu)$. Further for any $\mu\in{\mathscr {RP}}_p(n-2)$ \begin{align*} &\dim\mathrm{Hom}_{\widetilde{\sf S}_{n-2,2}}(e_iD(\nu){\uparrow}^{\widetilde{\sf S}_{n-2,2}},D(\mu,(2)))\\ &=\dim\mathrm{Hom}_{\widetilde{\sf S}_{n-2}}(e_iD(\nu),D(\mu,(2)){\downarrow}_{\widetilde{\sf S}_{n-2}})\\ &=2^{1-a(\mu)}\dim\mathrm{Hom}_{\widetilde{\sf S}_{n-2}}(e_iD(\nu),D(\mu))\\ &=2\delta_{\mu,\widetilde e_i\nu}. \end{align*} Since $D(\widetilde e_i\nu)$ and $D(\widetilde e_i\nu,(2))$ are of different type, it follows that head and socle of $e_iD(\nu)\circledast D((2))$ are isomorphic to $D(\widetilde e_i\nu,(2))$. Last, from Lemma \ref{Lemma39s}, we have that \begin{align*} &\dim\mathrm{End}_{\widetilde{\sf S}_{n-2,2}}(e_iD(\nu)\circledast D((2)))\\ &=2^{-2a(\widetilde e_i\nu)}\dim\mathrm{End}_{\widetilde{\sf S}_{n-2,2}}(e_iD(\nu){\uparrow}^{\widetilde{\sf S}_{n-2,2}})\\ &=2^{-2a(\widetilde e_i\nu)}\dim\mathrm{Hom}_{\widetilde{\sf S}_{n-2}}(e_iD(\nu),e_iD(\nu){\uparrow}^{\widetilde{\sf S}_{n-2,2}}{\downarrow}_{\widetilde{\sf S}_{n-2}})\\ &=2^{1-2a(\widetilde e_i\nu)}\dim\mathrm{End}_{\widetilde{\sf S}_{n-2}}(e_iD(\nu))\\ &=2^{1-a(\widetilde e_i\nu)}\epsilon_i(\nu)\\ &=\epsilon_i(\nu)\dim\mathrm{End}(D(\widetilde e_i\nu,(2))). 
\end{align*} \end{proof} \begin{lemma}\label{L051218_5} Let $n\geq 5$ and $\lambda\in{\mathscr {RP}}_p(n)$. If $\epsilon_0(\lambda)=\epsilon_i(\lambda)=1$ and $\epsilon_j(\lambda)=0$ for $j\not=0,i$ then at least one of $\widetilde e_0\lambda$ and $\widetilde e_i\lambda$ is not JS. \end{lemma} \begin{proof} Notice first that $h(\lambda)\geq 2$. Assume that $\widetilde e_i\lambda$ is JS. Then it is JS(0) by Lemma \ref{L051218_4}. Since $\phi_i(\widetilde e_i\lambda)\geq 1$, it follows from Lemma \ref{L10} that the top addable node of $\lambda$ is the only conormal node of $\widetilde e_i\lambda$ and this node has residue $i$. So the normal nodes of $\lambda$ are on row 1 (of residue $i$) and on row $h(\lambda)$ (of residue 0). It is easy to see that $(1,\lambda_1)$ is normal also in $\widetilde e_0\lambda$ (any removable node in $\lambda$ is also removable in $\widetilde e_0\lambda$ apart from the node $(h(\lambda),1)$ and any addable node in $\widetilde e_0\lambda$ is also addable in $\lambda$ again apart from the node $(h(\lambda),1)$). Since $\widetilde e_0\lambda$ is JS it follows that $\lambda=(n-1,1)$. From $\widetilde e_i\lambda=(n-2,1)$ being JS(0) it follows from \cite[Lemma 3.7]{p} that $\lambda=(3,1)$ or $(p,1)$. The first case contradicts $n\geq 5$ while in the second case both removable nodes have residue 0. \end{proof} \begin{lemma}\label{L12} Let $\lambda\in{\mathscr {RP}}_p(n)$ be JS(0). Then $\phi_0(\lambda)\geq 1$ if and only if $n\equiv 0\!\mod p$. \end{lemma} \begin{proof} Assume first that $\lambda\in\text{JS}(0)$ and $n\equiv 0\!\mod p$. From Lemma \ref{M1} we have that $[D(\lambda)\otimes M^{(n-1,1)}:D(\lambda)]\geq 2$. So from Lemma \ref{L9} it follows that $\phi_0(\lambda)\geq 1$. Assume now that $\lambda\in JS(0)$ and $\phi_0(\lambda)\geq 1$. Then all normal and conormal nodes of $\lambda$ have residue 0 by Lemma \ref{L10}. In particular the top addable node has residue 0. So $\lambda_1\equiv 0\mbox{ or }-1\!\mod p$.
If $\lambda_1\equiv -1\!\mod p$ then from \cite[Lemma 3.7]{p} we have that $\bar\lambda:=(\lambda_1+1,\lambda_1,\lambda_2,\ldots)\in JS(0)$. Since $|\lambda|\equiv|\bar\lambda|\!\mod p$, we may assume that $\lambda_1\equiv 0\!\mod p$. For a residue $i$ define \begin{align*} A_i&:=\{2\leq j\leq h(\lambda)|\mathrm{res}(j,\lambda_j)=i=\mathrm{res}(j-1,\lambda_{j-1})-1\},\\ B_i&:=\{2\leq j\leq h(\lambda)|\mathrm{res}(j,\lambda_j)=i=\mathrm{res}(j-1,\lambda_{j-1})+1\},\\ C_i&:=\{2\leq j\leq h(\lambda)|\mathrm{res}(j,\lambda_j)=i=\mathrm{res}(j-1,\lambda_{j-1})\}. \end{align*} From \cite[Lemma 3.7]{p} we have the following: \begin{enumerate}[-] \item $\cup_i(A_i\cup B_i\cup C_i)=\{2,\ldots,h(\lambda)\}$, \item if $C_i\not=\emptyset$ then $i=0$, \item if $j\in C_0$ then $\lambda_j\equiv 0\!\mod p$ \item if $j\in A_i$ then $\lambda_j\equiv i+1\!\mod p$ \item if $j\in B_{i+1}$ then $\lambda_j\equiv -i-1\!\mod p$. \end{enumerate} Since $\mathrm{res}(1,\lambda_1)=0=\mathrm{res}(h(\lambda),\lambda_{h(\lambda)})$ it then follows that $|A_i|=|B_{i+1}|$ for each $0\leq i<(p-1)/2$. In particular \begin{align*} |\lambda|&=\lambda_1+\sum_{i=0}^{(p-3)/2}\sum_{j\in A_i}\lambda_j+\sum_{i=1}^{(p-1)/2}\sum_{j\in B_i}\lambda_j+\sum_{j\in C_0}\lambda_j\\ &\equiv \sum_{i=0}^{(p-3)/2}(|A_i|(i+1)-|B_{i+1}|(i+1))\equiv 0\!\mod p. \end{align*} \end{proof} \begin{rem} Let $\lambda\in{\mathscr {RP}}_p(n)$ be JS(0) with $\phi_0(\lambda)\geq 1$. Then by Lemma \ref{L10} we have that $\phi_0(\lambda)=2$ and $\phi_i(\lambda)=0$ for $i>0$. From Lemmas \ref{Lef} and \ref{L051218_4} we have that $\widetilde f_0\lambda$ only has normal nodes of residue 0. So it can be seen that the following are equivalent: \begin{enumerate}[-] \item $\lambda$ is JS(0) with $\phi_0(\lambda)\geq 1$, \item $\lambda=\widetilde e_0\mu$ is JS(0) and all normal nodes of $\mu$ have residue 0. \end{enumerate} This holds for example if $p=5$ and $\lambda=(4,3,2,1)=\widetilde e_0(5,3,2,1)$. 
Note that $(5,3,2,1)\not=(5^2,1)=\beta_{11}$. This shows that \cite[Lemma 3.14(i)]{p} is wrong. Since it is unclear where the error is in the proof of \cite[Lemma 3.14]{p} we next give a different proof of \cite[Lemma 3.14(ii)]{p}. \end{rem} \begin{lemma}\label{L13} Let $\lambda\in{\mathscr {RP}}_p(n)$. Then $\lambda\in JS(i)$ and $\widetilde e_i\lambda\in JS(j)$ for some $i,j\not=0$ if and only if $\lambda=\beta_n$ with $n\not\equiv 0,1,2\!\mod p$. \end{lemma} \begin{proof} For $\lambda=\beta_n$ it can be easily checked that $\lambda\in JS(i)$ and $\widetilde e_i\lambda\in JS(j)$ for some $i,j\not=0$ if and only if $n\not\equiv 0,1,2\!\mod p$. So assume that $\lambda\in JS(i)$ and $\widetilde e_i\lambda\in JS(j)$ for some $i,j\not=0$. Notice that the only normal node of $\lambda$ (of $\widetilde{e}_i\lambda$) is the last node on the bottom row, since $\lambda$ ($\widetilde e_i\lambda$) is JS. It then easily follows from $i,j\not=0$ that $h(\lambda)=h(\widetilde e_i\lambda)$, that $3\leq \lambda_{h(\lambda)}<p$ and $\widetilde e_i\lambda=(\lambda_1,\ldots,\lambda_{h(\lambda)-1},\lambda_{h(\lambda)}-1)$. If $p|\lambda_k$ for each $1\leq k<h(\lambda)$ then $\lambda=(p^{h(\lambda)-1},\lambda_{h(\lambda)})$ and so $\lambda=\beta_n$ and $n\not\equiv 0,1,2\!\mod p$. So assume that this is not the case and let $k<h(\lambda)$ be maximal such that $p\nmid \lambda_k$. Notice that $\lambda=(\lambda_1,\ldots,\lambda_k,p^{h(\lambda)-k-1},\lambda_{h(\lambda)})$. Since $\lambda$ is JS it can be checked that $\mathrm{res}(k,\lambda_k)=\mathrm{res}(h(\lambda),\lambda_{h(\lambda)}+1)$. On the other hand, since $\widetilde e_i\lambda=(\lambda_1,\ldots,\lambda_k,p^{h(\lambda)-k-1},\lambda_{h(\lambda)}-1)$ is also JS, we have that $\mathrm{res}(k,\lambda_k)=\mathrm{res}(h(\lambda),\lambda_{h(\lambda)})$.
In particular $\mathrm{res}(h(\lambda),\lambda_{h(\lambda)})=\mathrm{res}(h(\lambda),\lambda_{h(\lambda)}+1)$ and so $p|\lambda_{h(\lambda)}$, contradicting $\lambda\in{\mathscr {RP}}_p(n)$. \end{proof} \begin{lemma}\label{L081218} Let $n\geq 5$ and $\lambda\in{\mathscr {RP}}_p(n)\setminus\{\beta_n\}$. Then \[\dim\mathrm{End}_{\widetilde{\sf S}_{n-2,2}}(D(\lambda){\downarrow}_{\widetilde{\sf S}_{n-2,2}})>\dim\mathrm{End}_{\widetilde{\sf S}_{n-1}}(D(\lambda){\downarrow}_{\widetilde{\sf S}_{n-1}})+\dim\mathrm{End}_{\widetilde{\sf S}_{n}}(D(\lambda))\] unless one of the following holds: \begin{enumerate}[-] \item $\lambda$ is JS(1), $p=3$ and $\epsilon_0(\widetilde e_1\lambda)=3$, \item $\lambda$ is JS(1), $p>3$, $\epsilon_0(\widetilde e_1\lambda)=1$ and $\epsilon_2(\widetilde e_1\lambda)=1$, \item $\epsilon_0(\lambda)=2$, $\epsilon_i(\lambda)=0$ for $i>0$ and $\widetilde e_0\lambda\in\text{JS}(0)$, \item $\lambda$ is JS(0). \end{enumerate} \end{lemma} \begin{proof} Throughout the proof let $\epsilon_i:=\epsilon_i(\lambda)$ and for $\alpha\in{\mathscr {P}}(n)$ let $d_\alpha:=\dim\mathrm{End}_{\widetilde{\sf S}_\alpha}(D(\lambda){\downarrow}_{\widetilde{\sf S}_\alpha})$. We will use Lemma \ref{Lemma39s} without further referring to it. Note that \[D(\lambda){\downarrow}_{\widetilde{\sf S}_{n-2,2}}\cong \bigoplus_i E_i\oplus\bigoplus_{i<j}E_{i,j},\] with $E_i{\downarrow}_{\widetilde{\sf S}_{n-2}}\cong (\mathrm{Res}_i)^2D(\lambda)$ and $E_{i,j}{\downarrow}_{\widetilde{\sf S}_{n-2}}\cong \mathrm{Res}_i\mathrm{Res}_jD(\lambda)\oplus\mathrm{Res}_j\mathrm{Res}_iD(\lambda)$. 
Further, since $M^{(n-2)}\boxtimes M^{(1^2)}\cong(\mathbf{1}\boxtimes\mathbf{1})\oplus(\mathbf{1}\boxtimes\mathbf{\mathrm{sgn}})$, we have that $E_i{\downarrow}_{{\sf S}_{n-2}}{\uparrow}^{\widetilde{\sf S}_{n-2,2}}\cong E_i\oplus E_i'$ with $E_i'\cong E_i\otimes (\mathbf{1}\boxtimes \mathbf{\mathrm{sgn}})$ and $E_{i,j}{\downarrow}_{\widetilde{\sf S}_{n-2}}{\uparrow}^{\widetilde{\sf S}_{n-2,2}}\cong E_{i,j}\oplus E_{i,j}'$ with $E_{i,j}'\cong E_{i,j}\otimes (\mathbf{1}\boxtimes \mathbf{\mathrm{sgn}})$. In particular $\dim\mathrm{End}_{\widetilde{\sf S}_{n-2,2}}(E_i)=\dim\mathrm{End}_{\widetilde{\sf S}_{n-2,2}}(E_i')$ and $\dim\mathrm{End}_{\widetilde{\sf S}_{n-2,2}}(E_{i,j})=\dim\mathrm{End}_{\widetilde{\sf S}_{n-2,2}}(E_{i,j}')$. Consider first $E_i$. If $\epsilon_i>0$ then \[(e_iD(\widetilde e_i\lambda))^{\oplus 2+2\delta_{i>0}}\subseteq (e_i^{(2)}D(\lambda))^{\oplus 2+2\delta_{i>0}}\subseteq E_i{\downarrow}_{\widetilde{\sf S}_{n-2}}.\] In particular $A=(e_iD(\widetilde e_i\lambda)\circledast D((2)))^{\oplus (2+2\delta_{i>0})(1+a(\lambda))}\subseteq E_i\oplus E_i'$. So $(e_iD(\widetilde e_i\lambda)\circledast D((2)))^{\oplus (1+\delta_{i>0})(1+a(\widetilde e_i\lambda))}$ is contained in $E_i$ or $E_i'$ from Lemma \ref{L101218_2} and similarly to \cite[Lemma 3.7]{m2}. Due to self-duality of the modules it then follows that $$\dim\mathrm{End}_{\widetilde{\sf S}_{n-2,2}}(E_i)\geq 2\delta_{\epsilon_i>0}(1+\delta_{i>0})^2(\epsilon_i-1)d_{(n)}.$$ Consider next $E_{i,j}$ with $0<i<j$. Assume that $\epsilon_i,\epsilon_j>0$. Then $(e_iD(\widetilde e_j\lambda)\oplus e_jD(\widetilde e_i\lambda))^{\oplus 2}\subseteq E_{i,j}{\downarrow}_{\widetilde{\sf S}_{n-2}}$. In particular $$(e_iD(\widetilde e_j\lambda)\circledast D((2))\oplus e_jD(\widetilde e_i\lambda)\circledast D((2)))^{\oplus 2+2a(\lambda)}\subseteq E_{i,j}\oplus E_{i,j}'.$$ Let $\{k,l\}=\{i,j\}$ with $\epsilon_k(\widetilde e_l\lambda)\geq\epsilon_l(\widetilde e_k\lambda)$. 
Then one of \[(e_iD(\widetilde e_j\lambda)\circledast D(\hspace{-1pt}(2)\hspace{-1pt})\oplus e_jD(\widetilde e_i\lambda)\circledast D(\hspace{-1pt}(2)\hspace{-1pt})\hspace{-1pt})^{\oplus 1+a(\lambda)}\hspace{8pt}\text{or}\hspace{8pt}(e_kD(\widetilde e_l\lambda)\circledast D(\hspace{-1pt}(2)\hspace{-1pt})\hspace{-1pt})^{\oplus 2+a(\lambda)}\] is contained in $E_{i,j}$ or $E_{i,j}'$. In either case it follows from $\epsilon_a(\widetilde e_b\lambda)\geq \epsilon_a$ (see Lemma \ref{L051218_4}) and from Lemma \ref{L081218_2} that \begin{align*} \dim\mathrm{End}_{\widetilde{\sf S}_{n-2,2}}(E_{i,j})&> 2\delta_{\epsilon_i>0}\delta_{\epsilon_j>0} (\epsilon_i(\widetilde e_j\lambda)+\epsilon_j(\widetilde e_i\lambda))d_{(n)}\\ &\geq 2\delta_{\epsilon_i>0}\delta_{\epsilon_j>0} (\epsilon_i+\epsilon_j)d_{(n)}. \end{align*} Lastly, consider $E_{0,i}$ with $i>0$. Again assume that $\epsilon_0,\epsilon_i>0$. Then $(e_0D(\widetilde e_i\lambda)\oplus e_iD(\widetilde e_0\lambda))^{\oplus 1+a(\lambda)}\subseteq E_{0,i}{\downarrow}_{\widetilde{\sf S}_{n-2}}$. In particular $$(e_0D(\widetilde e_i\lambda)\circledast D((2))\oplus e_iD(\widetilde e_0\lambda)\circledast D((2)))^{\oplus 2}\subseteq E_{0,i}\oplus E_{0,i}'.$$ Similarly to the previous case we obtain \begin{align*} \dim\mathrm{End}_{\widetilde{\sf S}_{n-2,2}}(E_{0,i})&> \delta_{\epsilon_0>0}\delta_{\epsilon_i>0} (\epsilon_0(\widetilde e_i\lambda)+\epsilon_i(\widetilde e_0\lambda))d_{(n)}\\ &\geq \delta_{\epsilon_0>0}\delta_{\epsilon_i>0} (\epsilon_0+\epsilon_i)d_{(n)}.
\end{align*} In particular, if $x=|\{j>0:\epsilon_j>0\}|$, \[d_{(n-2,2)}\geq\left(\delta_{\epsilon_0>0}((2+x)\epsilon_0-2)+\sum_{i>0}\delta_{\epsilon_i>0}((6+2x+\delta_{\epsilon_0>0})\epsilon_i-8)\right)d_{(n)}.\] In view of Lemma \ref{L15} we may assume that \[d_{(n-2,2)}\leq d_{(n-1,1)}+d_{(n)}=(1+\epsilon_0+2\sum_{i>0}\epsilon_i)d_{(n)}.\] It easily follows that $x+\delta_{\epsilon_0>0}\leq 2$ and that we are in one of the following cases: \begin{enumerate}[-] \item $\epsilon_0\leq 3$ and $\epsilon_k=0$ for $k>0$, \item $\epsilon_0\leq 2$, $\epsilon_i=1$ and $\epsilon_k=0$ for $k\not=0,i$ for some $i>0$, \item $\epsilon_i,\epsilon_j=1$ and $\epsilon_k=0$ for $k\not=i,j$ for some $i,j>0$. \end{enumerate} Excluding the cases which are not considered in the lemma, and taking into account the stronger bounds involving $\epsilon_i(\widetilde e_j\lambda)$, the strict inequalities, and the fact that $E_{i,j}\not=0$ if $\epsilon_i>0$ and $\epsilon_j(\widetilde e_i\lambda)>0$, we may assume that we are in one of the following cases: \begin{enumerate} \item[(a)] $\epsilon_0=3$, $\epsilon_k=0$ and $\epsilon_k(\widetilde e_0\lambda)>0$ for $k>0$, \item[(b)] $\epsilon_0=2$, $\epsilon_k=0$ for $k>0$ and there exists $i>0$ with $\epsilon_i(\widetilde e_0\lambda)>0$, \item[(c)] $\lambda$ is $\text{JS}(i)$ with $i>1$, \item[(d)] $p=3$, $\lambda$ is $\text{JS}(1)$ and $\epsilon_0(\widetilde e_1\lambda)\not=3$, \item[(e)] $p>3$, $\lambda$ is $\text{JS}(1)$ and $(\epsilon_0(\widetilde e_1\lambda),\epsilon_2(\widetilde e_1\lambda))\not=(1,1)$, \item[(f)] $\epsilon_0,\epsilon_i=1$, $\epsilon_k=0$ for $k\not=0,i$ and $\epsilon_i(\widetilde e_0\lambda)+\epsilon_0(\widetilde e_i\lambda)\leq 3$ for some $i>0$, \item[(g)] $\epsilon_i,\epsilon_j=1$, $\epsilon_k=0$ for $k\not=i,j$ and $\widetilde e_i\lambda$ and $\widetilde e_j\lambda$ are JS for some $i,j>0$.
\end{enumerate} {\bf Case (a).} In this case $D(\lambda){\downarrow}_{\widetilde{\sf S}_{n-2}}\cong e_0^{(2)}D(\lambda)^{\oplus 2}$ and \[[e_0^{(2)}D(\lambda):D(\widetilde e_0^2\lambda)]=3>2=[e_0D(\widetilde e_0\lambda):D(\widetilde e_0^2\lambda)].\] It can then be checked that $(e_0D(\widetilde e_0\lambda)\circledast D((2)))^{\oplus 1+a(\lambda)}$ is strictly contained in $E_0$ or $E_0'$. Thus \begin{align*} \dim\mathrm{End}_{\widetilde{\sf S}_{n-2,2}}(D(\lambda){\downarrow}_{\widetilde{\sf S}_{n-2,2}})&\!>\!(1+a(\lambda))^2(\epsilon_0-1)\dim\mathrm{End}_{\widetilde{\sf S}_{n-2,2}}(D(\widetilde e_0^2\lambda,(2)))\\ &\!=\!4\dim\mathrm{End}_{\widetilde{\sf S}_{n}}(D(\lambda)). \end{align*} So also in this case the lemma holds. {\bf Case (b).} In this case $\dim\mathrm{End}_{\widetilde{\sf S}_{n-2,2}}(E_0)\geq 2\dim\mathrm{End}_{\widetilde{\sf S}_n}(D(\lambda))$, so it is enough to prove that $\dim\mathrm{End}_{\widetilde{\sf S}_{n-2,2}}(E_{0,i})>\dim\mathrm{End}_{\widetilde{\sf S}_n}(D(\lambda))$ by Lemma \ref{L10}. This follows from $E_{0,i}$ not being zero or simple as supermodule (since $\epsilon_0(\lambda)=2$ and $\epsilon_i(\widetilde e_0\lambda)>0$) and since its composition factors are of the same type as $D(\lambda)$. {\bf Case (c).} Using an argument similar to the above we have (letting $E_{i,j}=E_{j,i}$ and $E_{i,j}'=E_{j,i}'$ for $i>j$) that $(e_jD(\widetilde e_i\lambda)\circledast D((2)))^{\oplus 1+a(\lambda)}$ is contained in $E_{i,j}$ or $E_{i,j}'$ for each $j\not=i$ with $j> 0$. From Lemma \ref{L13} and \cite[Lemma 3.8]{p} we have that $\sum_{j\not=i}\epsilon_j(\widetilde e_i\lambda)\geq 2$. From \cite[Lemma 20.2.3]{KBook} we have that $\epsilon_0(\widetilde e_i\lambda)=0$. The lemma then follows. {\bf Case (d).} Notice that $e_0D(\widetilde e_1\lambda)\circledast D((2))$ is contained in $E_{0,1}$ or $E_{0,1}'$. Since $\lambda\not=\beta_n$ it can be easily checked that $\lambda$ ends with $(4,3^b,2)$ with $b\geq 0$.
It can then be easily checked that $\epsilon_0(\widetilde e_1\lambda)\geq 3$. So in this case $\epsilon_0(\widetilde e_1\lambda)\geq 4$, from which the lemma follows. {\bf Case (e).} From \cite[Lemma 20.2.3]{KBook} and since $\lambda\in\text{JS}(1)$ we have that $\epsilon_k(\widetilde e_1\lambda)=0$ for $k\not=0,2$. If $\lambda_{h(\lambda)}=p-1$ then the bottom removable node of $\widetilde e_1\lambda$ is $2$-normal (since $p>3$). If $\lambda_{h(\lambda)}=2$ let $k<h(\lambda)$ be maximal with $p\nmid \lambda_k$. Note that $k$ exists since $\lambda\not=\beta_n$. From $\lambda\in\text{JS}(1)$ it follows that $\mathrm{res}(k,\lambda_k)=2$ and by maximality of $k$ we have that $(k,\lambda_k)$ is normal for $\widetilde e_1\lambda$. In particular $\epsilon_2(\widetilde e_1\lambda)\geq 1$. We have that $(e_0D(\widetilde e_1\lambda)\circledast D((2)))^{\oplus 2}$ is contained in $E_{0,1}$ or $E_{0,1}'$ and $(e_jD(\widetilde e_i\lambda)\circledast D((2)))^{\oplus 1+a(\lambda)}$ is contained in $E_{i,j}$ or $E_{i,j}'$ for each $j\not=i$ with $j> 0$. Since $\widetilde e_1\lambda$ is not JS by Lemma \ref{L13} and \cite[Lemma 3.8]{p}, we have $\epsilon_2(\widetilde e_1\lambda)\geq 2$ or $\epsilon_0(\widetilde e_1\lambda),\epsilon_2(\widetilde e_1\lambda)\geq 1$, from which the lemma follows. {\bf Case (f).} From Lemma \ref{L051218_5} we have that $\widetilde e_0(\lambda)$ and $\widetilde e_i(\lambda)$ are not both JS. Since $\epsilon_i(\widetilde e_0\lambda)+\epsilon_0(\widetilde e_i\lambda)\leq 3$, we have by Lemmas \ref{Lemma39s} and \ref{L081218_2} that $\widetilde e_i\widetilde e_0\lambda=\widetilde e_0\widetilde e_i\lambda$.
If $\epsilon_i(\widetilde e_0\lambda)+\epsilon_0(\widetilde e_i\lambda)=3$ the lemma follows from $(e_0D(\widetilde e_i\lambda)\circledast D((2))\oplus e_iD(\widetilde e_0\lambda)\circledast D((2)))$ being contained in $E_{0,i}$ or $E_{0,i}'$ or $(e_kD(\widetilde e_l\lambda)\circledast D((2)))^{\oplus 2}$ being contained in one of $E_{0,i}$ or $E_{0,i}'$ (with $\{k,l\}=\{0,i\}$ such that $\epsilon_k(\widetilde e_l\lambda)=2$). If $\epsilon_i(\widetilde e_0\lambda)+\epsilon_0(\widetilde e_i\lambda)=2$ then $E_{0,i}$ is not the only non-zero block component of $D(\lambda){\downarrow}_{\widetilde{\sf S}_{n-2,2}}$, since $\widetilde e_0(\lambda)$ and $\widetilde e_i(\lambda)$ are not both JS. From $\widetilde e_i\widetilde e_0\lambda=\widetilde e_0\widetilde e_i\lambda$ we have that $(D(\widetilde e_i\widetilde e_0\lambda)\circledast D((2)))^{\oplus 2}\subseteq E_{0,i}$ or $E_{0,i}'$, from which the lemma then follows. {\bf Case (g).} In this case from Lemma \ref{L081218_2} we have that $\widetilde e_i\widetilde e_j\lambda=\widetilde e_j\widetilde e_i\lambda$ and then $D(\widetilde e_i\widetilde e_j\lambda)^{\oplus 4}$ is contained in $D(\lambda){\downarrow}_{\widetilde{\sf S}_{n-2}}$. So $(D(\widetilde e_i\widetilde e_j\lambda)\circledast D((2)))^{\oplus 2+2a(\lambda)}$ is contained in $E_{i,j}$ or $E_{i,j}'$, from which the lemma follows by Lemma \ref{L10}. \end{proof} \begin{lemma} \label{L101218_3} Let $\lambda\in{\mathscr {RP}}_p(n)\setminus\{\beta_n\}$ with $\epsilon_i(\lambda)=0$ for all $i\neq 0$ and $\widetilde e_0 \lambda\in \text{JS}(0)$. Then $D(\lambda){\downarrow}_{\widetilde{\sf S}_{n-1}}$ has a composition factor $D(\mu)$, where $\mu\not=\widetilde e_0\lambda$ is obtained from $\lambda$ by removing the bottom removable node. \end{lemma} \begin{proof} Let $A=(h,\lambda_h)$ be the bottom removable node of $\lambda$. Then $A$ is normal for $\lambda$.
Since all normal nodes of $\lambda$ have residue 0 and $\widetilde e_0\lambda\in\text{JS}(0)$, we have that $A$ is not good, so $\mu\not=\widetilde e_0\lambda$. By Lemma \ref{Lemma39sr} it is then enough to prove that $\mu\in{\mathscr {RP}}_p(n-1)$. Note that $A$ has residue $0$, so $\lambda_h=1$. If $\mu\not\in{\mathscr {RP}}_p(n-1)$ then $\lambda_{h-1}=p$. So the node $B:=(h-1,p)$ is also normal for $\lambda$. Since $\widetilde e_0\lambda\in\text{JS}(0)$ we have that $\epsilon_0(\lambda)=2$. In particular $B$ is the $0$-good node of $\lambda$. Let $k<h-1$ be maximal with $\lambda_k>p$ (such $k$ exists since $\lambda\not=\beta_n$). By \cite[Lemma 3.7]{p} it follows that $\lambda_k=p+1$. In particular the node $(k,p+1)$ is removable of residue $0$ for $\lambda$, and then it is also $0$-normal, contradicting $B$ being the $0$-good node of $\lambda$. \end{proof} \begin{lemma}\label{L101218_4} Let $\lambda\in{\mathscr {RP}}_p(n)$ and $n\geq 4$. If $D(\lambda)$ is of type Q then $[D(\lambda,+){\downarrow}_{\widetilde{\sf S}_{n-2}}]=[D(\lambda,-){\downarrow}_{\widetilde{\sf S}_{n-2}}]$. If $D(\lambda)$ is of type M then $[E(\lambda,+){\downarrow}_{\widetilde {\sf A}_{n-2}}]=[E(\lambda,-){\downarrow}_{\widetilde {\sf A}_{n-2}}]$. \end{lemma} \begin{proof} Assume first that $D(\lambda)$ is of type Q. Then \[D(\lambda,\pm){\downarrow}_{\widetilde{\sf S}_{n-2}}\cong\bigoplus_iD_{i,i}^\pm\oplus \bigoplus_{i<j}D_{i,j}^\pm\] with $D_{i,i}^\pm\cong\mathrm{Res}_i^2D(\lambda,\pm)$ and $D_{i,j}^\pm\cong\mathrm{Res}_i\mathrm{Res}_jD(\lambda,\pm)\oplus\mathrm{Res}_j\mathrm{Res}_iD(\lambda,\pm)$. Further $D(\lambda,\pm){\downarrow}_{\widetilde{\sf S}_{n-2,2}}\cong\bigoplus_iE_{i,i}^\pm\oplus \bigoplus_{i<j}E_{i,j}^\pm$ with $E_{i,j}^\pm{\downarrow}_{\widetilde{\sf S}_{n-2}}\cong D_{i,j}^\pm$ for $i\leq j$. For any $i\leq j$ we have $D_{i,j}^+\otimes\mathbf{\mathrm{sgn}}\cong D_{i,j}^-$ and $E_{i,j}^+\otimes\mathbf{\mathrm{sgn}}\cong E_{i,j}^-$.
If $j>0$ we then easily have that $[D_{0,j}^+]=[D_{0,j}^-]$, since composition factors of $D_{0,j}^\pm$ are of the form $D(\mu,0)\cong D(\mu,0)\otimes \mathbf{\mathrm{sgn}}$ for some $\mu\in{\mathscr {RP}}_p(n-2)$. If $0<i\leq j$ then $[E_{i,j}^+]=[E_{i,j}^-]$, since composition factors of $E_{i,j}^\pm$ are of the form $D(\mu,(2))\cong (D(\mu,(2)))\otimes \mathbf{\mathrm{sgn}}$ for some $\mu\in{\mathscr {RP}}_p(n-2)$. Also in this case it then follows that $[D_{i,j}^+]=[D_{i,j}^-]$. If $D(\lambda)$ is of type M use a similar argument involving conjugation with $\widetilde{(1,2)}$ instead of tensoring with $\mathbf{\mathrm{sgn}}$. \end{proof} \begin{lemma}\label{L101218_5} Let $n\geq 4$, $\lambda\in{\mathscr {RP}}_p(n)\setminus\{\beta_n\}$. Let $G=\widetilde{\sf S}_n$ or $G=\widetilde {\sf A}_n$ and $D$ be a simple $F G$-module indexed by $\lambda$. Assume that one of the following holds: \begin{enumerate} \item $\lambda$ is JS(1), $p=3$ and $\epsilon_0(\widetilde e_1\lambda)=3$, \item $\lambda$ is JS(1), $p>3$ and $\epsilon_0(\widetilde e_1\lambda)=1$ and $\epsilon_2(\widetilde e_1\lambda)=1$, \item $\epsilon_0(\lambda)=2$, $\epsilon_i(\lambda)=0$ for $i>0$ and $\widetilde e_0\lambda\in\text{JS}(0)$. \end{enumerate} Then \[\dim\mathrm{End}_{\widetilde{\sf S}_{n-2,2}\cap G}(D{\downarrow}_{\widetilde{\sf S}_{n-2,2}\cap G})>\dim\mathrm{End}_{\widetilde{\sf S}_{n-1}\cap G}(D{\downarrow}_{\widetilde{\sf S}_{n-1}\cap G}).\] \end{lemma} \begin{proof} We will prove the lemma corresponding to cases (i), (ii) and (iii) separately. We will use Lemma \ref{Lemma39s} without further reference. {\bf Case (i).} Notice that $D(\lambda){\downarrow}_{\widetilde{\sf S}_{n-1}}\cong D(\widetilde e_1\lambda)^{\oplus 1+a(\lambda)}$ and $D(\lambda){\downarrow}_{\widetilde{\sf S}_{n-2,2}}\cong e_0 D(\widetilde e_1\lambda)\circledast D((2))$. So the lemma holds if $G=\widetilde{\sf S}_n$ and $D\cong D(\lambda,0)$ or $G=\widetilde {\sf A}_n$ and $D\cong E(\lambda,0)$ by Lemma \ref{L101218_2}. 
Assume now that $G=\widetilde{\sf S}_n$ and $D\cong D(\lambda,\pm)$. Then $D(\lambda,\pm){\downarrow}_{\widetilde{\sf S}_{n-1}}\cong D(\widetilde e_1\lambda,0)$ and $D(\lambda,\pm){\downarrow}_{\widetilde{\sf S}_{n-2,2}}$ is indecomposable with simple head and socle, and it has exactly $3$ composition factors of the form $(D(\widetilde e_0\widetilde e_1\lambda,(2)),+)$ or $(D(\widetilde e_0\widetilde e_1\lambda,(2)),-)$. Let $b,c\in\{\pm\}$ be such that $D(\lambda,\pm){\downarrow}_{\widetilde{\sf S}_{n-2,2}}$ has a filtration of the form \[(D(\widetilde e_0\widetilde e_1\lambda,(2)),\pm)|\ldots|(D(\widetilde e_0\widetilde e_1\lambda,(2)),\pm b)|\ldots|(D(\widetilde e_0\widetilde e_1\lambda,(2)),\pm c).\] Note that by self-duality of $D(\lambda)$ we have that \[(D(\lambda,\pm){\downarrow}_{\widetilde{\sf S}_{n-2,2}})^*\in\{D(\lambda,\pm){\downarrow}_{\widetilde{\sf S}_{n-2,2}},D(\lambda,\mp){\downarrow}_{\widetilde{\sf S}_{n-2,2}}\}.\] So there exists $d\in\{\pm\}$ such that $(D(\lambda,\pm){\downarrow}_{\widetilde{\sf S}_{n-2,2}})^*$ has a filtration \[(D(\widetilde e_0\widetilde e_1\lambda,(2)),\pm cd)|\ldots|(D(\widetilde e_0\widetilde e_1\lambda,(2)),\pm bd)|\ldots|(D(\widetilde e_0\widetilde e_1\lambda,(2)),\pm d).\] It then follows that $c=+$ and so the lemma holds. The case $G=\widetilde {\sf A}_n$ and $D\cong E(\lambda,\pm)$ holds with similar arguments. {\bf Case (ii).} Notice that in this case $\epsilon_k(\widetilde e_1\lambda)=0$ for $k\not=0,2$ since $\lambda\in\text{JS}(1)$ and using \cite[Lemma 20.2.3]{KBook}. In particular $D(\lambda){\downarrow}_{\widetilde{\sf S}_{n-1}}\cong D(\widetilde e_1\lambda)^{\oplus 1+a(\lambda)}$ and $D(\lambda){\downarrow}_{\widetilde{\sf S}_{n-2,2}}\cong (D(\widetilde e_0\widetilde e_1\lambda,(2)))\oplus(D(\widetilde e_2\widetilde e_1\lambda,(2)))^{\oplus 1+a(\lambda)}$. The lemma then easily follows.
{\bf Case (iii).} In this case by Lemma \ref{L101218_3} we have $D(\lambda){\downarrow}_{\widetilde{\sf S}_{n-1}}\cong e_0 D(\lambda)$ and $D(\lambda){\downarrow}_{\widetilde{\sf S}_{n-2,2}}\cong (D(\widetilde e_0^2\lambda,(2)))^{\oplus 1+a(\lambda)}\oplus A$ with $A\not=0$ corresponding to blocks different from the block of $D(\widetilde e_0^2\lambda,(2))$. So the lemma holds if $G=\widetilde{\sf S}_n$ and $D\cong D(\lambda,0)$ or $G=\widetilde {\sf A}_n$ and $D\cong E(\lambda,0)$. Assume now that $G=\widetilde{\sf S}_n$ and $D\cong D(\lambda,\pm)$. Then $D(\lambda,\pm){\downarrow}_{\widetilde{\sf S}_{n-2,2}}\cong (D(\widetilde e_0^2\lambda,(2)),0)\oplus A'$ with $A'\not=0$. So it is enough to prove that $\dim\mathrm{End}_{\widetilde{\sf S}_{n-1}}(D(\lambda,\pm){\downarrow}_{\widetilde{\sf S}_{n-1}})=1$. Note that $D(\lambda,\pm){\downarrow}_{\widetilde{\sf S}_{n-1}}$ has simple head and socle and exactly two composition factors of the form $D(\widetilde e_0\lambda,+)$ or $D(\widetilde e_0\lambda,-)$. Let $b\in\{\pm\}$ with \[D(\lambda,\pm){\downarrow}_{\widetilde{\sf S}_{n-1}}\sim D(\widetilde e_0\lambda,\pm)|\ldots|D(\widetilde e_0\lambda,\pm b).\] It is enough to prove that $b=-$. This follows from \begin{align*} \mathrm{Res}_0(D(\lambda,\pm){\downarrow}_{\widetilde{\sf S}_{n-1}})&\cong \mathrm{Res}_0^2(D(\lambda,\pm))\cong \mathrm{Res}_0 (D(\widetilde e_0\lambda,\pm)\oplus D(\widetilde e_0\lambda,\pm b))\\ &\cong D(\widetilde e_0^2\lambda,\pm)\oplus D(\widetilde e_0^2\lambda,\pm b) \end{align*} and from Lemma \ref{L101218_4}. The case $G=\widetilde {\sf A}_n$ and $D\cong E(\lambda,\pm)$ holds similarly. \end{proof} \begin{lemma}\label{L190219} Let $p\geq 3$, $n\geq 4$ and $\lambda\in{\mathscr {RP}}_p(n)$.
Assume that \[\dim\mathrm{End}_{\widetilde{\sf S}_{n-2,2}}(D(\lambda){\downarrow}_{\widetilde{\sf S}_{n-2,2}})>\dim\mathrm{End}_{\widetilde{\sf S}_{n-1}}(D(\lambda){\downarrow}_{\widetilde{\sf S}_{n-1}})+\dim\mathrm{End}_{\widetilde{\sf S}_{n}}(D(\lambda)).\] Then \begin{enumerate}[-] \item If $D(\lambda)$ is of type M then there exists \[\psi\in\mathrm{Hom}_{\widetilde{\sf S}_n}(M_2,\mathrm{End}_F(D(\lambda,0)))\] which does not vanish on $S_2$. Further there exist \[\phi_1,\phi_2\in\mathrm{Hom}_{\widetilde {\sf A}_n}(M_2,\mathrm{Hom}_F(E(\lambda,\pm),E(\lambda)))\] which are linearly independent over $S_2$. \item If $D(\lambda)$ is of type Q then there exist \[\psi_1,\psi_2\in\mathrm{Hom}_{\widetilde{\sf S}_n}(M_2,\mathrm{Hom}_F(D(\lambda,\pm),D(\lambda)))\] which are linearly independent over $S_2$. Further there exists \[\phi\in\mathrm{Hom}_{\widetilde {\sf A}_n}(M_2,\mathrm{End}_F(E(\lambda,0)))\] which does not vanish on $S_2$. \end{enumerate} \end{lemma} \begin{proof} From Lemma \ref{Mk} we have that $M_2\sim S_2|M_1$. Assume first that $D(\lambda)$ is of type M, so that \[\dim\mathrm{End}_{\widetilde{\sf S}_{n-2,2}}(D(\lambda){\downarrow}_{\widetilde{\sf S}_{n-2,2}})>\dim\mathrm{End}_{\widetilde{\sf S}_{n-1}}(D(\lambda){\downarrow}_{\widetilde{\sf S}_{n-1}})+1.\] Since $D(\lambda){\downarrow}_{\widetilde {\sf A}_n}\cong E(\lambda)$ and $D(\lambda,0)\cong D(\lambda)\cong E(\lambda,\pm){\uparrow}^{\widetilde{\sf S}_n}$, for any partition $\mu\not=(1^n)$ we have that \[\dim\mathrm{End}_{\widetilde{\sf S}_\mu}(D(\lambda){\downarrow}_{\widetilde{\sf S}_\mu})=\dim\mathrm{Hom}_{\widetilde {\sf A}_\mu}(E(\lambda,\pm){\downarrow}_{\widetilde {\sf A}_\mu},E(\lambda){\downarrow}_{\widetilde {\sf A}_\mu}).\] The lemma then easily follows in this case.
Assume next that $D(\lambda)$ is of type Q, so that \[\dim\mathrm{End}_{\widetilde{\sf S}_{n-2,2}}(D(\lambda){\downarrow}_{\widetilde{\sf S}_{n-2,2}})>\dim\mathrm{End}_{\widetilde{\sf S}_{n-1}}(D(\lambda){\downarrow}_{\widetilde{\sf S}_{n-1}})+2.\] Then for some $\epsilon\in\{\pm\}$ we have that \begin{align*} &\dim\mathrm{Hom}_{\widetilde{\sf S}_{n-2,2}}(D(\lambda,\epsilon){\downarrow}_{\widetilde{\sf S}_{n-2,2}},D(\lambda){\downarrow}_{\widetilde{\sf S}_{n-2,2}})\\ &\geq\dim\mathrm{Hom}_{\widetilde{\sf S}_{n-1}}(D(\lambda,\epsilon){\downarrow}_{\widetilde{\sf S}_{n-1}},D(\lambda){\downarrow}_{\widetilde{\sf S}_{n-1}})+2. \end{align*} So there exist $\psi_1,\psi_2\in\mathrm{Hom}_{\widetilde{\sf S}_n}(M_2,\mathrm{Hom}_F(D(\lambda,\epsilon),D(\lambda)))$ which are linearly independent over $S_2$. The lemma then follows from \[D(\lambda,+)\otimes D(\lambda)\cong D(\lambda,+)\otimes \mathbf{\mathrm{sgn}}\otimes D(\lambda)\cong D(\lambda,\epsilon)\otimes D(\lambda)\] and from $D(\lambda,\pm){\downarrow}_{\widetilde {\sf A}_n}\cong E(\lambda,0)$. \end{proof} \begin{lemma}\label{L190219_2} Let $p\geq 3$, $n\geq 4$ and $\lambda\in{\mathscr {RP}}_p(n)$. Assume that $\lambda\not=\beta_n$ and $\lambda$ is not JS(0). Then: \begin{enumerate}[-] \item If $D(\lambda)$ is of type M then there exists \[\psi\in\mathrm{Hom}_{\widetilde{\sf S}_n}(M_2,\mathrm{End}_F(D(\lambda,0)))\] which does not vanish on $S_2$. Further there exists \[\phi\in\mathrm{Hom}_{\widetilde {\sf A}_n}(M_2,\mathrm{End}_F(E(\lambda,\pm)))\] which does not vanish on $S_2$ or there exist \[\phi_1,\phi_2\in\mathrm{Hom}_{\widetilde {\sf A}_n}(M_2,\mathrm{Hom}_F(E(\lambda,\pm),E(\lambda,\mp)))\] which are linearly independent over $S_2$. 
\item If $D(\lambda)$ is of type Q then there exists \[\psi\in\mathrm{Hom}_{\widetilde{\sf S}_n}(M_2,\mathrm{End}_F(D(\lambda,\pm)))\] which does not vanish on $S_2$ or there exist \[\psi_1,\psi_2\in\mathrm{Hom}_{\widetilde{\sf S}_n}(M_2,\mathrm{Hom}_F(D(\lambda,\pm),D(\lambda,\mp)))\] which are linearly independent over $S_2$. Further there exists \[\phi\in\mathrm{Hom}_{\widetilde {\sf A}_n}(M_2,\mathrm{End}_F(E(\lambda,0)))\] which does not vanish on $S_2$. \end{enumerate} \end{lemma} \begin{proof} From Lemma \ref{L190219} we may assume that \[\dim\mathrm{End}_{\widetilde{\sf S}_{n-2,2}}(D(\lambda){\downarrow}_{\widetilde{\sf S}_{n-2,2}})\leq\dim\mathrm{End}_{\widetilde{\sf S}_{n-1}}(D(\lambda){\downarrow}_{\widetilde{\sf S}_{n-1}})+\dim\mathrm{End}_{\widetilde{\sf S}_{n}}(D(\lambda)).\] Let $G\in\{\widetilde{\sf S}_n,\widetilde {\sf A}_n\}$ and $D$ be an $FG$-representation indexed by $\lambda$. Then by Lemmas \ref{L081218} and \ref{L101218_5} we have that \[\dim\mathrm{End}_{\widetilde{\sf S}_{n-2,2}\cap G}(D{\downarrow}_{\widetilde{\sf S}_{n-2,2}\cap G})>\dim\mathrm{End}_{\widetilde{\sf S}_{n-1}\cap G}(D{\downarrow}_{\widetilde{\sf S}_{n-1}\cap G}).\] Since $M_2\sim S_2|M_1$ by Lemma \ref{Mk}, the lemma easily follows. \end{proof} \subsection{Basic spin modules}\label{sbs} \begin{lemma}\label{L22} Let $p\geq 3$. Let $c=1$ if $p\nmid n$ or $c=2$ if $p\mid n$. \begin{enumerate}[-] \item If $D(\beta_n)$ is of type M then $D(\beta_n,0)\otimes D(\beta_n)\cong\bigoplus_{k=0}^{n-c}\overline{D}_k$ and $E(\beta_n,\pm)\otimes E(\beta_n)\cong\overline{E}_{(n-c)/2,\pm}\oplus\bigoplus_{k=0}^{(n-2-c)/2}\overline{E}_k$. \item If $D(\beta_n)$ is of type Q then $D(\beta_n,\pm)\otimes D(\beta_n)\cong\bigoplus_{k=0}^{n-c}\overline{D}_k$ and $E(\beta_n,0)\otimes E(\beta_n)\cong\bigoplus_{k=0}^{(n-1-c)/2}\overline{E}_k$.
\end{enumerate} \end{lemma} \begin{proof} Note that by \cite[Theorem 9.3]{s} if $n$ is odd \[[S((n),0)\otimes S((n))]=\sum_{k=0}^{n-1}[S^{(n-k,1^k)}],\] while if $n$ is even \[[S((n),\pm)\otimes S((n))]=\sum_{k=0}^{n-1}[S^{(n-k,1^k)}].\] By Lemmas \ref{LBS} and \ref{LH} it then follows that if $D(\beta_n)$ is of type M then \[[D(\beta_n,0)\otimes D(\beta_n)]=\sum_{k=0}^{n-c}[\overline{D}_k],\] while if $D(\beta_n)$ is of type Q then \[[D(\beta_n,\pm)\otimes D(\beta_n)]=\sum_{k=0}^{n-c}[\overline{D}_k].\] Since $D(\beta_n)$ is self-dual and $D(\beta_n,+)\otimes \mathbf{\mathrm{sgn}}\cong D(\beta_n,-)$ if $D(\beta_n)$ is of type Q, the lemma holds for $\widetilde{\sf S}_n$. For $\widetilde {\sf A}_n$ it follows by Lemma \ref{L20}. \end{proof} \begin{lemma}\label{L33} Let $p\geq 3$ and $n\geq 10$. Then $\overline{D}_2\subseteq\mathrm{End}_F(D(\beta_n,\delta))$ and $\overline{E}_2\subseteq\mathrm{End}_F(E(\beta_n,\delta'))$. \end{lemma} \begin{proof} It can be easily checked from Lemma \ref{L20} that $\overline{D}_2\cong D_{1^2}$ and that $(n-2,1^2)>(n-2,1^2)^{\tt M}$. We will use Lemma \ref{Lemma39s} without further reference. Note that any composition factor (as supermodule) of $D(\beta_n){\downarrow}_{\widetilde{\sf S}_{n-k}}$ is of the form $D(\beta_{n-k})$ (this holds for example by Lemma \ref{LBS} and branching in characteristic 0). So any composition factor of $D(\beta_n){\downarrow}_{\widetilde{\sf S}_\alpha}$ is of the form $D(\beta_{\alpha_1},\beta_{\alpha_2},\ldots)$. Consider first $D(\beta_n,\delta)$. If $\delta=0$ then $D_{1^2}\subseteq\mathrm{End}_F(D(\beta_n,\delta))$ by Lemma \ref{L22}. So we may assume that $\delta=\pm$.
If $n\not\equiv 0,1,2\!\mod p$ then \begin{align*} D(\beta_n){\downarrow}_{\widetilde{\sf S}_{n-1}}&\cong D(\beta_{n-1})^{\oplus 2},\\ D(\beta_n){\downarrow}_{\widetilde{\sf S}_{n-2}}&\cong D(\beta_{n-2})^{\oplus 2},\\ D(\beta_n){\downarrow}_{\widetilde{\sf S}_{n-2,2}}&\cong D(\beta_{n-2},(2))^{\oplus 2}, \end{align*} with $D(\beta_{n-1})$ and $D(\beta_{n-2},(2))$ of type M and $D(\beta_{n-2})$ of type Q. So $D(\beta_n,\pm){\downarrow}_{\widetilde{\sf S}_{n-1}}$ and $D(\beta_n,\pm){\downarrow}_{\widetilde{\sf S}_{n-2,2}}$ are simple, while $D(\beta_n,\pm){\downarrow}_{\widetilde{\sf S}_{n-2}}$ is a direct sum of two simple modules. So $D_{1^2}\subseteq\mathrm{End}_F(D(\beta_n,\delta))$ by Lemmas \ref{l2} and \ref{L17}. If $n\equiv 2\!\mod p$ then \begin{align*} D(\beta_n){\downarrow}_{\widetilde{\sf S}_{n-1}}&\cong D(\beta_{n-1})^{\oplus 2},\\ D(\beta_n){\downarrow}_{\widetilde{\sf S}_{n-2}}&\cong(D(\beta_{n-2})|D(\beta_{n-2}))^{\oplus 2},\\ D(\beta_n){\downarrow}_{\widetilde{\sf S}_{n-2,2}}&\cong D(\beta_{n-2},(2))|D(\beta_{n-2},(2)),\\ D(\beta_n){\downarrow}_{\widetilde{\sf S}_{n-3,2}}&\cong D(\beta_{n-3},(2))^{\oplus 2}, \end{align*} with $D(\beta_{n-1})$, $D(\beta_{n-2})$ and $D(\beta_{n-3},(3))$ of type M and $D(\beta_{n-2},(2))$ and $D(\beta_{n-3},(2))$ of type Q. In particular $D(\beta_n,+){\downarrow}_{\widetilde{\sf S}_{n-1}}\cong D(\beta_n,-){\downarrow}_{\widetilde{\sf S}_{n-1}}$ are simple, $D(\beta_n,\pm){\downarrow}_{\widetilde{\sf S}_{n-2}}$ is uniserial with two isomorphic composition factors and $D(\beta_n,\pm){\downarrow}_{\widetilde{\sf S}_{n-2,2}}$ is uniserial with two non-isomorphic composition factors (since $D(\beta_n,+){\downarrow}_{\widetilde{\sf S}_{n-1}}\cong D(\beta_n,-){\downarrow}_{\widetilde{\sf S}_{n-1}}$ the two composition factors of $D(\beta_n,\pm){\downarrow}_{\widetilde{\sf S}_{n-3,2}}$ are not isomorphic). It then follows again by Lemmas \ref{l2} and \ref{L17} that $D_{1^2}\subseteq\mathrm{End}_F(D(\beta_n,\delta))$. 
If $n\equiv 1\!\mod p$ then \begin{align*} D(\beta_n){\downarrow}_{\widetilde{\sf S}_{n-1}}&\cong D(\beta_{n-1})|D(\beta_{n-1}),\\ D(\beta_n){\downarrow}_{\widetilde{\sf S}_{n-2}}&\cong(D(\beta_{n-2}))^{\oplus 2},\\ D(\beta_n){\downarrow}_{\widetilde{\sf S}_{n-2,2}}&\cong D(\beta_{n-2},(2))^{\oplus 2}, \end{align*} with $D(\beta_{n-1})$ and $D(\beta_{n-2})$ of type Q and $D(\beta_{n-2},(2))$ of type M. In particular $D(\beta_n,+){\downarrow}_{\widetilde{\sf S}_{n-2,2}}\cong D(\beta_n,-){\downarrow}_{\widetilde{\sf S}_{n-2,2}}$ are simple, from which it follows that $D(\beta_n,\pm){\downarrow}_{\widetilde{\sf S}_{n-2}}\cong D(\beta_{n-2},+)\oplus D(\beta_{n-2},-)$ and then that $D(\beta_n,\pm){\downarrow}_{\widetilde{\sf S}_{n-1}}\cong D(\beta_{n-1},\pm)|D(\beta_{n-1},\mp)$, so again $D_{1^2}\subseteq\mathrm{End}_F(D(\beta_n,\delta))$. If $n\equiv 0\!\mod p$ and $p\not=3$ then \begin{align*} D(\beta_n){\downarrow}_{\widetilde{\sf S}_{n-3}}&\cong D(\beta_{n-3})^{\oplus 2},\\ D(\beta_n){\downarrow}_{\widetilde{\sf S}_{n-3,2}}&\cong D(\beta_{n-3},(2))^{\oplus 2},\\ D(\beta_n){\downarrow}_{\widetilde{\sf S}_{n-3,3}}&\cong D(\beta_{n-3},(3)), \end{align*} with $D(\beta_{n-3})$ and $D(\beta_{n-3},(3))$ of type Q, while $D(\beta_{n-3},(2))$ is of type M. So $D(\beta_n,\pm){\downarrow}_{\widetilde{\sf S}_{n-3,2}}$ and $D(\beta_n,\pm){\downarrow}_{\widetilde{\sf S}_{n-3,3}}$ are simple, while $D(\beta_n,\pm){\downarrow}_{\widetilde{\sf S}_{n-3}}$ is a direct sum of two simple modules. Then $D_{1^2}\subseteq\mathrm{End}_F(D(\beta_n,\delta))$ by Lemmas \ref{l2} and \ref{L16}, since $\mathrm{End}_F(D(\beta_n,\delta))$ is semisimple by Lemma \ref{L22}.
If $n\equiv 0\!\mod p$ and $p=3$ then \begin{align*} D(\beta_n){\downarrow}_{\widetilde{\sf S}_{n-1}}&\cong D(\beta_{n-1}),\\ D(\beta_n){\downarrow}_{\widetilde{\sf S}_{n-2}}&\cong D(\beta_{n-2})^{\oplus 2},\\ D(\beta_n){\downarrow}_{\widetilde{\sf S}_{n-3}}&\cong (D(\beta_{n-3})|D(\beta_{n-3}))^{\oplus 2},\\ D(\beta_n){\downarrow}_{\widetilde{\sf S}_{n-3,2}}&\cong D(\beta_{n-3},(2))|D(\beta_{n-3},(2)),\\ D(\beta_n){\downarrow}_{\widetilde{\sf S}_{n-3,3}}&\cong D(\beta_{n-3},(2,1))|D(\beta_{n-3},(2,1)),\\ D(\beta_n){\downarrow}_{\widetilde{\sf S}_{n-4,2}}&\cong D(\beta_{n-4},(2))^{\oplus 2}. \end{align*} Further $D(\beta_{n-2})$ and $D(\beta_{n-3})$ are of type M while $D(\beta_{n-1})$, $D(\beta_{n-3},(2))$, $D(\beta_{n-3},(2,1))$ and $D(\beta_{n-4},(2))$ are of type Q. In particular $D(\beta_n,+){\downarrow}_{\widetilde{\sf S}_{n-2}}\cong D(\beta_n,-){\downarrow}_{\widetilde{\sf S}_{n-2}}$, from which it follows that \[D(\beta_n,+){\downarrow}_{\widetilde{\sf S}_{n-4,2}}\cong D(\beta_{n-4},(2),+)\oplus D(\beta_{n-4},(2),-).\] So \begin{align*} D(\beta_n,\pm){\downarrow}_{\widetilde{\sf S}_{n-3}}&\cong D(\beta_{n-3},0)|D(\beta_{n-3},0),\\ D(\beta_n,\pm){\downarrow}_{\widetilde{\sf S}_{n-3,2}}&\cong D(\beta_{n-3},(2),\pm)|D(\beta_{n-3},(2),\mp),\\ D(\beta_n,\pm){\downarrow}_{\widetilde{\sf S}_{n-3,3}}&\cong D(\beta_{n-3},(2,1),\pm)|D(\beta_{n-3},(2,1),\mp). \end{align*} Since $\mathrm{End}_F(D(\beta_n,\delta))$ is semisimple by Lemma \ref{L22}, it follows from Lemma \ref{L16} that $D_{1^2}\subseteq\mathrm{End}_F(D(\beta_n,\delta))$. For $\widetilde {\sf A}_n$ the proof is similar (it uses the restriction to the corresponding subgroups of $\widetilde {\sf A}_n$). \end{proof} \section{Tensor products}\label{s6} In this section we will consider tensor products with special classes of modules. In order to check if tensor products are irreducible we will at times use the following lemmas.
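Before turning to the lemmas, we note that the residue combinatorics used in the branching arguments above can be sanity-checked computationally. The following sketch is not part of the paper; the function name is ours, and it assumes the standard residue convention for $p$-strict partitions, under which the residues of columns $1,\ldots,p$ are $0,1,\ldots,\ell,\ell-1,\ldots,1,0$ with $\ell=(p-1)/2$, repeating with period $p$. Under this convention it verifies the divisibility step used in the proof of Lemma \ref{L13}: two adjacent columns $c$ and $c+1$ share a residue exactly when $p\mid c$.

```python
# Assumed convention: residues of columns 1,...,p are 0,1,...,l,l-1,...,1,0
# with l = (p-1)/2, repeating with period p. The residue of a node depends
# only on its column under this convention.
def res(col, p):
    l = (p - 1) // 2
    r = (col - 1) % p
    return r if r <= l else p - 1 - r

# Check the step from the proof of Lemma L13: res(c) == res(c+1) iff p | c.
for p in (3, 5, 7):
    for c in range(1, 3 * p + 1):
        assert (res(c, p) == res(c + 1, p)) == (c % p == 0)
```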
\begin{lemma}\label{L21a} Let $D$ be an irreducible $F\widetilde{\sf S}_n$-module and $\mu\in{\mathscr {RP}}_p(n)$. If $D\otimes D(\mu,\delta)$ is irreducible then \begin{align*} \dim\mathrm{Hom}_{\widetilde{\sf S}_n}(\mathrm{End}_F(D),\mathrm{Hom}_F(D(\mu),D(\mu,\delta)))&\leq 1+a(\mu). \end{align*} Similarly if $E$ is an irreducible $F\widetilde {\sf A}_n$-module, $\mu\in{\mathscr {RP}}_p(n)$ and $E\otimes E(\mu,\delta')$ is irreducible then \begin{align*} \dim\mathrm{Hom}_{\widetilde {\sf A}_n}(\mathrm{End}_F(E),\mathrm{Hom}_F(E(\mu),E(\mu,\delta')))&\leq 2-a(\mu). \end{align*} \end{lemma} \begin{proof} Similar to \cite[Lemma 3.4]{bk2}. \end{proof} \begin{lemma}\label{L50} Let $\lambda\in{\mathscr {P}}_p(n)$ and $\mu\in{\mathscr {RP}}_p(n)$. If $D(\mu)$ is of type Q and \begin{align*} \dim\mathrm{Hom}_{\widetilde{\sf S}_n}(\mathrm{End}_F(D^\lambda),\mathrm{Hom}_F(D(\mu),D(\mu,\pm)))=2 \end{align*} then \begin{enumerate}[-] \item if $D^\lambda\otimes D(\mu)$ has a composition factor of type M then $D^\lambda\otimes D(\mu,\pm)$ is irreducible, \item if $D^\lambda\otimes D(\mu)$ has a composition factor of type Q then $D^\lambda\otimes D(\mu,\pm)$ is not irreducible. \end{enumerate} Similarly if $\lambda\in{\mathscr {P}}_p(n)\setminus{\mathscr P}^A_p(n)$, $D(\mu)$ is of type M and \begin{align*} \dim\mathrm{Hom}_{\widetilde {\sf A}_n}(\mathrm{End}_F(E^\lambda),\mathrm{Hom}_F(E(\mu),E(\mu,\pm)))=2 \end{align*} then \begin{enumerate}[-] \item if $D^\lambda\otimes D(\mu)$ has a composition factor of type M then $E^\lambda\otimes E(\mu,\pm)$ is not irreducible, \item if $D^\lambda\otimes D(\mu)$ has a composition factor of type Q then $E^\lambda\otimes E(\mu,\pm)$ is irreducible. \end{enumerate} \end{lemma} \begin{proof} We will prove the lemma only for $\widetilde{\sf S}_n$, the proof for $\widetilde {\sf A}_n$ being similar (using conjugation by elements in $\widetilde{\sf S}_n\setminus\widetilde {\sf A}_n$ instead of tensoring with $\mathbf{\mathrm{sgn}}$).
As $D(\mu)=D(\mu,+)\oplus D(\mu,-)$ and $D(\mu,+)\cong D(\mu,-)\otimes\mathbf{\mathrm{sgn}}$, \begin{align*} \dim\mathrm{End}_{\widetilde{\sf S}_n}(D^\lambda\otimes D(\mu))&=\dim\mathrm{Hom}_{\widetilde{\sf S}_n}(\mathrm{End}_F(D^\lambda),\mathrm{End}_F(D(\mu)))\\ &=2\dim\mathrm{Hom}_{\widetilde{\sf S}_n}(\mathrm{End}_F(D^\lambda),\mathrm{Hom}_F(D(\mu),D(\mu,\pm)))\\ &=4. \end{align*} Let $D(\nu)\subseteq D^\lambda\otimes D(\mu)$. Assume first that $D(\nu)$ is of type M. Then $D(\nu)=D(\nu,0)\cong D(\nu,0)\otimes\mathbf{\mathrm{sgn}}$. From $D(\mu,+)\cong D(\mu,-)\otimes\mathbf{\mathrm{sgn}}$ it follows that $D(\nu)^{\oplus 2}\subseteq D^\lambda\otimes D(\mu)$. Since $D^\lambda\otimes D(\mu)$ is self-dual, and so has isomorphic head and socle, it follows that $D^\lambda\otimes D(\mu)\cong D(\nu)^{\oplus 2}$. In particular, as a module, $D^\lambda\otimes D(\mu)$ has exactly two composition factors and so $D^\lambda\otimes D(\mu,\pm)$ is irreducible. Assume now that $D(\nu)$ is of type Q. Then $D^\lambda\otimes D(\mu)\not\cong D(\nu)$. In particular, as a module, $D^\lambda\otimes D(\mu)$ has more than two composition factors. Since $D^\lambda\otimes D(\mu,+)\cong (D^\lambda\otimes D(\mu,-))\otimes\mathbf{\mathrm{sgn}}$, it then follows that $D^\lambda\otimes D(\mu,\pm)$ is not irreducible in this case. \end{proof} \subsection{Tensor products with natural modules}\label{snat} \begin{lemma}\label{L8} Let $n\geq 4$, $G=\widetilde {\sf S}_n$ or $\widetilde {\sf A}_n$, $\lambda\in{\mathscr {RP}}_p(n)$ and $V$ be a simple spin $G$-module indexed by $\lambda$. If $V\otimes D^{(n-1,1)}{\downarrow}_G$ is simple then, as supermodule, \[[D(\lambda)\otimes M_1:D(\lambda)]=\left\{\begin{array}{ll} 1,&n\not\equiv 0\!\mod p,\\ 2,&n\equiv 0\!\mod p. \end{array}\right.\] \end{lemma} \begin{proof} Since $n\geq 4$ we have that $D^{(n-1,1)}{\downarrow}_G$ has dimension greater than 1. Let $V'$ be any simple spin $G$-module indexed by $\lambda$.
Then $V'\otimes D^{(n-1,1)}{\downarrow}_G$ is simple (by either tensoring with $\mathbf{\mathrm{sgn}}$ or conjugating with $\sigma\in\widetilde{\sf S}_n\setminus\widetilde {\sf A}_n$) and so, since $\dim D^{(n-1,1)}{\downarrow}_G>1$, $V$ is not a composition factor of $V'\otimes D^{(n-1,1)}{\downarrow}_G$. So $[D(\lambda)\otimes M_1:D(\lambda)]=[M_1:D_0]$ and then the lemma holds by Lemma \ref{M1}. \end{proof} \begin{lemma}\label{L11} Let $G=\widetilde {\sf S}_n$ or $\widetilde {\sf A}_n$ and $\lambda\in{\mathscr {RP}}_p(n)$. \begin{enumerate}[-] \item If $G=\widetilde{\sf S}_n$ and $D(\lambda)$ is of type M then $D(\lambda,0)\otimes D^{(n-1,1)}$ is irreducible if and only if as supermodule $D(\lambda)\otimes D^{(n-1,1)}$ is irreducible of type M. \item If $G=\widetilde{\sf S}_n$ and $D(\lambda)$ is of type Q then $D(\lambda,\pm)\otimes D^{(n-1,1)}$ is irreducible if and only if as supermodule $D(\lambda)\otimes D^{(n-1,1)}$ is irreducible of type Q or has exactly two composition factors, both of type M. \item If $G=\widetilde {\sf A}_n$ then $E(\lambda,0)\otimes E^{(n-1,1)}$ or $E(\lambda,\pm)\otimes E^{(n-1,1)}$ is irreducible if and only if as supermodule $D(\lambda)\otimes D^{(n-1,1)}$ is irreducible. \end{enumerate} \end{lemma} \begin{proof} This holds by comparing the number of composition factors of $D(\lambda){\downarrow}_G$ and of $(D(\lambda)\otimes D^{(n-1,1)}){\downarrow}_G$. \end{proof} \begin{theor}\label{T2} Let $n\geq 4$, $G=\widetilde {\sf S}_n$ or $\widetilde {\sf A}_n$, $\lambda\in{\mathscr {RP}}_p(n)$ and $V$ be a simple spin $G$-module indexed by $\lambda$. If $V\otimes D^{(n-1,1)}{\downarrow}_G$ is simple then $n\not\equiv 0\!\mod p$ and $\lambda\in JS(0)$.
In this case, if $\nu=(\lambda\setminus A)\cup B$ where $A$ is the bottom removable node of $\lambda$ and $B$ is the top addable node of $\lambda$, \begin{enumerate}[-] \item if $D(\lambda)$ is of type M then $D(\lambda,0)\otimes D^{(n-1,1)}$ is not irreducible, while $E(\lambda,\pm)\otimes E^{(n-1,1)}\cong E(\nu,0)$, \item if $D(\lambda)$ is of type Q then $D(\lambda,\pm)\otimes D^{(n-1,1)}\cong D(\nu,0)$, while $E(\lambda,0)\otimes E^{(n-1,1)}$ is not irreducible. \end{enumerate} \end{theor} \begin{proof} Let $c:=1$ if $D(\lambda)$ is of type M or $c:=2$ if $D(\lambda)$ is of type Q. Assume that $V\otimes D^{(n-1,1)}{\downarrow}_G$ is simple. We will use Lemmas \ref{Lemma39s} and \ref{Lemma40s} without further notice. {\bf Case 1.} $n\equiv 0\!\mod p$. From Lemma \ref{L10} we have that $\epsilon_0(\lambda)+\phi_0(\lambda)$ is odd. So by Lemmas \ref{L9} and \ref{L8} we have that $\lambda\in JS(i)$ and $\phi_i(\lambda)=0$ for some $i\geq 1$. Note that \begin{align*} D(\lambda)\otimes M_1&\cong (f_ie_iD(\lambda))^{\oplus 2}\oplus\sum_{j\geq 1:j\not=i}(f_je_iD(\lambda))^{\oplus 2}\oplus (f_0e_iD(\lambda))^{\oplus c}\\ &\cong D(\lambda)^{\oplus 2}\oplus\sum_{j\geq 1:j\not=i}(f_jD(\widetilde{e}_i\lambda))^{\oplus 2}\oplus (f_0D(\widetilde{e}_i\lambda))^{\oplus c}. \end{align*} It then follows from Lemma \ref{M1} and considering block decomposition that \[D(\lambda)\otimes D^{(n-1,1)}\cong \sum_{j\geq 1:j\not=i}(f_jD(\widetilde{e}_i\lambda))^{\oplus 2}\oplus (f_0D(\widetilde{e}_i\lambda))^{\oplus c}.\] By Lemma \ref{L11} it follows that if $D(\lambda)$ is of type Q then $D(\lambda)\otimes D^{(n-1,1)}$ needs to have exactly two composition factors, both of type M, while if $D(\lambda)$ is of type M then $D(\lambda)\otimes D^{(n-1,1)}$ is irreducible as supermodule. In either case $\phi_i(\widetilde{e}_i\lambda)=\phi_0(\widetilde{e}_i\lambda)=1$ and $\phi_j(\widetilde{e}_i\lambda)=0$ for $j\not=0,i$.
In particular $D(\lambda)\otimes M_1\cong D(\lambda)^{\oplus 2}\oplus D(\widetilde{f}_0\widetilde{e}_i\lambda)^{\oplus c}$. Notice also that from Lemma \ref{L10} either $\phi_0(\lambda)=3$ and $\phi_k(\lambda)=0$ for all other $k$, or there exists $j\not=0,i$ such that $\phi_0(\lambda)=\phi_j(\lambda)=1$ and $\phi_k(\lambda)=0$ for all other $k$. {\bf Case 1.1.} $\phi_0(\lambda)=3$ and $\phi_j(\lambda)=0$ for all $j\not=0$. From Lemma \ref{L051218_3} \[D(\widetilde{f}_0\widetilde{e}_i\lambda)^{\oplus c}\cong\mathrm{Ind}_0\mathrm{Res}_i D(\lambda)\cong \mathrm{Res}_i\mathrm{Ind}_0 D(\lambda)\cong \mathrm{Res}_i f_0 D(\lambda)\] and \[0=\mathrm{Ind}_0\mathrm{Res}_j D(\lambda)\cong \mathrm{Res}_j\mathrm{Ind}_0 D(\lambda)\cong \mathrm{Res}_j f_0 D(\lambda)\] for $j\not=0,i$. Since $c\leq 2<[f_0 D(\lambda):D(\widetilde f_0\lambda)]=\phi_0(\lambda)=3$, it follows that $\widetilde{f}_0\lambda$ has only normal nodes of residue 0 and then $\widetilde{f}_0\lambda\in JS(0)$, since $\epsilon_0(\lambda)=0$. Since $\phi_0(\widetilde{f}_0\lambda)=2$ we have from Lemma \ref{L12} that $n+1\equiv 0\!\mod p$, leading to a contradiction. {\bf Case 1.2.} There exists $j\not=0,i$ such that $\phi_0(\lambda)=\phi_j(\lambda)=1$ and $\phi_k(\lambda)=0$ for all $k\not=0,j$. In this case by Lemma \ref{L051218_3} \[\mathrm{Res}_iD(\widetilde f_j\lambda)^{\oplus c}\cong \mathrm{Res}_i\mathrm{Ind}_j D(\lambda)\cong \mathrm{Ind}_j\mathrm{Res}_i D(\lambda)\cong \mathrm{Ind}_j D(\widetilde e_i\lambda)^{\oplus c}=0\] and \[\mathrm{Res}_kD(\widetilde f_j\lambda)^{\oplus c}\cong \mathrm{Res}_k\mathrm{Ind}_j D(\lambda)\cong \mathrm{Ind}_j\mathrm{Res}_k D(\lambda)=0\] for $k\not=i,j$. So all normal nodes of $\widetilde f_j\lambda$ have residue $j$. Since $\epsilon_j(\lambda)=0$ we then have that $\widetilde{f}_j\lambda\in JS(j)$, which by Lemma \ref{L13} contradicts $n\equiv 0\!\mod p$. {\bf Case 2.} $n\not\equiv 0\!\mod p$. In this case $\lambda\in JS(0)$ and $\phi_0(\lambda)=0$ by Lemmas \ref{L9} and \ref{L8}.
From Lemma \ref{L12}, since $n\not\equiv 0\!\mod p$, this is equivalent to $\lambda\in JS(0)$. Notice that \[D(\lambda)\otimes M_1\cong f_0e_0D(\lambda)\oplus\sum_{j\geq 1}(f_je_0D(\lambda))^{\oplus c}\cong D(\lambda)\oplus\sum_{j\geq 1}(f_jD(\widetilde{e}_0\lambda))^{\oplus c}.\] From Lemma \ref{M1} it follows that \[D(\lambda)\otimes D^{(n-1,1)}\cong\sum_{j\geq 1}(f_jD(\widetilde{e}_0\lambda))^{\oplus c}.\] From \cite[Lemma 3.8]{p} $\widetilde{e}_0\lambda\in JS(1)$. Further $\phi_0(\widetilde{e}_0\lambda)=1$. So from Lemma \ref{L10} there exists $j\geq 1$ with $\phi_0(\widetilde{e}_0\lambda)=\phi_j(\widetilde{e}_0\lambda)=1$ and $\phi_k(\widetilde{e}_0\lambda)=0$ for $k\not=0,j$. If $D(\lambda)$ is of type M then $D(\lambda)\otimes D^{(n-1,1)}\cong D(\widetilde{f}_j\widetilde{e}_0\lambda)$ and $D(\widetilde{f}_j\widetilde{e}_0\lambda)$ is of type Q. If $D(\lambda)$ is of type Q then $D(\lambda)\otimes D^{(n-1,1)}\cong D(\widetilde{f}_j\widetilde{e}_0\lambda)^{\oplus 2}$ and $D(\widetilde{f}_j\widetilde{e}_0\lambda)$ is of type M. Note that $\widetilde{e}_0\lambda=\lambda\setminus A$, since $\lambda$ is JS(0) and the bottom removable node is always normal. Then $A$ is the bottom addable node of $\widetilde{e}_0\lambda$ and it is the unique conormal node of $\widetilde e_0\lambda$ of residue 0. Since $n\geq 4$ and $\lambda$ is JS(0) we have that $h(\lambda)\geq 2$. If $B$ is the top addable node of $\lambda$ then it is also the top addable node of $\widetilde e_0\lambda$. Since the top addable node is always conormal, it follows that $\widetilde{f}_j\widetilde{e}_0\lambda=(\lambda\setminus A)\cup B$. The theorem then follows from Lemma \ref{L11}. \end{proof} \subsection{Tensor products of basic spin and hooks} \begin{theor}\label{THB} Let $p\geq 3$. Let $G=\widetilde{\sf S}_n$ or $\widetilde {\sf A}_n$. Assume that $V$ is indexed by an element of ${\mathscr {H}}_p(n)$ and that $W$ is basic spin.
If $V$ and $W$ are not 1-dimensional and $V\otimes W$ is irreducible, then one of the following holds: \begin{enumerate}[-] \item $p\not=5$, $G=\widetilde {\sf A}_5$, $V\cong E^{(3,1^2)}_\pm$ and $W\cong E(\beta_5,\pm)$, in which case two of the corresponding tensor products are irreducible and isomorphic to $E((4,1),0)$, while the other two tensor products are not irreducible. \item $p=3$, $G=\widetilde {\sf A}_6$, $V\cong E^{(4,1^2)}_\pm$ and $W\cong E((3,2,1),\pm)$, in which case two of the corresponding tensor products are irreducible and isomorphic to $E((4,2),\pm)$, while the other two tensor products are not irreducible. \end{enumerate} In the exceptional cases, if $\chi_V$ and $\chi_W$ are the characters of $V$ and $W$, we have that $V\otimes W$ is irreducible if and only if $(\chi_V\chi_W)(\widetilde{(1,2,3,4,5)})=1$. \end{theor} \begin{proof} For $n\leq 12$ the theorem can be proved by looking at decomposition matrices. So we may assume that $n>12$. Let $(n-k,1^k)$ be the hook corresponding to $V$; we may assume that $k<n/2$. From \cite[Theorem 9.3]{s}, \[[S_{1^k}\otimes S((n))]=[S((n))]+\sum_{1\leq j\leq k}d[S((n-j,j))],\] where $d=1$ if $n$ is odd and $d=2$ if $n$ is even. In particular, using Lemmas \ref{LBS} and \ref{LH} and induction on $k$ if $n\equiv 0\!\mod p$, \begin{enumerate}[-] \item if $n\not\equiv 0\!\mod p$ then \[[\overline{D}_{k}\otimes D(\beta_n)]=[D(\beta_n)]+\sum_{1\leq j\leq k}d[S((n-j,j))],\] \item if $n\equiv 0\!\mod p$ and $k$ is even then \[[\overline{D}_{k}\otimes D(\beta_n)]=[D(\beta_n)]+\sum_{1\leq j\leq k/2}[S((n-2j,2j))],\] \item if $n\equiv 0\!\mod p$ and $k$ is odd then \[[\overline{D}_{k}\otimes D(\beta_n)]=\sum_{0\leq j\leq (k-1)/2}[S((n-2j-1,2j+1))].\] \end{enumerate} When $n\equiv 0\!\mod p$, $D(\beta_n)$ is a composition factor of $S((n-1,1))$ by \cite[Table IV]{Wales}. So $D(\beta_n)$ is always a composition factor of $\overline{D}_{k}\otimes D(\beta_n)$ (as supermodule).
Since $\overline{D}_{k}\otimes D(\beta_n,\delta)$ is irreducible if and only if $\overline{D}_{k}\otimes D(\beta_n,-\delta)$ is irreducible and since $\overline{D}_{k}$ is not 1-dimensional, it follows that $\overline{D}_{k}\otimes D(\beta_n,\delta)$ is not irreducible. Similarly if $k\not=(n-c)/2$ then $\overline{E}_{k}\otimes E(\beta_n,\delta')$ is not irreducible. So assume now that $k=(n-c)/2$. Note that in this case either $n$ is odd with $n\not\equiv 0\!\mod p$ or $n$ is even with $n\equiv 0\!\mod p$, so $D(\beta_n)$ is of type M and then $\delta'=\pm$. By Lemmas \ref{LBS} and \ref{LH} we have \[\dim((\overline{E}_{k})_\pm\otimes E(\beta_n,\pm))=\frac{1}{2}\binom{n-c}{(n-c)/2}2^{(n-c-2)/2}=2^{(n-c-4)/2}\binom{n-c}{(n-c)/2}.\] Let $d_j$ be the dimension of any simple spin module of $\widetilde {\sf A}_n$ indexed by $(n-j,j)$ in characteristic 0. For $1\leq j\leq k$ we have \[d_j=\frac{1}{2}\dim S((n-j,j))=2^{(n-c-2)/2}\frac{n-2j}{n-j}\binom{n-1}{j}.\] Note that if $(\overline{E}_k)_\pm\otimes E(\beta_n,\pm)$ is irreducible then it is not isomorphic to $E(\beta_n,\pm)$ (since $(\overline{E}_k)_\pm$ is not 1-dimensional). In order to prove that $(\overline{E}_k)_\pm\otimes E(\beta_n,\pm)$ is not irreducible it is then enough to prove that $\dim((\overline{E}_{k})_\pm\otimes E(\beta_n,\pm))>d_j$ for any $1\leq j\leq k$. If $n$ is even note that $2\binom{n-2}{(n-2)/2}>\binom{n-1}{(n-2)/2}$. So it is enough to prove that \[\binom{n-1}{\lfloor(n-1)/2\rfloor}=\binom{n-1}{(n-c)/2}>2^c\frac{n-2j}{n-j}\binom{n-1}{j}\] for any $1\leq j\leq k=\lfloor(n-1)/2\rfloor$. If $j>3n/7$ then $4(n-2j)/(n-j)<1$ and so the above inequality clearly holds. So we may assume that $j\leq 3n/7$. In this case it is enough to prove that \[\frac{\binom{n-1}{\lfloor(n-1)/2\rfloor}}{\binom{n-1}{j}}=\prod_{i=j+1}^{\lfloor (n-1)/2\rfloor}\frac{n-i}{i}>4.\] It is enough to prove this for $j=\lfloor 3n/7\rfloor$.
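The elementary numerical estimates appearing in this dimension comparison are easy to verify by computer. The following Python sketch is our own sanity check, not part of the proof; the last loop rewrites the comparison $\dim((\overline{E}_{k})_\pm\otimes E(\beta_n,\pm))>d_j$ in the equivalent integer form $(n-j)\binom{n-c}{(n-c)/2}>2(n-2j)\binom{n-1}{j}$, obtained from the two displayed dimension formulas, and tests it on a sample range:

```python
from math import comb

# 2*C(n-2,(n-2)/2) > C(n-1,(n-2)/2) for even n.
for m in range(14, 160, 2):
    assert 2 * comb(m - 2, (m - 2) // 2) > comb(m - 1, (m - 2) // 2)

# 4(n-2j)/(n-j) < 1 whenever j > 3n/7 (with j <= n/2).
for m in range(14, 160):
    for t in range(3 * m // 7 + 1, m // 2 + 1):
        assert 4 * (m - 2 * t) / (m - t) < 1

# prod_{i=j+1}^{floor((n-1)/2)} (n-i)/i > 4 at j = floor(3n/7), here for n = 152.
n = 152
j = 3 * n // 7
prod = 1.0
for i in range(j + 1, (n - 1) // 2 + 1):
    prod *= (n - i) / i
assert prod > 4

# Direct check dim((E_k)_pm (x) E(beta_n,pm)) > d_j, rewritten in integer form
# (n-j)*C(n-c,(n-c)/2) > 2*(n-2j)*C(n-1,j), for 21 <= n <= 60 and 1 <= j <= (n-c)/2.
for m in range(21, 61):
    c = 1 if m % 2 == 1 else 2
    for t in range(1, (m - c) // 2 + 1):
        assert (m - t) * comb(m - c, (m - c) // 2) > 2 * (m - 2 * t) * comb(m - 1, t)
```

The script passes silently; in particular the product at $n=152$ comes out just above $4$, which matches how tight the estimate in the proof is.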
If $n\geq 152$ then \[\prod_{i=\lfloor 3n/7\rfloor+1}^{\lfloor (n-1)/2\rfloor}\frac{n-i}{i}\geq \prod_{a=1}^{\lfloor (n-1)/2\rfloor-\lfloor 3n/7\rfloor}\frac{4/7n-a}{3/7n+a}\geq \prod_{a=1}^9\frac{4/7n-a}{3/7n+a}>4.\] Using the above formulas, it can be checked that for $n\leq 151$ and $j\leq 3n/7$ we have $\dim((\overline{E}_{k})_\pm\otimes E(\beta_n,\pm))>d_j$, unless possibly if $n\leq 20$ is even with $n\equiv 0\!\mod p$. In these cases notice that it is enough to prove that $\dim((\overline{E}_{k})_\pm\otimes E(\beta_n,\pm))>d_j$ for $j$ odd if $n\equiv 0\!\mod 4$ or for $j$ even if $n\equiv 2\!\mod 4$, which again can be checked using the above formulas since we are assuming $n>12$. \end{proof} \subsection{Tensor products of basic spin and two-row partitions} \begin{theor}\label{T011119} Let $p\geq 3$ and $G\in\{\widetilde{\sf S}_n,\widetilde {\sf A}_n\}$. Let $V$ be a simple non-spin module indexed by $\lambda\in{\mathscr {P}}_p(n)$ with $\min\{h(\lambda),h(\lambda^{\tt M})\}=2$ and $W$ be basic spin. If $V\otimes W$ is irreducible then $\lambda$ is JS and $n\not\equiv 0,\pm 2\!\mod p$. Further in this case: \begin{enumerate}[-] \item if $G=\widetilde{\sf S}_n$ and $n$ is even then $V\otimes W\cong D(\mu,0)$ is irreducible with $\mu=\beta_{\lambda_1}+\beta_{\lambda_2}$ if $\lambda_1\not=\lambda_2$ or $\mu=\beta_{n/2+1}\cup\beta_{n/2-1}$ if $\lambda_1=\lambda_2$, \item if $G=\widetilde{\sf S}_n$ and $n$ is odd then $V\otimes W$ is not irreducible, \item if $G=\widetilde {\sf A}_n$ and $n$ is even then $V\otimes W$ is not irreducible, \item if $G=\widetilde {\sf A}_n$ and $n$ is odd then $V\otimes W\cong E(\mu,0)$ is irreducible with $\mu=\beta_{\lambda_1}+\beta_{\lambda_2}$ if $\lambda_1\not=\lambda_2+p-2$ or $\mu=\beta_{\lambda_1}\cup\beta_{\lambda_2}$ if $\lambda_1=\lambda_2+p-2$. \end{enumerate} \end{theor} \begin{proof} For $n\leq 9$ the theorem can be proved by looking at decomposition matrices. So assume that $n\geq 10$.
Note that $V\cong D^\lambda{\downarrow}_G$ by \cite[Lemma 1.8]{ks2}. Further we may assume that $h(\lambda)=2$. In view of Theorem \ref{T2} we may also assume that $\lambda_2\geq 2$ (since $(n-1,1)$ is JS if and only if $n\equiv 0\!\mod p$). Note that in this case $\lambda\not\in{\mathscr {H}}_p(n)$ (the case $p=3$ and $\lambda=(n)^{\tt M}$ is excluded by assumption). It is easy to see that $\lambda$ is JS if and only if $\lambda_1=\lambda_2$ or $\lambda_1-\lambda_2\equiv -2\!\mod p$. Let $W'=D(\beta_n)$ or $E(\beta_n)$ (depending on $G$). Further from Lemmas \ref{L22} and \ref{L33} we have that $\overline{D}_0\oplus \overline{D}_2\subseteq\mathrm{End}_F(W)$ and $\overline{D}_0\oplus \overline{D}_1\oplus \overline{D}_2\oplus\overline{D}_3\subseteq\mathrm{Hom}_F(W',W)$. If $\lambda$ is not JS then we have that $\overline{D}_0\oplus \overline{D}_2$ or $\overline{D}_0\oplus\overline{D}_1\oplus \overline{D}_3$ is contained in $\mathrm{End}_F(V)$ from Lemmas \ref{L39}, \ref{L41} and \ref{L42}. It follows that \[\dim\mathrm{Hom}_G(\mathrm{End}_F(V),\mathrm{End}_F(W))\geq 2\] or \[\dim\mathrm{Hom}_G(\mathrm{End}_F(V),\mathrm{Hom}_F(W',W))\geq 3.\] So $V\otimes W$ is not irreducible (in the second case by Lemma \ref{L21a}). So assume now that $\lambda$ is JS. In view of Lemmas \ref{LH} and \ref{L3} we have that $\overline{D}_0\oplus \overline{D}_2$ or $\overline{D}_0\oplus\overline{D}_3$ is contained in $\mathrm{End}_F(V)$. 
So \[\dim\mathrm{Hom}_G(\mathrm{End}_F(V),\mathrm{Hom}_F(W',W))\geq 2.\] So by Lemmas \ref{L21a} and \ref{L50} if $G=\widetilde{\sf S}_n$ then $D^\lambda\otimes D(\beta_n,\delta)$ is irreducible if and only if $D(\beta_n)$ is of type Q, $D^\lambda\otimes D(\beta_n)$ has a composition factor of type M and \[\dim\mathrm{Hom}_{\widetilde{\sf S}_n}(\mathrm{End}_F(D^\lambda),\mathrm{Hom}_F(D(\beta_n),D(\beta_n,\delta)))=2.\] Similarly if $G=\widetilde {\sf A}_n$ then $E^\lambda\otimes E(\beta_n,\delta')$ is irreducible if and only if $D(\beta_n)$ is of type M, $D^\lambda\otimes D(\beta_n)$ has a composition factor of type Q and \[\dim\mathrm{Hom}_{\widetilde {\sf A}_n}(\mathrm{End}_F(E^\lambda),\mathrm{Hom}_F(E(\beta_n),E(\beta_n,\delta')))=2.\] On the other hand if $D^\lambda\otimes D(\beta_n)$ has a composition factor of the same type as $D(\beta_n)$ then $V\otimes W$ is not irreducible. If $\lambda_1=\lambda_2$ then $D^\lambda$ and $D^{(\lambda_1+1,\lambda_1-1)}$ are in different blocks and so by \cite[Theorem 9.3]{s} \[[D^\lambda\otimes D(\beta_n)]=c[S((\lambda_1+1,\lambda_1-1))]+\sum_{j<\lambda_1-1}c_j[S((n-j,j))]\] with $c>0$. In this case let $\nu:=(n/2+1,n/2-1)=(\lambda_1+1,\lambda_2-1)$. If $\lambda_1>\lambda_2$ then \[[D^\lambda\otimes D(\beta_n)]=c[S(\lambda)]+\sum_{j<\lambda_2}c_j[S((n-j,j))]\] with $c>0$. In this case let $\nu:=\lambda$. Note that $\lambda_1\geq \lambda_2+p-2$. Further if $p=3$ then by assumption $\lambda_1-\lambda_2\geq 4$. From \cite[Theorems 1.2, 1.3]{m4} there exists a composition factor $D(\mu)$ of $S(\nu)$ which is not a composition factor of $S((\pi_1,\pi_2))$ for $(\pi_1,\pi_2)\in{\mathscr {RP}}_0(n)$ with $\pi_1>\nu_1$. Then $D(\mu)$ is a composition factor of $D^\lambda\otimes D(\beta_n)$. {\bf Case 1:} $n\equiv 0\!\mod p$. In this case any composition factor of $S((n-j,j))$ with $j< n/2$ is in the same block as $D(\beta_n)$, so they have the same type and then $V\otimes W$ is not irreducible in this case.
{\bf Case 2:} $n\equiv \pm 2\!\mod p$. In this case it can be checked that if $\lambda_1=\lambda_2$ then one part of $(n/2+1,n/2-1)$ is divisible by $p$, while if $\lambda_1>\lambda_2$ then one part of $\lambda$ is divisible by $p$ (since in this case $\lambda_1-\lambda_2\equiv p-2\!\mod p$). So $S(\nu)$ is in the same block as $S((n))$ and then again $V\otimes W$ is not irreducible. {\bf Case 3:} $n\not\equiv 0,\pm 2\!\mod p$. In this case $p\geq 5$ and so by Lemmas \ref{LH}, \ref{L20}, \ref{L44} and \ref{L22} \begin{align*} \dim\mathrm{Hom}_{\widetilde{\sf S}_n}(\mathrm{End}_F(D^\lambda),\mathrm{Hom}_F(D(\beta_n),D(\beta_n,\delta)))&=2,\\ \dim\mathrm{Hom}_{\widetilde {\sf A}_n}(\mathrm{End}_F(E^\lambda),\mathrm{Hom}_F(E(\beta_n),E(\beta_n,\delta')))&=2. \end{align*} Further if $\lambda_1=\lambda_2$ then $p\nmid n/2\pm 1$, while if $\lambda_1>\lambda_2$ then $p\nmid \lambda_1,\lambda_2$. Since $n\not\equiv 0\!\mod p$ it can then be easily checked that $D(\mu)$ and $D(\beta_n)$ are of different type. So $D^\lambda\otimes D(\beta_n,\delta)$ is irreducible if and only if $n$ is even and in this case $D^\lambda\otimes D(\beta_n,\delta)\cong D(\mu,0)$. Similarly $E^\lambda\otimes E(\beta_n,\delta')$ is irreducible if and only if $n$ is odd and in this case $E^\lambda\otimes E(\beta_n,\delta')\cong E(\mu,0)$. The theorem then follows, using \cite[Theorems 1.2, 1.3]{m4} to identify $\mu$. \end{proof} \subsection{Tensor products of basic spin and three-row partitions} \begin{theor}\label{tbs3r} Let $p=3$ and $G\in\{\widetilde{\sf S}_n,\widetilde {\sf A}_n\}$. Let $\lambda\in{\mathscr {P}}_p(n)\setminus{\mathscr {H}}_3(n)$ with $\min\{h(\lambda),h(\lambda^{\tt M})\}=3$, $V$ be a simple non-spin module indexed by $\lambda$ and $W$ be basic spin. Then $V\otimes W$ is not irreducible. \end{theor} \begin{proof} We may assume that $h(\lambda)=3$.
Since $\lambda\not\in{\mathscr {H}}_3(n)$ we then have that $\lambda=(\lambda_1,\lambda_2,\lambda_3)$ with $\lambda_1\geq\lambda_2+2$, $\lambda_2\geq\lambda_3+2$ and $\lambda_3\geq 1$. In particular $n\geq 9$. Further it is easy to check that $\lambda\not=\lambda^{\tt M}$, so $V\cong D^\lambda{\downarrow}_G$. Let $W'=D(\beta_n)$ if $G=\widetilde{\sf S}_n$ or $W'=E(\beta_n)$ if $G=\widetilde {\sf A}_n$. Then \[\overline{D}_0\oplus \overline{D}_1\oplus\overline{D}_2\oplus\overline{D}_3\subseteq\mathrm{Hom}_F(W',W)\] by Lemma \ref{L22}. If $\lambda$ is not JS then \[\overline{D}_0\oplus \overline{D}_1\oplus\overline{D}_k\subseteq\mathrm{End}_F(V)\] with $2\leq k\leq 3$ from Lemmas \ref{L39} and \ref{L56}. So in this case $V\otimes W$ is not irreducible by Lemma \ref{L21a}. So we may assume that $\lambda$ is JS. So $\lambda_1-\lambda_2,\lambda_2-\lambda_3\equiv 1\!\mod 3$ and then we have $\lambda_1\geq\lambda_2+4$ and $\lambda_2\geq\lambda_3+4$. From Lemmas \ref{LH} and \ref{L3} we have that $\overline{D}_2$ or $\overline{D}_3$ is contained in $\mathrm{End}_F(V)$. Since we always have $\overline{D}_0\subseteq\mathrm{End}_F(V)$, by Lemmas \ref{L21a} and \ref{L50}, to prove that $V\otimes W$ is not irreducible it is enough to prove that $D^\lambda\otimes D(\beta_n)$ has a composition factor of the same type as $D(\beta_n)$. Note that by Lemma \ref{LBS} and \cite[Theorem 9.3]{s} we have that \[[D^\lambda\otimes D(\beta_n)]=c[S(\lambda)]+\sum_{\mu\in{\mathscr {RP}}_0(n):\mu\rhd\lambda}c_\mu[S(\mu)]\] with $c>0$. From Lemmas \ref{L54} and \ref{L55} we then have that if $\nu=\lambda^R=\beta_{\lambda_1}+\beta_{\lambda_2}+\beta_{\lambda_3}$ then $D(\nu)$ is a composition factor of $D^\lambda\otimes D(\beta_n)$ (since $\lambda_1\geq\lambda_2+4$ and $\lambda_2\geq\lambda_3+4$). From $\lambda_1-\lambda_2,\lambda_2-\lambda_3\equiv 1\!\mod 3$ we have that $n\equiv 0\!\mod 3$ and one of $\lambda_1$, $\lambda_2$ and $\lambda_3$ is divisible by 3.
In particular $S(\lambda)$ and $S((n))$ are in the same block and so $D(\nu)$ and $D(\beta_n)$ are of the same type. So $V\otimes W$ is not irreducible. \end{proof} \subsection{Tensor products of basic and second basic spin}\label{sbssbs} \begin{theor}\label{L30} Let $p\geq 3$, $n\geq 6$ and $G=\widetilde{\sf S}_n$ or $\widetilde {\sf A}_n$. Assume that $V$ is second basic spin and that $W$ is basic spin. Then $V\otimes W$ is not irreducible. \end{theor} \begin{proof} From Lemma \ref{LBS} and \cite[Table IV]{Wales} we have that any composition factor of $V\otimes W$ is a composition factor of the reduction modulo $p$ of $S((n-1,1))\otimes S((n))$. So from \cite[Theorem 9.3]{s}, any composition factor of $V\otimes W$ is a composition factor of a Specht module of the form $S^{(n-k,1^k)}$ with $0\leq k\leq n-1$ or $S^{(n-k,2,1^{k-2})}$ with $2\leq k\leq n-2$. Notice also that by \cite[Tables III and IV]{Wales} \[\dim V\otimes W\geq 2^{n-4}(n-4).\] It can be computed that \[\dim S^{(n-k,1^k)}=\binom{n-1}{k},\hspace{24pt}\dim S^{(n-k,2,1^{k-2})}=\binom{n}{k}\frac{(n-k-1)(k-1)}{n-1}.\] Since \[\frac{(n-k-1)(k-1)}{n-1}\leq \frac{(n-2)^2}{4(n-1)}\leq\frac{n-2}{4},\] it is enough to prove that \[\binom{n}{\lfloor n/2\rfloor}\frac{n-2}{4}<2^{n-4}(n-4),\] that is that \[\frac{\binom{n}{\lfloor n/2\rfloor}(n-2)}{2^{n-2}(n-4)}<1.\] Notice that $(n-2)/(n-4)$ is decreasing in $n$, as is $\binom{n}{\lfloor n/2\rfloor}/2^{n-2}$, since \begin{align*} \binom{n}{\lfloor n/2\rfloor}/2^{n-2}&=\binom{n-1}{\lfloor n/2\rfloor}/2^{n-2}+\binom{n-1}{\lfloor n/2\rfloor-1}/2^{n-2}\\ &\leq \binom{n-1}{\lfloor (n-1)/2\rfloor}/2^{(n-1)-2}. \end{align*} Since $\binom{15}{7}\cdot 13/(2^{13}\cdot 11)<1$, the theorem holds for $n\geq 15$.
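The two characteristic-0 dimension formulas and the closing numerical estimate can be double-checked independently via the hook length formula; a short Python sketch (the helper \texttt{hook\_dim} is our own generic implementation, not notation from the paper):

```python
from math import comb, factorial

def hook_dim(la):
    """Dimension of the Specht module S^la in characteristic 0 (hook length formula)."""
    n = sum(la)
    conj = [sum(1 for r in la if r > j) for j in range(la[0])]  # conjugate partition
    prod = 1
    for i, row in enumerate(la):
        for j in range(row):
            prod *= (row - j) + (conj[j] - i) - 1  # arm + leg + 1
    return factorial(n) // prod

# dim S^{(n-k,1^k)} = C(n-1,k):
for n in range(6, 20):
    for k in range(0, n):
        assert hook_dim((n - k,) + (1,) * k) == comb(n - 1, k)

# dim S^{(n-k,2,1^{k-2})} = C(n,k)(n-k-1)(k-1)/(n-1), checked in integer form:
for n in range(6, 20):
    for k in range(2, n - 1):
        la = (n - k, 2) + (1,) * (k - 2)
        assert hook_dim(la) * (n - 1) == comb(n, k) * (n - k - 1) * (k - 1)

# C(15,7)*13/(2^13*11) < 1, and the bounding ratio decreases in n:
assert comb(15, 7) * 13 < 2 ** 13 * 11
vals = [comb(n, n // 2) * (n - 2) / (2 ** (n - 2) * (n - 4)) for n in range(15, 60)]
assert all(a > b for a, b in zip(vals, vals[1:]))
```

All assertions pass; for instance \texttt{hook\_dim((7, 2, 1))} returns $160$, matching the formula at $n=10$, $k=3$.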
For $6\leq n\leq 14$ the theorem can be checked by looking at decomposition matrices for ${\sf S}_n$ and ${\sf A}_n$ to find the dimension of composition factors of the reduction modulo $p$ of the modules $S^{(n-k,1^k)}$ and $S^{(n-k,2,1^{k-2})}$ as well as exact formulas for $\dim V\otimes W$ coming from \cite[Tables III and IV]{Wales}. \end{proof} \section{Proofs of Theorems \ref{TS} and \ref{TA}}\label{s7} By \cite{bk,bk2,m2,m3,z1} we may assume that $W$ is a spin representation. Further we may assume that neither $V$ nor $W$ is 1-dimensional. For $n\leq 12$ the theorems can be proved by looking at decomposition matrices (and using Lemma \ref{L54} to identify modular spin representations). So assume that $n\geq 13$. It can then be checked (using Lemma \ref{L20} and \cite[Lemma 2.2]{bkz} to help check some cases) that if $\alpha$ is one of $(n-3,3)$, $(n-3,1^3)$, $(n-5,1^5)$ or $(n-5,3,1^2)$ (the last one only for $p=3$) and $\alpha\in{\mathscr {P}}_p(n)$, then $\alpha>\alpha^{\tt M}$. Let $G\in\{\widetilde{\sf S}_n,\widetilde {\sf A}_n\}$ depending on which theorem we are considering. Since $n\geq 13$ we have from \cite[Lemma 1.8]{ks2} that $(n-k,k)\not=(n-k,k)^{\tt M}$ for any $0\leq k\leq n/2$. Further the modules $\overline{E}_k$ for $0\leq k\leq 5$ are defined and they are simple and pairwise non-isomorphic. {\bf Case 1:} $p\geq 5$ and neither $V$ nor $W$ is a basic spin representation or a natural representation (a non-spin representation indexed by $(n-1,1)$ or $(n-1,1)^{\tt M}$). Parts of this case could be proved using results from \cite{bk5,kt}. However the cases where $V$ is a non-spin representation indexed by a two-row JS partition (or its Mullineux dual), or where $G=\widetilde {\sf A}_n$ and $V$ is non-spin and indexed by a Mullineux-fixed partition, are not covered by results from \cite{bk5,kt}.
By Lemmas \ref{LM3S} and \ref{L4} there exist $\phi_3\in\mathrm{Hom}_G(M_3,\mathrm{End}_F(W))$ and $\phi_{1^3}\in\mathrm{Hom}_G(M_{1^3},\mathrm{End}_F(W))$ which do not vanish on $S_3$ and $S_{1^3}$ respectively. Further from Lemmas \ref{LM3}, \ref{LM3S} and \ref{L3} there exists $\psi_3\in\mathrm{Hom}_G(M_3,\mathrm{End}_F(V))$ or $\psi_{1^3}\in\mathrm{Hom}_G(M_{1^3},\mathrm{End}_F(V))$ which does not vanish on $S_3$ or $S_{1^3}$. Since $M_0=S_0$ is the trivial module, there also always exist non-zero $\phi_0\in\mathrm{Hom}_G(M_0,\mathrm{End}_F(W))$ and $\psi_0\in\mathrm{Hom}_G(M_0,\mathrm{End}_F(V))$. We then have from Lemma \ref{l15} that \[\dim\mathrm{End}_G(V\otimes W)=\dim\mathrm{Hom}_G(\mathrm{End}_F(V),\mathrm{End}_F(W))\geq 2\] and so $V\otimes W$ is not irreducible. {\bf Case 2:} $p=3$ and neither $V$ nor $W$ is a basic spin representation or a natural representation. If $n\equiv 2\!\mod 3$ then further $V$ is not a non-spin representation indexed by $(n-2,2)$ or $(n-2,2)^{\tt M}$. This case is handled similarly to the previous one, using Lemmas \ref{LM3}, \ref{LM3S}, \ref{L52}, \ref{L51} or \ref{L7a} (so using $M_{3,1^2}$ instead of $M_{1^3}$). {\bf Case 3:} $p=3$, $n\equiv 2\!\mod 3$, $V$ is a non-spin representation indexed by $(n-2,2)$ or $(n-2,2)^{\tt M}$, and $W$ is not basic spin. We have that $(n-2,2)\not=(n-2,2)^{\tt M}$. So (up to tensoring with $\mathbf{\mathrm{sgn}}$) $V\cong D^{(n-2,2)}$ or $E^{(n-2,2)}$. From \cite[Corollary 3.9]{bk5} and Lemma \ref{Mk} there exists $\psi_2\in\mathrm{Hom}_G(M_2,\mathrm{End}_F(V))$ which does not vanish on $S_2$. From \cite[Lemma 3.7]{p} we have that, for $p=3$, any JS(0) partition in ${\mathscr {RP}}_3(m)$ is of the form $\beta_{\mu_1}+\ldots+\beta_{\mu_k}$ with $\mu_j\equiv 0\!\mod 3$ for $j<k$ and $\mu_k=1$ or $\mu_k\equiv 0\!\mod 3$. Since $n\equiv 2\!\mod 3$ there is then no JS(0) partition in ${\mathscr {RP}}_3(n)$. Let $\nu$ be the partition indexing $W$. We will now consider $G=\widetilde{\sf S}_n$, the case $G=\widetilde {\sf A}_n$ being similar.
By Lemma \ref{L190219_2}, either there exists $\phi_2\in\mathrm{Hom}_{\widetilde{\sf S}_n}(M_2,\mathrm{End}_F(W))$ which does not vanish on $S_2$, or $W\cong D(\nu,\pm)$ and there exist $\phi_2,\phi_2'\in\mathrm{Hom}_{\widetilde{\sf S}_n}(M_2,\mathrm{Hom}_F(D(\nu,\pm),D(\nu,\mp)))$ whose restrictions to $S_2$ are linearly independent. In the first case we can conclude as in Case 1. In the second case we have by Lemma \ref{l15} that \begin{align*} &\dim\mathrm{Hom}_{\widetilde{\sf S}_n}(V\otimes D(\nu,\pm),V\otimes D(\nu,\mp))\\ &=\dim\mathrm{Hom}_{\widetilde{\sf S}_n}(\mathrm{End}_F(V),\mathrm{Hom}_F(D(\nu,\pm),D(\nu,\mp)))\\ &\geq 2. \end{align*} Since $D(\nu,+)$ and $D(\nu,-)$ have the same dimension, this contradicts $V\otimes W$ being irreducible. {\bf Case 4:} $V$ is a natural module. Up to tensoring with $\mathbf{\mathrm{sgn}}$ we have that $V\cong D^{(n-1,1)}$ or $E^{(n-1,1)}$. The theorems then follow from Theorem \ref{T2}. {\bf Case 5:} $V$ and $W$ are basic spin. Let $A:=D(\beta_n)$ if $G=\widetilde{\sf S}_n$ or $A:=E(\beta_n)$ if $G=\widetilde {\sf A}_n$. Then by Lemma \ref{L22} \[\dim\mathrm{Hom}_G(A\otimes A,V\otimes W)=\dim\mathrm{Hom}_G(\mathrm{Hom}_F(V,A),\mathrm{Hom}_F(A,W))\geq 5.\] Since $\dim A\leq 2\dim V$ and $\dim V=\dim W$, it follows that $V\otimes W$ is not irreducible. {\bf Case 6:} $W$ is basic spin and $V$ is either a non-spin representation indexed by $\lambda\not\in{\mathscr {H}}_p(n)$ with $h(\lambda),h(\lambda^{\tt M})\geq 3+\delta_{p=3}$ or a spin representation indexed by $\mu\not=\beta_n$ with $\mu_1\geq 5$. From Lemmas \ref{LH} and \ref{L20} we have that $[S_{1^3}]=[\overline{D}_3]+\delta_{p\mid n}[\overline{D}_2]$ and $[S_{1^5}]=[\overline{D}_5]+\delta_{p\mid n}[\overline{D}_4]$. From Lemmas \ref{L3} and \ref{L4} there exists $0\not=\phi_3\in\mathrm{Hom}_G(S_{1^3},\mathrm{End}_F(V))$ and from Lemmas \ref{L36} and \ref{L14} there exists $0\not=\phi_5\in\mathrm{Hom}_G(S_{1^5},\mathrm{End}_F(V))$.
In particular there exist $0=a<b<c\leq 5$ with $\overline{D}_k{\downarrow}_G\subseteq\mathrm{End}_F(V)$ for $k\in\{a,b,c\}$. If again $A:=D(\beta_n)$ if $G=\widetilde{\sf S}_n$ or $A:=E(\beta_n)$ if $G=\widetilde {\sf A}_n$ then by Lemma \ref{L22} \[\dim\mathrm{Hom}_G(V\otimes A,V\otimes W)=\dim\mathrm{Hom}_G(\mathrm{End}_F(V),\mathrm{Hom}_F(A,W))\geq 3\] and so $V\otimes W$ is not irreducible by Lemma \ref{L21a}. {\bf Case 7:} $W$ is basic spin and $V$ is a non-spin representation indexed by $\lambda\in{\mathscr {H}}_p(n)$. In this case the theorems hold by Theorem \ref{THB}. {\bf Case 8:} $W$ is basic spin and $V$ is a non-spin representation indexed by $\lambda$ with $h(\lambda),h(\lambda^{\tt M})=2$. This case is covered by Theorem \ref{T011119}. {\bf Case 9:} $p=3$, $W$ is basic spin and $V$ is a non-spin representation indexed by $\lambda\not\in{\mathscr {H}}_3(n)$ with $h(\lambda),h(\lambda^{\tt M})=3$. In this case $V\otimes W$ is not irreducible by Theorem \ref{tbs3r}. {\bf Case 10:} $W$ is basic spin and $V$ is a spin representation indexed by $\mu\not=\beta_n$ with $\mu_1\leq 4$. Note that in this case $p=3$ since $n\geq 13$. Since $\mu\not=\beta_n$ we have that $\mu=(4,\beta_{n-4})=\beta_{n-1}+\beta_1$. In view of Lemma \ref{L55} and \cite[Table IV]{Wales} we have that $V$ is second basic spin. So the theorems hold by Theorem \ref{L30}. \section*{Acknowledgements} The author thanks Alexander Kleshchev for some comments on the paper.
\section{INTRODUCTION} The detailed mathematical modeling of complex dynamical systems often yields models of thousands of degrees of freedom. Since the numerical simulation and design for such large-scale systems is computatio\-nally too demanding, reduced order models that accurately approximate the original systems with substantially less effort are strongly aimed. Nonlinear model order reduction has been widely studied over the past two decades due to the ever increasing interest in the efficient, numerical analysis of large-scale \emph{nonlinear} dynamical systems. In this regard, \emph{simulation-based} reduction techniques such as Proper Orthogonal Decomposition (POD) \cite{moore1981principal, kunisch1999control}, Trajectory Piecewise Linear (TPWL) \cite{rewienski2003trajectory}, Empirical Gramians \cite{lall2002subspace}, Balanced-POD \cite{willcox2002balanced} and Reduced Basis methods \cite{haasdonk2017reduced} have established as standard approaches. \emph{System-theoretic} reduction procedures such as nonlinear ba\-lan\-ced truncation \cite{scherpen1993balancing} and Krylov subspace methods for special nonlinear system classes \cite{breiten2013interpolatory} have been also studied. \\ Recently, the concept of moment matching and Krylov subspaces has been developed and carried over to nonlinear systems \cite{astolfi2010model, astolfi2010steady, ionescu2013families, ionescu2016nonlinear, scarciotti2016model, scarciotti2017data}. From a system-theoretical perspective, this represents a promising and interesting a\-pproach towards a nonlinear model reduction technique, which does not rely on the numerical simulation of the original full model to construct the reduced model. 
The extension of the well-known concept of moment matching to nonlinear systems was initiated by Astolfi \cite{astolfi2010model} a few years ago based on the theory of the steady-state res\-ponse of nonlinear systems and the techniques of nonlinear output regulation \cite{krener1992construction}, \cite[Ch. 8]{isidori1995nonlinear}, \cite{huang2004nonlinear}. Since then, moment matching for linear and, especially, nonlinear systems has been further developed in several pu\-bli\-cations. For instance, the equivalence between projection-based and parametrized, non-projective families of reduced models achieving moment matching is presented in \cite{astolfi2010steady}. Therein, the time domain interpretation of \emph{output Krylov subspace}-based moment matching is also established for linear systems using the dual Sylvester equation. These findings are transferred to the nonlinear case in \cite{ionescu2013families} and further developed in \cite{ionescu2016nonlinear} to provide a two-sided, nonlinear moment matching theory. More recently, the steady-state interpretation of moments has also been extended to linear and nonlinear time-delay systems \cite{scarciotti2016model}. In addition, the online estimation of moments of linear and nonlinear systems from input/output data has recently been proposed in \cite{scarciotti2017data}. This recent work aims at the data-driven, low-order identification of an unknown nonlinear system, by solving a recursive, moving window, least-squares estimation problem using input/output snapshot measurements. Hereby, a polynomial expansion ansatz with user-defined basis functions is used for the reduced output mapping. Furthermore, the concept of parametrized families of reduced models is employed to enforce additional properties on the reduced order model, which approximately matches the nonlinear, estimated moments. 
From a practical point of view, \cite{scarciotti2017data} represents the first milestone towards a feasible method for nonlinear moment matching, since the proposed algorithm does not involve the solution of a partial differential equation, but rather aims at estimating the moment of a nonlinear system from input/output data. In fact, the previously mentioned papers \cite{astolfi2010model,ionescu2013families,ionescu2016nonlinear,scarciotti2016model} face the same difficulty, namely the computation of the solution of a nonlinear Sylvester-like partial differential equation. This paper de\-ve\-lops the concept of moment matching for nonlinear systems presented in \cite{astolfi2010model} towards practical application: Inspired by the POD community, which usually employs a linear projection and time-snapshots to reduce nonlinear systems, we propose some simplifications to avoid the Sylvester-like partial differential equation and achieve a feasible, numerical algorithm for model reduction by nonlinear moment matching. The proposed approach is actually linked to the technique presented in \cite{scarciotti2017data} in the sense that both methods \emph{approximately} match nonlinear moments. However, the goals of both techniques are different, since \cite{scarciotti2017data} focuses on the data-driven, \emph{low-order identification} of an unknown nonlinear system, whereas this paper deals with the \emph{reduction} of a known nonlinear system. For this reason, the proposed algorithms are also different. Algorithm 2 in \cite{scarciotti2017data} requires the solution of a least-squares problem using input/output data, whereas the practicable algorithm presented here relies on the solution of nonlinear systems of equations using the explicitly known nonlinear system. The rest of the paper is organized as follows. In Section~\ref{sec:LMM} the theory of model reduction by moment matching for linear systems is first reviewed. 
Herein, the time domain interpretation of moment matching from \cite{astolfi2010model,astolfi2010steady} as the interpolation of the steady-state response of the output of the system when excited by exponential inputs plays a crucial role in transferring this theory to nonlinear systems. After comprehensively explaining the generalization given in \cite{astolfi2010model} and providing some valuable, illuminating insights in Section \ref{sec:nlmm-PDE}, some step-by-step simplifications are performed in Section~\ref{sec:simulation-free-NLMM} towards a \emph{practicable}, \emph{simulation-free} algorithm for nonlinear moment matching. Finally, a numerical example is provided, which illustrates the effectiveness of the proposed method. \textbf{Notation:} $\mathbb{R}$ is the set of real numbers and $\mathbb{C}$ is the set of complex numbers. $\uplambda(\boldsymbol{E}^{-1} \boldsymbol{A})$ denotes the spectrum of the matrix $\boldsymbol{E}^{-1} \boldsymbol{A} \in \mathbb{R}^{n \times n}$ and $\emptyset$ represents the empty set. Finally, the range of a matrix $\boldsymbol{V}$ is denoted by $\mathrm{Ran}(\boldsymbol{V})$. \section{Moment Matching for linear systems} \label{sec:LMM} Consider a large-scale, linear time-invariant (LTI), asymptotically stable, multiple-input multiple-output (MIMO) state-space model of the form \vspace{-0.2em} \begin{equation} \label{eq:linear-FOM} \boldsymbol{E} \, \dot{\boldsymbol{x}}(t) = \boldsymbol{A} \, \boldsymbol{x}(t) + \boldsymbol{B} \, \boldsymbol{u}(t), \quad \boldsymbol{y}(t) = \boldsymbol{C} \, \boldsymbol{x}(t), \end{equation} where $\boldsymbol{E} \in \mathbb{R}^{n \times n}$ with $\det(\boldsymbol{E}) \neq 0$ is the descriptor matrix, $\boldsymbol{A} \in \mathbb{R}^{n \times n}$ is the system matrix, $\boldsymbol{B} \in \mathbb{R}^{n \times m}$ and $\boldsymbol{C} \in \mathbb{R}^{p \times n}$ are the input and output matrices, and $\boldsymbol{x}(t) \in \mathbb{R}^n$, $\boldsymbol{u}(t) \in \mathbb{R}^m$, $\boldsymbol{y}(t) \in \mathbb{R}^p$ denote the state, inputs and outputs of the system, respectively. 
The input-output behavior is characterized in the frequency domain by the rational transfer function \vspace{-0.2em} \begin{equation} \boldsymbol{G}(s) = \boldsymbol{C}(s \boldsymbol{E} - \boldsymbol{A})^{-1} \boldsymbol{B} \ \ \in \, \mathbb{C}^{p \times m}. \end{equation} The goal of model order reduction is to approximate the full order model (FOM) \eqref{eq:linear-FOM} by a reduced order model (ROM) \vspace{-0.2em} \begin{equation} \label{eq:linear-ROM} \boldsymbol{E}_{\mathrm{r}} \, \dot{\boldsymbol{x}}_{\mathrm{r}}(t) = \boldsymbol{A}_{\mathrm{r}} \, \boldsymbol{x}_{\mathrm{r}}(t) + \boldsymbol{B}_{\mathrm{r}} \, \boldsymbol{u}(t), \quad \boldsymbol{y}_{\mathrm{r}}(t) = \boldsymbol{C}_{\mathrm{r}} \, \boldsymbol{x}_{\mathrm{r}}(t), \end{equation} of much smaller dimension $r \ll n$ with $\boldsymbol{E}_{\mathrm{r}} \!=\! \boldsymbol{W}^{\mathsf{T}} \boldsymbol{E} \boldsymbol{V}$, $\boldsymbol{A}_{\mathrm{r}} \!=\! \boldsymbol{W}^{\mathsf{T}} \boldsymbol{A} \boldsymbol{V}$, $\boldsymbol{B}_{\mathrm{r}} \!=\! \boldsymbol{W}^{\mathsf{T}} \boldsymbol{B}$ and $\boldsymbol{C}_{\mathrm{r}} \!=\! \boldsymbol{C} \boldsymbol{V}$, such that $\boldsymbol{y}(t) \approx \boldsymbol{y}_{\mathrm{r}}(t)$. Note that in this framework, the reduction is performed by a \emph{Petrov-Galerkin projection} of \eqref{eq:linear-FOM} onto the $r$-dimensional subspace $\mathrm{Ran}(\boldsymbol{EV})$ by means of the projector $\boldsymbol{P} \!=\! \boldsymbol{E} \boldsymbol{V}(\boldsymbol{W}^{\mathsf T} \boldsymbol{E} \boldsymbol{V})^{-1} \boldsymbol{W}^{\mathsf T}$. Thus, the main task in this setting consists in finding suitable (orthogonal) projection matrices $\boldsymbol{V}, \boldsymbol{W} \in \mathbb{R}^{n \times r}$ that span appropriate subspaces. 
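The projection step above is straightforward to implement. The following minimal NumPy sketch (all matrices are randomly generated toy data, and the function name is illustrative, not taken from the paper) forms the reduced matrices of the ROM from given bases $\boldsymbol{V}$, $\boldsymbol{W}$ and checks that the associated oblique projector $\boldsymbol{P}$ is idempotent:

```python
import numpy as np

def petrov_galerkin(E, A, B, C, V, W):
    """Petrov-Galerkin projection of an LTI descriptor system:
    returns (E_r, A_r, B_r, C_r) with E_r = W^T E V, etc."""
    return W.T @ E @ V, W.T @ A @ V, W.T @ B, C @ V

# toy data: random system with a stability shift, arbitrary orthonormal basis
rng = np.random.default_rng(0)
n, r, m, p = 8, 3, 2, 1
E = np.eye(n)
A = rng.standard_normal((n, n)) - 5 * np.eye(n)   # eigenvalues pushed left
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
W = V                                             # one-sided (Galerkin) choice

Er, Ar, Br, Cr = petrov_galerkin(E, A, B, C, V, W)

# the projector P = E V (W^T E V)^{-1} W^T must satisfy P @ P = P
P = E @ V @ np.linalg.inv(W.T @ E @ V) @ W.T
assert np.allclose(P @ P, P)
```

The choice $\boldsymbol{W} = \boldsymbol{V}$ here is only for brevity; any $\boldsymbol{W}$ with $\det(\boldsymbol{W}^{\mathsf T}\boldsymbol{E}\boldsymbol{V}) \neq 0$ works in the same way.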
\subsection{Notion of Moments and Krylov subspaces} One common and numerically efficient linear reduction technique relies on the concept of \emph{implicit moment matching} by rational Krylov subspaces \cite{grimme1997krylov, beattie2017model}. \begin{definition} The Taylor series expansion of the transfer function $\boldsymbol{G}(s)$ around a complex number $\sigma \in \mathbb{C}$, also called \emph{shift} or \emph{expansion / interpolation point}, is given by \vspace{-0.2em} \begin{equation} \boldsymbol{G}(s) = \sum_{i=0}^{\infty} \boldsymbol{m}_{i}(\sigma) (s - \sigma)^i\,, \end{equation} where $\boldsymbol{m}_i(\sigma)$ is called the $i$-th \emph{moment} of $\boldsymbol{G}(s)$ around $\sigma$. The moments represent the Taylor coefficients and satisfy: \begin{equation*} \begin{aligned} \boldsymbol{m}_i(\sigma) &= \frac{1}{i!} \left.\frac{\mathrm{d}^{i} \boldsymbol{G}(s)}{\mathrm{d}s^{i}}\right|_{s=\sigma} = \frac{1}{i!} \left.\left[\frac{\mathrm{d}^{i}}{\mathrm{d}s^{i}} \, \boldsymbol{C}(s\boldsymbol{E} \!-\! \boldsymbol{A})^{-1}\boldsymbol{B}\right]\right|_{s=\sigma} \\[0.5em] &=(-1)^i \, \boldsymbol{C} \big((\sigma \boldsymbol{E} - \boldsymbol{A})^{-1} \boldsymbol{E}\big)^i (\sigma \boldsymbol{E} - \boldsymbol{A})^{-1} \boldsymbol{B}. \end{aligned} \end{equation*} \end{definition} If the matrices $\boldsymbol{V} \!$ and $\boldsymbol{W} \!$ are chosen as bases of res\-pective $r$-order \emph{input} and \emph{output rational Krylov subspaces} \begin{subequations} \label{eq:multimom-block-Krylov} \begin{align} \mathcal{K}_r\left((\sigma \boldsymbol{E} \!-\! \boldsymbol{A})^{-1} \boldsymbol{E}, (\sigma \boldsymbol{E} \!-\! \boldsymbol{A})^{-1} \boldsymbol{B}\right) \subseteq \mathrm{Ran}(\boldsymbol{V}), \label{eq:multimom-block-Krylov-V} \\[0.2em] \mathcal{K}_r\left((\mu \boldsymbol{E} \!-\! \boldsymbol{A})^{- \mathsf{T}} \boldsymbol{E}^{\mathsf T}, (\mu \boldsymbol{E} \!-\! 
\boldsymbol{A})^{- \mathsf{T}} \boldsymbol{C}^{\mathsf T}\right) \subseteq \mathrm{Ran}(\boldsymbol{W}), \end{align} \end{subequations} then the ROM \eqref{eq:linear-ROM} matches $r$ moments of the original transfer function around the expansion points $\sigma$ and $\mu$, respectively. In addition to the aforementioned \emph{multimoment} matching strategy, note that it is also possible to match (high-order) moments at a set of different shifts $\left\{\sigma_i\right\}_{i=1}^q$ and $\left\{\mu_i\right\}_{i=1}^q$ with associated multiplicities $\left\{r_i\right\}_{i=1}^q$, where $\sum_{i=1}^{q} r_i = r$. In this setting, known as \emph{multipoint} moment matching, each subspace $\mathrm{Ran}(\boldsymbol{V})$ and $\mathrm{Ran}(\boldsymbol{W})$ is given by the union of all respective rational Krylov subspaces $\mathcal{K}_{r_i}$. Note also that, besides the \emph{block} Krylov subspaces \eqref{eq:multimom-block-Krylov}, in the MIMO case we may alternatively use so-called \emph{tangential} Krylov subspaces (e.g. for $r_1=\ldots=r_q = 1$): \begin{subequations} \label{eq:multipoint-tang-Krylov} \begin{equation} \resizebox{0.89\linewidth}{!}{ $\mathrm{span}\left\{ (\sigma_1 \boldsymbol{E} \!-\! \boldsymbol{A})^{-1} \! \boldsymbol{B} \, \boldsymbol{r}_1, \ldots, (\sigma_r \boldsymbol{E} \!-\! \boldsymbol{A})^{-1} \! \boldsymbol{B} \, \boldsymbol{r}_r\right\} \subseteq \mathrm{Ran}(\boldsymbol{V})$,} \end{equation} \begin{equation} \resizebox{0.89\linewidth}{!}{ $\mathrm{span}\left\{ (\mu_1 \boldsymbol{E} \!-\! \boldsymbol{A})^{- \! \mathsf T}\boldsymbol{C}^{\mathsf T} \boldsymbol{l}_1, \ldots, (\mu_r \boldsymbol{E} \!-\! \boldsymbol{A})^{- \! \mathsf T}\boldsymbol{C}^{\mathsf T} \boldsymbol{l}_r\right\} \subseteq \mathrm{Ran}(\boldsymbol{W})$.} \end{equation} \end{subequations} Here, convenient right and left tangential directions $\! \boldsymbol{r}_i \!\in\! 
\mathbb{C}^m$ and $\boldsymbol{l}_i \in \mathbb{C}^p$ are chosen to tangentially interpolate the transfer function at selected expansion points $\sigma_i$, $\mu_i \in \mathbb{C} \setminus \uplambda(\boldsymbol{E}^{-1} \boldsymbol{A})$. \subsection{Equivalence of Krylov subspaces and Sylvester equations} In fact, any basis of an input and output Krylov subspace \eqref{eq:multipoint-tang-Krylov} can be equivalently interpreted as the solution $\boldsymbol{V}$ and $\boldsymbol{W}$ of the following Sylvester equations \cite{gallivan2004sylvester}: \begin{subequations} \begin{align} \boldsymbol{E} \, \boldsymbol{V} \, \boldsymbol{S}_{v} - \boldsymbol{A} \, \boldsymbol{V} &= \boldsymbol{B} \, \boldsymbol{R}\,, \label{eq:Sylv-V}\\ \boldsymbol{E}^{\mathsf{T}} \, \boldsymbol{W} \, \boldsymbol{S}_{w}^{\mathsf T} - \boldsymbol{A}^{\mathsf{T}} \, \boldsymbol{W} &= \boldsymbol{C}^{\mathsf{T}} \, \boldsymbol{L}. \end{align} \end{subequations} Hereby, the input interpolation data $\left\{\sigma_i, \boldsymbol{r}_i\right\}$ is specified by the matrices $\boldsymbol{S}_v \!=\! \mathrm{diag}(\sigma_1, \ldots, \sigma_r) \!\in\! \mathbb{C}^{r \times r}$ and $\boldsymbol{R} \!=\! \left[\boldsymbol{r}_1, \ldots, \boldsymbol{r}_r\right] \in \mathbb{C}^{m \times r}$, where the pair $(\boldsymbol{R}, \boldsymbol{S}_v)$ is observable. Similarly, the output interpolation data $\left\{\mu_i, \boldsymbol{l}_i\right\}$ is denoted by the matrices $\boldsymbol{S}_w \!=\! \mathrm{diag}(\mu_1, \ldots, \mu_r) \in \mathbb{C}^{r \times r}$ and $\boldsymbol{L} \!=\! \left[\boldsymbol{l}_1, \ldots, \boldsymbol{l}_r \right] \in \mathbb{C}^{p \times r}$, where the pair $(\boldsymbol{S}_w, \boldsymbol{L}^{\mathsf T})$ is controllable. 
Note that in the multimoment case, $\boldsymbol{S}_v, \boldsymbol{S}_w$ are Jordan matrices, and that in the SISO case\footnote{For SISO ($m\!=\!1, p\!=\!1$) replace $\boldsymbol{B} \to \boldsymbol{b} \in \mathbb{R}^{n}$ and $\boldsymbol{C} \to \boldsymbol{c}^{\mathsf T} \in \mathbb{R}^{1 \times n}$.}, $\boldsymbol{R}, \boldsymbol{L}$ become row vectors with corresponding ones and zeros. \subsection{Time domain interpretation of Moment Matching} In addition to the frequency domain interpretation of moment matching by means of the interpolation of the original transfer function around certain shifts, one can also interpret this concept in the time domain. To this end, moments will first be characterized in terms of the solution of a Sylvester equation in Lemma \ref{th:lin-moments-Sylvester}, and then interpreted as the steady-state response of an interconnected system in Theorem \ref{th:lin-moments-steady-state} \cite{astolfi2010model,astolfi2010steady}. \begin{lemma} \label{th:lin-moments-Sylvester} The moments $\boldsymbol{m}_i(\sigma)$ of system \eqref{eq:linear-FOM} around shifts $\sigma \not\in \uplambda(\boldsymbol{E}^{-1} \boldsymbol{A})$ are equivalently given by \begin{equation} \boldsymbol{m}_i(\sigma) = (-1)^i \, \boldsymbol{C} \boldsymbol{V}_i\,, \quad i=0,\ldots,r-1 \end{equation} \vspace{-1em} where, according to \eqref{eq:multimom-block-Krylov-V}, $\boldsymbol{V}_i$ is calculated as \begin{equation} \begin{aligned} (\sigma \boldsymbol{E} - \boldsymbol{A}) \boldsymbol{V}_0 &= \boldsymbol{B}, \\ (\sigma \boldsymbol{E} - \boldsymbol{A}) \boldsymbol{V}_i &= \boldsymbol{E} \, \boldsymbol{V}_{i-1}, \quad i \geq 1 \end{aligned} \end{equation} or, alternatively, $\boldsymbol{V} \!=\! \left[\boldsymbol{V}_0, \ldots, \boldsymbol{V}_{r-1}\right]$ corresponds to the unique solution of the Sylvester equation \eqref{eq:Sylv-V} with the corresponding ``modified'' Jordan matrix $\boldsymbol{S}_v$ with \emph{negative} off-diagonal square blocks and $\boldsymbol{R} \!=\! 
\begin{bmatrix} \boldsymbol{\mathrm{I}}_m & \boldsymbol{0}_m & \cdots & \boldsymbol{0}_m \end{bmatrix}$. \end{lemma} \begin{theorem} \cite{astolfi2010model} \label{th:lin-moments-steady-state} Consider the interconnection of system \eqref{eq:linear-FOM} with the signal generator \begin{equation} \label{eq:lin-SG} \begin{aligned} \dot{\boldsymbol{x}}_{\mathrm{r}}^v(t) &= \boldsymbol{S}_{v} \, \boldsymbol{x}_{\mathrm{r}}^v(t), \quad \boldsymbol{x}_{\mathrm{r}}^v(0) = \boldsymbol{x}_{\mathrm{r},0}^v \neq \boldsymbol{0}, \\ \boldsymbol{u}(t) &= \boldsymbol{R} \, \boldsymbol{x}_{\mathrm{r}}^v(t), \end{aligned} \end{equation} where the triple $(\boldsymbol{S}_v, \, \boldsymbol{R}, \, \boldsymbol{x}_{\mathrm{r},0}^v)$ is such that $(\boldsymbol{R}, \boldsymbol{S}_v)$ is observable, $\uplambda(\boldsymbol{S}_v) \, \cap \, \uplambda(\boldsymbol{E}^{-1} \boldsymbol{A}) \!=\! \emptyset$ and $(\boldsymbol{S}_v, \boldsymbol{x}_{\mathrm{r},0}^v)$ is excitable. Then, the moments of system \eqref{eq:linear-FOM} are related to the (well-defined) steady-state response of the output $\boldsymbol{y}(t) \!=\! \boldsymbol{y}_{\mathrm{r}}(t) \!=\! \boldsymbol{C} \boldsymbol{V} \boldsymbol{x}_{\mathrm{r}}^v(t)$ of such interconnected system (cf. Fig~\ref{fig:lin-sys-lin-SG_steady-state-V}), where $\boldsymbol{V}$ is the unique solution of the Sylvester equation \eqref{eq:Sylv-V}. \end{theorem} \begin{figure}[tp] \centering \scalebox{0.56}{ \input{tikz/lin-sys-lin-SG_steady-state-V}} \caption{\footnotesize Diagram depicting the interconnection between the linear FOM/ROM and the linear signal generator to illustrate the time domain interpretation of moment matching for linear systems.} \label{fig:lin-sys-lin-SG_steady-state-V} \end{figure} \begin{corollary} Interconnecting system \eqref{eq:linear-FOM} with the signal ge\-nerator \eqref{eq:lin-SG} is equivalent to exciting the FOM with exponential input signals $\boldsymbol{u}(t) \!=\! \boldsymbol{R} \, \boldsymbol{x}_{\mathrm{r}}^v(t) \!=\! 
\boldsymbol{R} \, \mathrm{e}^{\boldsymbol{S}_v t} \, \boldsymbol{x}_{\mathrm{r},0}^v$ with exponents given by the shift matrix $\boldsymbol{S}_v$. Consequently, given $\boldsymbol{u}(t) \!=\! \boldsymbol{R} \, \mathrm{e}^{\boldsymbol{S}_v t} \, \boldsymbol{x}_{\mathrm{r},0}^v$ with $\boldsymbol{x}_{0} \!=\! \boldsymbol{V} \boldsymbol{x}_{\mathrm{r},0}^v$, $\boldsymbol{x}_{\mathrm{r},0}^v \!\neq\! \boldsymbol{0}$ arbitrary, $\boldsymbol{V}$ as solution of \eqref{eq:Sylv-V} and $\boldsymbol{W}$ such that $\det(\boldsymbol{W}^{\mathsf T} \boldsymbol{E} \boldsymbol{V}) \neq 0$, the (asymptotically stable) ROM \eqref{eq:linear-ROM} exactly matches the steady-state response of the output of the FOM, i.e. $\boldsymbol{e}(t) \!=\! \boldsymbol{y}(t) - \boldsymbol{y}_{\mathrm{r}}(t) \!=\! \boldsymbol{C} \boldsymbol{x}(t) - \boldsymbol{C} \boldsymbol{V} \boldsymbol{x}_{\mathrm{r}}(t)\!=\! \boldsymbol{0} \ \forall \, t$ (see Fig. \ref{fig:lin-sys-lin-SG_steady-state-V}). \end{corollary} Thus, linear moment matching can be interpreted as the interpolation of the steady-state response of the output of the FOM, when it is excited with exponential input signals. This understanding follows from the fact that the transfer function $G(s)$ represents the scaling factor of (complex) exponentials $\mathrm{e}^{st}$, which are the \emph{eigenfunctions} of LTI systems, i.e. $y(t) \!=\! G(s) \, \mathrm{e}^{s t}$ for $u(t) \!=\! \mathrm{e}^{st}$. Interestingly enough, the Sylvester equation \eqref{eq:Sylv-V} can be alternatively derived using this time domain interpretation and the notion of signal generators. To this end, first insert the linear approximation ansatz $\boldsymbol{x}(t) \!=\! \boldsymbol{V} \boldsymbol{x}_{\mathrm{r}}(t)$ with $\boldsymbol{x}_{\mathrm{r}}(t) \!\overset{!}{=}\! 
\boldsymbol{x}_{\mathrm{r}}^v(t)$ in the state equation of~\eqref{eq:linear-FOM}: \begin{equation} \label{eq:derivation-Syl} \boldsymbol{E} \, \boldsymbol{V} \, \dot{\boldsymbol{x}}_{\mathrm{r}}^v(t) = \boldsymbol{A} \, \boldsymbol{V} \, \boldsymbol{x}_{\mathrm{r}}^v(t) + \boldsymbol{B} \, \boldsymbol{u}(t). \end{equation} Subsequently, the linear signal generator $\dot{\boldsymbol{x}}_{\mathrm{r}}^v(t) \!=\! \boldsymbol{S}_{v} \, \boldsymbol{x}_{\mathrm{r}}^v(t)$, $\boldsymbol{u}(t) \!=\! \boldsymbol{R} \, \boldsymbol{x}_{\mathrm{r}}^v(t)$ is plugged into \eqref{eq:derivation-Syl}, yielding \begin{equation} \label{eq:derivation-Syl-2} \boldsymbol{0} = \left(\boldsymbol{A} \, \boldsymbol{V} - \boldsymbol{E} \, \boldsymbol{V} \, \boldsymbol{S}_{v} + \boldsymbol{B} \, \boldsymbol{R}\right) \cdot \boldsymbol{x}_{\mathrm{r}}^v(t). \end{equation} Since the equation holds for $\boldsymbol{x}_{\mathrm{r}}^v(t) \!\!=\! \mathrm{e}^{\boldsymbol{S}_v t} \boldsymbol{x}_{\mathrm{r},0}^v$, the state vector $\boldsymbol{x}_{\mathrm{r}}^v(t)$ can be factored out and a \emph{constant} (state-independent) linear Sylvester equation of dimension $n \!\times\! r$ is obtained \begin{equation} \boldsymbol{A} \, \boldsymbol{V} - \boldsymbol{E} \, \boldsymbol{V} \, \boldsymbol{S}_{v} + \boldsymbol{B} \, \boldsymbol{R} = \boldsymbol{0}, \end{equation} whose solution $\boldsymbol{V}$ spans a corresponding input rational Krylov subspace which guarantees moment matching under the aforementioned circumstances. 
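The resulting interpolation property can be checked numerically. The sketch below (toy random system, one-sided projection $\boldsymbol{W} = \boldsymbol{V}$; all data is illustrative) builds a tangential Krylov basis, reduces the system, and verifies that the reduced transfer function interpolates the full one along the chosen directions, i.e. $\boldsymbol{G}(\sigma_i)\boldsymbol{r}_i = \boldsymbol{G}_{\mathrm{r}}(\sigma_i)\boldsymbol{r}_i$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, p, r = 12, 2, 2, 3
E = np.eye(n)
A = rng.standard_normal((n, n)) - 6 * np.eye(n)
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))

def G(s, E, A, B, C):
    """Evaluate the transfer function C (sE - A)^{-1} B."""
    return C @ np.linalg.solve(s * E - A, B)

sigmas = [0.5, 1.5, 3.0]
Rdir = rng.standard_normal((m, r))
V = np.column_stack([np.linalg.solve(s * E - A, B @ Rdir[:, [i]]).ravel()
                     for i, s in enumerate(sigmas)])
V, _ = np.linalg.qr(V)   # orthonormalize; the span (and interpolation) is preserved
W = V                    # one-sided projection

Er, Ar, Br, Cr = W.T @ E @ V, W.T @ A @ V, W.T @ B, C @ V

# tangential moment matching: G(sigma_i) r_i = G_r(sigma_i) r_i
for i, s in enumerate(sigmas):
    assert np.allclose(G(s, E, A, B, C) @ Rdir[:, i],
                       G(s, Er, Ar, Br, Cr) @ Rdir[:, i])
```

With a two-sided choice of $\boldsymbol{W}$ from the output Krylov subspace, the Hermite conditions $\boldsymbol{l}_i^{\mathsf T}\boldsymbol{G}'(\sigma_i)\boldsymbol{r}_i$ would additionally be matched.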
\section{Moment Matching for nonlinear systems} \label{sec:nlmm-PDE} \subsection{Nonlinear Petrov-Galerkin projection} Consider now a large-scale, nonlinear time-invariant, exponentially stable, MIMO state-space model of the form \begin{equation} \label{eq:nonlin-FOM} \begin{aligned} \dot{\boldsymbol{x}}(t) &= \boldsymbol{f}\big(\boldsymbol{x}(t), \boldsymbol{u}(t)\big), \quad \boldsymbol{x}(0) = \boldsymbol{x}_0,\\ \boldsymbol{y}(t) &= \boldsymbol{h}\big(\boldsymbol{x}(t)\big), \end{aligned} \end{equation} with $\boldsymbol{x}(t) \in \mathbb{R}^n$, $\boldsymbol{u}(t) \in \mathbb{R}^m$, $\boldsymbol{y}(t) \in \mathbb{R}^p$ and smooth mappings $\boldsymbol{f}(\boldsymbol{x}, \boldsymbol{u}): \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^n$ and $\boldsymbol{h}(\boldsymbol{x}) : \mathbb{R}^n \to \mathbb{R}^p$, such that $\boldsymbol{f}(\boldsymbol{0}, \boldsymbol{0}) \!=\! \boldsymbol{0}$ and $\boldsymbol{h}(\boldsymbol{0}) \!=\! \boldsymbol{0}$. The aim now is to find a nonlinear ROM of dimension $r \ll n$ using again a projection framework. One established and successful way to do this is to apply the classical Petrov-Galerkin projection with linear mappings given by the matrices $\boldsymbol{V}, \boldsymbol{W}$ to the nonlinear system \eqref{eq:nonlin-FOM}. Herein, the projection matrices are generally constructed via POD or other nonlinear reduction techniques. Another possibility is to apply a nonlinear Petrov-Galerkin projection using \emph{nonlinear mappings} defined on \emph{manifolds} \cite{ionescu2016nonlinear}. To this end, let $\boldsymbol{x}(t) \!=\! \boldsymbol{\nu}(\boldsymbol{x}_{\mathrm{r}}(t))$ be the nonlinear approximation ansatz with smooth mapping $\boldsymbol{\nu}(\boldsymbol{x}_{\mathrm{r}}) : \mathbb{R}^r \to \mathbb{R}^n$. Furthermore, define $\boldsymbol{x}_{\mathrm{r}}(t) \!=\! 
\boldsymbol{\omega}(\boldsymbol{x}(t))$ with smooth mapping $\boldsymbol{\omega}(\boldsymbol{x}) : \mathbb{R}^n \to \mathbb{R}^r$, such that $\boldsymbol{\omega}(\boldsymbol{\nu}(\boldsymbol{x}_{\mathrm{r}})) \!=\! \boldsymbol{x}_{\mathrm{r}}$. The reduction is then performed through a nonlinear Petrov-Galerkin projection $\boldsymbol{\varrho}(\boldsymbol{x}(t)) \!=\! \boldsymbol{\nu}\big(\boldsymbol{\omega}(\boldsymbol{x}(t))\big)$ of \eqref{eq:nonlin-FOM} by means of the projector mapping $\boldsymbol{\varrho}(\boldsymbol{x}) : \mathbb{R}^n \to \mathbb{R}^n$, yielding the nonlinear ROM \begin{equation} \label{eq:nonlin-ROM} \begin{aligned} \dot{\boldsymbol{x}}_{\mathrm{r}}(t) &= \left.\frac{\partial \boldsymbol{\omega}(\boldsymbol{x}(t))}{\partial \boldsymbol{x}(t)} \, \boldsymbol{f}\big(\boldsymbol{x}(t), \boldsymbol{u}(t)\big)\right|_{\boldsymbol{x}(t)=\boldsymbol{\nu}(\boldsymbol{x}_{\mathrm{r}}(t))}\,, \\ \boldsymbol{y}_{\mathrm{r}}(t) &= \boldsymbol{h}\big(\boldsymbol{\nu}(\boldsymbol{x}_{\mathrm{r}}(t))\big), \end{aligned} \end{equation} where $\boldsymbol{x}_{\mathrm{r}}(t) \! \in \! \mathbb{R}^r$, $(\partial \boldsymbol{\omega}(\boldsymbol{x})/\partial \boldsymbol{x})|_{\boldsymbol{x}\!=\!\boldsymbol{\nu}(\boldsymbol{x}_{\mathrm{r}})} \!\cdot\! (\partial \boldsymbol{\nu}(\boldsymbol{x}_{\mathrm{r}})/\partial \boldsymbol{x}_{\mathrm{r}}) \!=\! \mathbf{I}_r$ and the initial condition is $\boldsymbol{x}_{\mathrm{r}}(0) \!=\! \boldsymbol{\omega}(\boldsymbol{x}_0)$. \begin{remark} The aforementioned nonlinear projection framework (for $\boldsymbol{E} \!=\! \mathbf{I}$) is a generalization of the linear case. 
Therefore, the nonlinear mappings are strongly related to their linear counterparts: \begin{subequations} \vspace{-0.6em} \begin{align} \boldsymbol{x} &= \boldsymbol{\nu}(\boldsymbol{x}_{\mathrm{r}}) \ \widehat{=} \ \boldsymbol{V} \, \boldsymbol{x}_{\mathrm{r}}, && \frac{\partial \boldsymbol{\nu}(\boldsymbol{x}_{\mathrm{r}})}{\partial \boldsymbol{x}_{\mathrm{r}}} \ \widehat{=} \ \boldsymbol{V},\\[0.1em] \boldsymbol{x}_{\mathrm{r}} &= \boldsymbol{\omega}(\boldsymbol{x}) \ \widehat{=} \ \underbrace{(\boldsymbol{W}^{\mathsf T}\boldsymbol{V})^{-1}\boldsymbol{W}^{\mathsf T}}_{*} \boldsymbol{x}, && \ \frac{\partial \boldsymbol{\omega}(\boldsymbol{x})}{\partial \boldsymbol{x}} \ \widehat{=} \ * \,, \\[-0.3em] \boldsymbol{\varrho} &= \boldsymbol{\nu}\big(\boldsymbol{\omega}(\boldsymbol{x})\big) \ \widehat{=} \ \boldsymbol{P} \boldsymbol{x}, && \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \frac{\partial \boldsymbol{\nu}(\boldsymbol{x}_{\mathrm{r}})}{\partial \boldsymbol{x}_{\mathrm{r}}} \frac{\partial \boldsymbol{\omega}(\boldsymbol{x})}{\partial \boldsymbol{x}} \ \widehat{=} \ \boldsymbol{P}. \end{align} \end{subequations} Note that the condition $\boldsymbol{\omega}(\boldsymbol{\nu}(\boldsymbol{x}_{\mathrm{r}})) \!\!=\!\! \boldsymbol{x}_{\mathrm{r}}$ corresponds to $(\boldsymbol{W}^{\mathsf T}\boldsymbol{V})^{-1}\boldsymbol{W}^{\mathsf T} \boldsymbol{V} \boldsymbol{x}_{\mathrm{r}} \!\!=\!\! \boldsymbol{x}_{\mathrm{r}}$ and that $\boldsymbol{P} \!=\! \boldsymbol{V}(\boldsymbol{W}^{\mathsf T}\boldsymbol{V})^{-1}\boldsymbol{W}^{\mathsf T}$. For nonlinear systems, a one-sided projection ($\boldsymbol{W} \!=\! \boldsymbol{V}$) is commonly used, which yields $*\!=\!\boldsymbol{V}^{\mathsf T}$, $\boldsymbol{P} \!=\! \boldsymbol{V}\boldsymbol{V}^{\mathsf T}$ and $\boldsymbol{x}_{\mathrm{r},0} \!=\! \boldsymbol{V}^{\mathsf T}\boldsymbol{x}_0$, provided that $\boldsymbol{V}$ is orthogonal ($\boldsymbol{V}^{\mathsf T} \boldsymbol{V} \!=\! \boldsymbol{\mathrm{I}}_r$). 
\end{remark} \begin{figure}[tp] \centering \scalebox{0.56}{ \input{tikz/nonlin-sys-nonlin-SG_steady-state-V}} \caption{\footnotesize Diagram depicting the interconnection between the nonlinear FOM/ROM and the nonlinear signal generator to illustrate the time domain interpretation of moment matching for nonlinear systems.} \label{fig:nonlin-sys-nonlin-SG_steady-state-V} \end{figure} \subsection{Notion of Nonlinear Moments} The notion of moments and their steady-state-based interpretation can be carried over to nonlinear systems \cite{astolfi2010model}. \begin{theorem} \cite{astolfi2010model} \label{th:nonlin-moments-steady-state} Consider the interconnection of system \eqref{eq:nonlin-FOM} with the nonlinear signal generator \begin{equation} \label{eq:nonlin-SG} \begin{aligned} \dot{\boldsymbol{x}}_{\mathrm{r}}^v(t) &= \boldsymbol{s}_{v}\big(\boldsymbol{x}_{\mathrm{r}}^v(t)\big), \quad \boldsymbol{x}_{\mathrm{r}}^v(0) = \boldsymbol{x}_{\mathrm{r},0}^v \neq \boldsymbol{0}, \\ \boldsymbol{u}(t) &= \boldsymbol{r}\big(\boldsymbol{x}_{\mathrm{r}}^v(t)\big), \end{aligned} \end{equation} where $\boldsymbol{s}_{v}(\boldsymbol{x}_{\mathrm{r}}^v) \!:\! \mathbb{R}^r \to \mathbb{R}^r$, $\boldsymbol{r}(\boldsymbol{x}_{\mathrm{r}}^v) \!:\! \mathbb{R}^r \to \mathbb{R}^m$ are smooth mappings such that $\boldsymbol{s}_v(\boldsymbol{0}) \!=\! \boldsymbol{0}$ and $\boldsymbol{r}(\boldsymbol{0}) \!=\! \boldsymbol{0}$. Hereby it is assumed that the signal generator $(\boldsymbol{r}, \, \boldsymbol{s}_v, \, \boldsymbol{x}_{\mathrm{r},0}^v)$ is ob\-ser\-va\-ble, i.e. for any pair of initial conditions $\boldsymbol{x}_{\mathrm{r},\mathrm{a}}^v(0) \! \neq \! \boldsymbol{x}_{\mathrm{r},\mathrm{b}}^v(0)$, the corresponding trajectories $\boldsymbol{r}(\boldsymbol{x}_{\mathrm{r},\mathrm{a}}^v(t))$ and $\boldsymbol{r}(\boldsymbol{x}_{\mathrm{r},\mathrm{b}}^v(t))$ do not coincide. 
In addition, the signal generator \eqref{eq:nonlin-SG} is assumed to be Poisson stable in a neighborhood of its equilibrium $\boldsymbol{x}_{\mathrm{r}}^v \!=\! \boldsymbol{0}$ with $\boldsymbol{x}_{\mathrm{r}}^v(0) \! \neq \! \boldsymbol{0}$. Further assume that the zero equilibrium of the system $\dot{\boldsymbol{x}} \!=\! \boldsymbol{f}(\boldsymbol{x}, \boldsymbol{0})$ is locally exponentially stable. Then, the moments of system \eqref{eq:nonlin-FOM} at $(\boldsymbol{s}_v(\boldsymbol{x}_{\mathrm{r}}^v), \boldsymbol{r}(\boldsymbol{x}_{\mathrm{r}}^v), \boldsymbol{x}_{\mathrm{r},0}^v)$ are related to the (locally well-defined) steady-state response of the output $\boldsymbol{y}(t) \!=\! \boldsymbol{y}_{\mathrm{r}}(t) \!=\! \boldsymbol{h}\big(\boldsymbol{\nu}(\boldsymbol{x}_{\mathrm{r}}^v(t))\big)$ of such an interconnected system (cf. Fig. \ref{fig:nonlin-sys-nonlin-SG_steady-state-V}), where the mapping $\boldsymbol{\nu}(\boldsymbol{x}_{\mathrm{r}}^v)$, defined in a neighborhood of $\boldsymbol{x}_{\mathrm{r}}^v \!=\! \boldsymbol{0}$, is the unique solution of the following Sylvester-like partial differential equation~(PDE) \begin{equation} \label{eq:nonlin-Sylv-v} \frac{\partial \boldsymbol{\nu}(\boldsymbol{x}_{\mathrm{r}}^v)}{\partial \boldsymbol{x}_{\mathrm{r}}^v} \, \boldsymbol{s}_v(\boldsymbol{x}_{\mathrm{r}}^v) = \boldsymbol{f}\big(\boldsymbol{\nu}(\boldsymbol{x}_{\mathrm{r}}^v), \boldsymbol{r}(\boldsymbol{x}_{\mathrm{r}}^v)\big). \end{equation} \end{theorem} \vspace{0.2em} \subsection{Steady-State-Based Nonlinear Moment Matching} Based on Theorem \ref{th:nonlin-moments-steady-state}, the interpretation of nonlinear moment matching in terms of the interpolation of the steady-state response of an interconnected system follows. \begin{corollary} Consider the interconnection of system \eqref{eq:nonlin-FOM} and the nonlinear signal generator \eqref{eq:nonlin-SG}. Suppose all assumptions concerning observability and local exponential stability from above hold. 
Moreover, let $\boldsymbol{\nu}(\boldsymbol{x}_{\mathrm{r}}^v)$ be the unique solution of \eqref{eq:nonlin-Sylv-v} and $\boldsymbol{\omega}(\boldsymbol{x})$ such that $\boldsymbol{\omega}(\boldsymbol{\nu}(\boldsymbol{x}_{\mathrm{r}}^v)) \!=\! \boldsymbol{x}_{\mathrm{r}}^v$. Assume that the zero equilibrium of \eqref{eq:nonlin-ROM} is locally exponentially stable. Then, the ROM \eqref{eq:nonlin-ROM} exactly matches the steady-state response of the output of the FOM, i.e. $\boldsymbol{e}(t) \!=\! \boldsymbol{y}(t) - \boldsymbol{y}_{\mathrm{r}}(t) \!=\! \boldsymbol{h}\big(\boldsymbol{x}(t)\big) - \boldsymbol{h}\big(\boldsymbol{\nu}(\boldsymbol{x}_{\mathrm{r}}(t))\big) \!=\! \boldsymbol{0} \ \forall \, t$ (see Fig. \ref{fig:nonlin-sys-nonlin-SG_steady-state-V}). \end{corollary} Note that the Sylvester-like PDE from \eqref{eq:nonlin-Sylv-v} represents the nonlinear counterpart of the linear equation \eqref{eq:derivation-Syl-2}: \begin{equation} \boldsymbol{V} \, \boldsymbol{S}_v \, \mathrm{e}^{\boldsymbol{S}_v t} \, \boldsymbol{x}_{\mathrm{r},0}^v = \boldsymbol{A} \, \boldsymbol{V} \, \mathrm{e}^{\boldsymbol{S}_v t} \, \boldsymbol{x}_{\mathrm{r},0}^v + \boldsymbol{B} \, \boldsymbol{R} \, \mathrm{e}^{\boldsymbol{S}_v t} \, \boldsymbol{x}_{\mathrm{r},0}^v. \end{equation} Thus, the PDE can be alternatively derived as follows. First, the nonlinear approximation ansatz $\boldsymbol{x}(t) \!=\! \boldsymbol{\nu}(\boldsymbol{x}_{\mathrm{r}}(t))$ with $\boldsymbol{x}_{\mathrm{r}}(t) \overset{!}{=} \boldsymbol{x}_{\mathrm{r}}^v(t)$ is inserted in the state equation of \eqref{eq:nonlin-FOM}: \begin{equation} \label{eq:derivation-PDE} \frac{\partial \boldsymbol{\nu}(\boldsymbol{x}_{\mathrm{r}}^v(t))}{\partial \boldsymbol{x}_{\mathrm{r}}^v(t)} \, \dot{\boldsymbol{x}}_{\mathrm{r}}^v(t) = \boldsymbol{f}\big(\boldsymbol{\nu}(\boldsymbol{x}_{\mathrm{r}}^v(t)), \boldsymbol{u}(t)\big). 
\end{equation} Afterwards, the nonlinear signal generator $\dot{\boldsymbol{x}}_{\mathrm{r}}^v(t) \!\!=\!\! \boldsymbol{s}_{v}\big(\boldsymbol{x}_{\mathrm{r}}^v(t)\big)$, $\boldsymbol{u}(t) \!=\! \boldsymbol{r}\big(\boldsymbol{x}_{\mathrm{r}}^v(t)\big)$ is plugged into \eqref{eq:derivation-PDE}, yielding \begin{equation} \label{eq:nonlin-time-dep-Sylv-v} \frac{\partial \boldsymbol{\nu}(\boldsymbol{x}_{\mathrm{r}}^v(t))}{\partial \boldsymbol{x}_{\mathrm{r}}^v(t)} \, \boldsymbol{s}_{v}\big(\boldsymbol{x}_{\mathrm{r}}^v(t)\big) = \boldsymbol{f}\big(\boldsymbol{\nu}(\boldsymbol{x}_{\mathrm{r}}^v(t)), \boldsymbol{r}(\boldsymbol{x}_{\mathrm{r}}^v(t))\big). \end{equation} Note that -- in contrast to the linear, state-independent Sylvester equation \eqref{eq:Sylv-V} of dimension $n \times r$ -- the PDE \eqref{eq:nonlin-time-dep-Sylv-v} is a \emph{nonlinear}, \emph{state-dependent} equation of dimension $n \times 1$. \section{Simulation-Free Nonlinear Model Reduction by Moment Matching} \label{sec:simulation-free-NLMM} The approach for nonlinear moment matching described in Section \ref{sec:nlmm-PDE} requires the solution $\boldsymbol{\nu}(\boldsymbol{x}_{\mathrm{r}}^v(t))$ of the nonlinear, state-dependent PDE \eqref{eq:nonlin-time-dep-Sylv-v} for a given signal generator, in order to reduce the FOM \eqref{eq:nonlin-FOM}. This involves either symbolic computations or the numerical integration of a resulting system of ordinary differential equations (ODEs) after reduced state-space discretization of the PDE \eqref{eq:nonlin-time-dep-Sylv-v}. Since we aim to reduce large-scale nonlinear systems, essentially only \emph{numerical} methods come into consideration, which preferably should also avoid an expensive simulation. Hence, some step-by-step simplifications are performed in the following towards a practicable, simulation-free method for nonlinear moment matching, which relies on the solution of a system of nonlinear \emph{algebraic} equations rather than of a PDE. 
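Before simplifying the projection itself, it is useful to see how a nonlinear ROM is assembled once a basis is available. The following sketch (toy cubic nonlinearity; all names and data are illustrative assumptions, not the paper's benchmark) implements the one-sided Galerkin special case of the projected ROM, i.e. the linear ansatz $\boldsymbol{x} = \boldsymbol{V}\boldsymbol{x}_{\mathrm{r}}$ with $\boldsymbol{\omega}(\boldsymbol{x}) = \boldsymbol{V}^{\mathsf T}\boldsymbol{x}$:

```python
import numpy as np

def galerkin_rom(f, h, V):
    """One-sided Galerkin projection of x' = f(x, u), y = h(x)
    with x = V x_r and omega(x) = V^T x (V assumed orthonormal)."""
    def f_r(xr, u):
        return V.T @ f(V @ xr, u)   # reduced state equation
    def h_r(xr):
        return h(V @ xr)            # reduced output equation
    return f_r, h_r

# toy nonlinear FOM: f(x, u) = A x - x^3 (elementwise) + B u
rng = np.random.default_rng(3)
n, r = 20, 4
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
f = lambda x, u: A @ x - x**3 + (B @ u).ravel()
h = lambda x: np.array([x.sum()])

V, _ = np.linalg.qr(rng.standard_normal((n, r)))  # e.g. from POD snapshots
f_r, h_r = galerkin_rom(f, h, V)

xr = rng.standard_normal(r)
assert f_r(xr, np.array([1.0])).shape == (r,)
```

Note that the reduced right-hand side still requires evaluating $\boldsymbol{f}$ in dimension $n$; hyper-reduction of this lifting bottleneck is a separate issue not addressed here.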
\subsection{Linear Projection} The first step towards a practical method consists in applying the linear projection ansatz $\boldsymbol{x}(t) \!=\! \boldsymbol{\nu}(\boldsymbol{x}_{\mathrm{r}}^v(t)) \!=\! \boldsymbol{V} \, \boldsymbol{x}_{\mathrm{r}}^v(t)$ instead of the nonlinear projection mapping $\boldsymbol{\nu}(\boldsymbol{x}_{\mathrm{r}}^v(t))$. This simplification is motivated by the fact that nonlinear projections are much more involved than linear ones, and that the latter are successfully employed even in nonlinear model order reduction. By doing so, the PDE \eqref{eq:nonlin-time-dep-Sylv-v} also becomes an algebraic equation, which is much easier to handle. Depending on the form of the used signal generator, we distinguish (based on \cite{astolfi2010model}) between the following cases: \subsubsection{Nonlinear signal generator} In this case, the PDE \eqref{eq:nonlin-time-dep-Sylv-v} becomes the following nonlinear system of equations \begin{equation} \label{eq:LP-NSG} \boldsymbol{0} = \boldsymbol{f}\big(\boldsymbol{V} \boldsymbol{x}_{\mathrm{r}}^v(t), \boldsymbol{r}(\boldsymbol{x}_{\mathrm{r}}^v(t))\big) - \boldsymbol{V} \, \boldsymbol{s}_{v}\big(\boldsymbol{x}_{\mathrm{r}}^v(t)\big), \end{equation} where the triple $(\boldsymbol{s}_v(\boldsymbol{x}_{\mathrm{r}}^v(t)), \boldsymbol{r}(\boldsymbol{x}_{\mathrm{r}}^v(t)), \boldsymbol{x}_{\mathrm{r},0}^v)$ is user-defined and the projection matrix $\boldsymbol{V} \in \mathbb{R}^{n \times r}$ is the searched solution. Note, however, that system \eqref{eq:LP-NSG} consists of $n$ equations for $n \cdot r$ unknowns, i.e. it is underdetermined. 
Thus, we consider the equation column-wise for each $\boldsymbol{v}_i \in \mathbb{R}^n$, $i=1,\ldots,r$ \begin{equation} \label{eq:LP-NSG-elem} \boldsymbol{0} = \boldsymbol{f}\big(\boldsymbol{v}_i \, x_{\mathrm{r}, i}^v(t), \boldsymbol{r}_{_i}(x_{\mathrm{r},i}^v(t))\big) - \boldsymbol{v}_i \, s_{v_{i}}\big(x_{\mathrm{r},i}^v(t)\big), \end{equation} with $x_{\mathrm{r},i}^v(t) \in \mathbb{R}$ and $\boldsymbol{V} \!=\! \left[\boldsymbol{v}_1, \ldots, \boldsymbol{v}_{r}\right]$. Please bear in mind that, in the linear setting, a column-wise construction of the orthogonal basis $\boldsymbol{V}$ using the Arnoldi process still fulfills the Sylvester matrix equation \eqref{eq:Sylv-V}. In the nonlinear setting, however, this does not hold true anymore, since equation \eqref{eq:LP-NSG} is generally not satisfied, even if each column $\boldsymbol{v}_i$ fulfills \eqref{eq:LP-NSG-elem}. This shortcoming is a consequence of the usage of a linear projection instead of a nonlinear mapping on a manifold. \subsubsection{Linear signal generator} Motivated from the linear case, one may also come to the idea of interconnecting the nonlinear system \eqref{eq:nonlin-FOM} with the linear signal generator \eqref{eq:lin-SG}, where $\boldsymbol{s}_v(\boldsymbol{x}_{\mathrm{r}}^v(t)) \!=\! \boldsymbol{S}_v \, \boldsymbol{x}_{\mathrm{r}}^v(t)$ and $\boldsymbol{r}(\boldsymbol{x}_{\mathrm{r}}^v(t)) \!=\! \boldsymbol{R} \, \boldsymbol{x}_{\mathrm{r}}^v(t)$. By doing so, equation \eqref{eq:LP-NSG} becomes \begin{equation} \boldsymbol{0} = \boldsymbol{f}\big(\boldsymbol{V} \boldsymbol{x}_{\mathrm{r}}^v(t), \boldsymbol{R} \, \boldsymbol{x}_{\mathrm{r}}^v(t)\big) - \boldsymbol{V} \, \boldsymbol{S}_{v} \, \boldsymbol{x}_{\mathrm{r}}^v(t), \end{equation} where the triple $(\boldsymbol{S}_v, \, \boldsymbol{R}, \, \boldsymbol{x}_{\mathrm{r},0}^v)$ is user-defined. 
Remember that the usage of a linear signal generator corresponds to exciting the nonlinear system with exponential input signals $\boldsymbol{u}(t) \!=\! \boldsymbol{R} \, \boldsymbol{x}_{\mathrm{r}}^v(t) \!=\! \boldsymbol{R} \, \mathrm{e}^{\boldsymbol{S}_v t} \, \boldsymbol{x}_{\mathrm{r},0}^v$. This choice naturally raises the question whether (growing) exponential inputs are suitable for characterizing nonlinear systems. Note that the dynamics of the selected signal generator represent the dynamics of the nonlinear system for which the steady-state responses are matched. Therefore, the signal generator should ideally be chosen such that it excites and cha\-rac\-te\-ri\-zes the important dynamics of the nonlinear system. It is well known that exponential functions are the characterizing eigenfunctions for linear systems. By exciting the nonlinear system with exponential input signals, we therefore hope to describe the nonlinear dynamics adequately as well. Considering the underdetermined equation again column-wise yields \begin{equation} \label{eq:LP-LSG-elem} \boldsymbol{0} = \boldsymbol{f}\big( \boldsymbol{v}_i \, x_{\mathrm{r},i}^v(t), \underbrace{\boldsymbol{r}_i \, x_{\mathrm{r},i}^v(t)}_{\boldsymbol{r}_{_i}\left(x_{\mathrm{r},i}^v(t)\right)} \big) - \boldsymbol{v}_i \, \underbrace{\sigma_i \, x_{\mathrm{r},i}^v(t)}_{s_{v_{i}}\left(x_{\mathrm{r},i}^v(t)\right)}, \end{equation} where the signal generator \eqref{eq:lin-SG} becomes $\dot{x}_{\mathrm{r},i}^v(t) \!=\! \sigma_i \, x_{\mathrm{r},i}^v(t)$, $\boldsymbol{u}_i(t) \!=\! \boldsymbol{r}_i \, x_{\mathrm{r},i}^v(t)$ with $x_{\mathrm{r},i}^v(t) \!=\! \mathrm{e}^{\sigma_i t} x_{\mathrm{r},0,i}^v$ for $i=1,\ldots,r$. \subsubsection{Zero signal generator} This special (linear) signal generator is defined as $\dot{\boldsymbol{x}}_{\mathrm{r}}^v(t) \!=\! \boldsymbol{s}_v(\boldsymbol{x}_{\mathrm{r}}^v(t)) \!=\! \boldsymbol{0}$, which means that $\boldsymbol{x}_{\mathrm{r}}^v(t) \!=\!
\boldsymbol{x}_{\mathrm{r},0}^v \!=\! \textrm{const}$ and $\boldsymbol{u}(t) \!=\! \boldsymbol{R} \boldsymbol{x}_{\mathrm{r}}^v(t) \!=\! \boldsymbol{R} \, \boldsymbol{x}_{\mathrm{r},0}^v \!=\! \textrm{const}$. Hence, the usage of a zero signal generator is equivalent to exciting the nonlinear system with a constant input signal. In this particular case, equation \eqref{eq:LP-NSG} becomes \begin{equation} \boldsymbol{0} = \boldsymbol{f}\big(\boldsymbol{V} \boldsymbol{x}_{\mathrm{r},0}^v, \ \boldsymbol{R} \, \boldsymbol{x}_{\mathrm{r},0}^v\big), \end{equation} which is a nonlinear, \emph{time-independent} system of equations. A column-wise consideration of the underdetermined equation yields \vspace{-1.5em} \begin{equation} \label{eq:LP-ZSG-elem} \boldsymbol{0} = \boldsymbol{f}\big(\boldsymbol{v}_i \, x_{\mathrm{r},0,i}^v, \ \overbrace{\boldsymbol{r}_i \, x_{\mathrm{r},0,i}^v}^{\boldsymbol{r}_{_i}(x_{\mathrm{r},0,i}^v)}\big), \end{equation} where $\dot{x}_{\mathrm{r},i}^v(t) \!=\! 0$ with $\sigma_i \!=\! 0$, $\boldsymbol{u}_i(t) \!=\! \boldsymbol{r}_i \, x_{\mathrm{r},0,i}^v \!=\! \textrm{const}$ and $x_{\mathrm{r},i}^v(t) \!=\! x_{\mathrm{r},0,i}^v \!=\! \textrm{const}$ hold for $i=1,\ldots,r$. In other words, the employment of a zero signal generator corresponds to moment matching at shifts $\sigma_i \!=\! 0$. \subsection{Time discretization with collocation points} Except for the case with a zero signal generator, the nonlinear equations \eqref{eq:LP-NSG-elem} and \eqref{eq:LP-LSG-elem} are state-dependent and cannot be solved so easily. Remember that in the linear case, the state vector $\boldsymbol{x}_{\mathrm{r}}^v(t)$ could be factored out, yielding a constant linear matrix equation that is satisfied for $\boldsymbol{x}_{\mathrm{r}}^v(t)$. Unfortunately, this factorization cannot be generally done in the non\-li\-near setting anymore. 
Thus, inspired by POD, we propose to discretize the state-dependent equations with \emph{time-snapshots} or \emph{collocation points} $\left\{t^*_k\right\}$, $k=1,\ldots,K$. \subsubsection{Nonlinear signal generator} \label{subsubsec:LP-NSG-elem-timeDis} For a time-discretized nonlinear signal generator $s_{v_{i}}(x_{\mathrm{r},i}^v(t^*_k))$, $\boldsymbol{r}_{_i}(x_{\mathrm{r},i}^v(t^*_k))$ and $x_{\mathrm{r},0,i}^v$, the following time-independent equation results \begin{equation} \label{eq:LP-NSG-elem-timeDis} \boldsymbol{0} = \boldsymbol{f}\big(\boldsymbol{v}_{ik} \, x_{\mathrm{r}, i}^v(t^*_k), \, \boldsymbol{r}_{_i}(x_{\mathrm{r},i}^v(t^*_k))\big) - \boldsymbol{v}_{ik} \, s_{v_{i}}\big(x_{\mathrm{r},i}^v(t^*_k)\big), \end{equation} which can be solved for each $\boldsymbol{v}_{ik} \in \mathbb{R}^n$, with $i=1,\ldots,r$ and $k=1,\ldots,K$, if desired. Note that the discrete solution $x_{\mathrm{r},i}^v(t^*_k)$ of the nonlinear signal generator ODE must be given or computed via simulation before solving equation \eqref{eq:LP-NSG-elem-timeDis}. \subsubsection{Linear signal generator} Using the time-discretized signal generator $\dot{x}_{\mathrm{r},i}^v(t^*_k) \!=\! \sigma_i \, x_{\mathrm{r},i}^v(t^*_k)$, $\boldsymbol{u}_i(t^*_k) \!=\! \boldsymbol{r}_i \, x_{\mathrm{r},i}^v(t^*_k)$ and $x_{\mathrm{r},0,i}^v$, equation \eqref{eq:LP-LSG-elem} becomes time-independent \begin{equation} \label{eq:LP-LSG-elem-timeDis} \begin{aligned} \boldsymbol{0} &= \boldsymbol{f}\big(\boldsymbol{v}_{ik} \, x_{\mathrm{r},i}^v(t^*_k), \, \boldsymbol{r}_i \, x_{\mathrm{r},i}^v(t^*_k)\big) - \boldsymbol{v}_{ik} \, \sigma_i \, x_{\mathrm{r},i}^v(t^*_k), \end{aligned} \end{equation} with $x_{\mathrm{r},i}^v(t^*_k) \!=\! \mathrm{e}^{\sigma_i t^*_k} \, x_{\mathrm{r},0,i}^v$ for $i=1,\ldots,r$. 
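For each pair $(i,k)$, equation \eqref{eq:LP-LSG-elem-timeDis} is a square nonlinear system in $\boldsymbol{v}_{ik}$ with Jacobian $\boldsymbol{J}_{\boldsymbol{f}}\big(\boldsymbol{v}\,x_{\mathrm{r},i}^v(t^*_k), \cdot\big)\, x_{\mathrm{r},i}^v(t^*_k) - \sigma_i \, x_{\mathrm{r},i}^v(t^*_k)\, \boldsymbol{\mathrm{I}}_n$, so a plain Newton iteration applies. A minimal sketch for a single snapshot, assuming a hypothetical mildly cubic toy system (none of the data below comes from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
A = -2.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))  # toy system matrix
b = rng.standard_normal(n)

def f(x, u):                  # assumed mild cubic nonlinearity (illustration)
    return A @ x - 0.1 * x**3 + b * u

def Jf(x):                    # analytical Jacobian of f with respect to x
    return A - 0.3 * np.diag(x**2)

# One sample of a linear signal generator: shift sigma, direction r_i = 1,
# reduced state x_rk = exp(sigma t_k) x_r0 at the snapshot t_k.
sigma, x_r0, t_k = -0.5, 1.0, 0.3
x_rk = np.exp(sigma * t_k) * x_r0

g = lambda v: f(v * x_rk, x_rk) - sigma * x_rk * v             # residual
Jg = lambda v: Jf(v * x_rk) * x_rk - sigma * x_rk * np.eye(n)  # its Jacobian

# Initial guess from the linearized model: v0 = (sigma I - A)^{-1} b r_i.
v = np.linalg.solve(sigma * np.eye(n) - A, b)
for _ in range(25):           # plain Newton-Raphson iteration
    step = np.linalg.solve(Jg(v), g(v))
    v -= step
    if np.linalg.norm(step) < 1e-12:
        break
```

The linearized-model initial guess used here is the heuristic $(\sigma_i \boldsymbol{\mathrm{I}} \!-\! \boldsymbol{A})^{-1}\boldsymbol{B}\boldsymbol{r}_i$ also advocated in the text.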
Note that in this case, the discrete solution $x_{\mathrm{r},i}^v(t^*_k)$ of the linear signal generator ODE is analytically given by exponential functions with exponents $\sigma_i$, so that no simulation of the signal generator is required. \subsubsection{Zero signal generator} For this special case, no time discretization is needed, since \eqref{eq:LP-ZSG-elem} already represents a time-independent equation. Please note that solving the nonlinear system of equations \eqref{eq:LP-ZSG-elem} is strongly related to computing the \emph{steady-state} $\boldsymbol{x}_{\infty}$, also called \emph{equilibrium point}, of the nonlinear system \eqref{eq:nonlin-FOM} by means of $\boldsymbol{0} \!=\! \boldsymbol{f}\big(\boldsymbol{x}_{\infty}, \boldsymbol{u}_{\mathrm{const}}\big)$. \subsection{Simulation-free nonlinear moment matching algorithm} After the step-by-step simplifications discussed in the previous section, we are now ready to state our proposed simulation-free nonlinear moment matching algorithm: \begin{algorithm}[!ht]\caption{Nonlinear Moment Matching (NLMM)} \label{alg:nlmm} \begin{algorithmic}[1] \Require $\boldsymbol{f}(\boldsymbol{x}, \!\boldsymbol{u})$, $\!\boldsymbol{J}_{\boldsymbol{f}}(\boldsymbol{x}, \!\boldsymbol{u})$, $\!s_{v_i}(x_{\mathrm{r},i}^v(t^*_k))$, $\!\boldsymbol{r}_{_i}(x_{\mathrm{r},i}^v(t^*_k))$, $\!x_{\mathrm{r},i}^v(t^*_k)$, \hspace{1em} initial guesses $\boldsymbol{v}_{0,ik}$, deflated reduced order $r_{\mathrm{defl}}$ \vspace{0.2em} \Ensure orthogonal basis $\boldsymbol{V}$ \vspace{0.2em} \For{\begin{small} \texttt{i = 1 : r} \end{small}} \hspace{1.5em} $\triangleright$ e.g. $r$ different shifts $\sigma_i$ \For{\begin{small} \texttt{k = 1 : K} \end{small}} \vspace{0.2em} \hspace{0.4em} $\triangleright$ e.g.
$K$ samples in each shift \State \begin{small} \hspace{-1.5em} \texttt{fun=@(v)} $\boldsymbol{f}\big(\texttt{v*}x_{\mathrm{r},ik}^v, \, \boldsymbol{r}_{_i}(x_{\mathrm{r},ik}^v)\big) - \texttt{v*}s_{v_i}(x_{\mathrm{r},ik}^v)$ \end{small} \label{al:line:fun} \vspace{0.3em} \State \begin{footnotesize} \hspace{-1.5em} \texttt{Jfun=@(v)} $\boldsymbol{J}_{\boldsymbol{f}}\big(\texttt{v*}x_{\mathrm{r},ik}^v, \, \boldsymbol{r}_{_i}(x_{\mathrm{r},ik}^v)\big)\texttt{*}x_{\mathrm{r},ik}^v - \boldsymbol{\mathrm{I}}_n\texttt{*}s_{v_i}(x_{\mathrm{r},ik}^v)$ \end{footnotesize} \label{al:line:Jfun} \vspace*{-0.8em} \State \begin{small} \hspace{-1.5em} \texttt{V(:,(i-1)*K+k)=} \textbf{\texttt{Newton}}\texttt{(fun,}$\, \boldsymbol{v}_{0,ik} \, $\texttt{,Jfun)} \end{small} \label{al:line:Newton} \vspace{0.3em} \State \begin{small} \hspace{-1.4em} \texttt{V = }\textbf{\texttt{gramSchmidt}}\texttt{((i-1)*K+k, V)} \end{small} \label{al:line:gramSchmidt} \vspace{0.2em} \hspace{0.1em} $\triangleright$ optional \EndFor \EndFor \State \texttt{V = }\textbf{\texttt{svd}}\texttt{(V,}$\, r_{\mathrm{defl}}$\texttt{)} \hspace{1.1em} $\triangleright$ deflation is optional \label{al:line:SVD} \end{algorithmic} \end{algorithm}\\ Note that the algorithm is given for the most general case of a nonlinear signal generator (cf. eq. \eqref{eq:LP-NSG-elem-timeDis}), and where \emph{two} nested \textbf{for}-loops are used to compute all possible $\boldsymbol{v}_{ik} \in \mathbb{R}^n$. Nevertheless, other (simpler) strategies are also conceivable. These and further aspects are discussed in the following. \paragraph{Different strategies and degrees of freedom} In a\-ddi\-tion to a nonlinear signal generator, one could also apply a linear or a zero signal generator. To this end, line \ref{al:line:fun} (and correspondingly line \ref{al:line:Jfun} also) in Algorithm \ref{alg:nlmm} should be replaced according to the equations \eqref{eq:LP-LSG-elem-timeDis} and \eqref{eq:LP-ZSG-elem}. 
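A compact Python transcription of Algorithm \ref{alg:nlmm} might look as follows. The FOM data, the signal-generator choices ($x_{\mathrm{r},0}^v \!=\! 1$, $r_i \!=\! 1$) and the mild cubic nonlinearity are illustrative assumptions, and the optional Gram-Schmidt step is folded into a single SVD-based deflation at the end:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
A = -2.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))  # toy FOM data
b = rng.standard_normal(n)

f = lambda x, u: A @ x - 0.1 * x**3 + b * u   # assumed nonlinearity
Jf = lambda x: A - 0.3 * np.diag(x**2)        # its analytical Jacobian

def newton(g, Jg, v0, tol=1e-10, maxit=25):
    """Newton-Raphson solve of one full-order NLSE (Newton step of Alg. 1)."""
    v = v0.copy()
    for _ in range(maxit):
        step = np.linalg.solve(Jg(v), g(v))
        v -= step
        if np.linalg.norm(step) < tol:
            break
    return v

shifts = [-1.0, -0.5, -0.1]            # r linear signal generators
t_star = np.linspace(0.1, 1.0, 4)      # K collocation points per shift
cols = []
for sigma in shifts:                   # outer loop over signal generators
    for t_k in t_star:                 # inner loop over time-snapshots
        x_rk = np.exp(sigma * t_k)     # x_r0 = 1, r_i = 1 assumed
        g = lambda v: f(v * x_rk, x_rk) - sigma * x_rk * v
        Jg = lambda v: Jf(v * x_rk) * x_rk - sigma * x_rk * np.eye(n)
        v0 = np.linalg.solve(sigma * np.eye(n) - A, b)  # linearized guess
        cols.append(newton(g, Jg, v0))

# Deflation step: keep the r_defl dominant left singular vectors.
r_defl = 6
U, s, _ = np.linalg.svd(np.column_stack(cols), full_matrices=False)
V = U[:, :r_defl]                      # orthogonal reduced basis
```

In practice, \texttt{newton} would be replaced by a robust solver (e.g. \textsc{MATLAB}'s \texttt{fsolve}) and $r_{\mathrm{defl}}$ chosen from the singular value decay.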
Note again that the latter cases do not require the simulation of the signal generator ODE to compute $x_{\mathrm{r},i}^v(t^*_k)$. Moreover, please remember the importance of the choice of an adequate signal generator for a suitable characterization and reduction of the nonlinear system at hand. Besides the most general approach depicted above, in which basis vectors are computed for different signal generators~$i\!=\!1,\ldots,r$ at several collocation points $k \!=\! 1, \ldots,K$, one could also consider some special cases. For instance, a single signal generator ($r\!=\!1$) at several collocation points $k \!=\! 1, \ldots,K$ is a possible simpler approach. Herein, the choice of appropriate time-snapshots $t^*_k$ of the selected signal generator is of crucial importance. Another procedure consists in matching moments for different signal generators $i \!=\! 1,\ldots,r$ at only one time-snapshot ($K \!=\! 1$). This multipoint moment matching strategy implies, for instance for a linear signal generator, the choice of different shifts and tangential directions $\left\{\sigma_i, \boldsymbol{r}_i\right\}$, which may be selected e.g. logarithmically between $\left[\omega_{\mathrm{min}}, \omega_{\mathrm{max}}\right]$ or via IRKA \cite{gugercin2008h_2}. For a zero signal generator this implies the choice of different initial conditions and tangential directions $\left\{x_{\mathrm{r},0,i}^v, \boldsymbol{r}_i\right\}$. \paragraph{Computational effort} The presented reduction technique is \emph{simulation-free}, since it does not require the numerical integration of the large-scale nonlinear system \eqref{eq:nonlin-FOM}. However, it involves the solution of (at most $r \cdot K$) nonlinear systems of equations (NLSE) of full order dimension $n$. These NLSEs can be solved using either a self-programmed Newton-Raphson scheme (cf. line \ref{al:line:Newton}) or \textsc{MATLAB}'s built-in function \textbf{\texttt{fsolve}}.
For a faster convergence of the Newton method, it is highly recommended to supply the analytical Jacobian of the right-hand side \texttt{Jfun}, for which the Jacobian of the nonlinear system $\boldsymbol{J}_{\boldsymbol{f}}(\boldsymbol{x}, \boldsymbol{u})$ is needed. If \texttt{Jfun} is not provided, then the Jacobian is approximated using finite differences, which can be very time-consuming. Further note that reduction techniques like POD require a, typically implicit, numerical simulation of the FOM, which also relies on the solution of NLSEs with the Newton-Raphson method. However, the computational effort of a forward simulation is expected to be higher than that of NLMM, since -- within a simulation -- an NLSE must be solved in \emph{each} time-step. \paragraph{Other aspects} A good initial guess for the solution of an NLSE can considerably speed up the convergence of the Newton method. Towards this aim, initial guesses can be taken from linearized models, i.e. $\boldsymbol{v}_{0,i} \!=\! (\sigma_i \boldsymbol{\mathrm{I}} \!-\! \boldsymbol{A})^{-1}\boldsymbol{B} \boldsymbol{r}_i$, or from the solutions at neighbouring shifts or time-snapshots. Another important aspect is that the matrix $\boldsymbol{V}$ containing all basis vectors $\boldsymbol{v}_{ik}$ must have full rank, and should pre\-fe\-ra\-bly be orthogonal for better numerical robustness. Thus, if too many or redundant columns are available, a deflation can be performed (cf. line \ref{al:line:SVD}). Moreover, a Gram-Schmidt orthogonalization process can optionally be employed. \vspace{-0.2em} \subsection{Analysis and Discussion} In this section, a discussion about the proposed simplifications and the presented simulation-free algorithm is given.
Firstly, it is important to note that the use of a linear projection represents a special case of the most general nonlinear projection framework, or the polynomial expansion-based ansatz proposed in \cite{krener1992construction,huang2004nonlinear} and used in \cite{scarciotti2017data}. In fact, applying a more sophisticated projection ansatz with basis functions customized for the nonlinear system at hand seems promising for future research. Interestingly, this polynomial ansatz also seems to be linked to the Volterra series representation often used for the reduction of special nonlinear system classes \cite{rugh1981nonlinear,breiten2013interpolatory,cruz2018nonlinear}. Nevertheless, a linear projection might be sufficient in certain cases and its use is motivated here by its simplicity and its frequent and successful employment in nonlinear MOR. Secondly, it is emphasized again that the choice of the signal generator is crucial for the quality of the reduced model. Hence, it should be selected according to the nonlinear system to be reduced. The validity of the special linear signal generator for characterizing nonlinear systems is questionable and not completely clear yet, but it has been shown that this type of signal generator (together with a linear projection) is also implicitly applied for the reduction of bilinear and quadratic bilinear systems \cite{cruz2018nonlinear}. \section{Numerical Example} The efficiency of the proposed simulation-free nonlinear moment matching algorithm is illustrated by means of the FitzHugh-Nagumo (FHN) benchmark model from \cite{chaturantabut2010nonlinear}. This model describes the activation and deactivation dynamics of a spiking neuron. A spatial discretization of the underlying coupled nonlinear PDE into $\ell$ elements yields a model of $n \!\!=\!\! 2 \ell$ degrees of freedom.
The model equation is given by \vspace{-0.2em} \begin{equation} \label{eq:FHN} \begin{aligned} \boldsymbol{E} \, \dot{\boldsymbol{x}}(t) &= \overbrace{\boldsymbol{A} \, \boldsymbol{x}(t) + \boldsymbol{\tilde{f}}\big(\boldsymbol{x}(t)\big) + \boldsymbol{B} \, \boldsymbol{u}(t)}^{\boldsymbol{f}\left(\boldsymbol{x}(t), \boldsymbol{u}(t)\right)}, \\ \boldsymbol{y}(t) &= \underbrace{\boldsymbol{C} \, \boldsymbol{x}(t)}_{\boldsymbol{h}\left(\boldsymbol{x}(t)\right)}, \end{aligned} \end{equation} with a cubic nonlinea\-ri\-ty $\tilde{f}(v_\ell) \!=\! v_\ell(v_\ell - 0.1)(1 - v_\ell)$ and $\boldsymbol{x} \!\!=\!\! \left[\boldsymbol{v}^{\mathsf T}, \boldsymbol{w}^{\mathsf T}\right]^{\mathsf T}$. The state variables $v_\ell$ and $w_\ell$ represent the voltage and recovery voltage at each spatial element. The model is input-affine with $\boldsymbol{u}(t) \!=\! \left[i_0(t), \, 1\right]^{\mathsf T}$, where $i_0(t) \!=\! 5 \cdot 10^4 \, t^3 \, \mathrm{e}^{-15 t}$ denotes the electric current excitation. The outputs are chosen at the left boundary ($z\!=\!0$) via the output matrix $\boldsymbol{C}$, i.e. $\boldsymbol{y}(t) \!=\! \left[v_1(t), w_1(t)\right]^{\mathsf T}$. Please note that $\boldsymbol{E} \neq \boldsymbol{\mathrm{I}}$. Since the matrix $\boldsymbol{E}$ in this example is, however, diagonal, it can be efficiently moved to the right-hand side via its inverse $\boldsymbol{E}^{-1}$ to obtain the explicit representation \eqref{eq:nonlin-FOM} with $\boldsymbol{E} = \boldsymbol{\mathrm{I}}$. Note that, for systems with more general, regular $\boldsymbol{E}$, it is advisable to apply the reduction directly to the implicit state-space representation instead of using the inverse. To this end, Algorithm \ref{alg:nlmm} can be extended in a straightforward manner. For the application of Algorithm \ref{alg:nlmm}, a single signal generator ($r \!=\! 1$) with $K \!=\! 41$ equidistant time-snapshots in the interval $\left[0, 5\right]$ is considered.
The following linear signal generator $\dot{x}_{\mathrm{r}}^v(t) \!=\! x_{\mathrm{r}}^v(t) + 0.3$, $\boldsymbol{u}(t) \!=\! \left[x_{\mathrm{r}}^v(t), \, 1\right]^{\mathsf T}$, $x_{\mathrm{r},0}^v \!=\! -0.29$ is chosen, since the solution of the ODE is given by $x_{\mathrm{r}}^v(t) \!=\! \mathrm{e}^{t} \, x_{\mathrm{r},0}^v + 0.3 (\mathrm{e}^{t} - 1)$. Hence, this signal represents a growing exponential function shifted along the negative $y$-axis, whose values cover the interesting value range $\left[-0.29, 1.18\right]$ of the state variables. Please note that this unstable input signal is only used to compute the projection matrix $\boldsymbol{V}_{\text{NLMM}}$ via Algorithm \ref{alg:nlmm} during the \emph{training phase}. For the \emph{test phase}, the above input with the current $i_0(t)$ has been applied. Regarding POD, the input $i_0(t)$ has been applied for \emph{both} the training and test phase. This means that $\boldsymbol{V}_{\text{POD}}$ has been constructed and tested with the very same input signal. This rather unfair scenario has been chosen intentionally to assess the potential of NLMM. The numerical results of this scenario are quantitatively summarized in Table \ref{tab:results} and exemplarily illustrated for $r_{\mathrm{defl}} \!=\! 22$ in Fig. \ref{fig:num-results}. \begin{table}[h] \centering \caption{Numerical comparison between POD and NLMM} \begin{tabular}{c c c c} \toprule FHN ($\ell \!=\! 1000 $) & red. time & sim. time & rel. $\mathcal{L}_1$ error norm \\ \midrule FOM ($n \!=\! 2000 $) & - & \unit[382.16]{s} & - \\ \midrule \midrule POD ($r_{\mathrm{defl}} \!=\! 22$) & \unit[382.25]{s} & \unit[28.29]{s} & $1.03 \, \mathrm{e}^{-5}$ \\[0.3em] NLMM ($r_{\mathrm{defl}} \!=\! 22$) & \unit[46.17]{s} & \unit[28.91]{s} & $3.36 \, \mathrm{e}^{-3}$ \\[0.1em] \midrule \midrule POD ($r_{\mathrm{defl}} \!=\! 34$) & \unit[382.26]{s} & \unit[31.84]{s} & $2.17 \, \mathrm{e}^{-8}$ \\[0.3em] NLMM ($r_{\mathrm{defl}} \!=\! 
34$) & \unit[47.23]{s} & \unit[30.86]{s} & $1.83 \, \mathrm{e}^{-3}$ \\[0.1em] \bottomrule \end{tabular} \label{tab:results} \end{table} \vspace{-1.3em} \begin{figure}[h!] \begin{center} \ref{named}\\[-0.1em] \setlength\mywidth{0.18\textwidth} \setlength\myheight{0.8\mywidth} \subfloat{\centering \input{./matlab/limitCyclen2000r22.tikz} \label{fig:limitCyclen2000r22}} \subfloat{\centering \input{./matlab/outputsn2000r22.tikz} \label{fig:outputsn2000r22}} \end{center} \vspace{-1em} \caption{\footnotesize Limit cycle behavior and outputs of the FHN model for test signal $\boldsymbol{u}(t) \!=\! \left[i_0(t), \, 1\right]^{\mathsf T}$ with $i_0(t) \!=\! 5 \cdot 10^4 \, t^3 \, \mathrm{e}^{-15 t}$ \ ($r_{\mathrm{defl}} \!=\! 22$)} \label{fig:num-results} \end{figure} \vspace{-0.7em} The comparison between POD and NLMM in terms of computational effort shows that NLMM requires less time to compute the deflated basis $\boldsymbol{V}$ than POD. Note that the latter relies first on the training simulation of the FOM (using $i_0(t)$) within an \textbf{\texttt{implicitEuler}} scheme with the fixed-step size $h \!=\! \unit[0.01]{s}$, and then on a singular value decomposition (SVD) of the gained snapshot matrix. By contrast, NLMM needs to solve $K \!=\! 41$ NLSEs (using the unstable signal generator) and to perform an SVD of a smaller matrix. \\ In terms of approximation quality, both approaches yield satisfactory numerical results with moderate relative error norms between FOM and ROM using $i_0(t)$ as test signal, even though for NLMM a growing exponential input has been applied during the training phase. \section{CONCLUSIONS} In this contribution, the concept of moment matching known from linear systems is first revisited and then comprehensively explained for nonlinear systems based on \cite{astolfi2010model}. 
Then, some simplifications are proposed, yielding a ready-to-implement, simulation-free nonlinear moment matching algorithm, which relies on the solution of NLSEs rather than of a PDE. Along the way, some useful theoretical insights concerning the meaning and the importance of the chosen signal generator are provided, and the diverse strategies and numerical aspects of the proposed algorithm are discussed. All in all, it can be concluded that the signal generator, i.e. the chosen input, plays a crucial role and should characterize the nonlinear system at hand. \vspace{-0.2em} \bibliographystyle{abbrv}
\section{INTRODUCTION} Wolf-Rayet\,(WR) bubbles are expected to be filled with hot plasma at temperatures of $\sim10^{7}$--$10^{8}$~K, but previous X-ray observations of hot bubbles have shown that this plasma presents lower temperatures, of the order of $\sim$10$^{6}$~K \citep[see][]{Chu2008}. The scarcity of X-ray detections among WR nebulae is also intriguing; there are only two WR nebulae detected in diffuse X-rays: S\,308 and NGC\,6888 \citep{B1988,Wrigge1994,Wrigge1998,Wrigge1999,Wrigge2002,Chu2003,Wrigge2005,Zhekov2011,Toala2012}. These two WR bubbles share several characteristics: the X-ray-emitting plasma is confined inside optical shells where the H$\alpha$ emission presents a clumpy distribution inside an [O~{\sc iii}] shell \citep{Gruendl2000}, and both bubbles are nitrogen-rich and surround WN stars with terminal wind velocities of $\sim$1800~km~s$^{-1}$ \citep{vdH2001}. This configuration can be pictured as the WR wind sweeping up the previously ejected Red Supergiant\,(RSG) wind material while the central star photoionizes this material. NGC\,6888 has been the subject of many studies over the years since it was first reported by \citet{Sharpless1959} and associated with its central WR star, WR\,136, by \citet{JH1965}. The most recent optical study of this nebula, presented by \citet{FernandezMartin2012}, investigated the ionization, chemical composition, and kinematics in several regions within the nebula. They concluded that NGC\,6888 is composed of multiple shells, and its morphology can be interpreted as a sphere with an ellipsoidal cavity inside. The first map of the diffuse X-ray-emitting gas in NGC\,6888 was presented by \citet{B1988} using \emph{Einstein} observations; a total flux of $\sim$10$^{-12}$~erg~cm$^{-2}$~s$^{-1}$ was detected in the 0.2--3.0~keV band. \citet{Wrigge1994} analyzed \textit{ROSAT} PSPC observations and found a flux of $(1.2\pm0.5)\times$10$^{-12}$~erg~cm$^{-2}$~s$^{-1}$.
\citet{Wrigge2005} made use of the \textit{ASCA} SIS and \textit{ROSAT} PSPC observations to fit a two-temperature model ($T_{1}\sim1.3\times10^{6}$~K, $T_{2}\sim8.5\times10^{6}$~K) and measured a total observed flux of $\sim$10$^{-12}$~erg~cm$^{-2}$~s$^{-1}$. The most recent X-ray observations of NGC\,6888 are those obtained by \citet{Zhekov2011} using the \textit{Suzaku} satellite. They concluded that the spectrum indicates a relatively cool plasma with $T<5\times10^{6}$~K and a small contribution from a much hotter plasma component with temperature greater than $2\times10^{7}$~K. No appreciable temperature variations are found between the northern and southern regions of the nebula. The observed flux was reported to be $2\times10^{-12}$~erg~cm$^{-2}$~s$^{-1}$ in the 0.3--1.5~keV energy range. In this paper we present \textit{Chandra} observations of NGC\,6888. A preliminary analysis of this dataset was presented by \citet{Chu2006}, which showed the diffuse X-ray emission from the NE quadrant of the nebula (ACIS-S CCD\,S3). Here we present the analysis of the ACIS-S CCD\,S3 and CCD\,S4, covering $\sim$62\% of the nebula. The spectral properties of the X-ray-emitting plasma are compared to those derived previously by other authors using observations obtained by other X-ray facilities. The data from CCD\,S4 show an additional spatial component of X-ray emission which has not been reported in previous observations. \section{\textit{Chandra} OBSERVATIONS} The superb angular resolution and sensitivity at soft energies of \textit{Chandra}, as compared to previous satellites that have observed NGC\,6888, allow a more reliable study of the soft X-ray emission from the hot plasma in this nebula. The \textit{Chandra} observation of NGC\,6888 was performed on 2003 February 19--20 (Observation ID 3763; PI: R.A. Gruendl) using the Advanced CCD Imaging Spectrometer (ACIS-S) for a total exposure time of 92.8~ks.
The NE quadrant of NGC\,6888 was imaged on the back-illuminated ACIS-S CCD\,S3 while the western region was imaged by CCD\,S4. The \textit{Chandra} Interactive Analysis of Observations\,(CIAO) software package version 4.4, together with CALDB version 3.2.2, was used to analyze the data. Short periods of high background affected the data; after excising these periods, the resulting useful exposure time is 88.0~ks. The \textit{Chandra} ACIS-S observation detects diffuse emission from NGC\,6888 in the soft energy band below 2.0~keV. No significant emission is detected above this energy limit. The total background-subtracted count rates of the diffuse X-ray emission for CCD\,S3 and CCD\,S4 are 0.160 and 0.053 cnts~s$^{-1}$, respectively. \begin{figure}[!t] \begin{center} \includegraphics[angle=0,width=1.0\linewidth]{fig1.eps} \caption{\textit{Chandra} ACIS-S image of the diffuse X-ray emission of NGC\,6888 in the 0.3--2.0~keV band. Point sources have been excised from this image. The regions used for spectral analysis are indicated with polygonal apertures: green, red, and blue solid lines correspond to source regions, and black dashed lines to background regions.} \end{center} \label{fig:ngc6888_diffuse} \end{figure} \section{SPATIAL DISTRIBUTION OF THE DIFFUSE X-RAY EMISSION} \begin{figure*}[!htbp] \begin{center} \includegraphics[angle=0,height=0.5\linewidth]{fig2_astro_ph.eps} \caption{Left: Composite color picture of the \textit{Chandra} ACIS-S observation of NGC\,6888 (blue) and MLO H$\alpha$ (red) and [O~{\sc iii}] (green) images. Right: Same as in the \textit{left} panel, but with the \textit{ROSAT} PSPC image (blue). North is up, East to the left.} \end{center} \label{fig:ngc6888} \end{figure*} In order to analyze the spatial distribution of the hot gas in NGC\,6888, we excised all point sources from the observation using the CIAO \textit{dmfilth} routine. The identification of the point sources was made using the CIAO \textit{wavdetect} routine.
The image of the diffuse X-ray emission was extracted in the 0.3--2.0~keV energy band and smoothed with the CIAO task \textit{csmooth}, with a Gaussian kernel of 4$''$ in the brightest regions and 16$''$ and 24$''$ in the faintest ones for the CCD\,S3 and S4, respectively. The resultant image is shown in Figure~1. We compare in Figure~2-left the X-ray image with H$\alpha$ and [O\,{\sc iii}] optical images of the nebula taken with the 1~m telescope at the Mount Laguna Observatory \citep{Gruendl2000}. This figure shows a limb-brightened spatial distribution of the X-ray emission confined within the optical [O\,{\sc iii}] shell. In particular, the X-ray-emitting gas in the NE region of the nebula (ACIS-S CCD\,S3) seems to fill the entire area within the nebula with a broad emission peak superposed on the H$\alpha$ clumps, while the emission detected in ACIS-S CCD\,S4 can be associated with the southwest cap of the nebula, and with emission outside the H$\alpha$ shell but inside the western [O~{\sc iii}] skin. For comparison, we also present in Figure~2-right a composite picture of the same optical images and the X-ray emission detected by \textit{ROSAT} PSPC \citep[][]{Wrigge1994,Wrigge2005}. This image demonstrates that the X-ray emission from NGC\,6888 is stronger at the caps along the major axis, but an extra contribution can be detected at the westernmost regions of the nebula, just inside the optical [O~{\sc iii}] shell. This additional spatial component of X-ray emission, hinted at in previous images of the nebula made with \textit{ROSAT} HRI and \textit{ASCA} \citep{Wrigge1994,Wrigge2002,Wrigge2005}, is reminiscent of the Northwest blowout in S\,308 \citep{Chu2003,Toala2012}. \section{PHYSICAL PROPERTIES OF THE HOT GAS IN NGC\,6888} We have carried out the study of the hot gas of NGC\,6888 in several steps.
First, we have studied the emission from the nebular gas detected in CCD\,S3 and CCD\,S4, keeping in mind that the former has better spectral resolution and sensitivity at lower energies than the latter. Therefore, we have defined two regions encompassing the diffuse X-ray emission registered in the field of view of the \emph{Chandra} ACIS-S CCD\,S3 and S4 (green polygonal lines in Figure~1). For further analysis, we have defined several smaller polygonal aperture regions, also shown in Figure~1, corresponding to different features present in NGC\,6888: regions labeled as A comprise the apparent shell and caps, and regions labeled with B correspond to the shell interior. We note that both regions are present within each CCD detector, and thus we have extracted two spectra corresponding to each morphological feature. For example, A1 corresponds to a region extracted from CCD\,S3 and A2 to a region extracted from CCD\,S4. The same applies to regions B1 and B2. \subsection{Spectral Properties} As discussed in \citet{Toala2012}, the extraction of spectra from extended sources, as is the case of WR bubbles, is challenging because the emission fills almost the entire field of view of the instrument. The background contribution can be estimated from high signal-to-noise blank fields, but as mentioned by \citet{Toala2012}, this technique does not produce suitable results because WR bubbles are located in regions close to the Galactic Plane where extinction and background emission are significant \citep{Snowden1997}. To show the contribution of the Galactic background, we plot in Figure~3 the background-unsubtracted spectrum from CCD\,S3 and a background spectrum extracted from the edge of the camera. X-ray emission from the background is soft and shows lines in the 0.3--1.0~keV energy band from thermal components \citep[see also figure~5 in][for the case of S\,308]{Toala2012}. 
The spectral shape of the background emission certainly differs from that derived from ACIS blank-field observations. Therefore, the most feasible procedure to subtract the background contribution is the use of background spectra extracted from areas near the camera edges, even though the instrumental responses for sources and background regions do not completely match each other. \begin{figure}[t] \begin{center} \includegraphics[angle=0,width=1.05\linewidth]{fig3.eps} \caption{ Comparison of the raw CCD\,S3 spectrum (black circles) and scaled background spectrum extracted from the edges of CCD\,S3 (red crosses). The emission lines around 2~keV in the background spectrum are instrumental lines. } \end{center} \label{fig:spec_raw} \end{figure} The individual background-subtracted spectra of the diffuse emission of the NE quadrant and western region of NGC\,6888 (namely CCD\,S3 and CCD\,S4) are presented in Figure~4, as well as the individual spectra extracted from regions A1, A2, B1, and B2. All spectra were extracted using the CIAO task \textit{specextract}, which generates the source and background spectra and their corresponding calibration files. The most notable differences in the spectral shapes are attributed to the differences in sensitivity of the ACIS-S CCD\,S3 and CCD\,S4. All spectra are soft and show two main peaks, a narrow peak at 0.5~keV and a broader peak around 0.7--0.9~keV. The feature around 0.5~keV can be identified with the N~{\sc vii} ion, while the feature around $\sim$0.7--0.9~keV can be associated with the Fe complex and Ne lines. Above 1.0 keV, the emission declines and diminishes at energies $\simeq$1.5 keV. We note that the spectra extracted from CCD\,S4 show the instrumental Au M complex at 2.2~keV, which has not been properly removed due to the reduced spatial extent of the background region. 
\begin{figure*}[t] \begin{center} \includegraphics[angle=0,width=0.33\linewidth]{fig4a.eps}~ \includegraphics[angle=0,width=0.33\linewidth]{fig4b.eps}~ \includegraphics[angle=0,width=0.33\linewidth]{fig4c.eps}\\ \includegraphics[angle=0,width=0.33\linewidth]{fig4d.eps}~ \includegraphics[angle=0,width=0.33\linewidth]{fig4e.eps}~ \includegraphics[angle=0,width=0.33\linewidth]{fig4f.eps} \caption{Background-subtracted \textit{Chandra} ACIS-S spectra of the NE (top panels) and western (bottom panels) regions of NGC\,6888 over-plotted with their best-fit two-temperature \textit{apec} model (solid lines) in the energy range of $0.3-3$~keV. } \end{center} \label{fig:spec_all} \end{figure*} In accordance with the spectral properties and previous spectral fits, all X-ray spectra from NGC\,6888 have been fit with XSPEC v12.7.0 \citep{Arnaud1996} using an absorbed two-temperature \textit{apec} optically thin plasma emission model with an absorption model using \citet{Balu1992} cross-sections. A low temperature component is used to model the bulk of the X-ray emission, while a high temperature component is added to model the extra emission at and above $\sim0.7$~keV. As in previous studies of the X-ray emission from WR bubbles \citep[see][]{Chu2003,Zhekov2011,Toala2012}, we have initially adopted nebular abundances for the X-ray-emitting plasma. In particular, we have used abundance values for N, O, and Ne of 3.2, 0.41, and 0.85 times the solar values \citep{Anders1989} as averaged from regions X1 and X2 described in \citet[][see their Table~4]{FernandezMartin2012}, and 0.39 times the solar value for S \citep{Moore2000}. Models with variable C, Mg, Fe, and Ne abundances were also tested. We found that the fitted abundances of Mg and Fe converged to solar values, whereas those of Ne tended to 0.85 times the solar value, i.e., the value determined from optical spectrophotometry \citep{FernandezMartin2012}. 
Consequently, we decided to fix the abundances of Mg, Fe, and Ne to these values. As for the carbon abundance, the fits could not converge to specific values because the C~{\sc vi} line at 0.37~keV or C~{\sc v} triplet at 0.3~keV are in the low energy range, where absorption is high and the instrument sensitivity is low. Therefore, we fixed the value of the carbon abundance to its solar value. Finally, all spectra were modeled with varying nitrogen abundance ($\mathrm{X_{N}}$) as the prominent N~{\sc vii} line at 0.5~keV seems to suggest a possible nitrogen enrichment of the X-ray-emitting plasma. The simulated two-temperature \textit{apec} model spectra obtained were absorbed by the interstellar hydrogen column of 3.13$\times10^{21}$~cm$^{-2}$ implied by optical measurements \citep{Hamann1994}. This is the same value used by previous authors \citep{Wrigge1994,Wrigge2005} which was found to be uniform throughout NGC\,6888 by \citet{Wendker1975}. Models with variable column density $N_{\mathrm{H}}$ were also attempted. The fitted absorption column density showed a correlation with the temperature of the main plasma component ($T_{1}$), with values in the range 2--4$\times10^{21}$~cm$^{-2}$, consistent with the value adopted here and with that used by \citet{Zhekov2011}. The resultant model spectra were compared with the observed spectra in the 0.3--3~keV energy range and the $\chi^{2}$ statistic was used to determine the best-fit models. A minimum of 60 counts per bin was required for the spectral fit. The plasma temperatures ($kT_{1}$, $kT_{2}$) with 1$\sigma$ uncertainties, normalization factors\footnote{$A = 1 \times10^{-14}\int n_{\mathrm{e}} n_{\mathrm{H}} dV/ 4 \pi d^{2}$, where $d$ is the distance, $n_{\mathrm{e}}$ is the electron density, and $V$ the volume in cgs units.} ($A_{1}$, $A_{2}$), and nitrogen abundance ($X_{\mathrm{N}}$) of the best-fit models are listed in Table 1. 
Fluxes and luminosities listed in this table have been computed for the energy range 0.3--2.0~keV. The best-fit models are over-plotted on the background-subtracted spectra, together with the residuals of the fits, as solid lines in Figure~4. \subsubsection{Properties of the NE X-ray Emission} The parameters of the best-fit model of the NE quadrant of NGC\,6888 are listed in Table\,1 as CCD\,S3. The model presents a low-temperature component of $1.6\times10^{6}$~K and a second component of $7.8\times10^{6}$~K with an observed flux ratio, $f_{1}/f_{2}\sim3$, corresponding to an intrinsic flux ratio $F_{1}/F_{2}\sim14$. The total observed flux in the 0.3--2~keV energy band is $(6.4^{+0.1}_{-0.2}) \times10^{-13}$~erg~cm$^{-2}$~s$^{-1}$ while the total unabsorbed flux is $(8.0^{+0.4}_{-0.2})\times10^{-12}$~erg~cm$^{-2}$~s$^{-1}$. The nitrogen abundance of the best-fit model is $\approx$5.3 times the solar value. In the case of the resultant spectra from regions A1 and B1, the temperatures are consistent with those obtained from the whole region: $T_{1}=1.6\times10^{6}$~K and $T_{2}=7.9\times10^{6}$~K for region A1, and $T_{1}=1.4\times10^{6}$~K and $T_{2}=7.7\times10^{6}$~K for region B1. Their nitrogen abundances are 5.3 and 3.7 times the solar value for A1 and B1, respectively. \subsubsection{Properties of the Western X-ray Emission} The background-subtracted X-ray spectra from the western regions of NGC\,6888 are shown in the lower panels of Figure~4, which correspond to the total diffuse emission detected by the CCD\,S4 (left), the emission from the rim registered by region A2 (middle), and the emission from inside the WR nebula registered by region B2 (right). The parameters of the best-fit models over-plotted on these spectra are presented in Table~1. 
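The CCD\,S3 flux ratios quoted above can be recovered directly from the per-component fluxes listed in Table~1; a small cross-check (values copied from the table):

```python
# Cross-check of the CCD S3 flux ratios against Table 1 (0.3-2.0 keV band).
f1_obs, f2_obs = 4.7e-13, 1.7e-13       # observed (absorbed) component fluxes
F1_unabs, F2_unabs = 7.5e-12, 5.2e-13   # unabsorbed component fluxes

obs_ratio = f1_obs / f2_obs             # ~2.8, i.e. f1/f2 ~ 3
unabs_ratio = F1_unabs / F2_unabs       # ~14.4, the F1/F2 value in Table 1
```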
We remark that the fits of these spectra failed to accurately constrain the nitrogen abundance\footnote{At first glance, this may seem perplexing because the CCD\,S4 and B2 spectra have larger total count numbers than the B1 spectrum from CCD\,S3 which could be used instead to derive the nitrogen abundance. The cause of this apparent conflict originates in the reduced sensitivity of the front-illuminated\,(FI) CCD\,S4 at $\sim$0.5 keV, the energy of the N~{\sc vii} line, which is a few times smaller than that of the back-illuminated\,(BI) CCD\,S3. This is clearly illustrated by the differing shapes of the spectra detected by the BI CCD\,S3 and the FI CCD\,S4 (Figure~4). As a result, the FI CCD\,S4 detects a count number at the N~{\sc vii} line which is insufficient for a reliable estimate of the nitrogen abundance.}, and therefore we fixed its value to that found for the spectrum of the CCD\,S3. The model of the X-ray-emitting plasma detected in the western regions of NGC\,6888 (the ACIS CCD\,S4 spectrum) has a lower dominant temperature of 1.2$\times$10$^6$~K with a second component of 7.4$\times$10$^6$~K. The total observed flux is $(8.8\pm0.8)\times10^{-13}$~erg~cm$^{-2}$~s$^{-1}$, while the unabsorbed flux is $(1.5\pm0.3)\times10^{-11}$~erg~cm$^{-2}$~s$^{-1}$, with an unabsorbed flux ratio of $F_{1}/F_{2}\sim31$. The resultant fits for regions A2 and B2 are consistent with that obtained for the total spectrum extracted from CCD\,S4: $T_{1}=1.2\times10^{6}$~K and $T_{2}=7.4\times10^{6}$~K for region A2, and $T_{1}=1.4\times10^{6}$~K and $T_{2}=7.5\times10^{6}$~K for region B2\footnote{Due to the low count number of the B2 spectrum, the temperature of the hottest component in this region could not be fitted. Consequently, the temperature of this component was fixed at 0.65~keV.}. The unabsorbed flux ratios are $F_{1}/F_{2}\sim$35 and $\sim26$ for A2 and B2, respectively. 
\subsection{Global Properties of the Hot Gas in NGC\,6888} To assess the global properties of the hot gas in NGC\,6888, we have derived the parameters of the two-temperature plasma emission model that best describes the total X-ray emission detected by \emph{Chandra}. The CCD\,S3 and CCD\,S4 spectra have been fitted simultaneously using the same hydrogen column density and plasma parameters ($kT_1$, $kT_2$, $X_\mathrm{N}$). The results of this joint fit are given in Table~1 as CCD S34. The normalization factors of the cold thermal component of each spectrum ($A_\mathrm{1}^\mathrm{S3}$, $A_\mathrm{1}^\mathrm{S4}$) were allowed to vary independently to account for the different volume emission measure of hot gas mapped by each detector. The ratio between the normalization factors of the cold and hot components was also allowed to vary, but it was kept the same for both spectra ($A_\mathrm{2}^\mathrm{S3}$/$A_\mathrm{1}^\mathrm{S3}$ $\equiv$ $A_\mathrm{2}^\mathrm{S4}$/$A_\mathrm{1}^\mathrm{S4}$ $\equiv$ $A_\mathrm{2}^\mathrm{S34}/A_\mathrm{1}^\mathrm{S34}$), i.e., it was assumed that the relative contribution of both components of the X-ray-emitting plasma is the same across the nebula. The values of the normalization factors listed for this fit have been obtained by adding the normalization factor of each component for the CCD S3 and CCD S4 spectra ($A_\mathrm{1}^\mathrm{S34}=A_\mathrm{1}^\mathrm{S3}+A_\mathrm{1}^\mathrm{S4}$, $A_\mathrm{2}^\mathrm{S34}=A_\mathrm{2}^\mathrm{S3}+A_\mathrm{2}^\mathrm{S4}$). The temperatures of the two components of this model, $T_1=$1.4$\times$10$^6$~K and $T_2=$7.4$\times$10$^6$~K, lie between those obtained for the CCD\,S3 and S4 spectra. For comparison, we show in Figure~5 the temperatures obtained from our fits for the different regions. Accounting for the uncertainties, there is good agreement between the temperature components of the different regions. 
\begin{figure}[t] \begin{center} \includegraphics[angle=0,width=1.05\linewidth]{fig5.eps} \caption{ Plot of the temperatures of the cold and hot plasma components for the different spatial regions defined for the analysis of NGC\,6888. The inset shows a chart of the correspondence between symbols and spatial regions, where the star marks the location of the temperatures derived from the joint fit CCD\,S34. } \end{center} \label{fig:contornos} \end{figure} The total observed flux is $f_{\mathrm{X}}=(1.5\pm0.2)\times10^{-12}$~erg~cm$^{-2}$~s$^{-1}$, which corresponds to an unabsorbed flux of $F_{\mathrm{X}}=(2.5\pm0.3)\times10^{-11}$~erg~cm$^{-2}$~s$^{-1}$. The comparison between \textit{ROSAT} PSPC and \textit{Chandra} ACIS images indicates that the latter includes $\simeq62$~\% of the total X-ray flux of NGC\,6888. With this we can estimate a total X-ray intrinsic flux of $F_{\mathrm{X,TOT}}=(4.05\pm0.5)\times10^{-11}$~erg~cm$^{-2}$~s$^{-1}$. Adopting a distance of 1.26~kpc, the total X-ray luminosity of NGC\,6888 in the 0.3-2.0 keV energy range is $L_{\mathrm{X}}=(7.7\pm0.1)\times10^{33}$~erg~s$^{-1}$. 
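The quoted luminosity follows from the standard relation $L_{\mathrm{X}}=4\pi d^{2}F_{\mathrm{X,TOT}}$ at the adopted distance; a minimal sketch of the arithmetic:

```python
import math

# Luminosity from the total intrinsic flux and the adopted 1.26 kpc distance:
# L_X = 4 * pi * d^2 * F_X,TOT. KPC_CM is the standard cgs conversion.
KPC_CM = 3.0857e21                  # cm per kpc

F_tot = 4.05e-11                    # total intrinsic flux, erg/cm2/s (0.3-2.0 keV)
d = 1.26 * KPC_CM                   # distance in cm

L_x = 4.0 * math.pi * d**2 * F_tot  # ~7.7e33 erg/s, as quoted in the text
```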
\begin{sidewaystable} \centering \caption{Spectral Fits of the Diffuse X-ray Emission from NGC\,6888} \label{tab:spectral} \scriptsize \begin{tabular}{lcccccccccccl} \hline \hline Region & Counts & $X_{\mathrm{N}}$ & $kT_{1}$ & $A_{1}$ & $f_{1}$ &$F_{1}$ & $kT_{2}$ & $A_{2}$ & $f_{2}$ & $F_{2}$ & $F_{1}/F_{2}$ & $\chi^{2}$/DoF \\ & & & (keV) &(cm$^{-5}$) &(erg~cm$^{-2}$~s$^{-1}$) &(erg~cm$^{-2}$~s$^{-1}$) &(keV) & (cm$^{-5}$) &(erg~cm$^{-2}$~s$^{-1}$)& (erg~cm$^{-2}$~s$^{-1}$) & \\ \hline CCD\,S3 & 14000$\pm$250 & 5.3$^{+0.7}_{-0.7}$& 0.140$^{+0.003}_{-0.003}$ & 6.1$\times10^{-3}$& 4.7$\times10^{-13}$& 7.5$\times10^{-12}$ & 0.67$^{+0.04}_{-0.03}$ & 1.7$\times10^{-4}$& 1.7$\times10^{-13}$ & 5.2$\times10^{-13}$ & 14.4 & 170.3/109=1.5 \\ A1 & 10250$\pm$200 & 5.3$^{+0.9}_{-0.9}$& 0.140$^{+0.008}_{-0.005}$ & 3.9$\times10^{-3}$& 3.2$\times10^{-13}$& 5.0$\times10^{-12}$ & 0.68$^{+0.02}_{-0.05}$ & 1.2$\times10^{-4}$& 1.2$\times10^{-13}$ & 3.7$\times10^{-13}$ & 13.6 & 173.5/105=1.6 \\ B1 & 3300 $\pm$120 & 3.7$^{+1.3}_{-1.0}$ & 0.117$^{+0.012}_{-0.007}$ & 3.2$\times10^{-3}$& 1.2$\times10^{-13}$& 2.5$\times10^{-12}$ & 0.66$^{+0.08}_{-0.08}$ & 3.9$\times10^{-5}$& 3.9$\times10^{-14}$ & 1.2$\times10^{-13}$ & 20.7 & 93.0/89=1.1 \\ \hline CCD\,S4 & 4700$\pm$350 & 5.0 & 0.101$^{+0.020}_{-0.008}$ & 2.7$\times10^{-2}$& 7.0$\times10^{-13}$& 1.5$\times10^{-11}$ & 0.64$^{+0.07}_{-0.07}$ & 1.8$\times10^{-4}$& 1.8$\times10^{-13}$ & 4.7$\times10^{-13}$ & 30.9 & 136.6/90=1.5 \\ A2 & 3200$\pm$150 & 5.0 & 0.100$^{+0.018}_{-0.008}$ & 2.2$\times10^{-2}$& 5.5$\times10^{-13}$& 1.4$\times10^{-11}$ & 0.64$^{+0.08}_{-0.08}$ & 1.4$\times10^{-4}$& 1.3$\times10^{-13}$ & 4.2$\times10^{-13}$ & 34.5 & 115.9/59=1.9\\ B2 & 1550$\pm$100 & 5.0 & 0.118$^{+0.043}_{-0.028}$ & 3.4$\times10^{-3}$& 1.5$\times10^{-13}$& 3.1$\times10^{-12}$ & 0.65 & 3.8$\times10^{-5}$& 3.8$\times10^{-14}$ & 1.2$\times10^{-13}$ & 25.7 & 20.13/31=0.6 \\ \hline CCD\,S34 & \dots & 4.0$^{+0.5}_{-0.6}$& 0.118$^{+0.005}_{-0.004}$ & 
3.1$\times10^{-2}$& 1.1$\times10^{-12}$& 2.4$\times10^{-11}$ & 0.64$^{+0.04}_{-0.03}$ & 3.6$\times10^{-4}$& 3.5$\times10^{-13}$ & 1.1$\times10^{-12}$ & 22.2 & 308.1/196=1.5\\ \hline \hline \end{tabular} \end{sidewaystable} To proceed to the calculation of the electron density and mass of the X-ray-emitting plasma in NGC\,6888, we need to adopt a geometrical model for the nebula in order to estimate the volume occupied by the hot plasma. It is tempting to assume an ellipsoidal geometry, as suggested by the H$\alpha$ images; however, the \emph{Chandra} ACIS-S and \emph{ROSAT} PSPC observations of NGC\,6888 have disclosed emission external to the H$\alpha$ shell, just inside the [O~{\sc iii}] skin (see Figure~2), which implies a different physical structure. \citet{FernandezMartin2012} describe a simple morphology for NGC\,6888 where an ellipsoidal cavity has been carved inside the almost spherical outer optical shell. We can estimate lower and upper limits of the electron density of the X-ray-emitting gas by adopting spherical and ellipsoidal geometries, respectively. For the spherical model, we have adopted a radius of 9\arcmin\ to obtain an rms electron density of $n_{\mathrm{e}}=0.4 (\epsilon / 0.1)^{-1/2}$~cm$^{-3}$, implying a mass of the X-ray-emitting gas of $m_{\mathrm{X}} = 1.7 (\epsilon / 0.1)^{1/2}$~$M_{\odot}$, where $\epsilon$ is the gas filling factor. For the ellipsoidal case, we have assumed semiaxes of 9\arcmin, 6\arcmin, and 6\arcmin\ to obtain an rms electron density of $n_{\mathrm{e}}=0.6 (\epsilon / 0.1)^{-1/2}$~cm$^{-3}$ and a mass of the X-ray-emitting gas of $m_{\mathrm{X}} = 1.2 (\epsilon / 0.1)^{1/2}$~$M_{\odot}$. \section{DISCUSSION} \subsection{Comparison with previous X-ray studies} All previous X-ray analyses of the diffuse X-ray emission from NGC\,6888 agree on the presence of a main plasma component with a temperature $\simeq$1.4$\times$10$^6$ K, consistent with that reported in our analysis of the \emph{Chandra} ACIS data. 
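The spherical-geometry estimates above can be roughly reproduced from the joint-fit cool-component normalization in Table~1 and the \textit{apec} normalization definition given earlier. The sketch below is our own order-of-magnitude bookkeeping, not the paper's calculation: it assumes $n_{\mathrm{e}}\approx n_{\mathrm{H}}$, a mean mass of $\sim$1.3~$m_{\mathrm{H}}$ per electron, and quotes the rms density over the full spherical volume; the paper's $(\epsilon/0.1)^{\pm1/2}$ filling-factor scaling is not reproduced here.

```python
import math

# Order-of-magnitude check of the spherical-case density and mass.
# A1 is the cool-component normalization of the joint fit (Table 1, CCD S34);
# n_e ~ n_H and ~1.3 m_H of mass per electron are our own assumptions.
KPC_CM = 3.0857e21        # cm per kpc
ARCSEC_RAD = 4.8481e-6    # radians per arcsec
M_H = 1.6726e-24          # hydrogen mass, g
M_SUN = 1.989e33          # solar mass, g

d = 1.26 * KPC_CM
A1 = 3.1e-2               # cm^-5; apec norm = 1e-14 * EM / (4 pi d^2)
EM = A1 * 4.0 * math.pi * d**2 * 1.0e14   # emission measure, int n_e n_H dV

r = 9.0 * 60.0 * ARCSEC_RAD * d           # 9 arcmin at 1.26 kpc, in cm
V = 4.0 / 3.0 * math.pi * r**3            # spherical volume, cm^3

n_rms = math.sqrt(EM / V)                 # ~0.4 cm^-3
m_x = n_rms * V * 1.3 * M_H / M_SUN       # ~1.7 M_sun
```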
As for the hot component, there are notable discrepancies: \citet{Wrigge1994} did not find evidence of this hot component using \emph{ROSAT} PSPC data, \citet{Wrigge2005} found a hot component with a temperature marginally higher ($\simeq$13\%) than ours using \emph{ASCA} SIS observations, and \citet{Zhekov2011} reported a second temperature component significantly hotter than ours, $kT\geqslant2$~keV, using \emph{Suzaku} XIS data. The lack of a hot component in the \emph{ROSAT} PSPC data can be attributed to the low sensitivity of this instrument to energies above 1.0 keV. On the other hand, the high temperature for the secondary component reported by \citet{Zhekov2011} originated from the high level of X-ray emission in the range 1.5-4.0 keV found in the \emph{Suzaku} XIS data. The lack of such a hard component in the spectra derived from the \emph{Chandra} ACIS observations is in sharp contrast with the \emph{Suzaku} XIS data. We present in Figure~6 the combined spectrum of point sources projected onto the nebula in the ACIS-S3 detector. The spectrum of the point sources is clearly harder than that of the nebula and shows significant emission above 0.8 keV up to 5-7 keV. Its count rate is 0.0188$\pm$0.0005 cnts~s$^{-1}$, i.e., $\sim$12\% that of the nebular ACIS-S3 region. This emission can be formally fitted using an absorbed model\footnote{ For consistency with the spectral fit of the nebular emission from NGC\,6888, we have adopted a hydrogen column density of 3.1$\times$10$^{21}$ cm$^{-2}$. } comprising an optically-thin plasma emission component at a temperature of (7.7$\pm$1.0)$\times$10$^6$ K and a power-law component with photon index $\Gamma$=1.16$\pm$0.08 for a reduced $\chi^2$ of 62.30/63=0.99. The flux of this emission in the 0.3--2.0 keV band is (4.1$\pm$0.3)$\times$10$^{-14}$ erg~cm$^{-2}$~s$^{-1}$, i.e., it accounts for 6\% of the nebular flux measured in the ACIS-S3 spectrum in this band. 
The total flux of this component in the 0.3--9.0 keV band is (2.46$\pm$0.24)$\times$10$^{-13}$ erg~cm$^{-2}$~s$^{-1}$. These results demonstrate that the hard, 1.5-4.0 keV component detected by \emph{Suzaku} is an observational artifact caused by its limited spatial resolution ($\sim$2\arcmin), which makes it difficult to identify point sources in the field of view of NGC\,6888 and to excise their contribution from the diffuse emission. \begin{figure}[t] \begin{center} \includegraphics[angle=0,width=1.05\linewidth]{fig6.eps} \caption{(Top panel) Background-subtracted spectrum of point sources projected onto the nebula NGC\,6888 in the \textit{Chandra} ACIS-S3 detector overplotted with the best-fit model (black histogram) consisting of an absorbed \textit{apec} (red histogram) and a power law (blue histogram) model. See text for details. (Bottom panel) Residuals of the spectral fit.} \end{center} \label{fig:spec_ps} \end{figure} The first observations of the X-ray emission toward NGC\,6888 reported absorbed fluxes $\sim$10$^{-12}$~erg~cm$^{-2}$~s$^{-1}$ \citep[e.g.,][]{B1988,Wrigge1994,Wrigge2005} that were raised to $\sim$2$\times10^{-12}$~erg~cm$^{-2}$~s$^{-1}$ by the more recent \emph{Suzaku} observations \citep{Zhekov2011}. Our estimate for the observed flux ($2.4\times10^{-12}$~erg~cm$^{-2}$~s$^{-1}$) is in good agreement with the latest measurements even though a fraction of the nebula was not registered by the ACIS-S detectors and we had to rely on \emph{ROSAT} PSPC observations to estimate the total flux. Sensitive observations with a large field of view, such as those that would be provided by \emph{XMM-Newton}, are sorely needed to search for the fainter and more extended X-ray emission in this WR nebula. 
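The point-source contamination fractions quoted above follow directly from the count rates and soft-band fluxes given in the text; a minimal cross-check:

```python
# Point-source contamination relative to the nebular ACIS-S3 region.
# All numbers are taken from the text (0.3-2.0 keV band for the fluxes).
rate_ps, rate_neb = 0.0188, 0.160      # cnts/s: point sources vs nebula
flux_ps, flux_neb = 4.1e-14, 6.4e-13   # observed fluxes, erg/cm2/s

rate_frac = 100.0 * rate_ps / rate_neb  # ~12 per cent in count rate
flux_frac = 100.0 * flux_ps / flux_neb  # ~6 per cent in soft-band flux
```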
Our \emph{Chandra} ACIS-S observation yielded the detection of three peaks in the spatial distribution of the diffuse X-ray emission in NGC\,6888: two associated with the caps, and another one toward the NW blowout at the western edge of the ACIS-S field of view in CCD\,S4. The latter peak in the X-ray emission is hinted at in \emph{ROSAT} PSPC and HRI, and \emph{ASCA} SIS observations \citep{Wrigge1994,Wrigge2002,Wrigge2005}, but it has not been previously reported to be part of the X-ray-emitting gas associated with the nebula\footnote{The \emph{Suzaku} observations did not map this nebular region, which was not registered by its detectors. }. This additional emission component is located in a region with no H$\alpha$ counterpart \citep[see Figure~2;][]{Gruendl2000}, but it is spatially delineated by the [O~{\sc iii}] outer shell. The situation is reminiscent of the northwest blowout in S\,308, which can be ascribed to the action of the hot gas carving a cavity towards a low density region of the circumstellar medium \citep[CSM;][]{Chu2003,Toala2012}. In contrast to the limb-brightened morphology of the X-ray-emitting gas reported for S\,308 \citep{Toala2012}, the current (and previous) X-ray observations of NGC\,6888 do not show such a simple morphology, but rather show that the hot gas is distributed in (at least) three maxima. The comparison between infrared images of NGC\,6888 and the \emph{ROSAT} PSPC image of the X-ray-emitting gas (see Figure~7) is suggestive of a correlation between the regions where the X-ray emission is faintest, toward the southeast region of NGC\,6888, and a molecular filament traced at infrared wavelengths. Spatial variations of the amount of intervening material across the nebula, not accounted for so far, may be playing an important role in the X-ray morphology of NGC\,6888. 
Finally, it is worth mentioning here that the apparent nitrogen overabundances are still consistent with the values reported by \citet{FernandezMartin2012} in their region X1. This region is spatially coincident with one of the brightest clumps in the X-ray-emitting region. \begin{figure}[t] \begin{center} \includegraphics[angle=0,width=1.0\linewidth]{fig7_astro_ph.eps} \caption{ \emph{Spitzer} MIPS 24~$\mu$m (red), \emph{WISE} 12~$\mu$m (green), and \emph{ROSAT} PSPC (blue and contours) colour-composite picture of NGC\,6888. } \end{center} \label{fig:ir_composite} \end{figure} \subsection{Comparison with simulations} There have been several attempts in the past to model the morphology and X-ray emission from NGC\,6888 revealed by \emph{ROSAT}, \emph{ASCA}, and \emph{Suzaku} observations. One-dimensional (1D) analytic or hydrodynamic models are an elegant first approximation to the evolution of the CSM of WR stars \citep[e.g.,][]{Zhekov1998}, although they do not include the effects of ionization due to the central star, nor can they reproduce the wealth of structures produced by hydrodynamical instabilities, which seem to trace the X-ray-emitting gas in some regions inside the bubble. Furthermore, those simulations cannot simultaneously reproduce regions in which instabilities are important (e.g., the caps) and regions where there are no clumps (the blowout). \citet{GS1996} presented 2D hydrodynamical simulations of the evolving medium around a star with an initial mass of 35~$M_{\odot}$. The broad morphological properties of NGC\,6888 could be reproduced by adopting a slow RSG wind of 15~km~s$^{-1}$. These simulations were refined by \citet{Freyer2006}, who included the effects of photoionization. Despite the progress achieved by the 2D radiative hydrodynamic models published to date, which have helped us advance our understanding of the formation of NGC\,6888, they fail to produce blowout-like features. 
Blowouts might result from anisotropies in the RSG wind or non-uniformities in the interstellar medium. Simulations recently presented by \citet[][]{Rogers2013} explore the evolution of a non-uniform initial interstellar medium around massive stars. Specific modeling accounting for these features is needed to understand the morphology and distribution of the X-ray-emitting gas in NGC\,6888. It is often argued that thermal conduction between the cold ($10^{4}$~K) outer RSG material and the inner hot ($10^{7}-10^{8}$~K) bubble causes the temperature of the hot bubble to drop to the observed values of $T_{\mathrm{X}}\sim$10$^{6}$~K \citep[e.g.,][]{Zhekov1998,Arthur2007,Pittard2007}. \citet{Toala2011} recently presented numerical models of WR bubbles including conductive effects (classical and saturated thermal conduction) that result in a wide range of temperatures capable of generating the soft X-ray emission observed in NGC\,6888. Indeed, they present a spectrum corresponding to a nitrogen-rich plasma that shows two main components, one at 0.5~keV and another at $\sim$0.9~keV, very similar to the spectra presented in Figure~4 \citep[see also][for a comparison with models without thermal conduction]{Dwarkadas2013}. Finally, it is interesting to check whether the central star of NGC\,6888 can provide the observed X-ray-emitting material. \citet{GS1996} could match the morphology of their 2D simulations with that of NGC\,6888 at a time $\sim$12,000~yr after the onset of the WR phase. That time lapse is further reduced to $\sim$8,000~yr when the dynamical effects of photoionization on the nebular material are accounted for \citep{Freyer2006}. For a mass-loss rate of 3.1$\times$10$^{-5}$ $M_\odot$~yr$^{-1}$ \citep{Abbott1986}, the total mass provided by the star amounts to 0.25--0.37 $M_\odot$, which is smaller than the estimate of $>$1~$M_{\odot}$ for the mass of hot plasma inside NGC\,6888. 
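The 0.25--0.37~$M_\odot$ wind-injected mass quoted above is simply the mass-loss rate integrated over the two model ages; a one-line check:

```python
# Mass injected by the WR wind: mdot * t for the two model-derived ages.
mdot = 3.1e-5                 # mass-loss rate, M_sun/yr (Abbott et al. 1986)
ages_yr = (8.0e3, 1.2e4)      # WR-phase ages from the 2D models, yr

m_wind = [mdot * t for t in ages_yr]   # ~[0.25, 0.37] M_sun, below the >1 M_sun
                                       # estimated for the hot plasma
```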
Apparently, the WR wind has not had sufficient time to inject all the hot gas inside NGC\,6888. This supports the possibility that physical processes such as thermal conduction have transferred material from the outer RSG shell into the hot bubble. The enhanced N/O ratio of the hot plasma, similar to that measured in the cold shell through optical emission lines, is in concordance with this possibility. \section{SUMMARY AND CONCLUSIONS} We present \emph{Chandra} ACIS-S observations of the NE quadrant and western regions of NGC\,6888. We have used these observations to study the spatial distribution of the diffuse X-ray emission inside the nebula and to derive global values of its physical conditions. In particular, we find: \begin{itemize} \item The hot gas in NGC\,6888 is distributed inside the optical shell delineated by [O\,{\sc iii}] emission. The spatial distribution of the X-ray emission shows enhancements towards the caps and a blowout present in the NW region of NGC\,6888. This blowout, not discussed in previous studies, has no H$\alpha$ counterpart, but an outer \emph{skin} of [O~{\sc iii}] is detected. The X-ray-emitting gas is thus traced by H$\alpha$ clumps inside the nebular shell and by the blowout. No clear evidence of limb-brightening is detected. \item The X-ray emission is dominated by the N~{\sc vii} 0.5~keV line with additional contributions of the Fe complex and Ne~{\sc ix} line at 0.7--0.9~keV. The spectrum declines with energy, fading at energies higher than 1.5 keV. The X-ray emission from NGC\,6888 can be described by a two-temperature optically thin plasma emission model with temperatures of $\sim$1.4$\times$10$^6$~K and 7.4$\times$10$^6$~K. \item The intrinsic total flux emitted by NGC\,6888 in the 0.3-2 keV energy band is estimated to be $\sim(4.05\pm0.5)\times10^{-11}$~erg~cm$^{-2}$~s$^{-1}$, and the X-ray luminosity at a distance of 1.26~kpc is $L_{\mathrm{X}}=(7.7\pm0.1)\times10^{33}$~erg~s$^{-1}$. 
\item The estimated rms electron density $n_{\mathrm{e}}$ of the X-ray-emitting gas ranges between $0.4 \times (\epsilon / 0.1)^{-1/2}$ and $0.6 \times (\epsilon / 0.1)^{-1/2}$~cm$^{-3}$, resulting in estimated total masses of $1.7 \times (\epsilon / 0.1)^{1/2} \,M_{\odot}$ and $1.2 \times (\epsilon / 0.1)^{1/2} \,M_{\odot}$, respectively. The density, temperature, and abundance of the X-ray-emitting gas are consistent with the expectation of thermal conduction at the wind-wind interaction zone, where the RSG wind material is mixed into the shocked WR wind in the bubble interior. \end{itemize} Future \emph{XMM-Newton} observations are needed to acquire a complete view of the soft X-ray emission from the hot plasma in NGC\,6888 with better spatial coverage, sensitivity, and energy resolution than the existing studies. As the blowout detected in NGC\,6888 is at the edge of the cameras of the ACIS-S instrument and a significant section of the bubble remains unobserved, \emph{XMM-Newton} observations could finally unveil the total distribution of the hot gas in this nebula. The soft sensitivity and spatial coverage of such observations would also be very valuable to assess the varying amounts of intervening material across the nebula suggested by optical and infrared observations. \acknowledgements We want to thank the anonymous referee for her/his comments that improved the presentation of the technical details in this paper. This work is funded by grants NASA \emph{Chandra} X-ray observatory Guest Observer Program Grant SAO G03-4023X, and AYA~2005-01495 of the Spanish MEC (Ministerio de Educaci\'on y Ciencia) and AYA 2011-29754-C03-02 of the Spanish MICINN (Ministerio de Ciencia e Innovaci\'on) co-funded with FEDER funds. JAT also acknowledges support by the CSIC JAE-Pre student grant 2011-00189. JAT is grateful to Y.\,Jim\'{e}nez-Teja for introducing him to the use of matplotlib routines.
\section{Introduction} In mixed reality, the contradictory occlusion problem occurs when a foreground real object is partially or completely covered by a background virtual object. To address this problem, the foreground real scene needs to be accurately segmented from the image frame. However, it is difficult to precisely extract the foreground in real time, especially for complex objects, which are ubiquitous in outdoor scenes. Moreover, segmentation becomes more difficult when dealing with moving cameras because it has to be performed per frame and achieved in real time. In this work, we focus on solving this occlusion problem specifically for an outdoor MR system on a moving vehicle using a monocular omnidirectional camera. Several methods have been proposed that handle foreground extraction with a monocular camera. The methods presented in \cite{grabcut} and \cite{lazysnap} can extract a very accurate foreground region from still images. However, extending these methods to videos is inherently difficult due to the computational cost of the segmentation techniques used. In \cite{boundarymatting} and \cite{criminisi}, contours are cut by using alpha matting \cite{grabcut}; however, an accurate prior foreground estimate is still required, and the method fails even for small inaccuracies in the estimate, especially for complex scenes such as tree branches and vegetation. 
\begin{figure}[t] \begin{center} \includegraphics[width=1.0\linewidth]{figure1.png} \end{center} \caption{Results of semantic segmentation, depth estimation and visibility-based rendering for occlusion handling in outdoor mixed reality.} \label{fig:occlusion} \end{figure} \begin{figure}[b] \begin{center} \includegraphics[width=1.0\linewidth]{error.png} \end{center} \caption{Failure case of alpha blending (left: poor boundary handling) and transparency blending \cite{fukiage} (right: poor visibility) for handling occlusions.} \label{fig:error} \end{figure} \begin{figure*} \begin{center} \includegraphics[width=1.0\linewidth]{overview3.png} \end{center} \caption{Overview of the proposed method.} \label{fig:overview} \end{figure*} Background subtraction methods \cite{criminisi}\cite{backgroundcut} achieve real-time processing and can be applied to monocular cameras. In \cite{kakuta}\cite{vinh}, background subtraction techniques are modified and applied to arbitrary outdoor environments, taking advantage of multiple cues such as color, motion, and luminance change. However, these methods are constrained to a fixed camera, and extension to moving-camera applications is difficult. Other methods use depth information \cite{depth1}\cite{depth2}\cite{depth3}\cite{depth4} to effectively reason about the foreground-background relationship of the virtual and real objects. By adding hardware such as multiple cameras for stereo vision, time-of-flight cameras, or range sensors, depth estimation is a straightforward foreground detection method that can be done in real time. However, additional hardware is not always a desired solution in some applications. Image-based methods that recover depth information for foreground extraction and segmentation in mixed reality have been proposed before. In \cite{zollmann}, sparse 3D model data from a GIS were used to infer a dense depth map of the real scene.
However, such prior 3D models are not easily available in most cases. In this paper, we propose to use semantic segmentation for handling occlusions. We assign different attributes (i.e., amount of visibility, or transparency) depending on the class of an object. The reasoning is straightforward: for outdoor augmented reality, the sky and ground are background, and should therefore be hidden behind the CG object (or transparent). The rest could be either background or foreground. For objects that can be classified as both, we propose a real-time foreground probability map estimation based on motion stereo. By combining the semantic segments and the probability map, we overlay the CG object onto the real scene by adapting a visibility-based rendering method first proposed in \cite{fukiage}. In \cite{fukiage}, a blending method, which uses a visibility predictor based on the human visual system, was used to predict the visibility (or transparency) level of the CG object. The method has an advantage over alpha blending methods (Figure \ref{fig:error}) because it does not require accurate estimation of complex boundaries. The method predicts the visibility of the CG object based on the colors of the pixels and the foreground probability map inside a blending window. However, the blending method fails when the colors of the foreground and background objects within the window are very similar, in which case the virtual object becomes too transparent (Figure \ref{fig:error}). Instead of using a fixed visibility level for all objects, as in \cite{fukiage}, we use our proposed semantic classes to choose the amount of visibility for different types of objects. This allows us to control the appearance of the rendered object based on the type of the scene. To summarize, this work has three main contributions (see Figure \ref{fig:overview}). First, we present a category scheme that uses semantics for assigning visibility values.
We achieve this by first classifying the scene into specific categories using a convolutional neural network (CNN) based semantic segmentation method (SegNet \cite{segnet}). We then use our proposed scheme to group the segments into more usable categories for visibility blending. Second, we present a real-time foreground probability map estimation method based on depth and optical flow for omnidirectional cameras. Finally, we combine the semantic segmentation and the foreground probability map to create a visually pleasing augmented reality scene using visibility-based rendering. This paper is organized as follows. In Section \ref{sec:semantic}, we propose a category scheme for foreground prediction using semantic segmentation. In Section \ref{sec:foreground}, we present our foreground probability map estimation method. In Section \ref{sec:visibility}, we introduce our visibility-based blending method, which uses the semantic classification and the foreground probability map. In Section \ref{sec:results}, we show our results and a comparison among our method, alpha blending, and transparency blending \cite{fukiage}. Finally, we conclude this work in Section \ref{sec:conclusion}. \section{Semantic Segmentation for Occlusion Handling} \label{sec:semantic} \begin{figure} \begin{center} \includegraphics[width=1.0\linewidth]{semanticscheme.png} \end{center} \caption{First and second stages of semantic segmentation. The first stage segments the scene into nine classes. The second stage groups the segmented classes into the three main categories.} \label{fig:semanticscheme} \end{figure} Given an image of the real scene, we need to categorize each object in it as either foreground or background. Instead of directly using these two labels, we classify the objects into three main categories: Background, Complex and Simple. Objects that belong to the Background category are those that are always in the background, such as the ground or the sky, and are therefore transparent.
On the other hand, objects classified as Complex or Simple can be either foreground or background, depending on the actual depth order of the CG object and the real scene. Complex and Simple objects are handled using the method we describe in the following sections. In order to implement the above scheme, we first segment the scene into more specific categories. We use nine classes: Building, Grass, Car, Ground, Road, Sky, Tree, Tree trunk and Unknown. Note that the Tree category is mostly the leaves of the tree (i.e., without the tree trunk). After this first stage of segmentation, we then group the resulting segments into our proposed main categories: Background (Grass, Ground, Road, Sky, Unknown), Simple (Building, Car, Tree trunk) and Complex (Tree) (see Figure \ref{fig:semanticscheme}). We use these three categories, but additional classes can be added depending on the type of scene where the mixed reality system is deployed. This two-stage implementation is chosen to avoid the misclassification that is possible when a class is very small. For example, the Road and Grass segments are visually different, but they belong to the same Background category. Moreover, Grass, which is always in the background, is visually closer to a Tree, which can be either in the foreground or in the background. Therefore, we opt to let the network learn the more refined classes instead of combining them into one semantic class. The semantic segmentation produces two outputs: the labeled segments and the uncertainty of prediction (see Figure \ref{fig:semanticsegmentation}). The uncertainty of prediction ($0 < g < 1$) is usually high along object boundaries.
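The two-stage grouping above amounts to a simple lookup. The following sketch illustrates it; the class and category names follow the scheme described in the text, while the dictionary layout and function names are our own illustration, not part of the original implementation.

```python
# Hypothetical sketch of the two-stage grouping: the nine first-stage
# classes are mapped to the three main categories used for blending.
CATEGORY_OF = {
    # Background: always behind the CG object (transparent).
    "Grass": "Background", "Ground": "Background", "Road": "Background",
    "Sky": "Background", "Unknown": "Background",
    # Simple: foreground or background, with simple contours.
    "Building": "Simple", "Car": "Simple", "Tree trunk": "Simple",
    # Complex: foreground or background, with complex contours.
    "Tree": "Complex",
}

def group_labels(labels):
    """Second-stage grouping: map first-stage labels to main categories."""
    return [CATEGORY_OF[label] for label in labels]
```

Keeping the nine refined classes in the first stage and grouping only afterwards follows the design choice discussed above: visually dissimilar classes (e.g., Road and Grass) are learned separately even though they share a category.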
We utilize this value to smoothly transition the visibility value between two different categories, resulting in more visually pleasing boundary handling (see Section \ref{sec:visibility}). \section{Foreground Probability Map Estimation} \label{sec:foreground} For handling non-Background objects, we need to compare the actual depth of the scene and that of the rendered CG objects. To do this, we estimate a foreground probability map that indicates whether the real object is in the foreground or not. We obtain this map by first estimating the depth of the scene for each frame, with respect to the center of the camera. Given an omnidirectional frame $I$ at time $t$, with pixel positions corresponding to polar and azimuthal angles $\mathbf{s} = (\theta, \phi)$ and unit radial distance (focal length $f=1$), we first solve the optical flow vector $\mathbf{u} \in \mathbb{R}^2$ for every pixel $\mathbf{s}$. Assuming known camera positions $\mathbf{c}_{t}$ and $\mathbf{c}_{t-1}$, where $t-1$ refers to the previous frame, and $\mathbf{c} \in \mathbb{R}^3$ in the real-world coordinate system, we then solve the depth $d_{real}$ of $\mathbf{s}$ using triangulation: \begin{equation} \label{eq:triangulation} d_{real}(\mathbf{s}) = |\mathbf{c}_t - \mathbf{c}_{t-1}| \frac{\sin{\alpha_t}}{\sin{\alpha_t}\cos{\alpha_{t-1}} - \cos{\alpha_t}\sin{\alpha_{t-1}}} \end{equation} where $\alpha_t$ and $\alpha_{t-1}$ are the parallax angles calculated as the offsets from the corresponding pixels $\mathbf{s}_t$ and $\mathbf{s}_{t-1}$ to $\mathbf{s}_{div}$. $\mathbf{s}_{div} = ({\theta}_{div}, {\phi}_{div})$ is the direction of motion corresponding to the divergence point of the optical flow vectors (see Figure \ref{fig:divergencepoint}). To solve for the divergence point, we first estimate a rectangular region around the general direction of motion in the omnidirectional image. Since we know the position and orientation of the camera on the vehicle, we only need to extract the direction vector.
Within this rectangular region in the image, we perform a convolution between the 2D optical flow vectors and a divergence kernel. The minimum value is then assigned as the divergence point. In order to handle inaccuracy in the camera position estimation, we perform temporal smoothing of the depth of corresponding points in the image sequence along several consecutive frames. Since the optical flow is already given, we simply perform bilinear warping of the depth from one frame to another using the flow vectors as the mapping. After warping the depth to the reference frame (current view), we simply average the depth values at the same pixel position. Figure \ref{fig:depth} shows the normalized depth and the temporally smoothed depth map of the real scene. Using $d_{real}$ and the corresponding depth of the virtual object $d_{cg}$, we calculate the foreground probability as: \begin{equation} \label{eq:foregroundprobability} P_f = \frac{1}{1+e^{-(d_{cg}-d_{real})}} \end{equation} Equation \ref{eq:foregroundprobability} is a straightforward computation of the foreground probability. The value is high if the depth of the real scene is smaller than that of the virtual object, which means that the real scene is closer to the camera. As the depth difference becomes smaller, however, the probability decreases only gradually so as not to suffer from inconsistencies in the depth estimation.
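As a minimal numerical sketch of Equations \ref{eq:triangulation} and \ref{eq:foregroundprobability}, the following assumes the parallax angles are given in radians; the function names are our own illustration.

```python
import math

def triangulate_depth(baseline, alpha_t, alpha_prev):
    """Depth by triangulation (eq:triangulation):
    d = |c_t - c_{t-1}| sin(a_t) / (sin(a_t)cos(a_{t-1}) - cos(a_t)sin(a_{t-1}))."""
    return baseline * math.sin(alpha_t) / (
        math.sin(alpha_t) * math.cos(alpha_prev)
        - math.cos(alpha_t) * math.sin(alpha_prev))

def foreground_probability(d_cg, d_real):
    """Sigmoid of the depth difference (eq:foregroundprobability):
    high when the real scene is closer to the camera than the CG object."""
    return 1.0 / (1.0 + math.exp(-(d_cg - d_real)))
```

Note that the denominator of the triangulation formula equals $\sin(\alpha_t - \alpha_{t-1})$, so the estimate degenerates when the two parallax angles coincide, i.e., when there is no parallax between the frames.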
\begin{figure}[t] \begin{center} \includegraphics[width=1.0\linewidth]{divergencepoint.png} \end{center} \caption{Divergence point of the optical flow vectors and the relationship of the parallax angles of corresponding pixels $\mathbf{s_t, s_{t-1}}$ and the optical flow vector $\mathbf{u}$.} \label{fig:divergencepoint} \end{figure} \begin{figure} \begin{center} \includegraphics[width=1.0\linewidth]{depth.png} \end{center} \caption{Normalized depth (left column) and smoothed depth (right column).} \label{fig:depth} \end{figure} \section{Visibility-Based Rendering} \label{sec:visibility} We extend a visibility-based blending method \cite{fukiage2} to further utilize the semantic classification. This blending technique allows us to locally optimize the visibility of each region of the virtual object so that it achieves an arbitrary target level. In \cite{fukiage2}, a visibility predictor based on the human visual system is used in blending the virtual object. We extend this technique and use the semantic class and the uncertainty of prediction to calculate the visibility value. We first define the visibility of the virtual object $V_{cg}$ (as in \cite{fukiage}, Eq. (7)) as: \begin{equation} \label{eq:visibility} V_{cg} = (1-\omega)V_f + \omega V_b \end{equation} From here, we deviate from \cite{fukiage} by setting: \begin{equation} \omega = \frac{1}{\sqrt{2 \pi \sigma}}\exp(-\frac{P_f^2}{2 \sigma}) \end{equation} as the weighting based on the foreground probability map $P_f$. $V_{cg}$ is the weighted average of the foreground and background visibility values $V_f$ and $V_b$ and is averaged within a square window. $V_f$ is the visibility of the CG object when the real scene is in the foreground, and $V_b$ is its visibility when the real scene is in the background.
We calculate these values based on the uncertainty $g$ and the type of the segment classification: \begin{align} \label{eq:visibilities} V_f &= \frac{1}{2}V_{f1} + \frac{1}{2}\{(1-g)V_{f1} + gV_{f2}\}\\ \nonumber V_b &= \frac{1}{2}V_{b1} + \frac{1}{2}\{(1-g)V_{b1} + gV_{b2}\} \end{align} where $V_{f1}$, $V_{f2}$, $V_{b1}$, and $V_{b2}$ are arbitrary values set by the user based on the desired appearance of the augmented scene for each segment class. $V_{f1}$ and $V_{b1}$ are the desired maximum visibilities, and $V_{f2}$ and $V_{b2}$ are the fallback minimum visibilities. For the Background class, $V_{f1}$ and $V_{b1}$ are set to a high value so that the foreground probability map is effectively ignored; a background object should never occlude the CG object. For the Simple class, $V_{f1}$ is set to a very low value (almost zero), whereas $V_{b1}$ is set to a high value. In contrast, the Complex class also has $V_{f1}$ set to a high value, which would seem to imply that the CG object should remain visible even when the Complex object is in the foreground. That is not the intention: in our observation, the Complex class tends to always appear in the foreground due to its texture complexity, so when we solve $V_{cg}$ within a square window containing a Complex object, the CG object would otherwise appear too transparent. We avoid this case by setting a high value for $V_{f1}$. Equation \ref{eq:visibilities} allows gradual shifting between different visibility levels through the uncertainty value. This scheme is particularly effective along object boundaries. For example, if the uncertainty is very low (e.g., $g=0.01$) for a background object in the Simple category (e.g., Tree trunk), the visibilities $V_f$ and $V_b$ are almost equivalent to the maximum visibility. In this case, if the foreground probability is high (e.g., $P_f = 0.95$), the total visibility of the CG object $V_{cg}$ approaches the maximum visibility for the foreground, $V_f$.
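The visibility computation of Equations \ref{eq:visibility}-\ref{eq:visibilities} can be sketched as follows; the default value of the bandwidth $\sigma$ is our own assumption, not a value taken from the text.

```python
import math

def visibility(P_f, g, V_f1, V_f2, V_b1, V_b2, sigma=0.25):
    """Blend the per-class visibility targets by the foreground
    probability P_f and the segmentation uncertainty g (0 < g < 1)."""
    # eq:visibilities: mix the maximum and fallback levels by uncertainty.
    V_f = 0.5 * V_f1 + 0.5 * ((1 - g) * V_f1 + g * V_f2)
    V_b = 0.5 * V_b1 + 0.5 * ((1 - g) * V_b1 + g * V_b2)
    # Gaussian weighting of the foreground probability.
    omega = math.exp(-P_f ** 2 / (2 * sigma)) / math.sqrt(2 * math.pi * sigma)
    # eq:visibility: weighted average of foreground/background values.
    return (1 - omega) * V_f + omega * V_b
```

As the text describes, when $P_f$ is large the weight $\omega$ vanishes and $V_{cg}$ approaches $V_f$; when the uncertainty $g$ tends to zero, $V_f$ and $V_b$ reduce to the maximum visibilities $V_{f1}$ and $V_{b1}$.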
On the other hand, if the uncertainty is high (e.g., $g = 0.85$), which usually happens along boundaries, then $V_f$ and $V_b$ become weighted averages of the maximum visibility $V_{b1}$ and the fallback minimum visibility $V_{b2}$. \section{Implementation} \subsection{Test Environment} Our method requires that the test environment and the dataset for semantic segmentation be similar. We therefore created a dataset of our test environment, which consists of man-made objects such as buildings and cars and natural objects such as trees. Since we are using monocular depth estimation, we assume that no dynamic objects are present (i.e., moving cars, walking people). Because we cannot discriminate between static and dynamic objects, the depths are computed assuming that all objects in the scene are static. This will obviously result in wrong depth values for dynamic objects (i.e., appearing farther or nearer due to motion parallax), but this is beyond the scope of this paper. \subsection{Mixed Reality Bus System} We tested our method on a Mixed Reality Bus (MR Bus) system. In this system, the real scene is captured by two omnidirectional cameras mounted externally on the vehicle. The two panoramic images are stitched together to form a single panoramic image. Each image is served to one computer that solves the depth estimation and another that performs the semantic segmentation. The composite depth map of the CG objects is also computed given the current position of the bus. The results are then combined to create the foreground probability map. The foreground probability map and the real scene are then served to the rendering system of the commercial off-the-shelf head-mounted displays (HMDs). Using the HMD hardware (gyroscope and compass) for head pose estimation, it is straightforward to convert the panoramic images to the (perspective) view of the HMDs using image warping.
Using the proposed blending method, the rendering system combines the perspective real scene and the CG objects. Figure \ref{fig:mrsystem} shows the actual MR bus used in our application. \begin{figure} \begin{center} \includegraphics[width=1.0\linewidth]{mrbus.png} \end{center} \caption{Mixed Reality Bus System. The scene is captured by an external camera, processed and rendered on the HMD.} \label{fig:mrsystem} \end{figure} \subsection{SegNet} For the semantic segmentation, we use the SegNet implementation provided by the authors \cite{segnetwebsite}, compiled on an Intel Core i7-4930K with 32GB RAM and a GTX980Ti GPU. We used the default Bayesian SegNet Basic model for our application. For the dataset, we used omnidirectional images captured from a LadyBug \cite{ladybug} camera. Using the default resolution ($2048\times1024$), we manually created the labels using the open annotation tool LabelMe \cite{labelme}. We labeled 104 training images sampled uniformly from our dataset using the nine categories stated in Section \ref{sec:semantic}. We used a scaled version ($512\times256$) of the images for training in order to handle the large VRAM requirement. We present the parameters used in Table \ref{tab:segnetparameters} for reference. \begin{table}[h] \begin{center} \begin{tabular}{|l|c|} \hline Parameter & Value \\ \hline\hline learning rate & 0.001\\ gamma & 1.0 \\ momentum & 0.9 \\ weight decay & 0.0005 \\ iteration & 10000\\ batch size & 4 \\ dropout ratio & 0.5 \\ output batch size & 8 \\ \hline \end{tabular} \end{center} \caption{Parameter setting for training on a Bayesian SegNet Basic Model.} \label{tab:segnetparameters} \end{table} We achieved a classification accuracy of $88.56\%$ with a training time of $181$~s. Using the learned model, we feed each frame to the network and achieve a classification time of 310~ms. The output for each frame is the semantic segmentation of the original frame into nine classes, together with the uncertainty of the classification.
Figure \ref{fig:semanticsegmentation} shows a sample frame from the manually labeled images and the result of the semantic segmentation with the uncertainty frame. \begin{figure} \begin{center} \includegraphics[width=1.0\linewidth]{semanticsegmentation.png} \end{center} \caption{Input frames and labels for training (left column) and result of segmentation (right column), with colors corresponding to different classes (bottom left) and the uncertainty of classification (bottom right) (white = high certainty).} \label{fig:semanticsegmentation} \end{figure} \subsection{Optical Flow Estimation} We implemented a version of the TV-L1 \cite{tvl1} optical flow method on an Intel Core i7-4930K and a GTX1080Ti GPU using CUDA. TV-L1 optical flow estimation achieves real-time results with reasonable accuracy. Moreover, the Total Variation (TV) regularization used in this method can handle the discontinuities in motion estimation that usually occur along object boundaries. This greatly benefits our foreground probability map estimation because we want the boundaries to be as accurate as possible. Following the notation in the paper (see \cite{tvl1} for details), we set the following parameters fitted to our dataset: $\lambda=100, \theta=0.3, \tau=0.25$, with $115$ iterations. We also use a pyramid scaling of $0.5$ and $6$ pyramid levels. We are able to achieve a frame rate of $15$~fps on a $1024\times512$ image, which is suitable for our application. \section{Results and Experiments} \label{sec:results} \subsection{Comparison with existing methods} We compare the results of our method with the simple alpha blending and bi-stable transparency blending \cite{fukiage} methods. For all three methods, we use the same depth map to solve the foreground probability map.
For the alpha blending method, we solve the color of each pixel as: \begin{equation} \label{eq:alphablending} RGB = Real_{RGB} \times P_f + Cg_{RGB} \times (1-P_f) \end{equation} For \cite{fukiage}, we fix $V_f$ and $V_b$ based on the region class (see Table \ref{tab:transparency}) and solve the visibility as in Equation \ref{eq:visibility}. \begin{table} \begin{center} \begin{tabular}{|l|c|c|c|c|c|c|} \hline Category & $V_f$ \cite{fukiage} & $V_b$ \cite{fukiage} & $V_{f1}$ & $V_{f2}$ & $V_{b1}$ & $V_{b2}$ \\ \hline\hline Background & 10.0 & 10.0 & 10.0 & 10.0 & 10.0 & 10.0 \\ Simple & 0.0005 & 5.0 & 0.0005 & 0.001 & 5.0 & 4.0 \\ Complex & 1.5 & 4.0 & 1.5 & 1.0 & 4.0 & 2.5 \\ \hline \end{tabular} \end{center} \caption{Visibility parameter settings for \cite{fukiage} and our method.} \label{tab:transparency} \end{table} We show the comparison of the outputs of the three methods in Figure \ref{fig:comparison}. The first column corresponds to the frame seen by the HMD. The second, third and fourth columns are the outputs of alpha blending, transparency blending and our method, respectively. In all cases, the alpha blending method achieves the highest visibility. However, it is apparent along the more complex contours of tree leaves that alpha blending fails. The method results in an insufficient segmentation of the foreground region. In contrast, transparency blending achieves more visually pleasing results along the complex contours. However, the visibility of the virtual object suffers when the background is smooth. This results in the virtual object being almost invisible. Our method achieves the best tradeoff between visibility and accurate segmentation. Along regions with complex foreground contours, our method outperforms simple alpha blending. When the background is flat, our method outperforms transparency blending. We present more results from our method in Figure \ref{fig:moreresults}.
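The alpha blending baseline of Equation \ref{eq:alphablending} amounts to a per-pixel linear mix; a minimal sketch (function and argument names are our own):

```python
def alpha_blend(real_rgb, cg_rgb, P_f):
    """Per-pixel alpha blending (eq:alphablending): the real scene is
    weighted by the foreground probability P_f, the CG object by 1 - P_f."""
    return tuple(r * P_f + c * (1.0 - P_f) for r, c in zip(real_rgb, cg_rgb))
```

Because the blend is applied independently per pixel, any error in $P_f$ along complex contours translates directly into the color output, which is the failure mode discussed above.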
Our method depends on the correctness of the semantic segmentation. For example, in the third row, the sky was incorrectly classified and therefore appears more visible. Moreover, due to the lack of texture, the foreground probability map was also inaccurate. This issue can be rectified by using more training data for the semantic segmentation. Furthermore, the training can be overfitted to the environment where the MR system will be deployed. \begin{figure*} \begin{center} \includegraphics[width=1.0\linewidth]{comparison1.png} \end{center} \caption{Comparison of rendering results from alpha blending, transparency blending, and our method.} \label{fig:comparison} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=0.8\linewidth]{result.png} \end{center} \caption{Results of our occlusion handling method on different scenes. Top to bottom: different scenarios. Left to right: succeeding frames.} \label{fig:moreresults} \end{figure*} \subsection{User Evaluation} Using the same settings for the three methods as in the previous section, we conducted an experiment with users (6 male and female, ages 23-48). Five scenes (see Figure \ref{fig:comparison}) of 10-second videos each were randomly shown to the users. We performed a pairwise comparison (a total of 6 combinations) among the three methods. We showed one sequence and then another, and asked the users to compare the two sequences based on three categories: 1) visibility of the virtual object (Is it easy to see the virtual object?), 2) realistic occlusion of the virtual object (Does the virtual object appear to be realistically occluded?) and 3) realistic appearance of the rendered scene (Does the scene look realistic?). Each pair is graded from -3 to +3 (+3 if the second video has maximum preference, -3 if the first video has maximum preference). We also randomly showed each video pair in reverse order, resulting in 30 pairs of evaluation data.
Based on the evaluation, we plot the total preference scores for each scene and question in Figure \ref{fig:usereval}. In all the tests, our method achieved the highest preference scores compared to the other two methods. \begin{figure*} \begin{center} \includegraphics[width=0.9\linewidth]{usereval.png} \end{center} \caption{Comparison of preference scores for Question 1 (visibility of virtual object), Question 2 (realistic occlusion of the virtual object) and Question 3 (realistic appearance of the rendered scene).} \label{fig:usereval} \end{figure*} \section{Conclusion and Future Work} \label{sec:conclusion} In this work, we demonstrated how to use a visibility-based blending method for handling the occlusion problem in mixed reality. We incorporated depth estimation from optical flow to solve the foreground probability map, together with semantic classification using a convolutional neural network. Our results show that, compared to existing alpha blending and transparency blending techniques, our method achieves better visibility in flat background areas and better occlusion handling along complex foreground objects. However, limitations in the semantic segmentation only allow us to achieve sub-real-time processing. In the future, a faster implementation of semantic segmentation that can perform in real time is desired. Furthermore, a more robust camera pose estimation that handles real-time and outdoor applications is also desired. \nocite{*} \bibliographystyle{ACM-Reference-Format}
\section{Introduction} Graph theory has many applications in different fields of physics, where graphs are usually used to illustrate two-body interactions \cite{graphbook}. In quantum information theory in particular, graphs have been used to characterize a specific family of entangled states, namely graph states \cite{Briegel2004}. Graph states are a set of stabilizer states which have especially been considered for quantum error correction, where information is encoded in such states to protect it against decoherence \cite{gott1997}. Furthermore, an interesting extension of graphs called hypergraphs \cite{Berg} has recently attracted much attention because of its ability to illustrate many-body interactions in quantum many-body systems. For example, it has been shown that quantum entangled states can be encoded in the structure of a hypergraph \cite{hyp2013}. In particular, quantum hypergraph states have been introduced and many applications have been considered \cite{Brus2013}. Mapping quantum states to hypergraphs allows one to use relations from graph theory to study different properties of quantum entangled states \cite{hyp1, hyp2, hyp3, hyp4}. On the other hand, since quantum entangled states can also be regarded as ground states of quantum many-body systems, most concepts from condensed matter physics can also be applied to quantum states. Among such quantum systems, the quantum CSS codes introduced by Shor et al. \cite{Calderbank1996, Stean} have attracted much attention. In particular, quantum CSS codes with topological order, including Kitaev's toric codes (TC) and color codes (CC), play a key role because of their natural robustness against local perturbations \cite{Kitaev2003, bombin2006}. Topological order is a new phase of matter that cannot be characterized by symmetry-breaking theory \cite{Wen1990}.
Unlike ordinary order, in a topological phase there is no local order parameter, and topological order should be characterized by topological properties \cite{Wen1991, Hansson2004, Rokhsar1988}. In particular, the degeneracy of the ground state is a measure of topological order that is related to the topological structure of the physical system. While a complete characterization of topological order is still an open problem, the power of quantum codes with topological order for fault-tolerant quantum computation has been a popular field of research in recent years \cite{16, preskill2002, freedman2002, 17, 18, bravi2010}. The robustness of TC and CC against local perturbations has been considered in many interesting recent works, where a quantum phase transition occurs from a topological phase to a trivial phase. On the one hand, different perturbations such as a uniform magnetic field and Ising perturbations have been studied \cite{vidal2011, jahromi2013, kargar2013, karimipour, zarei2015, zarei2016}, and on the other hand, topological states with three-level particles have been considered \cite{mohseni}. However, most topological CSS codes considered in the above works are two-dimensional. Since it has been shown that topological CSS codes in low dimensions do not have enough power for self-correcting quantum memory \cite{bravi}, the properties of topological codes in higher dimensions have gained importance \cite{Bacon, bombin2016, review2016, zarei2017}. In this paper, we use the idea of mapping quantum many-body systems to hypergraphs in order to consider a general quantum CSS code in the presence of a uniform magnetic field. To this end, we begin by giving a formalism for mapping quantum CSS codes and Ising-like systems to hypergraphs, following an approach introduced in \cite{zareim}. Then we consider a Hamiltonian given by a weighted sum of stabilizers of the CSS code defined on a hypergraph $H$, in the presence of a uniform magnetic field term.
Next, we introduce a new basis in which to rewrite the above Hamiltonian. We show that in the new basis the Hamiltonian is equal to that of an Ising-like system with many-body interactions in a transverse field on the dual hypergraph $\tilde{H}$. Interestingly, we show that this mapping is a strong-weak coupling duality, where the original model in a strong-coupling regime is mapped to the new one in a weak-coupling regime. In this way, we obtain a duality relation that relates the problem of the robustness of a CSS code on $H$ against a strong magnetic field to the problem of a quantum phase transition in an Ising-like system in a weak transverse field on $\tilde{H}$. As an important application of our mapping, we are able to consider the robustness of topological CSS codes in high dimensions \cite{delgad, Bombin2015, Bravi2011, Sapta2009}. We show that the problem of the robustness of TC defined on an arbitrary graph in a uniform magnetic field is mapped to a quantum phase transition in an Ising model in a transverse field on the same graph. Furthermore, we give another example of our mapping for a CC on a D-colex, which is mapped to an Ising-like model with $(D+1)$-body interactions on a D-simplicial lattice. Our explicit studies of TC on different graphs, where we use well-known results on transverse-field Ising models \cite{trans}, show that the robustness of TC defined on graphs against a uniform magnetic field decreases in higher dimensions. Although this result is derived for a subclass of TC with qubits living on the edges of a graph, it can lead to new insight into the application of topological codes for quantum memory. In particular, it has been shown \cite{bombin2016} that the power of topological codes for self-correcting quantum memory at finite temperature increases in higher dimensions.
Therefore, according to our results, increasing the dimension might not necessarily be good for a topological memory: it improves the self-correctness of the code at finite temperature, but it might reduce the robustness against local perturbations at zero temperature. Finally, like most duality relations, the above mapping is useful for exactly determining the phase transition point (robustness) of self-dual models. Corresponding to self-dual hypergraphs, we introduce several self-dual models in different dimensions where the phase transition occurs at the critical ratio $(\frac{h}{J})_c =1$. In particular, the one-dimensional case is the well-known one-dimensional Ising model in a transverse field. The rest of the paper is organized as follows: In section (\ref{s1}) we give a brief review of the definition of hypergraphs and, especially, duals of hypergraphs. Furthermore, we define an orthogonality relation for hypergraphs. In section (\ref{s2}) we give a formalism for mapping an arbitrary quantum CSS state and also an Ising-like system to hypergraphs. In section (\ref{s3}), we give the main result of the paper, where we map the Hamiltonian of a quantum CSS state in the presence of a uniform field on a hypergraph $H$ to the Hamiltonian of an Ising-like model in a transverse field on the dual hypergraph $\tilde{H}$. In section (\ref{s4}), we give some interesting applications of our mapping for TC defined on graphs, CC defined on color complexes in arbitrary dimensions, and several self-dual models. \section{hypergraphs and their dual}\label{s1} An ordinary graph $G=(V,E)$ is defined by two sets, of vertices $V$ and edges $E$, where each edge connects two vertices of the graph. A hypergraph $H=(V,E)$, similar to a graph, is also defined by two sets of vertices and edges. However, unlike an ordinary graph, each edge of a hypergraph, called a hyperedge, can involve an arbitrary number of vertices.
In other words, if the degree of a hyperedge $e$, denoted by $|e|$, is defined as the number of vertices involved by $e$, then the degree of the edges of a hypergraph can be an arbitrary number. As an example, figure (\ref{hyp}-a) shows a hypergraph on five vertices $v_1, v_2 , v_3 , v_4 ,v_5$, where the edge $e_1 =\{v_1 \}$ is denoted by a loop, the edge $e_2 =\{v_1 , v_2 , v_4 , v_5\}$ is denoted by a closed curve, and the edges $e_3 =\{v_2 , v_3\}$, $e_4 =\{v_3 , v_5 \}$ are denoted by links. In this figure, the degrees of the edges $e_1$, $e_2$, $e_3$, $e_4$ are equal to 1, 4, 2, 2, respectively. For a hypergraph $H=(V,E)$ where $ V=\{v_1 ,v_2 , ...,v_K \} $ and $E=\{e_1 , e_2 , ...,e_N \}$, there is a simple definition of a dual hypergraph $\tilde{H}$. The dual of a hypergraph $H=(V,E)$ is a hypergraph $\tilde{H}=(\tilde{V},\tilde{E})$ where $\tilde{V}=\{ \tilde{v}_1 ,\tilde{v}_2 ,...,\tilde{v}_N \}$ and $\tilde{E}=\{\tilde{e}_1 ,\tilde{e}_2 ,...,\tilde{e}_K \}$ with $\tilde{e}_i =\{ \tilde{v}_m ~|~ v_i \in e_m \text{ in } H \}$. In other words, the roles of vertices and edges of $H$ are switched in $\tilde{H}$. For example, see figure (\ref{hyp}) for a hypergraph and its dual. A hypergraph is called self-dual if it and its dual are the same \cite{Berg}. It is clear that for a self-dual hypergraph the number of vertices must equal the number of hyperedges, $K=N$. \begin{figure}[t] \centering \includegraphics[width=8cm,height=3cm,angle=0]{hyp} \caption{(Color online) a) A simple example of a hypergraph, where we denote vertices by black circles. We use a red loop for an edge containing only one vertex, red links for edges containing two vertices, and red closed curves for edges containing more than two vertices. b) The dual of the hypergraph, where edges play the role of vertices, denoted by red circles, and vertices play the role of edges, denoted by red curves, of the dual hypergraph. 
c) The orthogonal hypergraph of the original hypergraph.} \label{hyp} \end{figure} It is also possible to introduce a binary vector representation for the hyperedges of a hypergraph. To this end, for a hypergraph with $K$ vertices $v_1 , v_2 ,..., v_K$ and $N$ edges $e_1 , e_2 , ..., e_N$, we associate a binary vector, called an edge vector, with each hyperedge $e_m$. Such an edge vector has $K$ components, denoted by $e_{m}^{j}$ with $j\in\{1,2,3,...,K\}$, where $e_{m}^{j}=1$ if $v_{j} \in e_m$ and $e_{m}^{j}=0$ otherwise. For example, for the hypergraph in figure (\ref{hyp}-a), the binary vectors corresponding to the four edges are $e_1 =(1,0,0,0,0)$, $e_2 =(1,1,0,1,1)$, $e_3 =(0,1,1,0,0)$ and $e_4 =(0,0,1,0,1)$. In this way, we have $N$ binary vectors corresponding to the $N$ edges of the hypergraph. Furthermore, in the binary representation, two edge vectors can be added (mod 2) to obtain a new edge vector. If an edge vector of a hypergraph $H$ is equal to a sum of other edge vectors of $H$, it is called dependent. We call a maximal set of independent edges of a hypergraph an independent set, denoted by $\mathcal{I}$. It is clear that $|\mathcal{I}|\leq |V|$, i.e., the number of independent edges is never greater than the number of vertices. Using the binary vector representation of edges, one can define, for a hypergraph $H$, an orthogonal hypergraph $H^{*}$. To this end, two hyperedges $e$ and $e'$ are called orthogonal if and only if their corresponding binary vectors are orthogonal, $e\cdot e'=0$ (mod 2). Now consider a hypergraph $H=(V,E)$ with an independent set $\mathcal{I}$ of edges. We define the orthogonal hypergraph $H^{*}=(V^{*},E^{*})$ as a hypergraph that has the same vertices as $H$, $V^{*}=V$, but has $K-|\mathcal{I}|$ distinct independent edges that are orthogonal to all edges of $H$, where $|\mathcal{I}|$ is the number of independent edges; see figure (\ref{hyp}-c). 
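To make the dual construction concrete, here is a minimal Python sketch (our own illustration, not code from the paper; the function names are ours) that computes edge vectors and the dual hypergraph, using the five-vertex example of figure (\ref{hyp}-a) with vertices relabelled $0,\dots,4$:

```python
# Minimal sketch: a hypergraph is (number of vertices K, list of hyperedges),
# each hyperedge a frozenset of vertex labels 0..K-1.

def edge_vector(K, e):
    """Binary edge vector of length K for hyperedge e."""
    return [1 if v in e else 0 for v in range(K)]

def dual(K, edges):
    """Dual hypergraph: one dual vertex per hyperedge, and one dual
    hyperedge per original vertex v, collecting the hyperedges that
    contain v (vertices and edges are switched)."""
    dual_edges = [frozenset(m for m, e in enumerate(edges) if v in e)
                  for v in range(K)]
    return len(edges), dual_edges

# the five-vertex example of figure (hyp)-a, vertices relabelled 0..4
edges = [frozenset({0}), frozenset({0, 1, 3, 4}),
         frozenset({1, 2}), frozenset({2, 4})]
print(edge_vector(5, edges[1]))        # [1, 1, 0, 1, 1]

# taking the dual twice recovers the original hypergraph
K2, E2 = dual(5, edges)
K3, E3 = dual(K2, E2)
```

The double-dual check at the end reflects the fact that switching vertices and edges twice returns the original hypergraph.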
We can also show that, for any hypergraph $H$, an orthogonal hypergraph $H^{*}$ certainly exists. To this end, consider the binary vector corresponding to an edge $e^{*}$ of the hypergraph $H^{*}$. Since $e^{*}$ should be orthogonal to all edges of $H$, we require $e^{*}\cdot e_{m}=0$ for $m=1,2,...,N$. Since the number of independent edges is $|\mathcal{I}|$, it is enough that the above condition holds for the $|\mathcal{I}|$ independent edges of $H$. Therefore, these conditions amount to a set of $|\mathcal{I}|$ independent linear equations on $K$ binary variables, and it is clear that there are $K-|\mathcal{I}|$ independent solutions of such a set of equations. These $K-|\mathcal{I}|$ solutions are the edges of the hypergraph $H^{*}$. \section{Quantum many-body systems on hypergraphs; Ising-like systems and quantum CSS codes}\label{s2} For a physical system with two-body interactions, graphs are a useful tool, where each edge of the graph represents a two-body interaction between the physical variables living at the two end-points of that edge. Ising models with two-body interactions between Ising variables $S_i =\pm 1$ are an important example of the application of graphs. However, there are also several important physical systems with many-body interactions. For example, one can consider Ising-like systems with many-body interactions between Ising variables. In order to encode the interaction patterns of such systems, hypergraphs are a better candidate. Here, we associate a hyperedge with each interaction term of such a model. In this way, corresponding to a hypergraph $H=(V,E)$ we define an Ising-like system with the following Hamiltonian: \begin{equation} \mathcal{H}_I=\sum_{e\in E}\prod_{v\in e}X_v \end{equation} where qubits live on the vertices of $H$, $X$ is a Pauli operator with eigenvalues $\pm 1$, $e\in E$ runs over all hyperedges of $H$, and $v\in e$ runs over all vertices belonging to $e$. 
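As a quick numerical illustration (our own sketch, not from the paper), the Hamiltonian $\mathcal{H}_I$ can be built by brute force for a small hypergraph as a Kronecker product of Pauli $X$ matrices; for a single hyperedge $e=\{v_1,v_2\}$ on two qubits it reduces to $X\otimes X$ with eigenvalues $\pm 1$:

```python
import numpy as np

X = np.array([[0., 1.], [1., 0.]])
I2 = np.eye(2)

def ising_like_hamiltonian(K, edges):
    """H_I = sum_e prod_{v in e} X_v as a dense 2^K x 2^K matrix
    (brute force; only reasonable for small K)."""
    H = np.zeros((2**K, 2**K))
    for e in edges:
        term = np.array([[1.]])
        for v in range(K):
            term = np.kron(term, X if v in e else I2)
        H += term
    return H

# two qubits, one hyperedge {0, 1}: H = X (x) X, eigenvalues -1, -1, 1, 1
H = ising_like_hamiltonian(2, [frozenset({0, 1})])
print(np.linalg.eigvalsh(H))
```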
Another example of quantum many-body systems that we consider in this paper are quantum CSS codes. They are among the most popular codes in quantum error correction and were introduced by Calderbank, Shor and Steane \cite{Calderbank1996, Stean}. A quantum CSS state on $K$ qubits is a stabilizer state that is stabilized by $X$-type and $Z$-type operators belonging to the Pauli group on $K$ qubits. Here we use hypergraphs to encode the structure of such states. To this end, consider a hypergraph $H=(V,E)$ and suppose that all hyperedges of $H$ are independent, so that $\mathcal{I}=E$, where $|V|=K$, $|E|=N$ and $K\geq N$. We place $K$ qubits on the vertices of the hypergraph. Then we define an $X$-type operator corresponding to each independent edge of the hypergraph. We denote this operator by $A_e$; for each $e\in \mathcal{I}$, it is defined in the following form: \begin{equation} A_e =\prod_{i\in e}X_i \end{equation} where $i \in e$ runs over all vertices belonging to the edge $e$. Now, we define the following state as the quantum CSS state corresponding to $H$: \begin{equation}\label{q1} |C_H\rangle=\frac{1}{2^{\frac{|\mathcal{I}|}{2}}}\prod_{e \in \mathcal{I}}(1+A_e )|0\rangle ^{\otimes K} \end{equation} where $e\in \mathcal{I}$ runs over all independent edges of $H$ and $|0\rangle$ is the positive eigenstate of the Pauli operator $Z$, $Z|0\rangle=|0\rangle$. In order to prove that the above state is a CSS state, we should show that it is stabilized by $Z$-type and $X$-type operators. By the fact that $A_e (1+A_e)=(1+A_e)$, we conclude that all $A_e$ operators are stabilizers of the above state. In this way, since the number of independent edges of the hypergraph is equal to $N$, we have found $N$ stabilizers of the above state, all of which are $X$-type operators. Since $K\geq N$, we should find $K-N$ further stabilizers in order to completely characterize the CSS state. 
To this end, consider the orthogonal hypergraph of $H$, denoted $H^{*}$. We define $K-N$ $Z$-type operators, one corresponding to each edge of $H^{*}$. We denote them by $B_{e^{*}}$ and define them in the following form: \begin{equation} B_{e^{*}} =\prod_{i\in e^{*}}Z_i \end{equation} As we showed in section (\ref{s1}), all edges of $H^{*}$ are orthogonal to the edges of $H$. When two edges are orthogonal to each other, the number of their common vertices is necessarily even. By this fact, it is simple to show that $[A_e , B_{e^{*}}]=0$. In this way, we have found all $K$ stabilizers corresponding to the state (\ref{q1}). Since all these operators are $X$-type or $Z$-type operators, we conclude that the state (\ref{q1}) is a quantum CSS state. It is also possible to represent the CSS state (\ref{q1}), up to normalization, by the $Z$-type operators $B_{e^{*}}$ in the following form: \begin{equation}\label{q2} |C_H\rangle=\prod_{e^{*} \in E^{*}}(1+B_{e^{*}} )|+\rangle ^{\otimes K} \end{equation} where $|+\rangle$ is the positive eigenstate of the Pauli operator $X$. In this way, we can associate a CSS state with each hypergraph with independent edges. Finally, such a state is the ground state of a Hamiltonian of the following form: \begin{equation}\label{cssh} \mathcal{H}_C=-J \sum_{e\in E}A_e -J \sum_{e^{*}\in E^{*}}B_{e^{*}} \end{equation} \section{Mapping quantum CSS codes to Ising-like systems; a strong-weak coupling duality}\label{s3} Duality in many-body systems is an interesting and well-established problem that has attracted much attention during the past decades, e.g., duality for generalized Ising models \cite{generalized} and, recently, duality in quantum models \cite{gaug1, gaug2}. In this section, we find a duality mapping from quantum CSS codes in a uniform magnetic field to Ising-like systems in a transverse field. 
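Before proceeding, the CSS construction of the previous section can be checked numerically on the smallest example. The following sketch (our own; it finds orthogonal edges by brute force over all $2^K$ binary vectors, so only tiny $K$ is feasible) builds $A_e$ and $B_{e^{*}}$ for $K=2$ qubits with the single hyperedge $e=\{v_1,v_2\}$, and verifies the commutation relation, the equality of the representations (\ref{q1}) and (\ref{q2}), and that the state is stabilized:

```python
import numpy as np
from itertools import product

X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
I2 = np.eye(2)

def pauli_string(K, support, P):
    """Tensor product with P on the given vertices, identity elsewhere."""
    out = np.array([[1.]])
    for v in range(K):
        out = np.kron(out, P if v in support else I2)
    return out

def orthogonal_edges(K, edges):
    """All nonzero binary vectors orthogonal (mod 2) to every edge vector;
    the construction in the text keeps an independent subset of size K-|I|."""
    sols = []
    for bits in product([0, 1], repeat=K):
        if any(bits) and all(sum(bits[v] for v in e) % 2 == 0 for e in edges):
            sols.append(frozenset(v for v in range(K) if bits[v]))
    return sols

K, edges = 2, [frozenset({0, 1})]                 # one independent hyperedge
A = [pauli_string(K, e, X) for e in edges]        # A_e = X1 X2
B = [pauli_string(K, e, Z) for e in orthogonal_edges(K, edges)]  # B = Z1 Z2

# |C_H> via (q1): (1 + A_e)|00>, normalized
psi = np.zeros(2**K); psi[0] = 1.0
for Ae in A:
    psi = (np.eye(2**K) + Ae) @ psi
psi /= np.linalg.norm(psi)

# |C_H> via (q2): (1 + B)|++>, normalized
psi2 = np.ones(2**K) / np.sqrt(2**K)
for Be in B:
    psi2 = (np.eye(2**K) + Be) @ psi2
psi2 /= np.linalg.norm(psi2)
```

Here the resulting state is the Bell state $(|00\rangle+|11\rangle)/\sqrt{2}$, stabilized by both $X_1X_2$ and $Z_1Z_2$.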
To this end, we begin by studying a CSS code defined on a hypergraph $H$ in the presence of a uniform magnetic field. We consider a Hamiltonian for the CSS code whose ground state is the stabilizer state of the quantum CSS code, and add a magnetic term to this Hamiltonian in the following form: \begin{equation}\label{init} \mathcal{H}=-J \sum_{e\in E}A_e -J \sum_{e^{*}\in E^{*}}B_{e^{*}} -\sum_{v\in V} h Z_v \end{equation} where $v \in V$ runs over the vertices of the hypergraph. It is clear that when $\frac{h}{J}$ is equal to zero the ground state of the model is a quantum CSS state, and when $\frac{h}{J}$ goes to infinity the ground state is the product state $|00...0\rangle$, where $|0\rangle$ is the positive eigenstate of the Pauli operator $Z$. We should therefore study the quantum phase transition that can occur as the magnetic field $h$ is tuned between the two different phases of the limits $h=0$ and $h\rightarrow \infty$. We emphasize that, in view of applications in quantum information theory, such a phase transition is also related to the robustness of a quantum CSS code against a uniform magnetic field, the phase transition point being a measure of the robustness of the CSS code. Here, we give a general mapping that converts the above Hamiltonian into an Ising-like system in a switched regime of couplings. To this end, we re-write the above Hamiltonian in a new basis of the following form: \begin{equation}\label{ab} |C_{r_{1}, r_{2}, ..., r_{N}}\rangle=\prod_{e \in E}(1+(-1)^{r_{e}}A_e )|0\rangle ^{\otimes K} \end{equation} where $r_e$ is a binary number associated with each edge of the hypergraph and $N$ is the number of edges of the hypergraph. Since $K\geq N$, the above basis is not a complete basis. In fact, for a complete basis we should also include the $Z$-type operators $B_{e^{*}}$ in the form $\prod_{e^{*} \in E^{*}}(1+(-1)^{r_{e^{*}}}B_{e^{*}} )$. 
However, since the magnetic terms in the original Hamiltonian commute with the operators $B_{e^{*}}$, we can stay in the subspace that is stabilized by all operators $B_{e^{*}}$. In this way, we consider the basis states (\ref{ab}) in this subspace. Finally, we are ready to re-write the original Hamiltonian in the new basis. To this end, we define a vertex corresponding to each binary variable $r_e$, i.e., to each hyperedge of $H$. In other words, these new vertices, denoted by $\tilde{v}$, are the vertices of the dual hypergraph $\tilde{H}$. In this way, the basis states defined in (\ref{ab}) correspond to a computational basis for qubits which live on the vertices $\tilde{v}$; we call them dual qubits. Then, we consider the effect of each term of the original Hamiltonian on the basis (\ref{ab}). Since the operators $B_{e^{*}}$ commute with $A_e$, we have $B_{e^{*}}(1+(-1)^{r_e}A_e)=(1+(-1)^{r_e}A_e)B_{e^{*}}$. Then, by the fact that $B_{e^{*}}|00..0\rangle=|00..0\rangle$, we conclude that $B_{e^{*}}|C_{r_{1}, r_{2}, ..., r_{N}}\rangle=|C_{r_{1}, r_{2}, ..., r_{N}}\rangle$. In the next step, we consider the effect of the operators $A_{e}$. Since $A_{e}^{2}=1$, we have $A_{e}(1+(-1)^{r_e}A_{e})=(-1)^{r_e}(1+(-1)^{r_e}A_e)$. Therefore, $A_e$ acts in the new basis like the Pauli operator $Z$ on the corresponding dual qubit. In this way, we can replace the term $A_e$ of the original Hamiltonian by $Z_{\tilde{v}}$ on the dual qubits in the new basis. The most interesting part of the change of basis concerns the magnetic terms of the original Hamiltonian. It is clear that $Z_v$ does not commute with those operators $A_e$ that include the vertex $v$. Since, in the dual space, the vertex $v$ is a hyperedge of $\tilde{H}$ denoted by $\tilde{e}$ and the edge $e$ is a vertex of $\tilde{H}$ denoted by $\tilde{v}$, we can denote the operators $Z_v$ and $A_e$ by $Z_{\tilde{e}}$ and $A_{\tilde{v}}$. 
In this way, $Z_{\tilde{e}}$ does not commute with those operators $A_{\tilde{v}}$ for which $\tilde{v}$ is a member of the hyperedge $\tilde{e}$ in the dual space. On the other hand, if the operator $Z_{\tilde{e}}$ does not commute with an operator $A_{\tilde{v}}$, we have $Z_{\tilde{e}} (1+(-1)^{r_{\tilde{v}} }A_{\tilde{v}})=(1+(-1)^{r_{\tilde{v}} +1} A_{\tilde{v}})Z_{\tilde{e}}$. In this way, the effect of the operator $Z_{\tilde{e}}$ is to flip the value of $r_{\tilde{v}}$ for all $A_{\tilde{v}}$ which do not commute with $Z_{\tilde{e}}$. This is equivalent to applying the Pauli operator $X$ to all dual qubits $\tilde{v}$ belonging to the edge $\tilde{e}$. Therefore, the magnetic term of the original Hamiltonian is re-written in the new basis in the form $-h\sum_{\tilde{e}} \prod_{\tilde{v} \in \tilde{e}}X_{\tilde{v}}$, while the $B_{e^{*}}$ terms reduce to a constant, since each of the $K-N$ operators $B_{e^{*}}$ contributes $-J$ in the stabilized subspace. In this way, the original Hamiltonian in the new basis takes the following form: \begin{equation}\label{main} \mathcal{H}=-h\sum_{\tilde{e}} \prod_{\tilde{v} \in \tilde{e}}X_{\tilde{v}} - J\sum_{\tilde{v}}Z_{\tilde{v}}-J(K-N) \end{equation} The first term in the above Hamiltonian is an Ising-like system on $\tilde{H}$ with a many-body interaction corresponding to each hyperedge of $\tilde{H}$, and the second term is a transverse field. The interesting point is that the roles of the magnetic field $h$ and the coupling constant $J$ of the original Hamiltonian (\ref{init}) have been switched in the new Hamiltonian (\ref{main}). In this way, a regime of strong magnetic field in the original model, where the ratio $\frac{h}{J}$ is bigger than $1$, is mapped to a regime of weak magnetic field in the dual Hamiltonian, where the ratio $\frac{J}{h}$ is smaller than $1$. For this reason, we call our mapping a strong-weak coupling duality. \section{Applications of the duality mapping}\label{s4} In this section we give some applications of the above mapping. 
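Before turning to the applications, the duality can be verified numerically on the smallest example (our own check, not from the paper): take $K=2$ vertices and the single hyperedge $e=\{v_1,v_2\}$, so that $A_e=X_1X_2$ and the one orthogonal edge gives $B=Z_1Z_2$. Restricting the original Hamiltonian to the subspace stabilized by $B$ and comparing with the dual Hamiltonian on $N=1$ dual qubit (the additive constant here is $-J(K-N)=-J$, coming from the single $B$ operator in the stabilized subspace):

```python
import numpy as np

X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
I2 = np.eye(2)
J, h = 1.0, 0.7                      # arbitrary couplings for the check

# original model: H = -J X1X2 - J Z1Z2 - h (Z1 + Z2)
H_orig = (-J * np.kron(X, X) - J * np.kron(Z, Z)
          - h * (np.kron(Z, I2) + np.kron(I2, Z)))

# restrict to the Z1Z2 = +1 subspace, spanned by |00> and |11>
basis = np.zeros((4, 2)); basis[0, 0] = basis[3, 1] = 1.0
H_restricted = basis.T @ H_orig @ basis

# dual model on one dual qubit: both dual hyperedges equal {v~1},
# so H_dual = -h (X + X) - J Z - J (K - N)
H_dual = -2 * h * X - J * Z - J * np.eye(2)

print(np.linalg.eigvalsh(H_restricted))   # the two spectra coincide
print(np.linalg.eigvalsh(H_dual))
```

Both spectra are $-J \pm \sqrt{J^2+4h^2}$, confirming the exchange of the roles of $J$ and $h$ in this tiny instance.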
In particular, we use the duality mapping to study the robustness of two important classes of CSS codes, namely TC and CC. Finally, we introduce some self-dual models for which we exactly derive the phase transition point. \subsection{Robustness of TC against magnetic field} Here we consider TC defined on graphs, where qubits live on the edges of the graph. We emphasize that while TC can generally be defined on lattices with qubits living on higher-dimensional cells, our results in this section apply only to the subclass of TC defined on graphs with edge qubits. To this end, corresponding to a graph $G$, two sets of $X$-type and $Z$-type stabilizer operators are defined, see Fig (\ref{kitaev}) for two sample graphs, in the following form: \begin{equation} A_v =\prod_{i \in v} X_i ~~,~~B_p =\prod_{i\in \partial p}Z_i \end{equation} where $A_v$ corresponds to a vertex $v$ of the graph and $i\in v$ runs over all qubits incident to the vertex $v$. Furthermore, $B_p$ corresponds to a plaquette $p$ of the graph and $i\in \partial p$ runs over all qubits on the boundary of the plaquette $p$. \begin{figure}[t] \centering \includegraphics[width=8cm,height=3.5cm,angle=0]{kitaev} \caption{(Color online) Two sample graphs on which TC are defined. An $X$-type stabilizer is defined corresponding to each vertex of the graph and a $Z$-type stabilizer is defined corresponding to each plaquette of the graph.} \label{kitaev} \end{figure} By the fact that $A_v (1+A_v)=(1+A_v)$ and $[B_p , A_v ]=0$, it is simple to check that the following state is stabilized by the above operators: \begin{equation} |K\rangle =\prod_{v\in \mathcal{I}}(1+A_v)|00...0\rangle \end{equation} where we ignore the normalization factor and $\mathcal{I}$ refers to an independent set of $X$-type operators. It is well known that such a state on a non-trivial topological structure, like a torus, shows a topological degeneracy. Now, let us consider such a model in the presence of a uniform magnetic field. 
Since the TC is a CSS state, we should be able to use the duality mapping that we introduced in the previous section. To this end, we should give a hypergraph representation for the TC. Therefore, we define a hypergraph $H$ with vertices corresponding to the qubits of the TC and edges corresponding to the $X$-type operators $A_v$. In this way, corresponding to each vertex $v$ of $G$, there is a hyperedge of $H$ which involves all qubits incident to $v$, see Fig (\ref{kitexa}). \begin{figure}[t] \centering \includegraphics[width=6cm,height=2cm,angle=0]{kitexa} \caption{(Color online) In the dual space we insert a red vertex corresponding to each hyperedge of $H$. Since each vertex of the hexagonal lattice is a member of two neighboring hyperedges of $H$, in the dual space a hyperedge involves two vertices of $\tilde{H}$. Therefore, $\tilde{H}$ is an ordinary graph which coincides with the original graph.} \label{kitexa} \end{figure} In the next step, according to the duality mapping, the TC on the hypergraph $H$, corresponding to the initial graph $G$, in a uniform magnetic field is mapped to an Ising-like system on the dual hypergraph $\tilde{H}$ in a transverse field. Therefore, we should find $\tilde{H}$ for the TC. To this end, we should find all hyperedges of $H$ that contain a given vertex of $H$. Since each qubit of the TC lives on an edge of the original graph $G$, it is contained in exactly the two vertex operators corresponding to the two neighboring vertices of $G$, see Fig (\ref{kitexa}). Therefore, each vertex of $H$ is a member of exactly two neighboring hyperedges of $H$. In the dual space, this means that each edge of $\tilde{H}$ involves two vertices of $\tilde{H}$. Therefore, it is enough to insert a vertex $\tilde{v}$ of $\tilde{H}$ at each vertex of $G$; then each edge of $\tilde{H}$ involves two neighboring vertices on $G$, see Fig (\ref{kitexa}). 
This means that the dual hypergraph $\tilde{H}$ is an ordinary graph that exactly coincides with the graph $G$. The spin model corresponding to such a hypergraph is the Ising model in a transverse field; see also Fig (\ref{kit}) for a 3D example. \begin{figure}[t] \centering \includegraphics[width=8cm,height=4.5cm,angle=0]{kit} \caption{(Color online) Similar to the 2D model in Fig.(\ref{kitexa}), in the dual space we insert a red vertex corresponding to each hyperedge of $H$. Since each vertex of the 3D lattice is a member of two neighboring hyperedges of $H$, in the dual space a hyperedge involves two vertices of $\tilde{H}$. Therefore, $\tilde{H}$ is an ordinary graph which coincides with the original 3D rectangular lattice.} \label{kit} \end{figure} Finally, we conclude that the TC on an arbitrary graph in a magnetic field is equivalent to the Ising model on the same graph in a transverse field. The interesting point is that the Ising model in a transverse field is a well-known model that has been studied extensively in statistical physics. Importantly, according to our mapping, the phase transition point of the Ising model on different lattices determines the robustness of the TC on the same graph against a magnetic field. We should also keep in mind that while the robustness of the TC is determined by the ratio $\frac{h}{J}$, the quantum phase transition point of the Ising model, according to relation (\ref{main}), is determined by $\frac{J}{h}$; in other words, in the corresponding Ising model, $J$ is the strength of the transverse magnetic field. By this point and the well-known results derived in \cite{trans}, we provide Table (\ref{tab}) for the robustness of TC on different lattices against a uniform magnetic field. 
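The claim that the dual hypergraph of the TC construction is the original graph itself can be checked directly on a small example (our own sketch): take $G$ a triangle, put one qubit on each graph edge, build the hyperedges (one per graph vertex), and compute their dual:

```python
# graph G: a triangle with vertices 0, 1, 2 and one qubit per graph edge
graph_edges = [(0, 1), (1, 2), (2, 0)]

# TC hypergraph H: one hyperedge per graph vertex v, collecting the
# qubits (graph edges) incident to v
hyperedges = {v: frozenset(q for q, (a, b) in enumerate(graph_edges)
                           if v in (a, b))
              for v in range(3)}

# dual H~: one dual vertex per hyperedge (i.e. per graph vertex); each
# qubit q becomes a dual hyperedge of the hyperedges that contain q
dual_edges = [frozenset(v for v, e in hyperedges.items() if q in e)
              for q in range(len(graph_edges))]

# every dual hyperedge has two vertices and reproduces an edge of G
print(sorted(sorted(e) for e in dual_edges))   # [[0, 1], [0, 2], [1, 2]]
```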
\begin{table}[t] \centering \begin{tabular}{ |p{4cm}|p{5cm}| } \hline TC on different lattices & Robustness against a magnetic field\\ \hline On a honeycomb lattice & $(\frac{h}{J})_c \approx ~ 0.469$\\ \hline On a kagome lattice &$(\frac{h}{J})_c \approx ~ 0.339$\\ \hline On a triangular lattice & $(\frac{h}{J})_c \approx ~ 0.209$\\ \hline On a square lattice & $(\frac{h}{J})_c \approx ~ 0.328$\\ \hline On a cubic lattice & $(\frac{h}{J})_c \approx ~ 0.194$\\ \hline \end{tabular} \caption{The robustness of TC on different lattices against a uniform magnetic field.} \label{tab} \end{table} We should emphasize an important consequence of our mapping. It is well known that the phase transition point of the transverse Ising model increases in higher dimensions, which means that the critical ratio $\frac{J}{h}$ increases in higher dimensions. Since the robustness of the TC is determined by the ratio $\frac{h}{J}$, we conclude that the robustness of TC, defined on a graph with qubits living on edges, against a uniform magnetic field decreases in higher dimensions. We believe this is an important result because, according to recent research, the power of topological codes for a self-correcting quantum memory increases in higher dimensions. Although the measure of self-correctness of a topological code at finite temperature is clearly different from the measure of robustness at zero temperature, our result suggests that increasing the dimension might not necessarily be a good strategy for improving a topological code. In other words, by increasing the dimension, on the one hand the power of self-correctness increases, and on the other hand the robustness decreases. Therefore, there might be an optimum dimension for an efficient topological memory. \subsection{Robustness of CC against magnetic field} As another example of the duality mapping, here we consider the robustness of CC against a uniform magnetic field. 
CC are defined on D-colexes, which are color complexes on a D-dimensional manifold whose cells can be colored by $D+1$ distinct colors \cite{delgad}. In Fig (\ref{color}), we show two examples of colexes in two and three dimensions. A CC on a $D$-colex is defined by two sets of $X$-type and $Z$-type stabilizers in the following form: \begin{equation} A_C =\prod_{i\in C}X_i ~~,~~B_{C'} =\prod_{i\in C'} Z_i \end{equation} where $C$ and $C'$ are cells of the colex such that $[A_C ,B_{C'}]=0$. For example, for a 2-colex (a hexagonal lattice) the above operators are defined corresponding to each plaquette of the lattice. On the other hand, for a 3-colex the $X$-type operators are defined corresponding to each three-dimensional cell of the lattice, and the $Z$-type operators are defined corresponding to each plaquette of the lattice. In order to consider the CC in a uniform magnetic field, we should represent the CC as a CSS code on a hypergraph. We explain this idea with two simple examples and then extend it to more general cases. Consider first a CC on a 2-colex, such as a hexagonal lattice. The CC corresponding to this lattice is of the following form: \begin{equation} |CC_2 \rangle =\prod_{p}(1+A_p)|00...0\rangle \end{equation} \begin{figure}[t] \centering \includegraphics[width=8cm,height=4.5cm,angle=0]{color} \caption{(Color online) Two sample lattices on which a color code can be defined. The left-hand side is a 2-colex (a three-colorable hexagonal lattice) and the right-hand side is a 3-colex with four-colorable cells. } \label{color} \end{figure} Now we define a hypergraph $H$ corresponding to this state, where each qubit is a vertex of the hypergraph and each plaquette of the 2-colex corresponds to a hyperedge of $H$ which involves all qubits belonging to that plaquette. Then we should find the dual of such a hypergraph. 
As shown in Fig (\ref{colohyp}), since each vertex $v$ of $H$ is a member of three neighboring hyperedges, in the dual space each edge $\tilde{e}$ involves three vertices. Therefore, the dual hypergraph is a triangular lattice, whose triangles correspond to the hyperedges of $\tilde{H}$, see Fig (\ref{colexa}). According to the duality mapping, the spin model corresponding to $\tilde{H}$ is an Ising-like model with three-body interactions in a transverse field. \begin{figure}[t] \centering \includegraphics[width=6cm,height=2cm,angle=0]{colohyp} \caption{(Color online) In the dual space we insert a red vertex corresponding to each hyperedge of $H$. Since each vertex of the hexagonal lattice is a member of three plaquettes, i.e., three hyperedges of $H$, in the dual space each hyperedge involves three vertices of $\tilde{H}$, denoted by a triangle.} \label{colohyp} \end{figure} Next, consider a CC on a 3-colex. Since in this case the $X$-type operators are related to the cells of the 3-colex, a hypergraph $H$ should be defined with hyperedges corresponding to the cells. The next step is to find the dual of such a hypergraph. As shown in Fig (\ref{colo3}), since each vertex is a member of four hyperedges of $H$, in the dual space each hyperedge involves four vertices. In this way, $\tilde{H}$ corresponds to a tetrahedral lattice, where each tetrahedron corresponds to a hyperedge of $\tilde{H}$ which involves the four vertices belonging to that tetrahedron. According to the duality mapping, the spin model corresponding to $\tilde{H}$ is an Ising-like model on a tetrahedral lattice with four-body interactions corresponding to each tetrahedron, in the presence of a transverse field. 
\begin{figure}[t] \centering \includegraphics[width=8cm,height=4.5cm,angle=0]{colexa} \caption{(Color online) By applying a transformation similar to Fig.(\ref{colohyp}) to all vertices, we obtain a triangular lattice in the dual space, with a new Hamiltonian with three-body interactions corresponding to each triangle of the lattice.} \label{colexa} \end{figure} The extension of the above idea to CC in higher dimensions is straightforward. In fact, it has been shown that the dual of a $D$-colex is a D-simplicial lattice with $(D+1)$-colorable vertices \cite{Bombin2015}. Therefore, according to the duality mapping, a CC on a $D$-colex in a magnetic field is mapped to an Ising-like model with $(D+1)$-body interactions corresponding to each cell of a $D$-simplicial lattice, in the presence of a transverse field. Unfortunately, such spin systems on simplicial lattices are rather abstract and have been studied only for some specific two-dimensional examples \cite{baxterwu}. Therefore, unlike for TC, we are not able to compare the robustness of CC on different lattices. \begin{figure}[t] \centering \includegraphics[width=8cm,height=4.5cm,angle=0]{colo3} \caption{(Color online) In the dual space we insert a red vertex corresponding to each hyperedge of $H$. Since each vertex of a 3-colex is a member of four cells of the lattice, i.e., four hyperedges of $H$, in the dual space each hyperedge involves four vertices of $\tilde{H}$, so that we obtain a tetrahedral lattice with a Hamiltonian with four-body interactions corresponding to the tetrahedra.} \label{colo3} \end{figure} \subsection{Self-dual models} As we mentioned in section (\ref{s1}), a hypergraph is called self-dual if it is the same as its dual. On the other hand, according to our duality mapping, for a Hamiltonian on a self-dual hypergraph, the dual Hamiltonian is the same as the original Hamiltonian with the coupling $J$ and the magnetic field $h$ exchanged. 
In this way, it is clear that such a model has a phase transition at $\frac{h}{J}=1$, provided there is a single transition separating the two different phases of the limits $h\rightarrow 0$ and $J\rightarrow 0$. In the rest of this subsection we give some examples of such models. \begin{figure}[t] \centering \includegraphics[width=6cm,height=2.5cm,angle=0]{one} \caption{(Color online) a) A one-dimensional lattice where qubits live on the edges of the lattice and an $X$-type stabilizer is defined corresponding to each vertex. b) A hypergraph representation of the above model, where corresponding to each stabilizer we have defined a hyperedge, denoted in red. c) In the dual space we insert a red vertex corresponding to each hyperedge of $H$. Since each vertex of $H$ is a member of two hyperedges, in the dual space each hyperedge also involves two vertices of $\tilde{H}$. Therefore, $\tilde{H}$ is the same as $H$.} \label{one} \end{figure} The first example is a one-dimensional model. To this end, consider a one-dimensional lattice with qubits living on the edges of the lattice. Corresponding to each vertex of the lattice, we define an $X$-type operator of the form $A_v =X_l X_r$, where $l$ and $r$ denote the qubits to the left and right of the vertex. In order to define a hypergraph corresponding to such a model, it is enough to associate a hyperedge with each $X$-type stabilizer. In Fig.(\ref{one}) we show such a hypergraph. The numbers of vertices and hyperedges of this hypergraph are the same, and since each vertex is a member of two neighboring hyperedges, it is simple to check that such a hypergraph is self-dual, see Fig.(\ref{one}). In this way, according to the duality mapping, the Hamiltonian corresponding to the above $X$-type stabilizers in a uniform magnetic field is self-dual, and the phase transition point is at $\frac{h}{J}=1$. Such a Hamiltonian is the one-dimensional Ising model in a transverse field, and the above phase transition point is a well-known result. 
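Self-duality of the one-dimensional hypergraph can also be checked programmatically (our own sketch): on an $L$-site ring with qubits on the edges, the hyperedge of vertex $v$ contains its two adjacent qubits, and the dual hyperedges reproduce the same ring up to relabelling:

```python
L = 6
# qubit i sits on the lattice edge between vertices i and i+1 (mod L);
# the hyperedge of vertex v contains the qubits to its left and right
edges = [frozenset({(v - 1) % L, v}) for v in range(L)]

# dual hyperedge of qubit q: the hyperedges (vertices) containing q
dual_edges = [frozenset(m for m, e in enumerate(edges) if q in e)
              for q in range(L)]

# the dual is the same ring: identical multiset of two-element hyperedges
print(sorted(sorted(e) for e in dual_edges) == sorted(sorted(e) for e in edges))
```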
Another example of a self-dual model can be defined on a two-dimensional square lattice where qubits live on the vertices. Corresponding to each plaquette of the lattice, we define an $X$-type stabilizer of the form $A_p =X_1 X_2 X_3 X_4$, where 1, 2, 3, 4 denote the qubits on the four vertices belonging to the plaquette. In Fig.(\ref{two}), we show a hypergraph representation of such a model. On a square lattice the numbers of vertices and plaquettes are the same, and since each vertex is a member of four hyperedges of the hypergraph, it is simple to check that such a hypergraph is self-dual. In this way, the phase transition point of such a model in a uniform magnetic field is at $\frac{h}{J}=1$. \begin{figure}[t] \centering \includegraphics[width=8cm,height=2.5cm,angle=0]{two} \caption{(Color online) a) A two-dimensional square lattice where qubits live on the vertices of the lattice and an $X$-type stabilizer is defined corresponding to each plaquette. b) A hypergraph representation of the above model, where corresponding to each stabilizer we have defined a hyperedge, denoted by red curves. c) In the dual space we insert a red vertex corresponding to each hyperedge of $H$. Since each vertex of $H$ is a member of four hyperedges, in the dual space each hyperedge also involves four vertices of $\tilde{H}$. Therefore, $\tilde{H}$ is the same as $H$.} \label{two} \end{figure} The extension of the above two examples to higher dimensions is straightforward. In three dimensions, it is enough to define $X$-type stabilizers corresponding to each cubic cell of the lattice, and the corresponding hypergraph is again self-dual. Generally, in $D$ dimensions we can take a rectangular lattice and define an $X$-type stabilizer corresponding to each D-dimensional unit cell. Such a model in a uniform magnetic field is also self-dual, with a phase transition point at $\frac{h}{J}=1$. 
\section{Discussion} Mapping quantum many-body systems to hypergraphs is an interesting idea that deserves further study. In this paper we used it to study quantum CSS codes in a uniform magnetic field at zero temperature. In particular, we derived a strong-weak coupling duality that is of special theoretical interest: the quantum phase transitions of two different quantum many-body systems are mapped to each other in exchanged coupling regimes. We then applied this mapping to one of the important problems in quantum information theory, namely robust quantum memory. By studying the robustness of the TC on different graphs, we showed that the robustness of TC defined on graphs decreases in higher dimensions. On the other hand, it is well known that the self-correcting power of a topological code at finite temperature increases in higher dimensions. We emphasize that we obtained this result only for a subclass of TC defined on graphs, with qubits living on edges, which are not useful as self-correcting quantum memories in any dimension due to their string-like excitations. It would therefore be interesting to study other topological codes in this direction. In other words, our results suggest that there might be an optimal dimension in which a more general topological code yields an efficient quantum memory. Furthermore, we used self-duality of hypergraphs to introduce several self-dual models for which we found the phase transition point exactly.
\section{Introduction} Consider a symmetric Dirichlet form on $L^2(\mathbb R^d, \lambda)$ \begin{equation}\label{eq:DirichletRd} \mathcal{E}(f,g) = \int_{\mathbb R^d} \sum_{i,j=1}^d a^{i,j}(\partial_i f) (\partial_j g)d\lambda\;, \end{equation} where $\lambda$ is the Lebesgue measure and $a$ is a measurable, uniformly elliptic function taking values in the space of symmetric $d\times d$ matrices (we make our set-up precise in Section~\ref{subsec:Notation}). It is well-known that there exists a symmetric Markov process $\mathbf{X}$ in $\mathbb R^d$ associated with $\mathcal{E}$; see~\cite{Fukushima11} for a general construction of $\mathbf X$ and~\cite{Stroock88} for fundamental analytic properties of $\mathcal{E}$. We are interested in differential equations of the form \begin{equation}\label{eq:RDEIntro} d\mathbf Y_t = V(\mathbf Y_t)d\mathbf X_t\;, \quad \mathbf Y_0 = y_0 \in \mathbb R^e\;, \end{equation} driven by $\mathbf X$ along vector fields $V = (V_1,\ldots,V_d)$ on $\mathbb R^e$. When $a$ is taken sufficiently smooth, the process $\mathbf{X}$ can be realised as a semi-martingale, for which the classical framework of It{\^o} gives meaning to the equation~\eqref{eq:RDEIntro}. However, for irregular functions $a$ this is no longer the case, and~\eqref{eq:RDEIntro} falls outside the scope of It{\^o} calculus. One of the applications of Lyons' theory of rough paths~\cite{Lyons98} has been to give meaning to differential equations driven by processes outside the range of semi-martingales. One viewpoint of rough paths theory is that it factors the problem of solving equations of the type~\eqref{eq:RDEIntro} into first enhancing $\mathbf X$ to a rough path by appropriately defining its iterated integrals (which is typically done through stochastic means), and then solving~\eqref{eq:RDEIntro} deterministically.
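To make the contrast with the smooth case concrete, we record the standard computation (our addition, assuming $a \in C^1$ and one common normalisation convention): integrating by parts in~\eqref{eq:DirichletRd} identifies the generator
\[
\mathcal{E}(f,g) = -\int_{\mathbb R^d} (L f)\, g \, d\lambda\;, \qquad L f = \sum_{i,j=1}^d \partial_j\big(a^{i,j}\,\partial_i f\big)\;,
\]
so that, expanding the divergence, $\mathbf X$ solves the It{\^o} SDE
\[
d\mathbf X_t = b(\mathbf X_t)\,dt + \sigma(\mathbf X_t)\,dW_t\;, \qquad \sigma\sigma^{T} = 2a\;, \qquad b_i = \sum_{j=1}^d \partial_j a^{i,j}\;.
\]
For merely measurable $a$ the drift $b$ is ill-defined, which is precisely why~\eqref{eq:RDEIntro} escapes the It{\^o} framework.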
Probabilistic methods to enhance the Markov process $\mathbf X$ to a rough path and the study of its fundamental properties appear in~\cite{LyonsStoica99, BassHamblyLyons02, Lejay06, Lejay08}, where primarily the forward-backward martingale decomposition is used to show existence of the stochastic area. A somewhat different approach, which we follow here, is taken in~\cite{FrizVictoir08} where the authors define $\mathbf X$ directly as a diffusion on the free nilpotent Lie group $G^N(\mathbb R^d)$ (in particular the iterated integrals are given directly in the construction). One can show that in the situation mentioned at the start, the two methods give rise to equivalent definitions of rough paths. The latter construction in fact yields further flexibility in that the evolution of $\mathbf X$ can depend in a non-trivial way on its higher levels (its iterated integrals). Note that this is a common feature with L{\'e}vy rough paths studied in~\cite{FrizShekhar17, Chevyrev18}. Markovian rough paths have also recently been investigated in~\cite{CassOgrodnik17, ChevyrevLyons16} in connection with the accumulated local $p$-variation functional and the moment problem for expected signatures. The goal of this paper is to contribute two results to the study of Markovian rough paths in the sense of~\cite{FrizVictoir08}. Our first contribution (Theorem~\ref{thm:support_proper}) answers in the positive a conjecture about the support of $\mathbf X$ in $\alpha$-H{\"o}lder rough path topology. Such a support theorem appeared in~\cite{FrizVictoir08} for $\alpha \in (0,1/6)$, and was improved to $\alpha \in (0,1/4)$ in~\cite{FrizVictoir10} where it was conjectured to hold for $\alpha \in (0,1/2)$ in analogy to enhanced Brownian motion. 
Comparing our situation to the case of Gaussian rough paths, where such support theorems are known with sharp H{\"o}lder exponents (see e.g.,~\cite[Sec.~15.8]{FrizVictoir10}, and~\cite{FrizGess16} for recent improvements), the difficulty of course lies in the lack of a Gaussian structure, in particular the absence of a Cameron-Martin space. Our solution to this problem relies almost entirely on elementary techniques. Indeed, we first show that any stochastic process (taking values in a Polish space) admits explicit lower bounds on the probability of keeping a small $\alpha$-H{\"o}lder norm, provided that it satisfies lower and upper bounds on certain transition probabilities comparable to Brownian motion. This is made precise by conditions~\ref{point:supUpper} and~\ref{point:ballLower} and Theorem~\ref{thm:smallHolder}. We then verify these conditions for the translated rough path $T_h(\mathbf X)$ (which is in general non-Markov, see Remark~\ref{remark:nonMarkov}) for any $h \in W^{1,2}([0,T],\mathbb R^d)$ using heat kernel estimates of $\mathbf X$ (we also note that, just like for enhanced Brownian motion, all relevant constants depend on $h$ only through $\norm{h}_{W^{1,2}}$). As usual, in combination with the continuity of the It{\^o}-Lyons map from rough paths theory, an immediate consequence of improving the H{\"o}lder exponent in the support theorem for $\mathbf X$ is a stronger Stroock-Varadhan support theorem (in $\alpha$-H{\"o}lder topology) for the solution $\mathbf Y$ to the rough differential equation (RDE)~\eqref{eq:RDEIntro}, under lower regularity assumptions on the driving vector fields $V$ ($\textnormal{Lip}^2$ instead of $\textnormal{Lip}^4$).
Our second contribution (Theorem~\ref{thm:WHormander} and its Corollary~\ref{cor:HorCond}) may be seen as a non-Gaussian H{\"o}rmander-type theorem, and provides sufficient conditions on the driving vector fields $V = (V_1,\ldots, V_d)$ under which the solution to the RDE~\eqref{eq:RDEIntro} admits a density with respect to the Lebesgue measure on $\mathbb R^e$. Once again, while this result is reminiscent of density theorems for RDEs driven by Gaussian rough paths (e.g.,~\cite{BaudoinHairer07, CassFriz10, CassHairer15}), the primary difference in our setting is that methods from Malliavin calculus are no longer available due to the lack of a Gaussian structure. We replace the use of Malliavin calculus by direct analysis of (non-symmetric) Dirichlet forms on manifolds. Indeed, we identify conditions under which the couple $(\mathbf X,\mathbf Y)$ admits a density on its natural state-space, and conclude by projecting to $\mathbf Y$. We note however that our current result gives no quantitative information about the density beyond its existence (not even for the couple $(\mathbf X,\mathbf Y)$), and we strongly suspect that the method can be improved to yield further information (particularly $L^p$ bounds and regularity results in the spirit of the De Giorgi--Nash--Moser theorem). \subsection{Notation}\label{subsec:Notation} Throughout the paper, we adopt the convention that the domain of a path $\mathbf x : [0,T] \to E$, for $T > 0$ and a set $E$, is extended to all of $[0,\infty)$ by setting $\mathbf x_t = \mathbf x_T$ for all $t > T$. For a metric space $(E,d)$, $r \geq 0$, and $x \in E$, we denote the ball $B(x,r) = \{y \in E \mid d(x,y) \leq r\}$. We let $G = G^N(\mathbb R^d)$ denote the step-$N$ free nilpotent Lie group over $\mathbb R^d$ for some $N \geq 2$, and let $U_1,\ldots, U_d$ be a set of generators for its Lie algebra $\mathfrak g = \mathfrak g^N(\mathbb R^d)$, which we identify with the space of left-invariant vector fields on $G$. 
We equip $\mathbb R^d$ with the inner product for which $U_1,\ldots, U_d$ form an orthonormal basis upon canonically identifying $\mathbb R^d$ with a subspace of $\mathfrak g$. We equip $G$ with the corresponding Carnot--Carath{\'e}odory metric $d$. Let $1_G$ denote the identity element of $G$ and let $\lambda$ denote the Haar measure on $G$ normalised so that $\lambda(B(1_G,1)) = 1$. For $\Lambda > 0$, let $\Xi(\Lambda) = \Xi^{N,d}(\Lambda)$ denote the set of measurable functions $a$ on $G$ which take values in the space of symmetric $d\times d$ matrices and which are sub-elliptic in the following sense: \[ \Lambda^{-1} |\xi|^2 \leq \gen{\xi,a(x)\xi} \leq \Lambda|\xi|^2\;, \quad \forall \xi \in \mathbb R^d\;, \quad \forall x \in G\;. \] For $a \in \Xi(\Lambda)$, we define the associated Dirichlet form $\mathcal{E} = \mathcal{E}^a$ on $L^2(G,\lambda)$ for all $f,g \in C^\infty_c(G)$ by \begin{equation}\label{eq:DirichletG} \mathcal{E}(f,g) = \int_G \sum_{i,j} a^{i,j} (U_i f) (U_j g) d\lambda\;. \end{equation} We let $\mathbf X = \mathbf X^{a,x}$ denote the Markov diffusion on $G$ associated to $\mathcal{E}$ with starting point $\mathbf X_0 = x \in G$. We recall that the sample paths of $\mathbf X$ are a.s. geometric $\alpha$-H{\"o}lder rough paths for all $\alpha \in (0,1/2)$, and when $a(x)$ depends only on the level-$1$ projection $\pi_1(x) \in \mathbb R^d$ of $x \in G$, $\mathbf X$ serves as the natural rough path lift of the Markov diffusion associated to the Dirichlet form~\eqref{eq:DirichletRd} on $L^2(\mathbb R^d)$ discussed earlier. For further details, we refer to~\cite{FrizVictoir10}. \begin{remark} Throughout the paper we assume the symmetric Dirichlet form~\eqref{eq:DirichletG} is defined on the Hilbert space $L^2(G,\lambda)$ so that $\mathbf X$ is symmetric with respect to $\lambda$. As pointed out in~\cite{CassOgrodnik17}, it is natural to also consider $\mathcal{E}$ defined over $L^2(G,\mu)$ for a measure $\mu(dx) = v(x)\lambda(dx)$, $v \geq 0$. 
While for simplicity we only work with $\mathcal{E}$ defined on $L^2(G,\lambda)$, we note that appropriate assumptions on $v$ and a Girsanov transform (see, e.g.,~\cite{Fitzsimmons97}) can be used to relate the results of this paper to this more general setting. \end{remark} \section{Support theorem} \subsection{Restricted H{\"o}lder norms} We first record some deterministic results on H{\"o}lder norms which will be used in the sequel. Throughout this section, let $(E,d)$ be a metric space, $\alpha \in (0,1]$, $T > 0$, and $\mathbf x \in C([0,T],E)$ a continuous path. Let $\square$ denote any of the relations $<, \leq, =, \geq, >$, and consider the quantity \[ \norm{\mathbf x}_{\alpha\textnormal{-H{\"o}l},\square\varepsilon;[s,t]} = \sup_{u,v \in [s,t], |u-v| \square \varepsilon} \frac{d(\mathbf x_u,\mathbf x_v)}{|u-v|^\alpha}\;, \] where we set $\norm{\mathbf x}_{\alpha\textnormal{-H{\"o}l},\square\varepsilon;[s,t]} = 0$ if the set $\{(u,v) \in [s,t]^2 \mid |u-v| \square \varepsilon\}$ is empty. \begin{definition}\label{def:tau} For $\varepsilon, \gamma > 0$ and $s \in [0,T]$, define the times $(\tau^{\varepsilon, \gamma,s}_n)_{n \geq 0} = (\tau_n)_{n \geq 0}$ by $\tau_0 = s$ and for $n \geq 1$ \[ \tau_n = \inf\{t > \tau_{n-1} \mid \norm{\mathbf x}_{\alpha\textnormal{-H{\"o}l},\geq \varepsilon;[\tau_{n-1},t]} \geq \gamma\}\;. \] We call any such $\tau_n$ a \emph{H{\"o}lder stopping time} of $\mathbf x$. \end{definition} \begin{lemma}\label{lem:unifHolBound} Let $\varepsilon,\gamma >0$ and $s = 0$, and suppose that for some $c > 0$ \begin{equation}\label{eq:supCond} \sup_{t \in [\tau_n,\tau_n + \varepsilon]} d(\mathbf x_{\tau_n}, \mathbf x_{t}) < c\;, \quad \forall n \geq 0\;. \end{equation} Then $\norm{\mathbf x}_{\alpha\textnormal{-H{\"o}l},=\varepsilon;[0,T]} < \tilde \gamma := (3c\varepsilon^{-\alpha}) \vee (4\gamma + c\varepsilon^{-\alpha})$. 
\end{lemma} \begin{proof} For $n \geq 1$ and $t \in [\tau_n - \varepsilon, \tau_n]$, we have one of the following three mutually exclusive cases: (a) $\tau_n = \tau_{n-1}+\varepsilon$, (b) $\tau_n \in (\tau_{n-1}+\varepsilon, \tau_{n-1}+2\varepsilon]$ and $t \in [\tau_{n-1},\tau_{n-1}+\varepsilon]$, or (c) $t > \tau_{n-1}+\varepsilon$. In case (a),~\eqref{eq:supCond} implies that $d(\mathbf x_t,\mathbf x_{\tau_n}) < 2c$. In case (b), $d(\mathbf x_{\tau_n},\mathbf x_{\tau_{n-1}}) \leq \gamma(2\varepsilon)^\alpha$ and~\eqref{eq:supCond} implies that $d(\mathbf x_{t},\mathbf x_{\tau_{n-1}}) < c$, so that \[ d(\mathbf x_{t},\mathbf x_{\tau_n}) < c + \gamma(2\varepsilon)^\alpha \leq (2c) \vee (4\gamma \varepsilon^{\alpha})\;. \] In case (c), we have $d(\mathbf x_{t-\varepsilon},\mathbf x_t) \leq \gamma\varepsilon^\alpha$ and $d(\mathbf x_{t-\varepsilon},\mathbf x_{\tau_n}) \leq \gamma(2\varepsilon)^\alpha$, so that \[ d(\mathbf x_t,\mathbf x_{\tau_n}) \leq \gamma\varepsilon^\alpha + \gamma(2\varepsilon)^\alpha \leq 3\gamma\varepsilon^\alpha\;. \] Hence, in all three cases \begin{equation}\label{eq:ttaun} d(\mathbf x_t,\mathbf x_{\tau_n}) < (2c)\vee (4\gamma\varepsilon^{\alpha})\;. \end{equation} Consider now \[ \tau = \inf\{t > 0 \mid \norm{\mathbf x}_{\alpha\textnormal{-H{\"o}l};=\varepsilon;[0,T]} = \tilde \gamma\}\;. \] Note that $\norm{\mathbf x}_{\alpha\textnormal{-H{\"o}l};=\varepsilon;[0,T]} \geq \tilde \gamma \Leftrightarrow \tau < \infty$. Arguing by contradiction, suppose that $\tau < \infty$, which means that $d(\mathbf x_{\tau-\varepsilon},\mathbf x_\tau) = \tilde \gamma \varepsilon^\alpha$. Consider the largest $n$ for which $\tau_n \leq \tau$. Observe that $\tau_n \in [\tau-\varepsilon,\tau]$, since otherwise $d(\mathbf x_{\tau-\varepsilon},\mathbf x_\tau) < \gamma \varepsilon^\alpha$, which is a contradiction since $\tilde \gamma > \gamma$. 
It follows from~\eqref{eq:supCond} that $d(\mathbf x_{\tau_n},\mathbf x_\tau) \leq c$, and therefore by~\eqref{eq:ttaun} and the triangle inequality \[ d(\mathbf x_{\tau-\varepsilon},\mathbf x_\tau) < c + (2c)\vee (4\gamma\varepsilon^\alpha) = \tilde \gamma\varepsilon^\alpha\;, \] which is again a contradiction. \end{proof} \begin{lemma}\label{lem:dyadics} Suppose that $\norm{\mathbf x}_{\alpha\textnormal{-H{\"o}l}; =2^{-n}\varepsilon;[0,T]} \leq \gamma$ for every $n > N \in \mathbb Z$. Then \[ \norm{\mathbf x}_{\alpha\textnormal{-H{\"o}l}; < 2^{-N}\varepsilon;[0,T]} \leq \frac{\gamma}{1-2^{-\alpha}}\;. \] \end{lemma} \begin{proof} Consider $(t-s)/\varepsilon \in (0,2^{-N})$ with binary representation $(t-s)/\varepsilon = \sum_{n = m}^\infty c_n 2^{-n}$ with $c_n \in \{0,1\}$, $m > N$, and $c_m = 1$. It follows that \[ d(\mathbf x_s,\mathbf x_t) \leq \gamma \sum_{n=m}^\infty \varepsilon^\alpha c_n 2^{-n\alpha}\;. \] Since $2^{-m} \leq (t-s)/\varepsilon$, we have $\varepsilon^\alpha 2^{-n\alpha} \leq 2^{\alpha(m-n)}(t-s)^\alpha$. Hence \[ d(\mathbf x_s,\mathbf x_t) \leq \gamma \sum_{n=m}^\infty 2^{\alpha(m-n)}(t-s)^\alpha = \frac{\gamma(t-s)^\alpha}{1-2^{-\alpha}}\;. \] \end{proof} \begin{lemma}\label{lem:globalHol} Suppose there exist $x \in E$ and $r > 0$ such that for all integers $k \geq 0$, $\mathbf x_{k\varepsilon} \in B(x,r)$ and $\norm{\mathbf x}_{\alpha\textnormal{-H{\"o}l}; \leq \varepsilon; [k\varepsilon,(k+1)\varepsilon]} \leq \gamma$. Then \[ \norm{\mathbf x}_{\alpha\textnormal{-H{\"o}l};[0,T]} \leq 2\gamma + 2r\varepsilon^{-\alpha}\;. \] \end{lemma} \begin{proof} Consider $0 \leq s < t \leq T$, and write $s \in [k\varepsilon,(k+1)\varepsilon)$, $t \in [n\varepsilon,(n+1)\varepsilon)$. If $k=n$ there is nothing to prove, so suppose $k < n$. If $|t-s| \leq \varepsilon$, so that $n=k+1$, then \[ d(\mathbf x_s,\mathbf x_t) \leq d(\mathbf x_s,\mathbf x_{n\varepsilon}) + d(\mathbf x_{n\varepsilon},\mathbf x_t) \leq \gamma 2^{1-\alpha}|t-s|^\alpha\;. 
\] Finally, if $|t-s| > \varepsilon$ then since $\mathbf x_{k\varepsilon},\mathbf x_{n\varepsilon} \in B(x,r)$, it follows that \begin{align*} |t-s|^{-\alpha}d(\mathbf x_s,\mathbf x_t) &\leq |t-s|^{-\alpha}(d(\mathbf x_{k\varepsilon},\mathbf x_s) + d(\mathbf x_{k\varepsilon},\mathbf x_{n\varepsilon}) + d(\mathbf x_{n\varepsilon},\mathbf x_t)) \\ &\leq |t-s|^{-\alpha}(2\varepsilon^{\alpha}\gamma+ 2r) \\ & \leq 2\gamma + 2r\varepsilon^{-\alpha}\;. \end{align*} \end{proof} \subsection{Positive probability of small H{\"o}lder norm} Suppose now $(E,d)$ is a Polish space. In this section, we give conditions under which an $E$-valued process has an explicit positive probability of keeping a small H{\"o}lder norm. We fix $\alpha \in (0,1/2)$, a terminal time $T > 0$, and an $E$-valued stochastic process $\mathbf X$ adapted to a filtration $(\mathcal{F}_t)_{t \in [0,T]}$. Consider the following conditions: \begin{enumerate}[label={(\arabic*)}] \item \label{point:supUpper} There exists $C_1 > 0$ such that for every $c,\varepsilon > 0$, and every H{\"o}lder stopping time $\tau$ of $\mathbf X$, a.s. \[ \mathbb{P} \Big[\sup_{t \in [\tau,\tau+\varepsilon]} d(\mathbf X_\tau,\mathbf X_t) > c \mid \mathcal{F}_\tau \Big] \leq C_1\exp\left( \frac{-c^2}{C_1\varepsilon} \right)\;. \] \item \label{point:ballLower} There exist $c_2, C_2 > 0$ and $x \in E$ such that for every $s \in [0,T]$ and $\varepsilon \in (0,T-s]$, a.s. \[ \mathbb{P} \big[ \mathbf X_{s+\varepsilon} \in B(x,C_2\varepsilon^{1/2}) \mid \mathcal{F}_s\big] \geq c_2 \1{\mathbf X_s \in B(x,C_2\varepsilon^{1/2})}\;. \] \end{enumerate} Roughly speaking, the first condition states that the probability of large fluctuations of $\mathbf X$ over small time intervals should have the same Gaussian tails as that of a Brownian motion, while the second condition bounds from below the probability that $\mathbf X_{s+\varepsilon}$ is in a ball of radius $\sim\varepsilon^{1/2}$ given that $\mathbf X_s$ was in the same ball. 
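Condition~\ref{point:supUpper} is indeed modelled on Brownian behaviour. As a numerical sanity check (our own illustration, not part of the argument), the reflection principle gives $\mathbb P\big[\sup_{t\in[0,\varepsilon]}|B_t| > c\big] \le 4\,\mathbb P[B_\varepsilon > c]$, and the standard Gaussian tail bound $\mathbb P[Z > x] \le \tfrac12 e^{-x^2/2}$ then yields exactly the required sub-Gaussian shape $C_1\exp(-c^2/(C_1\varepsilon))$:

```python
import math

def gauss_tail(x):
    """P[Z > x] for a standard normal Z, via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Reflection principle: P[sup_{[0,eps]} |B| > c] <= 4 * P[B_eps > c]
# with B_eps ~ N(0, eps), i.e. 4 * P[Z > c / sqrt(eps)].  The Gaussian
# tail bound P[Z > x] <= (1/2) exp(-x^2/2) for x >= 0 then gives the
# sub-Gaussian shape of condition (1) with C_1 = 2.
for eps in (0.01, 0.1, 1.0):
    for c in (0.5, 1.0, 2.0, 4.0):
        x = c / math.sqrt(eps)
        assert 4.0 * gauss_tail(x) <= 2.0 * math.exp(-x * x / 2.0)
```

For the translated Markovian rough path $T_h(\mathbf X)$ the analogous tail bound is supplied below by the Fernique estimate rather than the reflection principle.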
\begin{theorem}\label{thm:smallHolder} Assume conditions~\ref{point:supUpper} and~\ref{point:ballLower}. Fix $x$ as in~\ref{point:ballLower}. Then there exist $C_{\ref{thm:smallHolder}}, c_{\ref{thm:smallHolder}} > 0$, depending only on $C_1,c_2, C_2, \alpha, T$, such that for every $\gamma > 0$, a.s. \[ \PPP{\norm{\mathbf X}_{\alpha\textnormal{-H{\"o}l};[0,T]} < \gamma \mid \mathcal{F}_0} \geq C_{\ref{thm:smallHolder}}^{-1} \exp\Big(\frac{-C_{\ref{thm:smallHolder}}}{\gamma^{2/(1-2\alpha)}}\Big)\1{\mathbf X_0 \in B(x,c_{\ref{thm:smallHolder}}\gamma^{1/(1-2\alpha)})}\;. \] \end{theorem} \begin{lemma}\label{lem:lessHol} Assume condition~\ref{point:supUpper}. Then there exists $C_{\ref{lem:lessHol}} > 0$, depending only on $C_1$ and $\alpha$, such that for all $0 \leq s < t \leq T$ and $\varepsilon \in (0,t-s]$, a.s. \[ \PPP{\norm{\mathbf X}_{\alpha\textnormal{-H{\"o}l};\leq\varepsilon;[s,t]} \geq \gamma \mid \mathcal{F}_s} \leq C_{\ref{lem:lessHol}} (t-s) \varepsilon^{-1}(\gamma^{-2}\varepsilon^{1-2\alpha}+1) \exp\Big(\frac{-\gamma^{2}(1-2^{-\alpha})^2}{9C_1\varepsilon^{1-2\alpha}}\Big)\;. \] \end{lemma} \begin{proof} Let $\tau_n = \tau_n^{\varepsilon,\gamma,s}$ be defined as in Definition~\ref{def:tau} with $\tau_0 = s$. Note that~\ref{point:supUpper} implies that for all $c,\gamma > 0$, $t > s$ and $\varepsilon \in (0,t-s]$, \[ \PPP{\exists n \geq 0, \tau_n \leq t, \sup_{u \in [\tau_n,\tau_n+\varepsilon]} d(\mathbf X_{\tau_n}, \mathbf X_u) > c \mid \mathcal{F}_s} \leq \roof{(t-s)/\varepsilon} C_1\exp\Big(\frac{-c^2}{C_1\varepsilon}\Big)\;, \] so that by Lemma~\ref{lem:unifHolBound} \[ \PPP{\norm{\mathbf X}_{\alpha\textnormal{-H{\"o}l};=\varepsilon;[s,t]} \geq (3c\varepsilon^{-\alpha}) \vee (4\gamma + c\varepsilon^{-\alpha}) \mid \mathcal{F}_s} \leq \roof{(t-s)/\varepsilon} C_1\exp\Big(\frac{-c^2}{C_1\varepsilon}\Big)\;. 
\] In particular, choosing $c = 2\gamma\varepsilon^{\alpha}$ yields that for all $\gamma > 0$, $t > s$, and $\varepsilon \in (0,t-s]$, \[ \PPP{\norm{\mathbf X}_{\alpha\textnormal{-H{\"o}l};=\varepsilon;[s,t]} \geq 6\gamma \mid \mathcal{F}_s} \leq \roof{(t-s)/\varepsilon} C_1\exp\Big(\frac{-(2\gamma)^2}{C_1\varepsilon^{1-2\alpha}}\Big)\;. \] Hence \[ \PPP{\exists n \geq 0, \norm{\mathbf X}_{\alpha\textnormal{-H{\"o}l};=2^{-n}\varepsilon;[s,t]} \geq \gamma \mid \mathcal{F}_s} \leq 2C_1(t-s)\varepsilon^{-1}\sum_{n=0}^\infty 2^n \exp\Big(\frac{-2^{n(1-2\alpha)}\gamma^2}{9C_1\varepsilon^{1-2\alpha}}\Big)\;. \] The conclusion now follows from Lemma~\ref{lem:dyadics} and the observation that for every $\theta > 0$ there exists $C_4$ such that for all $K > 0$ \[ \sum_{n=0}^\infty 2^n \exp\left(-K 2^{\theta n}\right) \leq C_4 (K^{-1}+1) e^{-K} \] (which can be seen, for example, by the integral test and the asymptotic behaviour of the incomplete gamma function $\Gamma(p,K)$). \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:smallHolder}] For $\gamma, \varepsilon > 0$ and $s \in [0,T]$, consider the event \[ A_s = \{ \norm{\mathbf X}_{\alpha\textnormal{-H{\"o}l}; \leq \varepsilon;[s,s+\varepsilon]} < \gamma, \mathbf X_{s+\varepsilon} \in B(x,C_2\varepsilon^{1/2})\}\;. \] Applying condition~\ref{point:ballLower} and Lemma~\ref{lem:lessHol} with $t = s+\varepsilon$, we see that for all $s \in [0,T]$, and $\varepsilon, \gamma > 0$ \[ \PPP{ A_s \mid \mathcal{F}_s} \geq c_2\1{\mathbf X_s \in B(x,C_2\varepsilon^{1/2})} - C_{\ref{lem:lessHol}}(\gamma^{-2}\varepsilon^{1-2\alpha}+1) \exp\Big( \frac{-\gamma^{2}(1-2^{-\alpha})^2}{9C_1\varepsilon^{1-2\alpha}}\Big)\;. 
\] Observe also that Lemma~\ref{lem:globalHol} (with $r = C_2\varepsilon^{1/2}$) implies that for all $\varepsilon, \gamma > 0$ \[ \mathbb{P}\Big[\norm{\mathbf X}_{\alpha\textnormal{-H{\"o}l};[0,T]} < 2\gamma + 2C_2\varepsilon^{1/2-\alpha} \mid \mathcal{F}_0\Big] \geq \mathbb{P}\Big[\bigcap_{k=0}^{\roof{T/\varepsilon}-1} A_{k\varepsilon} \mid \mathcal{F}_0 \Big]\;. \] It remains to control the final probability on the RHS. We set $\varepsilon = c_1\gamma^{2/(1-2\alpha)}$ (so that $\varepsilon^{1/2-\alpha}\sim \gamma$), where $c_1 > 0$ is sufficiently small (and depends only on $C_1,c_2, C_2,C_{\ref{lem:lessHol}}$ and $\alpha$) such that \[ \kappa := c_2 - C_{\ref{lem:lessHol}}(c_1^{1-2\alpha} + 1)\exp\Big(\frac{-(1-2^{-\alpha})^2}{36 C_1 c_1^{1-2\alpha}}\Big) > 0\;, \] so in particular for all $s \in [0,T]$ and $\gamma > 0$, \[ \PPP{A_s \mid \mathcal{F}_s} \geq \kappa \1{\mathbf X_s \in B(x,C_2\varepsilon^{1/2})}\;. \] Inductively applying conditional expectations, it follows that for all $n \geq 0$ \[ \mathbb{P}\Big[\bigcap_{k=0}^{n} A_{k\varepsilon} \mid \mathcal{F}_0\Big] \geq \kappa^{n+1}\1{\mathbf X_0 \in B(x,C_2\varepsilon^{1/2})}\;. \] Taking $n = \roof{T/\varepsilon}-1$ yields the desired result. \end{proof} \subsection{Support theorem for Markovian rough paths} We now turn to the support theorem for Markovian rough paths in $\alpha$-H{\"o}lder topology, which we state in Theorem~\ref{thm:support_proper} at the end of this section. Recall the Sobolev path space $W^{1,2} = W^{1,2}([0,T], \mathbb R^d)$ and the translation operator $T_{h}(\mathbf x)$ defined for $\mathbf x \in C^{p\textnormal{-var}}([0,T],G)$, $1 \leq p < N+1$, and $h \in C^{1\textnormal{-var}}([0,T], \mathbb R^d)$ (see~\cite[Sec.~1.4.2,~9.4.6]{FrizVictoir10}). Let us fix $\alpha \in (0,1/2)$ and $\Lambda > 0$. Recall further Notation~\ref{subsec:Notation}, in particular the set $\Xi(\Lambda)$. \begin{proposition}\label{prop:support} Let $h \in W^{1,2}$. 
There exists a constant $C_{\ref{prop:support}} > 0$, depending only on $\Lambda$, $\norm{h}_{W^{1,2}}$, $\alpha$, and $T$, such that for all $a \in \Xi(\Lambda)$, $x \in G$, and $\gamma > 0$ \[ \PPPover{a,x}{\norm{T_h(\mathbf X)}_{\alpha\textnormal{-H{\"o}l};[0,T]} < \gamma} \geq C_{\ref{prop:support}}^{-1} \exp \Big( \frac{-C_{\ref{prop:support}}}{\gamma^{2/(1-2\alpha)}}\Big)\;. \] \end{proposition} For the proof, let us fix $h \in W^{1,2}$ and a filtration $(\mathcal{F}_t)_{t \in [0,T]}$ to which $\mathbf X$ (and thus $T_h(\mathbf X)$) is adapted (e.g., the natural filtration generated by $\mathbf X$). \begin{remark}\label{remark:nonMarkov} If $a(x)$ depends only on the first level $\pi_1(x)$ for all $x \in G$, then $T_h(\mathbf X)$ is a (non-symmetric, time-inhomogeneous) Markov process. In general, however, $T_h(\mathbf X)$ is non-Markov. The reason is that, for any fixed $t \in (0,T]$, the sigma-algebra $\sigma(\mathbf X_t)$ is not necessarily contained in $\sigma(T_h(\mathbf X)_t)$, i.e., information on whether $T_h(\mathbf X)_t \in A$ for Borel subsets $A \subset G$ does not yield full information about $\mathbf X_t$, which is necessary to determine the evolution of $\mathbf X$, and thus of $T_h(\mathbf X)$. \end{remark} Recall that the Fernique estimate~\cite[Cor.~16.12]{FrizVictoir10} implies that for every stopping time $\tau$ and $p > 2$, a.s. \begin{equation}\label{eq:Fernique} \PPP{\norm{\mathbf X}_{p\textnormal{-var};[\tau,\tau+\varepsilon]} > c \mid \mathcal{F}_\tau} \leq C_F\exp\Big( \frac{-c^2}{C_F \varepsilon} \Big), \end{equation} where $C_F$ depends only on $\Lambda$ and $p$. We now prove two lemmas which demonstrate that the process $T_h(\mathbf X)$ satisfies conditions~\ref{point:supUpper} and~\ref{point:ballLower}. 
\begin{lemma}\label{lem:supBound} There exists a constant $C > 0$, depending only on $\Lambda$, such that for all $c, \varepsilon > 0$ satisfying \begin{equation}\label{eq:epsUpper} \varepsilon \leq \frac{c^2}{4\norm{h}^2_{W^{1,2}}}, \end{equation} it holds that for every stopping time $\tau$, a.s. \[ \mathbb{P}\Big[\sup_{t \in [\tau,\tau+\varepsilon]} d(T_h(\mathbf X)_\tau,T_h(\mathbf X)_t) > c \mid \mathcal{F}_\tau \Big] \leq C\exp\Big(\frac{-c^2}{4C\varepsilon}\Big)\;. \] \end{lemma} \begin{proof} Suppose $c,\varepsilon > 0$ satisfy~\eqref{eq:epsUpper}. Using that $\norm{h}_{1\textnormal{-var};[s,s+\varepsilon]} \leq \varepsilon^{1/2}\norm{h}_{W^{1,2};[s,s+\varepsilon]}$, we have $\norm{h}_{1\textnormal{-var};[s,s+\varepsilon]} \leq c/2$. Fix now any $2 < p < N+1$. Observe that (see~\cite[Thm.~9.33]{FrizVictoir10}) \begin{align*} \sup_{t \in [s,s+\varepsilon]}d(T_h(\mathbf X)_s,T_h(\mathbf X)_t) &\leq \norm{T_h(\mathbf X)}_{p\textnormal{-var};[s,s+\varepsilon]} \\ &\leq C_1\left(\norm{\mathbf X}_{p\textnormal{-var};[s,s+\varepsilon]} + \norm{h}_{1\textnormal{-var};[s,s+\varepsilon]} \right)\;, \end{align*} from which the conclusion follows by the Fernique estimate~\eqref{eq:Fernique}. \end{proof} \begin{lemma}\label{lem:lowerBallBound} For all $C \geq C_0(\Lambda, \norm{h}_{W^{1,2}}) > 0$, there exists $c = c(C, \Lambda, \norm{h}_{W^{1,2}}) > 0$ such that for all $x \in G$, $s \in [0,T]$, and $\varepsilon \in (0,T-s]$, a.s. \[ \mathbb{P}\big[ T_h(\mathbf X)_{s+\varepsilon} \in B(x, C\varepsilon^{1/2}) \mid \mathcal{F}_s \big] \geq c \1{T_h(\mathbf X)_s \in B(x,C\varepsilon^{1/2})}\;. \] \end{lemma} \begin{proof} We use the shorthand notation $\mathbf Y = T_h(\mathbf X)$. For every $x,y \in G$, consider a geodesic $\gamma^{y,x} : [0,1] \to G$ with $\gamma^{y,x}_{0} = y$ and $\gamma^{y,x}_{1} = x$ parametrised at unit speed. Let $z(y,x) := \gamma^{y,x}_{1/2}$ denote its midpoint. 
For any $x \in G$, observe that \begin{align*} d(\mathbf Y_{s+\varepsilon}, x) &\leq d(\mathbf Y_{s+\varepsilon}, z(\mathbf Y_s,x)) + d(z(\mathbf Y_s,x),x) \\ &\leq d(\mathbf Y_{s,s+\varepsilon},\mathbf X_{s,s+\varepsilon}) + d(\mathbf X_{s,s+\varepsilon},\mathbf Y_s^{-1}z(\mathbf Y_s,x)) + d(z(\mathbf Y_s,x),x)\;. \end{align*} If $\mathbf Y_s \in B(x,r)$, then evidently $d(z(\mathbf Y_s,x),x) \leq r/2$. Moreover, since $G$ is a homogeneous group and due to our normalisation of $\lambda$, it holds that $\lambda(B(x,r)) = r^Q$ for all $r \geq 0$ and $x \in G$, where $Q \geq 1$ is the homogeneous dimension of $G$. Recall also the lower bound on the heat kernel~\cite[Thm.~16.11]{FrizVictoir10} \[ p(\varepsilon,x,y) \geq C_l^{-1} \varepsilon^{-Q/2} \exp\Big(\frac{-C_l d(x,y)^2}{\varepsilon}\Big)\;, \quad \forall x,y \in G\;, \quad \forall \varepsilon > 0\;, \] where $C_l > 0$ depends only on $\Lambda$. It follows that there exists $C_1 > 0$, depending only on $\Lambda$, such that, for any $r,\varepsilon > 0$ and $y \in B(1_G,r/2)$, \begin{align*} \PPP{d(\mathbf X_{s,s+\varepsilon}, y) < r/4} &\geq \lambda(B(y,r/4))C_l^{-1}\varepsilon^{-Q/2}\exp\Big(\frac{-C_l r^2}{\varepsilon}\Big) \\ &\geq \frac{C_1^{-1}r^Q}{\varepsilon^{Q/2}}\exp\Big(\frac{-C_1 r^2}{\varepsilon}\Big)\;. \end{align*} Note that if $\mathbf Y_s \in B(x,r)$, then necessarily $\mathbf Y_s^{-1}z(\mathbf Y_s,x) \in B(1_G,r/2)$, so we obtain for all $x \in G$, $r,\varepsilon > 0$ and $s \in [0,T]$ \[ \PPP{d(\mathbf X_{s,s+\varepsilon}, \mathbf Y_s^{-1}z(\mathbf Y_s,x)) < r/4 \mid \mathcal{F}_s} \geq \frac{C_1^{-1}r^Q}{\varepsilon^{Q/2}}\exp\Big( \frac{-C_1 r^2}{\varepsilon} \Big) \1{\mathbf Y_s \in B(x,r)}\;. 
\] Finally, by standard rough paths estimates (using that $T_h(\mathbf X)_{s,t}$ is equal to $\mathbf X_{s,t}$ plus a combination of cross-integrals of $\mathbf X$ and $h$ over $[s,t]$) we have \begin{align*} d(\mathbf X_{s,s+\varepsilon},\mathbf Y_{s,s+\varepsilon}) &\leq C_2\max_{i \in \{1,\ldots, N\}} \Big(\sum_{k=1}^i \norm{h}_{1\textnormal{-var};[s,s+\varepsilon]}^k \norm{\mathbf X}_{p\textnormal{-var};[s,s+\varepsilon]}^{i-k} \Big)^{1/i} \\ &\leq C_2\max_{i \in \{1,\ldots, N\}} \Big(\sum_{k=1}^i \varepsilon^{k/2}\norm{h}_{W^{1,2};[s,s+\varepsilon]}^k \norm{\mathbf X}_{p\textnormal{-var};[s,s+\varepsilon]}^{i-k} \Big)^{1/i}\;. \end{align*} Hence, if $\norm{\mathbf X}_{p\textnormal{-var};[s,s+\varepsilon]} \leq R\varepsilon^{1/2}$, then for some $C_3 > 0$ depending only on $G$ \[ d(\mathbf X_{s,s+\varepsilon},\mathbf Y_{s,s+\varepsilon}) \leq C_3\varepsilon^{1/2}(\norm{h}_{W^{1,2};[s,s+\varepsilon]}^{1/N} + \norm{h}_{W^{1,2};[s,s+\varepsilon]})(1+R^{(N-1)/N})\;. \] We now let $r = C\varepsilon^{1/2}$. It follows that if $C$ and $R$ satisfy \begin{equation}\label{eq:Clower} C \geq 4C_3(\norm{h}_{W^{1,2};[s,s+\varepsilon]}^{1/N} + \norm{h}_{W^{1,2};[s,s+\varepsilon]})(1+R^{(N-1)/N})\;, \end{equation} then by the Fernique estimate~\eqref{eq:Fernique}, for any $2 < p < N+1$, \begin{align*} \mathbb{P}\big[ d(\mathbf X_{s,s+\varepsilon},\mathbf Y_{s,s+\varepsilon}) > C\varepsilon^{1/2}/4 \mid \mathcal{F}_s \big] &\leq \mathbb{P}\big[\norm{\mathbf X}_{p\textnormal{-var};[s,s+\varepsilon]} > R\varepsilon^{1/2} \mid \mathcal{F}_s\big] \\ &\leq C_F \exp\Big( \frac{-R^2}{C_F} \Big)\;. \end{align*} It follows that if $C$ and $R$ furthermore satisfy \begin{equation}\label{eq:clower} c := C_1^{-1} C^{Q}\exp\left( -C_1 C^2 \right) - C_F \exp\Big( \frac{-R^2}{C_F} \Big) > 0\;, \end{equation} then we obtain \[ \mathbb{P} \big[ d(\mathbf Y_{s+\varepsilon},x) < C\varepsilon^{1/2} \mid \mathcal{F}_s \big] \geq c \1{\mathbf Y_s \in B(x,C\varepsilon^{1/2})}\;. 
\] We now observe that due to the factor $R^{(N-1)/N}$ in~\eqref{eq:Clower} above, there exists $C_0 > 0$, depending only on $\norm{h}_{W^{1,2}}$ and $\Lambda$, such that for every $C \geq C_0$, we can find $R > 0$ for which~\eqref{eq:Clower} and~\eqref{eq:clower} are satisfied. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:support}] By Theorem~\ref{thm:smallHolder}, it suffices to check that $T_h(\mathbf X)$ satisfies conditions~\ref{point:supUpper} and~\ref{point:ballLower} with constants $C_1,c_2, C_2$ only depending on $\Lambda$ and $\norm{h}_{W^{1,2}}$. However this follows directly from Lemmas~\ref{lem:supBound} and~\ref{lem:lowerBallBound}. \end{proof} \begin{theorem}\label{thm:support_proper} Let $\gamma, R > 0$. It holds that \begin{equation}\label{eq:dHol_lower_bound} \inf_{x \in G} \inf_{a\in\Xi(\Lambda)} \inf_{\|h\|_{W^{1,2}} \leq R} \PPPover{a,x}{d_{\alpha\textnormal{-H{\"o}l};[0,T]}(\mathbf X,S_N(h)) < \gamma} > 0\;, \end{equation} where $d_{\alpha\textnormal{-H{\"o}l};[0,T]}$ denotes the (homogeneous) $\alpha$-H{\"o}lder metric and $S_N(h)$ is the level-$N$ lift of $h$. In particular, the support of $\mathbf X^{a,x}$ in $\alpha$-H{\"o}lder topology is precisely the closure in $C^{\Hol{\alpha}}([0,T], G)$ of $\{ x S_N(h) \mid h \in W^{1,2} \}$. \end{theorem} \begin{proof} By uniform continuity of the map $(\mathbf x,h) \mapsto T_h(\mathbf x)$ on bounded sets~\cite[Cor.~9.35]{FrizVictoir10}, and the fact that $T_hT_{-h}(\mathbf x) = \mathbf x$ and $T_h(0) = S_N(h)$, there exists $\delta = \delta(\gamma,R) > 0$ such that for all $h\in W^{1,2}$ with $\|h\|_{W^{1,2}} \leq R$ \[ \|T_{-h}(\mathbf x)\|_{\Hol{\alpha};[0,T]} < \delta(\gamma,R) \Rightarrow d_{\Hol{\alpha}}(\mathbf x,S_N(h)) < \gamma\;. \] The bound~\eqref{eq:dHol_lower_bound} then follows from Proposition~\ref{prop:support}. As a consequence, we see that the support of $\mathbf X^{a,x}$ contains the closure of $\{ x S_N(h) \mid h \in W^{1,2} \}$. 
The reverse inclusion follows from the fact that $\mathbf X^{a,x}$ is a.s. a geometric $\alpha$-H{\"o}lder rough path, and is therefore the limit in the $d_{\Hol{\alpha};[0,T]}$ metric of lifts of smooth paths. \end{proof} \begin{remark} The main difference with the approach taken in~\cite[Thm.~50]{FrizVictoir08} and~\cite[Thm~16.33]{FrizVictoir10} to prove a bound of the form $\mathbb{P}[d_{\Hol{\alpha}}(\mathbf X,S_N(h)) <\gamma] > 0$ (with $\alpha \in [0,1/6)$ and $\alpha \in [0,1/4)$ respectively) is that we do not rely on a support theorem in the uniform topology. As a consequence, our analysis is more delicate but does not lose any power at each step, which allows us to push to the sharp H{\"o}lder exponent range $\alpha \in [0, 1/2)$. Note also that~\cite[Thm~16.39]{FrizVictoir10} and~\cite[Cor.~46]{FrizVictoir08} give this bound for $h\equiv 0$ with the sharp range $\alpha \in [0,1/2)$. The proof therein relies crucially on lower and upper bounds on the probability that $\mathbf X$ stays in small balls, namely $\mathbb{P}^{a,x}[\|\mathbf X\|_{0;[0,t]} < \gamma] \asymp e^{-\lambda(\gamma)t\gamma^2}$ with $0<\lambda_{\min} \leq \lambda(\gamma) \leq \lambda_{\max} < \infty$, which yields a version of Lemma~\ref{lem:lessHol} for the untranslated process $\mathbf X^{a,x}$ conditioned to stay in a small ball around $x$. This argument is rather sensitive to the fact that for each fixed $\gamma > 0$ the same quantity $\lambda(\gamma)$ appears in the lower and upper bounds; this is not true for the translated process $T_h(\mathbf X)$, which is the reason for our different strategy. \end{remark} \section{Density theorem} \subsection{Semi-Dirichlet forms associated with H{\"o}rmander vector fields}\label{subsec:semiDir} In this subsection, let $\mathcal{O}$ be a smooth manifold and $W = (W_1,\ldots, W_d)$ a collection of smooth vector fields on $\mathcal{O}$. 
For $z \in \mathcal{O}$, let $\textnormal{Lie}_{z}W$ denote the subspace of $T_{z}\mathcal{O}$ spanned by the vector fields $(W_1,\ldots, W_d)$ and all their commutators at $z$. We say that $W$ satisfies H{\"o}rmander's condition on $\mathcal{O}$ if $\textnormal{Lie}_{z}W = T_{z}\mathcal{O}$ for every $z \in \mathcal{O}$, in which case we call $W$ a collection of H{\"o}rmander vector fields. Fix a collection $W = (W_1,\ldots, W_d)$ of H{\"o}rmander vector fields on $\mathcal{O}$ and $U \subset \mathcal{O}$ an open subset with compact closure. Consider a bounded measurable function $a$ on $U$ taking values in (not necessarily symmetric) $d\times d$ matrices such that for some $\Lambda \geq 1$ \begin{equation}\label{eq:aLower} \Lambda^{-1} |\xi|^2 \leq \gen{\xi, a(z)\xi}\;, \quad \forall \xi \in \mathbb R^d\;, \quad \forall z \in U\;. \end{equation} Let $\mu$ be a smooth measure on $\mathcal{O}$ and define the bilinear map \begin{align*} \mathcal{E} &: C^\infty_c(U) \times C^\infty_c(U) \to \mathbb R \\ \mathcal{E} &: (f,g) \mapsto -\sum_{i,j=1}^d \int_{U} a^{i,j}(z) (W_i f)(z)(W_j^*g)(z) \mu(dz)\;, \end{align*} where $W_j^* = -W_j - \textnormal{div}_\mu W_j$ is the formal adjoint of $W_j$ with respect to $\mu$. In the following lemma, the $L^p$ norm $\norm{\cdot}_{p}$ for $p \in [1,\infty]$ is assumed to be on $L^p(U,\mu)$. For background concerning (non-symmetric, semi-)Dirichlet forms, we refer to~\cite{Oshima13}. \begin{lemma}\label{lem:L2Density} The bilinear form $\mathcal{E}$ is closable in $L^2(U, \mu)$, lower bounded, and satisfies the sector condition. Denote by $P_t$ the associated (strongly continuous) semi-group on $L^2(U,\mu)$. Suppose further that $P_t$ is sub-Markov (so that the closed extension of $\mathcal{E}$ is a lower-bounded semi-Dirichlet form) and maps $C_b(U)$ into itself. 
Then there exists $\nu > 2$ and $b>0$ such that for every $x \in U$ and $t>0$ there exists $p_t(x,\cdot) \in L^2(U,\mu)$ with $\norm{p_t(x,\cdot)}_{2} \leq bt^{-\nu/2}$ such that for all $f \in L^2(U,\mu)$ \[ P_t f(x) = \int_U p_t(x,y) f(y) \mu(dy)\;. \] \end{lemma} The proof of Lemma~\ref{lem:L2Density} is based on the sub-Riemannian Sobolev inequality combined with a classical argument of Nash~\cite{Nash58}. We believe this result should be standard, but as we were unable to find a sufficiently similar form in the literature, we prefer to give a proof in Appendix~\ref{appendix:proof} (see~\cite{SCS91,Sturm95} for closely related results in the case that $\mathcal{E}$ is symmetric or positive semi-definite). Note also that in the sequel, namely in the proof of Theorem~\ref{thm:WHormander}, we will only require the fact from Lemma~\ref{lem:L2Density} that the kernel $p_t$ exists. The bound on $\|p_t(x,\cdot)\|_{2}$ is merely a free consequence of the proof of its existence. \subsection{Density for RDEs} We now specialise to the setting of Markovian rough paths. Recall Notation~\ref{subsec:Notation} and consider the RDE \begin{equation}\label{eq:RDE} d\mathbf Y_t = V(\mathbf Y_t)d\mathbf X_t\;, \quad \mathbf Y_0 = y_0 \in \mathbb R^e\;, \end{equation} for smooth vector fields $V = (V_1,\ldots, V_d)$ on $\mathbb R^e$. We suppose also that $V$ are $\textnormal{Lip}^2$ so that~\eqref{eq:RDE} admits a unique solution. We fix also the starting point $\mathbf X_0 = x_0 \in G$ of $\mathbf X$. For the reader's convenience, we recall the Nagano--Sussmann orbit theorem (see, e.g.,~\cite[Chpt.~5]{AgrachevSachkov04}). \begin{theorem}[Orbit theorem, Nagano--Sussmann]\label{thm:orbit} Let $W$ be a set of complete smooth vector fields on a smooth manifold $M$. Let $\mathcal{O}$ denote the orbit of $W$ through a point $z_0\in M$. Then $\mathcal{O}$ is a connected immersed submanifold of $M$. 
Furthermore, for any $z \in \mathcal{O}$, \[ T_{z}\mathcal{O} = \spn{ \mathrm{d} (P^{-1})_{P(z)} w(P(z)) \mid P \in \mathcal{P}, w \in W }\;, \] where \[ \mathcal{P} = \{e^{t_1 w_1}\circ\ldots \circ e^{t_k w_k} \mid t_i \in \mathbb R, \; w_i \in W , \; k \geq 1\} \subset \textnormal{Diff}\, M\;. \] \end{theorem} A particularly useful consequence of the orbit theorem is the following. \begin{corollary}\label{cor:Frobenius} Let notation be as in Theorem~\ref{thm:orbit}. It holds that $\textnormal{Lie}_{z}W \subseteq T_{z}\mathcal{O}$ for all $z \in \mathcal{O}$. Furthermore, $\textnormal{Lie}_{z}W = T_{z}\mathcal{O}$ for all $z \in \mathcal{O}$ if and only if $\dim \textnormal{Lie}_zW$ is constant in $z$. \end{corollary} \begin{proof} The fact that $\textnormal{Lie}_{z}W \subseteq T_{z}\mathcal{O}$ and the ``only if'' implication are obvious. For the ``if'' implication, suppose $\dim \textnormal{Lie}_zW$ is constant in $z \in \mathcal{O}$. Then $\textnormal{Lie} \; W$ defines a distribution on $\mathcal{O}$ (a subbundle of the tangent bundle), so the Frobenius theorem implies that $\textnormal{Lie}\; W$ arises from a regular foliation of $\mathcal{O}$. However, each leaf of this foliation is itself an orbit of $W$. Therefore the foliation contains only one leaf, namely $\mathcal{O}$, which concludes the proof. \end{proof} Consider the manifold $G \times \mathbb R^e$. We canonically identify the tangent space $T_{(x,y)}(G \times \mathbb R^e)$ with $T_x G \oplus T_y\mathbb R^e$ and define smooth vector fields on $G \times \mathbb R^e$ by $W_i = U_i + V_i$. Let $z_0 = (x_0,y_0) \in G \times \mathbb R^e$ and denote by $\mathcal{O} = \mathcal{O}_{z_0}$ the orbit of $z_0$ under the collection $W = (W_1,\ldots, W_d)$. Denote the couple $\mathbf Z_t = (\mathbf X_t,\mathbf Y_t)$ which is a Markov process on $G \times \mathbb R^e$. One can readily show that a.s. 
$\mathbf Z^{z_0}_t \in \mathcal{O}$ for all $t > 0$ (e.g., by approximating each sample path of $\mathbf X$ in $p$-variation for some $p > 2$ by piecewise geodesic paths). \begin{theorem}\label{thm:WHormander} Suppose $W$ satisfies H{\"o}rmander's condition on $\mathcal{O}$, i.e., $\textnormal{Lie}_z W = T_z\mathcal{O}$ for all $z \in \mathcal{O}$. Then for all $t > 0$, $\mathbf Z^{z_0}_t$ admits a density with respect to any smooth measure on $\mathcal{O}$. \end{theorem} The proof of Theorem~\ref{thm:WHormander} will be given at the end of this section. We first state several remarks and a consequence of the theorem. \begin{remark}\label{remark:levelOne} Note that from Notation~\ref{subsec:Notation} we always consider $G = G^N(\mathbb R^d)$ with $N \geq 2$. However, in the special case that $a(x)$ depends only on the first level $\pi_1(x)$ for all $x \in G^N(\mathbb R^d)$, the identical statement in Theorem~\ref{thm:WHormander} holds for the process $\mathbf Z_t = (\pi_1(\mathbf X_t),\mathbf Y_t) \in \mathbb R^d\times \mathbb R^e$ (the conditions change by substituting $G$ by $\mathbb R^d$ everywhere). The reason for this is that Lemma~\ref{lem:infGen} below can be readily adjusted to give analogous infinitesimal behaviour of the process $\mathbf Z_t$ (now taking values in $\mathcal{O} \subseteq \mathbb R^d \times \mathbb R^e$), after which the proof of the theorem carries through without change. \end{remark} For a statement of the density of $\mathbf Y_t$ itself, let $\mathcal{O}' \subseteq \mathbb R^e$ denote the orbit of $y_0 \in \mathbb R^e$ under $V$. \begin{lemma}\label{lem:ZImpliesY} Suppose $\mathbf Z_t^{z_0}$ admits a density with respect to a smooth measure on $\mathcal{O}$. Then $\mathbf Y_t$ admits a density with respect to any smooth measure on $\mathcal{O}'$. 
\end{lemma} \begin{proof} By the description of the tangent space $T_z \mathcal{O}$ in Theorem~\ref{thm:orbit}, it holds that the projection $p_2 : \mathcal{O} \to \mathcal{O}', (x,y) \mapsto y$, is a (surjective) submersion (in fact a smooth fibre bundle) from $\mathcal{O}$ to $\mathcal{O}'$. The conclusion follows from the fact that pre-images of null-sets under submersions are null-sets for smooth measures. \end{proof} Moreover, the condition in Theorem~\ref{thm:WHormander} may be restated in terms of just the driving vector fields $V = (V_1,\ldots, V_d)$ as follows. \begin{lemma}\label{lem:equiv_Hor} For a multi-index $I = (i_1,\ldots, i_k) \in \{1,\ldots, d\}^k$ of length $|I| = k$, denote by $V_{[I]}$ the vector field $[[\ldots[V_{i_1},V_{i_2}],\ldots],V_{i_k}]$. It holds that $W$ satisfies H{\"o}rmander's condition on $\mathcal{O}$ if and only if \begin{equation}\label{eq:HorCond} \textnormal{$\dim \textnormal{span}\{V_{[I]}(y) : |I| > N \} \subseteq T_y\mathbb R^e$ is constant in $y \in \mathcal{O}'$.} \end{equation} \end{lemma} \begin{proof} Since the vector fields $U_1,\ldots, U_d$ are freely step-$N$ nilpotent and generate the tangent space of $G$, observe that \begin{equation}\label{eq:dimLieW} \dim \textnormal{Lie}_{(x,y)} W = \dim G + \dim \textnormal{span}\{V_{[I]}(y) : |I| > N \}\;, \quad \forall (x,y) \in G\times \mathbb R^e\;. \end{equation} Suppose $W$ satisfies H{\"o}rmander's condition on $\mathcal{O}$. Then $\dim \textnormal{Lie}_zW$ is constant in $z \in \mathcal{O}$, and by~\eqref{eq:dimLieW} it follows that~\eqref{eq:HorCond} holds. Conversely, suppose~\eqref{eq:HorCond} holds. It now follows from~\eqref{eq:dimLieW} that $\dim \textnormal{Lie}_zW$ is constant in $z \in \mathcal{O}$, and thus $W$ satisfies H{\"o}rmander's condition on $\mathcal{O}$ by Corollary~\ref{cor:Frobenius}. \end{proof} Combining Theorem~\ref{thm:WHormander} with Lemmas~\ref{lem:ZImpliesY} and~\ref{lem:equiv_Hor}, we obtain the following corollary. 
\begin{corollary}\label{cor:HorCond} Suppose condition~\eqref{eq:HorCond} holds. Then for all $t > 0$, the RDE solution $\mathbf Y_t$ admits a density with respect to any smooth measure on $\mathcal{O}'$. \end{corollary} \begin{remark} Note that $\mathcal{O}' = \mathbb R^e$ whenever $V$ satisfies H{\"o}rmander's condition on $\mathbb R^e$, in which case every smooth measure is equivalent to the Lebesgue measure. \end{remark} \begin{remark} Following Remark~\ref{remark:levelOne}, in the case that $a(x)$ depends only on the first level $\pi_1(x)$, we are able to take $N=1$ in~\eqref{eq:HorCond} when applying Corollary~\ref{cor:HorCond}. \end{remark} \begin{remark} Note that while~\eqref{eq:HorCond} (for any $N\geq 0$) implies that $V$ satisfies H{\"o}rmander's condition on $\mathcal{O}'$, the reverse implication is clearly not true. In particular, we do not know if it is sufficient for $V$ to only satisfy H{\"o}rmander's condition on $\mathcal{O}'$ in order for $\mathbf Y_t$ to admit a density on $\mathcal{O}'$. The difficulty of course is that unless~\eqref{eq:HorCond} is satisfied, the couple $(\mathbf X_t,\mathbf Y_t)$ will in general not admit a density in $\mathcal{O}$, whereby our method of proof breaks down. \end{remark} For the proof of Theorem~\ref{thm:WHormander}, we first recall for the reader's convenience the infinitesimal behaviour of the coordinate projections of $\mathbf X^a$. As before, let $\lambda$ denote the Haar measure on $G$. \begin{lemma}\label{lem:infProj} Let $g \in C^\infty_c(G)$. 
Then for all $k,l \in \{1,\ldots, d\}$ \begin{align*} \lim_{t \rightarrow 0} t^{-1}\gen{g, \EEEover{a,\cdot}{\mathbf X_{0,t}^k}}_{L^2(G,\lambda)} &= -\sum_{j=1}^d\int_{G}a^{k,j}(x)U_j g(x)\lambda(dx)\;, \\ \lim_{t \rightarrow 0} t^{-1}\gen{g, \EEEover{a,\cdot}{\mathbf X^k_{0,t} \mathbf X^l_{0,t}}}_{L^2(G,\lambda)} &= 2\int_{G}a^{k,l}(x) g(x)\lambda(dx)\;, \\ \lim_{t \rightarrow 0} t^{-1}\gen{g, \mathbb{E}^{a,\cdot}\big[\mathbf X_{0,t}^{k,l}\big]}_{L^2(G,\lambda)} &= 0\;. \end{align*} \end{lemma} \begin{proof} This is~\cite[Lem.~27]{FrizVictoir08} extended {\it mutatis mutandis} to the general case $G^N(\mathbb R^d)$, $N \geq 1$, cf.~\cite[Prop.~16.20]{FrizVictoir10}. \end{proof} \begin{lemma}\label{lem:infGen} Let $U \subset \mathcal{O}$ be an open subset with compact closure. Consider the (sub-Markov) semi-group $P_t^U$ of $\mathbf Z_t$ killed upon exiting $U$, defined for all bounded measurable $f : U \to \mathbb R$ by \[ P_t^U f(z) = \EEEover{z}{f(\mathbf Z_t)\1{\mathbf Z_s \in U, \forall s \in [0,t]}}\;. \] Then $P^U_t$ maps $C_b(U)$ into itself, and for any smooth measure $\mu$ on $\mathcal{O}$ it holds that for all $f,g \in C^\infty_c(U)$ \begin{equation}\label{eq:EEU} \lim_{t\rightarrow 0} t^{-1}\gen{P^U_t f - f, g}_{L^2(U,\mu)} = \sum_{i,j=1}^d \int_{U} a^{i,j}(p_1(z)) (W_i f)(z)(W_j^*g)(z) \mu(dz)\;, \end{equation} where $p_1 : \mathcal{O} \to G$ is the projection $(x,y) \mapsto x$ and $W_j^* = -W_j - \textnormal{div}_\mu(W_j)$ is the adjoint of $W_j$ in $L^2(U,\mu)$. \end{lemma} \begin{proof} To show that $P_t^U$ maps $C_b(U)$ into itself, let $f \in C_b(U)$. As $z_n = (x_n,y_n) \rightarrow z = (x,y)$ in $U$, it holds in particular that $x_n \rightarrow x$ in $G$. It follows that $\mathbf X^{a,x_n} \,{\buildrel \DD \over \rightarrow}\, \mathbf X^{a,x}$ in $\alpha$-H{\"o}lder topology for any $\alpha \in [0,1/2)$~\cite[Thm.~16.28]{FrizVictoir10}, and we readily obtain that $P_t^U f(z_n) \rightarrow P_t^U f(z)$. 
Hence $P_t^U f \in C_b(U)$, so indeed $P_t^U$ maps $C_b(U)$ into itself. It remains to verify~\eqref{eq:EEU}. Note that for every $z \in U$ the probability that $\mathbf Z^{z}$ leaves $U$ in $[0,t]$ is bounded above by $C^{-1}\exp(-Ct^{-1})$ for some $C = C(z,U,\Lambda) > 0$ (see, e.g., the Fernique estimate~\eqref{eq:Fernique}). It follows by a localisation argument and the stochastic Taylor expansion (e.g.,~\cite[Lem.~26]{FrizVictoir08}), that \begin{align}\label{eq:inf1} \lim_{t\rightarrow 0} t^{-1}\gen{P^U_t f - f, g}_{L^2(U,\mu)} =& \lim_{t \rightarrow 0}t^{-1} \int_U \Big(\sum_{i=1}^d W_i f(z) \EEEover{x}{\mathbf X_{0,t}^{i}} \nonumber \\ &+ \sum_{i,j=1}^d\frac{1}{2}W_iW_jf(z) \mathbb{E}^{x}\big[\mathbf X_{0,t}^{i}\mathbf X_{0,t}^{j}\big] \nonumber \\ &+ \sum_{i,j=1}^d\frac{1}{2}[W_i,W_j]f(z)\mathbb{E}^{x}\big[\mathbf X_{0,t}^{i,j}\big] \Big) g(z)\mu(dz)\;. \end{align} Since $p_1 : \mathcal{O} \to G$ is a (surjective) submersion (in fact a smooth fibre bundle), by integrating over the fibres (e.g.,~\cite[p.~307]{GuilleminSternberg77}) we can associate to any $v \in C^\infty_c(U)$ a function $\hat v \in C^\infty_c(G)$ such that for any bounded measurable $h : G \to \mathbb R$ \[ \int_U (h \circ p_1)(z) v (z) \mu(dz) = \int_{G} h(x) \hat v(x)\lambda(dx)\;. \] In particular, setting $v_i := (W_if)g$, $v_{i,j} := (W_iW_jf)g$ and $w_{i,j} := ([W_i,W_j]f)g$, we can apply Lemma~\ref{lem:infProj} to obtain that~\eqref{eq:inf1} equals \begin{equation}\label{eq:inf2} \sum_{i,j=1}^d \int_{G} \left[-a^{i,j}(x)(U_j \hat v_i)(x) + a^{i,j}(x)\hat v_{i,j}(x) \right] \lambda(dx)\;. \end{equation} It remains to show that~\eqref{eq:inf2} agrees with the RHS of~\eqref{eq:EEU}. To this end, we may assume by a limiting procedure that $a$ is smooth, and note that the same argument as in~\cite[p.~503]{FrizVictoir08} applies {\it mutatis mutandis} to our current setting.
\end{proof} \begin{proof}[Proof of Theorem~\ref{thm:WHormander}] Consider an increasing sequence of relatively compact open sets $(U_n)_{n \geq 1}$ such that $\cup_{n \geq 1} U_n = \mathcal{O}$. By Lemma~\ref{lem:infGen}, we can apply Lemma~\ref{lem:L2Density} to conclude that for every $x \in \mathcal{O}$ and $n \geq 1$ such that $x \in U_n$, there exists a non-negative kernel $p^n_t(x,\cdot) \in L^2(U_n,\mu)$ such that $P^{U_n}_t f(x) = \gen{p^n_t(x,\cdot), f}_{L^2(U_n,\mu)}$ for all $f \in C_b(U_n)$. Moreover, by definition of $P^{U_n}_t$, the sequence $p^n_t(x,\cdot)$ is increasing in $n$ and satisfies $\norm{p^n_t(x,\cdot)}_{L^1(U_n,\mu)} \leq 1$. Hence the limit $p_t(x,\cdot) := \lim_{n \rightarrow \infty}p^n_t(x,\cdot)$ is almost everywhere finite and gives precisely the transition kernel of the Markov process $\mathbf Z_t$ in $\mathcal{O}$ with respect to $\mu$. \end{proof} \begin{remark}\label{remark:precompacts} The pre-compact subsets $U_n$ were considered in the proof only to obtain existence of $p^n_t$ from Lemma~\ref{lem:L2Density} for each $n \geq 1$. We could have avoided considering such a compact exhaustion by formulating Lemma~\ref{lem:L2Density} without a pre-compactness assumption on $U$ (however, at least without extra assumptions, the proof of such a formulation itself would seem to require a compact exhaustion). \end{remark}
\section{Introduction} \label{intro} The phase diagram of strongly interacting matter at finite temperature and chemical potential has been extensively studied over the past decades. In the region of very high temperatures and low densities, it is well known that Quantum Chromodynamics (QCD) predicts the formation of a quark-gluon plasma (QGP)~\cite{Fukushima:2010bq}. Under these extreme conditions quarks and gluons are expected to be weakly coupled, and the phase diagram can be explored by means of first-principle perturbative calculations based on expansions in powers of the QCD coupling constant. Moreover, lattice QCD (LQCD) calculations indicate that at vanishing chemical potential the transition from the hadronic phase to the QGP occurs in the form of a smooth crossover, at a pseudocritical temperature $T_{\rm pc} \sim 150-170$~MeV. On the other hand, at sufficiently high densities and low temperatures, one expects to find a ``color-flavor locked'' phase~\cite{Alford:2007xm}, in which quarks are bound into color superconducting states analogous to the Cooper pairs formed by electrons in an ordinary superconductor. At moderate densities, however, the situation is much more uncertain. The main reason for this is that first-principle nonperturbative QCD calculations at finite baryon chemical potential $\mu_B$ are not accessible by Monte Carlo simulations, due to the presence of a complex fermion determinant in the corresponding partition function (the so-called ``sign problem''). Thus, in this region most of the present theoretical knowledge on the phase transitions is obtained from the study of effective models for strong interactions. Given the important role played by effective models in the understanding of the QCD phase diagram, it is important to test their reliability.
This can be done by comparing the corresponding predictions with those obtained from first principle calculations, in situations where the latter are available. One obvious possibility is to consider the above mentioned case of strong-interaction matter at finite temperature and vanishing chemical potential. Another interesting situation is the one in which $\mu_B=0$, but one has a nonzero isospin chemical potential $\mu_I$. In this case (both at zero and finite temperature) LQCD simulations are feasible, since the functional determinant turns out to be real~\cite{Alford:1998sd}. Following the early work in Refs.~\cite{Kogut:2002tm,Kogut:2002zg}, several groups have performed LQCD calculations at $\mu_I \neq 0$ using different techniques, see e.g.\ Refs.~\cite{Kogut:2004zg,deForcrand:2007uz,Cea:2012ev,Detmold:2012wc,Brandt:2017oyy,Brandt:2018bwq}. One important feature confirmed by these calculations is that at $\mu_I \gtrsim m_\pi$ one finds the onset of a Bose-Einstein pion condensation phase, as previously conjectured in Ref.~\cite{Son:2000xc}. For a recent review on meson condensation triggered by a large isospin imbalance see Ref.~\cite{Mannarelli:2019hgn}, where references to various theoretical approaches for the analysis of associated phase transitions can be found. In this work we consider the properties of quark matter at finite isospin chemical potential using a particular class of effective theories, viz.\ the nonlocal Polyakov-Nambu$-$Jona-Lasinio (nlPNJL) models~\cite{Blaschke:2007np,Contrera:2007wu,Contrera:2009hk,Contrera:2010kz,Hell:2008cc,Hell:2009by}. In the nlPNJL approach the quarks move in a background color field and interact through covariant nonlocal chirally symmetric four-point couplings, which are separable in momentum space. At vanishing $\mu_B$ and finite temperature these models provide a plausible description of chiral restoration and deconfinement transitions, in good agreement with LQCD results~\cite{Dumm:2021vop}. 
They can be regarded as an improvement over the local Polyakov$-$Nambu$-$Jona-Lasinio (PNJL) model~\cite{Meisinger:1995ih,Fukushima:2003fw,Megias:2004hj,Ratti:2005jh, Roessner:2006xn,Mukherjee:2006hq,Sasaki:2006ww}. In fact, nonlocal interactions arise naturally in the context of several successful approaches to low-energy quark dynamics, and lead to a momentum dependence in quark propagators that can be made consistent~\cite{Noguera:2008cm} with lattice results. Moreover, it can be seen that nonlocal extensions of the NJL model avoid some of the known shortcomings of the local theory. Well-behaved nonlocal form factors can regularize the loop integrals in such a way that anomalies are preserved~\cite{RuizArriola:1998zi} and charges are properly quantized. In addition, one can avoid the introduction of various sharp cutoffs to deal with higher order loop integrals~\cite{Blaschke:1995gr}, improving in this way the predictive power of the models. Within this framework, the aim of the present work is to provide a comparison, both at zero and finite temperature, between the results obtained within the nlPNJL model and those arising from other theoretical approaches. In particular, we consider the results from the local NJL model~\cite{He:2005nk,Avancini:2019ego}, its PNJL extension~\cite{Zhang:2006gu,Sasaki:2010jz}, chiral perturbation theory (ChPT)~\cite{Adhikari:2019mdk} and recent LQCD calculations~\cite{Brandt:2017oyy,Brandt:2018bwq}. This article is organized as follows. In Sec.~\ref{model} we present the general formalism to describe a two-flavor nonlocal PNJL model at finite temperature and nonvanishing isospin chemical potential. In Sec.~\ref{results} we quote and discuss our numerical results, including the comparison with the outcomes from alternative effective approaches and LQCD simulations. Finally, in Sec.~\ref{summary} we summarize our results and present our main conclusions.
\section{Theoretical Formalism} \label{model} We start by considering the Euclidean action of a two-flavor quark model that includes nonlocal scalar and pseudoscalar quark-antiquark currents. One has \begin{eqnarray} S_E &=& \int d^4 x \,\left[ \bar \psi (x) \left( - i \rlap/\partial + \hat m \right) \psi (x) \, - \, \frac{G}{2}\, j_a(x) j_a(x) \right] \ , \label{action} \end{eqnarray} where $\psi = (\psi_u\ \psi_d)^T$ stands for the $u$, $d$ quark field doublet, and $\hat m = \mbox{diag}(m_u,m_d)$ is the current quark mass matrix. In what follows we assume that the current masses of $u$ and $d$ quarks are equal, denoting $m_c \equiv m_u=m_d $. The nonlocal currents $j_a(x)$ in Eq.~(\ref{action}) are given by \begin{eqnarray} j_a (x) &=& \int d^4 z \ {\cal G}(z) \ \bar \psi(x+\frac{z}{2}) \ \Gamma_a \ \psi(x-\frac{z}{2}) \ , \label{cuOGE} \end{eqnarray} where we have defined $\Gamma_a = ( \openone, i \gamma_5 \vec \tau )$, $\tau_i$ being Pauli matrices that act on flavor space. The function ${\cal G}(z)$ is a form factor responsible for the nonlocal character of the four-point interactions. To study strong-interaction matter at finite temperature and/or chemical potential we introduce the partition function of the system, given by $\mathcal{Z} = \int \mathcal{D} \bar{\psi}\,\mathcal{D}\psi \,\exp[-S_E]$. As stated, we are interested in dealing with isospin asymmetric matter. This is effectively implemented by introducing quark chemical potentials $\mu_u$ and $\mu_d$, which in principle can be different from each other. Thus, in the effective action we perform the replacement \begin{equation} \left(\begin{array}{cc} \partial_4 & 0 \\ 0 & \partial_4 \end{array} \right) \rightarrow \left(\begin{array}{cc} \partial_4 - \mu_u & 0 \\ 0 & \partial_4 - \mu_d \end{array} \right) \ . 
\label{kinrep} \end{equation} The quark chemical potentials can be written in terms of average and isospin chemical potentials $\mu$ and $\mu_I$ as \begin{equation} \mu_u \ = \ \mu + \dfrac{\mu_I}{2} \ ,\qquad\qquad \mu_d \ = \ \mu - \dfrac{\mu_I}{2}\ , \end{equation} where $\mu=\mu_B/3$, $\mu_B$ being the baryon chemical potential. For the nonlocal model under consideration, to obtain the appropriate conserved currents the replacement in Eq.~(\ref{kinrep}) has to be complemented with a modification of the nonlocal currents appearing in Eq.~(\ref{cuOGE}), namely~\cite{GomezDumm:2006vz,Dumm:2010hh} \begin{eqnarray} \psi(x-z/2) & \rightarrow & \mathcal{W}(x,x-z/2) \, \psi(x-z/2)\ , \nonumber \\ \bar\psi(x+z/2) & \rightarrow & \bar\psi(x+z/2)\, \gamma_0 \,\mathcal{W}(x+z/2,x)\,\gamma_0 \ . \label{transport} \end{eqnarray} In the present case the transport functions $\mathcal{W}$ are simply given by \begin{equation} \mathcal{W}(x,x-z/2)\ = \ \mathcal{W}(x+z/2,x) \ = \ \exp \left( \frac{z_4}{2}\, \hat \mu \right) \ , \end{equation} where $\hat\mu = {\rm diag}(\mu_u,\mu_d)$. It is convenient to perform a standard bosonization of the fermionic action~\cite{Ripka:1997zb}, introducing auxiliary mesonic fields $\sigma$ and $\pi_i$, $i=1,2,3$, and integrating out the fermion fields. We consider here the mean field approximation (MFA), in which the bosonic fields are replaced by their vacuum expectation values (VEV) $\bar\sigma$ and $\bar\pi_i$. Let us recall that, for $\mu_I=0$, in the chiral limit ($m_c=0$) the action in Eq.~(\ref{action}) is invariant under global ${\rm U(1)}_B \otimes {\rm SU(2)}_I \otimes {\rm SU(2)}_{IA}$ transformations. The group U(1)$_B$ is associated with baryon number conservation, while the chiral group SU(2)$_I \otimes {\rm SU(2)}_{IA}$ corresponds to the symmetries under isospin and axial-isospin transformations.
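For concreteness, a standard parametrization of these transformations on the quark doublet (written here for illustration) reads
\begin{align*}
{\rm U(1)}_B &: \quad \psi \,\to\, e^{\,i\theta_B}\,\psi\;, \qquad\qquad
{\rm SU(2)}_I : \quad \psi \,\to\, e^{\,i\,\vec\theta\cdot\vec\tau/2}\,\psi\;, \\
{\rm SU(2)}_{IA} &: \quad \psi \,\to\, e^{\,i\gamma_5\,\vec\alpha\cdot\vec\tau/2}\,\psi\;,
\end{align*}
where $\theta_B$, $\vec\theta$ and $\vec\alpha$ are global parameters. Note that, since $\hat\mu = \mu\,\openone + (\mu_I/2)\,\tau_3$, for $\mu_I \neq 0$ only the transformations generated by $\tau_3$ commute with the chemical potential term.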
At zero temperature the SU(2)$_{IA}$ symmetry is expected to be spontaneously broken by a large value of $\bar\sigma$ (which leads to large constituent quark masses), while at high temperatures one expects to have $\bar\sigma=0$, which implies a restoration of the chiral symmetry. In the presence of finite quark masses one has an explicit breakdown of SU(2)$_{IA}$ (and also of SU(2)$_I$, if current $u$ and $d$ quark masses are different from each other), hence the chiral symmetry is expected to be only partially restored at high $T$. Now, in the presence of a nonzero isospin chemical potential the full chiral symmetry group is explicitly broken down to the U(1)$_{I_3} \otimes {\rm U(1)}_{I_3A}$ subgroup. At $T=0$ it might happen that, similarly to the $\mu_I=0$ case, U(1)$_{I_3A}$ is spontaneously broken by a large value of $\bar\sigma$. Moreover, while for finite current quark masses one has $\bar \pi_3=0$~\cite{Ebert:2006uh}, nonvanishing VEVs for $\pi_1$ and $\pi_2$ can be developed, leading to a spontaneous breakdown of the remaining U(1)$_{I_3}$ symmetry. Since the action is still invariant under U(1)$_{I_3}$ transformations, without loss of generality one can choose $\bar\pi_i=\delta_{i1}\bar\Delta$. We consider the above described general situation in which both $\bar\sigma$ and $\bar\Delta$ can be nonvanishing. At zero temperature, the mean field thermodynamic potential is found to be given by \begin{eqnarray} \Omega^{\rm MFA}(T=0) &=& \frac{\bar \sigma^2 + \bar\Delta^2}{2\ G} - {\rm Tr} \ln \begin{pmatrix} -\rlap/ p_u + M\big(p_u\big) & i \, \gamma_5 \, \rho\big(\bar p\big) \\ i\, \gamma_5 \, \rho\big(\bar p\big) & -\rlap/ p_d + M\big(p_d\big) \end{pmatrix} \ , \label{actionMF} \end{eqnarray} where \begin{eqnarray} M\big(p\big) \ = \ m_c + g\big(p\big)\, \bar\sigma \ , \qquad \qquad \rho\big(p\big) \ = \ g\big(p\big) \, \bar\Delta \ .
\label{defsMrhop} \end{eqnarray} Here we have defined $p_f^\nu \equiv \left( \vec p ,\, p_4 + i \mu_f \right)$, with $f=u,d$, and $\bar p = (p_u+p_d)/2$. The function $g(p)$ is the Fourier transform of the form factor ${\cal G}(z)$ in Eq.~(\ref{cuOGE}). Let us consider the extension of the model to the case of finite temperature, which can be addressed by using the standard Matsubara formalism. In order to account for confinement effects, we also include the coupling of fermions to the Polyakov loop (PL), assuming that quarks move in a constant color background field $\phi = i g\, G_{4a} \lambda_a/2$, where $G_{\mu a}$ are SU(3) color gauge fields. We work in the so-called Polyakov gauge, in which the matrix $\phi$ is given a diagonal representation $\phi = {\rm diag}(\phi_r,\phi_g,\phi_b) = \phi_3 \lambda_3 + \phi_8 \lambda_8$, taking the traced Polyakov loop $\Phi=\frac{1}{3} {\rm Tr}\, \exp( i \phi/T)$ as an order parameter of the confinement/deconfinement transition. In addition, to account for effective gauge field self-interactions we introduce a mean field Polyakov-loop potential ${\cal U}$ that depends on the traced PL, its conjugate $\bar\Phi$ and the temperature. The resulting scheme is usually referred to as a nonlocal Polyakov-Nambu-Jona-Lasinio (nlPNJL) model~\cite{Blaschke:2007np,Contrera:2007wu,Contrera:2009hk,Contrera:2010kz,Hell:2008cc,Hell:2009by}. Concerning the PL potential, its functional form is usually based on properties of pure gauge QCD.
In this work we consider a potential given by a polynomial function based on a Ginzburg-Landau ansatz~\cite{Ratti:2005jh,Scavenius:2002ru}, namely \begin{eqnarray} \frac{{\cal{U}}_{\rm poly}(\Phi,\bar \Phi, T)}{T ^4} \ = \ -\,\frac{b_2(T)}{2}\, \bar \Phi \Phi -\,\frac{b_3}{6}\, \left(\Phi^3 + \bar \Phi^3\right) + \,\frac{b_4}{4}\, \left( \bar \Phi \Phi \right)^2 \ , \label{upoly} \end{eqnarray} where \begin{eqnarray} b_2(T) = a_0 +a_1 \left(\dfrac{T_0}{T}\right) + a_2\left(\dfrac{T_0}{T}\right)^2 + a_3\left(\dfrac{T_0}{T}\right)^3\ . \label{pol} \end{eqnarray} The parameters $a_i$ and $b_i$ can be fitted to pure gauge lattice QCD results imposing the presence of a first-order phase transition at the reference temperature $T_0$, which is a further parameter of the model. In the absence of dynamical quarks, $T_0$ is the critical temperature for deconfinement, and from lattice QCD calculations one expects it to be approximately equal to 270~MeV. However, it has been argued that in the presence of light dynamical quarks $T_0$ should be rescaled to about 210 and 190~MeV for the case of two and three flavors, respectively, with an uncertainty of about 30~MeV~\cite{Schaefer:2007pw,Schaefer:2009ui}. The numerical values for the PL potential parameters are~\cite{Ratti:2005jh} \begin{equation} a_0 = 6.75\ ,\quad a_1 = -1.95\ ,\quad a_2 = 2.625\ ,\quad a_3 = -7.44 \ ,\quad b_3 = 0.75\ ,\quad b_4 = 7.5\ . 
\end{equation} In this way, the grand canonical thermodynamic potential of the system is given by \begin{eqnarray} \Omega^{\rm MFA} & = & \frac{\bar\sigma^2 + \bar\Delta^2}{2\ G} - 2 \, T\ \sum_{n=-\infty}^\infty \sum_{c=r,g,b} \int \frac{d^3 p}{(2\pi)^3} \ \ln\Big\{ E_{nuc}^2\ E_{ndc}^2 \nonumber \\ & & -\ \rho(\bar p_{nc})^2 \Big[\big(M(p_{nuc})-M(p_{ndc})\big)^2-(\mu_u-\mu_d)^2\Big] \Big\} + \, {\cal{U}}_{\rm poly}(\Phi,\bar \Phi,T)\ , \label{granp} \end{eqnarray} where we have introduced the definitions $\bar p_{nc} = (p_{nuc}+p_{ndc})/2$ and $E_{nfc}^2=M(p_{nfc})^2 + p_{nfc}^2+\rho(\bar p_{nc})^2$, with $p_{nfc} \equiv (\vec p,(2n+1)\pi T + i \mu_f +\phi_c)$. As usual in this type of model, $\Omega^{\rm MFA}$ turns out to be divergent and has to be regularized. We adopt here a prescription similar to the one considered e.g.\ in Ref.~\cite{GomezDumm:2004sr}, viz. \begin{equation} \Omega^{\rm MFA,\rm reg} \ = \ \Omega^{\rm MFA}\, -\, \Omega^{\rm free}_{\rm q}\, +\, \Omega^{\rm free,reg}_{\rm q}\, +\, \Omega_0 \ . \end{equation} Here the ``free'' potential keeps the interaction with the PL, while $\bar \sigma$ and $\bar \Delta$ are set to zero. A constant term $\Omega_0$ is also added so as to fix $\Omega^{\rm MFA,\rm reg} = 0$ at $\mu_B = \mu_I = T = 0$. For the regularized form of the free piece, the Matsubara sum can be performed analytically. One has \begin{eqnarray} \Omega^{\rm free,reg}_{\rm q} = -2 T \sum_{f=u,d} \, \sum_{c=r,g,b} \, \sum_{s=\pm 1} \int \frac{d^3 \vec{p}}{(2\pi)^3}\; \mbox{Re}\; \ln \left[ 1 + \exp\left(-\;\frac{\epsilon_f + s\ (\mu_f + i \phi_c)}{T} \right)\right] \ , \end{eqnarray} where $\epsilon_f = \sqrt{\vec p^{\;2} + m_f^2}\,$.
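The Matsubara-summed expression for $\Omega^{\rm free,reg}_{\rm q}$ is straightforward to evaluate numerically. The stdlib-Python sketch below assumes $\phi_8 = 0$ (so the color background reduces to $\phi_c \in \{\phi_3,-\phi_3,0\}$), takes $\mu_u = -\mu_d = \mu_I/2$, uses illustrative values of $T$ and $m$, and replaces the radial integral $d^3p/(2\pi)^3 \to p^2\,dp/(2\pi^2)$ on a crude Riemann grid; it is a sketch, not the production calculation.

```python
import cmath
import math

def omega_free_reg(T, mu_I, m, phi3, pmax=3000.0, npts=400):
    """Sketch of Omega_free_reg (in MeV^4) for two flavors, mu_u = -mu_d = mu_I/2,
    with a color background phi_c = (phi3, -phi3, 0), i.e. phi_8 = 0."""
    dp = pmax / npts
    phi_c = (phi3, -phi3, 0.0)
    total = 0.0
    for i in range(1, npts + 1):
        p = i * dp
        eps = math.sqrt(p * p + m * m)
        for mu_f in (0.5 * mu_I, -0.5 * mu_I):
            for pc in phi_c:
                for s in (1.0, -1.0):
                    z = 1.0 + cmath.exp(-(eps + s * (mu_f + 1j * pc)) / T)
                    total += p * p * cmath.log(z).real * dp
    return -2.0 * T * total / (2.0 * math.pi ** 2)

# Deconfined (phi3 = 0) vs. confined (phi3/T = 2*pi/3) color backgrounds:
T, m = 150.0, 5.67  # MeV, illustrative
print(omega_free_reg(T, 0.0, m, 0.0))
print(omega_free_reg(T, 0.0, m, 2.0 * math.pi / 3.0 * T))
```

As expected, the free potential is negative, and its magnitude is suppressed in the confined background ($\Phi = 0$) relative to the deconfined one ($\Phi = 1$), since for $\mu_I=0$ the color sum collapses to $\ln(1+e^{-3\epsilon/T})$ rather than $3\ln(1+e^{-\epsilon/T})$.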
The mean field values $\bar \sigma$ and $\bar \Delta$, as well as the values of $\phi_3$ and $\phi_8$, can now be obtained from a set of four coupled ``gap equations'' that follow from the minimization of the regularized thermodynamic potential, namely \begin{equation} \frac{\partial \Omega^{\rm MFA,reg}}{\partial \bar \sigma} \ = \ 0\ , \qquad \frac{\partial \Omega^{\rm MFA,reg}}{\partial \bar \Delta} \ = \ 0\ , \qquad \frac{\partial \Omega^{\rm MFA,reg}}{\partial \phi_3} \ = \ 0\ , \qquad \frac{\partial \Omega^{\rm MFA,reg}}{\partial \phi_8} \ = \ 0\ . \label{gapeqs} \end{equation} In addition, it is interesting to study the behavior of quark condensates. As usual, we consider the scalar condensate $\Sigma = \Sigma_u + \Sigma_d$, where $\Sigma_f = \langle \bar \psi_f \psi_f \rangle$. The corresponding expressions can be obtained by differentiating $\Omega^{\rm MFA,reg}$ with respect to the up and down current quark masses, i.e. \begin{equation} \Sigma_f \ = \ \frac{\partial \Omega^{\rm MFA,reg}}{\partial m_f} \ . \label{Sigma} \end{equation} Another relevant quantity is the charged pion condensate $\Pi$, which is expected to be nonvanishing for $\mu_I \neq 0$. According to our choice $\bar\pi_i= \delta_{i1} \bar \Delta$, we get \begin{eqnarray} \Pi \ = \ \langle \bar \psi i \gamma_5 \tau_1 \psi \rangle \ . \label{Pi} \end{eqnarray} The analytical expression for this condensate can be obtained by taking the derivative of the regularized thermodynamic potential with respect to an auxiliary parameter added to $\rho(\bar p)$ in Eq.~(\ref{actionMF}), which is set to zero after the calculation. To study the phase transitions, we also introduce the susceptibilities associated with the $\Sigma$ and $\Pi$ condensates~\cite{Lu:2019diy} and the Polyakov loop. These are given by \begin{equation} \chi_{\rm ch} \ = \ - \frac{\partial\Sigma}{\partial m_c} \ , \qquad \chi_{\Pi} \ = \ \frac{\partial\Pi}{\partial m_c} \ , \qquad \chi_\Phi \ = \ \frac{d\Phi}{dT}\ .
\label{chis} \end{equation} Finally, from the regularized potential one can calculate various thermodynamic quantities, such as the energy and entropy densities $\varepsilon$ and $s$, and the particle number densities $n_I$ and $n_B$. The corresponding expressions are \begin{eqnarray} \varepsilon &=& \Omega^{\rm MFA,reg} + T\, s + n_I\, \mu_I + n_B\, \mu_B\ , \nonumber \\ s &=& -\, \frac{\partial \Omega^{\rm MFA,reg}}{\partial T} \ , \nonumber \\ n_I &=& -\, \frac{\partial \Omega^{\rm MFA,reg}}{\partial \mu_I} \ , \nonumber \\ n_B &=& -\, \frac{\partial \Omega^{\rm MFA,reg}}{\partial \mu_B} \ . \label{esn} \end{eqnarray} In this work we restrict to the case of $\mu_B = 0$, focusing on the effect of finite isospin chemical potential $\mu_I$. As stated in the Introduction, in this situation the results from effective models can be compared with existing lattice QCD calculations~\cite{Brandt:2016zdy,Brandt:2017oyy,Brandt:2017zck,Brandt:2018wkp,Brandt:2018bwq}, which do not suffer from the sign problem. Since the thermodynamic potential turns out to be real, one gets $\Phi = \bar \Phi$, $\phi_8 = 0$, and the last of Eqs.~(\ref{gapeqs}) is trivially satisfied. \section{Numerical Results} \label{results} To fully define the model it is necessary to specify the form factor entering the nonlocal fermion current in Eq.~(\ref{cuOGE}). In this work we consider an exponential momentum dependence for the form factor in momentum space, \begin{equation} g(p) \ = \ \exp (-p^2 / \Lambda^2)\ . \label{ff} \end{equation} This form, which is widely used, guarantees a fast ultraviolet convergence of quark loop integrals. Notice that the energy scale $\Lambda$, which acts as an effective momentum cutoff, has to be taken as an additional parameter of the model. Other functional forms, e.g.\ Lorentzian form factors with integer~\cite{Dumm:2010hh} or fractional~\cite{Carlomagno:2018tyk} momentum dependences, have also been considered in the literature. 
In any case, it is seen that the form factor choice does not, in general, have a major impact on the qualitative predictions for the relevant thermodynamic quantities~\cite{Carlomagno:2013ona}. Given the form factor shape, the model parameters $m_c$, $G$ and $\Lambda$ can be fixed by requiring that the model reproduce the phenomenological values of some selected physical quantities. If we take as inputs the pion mass $m_\pi=138$~MeV, the pion weak decay constant $f_\pi=92.4$~MeV and the quark condensates $\Sigma_u = \Sigma_d = - (240\ {\rm MeV})^3$, one has $m_c = 5.67$~MeV, $\Lambda = 752$~MeV and $g = G\Lambda^2 = 20.67$~\cite{GomezDumm:2006vz}. \subsection{Zero temperature} \label{zeroT} At zero temperature the Polyakov loop decouples from the fermions, and the thermodynamic potential within the nonlocal NJL (nlNJL) model is given by the expression in Eq.~(\ref{actionMF}), properly regularized. In Fig.~\ref{fig:1} we show our numerical results for the normalized mean field condensates $\Sigma/\Sigma_0$ and $\Pi/\Sigma_0$, where $\Sigma_0\equiv\Sigma(\mu_I=0)$, as functions of the isospin chemical potential. The solid red lines correspond to the parametrization described above, which leads to $\Sigma_0 = - $(240~MeV)$^3$. To provide an estimate of the parametrization dependence, we show with a red shaded band the results covered by a parameter range such that $\Sigma_0$ lies between $-$(230~MeV)$^3$ and $-$(250~MeV)$^3$. The right panel of Fig.~\ref{fig:1} just extends the results given in the left panel, covering a broader range of values of the scaled isospin chemical potential $\mu_I/m_\pi$. For comparison, in both panels we include the results obtained from several alternative approaches. The green band (partially hidden by the red one) corresponds to the results from the local NJL model, for parametrizations leading to a quark condensate in the range between $-$(240~MeV)$^3$ and $-$(250~MeV)$^3$.
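The parametrization above can be collected in a few lines. In the sketch below, the momentum-dependent effective mass $M(p) = m_c + \bar\sigma\, g(p)$ and the value $\bar\sigma = 400$~MeV are assumptions introduced for illustration only, not outputs of the fit described in the text.

```python
import math

# Parameter values quoted in the text for the exponential form factor
m_c    = 5.67              # current quark mass, MeV
Lam    = 752.0             # effective cutoff scale Lambda, MeV
g_coup = 20.67             # dimensionless coupling g = G * Lambda^2
G      = g_coup / Lam**2   # four-fermion coupling, MeV^-2

def form_factor(p):
    """Exponential form factor g(p) = exp(-p^2 / Lambda^2)."""
    return math.exp(-p * p / (Lam * Lam))

# Illustrative effective mass; sigma_bar is an assumed mean field value.
sigma_bar = 400.0  # MeV
def M(p):
    return m_c + sigma_bar * form_factor(p)

print(form_factor(0.0))    # -> 1.0
print(M(0.0), M(5000.0))   # the effective mass falls to m_c in the ultraviolet
```

The exponential decay of $g(p)$ is what guarantees the fast ultraviolet convergence of the quark loop integrals mentioned above: well beyond the scale $\Lambda$ the dynamical mass reduces to the current quark mass.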
The dashed (green) lines, the dotted (brown) lines and the dashed-dotted (blue) lines correspond to the results obtained within the linear sigma model (LSM) in Ref.~\cite{He:2005nk}, the NJL model in Ref.~\cite{Avancini:2019ego} (where a medium separation regularization scheme is used) and the Chiral Perturbation Theory (ChPT) approach in Ref.~\cite{Adhikari:2019mdk}, respectively. In addition, the fat dots denote the results from lattice QCD obtained in Ref.~\cite{Brandt:2018bwq}. \begin{figure}[hbt] \centering{\includegraphics[width=0.75\textwidth]{sig-del-complete}} \caption{(Color online) Normalized $\Sigma$ and $\Pi$ condensates as functions of the isospin chemical potential. The solid red line and the red band correspond to the numerical results obtained within the nlNJL model. Results from other theoretical approaches (see text) are included for comparison.} \label{fig:1} \end{figure} As expected, for $\mu_I < m_\pi$ one has $\Sigma = \Sigma_0$ and $\Pi = 0$. Indeed, for both local and nonlocal NJL models it can be analytically shown that the onset of the pion condensation at $T=0$ occurs at $\mu_I = m_\pi$. For larger isospin chemical potentials, as shown in Fig.~\ref{fig:1}, the chiral condensate decreases monotonically and the charged pion condensate gets strongly increased. In this way, for $\mu_I\geq m_\pi$ the isospin symmetry U(1)$_{I_3}$ gets spontaneously broken, while one finds a partial restoration of the U(1)$_{I_3A}$ symmetry for large values of $\mu_I$. From the left panel of Fig.~\ref{fig:1} it is also seen that there is an overall agreement between most theoretical approaches up to $\mu_I \simeq 2m_\pi$. On the other hand, as shown in the right panel of the figure, for larger values of $\mu_I$ there is some splitting between the predictions from different models. The results for the chiral and pion condensate susceptibilities as functions of $\mu_I$ are displayed in Fig.~\ref{fig:1b}. 
It can be seen that the chiral susceptibility $\chi_{\rm ch}$ (solid line, left panel) is approximately zero for low values of $\mu_I$, showing a jump to a high value at $\mu_I = m_\pi$ and remaining relatively large for $\mu_I > m_\pi$. This signals that at $\mu_I = m_\pi$ one has the onset of a smooth transition from a phase in which the U(1)$_{I_3A}$ symmetry is spontaneously broken to a region in which it becomes (partially) restored. It is found that the height of the jump at $\mu_I = m_\pi$ increases if the current quark mass $m_c$ is reduced. The pion condensate susceptibility is given by the solid line in the right panel of Fig.~\ref{fig:1b}. It is seen that $\chi_\Pi$ is zero for low values of $\mu_I$, and has a divergence at $\mu_I = m_\pi$. This is the signature of a second order phase transition leading to the appearance of the pion condensate, as shown in Fig.~\ref{fig:1}. The behavior of the susceptibilities is similar to the one found in the local NJL model; see Ref.~\cite{Lu:2019diy}. \begin{figure}[hbt] \centering{}\includegraphics[width=0.8\textwidth]{susc_t0} \caption{(Color online) Chiral and pion susceptibilities as functions of the isospin chemical potential. Solid and dashed lines correspond to the results from nlNJL model calculations and lowest order ChPT expressions, respectively.} \label{fig:1b} \end{figure} It is interesting to compare the above results with those arising from Chiral Perturbation Theory. At the lowest order in the chiral expansion, it is found that for $\mu_I\geq m_\pi$ the condensates satisfy the relations~\cite{Kogut:2001id} \begin{equation} \frac{\Sigma}{\Sigma_0} \ = \ \frac{m_\pi^2}{\mu_I^2} \ , \qquad \frac{\Pi}{\Sigma_0} \ = \ \sqrt{1-\frac{m_\pi^4}{\mu_I^4}} \ . \end{equation} In this way one has \begin{equation} \left(\frac{\Sigma}{\Sigma_0}\right)^2 \, + \, \left(\frac{\Pi}{\Sigma_0}\right)^2 \ = \ 1 \ , \label{circle} \end{equation} which defines the so-called ``chiral circle''.
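These lowest-order ChPT relations are easy to check numerically. In the sketch below, $\Pi/\Sigma_0$ is written as $\sqrt{1-m_\pi^4/\mu_I^4}$, the form required for the chiral-circle relation Eq.~(\ref{circle}) to hold exactly.

```python
import math

m_pi = 138.0  # MeV

def chpt_condensates(mu_I):
    """Lowest-order ChPT condensates, normalized to Sigma_0, for mu_I >= m_pi."""
    sigma = (m_pi / mu_I) ** 2
    pion = math.sqrt(1.0 - (m_pi / mu_I) ** 4)
    return sigma, pion

for x in (1.2, 2.0, 5.0):
    s, p = chpt_condensates(x * m_pi)
    print(x, s * s + p * p)   # the chiral circle: s^2 + p^2 = 1
```

For $\mu_I \to \infty$ the chiral condensate vanishes while $\Pi/\Sigma_0 \to 1$, i.e.\ the condensate rotates along the circle from the chiral to the pion-condensed direction.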
The relation in Eq.~(\ref{circle}) is approximately satisfied in local and nonlocal NJL models, as can be seen from Fig.~\ref{fig:1}. In fact, the agreement is very good up to $\mu_I \simeq 2m_\pi$, where the prediction from ChPT is reliable. Moreover, with the aid of the Gell-Mann-Oakes-Renner relation one can find simple analytical expressions for the susceptibilities, namely \begin{equation} \chi_{\rm ch} \ = \ -\,\frac{\Sigma_0}{m_c} \, \frac{m_\pi^2}{\mu_I^2}\ , \qquad \chi_{\rm \Pi} \ = \ -\,\frac{\Sigma_0}{m_c}\, \frac{m_\pi^4}{\mu_I^4}\, \frac{1}{\sqrt{1-\frac{m_\pi^4}{\mu_I^4}}} \ , \label{suscep} \end{equation} where it has been assumed that the ratio $\Sigma_0/f_\pi^2$ is approximately independent of $m_c$. From Eqs.~(\ref{suscep}) it can be seen that $\chi_\Pi$ diverges at $\mu_I = m_\pi$, while $\chi_{\rm ch}$ is finite and only becomes divergent in the chiral limit. The behavior of the susceptibilities as functions of $\mu_I$ obtained from these equations is shown by the dashed lines in Fig.~\ref{fig:1b}. It can be seen that they match nicely the results arising from the nlNJL model. \begin{figure}[hbt] \centering{}\includegraphics[width=0.8\textwidth]{p-e-nI_vs_muI} \caption{(Color online) Numerical results for the normalized pressure, energy density and isospin particle density as functions of the isospin chemical potential. Besides the local and nonlocal NJL models, the graphs include the results obtained from the ChPT approach in Ref.~\cite{Adhikari:2019mdk}, the linear sigma model in Ref.~\cite{He:2005nk}, and LQCD calculations in Refs.~\cite{Brandt:2018bwq,Avancini:2019ego}.} \label{fig:2} \end{figure} Next, in Fig.~\ref{fig:2} we show the results obtained within the nlNJL model for the normalized pressure, energy density and isospin particle density as functions of $\mu_I/m_\pi$.
Results from other theoretical approaches are also included for comparison (lines and bands for NJL and nlNJL models are defined in the same way as in Fig.~\ref{fig:1}). In the left panels we consider a range of $\mu_I$ from $m_\pi$ to $2m_\pi$, for which LQCD estimates have been obtained in Refs.~\cite{Brandt:2018bwq,Avancini:2019ego} (short-dashed black lines and fat dots in the figure). In the right panels we include the results for the same quantities using a different scale that covers values of the isospin chemical potential up to $5 m_\pi$. Notice that all three quantities are zero for $0\leq \mu_I\leq m_\pi$. From the left panels it can be seen that in general there is a good agreement between the predictions of effective models ---which do not differ significantly from each other--- and LQCD results. On the other hand, for larger values of $\mu_I$ the splitting between the results from the various theoretical approaches becomes appreciable. Unfortunately, no LQCD results are available so far in this extended range. The behavior of the studied quantities in the nonlocal approach (solid red lines, red bands) is found to be qualitatively similar to the one obtained within the local NJL model (green bands), showing a monotonic growth as $\mu_I$ increases. Notice that the dependence on the parametrization turns out to be relatively weak. Another interesting quantity to be analyzed is the interaction energy, or trace anomaly, $\epsilon-3p$. The behavior of this quantity (normalized by $\mu_I^4$) as a function of $\mu_I/m_\pi$ is shown in Fig.~\ref{fig:3}. It is seen that the results obtained within the nlNJL model are similar to those found in other theoretical approaches.
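The conformal point $\mu_I/m_\pi = \sqrt{3}$ predicted by leading-order ChPT for the trace anomaly can be reproduced with a short numerical check. The $T=0$ ChPT pressure $p = \frac{1}{2} f_\pi^2 \mu_I^2 \left(1 - m_\pi^2/\mu_I^2\right)^2$ used below is a standard lowest-order result assumed here for illustration (it is not derived in the text); $n_I$ and $\epsilon$ then follow from the $T=0$, $\mu_B=0$ limit of the thermodynamic relations in Eq.~(\ref{esn}).

```python
import math

m_pi, f_pi = 138.0, 92.4   # MeV

def pressure(mu):
    """Leading-order ChPT pressure at T = 0, valid for mu >= m_pi."""
    return 0.5 * f_pi**2 * mu**2 * (1.0 - (m_pi / mu)**2)**2

def eps_minus_3p(mu, h=1e-3):
    n_I = (pressure(mu + h) - pressure(mu - h)) / (2.0 * h)  # n_I = dp/dmu
    p = pressure(mu)
    return mu * n_I - p - 3.0 * p                            # eps = mu*n_I - p

# Bisection for the conformal point eps = 3p between m_pi and 3*m_pi
lo, hi = 1.01 * m_pi, 3.0 * m_pi
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if eps_minus_3p(lo) * eps_minus_3p(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
mu_conf = 0.5 * (lo + hi)
print(mu_conf / m_pi, math.sqrt(3.0))   # the two values agree
```

Setting $\epsilon = 3p$ analytically for this pressure gives $\mu_I^2 = 3 m_\pi^2$, so the bisection converges to $\mu_I/m_\pi = \sqrt{3} \simeq 1.732$, close to the model range 1.75-1.77 quoted below.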
In particular, the so-called ``conformal point'', for which $\epsilon = 3p$, is reached at a value of $\mu_I/m_\pi$ in the range between 1.75 and 1.77 (depending on the parametrization), in good agreement with the analytical result $\mu_I/m_\pi = \sqrt{3}$ arising from leading order ChPT~\cite{Carignano:2016rvs}. \begin{figure}[hbt] \includegraphics[width=0.85\textwidth]{e-3p} \caption{(Color online) Numerical results for the interaction energy as a function of the isospin chemical potential. The graphs include the values obtained from local and nonlocal NJL models, ChPT~\cite{Adhikari:2019mdk} and LQCD calculations~\cite{Brandt:2018bwq,Avancini:2019ego}.} \label{fig:3} \end{figure} To conclude this subsection, in Fig.~\ref{fig:4} we plot the numerical results obtained for the equation of state, i.e.~the behavior of the energy density as a function of the pressure (here the isospin chemical potential $\mu_I$ is an underlying parameter). The notation for the curves obtained within the nonlocal NJL approach and other models, as well as for those arising from lattice QCD calculations, is the same as in Figs.~\ref{fig:2} and~\ref{fig:3}. Once again, the results from the nonlocal approach are qualitatively similar to those obtained in the local NJL model, and are consistent with LQCD results in the low energy region (where LQCD data are available). \begin{figure}[hbt] \vspace*{0.1cm} \includegraphics[width=0.85\textwidth]{eos} \caption{(Color online) Numerical results for the equation of state. The graphs include the results obtained from local and nonlocal NJL models, the linear sigma model~\cite{He:2005nk}, ChPT~\cite{Adhikari:2019mdk} and LQCD calculations~\cite{Brandt:2018bwq,Avancini:2019ego}.} \label{fig:4} \end{figure} \subsection{Finite temperature} \label{finiteT} We present here our numerical results at finite temperature for the quantities defined in Sec.~\ref{model}.
As discussed above, we include the interaction between the fermions and a background color field, considering the Polyakov loop potential in Eq.~(\ref{upoly}). The parameter $T_0$ entering this potential is taken to be 200~MeV, following the estimates carried out for the case of two dynamical quarks~\cite{Schaefer:2007pw,Schaefer:2009ui}. Let us start by studying the thermal behavior of the normalized mean field condensates and the traced PL for some representative values of $\mu_I$ within the range $0 \leq \mu_I \leq 2m_\pi$. Our results are shown in Fig.~\ref{fig:5}. On the left panels we plot the condensates $\Sigma$ and $\Pi$, normalized by $\Sigma_0$ (solid and dashed lines, respectively), together with the traced PL $\Phi$ (dashed-dotted lines). The results are given as functions of the temperature, normalized to the critical temperature for $\mu_I = 0$, viz.\ $T_c^0 = 174$~MeV. We also include the curves for the normalized combined quantity $R$, defined by \begin{equation} R \ = \ \frac{\sqrt{\Sigma^2+\Pi^2}}{\Sigma_0}\ . \end{equation} In the right panels of Fig.~\ref{fig:5} we plot the susceptibilities associated with the chiral and pion condensates and the traced Polyakov loop (solid, dashed and dashed-dotted lines, respectively), defined in Eq.~(\ref{chis}). As usual, the peaks of the curves for $\chi_{\rm ch}$ and $\chi_\Phi$ are used to define the chiral restoration and deconfinement transition critical temperatures. From the left panels of Fig.~\ref{fig:5} it is seen that for $\mu_I=0$ the chiral restoration and deconfinement transitions proceed as smooth crossovers at temperatures $T \simeq T_c^0$, while the pion condensate vanishes for all $T$. The situation remains basically the same up to values of $\mu_I$ approaching $m_\pi$.
Then, for a small region of values of $\mu_I$ just below $m_\pi$ (as shown explicitly for the case of $\mu_I/m_\pi=0.99$) the pion condensate vanishes for all $T$ except for a short range of temperatures slightly below the critical value $T_c$ that characterizes the (almost simultaneous) chiral restoration and deconfinement crossover transitions. On the other hand, for $\mu_I > m_\pi$, at low temperatures the pion condensate takes nonzero values, signaling the spontaneous breakdown of isospin symmetry. These values of $\Pi$ are approximately independent of the temperature up to $T\simeq T_c^0$, where one finds a second order transition to a U(1)$_{I_3}$ symmetry restored phase. In addition, it can be seen that these values of $\Pi$ increase with $\mu_I$, while the values of the chiral condensate $\Sigma$ decrease, in such a way that $R$ is approximately constant. We recall that, as discussed in the previous subsection, from lowest order ChPT one gets at $T=0$ a constant value $R=1$ for all values of $\mu_I$. Moreover, as noted in Ref.~\cite{He:2005nk}, the behavior of $R$ as a function of $T$ is very similar to that found for $\Sigma/\Sigma_0$ when pion condensation is not considered. Concerning the deconfinement transition, the graphs on the left panel of Fig.~\ref{fig:5} show that it proceeds as a smooth crossover at an approximately constant temperature $T \lesssim T_c^0$ for the considered range of values of the isospin chemical potential. \begin{figure}[hbt] \centering{}\includegraphics[width=0.75\textwidth]{nlPNJL240-T} \caption{(Color online) Left: numerical results for the normalized $\Sigma$ and $\Pi$ condensates, the traced Polyakov loop $\Phi$ and the quantity $R$ as functions of the temperature, for some fixed values of $\mu_I/m_\pi$.
Right: numerical results for the susceptibilities associated with the chiral and pion condensates and the Polyakov loop, as functions of $T/T_c^0$.} \label{fig:5} \end{figure} Turning now to the plots in the right panels of Fig.~\ref{fig:5}, it can be seen that the PL susceptibility (green dashed-dotted lines) shows clear peaks that indicate a crossover-like deconfinement transition at a temperature slightly lower than $T_c^0$ and approximately independent of $\mu_I$. In the case of the chiral susceptibility (red solid lines in the right panels of Fig.~\ref{fig:5}), for $\mu_I=0$ one finds a peak that defines the critical temperature $T_c^0 = 174$~MeV. Notice that for $\mu_I$ larger than $m_\pi$ the susceptibility $\chi_{\rm ch}$ is relatively large at low temperatures. This is in agreement with the behavior shown in Fig.~\ref{fig:1b}, and it can be attributed to the presence of a nonzero pion condensate. The same effect occurs for values of $\mu_I$ slightly below $m_\pi$ and temperatures $T\lesssim T_c^0$, owing to the existence of the already mentioned nonzero value of $\Pi$ in this region (see panels of the second row in Fig.~\ref{fig:5}). Finally, the pion condensate susceptibility (dashed lines in the right panels of Fig.~\ref{fig:5}) is also found to be nonzero in the presence of the pion condensate. Moreover, as expected, it becomes divergent at the temperatures at which one finds the second order phase transition into the isospin symmetry restored phase. These temperatures are slightly lower than $T_c^0$ and basically coincide with the ones corresponding to the deconfinement transition. For completeness, we show in Fig.~\ref{fig:5b} the behavior of the $\Sigma$ and $\Pi$ susceptibilities as functions of the isospin chemical potential, for $T=0$ and temperatures slightly below and above $T_c^0$.
In fact, it is seen that the behavior of $\chi_{\rm ch}$ and $\chi_\Pi$ found for $T=0$ (see Fig.~\ref{fig:1b}) does not change qualitatively up to the critical isospin symmetry restoration temperature. Notice that for temperatures just below $T_c^0$ the position of the discontinuity is shifted to $\mu_I/m_\pi$ slightly smaller than 1. It is also worth noticing that the curves for $T\lesssim T_c^0$ are quite different from the ones obtained in the framework of the local PNJL model, for which the discontinuity is found to occur at significantly larger values of $\mu_I/m_\pi$ (see Fig.~3 of Ref.~\cite{Lu:2019diy}). \begin{figure}[hbt] \centering{}\includegraphics[width=0.8\textwidth]{suscep-T} \caption{(Color online) Chiral (left) and pion (right) condensate susceptibilities as functions of the isospin chemical potential, for some representative values of the temperature.} \label{fig:5b} \end{figure} Through the analysis of the quantities in Fig.~\ref{fig:5} one can sketch the phase diagram in the $\mu_I-T$ plane. This is shown in Fig.~\ref{fig:6}, where the temperature and the isospin chemical potential are normalized to $T_c^0$ and $m_\pi$, respectively. As expected, for low values of $T$ and $\mu_I$ the system lies in a ``normal matter'' (NM) phase, i.e.\ a U(1)$_{I_3A}$ symmetry broken phase in which the scalar quark-antiquark condensate $\Sigma$ is large and the pion condensate $\Pi$ is zero. By increasing the temperature one reaches a transition to a ``quark gluon plasma'' (QGP) phase, in which quarks deconfine and the chiral symmetry becomes partially restored. Both chiral restoration and deconfinement transitions occur as smooth crossovers, at approximately a common temperature that does not depend significantly on $\mu_I$. The corresponding curves, obtained from the peaks of $\chi_{\rm ch}$ and $\chi_\Phi$ susceptibilities, are shown by the solid and dash-dotted lines in the figure, respectively. 
The results are found to be similar to those obtained from lattice QCD calculations in Ref.~\cite{Brandt:2017oyy}, shown by the gray and blue bands. On the other hand, for temperatures below the critical value $T_c^0$, by increasing the isospin chemical potential one reaches a second order phase transition to a pion-condensate ($\pi$C) region in which the condensate $\Pi$ is nonvanishing and therefore the U(1)$_{I_3}$ symmetry is broken. The onset of this phase, shown by the dashed line in Fig.~\ref{fig:6}, occurs approximately at $\mu_I = m_\pi$ for all temperature values up to $T_c^0$, in agreement with lattice QCD calculations (red band in the figure)~\cite{Brandt:2017oyy}. Then, for $\mu_I > m_\pi$, at a given critical temperature there is a second order phase transition from the $\pi$C phase to the QGP phase. As discussed above, this critical temperature is slightly lower than $T_c^0$ and remains approximately constant for all considered values of $\mu_I$ above the pion mass. The location of the pseudo-triple point where NM, QGP and $\pi$C phases meet is found to be in good agreement with the result obtained in lattice QCD calculations, given by the black square. Concerning the U(1)$_{I_3A}$ symmetry within the $\pi$C phase, it is seen that for a given temperature $T$ lower than $T_c^0$ the values of the quark condensates decrease steadily if $\mu_I$ gets increased beyond $m_\pi$. This can be read from the values of $\Sigma/\Sigma_0$ shown in the left panels of Fig.~\ref{fig:5}. Notice that for $\mu_I\simeq 1.4\,m_\pi$ the value of $\Sigma$ is found to be reduced to approximately one half of the $\mu_I=0$ value $\Sigma_0$. Finally, in Fig.~\ref{fig:6} we also show for comparison the $\pi$C$-$NM transition curves corresponding to the local PNJL model and leading order chiral perturbation theory~\cite{Splittorff:2002xn} (thin dashed and short-dashed lines, respectively). 
As anticipated in the discussion concerning Fig.~\ref{fig:5b}, it is seen that there is a substantial difference between nonlocal and local PNJL-like approaches. This situation does not change significantly if one considers the NJL model omitting the interaction with the Polyakov loop. \begin{figure}[hbt] \centering{}\includegraphics[width=0.75\textwidth]{pdnlPNJL240.eps} \caption{(Color online) Phase diagram in the $\mu_I-T$ plane for the nonlocal PNJL model. NM, QGP and $\pi$C stand for normal matter, quark gluon plasma and pion condensation phases, respectively. Solid (black), dashed (red) and dash-dotted (blue) lines correspond to chiral restoration, pion condensation and deconfinement transitions, while the shaded bands indicate the transition regions obtained from LQCD results in Ref.~\cite{Brandt:2017oyy}. The thin dashed and short-dashed lines indicate the NM$-\pi$C transition curves arising from the local PNJL model and leading order ChPT, respectively.} \label{fig:6} \end{figure} \section{Conclusions} \label{summary} We have analyzed the phase diagram of strongly interacting matter within a nonlocal two-flavor PNJL model, considering both zero and finite temperature and nonzero isospin chemical potential. In this context, we have studied the quark deconfinement and the breakdown/restoration of chiral and isospin symmetries, together with the corresponding footprints on various thermodynamic quantities. At zero temperature, for $\mu_I = m_\pi$ one finds the onset of a phase in which isospin symmetry is broken by the presence of a nonzero pion condensate. Up to $\mu_I \simeq 2m_\pi$, one observes a rapid growth of this condensate, in overall agreement with the predictions from other effective model analyses and LQCD calculations. The agreement is also good for various thermodynamic quantities, such as the pressure, energy density, isospin particle density and interaction energy.
For larger values of $\mu_I$ (where no LQCD data are available up to now), although one finds some general agreement in the qualitative behavior of these quantities, there are significant quantitative discrepancies between the results from different theoretical approaches. In the case of a system at finite temperature, for low values of $\mu_I$ the pion condensate is absent and one gets, as expected, a transition from the usual ``normal matter'' (NM) scenario into a quark-gluon plasma (QGP) phase in which chiral symmetry is restored and quarks are deconfined. This transition proceeds as a smooth crossover signaled by the behavior of chiral and Polyakov loop susceptibilities. The critical temperature $T_c^0\simeq 174$~MeV is approximately the same for both chiral restoration and quark deconfinement. For $T\leq T_c^0$, by increasing the isospin chemical potential one finds a second order transition into a pion condensation ($\pi$C) phase, in which isospin symmetry is spontaneously broken. The corresponding critical line $\mu_I(T)$ is a primary result of our analysis. The critical value $\mu_I = m_\pi$ found at $T=0$ remains approximately constant up to $T\simeq T^0_c$, reaching a pseudo-triple point in which NM, QGP and $\pi$C phases coexist. It can be seen that there is a remarkable agreement between these results and those obtained from lattice QCD calculations. On the other hand, the $\pi$C-QGP transition occurs at a temperature of the order of $T_c^0$, which is approximately constant for $\mu_I > m_\pi$. It is worth noticing that our predictions for the border of the $\pi$C phase region (and, in particular, for the location of the triple point) are in good agreement with the available results from lattice QCD, whereas they differ significantly from the predictions obtained in the framework of the local PNJL model. 
\section{Acknowledgements} This work has been supported in part by Consejo Nacional de Investigaciones Cient\'ificas y T\'ecnicas and Agencia Nacional de Promoci\'on Cient\'ifica y Tecnol\'ogica (Argentina), under Grants No.~PIP17-700 and No.~PICT17-03-0571, respectively, and by the National University of La Plata (Argentina), Project No.~X824.
\section{Introduction} \begin{figure*} \centering \includegraphics[width=.95\textwidth]{Figures/Schematic_NRSECorr_sep12-eps-converted-to.pdf} \caption{ \label{figSchematic} Schematic of the corrected neutron resonance spin echo beamline. The neutron is polarized at the far left (Pol.) before traveling through the first arm, scattering from the sample (Sam.) at angle $\theta$, echoing through the second arm, and entering the analyzer (Ana.) at the far right; the detector is not shown. The correction magnets are labeled ``CM'' and the rf flippers ``RF''. The white space between the CMs is at zero field. The scattering plane for the example neutron path shown is the $x$-$y$ plane. For the first arm, $L_n$ for $n=1,2,3$ is the distance from the beam-defining slit to the center of the corresponding correction magnet, while for the second arm, $L_n$ for $n = 4,5,6$ is the distance from the sample to each correction magnet center. The distance from the optical axis to the point that the neutron passes through the $n^{\mathrm{th}}$ correction magnet is defined as $y_n$. The distance between rf flippers in both arms is $L_{\mathrm{RF}}$. } \end{figure*} Neutron Resonance Spin Echo (NRSE) is a modification of the Neutron Spin Echo (NSE) technique which replaces the static-field precession coils with radio-frequency (rf) spin-flippers. \cite{Golub1987} The underlying principle of both types of \textit{echo} measurements is that the instrument will measure the change in velocity, and hence the change in energy, of a neutron scattered from a sample. Currently, most large neutron sources use static-field NSE instruments for high-energy-resolution measurements of slow dynamics. In order to be competitive with existing NSE instruments, NRSE would need to achieve a Fourier time (also called the spin echo time) of about one hundred nanoseconds. 
\cite{Farago2015,Ohl2012} The Fourier time $\tau$ is given by \begin{equation} \tau = \frac{2 m^2}{h^2} L_{\mathrm{RF}} f \lambda^3 \label{EqnFouriertime} \end{equation} where $L_{\mathrm{RF}}$ is the distance between the rf flippers in each arm, $f$ is the rf flipper (linear) frequency, $\lambda$ is the neutron wavelength, $m$ is the neutron mass, and $h$ is Planck's constant.\cite{Keller2002} State-of-the-art rf flippers are already capable of producing a high-performance NRSE instrument. As an example, suppose an NRSE beamline uses recently developed transverse rf flippers. \cite{Li2020} A beamline with those rf flippers operating at 4 MHz, with a 2 meter separation between the rf flippers, and with a standard NSE wavelength of 1 nm would have a Fourier time of 100 nanoseconds, which is comparable to modern NSE beamlines.\cite{Golub1988} However, due to the long wavelength requirements for both NSE and NRSE, the neutron flux is often low and the relevant samples scatter weakly. Therefore, measurements are only possible with a large spatial and angular beam size, which leads to aberrations. In conventional NSE, one source of aberration is the variation in the static field strength across the beam caused by the field profile of a solenoid geometry. In NRSE, the rf flippers are separated by zero-field regions, so this aberration is not present. A second type of aberration arises from scattering from a sample. The sample is placed in the center of the two symmetric arms as shown in Fig. \ref{figSchematic}. If the neutron scatters with some non-zero momentum transfer, then the path length through the second arm will not be the same as the path length through the first arm, so the neutron will spend a different amount of time in the two arms. 
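As a quick numerical sanity check, the example beamline quoted above (4 MHz flippers, 2 m flipper separation, 1 nm wavelength) can be plugged into the Fourier-time expression; the sketch below uses standard values for the neutron mass and Planck's constant.

```python
# Sanity check of tau = (2 m^2 / h^2) * L_RF * f * lambda^3 for the
# example NRSE beamline quoted in the text.
m = 1.675e-27   # neutron mass (kg)
h = 6.626e-34   # Planck constant (J*s)

def fourier_time(L_rf, f, lam):
    """Fourier (spin echo) time in seconds."""
    return 2 * m**2 / h**2 * L_rf * f * lam**3

tau = fourier_time(L_rf=2.0, f=4e6, lam=1e-9)
print(f"tau = {tau * 1e9:.0f} ns")  # roughly 100 ns
```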
Because NRSE instruments measure the velocity change of a neutron by measuring its Larmor phase $\Phi = 4 \pi f t$, where $t$ is the time it takes for the neutron to travel between the rf flippers, a difference in time is measured as a change in the neutron velocity. An uncorrected echo measurement will then conflate a change in scattering angle with a change in energy. The time can be written in terms of the neutron path length between the rf flippers as \begin{equation} t = \frac{L_{\mathrm{RF}}}{v \cos\theta}, \end{equation} where $v$ is the neutron velocity and $\theta$ is the scattering angle in the scattering plane (see Fig. \ref{figSchematic}).\cite{Keller2002} Thus, for our example NRSE beamline, a neutron scattering at 1 degree would be out of Larmor phase by more than 2000 degrees compared to the unscattered neutrons if there were no correction. This aberration will be present for elastic, quasielastic, and inelastic neutron scattering. Clearly, an NRSE instrument must have a method for correcting this geometric contribution to the Larmor phase. In NSE instruments, longitudinal-field Fresnel coils (i.e. the field is oriented along the optical axis) are used for an analogous correction as well as for correcting the aberration from the static field variation. \cite{Ohl2005,Ohl2012,Farago2015} However, transverse-field rf flippers (i.e. the rf flipper's static field is perpendicular to the optical axis) have been constructed for NRSE measurements, \cite{Li2020, Endo2019} which require a transverse-field correction magnet to improve the polarization. In this paper, we present an analytical solution for the magnetic field profile needed to correct the path length aberrations and the design of a suitable prototype correction magnet. The Larmor phase that a neutron acquires traveling through our prototype NRSE correction magnet varies quadratically with the distance from the optical axis and is radially symmetric. 
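The 2000-degree figure can be reproduced with a short calculation; this sketch uses the small-angle phase difference $\Delta\Phi \approx 2\pi f (\lambda m/h) L_{\mathrm{RF}} \theta^2$, which follows from $\Phi = 4\pi f t$ and $t = L_{\mathrm{RF}}/(v\cos\theta)$, with the example parameters above.

```python
import math

m = 1.675e-27   # neutron mass (kg)
h = 6.626e-34   # Planck constant (J*s)

def phase_error_deg(f, lam, L_rf, theta_deg):
    """Small-angle Larmor phase difference (in degrees) between a neutron
    scattered at theta_deg and an unscattered neutron."""
    theta = math.radians(theta_deg)
    dphi = 2 * math.pi * f * (lam * m / h) * L_rf * theta**2
    return math.degrees(dphi)

# 4 MHz flippers, 1 nm wavelength, 2 m flipper separation, 1 degree scattering
print(phase_error_deg(f=4e6, lam=1e-9, L_rf=2.0, theta_deg=1.0))  # > 2000 degrees
```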
We simulate an NRSE beamline with the correction magnets and experimentally measure the spatial dependence of the Larmor phase change through the device. We demonstrate that transverse-field NRSE beamlines can be corrected with transverse static-field magnets, and therefore have the potential to be competitive with NSE beamlines. \section{Analytical Solution} \label{sec:ana} The two arms of the NRSE instrument will be corrected independently; we will discuss the correction for the second arm first. The necessary magnitude of the correction is proportional to the path length difference $\Delta L_{\mathrm{RF}}$ between the two pairs of rf flippers, which is given by \begin{equation} \label{pathlength} \Delta L_{\mathrm{RF}} = L_{\mathrm{RF}} \left(\frac{1}{\cos\theta} - 1\right) \approx \frac{1}{2}L_{\mathrm{RF}}\theta^2, \end{equation} where $\theta$ is the small scattering angle shown in Fig. \ref{figSchematic}, defined relative to the optical axis. This difference in path length will cause a delay in time, and thus a difference in the Larmor phase $\Phi$. The difference in Larmor phase $\Delta \Phi$ between the unscattered and scattered beam for idealized rf flippers in the NRSE configuration is\cite{Keller2002} \begin{equation} \label{deltaLinital} \Delta \Phi = \frac{4 \pi f}{v} \Delta L_{\mathrm{RF}} \approx 2 \pi f \frac{\lambda m}{h} L_{\mathrm{RF}} \theta^2. \end{equation} The Larmor phase has already been defined as $\Phi = 4 \pi f t$ for an NRSE instrument, but it can also be defined for NSE instruments in terms of the magnetic field integral: $\Phi = (\gamma/v) \int ds \, B$, where $\mathrm{FI}_s = \int ds \, B$ is the field integral experienced by the neutron traveling along the path $s$ and $\gamma \approx -1.832 \times 10^8$ rad/(T$\cdot$s) is the neutron's gyromagnetic ratio. 
In NSE, the neutron magnetic moment rotates (precesses) in the plane perpendicular to an applied static field and the Larmor phase measures the amount of this rotation relative to some fixed direction in the lab. In NRSE, the rf field rotates in the plane perpendicular to a static field, and the Larmor phase measures the angle between the neutron magnetic moment and the rf field. Hence, rf and static field effects on the neutron can be added together, as has been exploited recently.\cite{Jochum2020b} To correct the phase difference, we must design a correction scheme consisting of static magnetic fields that generates a Larmor phase proportional to $\theta^2$, with the proportionality coefficient $\chi$ being \begin{equation} \label{CC} \chi = 2 \pi \frac{f}{\gamma} L_{\mathrm{RF}}. \end{equation} Notice that the required correction is independent of wavelength. With this correction, the Larmor phase of all diverging neutrons will be corrected as if they traveled the same effective path length, namely the distance $L_{\mathrm{RF}}$. Unfortunately, it is not obvious how to design a static magnetic field profile that would generate a purely $\theta^2$-dependent field integral term for a finite-sized beam. However, we can generate such a term for a finite-sized beam with three correction magnets consisting of transverse static fields in which the magnitude of the field integral of a neutron traveling through the devices varies quadratically as a function of transverse position, with the center being the minimum and increasing radially outward. For scattering from a point-like sample, one can show that only two devices of this type are needed. This design is a two-dimensional extension of the original solution proposed by Monkenbusch. 
\cite{Monkenbusch1999} For simplicity, we only look at the aberrations in one dimension (transverse $y$ direction), but the following argument can be easily generalized to the entire two-dimensional plane perpendicular to the optical axis. The neutrons may be scattered from any position on the sample at a distance of $y_{\mathrm{sam}}$ from the optical axis into any angle $\theta$ defined relative to the optical axis. If the sample were point-like, then the scattering angle $\theta$ would be defined just by the $y$ distance from the optical axis and the $\theta^2$ aberration would be known simply from the $x$ and $y$ position in the second arm. With a finite-size sample, the scattering position $y_{\mathrm{sam}}$ will add to the $y$ position from the scattering angle, so one device at a specific point along the beamline is not sufficient to correct the $\theta^2$ aberration. To lowest order in scattering angle, the field integral per amp of a single prototype correction magnet is \begin{equation} \label{correctiondeviceFIgeneral} \mathrm{FI}_n = a_n + b_n y_n^2 + c_n y_n \theta, \end{equation} where $y_n$ is the distance between the neutron's path at the $n^{\mathrm{th}}$ correction magnet and the optical axis (see Fig. \ref{figSchematic}). There is no linear term in $y$ because the device is left-right symmetric; similarly, there is no linear term in $z$ due to the top-bottom symmetry. Each term in the field integral is proportional to the applied current, and $a_n$ has units of T$\cdot$m, $b_n$ has units of T/m, and $c_n$ has units of T/rad. Here $b_n$ is the term that corrects the path-length aberrations while the $a_n$ and $c_n$ terms appear because of the particular correction magnet geometry that we have chosen; higher order terms were found to have a negligible contribution to the field integral. 
The transverse position that the scattered neutron passes through each correction magnet is given by \begin{equation} \label{ydefinition} y_n = y_{\mathrm{sam}}+ L_n \theta, \end{equation} where $\theta$ is assumed to be small and $L_n$ is the distance from the sample to the center of the $n$\textsuperscript{th} correction magnet, as shown in Fig. \ref{figSchematic}. Combining Eqns. \eqref{correctiondeviceFIgeneral} and \eqref{ydefinition} for each device, we find the total field integral per amp $\mathrm{FI_T}$ experienced by a neutron arriving at the analyzer due to the three correction magnets to be \begin{equation} \label{eqnLarmorPhaseCC} \begin{aligned} \mathrm{FI_T} = \sum_{n \in \{4,5,6\}} \big[& a_n + b_n y_{\mathrm{sam}}^2 + (c_n + 2 b_n L_n) y_{\mathrm{sam}} \theta \\ &+ L_n (c_n + b_n L_n) \theta^2 \big]. \end{aligned} \end{equation} The goal of the correction scheme is to have the coefficient of the $\theta^2$ term equal to Eqn. \eqref{CC} while having all other terms zero. Doing so, the series of correction magnets would correct the path-length aberration in the second arm of the instrument regardless of the $y_{\mathrm{sam}}$ position and without introducing any net field integral. These requirements on Eqn. \eqref{eqnLarmorPhaseCC} can be rewritten into several conditions: \begin{gather*} \label{CoefficientCancellingEqns} a_4 + a_5 + a_6 = b_4 + b_5 + b_6 = c_4 + c_5 + c_6 = 0 \\ b_4 L_4 + b_5 L_5 + b_6 L_6 = 0 \\ \chi = L_4 (c_4 + b_4 L_4) + L_5 (c_5 + b_5 L_5) + L_6 (c_6 + b_6 L_6). \end{gather*} Notice that if the sum of the currents through the three correction magnets is zero, then the first line will be satisfied. 
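The grouping of terms in the expression for $\mathrm{FI_T}$ can be spot-checked numerically by comparing it against a direct evaluation of the three per-magnet field integrals; the coefficient values below are arbitrary test numbers, not instrument parameters.

```python
# Check that substituting y_n = y_sam + L_n*theta into
# FI_n = a_n + b_n*y_n**2 + c_n*y_n*theta and summing over n = 4, 5, 6
# reproduces the grouped expansion for FI_T.
a = {4: 0.2, 5: -0.7, 6: 0.5}
b = {4: 1.3, 5: -2.1, 6: 0.8}
c = {4: 0.4, 5: 0.9, 6: -1.6}
L = {4: 1.0, 5: 2.0, 6: 3.0}
y_sam, theta = 0.013, 0.005   # sample offset (m) and scattering angle (rad)

direct = sum(a[n] + b[n] * (y_sam + L[n] * theta) ** 2
             + c[n] * (y_sam + L[n] * theta) * theta for n in (4, 5, 6))
grouped = sum(a[n] + b[n] * y_sam**2
              + (c[n] + 2 * b[n] * L[n]) * y_sam * theta
              + L[n] * (c[n] + b[n] * L[n]) * theta**2 for n in (4, 5, 6))
assert abs(direct - grouped) < 1e-12
```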
Ignoring the $a_n$ terms for now, we solve this system of equations, obtaining \begin{subequations} \label{SolutionEnc} \begin{align} b_4 &= \frac{\chi + c_4(L_5 - L_4) + c_6(L_5 - L_6)}{(L_4 - L_5)(L_4 - L_6)} \\ b_5 &= -\frac{\chi + c_4(L_5 - L_4) + c_6(L_5 - L_6)}{(L_4 - L_5)(L_5 - L_6)} \\ b_6 &= \frac{\chi + c_4(L_5 - L_4) + c_6(L_5 - L_6)}{(L_4 - L_6)(L_5 - L_6)} \\ c_5 &= - (c_4 + c_6), \end{align} \end{subequations} with $c_4$ and $c_6$ being free parameters. From this set of solutions, it is apparent that if we choose CM5, the fifth correction magnet, to be equidistant from CM4 and CM6 (such that $|L_4 - L_5| = |L_5 - L_6| = \delta L$) and also $c_4 = c_6$, then the angle-dependent $c_n$ terms cancel out, leaving $b_4 = b_6 = \chi/[2(\delta L)^2]$ and $b_5 = -\chi/(\delta L)^2$. Therefore, we can obtain our desired field integral by putting the same field in the first and last device and a field twice as large in the opposite direction in the middle device. We call this choice of fields the $(1,-2,1)$ configuration. In this configuration, the constant $a_n$ terms will also cancel out. Next, we determine the magnitude required for $b_n$ for a realistic NRSE beamline. Substituting Eqn. \eqref{CC} into the $b_n$ terms in Eqn. \eqref{SolutionEnc}, we see that \begin{gather} b_4 = \frac{\pi f L_{\mathrm{RF}}}{\gamma (\delta L)^2} \\ b_5 = -2 b_4, \quad b_6 = b_4. \nonumber \end{gather} To estimate the required field in the correction magnet, let $L_{\mathrm{RF}} = 2$ m, $\delta L = 1$ m, and $f = 4$ MHz. Plugging in the numbers, we find $b_4 \approx -140$ mT/m. As we will show below, this value is attainable with our correction magnet. The above discussion has only considered the second arm; now we look at the first arm. If the initial beam is well-collimated (i.e. all neutrons in the first arm travel parallel to the optical axis), then no correction elements are required in the first arm even if there are correction magnets in the second arm. 
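A numeric sketch of the solution above: the closed-form coefficients can be checked against the constraint equations for arbitrary parameter values, and the quoted $b_4 \approx -140$ mT/m estimate follows from $b_4 = \pi f L_{\mathrm{RF}}/[\gamma(\delta L)^2]$.

```python
import math

gamma = -1.832e8   # neutron gyromagnetic ratio, rad/(T*s)

def solve_b(chi, L4, L5, L6, c4, c6):
    """Closed-form solution for (b4, b5, b6, c5), with c4 and c6 free."""
    num = chi + c4 * (L5 - L4) + c6 * (L5 - L6)
    b4 = num / ((L4 - L5) * (L4 - L6))
    b5 = -num / ((L4 - L5) * (L5 - L6))
    b6 = num / ((L4 - L6) * (L5 - L6))
    return b4, b5, b6, -(c4 + c6)

# Verify the constraint equations for arbitrary test values
chi, L4, L5, L6, c4, c6 = 5.0, 1.0, 2.0, 3.0, 0.3, -0.1
b4, b5, b6, c5 = solve_b(chi, L4, L5, L6, c4, c6)
assert abs(b4 + b5 + b6) < 1e-12
assert abs(b4 * L4 + b5 * L5 + b6 * L6) < 1e-12
assert abs(L4 * (c4 + b4 * L4) + L5 * (c5 + b5 * L5)
           + L6 * (c6 + b6 * L6) - chi) < 1e-12

# (1,-2,1) configuration estimate with L_RF = 2 m, dL = 1 m, f = 4 MHz
f, L_rf, dL = 4e6, 2.0, 1.0
b4_est = math.pi * f * L_rf / (gamma * dL**2)
print(f"b4 = {b4_est * 1e3:.0f} mT/m")  # approximately -140 mT/m
```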
However, in practice, neutrons in a real instrument have some divergence angle relative to the optical axis. This initial beam divergence will lead to a variation in path length between neutrons propagating at different angles in the first arm, similar to the scattering term for the second arm. Therefore, we must correct for this variation in the first arm with another three correction magnets, as shown in Fig. \ref{figSchematic}. They must be in the $(-1,2,-1)$ configuration as the static magnetic field in both rf flippers is in the opposite direction relative to the static fields in the rf flippers in the second arm. Without this additional correction, we do not obtain the best possible improvement to the polarization. The correction for the divergence angle of the neutron in the first arm will not require any changes to the correction magnet set-up in the second arm because the correction for the second arm is independent of angle or $y_{\mathrm{sam}}$ position. With all six correction magnets installed, the Larmor phase, and hence Fourier time, of neutrons along any path in either arm will be corrected to the Larmor phase and Fourier time of a neutron traveling parallel to the optical axis. \section{Development of the Correction Magnet} \label{sec:design} \begin{figure} \includegraphics[width=.95\linewidth]{Figures/CorrDevelopment_sep20-eps-converted-to.pdf} \caption{\label{figCorrDevelopment} (a) CAD model of the correction magnet. The light blue surfaces are the high-temperature superconducting (HTS) films (front film not shown), the purple is the low-carbon steel chevron, the tan are the HTS coils, and the brown is the low-carbon steel flux return and pole pieces. (b and c) Simulation of fields at $y = 0$ and $z = 0$ with the coil current set to 10 amps; both panels share the same colorbar legend. The beam traveled from left to right. The HTS films are highlighted in white. 
(d) Contour plot of simulated field integral through the correction device at 10 amps for a neutron originating from a point source at $y = z = 0$ and $x = -20$ m. (e) Slices of the simulated field integrals in (d) through the origin.} \end{figure} To implement the analytical solution, we designed and constructed a vertical-field correction magnet following the drawing of Fig. \ref{figCorrDevelopment}(a). The bottom, top, and sides of the coils are enclosed by a magnetic flux return made of low-carbon steel (alloy 1018). High-temperature superconducting (HTS) films, gold-coated 350 nm thick YBCO on a 0.5 mm thick sapphire substrate, were placed on the front and sides of the coils with another film placed 38 mm after the coils, outside of the magnetic circuit. These HTS films act as magnetic field screens due to the Meissner effect, which creates sharp boundaries between field regions, as shown in Fig. \ref{figCorrDevelopment}(b,c). Thus the magnet can be thought of in two parts: a contained region where the coils sit and an open region before the back film. HTS wire was wound around hollow low-carbon steel pole pieces and topped with ``chevrons'', low-carbon steel plates with v-shaped cutouts. The opening of the v is at the front of the device. The thickness of the chevrons was 3.2 mm, leaving a separation between chevrons of 50 mm. The angle of the chevron was 60 degrees, and the space from the coils to the rear HTS film was 38 mm. It is well known that a dipole magnet without an HTS film constraining the magnetic flux will create a magnetic field with a quadratic $z$-dependence; correcting for this was one of the initial advantages of adding an HTS film. \cite{Wang2014} The magnetic field in this device was simulated using the Siemens MagNet\textcopyright{} software, which includes the material properties in its solutions via the finite-element method. The HTS films were simulated as perfect diamagnets preventing any perpendicular magnetic flux. 
We note that the explicit field profile in the device is arbitrary as long as the resulting field integral is quadratic across the device and the field direction does not change too quickly. A useful feature of this design is that the $y$ and $z$ components of the field integral may be tuned independently. The $z$ component, shown in Fig. \ref{figCorrDevelopment}(b), is largely dependent on the distance to the back film, while the $y$ component, shown in Fig. \ref{figCorrDevelopment}(c), is largely dependent on the chevron angle. As shown in Fig. \ref{figCorrDevelopment}(b), the quadratic behavior of the $z$ component comes from the bowing field lines protruding around the back of the coils due to the displaced back film. A numerical solution to the field integral for any starting position $(y,z)$ and angle through the device was found by extracting the MagNet solution for the field at each point (mesh size 4 mm) and integrating the field along any chosen path. The simulated field integral through the coils at 10 amps for a neutron traveling from a far-away point source at $y = z = 0$ and $x = -20$ m is shown in Fig. \ref{figCorrDevelopment}(d). The difference in field integral between the center and edges is about 0.01 mT$\cdot$m, which is approximately the necessary correction value for an NRSE beamline with an rf flipper frequency of 1 MHz and 1 meter between the correction magnets. The field integral through the center is about 1.69 mT$\cdot$m. \section{McStas Simulations of an NRSE Instrument} \label{sec:mcstas} \FloatBarrier Using McStas, a Monte Carlo neutron ray-tracing software package,\cite{Willendrup_Lefmann_2020, Willendrup_Lefmann_2021} we simulated an NRSE beamline with these correction magnets installed. The polarizer, rf flippers, analyzer, and detector were taken to be 100$\%$ efficient, while the correction magnet component was built using numerically simulated magnetic field data extracted from the MagNet simulations. 
The following model was used for the rf flipper at resonance: \begin{equation} \Phi_f = 2 \pi f (2t_i + \Delta t) -\Phi_i, \end{equation} where $\Phi_f$ is the final Larmor phase after exiting the rf flipper, $f$ the rf frequency (2 MHz for these simulations), $t_i$ the time at which the neutron enters the flipper, $\Delta t$ the time spent inside the flipper of thickness 15 cm, and $\Phi_i$ the initial Larmor phase when entering the flipper. We used a modified version of the ``SANS\_spheres2'' sample, a default McStas sample that emulates elastically scattering hard spheres in a dilute solution. The sample parameters were chosen to prevent incoherent scattering or transmission without scattering, so the component acted like an idealized elastic scatterer with a maximum momentum transfer of 0.09 nm$^{-1}$, which corresponds to a maximum scattering angle of 0.7 degrees. The sample was 3 cm by 3 cm in transverse size with negligible thickness. The aperture diameter and the neutron wavelength were 2 cm and $0.8$ nm $ \pm 1\%$, respectively. The separation between the rf flippers in each arm was taken to be 2.3 m, and the distance between the correction magnets 1 meter. The total distance from the source to the two-dimensional detector was 5.2 meters. With these parameters, the effective initial beam divergence is about 0.6 degrees. Using the magnetic field data extracted from MagNet simulations, we determined that our prototype correction magnet had the following field integral per amp expansion across the correction magnet: \begin{equation} \label{Eqn:fullFIE} \mathrm{FI} = a + b_y y^2 + b_z z^2 + c_y y \theta + c_z z \psi, \end{equation} where $\psi$ is the vertical neutron divergence angle and the values of the fitted coefficients are given in Tab. \ref{tab:FIE CC coefficients} below. Higher order terms were found to have a negligible contribution. 
\renewcommand{\arraystretch}{2} \begin{table}[h] \centering \newcolumntype{R}{>{\centering\arraybackslash}X} \begin{tabularx}{.99\linewidth}{R|R|R|R|R} $a \left(\frac{\mathrm{mT}\cdot \mathrm{m}}{\mathrm{A}}\right)$ & $b_y \left(\frac{\mathrm{mT}}{\mathrm{A}\cdot\mathrm{m}}\right)$ & $b_z \left(\frac{\mathrm{mT}}{\mathrm{A}\cdot\mathrm{m}}\right)$ & $c_y \left(\frac{\mathrm{mT}}{\mathrm{A}\cdot\mathrm{rad}}\right)$ & $c_z \left(\frac{\mathrm{mT}}{\mathrm{A}\cdot\mathrm{rad}}\right)$ \\ \hline 0.169 & 8.78 & 8.97 & -0.525 & 0.986 \end{tabularx} \renewcommand{\arraystretch}{1} \caption{\label{tab:FIE CC coefficients} Values of the field integral expansion coefficients in Eqn. \eqref{Eqn:fullFIE}. The longitudinal length of the magnet is 14 cm.} \end{table} \begin{figure}[!b] \centering \includegraphics[width=.95\linewidth]{Figures/Correction_coil_mcstas_tinyinset.png} \caption{\label{figNRSEmcstas} Simulated McStas polarization for an NRSE beamline with all six correction magnets on (red curve), only the final three correction magnets on (green curve), only the first three correction magnets on (orange curve), and no correction magnets on (blue curve). The inset compares the polarization for all correction magnets on (red curve) to all correction magnets off and sample removed (purple curve) which shows that the polarization drops to about 0.9998 at the edge of the detector when both arms are corrected.} \end{figure} The NRSE beamline was simulated in McStas both with and without the correction magnets. A plot comparing the simulated echo polarizations vs. radial position on the detector is shown in Fig. \ref{figNRSEmcstas}. These simulations confirm that the inclusion of the correction elements greatly increases the polarization, especially for larger scattering angles which correspond to the edges of the detector. 
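Two numbers quoted earlier, the $\sim$1.69 mT$\cdot$m field integral through the center and the $\sim$0.01 mT$\cdot$m center-to-edge difference at 10 amps, can be recovered from the fitted coefficients in Tab. \ref{tab:FIE CC coefficients}; the 1 cm off-axis distance used for the ``edge'' below is our assumption.

```python
# Field integral (mT*m) of the prototype magnet from the fitted expansion
# FI = I * (a + b_y*y^2 + b_z*z^2 + c_y*y*theta + c_z*z*psi),
# with coefficients taken from the table (per-amp values).
a, b_y, b_z, c_y, c_z = 0.169, 8.78, 8.97, -0.525, 0.986

def field_integral(I, y, z, theta=0.0, psi=0.0):
    """Field integral in mT*m at current I (A), position (y, z) in meters,
    and divergence angles (theta, psi) in radians."""
    return I * (a + b_y * y**2 + b_z * z**2 + c_y * y * theta + c_z * z * psi)

I = 10.0                               # amps
center = field_integral(I, 0.0, 0.0)   # ~1.69 mT*m through the center
edge = field_integral(I, 0.01, 0.0)    # 1 cm off-axis (assumed edge)
print(center, edge - center)           # difference ~0.009 mT*m
```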
Correcting only the second arm of the beamline improves the polarization for the larger scattering angles, although the polarization at the center of the detector is worsened compared to the uncorrected simulation due to the initial neutron divergence angle. However, if the initial beam divergence is large (e.g., about 1 degree or more for our specific simulation parameters), then both of the single arm correction schemes show very little improvement in the polarization, so both arms must be corrected. The alignment of the correction magnets is also important, with more precision required for higher Fourier times. For the simulation parameters used above, all correction magnets must be aligned within approximately $\pm 0.5$ mm in both the $y$ and $z$ directions. \section{Experiment Results} \label{sec:experiment} \begin{figure}[b] \centering \includegraphics[width=.95\linewidth]{Figures/beamline_diagram_sep21.png} \caption{\label{figBeamline} Schematic of the experimental test of the correction magnet (CM). The beam travels from left to right. Precession occurred inside both the CM and guide field (GF).} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=.95\textwidth]{Figures/2d_analysis_10amp_sep12-eps-converted-to.pdf} \caption{\label{figCorrAnalysis} Data with a current of -10 amps in the NRSE correction magnet. (a) The intensity vs. position recorded by the Anger camera when the guide field coil had a current of 1.11 amps. (b) A cosine fit of the intensity vs. guide field coil current for several pixels along the line $z=0$. (c) The phase and (d) phase error extracted from the cosine fit shown for each pixel. (e) The quadratic fit of the phase data to Eqn. \eqref{Quadratic2d} and (f) the quadratic fit subtracted from the phase data. 
} \end{figure*} \begin{figure} \centering \includegraphics[width=.95\linewidth]{Figures/quadratic_fig_sep2-eps-converted-to.pdf} \caption{\label{figCorrAnalysis1d} (a) Horizontal and (b) vertical slices through the magnetic center for different currents in the correction magnet. Data are fit to a parabola.} \end{figure} A measurement of the field integral through the prototype correction magnet was performed on the cold-neutron, polarized test beamline CG4B at the High Flux Isotope Reactor (HFIR) at Oak Ridge National Laboratory (ORNL). As shown in Fig. \ref{figBeamline}, the correction magnet was installed in a vacuum chamber in front of a guide field magnet that also generated a field in the vertical, $z$-direction. The guide field magnet had a front and back HTS film, with the front film also serving as the back film of the correction magnet. The vertical guide field magnitude was designed to be spatially uniform, as the neutron will continue to precess in it. S-benders served as the neutron polarizer and analyzer. A horizontal guide field outside of the correction magnet and the non-adiabatic field transition through the HTS film induced precession inside both the correction magnet and the spatially uniform guide field. Precession was stopped by another horizontal guide field after the back HTS film of the guide field. The beam size was determined by a square 1 by 1 cm slit located 1 meter in front of the correction magnet and a square 2.5 by 2.5 cm slit at the end of the guide field coil. The wavelength was 0.55 nm with a FWHM wavelength spread of less than 1$\%$. The variation in the Larmor phase across the beam was measured with the two-dimensional detector. The detector was an Anger camera with 1.8 mm pixel size, as shown in Fig. \ref{figCorrAnalysis}(a). \cite{Riedel2015, Cao2018} From the detector image in Fig. \ref{figCorrAnalysis}(a), one can directly see an approximately ``bullseye''-shaped signal, suggesting a radial dependence of the Larmor phase. 
The Larmor phase was measured by setting a current in the correction magnet and scanning the current in the precession guide field between 1 and 1.12 amps. This current range varies the phase of neutrons passing through the magnet by about $2 \pi$, as shown in Fig. \ref{figCorrAnalysis}(b). The difference in the phase of the curves shown in Fig. \ref{figCorrAnalysis}(b) shows the different Larmor phase acquired by neutrons traveling through the correction magnet. The intensity recorded in each pixel varies greatly due to the spatial non-uniformity in the CG4B beam intensity as well as the non-uniform detector efficiency. The intensity $N$ as a function of pixel $(y,z)$ was fit to \begin{equation} N(y,z) = \alpha + \beta \cos[\phi + f_g(I - I_0)], \end{equation} where $\alpha$ and $\beta$ are fitting parameters, $\phi$ is the Larmor phase from the correction magnet, $f_g$ is the frequency of the oscillation in the polarization due to the Larmor phase produced by the guide field, $I$ is the current in the guide field, and $I_0$ is the current at the start of the scan (1 amp in this case). The polarization of the signal is defined as $\beta/\alpha$. We fit the relative phase with respect to the center pixel, which we set to zero. The phase data $\phi$ were fit to the following two-dimensional quadratic function: \begin{equation} \label{Quadratic2d} \phi(y,z) = \phi_2 [(y - y_0)^2 + \epsilon(z-z_0)^2] + \phi_0, \end{equation} where $\phi_2$ and $\phi_0$ are fitting parameters, $\epsilon$ is an eccentricity term that allows for a difference in the $y$ and $z$ correction terms, and $(y_0, z_0)$ is the beam center. The fit is shown in Fig. \ref{figCorrAnalysis}(e). Subtracting the quadratic fit from the phase data gives the accessible corrected beam size, shown in Fig. \ref{figCorrAnalysis}(f), to be about 2 centimeters. MagNet simulations show that the eccentricity term can be tuned to unity by varying the chevron angle and back film separation distance. 
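The per-pixel fit can be sketched as follows. Because the intensity model is linear in $(\alpha, \beta\cos\phi, -\beta\sin\phi)$ once $f_g$ is known, the phase can be extracted with an ordinary least-squares fit rather than a nonlinear one; the scan range and noise level below are illustrative synthetic values, not the measured data.

```python
import numpy as np

rng = np.random.default_rng(0)
f_g = 2 * np.pi / 0.12          # rad/amp: one full turn over the 1.00-1.12 A scan
I = np.linspace(1.0, 1.12, 25)  # guide-field current scan (amps)
x = f_g * (I - 1.0)

# Synthetic single-pixel intensity data with a known phase
phi_true, alpha, beta = 1.3, 100.0, 40.0
N = alpha + beta * np.cos(phi_true + x) + rng.normal(0.0, 0.5, x.size)

# N = alpha + A*cos(x) + B*sin(x), with A = beta*cos(phi), B = -beta*sin(phi)
M = np.column_stack([np.ones_like(x), np.cos(x), np.sin(x)])
_, A, B = np.linalg.lstsq(M, N, rcond=None)[0]
phi_fit = np.arctan2(-B, A)
print(phi_fit)  # close to phi_true = 1.3
```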
There are several unexpected features in the data. Most notably, the center of the beam is not the same as the center of the quadratic fit, which we call the magnetic center. This discrepancy is possibly due to a misalignment of the beam mask and the correction magnet. Additionally, the positive $y$ side of the magnet has a larger discrepancy between the data and the fit compared to the negative side. While these features are surprising, the most likely source of the off-center signal is misalignment between the beam apertures and the correction magnet, and the non-homogeneous signal is possibly due to a magnetic inhomogeneity in the soft iron used in the pole and chevron pieces. An additional feature is the non-radial dependence (i.e. the diamond shape of the fitted phase) of the data at large $(y,z)$. This feature can partially account for where the quadratic fit fails to match the data in Fig. \ref{figCorrAnalysis}(f). It can also be seen in the MagNet simulations of the field integral, suggesting that it is a result of the chevron design. A more sophisticated pole piece shape may be required to adjust the field integral into the proper quadratic shape for larger beam sizes. In order to compare different currents in the correction magnet, we fit a vertical and horizontal slice through the magnetic center to a parabola as displayed in Fig. \ref{figCorrAnalysis1d}. The offset in $y$ of 3 mm remains approximately constant for all currents, which is consistent with the conclusion of misalignment between the correction magnet and the beam. The quadratic coefficient divided by the current should be the same for all currents if the field is generated solely by the current in the correction magnets. However, the phase change at 15 amps is only 2.6 times the variation at 5 amps. This difference is possibly due to hysteresis effects and domain formation in the soft iron inside of the correction magnet. 
It may also be due to the coupling of the field in the correction magnet to the external guide fields, although MagNet simulations show very little coupling. \section{Discussion and Conclusion} \label{sec:dis} This correction technique is for transverse-field NRSE instruments, while Fresnel coils may be installed for longitudinal NRSE. An advantage of this device compared to Fresnel coils is the small amount of neutron-absorbing or scattering material in the beam. There is a 100 nm film of gold coating a 350 nm film of YBCO on a 0.5 mm sapphire substrate. With a 1 nm neutron wavelength, 12 of these films will have a transmission of $\sim 95\%$. Fresnel coils add at least 2 cm of aluminum wire, which has a transmission of $\sim 86\%$ for a 1 nm neutron wavelength. The exact amount of material for a Fresnel coil depends on the required current, so reaching a higher Fourier time generally produces more background scattering. However, Fresnel coils have a long history of being successfully used for corrections in NSE and have been built to accommodate much larger beam sizes. One of the reasons longitudinal NRSE is preferred for the Reseda instrument at FRM-II is the historical difficulty in correcting transverse NRSE path length aberrations. \cite{Franz2019} If this correction technique can reach the same performance as Fresnel coils, the choice of longitudinal or transverse NRSE will be more complicated: transverse rf flippers offer the opportunity to have a higher effective frequency in ``bootstrap'' mode, \cite{Li2020} while longitudinal rf flippers have a proven history of high performance.\cite{Franz2019} Another method of correcting divergent neutrons has recently been installed at VIN-ROSE at J-PARC, which addresses the same problem by adding elliptical mirrors to each arm so that all neutron paths will be the same length. \cite{Endo2019} To our knowledge, this approach has not yet been used to correct for scattering from different points in the sample. 
\section{Introduction} \begin{figure*} \centering \includegraphics[width=.95\textwidth]{Figures/Schematic_NRSECorr_sep12-eps-converted-to.pdf} \caption{ \label{figSchematic} Schematic of the corrected neutron resonance spin echo beamline.
The neutron is polarized at the far left (Pol.) before traveling through the first arm, scattering from the sample (Sam.) at angle $\theta$, echoing through the second arm, and entering the analyzer (Ana.) at the far right; the detector is not shown. The correction magnets are labeled ``CM'' and the rf flippers ``RF''. The white space between the CMs is at zero field. The scattering plane for the example neutron path shown is the $x$-$y$ plane. For the first arm, $L_n$ for $n=1,2,3$ is the distance from the beam-defining slit to the center of the corresponding correction magnet, while for the second arm, $L_n$ for $n = 4,5,6$ is the distance from the sample to each correction magnet center. The distance from the optical axis to the point that the neutron passes through the $n^{\mathrm{th}}$ correction magnet is defined as $y_n$. The distance between rf flippers in both arms is $L_{\mathrm{RF}}$. } \end{figure*} Neutron Resonance Spin Echo (NRSE) is a modification of the Neutron Spin Echo (NSE) technique which replaces the static-field precession coils with radio-frequency (rf) spin-flippers. \cite{Golub1987} The underlying principle of both types of \textit{echo} measurements is that the instrument will measure the change in velocity, and hence the change in energy, of a neutron scattered from a sample. Currently, most large neutron sources use static-field NSE instruments for high-energy-resolution measurements of slow dynamics. In order to be competitive with existing NSE instruments, NRSE would need to achieve a Fourier time (also called the spin echo time) of about one hundred nanoseconds. 
\cite{Farago2015,Ohl2012} The Fourier time $\tau$ is given by \begin{equation} \tau = \frac{2 m^2}{h^2} L_{\mathrm{RF}} f \lambda^3 \label{EqnFouriertime} \end{equation} where $L_{\mathrm{RF}}$ is the distance between the rf flippers in each arm, $f$ is the rf flipper (linear) frequency, $\lambda$ is the neutron wavelength, $m$ is the neutron mass, and $h$ is Planck's constant.\cite{Keller2002} State-of-the-art rf flippers are already capable of producing a high-performance NRSE instrument. As an example, suppose an NRSE beamline uses recently developed transverse rf flippers. \cite{Li2020} A beamline with those rf flippers operating at 4 MHz, with a 2 meter separation between the rf flippers, and with a standard NSE wavelength of 1 nm would have a Fourier time of 100 nanoseconds, which is comparable to modern NSE beamlines.\cite{Golub1988} However, due to the long wavelength requirements for both NSE and NRSE, the neutron flux is often low and the relevant samples scatter weakly. Therefore, measurements are only possible by having a large spatial and angular beam size, which leads to aberrations. In conventional NSE, one source of aberration is the variation in the static field strength across the beam, caused by the field profile of the solenoid geometry. In NRSE, the rf flippers are separated by zero-field regions, so this aberration is not present. A second type of aberration arises from scattering from a sample. The sample is placed in the center of the two symmetric arms as shown in Fig. \ref{figSchematic}. If the neutron scatters with some non-zero momentum transfer, then the path length through the second arm will not be the same as the path length through the first arm, so the neutron will spend a different amount of time in the two arms.
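As a quick numerical sanity check of Eqn. \eqref{EqnFouriertime}, the following Python sketch (not part of any instrument software; the constants are standard CODATA values and the parameters are the example values quoted in the text) reproduces the $\sim$100 ns Fourier time:

```python
# Numeric check of the Fourier time tau = (2 m^2 / h^2) * L_RF * f * lambda^3
# using the example parameters from the text (L_RF = 2 m, f = 4 MHz, lambda = 1 nm).
m = 1.67492749804e-27   # neutron mass, kg (CODATA)
h = 6.62607015e-34      # Planck constant, J*s

def fourier_time(L_rf, f, lam):
    """Fourier (spin echo) time in seconds."""
    return 2.0 * m**2 / h**2 * L_rf * f * lam**3

tau = fourier_time(L_rf=2.0, f=4.0e6, lam=1.0e-9)
print(f"tau = {tau * 1e9:.0f} ns")   # ~100 ns, consistent with the text
```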
Because NRSE instruments measure the velocity change of a neutron by measuring its Larmor phase $\Phi = 4 \pi f t$, where $t$ is the time it takes for the neutron to travel between the rf flippers, a difference in time is measured as a change in the neutron velocity. An uncorrected echo measurement will then conflate a change in scattering angle with a change in energy. The time can be written in terms of the neutron path length between the rf flippers as \begin{equation} t = \frac{L_{\mathrm{RF}}}{v \cos\theta}, \end{equation} where $v$ is the neutron velocity and $\theta$ is the scattering angle in the scattering plane (see Fig. \ref{figSchematic}).\cite{Keller2002} Thus, for our example NRSE beamline, a neutron scattering at 1 degree would be out of Larmor phase by more than 2000 degrees compared to the unscattered neutrons if there were no correction. This aberration will be present for elastic, quasielastic, and inelastic neutron scattering. Clearly, an NRSE instrument must have a method for correcting this geometric contribution to the Larmor phase. In NSE instruments, Fresnel coils with longitudinal fields (i.e. the field is oriented along the optical axis) are used for an analogous correction as well as correcting the aberration from the static field variation. \cite{Ohl2005,Ohl2012,Farago2015} However, transverse-field rf flippers (i.e. the rf flipper's static field is perpendicular to the optical axis) have been constructed for NRSE measurements, \cite{Li2020, Endo2019} which require a transverse-field correction magnet to improve the polarization. In this paper, we present an analytical solution of the magnetic field profile needed to correct the path length aberrations and the design of a suitable prototype correction magnet. The Larmor phase that a neutron acquires traveling through our prototype NRSE correction magnet varies quadratically with the distance from the optical axis and is radially symmetric.
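The quoted 2000-degree figure can be verified directly from $\Phi = 4\pi f t$ and $t = L_{\mathrm{RF}}/(v\cos\theta)$; the sketch below (parameter values are the example ones from the text, the de Broglie relation supplies $v$) computes the uncorrected phase error:

```python
import math

# Sanity check of the claimed >2000 degree Larmor phase error for a neutron
# scattered at 1 degree in the example beamline (f = 4 MHz, L_RF = 2 m,
# lambda = 1 nm), using Phi = 4*pi*f*t with t = L_RF / (v*cos(theta)).
m = 1.67492749804e-27   # neutron mass, kg
h = 6.62607015e-34      # Planck constant, J*s

f = 4.0e6               # rf flipper frequency, Hz
L_rf = 2.0              # rf flipper separation, m
lam = 1.0e-9            # wavelength, m
theta = math.radians(1.0)

v = h / (m * lam)                                 # de Broglie velocity, ~396 m/s
dt = (L_rf / v) * (1.0 / math.cos(theta) - 1.0)   # extra travel time in the arm
dphi = math.degrees(4.0 * math.pi * f * dt)       # phase error in degrees
print(f"phase error = {dphi:.0f} degrees")        # more than 2000 degrees
```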
We simulate an NRSE beamline with the correction magnets and experimentally measure the spatial dependence of the Larmor phase change through the device. We demonstrate that transverse-field NRSE beamlines can be corrected with transverse static-field magnets, and therefore have the potential to be competitive with NSE beamlines. \section{Analytical Solution} \label{sec:ana} The two arms of the NRSE instrument will be corrected independently; we will discuss the correction for the second arm first. The necessary magnitude of the correction is proportional to the path-length difference $\Delta L_{\mathrm{RF}}$ between the scattered and unscattered paths through the pair of rf flippers, which is given by \begin{equation} \label{pathlength} \Delta L_{\mathrm{RF}} = L_{\mathrm{RF}} \left(\frac{1}{\cos\theta} - 1\right) \approx \frac{1}{2}L_{\mathrm{RF}}\theta^2, \end{equation} where $\theta$ is the small scattering angle shown in Fig. \ref{figSchematic}, defined relative to the optical axis. This difference in path length will cause a delay in time, and thus a difference in the Larmor phase $\Phi$. The difference in Larmor phase $\Delta \Phi$ between the unscattered and scattered beam for idealized rf flippers in the NRSE configuration is\cite{Keller2002} \begin{equation} \label{deltaLinital} \Delta \Phi = \frac{4 \pi f}{v} \Delta L_{\mathrm{RF}} \approx 2 \pi f \frac{\lambda m}{h} L_{\mathrm{RF}} \theta^2. \end{equation} The Larmor phase has already been defined as $\Phi = 4 \pi f t$ for an NRSE instrument, but it can also be defined for NSE instruments in terms of the magnetic field integral: $\Phi = (\gamma/v) \int ds \, B$, where $\mathrm{FI}_s = \int ds \, B$ is the field integral experienced by the neutron traveling along the path $s$ and $\gamma \approx -1.832 \times 10^8$ rad/(T$\cdot$s) is the neutron's gyromagnetic ratio.
In NSE, the neutron magnetic moment rotates (precesses) in the plane perpendicular to an applied static field and the Larmor phase measures the amount of this rotation relative to some fixed direction in the lab. In NRSE, the rf field rotates in the plane perpendicular to a static field, and the Larmor phase measures the angle between the neutron magnetic moment and the rf field. Hence, rf and static field effects on the neutron can be added together, as has been exploited recently.\cite{Jochum2020b} To correct the phase difference, we must design a correction scheme consisting of static magnetic fields that generates a Larmor phase proportional to $\theta^2$, with the proportionality coefficient $\chi$ being \begin{equation} \label{CC} \chi = 2 \pi \frac{f}{\gamma} L_{\mathrm{RF}}. \end{equation} Notice that the required correction is independent of wavelength. With this correction, the Larmor phase of all diverging neutrons will be corrected as if they traveled the same effective path length, namely the distance $L_{\mathrm{RF}}$. Unfortunately, it is not obvious how to design a static magnetic field profile that would generate a purely $\theta^2$-dependent field integral term for a finite-sized beam. However, we can generate such a term for a finite-sized beam with three correction magnets consisting of transverse static fields in which the magnitude of the field integral of a neutron traveling through the devices varies quadratically as a function of transverse position, with the center being the minimum and increasing radially outward. For scattering from a point-like sample, one can show that only two devices of this type are needed. This design is a two-dimensional extension of the original solution proposed by Monkenbusch.
\cite{Monkenbusch1999} For simplicity, we only look at the aberrations in one dimension (transverse $y$ direction), but the following argument can be easily generalized to the entire two-dimensional plane perpendicular to the optical axis. The neutrons may be scattered from any position on the sample at a distance of $y_{\mathrm{sam}}$ from the optical axis into any angle $\theta$ defined relative to the optical axis. If the sample were point-like, then the scattering angle $\theta$ would be defined just by the $y$ distance from the optical axis and the $\theta^2$ aberration would be known simply from the $x$ and $y$ position in the second arm. With a finite-size sample, the scattering position $y_{\mathrm{sam}}$ will add to the $y$ position from the scattering angle, so one device at a specific point along the beamline is not sufficient to correct the $\theta^2$ aberration. To lowest order in scattering angle, the field integral per amp of a single prototype correction magnet is \begin{equation} \label{correctiondeviceFIgeneral} \mathrm{FI}_n = a_n + b_n y_n^2 + c_n y_n \theta, \end{equation} where $y_n$ is the distance between the neutron's path at the $n^{\mathrm{th}}$ correction magnet and the optical axis (see Fig. \ref{figSchematic}). There is no linear term in $y$ because the device is left-right symmetric; similarly, there is no linear term in $z$ due to the top-bottom symmetry. Each term in the field integral is proportional to the applied current; per amp, $a_n$ has units of T$\cdot$m/A, $b_n$ has units of T/(A$\cdot$m), and $c_n$ has units of T/(A$\cdot$rad). Here $b_n$ is the term that corrects the path-length aberrations while the $a_n$ and $c_n$ terms appear because of the particular correction magnet geometry that we have chosen; higher order terms were found to have a negligible contribution to the field integral.
The transverse position that the scattered neutron passes through each correction magnet is given by \begin{equation} \label{ydefinition} y_n = y_{\mathrm{sam}}+ L_n \theta, \end{equation} where $\theta$ is assumed to be small and $L_n$ is the distance from the sample to the center of the $n$\textsuperscript{th} correction magnet, as shown in Fig. \ref{figSchematic}. Combining Eqns. \eqref{correctiondeviceFIgeneral} and \eqref{ydefinition} for each device, we find the total field integral per amp $\mathrm{FI_T}$ experienced by a neutron arriving at the analyzer due to the three correction magnets to be \begin{equation} \label{eqnLarmorPhaseCC} \begin{aligned} \mathrm{FI_T} = \sum_{n \in \{4,5,6\}} \big[& a_n + b_n y_{\mathrm{sam}}^2 + (c_n + 2 b_n L_n) y_{\mathrm{sam}} \theta \\ &+ L_n (c_n + b_n L_n) \theta^2 \big]. \end{aligned} \end{equation} The goal of the correction scheme is to have the coefficient of the $\theta^2$ term equal to Eqn. \eqref{CC} while having all other terms zero. Doing so, the series of correction magnets would correct the path-length aberration in the second arm of the instrument regardless of the $y_{\mathrm{sam}}$ position and without introducing any net field integral. These requirements on Eqn. \eqref{eqnLarmorPhaseCC} can be rewritten into several conditions: \begin{gather*} \label{CoefficientCancellingEqns} a_4 + a_5 + a_6 = b_4 + b_5 + b_6 = c_4 + c_5 + c_6 = 0 \\ b_4 L_4 + b_5 L_5 + b_6 L_6 = 0 \\ \chi = L_4 (c_4 + b_4 L_4) + L_5 (c_5 + b_5 L_5) + L_6 (c_6 + b_6 L_6). \end{gather*} Notice that if the sum of the currents through the three correction magnets is zero, then the first line will be satisfied. 
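The grouping of terms in Eqn. \eqref{eqnLarmorPhaseCC} can be cross-checked numerically: substituting $y_n = y_{\mathrm{sam}} + L_n\theta$ into Eqn. \eqref{correctiondeviceFIgeneral} and summing must reproduce the grouped expansion term by term. The sketch below uses arbitrary illustrative coefficient values, not the real magnet's:

```python
# Numeric cross-check that substituting y_n = y_sam + L_n*theta into
# FI_n = a_n + b_n*y_n**2 + c_n*y_n*theta and summing over n = 4,5,6
# reproduces the grouped expansion of FI_T. All values are arbitrary
# test values, not measured coefficients.
a = {4: 0.3, 5: -0.1, 6: 0.7}
b = {4: 1.2, 5: -2.5, 6: 0.9}
c = {4: 0.4, 5: 0.8, 6: -0.6}
L = {4: 1.0, 5: 2.0, 6: 3.0}
y_sam, theta = 0.013, 0.005

# Direct sum of the per-magnet field integrals:
direct = sum(a[n] + b[n] * (y_sam + L[n] * theta) ** 2
             + c[n] * (y_sam + L[n] * theta) * theta for n in (4, 5, 6))

# Grouped form, as written in the text:
grouped = sum(a[n] + b[n] * y_sam ** 2
              + (c[n] + 2 * b[n] * L[n]) * y_sam * theta
              + L[n] * (c[n] + b[n] * L[n]) * theta ** 2 for n in (4, 5, 6))

assert abs(direct - grouped) < 1e-12
```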
Ignoring the $a_n$ terms for now, we solve this system of equations, obtaining \begin{subequations} \label{SolutionEnc} \begin{align} b_4 &= \frac{\chi + c_4(L_5 - L_4) + c_6(L_5 - L_6)}{(L_4 - L_5)(L_4 - L_6)} \\ b_5 &= -\frac{\chi + c_4(L_5 - L_4) + c_6(L_5 - L_6)}{(L_4 - L_5)(L_5 - L_6)} \\ b_6 &= \frac{\chi + c_4(L_5 - L_4) + c_6(L_5 - L_6)}{(L_4 - L_6)(L_5 - L_6)} \\ c_5 &= - (c_4 + c_6), \end{align} \end{subequations} with $c_4$ and $c_6$ being free parameters. From this set of solutions, it is apparent that if we choose CM5, the fifth correction magnet, to be equidistant from CM4 and CM6 (such that $|L_4 - L_5| = |L_5 - L_6| = \delta L$) and also $c_4 = c_6$, then the angle-dependent $c_n$ terms cancel out, leaving $b_4 = b_6 = \chi/[2(\delta L)^2]$ and $b_5 = -\chi/(\delta L)^2$. Therefore, we can obtain our desired field integral by putting the same field in the first and last device and a field twice as large in the opposite direction in the middle device. We call this choice of fields the $(1,-2,1)$ configuration. In this configuration, the constant $a_n$ terms will also cancel out. Next, we determine the magnitude required for $b_n$ for a realistic NRSE beamline. Plugging Eqn. \eqref{CC} into the $b_n$ terms in Eqn. \eqref{SolutionEnc}, we see that \begin{gather} b_4 = \frac{\pi f L_{\mathrm{RF}}}{\gamma (\delta L)^2} \\ b_5 = -2 b_4, \quad b_6 = b_4. \nonumber \end{gather} To estimate the required field in the correction magnet, let $L_{\mathrm{RF}} = 2$ m, $\delta L = 1$ m, and $f = 4$ MHz. Plugging in the numbers, we find $b_4 \approx -140$ mT/m. As we will show below, this value is attainable with our correction magnet. The above discussion has only considered the second arm; now we look at the first arm. If the initial beam is well-collimated (i.e. all neutrons in the first arm travel parallel to the optical axis), then no correction elements are required in the first arm even if there are correction magnets in the second arm.
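The $(1,-2,1)$ configuration and the $b_4 \approx -140$ mT/m estimate can both be checked numerically; in the sketch below the magnet positions $L_4, L_5, L_6$ are assumed values chosen only to satisfy $|L_4 - L_5| = |L_5 - L_6| = \delta L = 1$ m (with $c_4 = c_6 = 0$ for simplicity):

```python
import math

# Check of the (1,-2,1) configuration: with CM5 equidistant from CM4 and CM6,
# verify the constraint sums and the b4 estimate for L_RF = 2 m, delta_L = 1 m,
# f = 4 MHz (example values from the text; positions L4..L6 are assumed).
gamma = -1.832e8             # neutron gyromagnetic ratio, rad/(T*s)
f, L_rf, dL = 4.0e6, 2.0, 1.0
L4, L5, L6 = 1.0, 2.0, 3.0   # assumed positions with |L4-L5| = |L5-L6| = 1 m

chi = 2.0 * math.pi * f * L_rf / gamma   # required theta^2 coefficient
b4 = chi / (2.0 * dL ** 2)
b5 = -chi / dL ** 2
b6 = b4

# The correction conditions from the text (c_n = 0 here):
assert abs(b4 + b5 + b6) < 1e-9                        # zero net b sum
assert abs(b4 * L4 + b5 * L5 + b6 * L6) < 1e-9         # zero y_sam*theta term
assert abs(b4 * L4**2 + b5 * L5**2 + b6 * L6**2 - chi) < 1e-9  # theta^2 term = chi

print(f"b4 = {b4 * 1e3:.0f} mT/m")   # ~ -137 mT/m, i.e. about -140 mT/m
```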
However, in practice, neutrons in a real instrument have some divergence angle relative to the optical axis. This initial beam divergence will lead to a variation in path length between neutrons propagating at different angles in the first arm, similar to the scattering term for the second arm. Therefore, we must correct for this variation in the first arm with another three correction magnets, as shown in Fig. \ref{figSchematic}. They must be in the $(-1,2,-1)$ configuration as the static magnetic field in both rf flippers is in the opposite direction relative to the static fields in the rf flippers in the second arm. Without this additional correction, we do not obtain the best possible improvement to the polarization. The correction for the divergence angle of the neutron in the first arm will not require any changes to the correction magnet set-up in the second arm because the correction for the second arm is independent of angle or $y_{\mathrm{sam}}$ position. With all six correction magnets installed, the Larmor phase, and hence Fourier time, of neutrons along any path in either arm will be corrected to the Larmor phase and Fourier time of a neutron traveling parallel to the optical axis. \section{Development of the Correction Magnet} \label{sec:design} \begin{figure} \includegraphics[width=.95\linewidth]{Figures/CorrDevelopment_sep20-eps-converted-to.pdf} \caption{\label{figCorrDevelopment} (a) CAD model of the correction magnet. The light blue surfaces are the high-temperature superconducting (HTS) films (front film not shown), the purple is the low-carbon steel chevron, the tan are the HTS coils, and the brown is the low-carbon steel flux return and pole pieces. (b and c) Simulations of the field at $y = 0$ and $z = 0$, respectively, with the coil current set to 10 amps; both panels share the same colorbar legend. The beam traveled from left to right. The HTS films are highlighted in white.
(d) Contour plot of simulated field integral through the correction device at 10 amps for a neutron originating from a point source at $y = z = 0$ and $x = -20$ m. (e) Slices of the simulated field integrals in (d) through the origin.} \end{figure} To implement the analytical solution, we designed and constructed a vertical-field correction magnet following the drawing of Fig. \ref{figCorrDevelopment}(a). The bottom, top, and sides of the coils are enclosed by a magnetic flux return made of low-carbon steel (alloy 1018). High-temperature superconducting (HTS) films, gold-coated 350 nm thick YBCO on 0.5 mm thick sapphire substrate, were placed on the front and sides of the coils with another film placed 38 mm after the coils, outside of the magnetic circuit. These HTS films act as magnetic field screens due to the Meissner effect, which creates sharp boundaries between field regions, as shown in Fig. \ref{figCorrDevelopment}(b,c). Thus the magnet can be thought of in two parts: a contained region where the coils sit and an open region before the back film. HTS wire was wound around hollow low-carbon steel pole pieces and topped with ``chevrons'', low-carbon steel plates with v-shaped cutouts. The opening of the v is at the front of the device. The thickness of the chevrons was 3.2 mm, leaving a separation between chevrons of 50 mm. The angle of the chevron was 60 degrees, and the space from the coils to the rear HTS film was 38 mm. It is well known that a dipole magnet without an HTS film constraining the magnetic flux will create a magnetic field with a quadratic $z$-dependence, as correcting for this was one of the initial advantages of adding an HTS film. \cite{Wang2014} The magnetic field in this device was simulated using the Siemens MagNet\textcopyright{} software, which includes the material properties in its solutions via the finite-element method. The HTS films were simulated as perfect diamagnets preventing any perpendicular magnetic flux.
We note that the explicit field profile in the device is arbitrary as long as the resulting field integral is quadratic across the device and the field direction does not change too quickly. A useful feature of this design is that the $y$ and $z$ components of the field integral may be tuned independently. The $z$ component, shown in Fig. \ref{figCorrDevelopment}(b), is largely dependent on the distance to the back film while the $y$ component, shown in Fig. \ref{figCorrDevelopment}(c), is largely dependent on the chevron angle. As shown in Fig. \ref{figCorrDevelopment}(b), the quadratic behavior of the $z$ component comes from the bowing field lines protruding around the back of the coils due to the displaced back film. A numerical solution to the field integral for any starting position $(y,z)$ and angle through the device was found by extracting the MagNet solution for the field at each point (mesh size 4 mm) and integrating the field along any chosen path. The simulated field integral through the coils at 10 amps for a neutron traveling from a far-away point source at $y = z = 0$ and $x = -20$ m is shown in Fig. \ref{figCorrDevelopment}(d). The difference in field integral between the center and edges is about 0.01 mT$\cdot$m, which is approximately the necessary correction value for an NRSE beamline with an rf flipper frequency of 1 MHz and 1 meter between the correction magnets. The field integral through the center is about 1.69 mT$\cdot$m. \section{McStas Simulations of an NRSE Instrument} \label{sec:mcstas} \FloatBarrier Using McStas, a Monte Carlo neutron ray-tracing software package,\cite{Willendrup_Lefmann_2020, Willendrup_Lefmann_2021} we simulated an NRSE beamline with these correction magnets installed. The polarizer, rf flippers, analyzer, and detector were taken to be 100$\%$ efficient, while the correction magnet component was built using numerically simulated magnetic field data extracted from the MagNet simulations.
The following model was used for the rf flipper at resonance: \begin{equation} \Phi_f = 2 \pi f (2t_i + \Delta t) - \Phi_i, \end{equation} where $\Phi_f$ is the final Larmor phase after exiting the rf flipper, $f$ the rf frequency (2 MHz for these simulations), $t_i$ the time at which the neutron enters the flipper, $\Delta t$ the time spent inside the flipper of thickness 15 cm, and $\Phi_i$ the initial Larmor phase when entering the flipper. We used a modified version of the ``SANS\_spheres2'' sample, a default McStas sample that emulates elastically scattering hard spheres in a dilute solution. The sample parameters were chosen to prevent incoherent scattering or transmission without scattering, so the component acted like an idealized elastic scatterer with a maximum momentum transfer of 0.09 nm$^{-1}$, which corresponds to a maximum scattering angle of 0.7 degrees. The sample was 3 cm by 3 cm in transverse size with negligible thickness. The aperture diameter and the neutron wavelength were 2 cm and $0.8$ nm $\pm 1\%$, respectively. The separation between the rf flippers in each arm was taken to be 2.3 m, and the distance between the correction magnets was 1 meter. The total distance from the source to the two-dimensional detector was 5.2 meters. With these parameters, the effective initial beam divergence is about 0.6 degrees. Using the magnetic field data extracted from MagNet simulations, we determined that our prototype correction magnet had the following field integral per amp expansion across the correction magnet: \begin{equation} \label{Eqn:fullFIE} \mathrm{FI} = a + b_y y^2 + b_z z^2 + c_y y \theta + c_z z \psi, \end{equation} where $\psi$ is the vertical neutron divergence angle and the values of the fitted coefficients are given in Tab. \ref{tab:FIE CC coefficients} below. Higher order terms were found to have a negligible contribution.
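The rf flipper model implies the NRSE relation $\Phi = 4\pi f t$: chaining two flippers gives a net phase of $4\pi f(t_2 - t_1)$, independent of the phase the neutron entered with. The sketch below demonstrates this with assumed illustrative values (400 m/s velocity, arbitrary entry time and incoming phase), not the actual McStas component:

```python
import math

# Sketch of the rf flipper phase model Phi_f = 2*pi*f*(2*t_i + dt) - Phi_i.
# Chaining two flippers shows the net Larmor phase is 4*pi*f*(t2 - t1),
# independent of the incoming phase, consistent with Phi = 4*pi*f*t.
f = 2.0e6            # rf frequency, Hz (the value used in the simulations)
dt = 0.15 / 400.0    # time inside a 15 cm flipper at an assumed 400 m/s

def flipper(t_i, phi_i):
    """Phase after an rf flipper entered at time t_i with phase phi_i."""
    return 2.0 * math.pi * f * (2.0 * t_i + dt) - phi_i

t1 = 0.001                   # arbitrary entry time into the first flipper
t2 = t1 + 2.3 / 400.0        # entry into the second flipper, 2.3 m downstream
phi_in = 0.7                 # arbitrary incoming Larmor phase

net = flipper(t2, flipper(t1, phi_in)) - phi_in
assert abs(net - 4.0 * math.pi * f * (t2 - t1)) < 1e-6
```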
\renewcommand{\arraystretch}{2} \begin{table}[h] \centering \newcolumntype{R}{>{\centering\arraybackslash}X} \begin{tabularx}{.99\linewidth}{R|R|R|R|R} $a \left(\frac{\mathrm{mT}\cdot \mathrm{m}}{\mathrm{A}}\right)$ & $b_y \left(\frac{\mathrm{mT}}{\mathrm{A}\cdot\mathrm{m}}\right)$ & $b_z \left(\frac{\mathrm{mT}}{\mathrm{A}\cdot\mathrm{m}}\right)$ & $c_y \left(\frac{\mathrm{mT}}{\mathrm{A}\cdot\mathrm{rad}}\right)$ & $c_z \left(\frac{\mathrm{mT}}{\mathrm{A}\cdot\mathrm{rad}}\right)$ \\ \hline 0.169 & 8.78 & 8.97 & -0.525 & 0.986 \end{tabularx} \renewcommand{\arraystretch}{1} \caption{\label{tab:FIE CC coefficients} Values of the field integral expansion coefficients in Eqn. \eqref{Eqn:fullFIE}. The longitudinal length of the magnet is 14 cm.} \end{table} \begin{figure}[!b] \centering \includegraphics[width=.95\linewidth]{Figures/Correction_coil_mcstas_tinyinset.png} \caption{\label{figNRSEmcstas} Simulated McStas polarization for an NRSE beamline with all six correction magnets on (red curve), only the final three correction magnets on (green curve), only the first three correction magnets on (orange curve), and no correction magnets on (blue curve). The inset compares the polarization for all correction magnets on (red curve) to all correction magnets off and sample removed (purple curve) which shows that the polarization drops to about 0.9998 at the edge of the detector when both arms are corrected.} \end{figure} The NRSE beamline was simulated in McStas both with and without the correction magnets. A plot comparing the simulated echo polarizations vs. radial position on the detector is shown in Fig. \ref{figNRSEmcstas}. These simulations confirm that the inclusion of the correction elements greatly increases the polarization, especially for larger scattering angles which correspond to the edges of the detector. 
Correcting only the second arm of the beamline improves the polarization for the larger scattering angles, although the polarization at the center of the detector is worsened compared to the uncorrected simulation due to the initial neutron divergence angle. However, if the initial beam divergence is large (e.g., about 1 degree or more for our specific simulation parameters), then both of the single arm correction schemes show very little improvement in the polarization, so both arms must be corrected. The alignment of the correction magnets is also important, with more precision required for higher Fourier times. For the simulation parameters used above, all correction magnets must be aligned within approximately $\pm 0.5$ mm in both the $y$ and $z$ directions. \section{Experiment Results} \label{sec:experiment} \begin{figure}[b] \centering \includegraphics[width=.95\linewidth]{Figures/beamline_diagram_sep21.png} \caption{\label{figBeamline} Schematic of the experimental test of the correction magnet (CM). The beam travels from left to right. Precession occurred inside both the CM and guide field (GF).} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=.95\textwidth]{Figures/2d_analysis_10amp_sep12-eps-converted-to.pdf} \caption{\label{figCorrAnalysis} Data with a current of -10 amps in the NRSE correction magnet. (a) The intensity vs. position recorded by the Anger camera when the guide field coil had a current of 1.11 amps. (b) A cosine fit of the intensity vs. guide field coil current for several pixels along the line $z=0$. (c) The phase and (d) phase error extracted from the cosine fit shown for each pixel. (e) The quadratic fit of the phase data to Eqn. \eqref{Quadratic2d} and (f) the quadratic fit subtracted from the phase data. 
} \end{figure*} \begin{figure} \centering \includegraphics[width=.95\linewidth]{Figures/quadratic_fig_sep2-eps-converted-to.pdf} \caption{\label{figCorrAnalysis1d} (a) Horizontal and (b) vertical slices through the magnetic center for different currents in the correction magnet. Data are fit to a parabola.} \end{figure} A measurement of the field integral through the prototype correction magnet was performed on the cold-neutron, polarized test beamline CG4B at the High Flux Isotope Reactor (HFIR) at Oak Ridge National Laboratory (ORNL). As shown in Fig. \ref{figBeamline}, the correction magnet was installed in a vacuum chamber in front of a guide field magnet that also generated a field in the vertical, $z$-direction. The guide field magnet had a front and back HTS film, with the front film also serving as the back film of the correction magnet. The vertical guide field magnitude was designed to be spatially uniform, as the neutron will continue to precess in it. S-benders served as the neutron polarizer and analyzer. A horizontal guide field outside of the correction magnet and the non-adiabatic field transition through the HTS film induced precession inside both the correction magnet and the spatially uniform guide field. Precession was stopped by another horizontal guide field after the back HTS film of the guide field. The beam size was determined by a square 1 by 1 cm slit located 1 meter in front of the correction magnet and a square 2.5 by 2.5 cm slit at the end of the guide field coil. The wavelength was 0.55 nm with a FWHM wavelength spread of less than 1$\%$. The variation in the Larmor phase across the beam was measured across the two-dimensional detector. The detector was an Anger camera with 1.8 mm pixel size, as shown in Fig. \ref{figCorrAnalysis}(a). \cite{Riedel2015, Cao2018} From the detector image in Fig. \ref{figCorrAnalysis}(a), one can directly see an approximately ``bullseye'' shaped signal, suggesting a radial dependence of the Larmor phase. 
The Larmor phase was measured by setting a current in the correction magnet and scanning the precessing guide field between 1 and 1.12 amps. This current range varies the phase of neutrons passing through the magnet by about $2 \pi$, as shown in Fig. \ref{figCorrAnalysis}(b). The difference in the phase of the curves shown in Fig. \ref{figCorrAnalysis}(b) shows the different Larmor phase acquired by neutrons traveling through the correction magnet. The intensity recorded in each pixel varies greatly due to the spatial non-uniformity in the CG4B beam intensity as well as the non-uniform detector efficiency. The intensity $N$ as a function of pixel $(y,z)$ was fit to \begin{equation} N(y,z) = \alpha + \beta \cos[\phi + f_g(I - I_0)], \end{equation} where $\alpha$ and $\beta$ are fitting parameters, $\phi$ is the Larmor phase from the correction magnet, $f_g$ is the frequency of the oscillation in the polarization due to the Larmor phase produced by the guide field, $I$ is the current in the guide field, and $I_0$ is the current at the start of the scan (1 amp in this case). The polarization of the signal is defined as $\beta/\alpha$. We fit the phase of each pixel relative to the center, which we set to zero. The data for the phase $\phi$ were fit to the following two-dimensional quadratic function: \begin{equation} \label{Quadratic2d} \phi(y,z) = \phi_2 [(y - y_0)^2 + \epsilon(z-z_0)^2] + \phi_0, \end{equation} where $\phi_2$ and $\phi_0$ are fitting parameters, $\epsilon$ is the eccentricity term, which allows for a difference in the $y$ and $z$ correction terms, and $(y_0, z_0)$ is the beam center. The fit is shown in Fig. \ref{figCorrAnalysis}(e). Subtracting the quadratic fit from the phase data gives the accessible corrected beam size, shown in Fig. \ref{figCorrAnalysis}(f), to be about 2 centimeters. MagNet simulations show that the eccentricity term can be tuned to unity by varying the chevron angle and back film separation distance.
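The per-pixel phase extraction can be sketched without a full nonlinear fit: if the scan covers exactly one oscillation period of $\alpha + \beta\cos[\phi + f_g(I - I_0)]$, projecting the counts onto $\sin$ and $\cos$ of the scan variable recovers $\phi$ directly. This is a simplified stand-in for the cosine fit described above, run on synthetic data with arbitrary illustrative values of $\alpha$, $\beta$, and $\phi$:

```python
import math

# Simplified per-pixel phase extraction for N = alpha + beta*cos(phi + x),
# where x = f_g*(I - I_0) is sampled uniformly over one full period.
# The sin/cos projections isolate beta*sin(phi) and beta*cos(phi), so the
# phase follows from atan2 without a nonlinear fit.
def extract_phase(counts, xs):
    s = sum(n * math.sin(x) for n, x in zip(counts, xs))
    c = sum(n * math.cos(x) for n, x in zip(counts, xs))
    return math.atan2(-s, c)

# Synthetic pixel data with a known phase:
K = 32
xs = [2.0 * math.pi * k / K for k in range(K)]
alpha, beta, phi_true = 100.0, 40.0, 1.234
counts = [alpha + beta * math.cos(phi_true + x) for x in xs]

phi = extract_phase(counts, xs)
assert abs(phi - phi_true) < 1e-9
```

In the experiment a least-squares cosine fit is preferable since it also yields the polarization $\beta/\alpha$ and tolerates a scan range that is not exactly one period.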
There are several unexpected features in the data. Most notably, the center of the beam is not the same as the center of the quadratic fit, which we call the magnetic center. This discrepancy is possibly due to a misalignment of the beam mask and the correction magnet. Additionally, the positive $y$ side of the magnet shows a larger discrepancy between the data and the fit than the negative side. While these features are surprising, the most likely source for the off-center signal is misalignment between the beam apertures and the correction magnet, and the non-homogeneous signal is possibly due to a magnetic inhomogeneity in the soft-iron used in the pole and chevron pieces. An additional feature is the non-radial dependence (i.e. the diamond shape of the fitted phase) of the data at large $(y,z)$. This feature can partially account for where the quadratic fit fails to match the data in Fig. \ref{figCorrAnalysis}(f). It can also be seen in the MagNet simulations of the field integral, suggesting that it is a result of the chevron design. A more sophisticated pole piece shape may be required to adjust the field integral into the proper quadratic shape for larger beam sizes. In order to compare different currents in the correction magnet, we fit a vertical and horizontal slice through the magnetic center to a parabola as displayed in Fig. \ref{figCorrAnalysis1d}. The offset in $y$ of 3 mm remains approximately constant for all currents, which is consistent with the conclusion of misalignment between the correction magnet and the beam. The quadratic coefficient divided by the current should be the same for all currents if the field is generated solely by the current in the correction magnets. However, the phase change at 15 amps is only 2.6 times that at 5 amps, rather than the expected factor of 3. This difference is possibly due to hysteresis effects and domain formation in the soft-iron inside of the correction magnet.
It may also be due to the coupling of the field in the correction magnet to the external guide fields, although MagNet simulations show very little coupling. \section{Discussion and Conclusion} \label{sec:dis} This correction technique is for transverse-field NRSE instruments, while Fresnel coils may be installed for longitudinal NRSE. An advantage of this device compared to Fresnel coils is the small amount of neutron-absorbing or scattering material in the beam. There is a 100 nm film of gold coating a 350 nm film of YBCO on a 0.5 mm sapphire substrate. With a 1 nm neutron wavelength, 12 of these films will have a transmission of $\sim 95\%$. Fresnel coils add at least 2 cm of aluminum wire which has a transmission of $\sim 86\%$ for 1 nm neutron wavelength. The exact amount of material for a Fresnel coil depends on the required current, so reaching a higher Fourier time generally produces more background scattering. However, the Fresnel coils have a long history of being successfully used to correct for NSE and have been built to accommodate much larger beam sizes. One of the reasons longitudinal NRSE is preferred for the Reseda instrument at FRM-II is the historical difficulty in correcting transverse NRSE path length aberrations. \cite{Franz2019} If this correction technique can reach the same performance as Fresnel coils, the choice of longitudinal or transverse-NRSE will be more complicated: transverse rf flippers offer the opportunity to have a higher effective frequency in ``bootstrap'' mode, \cite{Li2020} while longitudinal rf flippers have a proven history of high performance.\cite{Franz2019} Another method of correcting divergent neutrons has recently been installed at VIN-ROSE in JPARC, which addresses the same problem by adding elliptical mirrors to each arm so that all neutron paths will be the same length. \cite{Endo2019} To our knowledge, this correction magnet has not yet been used to correct for scattering from different points in the sample. 
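The transmission comparison above can be turned into per-element numbers with simple attenuation arithmetic; the per-film and per-centimeter values below are inferred from the quoted stack totals, not measured independently.

```python
# Infer per-element transmissions from the stack totals quoted in the text:
# 12 HTS films -> ~95% total, 2 cm of Al wire -> ~86% total (1 nm neutrons).
films_total, n_films = 0.95, 12
al_total, al_cm = 0.86, 2.0

per_film = films_total ** (1.0 / n_films)   # single-film transmission
al_per_cm = al_total ** (1.0 / al_cm)       # effective Al transmission per cm

print(f"per film: {per_film:.4f}, Al per cm: {al_per_cm:.4f}")
```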
We have demonstrated, in theory, simulation, and experiment, the correction of aberrations caused by path deviations in a transverse NRSE beamline. Simulation shows that an arrangement of six correction magnets maintains a high polarization even for 3 cm beam sizes and large rf flipper frequencies. Experimental tests of the prototype magnet show that shaping the pole pieces and separating the coil from the back HTS film allow for the independent control of the $y$ and $z$ parameters of a quadratic field integral. The most pressing improvements to future designs are more careful alignment, more magnetically-uniform material for pole pieces, and acceptance of larger beam sizes. With these improvements, this type of correction magnet is ready to benefit transverse NRSE beamlines. \section{Acknowledgements} The authors would like to thank Lowell Crow, Georg Ehlers, Fumiaki Funama, and Steven Parnell for useful discussions. CAD drawings were made by Jak Doskow and machining was done by the Indiana University Physics machine shop: John Frye, Danny Clark, Darren Nevitt, and Todd Sampson. We thank Matthew Loyd for assistance with the Anger camera. This research used resources at the High Flux Isotope Reactor, a DOE Office of Science User Facility operated by the Oak Ridge National Laboratory. F. Li would also like to acknowledge the support from DOE Early Career Research Program Award (KC0402010), under Contract No. DE-AC05-00OR22725. The work reported here was funded by the Department of Energy STTR program under grants DE-SC0021482 and DE-SC0018453. A number of the authors acknowledge support from the US Department of Commerce through cooperative agreement number 70NANB15H259. \bibliographystyle{apsrev4-2}
\section{Introduction} Despite achieving high performance in a variety of classification and regression tasks, neural networks are not always guaranteed to satisfy certain desired properties after training. A prominent example is adversarial robustness. Neural networks can be overly sensitive to carefully designed input perturbations (\cite{szegedy2013intriguing}). This intriguing property holds in the reverse direction too. In classification problems, neural networks can also be excessively insensitive to large perturbations, causing two semantically different inputs (e.g., images) to be classified in the same category (\cite{jacobsen2018excessive}). Indeed, a fundamental trade-off has been shown between adversarial robustness and excessive invariance (\cite{tramer2020fundamental}), which is mathematically related to the noninvertibility of the map defined by the neural network. To mitigate noninvertibility, and hence excessive invariance, one can consider invertible-by-design architectures. Invertible neural networks (INNs) have been used to design generative models (\cite{donahue2019large}), implement memory-saving gradient computation (\cite{gomez2017reversible}), and solve inverse problems (\cite{ardizzone2018analyzing}). However, commonly-used INN architectures suffer from exploding inverses; in this paper, we therefore consider the problem of certifying the (possible) noninvertibility of conventional neural networks after training. Specifically, we study two relevant invertibility problems: \emph{(i) local invertibility of neural networks:} given a dynamical system whose time-$\tau$ map is parameterized by a neural network, we verify whether it is locally invertible around a certain input (or trajectory) and compute the largest region of local invertibility; and \emph{(ii) local invertibility of transformations between neural networks:} we certify whether two (assumed ``equivalent'') neural networks (e.g., related through pruning) can be transformed (i.e.
calibrated) to each other locally via an invertible transformation. We develop mathematical tools based on mixed-integer linear/quadratic programming for the characterization of noninvertibility that are applicable both to (a) neural network approximators of dynamics and to (b) transformations between neural networks. \paragraph{Related work} Noninvertibility in neural networks was studied in the 1990s (\cite{GICQUEL19988,298587}); more recently, several papers focus on the global invertibility property in neural networks (see \cite{chang2017reversible, teshima2020couplingbased, chen2018neural, 10.5555/3327546.3327578, Jaeger2014ControllingRN}). Analyzing invertibility of neural networks (\cite{Behrmann2018AnalysisOI}) and constructing invertible architectures arises in many contexts, such as generative modeling (\cite{chen2019residualflows}), inverse problems (\cite{Ardizzone2019AnalyzingIP}) or probabilistic inference (\cite{9298920}). Neural networks invertible by design have been developed for these applications. Some of these networks (e.g. RevNet (\cite{gomez2017reversible}), NICE (\cite{dinh2015nice}), real NVP (\cite{dinh2017density})) partition the input domains and use affine or coupling transformations as the forward pass, keeping the Jacobians (block-)triangular with nonzero diagonal elements, resulting in nonzero determinants; others, like i-ResNet (\cite{behrmann2019invertible}) have no analytical forms for the inverse dynamics, yet their finite bi-Lipschitz constants can be derived: both methods can guarantee global invertibility. A comprehensive analysis is found in (\cite{behrmann2020understanding, song2019mintnet}). However, a theoretical understanding of the expressiveness of these architectures, as well as of their universal approximation properties, is still incomplete.
Compared to standard networks like multi-layer perceptrons (MLPs) or convolutional neural networks (CNNs), invertible neural networks (INNs) are computationally demanding. Neural ODEs (\cite{chen2018neural}) use an alternative method to compute gradients for backward propagation; i-ResNet (\cite{behrmann2019invertible}) imposes restrictions on the norm of every weight matrix that must be enforced during training. In most cases, the input domain of interest is a small subset of the whole space. For example, the grey-scale image domain in computer vision problems is $[0, 1]^{H \times W}$ (where $H$ and $W$ are the height and width of the images), and it is unnecessary to consider the whole space $\mathbb{R}^{H \times W}$. We thus focus on {\em local invertibility}: how do we know if our network is invertible on a given finite domain, and if not, how do we quantify noninvertibility? Beyond classification problems, noninvertibility can also lead to catastrophic consequences in regression, and more specifically in dynamical systems prediction. The flow of smooth differential equations is invertible when it exists; yet traditional numerical integrators used to approximate them can be noninvertible. Neural network approximations of the corresponding time-$\tau$ map also suffer from this potential pathology. In this paper, we initially study noninvertibility in the context of dynamical systems predictions. \section{Local invertibility of dynamical systems and neural networks} Continuous-time dynamical systems, in particular autonomous ordinary differential equations (ODEs), have the form $dX(t) / dt = f(X(t)), X(t = t_0) = X_0$, where $X(t) \in \mathbb{R}^m$ are the state variables of interest; $f: \mathbb{R}^m \mapsto \mathbb{R}^m$ relates the states to their time derivatives and $X_0 \in \mathbb{R}^m$ is the initial condition at $t_0$.
If $f$ is uniformly Lipschitz continuous in $X$ and continuous in $t$, the Cauchy-Lipschitz theorem guarantees the existence and uniqueness of the solution. In practice, we observe the states $X(t)$ at discrete points in time, starting at $t_0 = 0$. For a fixed timestep $\tau \in \mathbb{R}^+$, and $\forall n \in \mathbb{N}$, $t_n = n \tau$ denotes the $n$-th time stamp, and $X_n = X(t = t_n)$ the corresponding state values. Now we will have: \begin{equation} X_{n + 1} := F(X_n) = X_n + \int_{t_n}^{t_{n + 1}} f(X(t)) dt; \ X_n = F^{-1} (X_{n + 1}). \label{odeint} \end{equation} This equation also serves as the starting point of many numerical ODE solvers. For the time-$\tau$ map in \eqref{odeint}, the inverse function theorem provides a sufficient condition for its invertibility: If $F$ is a continuously differentiable function from an open set $\mathcal{B}$ of $\mathbb{R}^m$ into $\mathbb{R}^m$, and the Jacobian determinant of $F$ at $p$ is non-zero, then $F$ is invertible near $p$. Thus, if we define the {\em noninvertibility locus} as the set $J_0(F) = \{p \in \mathcal{B}:$ $ \det(\mathbf{J}_F(p)) = 0 \}$, then the condition $J_0(F) = \emptyset$ guarantees global invertibility of $F$ (notice that this condition is not {\em necessary}: the scalar function $F(X) = X^3$ provides a counterexample). If $F$ is continuous over $\mathcal{B}$ but not everywhere differentiable, then the definition of the $J_0$ set should be altered to: \begin{equation} J_0(F) = \left\{p \in \mathcal{B}: \forall N_0(p), \exists\, p_1, p_2 \in N_0(p), p_1 \neq p_2, \text{ s.t. } \det(\mathbf{J}_F(p_1)) \det(\mathbf{J}_F(p_2)) \leq 0 \right\}, \label{eq: J0_def_ext} \end{equation} the set of points where the determinant discontinuously changes sign. \subparagraph{Numerical integrators are (often) noninvertible} Numerically approximating the finite integral in \eqref{odeint} can introduce noninvertibility in the transformation.
Here is a simple one-dimensional illustrative ODE example: $dX / dt = f(X) = X^2 + bX + c, \quad X(t = 0) = X_0$, where $b, c \in \mathbb{R}$ are two fixed parameters. The analytical solution \eqref{odeint} is invertible; however, a forward-Euler discretization with step $\tau$ gives \begin{equation} X_{n + 1} = F(X_n) = X_n + \tau(X_n^2 + bX_n + c) \Rightarrow \tau X_n^2 + (\tau b + 1) X_n + (\tau c - X_{n + 1}) = 0. \label{euler_1d_inv} \end{equation} Given a fixed $X_{n + 1}$, Equation \eqref{euler_1d_inv} is quadratic w.r.t. $X_n$; this determines the local invertibility of $F$ based on $\Delta = (\tau b + 1)^2 - 4 \tau (\tau c - X_{n + 1})$: no real root if $\Delta < 0$; one real root with multiplicity 2 if $\Delta = 0$; and two distinct real roots if $\Delta > 0$. In practice, one uses small timesteps $\tau \ll 1$ for accuracy/stability, leading to the last case: there will always exist a solution $X_n$ close to $X_{n + 1}$, and a second preimage, far away from the region of our interest, and arguably physically irrelevant (this second $X_n \rightarrow -\infty$ as $\tau \rightarrow 0$). On the other hand, as $\tau$ grows, the two roots move closer to each other, $J_0(F)$ moves close to the regime of our simulations, and noninvertibility can have visible implications on the predicted dynamics. Thus, choosing a small timestep in explicit integrators guarantees desirable accuracy, and simultaneously {\em practically} mitigates noninvertibility pathologies in the dynamics. \paragraph{Invertibility in transformations between neural networks} Training two neural networks for the same regression or classification task practically never gives identical network parameters. Numerous criteria exist for comparing the performance of different models (e.g. accuracy in classification, or mean-squared loss in regression). Here we explore whether two different models {\em can be calibrated to each other} (leading to a {\em de facto} implicit function problem).
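The two Euler preimages discussed above can be computed explicitly from the quadratic \eqref{euler_1d_inv}; the parameter values in this minimal sketch are arbitrary.

```python
import numpy as np

# Preimages of the forward-Euler map F(X) = X + tau*(X^2 + b*X + c):
# solve tau*X_n^2 + (tau*b + 1)*X_n + (tau*c - X_{n+1}) = 0 for X_n.
# Parameter values are illustrative.
def euler_step(x, tau, b, c):
    return x + tau * (x ** 2 + b * x + c)

def preimages(x_next, tau, b, c):
    roots = np.roots([tau, tau * b + 1.0, tau * c - x_next])
    return np.sort(roots[np.abs(roots.imag) < 1e-12].real)

b, c, x_next = 1.0, -1.0, 0.5
for tau in (0.1, 0.01):
    far, near = preimages(x_next, tau, b, c)
    # both roots map forward onto x_next ...
    assert np.allclose([euler_step(far, tau, b, c),
                        euler_step(near, tau, b, c)], x_next)
    # ... the "physical" root stays near x_next; the spurious one recedes
    print(f"tau={tau}: near={near:.4f}, far={far:.4f}")
```

Shrinking $\tau$ pushes the spurious preimage toward $-\infty$, in line with the discussion above.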
Extending our analysis provides invertibility guarantees for the transformation from the output of network 1 to the output of network 2 (and vice versa). \section{Invertibility certification of neural networks and of transformations between them} Here we pose the verification of local invertibility of continuous functions as an optimization problem. We then show that for ReLU networks, this leads to a mixed-integer linear/quadratic program. For an integer $q \geq 1$, we denote the $L_q$-ball centered at $x_c$ by $\mathcal{B}_q(x_c,r) = \{x \in \mathbb{R}^m \mid \|x-x_c\|_q \leq r\}$ (the notation also holds when $q \rightarrow +\infty$). \begin{problem}[Local Invertibility of NNs] \label{problem 1} Given a neural network $f: \mathbb{R}^m \mapsto \mathbb{R}^m$ and a point $x_c \in \mathbb{R}^m$ in the input space, we want to find the largest radius $r > 0$ such that $f$ is invertible on $\mathcal{B}_q(x_c,r)$, i.e., $f(x_1) \neq f(x_2)$ for all $x_1, x_2 \in \mathcal{B}_q(x_c,r)$, $x_1 \neq x_2$. \end{problem} Another relevant problem is to verify whether, for a particular point, a nearby point exists with the same forward image. This is of particular interest in assessing invertibility of discrete-time dynamical systems around a given trajectory. We formally state the problem as follows: \begin{problem}[Pseudo Local Invertibility of NNs] \label{problem 2} Given a neural network $f: \mathbb{R}^m \mapsto \mathbb{R}^m$ and a point $x_c \in \mathbb{R}^m$ in the input space, we want to find the largest radius $R > 0$ such that $f(x) \neq f(x_c)$ for all $x \in \mathcal{B}_q(x_c,R)$, $x \neq x_c$. \end{problem} If $r$ and $R$ are the optimal radii in Problems \ref{problem 1} and \ref{problem 2} respectively, we must have $r \leq R$. For Problem \ref{problem 1}, the ball $\mathcal{B}_q(x_c,r)$ just ``touches'' the $J_0$ set; for Problem \ref{problem 2}, the ball $\mathcal{B}_q(x_c,R)$ extends to the ``other'' closest preimage of $f(x_c)$.
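In one dimension, both radii can be approximated by dense sampling, a brute-force stand-in for the exact certificates developed below. The tiny hand-crafted 1-2-1 ReLU net in this sketch (not one of the networks studied in the paper) is chosen so the answers are known exactly: $r = 1$ and $R = 2$ around $x_c = 0$.

```python
import numpy as np

# Hand-crafted 1-2-1 ReLU net f(x) = relu(x+1) - 2*relu(x-1):
# monotone increasing on (-1, 1), kink at x = 1, flat for x < -1.
W0, b0 = np.array([1.0, 1.0]), np.array([1.0, -1.0])
W1 = np.array([1.0, -2.0])

def f(x):
    return float(W1 @ np.maximum(W0 * x + b0, 0.0))

def invertibility_radius(xc, r_max=2.0, n=4001):
    # Problem 1: distance from xc to the nearest point where f stops
    # being strictly monotone (grid approximation; assumes f is strictly
    # monotone right at xc).
    xs = np.linspace(xc - r_max, xc + r_max, n)
    d = np.sign(np.diff([f(x) for x in xs]))
    mid = (n - 1) // 2
    r = r_max
    for i in range(mid + 1, n - 1):          # scan right
        if d[i] != d[mid]:
            r = min(r, xs[i] - xc); break
    for i in range(mid - 1, -1, -1):         # scan left
        if d[i] != d[mid]:
            r = min(r, xc - xs[i + 1]); break
    return r

def pseudo_radius(xc, r_exclude, r_max=3.0, n=6001):
    # Problem 2: distance from xc to the nearest *other* preimage of f(xc),
    # detected as a sign change of f - f(xc) outside the certified ball.
    xs = np.linspace(xc - r_max, xc + r_max, n)
    ys = np.array([f(x) for x in xs]) - f(xc)
    R = r_max
    for i in range(n - 1):
        if abs(xs[i] - xc) >= r_exclude and ys[i] * ys[i + 1] <= 0:
            R = min(R, abs(xs[i] - xc))
    return R

r = invertibility_radius(0.0)    # exact answer: 1 (kink at x = 1)
R = pseudo_radius(0.0, r)        # exact answer: 2 (f(2) = f(0) = 1)
assert r <= R
```

The computed radii satisfy $r \leq R$, as noted above.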
Figure \ref{fig: 1D sketch} illustrates both concepts in the one-dimensional case. For the scalar function $y = f(x)$ and around a particular input $x_c$, we show the nearest bounds of local invertibility and pseudo invertibility. The points $Q_1= (x_{Q_1}, y_{Q_1})$ and $Q_2= (x_{Q_2}, y_{Q_2})$ are the two closest turning points (elements of the $J_0$ set) to the point $C =(x_c, y_c)$; $f$ is uniquely invertible (bi-Lipschitz) on the open interval $(x_{Q_1}, x_{Q_2})$, so that the optimal solution to Problem \ref{problem 1} is: $r = \min \{|x_{Q_1} - x_c|, |x_{Q_2} - x_c|\} = |x_{Q_1} - x_c|$. Noting that $M_1 = (x_{M_1}, y_{M_1})$ and $M_2 = (x_{M_2}, y_{M_2})$ are the two closest points that have the same $y$-coordinate as the point $C = (x_c, y_c)$, the optimal solution to Problem \ref{problem 2} is $R = \min \{|x_{M_1} - x_c|, |x_{M_2} - x_c|\} = |x_{M_1} - x_c|$. \begin{figure}[H] \begin{center} \includegraphics[width=0.9\textwidth]{Fig/ps_sketch_2023.png} \end{center} \caption{ \small{Illustration of problems 1 (distance to invertibility boundary, red) and 2 (distance to pseudo invertibility boundary, blue).} } \label{fig: 1D sketch} \end{figure} We now state our first result, posing the local invertibility of a function (such as a neural network) as a constrained optimization problem. \begin{theorem}[Local Invertibility of Continuous Functions] \label{theorem: local_inv} Let $f \colon \mathbb{R}^m \to \mathbb{R}^m$ be a continuous function and $\mathcal{B} \subset \mathbb{R}^m$ be a compact set. Consider the following optimization problem, \begin{alignat}{2} \label{opt problem 1} p^\star \leftarrow& \mathrm{max} \quad && \|x_1 - x_2\| \quad \text{subject to } x_1, x_2 \in \mathcal{B}, \quad f(x_1)=f(x_2). \end{alignat} Then $f$ is invertible on $\mathcal{B}$ if and only if $p^\star =0$. 
\end{theorem} \begin{theorem}[Pseudo Local Invertibility] \label{theorem: local_pseudo} Let $f \colon \mathbb{R}^m \to \mathbb{R}^m$ be a continuous function and $\mathcal{B} \subset \mathbb{R}^m$ be a compact set. Suppose $x_c \in \mathcal{B}$. Consider the following optimization problem, \begin{align} \label{opt problem 2} P^\star \leftarrow \mathrm{max} \quad \|x-x_c\| \quad \text{subject to } x \in \mathcal{B}, \quad f(x)=f(x_c). \end{align} Then we have $f(x) \neq f(x_c)$ for all $x \in \mathcal{B} \setminus \{x_c\}$ if and only if $P^\star =0$. \end{theorem} Note that by adding the equality constraints $x=x_1, x_c=x_2$ to the optimization problem \eqref{opt problem 1}, we obtain the optimization problem \eqref{opt problem 2}. Hence, we will only focus on \eqref{opt problem 1} in what follows. \paragraph{Invertibility certification of ReLU networks via mixed-integer programming} \label{MILP1} We now show that for a given ball $\mathcal{B}_{\infty}(x_c,r)$ in the input space, and piecewise linear networks with ReLU activations, the optimization problem in \eqref{opt problem 1} can be cast as an MILP. A single ReLU constraint $y = \max(0,x)$ with pre-activation bounds $\underline{x} \leq x \leq \bar{x}$ can be equivalently described by the following mixed-integer linear constraints (\cite{tjeng2017evaluating}): \begin{align} y=\max(0,x), \ \underline{x} \leq x \leq \bar{x} \iff \{y \geq 0, \ y \geq x, y \leq x - \underline{x} (1-t), \ y \leq \bar{x} t, \ t \in \{1,0\}\}, \label{eq: relu_decomp} \end{align} where the binary variable $t \in \{1,0\}$ is an indicator of the activation function being active ($y=x$) or inactive ($y=0$).
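As a quick sanity check (not part of the MILP itself), the equivalence in \eqref{eq: relu_decomp} can be verified exhaustively on a grid; the bounds below are arbitrary, and small tolerances guard against floating-point ties.

```python
import numpy as np

# Check of the big-M ReLU encoding: with bounds lb <= x <= ub, some binary
# t makes {y >= 0, y >= x, y <= x - lb*(1-t), y <= ub*t} feasible exactly
# when y = max(0, x). Bounds are arbitrary (lb < 0 < ub).
lb, ub = -2.0, 3.0
TOL = 1e-6

def feasible(x, y):
    # does some binary t satisfy all four linear constraints?
    return any(
        y >= -TOL
        and y >= x - TOL
        and y <= x - lb * (1 - t) + TOL
        and y <= ub * t + TOL
        for t in (0, 1)
    )

for x in np.linspace(lb, ub, 101):
    for y in np.linspace(-1.0, ub + 1.0, 101):
        assert feasible(x, y) == (abs(y - max(0.0, x)) <= TOL)
```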
Now consider an $\ell$-layer feed-forward fully-connected ReLU network with input $x$ given by the following recursions, \begin{align} \label{eq: nn equations} x^{(k+1)} = \max(W^{(k)} x^{(k)} + b^{(k)},0) \text{ for } k=0,\cdots,\ell-1; \ f(x^{(0)}) = W^{(\ell)} x^{(\ell)} + b^{(\ell)}, \end{align} where $x^{(k)} \in \mathbb{R}^{n_k}$ gives the input to the $(k + 1)$-th layer (specifically, we have $x=x^{(0)}$ and $n_0=m$), $W^{(k)} \in \mathbb{R}^{n_{k+1} \times n_k},b^{(k)} \in \mathbb{R}^{n_{k+1}}$ are the weight matrices and bias vectors of the affine layers. We denote by $n = \sum_{k=1}^{\ell} n_{k}$ the total number of neurons. Suppose $l^{(k)}$ and $u^{(k)}$ are known elementwise lower and upper bounds on the input to the $(k+1)$-th activation layer, i.e., $l^{(k)} \leq W^{(k)} x^{(k)} +b^{(k)}\leq u^{(k)}$. Then the neural network equations are equivalent to a set of mixed-integer constraints as follows: \begin{align} \label{eq:MIL_NN} x^{(k+1)} = \max(W^{(k)} x^{(k)} + b^{(k)},0) \Leftrightarrow \begin{cases} x^{(k+1)} \geq W^{(k)} x^{(k)} + b^{(k)} \\ x^{(k+1)} \leq W^{(k)} x^{(k)} + b^{(k)} - l^{(k)} \odot (\mathrm{1}_{n_{k+1}}-t^{(k)}) \\ x^{(k+1)} \geq 0, \quad x^{(k+1)} \leq u^{(k)} \odot t^{(k)}, \end{cases} \end{align} where $t^{(k)} \in \{1,0\}^{n_{k+1}}$ is a vector of binary variables for the $(k+1)$-th activation layer and $\mathrm{1}_{n_{k+1}}$ denotes the vector of all $1$'s in $\mathbb{R}^{n_{k+1}}$. We note that the element-wise pre-activation bounds $\{ l^{(k)} , u^{(k)} \}$ can be precomputed by, for example, interval bound propagation or linear programming, assuming known bounds on the input of the neural network (\cite{weng2018towards, zhang2018efficient,hein2017formal,wang2018efficient,wong2018provable}).
Since the state-of-the-art solvers for mixed-integer programming are based on branch \& bound algorithms (\cite{bandb, 10.5555/247975}), tight pre-activation bounds will allow the algorithm to prune branches more efficiently and reduce the total running time. \subparagraph{Local invertibility certificates via mixed-integer programming} Having represented the neural network equations by mixed-integer constraints, it remains to encode the objective function $\|x_1 - x_2\|$ of (\ref{opt problem 1}) as well as the set $\mathcal{B}$. We assume that $\mathcal{B}$ is an $L_\infty$ ball around a given point $x_c$, i.e., $\mathcal{B} = \mathcal{B}_{\infty}(x_c,r)$. Furthermore, for the sake of space, we only consider $L_\infty$ norms for the objective function. Specifically, consider the equality $w = \|x_1-x_2\|_{\infty}$. This equality can be encoded as mixed-integer linear constraints by introducing $2n_0$ mutually exclusive indicator variables, which leads to the following MILP: \begin{align} \label{eq: local invertibility MIP} p^\star & \leftarrow \mathrm{max} \ w \text{ subject to } \|x_1-x_c\|_{\infty} \leq r, \notag \ \ \text{} \|x_2-x_c\|_{\infty} \leq r \\ \mathrm{(I)}: & \begin{cases} (x_1-x_2) \leq w \mathrm{1}_{n_0} \leq (x_1-x_2) + 4r(\mathrm{1}_{n_0}-f) \\ -(x_1-x_2) \leq w \mathrm{1}_{n_0} \leq -(x_1-x_2) + 4r(\mathrm{1}_{n_0}-f') \\ f + f' \leq \mathrm{1}_{n_0}, \mathrm{1}_{n_0}^\top (f+f') =1, f,f' \in \{0,1\}^{n_0} \end{cases} \notag \\ \mathrm{(II)}: & \ W^{(\ell)} x_1^{(\ell)} = W^{(\ell)} x_2^{(\ell)} \\ \notag & \text{for } k=0,\cdots,\ell-1: \\ \notag \mathrm{(III)}: & \begin{cases} x_1^{(k+1)} \geq W^{(k)} x_1^{(k)} + b^{(k)}, x_2^{(k+1)} \geq W^{(k)} x_2^{(k)} + b^{(k)} \\ x_1^{(k+1)} \leq W^{(k)} x_1^{(k)} + b^{(k)} - l^{(k)} \odot (1-t^{(k)}), x_2^{(k+1)} \leq W^{(k)} x_2^{(k)} + b^{(k)} - l^{(k)} \odot (1-s^{(k)}) \\ x_1^{(k+1)} \geq 0, x_2^{(k+1)} \geq 0, x_1^{(k+1)} \leq u^{(k)} \odot t^{(k)}, x_2^{(k+1)} \leq u^{(k)} \odot s^{(k)};
t^{(k)},s^{(k)} \in \{0,1\}^{n_{k+1}}, \end{cases} \notag \end{align} \noindent where the set of constraints in $\mathrm{(I)}$ model the objective function $\|x_1-x_2\|_{\infty}$, and the set of constraints $\mathrm{(III)}$ encode the network $x_1^{(k+1)}=\max(W^{(k)} x_1^{(k)}+b^{(k)},0)$ and $x_2^{(k+1)}=\max(W^{(k)} x_2^{(k)}+b^{(k)},0)$. The constraint $\mathrm{(II)}$ enforces that $f(x_1) = f(x_2)$. The resulting optimization problem \eqref{eq: local invertibility MIP} has $2(n_0+n)$ integer variables. \begin{remark} If we instead use the $\ell_2$ norm both for the objective function and the ball $\mathcal{B}_2(x_c,r)$, we will arrive at a mixed-integer quadratic program (MIQP). However, \eqref{eq: local invertibility MIP} remains an MILP if we change them to $\ell_1$ norms. \end{remark} \subparagraph{Largest region of invertibility} For a fixed radius $r \geq 0$, the optimization problem \eqref{eq: local invertibility MIP} either verifies whether $f$ is invertible on $\mathcal{B}_{\infty}(x_c,r)$ or it finds counterexamples $x_1 \neq x_2$ such that $f(x_1)=f(x_2)$. Thus, we can find the maximal $r$ by performing a bisection search on $r$ (Problem \ref{problem 1}). To close this section, we consider the problem of invertibility certification in transformations between two functions (and in particular two neural networks). \begin{problem}[Transformation Invertibility] \label{problem 3} Given two functions $f_1,f_2 \colon \mathbb{R}^m \to \mathbb{R}^m$ and a particular point $x_c \in \mathbb{R}^m$ in the input space, we would like to find the largest ball $\mathcal{B}_q(x_c,r)$ over which the output of $f_2$ is a function of the output of $f_1$ (and vice versa). \end{problem} \begin{theorem} \label{theorem: trans} Let $f_1 \colon \mathbb{R}^m \to \mathbb{R}^n$, $f_2 \colon \mathbb{R}^m \to \mathbb{R}^n$ be two continuous functions and $\mathcal{B} \subset \mathbb{R}^m$ be a compact set.
Consider the following optimization problem, \begin{align} \label{opt problem 3} p_{12}^\star \leftarrow \mathrm{max} \quad \|f_2(x_1) - f_2(x_2)\| \quad \text{subject to } x_1, x_2 \in \mathcal{B}, \quad f_1(x_1) = f_1(x_2). \end{align} Then the output of $f_2$ is a function of the output of $f_1$ on $\mathcal{B}$ if and only if $p_{12}^\star = 0$. \end{theorem} \noindent Similar to Problem \ref{problem 1}, we can pose Problem \ref{problem 3} as a mixed-integer program. Furthermore, we can also define $p_{21}^\star$, whose zero value determines whether the output of $f_1$ is a function of the output of $f_2$ over $\mathcal{B}$. It is straightforward to see that $p_{12}^\star = p_{21}^\star = 0$ if and only if the output of $f_2$ is an invertible function of the output of $f_1$. \section{Numerical Experiments} We now present experiments with $\mathrm{ReLU}$ multi-layer perceptrons (MLPs) in (a) regression problems, and (b) transformations between two $\mathrm{ReLU}$ networks. \paragraph{1D Example} We use a 1-10-10-1 randomly generated fully-connected neural network $f(x)$ with $\mathrm{ReLU}$ activations. We find the largest interval around the points $x=-1.8; -1; -0.3$ on which $f$ is invertible (Problem \ref{problem 1}); we also find the largest interval around the point $x=-1$ for which no other interior points map to $f(-1)$ (Problem \ref{problem 2}). The results are plotted in Figure \ref{fig: 1D example}, where intervals in red and blue respectively represent the optimal solutions for the two problems. The largest certified radii are 0.157, 0.322 and 0.214 for Problem 1 and 0.553 for Problem 2. \begin{figure}[H] \centering \includegraphics[width=0.48\linewidth]{Fig/1D_example_BL.png} \includegraphics[width=0.48\linewidth]{Fig/1D_example_PI.png} \caption{\small{ Solutions to Problem \ref{problem 1} (left, red) and Problem \ref{problem 2} (right, blue) for the MLP corresponding to a randomly-generated ReLU network (see text).
}} \label{fig: 1D example} \end{figure} \paragraph{2D Example: a discrete-time integrator.} The Brusselator (\cite{doi:10.1063/1.1679748}) is a system of two ODEs for the two variables $(x, y)$, depending on the parameters $(a, b)$; it describes oscillatory dynamics in a theoretical chemical reaction scheme. We use its forward-Euler discretization with step $\tau$, \begin{equation} x_{n + 1} = x_n + \tau (a + x_n^2 y_n - (b + 1) x_n), \ y_{n + 1} = y_n + \tau (bx_n - x_n^2 y_n). \label{ode_approx} \end{equation} Rearranging and eliminating $y_n$ in \eqref{ode_approx} we obtain: \begin{equation} \tau (1 - \tau) x_n^3 + \tau (\tau a - x_{n + 1} - y_{n + 1}) x_n^2 + (\tau b + \tau - 1) x_n + (x_{n + 1} - \tau a) = 0. \label{cubic} \end{equation} Equation \eqref{cubic} is a cubic for $x_n$ given $(x_{n + 1}, y_{n + 1})$ when $\tau \neq 1$. By varying the parameters $a$, $b$ and $\tau$, we see that the past states $(x_n, y_n)^T$ of a point $(x_{n + 1}, y_{n + 1})^T$ (also called ``inverses'' or ``preimages'') may be multi-valued, so that this discrete-time system is, in general, noninvertible. We fix $a = 1$ and consider how the inverses change (a) with $b$ for fixed $\tau = 0.15$, and (b) with $\tau$ for fixed $b = 2$. We are interested in training a neural network that learns this time-$\tau$ mapping; for a fixed set of parameter values, this is a network from 3D to 2D: $(x_{n + 1}, y_{n + 1})^T \approx \mathcal{N}(x_n, y_n; p)^T$, where $p \in \mathbb{R}$ is the parameter. The network dynamics will be parameter-dependent if we set $p \equiv b$, or timestep-dependent if $p \equiv \tau$. The first layer of such an MLP reads \begin{equation} W^{(0)} \begin{bmatrix} x_n \\ y_n \\ p \\ \end{bmatrix} + b^{(0)} = W^{(0)} \begin{bmatrix} e_1 & e_2 \end{bmatrix} \begin{bmatrix} x_n \\ y_n \\ \end{bmatrix} + (p W^{(0)} e_3 + b^{(0)}), \label{eq: 3d_2d} \end{equation} where $e_{1, 2, 3} \in \mathbb{R}^3$ are indicator vectors.
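The preimage structure given by the cubic \eqref{cubic} can be explored numerically; a minimal sketch (parameters as in the text; the test point is arbitrary):

```python
import numpy as np

# Preimages (x_n, y_n) of a point (x1, y1) under the forward-Euler
# Brusselator map, via the cubic in x_n. Parameters a, b, tau as in the
# text; the test point is arbitrary.
a, b, tau = 1.0, 2.0, 0.15

def euler_map(x, y):
    return (x + tau * (a + x * x * y - (b + 1.0) * x),
            y + tau * (b * x - x * x * y))

def preimages(x1, y1):
    coeffs = [tau * (1.0 - tau),
              tau * (tau * a - x1 - y1),
              tau * b + tau - 1.0,
              x1 - tau * a]
    out = []
    for xn in np.roots(coeffs):
        if abs(xn.imag) < 1e-9:
            xn = xn.real
            # y_n follows from summing the two update equations:
            # x1 + y1 = xn + yn + tau*(a - xn)
            out.append((xn, x1 + y1 - xn - tau * (a - xn)))
    return out

x1, y1 = euler_map(1.2, 0.8)               # forward image of (1.2, 0.8)
back = preimages(x1, y1)
# the original state is recovered, and every preimage maps forward to (x1, y1)
assert any(abs(x - 1.2) < 1e-8 and abs(y - 0.8) < 1e-8 for x, y in back)
assert all(np.allclose(euler_map(x, y), (x1, y1)) for x, y in back)
```

For these parameter values the cubic has three real roots, i.e. the chosen point has three preimages, consistent with the multi-valued inverses described above.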
Here we trained two separate MLPs, one with $b$ and one with $\tau$ dependence. For fixed $p$ (either $b$ or $\tau$) each of these two networks $\mathcal{N}$ can be thought of as an MLP mapping from $\mathbb{R}^2$ to $\mathbb{R}^2$, by slightly modifying the weights and biases in the first linear layer. \subparagraph{Parameter-dependent Inverses} It is useful to start with a brief discussion of the dynamics and noninvertibilities in the ground-truth system (see Figure \ref{fig:inv}). Consider a state located on the invariant circle (IC, shown in orange); we know there exists at least one preimage {\em also on this IC}. In Figure \ref{fig:inv} we indeed see that every point on the IC has three preimages: one still on the IC, and two extra inverses (in green and purple); after one iteration, all three loops map to the orange one, and then remain forward invariant. The phase space, upon iteration, {\em folds} along the two branches of the $J_0$ curve (sets of red points). For lower values of $b$, these three closed loops {\em do not intersect each other}. As $b$ increases, the (orange) attractor will become tangent to, and subsequently intersect, $J_0$, leading to an interaction with the other (green) preimage branch. At this point the dynamics predicted by the network become unphysical (beyond just inaccurate). \begin{figure}[H] \centering \includegraphics[width=0.9\textwidth]{Fig/inv2_three.png} \caption{\small{Attractors (and their multiple inverses) for several parameter values of the discrete Brusselator neural network for $\tau=0.15$. Notice the relative positions of the $J_0$ curves (red), the ``main'' preimage locus (yellow), and the ``extra'' preimages (green, purple). When the attractor starts interacting with the $J_0$ curve and, therefore, with these extra preimages, the dynamic behavior degenerates quantitatively and qualitatively (see also \cite{298587}).
}} \label{fig:inv} \end{figure} After convergence of training, we employ our algorithm to obtain noninvertibility certificates for the resulting MLP, and plot results for $b = 2.1$ in Figure \ref{2_inv}. In Figure \ref{2_inv}, we arbitrarily select one representative point, marked by a triangle ($\triangle$), on the attractor (the orange invariant circle); we know there exists one inverse {\em also} located on the attractor, see the nearby cross ($+$); we call this the {\em primal} inverse. Our algorithm will produce two regions for this point, one for each of our problems (squares of constant $L_{\infty}$ distance in 2D). As a sanity check, we also compute the $J_0$ sets (the red points), as well as a few additional inverses beyond the primal ones, with the help of a numerical root solver and automatic differentiation (\cite{autodiff}). Clearly, the smaller square neighborhood just hits the $J_0$ curve, while the larger one extends to the closest non-primal inverse of the attractor. \subparagraph{Timestep-dependent Inverses} In the right two subfigures of Figure \ref{2_inv}, we explore the effect of varying the time horizon $\tau$. We compare a single Euler step of the ground truth ODE to the MLP approximating the same time-$\tau$ map, and find that, for both of them, smaller time horizons lead to larger regions of invertibility. \begin{figure}[H] \centering \includegraphics[width=0.3\linewidth]{Fig/2d_point_left.pdf} \includegraphics[width=0.62\linewidth]{Fig/ts_j0_new.pdf} \caption{\small{ Left: illustration of our solution to Problems \ref{problem 1} and \ref{problem 2} for the Brusselator network with $(a, b) = (1, 2.1)$. For a particular reference point on the attractor, we show the neighborhoods found by our algorithms. They clearly locate the closest point on the $J_0$ curve / the closest ``extra preimage'' of the point of interest.
Last two: plots of $J_0$ curves at different $\tau$ with $(a, b) = (1, 2)$, for both the Euler integrator and our Brusselator ReLU network. Small timesteps lead to progressively more remote $J_0$ curves. Notice also the piecewise linear nature of the $J_0$ curve for the ReLU network; its accurate computation constitutes an interesting challenge by itself.} } \label{2_inv} \end{figure} \paragraph{Network Transformation Example: Learning the Van der Pol Equation} Here, to test our algorithm on the problem of transformations between networks (Problem \ref{problem 3}), we trained two networks on the same regression task. Our data comes from the 2D Van der Pol equation $dx_1 / dt = x_2, dx_2 / dt = \mu (1 - x_1^2) x_2 - x_1$, where the input and output are the initial and final states of 1000 short solution trajectories of duration 0.2 for $\mu = 1$, when a stable limit cycle exists. The initial states are uniformly sampled in the region $[-3, 3] \times [-3, 3]$. The neural network A used to learn the time-$\tau=0.2$ map is a 2-32-32-2 MLP, while the neural network B is a retrained sparse version of A, where half of the weight entries are pruned (set to zero) based on \cite{prune}. To visualize the performance of the two networks, two trajectories, generated by iterating each network function a fixed number of times starting from a common initial state, are plotted in the left subplot of Figure \ref{prune}. The ODE solution trajectory starting at the same initial state with the same overall time duration is also shown. We see that both network functions A and B exhibit long-term oscillations; the shapes of both attractors appear to have only small visual differences from the true ODE solution (the red curve). These two network functions were then used to illustrate the algorithm for Problem \ref{problem 3}. 
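The training data just described can be generated along the following lines. This is an illustrative sketch under stated assumptions (tolerances, random seed, and function names are ours; the actual data pipeline, network code, and training loop of this work are not reproduced here):

```python
import numpy as np
from scipy.integrate import solve_ivp

MU, TAU = 1.0, 0.2  # mu and trajectory duration from the text


def vdp(t, x):
    """Van der Pol vector field: dx1/dt = x2, dx2/dt = mu(1 - x1^2)x2 - x1."""
    return [x[1], MU * (1.0 - x[0] ** 2) * x[1] - x[0]]


def time_tau_map(x0):
    """Ground-truth time-tau map: integrate the ODE for duration TAU."""
    sol = solve_ivp(vdp, (0.0, TAU), x0, rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]


def make_dataset(n=1000, seed=0):
    """(input, output) pairs: initial states uniform in [-3,3]^2 and their time-TAU images."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-3.0, 3.0, size=(n, 2))
    Y = np.array([time_tau_map(x) for x in X])
    return X, Y
```

Networks A and B are then regressions of `Y` on `X`; note that the origin is a fixed point of the flow, so `time_tau_map` leaves it unchanged.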
Here we chose a center point $x_c = (0, 0)^T$, computed and plotted the mappable regions (the regions over which there is a one-to-one mapping between the output of one network and the output of the other, i.e. where one network can be {\em calibrated} to the other). This was done for two subcases (see the right subfigure of Figure \ref{prune}): (a) where the output of network $B$ is a function of the output of network $A$ (the square with white bounds centered at the red point, radius 3.0820), and (b) vice versa, where the output of network $A$ is a function of the output of network $B$ (the square with black bounds centered at the red point, radius 3.6484). This also gives us the ``common" region (the interior of the white square) where both networks can be calibrated {\em to each other}. For validation we also computed the Jacobian values of network $A$ and network $B$ on every grid point of the input domain, and showed that the white square touches the $J_0$ curve of network $A$, while the black square touches the $J_0$ curve of network $B$. Inside the black square the Jacobian of network $B$ remains positive, so that network $B$ is invertible (i.e. there exists a mapping from $f_B(x)$ to $x$, or equivalently, $f_B^{-1} (x)$); therefore we can find the mapping from $f_B(x)$ to $f_A(x)$ by composing the mapping from $f_B(x)$ to $x$ with the mapping from $x$ to $f_A(x)$ (the function $f_A(x)$ itself). The size of the white square can be similarly rationalized, validating our computation. \begin{figure}[H] \centering \includegraphics[width=0.4\linewidth]{Fig/pruned_pp.pdf} \includegraphics[width=0.55\linewidth]{Fig/pruned_jac.pdf} \caption{\small{Left: Trajectories of the ODE solution for the Van der Pol system (red), and their discrete-time neural network approximations (blue and green). All three trajectories begin at the same initial state. 
While the ODE solution curve is smooth due to its continuous-time nature, the others are just straight line segments connecting consecutive states (discrete-time dynamics). However, it is clear that all three systems have visually nearby long-time dynamic attractors, corroborating the good performance of the network and its pruned version. Right: visualization of MILP computation results, along with signs of Jacobian values of networks on the grid points of the input domain. Here, the center of the region is shown in red, while the white and black boundaries quantify the mappable region between outputs of network A and network B.}} \label{prune} \end{figure} \begin{table}[H] \centering \begin{tabular}{c|ccc|ccc|ccc} \toprule Sparsity & \multicolumn{3}{c|}{40 \%} & \multicolumn{3}{|c|}{50 \%} & \multicolumn{3}{|c}{60 \%} \\ \midrule Network $B$ & $B_1$ & $B_2$ & $B_3$ & $B_4$ & $B_5$ & $B_6$ & $B_7$ & $B_8$ & $B_9$ \\ \midrule $r_{AB}$ & 3.0820 & 3.0820 & 3.0820 & 3.0820 & 3.0820 & 3.0820 & 3.0820 & 3.0820 & 3.0820 \\ \midrule $r_{BA}$ & 3.4609 & 3.1055 & 3.8555 & 3.6484 & 2.6523 & 3.8203 & 3.6328 & 3.9727 & 4.5547 \\ \bottomrule \end{tabular} \caption{\small{The radii of the mappable regions between the original network $A$ and its pruned versions $B$. $r_{AB}$ relates to the region within which $f_B(x)$ is a function of $f_A(x)$. }} \label{tab:pruneResults} \end{table} As a sanity check, we constructed eight more pruned networks; two of them have $50 \%$ sparsity (networks $B_5$ and $B_6$), three have $40 \%$ sparsity (networks $B_1, B_2$ and $B_3$) and the others have $60 \%$ sparsity (networks $B_7, B_8$ and $B_9$). Above we discussed network $B_4$. For each pruned network, we computed the radii of the regions of interest (i.e., $r_{AB}$ and $r_{BA}$). The results are listed in Table \ref{tab:pruneResults}. All pruned networks $\{B_i\}$ share the same radii $r_{AB}$, consistent with the invertibility of $A$ itself. 
Since $r_A = 3.0820$, $A$ is invertible in the ball we computed, and the existence of the mapping $f_A(x) \mapsto f_B(x)$ follows by composition of $f_A(x) \mapsto x$ and $x \mapsto f_B(x)$. Based on these few computational experiments one might very tentatively surmise a trend: the higher the pruning (e.g. $60\%$) the larger the invertibility guarantee for the pruned network. In our work the input and output dimensions are the same (i.e., $m = n$ in Problem \ref{problem 3}). However, this condition is not necessary, and our algorithm can be conceptually extended to classification problems, where in general $m \gg n$. \section{Conclusions} In this paper, we revisited noninvertibility issues that arise in discrete-time dynamical systems (integrators) as well as in neural networks that perform approximations of the same (time-series related) task. We argued that such noninvertibility may have dramatic pathological consequences, going beyond just inaccuracies, in the dynamics predicted by the networks. We also extended the analysis to transformations between different neural networks. We formulated three problems that provide a quantifiable assessment of ``local'' invertibility for any given, arbitrarily selected input. Specifically, for functions like MLPs with ReLU activations, these problems were formulated as mixed-integer programs. We then performed experiments on regression tasks. An extension of our algorithm to ResNets can be found in the Appendix. Future directions include developing structure-exploiting methods to globally solve these MIPs more efficiently, and for larger networks. On the other hand, given that convolution and average pooling are linear operations, while max pooling is piecewise linear, it is natural to adapt our algorithms to convolutional neural networks like AlexNet (\cite{alex}) or VGG (\cite{VGG}). 
The successful application of our algorithm to ResNet architectures (\cite{resnet}) holds promise for applicability also to recursive architectures (\cite{lu18d, Weinan2017APO}), such as fractal networks (\cite{larsson2017fractalnet}), poly-inception networks (\cite{zhang2016polynet}), and RevNet (\cite{gomez2017reversible}). We are working on making the algorithm practical for continuously differentiable activations like tanh or Swish (\cite{swish}), and for other smooth activations like Gaussian error linear units (GELUs, \cite{gelu}). We are particularly interested in the case when the input and output domains are of different dimension (e.g., classifiers).
\section{Introduction} \label{sec:intro} The Schramm-Loewner evolution (SLE) was first introduced by Schramm \cite{sch-sle} in 1999 as a candidate for the scaling limit of curves in statistical physics models at criticality. Soon afterwards it was proven that the SLE indeed describes the limiting behavior of a range of statistical physics models, including the uniform spanning tree, the loop-erased random walk \cite{lsw-lerw-ust}, percolation \cite{smi-cardy}, the Ising model \cite{smi-ising,cs-ising}, and the discrete Gaussian free field \cite{ss-dgff}. Schramm argued in his work that SLE is the unique one-parameter family of processes satisfying two natural properties called conformal invariance and the domain Markov property, and he denoted the parameter by $\kappa>0$. In this paper we will study the regularity of SLE. We first present some measures of regularity for general fractal curves in Section \ref{sec:quantify} along with previous results for SLE. Then we state our main results in Section \ref{se:results}, and finally we give an outline of the paper in Section \ref{se:outline}. \subsection{How to quantify the regularity of fractal curves, and previous SLE results} \label{sec:quantify} Let $I\subset\mathbbm{R}$ be an interval and let $\eta\colon I\to\mathbb{C}$ be a fractal curve in the plane. A natural question is how one can quantify the regularity or fractality of $\eta$. One approach is to view the curve as a subset of $\mathbb{C}$ by considering the set $\eta(I)$. One can study the dimension of this set, e.g.\ the Hausdorff or Minkowski dimension. It follows from \cite{bef-sle-dimension,rs-sle} that in the case of SLE, these two dimensions agree a.s.\ and are given by $(1+\kappa/8)\wedge 2$. 
One can also ask about the exact gauge function that gives a non-trivial Hausdorff measure or Minkowski content for SLE; this is known to be an exact power-law for the Minkowski content \cite{lr-minkowski-content} while it is unknown for the Hausdorff measure \cite{rez-hausdorff}. The $(1+\kappa/8)\wedge 2$-dimensional Minkowski content of SLE has been proven to define a parametrization of the curve known as the natural parametrization \cite{lr-minkowski-content,lv-lerw-nat,ls-natural-parametrisation}. For conformally invariant curves in $\mathbb{C}$ like SLE it is also natural to study the regularity of a uniformizing conformal map from the complement of the curve to some reference domain. See e.g.\ \cite{bs-aims-sle,gms-sle-multifractal,vl-tip-multifractal,abv-boundary-collisions,sch-mf-boundary,ghm-dim-conformal,kms-sle48x} for results on this for SLE. One can also quantify the regularity of $\eta$ by viewing it as a parametrized curve rather than a subset of $\mathbb{C}$. The \emph{modulus of continuity} is natural in this regard. We say that $\eta\colon I \to\mathbb{C}$ admits $\omega\colon[0,\infty]\to[0,\infty]$ as a modulus of continuity if $|\eta(t)-\eta(s)|\leq \omega(|t-s|)$ for any $s,t\in I$. If this holds for $\omega(t)=ct^\alpha$ for some $c>0$ and $\alpha>0$ then we say that the curve is $\alpha$-H\"older continuous. Note that the modulus of continuity of a curve depends strongly on the parametrization of the curve. For SLE there are two commonly considered parametrizations: the capacity parametrization (see e.g.\ \cite{law-conformal-book}) and the natural parametrization. The optimal H\"older exponent of SLE was computed by Lawler and Viklund \cite{vl-sle-hoelder} for the capacity parametrization, and logarithmic refinements were studied in \cite{yuan-psivarx,kms-sle48x}. For SLE with its natural parametrization the optimal H\"older exponent is equal to the reciprocal of the dimension of the curve (see \eqref{eq:d} below). 
This was proven by Zhan for $\kappa\leq 4$ \cite{zhan-hoelder} and by Gwynne, Holden, and Miller for space-filling SLE with $\kappa>4$ \cite{ghm-kpz-sle}. The regularity of $\eta$ at a fixed time $t_0\in I$ can be quantified via a \emph{law of the iterated logarithm} which describes the magnitude of the fluctuations of $\eta$ as it approaches $t_0$. One can also consider the set of exceptional times where the fluctuations are different from a typical point, e.g. so-called fast or slow points. Finally, another important notion of regularity for a curve is the \emph{variation}. For an increasing homeomorphism $\psi\colon(0,\infty)\to(0,\infty)$ we say that $\eta\colon I\to\mathbb{C}$ has finite $\psi$-variation if \[ \sup_{t_0<\dots<t_r} \sum_i \psi(|\eta(t_{i+1})-\eta(t_i)|) < \infty, \] where we take the supremum over all finite sequences $t_0<t_1<\dots<t_r$, $t_i\in I$ for $i=1,\dots,r$. Equivalently, a continuous curve $\eta$ has finite $\psi$-variation if and only if there exists a reparametrization $\widetilde\eta$ of $\eta$ that admits modulus of continuity $\psi^{-1}$, i.e., \[ \abs{\widetilde\eta(t)-\widetilde\eta(s)} \le \psi^{-1}(\abs{t-s}) . \] This measure of regularity is invariant under reparametrizations of the curve, and is therefore particularly natural in the setting of SLE where people use multiple different parametrizations or simply consider the curve to be defined only modulo reparametrization of time. The case $\psi(x) = x^p$ (usually called $p$-variation) plays an important role in the theories of Young integration and rough paths. A curve is of finite $p$-variation if and only if it admits a $1/p$-Hölder continuous reparametrisation. 
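To make the definition of $p$-variation concrete: for a real sequence observed at finitely many sample times, the supremum over partitions through the sample points can be computed exactly by an $O(n^2)$ dynamic program over partition points. The sketch below is our own illustration of the definition (the function name and interface are ours, not from this paper):

```python
def p_variation(xs, p):
    """Exact p-variation of a finite real sequence:
    the maximum over subsequences t_0 < ... < t_r of
    sum_i |x_{t_{i+1}} - x_{t_i}|^p, via dynamic programming.
    best[j] holds the optimal value over subsequences ending at index j."""
    n = len(xs)
    best = [0.0] * n
    for j in range(1, n):
        best[j] = max(best[i] + abs(xs[j] - xs[i]) ** p for i in range(j))
    return max(best)
```

For instance, on the monotone sequence $0,1,2,3$ with $p=2$ the optimum takes the single jump $0\to 3$ (since $(a+b)^2 \ge a^2+b^2$), whereas on an oscillating sequence the finest partition wins.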
When $\eta$ has finite $\psi$-variation, the following quantity is finite and, for convex $\psi$, can be proven to define a semi-norm \[ [\eta]_{\psi\text{-var};I} = \inf\left\{ M>0 \mmiddle| \sup_{\substack{t_0<\dots<t_r\\ t_i \in I}} \sum_i \psi\left(\frac{\abs{\eta(t_{i+1})-\eta(t_i)}}{M}\right) \le 1 \right\} . \] The optimal $p$-variation exponent of SLE was computed to be equal to its dimension $(1+\kappa/8)\wedge 2$ in \cite{bef-sle-dimension,ft-sle-regularity,wer-sle-pvar}, and a non-optimal logarithmic refinement of the upper bound was established by the second author \cite{yuan-psivarx}. (Recall the general result that the $p$-variation exponent cannot be smaller than the Hausdorff dimension of the curve.) \subsection{Main results} \label{se:results} We assume throughout the section that $\eta$ is either \begin{enumerate}[label=(\roman*)] \item \label{it:non-spfill} a two-sided whole-plane SLE$_\kappa$, $\kappa\le 8$, from $\infty$ to $\infty$ passing through $0$ with its natural parametrization, or \item \label{it:spfill} a whole-plane space-filling SLE$_\kappa$, $\kappa>4$, from $\infty$ to $\infty$ with its natural parametrization (i.e., parametrized by Lebesgue area measure). \end{enumerate} Furthermore, we let $d$ denote the dimension of the curve, namely \begin{equation} d = 1+\frac{\kappa}{8} \quad\text{in case \ref{it:non-spfill}\qquad and\qquad } d=2 \quad\text{in case \ref{it:spfill}.} \label{eq:d} \end{equation} The SLE variants considered in \ref{it:non-spfill} and \ref{it:spfill} are particularly natural since it can be argued that they describe the local limit in law of an arbitrary variant of SLE with its natural parametrization zoomed in at a typical time. 
The reason we consider these two SLE variants is that they are self-similar processes of index $1/d$ with stationary increments, in the sense that for every $t_0 \in \mathbb{R}$ and $\lambda > 0$, the process $t \mapsto \eta(t_0+\lambda t)-\eta(t_0)$ has the same law as $t \mapsto \lambda^{1/d}\eta(t)$ (see \cite[Corollary 4.7 and Remark 4.9]{zhan-loop} for case \ref{it:non-spfill}, and \cite[Lemma 2.3]{hs-mating-eucl} for case \ref{it:spfill}). We give a more thorough introduction to these curves in Section \ref{se:prelim}. Throughout the paper, we write $\log^*(x) = \log(x) \vee 1$. \begin{theorem}[variation regularity]\label{thm:main_var} Let $\psi(x) = x^d(\log^*\logp\frac{1}{x})^{-(d-1)}$. There exists a deterministic constant $c_0 \in {(0,\infty)}$ such that almost surely \[ \lim_{\delta\downarrow 0} \sup_{\abs{t_{i+1}-t_i}<\delta} \sum_i \psi(\abs{\eta(t_{i+1})-\eta(t_i)}) = c_0\abs{I} \] for any bounded interval $I \subseteq \mathbb{R}$, where the supremum is taken over finite sequences $t_0 < ... < t_r$ with $t_i \in I$ and $\abs{t_{i+1}-t_i}<\delta$. Moreover, for any bounded interval $I \subseteq \mathbb{R}$, there exists $c>0$ depending on the length of $I$ such that \[ \mathbb{P}( [\eta]_{\psi\text{-var};I} > u ) \le c^{-1}\exp(-c u^{d/(d-1)}) . \] \end{theorem} Recall that the previous works \cite{bef-sle-dimension,ft-sle-regularity} have identified $d$ as the optimal $p$-variation exponent. Our result gives the optimal function $\psi$ up to a non-explicit deterministic factor. In other words, the best modulus of continuity among all parametrisations of $\eta$ is $\omega(t) = ct^{1/d}(\log^*\logp t^{-1})^{1-1/d}$, and in particular, there is no reparametrisation that is $1/d$-Hölder continuous. 
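As a quick sanity check of the last claim (our own computation, not part of the theorem; the $\log^*$ truncations only matter at small scales and are dropped here), one can invert $\psi$ asymptotically:

```latex
% With \psi(x) = x^d (\log\log\tfrac1x)^{-(d-1)}, substitute
%   x_t = t^{1/d} (\log\log\tfrac1t)^{(d-1)/d}.
% Since \log\tfrac{1}{x_t} \sim \tfrac1d \log\tfrac1t, we have
% \log\log\tfrac{1}{x_t} = \log\log\tfrac1t + O(1), and therefore
\psi(x_t)
  = t \Bigl(\log\log\tfrac1t\Bigr)^{d-1} \Bigl(\log\log\tfrac{1}{x_t}\Bigr)^{-(d-1)}
  \asymp t ,
% i.e.\ \psi^{-1}(t) \asymp t^{1/d}(\log\log\tfrac1t)^{(d-1)/d}, matching the
% modulus \omega(t) stated above, since (d-1)/d = 1 - 1/d.
```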
\begin{theorem}[modulus of continuity]\label{thm:main_moc} There exists a deterministic constant $c_0 \in {(0,\infty)}$ such that almost surely \[ \lim_{\delta\downarrow 0} \sup_{s,t \in I, \abs{t-s}<\delta} \frac{\abs{\eta(t)-\eta(s)}}{\abs{t-s}^{1/d}(\log \abs{t-s}^{-1})^{1-1/d}} = c_0 \] for any non-trivial bounded interval $I \subseteq \mathbb{R}$. Moreover, for any bounded interval $I \subseteq \mathbb{R}$ there exists $c>0$ depending on the length of $I$ such that \[ \mathbb{P} \left( \sup_{s,t \in I} \frac{\abs{\eta(t)-\eta(s)}}{\abs{t-s}^{1/d}(\log^* \abs{t-s}^{-1})^{1-1/d}} > u \right) \le c^{-1}\exp(-c u^{d/(d-1)}) . \] \end{theorem} The optimal H\"{o}lder exponent $1/d$ has been identified previously in \cite{zhan-hoelder,ghm-kpz-sle} (except in case \ref{it:non-spfill} for $\kappa\in(4,8)$, where only the upper bound was established). We prove that $\omega(t) = t^{1/d}(\log t^{-1})^{1-1/d}$ is the optimal (up to constant) modulus of continuity. \begin{theorem}[law of the iterated logarithm and maximal growth rate]\label{thm:main_lil} There exist deterministic constants $c_0,c_1 \in {(0,\infty)}$ such that for any $t_0 \in \mathbb{R}$, almost surely \begin{align*} \limsup_{t \downarrow 0} \frac{\abs{\eta(t_0+t)-\eta(t_0)}}{t^{1/d}(\log\log t^{-1})^{1-1/d}} = c_0,\\ \limsup_{t \to \infty} \frac{\abs{\eta(t)}}{t^{1/d}(\log\log t)^{1-1/d}} = c_1. \end{align*} \end{theorem} \begin{remark} We expect that with some extra care one can show that the moment bounds in \cref{thm:main_moc,thm:main_var} are uniform in $\kappa$ as long as we stay away from the degenerate values, i.e.\ away from $4$ in case \ref{it:spfill} and possibly away from $0$ in case \ref{it:non-spfill}. A uniformity statement of this type is proven in \cite{am-sle8x} and used to construct \sle{8} as a continuous curve. 
\end{remark} \begin{remark} The results transfer to other SLE variants by conformal invariance and absolute continuity as long as we stay away from the boundary and force points, i.e., the statements of \cref{thm:main_lil,thm:main_moc,thm:main_var} hold true on curve segments that do not touch force points or domain boundaries. We expect that the same results also hold for entire SLE curves (including boundary intersecting segments) in bounded domains whose boundaries are not too fractal. \end{remark} \begin{remark} In view of \cref{thm:main_lil,thm:main_moc}, it is natural to study exceptional times where the law of the iterated logarithm fails. For $a>0$, we call a time $t_0$ \emph{$a$-fast} if \[ \limsup_{t \downarrow 0} \frac{\abs{\eta(t_0+t)-\eta(t_0)}}{t^{1/d}(\log t^{-1})^{1-1/d}} \ge a . \] With the methods of this paper, one can show that the Hausdorff dimension of the set of $a$-fast times for SLE is bounded between $1/d-c_1 a^{d/(d-1)}$ and $1-c_2 a^{d/(d-1)}$ with (non-explicit) deterministic constants $c_1,c_2>0$. The reason we get $1/d$ instead of 1 in our lower bound is that in our argument we only consider the radial direction of the curve instead of $d$ dimensions. We conjecture that there is a deterministic constant $c_0 >0$ such that the dimension is exactly $1-c_0 a^{d/(d-1)}$. For comparison, the Hausdorff dimension of the set of $a$-fast times for Brownian motion is proven in \cite{ot-bm-fast} to be $1-a^2/2$. \end{remark} In the case of space-filling SLE, we show a stronger formulation of the upper bound in \cref{thm:main_moc}. \begin{theorem}\label{thm:ball_filling} Consider space-filling \sle{\kappa}{} as in case \ref{it:spfill}. There exist $\delta > 0$ and $u_0 > 0$ such that the following is true. 
For $r,u>0$ and $I$ a bounded interval let $E_{r,u,I}$ denote the event that for any $s,t \in I$ with $\abs{\eta(s)-\eta(t)} \le ur$, the set $\eta[s,t]$ contains $\delta u^2 \log(u\abs{\eta(s)-\eta(t)}^{-1})$ disjoint balls of radius $u^{-2} \abs{\eta(s)-\eta(t)} /\log(u\abs{\eta(s)-\eta(t)}^{-1})$. Then for any bounded interval $I \subseteq \mathbb{R}$ there exist $c_1,c_2 > 0$ such that \[ \mathbb{P}(E_{r,u,I}^c) \le c_1 r^{c_2 u^2} \] for any $u \ge u_0$ and $r \in {(0,1)}$. \end{theorem} A key input to the proofs is the following precise estimate for the lower tail of the Minkowski content of SLE segments: \begin{equation} \mathbb{P}(\operatorname{Cont}(\eta[0,\tau_r]) < t) \approx \exp\left(-c\, r^{d/(d-1)} t^{-1/(d-1)} \right), \label{eq:mink-tail} \end{equation} where $\operatorname{Cont}$ denotes Minkowski content of dimension $d$ (with $d$ as in \eqref{eq:d}), $\tau_r=\inf\{t\geq 0\,:\, |\eta(t)|=r \}$ denotes the hitting time of radius $r$, we write $\eta[0,\tau_r]$ instead of $\eta([0,\tau_r])$ to simplify notation, and we use $\approx$ to indicate that the left side of \eqref{eq:mink-tail} is bounded above and below by the right side of \eqref{eq:mink-tail} for different choices of $c$. Furthermore, building on the domain Markov property of SLE we prove ``conditional'' variants of \eqref{eq:mink-tail} where we condition on part of the past curve. The conditional variant of the upper bound holds for all possible realizations of the past curve segment while the conditional variant of the lower bound requires that the tip of the past curve is sufficiently nice. See \cref{pr:diam_tail_upper,pr:cont_tail_lower,pr:area_tail_lower} for precise statements, and see \cref{eq:cont_utail_cond,pr:cont_ltail_cond,pr:area_ltail_cond} for conditional variants. 
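To see heuristically how a tail of the form \eqref{eq:mink-tail} produces the $(\log\log)^{1-1/d}$ scaling of \cref{thm:main_lil} (a back-of-the-envelope computation of ours, not the actual proof):

```latex
% Heuristic: along the dyadic times t_n = 2^{-n}, take
%   r_n = u\, t_n^{1/d} (\log\log t_n^{-1})^{(d-1)/d}.
% Since r_n^{d/(d-1)} t_n^{-1/(d-1)} = u^{d/(d-1)} \log\log t_n^{-1},
% the diameter form of \eqref{eq:mink-tail} gives
\mathbb{P}\bigl(\operatorname{diam}(\eta[0,t_n]) > r_n\bigr)
  \approx \exp\Bigl(-c\, u^{d/(d-1)} \log\log t_n^{-1}\Bigr)
  \asymp n^{-c\, u^{d/(d-1)}},
% which is summable in n once u > c^{-(d-1)/d}; Borel--Cantelli together with
% monotonicity along the dyadic scales then bounds the limsup in the law of
% the iterated logarithm by a constant multiple of u.
```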
Note that since we parametrise $\eta$ by its Minkowski content, \eqref{eq:mink-tail} can be equivalently formulated as \[ \mathbb{P}(\operatorname{diam}(\eta[0,t]) > r) \approx \exp\left(-c\, r^{d/(d-1)} t^{-1/(d-1)} \right) . \] We establish the upper bounds in \cref{thm:main_lil,thm:main_moc,thm:main_var} for general stochastic processes whose increments satisfy a suitable moment condition \eqref{eq:Xinc}. Several related results are available in the existing literature, see e.g.\ \cite[Appendix A]{fv-rp-book} and \cite{Bed07}. We review and generalise them in \cref{sec:variation-upper-general}. We then prove that the SLE variants \ref{it:non-spfill} and \ref{it:spfill} do satisfy the required condition. In the latter step we use only the self-similarity of the SLE along with a Markov-type property satisfied by the increments, namely a conditional variant of the upper bound in \eqref{eq:mink-tail}. To prove the lower bounds, we need to argue that the increments of the process in disjoint time intervals are sufficiently decorrelated. Given sufficient decorrelation, our proof is relatively simple to implement; see \cref{se:markov} where we have spelled out the proof for Markov processes that have uniform bounds on the transition probabilities. For SLE, we rely on the conditional variant of the lower bound in \eqref{eq:mink-tail}, which is based on the domain Markov property. Here extra care is needed due to the fact that this estimate only holds when the past curve is nice. \subsection{Outline} \label{se:outline} We give in \cref{se:prelim} some basic definitions and results on conformal maps and SLE, including the precise definition of the SLE variants that we work with. In \cref{se:upper} we prove our main theorems except for the lower bounds which will be proved in \cref{sec:sle-lower-bounds}. To illustrate the basic idea of the proof, we show in \cref{se:markov} the analogous results for Markov processes. \medskip {\bf Acknowledgements}. 
We thank Peter Friz, Ewain Gwynne, and Terry Lyons for helpful discussions and comments. N.H.\ was supported by grant 175505 of the Swiss National Science Foundation (PI: Wendelin Werner) and was part of SwissMAP. Y.Y.\@ acknowledges partial support from European Research Council through Starting Grant 804166 (SPRS; PI: Jason Miller), and Consolidator Grant 683164 (PI: Peter Friz) during the initial stage of this work at TU Berlin. \section{Preliminaries} \label{se:prelim} \subsection{Conformal maps} \label{se:prelim_conformal} We will always denote by $\mathbb{H}$ the upper complex half-plane $\{ z \in \mathbb{C} \mid \Im z > 0 \}$, and by $\mathbb{D}$ the unit disk $\{ z \in \mathbb{C} \mid \abs{z} < 1\}$. For $z_0 \in \mathbb{C}$ and $r > 0$, we denote by $B(z_0,r)$ the open ball of radius $r$ about $z_0$, i.e. $\{ z \in \mathbb{C} \mid \abs{z-z_0} < r\}$. For a bounded, relatively closed set $A \subseteq \mathbb{H}$, we define its half-plane capacity to be $\lim_{y \to \infty} y\,\mathbb{E}[\Im B^{iy}_{\tau_{A \cup \mathbb{R}}}]$ where $B^{iy}$ denotes Brownian motion started at $iy$ and $\tau_{A \cup \mathbb{R}}$ the hitting time of $A \cup \mathbb{R}$. For a simply connected domain $D \subseteq \widehat{\bbC}$ and a prime end $x \in \partial D$, fix a conformal map $f\colon D \to \mathbb{H}$ with $f(x) = \infty$. For a relatively closed set $A \subseteq D$ with positive distance to $x$, we define the {\it{capacity of $A$ in $D$ relative to $x$}} to be the half-plane capacity of $f(A)$.\footnote{In case $\partial D$ has an analytic segment in the neighbourhood of $x$, there is a more intrinsic definition given in \cite{dv-relcap}. Their definition differs from ours by a fixed factor depending on the normalisation of $f$. In particular, we can pick $f$ such that both definitions agree. For our purposes, the choice of normalisation will not matter.} Standard results for conformal maps include Koebe's distortion and $1/4$-theorem. See e.g. 
\cite[Theorems 14.7.8 and 14.7.9]{con-complex2-book} for proofs. \begin{lemma}[Koebe's distortion theorem] Let $R>0$ and $f\colon B(z,R) \to \mathbb{C}$ be a univalent function. Then \[ \frac{1-r}{(1+r)^3} \le \frac{\abs{f'(w)}}{\abs{f'(z)}} \le \frac{1+r}{(1-r)^3} \] for all $w \in B(z,R)$ where $r = \frac{\abs{w-z}}{R} < 1$. \end{lemma} The most common application in our work is that for any $w\in B(z,r)$, $r < R$, we have the bounds \[ c\abs{f'(z)} \le \abs{f'(w)} \le c^{-1}\abs{f'(z)} \] with $c>0$ depending on $r$. \begin{lemma}[Koebe's $1/4$ theorem] Let $R>0$ and $f\colon B(z,R) \to D \subseteq \mathbb{C}$ be a conformal map. Then \[ \operatorname{dist}(f(z),\partial D) \ge \frac{R}{4}\abs{f'(z)} . \] \end{lemma} For a simply connected domain $D \subseteq \mathbb{C}$ and $z \in D$, the \textit{conformal radius of $z$ in $D$} is defined as $\abs{f'(0)}$ where $f\colon \mathbb{D} \to D$ is a (unique up to rotations of $\mathbb{D}$) conformal map with $f(0) = z$. We have the standard estimates \[ \operatorname{dist}(z,\partial D) \le \operatorname{crad}(z,D) \le 4\operatorname{dist}(z,\partial D) \] which follow from the Schwarz lemma and Koebe's $1/4$ theorem. Throughout the paper, we will often consider domains of the following type. Let $(D,a)$ be such that \begin{equation}\begin{split} &D\subset\widehat{\bbC} \text{ is a simply connected domain with } \infty\in D,\, 0\not\in D,\\ \text{and}\quad &a\in\partial D \text{ with } \abs{a} = \sup\{ \abs{z}\mid z \in \widehat{\bbC}\setminus D\} > 0. \label{eq:Da} \end{split}\end{equation} Typical examples of such domains are $D = \widehat{\bbC} \setminus \fill(\eta[0,\tau_r])$, $a = \eta(\tau_r)$ where $\eta$ is some continuous path starting from the origin, $\tau_r=\inf\{t\geq 0\,:\, |\eta(t)|=r \}$ denotes the hitting time of $\partial B(0,r)$, and fill($\eta[0,\tau_r]$) denotes the union of $\eta[0,\tau_r]$ and the set of points disconnected from $\infty$ by this set. 
To $(D,a)$ as in \eqref{eq:Da}, we associate a conformal map $f\colon D \to \mathbb{D}$ as follows. Let $z_D \in \partial B(0,\abs{a}+2)$ be the point closest to $a$, and $f\colon D \to \mathbb{D}$ the conformal map with $f(z_D)=0$ and $f(a)=1$. The following property is shown within the proof of \cite[Lemma 3.1]{ghm-kpz-sle}. \begin{lemma}\label{le:ghm} There exists $\varepsilon_0 > 0$ such that the following is true. Let $(D,a)$ be as in \eqref{eq:Da} with $\abs{a} \ge 1$, and $f\colon D \to \mathbb{D}$ the associated conformal map. Let $V$ be the union of $B(z_D,3) \cap D$ with all points that it separates from $\infty$ in $D$. There exists a path $\alpha$ from $0$ to $1$ in $\mathbb{D}$ whose $\varepsilon_0$-neighbourhood is contained in $f(V)$. Moreover, $\alpha$ can be picked as a simple nearest-neighbour path in $\varepsilon_0 \mathbb{Z}^2$. \end{lemma} \subsection{(Ordinary) SLE} \label{se:prelim_sle} In this and the next section we discuss the SLE variants that we use in the paper. All SLE variants are probability measures on curves (modulo reparametrisation) either in a simply connected domain $D \subseteq \widehat{\bbC}$ or in the full plane $\widehat{\bbC}$. Fix $\kappa > 0$. Let $D \subseteq \widehat{\bbC}$ be a simply connected domain, and $a,b \in \partial D$ two distinct prime ends. Moreover, we may possibly have additional force points $u^1,...,u^n \in \overline{D}$ with weights $\rho_1,...,\rho_n \in \mathbb{R}$. The chordal \sle{\kappa}{}$(\rho_1,...,\rho_n)$ in $D$ from $a$ to $b$ with these force points is a probability measure on curves in $\overline{D}$ starting at $a$ with the following domain Markov property: For any stopping time $\tau$, conditionally on $\eta\big|_{[0,\tau]}$, the law of $\eta\big|_{[\tau,\infty)}$ is an \sle{\kappa}{}$(\rho_1,...,\rho_n)$ in the connected component of $D \setminus \eta[0,\tau]$ containing $b$ from $\eta(\tau)$ to $b$ with the same force points. 
(There is a subtlety when $\eta$ has swallowed some force points, but in this paper we will not encounter that scenario.) Moreover, the SLE measures are conformally invariant in the sense that if $f\colon D \to f(D)$ is a conformal map, then the push-forward of \sle{\kappa}{}$(\rho_1,...,\rho_n)$ in $D$ is \sle{\kappa}{}$(\rho_1,...,\rho_n)$ in $f(D)$ from $f(a)$ to $f(b)$ with force points $f(u^1),...,f(u^n)$. Similarly, radial SLE is characterised by the same properties except that $b \in D$ instead of $\partial D$. For both chordal and radial SLE, it is sometimes convenient to consider $b$ as an additional force point with weight $\rho_{n+1} = \kappa-6-\rho_1-...-\rho_n$. By doing so, we have the following simple transformation rule (see \cite[Theorem 3]{sw-sle-coordinate-change}): For any conformal map $f\colon D \to f(D)$, the pushforward of \sle{\kappa}{}$(\rho_1,...,\rho_{n+1})$ in $D$ starting from $a$ (stopped before swallowing any force point) is \sle{\kappa}{}$(\rho_1,...,\rho_{n+1})$ in $f(D)$ starting from $f(a)$. (Note that in this rule, the target point $b$ is not necessarily $u^{n+1}$ but can be any $u^j$. The law does not depend on the choice of a target point.) In case $D = \mathbb{H}$ or $D = \mathbb{D}$, the laws of SLE can be spelled out explicitly (cf.\@ \cite{sw-sle-coordinate-change}). Namely, we can describe the law of the conformal maps $g_t\colon \mathbb{H} \setminus \fill(\eta[0,t]) \to \mathbb{H}$ with $g_t(z) = z + O(1/z)$, $z \to \infty$. If $\eta$ is parametrised by half-plane capacity, i.e. $\operatorname{hcap}(\fill(\eta[0,t])) = 2t$, then the families $(g_t(z))_{t \ge 0}$ satisfy \[ \partial_t g_t(z) = \frac{2}{g_t(z)-W_t}, \quad g_0(z) = z, \] with processes $(W_t,U^1_t,...,U^n_t)$ satisfying the following system of SDEs \[ \begin{alignedat}{2} dW_t &= \sqrt{\kappa}\,dB_t + \sum_j \Re\frac{\rho_j}{W_t-U^j_t}\,dt , \quad && W_0 = a,\\ dU^j_t &= \frac{2}{U^j_t-W_t}\,dt , \quad && U^j_0 = u^j . 
\end{alignedat} \] By Girsanov's theorem, for fixed $\kappa > 0$, all such \sle{\kappa}{} variants (with different values of $\rho_j$) are absolutely continuous with respect to each other (before any force point is swallowed). The Radon-Nikodym derivatives can be spelled out explicitly, see \cite[Theorem 6]{sw-sle-coordinate-change}. One particular consequence is the following. \begin{lemma}\label{le:sle_abs_cont} Let $D \subseteq \mathbb{C}$ be a simply connected domain, $U \subseteq D$ a bounded subdomain, and $a \in \partial D \cap \overline{U}$. Let $\varepsilon > 0$ and $\rho_1,...,\rho_n \in \mathbb{R}$. Then there exists $c>0$ such that the following is true: Let $u^1,...,u^k \in U$ and let $u^{k+1},...,u^n$, and $b$ be outside the $\varepsilon$-neighbourhood of $U$. Consider the law $\nu$ of \sle{\kappa}{} in $D$ from $a$ to $b$ with force points $(u^1,...,u^k)$ and weights $(\rho_1,...,\rho_k)$, and the law $\widetilde\nu$ of \sle{\kappa}{} in $D$ from $a$ to $b$ with force points $(u^1,...,u^n)$ and weights $(\rho_1,...,\rho_n)$. Then, as laws on curves stopped upon exiting $U$, these two \sle{\kappa}{} measures are mutually absolutely continuous, and their Radon-Nikodym derivative is bounded within $[c,c^{-1}]$. \end{lemma} Whole-plane \sle{\kappa}{} and whole-plane \sle{\kappa}{}$(\rho)$ are probability measures on curves in $\widehat{\bbC}$ running from $a$ to $b$ where $a,b$ are two distinct points on $\widehat{\bbC}$. They are characterised by an analogous domain Markov property. For any non-trivial stopping time $\tau$, conditionally on $\eta\big|_{(-\infty,\tau]}$, the law of $\eta\big|_{[\tau,\infty)}$ is a radial \sle{\kappa}{} (resp. \sle{\kappa}{}$(\rho)$) in the connected component of $\widehat{\bbC} \setminus \eta((-\infty,\tau])$ containing $b$ from $\eta(\tau)$ to $b$ (with a force point at $a$).
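The signs in the driving SDE system above can be sanity-checked numerically. The following Euler--Maruyama discretisation (our own illustrative sketch, not part of any SLE package; the function name and all parameter values are hypothetical choices) simulates the driving pair $(W_t,U_t)$ for a single real force point and verifies that a weight $\rho>0$ repels the force point from the driving function.

```python
import math
import random

def sle_driving(kappa, rho, a, u, T=1.0, n=100_000, seed=1):
    """Euler-Maruyama sketch of the chordal SLE_kappa(rho) driving SDE in H
    with a single real force point u < a:
        dW_t = sqrt(kappa) dB_t + rho/(W_t - U_t) dt,   W_0 = a,
        dU_t = 2/(U_t - W_t) dt,                        U_0 = u.
    Returns the terminal pair (W_T, U_T)."""
    rng = random.Random(seed)
    dt = T / n
    W, U = float(a), float(u)
    for _ in range(n):
        dB = rng.gauss(0.0, math.sqrt(dt))
        # Update W and U simultaneously from the current state.
        W, U = (W + math.sqrt(kappa) * dB + rho / (W - U) * dt,
                U + 2.0 / (U - W) * dt)
    return W, U

W, U = sle_driving(kappa=2.0, rho=2.0, a=0.0, u=-1.0)
# For rho >= 0 the gap W - U behaves like a Bessel process of dimension
# 1 + 2(rho+2)/kappa and stays positive; the force point drifts left.
assert math.isfinite(W) and math.isfinite(U)
assert W > U and U < -1.0
```

This only illustrates the drift structure of the SDEs; it says nothing about the curve $\eta$ itself, which would require solving the Loewner equation for $g_t$.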
Two-sided whole-plane \sle{\kappa}{}, $\kappa \le 8$, is a probability measure on closed curves in $\widehat{\bbC}$ from $a$ to $a$ passing through some $b \in \widehat{\bbC}$. It is defined as follows: \begin{itemize} \item The segment $\eta\big|_{(-\infty,0]}$ is a whole-plane \sle{\kappa}{}$(2)$ from $a$ to $b$ with force point at $a$. \item Conditionally on $\eta\big|_{(-\infty,0]}$, the segment $\eta\big|_{[0,\infty)}$ is a chordal \sle{\kappa}{} in $\widehat{\bbC} \setminus \eta(-\infty,0]$ from $b$ to $a$. \end{itemize} We will use the following facts about two-sided whole-plane \sle{\kappa}{} from $\infty$ to $\infty$ passing through $0$ (cf.\@ \cite{zhan-loop}). Both whole-plane \sle{\kappa}{}$(2)$ and two-sided whole-plane \sle{\kappa}{} are reversible. In particular, the restriction $\eta\big|_{[0,\infty)}$ is a whole-plane \sle{\kappa}{}$(2)$ from $0$ to $\infty$ with force point at $0$. Moreover, if $\eta$ is parametrised by Minkowski content, then $\eta$ is a self-similar process of index $1/d$ with stationary increments, in the sense that for every $t_0 \in \mathbb{R}$ and $\lambda > 0$, the process $t \mapsto \eta(t_0+\lambda t)-\eta(t_0)$ has the same law as $t \mapsto \lambda^{1/d}\eta(t)$. In the remainder of this paper, we denote by $\nu_{D;a\to b}$ (chordal or radial) \sle{\kappa}{} in $D$ from $a$ to $b$, and by $\nu_{D;a\to b;u}$ \sle{\kappa}{}$(2)$ with a force point at $u$. We denote by $\nu_{\widehat{\bbC};a\to b;a}$ whole-plane \sle{\kappa}{}$(2)$. The Minkowski content measures the size of fractal sets. It has been shown in \cite{lr-minkowski-content} that \sle{\kappa}{} curves\footnote{Strictly speaking, their result applies to \sle{\kappa}{} in $\mathbb{H}$ without force points, but it transfers to other \sle{\kappa}{} variants by conformal invariance and absolute continuity, at least on segments away from force points and fractal boundaries. 
The result is also true for two-sided whole-plane \sle{\kappa}{}, see \cite[Lemma 2.12]{zhan-loop}.} possess non-trivial $d$-dimensional Minkowski content where $d = (1+\frac{\kappa}{8})\wedge 2$. Moreover, the Minkowski content of $\eta[0,t]$ is continuous and strictly increasing in $t$, hence $\eta$ can be parametrised by its Minkowski content. This is called the \emph{natural parametrization} of the curve. Finally, it is shown there that the Minkowski content is additive over SLE curve segments, i.e. $\operatorname{Cont}(\eta[s,u])=\operatorname{Cont}(\eta[s,t])+\operatorname{Cont}(\eta[t,u])$ for $s\le t\le u$. By \cite[Lemma 2.6]{zhan-loop}, the Minkowski content satisfies the following transformation rule. If $f\colon U \to f(U) \subseteq \mathbb{C}$ is a conformal map and $\mu$ is the Minkowski content measure of some $S \subseteq U$, i.e. $\mu(K) = \operatorname{Cont}(K \cap S)$ for every compact $K \subseteq U$, then \begin{equation}\label{eq:cont_transf} \operatorname{Cont}(f(K \cap S)) = \int_K \abs{f'(z)}^d \,\mu(dz) \end{equation} for every compact $K \subseteq U$. \subsection{Space-filling SLE} \label{se:prelim_spf} In this section we introduce the whole-plane space-filling SLE$_\kappa$ which is defined via the theory of imaginary geometry. Whole-plane space-filling SLE$_{\kappa}$ from $\infty$ to $\infty$ for $\kappa > 4$ was originally defined in~\cite[Section~1.4.1]{dms-mating-trees}, building on the chordal definition in~\cite[Sections 1.2.3 and 4.3]{ms-ig4}. See also \cite[Section 3.6.3]{ghs-mot-survey} for a survey. For $\kappa \geq 8$, whole-plane space-filling SLE$_{\kappa}$ from $\infty$ to $\infty$ is a curve from $\infty$ to $\infty$ obtained via the local limit of a regular chordal SLE$_{\kappa}$ at a typical point. 
For $\kappa \in (4,8)$, space-filling SLE$_{\kappa}$ from $\infty$ to $\infty$ traces points in the same order as a curve that locally looks like (non-space-filling) SLE$_{\kappa}$, but fills in the ``bubbles'' that it disconnects from its target point with a continuous space-filling loop. To construct whole-plane space-filling SLE$_{\kappa}$ from $\infty$ to $\infty$, fix a deterministic countable dense subset $\mathcal C\subset \mathbbm C$. Let $h$ be a whole-plane GFF, viewed modulo a global additive multiple of $2\pi \chi$ where $\chi \mathrel{\mathop:}= \frac{2}{\sqrt\kappa} - \frac{\sqrt\kappa}{2}$; see \cite[Section 2.2]{ms-ig4} for the definition of this variant of the GFF. It is shown in~\cite{ms-ig4} that for each $z\in\mathcal C$, one can make sense of the flow lines $\eta_z^{\mathrm{L}}$ and $\eta_z^{\mathrm{R}}$ of $e^{ih/\chi}$ of angles $\pi/2$ and $-\pi/2$, respectively, started from $z$. These flow lines are SLE$_{16/\kappa}(2-16/\kappa)$ curves~\cite[Theorem~1.1]{ms-ig4}. The flow lines $\eta_z^{\mathrm{L}}$ and $\eta_w^{\mathrm{L}}$ (resp.\ $\eta_z^{\mathrm{R}}$ and $\eta_w^{\mathrm{R}}$) started at distinct $z,w\in\mathcal C$ eventually merge, such that the collection of flow lines $\eta_z^{\mathrm{L}}$ (resp.\ $\eta_z^{\mathrm{R}}$) for $z\in\mathcal C$ forms the branches of a tree rooted at $\infty$. We define a total ordering on $\mathcal C$ by saying that $w\in \mathcal C$ comes after $z\in \mathcal C$ if and only if $\eta^{\mathrm{L}}_w$ merges into $\eta^{\mathrm{L}}_z$ on its right side (equivalently, $\eta^{\mathrm{R}}_w$ merges into $\eta^{\mathrm{R}}_z$ on its left side). It can be argued that there is a unique space-filling curve $\eta \colon \mathbbm R\rightarrow\mathbbm C$ that visits the points in $\mathcal C$ in order, is continuous when parametrised by Lebesgue measure (i.e.
$\eta[0,t]$ and $\eta[-t,0]$ both have Lebesgue measure $t$ for any $t>0$), satisfies $\eta(0)=0$, and is such that $(\eta)^{-1}(\mathcal C)$ is a dense set of times (see \cite[Section 4.3]{ms-ig4} and \cite{dms-mating-trees}). The curve $\eta$ does not depend on the choice of $\mathcal C$ and is defined to be whole-plane space-filling SLE$_{\kappa}$ from $\infty$ to $\infty$. For each fixed $z\in\mathbbm C$, it is a.s.\ the case that the left and right outer boundaries of $\eta$ stopped at the first time it hits $z$ are given by the flow lines $\eta_z^{\mathrm{L}}$ and $\eta_z^{\mathrm{R}}$. Since the flow lines have zero Lebesgue measure and $(\eta)^{-1}(\mathcal C)$ is dense, it follows that almost surely for all $s<t$ the Lebesgue measure of $\eta[s,t]$ is exactly $\abs{t-s}$. We remark that for $\kappa=8$, the whole-plane space-filling SLE$_8$ as defined here is equal in law to the two-sided whole-plane SLE$_8$ defined in the previous subsection. In our proofs in the next subsection we will consider $\eta|_{[0,\infty)}$, so we now describe this curve slightly more explicitly. The two flow lines $\eta^{\mathrm{L}}_0$ and $\eta^{\mathrm{R}}_0$ divide $\mathbb{C}$ into two (for $\kappa\geq 8$) or countably many (for $\kappa\in(4,8)$) connected components, such that the boundary of each connected component can be written as the union of a segment of $\eta^{\mathrm{L}}_0$ and a segment of $\eta^{\mathrm{R}}_0$. The curve $\eta|_{[0,\infty)}$ will visit precisely the connected components that lie to the right of $\eta^{\mathrm{L}}_0$ (i.e. $\eta^{\mathrm{L}}_0$ traces its boundary in clockwise direction), and $\eta|_{[0,\infty)}$ restricted to each such component has the law of a chordal space-filling SLE$_\kappa$ connecting the two points of intersection of $\eta^{\mathrm{L}}_0$ and $\eta^{\mathrm{R}}_0$ on its boundary.
For $\kappa\geq 8$ the chordal space-filling SLE$_\kappa$ is just a regular chordal SLE$_\kappa$, while for $\kappa\in(4,8)$ the curve can be constructed by starting with a regular chordal (non-space-filling) SLE$_\kappa$ and filling in the components disconnected from the target point by a space-filling SLE$_\kappa$-type loop. The SLE$_\kappa$-type loop can be obtained via an iterative construction where one first samples a regular chordal (non-space-filling) SLE$_\kappa(\kappa-6)$ and then samples curves with this law iteratively in each complementary component of the curve. The boundary data of $h$ along an angle $\theta\in\{-\pi/2,\pi/2 \}$ flow line is given by $\chi$ times the winding of the curve plus $-\pi/\sqrt{\kappa}-\theta\chi$ (resp.\ $\pi/\sqrt{\kappa}-\theta\chi$) on the left (resp.\ right) side, where the winding is relative to a segment of the curve going straight upwards. We refer to \cite[Section 1]{ms-ig4} for the precise description of this conditional law and in particular to \cite[Figures 1.9 and 1.10]{ms-ig4} for the boundary data and the concept of winding. For $t\geq 0$, let $\fill(\eta[0,t])$ be the hull generated by $\eta[0,t]$, i.e.\ the union of $\eta[0,t]$ and the set of points which it disconnects from $\infty$. For $\kappa\geq 8$ we have $\fill(\eta[0,t])=\eta[0,t]$ while for $\kappa\in(4,8)$ we have that $\eta[0,t]$ is strictly contained in $\fill(\eta[0,t])$. However, in the latter case it still holds a.s.\ that $\eta(\tau_r)$ lies on the boundary of $\fill(\eta[0,\tau_r])$ for $\tau_r=\inf\{t\geq 0\,:\,|\eta(t)|\geq r \}$ and fixed $r>0$, and that $\eta|_{[\tau_r,\infty)}$ stays in $\widehat{\bbC} \setminus \fill(\eta[0,\tau_r])$. See Figure \ref{fig:spfill-sle}. \begin{figure} \centering \includegraphics{fig-spfill-sle} \caption{An illustration of $\eta[0,\tau_r]$ (light blue) for $\kappa\geq 8$ (left) and $\kappa\in(4,8)$ (right), where $\eta$ is a whole-plane space-filling SLE$_\kappa$. 
The complement of $\fill(\eta[0,\tau_r])$ is shown in light yellow. The arcs $\underline A^{\operatorname{L}},\underline A^{\operatorname{R}},\overline A^{\operatorname{L}},\overline A^{\operatorname{R}}$ are the subarcs of $\partial \fill(\eta[0,\tau_r])$ shown in orange, purple, blue and green, respectively, separated by the points $\eta(\tau_r),u_1,u_2,u_3$. (Note that $\underline A^{\operatorname{L}}$ (resp.\ $\underline A^{\operatorname{R}}$) is only a segment of the full orange (resp.\ purple) curve, namely the segment on the boundary of $\fill(\eta[0,\tau_r])$.)} \label{fig:spfill-sle} \end{figure} Fix $r>0$. The set $\partial \fill(\eta[0,\tau_r])$ can be divided into four distinguished arcs, which we denote as follows. \begin{itemize} \item $\underline A^{\mathrm{L}}$ (resp.\ $\underline A^{\mathrm{R}}$) is the arc of $\partial \fill(\eta[0,\tau_r])$ traced by $\eta_0^{\mathrm{L}}$ (resp.\ $\eta_0^{\mathrm{R}}$). \item $\overline A^{\mathrm{L}}$ (resp.\ $\overline A^{\mathrm{R}}$) is the arc of $\partial \fill(\eta[0,\tau_r]) $ not traced by $\eta_0^{\mathrm{L}}$ or $\eta_0^{\mathrm{R}}$ which is adjacent to $\underline A^{\mathrm{L}}$ (resp.\ $\underline A^{\mathrm{R}}$). \end{itemize} Define the $\sigma$-algebra $\mathcal F_r$ by $ \mathcal F_r \mathrel{\mathop:}= \sigma\left( \eta|_{[0,\tau_r]} ,\, h|_{\eta[0,\tau_r]} \right) . $ The following is \cite[Lemma 3.2]{ghm-kpz-sle}. \begin{lemma}\label{le:localset} The set $\eta[0,\tau_r]$ is a local set for $h$ in the sense of~\cite[Lemma 3.9]{ss-gff-contour}. 
In particular, the boundary data for the conditional law of $h|_{\mathbbm C\setminus \fill(\eta[0,\tau_r]) }$ given $\mathcal F_r$ on each of the arcs $\underline A^{\mathrm{L}}$, $\underline A^{\mathrm{R}}$, $\overline A^{\mathrm{L}}$, and $\overline A^{\mathrm{R}}$ coincides with the boundary data of the corresponding flow line of $h$, and the conditional law of $h|_{\mathbbm C\setminus \fill(\eta[0,\tau_r]) }$ is that of a Dirichlet GFF in $\mathbbm C\setminus \fill(\eta[0,\tau_r])$ with the given boundary data. \end{lemma} Let $(D,a)$ be as in \eqref{eq:Da}, and let $\u=(u_1,u_2,u_3)$ be distinct points on $\partial D\setminus \{a \}$ such that $a,u_1,u_2,u_3$ are ordered counterclockwise. Let $\widetilde h$ be a GFF in $D$ whose law is the one in Lemma \ref{le:localset} when $D=\mathbbm C\setminus \fill(\eta[0,\tau_r])$, where $a,u_1,u_2,u_3$ are the points of intersection of the boundary arcs $\underline A^{\mathrm{L}}$, $\underline A^{\mathrm{R}}$, $\overline A^{\mathrm{L}}$, and $\overline A^{\mathrm{R}}$. Then since the conditional law of $\eta$ given $\mathcal{F}_r$ depends only on $(D,a,\u)$, we can define a measure $\wh\nu_{D;a\to\infty;\u}$ on curves from $a$ to $\infty$ in $D$ that describes this conditional law. Consider the pair $(\mathbbm C\setminus \fill(\eta[0,\tau_r]),\eta(\tau_r))$ and let $f\colon\mathbbm C\setminus \fill(\eta[0,\tau_r])\to\mathbbm{D}$ be the conformal map described right below \eqref{eq:Da}, which in particular satisfies $f(\eta(\tau_r))=1$. Define $\wh h$ by \begin{equation} \wh h \mathrel{\mathop:}= h \circ f^{-1} - \chi \operatorname{arg} (f^{-1})' . \label{eq:ig-coord-ch} \end{equation} Then $\wh h$ is a GFF on $\mathbbm D$ with Dirichlet boundary data determined by $f(u_1),f(u_2),f(u_3)$ plus an $\arg$ singularity at $f(\infty)$, where we pick an arbitrary choice of branch cut for the $\arg$ function; picking a different branch cut has the effect of adding a multiple of $4\pi\chi$ in the region between the two branch cuts.
In particular, the law of $\wh h$ (modulo $2\pi\chi$) depends only on the location of the points $f(u_1),f(u_2),f(u_3),f(\infty)$. On each of the four arcs $f(\overline A^{\mathrm{L}}),f(\overline A^{\mathrm{R}}),f(\underline A^{\mathrm{L}}),f(\underline A^{\mathrm{R}})$ on $\partial\mathbbm{D}$ the boundary data will be given by a constant (depending on which arc we consider) plus $\chi$ times the winding of $\partial\mathbbm{D}$ viewed as a curve. The image of $\eta|_{\mathbbm{R}\setminus[0,\tau_r]}$ under $f$ can be constructed via the flow lines of $\wh h$ exactly as above. The boundary data of $\wh h$ along its flow line is given by $\pm \lambda'$ plus $\chi$ times the winding of the flow line, except that there is a jump of $4\pi\chi$ when crossing the branch cut in clockwise direction (i.e. $-2\chi$-flow line boundary conditions in the terminology of \cite{ms-ig4}). As in the paragraph right after Lemma \ref{le:localset}, we can define a measure $\wh\nu_{\mathbbm D,1\to z_\infty,\wh{\u}}$ on curves in $\mathbbm D$ from $1$ to $z_\infty\in\mathbbm D$ that describes the conditional law of $f\circ\eta|_{[\tau_r,\infty)}$ given $\mathcal{F}_r$, where $z_\infty=f(\infty)$ and $\wh{\u}=(f(u_1),f(u_2),f(u_3))$. This conditional law can be explicitly defined in terms of the flow lines of $\wh h$. The flow lines started from $f(u_1)$ and $f(u_3)$ of angle $\pi/2$ and $-\pi/2$, respectively, will end at $f(\infty)$, and these two flow lines divide $\mathbb{D}$ into two (for $\kappa\geq 8$) or countably many (for $\kappa\in(4,8)$) connected components. The curve $f\circ\eta|_{[\tau_r,\infty)}$ visits precisely the complementary components that lie to the right of the flow line started from $f(u_1)$ and has the law of a chordal space-filling SLE$_\kappa$ in each such component. \section{Upper bounds} \label{se:upper} In this section we prove the upper bounds in our main results (\cref{thm:main_lil,thm:main_moc,thm:main_var}).
The upper bounds hold for general stochastic processes whose increments satisfy a suitable moment condition, and we state these general results in \cref{sec:variation-upper-general}. In \cref{se:ss_upper,se:sle_upper} we will prove that SLE satisfies these conditions, which in particular implies the upper bound in \cref{eq:mink-tail}. We phrase part of the argument in \cref{se:ss_upper} in a more general setting for processes with a suitable scaling property and Markov-type increment condition. In \cref{se:01laws} we prove zero-one laws for some quantities related to SLE which will imply that the constants $c_0,c_1$ in \cref{thm:main_lil,thm:main_moc,thm:main_var} are deterministic. Using the earlier results in the section we get that these constants are finite (but we do not know at this point whether they are positive; this will be proved in Section \ref{sec:sle-lower-bounds}). In \cref{se:ball_filling}, we prove \cref{thm:ball_filling}. \subsection{Regularity upper bounds under increment moment conditions} \label{sec:variation-upper-general} In this section, we let $\Phi,\varphi\colon {[0,\infty)} \to {[0,\infty)}$ be convex self-homeomorphisms with $\Phi(1) = \varphi(1) = 1$. Suppose that there exist $R>1$ and $n_0 \ge 1$, $n_0 \in \mathbb{N}$ such that \begin{align} &\Phi(x)\Phi(y) \le \Phi(R^2 xy) \quad\text{for } x,y \ge 1,\label{eq:Phi_mult}\\ &\frac{\varphi(R^k)}{\varphi(R^{k+1})} \le \frac{\varphi(R^{k-1})}{\varphi(R^k)} \quad\text{for } k \ge 1, k \in \mathbb{N} ,\nonumber\\ \text{and}\quad &\sum_{k=0}^\infty \frac{\varphi(R^k)}{\Phi(R^{k+n_0})} < \infty .\nonumber \end{align} We give examples of appropriate functions $\Phi,\varphi$ in \cref{ex:expo,ex:poly} below. 
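To make the three displayed conditions concrete, here is a small numerical check, done in the log-domain to avoid overflow, for the representative choice $\Phi = \varphi$ with $\Phi(x)=\exp(x^\beta-1)$, which satisfies $\Phi(1)=1$ and $\Phi(x)\asymp\exp(cx^\beta)$ as in the first example below. The parameter values $\beta=2$, $R=2$, $n_0=1$ are our own illustrative choices.

```python
import math

# Log-domain sanity check of the three conditions on (Phi, phi) for the
# representative Phi(x) = phi(x) = exp(x**beta - 1), so that Phi(1) = 1.
beta, R, n0 = 2.0, 2.0, 1

def log_Phi(x):
    # log of Phi(x) = exp(x**beta - 1)
    return x**beta - 1.0

# Condition 1: Phi(x) * Phi(y) <= Phi(R^2 * x * y) for x, y >= 1.
for x in (1.0, 1.5, 3.0, 10.0):
    for y in (1.0, 2.0, 7.0):
        assert log_Phi(x) + log_Phi(y) <= log_Phi(R**2 * x * y)

# Condition 2: phi(R^k)/phi(R^{k+1}) is non-increasing in k >= 1.
log_ratios = [log_Phi(R**k) - log_Phi(R**(k + 1)) for k in range(1, 30)]
assert all(r1 >= r2 for r1, r2 in zip(log_ratios, log_ratios[1:]))

# Condition 3: sum_k phi(R^k)/Phi(R^{k+n0}) < infinity; here the terms
# decay doubly exponentially, so the partial sums stabilise immediately.
terms = [math.exp(log_Phi(R**k) - log_Phi(R**(k + n0))) for k in range(50)]
assert sum(terms) < 0.1 and terms[10] < 1e-12
```

For this choice, Condition 1 reduces to $(x^\beta-1)(y^\beta-1)\ge 0$ for $x,y\ge 1$, so it even holds with $R=1$.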
Let $\alpha \in {(0,1]}$, and let $(X_t)_{t \in [0,1]}$ be a separable process with values in a separable Banach space\footnote{The result in \cite{Bed07} is stated for real-valued processes, but this restriction is not needed in the proof.} that satisfies \begin{equation}\label{eq:Xinc} \mathbb{E} \Phi\left(\frac{\abs{X_s-X_t}}{\abs{s-t}^\alpha}\right) \le 1 \quad\text{for all } s,t \in [0,1] . \end{equation} We will give general results on the modulus of continuity, law of the iterated logarithm, and variation regularity for such processes. \subsubsection{Modulus of continuity} We review the following general result, which is a special case of \cite[Corollary 1]{Bed07}. (Recall that a separable process that is uniformly continuous on a suitable countable dense subset is necessarily continuous.) \begin{theorem}[{\cite[Corollary 1]{Bed07}}]\label{thm:moc_upper} There exists a finite constant $K$ (depending on $\Phi$, $\varphi$, $\alpha$) such that every separable process as in \eqref{eq:Xinc} satisfies \[ \mathbb{E} \sup_{s,t \in [0,1]} \Phi\left(\frac{\abs{X_s-X_t}}{2K\tau(\abs{t-s})}\right) \le 1 \] where \[ \tau(t) = \int_0^{t^\alpha} \varphi^{-1}\left(\frac{1}{2u^{1/\alpha}}\right) du . \] \end{theorem} The above theorem in particular says that $X$ a.s.\ admits a modulus of continuity given by a (possibly random) constant multiple of $\tau$ (as long as $\tau$ is finite). The general result in \cite{Bed07} allows for stochastic processes indexed by a compact metric space $T$, and gives a modulus of continuity in terms of a function $\tau(s,t)$ that may depend on both variables $s,t$ (also called ``minorising metric'' in the literature).
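For orientation, we record how $\tau$ evaluates in the exponential case treated in \cref{ex:expo} below; all estimates here are up to constants depending on $c$, $\alpha$, $\beta$. If $\varphi(x) \asymp \exp(cx^\beta)$, then $\varphi^{-1}(y) \asymp (\log y)^{1/\beta}$ for large $y$, so for small $t>0$,
\[
\tau(t) = \int_0^{t^\alpha} \varphi^{-1}\left(\frac{1}{2u^{1/\alpha}}\right) du \asymp \int_0^{t^\alpha} \Big(\log\frac{1}{u}\Big)^{1/\beta}\,du \asymp t^\alpha \Big(\log\frac{1}{t}\Big)^{1/\beta} ,
\]
where the last step holds since the integrand varies slowly near the upper limit (as one checks by integration by parts).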
Note that for any $t_0 > s_0 > 0$, an application of \cref{thm:moc_upper} to the process $t \mapsto (t_0-s_0)^{-\alpha}X_{s_0+(t_0-s_0) t}$ yields \begin{equation}\label{eq:Xsup_inc} \mathbb{E} \sup_{s,t \in [s_0,t_0]} \Phi\left(\frac{\abs{X_s-X_t}}{2K\tau(1)\abs{s_0-t_0}^{\alpha}} \right) \le \mathbb{E} \sup_{s,t \in [s_0,t_0]} \Phi\left(\frac{\abs{X_s-X_t}}{2K\tau\bigl(\abs{s_0-t_0}^{-1}\abs{t-s}\bigr)\abs{s_0-t_0}^{\alpha}} \right) \le 1 . \end{equation} \subsubsection{Law of the iterated logarithm} We get the following theorem via Theorem \ref{thm:moc_upper} and a union bound. In fact, we only use \eqref{eq:Xsup_inc} and not the full statement of \cref{thm:moc_upper}. \begin{theorem}\label{thm:lil_upper} For every non-decreasing function $h\colon {(0,\infty)} \to {(0,\infty)}$ with $\int_1^\infty \frac{du}{h(u)} < \infty$, every separable process as in \eqref{eq:Xinc} satisfies \[ \limsup_{t \downarrow 0} \frac{\abs{X_t-X_0}}{t^\alpha \Phi^{-1}(h(\log\frac{1}{t}))} \le 2K\tau(1) \] with the same $K$ and $\tau$ as in \cref{thm:moc_upper}. \end{theorem} \begin{proof} Assume without loss of generality that $X_0 = 0$. Fix $q \in (0,1)$ and consider the events \[ A_n = \{ \norm{X}_{\infty;[0,q^n]} \le 2K\tau(1) q^{n\alpha} \Phi^{-1}(h(n)) \} . \] By \eqref{eq:Xsup_inc}, we have $\mathbb{P}(A_n^c) \le \frac{1}{h(n)}$, and therefore $\sum_n \mathbb{P}(A_n^c) < \infty$. By the Borel-Cantelli lemma, with probability $1$ all but finitely many $A_n$ occur. If $A_n$ occurs, then for every $t \in [q^{n+1},q^n]$ we have \[ \abs{X(t)} \le \norm{X}_{\infty;[0,q^{n}]} \le 2K\tau(1) q^{n\alpha} \Phi^{-1}(h(n)) \le 2K\tau(1) q^{-\alpha} t^{\alpha}\Phi^{-1}(h(\frac{\log\frac{1}{t}}{\log\frac{1}{q}})) .
\] The result follows by observing that for every $C>0$ and $h$ with $\int \frac{1}{h} < \infty$ we can find $\tilde h$ with $\int \frac{1}{\tilde h} < \infty$ such that $\limsup_{u \to \infty} \frac{\tilde h(Cu)}{h(u)} < 1$, applying the estimate above with $C = \frac{1}{\log\frac{1}{q}}$ and $\tilde h$, and then sending $q \uparrow 1$. \end{proof} \subsubsection{Variation} We now establish variation regularity of processes satisfying \eqref{eq:Xsup_inc}. Such results can be found e.g. in \cite{tay-bm-variation} and \cite[Appendix A.4]{fv-rp-book} for some exponential $\Phi$ as described in \cref{ex:expo}. We generalise their result to the setup of \cref{sec:variation-upper-general}. Moreover, we find our proof conceptually clearer since we construct a natural reparametrisation of $X$ that attains the optimal modulus of continuity $\sigma$. (Note that this is the best modulus that we can expect, cf.\@ \cref{pr:var_lil}.) \begin{theorem}\label{thm:psivar_upper} Suppose that $\Phi$ also satisfies $\int_2^\infty \frac{\log y}{\Phi(y^\alpha)}\,dy < \infty$. Let $h\colon {(0,\infty)} \to {(0,\infty)}$ be a non-decreasing continuous function with $\int_1^\infty \frac{du}{h(u)} < \infty$, and such that \[ \sigma(t) = t^\alpha \Phi^{-1}(h(\log^*\frac{1}{t})) \] is increasing with $\sigma(0^+)=0$. Then there exist $c>0$ (depending on $\Phi$, $\alpha$) and $C>0$ (depending on $\Phi$, $\alpha$, $h$) such that every separable process as in \eqref{eq:Xinc} satisfies \[ \mathbb{P}( [X]_{\psi\text{-var}} > M ) \le \frac{C}{\Phi(cM)} \] with $\psi=\sigma^{-1}$. \end{theorem} To prove this, we construct a parametrisation of $X$ that has modulus $\sigma$. Fix some $M > 0$. Consider the intervals $I_{k,j} = [j2^{-k}, (j+1)2^{-k}]$, and let \begin{align*} s_{k,j} &= \sup\{s \le 1 \mid \sup_{u,v \in I_{k,j}} \abs{X_u-X_v} \le M\sigma(2^{-k}/s) \} , \\ s(t) &= \inf\{ s_{k,j} \mid t \in I_{k,j} \} . \end{align*} The idea is to slow down the path $X$ at time $t$ to speed $s(t)$.
More precisely, define \[ T(t) = \int_0^t \frac{du}{s(u)}. \] Intuitively, this describes the elapsed time after reparametrisation. \begin{lemma} If $T(1) < \infty$, then for all $t_1,t_2 \in [0,1]$, \[ \abs{X_{t_1}-X_{t_2}} \le (2^\alpha+1)M \sigma(\abs{T(t_1)-T(t_2)}) . \] In other words, $X \circ T^{-1}$ has modulus $(2^\alpha+1)M\sigma$. \end{lemma} \begin{proof} Pick $k$ such that $\abs{t_1-t_2} \in [2^{-k-1},2^{-k}]$. Then $t_1$ and $t_2$ lie in the same or in adjacent $I_{k,j}$'s. In the former case we have \[ \begin{split} \abs{X_{t_1}-X_{t_2}} &\le M\sigma(2^{-k}/s_{k,j}) \\ &\le M 2^\alpha \sigma(\abs{t_1-t_2}/s_{k,j}) \\ &\le M 2^\alpha \sigma(\abs{T(t_1)-T(t_2)}) \end{split} \] since $s(t) \le s_{k,j}$ for all $t \in [t_1,t_2]$. In the latter case (say the point $j2^{-k}$ lies between them) we have \[ \abs{X_{t_1}-X_{t_2}} \le \abs{X_{t_1}-X_{j2^{-k}}}+\abs{X_{t_2}-X_{j2^{-k}}} \] and proceed similarly. Note that we have $\abs{t_1-j2^{-k}} \le 2^{-k-1}$ or $\abs{t_2-j2^{-k}} \le 2^{-k-1}$, so for that term we get the same estimate without the factor $2^\alpha$. \end{proof} We claim that \begin{equation}\label{eq:psivar_norm} [X]_{\psi\text{-var}} \le M(2^\alpha+1)T(1)^\alpha . \end{equation} Indeed, for any partition $0 = t_0 < t_1 < ... < t_m = 1$ we have, by the lemma above, \[ \begin{split} \sum_i \psi\left(\frac{\abs{X_{t_i}-X_{t_{i-1}}}}{\widetilde M}\right) &\le \sum_i \psi\left(\frac{(2^\alpha+1)M}{\widetilde M}\sigma(\abs{T(t_i)-T(t_{i-1})})\right) \\ &\le \sum_i \left(\frac{(2^\alpha+1)M}{\widetilde M}\right)^{1/\alpha} \abs{T(t_i)-T(t_{i-1})} \\ &= \left(\frac{(2^\alpha+1)M}{\widetilde M}\right)^{1/\alpha} T(1) . \end{split} \] \begin{proof}[Proof of \cref{thm:psivar_upper}] By \eqref{eq:psivar_norm}, we have \[ \begin{split} \mathbb{P}( [X]_{\psi\text{-var}} > 6M ) &\le \mathbb{P}( T(1)^\alpha > 2 ) \\ &\le \mathbb{P}\left( \int_0^1 \frac{1}{s(t)} 1_{s(t)<1}\,dt > 1 \right) .
\end{split} \] We now estimate \[ \mathbb{E} \int_0^1 \frac{1}{s(t)} 1_{s(t)<1}\,dt = \int_0^1 \mathbb{E} \frac{1}{s(t)} 1_{s(t)<1}\,dt \] and then \[ \begin{split} \mathbb{E}\left[ \frac{1}{s(t)}1_{s(t)<1} \right] &= \int_0^\infty \mathbb{P}\left( \frac{1}{s(t)}1_{s(t)<1} > y \right) dy \\ &= \int_0^\infty \mathbb{P}\left( s(t) < \frac{1}{y} \wedge 1 \right) dy . \end{split} \] For $y \ge 1$, we get from \eqref{eq:Xsup_inc} \[ \begin{split} \mathbb{P}( s_{k,j} < 1/y ) &= \mathbb{P}( \sup_{u,v \in I_{k,j}} \abs{X_u-X_v} > M\sigma(2^{-k}y) ) \\ &= \mathbb{P}\left( \sup_{u,v \in I_{k,j}} \frac{\abs{X_u-X_v}}{2^{-k\alpha}} > M y^\alpha \Phi^{-1}(h(\log^*(2^{k}/y))) \right) \\ &\le \frac{1}{\Phi( \frac{M y^\alpha \Phi^{-1}(h(\log^*(2^{k}/y)))}{2K\tau(1)} )} \\ &\le \frac{1}{\Phi(\frac{M}{2K\tau(1)R^4})\Phi(y^\alpha) h(\log^* (2^k/y))} \end{split} \] where we have applied \eqref{eq:Phi_mult} in the last step. Hence, \[ \begin{split} \mathbb{P}( s(t) < 1/y ) \le \sum_{k \in \mathbb{N}} \mathbb{P}( s_{k,j} < 1/y ) &\le \frac{1}{\Phi(\frac{M}{2K\tau(1)R^4})\Phi(y^\alpha)} \sum_{k \in \mathbb{N}} \frac{1}{h(\log^* (2^k/y))} \\ &\lesssim \frac{1}{\Phi(\frac{M}{2K\tau(1)R^4})\Phi(y^\alpha)} \left( \log y + \sum_{2^k > y} \frac{1}{h(k\log 2 - \log y)} \right) \\ &\le \frac{1}{\Phi(\frac{M}{2K\tau(1)R^4})\Phi(y^\alpha)} (\log y + C) , \end{split} \] and finally \[ \int_0^\infty \mathbb{P}\left( s(t) < \frac{1}{y} \wedge 1 \right) dy \le \frac{1}{\Phi(\frac{M}{2K\tau(1)R^4})} \left( C + \int_2^\infty \frac{\log y + C}{\Phi(y^\alpha)}\,dy \right) \lesssim \frac{1}{\Phi(\frac{M}{2K\tau(1)R^4})} \] by the extra assumption on $\Phi$. Thus we have shown \[ \mathbb{P}( [X]_{\psi\text{-var}} > 6M ) \le \mathbb{E} \int_0^1 \frac{1}{s(t)} 1_{s(t)<1}\,dt \lesssim \frac{1}{\Phi(\frac{M}{2K\tau(1)R^4})} . \] \end{proof} \subsubsection{Examples} \begin{example}\label{ex:expo} If the following holds for $\beta>0$ \[ \Phi(x) \asymp \exp(cx^\beta) ,\] we can pick $\varphi = \Phi$. 
Hence, the modulus of continuity is given by \[ \tau(t) \asymp t^\alpha (\log^*\frac{1}{t})^{1/\beta} , \] the law of the iterated logarithm reads \[ \sigma(t) \asymp t^\alpha (\log\log\frac{1}{t})^{1/\beta} , \] and the variation regularity is \[ \psi(x) \asymp x^{1/\alpha} (\log^*\logp\frac{1}{x})^{-1/(\alpha\beta)} . \] In particular, we recover the results of \cite[Section A.4]{fv-rp-book}, which consider the case $\beta=2$, and generalise them to arbitrary $\beta>0$. Note that Brownian motion satisfies this with $\alpha = 1/2$, $\beta = 2$. For SLE, we will use this result with $\alpha = 1/d$, $\beta = \frac{d}{d-1}$, where $d$ is the dimension of the process as in \eqref{eq:d}. \end{example} \begin{example}\label{ex:poly} Suppose \[ \Phi(x) = x^p, \quad p>\frac{1}{\alpha} . \] Define $h$ by e.g.\ $h(u)=u^{1+\varepsilon}$ or $h(u)=u(\log^* u)(\log^*\logp u)\cdots(\log^*\cdots\log^* u)^{1+\varepsilon}$, and set $\varphi(x) = \frac{x^p}{h(\log^* x)}$. Then the modulus of continuity is given by \[ \tau(t) \asymp t^{\alpha-1/p}\, h(\log^*\frac{1}{t})^{1/p} , \] the law of the iterated logarithm reads \[ \sigma(t) = t^\alpha\, h(\log\frac{1}{t})^{1/p} , \] and the variation regularity is \[ \psi(x) \asymp x^{1/\alpha}\, h(\log^*\frac{1}{x})^{-1/(p\alpha)} . \] This sharpens the results obtained from the Besov-Hölder embedding (also known as Kolmogorov's continuity theorem) in \cite[Corollary A.2]{fv-rp-book} and the Besov-$p$-variation embedding in \cite[Corollary A.3]{fv-rp-book}, which provide corresponding statements with Hölder and variation exponents arbitrarily close to $\alpha-1/p$ and $1/\alpha$, respectively. \end{example} \subsection{Diameter upper bound given scale invariance and Markov-type increments} \label{se:ss_upper} In this and the next subsection we show that SLE satisfies the conditions of \cref{ex:expo}. The argument in this subsection concerns general self-similar processes whose increments satisfy a Markov-type property.
(We remark here that the argument does \emph{not} apply to stable processes since they violate \eqref{eq:ss_markov} due to large jumps.) In what follows, $\eta$ can be any self-similar process of index $1/d < 1$, in the sense that $t \mapsto \eta(\lambda t)$ has the same law as $t \mapsto \lambda^{1/d}\eta(t)$ for any $\lambda >0$. Denote by $(\mathcal{F}_t)_{t \ge 0}$ the filtration generated by $\eta$, and $\tau_r = \inf\{ t \ge 0 \mid \abs{\eta(t)} \ge r\}$. Suppose there exist $p<1$ and $l>0$ such that \begin{equation}\label{eq:ss_markov} \mathbb{P}( \tau_{r+6} \le \tau_r+l \mid \mathcal{F}_{\tau_r} ) < p \end{equation} for any $r>0$. For such processes, we show the following statement. \begin{proposition}\label{pr:ss_tail} There exist $l>0$ and $c_1,c_2>0$ such that \[ \mathbb{P}( \tau_{r+r'} \le \tau_r+l \mid \mathcal{F}_{\tau_r} ) < c_1\exp(-c_2 (r')^{d/(d-1)}) \] for all $r,r'>0$. \end{proposition} The number $6$ in $\tau_{r+6}$ in \eqref{eq:ss_markov} is not significant, and can be replaced by any $s>0$. Of course, in all the statements, the constants may depend on $s$. \begin{lemma}\label{le:ss_large_inc} For any $\varepsilon > 0$ there exists $c>0$ such that \[ \mathbb{P}( \tau_{r+c} \le \tau_r+l \mid \mathcal{F}_{\tau_r} ) < \varepsilon \] for any $r>0$. \end{lemma} \begin{proof} This follows by applying \eqref{eq:ss_markov} iteratively. \end{proof} \begin{lemma}\label{le:ss_iter} There exist $l>0$ and $c_1,c_2>0$ such that \[ \mathbb{P}( \tau_{r+r'} \le \tau_r+lr' \mid \mathcal{F}_{\tau_r} ) < c_1\exp(-c_2 r') \] for all $r,r'>0$. \end{lemma} \begin{proof} Pick $\varepsilon < 1/4$ and let $c$ be the constant from \cref{le:ss_large_inc}. It suffices to show \[ \mathbb{P}( \tau_{r+r'c} \le \tau_r+\frac{l}{2}r' \mid \mathcal{F}_{\tau_r} ) < c_1\exp(-c_2 r') \] for $r' \in 2\mathbb{N}$.
On the event that $\tau_{r+r'c} \le \tau_r+\frac{l}{2}r'$ there must exist integers $k_1,...,k_{r'/2}$ such that $\tau_{r+k_i c} \le \tau_{r+(k_i-1)c}+l$ for $i=1,...,r'/2$. For each such choice of $k_1,...,k_{r'/2}$, we can apply \cref{le:ss_large_inc} iteratively (each time conditionally on $\mathcal{F}_{\tau_{r+(k_i-1)c}}$), so that \[ \mathbb{P}( \tau_{r+k_i c} \le \tau_{r+(k_i-1)c}+l \text{ for all } i \mid \mathcal{F}_{\tau_r}) \le \varepsilon^{r'/2} . \] Since the number of such choices of $k_1,...,k_{r'/2}$ is $\binom{r'}{r'/2} \le 2^{r'}$, we get \[ \mathbb{P}( \tau_{r+r'c} \le \tau_r+\frac{l}{2}r' \mid \mathcal{F}_{\tau_r}) \le 2^{r'} \varepsilon^{r'/2} . \] The claim follows since we picked $\varepsilon < 1/4$. \end{proof} Note that we do not need the scaling property in the proof of Lemma \ref{le:ss_iter}, so this lemma holds under only the assumption \eqref{eq:ss_markov}. Combining the lemma with the scaling property of the process we obtain \cref{pr:ss_tail}. \begin{proof}[Proof of \cref{pr:ss_tail}] Let $\lambda = (r')^{1/(d-1)}$. By the self-similarity of $\eta$, we have that $t \mapsto \lambda\eta(\lambda^{-d}t)$ has the same law as $\eta$. Hence the desired probability is equal to \[ \mathbb{P}( \tau_{\lambda r+\lambda r'} \le \tau_{\lambda r}+\lambda^d l \mid \mathcal{F}_{\tau_{\lambda r}} ) . \] We have chosen $\lambda$ such that $\lambda r' = \lambda^d = (r')^{d/(d-1)}$. Therefore, by \cref{le:ss_iter}, the probability is bounded by \[ c_1\exp(-c_2 (r')^{d/(d-1)}) . \] \end{proof} \subsection{Markov-type increment bound for SLE} \label{se:sle_upper} In this subsection we verify \eqref{eq:ss_markov} for SLE. Once we have done this, \cref{pr:ss_tail} and the self-similarity of $\eta$ immediately imply the following result. The proposition is a strengthening of \cite[Lemma 1.7]{zhan-hoelder} which proved that $\mathbb{E}[\operatorname{diam}(\eta[0,1])^a]<\infty$ for any $a>0$. 
\begin{proposition}\label{pr:diam_tail_upper}
Let $\eta$ be a two-sided whole-plane \sle{\kappa}{} or a whole-plane space-filling \sle{\kappa}{} as specified in \cref{se:results}. There exists $c > 0$ such that
\[ \mathbb{E}\exp\left(c\left(\frac{\operatorname{diam}(\eta[s,t])}{\abs{s-t}^{1/d}}\right)^{d/(d-1)}\right) < \infty \]
for all $s<t$.
\end{proposition}
In particular, the conditions from \cref{sec:variation-upper-general,ex:expo} are satisfied with $\alpha=1/d$, $\beta=\frac{d}{d-1}$. This proves the upper bounds in \cref{thm:main_lil,thm:main_moc,thm:main_var}. In fact, we will prove a stronger result than \cref{pr:diam_tail_upper} below, namely that there exist finite constants $c_1,c_2>0$ such that for any $(D,a)$ as in \eqref{eq:Da} and $u \in \partial D \setminus \{a\}$ we have
\begin{equation}
\nu_{D;a\to\infty;u}(\operatorname{Cont}(\eta[0,\tau_{\abs{a}+r}]) < l) \le c_1 \exp\left(-c_2 l^{-1/(d-1)} r^{d/(d-1)}\right)
\label{eq:cont_utail_cond}
\end{equation}
for all $r>0$. The same is true for $\wh\nu_{D;a\to\infty;\u}$ defined in \cref{se:prelim_spf}. Since we parametrise $\eta$ by its Minkowski content, we can phrase the condition \eqref{eq:ss_markov} as follows. (Recall that the Minkowski content is additive over SLE curve segments.) As before, we write $\tau_r = \inf\{ t \ge 0 \mid \abs{\eta(t)} \ge r\}$.
\begin{lemma}\label{le:sle_markov_inc}
There exist $l>0$ and $p<1$ such that for any $r>0$ we have
\[ \P\left(\operatorname{Cont}(\eta[\tau_r,\tau_{r+6}]) < l \mmiddle| \eta\big|_{[0,\tau_r]} \right) < p . \]
\end{lemma}
\begin{proof}[Proof of \cref{le:sle_markov_inc} in case of space-filling \sle{\kappa}{}]
In the case $r > 1$, the statement (even with $\tau_{r+1}$ instead of $\tau_{r+6}$) is precisely \cite[Lemma 3.1]{ghm-kpz-sle}. In the case $r \le 1$, conditioning on $\eta[0,\tau_2]$ and applying the same lemma we get
\[ \P\left(\operatorname{Cont}(\eta[\tau_2,\tau_3]) < l \mmiddle| \eta\big|_{[0,\tau_2]} \right) < p .
\]
Since $\operatorname{Cont}(\eta[\tau_r,\tau_{r+6}]) \ge \operatorname{Cont}(\eta[\tau_2,\tau_3])$ for $r \le 1$, this implies \cref{le:sle_markov_inc} for arbitrary $r > 0$.
\end{proof}
\begin{remark}\label{rm:ball_filling}
In the case of space-filling \sle{\kappa}{}, the proof shows that there exist $l_0,\delta>0$ such that
\[ \wh\nu_{D;a\to\infty;\u}\left( \eta[0,\tau_{\abs{a}+r}] \text{ contains $\delta r$ disjoint balls of radius $l_0$} \right) > 1-c_1\exp(-c_2 r) . \]
By scaling, we get
\begin{equation}
\wh\nu_{D;a\to\infty;\u}\left( \eta[0,\tau_{\abs{a}+r}] \text{ contains $\delta l^{-1} r^2$ disjoint balls of radius $l_0 l r^{-1}$} \right) > 1-c_1\exp(-c_2 l^{-1} r^2) .
\label{eq:balls}
\end{equation}
Notice that on this event we have
\[ \operatorname{Cont}(\eta[0,\tau_{\abs{a}+r}]) \gtrsim l , \]
so \eqref{eq:balls} is a stronger version of \cref{pr:diam_tail_upper} and \eqref{eq:cont_utail_cond}.
\end{remark}
In the remainder of the section, we prove \cref{le:sle_markov_inc} for two-sided whole-plane \sle{\kappa}{}. Recall from \cref{se:prelim_sle} that the restriction $\eta\big|_{[0,\infty)}$ is a whole-plane \sle{\kappa}{}$(2)$ from $0$ to $\infty$ with force point at $0$. Therefore the statement is equivalent when we consider whole-plane \sle{\kappa}{}$(2)$.
\begin{lemma}
There exist $l>0$ and $p<1$ such that for any $(D,a)$ as in \eqref{eq:Da} and $u \in \partial D \setminus \{a\}$ we have
\[ \nu_{D;a\to\infty;u}(\operatorname{Cont}(\eta[0,\tau_{\abs{a}+6}]) < l) < p . \]
\end{lemma}
\begin{proof}
We show the statement in the case $\abs{a} \ge 1$, and with $\tau_{\abs{a}+5}$ instead of $\tau_{\abs{a}+6}$. In the case $\abs{a} < 1$, considering $\eta\big|_{[\tau_1,\tau_6]}$ conditionally on $\eta[0,\tau_1]$ gives us
\[ \nu_{D;a\to\infty;u}(\operatorname{Cont}(\eta[\tau_1,\tau_6]) < l) < p , \]
which implies the claim since $\operatorname{Cont}(\eta[0,\tau_{\abs{a}+6}]) \ge \operatorname{Cont}(\eta[\tau_1,\tau_6])$.
Let $f\colon D \to \mathbb{D}$ be the conformal map described in the paragraph below \eqref{eq:Da}, and $\varepsilon_0 > 0$ and $\alpha$ as in \cref{le:ghm}. Denote by $B(\alpha,\varepsilon_0)$ the $\varepsilon_0$-neighbourhood of $\alpha$.
By the definitions above, $f\circ\eta$ leaves $B(\alpha,\varepsilon_0)$ before $\eta$ hits radius $\abs{a}+5$. Although $\alpha$ depends on $D$, it can be picked among a finite number of nearest-neighbour paths. Therefore it suffices to show that for given $\varepsilon_0$ and $\alpha$, there exist $l>0$ and $q>0$ such that the following holds. Let $w \in \mathbb{D} \setminus B(\alpha,\varepsilon_0)$, and let $\nu_{\mathbb{D};1\to w;u}$ denote radial \sle{\kappa}$(2)$ with a force point $u \in \partial\mathbb{D}$. Then
\[ \nu_{\mathbb{D};1\to w;u}(\operatorname{Cont}(\eta[0,\sigma_{\alpha,\varepsilon_0}] \cap B(0,\varepsilon_0/2)) > l) > q \]
where $\sigma_{\alpha,\varepsilon_0}$ is the exit time of $B(\alpha,\varepsilon_0)$. Indeed, since $\abs{(f^{-1})'} \asymp 1$ in a neighbourhood of $0$ (due to Koebe's distortion theorem), by the transformation rule for Minkowski content \eqref{eq:cont_transf} this will imply that $f^{-1}(\eta[0,\sigma_{\alpha,\varepsilon_0}] \cap B(0,\varepsilon_0/2))$ has Minkowski content at least a constant times $l$.
To show the claim, we need to find $l$ and $q$ such that the bound holds uniformly over all target points and force points. For concreteness, let us map $(\mathbb{D},1,0)$ to $(\mathbb{H},0,i)$. Then the image is an \sle{\kappa}{} in $\mathbb{H}$ with force points $(u,2)$, $u \in \mathbb{R}$, and $(w,\kappa-8)$, $w \in \mathbb{H} \setminus B(\alpha,\varepsilon_0)$ (cf.\@ \cite[Theorem 3]{sw-sle-coordinate-change}). Since the force point $w$ lies outside $B(\alpha,\varepsilon_0)$, we can disregard it until the exit time of $B(\alpha,\varepsilon_0/2)$ since the density between the corresponding SLE measures is uniformly bounded, regardless of the locations of $u$ and $w$ (cf.\@ \cref{le:sle_abs_cont}). Hence we are reduced to proving the following statement.
\end{proof}
\begin{lemma}
Let $\alpha$ be a simple path in $\mathbb{H}$ from $0$ to $i$, and $\varepsilon_0 > 0$. There exist $l>0$ and $q>0$ such that the following holds.
Let $\nu_{\mathbb{H},0\to\infty;u}$ denote chordal \sle{\kappa}$(2)$ with a force point $u \in \partial\mathbb{H}$. Then
\[ \nu_{\mathbb{H},0\to\infty;u}(\operatorname{Cont}(\eta[0,\sigma_{\alpha,\varepsilon_0}] \cap B(i,\varepsilon_0)) > l) > q \]
where $\sigma_{\alpha,\varepsilon_0}$ is the exit time of $B(\alpha,\varepsilon_0)$.
\end{lemma}
\begin{proof}
Case 1: Suppose that $u \notin B(0,\varepsilon_0)$. Then the density between the laws of \sle{\kappa}$(2)$ and \sle{\kappa}$(0)$ is uniformly bounded until the exit of $B(\alpha,\varepsilon_0/2)$ (cf.\@ \cref{le:sle_abs_cont}). Therefore it suffices to consider \sle{\kappa}$(0)$. There is a positive probability that $\eta$ follows $\alpha$ within $\varepsilon_0/2$ distance. Moreover, since the Minkowski content of each sub-interval of the curve is almost surely positive, for sufficiently small $l>0$ there is a positive probability that additionally $\operatorname{Cont}(\eta[0,\sigma_{\alpha,\varepsilon_0}] \cap B(i,\varepsilon_0)) > l$.
Case 2: Suppose $u$ is arbitrary. Let $t>0$ be a small time. Denote by $f_t$ the conformal map from $\mathbb{H} \setminus \fill(\eta[0,t])$ to $\mathbb{H}$ with $f_t(\eta(t))=0$ and $f_t(z) = z+O(1)$ as $z \to \infty$. Then there exist $c_1>0$ and $q>0$ (independent of $u$) such that with probability at least $q$ the following occur:\\
1. $\eta[0,t] \subseteq B(0,\varepsilon_0/4)$,\\
2. $\abs{f_t(u)} \ge c_1$.\\
Indeed, $u_t = f_t(u)$ is a Bessel process of positive index started at $u$ (this follows directly from the definition of \sle{\kappa}$(2)$). By the monotonicity of Bessel processes in the starting point, it suffices to consider $u=0$. The claim follows since a Bessel process of positive index evaluated at a deterministic time is almost surely positive. It follows that if $t$ is chosen small enough, we have $\abs{f_t(z)-z} < \varepsilon_0/2$ for every $z \in \partial B(\alpha,\varepsilon_0) \subseteq \mathbb{H}$, and therefore $B(\alpha,\varepsilon_0/2) \subseteq f_t(B(\alpha,\varepsilon_0))$.
Then, applying Case 1 to $f_t \circ \eta\big|_{[t,\infty)}$ with $\varepsilon_0$ replaced by $\varepsilon_0/2 \wedge c_1$ implies the claim.
\end{proof}
\subsection{Zero-one laws and upper bounds on the regularity of SLE}
\label{se:01laws}
In this subsection, we prove the upper bounds in our main results (using \cref{thm:moc_upper,thm:lil_upper,thm:psivar_upper}); the combined statement is \cref{pr:upperbounds} below. To show that the constants $c_0,c_1$ are deterministic, we prove that they satisfy zero-one laws. We begin by proving analogues of Blumenthal's and Kolmogorov's zero-one laws for SLE.
Define $\mathcal{F}_t = \sigma(\eta(s), s \in [0,t])$, and $\mathcal{F}_{t+} = \bigcap_{s>t} \mathcal{F}_s$. Moreover, denote by $\mathcal{G}$ the shift-invariant $\sigma$-algebra, i.e. the sub-$\sigma$-algebra of $\bigcap_{t > 0} \sigma(\eta(s), s>t)$ consisting of events $A$ such that $(\eta(t))_{t \ge 0} \in A$ if and only if $(\eta(t_0+t))_{t \ge 0} \in A$ for any $t_0 \ge 0$. Note that we are considering paths restricted to $t \in \mathbb{R}^+$.
\begin{proposition}\label{pr:01law}
For two-sided whole-plane \sle{\kappa}{} and whole-plane space-filling \sle{\kappa}{} as in \cref{it:non-spfill,it:spfill}, the $\sigma$-algebras $\mathcal{F}_{0+}$ and $\mathcal{G}$ are trivial (in the sense that they contain only events of probability $0$ and $1$).
\end{proposition}
\begin{proof}
Denote by $s(t)$ the logarithmic capacity of $\eta[0,t]$, and write $\hat\mathcal{F}_u \mathrel{\mathop:}= \mathcal{F}_{s^{-1}(u)}$ and $\hat\mathcal{F}_{-\infty+} \mathrel{\mathop:}= \bigcap_{u \in \mathbb{R}} \hat\mathcal{F}_u$. Recall that $e^{s(t)}$ is comparable to $\operatorname{diam}(\eta[0,t])$. Note also that $s^{-1}(u) = \operatorname{Cont}(\hat\eta((-\infty,u]))$ where $\hat\eta$ is $\eta$ parametrised by capacity.
We claim that $\hat\mathcal{F}_{-\infty+} = \mathcal{F}_{0+}$. Let $A \in \hat\mathcal{F}_{-\infty+}$. We want to show that $A \in \mathcal{F}_t$ for any $t>0$. Since $s(t) \downarrow -\infty$ as $t \downarrow 0$, we can write $A = \bigcup_{u \in \mathbb{Z}} A \cap \{s^{-1}(u)\le t\}$.
By definition, since $A \in \hat\mathcal{F}_u$ for any $u$, we have $A \cap \{s^{-1}(u)\le t\} \in \mathcal{F}_t$, and hence $A \in \mathcal{F}_t$. The other inclusion is analogous.
For whole-plane space-filling \sle{\kappa}{}, it is shown in \cite[Lemma 2.2]{hs-mating-eucl} that for a whole-plane GFF modulo $2\pi\chi$, the $\sigma$-algebra $\bigcap_{r>0} \sigma( h\big|_{B(0,r)})$ is trivial. Since \cref{le:localset} implies $\hat\mathcal{F}_{-\infty+} \subseteq \bigcap_{r>0} \sigma( h\big|_{B(0,r)})$, the former is also trivial.
We now prove the proposition for two-sided whole-plane \sle{\kappa}{}, or rather whole-plane \sle{\kappa}$(2)$ since we are restricting to $t \ge 0$. Let $\hat\eta$ denote whole-plane \sle{\kappa}$(2)$ parametrised by capacity. Since initial segments of $\hat\eta$ and $\eta$ determine each other, we have the identity $\hat\mathcal{F}_{-\infty+} = \bigcap_{u\in\mathbb{R}}\sigma(\hat\eta(u'), u' \le u)$. We show that $\hat\mathcal{F}_{-\infty+}$ is independent of $\sigma(\hat\eta(u), u \in \mathbb{R})$, which in particular implies that $\hat\mathcal{F}_{-\infty+}$ is independent of itself, and therefore trivial.
Fix arbitrary $u_1<...<u_r$, and let $g$ be some bounded continuous function. From the Markov property of the driving process and \cite[Lemma 4.20]{law-conformal-book}, it follows that
\[ \mathbb{E}[ g(\hat\eta(u_1),...,\hat\eta(u_r)) \mid \hat\mathcal{F}_u ] \to \mathbb{E}[ g(\hat\eta(u_1),...,\hat\eta(u_r)) ] \quad\text{as } u \downarrow -\infty . \]
On the other hand, by backward martingale convergence, we also have
\[ \mathbb{E}[ g(\hat\eta(u_1),...,\hat\eta(u_r)) \mid \hat\mathcal{F}_u ] \to \mathbb{E}[ g(\hat\eta(u_1),...,\hat\eta(u_r)) \mid \hat\mathcal{F}_{-\infty+} ] \quad\text{as } u \downarrow -\infty \]
which implies that $\sigma(\hat\eta(u_1),...,\hat\eta(u_r))$ is independent of $\hat\mathcal{F}_{-\infty+}$.
Since this is true for any choice of $u_1<...<u_r$, we must have that $\sigma(\hat\eta(u), u \in \mathbb{R})$ is independent of $\hat\mathcal{F}_{-\infty+}$.
For the triviality of $\mathcal{G}$ we show the following statement: Denote by $\widetilde\eta\colon (-\infty,\infty)\to\mathbb{C}$ the time-reversal of $\eta$, parametrised by log conformal radius of its complement relative to the origin, and by $\widetilde\mathcal{F}_{-\infty+}$ the infinitesimal $\sigma$-algebra of $\widetilde\eta$. Then we claim
\begin{equation}\label{eq:sigmaalg_reversal}
\mathcal{G} = \widetilde\mathcal{F}_{-\infty+} \quad\text{(modulo null sets).}
\end{equation}
This will imply the triviality of $\mathcal{G}$ since for whole-plane \sle{\kappa}{}$(2)$, the reversibility and the previous step imply that $\widetilde\mathcal{F}_{-\infty+}$ is trivial, and for space-filling \sle{\kappa}{}, we have triviality of $\bigcap_{R>0}\sigma(h\big|_{\mathbb{C}\setminus B(0,R)})$ by \cite[Lemma 2.2]{hs-mating-eucl}.
Now we show \eqref{eq:sigmaalg_reversal}. The inclusion $\widetilde\mathcal{F}_{-\infty+} \subseteq \mathcal{G}$ is easy to see. Indeed, for any fixed $t_0$ the time-reversals (parametrised by log conformal radius as before) of $\eta$ and $\eta(t_0+\cdot)$ agree until hitting some circle $\partial B_r(0)$ (with $r$ random and depending on $t_0$).
It remains to show $\mathcal{G} \subseteq \widetilde\mathcal{F}_{-\infty+}$. For any $A \in \mathcal{G}$ and $R>0$, we need to show $A \in \widetilde\mathcal{F}_R$, where $\widetilde\mathcal{F}_R$ is the $\sigma$-algebra generated by $\widetilde\eta$ up to the first hitting of circle $\partial B_R(0)$ (recall that conformal radius and radius are comparable up to a factor). Note that for any two curves $\widetilde\eta_1,\widetilde\eta_2$ starting at $\infty$ that agree until hitting circle $\partial B_R(0)$, their time-reversals agree after their last exit of $B_R(0)$.
Consequently, when we parametrise their time-reversals by Minkowski content (denoted by $\eta_1,\eta_2$), they will agree up to a time-shift after their last exit of $B_R(0)$. In particular, $1_A(\eta_1) = 1_A(\eta_2)$. This implies $\mathbb{P}(A \mid \widetilde\mathcal{F}_R)$ takes only the values $0,1$, or equivalently $A \in \widetilde\mathcal{F}_R$ (modulo null sets).
\end{proof}
\begin{remark}
We believe that also the tail $\sigma$-algebra $\bigcap_{t > 0} \sigma(\eta(s), s>t)$ is trivial. Proving this requires extra work since the cumulative Minkowski content and hence the parametrisation of $\eta$ at large times \emph{does} depend on its initial part. An interesting consequence of the tail triviality would be that the measure-preserving maps $T_{t_0}\colon \eta \mapsto \eta(t_0+\cdot)-\eta(t_0)$ (now seen as paths on $t \in \mathbb{R}$) are ergodic, i.e.\@ any event $A \in \sigma(\eta)$ that is invariant under $T_{t_0}$ has probability $0$ or $1$.
\end{remark}
The above proposition implies that the limits $c_0,c_1$ in \cref{thm:main_lil} are deterministic. We now show that the limits in \cref{thm:main_moc,thm:main_var} are also deterministic.
\begin{proposition}\label{pr:01law_mv}
There exist deterministic constants $c_0,c_1$ (possibly $0$) such that almost surely the following identities hold for any non-trivial bounded interval $I\subseteq\mathbb{R}$.
\begin{enumerate}[label=(\roman*)]
\item\label{it:moc_det} $\displaystyle\lim_{\delta\downarrow 0} \sup_{s,t \in I, \abs{t-s}<\delta} \frac{\abs{\eta(t)-\eta(s)}}{\abs{t-s}^{1/d}(\log \abs{t-s}^{-1})^{1-1/d}} = c_0$.
\item\label{it:var_det} $\displaystyle\lim_{\delta\downarrow 0} \sup_{(t_i) \subseteq I, \abs{t_{i+1}-t_i}<\delta} \sum_i \psi(\abs{\eta(t_{i+1})-\eta(t_i)}) = c_1\abs{I}$ \quad where $\psi(x) = x^d(\log\log\frac{1}{x})^{-(d-1)}$.
\end{enumerate}
\end{proposition}
\begin{proof}
\ref{it:moc_det} For an interval $I$ define
\[ S_I \mathrel{\mathop:}= \lim_{\delta\downarrow 0}\sup_{t,s\in I , \abs{t-s}<\delta} \frac{\abs{\eta(t)-\eta(s)}}{\abs{t-s}^{1/d}(\log \abs{t-s}^{-1})^{1-1/d}}. \]
The law of $S_I$ is independent of the choice of $I$ by scale-invariance in law of $\eta$ and since for any fixed $a>0$ it holds that $\log ((a\Delta t)^{-1})=(1+o(1))\log((\Delta t)^{-1})$ as $\Delta t\rightarrow 0$. We claim that $S_I = S_{I'}$ almost surely for any two intervals $I,I'$. Indeed, in case $I \subseteq I'$, we have $S_I \le S_{I'}$ by the definition of $S_I$. But then the two random variables can only have the same law if they are almost surely equal. For general $I,I'$, compare both with an interval containing $I \cup I'$.
It follows that $S_{[0,t]} = S_{[0,1]}$ almost surely for every $t$. Letting $t \downarrow 0$, we see that $S_{[0,1]}$ is (up to null sets) measurable with respect to $\mathcal{F}_{0+}$, and therefore deterministic by \cref{pr:01law}.
\ref{it:var_det} Let
\[ V_I \mathrel{\mathop:}= \lim_{\delta\downarrow 0}\sup_{(t_i) \subseteq I, \abs{t_{i+1}-t_i}<\delta}\sum_i \psi(\abs{\eta(t_{i+1})-\eta(t_i)}) . \]
We claim that $V_I = \abs{I}V_{[0,1]}$ almost surely. This will imply $V_{[0,t]} = tV_{[0,1]}$ for all rational $t$, and by continuity for all $t$. As before, we conclude that $V_{[0,1]}$ is measurable with respect to $\mathcal{F}_{0+}$, and therefore deterministic.
Note that $V$ is additive, i.e.
\[ V_{[t_1,t_2]}+V_{[t_2,t_3]} = V_{[t_1,t_3]} \quad\text{for } t_1 < t_2 < t_3 . \]
Moreover, by scaling and translation-invariance of $\eta$, the random variable $V_I$ has the same law as $\abs{I}V_{[0,1]}$. Therefore the claim follows from the lemma below: for a subdivision of $I$ into two intervals $I_1,I_2$ with disjoint interiors, applying the lemma with $X = V_{I_1}/\abs{I_1}$, $Y = V_{I_2}/\abs{I_2}$, $Z = V_I/\abs{I}$ and $\lambda = \abs{I_1}/\abs{I}$ gives $V_{I_1}/\abs{I_1} = V_I/\abs{I}$ almost surely, and comparing $I$ and $[0,1]$ within a common interval then yields the claim. (Note that we have shown in the previous subsections that $V$ has exponential moments, so the lemma applies.)
\end{proof}
\begin{lemma}
Let $X,Y,Z$ be random variables with the same law and finite second moments.
If $\lambda X+ (1-\lambda)Y = Z$ for some $\lambda \neq 0,1$, then $X=Y=Z$ a.s.
\end{lemma}
\begin{proof}
We have
\[ \mathbb{E}[Z^2] = \lambda^2\mathbb{E}[X^2]+(1-\lambda)^2\mathbb{E}[Y^2]+2\lambda(1-\lambda)\mathbb{E}[XY] \]
and hence (using $\mathbb{E}[X^2]=\mathbb{E}[Y^2]=\mathbb{E}[Z^2] < \infty$)
\[ \mathbb{E}[Z^2] = \mathbb{E}[XY] \le (\mathbb{E}[X^2]\mathbb{E}[Y^2])^{1/2} = \mathbb{E}[Z^2] , \]
i.e. the Cauchy-Schwarz inequality holds with equality. This means $X,Y$ are almost surely linearly dependent, say $Y = cX$ with $c \ge 0$. Since $\mathbb{E}[X^2]=\mathbb{E}[Y^2]$, either $X=Y=0$ almost surely or $c=1$; in both cases $X=Y$ almost surely, and then $Z = \lambda X+(1-\lambda)X = X$.
\end{proof}
\begin{proposition}\label{pr:upperbounds}
The assertions of \cref{thm:main_var,thm:main_moc,thm:main_lil} hold, except that the constants $c_0,c_1$ may take their value in $[0,\infty)$ (instead of $(0,\infty)$).
\end{proposition}
\begin{proof}
By the results of this subsection, there exist deterministic constants $c_0,c_1\in[0,\infty]$ as in the theorem statements. \cref{pr:diam_tail_upper} implies that \eqref{eq:Xinc} is satisfied with $\Phi$ as in \cref{ex:expo}. \cref{thm:moc_upper,thm:lil_upper,thm:psivar_upper} imply that $c_0,c_1< \infty$, along with the last displayed equations of \cref{thm:main_var,thm:main_moc}.
\end{proof}
\subsection{Proof of \cref{thm:ball_filling}}
\label{se:ball_filling}
In essence, we follow the proof of \cref{thm:moc_upper} using the stronger input given in \cref{rm:ball_filling}. By stationarity, it suffices to prove the result on the interval $I=[0,1]$. Denote by $A_{s,r}^{\ell,k}$ the event that $\eta[s,\tau_{s,r}]$ contains $k$ disjoint balls of radius $\ell$, where $\tau_{s,r} = \inf\{ t>s \mid \abs{\eta(t)-\eta(s)} \ge r\}$. For $n \in \mathbb{N}$, by \cref{rm:ball_filling},
\[ \mathbb{P}\left( \left(A_{s,\, u 2^{-n/2} n^{1/2}}^{u^{-1} 2^{-n/2} n^{-1/2},\, \delta u^2 n}\right)^c \right) \lesssim \exp(-c_2 u^2 n) .
\]
Summing over $s = j\pi\delta 2^{-n}$, $j=0,...,(\pi\delta)^{-1} 2^n-1$, yields
\[ \mathbb{P}\left( \bigcup_{s = j\pi\delta 2^{-n}} \left(A_{s,\, u 2^{-n/2} n^{1/2}}^{u^{-1} 2^{-n/2} n^{-1/2},\, \delta u^2 n}\right)^c \right) \lesssim 2^n \exp(-c_2 u^2 n) \lesssim \exp(-\tilde c_2 u^2 n) \]
for sufficiently large $u$. Summing over $n \ge n_0$ yields
\[ \mathbb{P}\left( \bigcup_{n \ge n_0} \bigcup_{s = j\pi\delta 2^{-n}} \left(A_{s,\, u 2^{-n/2} n^{1/2}}^{u^{-1} 2^{-n/2} n^{-1/2},\, \delta u^2 n}\right)^c \right) \lesssim \exp(-\tilde c_2 u^2 n_0) \]
for sufficiently large $u$. Let $r \in (0,1)$, and pick $n_0 \in \mathbb{N}$ such that $r \asymp 2^{-n_0/2} n_0^{1/2}$. Then the estimate above reads
\[ \mathbb{P}( \bigcup \dots ) \lesssim r^{\tilde c_2 u^2} . \]
We claim that
\[ \bigcap_{n \ge n_0} \bigcap_{s = j\pi\delta 2^{-n}} A_{s,\, u 2^{-n/2} n^{1/2}}^{u^{-1} 2^{-n/2} n^{-1/2},\, \delta u^2 n} \subseteq E_{r,u,[0,1]} . \]
Suppose $s<t$ with $\abs{\eta(t)-\eta(s)} \le ur$. Find $n \ge n_0$ such that $\frac{\abs{\eta(t)-\eta(s)}}{u 2^{-n/2} n^{1/2}} \in [4,8]$. Note that on the event $A_{s,\, u 2^{-n/2} n^{1/2}}^{u^{-1} 2^{-n/2} n^{-1/2},\, \delta u^2 n}$, we have $\tau_{s,u 2^{-n/2}n^{1/2}} > s+\pi\delta 2^{-n}$ and hence $\operatorname{diam}(\eta[s,s+\pi\delta 2^{-n}]) \le 2u 2^{-n/2}n^{1/2}$ since $\eta$ is parametrised by area. Therefore, by our choice of $n$ we must have $s \le j\pi\delta 2^{-n} < (j+1)\pi\delta 2^{-n} \le t$ for some $j$. In particular, $\eta[s,t] \supseteq \eta[j\pi\delta 2^{-n} , (j+1)\pi\delta 2^{-n}]$ contains $\delta u^2 n \asymp \delta u^2 \log(u\abs{\eta(t)-\eta(s)}^{-1})$ disjoint balls of radius $u^{-1} 2^{-n/2} n^{-1/2} \asymp u^{-2}\abs{\eta(t)-\eta(s)} /\log(u\abs{\eta(t)-\eta(s)}^{-1})$. This proves the claimed inclusion, and hence \cref{thm:ball_filling}.
\section{Lower bounds for Markov processes}
\label{se:markov}
We prove lower bounds on the regularity (corresponding to the lower bounds in \cref{thm:main_lil,thm:main_moc,thm:main_var}) for Markov processes satisfying a uniform ellipticity condition.
The arguments are elementary, but they illustrate well the general strategy for obtaining lower bounds. We have not seen them written out in the earlier literature, although laws of the iterated logarithm (even functional versions) and rates of escape for Markov processes have been proved in \cite{bk-markov-lil}. Our arguments for SLE follow the same idea but are more technical, since SLE is not a Markov process in the usual sense and we need to work with its domain Markov property.
In the following, let $X =(X_t)_{t\geq 0}$ be a Markov process on a metric space $(E,d)$, and let $\mathbb{P}^x$ denote the law of the Markov process started at $X_0=x$. In particular, we assume the Markov property $\mathbb{P}^x(X_{t+s} \in A \mid \mathcal{F}_t) = \mathbb{P}^{X_t}(X_s \in A)$ for every $x$. We suppose the following uniform bounds on the transition probabilities: There exist constants $d_{\mathrm{w}} > 1$, $T>0$, $r_0 > 0$, and $c_1,c_2,c_3,c_4>0$ such that
\begin{equation}\label{eq:markov_tail_upper}
\mathbb{P}^x(d(X_t,x) > r) \le c_1 \exp\left(-c_2 \left(\frac{r}{t^{1/d_{\mathrm{w}}}}\right)^{d_{\mathrm{w}}/(d_{\mathrm{w}}-1)} \right)
\end{equation}
for all $r>0$, $0<t\le T$, and
\begin{equation}\label{eq:markov_tail_lower}
\mathbb{P}^x(d(X_t,x) > r) \ge c_3 \exp\left(-c_4 \left(\frac{r}{t^{1/d_{\mathrm{w}}}}\right)^{d_{\mathrm{w}}/(d_{\mathrm{w}}-1)} \right)
\end{equation}
for $r \le r_0$, $0<t<r^{d_{\mathrm{w}}}$. The exponent $d_{\mathrm{w}}$ is usually called the walk dimension. For instance, these bounds are satisfied for diffusions on $\mathbb{R}^n$ with a uniformly elliptic generator (for which $d_{\mathrm{w}}=2$). Other typical examples are Brownian motions on fractals (cf.\@ \cite{bk-markov-lil} and references therein) or Liouville Brownian motion (cf.\@ \cite{akm-lbm83}).
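To make the normalisation of the exponents concrete, consider standard Brownian motion on $\mathbb{R}^n$, for which $d_{\mathrm{w}}=2$: a Gaussian computation shows that, in the regimes considered above,
\[ c_3 \exp\left(-c_4\,\frac{r^2}{t}\right) \le \mathbb{P}^x\big(\abs{X_t-x} > r\big) \le c_1 \exp\left(-c_2\,\frac{r^2}{t}\right) , \]
which is exactly of the form \eqref{eq:markov_tail_upper}--\eqref{eq:markov_tail_lower}, since $(r/t^{1/2})^{d_{\mathrm{w}}/(d_{\mathrm{w}}-1)} = r^2/t$ when $d_{\mathrm{w}}=2$.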
For these Markov processes, the analogues of \cref{thm:main_lil,thm:main_moc,thm:main_var} hold with $d = d_{\mathrm{w}}$, except that we do not prove zero-one laws for the limits, only deterministic upper and lower bounds (but see e.g.\@ \cite{bk-markov-lil} for a type of zero-one law). We only need to prove the lower bounds, since the matching upper bounds already follow from the results in \cref{sec:variation-upper-general}; those hold for general stochastic processes, and the Markov property is not needed.
\begin{proposition}
Under assumption \eqref{eq:markov_tail_lower}, there exist deterministic constants $a_1,a_2,a_3 > 0$ such that the following is true.
\begin{enumerate}[label=(\roman*)]
\item\label{it:markov_var} Variation: For any bounded interval $I \subseteq \mathbb{R}^+$, almost surely
\[ \inf_{\delta > 0} \sup_{\abs{t_{i+1}-t_i}<\delta} \sum_i \psi(d(X_{t_{i+1}},X_{t_i})) \ge a_3\abs{I} \]
with $\psi(x) = x^{d_{\mathrm{w}}}(\log\log\frac{1}{x})^{-(d_{\mathrm{w}}-1)}$, and the supremum is taken over finite sequences $t_0 < ... < t_r$ with $t_i \in I$ and $\abs{t_{i+1}-t_i}<\delta$.
\item\label{it:markov_moc} Modulus of continuity: For any non-trivial interval $I \subseteq \mathbb{R}^+$, almost surely
\[ \inf_{\delta > 0} \sup_{s,t \in I, \abs{t-s}<\delta} \frac{d(X_t,X_s)}{\abs{t-s}^{1/d_{\mathrm{w}}}(\log \abs{t-s}^{-1})^{1-1/d_{\mathrm{w}}}} \ge a_2 . \]
\item\label{it:markov_lil} Law of the iterated logarithm: For any $t_0 \ge 0$, almost surely
\[ \limsup_{t \downarrow 0} \frac{d(X_{t_0+t},X_{t_0})}{t^{1/d_{\mathrm{w}}}(\log\log t^{-1})^{1-1/d_{\mathrm{w}}}} \ge a_1 . \]
\end{enumerate}
\end{proposition}
\begin{proof}
\ref{it:markov_moc}: By the Markov property, there is no loss of generality assuming $I = [0,1]$.
For $\varepsilon > 0$ and $k = 1,...,\lfloor\varepsilon^{-1}\rfloor$, we define the event
\[ A_{\varepsilon,k} = \{ d(X_{k\varepsilon},X_{(k-1)\varepsilon}) \ge a_0\varepsilon^{1/d_{\mathrm{w}}}(\log\varepsilon^{-1})^{1-1/d_{\mathrm{w}}} \} \in \mathcal{F}_{k\varepsilon} \]
where $a_0 > 0$ is a constant to be chosen later. By \eqref{eq:markov_tail_lower} and the Markov property, we have
\[ \mathbb{P}( A_{\varepsilon,k} \mid \mathcal{F}_{(k-1)\varepsilon} ) \ge c_3 \exp\left( -c_4 a_0^{d_{\mathrm{w}}/(d_{\mathrm{w}}-1)} (\log\varepsilon^{-1}) \right) = c_3\varepsilon^{1/2} \]
for a suitable choice of $a_0$. Applying this estimate iteratively yields
\[ \mathbb{P}( A_{\varepsilon,1}^c \cap ... \cap A_{\varepsilon,\lfloor\varepsilon^{-1}\rfloor}^c ) \le \left( 1-c_3\varepsilon^{1/2} \right)^{\lfloor\varepsilon^{-1}\rfloor} \le \exp\left( -c_3\varepsilon^{-1/2}/2 \right) \to 0 \quad \text{as }\varepsilon \downarrow 0 . \]
This shows that for any fixed $\delta > 0$ (applying the above with $\varepsilon < \delta$), the event
\[ \sup_{s,t \in I, \abs{t-s}<\delta} \frac{d(X_t,X_s)}{\abs{t-s}^{1/d_{\mathrm{w}}}(\log \abs{t-s}^{-1})^{1-1/d_{\mathrm{w}}}} \ge a_0 \]
must occur with probability $1$. The claim follows.
\ref{it:markov_lil}: By the Markov property, there is no loss of generality assuming $t_0 = 0$. Define a sequence of events
\[ A_k = \{ d(X_{e^{-k}},X_{e^{-(k+1)}}) \ge a_0 e^{-k/d_{\mathrm{w}}}(\log k)^{1-1/d_{\mathrm{w}}} \} \in \mathcal{F}_{e^{-k}} \]
where $a_0 > 0$ is a constant to be chosen later. We show that almost surely the events $A_k$ occur infinitely often. This implies the claim since on the event $A_k$ we have, by the triangle inequality,
\[ d(X_{e^{-k}},X_0)+d(X_{e^{-(k+1)}},X_0) \ge a_0 e^{-k/d_{\mathrm{w}}}(\log k)^{1-1/d_{\mathrm{w}}} \]
and hence for either $t=e^{-k}$ or $t=e^{-(k+1)}$ we have
\[ d(X_t,X_0) \ge \frac{a_0}{2} t^{1/d_{\mathrm{w}}}(\log\log t^{-1})^{1-1/d_{\mathrm{w}}} .
\]
By \eqref{eq:markov_tail_lower} and the Markov property, we have
\[ \mathbb{P}( A_k \mid \mathcal{F}_{e^{-(k+1)}} ) \ge c_3 \exp\left( -c_4 a_0^{d_{\mathrm{w}}/(d_{\mathrm{w}}-1)}(1-e^{-1})^{-1/(d_{\mathrm{w}}-1)} (\log k) \right) = c_3 k^{-1} \]
for a suitable choice of $a_0$. Applying this estimate iteratively yields
\[ \mathbb{P}( A_{k}^c \cap ... \cap A_{k'}^c ) \le (1-c_3 k^{-1})\dots(1-c_3 (k')^{-1}) \le \exp(-c_3(k^{-1}+...+(k')^{-1})) \to 0 \quad \text{as }k' \to \infty \]
and hence
\[ \mathbb{P}\left( \bigcup_{k' > k} A_{k'} \right) = 1 . \]
Since this holds for any $k$, the claim follows.
\ref{it:markov_var}: This follows from \ref{it:markov_lil} by a general result which we will state as \cref{pr:var_lil} in the next section.
\end{proof}
\section{Lower bounds for SLE}
\label{sec:sle-lower-bounds}
In this section we conclude the lower bounds in our main results (\cref{thm:main_lil,thm:main_moc,thm:main_var}). We begin in \cref{sec:variation-lower-general} by reviewing a general argument saying that the lower bound for $\psi$-variation follows from the lower bound in the law of the iterated logarithm. In \cref{sec:diam-lower-nonspf,sec:diam-lower-spf}, which constitute the main part of this section, we prove the lower bound in \eqref{eq:mink-tail} along with some conditional variants of this estimate. Finally, we use these in \cref{se:lowerbounds_pf} to conclude the lower bounds for the modulus of continuity and the law of the iterated logarithm.
\subsection{Law of the iterated logarithm implies variation lower bound}
\label{sec:variation-lower-general}
We review an argument for general processes that says that a ``lower'' law of the iterated logarithm implies a lower bound on the variation regularity. We follow \cite[Section 13.9]{fv-rp-book}, where the argument is spelled out for Brownian motion (implying Taylor's variation result \cite{tay-bm-variation}).
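For orientation, we note how this mechanism applies in the Brownian case of \cite{tay-bm-variation}: for one-dimensional Brownian motion, the law of the iterated logarithm yields the hypothesis \eqref{eq:lil_lower} of the proposition below at every fixed time with $\sigma(s) = c\sqrt{2s\log\log(1/s)}$ for any $c<1$, and
\[ \sigma^{-1}(x) = (1+o(1))\,\frac{x^2}{2c^2\log\log(1/x)} \quad\text{as } x \downarrow 0 , \]
so letting $c \uparrow 1$ recovers the lower half of Taylor's result that Brownian motion has finite positive $\psi$-variation with $\psi(x) = x^2/(2\log\log(1/x))$.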
\begin{proposition}\label{pr:var_lil}
Let $(X_t)_{t \in [0,1]}$ be a separable process such that for every fixed $t \in(0,1)$ we almost surely have
\begin{equation}\label{eq:lil_lower}
\limsup_{s \to 0}\frac{\abs{X_{t+s}-X_t}}{\sigma(\abs{s})} > 1
\end{equation}
where $\sigma$ is a (deterministic) self-homeomorphism of $[0,\infty)$. Then, almost surely, for any $\varepsilon > 0$ there exist disjoint intervals $[t_i,u_i]$ of length at most $\varepsilon$ such that
\[ \sum_i \sigma^{-1}(\abs{X_{t_i}-X_{u_i}}) > 1-\varepsilon . \]
\end{proposition}
The proposition shows in particular that $X$ cannot have better $\psi$-variation regularity than $\psi = \sigma^{-1}$, i.e., $X$ has infinite $\widetilde\psi$-variation if $\widetilde\psi(x)/\psi(x)\rightarrow \infty$ as $x\downarrow 0$.
\begin{proof}
Let $E$ be the set of $t \in [0,1]$ where \eqref{eq:lil_lower} holds. By Fubini's theorem, we almost surely have $\abs{E} = 1$. By definition, for each $t \in E$, there exist arbitrarily small $s$ such that $\abs{X_{t+s}-X_t} > \sigma(\abs{s})$. The collection of all such intervals with endpoints $t$ and $t+s$ forms a Vitali cover of $E$ in the sense of \cite[Lemma 13.68]{fv-rp-book}. In particular, there exist disjoint intervals $[t_i,u_i]$ with $\abs{E \setminus \bigcup_i [t_i,u_i]} < \varepsilon$ and $\abs{X_{t_i}-X_{u_i}} > \sigma(\abs{t_i-u_i})$, so
\[ \sum_i \sigma^{-1}(\abs{X_{t_i}-X_{u_i}}) > \sum_i \abs{t_i-u_i} > 1-\varepsilon . \]
By picking in the Vitali cover only intervals of length at most $\varepsilon$, we get intervals $[t_i,u_i]$ of length at most $\varepsilon$.
\end{proof}
\subsection{Diameter lower bound for non-space-filling SLE}
\label{sec:diam-lower-nonspf}
In this section we prove the matching lower bound to the result in \cref{pr:diam_tail_upper}. Our main result is the following proposition, together with a ``conditional'' variant of it; see \cref{pr:cont_ltail_cond} below. As before, we let $\tau_r=\inf\{t\geq 0\,:\, |\eta(t)|\geq r \}$ denote the hitting time of radius $r$.
\begin{proposition}\label{pr:cont_tail_lower}
Let $\eta$ be a whole-plane \sle{\kappa}{}$(2)$ from $0$ to $\infty$, $\kappa \le 8$. For some $\widetilde c_2 > 0$ we have
\[ \mathbb{P}( \operatorname{Cont}(\eta[0,\tau_r]) < \ell ) \ge \exp(-\widetilde c_2 \ell^{-1/(d-1)} r^{d/(d-1)}) \]
for any $r,\ell>0$ with $\ell \le r^d$.
\end{proposition}
This matches the upper bound in \cref{pr:diam_tail_upper} up to the value of the constant $\widetilde c_2$ in the exponent. We remark that the proposition can be equivalently stated as a tail lower bound on the increment of $\eta$ when parametrised by Minkowski content. Namely, we have
\[ \mathbb{P}( \operatorname{diam}(\eta[0,1]) > r ) \ge \exp(-c r^{d/(d-1)}) \]
for $r \ge 1$: apply the proposition with $\ell = 1$ and radius $2r$, and note that if $\operatorname{Cont}(\eta[0,\tau_{2r}]) < 1$, then $\eta$ exits $B(0,2r)$ before content-time $1$, so $\operatorname{diam}(\eta[0,1]) > r$.
As in \cref{se:sle_upper}, we can use the scaling property to reduce the statement of \cref{pr:cont_tail_lower} to the following special case.
\begin{lemma}\label{le:cont_tail_lower1}
Let $\eta$ be a whole-plane \sle{\kappa}{}$(2)$ from $0$ to $\infty$. Then for some $c_1,c_2 > 0$ we have
\[ \mathbb{P}( \operatorname{Cont}(\eta[0,\tau_r]) < c_1 r ) \ge \exp(-c_2 r) \]
for any $r \ge 1$.
\end{lemma}
\begin{proof}[Proof of \cref{pr:cont_tail_lower} given \cref{le:cont_tail_lower1}]
Let $\lambda = c_1^{1/(d-1)}\ell^{-1/(d-1)}r^{1/(d-1)}$. By the scaling property (cf.\@ \cref{se:prelim_sle}), $\widetilde\eta = \lambda\eta$ is also a whole-plane \sle{\kappa}{}$(2)$, and $\operatorname{Cont}(\widetilde\eta[0,\widetilde\tau_{\lambda r}]) = \lambda^d \operatorname{Cont}(\eta[0,\tau_r])$. Hence the desired probability is equal to
\[ \mathbb{P}( \operatorname{Cont}(\widetilde\eta[0,\widetilde\tau_{\lambda r}]) < \lambda^d \ell ) . \]
We have chosen $\lambda$ such that $\widetilde r \mathrel{\mathop:}= \lambda r = c_1^{1/(d-1)}\ell^{-1/(d-1)}r^{d/(d-1)}$ and $\lambda^d \ell = c_1 \widetilde r$.
Therefore, by \cref{le:cont_tail_lower1}, the probability is at least \[ \exp(-c_2 \widetilde r) = \exp(-\widetilde c_2 \ell^{-1/(d-1)}r^{d/(d-1)}) \] where $\widetilde c_2 = c_2 c_1^{1/(d-1)}$. \end{proof} The heuristic idea behind \cref{le:cont_tail_lower1} is straightforward, but some care is required to implement it. We would like to show that when $\eta$ crosses from radius $r$ to $r+1$, it has some chance $p>0$ to do so while creating at most $c_1$ Minkowski content. If $p>0$ were uniform conditionally on $\eta[0,\tau_r]$, the lemma would follow. However, it is difficult to control the Minkowski content near the tip of $\eta[0,\tau_r]$ due to its fractal nature; we therefore want to keep only those curves that have a sufficiently ``nice'' tip when hitting radius $r$. Below we will implement this idea. Let $(D,a)$ be as in \eqref{eq:Da}, and $u\in\partial D\setminus\{a\}$. The point $u$ will play the role of a force point with weight $2$. (We could allow more general force points, but to keep notation as simple as possible we restrict to this case.) For $p_{\mathrm{N}},r_{\mathrm{N}},c_{\mathrm{N}} > 0$ we say that $(D,a,u)$ is $(p_{\mathrm{N}},r_{\mathrm{N}},c_{\mathrm{N}})$-nice if \[ \abs{f(u)-1} \ge 2r_{\mathrm{N}} \] and \[ \nu_{\mathbb{D};1\to 0}(\operatorname{Cont}(f^{-1}(\eta[0,\sigma_{r_{\mathrm{N}}}])) > c_{\mathrm{N}}) < p_{\mathrm{N}} \] where $\nu_{\mathbb{D};1\to 0}$ denotes radial \sle{\kappa}{} (without force point) and $\sigma_{r_{\mathrm{N}}}$ the exit time of $B(1,r_{\mathrm{N}}) \cap \overline{\bbD}$. \begin{proposition}\label{pr:nice_tip_iter} There exist finite and positive constants $p_{\mathrm{N}},r_{\mathrm{N}},c_{\mathrm{N}},p,c$ such that the following is true. Let $(D,a,u)$ with $\abs{a}\ge 1$ be $(p_{\mathrm{N}},r_{\mathrm{N}},c_{\mathrm{N}})$-nice.
Then \[ \nu_{D;a\to \infty;u} \left( \begin{array}{cc} \operatorname{Cont}(\eta[0,\tau_{\abs{a}+1}]) < c \quad\text{and} \\ D \setminus \eta[0,\tau_{\abs{a}+1}] \text{ is } (p_{\mathrm{N}},r_{\mathrm{N}},c_{\mathrm{N}})\text{-nice} \end{array} \right) \ge p . \] \end{proposition} \begin{lemma}\label{le:nice_tip} There exist finite positive constants $p_{\mathrm{N}},r_{\mathrm{N}},c_{\mathrm{N}},p,\varepsilon,c_1$ with $\varepsilon < r_{\mathrm{N}}/2$ such that the following is true. Let $(D,a,u)$ with $\abs{a}\ge 1$ be $(p_{\mathrm{N}},r_{\mathrm{N}},c_{\mathrm{N}})$-nice, and let $f\colon D \to \mathbb{D}$ be the corresponding conformal map. Let $A = f(\widehat{\bbC} \setminus B(0,\abs{a}+1))$, let $\tau_A$ be the hitting time of $A$, and let $\eta$ be a radial \sle{\kappa}{}$(2)$ in $\mathbb{D}$ from $1$ to $f(\infty)$ with force point $f(u)$. Then the following event has probability at least $p$. \begin{enumerate}[label=(\roman*)] \item\label{it:follow} $\norm{\gamma-\eta}_{\infty;[0,\tau_A]} < \varepsilon$ where $\gamma$ denotes the straight line from $1$ to $1/64$, and $\gamma$ and $\eta$ are parametrised by capacity relative to $-1$ (see \cref{se:prelim} for the definition of relative capacity). \item\label{it:cont} $\operatorname{Cont}(\eta[0,\tau_A]) \le c_1$. \item\label{it:nice_base} $\operatorname{Cont}(f^{-1}(\eta[0,\sigma_{r_{\mathrm{N}}}])) \le c_{\mathrm{N}}$. \item\label{it:nice_tip} $D \setminus f^{-1}(\eta[0,\tau_A])$ is $(p_{\mathrm{N}},r_{\mathrm{N}},c_{\mathrm{N}})$-nice. \end{enumerate} \end{lemma} This lemma will be proved in several steps in which we successively pick the constants that we look for. First, we recall that for any $\varepsilon > 0$, the probability of \ref{it:follow} can be bounded from below by some $p_\varepsilon > 0$ (cf.\@ \cref{le:support} below). The lemma then follows if we can guarantee that the total probability of any of the other conditions failing is strictly less than $p_\varepsilon$.
We will pick all the required constants in a way such that this is true. We begin by making a few general comments about the conformal maps $f\colon D \to \mathbb{D}$ corresponding to domains as in \eqref{eq:Da}. Consider $D^* = \{ 1/z \mid z \in D\} \subseteq \mathbb{C}$ and $g(z) = f(1/z)$. This is the conformal map from $D^*$ to $\mathbb{D}$ with $g(1/z_D) = 0$ and $g(1/a) = 1$. Note that $\operatorname{dist}(1/z_D, \partial D^*) = \abs{1/z_D-1/a} = \frac{1}{\abs{a}}-\frac{1}{\abs{a}+2}$. Hence, the conformal radius of $1/z_D$ in $D^*$ is between $\frac{2}{\abs{a}(\abs{a}+2)}$ and $\frac{8}{\abs{a}(\abs{a}+2)}$ (cf.\@ \cref{se:prelim_conformal}). It follows that $\abs{f'(z_D)} \in [\frac{\abs{a}}{8(\abs{a}+2)}, \frac{\abs{a}}{2(\abs{a}+2)}]$. \begin{lemma}\label{le:dist_1} There exists $r_0 > 0$ such that for any $(D,a)$ as in \eqref{eq:Da} we have $f^{-1}(B(1,r_0) \cap \mathbb{D}) \subseteq B(0,\abs{a}+1)$. \end{lemma} \begin{proof} When $\abs{a}$ is not too large, we can use Koebe's distortion theorem to argue that even $f^{-1}(\mathbb{D} \setminus B(0,1-r_0)) \subseteq B(0,\abs{a}+1)$. Indeed, considering $D^* = \{ 1/z \mid z \in D\} \subseteq \mathbb{C}$ and the conformal map $z \mapsto f(1/z)$ from $D^*$ to $\mathbb{D}$, we see that its derivative is comparable everywhere on $B(0,\frac{1}{\abs{a}+1})$. In particular, it cannot map any of these points anywhere close to $\partial\mathbb{D}$. When $\abs{a}$ is large, let $\widetilde z \in \partial B(0,\abs{a}+1/3)$ be the closest point to $a$. The argument above shows that $f(\widetilde z)$ cannot be too close to $\partial\mathbb{D}$. Let $V$ be the union of $B(\widetilde z,2/3) \cap D$ with all points that it separates from $\infty$ in $D$. By \cref{le:ghm} there exist a universal constant $r_0$ (independent of $D$) and a path from $f(\widetilde z)$ to $1$ (depending on $D$) whose $r_0$-neighbourhood is contained in $f(V)$. Since $V \subseteq B(0,\abs{a}+1)$, the claim follows.
\end{proof} \begin{lemma}\label{le:inf_loc} Given $r_0 > 0$, there exists $R>0$ such that for any $(D,a)$ as in \eqref{eq:Da} with $\abs{a} > R$ we have $f(\infty) \notin B(0,1-r_0/2)$. \end{lemma} \begin{proof} Consider the conformal map $z \mapsto 1/f^{-1}(z)$ from $\mathbb{D}$ to $D^* = \{ 1/z \mid z \in D\} \subseteq \mathbb{C}$. We saw right above the statement of Lemma \ref{le:dist_1} that the conformal radius of $1/z_D$ in $D^*$ is comparable to $\abs{a}^{-2}$. Koebe's distortion theorem implies that the derivative of $z \mapsto 1/f^{-1}(z)$ is comparable to $\abs{a}^{-2}$ on $B(0,1-r_0/2)$ (up to some factor depending on $r_0$). But since the distance from $1/z_D = 1/f^{-1}(0)$ to $0 = 1/\infty$ is comparable to $\abs{a}^{-1} \gg \abs{a}^{-2}$, we cannot have $\infty \in f^{-1}(B(0,1-r_0/2))$. \end{proof} \begin{lemma}\label{le:inf_dist} For any $R>0$ there exists $\delta>0$ such that the following is true. Let $(D,a)$ be as in \eqref{eq:Da} with $\abs{a} \in [1,R]$, and $A = f(\widehat{\bbC} \setminus B(0,\abs{a}+1))$. Then $\operatorname{dist}(f(\infty),\partial A) \ge \delta$. \end{lemma} \begin{proof} Consider the conformal map $z \mapsto f(1/z)$ from $D^* = \{ 1/z \mid z \in D\}$ to $\mathbb{D}$. By the discussion right above the statement of Lemma \ref{le:dist_1}, its derivative at $1/z_D$ is comparable to $\abs{a}(\abs{a}+2) \ge 1$. Since $\abs{1/z_D} \asymp \operatorname{dist}(1/z_D,\partial D^*)\asymp 1$ (where the implicit constants depend on $R$), Koebe's distortion theorem gives that the derivative at $0$ is also bounded from below by a constant depending on $R$. Since $\partial A = \{ f(1/z) \mid z \in \partial B(0,1/(\abs{a}+1)) \}$, the claim follows from Koebe's $1/4$-theorem. \end{proof} \begin{corollary}\label{co:target_change} There exist finite constants $c' > 0$ and $\varepsilon > 0$ such that the following is true. Let $(D,a)$ be as in \eqref{eq:Da} with $\abs{a} \ge 1$, and $A = f(\widehat{\bbC} \setminus B(0,\abs{a}+1))$.
Let $\gamma$ be the straight line from $1$ to $0$. Consider either the two \sle{\kappa}{} measures $\nu_{\mathbb{D};1\to 0}$ and $\nu_{\mathbb{D};1\to f(\infty)}$, or the two \sle{\kappa}{}$(2)$ measures $\nu_{\mathbb{D};1\to 0;u}$ and $\nu_{\mathbb{D};1\to f(\infty);u}$ with the same force point $u \in \overline{\bbD}$. Then, on the event $\{ \norm{\gamma-\eta}_{\infty;[0,\tau_A]} < \varepsilon \}$, the laws of $\eta\big|_{[0,\tau_A]}$ under the two measures are mutually absolutely continuous with density bounded between $1/c'$ and $c'$. \end{corollary} \begin{proof} By \cref{le:inf_loc,le:inf_dist}, if $\varepsilon$ is sufficiently small, the point $f(\infty)$ has distance at least $2\varepsilon$ from $\gamma[0,\tau_A]$. Moreover, we have $B(0,1/64) \subseteq A$ due to Koebe's $1/4$-theorem, so $0$ is also bounded away from $\gamma[0,\tau_A]$. Therefore, the claim follows from \cref{le:sle_abs_cont}. \end{proof} \begin{lemma}\label{le:no_return} Given $r_0 > 0$, there exists $r_1 > 0$ such that the following is true. Let $\alpha\colon [0,1] \to \overline{\mathbb{D}}$ be a curve with $\alpha(0) = 1$ and $\alpha \subseteq \{ \abs{z} \ge 1/64, \Re(z)>0 \}$. Suppose also that $\alpha$ does not leave $B(0,1-r_0/4)$ after entering $B(0,1-r_0/2)$, and that $\alpha(1)$ is connected to $0$ in $B(0,1-r_0) \setminus \alpha$. Let $D_\alpha$ denote the connected component of $\mathbb{D}\setminus\alpha$ containing $0$ (so in particular $D_\alpha=\mathbb{D}\setminus\alpha$ if $\mathbb{D}\setminus\alpha$ is connected), and let $f_\alpha\colon D_\alpha \to \mathbb{D}$ denote the conformal map with $f_\alpha(0) = 0$ and $f_\alpha(\alpha(1)) = 1$. Then $f_\alpha^{-1}(B(1,r_1) \cap \mathbb{D}) \subseteq B(0,1-r_0/4)$.
\end{lemma} \begin{figure}[h] \centering \includegraphics[width=0.8\textwidth]{no_return.pdf} \caption{The setup and proof of \cref{le:no_return}.} \end{figure} \begin{proof} Notice that $\partial B(0,1-r_0) \setminus \alpha$ may have several connected components, each of which is an arc of $\partial B(0,1-r_0)$. Let $C_{r_0}$ denote the longest arc of $\partial B(0,1-r_0) \setminus \alpha$ (i.e.\ the one to the left in the figure, or, equivalently, the unique arc crossing the negative real axis). By our assumption, it does not separate $\alpha(1)$ from $0$ in $D_\alpha$. Therefore $f_\alpha(C_{r_0})$ does not separate $1$ from $0$. Next, let $C_{r_0/2}$ denote the longest arc of $\partial B(0,1-r_0/2) \setminus \alpha$. Its image $f_\alpha(C_{r_0/2})$ lies in the component of $\mathbb{D} \setminus f_\alpha(C_{r_0})$ that does not contain $0$. Let $J_+$, $J_-$ denote the two arcs of $\partial\mathbb{D}$ that lie between $f_\alpha(C_{r_0})$ and $f_\alpha(C_{r_0/2})$. By considering a Brownian motion in the domain $D_\alpha$ starting from $0$, we see that the harmonic measures of both $J_+$ and $J_-$ seen from $0$, and therefore their lengths, are at least some constant depending on $r_0$. (More precisely, these harmonic measures are at least the probability of Brownian motion staying inside $B(0,1/64) \cup \{\Re(z)<0\}$ before entering the annulus $\{ 1-r_0 < \abs{z} < 1-r_0/2 \}$ and then making a clockwise (resp., counterclockwise) turn inside the annulus.) We claim that $f_\alpha^{-1}(J_\pm) \subseteq B(0,1-r_0/4)$. The result will then follow from the fact that $f_\alpha^{-1}(z)$ is obtained by integrating $f_\alpha^{-1}$ over $\partial\mathbb{D}$ against the harmonic measure seen from $z$. By symmetry, it suffices to show the claim for $J_+$. Let $z_1,z_2$ denote the endpoints of $f_\alpha^{-1}(J_+)$. By our assumption, any sub-curve of $\alpha$ from $z_1$ to $z_2$ stays inside $B(0,1-r_0/4)$. Suppose $f_\alpha^{-1}(J_+)$ contains some point $z' \notin B(0,1-r_0/4)$.
Pick a simple path $P'$ in $D_\alpha$ that begins with a straight line from $-1$ to $-(1-\frac{3}{4}r_0)$, then stays between $C_{r_0}$ and $C_{r_0/2}$, and ends at $z'$. As a consequence of the Jordan curve theorem, $P'$ separates $B(0,1-r_0/4)$ into at least two components, and the two halves of $C_{r_0/2}$ lie in different components. Moreover, $C_{r_0}$ is in the same component as the lower half of $C_{r_0/2}$. Therefore $z_1$ and $z_2$ (which are the upper endpoints of $C_{r_0}$ and $C_{r_0/2}$ respectively) lie in different components of $B(0,1-r_0/4) \setminus P'$. But since $P' \subseteq D_\alpha$, any sub-curve of $\alpha$ from $z_1$ to $z_2$ must avoid $P'$, which is impossible for a curve staying inside $B(0,1-r_0/4)$. \end{proof} \begin{lemma}\label{le:nice_tip_in_D} Given $r_0,r_1 > 0$ there exist $r_{\mathrm{N}} > 0$ and $\widetilde c > 0$ such that the following is true. Let $(D,a,u)$ be as in \eqref{eq:Da} with $\abs{a} \ge 1$, and $f\colon D \to \mathbb{D}$ the corresponding conformal map. Let $\alpha\colon [0,1] \to \overline{\mathbb{D}}$ be a curve with $\alpha(0) = 1$, $\alpha(1) \in B(0,1-r_0)$, and $\abs{f^{-1}(\alpha(1))} = \abs{a}+1 > \abs{f^{-1}(\alpha(t))}$ for $t < 1$. Let $D_\alpha$ be the connected component of $\mathbb{D}\setminus\alpha$ containing $0$ and suppose $\alpha(1)\in\partial D_\alpha$. Let $f_\alpha\colon D_\alpha \to \mathbb{D}$ denote the conformal map with $f_\alpha(0) = 0$ and $f_\alpha(\alpha(1)) = 1$. Suppose additionally that $f_\alpha^{-1}(B(1,r_1)\cap \mathbb{D}) \subseteq B(0,1-r_0/4)$. If for some $p_{\mathrm{N}}, c_{\mathrm{N}} > 0$ \[ \nu_{\mathbb{D};1\to 0}(\operatorname{Cont}(f_\alpha^{-1}(\eta[0,\sigma_{r_1}])) > \widetilde c\,c_{\mathrm{N}}) < \widetilde c\,p_{\mathrm{N}} , \] then $(D \setminus f^{-1}(\alpha),\ f^{-1}(\alpha(1)),\ u)$ is $(p_{\mathrm{N}},r_{\mathrm{N}},c_{\mathrm{N}})$-nice.
\end{lemma} \begin{figure}[h] \centering \includegraphics[width=0.8\textwidth]{nice_tip_in_D.pdf} \caption{The setup and proof of \cref{le:nice_tip_in_D}.} \end{figure} \begin{proof} Write $\widetilde D = D \setminus f^{-1}(\alpha)$ and denote by $\widetilde f\colon \widetilde D \to \mathbb{D}$ the conformal map corresponding to $\widetilde D$. Let $z_D$ and $z_{\widetilde D}$ denote the points on $\partial B(0,\abs{a}+2)$ and $\partial B(0,\abs{a}+3)$ closest to $a$ and $f^{-1}(\alpha(1))$, respectively. Observe that $\widetilde f$ and $f$ are related via $\widetilde f = h^{-1} \circ f_\alpha \circ f$ where $h\colon \mathbb{D} \to \mathbb{D}$ is the conformal map with $h(\widetilde f(z_D)) = 0$ and $h(1) = 1$. We claim that the distance $\abs{z_D-z_{\widetilde D}}$ is bounded from above by a constant depending on $r_0$. Indeed, let $R$ be as in \cref{le:inf_loc}. If $\abs{a} \ge R$, then $\abs{z_D-f^{-1}(\alpha(1))}$ is bounded via Koebe's distortion theorem, and hence so is $\abs{z_D-z_{\widetilde D}}$; if $\abs{a} < R$, then the distance is trivially bounded by $2R+5$. Since $\abs{z_D-z_{\widetilde D}}$ is bounded from above by a constant depending on $r_0$, we get that $\widetilde f(z_D)$ is bounded away from $\partial\mathbb{D}$, and $\abs{h'} \asymp 1$ on $\mathbb{D}$, with both bounds depending only on $r_0$. Consequently there exists $r_{\mathrm{N}} > 0$ such that $h(B(1,2r_{\mathrm{N}}) \cap \mathbb{D}) \subseteq B(1,r_1) \cap \mathbb{D}$. To show that $\widetilde D$ is nice, we need to consider an \sle{\kappa}{} in $\mathbb{D}$ from $1$ to $0$, stopped at hitting $\partial B(1,r_{\mathrm{N}})$.
However, since $\widetilde f(z_D)$ is bounded away from $\partial\mathbb{D}$, this law is absolutely continuous with respect to \sle{\kappa}{} from $1$ to $\widetilde f(z_D)$, with Radon-Nikodym derivative bounded in some interval $[\widetilde c, \widetilde c^{-1}]$ with $\widetilde c > 0$ depending on $r_0$ and $r_{\mathrm{N}}$ (after possibly further decreasing $r_{\mathrm{N}}$; cf.\@ \cref{le:sle_abs_cont}). The first condition for niceness, i.e.\ $\abs{\widetilde f(u)-1} \ge 2r_{\mathrm{N}}$, is guaranteed by $f_{\alpha}^{-1}(h(B(1,2r_{\mathrm{N}}) \cap \mathbb{D})) \subseteq f_\alpha^{-1}(B(1,r_1)\cap \mathbb{D}) \subseteq B(0,1-r_0/4)$, which is bounded away from $\partial\mathbb{D}$. The second condition for niceness is satisfied if \[ \nu_{\mathbb{D};1\to \widetilde f(z_D)}(\operatorname{Cont}(\widetilde f^{-1}(\widetilde\eta[0,\sigma_{r_{\mathrm{N}}}])) > c_{\mathrm{N}}) < \widetilde c\,p_{\mathrm{N}} . \] Suppose now that $\widetilde\eta$ is an \sle{\kappa}{} from $1$ to $\widetilde f(z_D)$, so that $h\circ\widetilde\eta$ has the law of an \sle{\kappa}{} $\eta$ from $1$ to $0$. By our assumption, with probability at least $1-\widetilde c\,p_{\mathrm{N}}$ we have \[ \operatorname{Cont}(f_\alpha^{-1}(\eta[0,\sigma_{r_1}])) \le \widetilde c\,c_{\mathrm{N}} . \] Applying Koebe's distortion theorem to the map $z \mapsto 1/f^{-1}(z)$ we see that on the set $B(0,1-r_0/4) \cap f(D \cap B(0,\abs{a}+2))$ the derivative $\abs{(f^{-1})'}$ is bounded from above by a constant depending on $r_0$. Therefore, using the assumption $f_\alpha^{-1}(\eta[0,\sigma_{r_1}]) \subseteq f_\alpha^{-1}(B(1,r_1)\cap \mathbb{D}) \subseteq B(0,1-r_0/4)$ and the transformation rule for Minkowski content \eqref{eq:cont_transf}, such $\eta$ satisfies \[ \operatorname{Cont}(\widetilde f^{-1}(\widetilde\eta[0,\sigma_{r_{\mathrm{N}}}])) \le \operatorname{Cont}(f^{-1} \circ f_\alpha^{-1}(\eta[0,\sigma_{r_1}])) \lesssim \widetilde c\,c_{\mathrm{N}} \] with a factor depending only on $r_0$.
The claim follows if $\widetilde c$ has been picked small enough. \end{proof} \begin{lemma}\label{le:support} Let $\gamma$ be a simple curve in $\mathbb{D} \setminus \{0\}$ with $\gamma(0) = 1$, and $T \ge 0$ the capacity of $\gamma$ relative to $-1$ (i.e. the half-plane capacity of the curve after mapping to $\mathbb{H}$ as described in \cref{se:prelim}). For any $\varepsilon > 0$ there exists $p_\varepsilon > 0$ such that \[ \nu_{\mathbb{D};1\to 0}( \norm{\gamma-\eta}_{\infty;[0,T]} < \varepsilon ) \ge p_\varepsilon \] where $\gamma$ and $\eta$ are parametrised by capacity. \end{lemma} \begin{proof} For chordal SLE from $1$ to $-1$ in $\mathbb{D}$, this is \cite[Proposition 1.4]{ty-support}. The result transfers to radial SLE by absolute continuity (cf.\@ \cite{sw-sle-coordinate-change}). \end{proof} \begin{lemma}\label{le:next_tip} Let $\gamma$ be the straight line from $1$ to $0$. Let $r_0,r_1,r_{\mathrm{N}} > 0$ be as in \cref{le:dist_1,le:no_return,le:nice_tip_in_D}. For any $\varepsilon \in {(0,r_0/4]}$ and $p_{\mathrm{N}} > 0$ there exists $c_{\mathrm{N}} > 0$ such that the following is true. Let $(D,a,u)$ be as in \eqref{eq:Da} with $\abs{a} \ge 1$, and $A = f(\widehat{\bbC} \setminus B(0,\abs{a}+1))$. Then \[ \nu_{\mathbb{D};1\to 0}\left(\begin{array}{cc} \norm{\gamma-\eta}_{\infty;[0,\tau_A]} < \varepsilon \quad\text{but the domain} \\ D \setminus f^{-1}(\eta[0,\tau_A]) \text{ is not } (p_{\mathrm{N}}, r_{\mathrm{N}}, c_{\mathrm{N}})\text{-nice} \end{array}\right) < p_{\mathrm{N}} \] where $\tau_A$ denotes the hitting time of $A$. \end{lemma} \begin{proof} We have $B(0,1/64) \subseteq A$ and $A \cap B(1,r_0) = \varnothing$ due to Koebe's $1/4$-theorem and \cref{le:dist_1}. Since the Minkowski content of radial \sle{\kappa}{} (stopped before entering $B(0,1/128)$) is almost surely finite, we can find for any $p_2 > 0$ some $c_2 > 0$ such that \[ \nu_{\mathbb{D};1\to 0}(\operatorname{Cont}(\eta[0,\tau_{\partial B(0,1/128)}]) > c_2) < p_2 . 
\] This gives \[ \nu_{\mathbb{D};1\to 0}\left( \nu_{\mathbb{D};1\to 0}(\operatorname{Cont}(\eta[0,\tau_{\partial B(0,1/128)}]) > c_2 \mid \eta[0,\tau_A] ) > p_2^{1/2} \right) < p_2^{1/2} \] by Markov's inequality since \begin{multline*} \nu_{\mathbb{D};1\to 0}\left( \nu_{\mathbb{D};1\to 0}(\operatorname{Cont}(\eta[0,\tau_{\partial B(0,1/128)}]) > c_2 \mid \eta[0,\tau_A] ) \right) \\ = \nu_{\mathbb{D};1\to 0}(\operatorname{Cont}(\eta[0,\tau_{\partial B(0,1/128)}]) > c_2) < p_2 . \end{multline*} Now, suppose $\norm{\gamma-\eta}_{\infty;[0,\tau_A]} < \varepsilon$. Apply the above with $p_2 = \widetilde c^2 p_{\mathrm{N}}^2$, and let $c_{\mathrm{N}} = \widetilde c^{-1}c_2$. We claim that if \[ \nu_{\mathbb{D};1\to 0}(\operatorname{Cont}(\eta[\tau_A,\tau_{\partial B(0,1/128)}]) > c_2 \mid \eta[0,\tau_A] ) < p_2^{1/2} , \] then $D \setminus f^{-1}(\eta[0,\tau_A])$ is $(p_{\mathrm{N}},r_{\mathrm{N}},c_{\mathrm{N}})$-nice with our choices of $p_{\mathrm{N}},c_{\mathrm{N}}$. Indeed, conditionally on $\eta[0,\tau_A]$, the curve $\widetilde\eta = f_{\tau_A} \circ \eta\big|_{[\tau_A,\infty)}$ is an independent \sle{\kappa}{} in $\mathbb{D}$. Since the capacity of $\eta[0,\tau_{\partial B(0,1/128)}]$ is much larger than that of $\eta[0,\tau_A]$, we must have $f_{\tau_A}^{-1}(\widetilde\eta[0,\sigma_{r_1}]) \subseteq \eta[\tau_A,\tau_{\partial B(0,1/128)}]$ (there is no loss of generality in assuming $r_1$ is small). Moreover, by \cref{le:no_return} (the conditions are satisfied due to $\norm{\gamma-\eta}_{\infty;[0,\tau_A]} < \varepsilon < r_0/4$), we have $f_{\tau_A}^{-1}(B(1,r_1)\cap \mathbb{D}) \subseteq B(0,1-r_0/4)$. Hence, by \cref{le:nice_tip_in_D}, $D \setminus f^{-1}(\eta[0,\tau_A])$ is $(p_{\mathrm{N}},r_{\mathrm{N}},c_{\mathrm{N}})$-nice. \end{proof} \begin{lemma}\label{le:initial_nice_tip} Let $r_{\mathrm{N}}>0$ be as in \cref{le:next_tip}. For any $p_{\mathrm{N}}>0$ there exist $c_{\mathrm{N}} > 0$ and $p_1 > 0$ such that the following is true.
Let $(D,a,u)$ be as in \eqref{eq:Da} with $\abs{a} \ge 1$. Then \[ \nu_{D;a\to\infty;u}\left( \text{the domain } \widehat{\bbC} \setminus \eta[0,\tau_{\abs{a}+1}] \text{ is } (p_{\mathrm{N}}, r_{\mathrm{N}}, c_{\mathrm{N}})\text{-nice} \right) \ge p_1 . \] In particular, \[ \nu_{\widehat{\bbC};0\to\infty;0}\left( \text{the domain } \widehat{\bbC} \setminus \eta[0,\tau_R] \text{ is } (p_{\mathrm{N}}, r_{\mathrm{N}}, c_{\mathrm{N}})\text{-nice} \right) \ge p_1 \] for $R \ge 2$. \end{lemma} \begin{proof} The second assertion follows from the first assertion by considering $D = \widehat{\bbC}\setminus\eta[0,\tau_{R-1}]$, so in the remainder of the proof we will focus only on proving the first assertion. Let us suppose first that $\abs{f(u)-1} \ge \delta$ for some $\delta > 0$. In that case, the claim follows from \cref{le:support,le:next_tip} and the absolute continuity between SLE variants, cf.\@ \cref{le:sle_abs_cont,co:target_change} (note that $\widetilde\eta = f\circ\eta$ is an \sle{\kappa}{}$(2)$ from $1$ to $f(\infty)$ with force point at $f(u)$). In case $f(u)$ is close to $1$, we do not have uniform control over the density between the SLE variants. But, picking a small time $t>0$, there exist some $\widetilde p_1 > 0$ and $\widetilde\delta > 0$ such that $\abs{f_t(f(u))-1} \ge \widetilde\delta$ with probability at least $\widetilde p_1$ (independent of $f(u)$). This is because $t \mapsto \arg(f_t(f(u)))$ is a radial Bessel process of positive index started at $\arg(f(u))$ (cf.\@ \cite[Section 2.1]{zhan-hoelder}), and by the monotonicity of Bessel processes in the starting point it suffices to compare to the case when it starts at $0$. But a Bessel process stopped at a deterministic time is almost surely positive. This allows us to consider $\widetilde\eta^{(t)} = f_t \circ \widetilde\eta$, stopped at hitting $A^{(t)} = f_t(A)$.
On the event $\norm{\gamma-\widetilde\eta^{(t)}}_{\infty;[0,\tau_{A^{(t)}}]} < \widetilde\delta/2$, we now do have a bounded density between the SLE variants (with a bound depending on $\widetilde\delta$). In order to show that $\widehat{\bbC} \setminus \eta[0,\tau_{\abs{a}+1}] = D \setminus f^{-1}(\widetilde\eta[0,\tau_A]) = D \setminus f^{-1}(f_t^{-1}(\widetilde\eta^{(t)}[0,\tau_{A^{(t)}}]))$ is nice with uniformly positive probability, we can follow the proof of \cref{le:next_tip} with minor modifications. Instead of $\widetilde\eta$ and $A$, we consider $\widetilde\eta^{(t)}$ and $A^{(t)}$. We write $f^{(t)}_s = f_{t+s} \circ f_t^{-1}$ for the mapping-out function of $\widetilde\eta^{(t)}$. Following the proof of \cref{le:next_tip}, we get that conditionally on $\widetilde\eta[0,t]$ and $\abs{f_t(f(u))-1} \ge \widetilde\delta$, with probability at least $p_{\varepsilon,\delta} > 0$ we have \[ \norm{\gamma-\widetilde\eta^{(t)}}_{\infty;[0,\tau_{A^{(t)}}]} < \widetilde\delta/2 \wedge \varepsilon \] and \[ \nu_{\mathbb{D};1\to 0}(\operatorname{Cont}(\widetilde\eta^{(t)}[\tau_{A^{(t)}}, \tau_{\partial B(0,1/128)}]) > \widetilde c\,c_{\mathrm{N}} \mid \widetilde\eta^{(t)}[0, \tau_{A^{(t)}}] ) < \widetilde c\,p_{\mathrm{N}} . \] We claim that on this event, $D \setminus f^{-1}(\widetilde\eta[0,\tau_A])$ is $(p_{\mathrm{N}},r_{\mathrm{N}},2c_{\mathrm{N}})$-nice. Conditionally on $\widetilde\eta^{(t)}[0, \tau_{A^{(t)}}]$, consider an independent \sle{\kappa}{} $\dbtilde\eta$ from $1$ to $0$, stopped at $\sigma_{r_1}$. As in the proof of \cref{le:next_tip}, $(f^{(t)}_{\tau_{A^{(t)}}})^{-1} \circ \dbtilde\eta$ has the same law as a subsegment of $\widetilde\eta^{(t)}[\tau_{A^{(t)}}, \tau_{\partial B(0,1/128)}]$, and therefore \[ \nu_{\mathbb{D};1\to 0}(\operatorname{Cont}((f^{(t)}_{\tau_{A^{(t)}}})^{-1}(\dbtilde\eta[0,\sigma_{r_1}])) > \widetilde c\,c_{\mathrm{N}} ) < \widetilde c\,p_{\mathrm{N}} .
\] We need to map $\dbtilde\eta$ back by $f_{\tau_A}^{-1} = f_t^{-1} \circ (f^{(t)}_{\tau_{A^{(t)}}})^{-1}$. Since $(f^{(t)}_{\tau_{A^{(t)}}})^{-1}(\dbtilde\eta[0,\sigma_{r_1}]) \subseteq B(0,1-r_0/4)$ by \cref{le:no_return} and $t$ is small, the derivative $\abs{(f_t^{-1})'}$ is bounded on $(f^{(t)}_{\tau_{A^{(t)}}})^{-1}(\dbtilde\eta[0,\sigma_{r_1}])$ by $2$, say. By the transformation rule for Minkowski content, this implies \[ \nu_{\mathbb{D};1\to 0}(\operatorname{Cont}(f_{\tau_A}^{-1}(\dbtilde\eta[0,\sigma_{r_1}])) > 2\widetilde c\,c_{\mathrm{N}}) < \widetilde c\,p_{\mathrm{N}} , \] and \cref{le:nice_tip_in_D} implies the claim. \end{proof} \begin{proof}[Proof of \cref{le:nice_tip}] We proceed as outlined below the statement of the lemma. First observe that it suffices to show the statement for radial \sle{\kappa}{} from $1$ to $0$. Indeed, on the event $\{\norm{\gamma-\eta}_{\infty;[0,\tau_A]} < \varepsilon\}$, the laws of the corresponding SLEs are absolutely continuous with density bounded by a constant depending on $\varepsilon$. This is because $f(\infty)$ and $f(u)$ are both at distance at least $2\varepsilon$ from $\gamma[0,\tau_A]$, due to \cref{le:inf_loc,le:inf_dist} and the definition of niceness of $D$, and we may apply \cref{le:sle_abs_cont} to get the desired absolute continuity. Pick $r_0,r_1,r_{\mathrm{N}} > 0$ as in \cref{le:dist_1,le:no_return,le:nice_tip_in_D}. Then pick $\varepsilon < r_{\mathrm{N}}/2$, and let $T$ be the capacity of the straight line from $1$ to $1/64$. For this choice of $\varepsilon$, let $p_\varepsilon$ be as in \cref{le:support}. Then \cref{it:follow} occurs with probability at least $p_\varepsilon$. Since the Minkowski content of radial \sle{\kappa}{} (stopped before entering $B(0,1/64)$) is almost surely finite, we can find $c_1 > 0$ such that \ref{it:cont} fails with probability at most $p_\varepsilon/4$. Next, pick $p_{\mathrm{N}} < p_\varepsilon/4$.
Supposing $D$ is $(p_{\mathrm{N}},r_{\mathrm{N}},c_{\mathrm{N}})$-nice, the probability of \ref{it:nice_base} failing is at most $p_{\mathrm{N}} < p_\varepsilon/4$. Finally, given $r_{\mathrm{N}}$, $\varepsilon$, and $p_{\mathrm{N}}$, we pick $c_{\mathrm{N}}$ according to \cref{le:next_tip}. This implies that \ref{it:nice_tip} fails with probability at most $p_{\mathrm{N}} < p_\varepsilon/4$. \end{proof} \begin{proof}[Proof of \cref{pr:nice_tip_iter}] Pick the constants as in \cref{le:nice_tip}. As already observed in the proof of \cref{le:next_tip}, we have $B(0,1/64) \subseteq A$ and $A \cap B(1,r_0) = \varnothing$. Note that $f \circ \eta$ is an \sle{\kappa}{}$(2)$ in $\mathbb{D}$ from $1$ to $f(\infty)$ with force point $f(u)$; with probability at least $p$, all of \cref{it:follow,it:cont,it:nice_base,it:nice_tip} occur for $f \circ \eta$. In particular, \ref{it:nice_tip} means $D\setminus \eta[0,\tau_{\abs{a}+1}]$ is $(p_{\mathrm{N}},r_{\mathrm{N}},c_{\mathrm{N}})$-nice. To bound $\operatorname{Cont}(\eta[0,\tau_{\abs{a}+1}])$, note that due to \ref{it:follow} and $B(0,1/64) \subseteq A$, $f \circ \eta$ hits $A$ before reaching the capacity of $\gamma$. Moreover, also due to \ref{it:follow}, it does not leave $B(0,1-r_{\mathrm{N}}/4)$ after $\sigma_{r_{\mathrm{N}}}$. Recalling from the proof of \cref{le:nice_tip} that $f(\infty)$ is at distance at least $2\varepsilon$ from $\gamma[0,\tau_A]$, we see that $\abs{(f^{-1})'}$ is bounded on the points of $(f \circ \eta)[\sigma_{r_{\mathrm{N}}},\tau_A]$ by a constant depending on $r_{\mathrm{N}}$ and $\varepsilon$ (by applying Koebe's distortion theorem on a subdomain avoiding $f(\infty)$). Therefore, combining \cref{it:nice_base,it:cont} and the transformation rule for Minkowski content \eqref{eq:cont_transf}, $\operatorname{Cont}(\eta[0,\tau_{\abs{a}+1}])$ is bounded. \end{proof} \begin{proof}[Proof of \cref{le:cont_tail_lower1}] It suffices to show this for large integers $r$; the general case then follows by monotonicity of $r \mapsto \operatorname{Cont}(\eta[0,\tau_r])$, after adjusting the constants $c_1,c_2$.
By \cref{le:initial_nice_tip}, $\widehat{\bbC} \setminus \eta[0,\tau_2]$ is $(p_{\mathrm{N}},r_{\mathrm{N}},c_{\mathrm{N}})$-nice with positive probability. Then, by \cref{pr:nice_tip_iter}, with probability at least $p^r$ we have \[ \operatorname{Cont}(\eta[\tau_2,\tau_{2+r}]) < rc . \] Since the Minkowski content of two-sided whole-plane \sle{\kappa}{} and therefore also of whole-plane \sle{\kappa}{}$(2)$ is almost surely finite, $\operatorname{Cont}(\eta[0,\tau_2])$ is also bounded above by some large constant with probability close to $1$. This finishes the proof. \end{proof} The proof also shows the following statement. \begin{proposition}\label{pr:cont_ltail_cond} There exist finite and positive constants $p_{\mathrm{N}},r_{\mathrm{N}},c_{\mathrm{N}},c,\widetilde c_2$ such that the following is true. Let $(D,a,u)$ be as in \eqref{eq:Da}, and $r,\ell>0$ such that $\ell \le r^d$. Let $\lambda = \ell^{-1/(d-1)}r^{1/(d-1)}$. If $\lambda D$ is $(p_{\mathrm{N}},r_{\mathrm{N}},c_{\mathrm{N}})$-nice and $\lambda\abs{a} \ge 1$, then \[ \nu_{D;a\to\infty;u}( \operatorname{Cont}(\eta[0,\tau_{\abs{a}+r}]) < c\ell ) \ge \exp(-\widetilde c_2 \ell^{-1/(d-1)} r^{d/(d-1)}) . \] \end{proposition} \begin{proof} By iterating \cref{pr:nice_tip_iter} as in the proof of \cref{le:cont_tail_lower1}, if $D \subset \widehat{\bbC}$ is $(p_{\mathrm{N}},r_{\mathrm{N}},c_{\mathrm{N}})$-nice and $\abs{a} \ge 1$, then \[ \nu_{D;a\to\infty;u}( \operatorname{Cont}(\eta[0,\tau_{\abs{a}+r}]) < rc ) \ge p^r \] for any $r \ge 1$. For the general statement, we use the same scaling argument as in the proof of \cref{pr:cont_tail_lower}. \end{proof} \subsection{Diameter lower bound for space-filling SLE} \label{sec:diam-lower-spf} In this section we will prove the counterparts of Propositions \ref{pr:cont_tail_lower} and \ref{pr:cont_ltail_cond} in the setting of space-filling SLE, which are stated as Propositions \ref{pr:area_tail_lower} and \ref{pr:area_ltail_cond} below.
The proofs follow the exact same structure as in the non-space-filling case and we will therefore be brief and only highlight the differences. The following is the counterpart of Proposition \ref{pr:cont_tail_lower}. Note in particular that the estimate takes exactly the same form as in the non-space-filling case with $d=2$. \begin{proposition}\label{pr:area_tail_lower} Let $\eta$ be a whole-plane space-filling \sle{\kappa}{} from $-\infty$ to $\infty$, $\kappa > 4$, satisfying $\eta(0)=0$. For some $\tilde c_2 > 0$ we have \[ \mathbb{P}( \operatorname{Cont}(\eta[0,\tau_r]) < \ell ) \ge \exp(-\tilde c_2 \ell^{-1} r^{2}) \] for any $r,\ell>0$ with $\ell \le r^2$. \end{proposition} One difference between the space-filling and non-space-filling settings is that in the space-filling case, $\eta|_{[0,\infty]}$ (viewed as a curve in $\mathbb{C}$) is not an SLE$_\kappa(\rho)$ for any vector $\rho$, and therefore the curve does not satisfy the domain Markov property, which plays a crucial role in the argument in the previous section. However, via the theory of imaginary geometry the curve does satisfy a counterpart of the domain Markov property if we also condition on a particular triple of marked points on the boundary of the trace, see Section \ref{se:prelim_spf}. While most of the argument in the non-space-filling case carries through using this observation, we need to modify some parts of the argument, e.g.\ the various absolute continuity arguments comparing different variants of SLE and the proof of Lemma \ref{le:initial_nice_tip}. There are also some parts of the proof that simplify since $\operatorname{Cont}(\cdot)$ is equal to Lebesgue area measure, so the natural measure of the curve while staying in a domain is deterministically bounded by the Lebesgue area measure of the domain. Following the proof in Section \ref{sec:diam-lower-nonspf}, we start by giving the definition of nice for space-filling SLE.
This property is now defined for tuples $(D,a,\u)$, $\u=(u_1,u_2,u_3)\in(\partial D)^3$, satisfying \begin{equation} \text{$(D,a)$ is as in \eqref{eq:Da}; $a,u_1,u_2,u_3$ are distinct and ordered counterclockwise}. \label{eq:Daubold} \end{equation} For $r_{\mathrm{N}},c_{\mathrm{N}} > 0$ we say that $(D,a,\u)$ is $(r_{\mathrm{N}},c_{\mathrm{N}})$-nice if \[ \abs{f(u_1)-1}\wedge\abs{f(u_3)-1} \ge 2r_{\mathrm{N}}, \] and \begin{equation} \operatorname{Cont}(f^{-1}( B(1,r_{\mathrm{N}}) \cap \overline{\mathbb{D}} )) \le c_{\mathrm{N}} . \label{eq:nice-spfill2} \end{equation} The second condition is simplified as compared to the non-space-filling case since Cont$(\cdot)$ is equal to Lebesgue measure. The space-filling counterpart of Proposition \ref{pr:cont_ltail_cond} is the following. \begin{proposition}\label{pr:area_ltail_cond} There exist finite and positive constants $r_{\mathrm{N}},c_{\mathrm{N}},c,\tilde c_2$ such that the following is true. Let $(D,a,\u)$ be as in \eqref{eq:Daubold}, and $r,\ell>0$ such that $\ell \le r^2$. Let $\lambda = \ell^{-1}r$. If $\lambda D$ is $(r_{\mathrm{N}},c_{\mathrm{N}})$-nice and $\lambda\abs{a} \ge 1$, then \[ \wh\nu_{D;a\to\infty;\u}( \operatorname{Cont}(\eta[0,\tau_{\abs{a}+r}]) < c\ell ) \ge \exp(-\tilde c_2 \ell^{-1} r^{2}) . \] \end{proposition} We now go through Section \ref{sec:diam-lower-nonspf} in chronological order and point out what changes we need to make for the case of space-filling SLE. The counterparts of Lemma \ref{le:cont_tail_lower1}, Proposition \ref{pr:nice_tip_iter}, and Lemma \ref{le:nice_tip} in the space-filling case are identical as before, except that we consider whole-plane space-filling SLE$_\kappa$ (restricted to the time interval $[0,\infty)$) and $(D,a,\u)$, respectively, instead of SLE$_\kappa(2)$ and $(D,a,u)$. Proposition \ref{pr:cont_tail_lower} is deduced from Lemma \ref{le:cont_tail_lower1} as before via scaling. 
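The bookkeeping in this scaling step can be sanity-checked numerically. Rescaling by $\lambda$ multiplies radii by $\lambda$ and Minkowski content by $\lambda^d$, and the choice $\lambda = \ell^{-1/(d-1)} r^{1/(d-1)}$ makes the rescaled pair satisfy $\lambda^d \ell = \lambda r$, reducing to the unit-rate iteration bound $p^{r'}$ with $r' = \ell^{-1/(d-1)} r^{d/(d-1)}$, the exponent appearing in the propositions; for $d=2$ this is the $\lambda = \ell^{-1} r$ used above. The following sketch is our own illustration (the helper `rescaled_pair` is hypothetical, not from the paper):

```python
# Numerically verify that lambda = ell^(-1/(d-1)) * r^(1/(d-1)) equalises the
# rescaled radius lambda*r and the rescaled content threshold lambda^d * ell,
# and that their common value is the exponent ell^(-1/(d-1)) * r^(d/(d-1)).
def rescaled_pair(d, r, ell):
    lam = ell ** (-1.0 / (d - 1)) * r ** (1.0 / (d - 1))
    return lam * r, lam ** d * ell  # (r', ell')

for d, r, ell in [(1.5, 4.0, 0.3), (2.0, 10.0, 5.0), (1.25, 2.0, 1.0)]:
    r_scaled, ell_scaled = rescaled_pair(d, r, ell)
    exponent = ell ** (-1.0 / (d - 1)) * r ** (d / (d - 1))
    assert abs(r_scaled - ell_scaled) < 1e-9 * r_scaled
    assert abs(r_scaled - exponent) < 1e-9 * exponent
```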
Lemmas \ref{le:dist_1}, \ref{le:inf_loc}, \ref{le:inf_dist}, and \ref{le:no_return} are used in precisely the same form in the space-filling case as in the non-space-filling case; note that these results only concern conformal maps and not SLE. For the counterpart of Corollary \ref{co:target_change}, on the other hand, we modify the statement and the proof as follows. In particular, we do not prove a uniform bound on the Radon-Nikodym derivative in this case. \begin{lemma}\label{le:target_change_ig} There exist finite constants $c',\varepsilon > 0$ such that the following is true. Let $(D,a,\u)$ be as in \eqref{eq:Daubold} with $\abs{a} \ge 1$ and $|f(u_1)-1|\wedge|f(u_3)-1|>2\varepsilon$, and set $\u_0 \mathrel{\mathop:}= (-i,-1,i)$. Let $A = f(\widehat{\bbC} \setminus B(0,\abs{a}+1))$ and let $\gamma$ be the straight line from $1$ to $0$. Then, for any event $E\subset \{ \norm{\gamma-\eta}_{\infty;[0,\tau_A]} < \varepsilon \}$ measurable with respect to the curve until time $\tau_A$ and for $\nu_1,\nu_2\in\{ \wh\nu_{\mathbb{D};1\to 0;\u_0}, \wh\nu_{\mathbb{D};1\to f(\infty);\u_0}, \wh\nu_{\mathbb{D};1\to 0;f(\u)}, \wh\nu_{\mathbb{D};1\to f(\infty);f(\u)} \}$ it holds that $\nu_1(E)\leq c' \nu_2(E)^{1/2}$. \end{lemma} \begin{proof} Let $h_1,h_2$ be the variants of the GFF associated with the measures $\nu_1,\nu_2$. Then $h_1,h_2$ can be coupled together such that $h_1=h_2+g$ for $g$ a harmonic function that is zero along the $2\varepsilon$-neighbourhood of $1$ on $\partial\mathbb{D}$, and bounded by a constant depending only on $\varepsilon$ on $\{ z\not\in A\,:\,\operatorname{dist}(z,\gamma)<1.5\varepsilon \}$. The lemma is now immediate by the argument in \cite[Lemma 2.1]{hs-mating-eucl}. \end{proof} The space-filling counterpart of Lemma \ref{le:nice_tip_in_D} is the following, with the same proof as before. 
\begin{lemma}\label{le:nice_tip_in_D_spf} In the setup of \cref{le:nice_tip_in_D}, if $\operatorname{Cont}(f_\alpha^{-1}(B(1,r_{1}) \cap \overline{\mathbb{D}})) \le \widetilde c\,c_{\mathrm{N}}, $ then $(D \setminus f^{-1}(\alpha),\ f^{-1}(\alpha(1)),\ \u)$ is $(r_{\mathrm{N}},c_{\mathrm{N}})$-nice. \end{lemma} Lemma \ref{le:support} still holds in the space-filling setting, with the only modification being that we replace $\nu_{\mathbb{D};1\to 0}$ by $\wh\nu_{\mathbb{D};1\to 0;\u}$ for fixed $\u=(u_1,u_2,u_3)$ such that $1,u_1,u_2,u_3$ are ordered counterclockwise. The proof of the lemma in the space-filling setting will follow by iterative applications of Lemma \ref{le:support-spfill-nbh} right below. Notice that in this lemma we do not rule out scenarios where $\eta$ oscillates back and forth while staying close to $\gamma$, while we do not want such behavior in Lemma \ref{le:support} as we consider the $L^\infty$ distance. \begin{lemma}\label{le:support-spfill-nbh} Let $\u=(u_1,u_2,u_3)$ be distinct points of $(\partial\mathbb{D})\setminus\{1 \}$ such that $1,u_1,u_2,u_3$ are ordered clockwise and let $\gamma\colon[0,T]\to \mathbb{D} \setminus \{0\}$ be a simple curve with $\gamma(0) = 1$ for some $T>0$. For $\delta>0$ let $A(\delta)$ denote the $\delta$-neighborhood of $\gamma([0,T])$. Define stopping times \[ \sigma_1=\inf\{t\geq 0\,:\,|\eta(t)-\gamma(T)|<\delta \},\qquad \sigma_2 = \inf\{ t\geq 0\,:\,\eta(t)\not\in A(\delta) \}. \] Then for any $\delta>0$ we have $\wh\nu_{\mathbb{D};1\to 0;\u}[\sigma_1<\sigma_2]>0$. \end{lemma} \begin{proof} The proof is identical to the proof of \cite[Lemma 2.3]{mw-cut-double}. The only difference is that the lemma treats chordal curves and we consider a radial curve, but the argument is the same. More precisely, we need the argument in the lemma for $\kappa>4$, corresponding to the case where the curve in question is a counterflow line.
It is enough to prove the claim for counterflow lines since by \cite[Theorem 1.13]{ms-ig4}, for every rational $z$, the set $\eta[0,\tau_z]$ (with $\tau_z$ the first hitting time of $z$) is the union of branches of a branching counterflow line. (Strictly speaking, \cite[Theorem 1.13]{ms-ig4} is stated for GFF with fully branchable boundary condition, but we can perform an absolutely continuous change of measure such that the law of $h\big|_{A(\delta)}$ becomes the law of the restriction of a GFF with fully branchable boundary condition.) \end{proof} \begin{proof}[Proof of Lemma \ref{le:support} with $\wh\nu_{\mathbb{D};1\to 0;\u}$ instead of $\nu_{\mathbb{D};1\to 0}$] Note that Lemma \ref{le:support-spfill-nbh} can also be applied to simply connected domains not equal to $\mathbb{D}$ by applying an appropriate conformal change of coordinates to the GFF and the curves $\eta$ and $\gamma$. Set $j_0=10/\varepsilon+1$, assuming without loss of generality that $j_0$ is an integer. We apply Lemma \ref{le:support-spfill-nbh} iteratively for $j=1,2,\dots,j_0$ with domain $D=\mathbb{D}\setminus \eta[0,\tau_{j-1}]$, $\gamma$ the straight line from $\eta(\tau_{j-1})$ to $1-j\varepsilon/10$, $\delta\ll\varepsilon$, and $\tau_{j}$ defined as the stopping time $\sigma_1$ in the lemma (with $\tau_0=0$). \end{proof} The statement and proof of \cref{le:next_tip} simplify as follows, since for suitable $c_{\mathrm{N}}$ the condition of \cref{le:nice_tip_in_D_spf} is trivially satisfied as $f_{\alpha}^{-1}$ maps into $\mathbb{D}$. \begin{lemma} In the setup of \cref{le:next_tip}, if $\norm{\gamma-\eta}_{\infty;[0,\tau_A]} < \varepsilon$, then we have that \linebreak$(D \setminus f^{-1}(\eta[0,\tau_A]),\ f^{-1}(\eta(\tau_A)),\ \u)$ is $(r_{\mathrm{N}}, c_{\mathrm{N}})$-nice. 
\end{lemma} The statement and proof of Lemma \ref{le:initial_nice_tip} are identical to the non-space-filling case, except that we consider $\wh\nu_{D;a\to\infty;\u}$ instead of $\nu_{D;a\to\infty;u}$ and that the proof relies on the following lemma (proved at the very end of the proof of \cite[Lemma 3.1]{ghm-kpz-sle}) to treat the case where both $|f(u_1)-1|$ and $|f(u_3)-1|$ are small.\footnote{Note that there is a typo in the published version of \cite{ghm-kpz-sle} which interchanges the role of ``left'' and ``right'' in the second-to-last sentence of the below lemma.} A similar lemma is used when only one of $|f(u_1)-1|$ and $|f(u_3)-1|$ is small, with the only difference being that we only grow a flow line from the point in $\{f(u_1),f(u_3)\}$ that is close to 1. \begin{lemma} There exist positive constants $\delta, q > 0$ such that the following is true. Let $\u=(u_1,u_2,u_3)$ be such that $1,u_1,u_2,u_3\in\partial\mathbbm D$ are distinct and ordered clockwise. Let $\wh h$ denote the variant of the GFF in $\mathbbm{D}$ that is used when defining $\wh\nu_{\mathbbm{D};1\to 0;\u}$ at the very end of \cref{se:prelim_spf}. Suppose that $|u_1-1| \vee|u_3-1| \leq\delta$. Let $\eta^{\mathrm{L}}_{u_1}$ (resp.\ $\eta^{\mathrm{R}}_{u_3}$) denote the flow line of $\wh h$ started from $u_1$ (resp.\ $u_3$) with angle $\pi/2$ (resp.\ $-\pi/2$) targeted at $0$, and let $S_1$ (resp.\ $S_3$) be its exit time from $B_{2\delta}(1)$. Let $U$ be the connected component containing $0$ of $\mathbbm D\setminus ( \eta^{\mathrm{L}}_{u_1}([0,S_1]) \cup \eta^{\mathrm{R}}_{u_3}([0,S_3]) )$. Let $E$ be the event that the harmonic measure from 0 in $U$ of each of the right side of $\eta^{\mathrm{L}}_{u_1}([0,S_1])$ and the left side of $\eta^{\mathrm{R}}_{u_3}([0,S_3])$ is at least $q$. Then $\mathbbm P(E) \geq q$. 
\end{lemma} Finally, the proofs of the space-filling counterparts of Lemma \ref{le:cont_tail_lower1}, Proposition \ref{pr:nice_tip_iter}, Lemma \ref{le:nice_tip}, and Proposition \ref{pr:cont_ltail_cond} go through precisely as before. The only minor change is when justifying the space-filling counterpart of Lemma \ref{le:nice_tip}\ref{it:nice_base}, where we now use \[ \operatorname{Cont}(f^{-1}(\eta[0,\sigma_{r_{\mathrm{N}}}])) \le \operatorname{Cont}(f^{-1}( B(1,r_{\mathrm{N}})\cap\overline{\mathbb{D}} ))\le c_{\mathrm{N}}. \] \subsection{Lower bounds on the regularity of SLE} \label{se:lowerbounds_pf} In this section we conclude the proofs of \cref{thm:main_lil,thm:main_moc,thm:main_var}. Given Proposition \ref{pr:upperbounds}, it remains to prove that the constants $c_0,c_1$ in the theorems are positive. A natural approach for proving lower bounds is to find disjoint intervals $[s_k,t_k]$ on which the increment of $\eta$ is exceptionally large with a certain (small) probability. If the sum of these probabilities is infinite and there is sufficient decorrelation of these events, then a variant of the second Borel-Cantelli lemma will imply that infinitely many of these events occur. In our case, however, the correlations are not easy to control. The probabilities of having exceptionally large increments in the non-space-filling case are given in \cref{pr:cont_tail_lower} or its ``conditional'' version \cref{pr:cont_ltail_cond}. But when conditioning on $\eta[0,s_k]$, \cref{pr:cont_ltail_cond} does not apply to every realisation of $\eta[0,s_k]$ but only to those that are nice (as defined in \cref{sec:diam-lower-nonspf}). Another attempt would be to use the corresponding upper bound of the probability given by \cref{pr:diam_tail_upper}. But the upper and lower bounds differ not just by a factor but by a power, which is too weak to guarantee sufficient decorrelation. The exact same issues arise in the case of space-filling SLE.
Our idea is to introduce another sequence of events $B_k$ on which the conditional lower bound on the interval $[s_k,t_k]$ is valid again (an example is the event that $\eta[0,s_k]$ is nice). We formulate the argument as an abstract lemma. \begin{lemma}\label{le:conditional_prob} Let $A_1,...,A_k$ and $B_1,...,B_k$ be events. Suppose that $\mathbb{P}(B_j) \ge p$ for some $p>0$ and all $j$. Let \[ p_j = \mathbb{P}( A_j \mid B_j \cap (A_1 \cap B_1)^c \cap ... \cap (A_{j-1} \cap B_{j-1})^c ) . \] Then, if $q\in[0,1]$ satisfies \begin{equation}\label{eq:cond_p_sum} \exp\left( -(1-\frac{1-p}{q})(p_1+...+p_k) \right) < q , \end{equation} we have \[ \mathbb{P}( (A_1 \cap B_1) \cup ... \cup (A_k \cap B_k) ) > 1-q . \] \end{lemma} \begin{proof} Suppose the contrary, i.e. \begin{equation}\label{eq:cond_p_contra} \mathbb{P}( (A_1 \cap B_1)^c \cap ... \cap (A_k \cap B_k)^c ) \ge q . \end{equation} Observe that in this case \[ \mathbb{P}( B_j^c \mid (A_1 \cap B_1)^c \cap ... \cap (A_{j-1} \cap B_{j-1})^c ) \le \frac{\mathbb{P}(B_j^c)}{\mathbb{P}( (A_1 \cap B_1)^c \cap ... \cap (A_{j-1} \cap B_{j-1})^c )} \le \frac{1-p}{q} \] by our assumptions. It follows that \[ \mathbb{P}( A_j \cap B_j \mid (A_1 \cap B_1)^c \cap ... \cap (A_{j-1} \cap B_{j-1})^c ) \ge (1-\frac{1-p}{q})p_j \] and therefore inductively \[ \mathbb{P}( (A_1 \cap B_1)^c \cap ... \cap (A_k \cap B_k)^c ) \le \prod_{j \le k} \left(1-(1-\frac{1-p}{q})p_j\right) \le \exp\left( -(1-\frac{1-p}{q}) \sum_{j \le k} p_j \right) \] which by \eqref{eq:cond_p_sum} contradicts \eqref{eq:cond_p_contra}. \end{proof} We now prove the lower bound in \cref{thm:main_moc}, namely that the constant $c_0$ in the theorem statement must be positive; see Proposition \ref{pr:upperbounds} for the other assertions of the theorem. We use \cref{le:conditional_prob} to show that the lower bound is satisfied with positive probability. This will imply the claim.
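Since \cref{le:conditional_prob} is purely combinatorial, its hypothesis and conclusion can be compared exactly on a toy instance. In the sketch below (our own illustration, not from the paper) the events $A_j$ are independent and every $B_j$ is the full space, so $p=1$ and the $p_j$ reduce to $\mathbb{P}(A_j)$; all numerical values are placeholders.

```python
import math

# Toy instance of the abstract lemma: independent events A_j, B_j = full
# space (p = 1), so p_j = P(A_j) and the conclusion is computable exactly.
p = 1.0
p_js = [0.3, 0.2, 0.25, 0.4, 0.15]
q = 0.35

lhs = math.exp(-(1 - (1 - p) / q) * sum(p_js))  # hypothesis of the lemma
assert lhs < q                                  # hypothesis is satisfied

# by independence the probability of the union is exact
prob_union = 1 - math.prod(1 - pj for pj in p_js)
assert prob_union > 1 - q                       # conclusion of the lemma
```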
As in the previous sections, we define $\tau_r=\inf\{t\geq 0\,:\, |\eta(t)|=r \}$ to be the hitting time of radius $r$. \begin{lemma}\label{le:mod_lower_p} There exist $p_0 > 0$ and $b_0 > 0$ such that the following is true. For any $(D,a)$ as in \eqref{eq:Da} with $\abs{a} > 0$ and $u\in\partial D\setminus\{a\}$, we have \begin{equation*} \nu_{D;a \to \infty;u}\left( \sup_{s,t \in [0,\tau_{\abs{a}+r}]} \frac{\abs{\eta(t)-\eta(s)}}{\abs{t-s}^{1/d}(\log \abs{t-s}^{-1})^{1-1/d}} \ge b_0 \right) \ge p_0 \end{equation*} for any $r > 0$. The same holds for space-filling \sle{\kappa}{}, except that we consider $(D,a,\u)$ as in \eqref{eq:Daubold} and $\wh\nu_{D;a\to\infty;\u}$ instead of $(D,a,u)$ and $\nu_{D;a\to\infty;u}$. \end{lemma} \begin{proof}[Proof of \cref{thm:main_moc} given \cref{le:mod_lower_p}] By \cref{pr:upperbounds} it remains to show $c_0 > 0$ in \cref{pr:01law_mv}. Applying \cref{le:mod_lower_p} with arbitrarily small $r>0$, we see that with probability at least $p_0$ we have \[ \lim_{r\downarrow 0} \sup_{s,t \in [0,\tau_{\abs{a}+r}]} \frac{\abs{\eta(t)-\eta(s)}}{\abs{t-s}^{1/d}(\log \abs{t-s}^{-1})^{1-1/d}} \ge b_0 . \] This implies $S_I \ge b_0$ for some random interval $I$ (with $S_I$ defined in the proof of \cref{pr:01law_mv}). But since $S_I$ is a deterministic constant independent of $I$, the result follows. \end{proof} \begin{proof}[Proof of \cref{le:mod_lower_p}] We will do the proof for \sle{\kappa}$(2)$ and explain at the end what modifications are necessary for the case of space-filling \sle{\kappa}{}. For $\varepsilon > 0$ and $k=1,...,\lfloor\varepsilon^{-1}r\rfloor$ we define the event \[ A_{\varepsilon,k} = \{ \operatorname{Cont}(\eta[\tau_{\abs{a}+(k-1)\varepsilon},\tau_{\abs{a}+k\varepsilon}]) \le a_0 \varepsilon^d (\log \varepsilon^{-1})^{-(d-1)} \} \] where $a_0 > 0$ is a constant whose value will be decided upon later. 
Note that on the event $A_{\varepsilon,k}$ we have \[ \abs{\eta(\tau_{\abs{a}+k\varepsilon})-\eta(\tau_{\abs{a}+(k-1)\varepsilon})} \geq \varepsilon \gtrsim \abs{\tau_{\abs{a}+k\varepsilon}-\tau_{\abs{a}+(k-1)\varepsilon}}^{1/d}(\log \abs{\tau_{\abs{a}+k\varepsilon}-\tau_{\abs{a}+(k-1)\varepsilon}}^{-1})^{(d-1)/d} \] which is the lower bound we want to prove. Let $p_{\mathrm{N}},r_{\mathrm{N}},c_{\mathrm{N}},\widetilde c_2$ be as in \cref{pr:cont_ltail_cond}. Let $B_{\varepsilon,k}$ denote the event that $(D_k,a_k,u_k)$ is $(p_{\mathrm{N}},r_{\mathrm{N}},c_{\mathrm{N}})$-nice where \[ (D_k,a_k,u_k) \mathrel{\mathop:}= \left( \varepsilon^{-1}(\log\varepsilon^{-1})\,(D \setminus \eta[0,\tau_{\abs{a}+(k-1)\varepsilon}]), \ \varepsilon^{-1}(\log\varepsilon^{-1})\,\eta(\tau_{\abs{a}+(k-1)\varepsilon}), \ \varepsilon^{-1}(\log\varepsilon^{-1}) u \right). \] On the event $B_{\varepsilon,k}$ we have by \cref{pr:cont_ltail_cond} \begin{equation*} \nu_{D;a\to\infty;u}(A_{\varepsilon,k} \mid \eta[0,\tau_{\abs{a}+(k-1)\varepsilon}]) \ge \exp\left(-\widetilde c_2 a_0^{-1/(d-1)} (\log \varepsilon^{-1})\right) = \varepsilon^{\widetilde c_2 a_0^{-1/(d-1)}} = \varepsilon^{1/2} \end{equation*} for a suitable choice of $a_0$. Note that $\varepsilon^{-1}(\log\varepsilon^{-1})\,\eta[0,\tau_{\abs{a}+(k-1)\varepsilon}]$ is a radial \sle{\kappa}{}$(2)$ in $\varepsilon^{-1}(\log\varepsilon^{-1})D$ stopped at hitting radius $\varepsilon^{-1}(\log\varepsilon^{-1})\abs{a}+(k-1)(\log\varepsilon^{-1})$. By \cref{le:initial_nice_tip}, we have $\nu_{D;a\to\infty;u}(B_{\varepsilon,k}) \ge p_1$ where $p_1>0$ does not depend on $D,\varepsilon,k$. Applying \cref{le:conditional_prob} with any choice of $q \in {(1-p_1,1)}$ and noting that \[ \exp\left( -(1-\frac{1-p_1}{q})r\varepsilon^{-1}\varepsilon^{1/2} \right) < q \] for small $\varepsilon > 0$, this implies the result with $p_0 = 1-q$. 
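The ``suitable choice of $a_0$'' above is elementary arithmetic: taking $a_0 = (2\widetilde c_2)^{d-1}$ gives $\widetilde c_2 a_0^{-1/(d-1)} = 1/2$, and the resulting quantity $r\varepsilon^{-1}\varepsilon^{1/2} = r\varepsilon^{-1/2}$ diverges as $\varepsilon \downarrow 0$, so the hypothesis of \cref{le:conditional_prob} holds for small $\varepsilon$. A quick numerical check (the constants $d$, $\widetilde c_2$, $p_1$, $q$, $r$ below are placeholders, not values from the paper):

```python
import math

# Check the arithmetic behind the choice of a0 and the smallness condition.
d, c2 = 1.5, 3.0
a0 = (2 * c2) ** (d - 1)
assert abs(c2 * a0 ** (-1.0 / (d - 1)) - 0.5) < 1e-12  # exponent becomes 1/2

p1, q, r = 0.1, 0.95, 1.0  # need q > 1 - p1 for the lemma to apply
for eps in [1e-2, 1e-4, 1e-6]:
    lhs = math.exp(-(1 - (1 - p1) / q) * r * eps ** -0.5)
    assert lhs < q  # hypothesis of the lemma, satisfied for small eps
```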
In the case of space-filling \sle{\kappa}{}, we instead apply \cref{pr:area_ltail_cond} and the space-filling analogue of \cref{le:initial_nice_tip} as explained at the end of \cref{sec:diam-lower-spf}. \end{proof} We finally show the lower bounds in \cref{thm:main_lil}. By \cref{pr:var_lil}, this implies also the lower bound in \cref{thm:main_var}. Given \cref{pr:upperbounds}, it remains to prove that the constants $c_0,c_1$ are positive. By the stationarity of $\eta$, it suffices to show the claim for $t_0=0$. \begin{proof}[Proof of \cref{thm:main_lil}] The proof is almost identical for whole-plane \sle{\kappa}$(2)$ ($\kappa\in(0,8]$) and for whole-plane space-filling \sle{\kappa}{} ($\kappa>4$). We begin with the former. We first prove the $t \downarrow 0$ statement. Define the sequences \[ r_k = \exp(-k^2), \quad m_k = a_0 r_k^d (\log\log r_k^{-1})^{-(d-1)} \] where the exact value of $a_0 > 0$ will be decided upon later. Note that if \[ \operatorname{Cont}(\eta[0,\tau_{r_k}]) < m_k , \] then for some $t < m_k$ we have \[ \abs{\eta(t)} = r_k \asymp m_k^{1/d}(\log\log m_k^{-1})^{(d-1)/d} \] which is exactly the lower bound in the law of the iterated logarithm. By \cref{pr:cont_tail_lower} we have \[ \mathbb{P}( \operatorname{Cont}(\eta[0,\tau_{r_k}]) < m_k ) \ge \exp(-\widetilde c_2 m_k^{-1/(d-1)}r_k^{d/(d-1)}) \asymp k^{-\widetilde c a_0^{-1/(d-1)}} , \] so $a_0$ can be chosen such that the sum of the probabilities diverges. We would like to argue by \cref{le:conditional_prob} that with positive probability this happens for infinitely many $k$. Then \cref{pr:01law} implies that the probability must actually be $1$. To show this, we introduce another sequence $r'_k$ with $r_{k+1} < r'_{k+1} < r_k$. Let \[ r'_{k+1} = 2 r_k (\log k)^{-1} \] and define the events \[ A_k = \{ \operatorname{Cont}(\eta[\tau_{r'_{k+1}},\tau_{r_k}]) \le m_k \}, \quad A'_k = \{ \operatorname{Cont}(\eta[0,\tau_{r'_{k+1}}]) \le m_k \}, \quad \bar A_k = A_k \cap A'_k . 
\] As noted above, on the event $\limsup_k \bar A_k$ we have \[ \limsup_{t \downarrow 0} \frac{\abs{\eta(t)}}{t^{1/d}(\log\log t^{-1})^{1-1/d}} \ge b_0 \] where $b_0$ depends on the choice of $a_0$. We are left to show $\mathbb{P}(\limsup_k \bar A_k) > 0$. Let $B'_k$ denote the event that $\widehat{\bbC} \setminus r_k^{-1}(\log k)\,\eta[0,\tau_{r'_{k+1}}]$ is $(p_{\mathrm{N}},r_{\mathrm{N}},c_{\mathrm{N}})$-nice. On the event $B'_k$ we have by \cref{pr:cont_ltail_cond} \begin{equation*} \mathbb{P}( \operatorname{Cont}(\eta[\tau_{r'_{k+1}},\tau_{r_k}]) < a_1 m_k \mid \eta[0,\tau_{r'_{k+1}}] ) \ge \exp(-\widetilde c_2 m_k^{-1/(d-1)}r_k^{d/(d-1)}) \asymp k^{-\widetilde c a_0^{-1/(d-1)}} = k^{-1} \end{equation*} for suitable choice of $a_0,a_1$. Since $r_k^{-1}(\log k)\,\eta[0,\tau_{r'_{k+1}}]$ is again a whole-plane \sle{\kappa}{}$(2)$ stopped at hitting radius $2$, we have $\mathbb{P}(B'_k) \ge p_1$ by \cref{le:initial_nice_tip} where $p_1 > 0$ does not depend on $k$. The content $\operatorname{Cont}(\eta[0,\tau_{r'_{k+1}}])$ can be controlled by the negative moments in \cite[Lemma 1.7]{zhan-hoelder}, implying \begin{equation} \begin{split} \mathbb{P}( \operatorname{Cont}(\eta[0,\tau_{r'_{k+1}}]) > m_k ) &\le \mathbb{P}( \abs{\eta(m_k)} < r'_{k+1} ) \le (r'_{k+1})^{d(1-\varepsilon)} \mathbb{E} \abs{\eta(m_k)}^{-d(1-\varepsilon)}\\ &\lesssim (r'_{k+1})^{d(1-\varepsilon)} m_k^{-(1-\varepsilon)} \asymp (\log k)^{-1+\varepsilon} . \end{split} \label{eq:neg-moment} \end{equation} It follows that $\mathbb{P}(A'_k \cap B'_k) > p_1/2$. Write $B_k = A'_k \cap B'_k$. We apply \cref{le:conditional_prob} with the events $A_{k'},...,A_k$ and $B_{k'},...,B_k$. Pick any $q \in {(1-p_1/2,1)}$ and note that for any $k \in \mathbb{N}$ we can find $k' > k$ such that \[ \exp\left( -(1-\frac{1-p_1/2}{q})((k')^{-1}+...+k^{-1}) \right) < q . 
\] This implies $\mathbb{P}\left( \bigcup_{k' \ge k} \bar A_{k'} \right) \ge \mathbb{P}\left( \bigcup_{k' \ge k} (A_{k'} \cap B_{k'}) \right) > 1-q$ for all $k$, and therefore \linebreak$\mathbb{P}(\limsup_k \bar A_k) \ge 1-q > 0$. The proof is identical for space-filling \sle{\kappa}{}, except that the proof of \eqref{eq:neg-moment} is even easier since $\operatorname{Cont}(\eta[0,\tau_{r'_{k+1}}])\leq \pi(r'_{k+1})^2$ deterministically. The proof of the $t \to \infty$ statement is identical when we set \[ r_k = \exp(k^2), \quad r'_{k-1} = 2r_k(\log k)^{-1}, \quad m_k = a_0 r_k^d (\log\log r_k)^{-(d-1)}. \] \end{proof} \bibliographystyle{alpha} \input{sle_regularity_uptoconst.bbl} \end{document}
\section{Introduction and Motivation} \label{WJD:intro} Two recently found properties of (super-massive) black holes (SMBHs) in the centers of active galaxies shed new light on their formation and growth: \begin{itemize} \item The luminosities of the quasars with the largest redshifts indicate that central BH masses of $> 10^9\,\mathrm{M}_\odot$ were already present when the Universe was less than $10^9\,$years old (Barth et al.\ 2003). These masses are lower limits as they are based on the assumption that the BHs accrete at their Eddington limit. There is no indication that the (majority of the) sample of highest-redshift-quasar luminosities is afflicted by amplification by gravitational lensing (White et al.\ 2005). \item Surveys in the X-ray regime (Hasinger et al.\ 2005) and in the optical/UV regime (Wolf et al.\ 2003) show a strong luminosity dependence of the redshift at which the active galactic nuclei (AGN) space density peaks: The lower the AGN luminosity, i.e., the smaller the BH mass, the later in the evolution of the Universe the co-moving space density of these AGN peaks. In other words, it takes BHs of a lower {\it final\/} mass longer to reach that mass than BHs of a larger final mass. This is also supported by a comparison of the local and the derived accretion mass function of SMBHs (Shankar et al.\ 2004). \end{itemize} \noindent In this contribution we report results of a project investigating the growth of SMBHs by disk accretion. We find that both above-mentioned phenomena can be explained in the framework of such a model. \section{Black Hole Formation and Growth in Galactic Centers} \label{WJD:scenario} Our model involves two basic processes, namely: (a) galaxy mergers resulting in aggregation of material into a self-gravitating disk around the galactic center (GC); and (b) subsequent disk accretion into a central BH. 
The issue in the past has been whether enough disk material could be accreted in the available time, a problem that has become steadily more acute as increasingly luminous quasars have been found at great redshifts. In regard to (a), we envisage that the interaction between two (gas-rich) galaxies leads to the rapid formation of a circum-nuclear gaseous disk. This process is expected to occur on a dynamical time scale. The mass and spatial extent of this disk will depend on the mass and gas content of the interacting galaxies and on the impact parameters of the collision. The process will thus result in a range of disk masses and extents. For our subsequent reasoning it is important to note that fairly early in the evolution of the Universe ($z > 1.5 \cdots 3$) massive galaxies were already in place (Chen \& Marzke 2004, Glazebrook et al.\ 2004, van Dokkum \& Ellis 2003). Moreover, there is mounting evidence (e.g., Dunlop et al.\ 2003, S\'anchez et al.\ 2004) that interactions between and mergers of galaxies trigger nuclear activity (e.g., McLeod \& Rieke 1994a\&b, Bahcall et al. 1997, Canalizo \& Stockton 2001, S\'anchez et al. 2005, Sanders 2004). Thus for a {\it major merger\/} the resulting gaseous disk may well contain $> 10^{10}\,\mathrm{M}_\odot$ within a few hundred parsecs of the GC. This basic process has been demonstrated by numerical simulation (e.g., Barnes 2002, Barnes \& Hernquist 1996 \& 1998, Iono, Yun, \& Mihos 2004) and provides the essential initial conditions for our analysis. The resulting nuclear disk will, of course, be subject to viscous dissipation causing an inward flow of material towards the GC where it is potentially available for accretion into a BH. With the initial mass and radius parameters discussed above, such a disk must inevitably be self-gravitating, at least initially.
In these circumstances, the disk accretion time scale and, hence, the growth times and limiting mass of the putative BH, depend on both the mass and extent of the initial disks. We also note that, initially, the disk may provide mass to the BH at a rate higher than is allowed by the Eddington limit. Thus in such a case initially the BH growth rate is defined by the Eddington limit, with the rest of the material presumably driven from the system (or at least the proximity of the BH) by radiation pressure. The peak luminosity will occur roughly when the rate of supply of material from the disk equals the Eddington limit for the current BH mass (higher for higher accretion rate). \section{Evolution of Self-Gravitating Accretion Disks and the Growth of Black Hole Masses}\label{evol} We carried out numerical simulations modelling the evolution of initially self-gravitating accretion disks and the ensuing growth of the central BH. Our model disks are geometrically thin and rotationally symmetric, with the following modifications with respect to standard accretion disk models: \begin{itemize} \item We allow for a disk mass which is not necessarily small compared to the mass of the central BH, i.e., we do not assume a Keplerian rotation curve in the disk but solve Poisson's equation. \item We use the generalized viscosity prescription by Duschl et al. (2000; $\beta$-viscosity). \item We take the Eddington limit into account. Mass flow above the Eddington limit is assumed to be lost from the system. \end{itemize} \noindent Our numerical code is based on an explicit finite-difference scheme. 
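As a rough illustration of what such a scheme looks like, the following toy sketch evolves the surface density by explicit differencing and caps the inner mass flux at the Eddington rate. This is emphatically not the authors' code and rests on strong simplifying assumptions: a constant diffusivity replaces the $\beta$-viscosity, self-gravity is ignored, and all quantities (including the Eddington coefficient) are placeholder values in arbitrary units.

```python
# Toy explicit finite-difference sketch of a viscously spreading disk feeding
# a central black hole.  Assumptions: constant diffusivity D, no self-gravity,
# arbitrary units, placeholder Eddington coefficient.
N, D, dt = 50, 1.0, 1e-4      # radial cells, diffusivity, time step
ds = 1.0 / N                  # grid spacing (stability: D*dt/ds**2 <= 1/2)
sigma = [1.0] * N             # initial surface density profile (flat)
m_bh, m_lost = 1e-3, 0.0      # seed BH mass; mass expelled above the limit
edd_per_mass = 5.0            # Eddington rate per unit BH mass (placeholder)

def disk_mass(sig):
    return sum(sig) * ds

for _ in range(2000):
    before = disk_mass(sigma)
    new = sigma[:]
    for i in range(N):
        left = sigma[i - 1] if i > 0 else 0.0            # absorbing inner edge
        right = sigma[i + 1] if i < N - 1 else sigma[i]  # zero-flux outer edge
        new[i] = sigma[i] + D * dt / ds**2 * (left - 2 * sigma[i] + right)
    sigma = new
    offered = before - disk_mass(sigma)                # mass crossing inner edge
    accreted = min(offered, edd_per_mass * m_bh * dt)  # Eddington cap
    m_bh += accreted
    m_lost += offered - accreted  # driven off by radiation pressure

# early on the supply exceeds the Eddington rate, so the BH grows
# exponentially while the excess is lost from the system
assert m_bh > 1e-3 and m_lost > 0.0
```

Even this crude sketch reproduces the qualitative two-phase behaviour described below: an initial Eddington-limited phase in which the disk offers more mass than the hole can swallow, followed by a supply-limited phase once the disk drains.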
For further details of the modelling, we refer the reader to an upcoming paper (Duschl \& Strittmatter, {\it in prep.\/}). \subsection{A Reference Model} As a Reference Model, we define an accretion disk with the following set of parameters: \begin{itemize} \item Inner radius of the disk: $s_{\rm i} = 10^{16.5}\,$cm \item Outer radius of the disk: $s_{\rm o} = 10^{20.5}\,\textrm{cm} \approx 10^2\,$pc \item Initial disk mass: $M_{{\rm d},0} = 10^{10}\,\textrm{M}_\odot$ \item Initial surface density distribution: $\Sigma_0 \left( s \right) \propto s^{-1}$. \item Seed black hole mass: $M_{{\rm BH},0} = 10^6\,\textrm{M}_\odot \ll M_{{\rm d},0}$ \item Viscosity parameter: $\beta = 10^{-3}$. \end{itemize} \subsection{The Evolution of the Reference Model} The evolution of the mass flow rate of the reference model is shown in the left panel of Fig.\ \ref{WJD:fig1}. The right panel shows the corresponding evolution of the BH mass. The zero-point of time is the point at which, as a consequence of a galaxy-galaxy interaction, a massive nuclear accretion disk has been established. One can clearly discern two phases of the accretion process: From the beginning of the evolution to $t_{\rm Edd} \sim 2.7\cdot 10^8\,$years the evolution is dominated by the Eddington limit: The disk delivers mass at a larger rate (broken line; $\dot M_{\rm d}$) than the BH can accrete due to the Eddington limit (dash-dot-dotted line; $\dot M_{\rm Edd}$). For times $t < t_{\rm Edd}$ the growth rate of the BH, $\dot M_{\rm BH}$, is subject to the Eddington limit, i.e., $\dot M_{\rm BH}\left( t < t_{\rm Edd} \right) = \dot M_{\rm Edd}$. For $t > t_{\rm Edd}$, however, the mass of the BH has become so large and the mass flow rate from the disk has dropped so much that $\dot M_{\rm d}$ now has fallen below $\dot M_{\rm Edd}$ and all the mass delivered by the disk can be accreted: $\dot M_{\rm BH}\left( t > t_{\rm Edd} \right) = \dot M_{\rm d}$.
For the following few $10^8\,$years the free accretion rate, however, is still large enough to make the BH grow at a fast rate. This is slowed down considerably only after another $\sim 3.5\cdot 10^8\,$years by when the accretion rate has fallen by approximately one and a half orders of magnitude. From now on the BH grows only slowly; it has almost reached its {\it final\/} mass of $2.1\cdot 10^9\,\textrm{M}_\odot$ (broken line in the right panel). \begin{figure} \centering \includegraphics[width=\textwidth]{duschlF1.eps} \caption{The evolution of the mass flow rate (left panel) and the BH mass (right panel) for the reference model.} \label{WJD:fig1} \end{figure} \subsection{Variations of the Initial Physical Setup: The Black Hole Growth Time } As an example of the influence of the variation of the initial physical setup, we show in Fig.\ \ref{WJD:fig2} the growth time scale $t_{0.5}$ of BHs where the initial disk mass and the inner disk radius have been changed, while all the other parameters of the reference model remained unaltered. $t_{0.5}$ is defined as the time at which the BH has reached half its final mass. In all our models, at this time the accretion rate, and thus the accretion luminosity have already fallen considerably below their maximum value. It is noteworthy that for BH masses in the realm of our GC, the growth times reach values comparable to the Hubble time. 
\begin{figure} \centering \includegraphics[width=0.6\textwidth]{duschlF2.eps} \caption{The BH growth time scale $t_{0.5}$ as a function of the final BH mass $M_{\rm BH}$.} \label{WJD:fig2} \end{figure} \section{Discussion and Outlook} Massive accretion disks seem to have the required properties to explain the observations described at the beginning of this contribution: The BH mass growth is quick enough to account for the inferred masses in the highest-redshift quasars, and the evolution time is an inverse function of the final BH mass\footnote{For a presentation of the entire set of model calculations and for a more exhaustive discussion of their results, we refer the reader to an upcoming paper (Duschl \& Strittmatter, {\it in prep.\/}).}. We expect that the evolution of the Universe as a whole will even emphasize the latter effect: In the early Universe, galaxy mergers and collisions were much more frequent than they are in today's Universe, making high initial disk masses more likely at higher redshifts. For a detailed comparison to the observed luminosity functions (e.g., Hasinger et al.\ 2005) this cosmological evolution of the initial conditions has to be taken into account.\\ \noindent {\bf\it Acknowledgements.} We benefitted very much from discussions of the topic with G.\ Hasinger and S.\ Komossa (Garching) and A.\ Burkert and T.\ Naab (Munich). This work is partially supported by the German Science Foundation DFG through the Collaborative Research Center SFB439 {\it Galaxies in the Young Universe\/}.
\section{Introduction} Well-quasi-ordering is a highly desirable property and a frequently discovered concept in mathematics and theoretical computer science \cite{Finkel,Kruskal}. One of the most remarkable recent results in this area is the proof of Wagner's conjecture stating that the set of all finite graphs is well-quasi-ordered by the minor relation \cite{minor-wqo}. However, the subgraph or induced subgraph relation is not a well-quasi-order. On the other hand, each of these relations may become a well-quasi-order when restricted to graphs with some special properties. In this paper, we study well-quasi-orderability of graphs with hereditary properties. A {\it graph property} (or a {\it class of graphs}) is a set of graphs closed under isomorphism. A property is {\it hereditary} if it is closed under taking induced subgraphs. It is well-known (and not difficult to see) that a graph property $X$ is hereditary if and only if $X$ can be described in terms of forbidden induced subgraphs. More formally, $X$ is hereditary if and only if there is a set $M$ of graphs such that no graph in $X$ contains any graph from $M$ as an induced subgraph. We call $M$ the set of {\it forbidden induced subgraphs} for $X$ and say that the graphs in $X$ are $M$-free. Of particular interest in this paper are graphs \emph{without large bicliques}. We say that the graphs in a hereditary class $X$ are \emph{without large bicliques} if there is a natural number $t$ such that no graph in $X$ contains $K_{t,t}$ as a (not necessarily induced) subgraph. Equivalently, there are $q$ and $r$ such that $K_{q,q}$ and $K_r$ appear in the set of forbidden induced subgraphs of $X$. According to \cite{Zaran}, these are precisely the graphs with a subquadratic number of edges. This family of properties includes many important classes, such as graphs of bounded vertex degree, graphs of bounded tree-width, and all proper minor closed graph classes.
In all these examples, the number of edges is bounded by a linear function in the number of vertices and all of the listed properties are rather small (see e.g. \cite{minor-closed-small} for the number of graphs in proper minor-closed graph classes). In the terminology of \cite{speed}, they all are at most factorial. In fact, the family of classes without large bicliques is much richer and contains classes with a superfactorial speed of growth, such as projective plane graphs (or more generally $C_4$-free bipartite graphs), in which case the number of edges is $\Theta(n^{\frac{3}{2}})$. Recently, Daligault, Rao and Thomass\'e asked in \cite{DRT10} if every hereditary class which is well-quasi-ordered by the induced subgraph relation is of bounded clique-width. There are two reasons why this question is interesting. First, it connects two seemingly unrelated notions. Second, if the question is answered affirmatively, this will have a strong algorithmic consequence. In particular, this will mean (through the use of Courcelle's theorem \cite{CorMakRotics}) that any problem definable in Monadic Second Order Logic can be solved in polynomial time on any class well-quasi-ordered by the induced subgraph relation. In the present paper, we answer this question affirmatively for graphs without large bicliques. More precisely, we prove that if a class $X$ without large bicliques is well-quasi-ordered by the induced subgraph relation, then the graphs in $X$ have bounded treewidth, i.e. there is a constant $c$ such that the treewidth of any graph in $X$ is at most $c$. Since treewidth and clique-width of graphs without large bicliques are known to be equivalent in the sense that one is bounded if and only if the other is \cite{GurWan}, the result affirmatively answers the question in \cite{DRT10} for graphs without large bicliques. Thus the above algorithmic consequence is confirmed e.g. for classes of graphs of bounded degree. 
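As a concrete aside (ours, not part of the paper), the defining biclique condition is easy to test by brute force on small graphs: a graph contains $K_{q,q}$ as a (not necessarily induced) subgraph exactly when some $q$-set of vertices has at least $q$ common neighbours outside itself. A minimal sketch, with a function name of our choosing:

```python
from itertools import combinations

def has_biclique(adj, q):
    """Brute-force test: does the graph contain K_{q,q} as a (not
    necessarily induced) subgraph?  `adj` maps each vertex to the set
    of its neighbours.  Exponential in general; fine for tiny graphs."""
    vertices = list(adj)
    for left in combinations(vertices, q):
        # vertices adjacent to every vertex of `left`, excluding `left` itself
        common = set.intersection(*(adj[v] for v in left)) - set(left)
        if len(common) >= q:
            return True
    return False

# The chordless cycle C_6 contains no K_{2,2} (that would force a 4-cycle),
# while the complete graph K_4 clearly does.
c6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
k4 = {i: set(range(4)) - {i} for i in range(4)}
```

In this vocabulary, a class is without large bicliques when a single $t$ makes this test fail for every graph in the class.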
In order to establish the main result (Theorem~\ref{thm:main-wqo}), we define in Section~\ref{sec:basic} an infinite family of graphs pairwise incomparable by the induced subgraph relation, which we call \emph{canonical graphs}. The main part of the proof of Theorem \ref{thm:main-wqo} is a combinatorial result stating that a graph without large bicliques and having a large treewidth has a large induced canonical graph. A consequence of this result is that if a class $X$ without large bicliques has unbounded treewidth, then $X$ contains an infinite subset of canonical graphs, i.e. an infinite antichain. This implies that classes of graphs without large bicliques that are well quasi-ordered by the induced subgraph relation must have bounded treewidth. To prove the main theorem, we first prove an auxiliary result (Theorem~\ref{thm:main}) stating that if a graph without large bicliques has a long path, it also has a long induced path. We note that this auxiliary theorem is sufficient to establish the main result of the paper if we confine ourselves to hereditary classes with a \emph{finite} number of forbidden induced subgraphs. All preliminary information related to the topic of the paper can be found in Section~\ref{sec:basic}. In Sections~\ref{sec:paths} and~\ref{sec:wqo} we prove Theorem~\ref{thm:main} and Theorem~\ref{thm:main-wqo}, respectively. \section{Notations and definitions} \label{sec:basic} We consider only simple undirected graphs without loops and multiple edges. An {\it independent set} in a graph is a set of vertices no two of which are adjacent, and a {\it clique} is a set of vertices every two of which are adjacent. As usual, by $K_n$, $P_n$ and $C_n$ we denote the complete graph, the chordless path and the chordless cycle on $n$ vertices, respectively, and $K_{n,m}$ is a complete bipartite graph with parts of size $n$ and $m$. Sometimes we also refer to $K_n$ as a clique and to $K_{n,m}$ as a biclique. 
If $n=m$ we say that $K_{n,m}$ is a biclique of {\it order} $n$. Given a graph $G$ and a subset $U$ of its vertices, the operation of contraction of $U$ into a single vertex $u$ consists in deleting $U$, introducing $u$ and connecting $u$ to every vertex of $G$ outside $U$ that has a neighbour in $U$. If $U$ consists of two adjacent vertices, this operation is called edge contraction. Let $H$ and $G$ be two graphs. We say that \begin{itemize} \item $H$ is an {\it induced subgraph} of $G$ if $H$ can be obtained from $G$ by vertex deletions, \item $H$ is a {\it subgraph} of $G$ if $H$ can be obtained from $G$ by vertex deletions and edge deletions, \item $H$ is a {\it minor} of $G$ if $H$ can be obtained from $G$ by vertex deletions, edge deletions and edge contractions. \end{itemize} Throughout the text, whenever we say that $G$ contains $H$, we mean that $H$ is a subgraph of $G$, unless we explicitly say that $H$ is an {\it induced} subgraph of $G$ (or $G$ contains $H$ as an {\it induced} subgraph). If $H$ is not an induced subgraph of $G$, we say that $G$ is $H$-free. By $R=R(k,r,m)$, we denote the Ramsey number, i.e. the minimum $R$ such that in every colouring of $k$-subsets of an $R$-set with $r$ colours there is a monochromatic $m$-set, i.e. a set of $m$ elements all of whose $k$-subsets have the same colour. A binary relation $\le$ on a set $X$ is a {\it quasi-order} if it is reflexive and transitive. If additionally $\le$ is antisymmetric, then it is a partial order. Two elements $x,y\in X$ are said to be incomparable if neither $x\le y$ nor $y\le x$. An {\it antichain} in a quasi-order is a set of pairwise incomparable elements. A quasi-order $(X,\le)$ is a {\it well-quasi-order} if $X$ contains no infinite strictly decreasing sequences and no infinite antichains. According to the celebrated Graph Minor Theorem of Robertson and Seymour, the set of all graphs is well-quasi-ordered by the graph minor relation \cite{minor-wqo}. 
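To illustrate the difference between the first two relations (an aside of ours, not from the paper): $P_4$ is obtained from $C_4$ by deleting one cycle edge, so it is a subgraph of $C_4$; but it is not an induced subgraph, since any injection of its four vertices onto $V(C_4)$ carries all four cycle edges. A brute-force sketch for tiny graphs, with names of our choosing:

```python
from itertools import combinations, permutations

def contains(h_edges, n_h, g_edges, n_g, induced=False):
    """Does G (vertices 0..n_g-1) contain H (vertices 0..n_h-1) as a
    subgraph, or as an induced subgraph when induced=True?  Brute force
    over all injections of V(H) into V(G); only for tiny graphs."""
    G = {frozenset(e) for e in g_edges}
    H = {frozenset(e) for e in h_edges}
    for img in permutations(range(n_g), n_h):
        mapped = {frozenset({img[a], img[b]}) for a, b in H}
        if induced:
            # edges of G among the image must be exactly the mapped edges
            among = {frozenset(p) for p in combinations(img, 2)} & G
            if mapped == among:
                return True
        elif mapped <= G:
            return True
    return False

p4 = [(0, 1), (1, 2), (2, 3)]            # chordless path P_4
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]    # chordless cycle C_4
```

Here `contains(p4, 4, c4, 4)` holds while `contains(p4, 4, c4, 4, induced=True)` does not, matching the discussion above.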
This, however, is not the case for the more restrictive relations such as subgraph or induced subgraph. Consider for instance the graphs $H_1,H_2,\ldots$, where $H_i$ is the graph represented in Figure~\ref{fig:H}. It is not difficult to see that this sequence creates an infinite antichain with respect to both subgraph and induced subgraph relations. By connecting two vertices of degree one having a common neighbour in $H_i$, we obtain a graph represented on the left of Figure~\ref{fig:H'H''}. Let us denote this graph by $H'_i$. By further connecting the other pair of vertices of degree one we obtain the graph $H''_i$ represented on the right of Figure~\ref{fig:H'H''}. \begin{figure}[ht] \begin{center} \begin{picture}(180,70) \put(0,35){\circle*{4}} \put(30,35){\circle*{4}} \put(60,35){\circle*{4}} \put(80,35){\circle*{1}} \put(90,35){\circle*{1}} \put(100,35){\circle*{1}} \put(120,35){\circle*{4}} \put(150,35){\circle*{4}} \put(0,65){\circle*{4}} \put(0,5){\circle*{4}} \put(150,65){\circle*{4}} \put(150,5){\circle*{4}} \put(2,35){\line(1,0){26}} \put(32,35){\line(1,0){26}} \put(62,35){\line(1,0){8}} \put(110,35){\line(1,0){8}} \put(122,35){\line(1,0){26}} \put(0,37){\line(0,1){26}} \put(0,33){\line(0,-1){26}} \put(150,37){\line(0,1){26}} \put(150,33){\line(0,-1){26}} \put(10,40){1} \put(40,40){2} \put(130,40){$i$} \end{picture} \caption{The graph $H_{i}$} \label{fig:H} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \begin{picture}(170,65) \put(0,5){\circle*{4}} \put(0,45){\circle*{4}} \put(20,25){\circle*{4}} \put(50,25){\circle*{4}} \put(70,25){\circle*{1}} \put(80,25){\circle*{1}} \put(90,25){\circle*{1}} \put(110,25){\circle*{4}} \put(140,25){\circle*{4}} \put(22,25){\line(1,0){26}} \put(18,23){\line(-1,-1){16}} \put(18,27){\line(-1,1){16}} \put(142,23){\line(1,-1){16}} \put(142,27){\line(1,1){16}} \put(52,25){\line(1,0){8}} \put(100,25){\line(1,0){8}} \put(112,25){\line(1,0){26}} \put(160,5){\circle*{4}} \put(160,45){\circle*{4}} 
\put(160,7){\line(0,1){36}} \put(33,30){1} \put(123,30){$i$} \end{picture} \begin{picture}(170,65) \put(0,5){\circle*{4}} \put(0,45){\circle*{4}} \put(20,25){\circle*{4}} \put(50,25){\circle*{4}} \put(70,25){\circle*{1}} \put(80,25){\circle*{1}} \put(90,25){\circle*{1}} \put(110,25){\circle*{4}} \put(140,25){\circle*{4}} \put(0,7){\line(0,1){36}} \put(22,25){\line(1,0){26}} \put(18,23){\line(-1,-1){16}} \put(18,27){\line(-1,1){16}} \put(142,23){\line(1,-1){16}} \put(142,27){\line(1,1){16}} \put(52,25){\line(1,0){8}} \put(100,25){\line(1,0){8}} \put(112,25){\line(1,0){26}} \put(160,5){\circle*{4}} \put(160,45){\circle*{4}} \put(160,7){\line(0,1){36}} \put(33,30){1} \put(123,30){$i$} \end{picture} \caption{Graphs $H'_{i}$ and $H''_{i}$} \label{fig:H'H''} \end{center} \end{figure} We call any graph of the form $H_i$, $H'_i$ or $H''_i$ an $H$-graph. Furthermore, we will refer to $H''_i$ as a {\it tight} $H$-graph and to $H'_i$ as a {\it semi-tight} $H$-graph. In an $H$-graph, the path connecting the two vertices of degree 3 will be called the {\it body} of the graph, and the vertices which are not in the body the {\it wings}. Following standard graph theory terminology, we call a chordless cycle of length at least four a {\it hole}. Let us denote by \begin{itemize} \item[$\cal C$] the set of all holes and all $H$-graphs. \end{itemize} It is not difficult to see that any two distinct (i.e. non-isomorphic) graphs in $\cal C$ are incomparable with respect to the induced subgraph relation. In other words, \begin{claim}\label{claim:antichain} $\cal C$ is an antichain with respect to the induced subgraph relation. \end{claim} Moreover, from the proof of Theorem~\ref{thm:main-wqo} we will see that for classes of graphs without large bicliques which are of unbounded tree-width this antichain is unavoidable, or {\it canonical}, in the terminology of \cite{Ding09}. Suggested by this observation, we introduce the following definition. 
\begin{definition} The graphs in the set $\cal C$ will be called {\sc canonical}. \end{definition} The {\it order} of a canonical graph $G$ is either the number of its vertices, if $G$ is a hole, or the number of vertices in its body, if $G$ is an $H$-graph. \section{Long paths in graphs without large bicliques} \label{sec:paths} In this section, we prove that graphs without large bicliques containing a large path also contain a large induced (i.e. chordless) path. We start with the following auxiliary result, where by $P(r,m)$, we denote the minimum $n$ such that in every colouring of the elements of an $n$-set with $r$ colours there exists a subset of $m$ elements of the same colour (by the pigeonhole principle, $P(r,m)=r(m-1)+1$). \begin{lemma}\label{lem:grid} For each $p$ and $q$ there is a number $C=C(p,q)$ such that if a graph $G$ contains two families $\mathcal{A}=\{V_1, V_2, \ldots, V_{C}\}$ and $\mathcal{B}=\{W_1, W_2, \ldots, W_{C}\}$ of pairwise disjoint sets of size $p$ with at least one edge between every two sets $V_i\in \mathcal{A}$ and $W_j\in\mathcal{B}$, then $G$ contains a biclique $K_{q,q}$. \end{lemma} \begin{proof} We define $r:=P(p^q,q)$ and $C(p,q):=P(p^{r},q)$ and consider an arbitrary collection $A$ of $r$ sets from $\cal A$. Since each set in $\cal B$ has a neighbour in each set in $\cal A$, the family of the sets in $\cal B$ can be coloured with at most $p^r$ colours so that all sets of the same colour have a common neighbour in each of the $r$ chosen sets of collection $A$. By the choice of $C(p,q)$, one of the colour classes contains a collection $B$ of at least $q$ sets. For each set in $A$, we choose a vertex which is a common neighbour for all sets in $B$ and denote the set of $r$ chosen vertices by $U$. The vertices of $U$ can be coloured with at most $p^q$ colours so that all vertices of the same colour have a common neighbour in each of the $q$ sets of collection $B$. 
By the choice of $r$, $U$ contains a colour class $U_1$ of at least $q$ vertices. For each set in $B$, we choose a vertex which is a common neighbour for all vertices of $U_1$ and denote the set of $q$ chosen vertices by $U_2$. Then $U_1$ and $U_2$ form a biclique $K_{q,q}$. \end{proof} \begin{theorem}\label{thm:main2} For every $s$ and $q$ there is a number $Y=Y(s,q)$ such that every graph with a path of length at least $Y$ contains either a path $P_s$ as an induced subgraph or a biclique $K_{\lfloor q/2 \rfloor, \lceil q/2 \rceil}$ as a (not necessarily induced) subgraph. \end{theorem} \begin{proof} We use induction on $s$ and $q$. For $s=1$ and arbitrary $q$ or for $q=1$ and arbitrary $s$, we can take $Y(s,q)=1$. So assume $s>1$ and $q>1$. Let $t=Y(s, q - 1)$ and $k=Y(s - 1, 2C(t, q))$. Both numbers must exist by the induction hypothesis. Consider a graph $G$ with a path $P=v_1v_2 \ldots v_{kt}$ on $kt$ vertices and split $P$ into $k$ subpaths of $t$ vertices each. We denote the vertices of the $i$-th subpath by $V_i$ and form a graph $H$ on $k$ vertices $\{h_1, h_2, \ldots , h_k\}$ in which $h_ih_j$ is an edge if and only if there is an edge in $G$ joining a vertex of $V_i$ to a vertex of $V_j$. Since $h_i$ is joined to $h_{i+1}$ for each $i=1,\ldots,k-1$, the graph $H$ has a path on $k$ vertices, and since $k=Y(s - 1, 2C(t, q))$, it has either an induced path on $s-1$ vertices or a biclique of order $C(t,q)$. In the graph $G$, the latter case corresponds to two families of $C(t,q)$ pairwise disjoint subsets with $t$ vertices in each subset and with an edge between any two subsets from different families. Therefore, Lemma~\ref{lem:grid} applies, proving that $G$ contains a biclique $K_{q,q}$. Now assume $H$ contains an induced path $P_{s-1}$. In the graph $G$, this path corresponds to an ordered sequence of subsets $V_{i_1}, V_{i_2}, \ldots, V_{i_{s-1}}$ with edges appearing only between consecutive subsets of the sequence. 
Therefore, in the subgraph of $G$ induced by these subsets, any vertex $v$ in $V_{i_1}$ is of distance at least $s - 2$ from any vertex $u$ in $V_{i_{s-1}}$. If the distance between $v$ and $u$ is at least $s-1$, the graph $G$ has an induced path $P_s$ and we are done. So, assume the distance between any two vertices of $V_{i_1}$ and $V_{i_{s-1}}$ is exactly $s - 2$, and consider a path with exactly one vertex $w_p$ in each $V_{i_p}$. If vertex $w_1$ has a neighbour $w\in V_{i_1}$ which is not adjacent to $w_2$, then $ww_1w_2 \ldots w_{s-1}$ is an induced path $P_s$ and we are done. Therefore, we must assume that $w_2$ is adjacent to every vertex of $V_{i_1}$, since this set induces a connected subgraph (if some vertex of $V_{i_1}$ were not adjacent to $w_2$, then walking along a path inside $V_{i_1}$ from $w_1$ towards it we would find an edge $pz$ with $p$ adjacent to $w_2$ and $z$ not, and $zpw_2w_3 \ldots w_{s-1}$ would be an induced path $P_s$). As the size of $V_{i_1}$ is $t=Y(s, q - 1)$, it contains either an induced path $P_s$, in which case we are done, or a biclique $K_{\lfloor (q-1)/2 \rfloor, \lceil (q-1)/2 \rceil}$. In the latter case, the biclique together with $w_2$ forms a biclique of the desired size $K_{\lfloor q/2 \rfloor, \lceil q/2 \rceil}$, so we are done as well. This completes the proof. \end{proof} Taking into account that a large biclique gives rise either to a large induced biclique or to a large clique, Theorem~\ref{thm:main2} can also be restated as follows. \begin{theorem}\label{thm:main} For every $s$, $t$, and $q$, there is a number $Z=Z(s,t,q)$ such that every graph with a path of length at least $Z$ contains either $P_s$ or $K_t$ or $K_{q,q}$ as an induced subgraph. \end{theorem} It turns out that Theorem \ref{thm:main} is sufficient to establish the main claim of the paper if we confine ourselves to \emph{finitely defined} classes of graphs, i.e. those defined by forbidding finitely many induced subgraphs. Indeed, a finitely defined class $X$ is well-quasi-ordered by the induced subgraph relation only if a path $P_s$ for some $s$ is forbidden for $X$, since otherwise the class contains infinitely many cycles, i.e. an infinite antichain. 
Therefore, by Theorem \ref{thm:main}, if graphs in $X$ are $(K_t,K_{q,q})$-free, then they do not contain $P_{Z}$ as a (not necessarily induced) subgraph, where $Z=Z(s,t,q)$. On the other hand, it is well-known \cite{Fellows89} that large treewidth of a graph implies the existence of a large path. Put differently, a bound on the length of a path implies a bound on treewidth. Since we know that in a finitely defined class, well-quasi-ordered by the induced subgraph relation, the path length is bounded, we conclude that the treewidth is bounded as well. This gives us the following corollary. \begin{corollary} \label{col:aux} Let $X$ be a hereditary subclass of $(K_t,K_{q,q})$-free graphs defined by a finite collection of forbidden induced subgraphs. If $X$ is well-quasi-ordered by the induced subgraph relation, then $X$ is of bounded treewidth. \end{corollary} \section{Main result} \label{sec:wqo} The arguments given to justify Corollary~\ref{col:aux} are not applicable to hereditary classes defined by infinitely many forbidden induced subgraphs, because in this case well-quasi-orderability does not necessarily imply a bound on the length of a path. Indeed, consider for instance the class of $(K_{1,3},C_3,C_4,C_5,\ldots)$-free graphs. It consists of linear forests, i.e. graphs every connected component of which is a path. This class is well-quasi-ordered by the induced subgraph relation, but the path length is not bounded in this class. In order to address this more general situation, in this section we prove the following theorem which is the main result of the paper. \begin{theorem}\label{thm:main-wqo} If $X$ is a hereditary subclass of $(K_t,K_{q,q})$-free graphs which is well-quasi-ordered by the induced subgraph relation, then $X$ has bounded treewidth. 
\end{theorem} To prove the theorem, we will show that a large treewidth combined with the absence of large bicliques implies the existence of a large induced canonical graph, which is a much richer structural consequence than just the existence of a long induced path. An important part of showing the existence of a large canonical graph is verifying that its body (see Section~\ref{sec:basic} for the terminology) is induced. This will be done by application of Theorem~\ref{thm:main}. A plan of the proof of Theorem~\ref{thm:main-wqo} is outlined in Section~\ref{sec:plan}. Sections~\ref{sec:tw-rake},~\ref{sec:from},~\ref{sec:dense},~\ref{sec:ltw},~\ref{sec:summary} contain various parts of the proof. \subsection{Plan of the proof} \label{sec:plan} To prove Theorem~\ref{thm:main-wqo} we will show that graphs of arbitrarily large tree-width contain either arbitrarily large bicliques as subgraphs or arbitrarily large canonical graphs as induced subgraphs. The main notion in our proof is that of a {\it rake-graph}. A {\it rake-graph} (or simply a {\it rake}) consists of a chordless path, the {\it base} of the rake, and a number of pendant vertices, called {\it teeth}, each having a private neighbour on the base. The only neighbour of a tooth on the base will be called the {\it root} of the tooth, and a rake with $k$ teeth will be called a $k$-rake. We will say that a rake is $\ell$-{\it dense} if any $\ell$ consecutive vertices of the base contain at least one root vertex. An example of a 1-dense 9-rake is given in Figure~\ref{fig:rake}. 
\begin{figure}[ht] \begin{center} \begin{picture}(200,50) \put(0,0){\circle*{5}} \put(20,0){\circle*{5}} \put(40,0){\circle*{5}} \put(60,0){\circle*{5}} \put(80,0){\circle*{5}} \put(100,0){\circle*{5}} \put(120,0){\circle*{5}} \put(140,0){\circle*{5}} \put(160,0){\circle*{5}} \put(180,0){\circle*{5}} \put(200,0){\circle*{5}} \put(20,20){\circle*{5}} \put(40,20){\circle*{5}} \put(60,20){\circle*{5}} \put(80,20){\circle*{5}} \put(100,20){\circle*{5}} \put(120,20){\circle*{5}} \put(140,20){\circle*{5}} \put(160,20){\circle*{5}} \put(180,20){\circle*{5}} \put(0,0){\line(1,0){20}} \put(20,0){\line(1,0){20}} \put(40,0){\line(1,0){20}} \put(60,0){\line(1,0){20}} \put(80,0){\line(1,0){20}} \put(100,0){\line(1,0){20}} \put(120,0){\line(1,0){20}} \put(140,0){\line(1,0){20}} \put(160,0){\line(1,0){20}} \put(180,0){\line(1,0){20}} \put(20,0){\line(0,1){20}} \put(40,0){\line(0,1){20}} \put(60,0){\line(0,1){20}} \put(80,0){\line(0,1){20}} \put(100,0){\line(0,1){20}} \put(120,00){\line(0,1){20}} \put(140,0){\line(0,1){20}} \put(160,0){\line(0,1){20}} \put(180,0){\line(0,1){20}} \end{picture} \end{center} \caption{1-dense 9-rake} \label{fig:rake} \end{figure} \medskip We will prove Theorem~\ref{thm:main-wqo} through a number of intermediate steps as follows. \begin{itemize} \item[1.] In Section~\ref{sec:tw-rake}, we observe that any graph of large tree-width contains a rake with many teeth as a subgraph. \item[2.] In Section~\ref{sec:from} we show that any graph containing a rake with many teeth as a subgraph contains either \begin{itemize} \item a {\it dense} rake with many teeth as a subgraph or \item a large canonical graph as an {\it induced} subgraph. \end{itemize} \item[3.] In Section~\ref{sec:dense} we prove that dense rake subgraphs necessarily imply either \begin{itemize} \item a large canonical graph as an {\it induced} subgraph or \item a large biclique as a subgraph. \end{itemize} \item[4.] 
In Section~\ref{sec:ltw}, we summarize the results of the previous sections to show that any graph of large tree-width contains either \begin{itemize} \item a large canonical graph as an {\it induced} subgraph or \item a large biclique as a subgraph. \end{itemize} \item[5.] In Section~\ref{sec:summary}, we use the result of Step 4 to prove Theorem~\ref{thm:main-wqo}. \end{itemize} \subsection{Rake subgraphs in graphs of large tree-width} \label{sec:tw-rake} \begin{lemma}\label{lem:tw-rake} For any natural $k$, there is a number $f(k)$ such that every graph of tree-width at least $f(k)$ contains a $k$-rake as a subgraph. \end{lemma} \begin{proof} A $k\times k$-grid is a graph with vertices $v_{i,j}$, $1\le i,j\le k$, and edges between $v_{i,j}$ and $v_{i',j'}$ if and only if $|i-i'|+|j-j'|=1$. In \cite{RST94}, the authors proved that for each $k$ there is a number $f(k)$ such that every graph $G$ of tree-width at least $f(k)$ has a $k\times k$-grid as a minor. Consequently, any graph $G$ of tree-width at least $f(k)$ contains a $k$-rake as a minor. It follows that the graph $G$ contains a subgraph $H$ from which a $k$-rake can be obtained by contraction operations only. We deduce that $G$ contains a subgraph $H$, whose vertices admit a partition $V(H)=\bigcup_{i=1}^k V_i \cup \bigcup_{i=1}^k V'_i$ into disjoint subsets $V_i$ and $V'_i$ such that $G[V_i]$ and $G[V'_i]$ are connected for each $i \in \{1,2, \ldots, k\}$, there is at least one edge with endpoints in both $V_i$ and $V_{i+1}$ for each $i\in\{1,2, \ldots, k-1\}$ and there is at least one edge with endpoints in both $V_i$ and $V'_i$ for each $i\in\{1,2, \ldots, k\}$. To finish the proof we show that the graph $H$ contains a $k$-rake as a subgraph. First, for each $i\in\{1, 2, \ldots, k-1\}$, let $x_iy_{i+1}$ be an edge with $x_i \in V_i$ and $y_{i+1} \in V_{i+1}$. Then, for each $i\in\{2, 3, \ldots, k-1\}$, as $G[V_i]$ is connected, we can find a path $P_i$ in $G[V_i]$ connecting $y_i$ and $x_i$. 
We also define $P_1=\{x_1\}$ and $P_k=\{y_k\}$. These paths will constitute the base of the rake and one can attach tooth $t_i$ with root in $P_i$ as follows. If $V(P_i)=V_i$, let $t_i$ be a vertex in $V'_i$ which is adjacent to some vertex in $V_i$. Otherwise, if $V(P_i) \neq V_i$, let $t_i$ be a vertex in $V_i \backslash V(P_i)$ which has a neighbour in $V(P_i)$ (possible as $G[V_i]$ is connected). Thus $H$, and hence $G$, contains as a subgraph a $k$-rake with base $P_1 \cup P_2 \cup \ldots \cup P_k$ and teeth $\{t_1, t_2, \ldots, t_k\}$. \end{proof} \subsection{From rake subgraphs to dense rake subgraphs} \label{sec:from} The main result of this section is Lemma~\ref{lem:to-dense-rake} below. Its proof is based on the following auxiliary result. \begin{lemma}\label{lem:shorten} Let $G$ be a graph containing an $H$-graph $H^*$ (possibly tight or semi-tight) as a subgraph with the body being induced (i.e. chordless), and let $s\ge 2$ be an integer. Then \begin{itemize} \item[(1)] either $G$ contains a path of length $t\in \{2,\ldots,s+1\}$ connecting a left wing of $H^*$ to its right wing with all intermediate vertices lying in the body, \item[(2)] or $G$ contains an induced canonical subgraph of order at least $s$. \end{itemize} \end{lemma} \begin{proof} Let $w'$ be a left wing and $w''$ be a right wing of $H^*$ and $U=\{u_1,\ldots,u_q\}$ be its body. Since $w'$ is adjacent to $u_1$ and $w''$ is adjacent to $u_q$, there must exist a sub-path $U'=\{u_i,\ldots,u_{i+t}\}$ of $U$ such that $u_i$ is the only neighbour of $w'$ in $U'$ and $u_{i+t}$ is the only neighbour of $w''$ in $U'$. We assume that $w',w'',U'$ are chosen so that $t$ (the length of the path $U'$) is as small as possible. This implies, in particular, that no left wing has a neighbour in $U'$ other than $u_i$ and no right wing has a neighbour in $U'$ other than $u_{i+t}$. Assume now $t\ge s$. 
If $i=1$, we define $u_{i-1}$ to be the left wing different from $w'$, and if $i+t=q$, we define $u_{i+t+1}$ to be the right wing different from $w''$. If $w'$ is adjacent to $w''$ or $w'$ is adjacent to $u_{i+t+1}$ or $w''$ is adjacent to $u_{i-1}$ or $u_{i-1}$ is adjacent to $u_{i+t+1}$, then a chordless cycle of length at least $s+1$ arises. Otherwise, the vertices $w',w'',u_{i-1},u_i,\ldots,u_{i+t},u_{i+t+1}$ induce a canonical graph of order at least $s$. \end{proof} \begin{lemma}\label{lem:to-dense-rake} Let $k$ and $s$ be natural numbers. Every graph containing a $(k+2)$-rake as a subgraph contains either \begin{itemize} \item an $s+5$-dense $k$-rake as a subgraph or \item a canonical graph of order at least $s$ as an induced subgraph. \end{itemize} \end{lemma} \begin{proof} Consider a graph $G$ containing a $(k+2)$-rake $R$ as a subgraph. For our construction it is essential that the second and second last vertices of the base of $R$ are roots while the first and the last vertices are not. To establish this condition we remove the teeth whose roots are the first or the last vertices and possibly shorten the base so that it would start just before the second root and end just after the second last root. This is where $k+2$ comes from. After this preprocessing, we proceed as follows. First, we transform any path between any two consecutive root vertices into a shortest, and hence a chordless, path by cutting along any possible chords. Now any two consecutive root vertices together with their teeth, with the path connecting them and with two of their other neighbours in the base of $R$ form an $H$-graph satisfying the conditions of Lemma~\ref{lem:shorten}. If one of these $H$-graphs contains an induced canonical subgraph of order at least $s$, the lemma is proved. Therefore, we assume that the wings of each of these graphs are connected by a short path as in (1) of Lemma~\ref{lem:shorten}. We now concatenate (glue) all these paths into the base of a new rake as follows. 
Consider three consecutive vertices $u_{i-1},u_i,u_{i+1}$ in the base of $R$ with $u_i$ being a root vertex but \emph{not the first one}. Let $v_i$ be the tooth of $u_i$. Also, denote by $P^l$ a short path connecting two wings of the $H$-graph on the left of $u_i$, and by $P^r$ a respective short path in the $H$-graph on the right of $u_i$. To simplify the discussion, we will assume that if $P^r$ starts at $u_{i-1}$, then its next vertex is neither $u_i$ nor $u_{i+1}$, since otherwise we can transform $P^r$ by starting it at $v_i$, which will increase the length of the path by at most 1. Also, we will assume that if $P^r$ starts at $v_i$, then its next vertex is not $u_{i+1}$, since otherwise we can transform $P^r$ by adding $u_i$ between $v_i$ and $u_{i+1}$, which will increase the length of the path by at most 1. We apply similar (symmetric) assumptions with respect to $P^l$. With these assumptions in mind, we now do the following. \begin{itemize} \item If both $P^l$ and $P^r$ contain $u_i$, then both of them start at $v_i$ (according to the above assumption). In this case, we glue the two paths at $u_i$, define it to be a root vertex in the new rake and define $v_i$ to be its tooth. \item Assume that, say, $P^l$ contains $u_i$ (implying it contains $v_{i}$), while $P^r$ does not. Assume in addition that $P^l$ contains $u_{i-1}$. \begin{itemize} \item If $P^r$ starts at $u_{i-1}$, then we glue the two paths at $u_{i-1}$ (by cutting $u_i$ and $v_i$ off $P^l$), define $u_{i-1}$ to be a root vertex and $u_i$ to be its tooth in the new rake. \item If $P^r$ starts at $v_{i}$, then we glue the two paths at $v_{i}$, define $u_{i}$ to be a root vertex and $u_{i+1}$ to be its tooth in the new rake. 
\end{itemize} \item The same as in the previous case with the only difference that $P^l$ does not contain $u_{i-1}$: \begin{itemize} \item If $P^r$ starts at $u_{i-1}$, then we replace $v_i$ by $u_{i-1}$ in $P^l$, glue the two paths at $u_{i-1}$, define $u_{i}$ to be a root vertex and $v_i$ to be its tooth in the new rake. \item If $P^r$ starts at $v_{i}$, then (like in the previous case) we glue the two paths at $v_{i}$, define $u_{i}$ to be a root vertex and $u_{i+1}$ to be its tooth in the new rake. \end{itemize} \item Assume that neither $P^l$ nor $P^r$ contains $u_i$; then we distinguish between the following cases. \begin{itemize} \item If both paths start at $v_i$, then we glue them at $v_i$, define it to be a root vertex and $u_i$ its tooth in the new rake. \item If one of them, say $P^l$, starts at $v_i$, and the other one, that is $P^r$, starts at $u_{i-1}$, then we concatenate them by adding $u_i$ (which is adjacent to both $v_i$ and $u_{i-1}$), define $u_i$ to be a root vertex and $u_{i+1}$ its tooth in the new rake. \item If $P^l$ starts at $u_{i+1}$ and $P^r$ starts at $u_{i-1}$, then we again concatenate them by adding $u_i$, define $u_i$ to be a root vertex and $v_{i}$ its tooth in the new rake. \end{itemize} \end{itemize} The procedure outlined above creates a new rake with $k$ teeth. The length of each path used in the construction is initially at most $s+1$. In order to incorporate the assumptions regarding $P^l$ and $P^r$ we increase them by at most $1$ on each end, so the resulting length is at most $s+3$. Finally, the process of assignment of roots may require further increase by at most $1$ on each end. Hence, we conclude that the new rake is $s+5$-dense. 
\end{proof} \subsection{Dense rake subgraphs} \label{sec:dense} \begin{lemma}\label{lem:dense} For every $s,q$ and $\ell$, there is a number $D=D(s,q,\ell)$ such that every graph containing an $\ell$-dense $D$-rake as a subgraph contains either \begin{itemize} \item a canonical graph of order at least $s$ as an induced subgraph or \item a biclique of order $q$ as a subgraph. \end{itemize} \end{lemma} \begin{proof} To define the number $D=D(s,q,\ell)$, we introduce intermediate notations as follows: $b:=2(q-1)s^q+2sq+4$ and $c:=R(2,2,\max(b,2q))$, where $R$ is the Ramsey number. With these notations the number $D$ is defined as follows: $D=D(s,q,\ell):=Z(\ell c^2,2q,q)$, where $Z$ is the number defined in Theorem~\ref{thm:main}. Consider a graph $G$ containing an $\ell$-dense $D$-rake $R^0$ as a subgraph. The base of this rake is a path $P^0$ of length at least $D$ and hence, by Theorem~\ref{thm:main}, the base contains either a biclique of order at least $q$ as a subgraph (in which case we are done) or an {\it induced} path $P$ of length at least $\ell c^2$. Let us call any (inclusionwise) maximal sequence of consecutive vertices of $P^0$ that belong to $P$ a {\it block}. Assume the number of blocks is more than $c$. Let $P'$ be the subpath of $P$ induced by the first $c$ blocks. Let $w_1, \dots, w_c$ be the rightmost vertices of the blocks. Let $v_1, \dots, v_c$ be the vertices such that each $v_i$ is the vertex of $P^0$ immediately following $w_i$. Then $P'$ together with $v_1, \dots, v_c$ forms a $c$-rake with $P'$ being the induced base, $v_1, \dots, v_c$ being the teeth and $w_1, \dots, w_c$ being the respective roots. If the number of blocks is at most $c$, then $P^0$ must contain a block of size at least $\ell c$, in which case this block also forms an induced base of a $c$-rake (since $R^0$ is $\ell$-dense). We see that in either case $G$ has a $c$-rake with an induced base. 
According to the definition of $c$, the $c$ teeth of this rake induce a graph which has either a clique of size $2q$ (and hence a biclique of order $q$, in which case we are done) or an independent set of size $b$. By ignoring the teeth outside this set we obtain a $b$-rake $R$ with an induced base and with teeth forming an independent set. Let us denote the base of $R$ by $U$, its vertices by $u_1,\ldots,u_m$ (in the order of their appearance in the path), and the teeth of $R$ by $t_1,\ldots,t_b$ (following the order of their root vertices). Denote $r:=(q-1)s^q+2$ and consider two sets of teeth $T_1=\{t_2,t_3,\ldots,t_r\}$ and $T_2=\{t_{b-1},t_{b-2},\ldots, t_{b-r+1}\}$. By the definition of $r$ and $b$, there are $2sq$ other teeth between $t_r$ and $t_{b-r+1}$, and hence there is a set $M$ of $2sq$ consecutive vertices of $U$ between the root of $t_r$ and the root of $t_{b-r+1}$. We partition $M$ into $2q$ subsets (of consecutive vertices of $U$) of size $s$ each and, for $i=1,\ldots,2q$, denote the $i$-th subset by $M_i$. If each vertex of $T_1$ has a neighbour in each of the first $q$ sets $M_i$, then by the Pigeonhole Principle there is a biclique of order $q$ with $q$ vertices in $T_1$ and $q$ vertices in $M$ (which can be proved by analogy with Lemma~\ref{lem:grid}). Similarly, a biclique of order $q$ arises if each vertex of $T_2$ has a neighbour in each of the last $q$ sets $M_i$. Therefore, we assume that there are two vertices $t_a\in T_1$ and $t_{a'}\in T_2$ and two sets $M_x$ and $M_y$ with $x<y$ such that $t_a$ has no neighbours in $M_x$, while $t_{a'}$ has no neighbours in $M_y$. By definition, $t_a$ has a neighbour in $U$ (its root) to the left of $M_x$. If additionally $t_a$ has a neighbour to the right of $M_x$, then a chordless cycle of length at least $s$ arises (since $|M_x|=s$ and $t_a$ has no neighbours in $M_x$), in which case the lemma is true. This restricts us to the case when all neighbours of $t_a$ in $U$ are located to the left of $M_x$. 
By analogy, we may assume that all neighbours of $t_{a'}$ in $U$ are located to the right of $M_y$. Let $u_i$ be the rightmost neighbour of $t_a$ in $U$ and $u_j$ be the leftmost neighbour of $t_{a'}$ in $U$. According to the above discussion, $i<j$ and $j-i>2s$. But then the vertices $t_a,t_{a'}, u_{i-1}, u_i,\ldots,u_j,u_{j+1}$ induce an $H$-graph (possibly tight or semi-tight) of order more than $s$ (the existence of the vertices $u_{i-1}$ and $u_{j+1}$ follows from the fact that $T_1$ does not include $t_1$, while $T_2$ does not include $t_b$). \end{proof} \subsection{Canonical graphs and bicliques in graphs of large tree-width} \label{sec:ltw} \begin{theorem}\label{thm:prefinal} For every $s,q$, there is a number $X=X(s,q)$ such that every graph of tree-width at least $X$ contains either \begin{itemize} \item a canonical graph of order at least $s$ as an induced subgraph or \item a biclique of order $q$ as a subgraph. \end{itemize} \end{theorem} \begin{proof} We define $X(s,q)$ as $X(s,q):=f(D(s,q,s+5)+2)$, where $f$ comes from Lemma~\ref{lem:tw-rake} and $D$ comes from Lemma~\ref{lem:dense}. If a graph $G$ has tree-width at least $X(s,q)$, then by Lemma~\ref{lem:tw-rake} it contains a $(D(s,q,s+5)+2)$-rake $R$ as a subgraph. Then, by Lemma~\ref{lem:to-dense-rake}, $G$ contains either a canonical graph of order at least $s$ as an induced subgraph, or an $s+5$-dense $D(s,q,s+5)$-rake as a subgraph. In the first case, the theorem is proved. In the second case, we conclude by Lemma~\ref{lem:dense} that $G$ contains either a canonical graph of order at least $s$ as an induced subgraph or a biclique of order $q$ as a subgraph. \end{proof} \subsection{Proof of Theorem~\ref{thm:main-wqo}} \label{sec:summary} Let $\cal Y$ be a hereditary class of graphs with $K_{q,q}$ and $K_r$ contained in the set of forbidden induced subgraphs, and assume $\cal Y$ is well-quasi-ordered by the induced subgraph relation. 
\begin{comment} If $\cal Y$ is of bounded tree-width, then the set $M$ of forbidden graphs must include a complete graph and a complete bipartite graph, since the tree-width is bounded neither for complete graphs nor for complete bipartite graphs. Therefore, $\cal Y$ is subquadratic, which proves the first part of the theorem. Assume now that $\cal Y$ is subquadratic. Then there must exist $q$ such that no graph in $\cal Y$ contains $K_{q,q}$ as a subgraph. \end{comment} Suppose by contradiction that $\cal Y$ contains an infinite sequence ${\cal Y}'$ of graphs of increasing tree-width. In this sequence, there must exist a graph $G^1$ of tree-width at least $X(s,q)$, where $X(s,q)$ is defined in Theorem~\ref{thm:prefinal} and $s$ is an arbitrarily chosen constant. Then by Theorem~\ref{thm:prefinal}, $G^1$ contains a canonical graph $H^1$ of order at least $s$. We denote the order of $H^1$ by $s_1$ and find in ${\cal Y}'$ a graph $G^2$ of tree-width at least $X(s_1+1,q)$. $G^2$ must contain a canonical graph $H^2$ of order $s_2\ge s_1+1$, and so on. In this way, we construct an infinite sequence $H^1,H^2,\ldots$, whose members form an antichain by Claim~\ref{claim:antichain}. This contradicts the assumption that $\cal Y$ is well-quasi-ordered by the induced subgraph relation and hence shows that $\cal Y$ is of bounded tree-width. 
This problem was also studied for the pattern containment relation on permutations \cite{Brignall}, the embeddability relation on tournaments \cite{Latka}, the minor ordering of matroids \cite{matroid}, however, the decidability was shown only for one or two forbidden elements (permutations, tournaments, matroids). Whether this problem is decidable for larger numbers of forbidden elements is a big open question. In this section, we answer this question positively for the induced subgraph relation in subquadratic classes of graphs. \begin{theorem} Let $X$ be a subquadratic class of graphs defined by a finite collection $F$ of forbidden induced subgraphs. Then $X$ is well-quasi-ordered by the induced subgraph relation if and only if $F$ contains a path $P_k$. \end{theorem} \begin{proof} If $F$ does not contain a path $P_k$, then $X$ contains infinitely many cycles, because by forbidding finitely many graphs we can exclude only finitely many cycles from $X$. Assume now that $F$ contains a path $P_k$. Since $X$ is subquadratic, $F$ must also contain a clique and a biclique. But then, by Theorem~\ref{thm:main}, graphs in $X$ do not contain large paths as subgraphs. In other words, there is a constant $t$ such that $X$ is a subclass of the class $Y$ containing no $P_t$ as a subgraph. In \cite{Ding92}, Ding showed that a class of graphs closed under taking subgraphs is well-quasi-ordered by the induced subgraph relation if and only if it contains finitely many cycles and finitely many graphs of the form $H_i$. According to this result, $Y$, and hence $X$, is well-quasi-ordered by induced subgraphs. \end{proof} \end{comment} \bibliographystyle{plain}
\section{Introduction} Myocardial perfusion (blood flow) imaging with positron emission tomography (PET) has been applied in clinical cardiology to diagnose and characterize cardiovascular diseases \cite{Kaufmann2005, DiCarli2007, Schindler2010, Murthy2018}. Various perfusion radiotracers have been used \cite{Maddahi2014}. Despite their success, the accessibility of these flow tracers remains limited for clinical applications. For example, $^{15}$O-water is the gold standard for measuring blood flow \cite{Iida1988, Danad2014}, but its half-life is very short (2.05 minutes), requiring an onsite cyclotron for tracer production, and it is not approved for routine clinical use. $^{13}$N-ammonia \cite{Muzik1993, Slomka2012} and $^{82}$Rb-chloride \cite{Mullani1983, Lortie2007, ElFakhri2009, Nesterov2014} are the two blood flow radiotracers routinely used in clinical practice \cite{Maddahi2014}. However, $^{13}$N-ammonia also requires an onsite or nearby cyclotron due to its short half-life of 10 minutes. $^{82}$Rb-chloride can be produced by a mobile generator despite its short half-life (76 seconds). Nevertheless, a rubidium generator costs $\geq\$30,000$ every 4-6 weeks and is only affordable for hospitals or centers with a high throughput of cardiac patients \cite{Maddahi2012, DiCarli2007}. Together, these practical challenges limit access to PET perfusion studies. $^{18}$F-fluorodeoxyglucose (FDG) is the most widely used PET radiotracer in the clinic, mainly for metabolic imaging \cite{Maddahi2012}. In clinical cardiology, $^{18}$F-FDG, which has a longer half-life of 110 minutes, is commonly used in combination with a short half-life flow tracer to evaluate flow-metabolism mismatch in the myocardium \cite{Abraham2010}. Such a two-tracer PET method is the gold standard for myocardial viability assessment \cite{Camici2008}. 
The method is not widely available in the clinic because of the limited accessibility of current flow-specific radiotracers. A new flow tracer, $^{18}$F-flurpiridaz, has shown promising results in a Phase III trial \cite{Maddahi2020}. One of the benefits of this new tracer is its longer half-life of 110 minutes, obviating the need for an onsite generator. However, when combined with $^{18}$F-FDG for myocardial flow-metabolism imaging, this may result in longer clinic visit times, because of the long half-life of the $^{18}$F isotope in both tracers, and an increase in radiation exposure. The hypothesis of this study is that dynamic cardiac $^{18}$F-FDG PET as a single-tracer imaging method can provide a surrogate for myocardial blood flow (MBF) by use of tracer kinetic modeling. The successful testing of this hypothesis may allow simultaneous imaging of myocardial blood flow and metabolism using only $^{18}$F-FDG, without the need for a second flow-specific tracer. This single-tracer ($^{18}$F-FDG) multiparametric (myocardial flow and metabolism) imaging method has the potential to enable myocardial viability assessment with reduced imaging time, cost and radiation exposure as compared to the traditional two-tracer methods in clinical practice today \cite{DiCarli2007, Maddahi2012}. The potential of $^{18}$F-FDG for blood flow imaging has been explored outside cardiac imaging. By use of tracer kinetic modeling, several studies have shown that the FDG blood-to-tissue delivery rate $K_1$ correlates with blood flow in tumors \cite{Tseng2004, Mullani2008, Bernstine2011, Cochet2012, Humbert2018}. For example, Tseng \emph{et al.}\cite{Tseng2004} demonstrated linear correlations between $K_1$ of 60-minute dynamic FDG-PET and $^{15}$O-water blood flow in breast tumors, with a linear correlation coefficient r=0.62 before neoadjuvant chemotherapy and r=0.81 after the therapy. 
Later, Mullani \emph{et al.} \cite{Mullani2008} reported that, for various types of tumor in 16 patients, regional tumor FDG $K_1$ estimated from a 2-minute first-pass dynamic PET scan had a correlation of r=0.86 with the blood flow measured by $^{15}$O-water PET. Correlation of FDG $K_1$ with blood flow has also been reported in organs such as the liver and brain \cite{Winterdahl2011, Walberer2012}. In pigs, hepatic FDG $K_1$ derived from a 3-minute early-dynamic FDG-PET scan correlated with hepatic blood flow measured by transit-time flow meters with a high correlation of r=0.94 \cite{Winterdahl2011}. In a rat model of stroke, Walberer \emph{et al.} also reported that cerebral FDG $K_1$ estimated from one-hour dynamic PET data had a correlation of r=0.86 with $^{15}$O-water blood flow \cite{Walberer2012}. These studies support the potential use of $^{18}$F-FDG for estimating blood flow, though the effectiveness of $^{18}$F-FDG $K_1$ likely depends on the specific tissue type because the FDG extraction fraction varies in different tissues. Our work is specifically focused on \emph{myocardial} blood flow imaging using $^{18}$F-FDG. No prior study has yet attempted to demonstrate the effectiveness of FDG flow in the myocardium. The importance of this work lies in the possible application of myocardial FDG flow to flow-metabolism mismatch evaluation for myocardial viability assessment and, potentially more broadly, to rest-stress perfusion imaging for the diagnosis of ischemic heart disease (IHD). Toward that end, our previous work specifically evaluated the practical identifiability of myocardial FDG $K_1$ quantification under different scan durations \cite{Zuo2020}. The results showed that it is feasible to quantify FDG $K_1$ in the myocardium using appropriate kinetic modeling. The purpose of this paper is to directly compare myocardial FDG $K_1$ with MBF determined by a flow-specific tracer in patients with and without IHD. 
\section{Methods} \subsection{Dynamic $^{18}$F-FDG PET and Dynamic $^{82}$Rb PET Data Acquisition} Fourteen patients with ischemic heart disease (IHD) or suspected cardiac sarcoidosis who were scheduled for PET myocardial viability assessment were enrolled in this study after giving informed consent. The study was approved by the Institutional Review Board at the University of California, Davis. Each patient underwent a dynamic $^{82}$Rb-PET scan and a dynamic FDG-PET scan, both performed on a GE Discovery ST PET/CT scanner in two-dimensional mode. For dynamic FDG-PET imaging, patients received approximately 20 mCi of $^{18}$F-FDG with a bolus injection. List-mode data acquisition commenced immediately after the FDG injection and lasted for 60 minutes. A low-dose CT scan was performed at the beginning of the PET study to provide CT images for PET attenuation correction. The raw data were binned into a total of 49 dynamic frames: 30 $\times$ 10s, 10 $\times$ 60s and 9 $\times$ 300s. Dynamic FDG-PET images were reconstructed using the standard ordered-subset expectation-maximization (OSEM) algorithm. All data corrections, including normalization, dead-time correction, attenuation correction, scatter correction, and randoms correction, were included in the reconstruction process. For dynamic Rb-PET imaging, patients received approximately 30 mCi of $^{82}$Rb-chloride with a bolus injection. The dynamic scan lasted for nine minutes. The acquired list-mode raw data were binned into 16 dynamic frames: 12 $\times$ 10s, 2 $\times$ 30s, 1 $\times$ 60s, 1 $\times$ 300s. Other processing was the same as for the FDG-PET scans. \subsection{Kinetic Modeling of Cardiac $^{18}$F-FDG PET Data} Image-derived time activity curves (TACs) of different regions of interest (ROIs) were extracted from the left ventricle (LV) cavity, right ventricle (RV) cavity, and myocardium, denoted by $C_{LV} (t)$, $C_{RV} (t)$, and $C_T (t)$, respectively. 
Based on the analysis in our previous work \cite{Zuo2020}, a two-tissue compartmental model was used to model the one-hour dynamic FDG-PET data: \begin{eqnarray} \frac{dC_f (t)}{dt} &=& K_1 C_{LV}(t)-(k_2+k_3 ) C_f(t)+ k_4 C_m(t),\\ \frac{dC_m (t)}{dt}&=&k_3 C_f(t)- k_4 C_m(t), \end{eqnarray} where $C_f(t)$ is the activity concentration of free FDG and $C_m(t)$ denotes the activity concentration of metabolized tracer in the myocardial tissue space. $K_1$ is the tracer delivery rate from the blood space to the tissue space with the unit mL/min/g, assuming the density of myocardium is 1 g/mL; $k_2$ (/min) is the rate constant of tracer exiting the tissue space; $k_3$ (/min) is the phosphorylation rate; $k_4$ (/min) is the dephosphorylation rate. With $v_{LV}$ and $v_{RV}$ denoting the fractional blood volumes attributed to the LV and RV, the total activity measured by PET is described by \begin{equation} C_T(t) =(1-v_{LV} - v_{RV})[C_f(t) + C_m(t)]+v_{LV}C_{LV}(t)+ v_{RV}C_{RV}(t). \end{equation} There is a total of six unknown parameters ($v_{LV}$, $v_{RV}$, $K_1$, $k_2$, $k_3$, $k_4$) in the model; it is therefore referred to as the ``2T6P" model in our work \cite{Zuo2020}. In addition to the aforementioned 2T6P model, we also studied two simplified models: (1) the 2T5P model: the 2T6P model with $k_4$=0, and (2) the 1T4P model: the one-tissue (1T) model, which is equivalent to the 2T6P model with both $k_3$ and $k_4$ set to 0. Our previous study \cite{Zuo2020} showed that these models are respectively required for modeling dynamic FDG-PET data of different scan durations. For each model, the unknown kinetic parameters were estimated using nonlinear least-squares curve fitting; see \cite{Zuo2020} for more details. \subsection{Reference MBF by Dynamic Rb-PET} Similar to the analysis of dynamic FDG-PET data described above, ROIs were drawn in the LV cavity, RV cavity and myocardium to extract regional TACs from the dynamic Rb-PET images. 
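Numerically, the FDG model in Eqs. (1)-(3) can be integrated once the input functions are sampled; the following is a minimal forward-Euler sketch with synthetic inputs (an illustration only, not the fitting code used in the study):

```python
import numpy as np

def simulate_2t6p(t, c_lv, c_rv, K1, k2, k3, k4, v_lv, v_rv):
    """Forward-Euler integration of the two-tissue compartment
    ODEs (Eqs. 1-2) and the PET measurement equation (Eq. 3)."""
    c_f = np.zeros_like(t)  # free FDG concentration
    c_m = np.zeros_like(t)  # metabolized FDG concentration
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        dc_f = K1 * c_lv[i - 1] - (k2 + k3) * c_f[i - 1] + k4 * c_m[i - 1]
        dc_m = k3 * c_f[i - 1] - k4 * c_m[i - 1]
        c_f[i] = c_f[i - 1] + dt * dc_f
        c_m[i] = c_m[i - 1] + dt * dc_m
    # Eq. (3): tissue activity plus LV and RV blood contributions
    return (1 - v_lv - v_rv) * (c_f + c_m) + v_lv * c_lv + v_rv * c_rv
```

Setting $k_4=0$ yields the 2T5P model and $k_3=k_4=0$ the 1T4P model; in the latter case the response to a constant input $C_{LV}\equiv 1$ is $(K_1/k_2)(1-e^{-k_2 t})$, a convenient sanity check. A nonlinear least-squares routine (e.g. scipy.optimize.curve_fit) can then be wrapped around such a forward model to estimate the six parameters.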
The myocardial TAC was modeled using a one-tissue compartmental model \cite{Lortie2007} with the following expression: \begin{equation} C_T(t) =(1-v_{LV}' - v_{RV}')K_1' \exp(-k_2't)\otimes C_{LV}(t)+v_{LV}'C_{LV}(t)+ v_{RV}'C_{RV}(t), \end{equation} where $K_1'$ and $k_2'$ represent the rate constants of $^{82}$Rb transport between the plasma space and tissue space, and $v_{LV}'$ and $v_{RV}'$ are the fractional blood volumes attributed to the LV and RV, respectively. This corresponds to the same 1T4P model used for the FDG kinetic analysis. All the $^{82}$Rb kinetic parameters ($v_{LV}'$, $v_{RV}'$, $K_1'$, $k_2'$) were estimated using nonlinear least-squares curve fitting in a way similar to \cite{Zuo2020}. The estimated $^{82}$Rb $K_1'$ was then converted into myocardial blood flow $\mathrm{MBF}$ by solving Lortie's formula for $^{82}$Rb-PET data \cite{Lortie2007}: \begin{equation} K_1' = \mathrm{MBF} \left[1-a\cdot \exp\left(-\frac{b}{\mathrm{MBF}}\right)\right], \end{equation} where $a = 0.77$ and $b = 0.63$ (mL/min/g). \subsection{Statistical Correlation Analysis} We used Pearson's correlation analysis to assess the potential correlation between myocardial FDG $K_1$ and $^{82}$Rb MBF. The effect of FDG scan duration on the correlation of FDG $K_1$ with MBF was also investigated for the three different FDG kinetic models (2T6P, 2T5P, and 1T4P). Pearson's correlation analysis was also conducted between MBF and biological variables such as age (years), body mass index (BMI) (kg/m$^2$), and blood glucose (BG) level (mg/dL). A two-sample t test was also employed to compare FDG $K_1$ and MBF between groups. A p value $\le$0.05 was considered statistically significant. 
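In its standard Renkin-Crone form, Lortie's relation reads $K_1' = \mathrm{MBF}\,[1-a\exp(-b/\mathrm{MBF})]$, which is implicit in MBF and must be inverted numerically. A minimal sketch, assuming scipy is available (not the exact routine used in the study):

```python
from math import exp
from scipy.optimize import brentq

A, B = 0.77, 0.63  # Lortie's coefficients for 82Rb (b in mL/min/g)

def k1_from_mbf(mbf):
    """Renkin-Crone: K1' = MBF times the extraction fraction."""
    return mbf * (1.0 - A * exp(-B / mbf))

def mbf_from_k1(k1):
    """Invert the monotone map MBF -> K1' by bracketed root finding.
    Since K1'(MBF) >= (1-A)*MBF, the root lies below k1/(1-A) + 1."""
    return brentq(lambda m: k1_from_mbf(m) - k1, 1e-6, k1 / (1.0 - A) + 1.0)
```

For instance, an estimated $K_1'$ of about 0.59 mL/min/g corresponds to an MBF of about 1.0 mL/min/g under these coefficients.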
To further study the independent association of FDG $K_1$ with MBF, the biological variables that correlated with FDG $K_1$ and MBF were included in a multivariate linear regression using the following equation: \begin{equation} \mathrm{MBF} = \boldsymbol{\beta}^T\mathbf{X} \end{equation} where $\boldsymbol{\beta}=(\beta_1, \beta_2, ...)^T$ is the coefficient vector and $\mathbf{X}=(x_1,x_2, ...)^T$ consists of FDG $K_1$, the biological variables, and a constant 1 for the intercept. A variable with a p value less than or equal to 0.05 was considered statistically significant for predicting MBF. All the analyses were done using MATLAB (MathWorks, MA). \begin{table} \caption{Characteristics of the patients enrolled in the study. } \centering \footnotesize \begin{tabular}{lccccccc} \hline Patient &IHD& Age (years) & Sex &Diabetic & BMI & BG (mg/dL) & FDG Scan Time (min)\\ \hline 1 & Y & 58 &M &Y &38.6 &127 & 60\\ 2 & Y &73 &M &N &24.4 &113 & 50\\ 3 & Y & 61 &M &N &33.9 &88 &60\\ 4 & N & 71 &F &N &22.4 &116 &60\\ 5 & Y & 55 &F &N &24.5 &118 &40\\ 6 & N & 57 &M &N &27.4 &/ &60\\ 7 & Y & 63 &M &N &28.8 &105 &60\\ 8 & Y & 59 &M &Y &28.3 &135 &60\\ 9 & Y & 65 &M &Y &27.2 &130 &50\\ 10 & N & 81 &M &N &25.8 &85 &60\\ 11 & Y & 74 &M &N &28.2 &84 &60\\ 12 & Y & 83 &M &N &33.6 &/ &60\\ 13 & Y & 59 &M &Y &35.9 &107 &60\\ 14 & N & 69 &F &N &31.4 &82 &30\\ \hline \end{tabular} \label{tbl:patients} \end{table} \subsection{Detectability of FDG $K_1$ and MBF} We also evaluated the detectability of FDG $K_1$ and MBF for differentiating IHD from non-IHD. Assuming an equivalent prewhitening numeric observer for the detection task \cite{Barrett1993}, the signal-to-noise ratio (SNR) of the test statistic for a measure $x$ (e.g. 
FDG $K_1$ or $\mathrm{MBF}$) is defined by \begin{equation} \mathrm{SNR}^2(x)=\frac{\left[\mathrm{E}(x|H_1)-\mathrm{E}(x|H_0)\right]^2}{\frac{1}{2}\mathrm{var}(x|H_1)+\frac{1}{2}\mathrm{var}(x|H_0)} \end{equation} where $\mathrm{E}$ denotes the mean and $\mathrm{var}$ denotes the variance of $x$ under the two conditions $H_1$ and $H_0$, e.g., IHD versus non-IHD. In addition to the numeric-observer SNR, we also performed a receiver operating characteristic (ROC) analysis using logistic regression. The area under the ROC curve (AUC) was calculated to evaluate the detection performance of MBF and FDG $K_1$. \begin{figure*}[t] \centering \subfigure[]{ \includegraphics[width = .43\textwidth]{./figs/TAC_myo_fit_FDG_1T4P.png}} \subfigure[]{ \includegraphics[width = .43\textwidth]{./figs/TAC_myo_fit_FDG_2T5P.png} } \subfigure[]{ \includegraphics[width= .43\textwidth]{./figs/TAC_myo_fit_FDG_2T6P.png}} \subfigure[]{ \includegraphics[width = .43\textwidth]{./figs/TAC_myo_fit_Rb_1T4P.png}} \caption{Examples of myocardial TAC fitting: (a) Fitting of the first 2-minute early-dynamic FDG data with the 1T4P model; (b) Fitting of the first 15-minute dynamic FDG data with the 2T5P model; (c) Fitting of the 60-minute dynamic FDG data with the 2T6P model; (d) Fitting of the 9-minute dynamic Rb data with the 1T4P model. } \label{fig:TAC_fitting} \end{figure*} \section{Results} \subsection{Patient Characteristics} Among the fourteen patients enrolled in the study, ten were diagnosed with IHD and four were diagnosed with or suspected of cardiac sarcoidosis prior to the scans. All of the patients completed the dynamic $^{82}$Rb-PET scan. Four patients did not complete the full one-hour dynamic FDG-PET scan due to discomfort, but their available data were still included in this study. Other characteristics of the patients, including age, sex, diabetic status, BMI, BG level (before PET imaging), and dynamic FDG-PET scan duration, are provided in Table \ref{tbl:patients}. 
Unavailable data are marked with `/' in the table. \begin{table} \caption{Estimates of FDG $K_1$ and $^{82}$Rb-chloride MBF} \centering \footnotesize \begin{tabular}{lccccc} \hline & \multicolumn{3}{c}{$^{18}$F-FDG $K_1$ (mL/min/g) } && $^{82}$Rb-chloride\\ \cline{2-4} Patient No. & 2T6P-1H & 2T5P-15M & 1T4P-2M && MBF (mL/min/g)\\\hline 1 & 0.14 & 0.14& 0.11 && 0.46\\ 2 & / & 0.43& 0.43 && 0.65\\ 3 & 0.62 & 0.69& 0.71 && 0.90\\ 4 & 0.68 & 0.69& 0.67 && 1.84\\ 5 & / & 0.46& 0.48 && 0.71\\ 6 & 0.59 & 0.60& 0.59 && 0.81\\ 7 & 0.28 & 0.27& 0.31 && 0.45\\ 8 & 0.29 & 0.28& 0.38 && 0.64\\ 9 & / & 0.38& 0.62 && 0.75\\ 10 & 0.67 & 0.62& 0.58 && 0.95\\ 11 & 0.49 & 0.48& 0.49 && 0.62\\ 12 & 0.58 & 0.57& 0.71 && 0.65\\ 13 & 0.30 & 0.30& 0.45 && 0.56\\ 14 & / & 0.43& 0.53 && 1.00\\ \hline \end{tabular} \label{tbl:eg_Ks_FDG} \end{table} \subsection{Myocardial TAC Fitting and Kinetics} Figure \ref{fig:TAC_fitting}(a-c) shows examples of myocardial FDG TAC fitting for the first 2-min data using the 1T4P model (1T4P-2M), the first 15-min data using the 2T5P model (2T5P-15M), and the 1-hour data using the full 2T6P model (2T6P-1H). Figure \ref{fig:TAC_fitting}(d) shows an example of a 9-minute Rb TAC fitted with the 1T4P model. All of the fits demonstrated a good match between the measured time points and the TACs predicted by the models. The estimated $^{18}$F-FDG $K_1$ and $^{82}$Rb-chloride MBF values of all the patients are summarized in Table \ref{tbl:eg_Ks_FDG}. For FDG kinetics, the results from three different protocols (2T6P-1H, 2T5P-15M, and 1T4P-2M) were included. For the 2T6P-1H protocol, the results of four patients were not available because these patients did not complete the full one-hour dynamic scan. 
\begin{figure*}[t] \centering \subfigure[]{\includegraphics[trim=0cm 0cm 0.8cm 0cm, clip, width = .32\textwidth]{./figs/MBF_RB_K1_FDG_2T6P.png}} \subfigure[]{\includegraphics[trim=0cm 0cm 0.8cm 0cm, clip, width = .32\textwidth]{./figs/MBF_RB_K1_FDG_2T5P_40.png}} \subfigure[]{\includegraphics[trim=0cm 0cm 0.8cm 0cm, clip, width = .32\textwidth]{./figs/MBF_RB_K1_FDG_1T4P_2min.png}}\\ \subfigure[]{\includegraphics[trim=0cm 0cm 0.8cm 0cm, clip, width = .32\textwidth]{./figs/MBF_RB_K1_FDG_2T6P_CAD.png}} \subfigure[]{\includegraphics[trim=0cm 0cm 0.8cm 0cm, clip, width = .32\textwidth]{./figs/MBF_RB_K1_FDG_2T5P_CAD_40.png}} \subfigure[]{\includegraphics[trim=0cm 0cm 0.8cm 0cm, clip, width = .32\textwidth]{./figs/MBF_RB_K1_FDG_1T4P_CAD_2min.png}} \caption{Correlation plots between MBF and FDG $K_1$ estimated by three different protocols: (a, d) 2T6P-1H, (b, e) 2T5P-15M, (c, f) 1T4P-2M. Top row (a-c): all patients; Bottom row (d-f): IHD patients.} \label{fig:K1MBF} \end{figure*} \begin{figure*}[t] \centering \subfigure[]{ \includegraphics[trim=0cm 0cm 0.8cm 0cm, clip, width = .31\textwidth]{./figs/r_Pearson_t_2T6P.png}} \subfigure[]{ \includegraphics[trim=0cm 0cm 0.8cm 0cm, clip, width = .31\textwidth]{./figs/r_Pearson_t_2T5P.png} } \subfigure[]{ \includegraphics[trim=0cm 0cm 0.8cm 0cm, clip, width = .31\textwidth]{./figs/r_Pearson_t_1T4P.png} } \subfigure[]{ \includegraphics[trim=0cm 0cm 0.8cm 0cm, clip, width = .31\textwidth]{./figs/r_Pearson_t_2T6P_CAD.png}} \subfigure[]{ \includegraphics[trim=0cm 0cm 0.8cm 0cm, clip, width = .31\textwidth]{./figs/r_Pearson_t_2T5P_CAD.png} } \subfigure[]{ \includegraphics[trim=0cm 0cm 0.8cm 0cm, clip, width = .31\textwidth]{./figs/r_Pearson_t_1T4P_CAD.png} } \caption{Pearson's correlation between MBF and FDG $K_1$ as a function of scan duration. (a, d) 2T6P, (b, e) 2T5P, (c, f) 1T4P. 
Top row (a-c): all patients; Bottom row (d-f): IHD patients.} \label{fig:r_t} \end{figure*} \subsection{Correlation Between FDG $K_1$ and MBF} Figure \ref{fig:K1MBF}(a-c) shows the correlation plots between MBF and FDG $K_1$ in the all-patient group, which includes both IHD and non-IHD patients, for the three protocols: 2T6P-1H, 2T5P-15M, and 1T4P-2M. Fig. \ref{fig:K1MBF}(d-f) shows the results for the IHD patient group. With the 2T6P protocol, the Pearson's correlation coefficient (denoted r) was 0.69 (p=0.026) in the all-patient group. 2T5P-15M demonstrated a similar result, while 1T4P-2M had a reduced r value. The correlation became higher in the IHD group because the large MBF values were excluded and the linear approximation of FDG $K_1$ to MBF became better. The effect of scan duration on the Pearson's r between MBF and FDG $K_1$ is shown in Figure \ref{fig:r_t}(a-c) for the all-patient group. The maximum Pearson's r between MBF and FDG $K_1$ with the 2T6P model was reached at a scan duration of one hour, whereas it was reached at 10-20 minutes for the 2T5P model and at 2-3 minutes for the 1T4P model. These results are consistent with our previous analysis of the identifiability of FDG $K_1$ \cite{Zuo2020}. The results for the IHD group are shown in Figure \ref{fig:r_t}(d-f), demonstrating temporal patterns different from those of the all-patient group. In the IHD group, the maximum r values of the three protocols are also approximately comparable to each other. Based on these results (Table \ref{tbl:eg_Ks_FDG} and Fig. \ref{fig:r_t}), the 2T5P-15M protocol appears to be equivalent to 2T6P-1H for FDG $K_1$ quantification and for correlating with MBF. Considering that a larger sample size is available for the 2T5P-15M protocol than for 2T6P-1H, in the following sections we mainly focus on the FDG $K_1$ data from 2T5P-15M for further analysis. 
\subsection{Effect of Other Biological Variables} Table \ref{tbl:r_p_MBF_others} shows the Pearson's r and p values between FDG $K_1$ (estimated by 2T5P-15M) and the biological variables, including age, BMI, and BG level, in the IHD group and the all-patient group. The results for MBF are also included in the table for comparison. FDG $K_1$ tended to correlate with BG (r=-0.63, p=0.07 in the IHD group and r=-0.51, p=0.09 in the all-patient group), while similar trends were not observed for MBF. BMI tended to inversely correlate with both FDG $K_1$ (r=-0.43, p=0.13) and MBF (r=-0.49, p=0.08) in the all-patient group but not in the IHD group (p$>$0.4). Given that BMI tended to correlate with both MBF and FDG $K_1$ in the all-patient group with borderline p values ($\approx 0.1$), we added BMI to the multivariate linear regression of FDG $K_1$ with MBF to further test whether BMI is a potential confounding factor. FDG $K_1$ remained a significant predictor of MBF (p=0.031), while BMI became insignificant (p=0.325). This result suggests that the association between MBF and FDG $K_1$ is robust. 
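The multivariate adjustment described above can be reproduced with ordinary least squares and t statistics on the coefficients; a minimal sketch with synthetic data (assuming numpy/scipy; the actual analysis was performed in MATLAB):

```python
import numpy as np
from scipy import stats

def ols_pvalues(X, y):
    """OLS fit of y = X @ beta; returns beta and two-sided p values.
    X should include a column of ones for the intercept term."""
    n, k = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - k)          # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)     # coefficient covariance
    tvals = beta / np.sqrt(np.diag(cov))
    pvals = 2 * stats.t.sf(np.abs(tvals), df=n - k)
    return beta, pvals
```

In the setting of Eq. (6), the design matrix would carry FDG $K_1$, covariates such as BMI, and the constant intercept column.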
\begin{table} \centering \footnotesize \begin{tabular}{l|lccccccc} \hline\hline & & \multicolumn{3}{c}{IHD Patients} && \multicolumn{3}{c}{All Patients}\\ \cline{3-5}\cline{7-9} Biological Variables &&Age&BMI &BG &&Age&BMI &BG\\ \hline\hline \multirow{2}*{FDG $K_1$}&r &0.44 &-0.18&-0.63 && 0.43 & -0.43 &-0.51 \\ &p &0.20&0.61&0.07 & & 0.13 & 0.13 &0.09 \\\hline \multirow{2}*{$^{82}$Rb MBF}& r & 0.03&-0.25&-0.21 &&0.24 & -0.49 &-0.13 \\ &p &0.93&0.48&0.58 && 0.40 & 0.08 &0.68 \\ \hline\hline \end{tabular} \caption{Pearson's r and p values of FDG $K_1$ (by 2T5P-15M) and MBF with patients' age, BMI, and blood glucose (BG) concentration.} \label{tbl:r_p_MBF_others} \end{table} \begin{table}[t] \centering \footnotesize \begin{tabular}{l|cc} \hline\hline &IHD & non-IHD \\\hline $^{82}$Rb MBF (mL/min/g) & 0.64$\pm$0.13 & 1.15$\pm$0.47\\ FDG $K_1$ (mL/min/g) & 0.40$\pm$0.16 & 0.59$\pm$0.11\\ Myocardial FDG extraction fraction (\%) & 61$\pm$17 &55$\pm$18\\ \hline\hline \end{tabular} \caption{Comparison of MBF, FDG $K_1$, and myocardial FDG extraction fraction in IHD and non-IHD patients.} \label{tbl:det_mean} \end{table} \begin{figure*}[t] \centering \subfigure[]{ \includegraphics[width = .45\textwidth]{./figs/boxplot_MBF_CADSAR.png} } \subfigure[]{ \includegraphics[width = .45\textwidth]{./figs/boxplot_K1_CADSAR_2T5P.png} } \subfigure[]{ \includegraphics[width = .45\textwidth]{./figs/boxplot_Ef_CADSAR_2T5P.png} } \caption{Box plots of PET measures in the IHD and non-IHD groups. (a) MBF, (b) FDG $K_1$, (c) myocardial FDG extraction fraction E.} \label{fig:det_boxplot} \end{figure*} \subsection{Detection Performance} Table \ref{tbl:det_mean} summarizes FDG $K_1$, Rb MBF, and the myocardial FDG extraction fraction ($K_1$/MBF) in the IHD and non-IHD groups. Fig. \ref{fig:det_boxplot} further shows the box plots of MBF and FDG $K_1$ for IHD versus non-IHD patients. The t test of the two groups indicated that the MBF of the IHD group was significantly lower than that of the non-IHD group (p=0.006). 
A similar relationship holds for FDG $K_1$: FDG $K_1$ was lower in IHD than in non-IHD patients. However, the difference in FDG $K_1$ between the two groups only reached a borderline p value of 0.054. The estimated FDG extraction fraction in the myocardium varied from 30\% to 87\% and was slightly higher in IHD than in non-IHD patients, but without statistical significance (p=0.59). The abilities of MBF and FDG $K_1$ to detect IHD are further compared in Fig. \ref{fig:det_roc} using the SNR detectability index and ROC analysis. The SNR of MBF was higher than that of FDG $K_1$ (1.49 versus 1.36). Both MBF and FDG $K_1$ achieved a good detection performance for classifying IHD versus non-IHD with AUC$\geq$0.85. FDG $K_1$, however, had a lower AUC than MBF (0.85 versus 0.97). \begin{figure*}[t] \centering \subfigure[]{ \includegraphics[width = .48\textwidth]{./figs/detect_SNR.png} } \subfigure[]{ \includegraphics[width = .48\textwidth]{./figs/detect_ROC.png} } \caption{Comparison of MBF with FDG $K_1$ for differentiating IHD from non-IHD. (a) SNR, (b) ROC.} \label{fig:det_roc} \end{figure*} \section{Discussion} In this paper, we investigated the correlation between myocardial FDG $K_1$ and MBF in fourteen patients with IHD or non-IHD (cardiac sarcoidosis). Following our previous work on identifiability analysis for cardiac FDG $K_1$ quantification, three protocols (2T6P-1H, 2T5P-15M, and 1T4P-2M) were evaluated for their appropriateness for the task. The results showed that 2T5P-15M and 1T4P-2M may provide correlation results comparable to those of the full one-hour protocol (2T6P-1H) (Figure \ref{fig:K1MBF} and Figure \ref{fig:r_t}). These correlation results differ slightly from the identifiability results, which showed that the 1T4P protocol may compromise FDG $K_1$ quantification when the one-hour data are used as the reference. The disparity is likely due to the physiological difference between FDG $K_1$ and MBF. 
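The detection comparison above (SNR index and AUC) can be reproduced from two groups of scalar estimates; a minimal sketch with illustrative numbers (not the patient data). For a single scalar measure, the Mann-Whitney statistic shown here is an equivalent nonparametric estimate of the AUC:

```python
import numpy as np

def detect_snr(x1, x0):
    """SNR of the prewhitening numeric observer (Eq. 7): square root of
    the squared mean difference over the average group variance."""
    x1, x0 = np.asarray(x1, float), np.asarray(x0, float)
    num = (x1.mean() - x0.mean()) ** 2
    den = 0.5 * x1.var(ddof=1) + 0.5 * x0.var(ddof=1)
    return np.sqrt(num / den)

def auc(pos, neg):
    """Mann-Whitney estimate of the area under the ROC curve."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

With only 14 subjects per comparison, such indices should of course be read as descriptive rather than definitive.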
Our study further showed that FDG has a moderate extraction fraction of 60\% in the myocardium (table \ref{tbl:det_mean} and figure \ref{fig:det_boxplot}(c)) as compared to $^{82}$Rb MBF. In the study, myocardial FDG $K_1$ was associated with MBF (figure \ref{fig:K1MBF}) and demonstrated a higher linear correlation in the IHD group (r=0.81) than in the all-patient group (r=0.69). The reduced correlation in the all-patient group is presumably because high MBF is present in the non-IHD patients, which is associated with a lower FDG extraction fraction and leads to a nonlinear relationship. The correlation between FDG $K_1$ and MBF remained statistically significant after adjustment for BMI. When used for detecting IHD versus non-IHD, both FDG $K_1$ and MBF achieved a good performance with AUC$\geq0.85$ (figure \ref{fig:det_roc}). Despite the correlation with MBF and potential as a surrogate for MBF, myocardial FDG $K_1$ demonstrated differences from MBF. FDG $K_1$ tended to correlate negatively with blood glucose level (p=0.09, table \ref{tbl:r_p_MBF_others}), while MBF did not (p=0.68). The FDG extraction fraction in the myocardium also varies and is far from that of an ideal flow tracer, for which the extraction fraction is close to 1.0. All of these may present a limitation for the use of FDG $K_1$. In fact, for detecting IHD versus non-IHD, myocardial FDG $K_1$ was not fully competitive with MBF, as shown in figure \ref{fig:det_roc}. It is worth noting that, following the Renkin-Crone model \cite{Carson2005}, the FDG delivery rate $K_1$ by conventional kinetic modeling is a mix of blood flow and glucose transport rate. Therefore, the conventional FDG $K_1$ estimate may not be the optimal parameter to approximate MBF.
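The adjustment for BMI mentioned above can be carried out as a partial correlation. The residual-regression sketch below is one standard way to do this, not necessarily the exact procedure used in the study.

```python
import numpy as np

def partial_corr(x, y, z):
    """Pearson correlation of x and y after linearly regressing out
    the covariate z from both (residual method)."""
    x, y, z = (np.asarray(a, float) for a in (x, y, z))
    A = np.column_stack([np.ones(len(z)), z])  # intercept + covariate
    def resid(v):
        beta, *_ = np.linalg.lstsq(A, v, rcond=None)
        return v - A @ beta
    return float(np.corrcoef(resid(x), resid(y))[0, 1])
```

Here one would pass the per-patient FDG $K_1$ values as `x`, MBF as `y`, and BMI as `z`; a significant partial correlation indicates the $K_1$-MBF association is not explained by BMI alone.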
One possible solution is to separate the estimation of blood flow and glucose transport rate from FDG by using high-temporal resolution dynamic PET imaging and more advanced kinetic modeling \cite{Wang2019, Wang2018}, which will be explored in future work following our preliminary analyses. Several limitations of this study must be acknowledged. First, the sample size is small. The imaging study is complex, as it consists of both a dynamic FDG-PET scan and a Rb-PET scan, making patient accrual challenging. The results from the 14 patients mainly provide a preliminary report to warrant future studies. Second, this paper did not include a specific investigation of potential confounders such as adherence to the fasting state. How this would affect the correlation between FDG $K_1$ and MBF is worth further investigation. Third, the analysis of this study was limited to evaluation of global myocardial quantification. The major reason that we did not include a segment-level investigation was that the dynamic data of myocardial segments in this study suffer from high noise. The noise performance of the scanner (2002 GE Discovery ST model) used in this study was far from optimal for exploring segment-based $K_1$, as indicated by the result from the previous identifiability analysis \cite{Zuo2020}. Fourth, motion could affect the accuracy of kinetic quantification but was not considered in this study. However, we do not expect the effect of motion to result in a significant change to the results, given that the quantification was performed on large, global ROIs and the spatial resolution of the PET scanner is only about 6-8 mm. Our future work will exploit state-of-the-art PET scanners and advanced image reconstruction algorithms to implement the high-temporal resolution kinetic modeling strategies \cite{Wang2019, Wang2018} for FDG flow quantification in the myocardium.
The latest clinical PET scanners have a 4-6 fold effective sensitivity gain as compared to a typical conventional scanner such as the GE Discovery 690 (see Table IV in \cite{Wang20TRPMS}), and substantially outperform the GE Discovery ST scanner used in this study. In particular, the EXPLORER total-body PET/CT scanner \cite{Cherry2017, Badawi2019} has an ultrahigh sensitivity for dynamic imaging, making it more feasible to perform pixel-wise parametric imaging, including in the myocardium. Furthermore, improved dynamic image reconstruction using machine learning concepts has been developed for dynamic PET imaging, e.g. kernel methods \cite{Wang2015} and deep neural-network methods \cite{Gong2019, Reader2020}. The progress in PET instrumentation and algorithms may provide a new opportunity to explore the use of FDG for myocardial blood flow imaging. \section{Conclusion} Dynamic $^{18}$F-FDG PET with tracer kinetic modeling has the potential to derive MBF without the need for a flow-specific radiotracer. The results from the patient data comparing the FDG delivery rate $K_1$ with $^{82}$Rb MBF show that FDG $K_1$ is closely associated with MBF in the myocardium, with a reasonably high linear correlation (r=0.81) in the IHD patients and a moderate correlation (r=0.69) when non-IHD patients are included. The numeric observer SNR and ROC analyses show that both myocardial FDG $K_1$ and MBF can detect IHD versus non-IHD, though FDG $K_1$ is not fully competitive with MBF for this specific task. This result stresses the importance of developing improved kinetic modeling for accurate myocardial FDG flow imaging. \section*{Acknowledgments} This work was supported in part by National Institutes of Health (NIH) under grant no. R21 HL131385 and American Heart Association under grant no. 15BGIA25780046. The work of J.E.L. is also supported in part by the Harold S.
Geneen Charitable Trust Awards Program and the National Center for Advancing Translational Sciences, NIH, grant number UL1 TR001860 and linked award KL2 TR001859. The authors thank Denise Caudle, Michael Rusnak, and Ben Spencer for their assistance in the dynamic PET/CT data acquisition, Diana Ramos for her efforts in patient recruitment, and the patients who agreed to participate in these studies. \section*{References} \bibliographystyle{unsrt}
\section{Introduction} We consider a stochastic differential game model for an exhaustible commodity market, such as oil. The dynamic market evolution is driven by the use of existing reserves to produce energy and exploration/discovery of new reserves. We assume a Cournot-type competition where each producer chooses their production rate; this resembles e.g.~OPEC members who adjust their output to influence crude oil prices. Extraction of the commodity generates a revenue stream but carries the depletion trade-off. To offset the resulting lower reserves, producers undertake efforts to explore for new reserves. Exploration is uncertain: continuous exploration efforts stochastically lead to discrete discoveries of additional reserves. Individually, producers aim to maximize their total expected profits, which are equal to price times quantity extracted, minus the production and exploration costs. Strategically, the producers interact via the global market price $p$ that is determined by the aggregate production. To model this oligopoly among commodity producers (i.e.~energy firms), we assume a continuum of homogeneous agents who compete in a single market for energy. Each agent is small enough to be a price taker, yet in equilibrium the aggregate behavior fully determines supply and in turn the clearing price. For simplicity, we assume constant demand, focusing on producers' choices. This model is reasonable for describing the long-term behavior of the market (on the scale of years), where micro-economic fluctuations are averaged out and commodity supply and reserves are the main determinants of market structure. While the literature on single-agent optimization for exhaustible resource extraction dates back to the 1970s \cite{DasguptaHealBook,Pindyck78,Pindyck80}, the first rigorous treatment of a dynamic continuous-time non-cooperative model was by Harris et al.~\cite{HHS} in 2010.
They studied an $N$-player Cournot game via the associated system of nonlinear HJB partial differential equations, but due to numerical challenges, illustrations were limited to the two-player model. Further analysis of competitive duopoly was carried out in \cite{LS-Cournot} and our earlier work \cite{LudkovskiYang14}; the special case of a single exhaustible player competing against $N$ renewable producers of varying profitability was studied in \cite{SircarLedvina12,DasarathySircar14}. \subsection{Mean field game approach} In a differential game model with a finite number $N$ of players, their equilibrium strategies can be determined by a system of Hamilton-Jacobi-Bellman-Isaacs (HJB-I) equations derived from the dynamic programming principle. The dimension of the system in general increases in $N$, which makes the game model intractable for large $N$. The mean field game (MFG) approach simplifies the modeling by considering equilibria with a continuum of homogeneous players; the respective finite-dimensional game state expands into a distribution $m(\cdot)$. The main idea is then to consider the optimization problem of the representative agent; the latter becomes a regular stochastic control problem with the competitive effect captured via a mean-field interaction driven by $m(\cdot)$. In turn, the aggregate behavior of the players implies dynamics on the distribution $m$ of agent states. This leads to a system of two partial differential equations (PDE's) which is viewed as an approximation to the multi-dimensional system of HJB-I PDE's in the original finite-$N$ setup. We refer to \cite{BensoussanFrehseBook,CarmonaBook} for the general theory of MFGs. In our context, the individual states are the reserves' levels $X$, and the interaction is via the market price $p$ that is related to total production $Q$ across all the producers.
Thus, $p$ enters the game value function of the representative producer as the mean field term and drives her choice of production and exploration controls. In turn, the distribution of reserves is driven by the latter production rates and exploration efforts. The key aspect of such oligopolistic MFG's, first introduced in Gu\'{e}ant's PhD thesis~\cite{GueantPhD,GLL10}, is the mean-field interaction via the aggregate $Q$. Since producers directly optimize their own production rates, this corresponds to a mean-field game of \emph{controls}, in contrast to the standard situation where the interaction is through the density $m(\cdot)$. A second distinguishing feature of Cournot MFG's is the hard exhaustibility constraint when reserves reach zero. Different possibilities at $X_t = 0$ include leaving the game (which magnifies the market power of the remaining producers); switching to a renewable/inexhaustible resource; prolonging production via costly reserve replenishment. All the above choices lead to non-standard boundary conditions in the respective equations, necessitating tailored treatment. Two further crucial aspects of oligopoly MFG concern the prescribed dynamics of the reserves process $(X_t)$ and the inverse demand curve that relates $p$ to aggregate production $Q$. For the latter aspect, starting with \cite{GLL10}, price was assumed to be linear (decreasing) in quantity, which brings forward some of the tractability of the original linear-quadratic MFG's. This choice was maintained in \cite{ChanSircar14,Graber16,GraberBensoussan15}. Very recently, Chan and Sircar \cite{ChanSircar16} also investigated MFG's with power-type demand curves. We note that even with a linear price schedule, the overall link between production and price is non-linear due to the exhaustibility condition at $x=0$, which requires separately keeping track of exhausted and producing firms. For an exhaustible resource, reserves are non-increasing and are completely determined by past production.
However, this does not capture the real-life aspect of replenishment of fossil-fuel commodities, where global reserve depletion has to a large degree been offset by ongoing discoveries (deep-sea oil, shale gas, oil sands, and so on). Such discoveries take place thanks to exploration activities determined by the respective exploration efforts. In the early model of Pindyck \cite{Pindyck78}, exploration was incremental, leading to deterministic reserve additions. Subsequent extensions represented exploration as a point process counting new reserve discoveries, see \cite{ArrowChang82,DeshmukhPliska80,HaganCaflisch94}, which is also the choice we pursue below. Another alternative, which is motivated by \emph{uncertainty} about current reserves, has been to introduce exogenous Brownian noise, i.e.~make reserves fluctuate stochastically. This is also convenient for theoretical and numerical purposes and has been commonly used in the recent MFG literature, see \cite{ChanSircar14,GraberBensoussan15,GraberMouzouni17,GLL10}. Let us also mention the further possibilities of $(X_t)$ following a stochastic differential equation with controlled volatility and drift \cite{Pindyck80} and a common noise MFG model to capture systematic shocks to aggregate reserves~\cite{Graber16}. Mathematically, the MFG setup leads to an HJB equation to model a representative agent's strategy, and a transport equation to model the evolution of the distribution of all the producers' states. The structure of these equations is determined by the prescribed dynamics of $(X_t)$. When $X_t$ is deterministic, the equations are first-order, see e.g.~\cite{CardaliaguetGraber15}. When $X_t$ includes Brownian shocks (cf.~\cite{GLL10,ChanSircar14}), the HJB equation is second-order and the transport equation is the usual Kolmogorov forward equation. In contra-distinction, discrete discoveries add a first-order non-local (``delay'') term to both the HJB \cite{LS-Cournot,LudkovskiYang14} and transport equations.
These features are central to the numerical resolution of MFG's, which requires handling a coupled system of nonlinear PDE's. We refer to \cite{Achdou13,Achdou16,CarliniSilva14} for a general summary of different computational approaches and their convergence, including finite difference and semi-Lagrangian schemes. An alternative common approach \cite{GLL10,ChanSircar14,ChanSircar16} is based on Picard-like iterative schemes. \subsection{Summary of Contributions} In this paper we apply the MFG approach to model energy markets with a large population of competing producers of exhaustible but replenishable resources. Our main focus is the strategic interaction between exploration and production (E\&P), in a dynamic, stochastic, game-theoretic framework. E\&P is a major theme in the business decisions of energy firms, but is rarely tackled as such in mathematical models. Some of the topics we investigate are: (i) the price effect of exploration; (ii) aggregate production path implied by the model; (iii) aggregate exploration efforts; (iv) possibility of a stationary equilibrium where exploration exactly offsets production; (v) impact of exploration uncertainty on the solution. Our analysis yields quantitative insights into the macro behavior of commodity industries on long-time horizons, linking up with colloquial topics of ``peak oil'', ``value of exploration R\&D'' and ``postponing the exhaustion Doomsday''. Specifically, the stationary model where individual resource levels stochastically change, but the overall reserves distribution and associated aggregate production and price remain constant, is a feature that to our knowledge is new in the oligopoly MFG literature. Relative to existing Cournot MFG models, we emphasize the analysis of stochastic exploration which leads to first-order, non-local MFG equations. In that sense, our work fits into two different strands of game-theoretic models of energy production.
On the one hand, we extend \cite{ChanSircar14,GLL10}, who considered exhaustible resource MFGs but without exploration; thus reserves were non-increasing. On the other hand, we extend the duopoly model \cite{LS-Cournot} to the limiting oligopoly with a continuum of producers. In the duopoly each producer directly influences the price; in the MFG model herein, each producer has negligible power over the market price, which is rather driven by the \emph{aggregate} production. The closest work to ours is by Chan and Sircar \cite{ChanSircar16}, who primarily focused on competition of exhaustible producers who switch to a costly renewable resource upon ultimate depletion. They also studied competition of a large group of exhaustible producers alongside a single renewable producer, similar to the major-minor model of Huang \cite{Huang10}. Section 5 of \cite{ChanSircar16} then briefly discusses resource exploration and the respective stationary equilibrium. Compared to their illustration, we provide multiple additional analyses, including a detailed treatment of both the time-dependent and time-stationary equilibria, convergence to stationarity as the problem horizon increases, and a study of the ``fluid limit''. The latter is a law-of-large-numbers scaling that maintains exploration/production controls but removes the associated uncertainty. This mechanism allows us to quantify the pure impact of uncertainty on the MFG model, linking to the deterministic first-order MFG, which is another new development relative to the existing literature. Our setup generates several numerical challenges due to the non-local terms in the PDE's and a non-local coupling between them; a major part of the paper is devoted to constructing a computational scheme to solve the MFG equations.
Specifically, we decouple the HJB and transport equations via a Picard-like iteration that alternately updates the optimal production and exploration controls, and the reserves distribution function (which in turn determines the market price). For the HJB equation and similar to \cite{ChanSircar16}, we employ a method of lines, discretizing the space dimension and solving the resulting ordinary differential equations in the time dimension. The latter still constitutes a coupled system of ODE's due to the exploration control and the implicit boundary condition at $x=0$. For the transport equation we use a fully explicit finite-difference scheme. However, due to the non-smooth dynamics of $(X_t)$, rather than working with the density $m(dx)$, we operate with the corresponding upper-cumulative distribution function $\eta(\cdot)$, and moreover separately treat the proportion $\pi(t)$ of exhausted producers. This paper is organized as follows. In Section~\ref{sec: Model}, we introduce the $N$-player Cournot game that motivates the MFG model in the limit $N\rightarrow \infty$. Section~\ref{sec: Mean field game Nash equilibrium} discusses the doubly coupled system of HJB and transport equations that characterize the MFG Nash equilibrium. Section~\ref{sec: Numerical methods and examples} is devoted to numerical methods for solving this system and presents numerical illustrations. The rest of the paper then presents two modifications of the main model that yield important economic insights. In Section~\ref{sec: Stationary mean field game Nash equilibrium}, we study the stationary MFG model, in which the reserves distribution remains invariant due to the counteracting effects of production and exploration. Section~\ref{sec: Fluid limit of exploration process} investigates the asymptotic ``fluid limit'' regime whereby the exploration process becomes deterministic, so that discovery of new resources happens at high frequency with infinitesimal discovery amounts.
The paper concludes with Section \ref{sec:conclusion} and an Appendix that contains most of the proofs. \section{Model} \label{sec: Model} \subsection{Finite player Cournot game } \label{sec: Cournot game of $N$ players} We consider an energy market with $N$ producers (players). Each producer uses exhaustible resources, such as oil, to produce energy. Let $X^i_t$ represent the reserves level of player $i$, $i=1,\ldots,N$. Each $X^i_t$ takes values in the set $\mathds{R}_+$ of nonnegative real numbers. The reserves level $X^i_t$ decreases at a controlled production rate $q^i_t \geq 0$, and also has random discrete increments due to exploration. We use a controlled point process $(N^i_t)$ to model arrivals of new discoveries. Specifically, $N^i_t$ has intensity $\lambda(t) a^i_t$, where $a^i_t$ is the exploration effort controlled by player $i$. The parameter $\lambda(t)$ is the rate of discovery per unit exploration effort, which reflects the current exploration techniques and the overall resources underground; it is thus taken as exogenously given and the same for all producers. Since total resources underground are depleted over time, it is reasonable to assume that $\lambda(t)$ is decreasing in $t$ and $\lim_{t\rightarrow \infty} \lambda(t) = 0$. Let $\tau_n^i$ be the $n$-th arrival time of the point process $N_t^i$; then the inter-arrival time between the $n$-th and $(n+1)$-st arrivals has the survival function \[ \mathds{P} \left( \tau^i_{n+1} > \tau^i_{n}+t \right) = \exp \left( - \int_{\tau^i_n}^{\tau^i_{n}+t} \lambda(s) a^i_s d s \right) . \] Let $\delta$ denote the unit amount of a discovery, which is assumed to be a positive constant as in \cite{LS-Cournot,LudkovskiYang14}. The unit amount of discovery $\delta$ can be random in general, see Remark \ref{remark: random delta}.
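A discovery process $N^i_t$ with intensity $\lambda(t) a^i_t$ can be simulated by thinning a dominating Poisson process. The sketch below is illustrative only; the decreasing $\lambda(t) = e^{-0.1t}$ and the constant effort are hypothetical choices, not taken from the model calibration.

```python
import numpy as np

def simulate_discoveries(lam, a, T, lam_max, seed=0):
    """Arrival times on [0, T] of a point process with intensity
    lam(t) * a(t), via thinning of a rate-lam_max Poisson process.
    Requires lam(t) * a(t) <= lam_max on [0, T]."""
    rng = np.random.default_rng(seed)
    t, arrivals = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)          # next candidate epoch
        if t > T:
            return arrivals
        if rng.uniform() < lam(t) * a(t) / lam_max:  # accept w.p. intensity ratio
            arrivals.append(t)

# Illustrative run: decaying discovery rate, constant exploration effort.
times = simulate_discoveries(lambda t: np.exp(-0.1 * t), lambda t: 2.0,
                             T=50.0, lam_max=2.0)
```

Because $\lambda(t)$ decays, accepted arrivals cluster early in the horizon, matching the interpretation of progressively harder discoveries.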
The reserves dynamics of each player are given by the following stochastic differential equation \begin{align} \label{individual reserves process} dX^i_t & = -q^i_t \mathds{1}_{\{X^i_t>0\}} dt + \delta d N^i_t , \quad i = 1,2,\ldots,N , \qquad X^i_0 = x^i_0 \geq 0, \end{align} where $x^i_0$ is the initial reserves level. The indicator function $\mathds{1}_{\{X^i_t>0\}}$ means that production must be shut down, $q^i_t = 0$, whenever reserves run out, $X^i_t=0$. Under \eqref{individual reserves process}, reserves decrease continuously between discoveries according to the production schedule and experience an instantaneous jump of size $\delta$ at discovery epochs. \textbf{Cost functions.} We assume that all producers have identical quadratic cost functions of production and exploration, denoted by $C_q(\cdot)$ and $C_a(\cdot)$ respectively, \begin{equation} \label{production and exploration cost functions} C_q(q) = \kappa_1 q + \beta_1 \frac{q^2}{2}, \qquad C_a(a) = \kappa_2 a + \beta_2 \frac{a^2}{2} . \end{equation} The coefficients $\beta_{1,2}$ of the quadratic terms are assumed to be positive, making the cost functions strictly convex and guaranteeing that the optimal production and exploration effort levels are finite. The coefficients $\kappa_{1,2} \ge 0 $ of the linear terms represent constant marginal costs of production and exploration due to the use of facilities and labor, while the quadratic coefficients $\beta_{1}, \beta_2$ represent increasing marginal costs due to negative externalities (such as rising labor costs or nonlinear taxation). We note that when $\kappa_2 =0$, exploration is always undertaken; otherwise $a^\ast_t = 0$ could be optimal. \textbf{Supply-demand equilibrium. } We assume there is a single market price $p$ (so the market is undifferentiated, which is a reasonable assumption for the energy industry); in equilibrium $p$ is determined by equating the total demand to the total supply at that price level.
This equilibrium is achieved instantaneously at each date $t$, which is viewed as fixed in the following exposition. In addition to the $N$ producers we assume there are $M$ undifferentiated consumers. The demand of consumer $j$, denoted by $d^j$, depends on the price through the linear demand function $d^j = D(p)= L - p$. Note that demand is finite even at zero price and the demand elasticity is bounded. The aggregate demand, denoted by $Q^{(M)}_{demand}$, is the sum $$ Q^{(M)}_{demand} := \sum_{j=1}^M d^j = M(L-p). $$ We now equate total demand with total supply $Q^{(N)}$ and substitute the right-hand side above to obtain the equilibrium relation between total supply and price, $ Q^{(N)} = M(L-p). $ Finally, the clearing price can be represented through the inverse demand function \begin{equation} p_t = L - \frac{1}{M} Q^{(N)}_t = L - \frac{N}{M} \left( \frac{1}{N}\sum_{i=1}^N q^i_t \right) = L - \frac{N}{M} \bar{Q}_t, \label{inverse demand function} \end{equation} where $\bar{Q}_t$ is the \emph{average} production rate. To obtain a nontrivial limiting price as the number of producers and the number of consumers both go to infinity, we see that it is necessary to take $M \propto N$. Without loss of generality (if necessary by redefining $L$), we assume $M=N$, so that $p_t= L - \bar{Q}_t= D^{-1}(\bar{Q}_t)$, where $L$ can be interpreted as the cap on prices as supply vanishes. \subsection{Game value functions and strategies} In a continuous-time Cournot game model, each player continuously chooses a rate of production $q^i_t$ in order to maximize profit, which is equal to the revenue $p_t\cdot q^i_t$ minus the production and exploration costs, integrated and discounted at a rate $r>0$. We work on a finite time horizon $[0, T]$, where $T$ is exogenously specified. The role of the horizon will be revisited in the sequel. The price $p_t$ each player receives is determined through the inverse demand function \eqref{inverse demand function}.
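With the $M=N$ normalization, the clearing price in \eqref{inverse demand function} reduces to the demand cap minus the average production rate; a trivial helper (illustrative, not from the paper) makes the computation explicit.

```python
def clearing_price(L, q_rates):
    """Inverse demand D^{-1} at the average production rate:
    p = L - (1/N) * sum_i q_i, using the M = N normalization."""
    return L - sum(q_rates) / len(q_rates)
```

For instance, with cap $L = 10$ and three producers extracting at rates $1, 2, 3$, the average production is $2$ and the clearing price is $8$; if production vanishes, the price rises to the cap $L$.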
Denoting player-$i$'s strategy by $s^i :=\left( q^{i}, a^{i} \right)$, the overall strategy profile for all the $N$ players is $\boldsymbol{s}:= \left( s^1, s^2, ... , s^N \right)$. Starting with reserves state $\left( X^1_t, \cdots , X^N_t \right) =: \bf{X}_t = \bf{x}$, each player's objective functional $\mathcal{J}^i, i=1,\ldots,N$ on the horizon $[t,T]$ is defined as the total discounted profit \begin{align} & \mathcal{J}^i \left( \boldsymbol{s} ; t, \bf{x} \right) := \E \left\{ \int_t^T \left[ D^{-1}\left( \frac{1}{N} \sum_{j=1}^N q^j_s \right) q_s^i - C_q(q_s^i) - C_a(a_s^i) \right] e^{-r(s - t)} \ ds \ \Big| \ \bf{X}_t = \bf{x} \right\}, \label{N players objective functionals} \end{align} where the expectation is over the random point processes $N^j$'s that drive $X^j$'s and hence $q^j$'s. We focus on the admissible set $\mathcal{A}$ of strategies whereby $s^i_t = (q^i_t, a^i_t)$ are Markovian feedback controls $q^i_t=q^i(t, {\bf X}_t)$, $a^i_t=a^i(t, {\bf X}_t)$ such that $\mathcal{J}^i({\bf{s}}; t, \bf{x}) < \infty$, $\forall {\bf{x} } \in \mathds{R}^N_{+}$, for all $i = 1, \ldots, N$. From \eqref{N players objective functionals}, we see that each player's choice of strategy depends on the strategies of all the others, leading to a non-cooperative game. Our aim is to investigate the resulting (Markov feedback) Nash equilibrium. Importantly, the feedback structure of the controls together with \eqref{inverse demand function} imply that player $i$'s dependence on $\bf{X}_t$ can be summarized by his individual reserves $X^i_t$ and the aggregate distribution of all players' reserves. The latter is characterized through the upper-cumulative distribution function defined by $$ \eta^{(N)}(t, x) := \frac{1}{N}\sum_{j=1}^N \mathds{1}_{ \left\{ X^j_t \geq x \right\} }. 
$$ Thus the Markovian feedback controls $\left( q^i, a^i \right)$ can be equivalently represented as \begin{align} q^i_t & = q^i\left( t, X^i_t ; \eta^{(N)}(t,\cdot) \right) , \qquad a^i_t = a^i\left( t, X^i_t; \eta^{(N)}(t,\cdot) \right) , \qquad i=1,\ldots,N. \notag \end{align} \begin{defn}[Nash equilibrium] A Nash equilibrium of the $N$-player game is a strategy profile $\boldsymbol{s}^\ast = \left( s^{1,\ast},\ldots,s^{N,\ast} \right)$, with $s^{i,\ast} := \left( q^{i, \ast}, a^{i, \ast} \right) $ such that \begin{align} \mathcal{J}^i\left( \boldsymbol{s}^\ast; t, \bf{x} \right) \geq \mathcal{J}^i\left( (\boldsymbol{s}^{\ast, -i}, s^i) ; t, \bf{x} \right) , \quad \forall i \in \{ 1, 2, \ldots, N \}, \end{align} where $(\boldsymbol{s}^{\ast,-i}, s^i)$ is the strategy profile $\boldsymbol{s}^\ast$ with the $i$-th entry replaced by arbitrary $s^i=(q^{i}, a^{i}) \in \mathcal{A}$. \end{defn} In words, a Nash equilibrium is a set of strategies of the $N$ players such that no one can be better off by unilaterally changing his own strategy. Theoretically, the Nash equilibrium of the $N$-player game can be found via the Hamilton-Jacobi-Bellman-Isaacs (HJB-I) approach, which uses the dynamic programming principle to derive a partial differential equation for each player's game value function, with the other players' strategies as inputs. It is extremely hard to find a Nash equilibrium using the HJB-I approach either analytically or numerically, even for small $N$, e.g.~$N=2$. Thus, in the following Section~\ref{Mean field game problem as N infinity}, we introduce the mean field game model as $N \rightarrow \infty$, which serves as an approximation to the Nash equilibrium of the game when the number of players is very large.
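The empirical upper-CDF $\eta^{(N)}(t,\cdot)$ defined above is straightforward to evaluate from a snapshot of the players' reserves levels; a minimal helper (illustrative only) is:

```python
import numpy as np

def empirical_upper_cdf(reserves, x):
    """eta^(N)(t, x): fraction of the N players whose current
    reserves level is greater than or equal to x."""
    reserves = np.asarray(reserves, float)
    return float(np.mean(reserves >= x))
```

By construction the function is decreasing in $x$, equals $1$ at $x=0$ (all reserves are nonnegative), and tends to $0$ as $x$ grows, consistent with the MFG limit $\eta$.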
\subsection{Mean field game problem as $N \rightarrow \infty$} \label{Mean field game problem as N infinity} As the number of players becomes very large ($N \rightarrow \infty$), thanks to the Law of Large Numbers the empirical distribution $\eta^{(N)}$ is expected to converge to an upper-CDF $\eta$. The limiting $\eta(t, x)$ is regarded as the reserves distribution among all players at date $t$: for given $t$ and $x$, it is the proportion of players at time $t$ with reserves level greater than or equal to $x$. The production and exploration controls continue to take the Markovian feedback form \begin{align} \label{production and exploration controls mean field game} q_t &=q(t, X_t; \eta(t,\cdot)), \quad a_t =a(t, X_t; \eta(t,\cdot)). \end{align} To re-solve for the supply-demand equilibrium clearing price, we use the total quantity $Q(t)$ of production at time $t$, defined as the Stieltjes integral of a representative producer's production rate with respect to the reserves distribution, \begin{equation} Q(t) := - \int_0^\infty q(t, x; \eta) \ \eta(t, dx) . \label{mfg total production definition} \end{equation} Note that $\eta(t,x)$ is decreasing in $x$; thus we add a negative sign to the integral in order to keep $Q(t)$ positive. The definition in \eqref{mfg total production definition} is the limit of the original $\bar{Q}_t$ that was defined for the $N$-player game. As before, $Q(t)$ is linked to the clearing price via \begin{equation} p(t, \eta(t,\cdot)) = D^{-1}(Q(t))= L + \int_0^\infty q(t, x; \eta) \ \eta(t, dx) .
\label{mfg price function} \end{equation} For a representative producer who starts with initial reserves level $X_t=x$ (with $x \in \mathbb{R}_+$ now a scalar), and representing all other players' states via $\eta(\cdot, \cdot)$ taken as given, the mean-field objective functional is defined analogously to \eqref{N players objective functionals}: \begin{align} \label{mfg objective functional} \mathcal{J} \left( q, a ; t, x, \eta \right) := \E \left\{ \int_t^T \left[ p(s,\eta(s,\cdot)) q_s - C_q(q_s) - C_a(a_s) \right] e^{-r(s-t)} \ ds \ \middle| X_t = x\right\} . \end{align} Above, the strategies $(q_t, a_t)$ take the Markovian feedback form \eqref{production and exploration controls mean field game} and the reserves distribution $(\eta(s,\cdot))$ is a probability upper-CDF for all $s \in [t,T]$. We again remark that the profit of a player depends on all the other players through the mean field term $Q(s)$. We define the mean field game Nash equilibrium of our model as follows. \begin{defn}[Mean field game Markov Nash equilibrium] \label{Mean field game Markov Nash equilibrium definition} A mean field game Markov Nash equilibrium (MFG MNE) is a triple $\left( q^\ast, a^\ast, \eta^\ast \right)$ of adapted processes on $[0,T]$ such that, denoting by $X^\ast_t$ the solution of \begin{equation} \label{reserves dynamics mean field game case} dX^\ast_t = -q^\ast(t,X^\ast_t, \eta^\ast(t,\cdot) ) \, dt + \delta d N^\ast_t , \quad t \geq 0 , \qquad X^\ast_0 \sim \eta^\ast(0,\cdot), \end{equation} then $\eta^\ast(t, x) = \PP( X^\ast_t \ge x)$ is the upper-CDF of $X^\ast_t \, \forall t \in [0, T]$ and \begin{equation} \label{mfg optimality condition} \mathcal{J}(q^\ast, a^\ast; t, x, \eta^\ast) \geq \mathcal{J}(q, a; t, x, \eta^\ast), \ \forall (q, a)\in \mathcal{A} . \end{equation} \end{defn} Definition \ref{Mean field game Markov Nash equilibrium definition} consists of two conditions.
One condition, which we call the optimality condition, is that each producer chooses the strategy $(q^\ast, a^\ast)$ that gives the optimal game value, given the other players' strategies. The second condition, which we call the consistency condition, is that the reserves dynamics of each player under the strategy $(q^\ast, a^\ast)$ have the upper cumulative distribution function $\eta^\ast$, the same one that enters the objective functional. In Section \ref{sec: Mean field game Nash equilibrium} we introduce the differential equations characterizing the MFG MNE defined in Definition \ref{Mean field game Markov Nash equilibrium definition}, which is the core problem of this paper. \section{Mean field game Nash equilibrium} \label{sec: Mean field game Nash equilibrium} Solving for the MFG MNE involves two partial differential equations. One is the HJB equation for the game value function of a representative producer, derived via the dynamic programming principle; it yields the equilibrium production and exploration strategies $(q^\ast, a^\ast)$. The other is the transport equation characterizing the distribution $\eta^\ast$ of the reserves process $X^\ast$ controlled by the strategies $(q^\ast, a^\ast)$ obtained from the HJB equation. Section \ref{sec: Game value function of a representative player} treats the HJB equation associated to the game value function of a representative producer. The PDE that characterizes the evolution of the reserves distribution will be discussed in Section~\ref{sec: Transport equation of reserves distribution}. The overall coupled system associated to the MFG MNE is taken up in Section~\ref{sec: System of doubly coupled HJB-transport equations}. Details of numerical methods and examples will be discussed in Section \ref{sec: Numerical methods and examples}. 
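To make the clearing mechanism \eqref{mfg total production definition}--\eqref{mfg price function} concrete before turning to the PDEs, the following sketch (illustrative Python on our part, not the paper's implementation; all function and variable names are ours) evaluates the Riemann--Stieltjes sum for $Q(t)$ against a discretized upper-CDF and applies the linear inverse demand $D^{-1}(Q) = L - Q$ implied by the price formula:

```python
import numpy as np

# Illustrative sketch of market clearing: Q(t) as a Riemann-Stieltjes sum
# against a discretized upper-CDF eta, then p = D^{-1}(Q) = L - Q.

def total_production(q_vals, eta_vals):
    """eta is decreasing, so eta[m] - eta[m+1] >= 0 is the proportion of
    producers with reserves in [x_m, x_{m+1}]; summing q against these
    masses realizes Q(t) = -int q(t,x) eta(t,dx) >= 0."""
    mass = eta_vals[:-1] - eta_vals[1:]
    return float(np.sum(q_vals[:-1] * mass))

def clearing_price(q_vals, eta_vals, L=5.0):
    # linear inverse demand with choke price L, as in the model
    return L - total_production(q_vals, eta_vals)
```

For instance, a uniform reserves distribution with unit production rate everywhere yields $Q(t)=1$ and hence a clearing price of $L-1$.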
\subsection{Game value function of a representative producer} \label{sec: Game value function of a representative player} Let us fix a flow of probability upper-CDFs $\eta(t,\cdot)$, $t \in [0,T]$. Associated with the objective functional \eqref{mfg objective functional}, we define the game value function $v^\eta(t,x )$ of a representative producer by \begin{align} \label{mfg value function} v^\eta(t,x) & := \sup_{(q, a) \in \mathcal{A} } \mathcal{J} \left( q, a ; t, x, \eta \right) \notag \\ & = \sup_{ (q, a) \in \mathcal{A} } \E \left\{ \int_t^{T}\left[ p(s; \eta) q_s - C_q(q_s) - C_a(a_s) \right] e^{-r(s-t)} ds \ \middle| \ X_t = x \right\} , \end{align} where the player chooses her production rate $q(t, X_t; \eta)$ and exploration rate $a(t, X_t; \eta)$ from the set $\mathcal{A}$ of Markovian feedback controls \eqref{production and exploration controls mean field game}. Note that above $\eta$ is treated as an exogenous parameter, while the price remains endogenous, being a function of the total production: $p(t; \eta(t,\cdot)) = D^{-1}\left(Q(t) \right)$ as in \eqref{mfg price function}. This introduces a global dependence between the map $x \mapsto q(t,x)$ and $p(t)$. Define the forward difference operator $\Delta_x$ by $\Delta_x v(t, x) :=v(t, x+\delta) - v(t, x)$. 
\begin{lemma} \label{lemma: mean field game explicit HJB equation} The game value function $v^\eta(t, x)$ defined by \eqref{mfg value function} satisfies the HJB equation \begin{align} \label{mfg HJB} 0 & = \frac{\partial}{\partial t} v^\eta(t,x) -r v^\eta(t,x) + \frac{1}{2\beta_1} \left[ \left(p(t; \eta(t,\cdot)) - \kappa_1 - \frac{\partial}{\partial x} v^\eta(t,x) \right)^+ \right]^2 \notag \\ & \quad + \frac{1}{2\beta_2} \left[ ( \lambda(t) \Delta_x v^\eta(t,x) - \kappa_2 )^+ \right]^2 , \end{align} with terminal condition $v^\eta(T,x) = 0$, where the optimal $q^\eta(t, x)$ and $a^\eta(t, x)$ are given by \begin{align} & q^\eta(t,x) = \frac{1}{\beta_1} \left( L - Q^\eta(t) - \kappa_1 - \frac{\partial}{\partial x} v^\eta(t, x) \right)^+ , \label{eq:q-star} \\ & a^\eta(t, x) = \frac{1}{\beta_2}\left( \lambda(t) \Delta_x v^\eta(t,x) - \kappa_2 \right)^+ , \label{optimal mfg q and a explicit} \end{align} with $Q^\eta(t)$ uniquely determined by the equation \begin{align} Q^\eta(t) & + \int_0^\infty \frac{1}{\beta_1}\left( L - \kappa_1 - \frac{\partial}{\partial x} v^\eta(t,x) - Q^\eta(t) \right)^+ \eta(t, dx) = 0. \label{eq:Q-star} \end{align} The price $p(t)$ depends on $q^\eta(t,\cdot)$ and the given reserves distribution $\eta(t,\cdot)$ via \eqref{mfg price function}. \end{lemma} \begin{proof} The HJB equation associated with \eqref{mfg value function}, derived via the dynamic programming principle, is \begin{multline} \label{eq:direct-hjb} 0 = \frac{\partial}{\partial t} v^\eta(t,x) - r v^\eta(t,x) + \sup_{a \geq 0} \left[ - C_a(a) +a\lambda(t) \Delta_x v^\eta(t,x) \right] \\ + \sup_{q\geq 0}\left[ p(t; \eta(t,\cdot)) q - C_q(q) - q \frac{\partial}{\partial x} v^\eta(t,x) \right] , \end{multline} where the forward difference term $\Delta_x v(t, x)$ is due to the jumps in the reserves dynamics, cf.~\cite{LS-Cournot}. 
The optimal exploration rate $a^\eta$ is determined by the first order condition \begin{align} a^\eta(t, x) & = \argmax_{ a \geq 0} \left[ - C_a(a) +a\lambda(t) \Delta_x v^\eta(t,x) \right] = \frac{1}{\beta_2}\left( \lambda(t) \Delta_x v^\eta(t,x) - \kappa_2 \right)^+ , \label{eq:a-foc} \end{align} where we plugged in the quadratic form of $C_a$ from \eqref{production and exploration cost functions}. Similarly, maximizing the last term in \eqref{eq:direct-hjb} to solve for the optimal production rate $q^\eta$ leads to the first order condition (accounting for the constraint $q \geq 0$) \begin{align} 0 & = \frac{\partial}{\partial q} \left[ p(t,\eta(t,\cdot)) q^\eta(t,x) - C_q(q^\eta(t,x)) - q^\eta(t,x) \frac{\partial}{\partial x} v^\eta(t,x) \right] \notag \\ \Leftrightarrow \quad \beta_1 q^\eta(t,x) & = \left( p(t,\eta(t,\cdot)) - \kappa_1 - \frac{\partial}{\partial x} v^\eta(t,x) \right)^+. \label{eq:q-foc} \end{align} Using $p(t,\eta(t,\cdot)) = L - Q^\eta(t)$ yields \eqref{eq:q-star}. Integrating the right-hand side of \eqref{eq:q-star} with respect to $\eta(t, \cdot)$, \begin{align} \int_0^\infty \frac{1}{\beta_1}\left( L - \kappa_1 - \frac{\partial}{\partial x} v^\eta(t,x) - Q^\eta(t) \right)^+ \eta(t, dx) = \int_0^\infty q^\eta(t,x) \eta(t, dx) = -Q^\eta(t). \label{eq:integral q eta} \end{align} Thus, $Q^\eta(t)$ satisfies $G(Q^\eta(t)) = 0$ as in \eqref{eq:Q-star} where \begin{align*} G( Q ) = Q+ \int_0^\infty \frac{1}{\beta_1}\left( L - \kappa_1 - \frac{\partial}{\partial x} v^\eta(t,x) - Q \right)^+ \eta(t, dx) . \end{align*} Assuming $L > \kappa_1$ (otherwise production is never profitable and $Q(t) = 0$), and recalling that the Stieltjes measure $\eta(t, dx)$ is non-positive while the marginal value of reserves satisfies $\frac{\partial}{\partial x} v^\eta \geq 0$, we have $G(0) \leq 0$ and $G( L-\kappa_1 ) > 0$. Moreover, $Q \mapsto G(Q)$ is continuous because the integrand is uniformly bounded, $\left| \left( L - \kappa_1 - \frac{\partial}{\partial x} v^\eta(t,x) - Q \right)^+ \right| \leq (L - \kappa_1)$. Since $Q \mapsto G(Q)$ is strictly increasing, it follows that a unique root $Q(t)$ exists in $[0, L-\kappa_1]$. 
Finally, \eqref{mfg HJB} follows by using \eqref{eq:a-foc} and \eqref{eq:q-foc} in \eqref{eq:direct-hjb}. \end{proof} We observe two non-standard features of the HJB equation \eqref{mfg HJB}. First, the optimal production control \eqref{eq:q-star} depends not only on the individual producer's marginal value $\frac{\partial}{\partial x} v^\eta(t,x)$, but also on the reserves distribution of all the players through the mean field term $\int_0^\infty \frac{\partial}{\partial z} v^\eta(t,z) \eta(t, dz)$. Second, \eqref{mfg HJB} contains two \emph{non-local} terms: the forward difference $v^\eta(t, x+\delta) - v^\eta(t,x)$ and the integral $\int_0^\infty \frac{\partial}{\partial z}v^\eta(t, z) \eta(t, dz)$. The HJB equation is complemented by a terminal condition and a spatial boundary condition. At $t=T$ we take $v(T, x) = 0$, as no more production is assumed possible beyond the prescribed horizon. Furthermore, the exhaustibility constraint $x \ge 0$ imposes a boundary condition at $x=0$, similar to the model in \cite{LS-Cournot} for a single exhaustible producer. Since production is zero on the boundary $x=0$, $q(t,0)=0$, we have \begin{align} 0&= \frac{\partial}{\partial t} v^\eta(t,0) - r v^\eta(t,0) + \sup_{a\geq 0} \left[ - C_a( a ) + a \lambda(t) \Delta_x v^\eta(t,0) \right] \notag \\ &= \frac{\partial}{\partial t} v^\eta(t,0) - rv^\eta(t,0) + \frac{1}{2\beta_2} \left[ ( \lambda(t) \Delta_x v^\eta(t,0) - \kappa_2 )^+ \right]^2 , \qquad 0\leq t < T. \label{HJB equation on boundary} \end{align} We will use the boundary condition \eqref{HJB equation on boundary} in the numerical schemes. At the other extreme, as $x \to \infty$, $\Delta_x v^\eta(t,x) \to 0$ and hence for $\kappa_2 > 0$ we have $a^\eta(t,x) = 0$ from \eqref{eq:a-foc}. 
Thus, there is a saturation reserves level $x_{sat}(t)$ \cite{LS-Cournot,LudkovskiYang14} such that $a^\eta(t,x) = 0$ for all $x \ge x_{sat}(t)$: with abundant reserves and a strictly positive marginal cost, exploration becomes unprofitable (furthermore, since $v^\eta(t,x)$ is expected to be concave in $x$, $x \mapsto a^\eta(t,x)$ is monotone decreasing). \subsection{Transport equation of reserves distribution} \label{sec: Transport equation of reserves distribution} In this section~we study the evolution of the reserves distribution through the transport equation for the upper-cumulative distribution function $\eta(t, \cdot)$ of the reserves process $X_t$ from \eqref{individual reserves process}, where $N_t$ is a point process with controlled rate $\lambda(t) a_t$, and the production rate $q_t = q(t, X_t)$ and exploration rate $a_t = a(t, X_t)$ are given, i.e.~treated as exogenous inputs. When reserves reach zero, $X_t=0$, production shuts down, $q_t = 0$. As long as exploration effort is made, the reserves level can jump back up to $\delta$ at the next discovery; however, the waiting time until that discovery is strictly positive. As a result, $\PP(X_t = 0) >0$, i.e.~the distribution of $X_t$ has a point mass at $x=0$. Thus, to study the evolution of the distribution of $X_t$, we consider two parts: the upper-cumulative distribution function $\eta(t, x) = \PP \left( X_t \geq x \right)$ in the interior $x>0$; and the boundary probability $\pi(t):= \PP(X_t = 0)=1-\eta(t,0+)$. The upper-CDF $\eta(t, x)$ is regarded as the proportion of players with reserves level greater than or equal to $x$, and $\pi(t)$ is interpreted as the proportion of producers with no reserves. The following proposition gives the piecewise PDE that $\eta$ satisfies. See the proof in Appendix \ref{Proof of proposition transport equation}. 
Observe that because production slows down as reserves are exhausted, $\lim_{x \downarrow 0} q(t,x) = 0$, there is no boundary condition for $\eta$ at $x=0$; instead $\pi(t)$ shows up in the PDE for $\eta$. \begin{prop}[Transport equation] \label{prop: transport equation} The distribution of the reserves process $X_t$ is characterized by the pair $\left( \pi(t), \eta(t, x) \right)$, where $\eta(t,x)= \PP(X_t \geq x)$, $0<x<\infty$, and $\pi(t) := 1-\eta(t,0+)$: \begin{subequations} \begin{align} \label{eq:transport-2} \frac{\partial}{\partial t} \eta(t,x) &= \lambda(t) a(t,0) \pi(t) - \int_{0+}^x \lambda(t) a(t,z)\eta(t, dz) + q(t,x)\frac{\partial}{\partial x} \eta(t,x) , \quad 0<x \leq \delta ; \\ \frac{\partial}{\partial t} \eta(t,x) &= - \int_{x -\delta}^x \lambda(t) a(t,z) \eta(t, dz) + q(t,x) \frac{\partial}{\partial x} \eta(t,x), \quad x> \delta , \label{eq:transport-3} \end{align}\label{transport equation} \end{subequations} with given initial condition $\eta(0, x) = \eta_0(x)$ and $\pi(0)= 1-\eta_0(0+)$. \end{prop} The discontinuity of $\eta(t,\cdot)$ at $x=0$ generates higher order discontinuities at $x=\delta, 2\delta, 3\delta, \ldots$ Indeed, at $x = k\delta$ only the first $(k-1)$ derivatives of $\eta(t,x)$ exist. In other words, the distribution of $X_t$ has a point mass at $x=0$, a first-order discontinuity (non-continuous density) at $x=\delta$, and a smooth density for all other $x > 0$. This non-smoothness is the reason why we do not work with the ill-defined density ``$m(t,x) = -\frac{\partial}{\partial x} \eta(t,x)$''. \begin{remark} \label{remark: random delta} The size $\delta$ of new discoveries can be random in general. We may model discovery amounts via a stochastic sequence $\delta_n, n=1, 2, \ldots,$ where the $\delta_n$ are identically distributed with common distribution $F_\delta(\cdot)$ and independent of everything else in the model. 
Introducing $F_\delta$ entails replacing the integral $\int_{x-\delta}^x \lambda(t) a(t,z) \eta(t, dz)$ in \eqref{eq:transport-3} with $\int_0^x F_\delta(du) \int_{x-u}^x \lambda(t) a(t, z) \eta(t, dz)$. Similarly, in the HJB equation we would replace $v(t, x+\delta)$ with $\int_0^\infty v(t, x+u) F_\delta(du)$. For simplicity we stick to fixed discovery sizes for the rest of the article. \end{remark} \subsection{System of HJB-transport equations} \label{sec: System of doubly coupled HJB-transport equations} The consistency condition of Definition \ref{Mean field game Markov Nash equilibrium definition} implies that a MFG MNE is characterized by the HJB equation \eqref{mfg HJB}, in which we plug in the equilibrium upper-CDF $\eta^\ast$, and the transport equation \eqref{transport equation}, in which we plug in the equilibrium controls $q^\ast$ and $a^\ast$. The equilibrium price process is $p^\ast(t) = L + \int_0^\infty q^\ast(t, z) \eta^\ast(t, dz)$. The resulting system is summarized in the following proposition. \begin{prop}[MFG PDE's] \label{prop: Mean field game partial differential equations} The mean field game Nash equilibrium $(q^\ast, a^\ast, \eta^\ast)$ is determined by the HJB equation: \begin{align} \label{mfg HJB equation} 0&= \frac{\partial}{\partial t} v(t,x) - r v(t,x) + \left[ - C_a( a^\ast(t,x) ) + a^\ast(t,x)\lambda(t) \Delta_x v(t,x) \right] \notag \\ & \quad + \left[ p^\ast(t) q^\ast(t,x) - C_q(q^\ast(t,x)) - q^\ast(t,x) \frac{\partial}{\partial x} v(t,x) \right], \quad 0< x , \ 0\leq t < T , \end{align} where $q^\ast(t, x)$ and $a^\ast(t, x)$ are given by \begin{align}\label{eq:q-star-2} & q^\ast(t,x) = \frac{1}{\beta_1} \left( L - Q(t) - \kappa_1 - \frac{\partial}{\partial x} v(t, x) \right)^+ , \\ \label{eq:a-star-2} & a^\ast(t, x) = \frac{1}{\beta_2}\left( \lambda(t) \Delta_x v(t,x) - \kappa_2 \right)^+ , \end{align} with $Q(t)$ uniquely determined by the equation \begin{align} Q(t) & = - \int_0^\infty \frac{1}{\beta_1}\left( L - \kappa_1 - \frac{\partial}{\partial 
x} v(t,x) - Q(t) \right)^+ \eta^\ast(t, dx) = - \int_0^\infty q^\ast(t, x) \eta^\ast(t,d x), \label{total production} \end{align} and the transport equation: \begin{subequations} \begin{align} \frac{\partial}{\partial t} \eta^\ast(t,x) &= \lambda(t) a^\ast(t,0) (1-\eta^\ast(t,0+)) - \int_{0+}^x \lambda(t) a^\ast(t,z)\eta^\ast(t, dz) + q^\ast(t,x)\frac{\partial}{\partial x} \eta^\ast(t,x) , \quad 0<x \leq \delta ; \label{eq:eta-1} \\ \frac{\partial}{\partial t} \eta^\ast(t,x) &= - \int_{x -\delta}^x \lambda(t) a^\ast(t,z) \eta^\ast(t, dz) + q^\ast(t,x) \frac{\partial}{\partial x} \eta^\ast(t,x), \quad\qquad x > \delta . \label{eq:eta-2} \end{align} \label{mfg transport equation} \end{subequations} \end{prop} The HJB equation and the transport equation are doubly coupled: $\eta^\ast$ enters the HJB equation through the aggregate production, which is an integral of the optimal production rates $q^\ast(t,x)$ with respect to the mean-field reserves distribution $\eta^\ast(t,dx)$; conversely, the optimal production and exploration rates $(q^\ast, a^\ast)$ obtained from the HJB equation of a representative producer drive the reserves distribution $\eta^\ast(\cdot)$. Existence, uniqueness, and regularity of solutions of such systems of MFG PDE's remain an ongoing challenge and an area of active research. For the system \eqref{mfg HJB equation}--\eqref{mfg transport equation} the difficulty in proving existence and uniqueness of solutions lies in the non-local coupling term $\int_0^\infty q(t,x) \eta(t, dx)$ and the forward delay term $\Delta_x v(t, x) = v(t, x+\delta) - v(t,x)$. In the more common \emph{local} coupling situation, the mean-field interaction for a representative producer with state $(t,x)$ is of the form $F(t, x, m(t, x))$, i.e.~the player interacts with the density of her neighbors $m(t,x)$ at the same $(t,x)$. 
In contrast, in the supply-demand context, the interaction includes \emph{all} players, namely their production rates (which can be linked to the marginal values $\frac{\partial}{\partial x} v(t, x) $) across all $x$. Related proofs for second-order Cournot MFG PDEs have been provided in \cite{GraberBensoussan15, GraberMouzouni17}. The respective reserves dynamics involve Brownian noise and no jump terms (no exploration). Specifically, Graber and Bensoussan \cite{GraberBensoussan15} established existence and uniqueness of the MFG MNE in the case where players leave the game after exhaustion (Dirichlet boundary conditions), while \cite{GraberMouzouni17} recently proved existence and uniqueness of solutions in the case where reserves can be exogenously infinitesimally replenished at $x=0$ (corresponding in effect to Neumann boundary conditions). Their model (with zero volatility) can be viewed as the non-exploration $\lambda \equiv 0$ sub-case of our model. However, exogenous discoveries imply that the reserves distribution is a probability density on $(0,X_{max})$, obviating the need to track $\pi(t)$, which significantly simplifies the respective proofs. In a related vein, Cardaliaguet and Graber~\cite{CardaliaguetGraber15} gave a detailed proof of existence and uniqueness of equilibrium solutions for first order MFG's with local coupling. However, first order MFG PDEs with non-local terms $v(t, x+\delta) - v(t, x)$ have, to the best of our knowledge, not been discussed in the existing literature (except in passing in \cite[Sec 5]{ChanSircar16}), and the respective existence, uniqueness, and regularity of solutions remain an open problem. The MFG framework links the individual strategic behavior of each producer with the macro-scale organization of the market. Therefore the main economic insights concern the resulting \emph{aggregate} quantities that describe the overall evolution of the market. 
For this purpose, we recall the total production $Q(t)$ defined in \eqref{total production}, and introduce the total reserves $R(t)$ and the total discovery rate $A(t)$, defined respectively as \begin{align} R(t) & = \int_0^\infty \eta^\ast(t, x) dx, \label{total reserves} \\ A(t) & = -\delta \int_0^\infty \lambda(t) a^\ast(t,x) \eta^\ast(t, d x). \label{total discovery} \end{align} Note that $R(t) = \int_0^\infty \PP( X_t \ge x) dx = \E[ X_t]$, justifying its interpretation as total reserves. The following Lemma \ref{lemma: relation of Q A R}, proven in Appendix \ref{app: relation of Q A R}, shows the relation between these quantities of interest. It can be interpreted as conservation of mass for the reserves: at the macro-scale, the change in total reserves is simply the net difference between reserves additions (via new discoveries $A(\cdot)$) and reserves consumption (via production $Q(\cdot)$). \begin{lemma} \label{lemma: relation of Q A R} We have the relation \begin{align} \frac{d}{dt}R(t) & = -Q(t) + A(t) , \label{relation of Q A R differential form} \quad\text{i.e.~}\quad R(t) = R(0) - \int_0^t Q(s) \, ds + \int_0^t A(s) \, ds . \end{align} \end{lemma} \section{Numerical methods and examples} \label{sec: Numerical methods and examples} We use an iterative scheme to numerically solve the system of the HJB equation \eqref{mfg HJB equation} and the transport equation \eqref{mfg transport equation}, similar to the approach in \cite{GLL10,ChanSircar14}. The Picard-like iterations start with an initial price process $p^{(0)}(\cdot)$ as an input into the MFG value function \eqref{mfg HJB}, which reduces the problem to a standard optimization for the production and exploration rates $(q^{(0)}, a^{(0)})$. Then we input $(q^{(0)}, a^{(0)})$ into the equation \eqref{transport equation} of reserves evolution to solve for $\eta^{(0)}(\cdot, \cdot)$. 
The $q^{(0)}$ and $\eta^{(0)}$ obtained are used to update the price \eqref{mfg price function}, via $p^{(1)}(t) = \frac{1}{2}\left[ D^{-1}\left( -\int_0^\infty q^{(0)}(t, x) \eta^{(0)}(t, dx)\right) + p^{(0)}(t) \right] $. The updated price $p^{(1)}(\cdot)$ is then used for a new iteration. As $k \to \infty$, the iterations are expected to converge to a fixed point, i.e.~a triple $(q^{(\infty)}, a^{(\infty)}, \eta^{(\infty)})$ that simultaneously satisfies the HJB equation \eqref{mfg HJB equation} and transport equation \eqref{mfg transport equation} and hence yields a MFG MNE. For numerical purposes we restrict to a bounded space domain $[0,X_{max}]$ which is further partitioned using a mesh $0=x_0<x_1< ... < x_M= X_{max}$, with equal mesh size $\Delta x = x_m - x_{m-1}, m=1,\ldots,M$. Below we fix $\Delta x = 0.1$ in all the computational examples. Other numerical parameters of our examples are summarized in Table \ref{Parameters values for numerical analysis}. \begin{table}[htb] \centering $$\begin{array}{l|l} \text{ Cost Functions} & \kappa_1 = \kappa_2 = 0.1, \quad \beta_1 = \beta_2 = 1 \\ \text{ Max Price/Int Rate} & L = 5, \quad r= 0.1 \\ \text{ Reserves dynamics } & \delta = 1, \quad \lambda = 1 \\ \text{ Numerical Scheme} & T=50, X_{max} = 120, \Delta x = 0.1 \\ \hline \end{array} $$ \caption{Parameter values used for all numerical illustrations in Section \ref{sec: Numerical methods and examples}.} \label{Parameters values for numerical analysis} \end{table} In Section~\ref{sec: Numerical method for HJB equation}, we introduce the numerical method to solve the HJB equation of a representative producer's game value function with price $p(t)$ exogenously given. In Section~\ref{sec: Numerical method for transport equation}, we introduce the numerical method to solve the equation \eqref{transport equation} of reserves distribution controlled by the optimal $(q, a)$ obtained in the previous step. 
In Section~\ref{sec: Numerical method for the system of HJB and transport equations}, we show the iterative scheme to solve the coupled HJB and transport equations. \subsection{Numerical scheme for the HJB equation} \label{sec: Numerical method for HJB equation} In this section~we solve for the mean field game value function $v(t,x)$ characterized by the HJB equation \eqref{mfg HJB} with an exogenously given price $p(t)$. Treating $p(t)$ as exogenous allows us to avoid the production control formula in \eqref{eq:q-star}, which has a mean-field dependence via $\int_0^\infty \frac{\partial}{\partial z} v(t, z)\eta(t, dz)$. Instead we use \eqref{eq:q-foc}, which only depends on the player's own reserves state $x$, so that the problem reduces to a standard stochastic optimal control problem. For the exploration control we work with the first order condition as in \eqref{eq:a-foc}. The HJB equation \eqref{mfg HJB} with boundary condition \eqref{HJB equation on boundary} is similar to the single-agent problem in \cite{LS-Cournot}. The latter paper considered a time-stationary model which reduced the HJB equation to a first order nonlinear ordinary differential equation in $x$. In contrast, \eqref{mfg HJB} has time-dependence and hence is a genuine PDE. We employ a method of lines to discretize the $x$ variable and treat the HJB PDE as a system of ordinary differential equations in the time variable $t$ with the terminal condition $v(T,x) = 0$. The space derivative of $v(t, x)$ at each space grid point $x_m$ is approximated by a backward difference quotient $\frac{\partial}{\partial x} v(t, x_m) \approx \frac{v(t, x_m) - v(t, x_{m-1})}{\Delta x}$. The non-local term $\Delta_x v(t, x_m)$ is approximated by $\Delta_x v(t, x_m) \approx v(t, x_{m+d}) - v(t, x_m)$ with $d = \lfloor \frac{\delta}{\Delta x} \rfloor$ so that $x_m + \delta \simeq x_{m + d}$. 
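In code, this spatial discretization can be sketched as follows (an illustrative Python sketch on our part; the paper's implementation uses Matlab's \texttt{ode45}; parameter defaults follow Table \ref{Parameters values for numerical analysis}, and all function and variable names are ours). The right-hand side is assembled from the HJB equation \eqref{mfg HJB} with the backward difference quotient for $v_x$ and the index shift $m \mapsto m+d$ for the non-local term:

```python
import numpy as np

# Sketch of the method-of-lines right-hand side dv/dt at grid nodes
# x_0, ..., x_M, using a backward quotient for v_x and the shift m -> m+d
# for the non-local term v(t, x+delta) - v(t, x).

def hjb_rhs(v, p, lam, dx, d, r=0.1, kappa1=0.1, kappa2=0.1,
            beta1=1.0, beta2=1.0):
    M = len(v) - 1
    dvdt = r * v                                   # discounting term r*v at every node
    # production term: absent at the exhausted boundary m = 0
    vx = (v[1:] - v[:-1]) / dx                     # backward quotients, m >= 1
    dvdt[1:] -= np.clip(p - kappa1 - vx, 0.0, None) ** 2 / (2.0 * beta1)
    # exploration term: for m > M - d the shift leaves the grid, where the
    # saturation assumption a = 0 drops the term anyway
    fwd = v[d:] - v[:M - d + 1]                    # v_{m+d} - v_m for m = 0..M-d
    dvdt[:M - d + 1] -= np.clip(lam * fwd - kappa2, 0.0, None) ** 2 / (2.0 * beta2)
    return dvdt
```

Feeding this right-hand side to any standard ODE integrator and integrating backward in time from the terminal condition $v(T,\cdot)=0$ recovers the discretized value function.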
We solve for $v(\cdot, x_m)$ as an ordinary differential equation in the variable $t$, viewing $v(t, x_{m-1})$ and $v(t, x_{m+d})$ as source terms, \begin{multline} \frac{\partial}{\partial t} v(t, x_m) \approx r v(t, x_m) - \frac{1}{2\beta_1} \left[ \left( p(t) - \kappa_1 - \frac{v(t, x_m) - v(t, x_{m-1})}{\Delta x} \right)^+ \right]^2 \\ - \frac{1}{2\beta_2} \left[ ( \lambda(t) [v(t, x_{m+d}) - v(t, x_m)] - \kappa_2 )^+ \right]^2 , \quad m = 1, \ldots, M-d. \label{eqn: ode vt interior} \end{multline} For the boundary case $m=0$, production stops and the equation becomes \begin{align} \frac{\partial}{\partial t} v(t, x_0) = r v(t, x_0) - \frac{1}{2\beta_2} \left[ ( \lambda(t) [v(t, x_{d}) - v(t, x_0)] - \kappa_2 )^+ \right]^2 . \label{eqn: ode vt left boundary} \end{align} Recall that for $x$ large enough, the saturation level of reserves is reached and no exploration effort is made. We take $X_{max}$ large enough that this holds for $x_{M-d+1}, \ldots, x_M=X_{max}$, whereby the term $( \lambda(t) \Delta_x v(t, x) - \kappa_2 )^+$ vanishes and \eqref{eqn: ode vt interior} simplifies to \begin{align} \frac{\partial}{\partial t} v(t, x_m) = r v(t, x_m) - \frac{1}{2\beta_1} \left[ \left( p(t) - \kappa_1 - \frac{v(t, x_m) - v(t, x_{m-1})}{\Delta x} \right)^+ \right]^2 ,\quad m = M-d+1, \ldots, M. \label{eqn: ode vt right boundary} \end{align} We use Matlab's Runge-Kutta solver \texttt{ode45} to solve (backward in time) the system \eqref{eqn: ode vt interior}--\eqref{eqn: ode vt right boundary} of ordinary differential equations for $\{v(t, x_m): m = 0, 1, \ldots, M\}$. \subsubsection{A numerical example of the HJB equation }\label{sec:hjb-ex} To illustrate the above approach to solving the HJB equation \eqref{mfg HJB}, we consider an example with a constant exogenous price $p(t)=3$ for all $t \le T$. To prescribe $\lambda(t)$, observe that intuitively the chances of a new discovery should be proportional to the remaining reserves underground. 
Assuming the global exploitable reserves decrease (linearly) in time due to ongoing exploration and production, we are led to consider a linear link between $t$ and the discovery rate $\lambda(t)$: \begin{align} \lambda(t) = \left( 1 - t/\bar{T}\right)^+. \notag \end{align} The time $\bar{T}$ can be viewed as the date of global exhaustion of the commodity; in the examples below we take $\bar{T}=40$. Figure \ref{fig: q_single_agent and a_single_agent} shows the resulting optimal production rate $q(t,x)$ and exploration effort $a(t,x)$ for several intermediate $t$'s. At each $t$, the production rate $q(t,x)$ is increasing in the reserves level $x$ (asymptotically reaching $p(t) - \kappa_1$ as $ x\to\infty$), while the exploration effort $a(t,x)$ is decreasing in $x$, vanishing, $a(t,x) = 0$, for $x \ge 80$. The monotonicity of $q(t,\cdot)$ and $a(t,\cdot)$ is due to the decreasing marginal value of reserves, which is consistent with the results in \cite{LS-Cournot,LudkovskiYang14}. Both production and exploration rates decrease in $t$ because the discovery rate $\lambda(t)$ is decreasing, which weakens the motivation for exploration and in turn lowers production as the marginal value of reserves rises. The above $q(t,x)$ and $a(t,x)$ for $0\leq t \leq T$ and $0 \leq x \leq X_{max}$ will be used in the next Section~\ref{sec: Numerical method for transport equation} as input to compute the evolution of the reserves distribution. \begin{figure}[htb] \begin{center} \begin{tabular}{cc} \begin{minipage}{0.47\textwidth} \includegraphics[width=0.98\textwidth, height=2.4in,trim=1in 2.65in 1in 2.2in]{q_single_agent} \end{minipage} & \begin{minipage}{0.47\textwidth} \includegraphics[width=0.98\textwidth, height=2.4in,trim=1in 2.65in 1in 2.2in]{a_single_agent} \end{minipage} \end{tabular} \begin{minipage}{0.97\textwidth} \caption{ Production and exploration controls $(q, a)$ associated with the HJB equation \eqref{mfg HJB} under constant price $p(t)=3$ and $\lambda(t) = (1-0.025t)^+, 0\le t \le T$. Left panel: production rate $q(t,x)$. 
Right panel: optimized exploration rate $a(t,x)$. } \label{fig: q_single_agent and a_single_agent} \end{minipage} \end{center} \end{figure} \subsection{Numerical scheme for transport equation} \label{sec: Numerical method for transport equation} We now assume given controls $q(t,x), a(t,x)$ and take up the evolution of the reserves distribution. To numerically solve the transport equations for $\eta(t, x)$ we use a fully explicit finite difference scheme which replaces derivatives with discretized increments of the respective functions over a grid. We use the same partition of the space domain $[0, X_{max}]$ with mesh size $\Delta x$ as in the previous section. To justify this bounded domain for the $x$-variable, recall the discussion about the saturation level $x_{sat}$ at the end of Section \ref{sec: Game value function of a representative player}, which motivates us to assume that $a(t,x) = 0$ for $x$ large enough, and in turn implies that $\eta(t,x) =0$ for $x$ large enough (e.g.~$x \ge \sup_t x_{sat}(t) + \delta$, with the additional assumption that the support of the initial distribution $\eta_0$ is also bounded). Thus, we apply the numerical boundary condition $\eta(t, X_{max}) = 0$ for all $t$. (Even if $a(t,x) > 0$ for all $x$, we still expect the right tail of $\eta$ to become negligible for $x$ large, so that it can be numerically truncated at $X_{max}$.) Furthermore, we partition the time domain $[0, T]$ using a mesh $0 = t_0 < t_1 < ... < t_N = T$ with $t_n = n \Delta t$. To handle the boundary at $x=0$, the values of $\eta(t, \cdot)$ and $q(t, \cdot)$ at $x=0+$ are numerically approximated by $\eta(t, x_1)$ and $q(t, x_1)$, respectively. 
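As a preview of the update formulas derived next, a single explicit step of this scheme might be coded as follows (an illustrative Python sketch, not the authors' code; the names are ours, with `eta`, `q`, `a` holding the grid values of $\eta(t_n,\cdot)$, $q(t_n,\cdot)$, $a(t_n,\cdot)$ and $d = \lfloor \delta/\Delta x\rfloor$):

```python
import numpy as np

# One explicit time step for the upper-CDF eta on the grid x_m = m*dx.
# eta[0] = 1 and eta[M] = 0 encode the boundary conventions above; the
# discovery integral is a Riemann sum over the window (x_m - delta, x_m].

def transport_step(eta, q, a, lam, dt, dx, d):
    M = len(eta) - 1
    new = eta.copy()
    for m in range(1, M):                         # interior nodes
        j_lo = max(m - d + 1, 1)                  # window (x_m - delta, x_m]
        jump = sum(lam * a[j] * (eta[j] - eta[j - 1])
                   for j in range(j_lo, m + 1))   # j = 1 carries the pi(t) term
        new[m] = (eta[m]
                  + dt * q[m] * (eta[m + 1] - eta[m]) / dx   # transport term
                  - dt * jump)                               # discovery term
    new[0], new[M] = 1.0, 0.0                     # boundary values
    return new
```

With zero controls the step leaves $\eta$ unchanged, while positive exploration shifts mass to higher reserves levels (the upper-CDF increases pointwise), matching the interpretation of discoveries.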
With the above setup, we approximate both derivatives in time and in space by forward difference quotients: \begin{equation} \frac{\partial}{\partial t} \eta(t_n, x_m) \approx \frac{\eta(t_{n+1}, x_m) - \eta(t_n, x_m)}{\Delta t} , \quad \frac{\partial}{\partial x} \eta(t_n, x_m) \approx \frac{\eta(t_{n}, x_{m+1}) - \eta(t_n, x_{m})}{\Delta x} . \label{approximation to space derivative} \nonumber \end{equation} By choosing $d = \lfloor \frac{\delta}{\Delta x} \rfloor$, so that $x_m -\delta \simeq x_{m-d}$, we approximate the integral term in \eqref{eq:transport-2}--\eqref{eq:transport-3} with the Riemann sum \begin{equation} -\int^{x_m}_{(x_{m}-\delta)_+} \lambda(t) a(t,x) \eta(t, dx) \approx \sum_{j=(m-d + 1) \vee 1}^{m} \lambda(t_n) a(t_n, x_j) \left( \eta(t_n, x_{j-1}) - \eta(t_n, x_{j}) \right), \label{approximation to integral term} \end{equation} where $\eta(t_n, x_{j-1}) - \eta(t_n, x_{j})$ is the proportion of producers with reserves in the interval $[x_{j-1}, x_j]$. We start with the given initial condition $\eta(t_0, x_m) = \eta_0(x_m)$, $m=0, \ldots, M$, and solve forward in time using the right-edge boundary condition $\eta(t_n, x_M) = 0$, $n=0, ..., N$. We take $\eta(t_n,x_0) = 1$ and interpret $\eta(t_n,x_1) \approx \eta(t_n, 0+)$ so that $\pi(t_n) = \eta(t_n, x_0) - \eta(t_n, x_1)$. We then solve for $\eta(t_{n+1}, \cdot)$ forward in space, splitting into cases according to $x_m \lessgtr \delta$. For $0<x_m \leq \delta$ (i.e.~$m=1,\ldots,d$), which corresponds to \eqref{eq:transport-2}, we obtain the numerical value of $\eta(t_{n+1}, x_m)$ as \begin{align} \eta(t_{n+1}, x_m) & = \eta(t_{n}, x_m) + \Delta t q(t_n, x_m) \frac{\eta(t_n, x_{m+1}) - \eta(t_n, x_{m})}{\Delta x} \notag\\ & \quad - \Delta t \sum_{j=1}^m \lambda(t_n) a(t_n, x_j) \left( \eta(t_n, x_j) - \eta(t_n, x_{j-1}) \right) , \label{numerical eta 1} \end{align} where the term for $j=1$ corresponds to $\lambda(t_n) a(t_n,0) \pi(t_n)$ in \eqref{eq:transport-2}. 
For $x_M > x_m > \delta$, cf.~\eqref{eq:transport-3}, we obtain the numerical value of $\eta(t_{n+1}, x_m)$ by \begin{align} \notag \eta(t_{n+1}, x_m) & = \eta(t_n, x_m) - \Delta t \sum_{j=m-d+1}^{m} \lambda(t_n) a(t_n, x_j) \left( \eta(t_n, x_{j} ) - \eta(t_n, x_{j - 1}) \right) \notag \\ & \qquad + q(t_n, x_m) \left( \eta(t_n, x_{m+1}) - \eta(t_n, x_{m}) \right) \frac{ \Delta t }{\Delta x} . \label{numerical eta 2} \end{align} Note that the above equations require only the values $a(t_n, x_m), q(t_n, x_m)$, so there is no difficulty in combining a method-of-lines approach for the HJB portion of the MFG equations with the above fully discretized finite-difference scheme for the transport equation. \subsubsection{Illustrating the Evolution of Reserves Distribution}\label{sec:transport-ex} As an example, suppose that the initial reserves distribution has the parabolic density $m_0(x)$, $$ m_0(x)=\frac{6x(u-x)}{u^3} \quad \Leftrightarrow \quad \eta_0(x) = 1- 3 (x/u)^2 + 2 (x/u)^3 \quad\text{for}\quad 0 \le x \le u, $$ and $m_0(x) = 0$ otherwise. In the example shown in Figure \ref{fig: P_t and m_t example}, we take $u=10$, cf.~$m(0,x)$ on the left panel of the figure. The evolution of the boundary probability $\pi(t) = \PP(X_t = 0)$ and of the density of the reserves distribution $m(t,x) = - \frac{\partial}{\partial x}\eta(t, x)$ are shown in Figure~\ref{fig: P_t and m_t example}. Numerically the density function is approximated by the difference quotient $m(t_n, x_m) \approx \frac{\eta(t_n, x_{m}) - \eta(t_n, x_{m+1})}{\Delta x}$. Since the discovery rate $\lambda(t)$ decreases in time, the reserves density $m(t,x)$ shifts towards zero as time evolves, as shown on the left panel of Figure~\ref{fig: P_t and m_t example}. 
Similarly, the proportion $\pi(t)$ of producers with no remaining reserves increases in $t$, and global reserves are fully exhausted shortly after discovery becomes impossible: $\inf\{t : \pi(t) = 1 \} \simeq 41$, cf.~the right panel of Figure \ref{fig: P_t and m_t example}. We also note the discontinuity of $m(t,\cdot)$ at $x=\delta$ due to the discrete reserve jumps from $X_t = 0$. \begin{figure}[htb] \begin{center} \begin{tabular}{rl} \begin{minipage}{0.47\textwidth} \includegraphics[width=0.98\textwidth,height=2.1in,trim=0.95in 3in 1.25in 2.95in]{m_real} \end{minipage} & \begin{minipage}{0.47\textwidth} \includegraphics[width=0.98\textwidth, height=2.1in,trim=.95in 3in 1.25in 2.95in]{boundary_probability_single_agent} \end{minipage} \end{tabular} \begin{minipage}{0.97\textwidth} \caption{ Evolution of reserves distribution under the production and exploration controls $(q, a)$ obtained in Section \ref{sec: Numerical method for HJB equation}. The discovery rate is $\lambda(t) = (1-0.025t)^+$ and unit amount of a discovery is $\delta = 1$. Left panel: Density of reserves distribution $m(t,x)=-\frac{\partial}{\partial x} \eta(t, x)$ for several $t$'s. Right: Proportion of producers with no reserves $\pi(t) = \PP(X_t = 0)$. After $t=41$ all reserves are exhausted. } \label{fig: P_t and m_t example} \end{minipage} \end{center} \end{figure} \subsection{Numerical scheme for the MFG system} \label{sec: Numerical method for the system of HJB and transport equations} We introduce an iterative scheme to solve the system of coupled HJB and transport equations. Our solution strategy consists of a loop over the following three steps, repeated over the iterations $k=0,1,\ldots$ until numerical convergence. To initialize, we start with an initial price process $p^{(0)}(t)$ (greater than $\kappa_1$, to ensure a strictly positive production rate).
In Step 1, given the current $p^{(k)}(\cdot)$, the numerical scheme in Section~\ref{sec: Numerical method for HJB equation} is implemented for the HJB equation, outputting the optimal production $q^{(k)}$ and exploration $a^{(k)}$ rates. Next in Step 2, these $q^{(k)}$ and $a^{(k)}$ are substituted into the transport equation to solve for $\eta^{(k)}$, following the scheme in Section~\ref{sec: Numerical method for transport equation}. We then compute the total production $Q^{(k)}$ by using a Riemann sum to approximate the integral of $q^{(k)}(t,x)$ with respect to $\eta^{(k)}(t,\cdot)$. Finally, in Step 3 we update the price to $p^{(k+1)}$. Observe that if $p^{(k)}(t)$ is lower than the equilibrium price $p^\ast(t)$ for all $t\in [0, T]$, the resulting $Q^{(k)}(t)$ will be lower than the equilibrium $Q(t)$. As a result, $D^{-1}(Q^{(k)}(t))$ will be higher than $p^\ast(t)$, and vice versa. Thus, to speed up convergence, we take $p^{(k+1)}(t)$ in the next iteration to be the average of $p^{(k)}(t)$ and $D^{-1}(Q^{(k)}(t))$. Numerically we observe that this yields a monotone sequence of $p^{(k)}(t)$'s, improving convergence to the equilibrium $p^\ast(t)$. {\bf Step 0}. Start with an initial guess $p^{(0)}(t)$, $t \in [0,T]$, of the market price. {\bf Step 1}. For iteration $k=0,1,2,...$, and given $p^{(k)}(\cdot)$, solve the HJB equation \eqref{mfg HJB equation} to obtain $v^{(k)}(t,x)$ and the corresponding $q^{(k)}(t,x)$ and $a^{(k)}(t,x)$ as in \eqref{eq:q-star-2}-\eqref{eq:a-star-2}. {\bf Step 2}. With the above $q^{(k)}$ and $a^{(k)}$, solve the transport equation to obtain $\eta^{(k)}(t,x)$ satisfying \eqref{transport equation}. {\bf Step 3}. Update the market price via the new total quantity of production \begin{equation*} p^{(k+1)}(t):= \frac{ D^{-1}\left( Q^{(k)}(t)\right) + p^{(k)}(t) }{2} \ \quad\text{with} \quad Q^{(k)}(t) = \sum_{m=1}^{M-1} q^{(k)}(t,x_m) [ \eta^{(k)}(t,x_m) - \eta^{(k)}(t,x_{m+1})].
\end{equation*} {\bf Repeat} Steps 1--3 until convergence in the sup-norm defined as $\left\| \cdot \right\|_\infty :=\sup_{[0, T]\times[0, X_{max}]}|\cdot|$. The iteration stops once the error tolerance $TolError$ is met: \begin{align} \left\| v^{(k+1)} - v^{(k)} \right\|_\infty < TolError, \quad\text{and}\quad \left\| \eta^{(k+1)} - \eta^{(k)} \right\|_\infty < TolError . \label{eq:tol-error} \end{align} We continue with the running example where the discovery rate is $ \lambda(t) = \left(1- 0.025t\right)^+$, $\delta = 1$ and the initial price process is $p^{(0)}(t) = 3$ for all $t$. Recall that the solutions obtained in Sections~\ref{sec:hjb-ex} and \ref{sec:transport-ex} can be viewed as the first iteration $k=0$ of the above scheme. Figure~\ref{fig: Convergence of iterations} illustrates the iterations over $k=0,1,\ldots$ with the resulting HJB value functions $v^{(k)}(t,x)$ at fixed time $t=10$. In each iteration $k$, if $v^{(k)}(t, x)$ is lower than the equilibrium value $v^\ast(t, x)$ for all $x \in [0, X_{max}]$ with some $t$ fixed, then in the next iteration $v^{(k+1)}(t, x)$ will move up towards the equilibrium level $v^\ast(t, x)$. This pointwise monotone convergence in $x$ is observed in Figure~\ref{fig: Convergence of iterations}. Numerical convergence with a tolerance of $TolError=10^{-6}$ in \eqref{eq:tol-error} is achieved after $k=4$ iterations. \begin{figure}[htb] \begin{center} \includegraphics[width=0.5\textwidth,height=2.2in,trim=0.85in 3in 1.1in 2.65in]{MFG_Iterations_v} \begin{minipage}{0.97\textwidth} \caption{Convergence of the numerical scheme in Section \ref{sec: Numerical method for the system of HJB and transport equations}. We start with initial guess $p^{(0)}(t) = 3$ for all $t \in [0, T]$, and discovery rate $\lambda(t) = (1-0.025t)^+$. The game value function at $t=10$, $v^{(k)}(t, x)$, converges after $k\geq 4$ iterations.
} \label{fig: Convergence of iterations} \end{minipage} \end{center} \end{figure} Figure~\ref{fig: Q A R exhaustible} shows the resulting evolution of total production $Q(t)$, total discovery rate $A(t)$, and total reserves level $R(t)$. Total reserves $R(t)$ decrease as production proceeds; in turn decreasing $R(t)$ lowers the total production rate $Q(t)$ and raises market price $p(t)$. Interestingly, we observe a hump shape in $t \mapsto A(t)$: initially exploration efforts rise, then peak and gradually decline. This complex relationship is driven by the changing exploration success parameter $\lambda(t)$ (which discourages exploration as time progresses) and the reserves distribution $\eta(t,x)$ (which encourages exploration as reserves tend to get depleted on average). To get some further insights, we compare these results with the non-exploration (NE) case that has zero discovery rate $\lambda(t)=0$. When $\lambda(t)=0$, no exploration effort will be made, $a^\ast(t,x) \equiv 0$, as there is no hope of any discovery. Consequently, producers simply gradually extract their initial reserves, eventually leading to total depletion, $R^{NE}(t) = 0$ for $t > 10.5$ in the Figure. In comparison, in the model with exploration, ultimate depletion only happens around $t=41$ (recall that $\lambda(t) = 0$ after $t=40$). This postponement of the reserves ``Doomsday'' is illustrated in the right-most panel of Figure~\ref{fig: Q A R exhaustible}, which plots the evolution of the proportion of exhausted producers $\pi(t)$. In fact, at $t=10$ less than 10\% of producers have no reserves. As expected, because exploration increases global reserves, $R^E(t) \ge R^{NE}(t)$, the respective marginal value of reserves is lower and hence production is boosted, $Q^E(t) \ge Q^{NE}(t)$ for all $t$. Thus, exploration not only delays exhaustion but also unambiguously raises revenues.
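The damped price update of Step 3 is what drives the convergence of the scheme. The toy sketch below isolates that mechanism, replacing the full HJB/transport solves by a stylized scalar best response: linear inverse demand $D^{-1}(Q) = L - Q$ and a hypothetical aggregate response $Q(p) = (p - \kappa_1)^+/\beta_1$; all parameter values are illustrative, not those of the paper's experiments.

```python
# Toy version of the Step 0-3 loop: a stylized scalar best response Q(p)
# stands in for Steps 1-2, while Step 3's damped price update is kept.
L, kappa1, beta1 = 5.0, 1.0, 2.0

def Q_of_p(p):
    # stand-in for Steps 1-2: aggregate production given price p
    return max(p - kappa1, 0.0) / beta1

def D_inv(Q):
    # linear inverse demand
    return L - Q

p, tol = 3.0, 1e-6                          # Step 0: initial price guess
for k in range(100):
    p_new = 0.5 * (D_inv(Q_of_p(p)) + p)    # Step 3: damped (averaged) update
    if abs(p_new - p) < tol:
        break
    p = p_new

# fixed point solves p = L - (p - kappa1) / beta1
p_star = (beta1 * L + kappa1) / (beta1 + 1.0)
```

With these values the damped map $p \mapsto \tfrac12\big(D^{-1}(Q(p)) + p\big)$ is a contraction with factor $\tfrac12(1 - 1/\beta_1) = \tfrac14$, and starting below $p^\ast = 11/3$ the iterates increase monotonically, mirroring the monotone behavior observed for the full scheme.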
\begin{figure}[htb] \begin{center} \begin{tabular}{rccl} \begin{minipage}{0.23\textwidth} \includegraphics[width=0.97\textwidth,height=2in,trim=1.25in 2.95in 1.35in 2.85in]{Q_exhaustible} \end{minipage} & \begin{minipage}{0.23\textwidth} \includegraphics[width=0.97\textwidth,height=2in,trim=1.35in 2.95in 1.35in 2.85in]{R_exhaustible} \end{minipage} & \begin{minipage}{0.23\textwidth} \includegraphics[width=0.97\textwidth,height=2in,trim=1.25in 2.95in 1.35in 2.85in]{A_exhaustible} \end{minipage} & \begin{minipage}{0.23\textwidth} \includegraphics[width=0.97\textwidth,height=2in,trim=1.25in 2.95in 1in 2.85in]{bdryP} \end{minipage} \end{tabular} \end{center} \begin{minipage}{0.97\textwidth} \caption{From left to right: evolution of total production $Q(t)$, total reserves $R(t)$, total discovery rate $A(t)$ and proportion $\pi(t)$ of producers with no reserves as a function of $t$. We show discovery rate $\lambda(t) = (1-0.025t)^+$, in comparison to no-exploration $\lambda(t)=0$. \label{fig: Q A R exhaustible}} \end{minipage} \end{figure} \section{Stationary mean field game Nash equilibrium} \label{sec: Stationary mean field game Nash equilibrium} In Section~\ref{sec: Mean field game Nash equilibrium} we studied a generic model with time-inhomogeneous discovery rate $\lambda(t)$, which would typically be taken to be decreasing in time. When there are still abundant resources underground, it is reasonable to assume that the discovery rate is time-homogeneous $\lambda(t) = \lambda$, for some $\lambda > 0$. Thanks to exploration, the commodity used up for production can be compensated by new discoveries, and thus a \emph{stationary} level of production and exploration can be obtained. In this section, we discuss such stationary MFG equilibria denoted by $(\tilde{q}, \tilde{a}, \tilde{\eta})$. 
Specifically, if the reserves process has initial distribution $X_0\sim \tilde{\eta}$, and all the players apply the feedback strategies $ q_t = \tilde{q}(X_t; \tilde{\eta})$ and $a_t = \tilde{a}(X_t; \tilde{\eta})$, then the reserves process \begin{align}\label{eq:stat-X} d X_t = - \tilde{q}(X_t)\mathds{1}_{\{ X_t >0\}} dt + \delta d \tilde{N}_t \end{align} has the distribution $\tilde{\eta}(\cdot)$ for all $t>0$, that is, the reserves distribution is invariant in time. We define the stationary objective functional $\tilde{\mathcal{J}}$ of a player with current reserves level $x$, conditional on a reserves distribution $\tilde{\eta}(\cdot)$, as \begin{equation} \label{stationary mfg objective functional} \tilde{\mathcal{J}}( \tilde{q}, \tilde{a} ;x, \tilde{\eta}) : = \E \left\{ \int_0^{\infty}\left[ D^{-1}\left( \tilde{Q}(\tilde{\eta}) \right) \tilde{q}(X_t) - C_q(\tilde{q}(X_t)) - C_a(\tilde{a}(X_t)) \right] e^{-rt} dt \ \middle| \ X_0 = x\right\} , \end{equation} where $\tilde{Q}(\tilde{\eta}) := - \int_0^\infty \tilde{q}(x) \tilde{\eta}(dx) $ is the stationary aggregate production. \begin{defn}[Stationary MFG MNE] \label{defn: Stationary mean field game Nash equilibrium} A stationary mean field game Nash equilibrium is a triple $\left( \tilde{q}^\ast, \tilde{a}^\ast, \tilde{\eta} \right)$ such that for $(X_t)$ from \eqref{eq:stat-X} the distribution of reserves $\tilde{\eta}(x) = \PP( X_t \ge x)$, for all $t$, is unchanged under the strategies $(\tilde{q}^\ast, \tilde{a}^\ast)$, and \begin{equation} \label{stationary mfg optimality condition} \tilde{v}( x) \equiv \tilde{\mathcal{J}}( \tilde{q}^\ast, \tilde{a}^\ast; x, \tilde{\eta}) \geq \tilde{\mathcal{J}}(q, a; x, \tilde{\eta}), \quad \forall (q, a)\in \mathcal{A} . \end{equation} \end{defn} The following Proposition \ref{prop: Stationary mean field game partial differential equations} gives the system of stationary HJB and transport equations for $\tilde{v}, \tilde{\eta}$ under a constant discovery rate $\lambda > 0$.
Intuitively, it is equivalent to the equations in the previous section after dropping the dependence on $t$. Consequently, we pass from PDE's to ordinary differential equations in $x$. \begin{prop}[Characterizing stationary MFG equilibrium] \label{prop: Stationary mean field game partial differential equations} The stationary value function $\tilde{v}$ and upper-CDF $\tilde{\eta}$ satisfy \begin{align} \label{stationary mfg HJB equation} r \tilde{v}( x) & = \left[ - C_a( \tilde{a}^\ast( x) ) + \tilde{a}^\ast( x)\lambda \Delta_x \tilde{v}( x) \right] + \left[ \tilde{p} \tilde{q}^\ast( x) - C_q(\tilde{q}^\ast( x)) - \tilde{q}^\ast( x) \tilde{v}'( x) \right] , \quad x > 0 ; \\ \label{stationary mfg transport equation} & \begin{cases} 0 = \lambda \tilde{a}^\ast (0) (1-\tilde{\eta}(0+)) - \int_{0+}^x \lambda \tilde{a}^\ast (z) \tilde{\eta}(dz) + \tilde{q}^\ast(x ) \tilde{\eta}'(x) , \qquad 0<x \leq \delta , \\ 0 = - \int_{x -\delta}^x \lambda \tilde{a}^\ast( z ) \tilde{\eta}( dz) + \tilde{q}^\ast(x) \tilde{\eta}'(x), \qquad\qquad\qquad x > \delta , \end{cases} \end{align} where the equilibrium stationary production and exploration rates ($\tilde{q}^\ast$, $\tilde{a}^\ast$) and price $\tilde{p}$ are \begin{align} & \tilde{q}^\ast(x) = \frac{1}{\beta_1} \left( L - \tilde{Q} - \kappa_1 - \tilde{v}'(x) \right)^+ , \notag \\ & \tilde{a}^\ast( x) = \frac{1}{\beta_2}\left( \lambda \Delta_x \tilde{v}( x) - \kappa_2 \right)^+ , \notag \\ & \tilde{p} = D^{-1}\left( \tilde{Q} \right) = L + \int_0^\infty \tilde{q}^\ast(x) \tilde{\eta}(dx) , \end{align} with $\tilde{Q}$ uniquely determined by the equation \begin{align} \tilde{Q} & = - \int_0^\infty \frac{1}{\beta_1}\left( L - \kappa_1 - \tilde{v}'(x) - \tilde{Q} \right)^+ \tilde{\eta}(dx) . 
\label{eq:stat-Q} \end{align} \end{prop} Similar to \cite{LS-Cournot}, the boundary condition for $\tilde{v}(0)$ is determined by \begin{equation} \tilde{v}(0) = \sup_{a \geq 0} \E \left[ e^{-r\tau} \tilde{v}(\delta) - \int_0^\tau e^{-rt} C_a(a) dt \right] = \sup_{a \geq 0} \frac{a\lambda \tilde{v}(\delta) - C_a(a)}{r+a\lambda} . \label{stationary HJB boundary} \end{equation} \begin{remark} If the rate of new discoveries is zero, $\lambda=0$, then from the transport equation \eqref{stationary mfg transport equation} we have $\tilde{\eta}'(x) = 0$ for all $x > 0$, which implies that there is no producer with a positive reserves level in the long run. \end{remark} \subsection{Solving for stationary MFG equilibria} \label{sec: Examples of stationary mean field game} For the stationary MFG developed in \eqref{stationary mfg HJB equation}-\eqref{stationary mfg transport equation}, the iterative scheme introduced in Section~\ref{sec: Numerical method for the system of HJB and transport equations} is not directly applicable. The challenge lies in solving the stationary transport equation \eqref{stationary mfg transport equation}. The singularity at $x=0$ creates in effect a \emph{free boundary} that describes the balance between the density for $x> 0$ and the point mass $\tilde{\pi}$ of exhausted producers. It is not clear how to directly handle this free boundary without ending up with an intractable global system of coupled nonlinear equations. To overcome this issue, we exploit the link between the time-dependent and time-stationary MFG models. Specifically, with a constant discovery rate $\lambda$ and large horizon $T$, the strategies $(q^\ast, a^\ast)$ have only weak dependence on $t$. Thus, we expect a convergence of the reserves process $X_t$ to an invariant distribution, since under a time-independent feedback control it forms a recurrent Markov process on $\mathbb{R}_+$.
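Before turning to that link, note that the boundary relation \eqref{stationary HJB boundary} is itself a simple one-dimensional optimization over the constant exploration rate $a$. The sketch below evaluates it numerically, assuming the quadratic cost $C_a(a) = \kappa_2 a + \tfrac{\beta_2}{2} a^2$ (consistent with the first-order condition for $\tilde a^\ast$ above) and a hypothetical value $\tilde v(\delta)$; all numbers are illustrative.

```python
from scipy.optimize import minimize_scalar

r, lam = 0.1, 1.0
kappa2, beta2 = 0.2, 1.0
v_delta = 5.0                          # hypothetical value of v at x = delta

def C_a(a):
    # quadratic exploration cost, consistent with a* = (lam*Dv - kappa2)+/beta2
    return kappa2 * a + 0.5 * beta2 * a**2

def objective(a):
    # expected discounted value of waiting at x = 0 with constant effort a
    return (a * lam * v_delta - C_a(a)) / (r + a * lam)

res = minimize_scalar(lambda a: -objective(a), bounds=(0.0, 100.0),
                      method="bounded")
v0 = max(objective(res.x), objective(0.0))   # allow the corner a = 0
```

Since waiting at $x=0$ is costly and discovery takes an exponential time, one always has $0 \le \tilde v(0) < \tilde v(\delta)$, which the optimizer reproduces.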
This suggests taking $T$ large, solving the MFG on $[0,T]$, and then ``extracting'' a triple $(\tilde{v}, \tilde{a}, \tilde{\eta})$ to approximate the true stationary solution. A numerical illustration of a non-stationary MFG with constant discovery rate $\lambda(t) = \lambda \equiv 1$ is shown in Figure \ref{fig: stationary mfg}. The lower panels of Figure \ref{fig: stationary mfg} show the evolution of total production $Q(t)$, total discovery $A(t)$, and total reserves level $R(t)$, which are defined by \eqref{total production}-\eqref{total discovery}. We observe a boundary layer for small $t$ (roughly $t \in [0, 12]$) arising from the non-equilibrium initial distribution $\eta_0(dx)$, and another boundary layer (roughly for $t \in [45,50]$) arising from the terminal condition $v(T,x) = 0$. The latter causes $\lim_{t \to T} R(t) = 0$ and $\lim_{t \to T} A(t) = 0$, as observed in the plots. (Note that as the horizon is approached, total production rises in order to spend down all reserves and reach $R(T) = 0$.) At the same time, for the intermediate $t$'s all the quantities are effectively time-independent and hence should be close to the stationary MFG equilibrium solution. In particular, due to the conservation of reserves ($R(t) \simeq 1.9$ for $t\in [15,45]$), we observe that $Q(t) \simeq A(t)$ on that time interval. Similarly, the respective reserves distribution $\eta(t,x)$ is almost independent of $t$, cf.~the plot of $\pi(t)$ in Figure~\ref{fig: stationary mfg}. Put another way, the actual value of the horizon $T$ is essentially irrelevant as it only determines where the end-of-the-world boundary layer appears (around $t=T-5$ in the plot) and has negligible effect on the solution prior to that. A rigorous treatment of this phenomenon has been given in Cardaliaguet et al.~\cite{Cardaliaguet12} for a locally coupled MFG, and Cardaliaguet et al.~\cite{CardaliaguetPorretta13} for a special case of a non-local coupling.
According to \cite{CardaliaguetPorretta13}, for each $t\in [0, T]$, the solution $(v(t, x), \eta(t, x))$ of a non-stationary MFG model converges in the $L^2$-norm to the solution $(\tilde{v}(x), \tilde{\eta}(x))$ of the stationary MFG model as $T\to \infty$. Furthermore, in their setting the difference between the stationary and non-stationary mean field game equilibrium solutions, measured in the $L^2$-norm, is minimized at $t = T/2$. Extending these proofs to the setting of Cournot MFGs with exploration is left for future research. In light of these results, we can obtain an approximate solution of the stationary MFG MNE by solving the non-stationary equations \eqref{mfg HJB equation} and \eqref{mfg transport equation} with constant discovery rate $\lambda(t) \equiv \lambda$, employing the same iterative scheme as in Section~\ref{sec: Numerical method for the system of HJB and transport equations}. Then the solution $(v(t, x), \eta(t, x))$ at $t = T/2$ is taken as an approximate solution of the stationary mean field game model \eqref{stationary mfg HJB equation}-\eqref{stationary mfg transport equation}, i.e., $\tilde{v}(x) \approx v(T/2, x)$ and $\tilde{\eta}(x) \approx \eta(T/2, x)$ for all $x \in [0, X_{max}]$. A related approach was taken in Chan and Sircar~\cite{ChanSircar16} where the stationary MFG solution was obtained by solving the non-stationary transport equation coupled with the stationary HJB equation and taking the large time limit. In the example shown in Figure \ref{fig: stationary mfg} we had $T=50$, and so we use the intermediate solution $( v(t, x), \eta(t, x) ) \approx (\tilde{v}(x), \tilde{\eta}(x))$ at $t = T/2 = 25$ as an approximation to the corresponding time-stationary MFG. The upper left panel of Figure~\ref{fig: stationary mfg} shows the (approximate) stationary reserve density $\tilde{m}(x) \approx -\frac{\partial}{\partial x}\eta(25, x)$.
We observe that $\tilde{m}(x)$ increases in $x$ for $0<x \leq \delta$, where the rate of discovery is higher than the rate of production, and decreases for $x > \delta$. Similarly, we can extract the stationary total production $\tilde{Q}$, total discovery $\tilde{A}$, and total reserves level $\tilde{R}$ by looking at $Q(t), A(t), R(t)$ at $t=T/2=25$. (Due to conservation of mass, $\tilde{A} = \tilde{Q}$.) \begin{figure}[htb] \begin{center} \begin{tabular}{rl} \begin{minipage}{0.45\textwidth} \includegraphics[width=0.98\textwidth, height=2in,trim=1in 2.85in 1.2in 2.3in]{m_stationary} \end{minipage} & \begin{minipage}{0.45\textwidth} \includegraphics[width=.98\textwidth, height=2in,trim=1.2in 2.85in 1.2in 2.3in]{Prob_Xt_0} \end{minipage} \\ \begin{minipage}{0.45\textwidth} \includegraphics[width=0.98\textwidth, height=2in,trim=1in 2.85in 0.5in 2.3in]{A_and_Q} \end{minipage} & \begin{minipage}{0.45\textwidth} \includegraphics[width=0.98\textwidth, height=2in,trim=1.2in 2.85in 1.2in 2.3in]{Reserves} \end{minipage} \end{tabular} \begin{minipage}{0.97\textwidth} \caption{MFG solution with a constant $\lambda(t) \equiv \lambda = 1$ and $T=50$ to illustrate the relationship between the time-dependent and stationary solutions. Upper left panel: Density $m(t, x)$ of reserves distribution. Upper right: Proportion $\pi(t)$ of producers without reserves. Lower left: Total exploration rate $A(t)$ and total production $Q(t)$. Lower right: Total reserves $R(t)$. } \label{fig: stationary mfg} \end{minipage} \end{center} \end{figure} \subsection{Comparative Statics for the Stationary MFG} It is instructive to study the effect of exploration on the equilibrium of the stationary mean field game. Figure~\ref{fig: stationary_Q_against_lambda stationary_R_against_lambda stationary_bdryP_against_lambda} shows the effect of discovery rate $\lambda$ on the aggregate stationary quantities $\tilde{Q}$, $\tilde{A}$, and $\tilde{R}$, all of which have a positive relation with $\lambda$.
As discoveries take place faster with larger $\lambda$, the marginal value of each discovery decreases, which has an ambiguous effect on exploration effort $\tilde{a}^\ast$. In the top right panel of Figure~\ref{fig: stationary_Q_against_lambda stationary_R_against_lambda stationary_bdryP_against_lambda} we observe that for low values of $\lambda$, $\lambda \mapsto \tilde{a}^\ast(x; \lambda)$ increases, i.e.~exploration is encouraged by a higher likelihood of discovery. However, for high $\lambda$'s, $\lambda \mapsto \tilde{a}^\ast(x; \lambda)$ is decreasing pointwise as the producer becomes ``lazy'' and does not see a need to work as hard, since new reserves are so easy to come by. In aggregate across $x$, we do observe a positive relation between $\lambda$ and total discovery rate $\tilde{A}$ (top left panel). Due to $\tilde{A}=\tilde{Q}$, this translates into higher aggregate production and lower prices. Easier discoveries also raise the stationary level of reserves $\tilde{R}$, although the underlying impact on $\tilde{\eta}$ is non-monotone. This is illustrated in the bottom right panel of Figure~\ref{fig: stationary_Q_against_lambda stationary_R_against_lambda stationary_bdryP_against_lambda} which plots the density $\tilde{m}(x)$ for several different $\lambda$'s and highlights multiple phenomena of interest. On the one hand, we observe that $\lambda \tilde{a}^\ast(0)$ monotonically increases in $\lambda$, which reduces the expected time until the next discovery at $x=0$ and hence lowers the stationary proportion $\tilde{\pi}$. In the same vein, $\tilde{R}$ rises in $\lambda$ and shifts the whole $\tilde{m}$ to the right. On the other hand, the spread, i.e.~the variance, of $\tilde{m}$ starts falling as $\lambda$ keeps rising. Thus, for low $\lambda$, $\tilde{\eta}$ is more spread out and $\tilde{\pi}$ is higher; for high $\lambda$, $\tilde{\eta}$ is concentrated around the average $\tilde{R}$. Moreover, the support of $\tilde{m}$ has a hump shape in $\lambda$.
Recall that due to exploration saturation, $\tilde{m}$ is supported on $[0, \tilde{x}_{sat}+\delta]$ where $\tilde{x}_{sat}$ is the saturation level. We find that $\tilde{x}_{sat}$ first rises and then falls in $\lambda$. For example, when $\lambda=1$, we have $\tilde{x}_{sat} = 64.8$, which can be compared to $\tilde{x}_{sat}(\lambda=0.2) = 60.7$ and $\tilde{x}_{sat}(\lambda = 10) = 23.9$. In the latter situation when $\lambda$ is very large, there is no reason to hold many reserves (instead resources can be replenished almost instantaneously), so $\tilde{v}(x)$ approaches its horizontal asymptote quickly and hence exploration only takes place for small $x$. A further phenomenon is that when $\lambda$ is very small, e.g.~$\lambda < 0.05$ in Figure \ref{fig: stationary_Q_against_lambda stationary_R_against_lambda stationary_bdryP_against_lambda}, exploration stops entirely ($\tilde{A}=0$ leading to $\tilde{R}=0$) in stationary equilibrium. This occurs because when $\kappa_2>0$ and $\lambda$ is small enough, the expected addition of value $\lambda \Delta_x \tilde{v}(x)$ is always smaller than the cost $\kappa_2$ and thus no exploration efforts will be made. Thus, when discoveries are ``too difficult'', exploration will cease even if there are still potential new reserves remaining underground, $\lambda > 0$.
\begin{figure}[htb] \begin{center} \begin{tabular}{cc} \begin{minipage}{0.3\textwidth} \includegraphics[width=0.98\textwidth,height=1.9in,trim=1.2in 3in 1.25in 2.7in]{stationary_q_against_lambda} \end{minipage} & \begin{minipage}{0.5\textwidth} \includegraphics[width=0.98\textwidth,height=1.9in,trim=0.8in 2.95in 1.2in 2.8in]{stationary_a_lambda} \end{minipage} \\ \begin{minipage}{0.3\textwidth} \includegraphics[width=0.98\textwidth,height=1.9in,trim=1.2in 2.95in 1.2in 2.65in]{stationary_R_against_lambda} \end{minipage} & \begin{minipage}{0.5\textwidth} \includegraphics[width=0.98\textwidth,height=1.9in,trim=1.2in 3in 1.42in 2.85in]{stationary_m_lambda} \end{minipage} \end{tabular} \begin{minipage}{0.97\textwidth} \caption{ Stationary MFG solution as a function of discovery rate $\lambda$. \emph{Top Left} panel: Stationary aggregate exploration/production $\tilde{A}=\tilde{Q}$. \emph{Top Right:} Stationary exploration effort $\tilde{a}^\ast(x)$. \emph{Bottom Left:} Stationary aggregate reserves $\tilde{R} = \int_0^\infty x \tilde{m}(dx) $; \emph{Bottom Right:} Stationary distribution $\tilde{m}(x)$. Note that the total mass on $(0,\infty)$ is $1-\tilde{\pi}$ which depends on $\lambda$. As before, there is a discontinuity at $x=\delta =1$. \label{fig: stationary_Q_against_lambda stationary_R_against_lambda stationary_bdryP_against_lambda}} \end{minipage} \end{center} \end{figure} \section{Fluid limit of exploration process} \label{sec: Fluid limit of exploration process} The stochasticity of the exploration process depends on two factors: the discovery rate $\lambda$ per unit exploration effort, and the size $\delta$ of each discovery. To study the effect of randomness of the exploration process on equilibrium production and reserves distribution we introduce an asymptotic parameter $\epsilon>0$ (cf.~\cite{HaganCaflisch94}), rescaling $\lambda_\epsilon := \lambda / \epsilon$ and $\delta_\epsilon := \delta \epsilon$. 
As $\epsilon \downarrow 0$, we have the discovery rate $\lambda_\epsilon \uparrow \infty$ and unit discovery amount $\delta_\epsilon \downarrow 0$, which means that the exploration process becomes more deterministic. In the sequel we use $\epsilon$ to index the respective MFG equilibria. For the limiting case $\epsilon = 0$ the exploration process is fully deterministic. This is known as the fluid limit since we fully average out the stochasticity in $(X_t)$ without modifying its average (in the sense of expected value) behavior. Intuitively in the fluid limit, the difference term $\Delta_x v(t,x) = v(t,x+\delta)-v(t,x)$ becomes $\delta \frac{\partial}{\partial x} v_0(t,x)$ and the integral becomes $ -\lambda \delta a_0^*(t,x) \frac{\partial}{\partial x} \eta_0(t,x)$, removing the non-local term. The resulting MFG equations are given by \eqref{fluid limit HJB non stationary}--\eqref{mfg transport equation fluid limit non stationary} below. \begin{align} \label{fluid limit HJB non stationary} & 0 = \frac{\partial}{\partial t} v_0 (t , x) - r v_0(t, x) + \frac{1}{2\beta_1} \left[ \left(p_0(t) - \kappa_1 - \frac{\partial}{\partial x} v_0(t , x) \right)^+ \right]^2 \notag \\ & \quad + \frac{1}{2\beta_2} \left[ \left( \lambda \delta \frac{\partial}{\partial x}v_0(t,x) - \kappa_2 \right)^+ \right]^2; \\ & \frac{\partial}{\partial t} \eta_0(t,x) = \left( - \lambda \delta a^\ast_0(t,x) + q^\ast_0(t,x) \right) \frac{\partial}{\partial x} \eta_0 (t,x), \quad x > 0 , \label{mfg transport equation fluid limit non stationary} \end{align} where the optimal production rate $q^\ast_0$ and exploration rate $a^\ast_0$ are \begin{align} q^\ast_0(t, x) &= \arg \max_{q \geq 0} \left[ p_0(t)q - C_q(q) - q \frac{\partial}{\partial x} v_0(t,x) \right] \notag \\ & = \frac{1}{\beta_1} \left( p_0(t) - \kappa_1 - \frac{\partial}{\partial x} v_0(t,x) \right)^+ , \label{optimal production fluid limit non stationary} \\ a^\ast_0(t, x) & =\arg \max_{ a \geq 0} \left[ - C_a(a) +a \lambda \delta
\frac{\partial}{\partial x} v_0(t,x) \right] = \frac{1}{\beta_2}\left( \lambda \delta \frac{\partial}{\partial x} v_0(t,x) - \kappa_2 \right)^+, \label{optimal exploration fluid limit non stationary} \end{align} and $p_0(t) = L + \int_0^\infty q^\ast_0(t,x) \eta_0(t,dx)$. Note that there is no ``boundary'' at $x=0$ for $\eta_0$ because depletion is never explicitly encountered; it only imposes the constraint $\lambda \delta a^\ast_0(t,0) \ge q^\ast_0(t,0)$. The boundary conditions $v_0(t, 0)$ and $\frac{\partial}{\partial x} v_0(t, 0)$ are given explicitly by the following Lemma proven in Appendix~\ref{proof of lemma boundary conditions fluid limit non stationary HJB}. \begin{lemma} \label{lemma: boundary conditions fluid limit non stationary HJB} The boundary conditions $v_0(t, 0)$ and $\frac{\partial}{\partial x} v_0(t, 0)$ satisfy \begin{align} & \frac{\partial}{\partial x} v_0(t, 0) = \frac{\beta_2 (p_0(t) - \kappa_1 ) + \beta_1 \lambda \delta \kappa_2}{\beta_1 \lambda^2 \delta^2 + \beta_2} ; \label{v0 partial x boundary condition} \\ &v_0(t, 0) = \int_t^T \left[ \frac{\lambda \delta (p_0(s) - \kappa_1) - \kappa_2}{\beta_1 \lambda^2 \delta^2 + \beta_2} \right]^2 (1+\lambda^2 \delta^2) \e^{-r(s - t)} ds . \label{v0 boundary condition} \end{align} \end{lemma} By taking $T \to \infty$, we may then consider the stationary fluid limit MFG. Mathematically, this yields the simplest setup as it removes the non-local ``delay'' term associated with discrete exploration, as well as the time-dependence, leaving a coupled system of two ODE's. In fact, the following Proposition~\ref{prop: stationary mean field game equilibrium in fluid limit} implies that economically the stationary MFG in the fluid limit reduces to just a couple of algebraic relations.
\begin{prop}[Stationary mean field game equilibrium in fluid limit] \label{prop: stationary mean field game equilibrium in fluid limit} The stationary MFG MNE in the fluid limit ($\epsilon = 0$) is summarized as follows: \begin{enumerate} \item[(i).] The stationary reserves distribution is degenerate, $\tilde{\pi}_0 =1$, i.e.~all producers hold no reserves and $\tilde{R}_0 = 0$. \item[(ii).] The equilibrium total production $\tilde{Q}_0$ and market price in the fluid limit are given by \begin{align} \tilde{Q}_0 = \tilde{q}^\ast_0(0) = \frac{[ (L-\kappa_1)\lambda \delta - \kappa_2 ]^+ }{\beta_2 + (1+\beta_1)\lambda \delta}, \quad\text{and} \qquad \tilde{p}_0 = L- \tilde{Q}_0; \label{total production in fluid limit} \end{align} \item[(iii).] The equilibrium exploration control is $\tilde{a}^\ast_0(x) = 0$ $\forall x > 0$ and \begin{equation} \label{equation: fluid limit boundary production and exploration relation} \tilde{a}^\ast_0(0) = \frac{1}{\delta \lambda} \tilde{q}_0^\ast(0). \end{equation} \end{enumerate} \end{prop} The proof of Proposition~\ref{prop: stationary mean field game equilibrium in fluid limit} is given in Appendix \ref{section: proof of stationary mean field game equilibrium in fluid limit}. In the fluid limit $\epsilon=0$, discovery of new resources happens in a completely deterministic way; thus it is not necessary to hold reserves for production. Producers starting with positive reserves will not explore until reserves run out. Once the reserves level reaches zero, equation \eqref{equation: fluid limit boundary production and exploration relation} implies that a player without reserves will choose production and exploration strategies such that the production rate exactly equals the rate of reserves increment due to his exploration effort. This explains how zero reserves can be sustained in equilibrium.
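Since the fluid-limit equilibrium is given in closed form, it is a useful sanity check for the numerics. Evaluating (ii)--(iii) at parameter values read off the caption of Figure~\ref{fig: fluid limit Q and R} ($\lambda = \delta = 1$, $L - \kappa_1 = 5$, $\kappa_2 = 0.2$, $\beta_1 = \beta_2 = 1$; these are illustrative, inferred from the caption rather than stated elsewhere) reproduces $\tilde Q_0 = 1.6$:

```python
# Closed-form stationary fluid-limit equilibrium, Proposition (ii)-(iii).
# Parameter values inferred from the figure caption; illustrative only.
lam, delta = 1.0, 1.0
L_minus_kappa1, kappa2 = 5.0, 0.2
beta1, beta2 = 1.0, 1.0

ld = lam * delta
Q0 = max(L_minus_kappa1 * ld - kappa2, 0.0) / (beta2 + (1.0 + beta1) * ld)
a0_at_zero = Q0 / ld   # exploration at x = 0 exactly offsets production
```

Here $\tilde Q_0 = (5 - 0.2)/(1 + 2) = 1.6$ and, by \eqref{equation: fluid limit boundary production and exploration relation}, the boundary exploration rate equals $\tilde Q_0/(\lambda\delta)$.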
Overall, the above Proposition shows that the stationary equilibrium with deterministic exploration is trivial, i.e.~only $x=0$ matters and the system of ODE's effectively collapses to algebraic equations linking $\tilde{Q}_0$ and $\tilde{A}_0$ to model parameters. This shows that the stochastic model is strictly more complex than the deterministic one. \subsection{Numerical scheme and illustration} \label{sec: Numerical example of fluid limit model} The iterative scheme in Section~\ref{sec: Numerical method for the system of HJB and transport equations} is easily adapted to solve the fluid limit system \eqref{fluid limit HJB non stationary}--\eqref{mfg transport equation fluid limit non stationary}. As in Section~\ref{sec: Numerical method for HJB equation}, we employ the method of lines to numerically solve the HJB equation. The space derivative of $v_0(t, x)$ at each spatial grid point $x_m$ is approximated by a backward difference quotient $\frac{\partial}{\partial x} v_0(t, x_m) \simeq \frac{v_0(t, x_m) - v_0(t, x_{m-1})}{\Delta x}$ so that $\frac{\partial}{\partial t} v_0(t, x_m)$ becomes a function of $v_0(t, x_m)$ and $v_0(t, x_{m-1})$: \begin{multline} \frac{\partial}{\partial t} v_0 (t , x_m) = r v_0(t, x_m) - \frac{1}{2\beta_1} \left[ \left(p_0(t) - \kappa_1 - \frac{v_0(t, x_m) - v_0(t, x_{m-1})}{\Delta x} \right)^+ \right]^2 \\ - \frac{1}{2\beta_2} \left[ \left( \lambda \delta \frac{v_0(t, x_m) - v_0(t, x_{m-1})}{\Delta x} - \kappa_2 \right)^+ \right]^2 , \quad m = 1, 2, \ldots, M. \label{eqn: system of numerical equations fluid limit} \end{multline} We use Matlab's Runge-Kutta ODE solver \texttt{ode45} to solve the system \eqref{eqn: system of numerical equations fluid limit} of ordinary differential equations for $\{v_0(t, x_m): m = 0, 1, \ldots, M\}$ backward in time with boundary condition $v_0(t, x_0) \equiv v_0(t, 0)$ given by \eqref{v0 boundary condition} and terminal condition $v_0(T, x_m)=0$ for all $m=0, 1, \ldots, M$.
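A sketch of this method-of-lines solve in Python, with SciPy's adaptive Runge--Kutta solver \texttt{solve\_ivp} playing the role of \texttt{ode45}. All parameter values are hypothetical, the price $p_0(t)$ is frozen at a constant (so that the boundary integral \eqref{v0 boundary condition} evaluates in closed form), and the node $x_0$ is handled via that analytic boundary value inside the right-hand side.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters; p0 is frozen at a constant for this sketch.
r, beta1, beta2 = 0.1, 1.0, 1.0
kappa1, kappa2 = 1.0, 0.2
lam, delta, p0 = 1.0, 1.0, 3.0
T, M, Xmax = 10.0, 50, 10.0
dx = Xmax / M

def v_boundary(t):
    # closed form of the boundary-condition integral for constant p0
    c = ((lam * delta * (p0 - kappa1) - kappa2)
         / (beta1 * lam**2 * delta**2 + beta2))**2 * (1 + lam**2 * delta**2)
    return c * (1.0 - np.exp(-r * (T - t))) / r

def rhs(t, v):
    v = v.copy()
    v[0] = v_boundary(t)                    # x = 0 injected analytically
    Dv = (v[1:] - v[:-1]) / dx              # backward difference at x_m
    dv = np.zeros_like(v)
    dv[1:] = (r * v[1:]
              - np.maximum(p0 - kappa1 - Dv, 0.0)**2 / (2 * beta1)
              - np.maximum(lam * delta * Dv - kappa2, 0.0)**2 / (2 * beta2))
    return dv

# integrate backward in time from the terminal condition v0(T, .) = 0
sol = solve_ivp(rhs, [T, 0.0], np.zeros(M + 1), t_eval=[0.0], rtol=1e-8)
v_at_0 = sol.y[:, 0]                        # v0(0, x_m), m = 1, ..., M
```

Note that the stored node $m=0$ simply stays at zero; the analytic boundary value enters only through the difference quotient inside \texttt{rhs}.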
We use a forward-in-time, forward-in-space scheme to solve the transport equation \eqref{mfg transport equation fluid limit non stationary}. As in Section~\ref{sec: Numerical method for transport equation}, we also prescribe the boundary condition $\eta_0(t_n, x_M) = 0$, $n=0, \ldots, N$ at $x_M \equiv X_{max}$, which assumes that $X_{max}$ is larger than the saturation level. We directly set $\eta_0(t_n, x_0) = 1$ and obtain the numerical values of $\eta_0(t_{n+1}, x_m)$ for $m=1,\ldots, M$ via \begin{align} \eta_0(t_{n+1}, x_m) &= \eta_0(t_{n}, x_m) +\Delta t \left[ - \lambda \delta a_0(t_n, x_m) + q_0(t_n, x_m) \right] \frac{\eta_0(t_n, x_{m+1}) - \eta_0(t_n, x_{m})}{\Delta x}. \notag \end{align} Figure~\ref{fig: fluid limit Q and R} illustrates the resulting solution both in the time-dependent model described above (left panel) and its stationary version (middle and right panels), similar to Section \ref{sec: Stationary mean field game Nash equilibrium}. We observe two distinct features of interest. First, we find that uncertainty discourages exploration, as the discounting effect lowers the NPV of putting in effort today for a delayed reward at discovery date $\tau$. As a result, more uncertainty decreases aggregate production $\tilde{Q}$ and raises prices. Second, uncertainty encourages ``hoarding'', i.e.~holding additional reserves as a buffer against running out due to depletion. Consequently, $\tilde{R}_\epsilon$ increases in $\epsilon$ (right panel of Figure~\ref{fig: fluid limit Q and R}). At the same time, as $\epsilon \downarrow 0$ the stationary reserves level $\tilde{R}_\epsilon \downarrow 0$. Indeed, in the limit $\epsilon=0$, production can be viewed as a perfect just-in-time supply chain: effort is expended to find an infinitesimal amount of new underground resources which are immediately extracted and sold for profit. 
Thus, exploration effort becomes equivalent to a secondary production cost, the cost of securing the commodity supply to exactly match the desired production rate, and the precautionary need for reserves vanishes. We therefore conclude that, economically, uncertainty regarding discoveries carries a \emph{cost}. \begin{figure}[htb] \begin{center} \hspace*{6pt} \begin{tabular}{rcc} \begin{minipage}{0.3\textwidth} \includegraphics[width=0.98\textwidth,height=2in,trim=1.35in 2.95in 1.25in 3in]{Q0_fluid_limit} \end{minipage} & \begin{minipage}{0.3\textwidth} \includegraphics[width=0.98\textwidth,height=2in,trim=1.45in 3.05in 1.45in 3.05in]{Q_FluidLimit_epsilon} \end{minipage} & \begin{minipage}{0.3\textwidth} \includegraphics[width=0.98\textwidth,height=2in,trim=1.35in 2.95in 1.35in 3in]{R_epsilon_fluid_limit} \end{minipage} \end{tabular} \begin{minipage}{0.97\textwidth} \caption{Equilibrium production and reserves level in the regime $\lambda_\epsilon = \lambda/ \epsilon$ and $\delta_\epsilon = \delta \epsilon$ for different values of $\epsilon$. \emph{Left} panel: Evolution of total production $Q_\epsilon(t)$ for several levels of $\epsilon$. \emph{Middle:} Stationary production $\tilde{Q}_\epsilon$ against $\epsilon$. \emph{Right:} Stationary reserves level $\tilde{R}_\epsilon$ against $\epsilon$. For $\epsilon =0$ we have $\tilde{Q}_0 = \frac{5 \cdot 1-0.2}{1 + 2\cdot 1} = 1.6$ from \eqref{total production in fluid limit} and $\tilde{R}_0 = 0$. \label{fig: fluid limit Q and R}} \end{minipage}\end{center} \end{figure} \section{Conclusion}\label{sec:conclusion} We investigate joint production and exploration of exhaustible commodities in a mean-field oligopoly. The ability to expend effort to find new resources creates several new phenomena that modify both the mathematical and economic structure of the market. 
First, exploration weakens the exhaustibility constraint and in particular permits existence of a stationary model where individual producer reserves evolve, but the market price and aggregate quantities are invariant over time. Second, exploration modifies the role of holding reserves---rather than determining future available means of production, reserves are partly used as a buffer to mitigate running out. As a result, if exploration is instantaneous and deterministic, no reserves are needed. This was explored in our analysis of the fluid limit model and connects to the early single-agent works from the 1970s. Third, exploration control brings novel mathematical challenges to Cournot MFGs, in particular due to the non-local term (from discrete reserve events) in the transport equation and the non-smooth reserves distribution that involves a point mass at $x=0$ and a density on $(0,\infty)$. Fourth, the time-stationary Cournot MFG gives rise to a non-standard ``free boundary'' feature which required an approximation with a time-dependent version. Among our insights is an analysis of the ambiguous effects of exploration uncertainty and exploration frequency on the MFG equilibrium. This highlights the intricate interaction between stochasticity, reserves and the two types of controls, in addition to the game effects. A further important contribution is the development of tailored numerical schemes to solve the various versions of the Cournot models, which require different handling of the boundary conditions, of space- and time-dimensions and of the first-order-condition terms that determine the optimal controls. In our illustrations, the role of the time horizon $T$ was mainly secondary and only affected the discovery rate $\lambda(t)$. A more extensive calibration could add further $t$-dependency, which could be used as a means to capture learning-by-doing, or the intuition that discovery sizes might get smaller over time. 
Another variant of the presented MFG approach would be to consider competition between a single major energy producer and a large population of minor energy producers, cf.~\cite{Huang10,CarmonaZhu16}. This would correspond, for example, to the dominant role played by the Organization of Petroleum Exporting Countries (OPEC) in the crude oil market, with OPEC controlling about 40\% of the world's oil production. Due to the resulting market power, the minor producers choose production strategies based on the production strategy of OPEC. The corresponding game model would involve a game value function for the major player, a game value function for a representative minor producer, and the reserves distribution of minor producers. The price is then determined by the aggregate production of the major plus all minor producers. Another open problem is to establish the existence and uniqueness of the MFG MNE with stochastic discoveries, and the regularity of the associated value function. As discussed, the corresponding reserves distribution is non-smooth with a point mass at $x=0$, so only weak regularity is expected. Intuitively, better regularity might be possible if the discovery distribution is continuous (rather than a fixed amount $\delta$), although this could generate further challenges for the HJB equation, introducing a bona fide integral term into \eqref{mfg HJB equation}. Such theoretical analysis could also help to make the convergence of the proposed numerical scheme rigorous.
\section{Introduction} With the explosion of information, massive amounts of news are published on online news platforms such as Microsoft News and Google News \cite{das2007google, lavie2010user}, which can easily overwhelm users trying to find the news they are interested in \cite{okura2017embedding}. To tackle this problem, many news recommendation methods have been proposed to provide personalized news feeds and alleviate information overload for users \cite{wang2018dkn,wu2019npa,zhu2019dan,hu2019graph}. Since news articles usually contain abundant textual content, learning accurate news representations from news texts is a prerequisite for high-quality news recommendation \cite{wu2020mind}. Most existing news recommendation methods use shallow NLP models to learn news representations. For example, \citet{an2019lstur} propose to use a CNN to learn contextual word representations by capturing local context information and use an attention network to select important words in news titles. \citet{wu2019nrms} propose to use a multi-head self-attention layer to capture the global relations between words in news titles, and also use an attention network to compute the news representation. However, it is difficult for these shallow models to accurately capture the deep semantic information in news texts \cite{devlin2019bert}, which limits their performance on news recommendation. Pre-trained language models (PLMs) are powerful in text modeling and have empowered various NLP tasks \cite{devlin2019bert, liu2019roberta}. A few recent works delve into employing PLMs for news understanding in news recommendation \cite{wu2021empower,xiao2021training,zhang2021unbert}. For example, \citet{wu2021empower} propose to replace these shallow models with a PLM to capture the deep contexts in news texts. 
However, these methods simply finetune the PLM with the news recommendation task, the supervision from which may not optimally train the PLM to capture semantic information in news texts and may be insufficient to solve the domain shift problem. Moreover, PLMs usually have a large number of parameters \cite{lan2020albert}. For example, the BERT-base model \cite{devlin2019bert} contains 12 Transformer layers \cite{vaswani2017transformer} and up to 110M parameters. Deploying these PLM-based news recommendation models to provide low-latency online services requires extensive computational resources. In this paper, we propose a Tiny-NewsRec approach to improve both the effectiveness and the efficiency of PLM-based news recommendation\footnote{The source codes of our Tiny-NewsRec method are available at https://github.com/yflyl613/Tiny-NewsRec.}. In our approach, we design a self-supervised domain-specific post-training method to adapt the generally pre-trained language models to the news domain with the task of news title and news body matching. In this way, the domain-specific PLM-based news encoder can better capture the semantic information in news texts and generate more discriminative representations, which are beneficial for news content understanding and user interest matching in the following news recommendation task. In addition, we propose a two-stage knowledge distillation method to compress the large PLM while maintaining its performance\footnote{We focus on task-specific knowledge distillation.}. In the first stage, the student PLM is forced to mimic the domain-specifically post-trained teacher PLM in the matching task between news titles and news bodies to learn news semantic modeling. In the second stage, the domain-specific teacher PLM is first finetuned with different random seeds on the news recommendation task to obtain multiple task-specific teachers. 
Then we propose a multi-teacher knowledge distillation framework to transfer task-specific knowledge from these teacher models to the student model. Since different teachers may have different abilities on different samples, for each training sample, we assign different teachers with different weights based on their performance on this sample, which allows the student model to learn more from the best teacher. Extensive experiment results on two real-world datasets show that our approach can reduce the model size by 50\%-70\% and accelerate the inference speed by 2-8 times while achieving better performance. The main contributions of this paper are as follows: \begin{itemize} \item We propose a Tiny-NewsRec approach to improve both the effectiveness and the efficiency of PLM-based news recommendation. \item We propose to domain-specifically post-train the PLM-based news encoder with a self-supervised matching task between news titles and news bodies before task-specific finetuning to better fill the domain gap. \item We propose a two-stage knowledge distillation method with multiple teacher models to compress the large PLM-based news recommendation model while maintaining its performance. \item Extensive experiments on two real-world datasets validate that our method can effectively improve the performance of PLM-based news recommendation models while reducing the model size by a large margin. \end{itemize} \section{Related Work} \subsection{PLM-based News Recommendation} With the great success of pre-trained language models (PLMs) in multiple NLP tasks, many researchers have proposed to incorporate the PLM in news recommendation and have achieved substantial gain \cite{xiao2021training, zhang2021unbert, wu2021empower}. For example, \citet{zhang2021unbert} proposed a UNBERT approach, which is a BERT-based user-news matching model. 
It takes in the concatenation of the user's historical clicked news and the candidate news, and uses the PLM to capture multi-grained user-news matching signals at both word-level and news-level. \citet{wu2021empower} proposed a state-of-the-art PLM-based news recommendation method named PLM-NR, which instantiates the news encoder with a PLM to capture the deep semantic information in news texts and generate high-quality news representations. However, these methods simply finetune the PLM with the news recommendation task, the supervision from which may be insufficient to fill the domain gap between general corpora and the news domain~\cite{gururangan2020adapt}. Besides, PLMs usually have large parameter sizes and high computational overhead~\cite{lan2020albert}. Different from these methods, our approach can better fill the domain gap with an additional domain-specific post-training task and further reduce the computational cost with the two-stage knowledge distillation method. \subsection{PLM Knowledge Distillation} Knowledge distillation is a technique that aims to compress a heavy teacher model into a lightweight student model while maintaining its performance \cite{hinton2015distill}. In recent years, many works explore compressing large-scale PLMs via knowledge distillation \cite{tang2019distilling,seyed2020assist,wang2020minilm,sun2020mobilebert,xu2020theseus}. For example, \citet{Tang2019bilstm} utilized the output soft label of a BERT-large model to distill it into a single-layer BiLSTM. \citet{sun2019bertpkd} proposed a patient knowledge distillation approach named BERT-PKD, which lets the student model learn from both the output soft labels from the last layer of the teacher model and the hidden states produced by intermediate layers. \citet{Sanh2020distilbert} proposed a DistilBERT approach, which distills the student model at the pre-training stage with a combination of language modeling, distillation, and embedding cosine-distance losses. 
\citet{jiao2020tinybert} proposed a TinyBERT approach, which lets the student model imitate the output probabilities, embeddings, hidden states, and attention score matrices of the teacher model at both the pre-training stage and the finetuning stage. There are also a few works that aim to distill the PLM for specific downstream tasks \cite{lu2021twinbert,wu2021newsbert}. For example, \citet{wu2021newsbert} proposed a NewsBERT approach for intelligent news applications. A teacher-student joint learning and distillation framework is proposed to collaboratively learn both teacher and student models. A momentum distillation method is also designed to incorporate the gradients of the teacher model into the update of the student model which can better transfer the useful knowledge learned by the teacher model. However, these knowledge distillation methods neglect the potential domain gap between the pre-training corpora and the downstream task domain. Besides, they only use one teacher model to guide the training of the student model, which may provide insufficient or even biased supervision \cite{wu2021teacher}. Therefore, we first use a large domain-specifically post-trained teacher model to help the student model better adapt to the news domain. Then we use a multi-teacher knowledge distillation framework to transfer richer knowledge from a set of finetuned teacher models to the student. \section{Methodology} In this section, we introduce the details of our Tiny-NewsRec approach, which can fill the domain gap between general corpora and the news domain, and distill the large PLM for news recommendation applications. We first introduce the structure of the PLM-based news recommendation model. Then we introduce the self-supervised matching task between news titles and news bodies used for domain-specific post-training. Finally, we introduce the workflow of our two-stage knowledge distillation method with a multi-teacher knowledge distillation framework. 
\subsection{News Recommendation Model} \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{fig/framework.pdf} \caption{Structure of our PLM-based news recommendation model.} \label{fig.framework} \end{figure} \begin{figure*}[t] \resizebox{\linewidth}{!}{ \centering \subfigure[First stage knowledge distillation.]{\label{post-train-kd} \begin{minipage}[t]{0.319\textwidth} \centering \includegraphics[width=\textwidth]{fig/post-train-kd.pdf} \end{minipage} } \subfigure[Second stage knowledge distillation.]{\label{fine-tune-kd} \begin{minipage}[t]{0.681\textwidth} \centering \includegraphics[width=\textwidth]{fig/fine-tune-kd.pdf} \end{minipage} } } \caption{Illustration of our two-stage knowledge distillation method.} \label{fig.two_stage} \end{figure*} We first introduce the overall structure of our PLM-based news recommendation model. As shown in Fig.\ref{fig.framework}, it consists of three major components, i.e. a shared PLM-based news encoder, a user encoder, and a click prediction module. The shared news encoder aims to learn news representations from news texts. Following PLM-NR \cite{wu2021empower}, we use a PLM to get the contextual representation of each token in the input news. Then we use an attention network to aggregate these representations and feed its output into a dense layer to get the final news representation. The user encoder aims to learn the user representation $\mathbf{u}$ from the representations of the last $L$ news clicked by the user, i.e. $[\mathbf{h}_1, \mathbf{h}_2, ..., \mathbf{h}_L]$. Following \citet{wu2019naml}, we implement it with an attention network to select important news. In the click prediction module, we use dot product to calculate the relevance score between the candidate news representation $\mathbf{h}_c$ and the target user representation $\mathbf{u}$, and take it as the predicted result, i.e. $\hat{y}_c=\mathbf{h}_c^T\mathbf{u}$. 
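A minimal sketch of this scoring pipeline: attention-pool the clicked-news representations into a user vector $\mathbf{u}$, then score the candidate by a dot product. The simple dot-product attention with a fixed query vector \texttt{q} is a deliberate simplification of the learned attention networks in the model described above, and all names here are illustrative.

```python
import math

def attention_pool(vectors, q):
    """Pool a list of vectors h_i into one vector: alpha = softmax(q . h_i),
    output = sum_i alpha_i * h_i. (Simplified stand-in for a learned
    attention network; q is treated as a fixed query vector here.)"""
    scores = [sum(qk * hk for qk, hk in zip(q, h)) for h in vectors]
    mx = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - mx) for s in scores]
    z = sum(exps)
    alphas = [e / z for e in exps]
    dim = len(vectors[0])
    return [sum(a * h[k] for a, h in zip(alphas, vectors)) for k in range(dim)]

def click_score(clicked_news, candidate, q):
    """Predicted relevance y_hat_c = h_c . u, where u is the attention-pooled
    user representation built from the clicked-news representations."""
    u = attention_pool(clicked_news, q)
    return sum(ck * uk for ck, uk in zip(candidate, u))
```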
\subsection{Domain-specific Post-training} Since the supervision from the news recommendation task may not optimally train the PLM to understand the news content, directly finetuning the PLM on the news recommendation data may be insufficient to fill the domain gap between general corpora and the news corpus \cite{gururangan2020adapt}. In order to better adapt the PLM to the news domain, we propose to conduct domain-specific post-training before finetuning it with the news recommendation task. We design a self-supervised matching task between news titles and news bodies which can make the PLM-based news encoder better at capturing and matching semantic information in news texts. Given a pair of news title and news body, the news encoder is trained to predict whether they come from the same news article. The model structure for this task is shown in the right half of Fig.\ref{post-train-kd}. The architecture of the PLM-based news encoder is the same as that in Fig.\ref{fig.framework}. Following previous works \cite{huang2013web, wu2019naml}, we adopt the negative sampling method to construct the training samples. Given the $i$-th news body, we take its corresponding news title as the positive sample and randomly select $N$ other news titles as negative samples. We use the PLM-based news encoder to get the news body representation $\mathbf{h}_b$ and the news title representations $\mathbf{h}_t=[\mathbf{h}_t^+,\mathbf{h}^-_{t_1},...,\mathbf{h}^-_{t_N}]$. Then we take the dot product of the news body representation and each news title representation as the predicted score, which is denoted as $[\hat{y}^+,\hat{y}^-_1,...\hat{y}^-_N]$. These predicted scores are further normalized with the softmax function and the predicted probability of the positive sample is formulated as follows: \begin{equation*} p_i=\frac{\exp{(\hat{y}^+)}}{\exp{(\hat{y}^+)}+\sum_{j=1}^N{\exp{(\hat{y}^-_j)}}}. 
\end{equation*} To maximize the predicted probability of the positive sample, we use the Cross-Entropy loss as the loss function, which can be formulated as follows: \begin{equation*} \mathcal{L}_{match} = -\sum_{i\in \mathcal{T}}{\operatorname{log}(p_i)}, \end{equation*} where $\mathcal{T}$ is the set of positive training samples. In this way, the domain-specifically post-trained PLM-based news encoder can generate more similar representations for related texts and distinguish them from the others, which can alleviate the anisotropy problem of the sentence embeddings generated by the PLM \cite{jun2019degeneration,ethayarajh2019contextual,li2020sentence}. As a result, the news representations generated by the news encoder can be more discriminative and better at capturing semantic similarity, which is beneficial to the user interest matching in the following news recommendation task. \subsection{Two-stage Knowledge Distillation} Although the PLM-based news recommendation model can achieve superior performance with our proposed domain-specifically post-train then finetune procedure, it still has high computational overhead and can hardly meet the speed requirements of low-latency online services. In order to achieve our goal of efficiency, we further propose a two-stage knowledge distillation method. The overall framework is shown in Fig.\ref{fig.two_stage}. In our method, the student model is first trained to imitate the domain-specifically post-trained teacher model in the matching task between news titles and news bodies. Then we finetune the domain-specifically post-trained teacher model with different random seeds on the news recommendation task and use these finetuned teacher models to guide the finetuning of the student model via a multi-teacher knowledge distillation framework. 
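The title--body matching objective used for post-training (dot-product scores over one positive and $N$ sampled negative titles, a softmax, and a cross-entropy loss on the positive) can be sketched as follows; the function name and plain-list vector representation are illustrative assumptions.

```python
import math

def matching_loss(body, pos_title, neg_titles):
    """-log p, where p is the softmax probability of the positive title
    among the positive and N negatives; scores are dot products with the
    news body representation."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    scores = [dot(body, pos_title)] + [dot(body, t) for t in neg_titles]
    mx = max(scores)  # log-sum-exp with max-subtraction for stability
    log_z = mx + math.log(sum(math.exp(s - mx) for s in scores))
    return log_z - scores[0]  # equals -log p for the positive sample
```

Summing this quantity over all positive training samples recovers $\mathcal{L}_{match}$ as defined above.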
In the first stage, in order to help the student PLM better adapt to the news domain, we use the large domain-specifically post-trained teacher news encoder to guide the student model in the matching task. The model framework is shown in Fig.\ref{post-train-kd}. To encourage the student model to make similar predictions as the teacher model in the matching task, we use a distillation loss to force the student model to imitate the output soft labels of the teacher model. Given a piece of news body and $N+1$ news titles, the soft labels predicted by the teacher model and the student model are denoted as $\hat{\mathbf{y}}^{(t)}=[{{}\hat{y}^+}^{(t)},{{}\hat{y}^-_1}^{(t)},...,{{}\hat{y}^-_N}^{(t)}]$ and $\hat{\mathbf{y}}^{(s)}=[{{}\hat{y}^+}^{(s)},{{}\hat{y}^-_1}^{(s)},...,{{}\hat{y}^-_N}^{(s)}]$ respectively. The distillation loss in the first stage is formulated as follows: \begin{equation*} \mathcal{L}^{(1)}_{distill}=T_1^2\cdot\operatorname{CE}(\hat{\mathbf{y}}^{(t)}/T_1, \hat{\mathbf{y}}^{(s)}/T_1), \end{equation*} where $T_1$ is the temperature hyper-parameter in the first stage that controls the smoothness of the predicted probability distribution of the teacher model, and $\operatorname{CE}$ stands for the Cross-Entropy loss function. Besides, the learned news title representations and news body representations are very important in the matching task, which will directly affect the final predicted score. Therefore we use an embedding loss to align the output representations of the teacher news encoder and the student news encoder. 
Denoting the news title representations and the news body representation learned by the teacher news encoder as $\mathbf{h}_t^{(t)}=[{\mathbf{h}^+_t}^{(t)},{\mathbf{h}^-_{t_1}}^{(t)},...,{\mathbf{h}^-_{t_N}}^{(t)}]$ and $\mathbf{h}_b^{(t)}$, and these representations learned by the student news encoder as $\mathbf{h}_t^{(s)}=[{\mathbf{h}^+_t}^{(s)},{\mathbf{h}^-_{t_1}}^{(s)},...,{\mathbf{h}^-_{t_N}}^{(s)}]$ and $\mathbf{h}_b^{(s)}$ respectively, the embedding loss in the first stage is formulated as follows: \begin{equation*} \mathcal{L}_{emb}^{(1)}=\operatorname{MSE}(\mathbf{h}_t^{(t)},\mathbf{h}_t^{(s)})+\operatorname{MSE}(\mathbf{h}_b^{(t)}, \mathbf{h}_b^{(s)}), \end{equation*} where $\operatorname{MSE}$ stands for the Mean Squared Error loss function. The overall loss function for the student model in the first stage knowledge distillation is the weighted summation of the distillation loss, the embedding loss, and the target loss, which is formulated as follows: \begin{equation*} \mathcal{L}^{(1)}=\mathcal{L}_{distill}^{(1)}+\mathcal{L}_{emb}^{(1)}+\beta_1\cdot\operatorname{CE}(\hat{\mathbf{y}}^{(s)}, \mathbf{y}), \end{equation*} where $\mathbf{y}$ is the one-hot ground-truth label of the matching task and $\beta_1$ is the hyper-parameter that controls the impact of the teacher model in the first stage knowledge distillation. In the second stage, in order to transfer more comprehensive knowledge to the student model during finetuning with the news recommendation task, we propose a multi-teacher knowledge distillation framework which is shown in Fig.\ref{fine-tune-kd}. We first finetune the domain-specifically post-trained teacher model with $M$ different random seeds on the news recommendation task. Then these $M$ teacher models are used to guide the finetuning of the student news encoder obtained from the first stage knowledge distillation. 
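The temperature-smoothed soft-label distillation used in both stages can be sketched as follows, reading $\operatorname{CE}(\hat{\mathbf{y}}^{(t)}/T, \hat{\mathbf{y}}^{(s)}/T)$ as the cross-entropy between the two temperature-scaled softmax distributions (an interpretation on our part); function names are illustrative.

```python
import math

def softmax(logits, T=1.0):
    """Softmax of logits / T, with max-subtraction for numerical stability."""
    mx = max(logits)
    exps = [math.exp((l - mx) / T) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def distill_loss(teacher_logits, student_logits, T=1.0):
    """T^2 * cross-entropy between the temperature-smoothed teacher and
    student distributions, mirroring L_distill^(1); the T^2 factor keeps
    gradient magnitudes comparable across temperatures."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    return -T * T * sum(pt * math.log(ps) for pt, ps in zip(p_t, p_s))
```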
For each training sample, we assign a weight to the $i$-th teacher model according to its performance on this sample, which is measured by the Cross-Entropy loss between its predicted score $\hat{\mathbf{y}}_c^{(t_i)}$ and the ground-truth label of the input training sample $\mathbf{y}_c$. The loss is further multiplied by a positive learnable parameter $\omega$ which is used to enlarge the difference between teacher models. Denoting the weight of the $i$-th teacher model on a training sample as $w_i$, it is formulated as follows: \begin{align*} w_i &= \frac{\operatorname{exp}(-\operatorname{CE}(\hat{\mathbf{y}}_c^{(t_i)}, \mathbf{y}_c)\times \omega)}{\sum_{j=1}^M{\operatorname{exp}(-\operatorname{CE}(\hat{\mathbf{y}}_c^{(t_j)}, \mathbf{y}_c)\times \omega)}}. \end{align*} Similar to the first stage, we use the distillation loss to force the student model to make similar predictions as the best teacher model on a training sample. Since we now have several teacher models with different weights, we use the weighted summation of all the soft labels of teacher models as guidance. Therefore the distillation loss is formulated as follows: \begin{equation*} \mathcal{L}_{distill}^{(2)} = T_2^2\cdot\operatorname{CE}(\sum_{i=1}^M{w_i\cdot\hat{\mathbf{y}}_c^{(t_i)}}/T_2, \hat{\mathbf{y}}_c^{(s)}/T_2), \end{equation*} where $T_2$ is the temperature hyper-parameter in the second stage. In addition, since the news representation and the user representation are key in the news recommendation task, we also let the student model imitate the learned news representations and user representations of teacher models. Considering that the representations learned by each teacher model may lie in different spaces, we use an additional dense layer for each teacher model to project their learned representations into one unified space. 
The embedding losses of news representations and user representations between the $i$-th teacher model and the student model are denoted as $\mathcal{L}^{(2)}_{news_i}$ and $\mathcal{L}^{(2)}_{user_i}$ respectively, which are formulated as follows: \begin{align*} \mathcal{L}_{news_i}^{(2)} &= \operatorname{MSE}(\mathbf{W}_{news}^{(t_i)}\times \mathbf{h}_{news}^{(t_i)}+\mathbf{b}_{news}^{(t_i)}, \mathbf{h}_{news}^{(s)}), \\ \mathcal{L}_{user_i}^{(2)} &= \operatorname{MSE}(\mathbf{W}_{user}^{(t_i)}\times \mathbf{u}^{(t_i)}+\mathbf{b}_{user}^{(t_i)}, \mathbf{u}^{(s)}), \end{align*} where $\mathbf{h}_{news}^{(t_i)}$ and $\mathbf{h}_{news}^{(s)}$ represent the news representations of the input historical clicked news and the candidate news learned by the $i$-th teacher model and the student model respectively. $\mathbf{W}_{news}^{(t_i)}$, $\mathbf{b}_{news}^{(t_i)}$ and $\mathbf{W}_{user}^{(t_i)}$, $\mathbf{b}_{user}^{(t_i)}$ are the learnable parameters used to project the news representations and user representations learned by the $i$-th teacher model. The total embedding loss is the weighted summation of all the embedding losses of news representations and user representations between the student model and each teacher model, i.e. $\mathcal{L}_{emb}^{(2)}=\sum_{i=1}^Mw_i\cdot(\mathcal{L}^{(2)}_{news_i}+\mathcal{L}^{(2)}_{user_i})$. The overall loss function for the student model in the second stage knowledge distillation is also the weighted summation of the distillation loss, the embedding loss, and the target loss, which is formulated as follows: \begin{equation*} \mathcal{L}^{(2)}=\mathcal{L}_{distill}^{(2)}+\mathcal{L}_{emb}^{(2)}+\beta_2\cdot\operatorname{CE}(\hat{\mathbf{y}}_c^{(s)}, \mathbf{y}_c), \end{equation*} where $\beta_2$ controls the impact of the teacher models in the second stage knowledge distillation\footnote{The effectiveness of each part of the loss function is verified in Appendix.}. 
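The per-sample teacher weighting $w_i$ described above amounts to a softmax over negated per-teacher losses. A minimal sketch, with illustrative names and $\omega$ treated here as a fixed scalar rather than a learnable parameter:

```python
import math

def teacher_weights(ce_losses, omega=1.0):
    """w_i = softmax(-CE_i * omega): teachers with a lower Cross-Entropy loss
    on the current sample receive a larger weight; omega sharpens the
    difference between teachers."""
    scaled = [-ce * omega for ce in ce_losses]
    mx = max(scaled)  # max-subtraction for numerical stability
    exps = [math.exp(s - mx) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]
```

For example, a teacher with loss 0.1 on a sample receives a larger weight than one with loss 1.0, so the student learns more from the teacher that performs best on that sample.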
\section{Experiments} \subsection{Datasets and Experiment Settings} We conduct experiments with three real-world datasets, i.e. \textit{MIND}, \textit{Feeds} and \textit{News}. \textit{MIND}\footnote{https://msnews.github.io/} is a public dataset for news recommendation \cite{wu2020mind}, which contains the news click logs of 1,000,000 users on the Microsoft News website in six weeks\footnote{We randomly choose 1/4 samples from the training set as our training data due to the limitation of training speed.}. \textit{Feeds} is also a news recommendation dataset collected on the Microsoft News App from 2020-08-01 to 2020-09-01. We use the impressions in the last week for testing, and randomly sample 20\% impressions from the training set for validation. \textit{News} contains news articles collected on the Microsoft News website from 2020-08-01 to 2020-09-20, which is used for domain-specific post-training. Detailed statistics of these datasets are summarized in Table~\ref{dataset}. \begin{table}[h] \centering \resizebox{\linewidth}{!}{ \begin{tabular}{lrlr} \Xhline{1.5pt} \multicolumn{4}{c}{\textbf{MIND}} \\ \hline \# News & 161,013 & \# Users & 1,000,000 \\ \# Impressions & 15,777,377 & \# Clicks & 24,155,470 \\ Avg. title length & 11.52 & & \\ \hline \multicolumn{4}{c}{\textbf{Feeds}} \\ \hline \# News & 377,296 & \# Users & 10,000 \\ \# Impressions & 320,925 & \# Clicks & 437,072 \\ Avg. title length & 11.93 & & \\ \hline \multicolumn{4}{c}{\textbf{News}} \\ \hline \# News & 1,975,767 & Avg. title length & 11.84 \\ Avg. 
body length & 511.43 & & \\ \Xhline{1.5pt} \end{tabular} } \caption{Detailed statistics of \textit{MIND}, \textit{Feeds} and \textit{News}.}\label{dataset} \end{table} \begin{table*}[t] \resizebox{\linewidth}{!}{ \begin{tabular}{l|llll|llll} \Xhline{1.5pt} \multicolumn{1}{c|}{\multirow{2}{*}{\textbf{Model}}} & \multicolumn{4}{c|}{\textbf{MIND}} & \multicolumn{4}{c}{\textbf{Feeds}} \\ \cline{2-9} \multicolumn{1}{c|}{} & \multicolumn{1}{c}{AUC} & \multicolumn{1}{c}{MRR} & \multicolumn{1}{c}{nDCG@5} & \multicolumn{1}{c|}{nDCG@10} & \multicolumn{1}{c}{AUC} & \multicolumn{1}{c}{MRR} & \multicolumn{1}{c}{nDCG@5} & \multicolumn{1}{c}{nDCG@10} \\ \hline PLM-NR-12 (FT) & 69.72±0.15 & 34.74±0.10 & 37.99±0.11 & 43.71±0.07 & 67.93±0.13 & 34.42±0.07 & 37.46±0.09 & 45.09±0.07 \\ PLM-NR-12 (FP) & 69.82±0.14 & 34.90±0.11 & 38.17±0.09 & 43.83±0.07 & 68.11±0.11 & 34.49±0.12 & 37.58±0.07 & 45.11±0.08 \\ PLM-NR-12 (DP)${}^*$ & \textbf{70.20±0.10} & \textbf{35.27±0.08} & \textbf{38.54±0.07} & \textbf{44.20±0.08} & \textbf{68.71±0.08} & \textbf{35.10±0.09} & \textbf{38.32±0.06} & \textbf{45.83±0.08} \\ \hline PLM-NR-4 (FT) & 69.49±0.14 & 34.40±0.10 & 37.64±0.10 & 43.40±0.09 & 67.46±0.12 & 33.71±0.11 & 36.69±0.08 & 44.36±0.09 \\ PLM-NR-2 (FT) & 68.99±0.08 & 33.59±0.14 & 36.81±0.11 & 42.61±0.11 & 67.05±0.14 & 33.33±0.09 & 36.15±0.10 & 43.90±0.12 \\ PLM-NR-1 (FT) & 68.12±0.12 & 33.20±0.07 & 36.29±0.09 & 42.07±0.10 & 66.26±0.10 & 32.55±0.12 & 35.22±0.07 & 42.99±0.09 \\ \hline TinyBERT-4 & 69.77±0.13 & 34.83±0.09 & 38.02±0.11 & 43.69±0.09 & 67.73±0.11 & 34.00±0.08 & 37.03±0.10 & 44.59±0.12 \\ TinyBERT-2 & 69.44±0.17 & 34.11±0.07 & 37.55±0.08 & 43.14±0.07 & 67.35±0.13 & 33.69±0.05 & 36.59±0.08 & 44.21±0.09 \\ TinyBERT-1 & 68.42±0.12 & 33.55±0.10 & 36.69±0.09 & 42.35±0.08 & 66.53±0.10 & 32.81±0.07 & 35.61±0.11 & 43.29±0.09 \\ \hline NewsBERT-4 & 69.85±0.17 & 34.91±0.09 & 38.19±0.09 & 43.84±0.08 & 68.34±0.13 & 34.58±0.06 & 37.69±0.09 & 45.27±0.08 \\ NewsBERT-2 & 69.62±0.09 & 34.67±0.12 & 
37.86±0.11 & 43.54±0.11 & 67.90±0.07 & 34.26±0.09 & 37.29±0.10 & 44.86±0.11 \\ NewsBERT-1 & 68.67±0.11 & 33.95±0.07 & 37.05±0.14 & 42.74±0.13 & 67.00±0.10 & 33.24±0.11 & 36.09±0.08 & 43.80±0.07 \\ \hline Tiny-NewsRec-4${}^*$ & \textbf{70.40±0.05} & \textbf{35.43±0.08} & \textbf{38.76±0.05} & \textbf{44.43±0.04} & \textbf{68.93±0.06} & \textbf{35.21±0.09} & \textbf{38.43±0.08} & \textbf{45.97±0.10} \\ Tiny-NewsRec-2 & 70.28±0.07 & 35.32±0.07 & 38.65±0.07 & 44.28±0.08 & 68.58±0.03 & 34.82±0.07 & 38.02±0.09 & 45.57±0.07 \\ Tiny-NewsRec-1 & 69.85±0.03 & 34.93±0.08 & 38.21±0.09 & 43.84±0.09 & 68.14±0.05 & 34.53±0.07 & 37.61±0.08 & 45.14±0.08 \\ \Xhline{1.5pt} \end{tabular} } \caption{Performance comparisons of different models. (FT=Finetune, FP=Further Pre-train, DP=Domain-specific Post-train)\\ ${}^*$Improvements over other baselines are significant at $p<0.01$ (by comparing the models with the same number of layers).}\label{result} \end{table*} In our experiments, we apply the pre-trained UniLMv2 \cite{bao2020unilmv2} to initialize the PLM in the news encoder. The dimension of the news representation and the query vector in the attention network is 256 and 200 respectively. The temperature hyper-parameters $T_1$ and $T_2$ are both set to 1. $\beta_1$ and $\beta_2$ in the loss functions of our two-stage knowledge distillation method are set to 1 and 0.1 respectively. The number of teacher models $M$ is set to 4. We use the Adam optimizer \cite{kingma2014adam} for model training. The detailed experiment settings are listed in Appendix. All the hyper-parameters are tuned on the validation set. Following \citet{wu2020mind}, we use AUC, MRR, nDCG@5, and nDCG@10 to measure the performance of news recommendation models. We independently repeat each experiment 5 times and report the average results with standard deviations. 
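The ranking metrics above are computed per impression and then averaged over impressions. Below is a minimal sketch of MRR and nDCG@$k$; the gain ($2^{rel}-1$) and logarithmic discount conventions follow the common MIND evaluation scripts, and the exact tie-breaking behaviour is an assumption of this sketch:

```python
import math

def mrr(labels, scores):
    # Mean reciprocal rank of the clicked items within one impression.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    rr = [labels[i] / (rank + 1.0) for rank, i in enumerate(order)]
    return sum(rr) / sum(labels)

def dcg_at_k(labels, scores, k):
    # Discounted cumulative gain of the top-k ranked candidates.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
    return sum((2 ** labels[i] - 1) / math.log2(rank + 2)
               for rank, i in enumerate(order))

def ndcg_at_k(labels, scores, k):
    # Normalize by the DCG of the ideal (label-sorted) ranking.
    return dcg_at_k(labels, scores, k) / dcg_at_k(labels, labels, k)

labels = [1, 0, 0, 1, 0]            # clicks in one hypothetical impression
scores = [0.9, 0.8, 0.3, 0.7, 0.1]  # hypothetical model click scores
```

For the impression above, the two clicked items land at ranks 1 and 3, giving an MRR of $(1 + 1/3)/2 = 2/3$.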
\subsection{Performance Comparison} In this section, we compare the performance of the teacher model PLM-NR-12 (\textbf{D}omain-specific \textbf{P}ost-train), which is trained with our domain-specific post-train then finetune procedure, and of the student models trained with our Tiny-NewsRec approach, against several baseline methods, including: \begin{itemize} \item \textbf{PLM-NR} (\textbf{F}ine\textbf{t}une) \cite{wu2021empower}, a method which applies the PLM in the news encoder and directly fine-tunes it with the news recommendation task. We compare the performance of its 12-layer UniLMv2 version and its variants using the first 1, 2, or 4 layers. \item \textbf{PLM-NR} (\textbf{F}urther \textbf{P}re-train) \cite{sun2020finetune}, a variant of PLM-NR where we first further pre-train the UniLMv2 model with the MLM task~\cite{devlin2019bert} on the \textit{News} dataset and then finetune it with the news recommendation task. \item \textbf{TinyBERT} \cite{jiao2020tinybert}, a state-of-the-art two-stage knowledge distillation method for compressing the PLM. For a fair comparison, we compare the performance of the 1-layer, 2-layer, and 4-layer student models distilled from PLM-NR-12 (DP). \item \textbf{NewsBERT} \cite{wu2021newsbert}, a PLM knowledge distillation method specialized for intelligent news applications. For a fair comparison, we use the domain-specifically post-trained 12-layer UniLMv2 model to initialize the PLM in the teacher model and jointly train it with the 1-layer, 2-layer, or 4-layer student model. \end{itemize} Table~\ref{result} shows the performance of all the compared methods on the \textit{MIND} and \textit{Feeds} datasets. From the results, we have the following observations. First, compared with state-of-the-art knowledge distillation methods (i.e. NewsBERT and TinyBERT), our Tiny-NewsRec achieves the best performance among all 1-layer, 2-layer, and 4-layer student models.
This is because in the first stage the domain-specifically post-trained teacher model can help the student bridge the domain gap between general corpora and the news domain. Besides, we use a multi-teacher knowledge distillation framework which can transfer richer knowledge to the student model. Second, our Tiny-NewsRec achieves comparable performance with the teacher model PLM-NR-12 (DP). Note that Tiny-NewsRec contains far fewer parameters and has lower computational overhead, which validates the effectiveness of our two-stage knowledge distillation method. Third, with the same parameter size, methods applying knowledge distillation (i.e. TinyBERT, NewsBERT, and Tiny-NewsRec) outperform the traditional pre-train then finetune paradigm (i.e. PLM-NR (FT)). This is because the guidance from the teacher model, such as output soft labels, can provide more useful information than the ground-truth label. Fourth, PLM-NR-12 (FP) outperforms PLM-NR-12 (FT). This is because further pre-training the UniLMv2 model can make it specialized to the news data distribution, thereby boosting its ability to model news. Finally, PLM-NR-12 (DP) outperforms PLM-NR-12 (FP). This is because our proposed matching task can help the PLM-based news encoder better capture the semantic information in news texts and generate more discriminative news representations, which effectively facilitates user interest matching in the downstream news recommendation task. \subsection{Effectiveness of Multiple Teacher Models} \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{fig/num_of_teachers_NAML.pdf} \caption{Impact of the number of teacher models.} \label{fig.num_teacher} \end{figure} In this section, we conduct experiments to validate the effectiveness of using multiple teacher models in our second stage knowledge distillation.
We vary the number of teacher models $M$ from 1 to 4 and compare the performance of the 4-layer student model on the \textit{Feeds} dataset\footnote{We only include results on the \textit{Feeds} dataset due to the space limit. The results on the \textit{MIND} dataset are in Appendix.}. The results are shown in Fig.~\ref{fig.num_teacher}. From the results, we find that the performance of the student model improves with the number of teacher models. This may be because these teacher models usually complement each other. With more teacher models, the student model can receive more comprehensive knowledge and obtain better generalization ability. \subsection{Effectiveness of Two-stage Knowledge Distillation} \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{fig/ablation_NAML.pdf} \caption{Effectiveness of each stage in our framework.} \label{fig.ablation} \end{figure} In this section, we further conduct several experiments to verify the effectiveness of each stage in our two-stage knowledge distillation method. We compare the performance of the 4-layer student model distilled with our Tiny-NewsRec approach and its variants with one stage removed on the \textit{Feeds} dataset. The results are shown in Fig.~\ref{fig.ablation}. From the results, we first find that the second stage knowledge distillation plays a critical role in our approach, as the performance of the student model declines significantly when it is removed. This is because the guidance from multiple teacher models in the second stage, such as learned news and user representations, can provide much more useful information than the ground-truth label, which encourages the student model to behave similarly to the teacher models. The complement between these teacher models also enables the student model to have better generalization ability.
Second, the performance of the student model also declines after we remove the first stage knowledge distillation, while the student model that only learns from the domain-specifically post-trained teacher model in the first stage and finetunes on news recommendation data alone still outperforms PLM-NR (FT). This is because our matching task used for domain-specific post-training can better adapt the PLM to the news domain and enable it to generate more discriminative news representations, which can be transferred to the following news recommendation task and boost the performance of the PLM-based news recommendation model. \subsection{Efficiency Evaluation} \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{fig/performance_NAML.pdf} \caption{Model size and inference speed of the teacher model and student models.} \label{fig.speed} \end{figure} In this section, we conduct experiments to evaluate the efficiency of the student models distilled with our Tiny-NewsRec approach. Since encoding news with the PLM-based news encoder is the main computational overhead in news recommendation, we measure the inference speed of the model in terms of the number of news articles that can be encoded per second with a single GPU. The test results and the number of parameters of the 1-layer, 2-layer, and 4-layer student models and the 12-layer teacher model PLM-NR-12 (DP) are shown in Fig.~\ref{fig.speed}. The results show that our Tiny-NewsRec method can reduce the model size by 50\%-70\% and increase the inference speed by 2-8 times while achieving comparable or even better performance. These results verify that our approach can improve the effectiveness and efficiency of the PLM-based news recommendation model at the same time. \subsection{Hyper-parameter Analysis} \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{fig/beta.pdf} \caption{Influence of the hyper-parameters $\beta_1$ and $\beta_2$.} \label{fig.beta} \end{figure} As shown in Fig.
\ref{fig.beta}, we analyze the influence of two important hyper-parameters, $\beta_1$ and $\beta_2$ in the loss functions of our two-stage knowledge distillation method on the \textit{Feeds} dataset. First, we fix $\beta_1$ to 1.0 and vary the value of $\beta_2$ from 0 to 0.3. We find that the performance is not optimal when $\beta_2$ is close to 0, and a relatively large $\beta_2$ (e.g. $\beta_2>0.15$) also hurts the performance. This may be because in the second stage knowledge distillation, the supervision signals from both finetuned teacher models and ground-truth labels are useful, while those from teacher models are more important. Thus, a moderate selection of $\beta_2$ from 0.05 to 0.15 is recommended. Then we fix $\beta_2$ to 0.1 and vary the value of $\beta_1$ from 0.7 to 1.3. We find that the model achieves optimal performance when $\beta_1$ is set to 1, and the performance declines when we either increase or decrease the value of $\beta_1$. This may be because in the first stage knowledge distillation we only use one domain-specifically post-trained teacher model to guide the student model. The supervision from the teacher model and the ground-truth label may have equal contributions and complement each other. Thus, setting the value of $\beta_1$ around 1 is recommended. \section{Conclusion and Future Work} In this paper, we propose a Tiny-NewsRec approach to improve the effectiveness and the efficiency of PLM-based news recommendation with domain-specific post-training and a two-stage knowledge distillation method. Before finetuning, we conduct domain-specific post-training on the PLM-based news encoder with a self-supervised matching task between news titles and news bodies to make the generally pre-trained PLM better model the semantic information in news texts. In our two-stage knowledge distillation method, the student model can first adapt to the news domain with the guidance from the domain-specifically post-trained teacher model. 
Then a multi-teacher knowledge distillation framework is used to transfer task-specific knowledge from a set of finetuned teacher models to the student during finetuning. We conduct extensive experiments on two real-world datasets, and the results demonstrate that our approach can effectively improve the performance of the PLM-based news recommendation model with considerably smaller models. In the future, we plan to deploy our Tiny-NewsRec in online personalized news recommendation services to verify its online performance. Besides, compared with single-teacher knowledge distillation methods, our approach introduces additional training cost, since multiple teacher models need to be trained. We are also interested in reducing the training cost of our two-stage knowledge distillation method while preserving its performance.
\section{Introduction} \label{sec:introduction} In recent years the gradient flow has attracted much attention for practical and conceptual reasons \cite{Luscher:2009eq,Luscher:2010iy,Luscher:2011bx,Luscher:2013cpa, Suzuki:2013gza,Makino:2014taa,Kikuchi:2014rla}. Practically, as shown by L\"uscher and Weisz \cite{Luscher:2010iy,Luscher:2011bx}, the gradient flow in nonabelian gauge theory does not induce extra UV divergences in the bulk, so that the bulk theory is finite once the boundary theory is properly renormalized. Hence the ultralocal products of bulk operators automatically give renormalized composite operators, and this fact yields a lot of applications including a construction of the energy-momentum tensor on the lattice \cite{Suzuki:2013gza,Makino:2014taa}. On the other hand, there has been an expectation that the gradient flow may be interpreted as a renormalization group (RG) flow (see, e.g., \cite{Luscher:2013vga,Kagimura:2015via,Yamamura:2015kva, Aoki:2016ohw,Makino:2018rys}). This expectation is based on the observation made in \cite{Luscher:2010iy}. To see this, let us consider a Euclidean scalar field theory in $d$ dimensions with the bare action $S_0[\phi]$. We assume that the theory is implemented with some UV cutoff $\Lambda_0$. The gradient flow is then given by \begin{align} \partial_\tau\phi_\tau(x) &={} -\frac{\delta S_0}{\delta \phi(x)}[\phi_\tau], \quad \phi_{\tau=0}(x) = \phi_0(x). \label{flow0} \end{align} If the field is canonically normalized as $\int_x [(1/2)(\partial_\mu\phi)^2+\cdots]$, then the flow equation gives a heat equation with a perturbation: \begin{align} \partial_\tau\phi_\tau(x) &= \partial_\mu^2\phi_\tau(x) +\cdots, \label{heat_eq} \end{align} which can be solved as\footnote{In this paper we only consider scalar field theory, but our discussion should be easily extended to other field theories.
We use a standard polymorphic notation; $\int_x$ represents $\int d^d x$ when $x$ are spacetime coordinates while $\int_p$ stands for $\int d^d p/(2\pi)^d$ when $p$ are momenta. We often denote $\phi(x)$ by $\phi_x$. } \begin{align} \phi_\tau(x) = \int_y K_\tau(x-y)\,\phi_0(y) + \cdots, \end{align} where $K_\tau(x-y)$ is the heat kernel: \begin{align} K_\tau(x-y) = \int_p e^{i p(x-y) - \tau\,p^2} = \frac{1}{(4\pi\tau)^{d/2}}\,e^{-(x-y)^2/4\tau}. \end{align} Thus, $\phi_\tau(x)$ can be interpreted as an effective field which is coarse-grained from $\phi_0(y)$ within the radius $r\propto \sqrt{\tau}$. However, this interpretation does not perfectly match the philosophy of the renormalization group. In fact, if we denote the solution to \eqref{flow0} by $\phi_\tau(\phi_0)=\bigl(\phi_\tau(x;\phi_0)\bigr)$ so as to specify its initial value, the distribution function of $\phi$ at time $\tau$ will be given by \begin{align} p_\tau[\phi] = \frac{1}{Z_0}\,\int [d\phi_0]\, \delta[\phi-\phi_\tau(\phi_0)]\, e^{-S_0[\phi_0]} \quad\Bigl( Z_0 \equiv \int [d\phi_0]\, e^{-S_0[\phi_0]}\Bigr). \end{align} The flow equation gives the field $\phi$ a tendency to approach the classical solution of the original bare action $S_0[\phi]$, and thus $p_\tau[\phi]$ will develop a sharp, $\delta$-function-like peak at the classical solution in the large $\tau$ limit, but this is not what we expect in the renormalization group; $\phi_\tau$ at large $\tau$ should be regarded as a low-energy effective field, which can be well treated as the classical solution to the low-energy effective action at scale $\Lambda=1/\sqrt{\tau}$, not to the bare action which itself can be regarded as giving an effective theory at the original cutoff $\Lambda_0\,(\gg\Lambda)$.
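The collapse of $p_\tau[\phi]$ onto the classical solution is already visible in a zero-dimensional toy (our own illustration, not part of the original argument): for a single variable with $S_0(\phi)=\phi^2/2$, the flow \eqref{flow0} becomes $\dot\phi_\tau=-\phi_\tau$, so an ensemble drawn from $e^{-S_0}$ contracts by $e^{-\tau}$ and its variance by $e^{-2\tau}$, approaching a $\delta$ peak at $\phi=0$:

```python
import math, random, statistics

random.seed(0)

def flow(phi0, tau, dt=1e-3):
    # Euler integration of dphi/dtau = -S0'(phi) for the toy action
    # S0(phi) = phi^2 / 2, i.e. dphi/dtau = -phi.
    phi = phi0
    for _ in range(int(round(tau / dt))):
        phi -= phi * dt
    return phi

samples = [random.gauss(0.0, 1.0) for _ in range(2000)]  # drawn from e^{-S0}
flowed = [flow(p, tau=1.0) for p in samples]

# Exact solution phi_tau = phi_0 e^{-tau}: the ensemble variance shrinks
# by e^{-2 tau}, i.e. p_tau collapses toward a delta peak at phi = 0.
ratio = statistics.pvariance(flowed) / statistics.pvariance(samples)
```

Since the flow is linear here, the variance ratio is independent of the sampled ensemble and equals $e^{-2\tau}$ up to the Euler discretization error.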
In this paper, we propose a novel gradient flow in which the derivative is taken of the effective action at scale $\Lambda=1/\sqrt{\tau}$, so that the field tends to approach the classical solution of this effective action: \begin{align} \partial_\tau\phi_\tau(x) &={} -\frac{\delta S_\tau}{\delta \phi(x)}[\phi_\tau], \quad \phi_{\tau=0}(x) = \phi_0(x). \label{flow1} \end{align} Assuming that the initial value $\phi_0(x)$ is distributed according to the distribution function $e^{-S_0[\phi_0]}/Z_0$, we impose the self-consistency condition that the classical solution $\phi_\tau(x)$ be distributed according to $e^{-S_\tau[\phi]}/Z_\tau$\footnote{ Note that the partition function is constant in time, $Z_\tau \equiv \int[d\phi]\,e^{-S_\tau[\phi]} = Z_0$. } \begin{align} e^{-S_\tau[\phi]} \equiv \int [d\phi_0]\,\delta[\phi-\phi_\tau(\phi_0)]\, e^{-S_0[\phi_0]}, \label{consistency0a} \end{align} where $\phi(x)$ should have only the coarse-grained degrees of freedom. We investigate the consequences of this requirement, and argue that the obtained equation for $S_\tau[\phi]$ may be regarded as an RG equation if one makes a field-variable transformation at every step such that the kinetic term is kept in the canonical form. This paper is organized as follows. In Section \ref{sec:formulation} we write down the basic equation that determines the evolution of $S_\tau[\phi]$. In Section \ref{sec:LPA} we consider a local potential approximation (LPA) to our equation, and show that the result has a nice interpretation with Feynman diagrams. In Section \ref{sec:epsilon} we make an $\varepsilon$ expansion of the LPA and show that it reproduces the eigenvalues of the linearized RG transformation around both the Gaussian and the Wilson-Fisher fixed points to order $\varepsilon$. Section \ref{sec:conclusion} is devoted to conclusions and outlook.
\section{Formulation} \label{sec:formulation} We first rewrite the consistency condition \eqref{consistency0a} in differential form\footnote{ In this paper, in order to simplify discussions, we do not seriously take into account the anomalous dimension $\gamma=\eta/2$, which may be incorporated by adding a term $(\gamma/2\tau)\,\phi_\tau(x)$ to the right-hand side of the first equation in \eqref{flow1}. } \begin{align} \partial_\tau e^{-S_\tau[\phi]} &=\int [d\phi_0] \int_x \Bigl(\frac{\delta}{\delta \phi(x)} \delta\bigl[\phi-\phi_\tau(\phi_0)\bigr]\Bigr)\, \bigl( -\partial_\tau \phi_\tau(x)\bigr)e^{-S_0[\phi_0]} \nonumber \\ &=\int [d\phi_0]\int_x \Bigl(\frac{\delta}{\delta \phi(x)} \delta \bigl[ \phi-\phi_\tau\left( \phi_0\right)\bigr]\Bigr)\, \frac{\delta S_\tau}{\delta \phi(x)} [\phi_\tau ]\, e^{-S_0[\phi_0]} \nonumber \\ &=\int_x \frac{\delta}{\delta \phi(x)} \Bigl[ \frac{\delta S_\tau[\phi]}{\delta \phi(x)}\,e^{-S_\tau[\phi]}\Bigr], \label{flow1a} \end{align} which in turn gives the following differential equation for $S_\tau[\phi]$: \begin{align} \partial_\tau S_\tau [\phi] &= \int_x \Bigl[{} - \frac{\delta^2 S_\tau[\phi]}{\delta \phi(x)^2} + \frac{\delta S_\tau[\phi]}{\delta \phi(x)} \frac{\delta S_\tau[\phi]}{\delta \phi(x)}\Bigr]. \label{flow1b} \end{align} However, one can easily see that UV divergences arise from the second-order functional derivative at the same point, $\delta^2 S/\delta\phi(x)^2$. The reason why such UV divergences appear in the effective theory is that we have not taken into account the fact that $\phi(x)$ should have only the coarse-grained degrees of freedom with cutoff $\Lambda=1/\sqrt{\tau}$. To see how to incorporate this fact, it is helpful to consider a sharp cutoff for a while, instead of the smooth smearing with the heat kernel $K_\tau(x-y)$.
Namely, we assume that the flowed field is cut off as $\phi_\tau(x)=\int_{|p|\leq 1/\sqrt{\tau}} e^{i p x}\,\phi_{\tau,p}$, and accordingly that the action $S_\tau[\phi]$ depends only on the lower modes $\phi_p$ $(|p|\leq 1/\sqrt{\tau})$ of the scalar field $\phi(x)=\int_p e^{i p x}\,\phi_{p}$. Then, the calculation in \eqref{flow1a} will be modified as \begin{align} \partial_\tau e^{-S_\tau[\phi]} &=\int [d\phi_0] \int_{|p|\leq 1/\sqrt{\tau}} \Bigl(\frac{\delta}{\delta \phi_p} \delta[\phi-\phi_\tau(\phi_0)] \Bigr)\, \bigl( -\partial_\tau \phi_{\tau,p}\bigr)e^{-S_0[\phi_0]} \nonumber \\ &=\int [d\phi_0] \int_{|p|\leq 1/\sqrt{\tau}} \Bigl(\frac{\delta}{\delta \phi_p} \delta[\phi-\phi_\tau(\phi_0)] \Bigr)\, \frac{\delta S_\tau}{\delta \phi_{-p}} [\phi_\tau ]\, e^{-S_0[\phi_0]} \nonumber \\ &=\int_{|p|\leq 1/\sqrt{\tau}} \frac{\delta}{\delta \phi_p} \Bigl[ \frac{\delta S_\tau[\phi]}{\delta \phi_{-p}}\,e^{-S_\tau[\phi]}\Bigr]. \label{flow2a_sharp} \end{align} Returning to the smooth cutoff with the heat kernel, eq.~\eqref{flow2a_sharp} will be expressed as \begin{align} \partial_\tau e^{-S_\tau[\phi]} &=\int_{x,y}\ K_\tau(x-y)\,\frac{\delta}{\delta \phi(x)} \Bigl[\frac{\delta S_\tau[\phi]}{\delta \phi(y)}\, e^{-S_\tau[\phi]}\Bigr], \label{flow2a} \end{align} which is equivalent to the equation \begin{align} \partial_\tau S_\tau[\phi] &=\int_{x,y} K_\tau(x-y)\Bigl[ \frac{\delta S_\tau[\phi]}{\delta \phi(x)} \frac{\delta S_\tau[\phi]}{\delta \phi(y)} -\frac{\delta^2 S_\tau[\phi]}{\delta \phi(x)\delta \phi(y)}\Bigr]. \label{flow2b} \end{align} We see that there no longer exist divergences of the aforementioned type. For the rest of this paper, we treat \eqref{flow2b} as the equation that {\em defines} the flow of $S_\tau[\phi]$.
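Equation \eqref{flow2a} smears the functional derivatives with the heat kernel $K_\tau$. As a quick numerical sanity check of the kernel itself (our own check, in $d=1$), the momentum integral can be compared against the normalized Gaussian of width $\propto\sqrt{\tau}$:

```python
import math

def K_fourier(x, tau, pmax=30.0, n=120000):
    # Midpoint Riemann sum for K_tau(x) = int dp/(2 pi) e^{i p x - tau p^2}
    # in d = 1; the imaginary part cancels by symmetry, leaving a cosine.
    dp = 2.0 * pmax / n
    total = 0.0
    for i in range(n):
        p = -pmax + (i + 0.5) * dp
        total += math.cos(p * x) * math.exp(-tau * p * p)
    return total * dp / (2.0 * math.pi)

def K_gauss(x, tau):
    # Normalized Gaussian heat kernel, unit total weight.
    return math.exp(-x * x / (4.0 * tau)) / math.sqrt(4.0 * math.pi * tau)
```

The two evaluations agree to the accuracy of the quadrature, confirming the coarse-graining radius $r\propto\sqrt{\tau}$ quoted in the Introduction.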
We here make an important comment that \eqref{flow2a} can be rewritten in the form of a Fokker-Planck equation: \begin{align} \partial_\tau e^{-S_\tau [\phi]} &=\int_{x,y}\,K_\tau(x-y)\, \Bigl[ \frac{\delta^2 S_\tau[\phi]}{\delta \phi(x)\delta \phi(y)} -\frac{\delta S_\tau[\phi]}{\delta \phi(x)} \frac{\delta S_\tau[\phi]}{\delta \phi(y)}\Bigr]\,e^{-S_\tau[\phi]} \nonumber \\ &=\int_{x,y}\,\frac{\delta}{\delta \phi(x)}\,K_\tau(x-y)\, \Bigl[ \frac{\delta}{\delta \phi(y)} +2\frac{\delta S_\tau[\phi]}{\delta \phi(y)}\Bigr]\, e^{-S_\tau[\phi]}, \label{FK1} \end{align} which corresponds to the Langevin equation \begin{align} \partial_\tau \phi_\tau(x) &= \nu_\tau(x) - 2\,\int_y K_\tau(x-y) \frac{\delta S_\tau[\phi]}{\delta \phi(y)} \label{Langevin0} \end{align} with the Gaussian white noise $\nu_\tau(x)$ normalized as \begin{align} \Braket{\nu_\tau(x)\nu_{\tau'}(y)}_\nu &= 2\, \delta(\tau-\tau')\,K_\tau(x-y). \end{align} The solution $\phi_\tau(x)$ to the Langevin equation now depends on the noise $\nu_\tau(x)$ as well as the initial value $\phi_0(x)$, \begin{align} \phi_\tau(x)=\phi_\tau\left(x;\phi_0,\nu\right). \end{align} Then, denoting the Gaussian measure of $\nu$ by $[d\rho(\nu)]$, the distribution function $e^{-S_\tau[\phi]}/Z_\tau$ [see \eqref{consistency0a}] can also be written as \begin{align} e^{-S_\tau[\phi]} &= \int [d\phi_0]\, \bigl\langle \delta[\phi-\phi_\tau(\phi_0,\nu)] \bigr\rangle_\nu\, e^{-S_0[\phi_0]} \nonumber \\ &= \int [d\phi_0][d\rho(\nu)]\, \delta[\phi-\phi_\tau(\phi_0,\nu)] \, e^{-S_0[\phi_0]}. \label{consistency0b} \end{align} The Langevin equation \eqref{Langevin0} shows that the field $\phi_\tau(x)$ makes a random walk due to the noise term, but at the same time it tries to approach the classical solution to $S_\tau[\phi]$.
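The correspondence can be illustrated in a zero-dimensional toy model (a single variable $\phi$, kernel replaced by $1$; this toy and its solution are our own). Taking a Gaussian initial action $S_0=\phi^2/(2\sigma_0)$, the analogue of \eqref{flow2b} keeps $S_\tau$ Gaussian with $\sigma_\tau=\sigma_0-2\tau$ (valid for $\tau<\sigma_0/2$), and both the deterministic flow \eqref{flow1} and the Langevin equation \eqref{Langevin0} should then reproduce this variance:

```python
import math, random, statistics

random.seed(1)
sigma0, tau, dt, n = 4.0, 1.0, 2e-3, 4000
steps = int(round(tau / dt))

# Both ensembles start from the initial distribution e^{-S_0}.
det = [random.gauss(0.0, math.sqrt(sigma0)) for _ in range(n)]
lan = [random.gauss(0.0, math.sqrt(sigma0)) for _ in range(n)]

for k in range(steps):
    sigma_t = sigma0 - 2.0 * k * dt   # self-consistent Gaussian width
    # Deterministic flow: dphi/dtau = -S_tau'(phi) = -phi / sigma_tau.
    det = [p - p / sigma_t * dt for p in det]
    # Langevin: dphi = -2 phi/sigma_tau dt + sqrt(2 dt) * white noise.
    lan = [p - 2.0 * p / sigma_t * dt
           + math.sqrt(2.0 * dt) * random.gauss(0.0, 1.0) for p in lan]

var_det = statistics.pvariance(det)   # expected: sigma0 - 2*tau
var_lan = statistics.pvariance(lan)   # expected: sigma0 - 2*tau
```

With $\sigma_0=4$ and $\tau=1$, both empirical variances come out close to $\sigma_0-2\tau=2$, illustrating that the purely deterministic and the stochastic representations produce the same distribution.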
We thus find the mathematical equivalence between the two expressions \eqref{consistency0a} and \eqref{consistency0b} that have different meanings; the former is purely deterministic in the course of evolution while the latter is stochastic. This observation may support the idea that a seemingly deterministic evolution is actually accompanied by an integration over some fluctuating degrees of freedom. \section{Local potential approximation} \label{sec:LPA} In order to investigate how our equation \eqref{flow2b} works as an RG equation, we make a local potential approximation \cite{Wilson:1973jj,Hasenfratz:1985dm,Morris:1994ie}: \begin{align} S_\tau[\phi]&= \int_x \Bigl[ V_\tau(\phi_x) + \frac{1}{2}\, ( \partial_\mu \phi_x)^2 \Bigr]. \label{LPA0} \end{align} The canonical form of the kinetic term is particularly important for our purpose of interpreting the gradient flow as an RG flow [see discussions around \eqref{heat_eq}]. However, even when we normalize the field $\phi_x$ in this way at time $\tau$, the action may no longer take a canonical form at $\tau+\epsilon$. In order for the interpretation $\Lambda=1/\sqrt{\tau}$ to hold also at time $\tau+\epsilon$ [i.e.\ $\Lambda-\delta\Lambda=1/\sqrt{\tau+\epsilon} = (\tau\,e^{\epsilon/\tau})^{-1/2}$], we then need to make a field-variable transformation at $\tau+\epsilon$ to retain the kinetic term in the canonical form. For making the necessary calculations, it is convenient to start from the local potential approximation of the second order: \begin{align} I_\tau[\varphi] &\equiv \int_x \Bigl[ U_\tau(\varphi_x) + \frac{1}{2}\, W_\tau(\varphi_x)\,( \partial_\mu \varphi_x)^2 \Bigr] \label{LPA2} \end{align} and to investigate the evolution of $U_\tau(\varphi)$ and $W_\tau(\varphi)$ from $\tau$ to $\tau+\epsilon$ with the initial values $U_\tau(\varphi)=V_\tau(\varphi)$ and $W_\tau(\varphi)=1$.
One can easily derive the following combined equations\footnote{ Among the formulas useful in deriving these equations are \begin{align} &\partial_x^2 K_\tau (x-y) = \partial_\tau K_\tau(x-y), \qquad \int_{x-y} K_\tau(x-y)\,(x-y)_\mu (x-y)_\nu = 2\,\tau\,\delta_{\mu\nu}, \nonumber \\ &\int_{x,y} K_\tau (x-y)\,f(\phi_x)\,g(\phi_y) = \int_x\bigl[\, f(\phi_x)\,g(\phi_x) - \tau\,(\partial_\mu\phi_x)^2\,f'(\phi_x)\,g'(\phi_x) + O(\tau^2) \bigr]. \nonumber \end{align} } \begin{align} \partial_\tau U_\tau(\varphi)& = U_\tau'(\varphi)^2 -\frac{1}{(4\pi\tau)^{d/2}}\,U''_\tau(\varphi) -\frac{d}{2\tau}\,\frac{1}{(4\pi\tau)^{d/2}}\,W_\tau(\varphi), \label{RG1} \\ \partial_\tau W_\tau(\varphi)& =2\, U'_\tau(\varphi)\, W'_\tau(\varphi) +4\, U''_\tau(\varphi)\, W_\tau(\varphi) -2\, \tau\, U''_\tau(\varphi)^2 -\frac{1}{(4\pi\tau)^{d/2}}\,W''_\tau(\varphi). \label{RG2} \end{align} From these, we find that the coefficient of $(1/2) (\partial_\mu\varphi_x)^2$ changes from the normalized value $W_\tau(\varphi)\equiv 1$ to \begin{align} W_{\tau+\epsilon}(\varphi) &= 1+\epsilon\,\partial_\tau W_\tau(\varphi) =1 + \epsilon\bigl[ 4\,U_\tau''(\varphi) - 2\tau\,U_\tau''(\varphi)^2\bigr] \nonumber \\ &\equiv 1 + 2\epsilon\, \rho_\tau'(\varphi). \end{align} Thus, the canonically normalized field $\phi$ at $\tau+\epsilon$ is given by integrating the equation $d\phi/d\varphi=\sqrt{W_{\tau+\epsilon}(\varphi)} =1 + \epsilon\,\rho_\tau'(\varphi)$, and we find the following relation to the order of $\epsilon$: \begin{align} \varphi &= \phi - \epsilon\,\rho_\tau(\phi) = \phi - \epsilon\,\int_0^{\phi} d\phi\, \bigl[ 2\,U_\tau''(\phi) - \tau\,U_\tau''(\phi)^2\bigr]. \end{align} The Jacobian\footnote{ The prime means that the determinant or the trace should be taken on the partial functional space under the projection of $K_\tau(x-y)$.
} ${\rm Det}'\,(\delta\varphi/\delta\phi) = e^{{\rm Tr}'\,\log\,(\delta\varphi/\delta\phi)}$ is calculated with \begin{align} {\rm Tr}'\,\log(\delta\varphi/\delta\phi) = \int_{x,y} K_\tau(x-y)\,\log\bigl[ 1-\epsilon\,\rho_\tau'(\phi_x)\bigr] \,\delta^d(x-y) ={}- \epsilon \int_x \frac{1}{(4\pi\tau)^{d/2}}\,\rho_\tau'(\phi_x). \end{align} By putting everything together, the change of the local potential for the canonically normalized field $\phi$ is given as follows [recall the initial condition $U_\tau(\phi)=V_\tau(\phi)$]: \begin{align} V_{\tau+\epsilon}(\phi) &= \bigl[\,U_\tau(\varphi) + \epsilon\,\partial_\tau U_\tau(\varphi)\bigr] \bigr|_{\varphi=\phi-\epsilon\,\rho_\tau(\phi)} + \epsilon\,\frac{1}{(4\pi\tau)^{d/2}}\,\rho'_\tau(\phi) \nonumber \\ &= V_\tau(\phi) + \epsilon\,\Bigl[ - V_\tau'(\phi)^2 + \frac{1}{(4\pi\tau)^{d/2}}\,V_\tau''(\phi) + \tau\,V_\tau'(\phi)\,\int_0^\phi d\phi\,V_\tau''(\phi)^2 \nonumber \\ &~~~ - \frac{\tau}{(4\pi\tau)^{d/2}}\,V_\tau''(\phi)^2 - \frac{d}{2\,\tau(4\pi\tau)^{d/2}} \Bigr]. \label{dimful} \end{align} Note that the terms $V_\tau'(\phi)^2$ and $V_\tau''(\phi)$ appear in \eqref{dimful} as ${}-V_\tau'(\phi)^2 + {\rm const.}\,V_\tau''(\phi)$ that have the same signs as those in the Polchinski equation \cite{Polchinski:1983gv}, although the signs of the terms $U_\tau'(\phi)^2$ and $U_\tau''(\phi)$ are opposite in \eqref{RG1}. To get dimensionless expressions, we use the cutoff $\Lambda=1/\sqrt{\tau}=\tau^{-1/2}$ at time $\tau$ as \begin{align} &x_\mu = \tau^{1/2}\,\bar x_\mu,\quad \partial_\mu = \tau^{-1/2}\,\bar\partial_\mu, \quad \phi_x = \tau^{-(d-2)/4}\,\bar\phi_{\bar x}, \end{align} which gives the relation \begin{align} V_\tau(\phi) = \tau^{-d/2}\,\bar V_\tau(\bar\phi) ~~\mbox{with}~~ \phi = \tau^{-(d-2)/4}\,\bar\phi . \label{Vtau} \end{align} Here we have placed the bar on quantities to indicate that they are dimensionless. 
On the other hand, we use the cutoff $\Lambda-\delta\Lambda = 1/\sqrt{\tau+\epsilon} = (\tau\,e^{\epsilon/\tau})^{-1/2}$ at time $\tau + \epsilon$ as \begin{align} x_\mu = (\tau\,e^{\epsilon/\tau})^{1/2}\,\bar x_\mu, \quad \partial_\mu = (\tau\,e^{\epsilon/\tau})^{-1/2}\,\bar\partial_\mu, \quad \phi_x = (\tau\,e^{\epsilon/\tau})^{-(d-2)/4}\,\bar\phi_{\bar x}, \end{align} which leads to the relation \begin{align} V_{\tau+\epsilon}(\phi) = (\tau\,e^{\epsilon/\tau})^{-d/2}\, \bar V_{\tau+\epsilon}\bigl(\bar\phi\bigr) ~~\mbox{with}~~ \phi = (\tau\,e^{\epsilon/\tau})^{-(d-2)/4}\,\bar\phi. \label{Vtauepsilon} \end{align} Substituting \eqref{Vtau} and \eqref{Vtauepsilon} to \eqref{dimful}, we finally obtain the following equation for the dimensionless local potential (we remove the bar from the expression for notational simplicity): \begin{align} \tau\,\partial_\tau V_\tau(\phi) &= \frac{d}{2}\,V_\tau(\phi) - \frac{d-2}{4}\,\phi\,V_\tau'(\phi) - V_\tau'(\phi)^2 + B_d\,V_\tau''(\phi) - B_d\,V_\tau''(\phi)^2 \nonumber \\ &~~~+ V_\tau'(\phi)\,\int_0^\phi d\phi\,V_\tau''(\phi)^2 - \frac{d}{2}\,B_d \quad \Bigl(B_d\equiv \frac{1}{(4\pi)^{d/2}} \Bigr). \label{AF_LPA} \end{align} Note that the first two terms in \eqref{AF_LPA} reflect the simple rescalings of the potential and the field variable. The next three terms have a natural interpretation with Feynman diagrams (see Fig.~\ref{feynman}). \begin{figure}[t] \begin{center} \includegraphics[width=15cm]{feynman.eps} \begin{quote} \caption{ A Feynman diagrammatic interpretation of \eqref{AF_LPA}. The shaded circle represents minus the potential, $-V_\tau(\phi)$.} \label{feynman} \end{quote} \end{center} \vspace{-6ex} \end{figure} In fact, the third term in \eqref{AF_LPA} represents the contraction of a propagator in a 1-particle reducible diagram, while the fourth term stands for that of a propagator in a 1-particle irreducible diagram. The fifth term represents the contraction of propagators in a 2-particle reducible diagram. 
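Before turning to the $\varepsilon$ expansion, eq.~\eqref{AF_LPA} admits a quick machine check. Note first that $B_d$ can be scaled out of \eqref{AF_LPA} by $\phi\to B_d^{1/2}\phi$, $V\to B_d\,V$ (an observation of ours), so we may set $B_d=1$. The sketch below builds the polynomial truncation of the right-hand side of \eqref{AF_LPA} with exact rational arithmetic and verifies that the values $v_2=-\varepsilon/36$, $v_4=\varepsilon/36$, $v_6=-20\varepsilon^2/36^2$ (the Wilson-Fisher fixed point of the next section, rescaled to $B_d=1$) annihilate the beta functions of $v_2$, $v_4$, $v_6$ up to the expected $O(\varepsilon^3)$ residuals:

```python
from fractions import Fraction as F

N = 8  # truncate the local potential at phi^N

def deriv(p):               # d/dphi of a coefficient list p[k] ~ phi^k
    return [F(k) * p[k] for k in range(1, len(p))]

def mul(p, q):              # polynomial product, truncated at phi^N
    r = [F(0)] * (N + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j <= N:
                r[i + j] += a * b
    return r

def integ(p):               # int_0^phi, truncated at phi^N
    return [F(0)] + [p[k] / (k + 1) for k in range(min(len(p), N))]

def pad(p):
    return p + [F(0)] * (N + 1 - len(p))

eps = F(1, 1000)
dim = 4 - eps               # d = 4 - eps
# O(eps) fixed point of the truncation, with B_d scaled to 1:
v2, v4, v6 = -eps / 36, eps / 36, F(-20) * eps**2 / 36**2
V = pad([])
V[2], V[4], V[6] = v2 / 2, v4 / 24, v6 / 720   # V = sum v_{2n} phi^{2n}/(2n)!

Vp, Vpp = pad(deriv(V)), pad(deriv(deriv(V)))
phiVp = pad([F(0)] + deriv(V))                  # phi * V'
Vp2, Vpp2 = mul(Vp, Vp), mul(Vpp, Vpp)
cross = mul(Vp, pad(integ(Vpp2)))               # V' * int_0^phi (V'')^2
rhs = [dim / 2 * V[k] - (dim - 2) / 4 * phiVp[k] - Vp2[k]
       + Vpp[k] - Vpp2[k] + cross[k] for k in range(N + 1)]
rhs[0] -= dim / 2                               # the -(d/2) B_d vacuum term

beta2, beta4, beta6 = 2 * rhs[2], 24 * rhs[4], 720 * rhs[6]
```

All three residuals vanish at the orders $\varepsilon$ (for $\beta_2$) and $\varepsilon^2$ (for $\beta_4$, $\beta_6$), leaving only $O(\varepsilon^3)$ leftovers, consistent with the fixed-point ansatz of the next section.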
\section{$\varepsilon$ expansion} \label{sec:epsilon} The equation \eqref{AF_LPA} can be solved iteratively in dimension $d=4-\varepsilon$ with $0<\varepsilon \ll 1$. Expanding the potential as \begin{align} V(\phi) = v_0 + \frac{v_2}{2!}\,\phi^2 + \frac{v_4}{4!}\,\phi^4 + \cdots, \end{align} the first few terms in \eqref{AF_LPA} are given by \begin{align} \tau\,\partial_\tau v_2 &= v_2 -2\,v_2^2 + 2\,v_2^3 + B_d\,v_4 -2\,B_d\,v_2 v_4, \label{v2} \\ \tau\,\partial_\tau v_4 &= \frac{\varepsilon}{2}\,v_4 - 8\,v_2 v_4 + 12\,v_2^2 v_4 -6\,B_d v_4^2 - 2\,B_d\,v_2 v_6 + B_d v_6, \label{v4} \\ \tau\,\partial_\tau v_6 &= (-1+\varepsilon)\,v_6 - 20\,v_4^2 + 76\,v_2 v_4^2 -12\,v_2 v_6 + 18\,v_2^2 v_6 \nonumber \\ &~~~ -30\,B_d\,v_4 v_6 + B_d\,v_8 - 2\,B_d\,v_2 v_8, \label{v6} \\ \tau\,\partial_\tau v_8 &= \Bigl(-2+\frac{3\varepsilon}{2}\Bigr)\,v_8 -16\,v_2 v_8 - 112\,v_4 v_6 + 24\,v_2^2 v_8 + 336\,v_4^3 \nonumber \\ &~~~ + 464\,v_2 v_4 v_6 - 56\,B_d v_4 v_8 - 70\,B_d v_6^2. \label{v8} \end{align} In addition to the Gaussian fixed point ($v^\ast_n = 0$), a nontrivial fixed point $v_n^\ast$ can be found with the ansatz $v_2^\ast=O(\varepsilon)$, $v_4^\ast=O(\varepsilon)$, $v_6^\ast=O(\varepsilon^2)$ and $v_n^\ast=O(\varepsilon^3)$ $(n\geq 8)$: \begin{align} v_2^\ast = -\,\frac{1}{36}\,\varepsilon + O(\varepsilon^2),\quad v_4^\ast = \frac{1}{36 B_4}\,\varepsilon + O(\varepsilon^2),\quad v_6^\ast = -\,\frac{20}{(36 B_4)^2}\,\varepsilon^2 + O(\varepsilon^3),\quad v_8^\ast = O(\varepsilon^3). \end{align} By linearizing \eqref{v2}--\eqref{v8} around these values, the first two eigenvalues are found to be $1-\varepsilon/6+O(\varepsilon^2)$ and $-\varepsilon/2+O(\varepsilon^2)$, which agree with those of the linearized RG transformation at the Wilson-Fisher fixed point (note that $-\Lambda\,\partial_\Lambda = 2\,\tau\,\partial_\tau$). \section{Conclusion and outlook} \label{sec:conclusion} In this paper, we investigated the RG structure of the gradient flow. 
To generate the flow, instead of using the original bare action, we proposed to use the action $S_\tau[\phi]$ at flow time $\tau$. We wrote down the basic equation that determines the evolution of the action, considered an LPA to our equation, and showed that the result has a natural interpretation in terms of Feynman diagrams. We also made an $\varepsilon$ expansion of the LPA and showed that it reproduces the eigenvalues of the linearized RG transformation around both the Gaussian and the Wilson-Fisher fixed points to the order of $\varepsilon$. In order to simplify the argument, we have not seriously taken into account the anomalous dimension, which in fact can be neglected at the order of approximation we made in the $\varepsilon$ expansion. A careful treatment of the anomalous dimension will be given in our forthcoming paper. In addition to higher-order calculations of the $\varepsilon$ expansion, it should be interesting to investigate the LPA of the $O(N)$ vector model. It is tempting to regard our equation \eqref{flow2b} as a sort of exact renormalization group \cite{Wilson:1973jj,Wegner:1972ih,Polchinski:1983gv,Wetterich:1992yh} (see \cite{Bagnuls:2001pr,Polonyi:2001se,Rosten:2010vm} for reviews of this subject). However, one must be careful in establishing this relationship, because the RG interpretation of \eqref{flow2b} is possible only when we make a field-variable transformation at every step such that the kinetic term is kept in the canonical form [see discussions below \eqref{LPA0}]. It thus should be interesting to write down an equation which incorporates the effect of the change of variable in the form of a differential equation. In developing the present work further, it will be important to investigate whether the gradient flow of the present paper [eq.~\eqref{flow1}] also has nice properties in the renormalization of the flowed fields and their composite operators.
In fact, a prominent feature of the conventional gradient flow \eqref{flow0} is, as was mentioned in the Introduction, that there appear no extra divergences in the $(d+1)$-dimensional bulk theory. For example, let us consider the expectation value of an operator constructed from the flowed field, $\mathcal{O}[\phi_\tau]$: \begin{align} \bigl\langle \mathcal{O}[\phi_\tau] \bigr\rangle_{S_0} \equiv \frac{1}{Z_0}\,\int [d\phi_0]\, e^{-S_0[\phi_0]}\,\mathcal{O}[\phi_\tau(\phi_0)], \label{vev_S0} \end{align} where $\phi_\tau(\phi_0)$ is the solution to \eqref{flow0}. This gives a finite quantity once a proper regularization is implemented at the initial cutoff $\Lambda_0$, and this absence of extra divergences is attributed to the fact that $\phi_\tau(x;\phi_0)$ takes the form $\phi_\tau(x;\phi_0)=\int_y\,K_\tau{(x-y)}\,\phi_0(y)+\cdots$. Now let us consider the expectation value of the same operator $\mathcal{O}[\phi]$ with respect to our effective action $S_\tau[\phi]$: \begin{align} \bigl\langle \mathcal{O}[\phi] \bigr\rangle_{S_\tau} &\equiv \frac{1}{Z_\tau}\,\int [d\phi]\, e^{-S_\tau[\phi]}\,\mathcal{O}[\phi] \nonumber \\ &= \frac{1}{Z_\tau}\,\int [d\phi][d\phi_0]\,e^{-S_0[\phi_0]}\, \delta[\phi-\phi_\tau(\phi_0)]\,\mathcal{O}[\phi] \nonumber \\ &= \frac{1}{Z_\tau}\,\int [d\phi_0]\, e^{-S_0[\phi_0]}\,\mathcal{O}[\phi_\tau(\phi_0)], \label{vev_Stau} \end{align} where $\phi_\tau(x;\phi_0)$ is now the solution to our flow equation \eqref{flow1}. Note that this solution also has the form $\phi_\tau(x;\phi_0)=\int_y\,K_\tau{(x-y)}\,\phi_0(y)+\cdots$ because we make a field-variable transformation at every step such that $S_\tau[\phi]$ takes the canonical form, $S_\tau[\phi] = \int_x\,\bigl[(1/2)\,(\partial_\mu\phi(x))^2+\cdots\bigr]$. We thus expect that the two expectation values \eqref{vev_S0} and \eqref{vev_Stau} share the same finiteness properties at short distances. We leave the confirmation of this expectation for future work.
Although the present paper only discusses scalar field theory, the extension to other field theories should be straightforward. The generalization to field theories in curved spacetime should also be interesting. A study along these lines is now in progress and will be reported elsewhere. \section*{Acknowledgments} The authors thank Daisuke Kadoh, Yoshio Kikukawa, Nobuyuki Matsumoto, Tetsuya Onogi, Hidenori Sonoda and Hiroshi Suzuki for useful discussions. This work was partially supported by JSPS KAKENHI (Grant Number 16K05321). \baselineskip=0.9\normalbaselineskip
\section{Introduction} Coronal mass ejections (CMEs) are the most spectacular eruptive phenomena in our solar system. They are able to release a large quantity of plasma and magnetic flux into the heliosphere and severely disturb the space environment around the Earth \citep[e.g.,][]{gosling93}. Over the last 20 years, although the solar community has made considerable progress in many aspects of understanding CMEs, the important issue of how CMEs are initiated remains elusive \citep{chen11_review,schmieder12}. In this Letter, we report on a compound eruptive activity involving two inter-connected flux ropes (FRs; magnetic field lines wound around each other) in the same active region, and then elucidate their initiation mechanism. Depending on whether magnetic reconnection is involved, existing initiation models can be divided into two groups: one group consists of reconnection-based models, including tether-cutting reconnection \citep{moore01} and breakout reconnection \citep{antiochos99,chen00,karpen12}; the other consists of FR-based ideal magnetohydrodynamics (MHD) models, involving catastrophic loss of equilibrium \citep{forbes91,isenberg93}, the kink instability \citep{torok04}, and the torus instability \citep{kliem06}. In the tether-cutting (breakout) model, the key mechanism is magnetic reconnection in the CME core (overlying) field region, which increases the upward magnetic pressure (reduces the overlying tension), thus initiating the explosive eruption of the CME. Differing from the reconnection models, \cite{forbes91} and \cite{isenberg93} showed that the FR will lose equilibrium in an ideal MHD process if the photospheric sources of the overlying field converge toward each other. In addition to this catastrophic loss of equilibrium, the kink or torus instability is also capable of initiating the explosive eruption of a CME. The ideal kink instability develops if the average twist of the FR is greater than a threshold \citep[e.g., 3.5$\pi$;][]{torok04}.
The torus instability takes place if the restoring force caused by the overlying field decreases faster than the outward-directed Lorentz self-force as the flux rope expands outward \citep{kliem06,olmedo10}. With the development of these theoretical models, validating and distinguishing them observationally has become a matter of great necessity. We here investigate in detail the initiation of a compound CME activity originating from two successive FR eruptions observed by the Atmospheric Imaging Assembly \citep[AIA;][]{lemen12} on board \textit{Solar Dynamics Observatory} (\textit{SDO}). We find that the slow rise of the first CME is most likely due to the quasi-separatrix-layer (QSL) reconnection in the low corona, while the slow rise of the second one results from the partial opening of the overlying field by the first CME. However, we attribute the initiation of the impulsive acceleration of both to the ideal torus instability. The data and results are presented in Sections 2 and 3, respectively. A summary and discussion are given in Section 4. \section{Instruments} The data we used are mainly from the AIA, as well as the Helioseismic and Magnetic Imager \citep[HMI;][]{schou12} on board \textit{SDO}. AIA provides EUV images of the solar corona with a temporal cadence of 12 seconds, a pixel size of 0.6\arcsec, and a field of view (FOV) of 1.3$R_\odot$. The HMI provides the vector magnetic field of the solar photosphere with almost the same pixel size and FOV as AIA but with a temporal cadence of 12 minutes. The Large Angle and Spectrometric Coronagraph \citep[LASCO;][]{brueckner95} on board \emph{SOHO} and the Sun--Earth Connection Coronal and Heliospheric Investigation \citep[SECCHI;][]{howard08} on board \emph{STEREO} observe the CMEs in white light. \textit{GOES} X-ray data provide the soft X-ray (SXR) 1--8 {\AA} flux of CME-associated flares.
\section{Results} \subsection{Kinematics of CMEs} On 2012 January 23, two CMEs (CME1 and CME2) appeared successively in the FOVs of LASCO/C2 and SECCHI/COR1, finally manifesting as a compound CME in the FOV of LASCO/C3 \citep[see also][]{joshi13,shen13,liuy13}. The selected snapshots are displayed in Figure \ref{f1}. Here, we take advantage of the graduated cylindrical shell (GCS) model developed by \citet{thernisien06} to reconstruct the three-dimensional (3D) morphology of the CMEs and to obtain their true heights. The forward fitting model includes three positional parameters and three geometric parameters: the Carrington longitude ($\phi$) and latitude ($\theta$), the tilt angle ($\gamma$) with respect to the equator, the height ($h$) and aspect ratio ($\kappa$) of the FR, and the half-angle ($\alpha$) between the two FR legs. For the CME1 at $\sim$03:30 UT, the fitting parameters are $\phi$=219$^\circ$, $\theta$=25$^\circ$, $\gamma$=--49$^\circ$, $h=4.1$ $R_\sun$, $\kappa$=0.4, and $\alpha$=32$^\circ$; for the CME2 at $\sim$04:00 UT, $\phi$=210$^\circ$, $\theta$=32$^\circ$, $\gamma$=--71$^\circ$, $h=4.2$ $R_\sun$, $\kappa$=0.4, and $\alpha$=40$^\circ$. The corresponding wireframe renderings are shown in Figure \ref{f1}. The FRs of CMEs have been reported to appear as EUV hot channel structures (HCSs) \citep{zhang12,cheng12_dem,cheng13_driver,li13} and/or bubbles \citep{cheng11_fluxrope,suyang12,patsourakos13} in AIA images. For the two CMEs studied, we inspect the AIA images and find that the FR1 appeared only in the 131 {\AA} and 94 {\AA} passbands (Figure \ref{f2}(b)) but not in any others, indicating that it should be hot \citep[$\ge$7 MK;][]{odwyer10,reeves11,cheng11_fluxrope}. On the other hand, the FR2 is visible in all AIA passbands, although most noticeable in 131 {\AA} and 94 {\AA} images (Figure \ref{f2}(e)).
The mixture of hot and cold plasmas within the FR2 is probably attributable to the presence of filament material \citep[e.g.,][]{cheng12_dem}. For a more detailed evolution of the two HCSs, please refer to the attached movies. In order to clearly display the rising motion of the HCSs, as well as the expansion of the overlying field, we make two slices (slice1 and slice2 in Figure \ref{f2}(b) and (e)) along the HCS rising directions and one slice (slice3 in Figure \ref{f2}(h)) along the expansion direction of the overlying field. The time evolution of the HCSs and the overlying field along these slices is shown in the slice-time plots in Figure \ref{f3}(a)--(c). Through these time-stacking plots, we measure the heights of the two HCSs with time (diamonds in Figure \ref{f3}(a) and (b)). The height--time data are plotted in Figure \ref{f3}(d). From Figure \ref{f2}(g)--(i), as well as the stack plot of slice3 (as shown by two inclined lines in Figure \ref{f3}(c)), we notice the slow expansion of the overlying field after the FR1 eruption, which most probably leads to the initial rise motion of the FR2. With the height-time data, we further calculate the velocity and acceleration. In order to reduce the uncertainty in the height measurement, a cubic spline smoothing method is used to smooth the height \citep[see also][]{patsourakos10_genesis,cheng13_driver}. Then a piece-wise numerical derivative method is applied to calculate the velocity \citep[e.g.,][]{zhang01,zhang04,cheng10_buildup}. The deduced velocity--time profile is displayed in Figure \ref{f3}(e). The uncertainty in the velocity is mainly from the uncertainty in the height measurement (taken to be 4 pixels; 1700 km for AIA observations and 44,000 km for the GCS fitting). Similarly, we derive the CME acceleration and the resulting uncertainty, as shown in Figure \ref{f3}(f). Note that all heights are measured from the solar surface to the top of the CME FRs.
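The piecewise numerical differentiation step can be illustrated with a minimal sketch. This is our own simplification (made-up height samples, a plain central difference instead of the spline-based procedure of the paper); the velocity error follows from propagating the height error through the difference quotient:

```python
import math

# hypothetical height-time samples (minutes, Mm); the AIA height
# uncertainty quoted in the text is 4 pixels ~ 1.7 Mm
t = [0.0, 1.0, 2.0, 3.0, 4.0]
h = [70.0, 72.0, 75.0, 80.0, 88.0]
sigma_h = 1.7

v, sigma_v = [], []
for i in range(1, len(t) - 1):
    dt = t[i + 1] - t[i - 1]
    v.append((h[i + 1] - h[i - 1]) / dt)            # central difference, Mm/min
    sigma_v.append(math.sqrt(2.0) * sigma_h / dt)   # propagated height error
```

For the samples above this gives `v == [2.5, 4.0, 6.5]` Mm/min, with a uniform velocity uncertainty of $\sqrt{2}\,\sigma_h/\Delta t$ at each interior point.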
From Figure \ref{f3}(e) and (f), one can find that the kinematic evolution of the CMEs is tightly associated with the emission variation of the associated flares. The velocity evolutions of the CME1 and CME2 are very consistent in time with the SXR 1--8 {\AA} flux profiles of flare1 and flare2. The acceleration of the CME1 and CME2 grows in step with the derivative of the SXR profile. The close temporal correlation between the CME kinematics and the flare emission implies that CMEs and associated flares are not two independent eruption phenomena but only two distinct manifestations of the same eruption process \citep{linjun00,priest02,zhang06,temmer10}. \subsection{Onset of Impulsive Acceleration of CMEs} In order to investigate the initiation of the CMEs, we first determine the onset of the impulsive acceleration. We assume that the height evolution of the CME FR in the low corona follows the function $h(t)=c_{0}e^{(t-t_{0})/\tau}+c_{1}(t-t_{0})+c_{2}$, where $h(t)$ is the height, $t$ is the time, and $\tau, t_{0}, c_{0}, c_{1}, c_{2}$ are five free coefficients. The model includes two components: the linear and the exponential, which correspond to the slow rise phase characterized by a constant velocity and the initial impulsive acceleration phase characterized by an exponential increase of velocity, respectively. The model is reasonable because the velocity of the CMEs increases rapidly once the impulsive acceleration is triggered, whether by flare reconnection \citep[e.g.,][]{antiochos99,moore01,karpen12} or by the torus instability \citep[e.g.,][]{torok05,olmedo10}. The fitting to the data is achieved with the ``mpfit.pro'' routine. With the fitting parameters, the onset of the CMEs is defined as the time at which the exponential velocity is equal to the linear velocity ($t_{\rm onset} $=$\tau \ln (c_{1} \tau /c_{0})+t_{0}$). Accordingly, the height at the onset time corresponds to the critical height $h({t_{\rm onset}})$ of the eruption.
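The onset expression follows directly from equating the velocity of the exponential component, $(c_{0}/\tau)e^{(t-t_{0})/\tau}$, with the linear velocity $c_{1}$. A quick check of the formula, with illustrative (not fitted) coefficients:

```python
import math

# illustrative coefficients for h(t) = c0*exp((t-t0)/tau) + c1*(t-t0) + c2
# (minutes and Mm; these are NOT the fitted values of the paper)
tau, t0, c0, c1, c2 = 8.0, 0.0, 5.0, 20.0, 70.0

t_onset = tau * math.log(c1 * tau / c0) + t0

# at t_onset the exponential component's velocity matches the linear one
v_exp = (c0 / tau) * math.exp((t_onset - t0) / tau)
assert abs(v_exp - c1) < 1e-9
```

Before $t_{\rm onset}$ the linear (slow rise) component dominates the velocity; after it, the exponential (impulsive acceleration) component takes over.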
We further use 100 Monte Carlo (MC) simulations to estimate the uncertainties of our results. For each MC realization, the measured heights are first perturbed randomly by an amount $\delta$ with standard deviation equal to the height error, and then the model is re-fitted to the perturbed heights. The final onset time and onset height are the averages of the 100 onset times and onset heights derived from the 100 MC realizations. The corresponding uncertainties are triple the standard deviations (3$\sigma$) of the MC fitting. The fitting results are shown in Figure \ref{f4}. For the CME1, we determine the onset at 02:02 UT with an error of 2 minutes; the onset height is 84.4$\pm$4.2 Mm. For the CME2, the onset is 03:34 UT with an error of 1 minute; the onset height is 86.2$\pm$6.0 Mm. Moreover, the uncertainty of the reference point of the height measurement is estimated to be $\sim$7.0 Mm (10$''$). Therefore, the final uncertainties of the onset heights for the CME1 and CME2 are $\sim$11.2 Mm and 13.0 Mm, respectively. From Figure \ref{f4}(c) and (f), one can find that the onset of the CMEs is earlier than that of the associated flares by a few minutes. Here we define the onset of flares as the time at which the derivative of the SXR flux becomes positive and begins to increase continuously. For CME1 and CME2, the leading times, i.e., the onset of CME impulsive acceleration relative to the onset of the flare impulsive phase, are about 2 minutes. Actually, for flare2 the onset time in the \emph{GOES} record is 03:38 UT; the leading time would be $\sim$4 minutes if this onset time were adopted. The results imply that the impulsive acceleration onset of the CME FRs is most probably caused by an ideal MHD instability rather than by flare reconnection. In addition, we note that the impulsive acceleration of the FRs should occur earlier than what we determined if the projection effect for the heights in the AIA FOV is taken into account.
This would strengthen our result, since the FRs would then reach the critical height of the torus instability even earlier. \subsection{Magnetic Properties of CMEs} Previous theoretical works have shown that the occurrence of the torus instability of an FR depends on the decay index of the background magnetic field $B$ with height $h$ \citep[$n$=$-d \ln B/d \ln h$;][]{kliem06,isenberg07,olmedo10,demoulin10}. The decay index is computed from the potential field model based on the vertical component of vector magnetograms provided by the HMI (Figure \ref{f5}(a)--(c)). The distributions of the average decay index with height over three regions (the main polarity inversion line (PIL) and two rectangular regions near the main PIL; Figure \ref{f5}(c)) are shown in Figure \ref{f5}(d). One can find that at the onset of the CME impulsive eruption, the FR1 (FR2) reached the height of 84.4$\pm$11.2 Mm (86.2$\pm$13.0 Mm), where the decay index is 1.7$\pm$0.1, which is slightly larger than the nominal critical value $\sim$1.5 for the occurrence of the torus instability \citep{torok05}. This result supports the theory of the torus instability as the trigger of the impulsive acceleration of CMEs. Note that here the decay index is calculated from the horizontal component of the 3D magnetic field, because the vertical component does not have a role in constraining the FR. Also note that, in torus instability models, the field induced by the current inside the FR and the background field constraining the FR constitute the total magnetic field, which usually can be modeled by the nonlinear force-free field (NLFFF) \citep[e.g.,][]{guo10_filament,sun12}. Considering the fact that it is difficult to separate the background field from the FR in the NLFFF model, the potential field is thus used as an approximation of the background field \citep[see also][]{fan07,aulanier10,cheng13_driver}.
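The role of the decay index can be illustrated with a toy background field. The sketch below is ours; the $1/(1+h/H)^{3}$ profile and the scale height $H$ are arbitrary choices, not the extrapolated field of the paper. It computes $n = -d\ln B/d\ln h$ by finite differences and locates the height at which $n$ crosses the critical value $\sim$1.5:

```python
import math

H = 86.0  # Mm; hypothetical scale height, chosen for illustration only

def B(h):
    # toy background field decreasing with height
    return 1.0 / (1.0 + h / H) ** 3

def decay_index(h, dh=1e-4):
    # n = -d ln B / d ln h, central differences
    return -(math.log(B(h + dh)) - math.log(B(h - dh))) / \
            (math.log(h + dh) - math.log(h - dh))

# for this profile n(h) = 3h/(h + H), which crosses 1.5 exactly at h = H;
# locate the crossing by bisection (n is monotonically increasing here)
lo, hi = 1.0, 1000.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if decay_index(mid) < 1.5:
        lo = mid
    else:
        hi = mid
h_crit = 0.5 * (lo + hi)
assert abs(h_crit - H) < 1e-3
```

In the paper the same crossing is read off from the decay-index profiles of the potential-field extrapolation (Figure \ref{f5}(d)) rather than from an analytic model.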
Based on the HMI vector data at 00:00 UT on January 23 (Figure \ref{f5}(c)), we extrapolate the 3D NLFFF using the optimization algorithm in Cartesian coordinates \citep{wheatland00,wiegelmann04}. The selected field lines are displayed in Figure \ref{f5}(e). One can find that the extrapolation suggests that there exist two FR structures (marked as FR$_{\rm A}$ and FR$_{\rm B}$), both of which have an associated filament visible in the AIA 304 {\AA} passband (Figure \ref{f5}(f)). The FR$_{\rm A}$ finally erupted as the CME2, while the FR$_{\rm B}$ remained in place during the whole eruption process. However, for the FR1 discussed later, we are not able to reconstruct a corresponding FR, even when resorting to the optimization algorithm in spherical geometry with a larger FOV \citep[e.g.,][]{guo12_spher}. This may not be surprising, since the FR1 as observed in AIA is a larger and higher structure, which cannot be well modeled with the current extrapolation technique. \section{Summary and Discussions} In this Letter, we report the initiation process of a compound CME activity consisting of two successfully erupted FRs. We find that the kinematics of the FRs in the low corona have two phases, a slow rise phase and an impulsive acceleration phase, which can be fitted very well by a model consisting of a linear component and an exponential component. In the slow rise phase of the FR1, we inspect the AIA images in all EUV and UV passbands and find that some brightenings spread sporadically along the PIL under the FR1, as well as at its two footpoints. This indicates that magnetic reconnection probably takes place inside or around the FR1. The reconnection is most likely to be at the QSL surrounding the FR in the low corona \citep{aulanier10}. The QSL reconnection is too weak to produce nonthermal particles. It is different from the reconnection process occurring during the flare, in terms of both intensity and location.
With this QSL reconnection taking place, the FR1 is allowed to grow and rise slowly. However, for the FR2, the driving mechanism of the slow rise phase might be different from that of the FR1. Due to the eruption of the FR1, the overlying field is partially opened, leading to the expansion of the ambient magnetic field. The expansion decreases the downward magnetic tension, causing the slow rise of the FR2 \citep{torok11,lynch13}. The 3D NLFFF extrapolation indicates that the FR2 existed for a long time prior to the slow rise phase. For the FR1, however, we do not know whether it existed prior to the slow rise. It may have formed locally through the QSL reconnection during the slow rise phase \citep[e.g.,][]{aulanier10,liur10}, or a nascent FR may have already existed, with the QSL reconnection only playing a role in enhancing the poloidal flux of the FR1. At the end of the slow rise phase, the FR1 (FR2) reaches the height of 84.4$\pm$11.2 Mm (86.2$\pm$13.0 Mm), where the decrease of the background magnetic field with height is fast enough for the torus instability to take place, thus triggering the impulsive acceleration. It is worth mentioning that we do not find evidence of the kink instability in the observations. As the impulsive acceleration commences, the FR quickly stretches the anchored overlying field upward, and then the magnetic field underneath starts to reconnect impulsively. Such runaway reconnection is the cause of the observed flare rise phase. The time lag between the onset of the associated flare and that of the torus instability is only a few minutes, probably shorter. Without high-cadence observations from AIA, it would be difficult to distinguish the relative timing between the flux rope impulsive acceleration and the flare reconnection onset. As the flare starts, the CME acceleration and the flare reconnection are coupled together.
The flare reconnection is able to rapidly convert ambient magnetic field into the FR, enhancing the upward Lorentz self-force, thus further accelerating the FR. In response to the escape of the FR, the reduced magnetic pressure would drive a plasma inflow, which in turn causes more ambient magnetic field to reconnect. Therefore, the runaway reconnection after the flare onset forms a positive feedback process that impulsively accelerates the CME and enhances the flare emission. \acknowledgements We thank the referee, T. T{\"o}r{\"o}k, Y. M. Wang, C. L. Shen, R. Liu, K. Liu, and P. F. Chen for their valuable comments and discussions. SDO is a mission of NASA's Living With a Star Program. X.C., M.D.D., and Y.G. are supported by NSFC under grants 10673004, 10828306, and 10933003 and NKBRSF under grant 2011CB811402. J.Z. is supported by NSF grant ATM-0748003, AGS-1156120 and NASA grant NNG05GG19G.
\section{Introduction} The study of topological measures (initially called quasi-measures) began with papers by J. F. Aarnes \cite{Aarnes:TheFirstPaper}, \cite{Aarnes:ConstructionPaper}, and \cite{Aarnes:Pure}. There are now many papers devoted to topological measures and corresponding non-linear functionals; their application to symplectic topology has been studied in numerous papers (beginning with \cite{EntovPolterovich}) and a monograph (\cite{PoltRosenBook}). To date, however, almost all these works deal with topological measures on compact spaces. In \cite{Aarnes:LC} J. F. Aarnes gives a definition of a topological measure on a locally compact space, presents a procedure for obtaining topological measures from solid set functions on a locally compact, connected, locally connected space, and constructs some examples. While \cite{Aarnes:LC} contains many interesting ideas, it is not entirely satisfactory. It contains incomplete proofs and sometimes asks the reader to adapt lengthy proofs from other papers to its subject matter. In addition, the approach in \cite{Aarnes:LC} makes heavy use of sets that are connected and co-connected (i.e. have connected complements). We do not think this is the right approach for the non-compact setting. For example, using these sets one may end up constructing trivial topological measures (see Example 6.2 in \cite{Aarnes:LC}). Finally, the paper has never been published in a refereed mainstream journal. The construction technique employed by Aarnes for a compact space $X$ in \cite{Aarnes:ConstructionPaper} was later nicely simplified by D. J. Grubb, who used semi-solid sets in a compact space. Grubb presented his elegant construction in a series of lectures in 1998, but, unfortunately, never published it. Influenced by ideas of Aarnes and Grubb, we have developed an approach for constructing topological measures on locally compact spaces. 
Instead of sets that are connected and have connected complements we use sets that are connected and whose complement has finitely many bounded and unbounded components. Our approach allows us to extend a solid set function (see Definition \ref{DeSSFLC}) to a topological measure on $X$ when $X$ is a locally compact, connected, and locally connected space; the restriction of a topological measure to solid sets with compact closure is a solid set function that uniquely determines the topological measure. We obtain an easy way to construct topological measures on non-compact locally compact spaces whose one-point compactification has genus 0. (See \cite{Aarnes:ConstructionPaper} and section \ref{examplesTmLC} for more information about genus.) Thus, we are able to produce a variety of topological measures on $\R^n$, half-spaces, punctured balls, etc. The paper is organized as follows. In section \ref{Prelim} we give the necessary topological preliminaries. In section \ref{SolidSemisoid} we study the structure of solid and semi-solid sets. In section \ref{TM} we give a definition and basic properties of topological measures on locally compact spaces, and in section \ref{SSF} we do the same for solid-set functions. In section \ref{ExtBssKc}, working on a locally compact, connected, and locally connected space, we extend a solid-set function from bounded solid sets to compact connected and bounded semi-solid sets. In section \ref{BCOX} the extension is done to finite unions of disjoint compact connected sets, and in section \ref{ExttoTM} the extension produces a topological measure that is uniquely defined by a solid set function (see Theorem \ref{extThLC} and Theorem \ref{ExtUniq}). In section \ref{examplesTmLC} we give examples and present an easy way (Theorem \ref{tmXtoXha}) to generate topological measures on locally compact, connected, and locally connected spaces whose one-point compactification has genus 0 from existing examples of topological measures on compact spaces.
In this paper by a component of a set we always mean a connected component. We denote by $\overline E$ the closure of a set $E$ and by $\partial E$ the boundary of $E$. We denote by $ \bigsqcup$ a union of disjoint sets. \begin{definition} \label{debddset} A set $A \subseteq X$ is called bounded if $\overline A$ is compact. A set $A$ is solid if $A$ is connected, and $X \setminus A$ has only unbounded components. A set $A$ is semi-solid if $A$ is connected, and $X \setminus A $ has only finitely many components. \end{definition} Several collections of sets will be used often. They include: $\calO(X)$, the collection of open subsets of $X $; $\calC(X)$, the collection of closed subsets of $X $; and $\calK(X)$, the collection of compact subsets of $X $. $\calP(X)$ is the power set of $X$. Often we will work with open, compact or closed sets with particular properties. We use subscripts $c, s$ or $ ss$ to indicate (open, compact, closed) sets that are, respectively, connected, solid, or semi-solid. For example, $ \calO_{c}(X)$ is the collection of open connected subsets of $X$, and $ \calK_{s}(X)$ is the collection of compact solid subsets of $X$. Given any collection $\mathscr{E}\subseteq\calP(X)$, we denote by $\mathscr{E}^*$ the subcollection of all bounded sets belonging to $\mathscr{E}$. For example, $\calA^{*}(X) = \kx \cup \calO^{*}(X)$ is the collection of compact and bounded open sets, and $\calA_{ss}^{*}(X) = \calK_{ss}(X) \cup \calO_{ss}^{*}(X) $ is the collection of bounded open semi-solid and compact semi-solid sets. By $\calK_{0}(X)$ we denote the collection of finite unions of disjoint compact connected sets. \begin{definition} A non-negative set function $ \mu$ on a family of sets that includes compact sets is called compact-finite if $ \mu(K) <\infty$ for each compact $K$. A non-negative set function is called simple if it only assumes values $0$ and $1$. \end{definition} We consider set functions that are not identically $\infty$. 
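Definition \ref{debddset} can be illustrated with standard examples in the plane; the remark below is ours, added for illustration and not part of the original text.

```latex
\begin{remark}
Let $X = \R^{2}$. A closed disk is solid: it is connected and its
complement has a single, unbounded component. The open annulus
$A = \{ x \in \R^{2} : 1 < |x| < 2 \}$ is semi-solid but not solid:
$X \setminus A$ has exactly two components, the bounded closed disk
$\{ |x| \le 1 \}$ and the unbounded set $\{ |x| \ge 2 \}$.
\end{remark}
```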
\section{Preliminaries} \label{Prelim} This section contains necessary topological preliminaries. Some results in this section are known, but sometimes we give proofs for the reader's convenience. \begin{remark} \label{netsSETS} An easy application of compactness (see, for example, Corollary 3.1.5 in \cite{Engelking}) shows that \begin{itemize} \item[(i)] If $K_\alpha \searrow K, K \subseteq U,$ where $U \in \calO(X),\ K, K_\alpha \in \calC(X)$, and $K$ and at least one of $K_\alpha$ are compact, then there exists $\alpha_0$ such that $ K_\alpha \subseteq U$ for all $\alpha \ge \alpha_0$. \item[(ii)] If $U_\alpha \nearrow U, K \subseteq U,$ where $K \in \calK(X), \ U, \ U_\alpha \in \calO(X)$ then there exists $\alpha_0$ such that $ K \subseteq U_\alpha$ for all $\alpha \ge \alpha_0$. \end{itemize} \end{remark} \begin{remark} \label{OpenComp} \begin{itemize} \item[(a)] Suppose $X$ is connected, and $ U \in \calO_{c}(X)$ and $F \in \calC_{c}(X)$ are disjoint sets. If $\overline U \cap F \ne \O$ then $U \sqcup F $ is connected. \item[(b)] If $X$ is locally compact and locally connected, for each $x \in X$ and each open set $U$ containing $x$, there is a connected open set $V$ such that $ x \in V \subseteq \overline V \subseteq U $ and $\overline V$ is compact. \item[(c)] If $V = \bigsqcup_{s \in S} V_s$ where $V$ and $V_s $ are open sets, then $\overline{ V_s} \cap V_t = \O$ for $ s \ne t$. In particular, if $X$ is locally connected, and $V = \bigsqcup_{s \in S} V_s$ is a decomposition of an open set $V$ into connected components, then all components $V_s$ are open, and $\overline{ V_s} \cap V_t = \O$ for $ s \ne t$. \end{itemize} \end{remark} \begin{lemma} \label{prelLemma} Let $U$ be an open connected subset of a locally compact and locally connected space $X$. Then for any $x, y \in U$ there is $ V_{xy} \in \calO_{c}^{*}(X)$ such that $ x,y \in V_{xy} \subseteq \overline{ V_{xy}} \subseteq U$. \end{lemma} \begin{proof} Fix $x \in U$.
Let $$A = \{ y \in U : \exists V_{xy} \in \calO_{c}^{*}(X) \mbox{ such that } x,y \in V_{xy} \subseteq \overline{ V_{xy}} \subseteq U \}. $$ Clearly, $A$ is open, since if $y \in A$ then $V_{xy} \subseteq A$. The set $ U \setminus A$ is also open, since if $y \in U \setminus A$ then by Remark \ref{OpenComp} there exists $V \in \calO_{c}^{*}(X)$ such that $y \in V \subseteq \overline V \subseteq U$. In fact, $V \subseteq U \setminus A$. (Otherwise, if $ z \in V \cap A$ then $ V_{xz} \cup V$ is a bounded open connected set with $x,y \in V_{xz} \cup V \subseteq \overline{ V_{xz}} \cup \overline V \subseteq U$, i.e. $y \in A$.) Thus, $ U \setminus A$ is also open. Since $x \in A$, we must have $A = U$. \end{proof} We would like to note the following fact. (See, for example, \cite{Dugundji}, Chapter XI, 6.2) \begin{lemma} \label{easyLeLC} Let $K \subseteq U, \ K \in \kx, \ U \in \calO(X)$ in a locally compact space $X$. Then there exists a set $V \in \calO^{*}(X)$ such that $$ K \subseteq V \subseteq \overline V \subseteq U. $$ \end{lemma} \noindent In the spirit of this result we can say more, given connectedness. \begin{lemma} \label{LeConLC} Let $X$ be a locally compact, locally connected space, $K \subseteq U, \ K \in \kx, \ U \in \calO(X)$. If either $K$ or $U$ is connected there exist a set $V \in \calO_{c}^{*}(X)$ and a set $C \in \calK_{c}(X)$ such that $$ K \subseteq V \subseteq C \subseteq U. $$ One may take $C = \overline V$. \end{lemma} \begin{proof} Case 1: $ K \in \calK_{c}(X)$. For each $ x \in K$ by Remark \ref{OpenComp} there is $ V_x \in \calO_{c}^{*}(X)$ such that $ x \in V_x \subseteq \overline{ V_x} \subseteq U$. By compactness of $K$, we may write $ K \subseteq V_{x_1} \cup \ldots \cup V_{x_n}$. Since both $K$ and $V_{x_i} $ are connected and $ x_i \in K \cap V_{x_i}$, $\, K \cup V_{x_i}$ is connected for each $ i =1, \ldots, n$. 
Hence, $$ V = \bigcup\limits_{i=1}^n V_{x_i} = \bigcup\limits_{i=1}^n (K \cup V_{x_i}) $$ is a bounded open connected set for which $$ K \subseteq V \subseteq \overline V \subseteq \bigcup_{i=1}^n \overline{ V_{x_i}} \subseteq U. $$ Take $C = \overline V$. \\ Case 2: $ U \in \calO_{c}(X)$. As in Case 1 we may find $V_1, \ldots, V_n \in \calO_{c}^{*}(X)$ such that $$ K \subseteq V_1 \cup \ldots \cup V_n \subseteq \overline{ V_1} \cup \ldots \cup \overline{V_n} \subseteq U .$$ Pick $ x_i \in V_i$ for $i=1, \ldots, n$. By Lemma \ref{prelLemma} choose $W_i \in \calO_{c}^{*}(X)$ with $ x_1, \ x_i \in W_i \subseteq \overline{ W_i} \subseteq U$ for $ i=2, \ldots, n$. Then the set $V_1 \cup W_i \cup V_i$ is connected for each $ i =2, \ldots, n$. Hence, $$ V = \bigcup_{i=1}^n V_i \cup \bigcup_{i=2}^n W_i = \bigcup_{i=2}^n (V_1 \cup W_i \cup V_i)$$ is a bounded open connected set and $$ K \subseteq \bigcup_{i=1}^n V_i \subseteq V \subseteq \overline V \subseteq \bigcup_{i=1}^n \overline{ V_i} \cup \bigcup_{i=2}^n \overline{ W_i} \subseteq U.$$ Again, let $C = \overline V$. \end{proof} \begin{lemma} \label{LeCCoU} Let $X$ be a locally compact, locally connected space. Suppose $K \subseteq U, \ K \in \kx, \ U \in \calO(X)$. Then there exists $C \in \calK_{0}(X)$ such that $ K \subseteq C \subseteq U$. \end{lemma} \begin{proof} Let $U = \bigsqcup_{i \in I'} U_i$ be the decomposition into connected components. Since $X$ is locally connected, each $U_i$ is open, and by compactness of $K$ there exists a finite set $I \subseteq I'$ such that $K \subseteq \bigsqcup_{i \in I} U_i$. Then $ K \cap U_i = K \setminus \bigsqcup\limits_{j \in I, \ j \ne i} U_j$ is a compact set. For each $i \in I$ by Lemma \ref{LeConLC} choose $C_i \in \calK_{c}(X)$ such that $K \cap U_i \subseteq C_i \subseteq U_i$. The set $C = \bigsqcup_{i \in I} C_i$ is the desired set. \end{proof} \begin{lemma} \label{CmpCmplBdA} Let $X$ be a connected, locally connected space.
Let $ A \in \calA_{c}(X)$ and let $B$ be a component of $X \setminus A$. Then \begin{itemize} \item[(i)] If $A$ is open then $B$ is closed and $\overline{A} \cap B \neq \O$. \item[(ii)] If $A$ is closed then $B$ is open and $A \cap \overline{B} \neq \O.$ \item[(iii)] $A \sqcup \bigsqcup\limits_{s \in S} B_s$ is connected for any family $\{ B_s\}_{s \in S} $ of components of $ X \setminus A$. \item[(iv)] $B$ is connected and co-connected. \end{itemize} \end{lemma} \begin{proof} The proof of the first two parts is not difficult. For the third part, observe that by Remark \ref{OpenComp} $A \sqcup B$ is connected for each component $B$ of $X \setminus A$. To prove the last part, let $X \setminus A = \bigsqcup_{s \in S} B_s$ be a decomposition into connected components. For each $ t \in S$ the component $B_t$ is also co-connected, because $$ X \setminus B_t =A \sqcup \bigsqcup_{s \ne t} B_s $$ is a connected set by the previous part. \end{proof} \begin{lemma} \label{LeAaCompInU} Let $X$ be a connected, locally connected space. Let $K \in \kx, \ K \subseteq U \in \calO_{c}^{*}(X)$. Then at most a finite number of connected components of $X \setminus K$ are not contained in $U.$ \end{lemma} \begin{proof} Let $ X \setminus K = \bigsqcup_{s \in S} W_s$ be the decomposition of $ X \setminus K$ into connected components. Note that each component $ W_s$ intersects $U$, since otherwise we would have $W_s \subseteq X \setminus U$, so $\overline{ W_s} \subseteq X \setminus U$, so $\overline{ W_s} \cap K =\O$, which contradicts Lemma \ref{CmpCmplBdA}. Assume that there are infinitely many components of $X \setminus K$ that are not contained in $U$. Then we may choose components $W_i, \ i=1, 2, \ldots$, such that $W_i \cap U \neq\O$ and $W_i \cap (X \setminus U) \neq \O$ for each $i$. Connectedness of $W_i$ implies that $ W_i \cap \partial U \neq \O$ for each $i$. Let $x_i \in W_i \cap \partial U$.
By compactness of $\partial U$, let $x_0 \in \partial U$ be a cluster point of the sequence $(x_i)$. Then $x_0 \in X \setminus U \subseteq X \setminus K = \bigsqcup_{s \in S} W_s$, i.e. $x_0 \in W_t $ for some $t \in S$. Since $W_t$ is an open neighborhood of $x_0$, infinitely many $x_i$ must also be in $W_t$, which is impossible, since $W_i \cap W_t =\O$ for $t \neq i$. \end{proof} \begin{corollary} \label{CoBddComp} Let $X$ be a connected, locally connected space. Let $K \in \kx$ and let $W$ be the union of bounded components of $X \setminus K$. Then $W \in \calO^{*}(X)$. \end{corollary} \begin{proof} By Lemma \ref{LeConLC} pick $V \in \calO_{c}^{*}(X)$ such that $ K \subseteq V$. From Lemma \ref{LeAaCompInU} it follows that $W$ is bounded. By Lemma \ref{CmpCmplBdA} $W$ is open. \end{proof} \begin{remark} \label{ReUnbddComp} If $A \subseteq B, \ B \in \calA^{*}(X)$ then $ X \setminus B \subseteq X \setminus A$ and each unbounded component of $X \setminus B$ is contained in an unbounded component of $ X \setminus A$. \end{remark} \begin{lemma} \label{LeUnbddComp} Let $X$ be a connected, locally connected space. Assume $ A \subseteq B, \ B \in \calA^{*}(X)$. Then each unbounded component of $X \setminus B$ is contained in an unbounded component of $ X \setminus A$ and each unbounded component of $ X \setminus A$ contains an unbounded component of $X \setminus B$. \end{lemma} \begin{proof} Suppose first that $ A \subseteq K, \ K \in \kx$. The first assertion is Remark \ref{ReUnbddComp}. Now let $E$ be an unbounded component of $X \setminus A$ which contains no unbounded components of $X \setminus K$. Then $E$ is contained in the union of $K$ and all bounded components of $X \setminus K$. By Corollary \ref{CoBddComp} this union is a bounded set, and so is $E$, which leads to a contradiction. Therefore, each unbounded component of $X \setminus A$ must contain an unbounded component of $X \setminus K$. Now suppose $ A \subseteq B, \ B \in \calA^{*}(X)$. Choose $K \in \kx$ such that $ A \subseteq B \subseteq K$.
Let $E$ be an unbounded component of $X \setminus A$. By the previous part, $ E$ contains an unbounded component $Y$ of $ X \setminus K$. But $Y \subseteq G$ for some unbounded component $G$ of $ X \setminus B$. Then $G \subseteq E$. \end{proof} \begin{lemma} \label{LeNoUnbdComp} Let $X$ be locally compact, locally connected. Let $A \in \calA^{*}(X)$. Then the number of unbounded components of $ X \setminus A$ is finite. \end{lemma} \begin{proof} Suppose first that $ A \in \kx$. By Lemma \ref{LeConLC} let $U \in \calO_{c}^{*}(X)$ be such that $ A \subseteq U $. Then the assertion follows from Lemma \ref{LeAaCompInU}. Now suppose that $ A \in \calO^{*}(X)$. Then $\overline A \in \kx$, so the number of unbounded components of $ X \setminus \overline A$ is finite. From Lemma \ref{LeUnbddComp} it follows that the number of unbounded components of $X \setminus A$ is also finite, since it does not exceed the number of unbounded components of $ X \setminus \overline A$. \end{proof} \begin{lemma} \label{LeCleverSet} Let $X$ be locally compact, connected, locally connected. Suppose $D \subseteq U$ where $ D \in \kx, \ U \in \calO^{*}(X).$ Let $C$ be the intersection of the union of bounded components of $X \setminus D$ with the union of bounded components of $X \setminus U$. Then $C$ is compact and $ U \sqcup C$ is open. \end{lemma} \begin{proof} Write $$ X \setminus D = V \sqcup W,$$ where $V$ is the union of bounded components of $ X \setminus D$, and $W$ is the union of unbounded components of $ X \setminus D.$ Also write $$ X \setminus U = B \sqcup F,$$ where $B$ is the union of bounded components of $ X \setminus U$, and $F$ is the union of unbounded components of $X \setminus U.$ By Lemma \ref{LeNoUnbdComp} $F$ is a closed set. Let $$ C = V \cap B.$$ Clearly, $C$ and $U$ are disjoint. To see that $U \sqcup C$ is open, note first that $U \sqcup B = X \setminus F$ is an open set.
Hence, $$ U \sqcup C = U \sqcup(V \cap B) = (U \cup V) \cap (U \sqcup B)$$ is also an open set. Now we shall show that $C$ is closed, i.e. that $X \setminus C$ is open. Note that $ F \subseteq W$ by Remark \ref{ReUnbddComp}. The set $W$ is open by Lemma \ref{CmpCmplBdA}. Now \begin{eqnarray*} X \setminus C &=& X \setminus (B \cap V) = (X \setminus B) \cup (X \setminus V) \\ &=& ( U \sqcup F) \cup (D \sqcup W) = (U \cup D) \cup (F \cup W) = U \cup W \end{eqnarray*} is an open set. By Corollary \ref{CoBddComp} the set $C$ is bounded. \end{proof} \section{Solid and Semi-solid sets} \label{SolidSemisoid} \begin{remark} \label{ReFinNoComp} Let $X$ be locally compact, locally connected. From Lemma \ref{LeNoUnbdComp} it follows that a bounded set $B$ is semi-solid if and only if the number of bounded components of $X \setminus B$ is finite. For a bounded solid set $A$ we have: $$ X \setminus A = \bigsqcup_{i=1}^n E_i $$ where $ n \in \N$ and the $E_i$'s are unbounded connected components. \end{remark} \begin{lemma} \label{SolidCompoLC} Let $X$ be locally compact, locally connected. If $A \in \calA^{*}(X)$ then each bounded component of $X \setminus A$ is a solid bounded set. \end{lemma} \begin{proof} Let $$ X \setminus A = \bigsqcup_{i \in I} B_i \sqcup \bigsqcup_{j \in J} D_j $$ be the decomposition of $X \setminus A$ into components, where $B_i, \ i \in I$ are bounded components, and $D_j, \ j \in J$ are unbounded ones. Pick a bounded component $B_k$. Then $$ X \setminus B_k = A \sqcup \bigsqcup_{i \neq k} B_i \sqcup \bigsqcup_{j \in J} D_j $$ Note that the set on the right hand side is connected by Lemma \ref{CmpCmplBdA} and unbounded. Hence, $B_k$ is solid. \end{proof} A set $A \in \calA_{c}^{*}(X)$ may not be solid. But we may make it solid by filling in the ``holes'', i.e. by adding to $A$ all bounded components of $X \setminus A$. More precisely, we have the following result. \begin{lemma} \label{leSolidHu} Let $X$ be locally compact, locally connected.
For $A \in \calA_{c}^{*}(X)$ let $\{A_i\}_{i=1}^n$ be the unbounded components of $X \setminus A$ and $\{B_s\}_{s \in S}$ be the bounded components of $X \setminus A$. Then the set $\tilde{A} = A \sqcup \bigsqcup_{s \in S} B_s =X \setminus \bigsqcup_{i=1}^n A_i $ is solid. \end{lemma} \begin{proof} The set $\tilde A $ is connected by Lemma \ref{CmpCmplBdA}. It is clear that $X \setminus \tilde A$ has only unbounded components. \end{proof} \begin{definition} \label{solid hull} Let $X$ be locally compact, locally connected. For $A \in \calA_{c}^{*}(X)$ let $\{A_i\}_{i=1}^n$ be the unbounded components of $X \setminus A$ and $\{B_s\}_{s \in S}$ be the bounded components of $X \setminus A$. We say that $\tilde{A}= A \sqcup \bigsqcup_{s \in S} B_s= X \setminus \bigsqcup_{i=1}^n A_i $ is the solid hull of $A$. \end{definition} The next lemma gives some properties of solid hulls of connected sets that are bounded open or compact. \begin{lemma}\label{PrSolidHuLC} Let $X$ be locally compact, connected, locally connected. Let $A, B \in \calA_{c}^{*}(X)$. \begin{enumerate}[label=(a\arabic*),ref=(a\arabic*)] \item \label{part1} If $ A \subseteq B$ then $\tilde{A} \subseteq \tilde{B}.$ \item \label{part2} $\tilde{A}$ is a bounded solid set, $A \subseteq \tilde{A}$, and $A$ is solid iff $A = \tilde{A}.$ \item \label{part3} $\tilde{\tilde{A}} = \tilde{A}.$ \item \label{part4} If $A$ is open, then so is $ \tilde{A}$. If $A$ is compact, then so is $\tilde{A}.$ \item \label{part5} If $A, B$ are disjoint bounded connected sets, then their solid hulls $\tilde{A}, \tilde{B}$ are either disjoint or one is properly contained in the other. \end{enumerate} \end{lemma} \begin{proof} Part \ref{part1} follows since each unbounded component of $ X \setminus B$ is contained in an unbounded component of $ X \setminus A$. If $A$ is compact, choose by Lemma \ref{LeConLC} a set $U \in \calO_{c}^{*}(X)$ that contains $A$.
Since $\tilde A$ is a union of $A$ and bounded components of $X \setminus A$, applying Lemma \ref{LeAaCompInU} we see that $\tilde A$ is bounded. The rest of parts \ref{part2} and \ref{part3} is immediate. For part \ref{part4}, note that if $A$ is open (closed) then each of finitely many (by Lemma \ref{LeNoUnbdComp}) unbounded components of $X \setminus A$ is closed (open) by Lemma \ref{CmpCmplBdA}. To prove part \ref{part5}, let $A, B \in \calA_{c}^{*}(X)$ be disjoint. If $A \subseteq \tilde B$ then $\tilde A \subseteq \tilde B$ by parts \ref{part1} and \ref{part3}. To prove that the inclusion is proper, assume to the contrary that $\tilde A = \tilde B$. If one of the sets $A, B$ is open and the other is closed, this equality means that $\tilde A$ is a proper clopen subset of $X$, which contradicts the connectivity of $X$. Suppose $A$ and $B$ are both closed (both open). Then it is easy to see that $A = E$, where $E$ is a bounded component of $X \setminus B$, an open (closed) set. Thus, $A$ is a proper clopen subset of $X$, which contradicts the connectivity of $X$. Therefore, $\tilde A$ is properly contained in $\tilde B$. Similarly, if $ B \subseteq \tilde A$ then $ \tilde B \subseteq \tilde A $, and the inclusion is proper. Suppose neither of the above discussed cases $A \subseteq \tilde B$ or $B \subseteq \tilde A$ occurs. Then by connectedness we must have: $$ A \subseteq G , \ \ B \subseteq E$$ where $G$ is an unbounded component of $ X \setminus B$ and $E$ is an unbounded component of $ X \setminus A$. Then $ B \subseteq \tilde B \subseteq X \setminus G \subseteq X \setminus A$, i.e. $\tilde B$ is contained in a component of $ X \setminus A$. Since $\tilde B$ is connected and $ B \subseteq E$ we must have $ \tilde B \subseteq E \subseteq X \setminus \tilde A$. \end{proof} \begin{lemma} \label{LeCsInside} Let $X$ be locally compact, connected, locally connected. 
If $K \subseteq U, \ K \in \kx, \ U \in \calO_{s}^{*}(X)$ then there exists $C \in \calK_{s}(X)$ such that $$ K \subseteq C \subseteq U.$$ \end{lemma} \begin{proof} One may take $C$ to be the solid hull of the set $\overline V$ from Lemma \ref{LeConLC}. Then $C \subseteq U$ by Lemma \ref{PrSolidHuLC}. \end{proof} \begin{lemma} \label{opensolid} Let $X$ be locally compact, connected, locally connected. Let $K \subseteq V, \ K \in \calK_{s}(X), \ V \in \calO(X)$. Then there exists $ W \in \calO_{s}^{*}(X)$ such that $$ K \subseteq W \subseteq \overline W\subseteq V.$$ \end{lemma} \begin{proof} By Lemma \ref{LeConLC} we may choose $ U \in \calO_{c}^{*}(X)$ such that \begin{eqnarray} \label{V} K \subseteq U \subseteq \overline U \subseteq V. \end{eqnarray} Since $ K \in \calK_{s}(X)$, let $$ X \setminus K = \bigsqcup_{j=1}^n V_j$$ be the decomposition into connected components. Each $V_j $ is an unbounded open connected set. Since $X\setminus U \subseteq X \setminus K$, for each $j=1, \ldots, n$ let $E_j$ be the union of all bounded components of $X \setminus U$ contained in $V_j$, and let $F_j$ be the union of (finitely many by Lemma \ref{LeNoUnbdComp}) unbounded components of $X \setminus U$ contained in $V_j$. By Lemma \ref{CmpCmplBdA} each $F_j$ is closed. By Lemma \ref{LeUnbddComp} each $F_j$ is non-empty. Then by Lemma \ref{CmpCmplBdA} the set $F_j \cap \overline U$ is non-empty; moreover, $F_j \cap \overline U \subseteq V_j$ and $F_j \cap \overline U \in \kx$. Now, $E_j \subseteq \tilde U$, so $ E_j $ is bounded. Note that $X = K \sqcup \bigsqcup_{j=1}^n V_j$, and a limit point $x$ of $E_j$ cannot be in $V_i$ for $i \neq j$; and it cannot be in $K$, since in that case $U$ would be a neighborhood of $x$ containing no points of $E_j$. Thus, $\overline{ E_j} \subseteq V_j$. Then $(F_j \cap \overline U) \cup \overline{ E_j}$ is a compact set contained in $V_j$.
By Lemma \ref{LeConLC} there exists $ D_j \in \calK_{c}(X)$ such that \begin{eqnarray} \label{3sh} (F_j \cap \overline U) \cup \overline{ E_j} \subseteq D_j \subseteq V_j. \end{eqnarray} Let $$ B_j = D_j \cup F_j.$$ Then $B_j$ is connected because from (\ref{3sh}) one sees that $D_j $ intersects every component comprising $F_j$. Thus, each $B_j$ is an unbounded closed connected set, $B_j \cap K =\O$. Set $$B= \bigcup_{j=1}^n B_j.$$ Then $B \cap K = \O$. Now $ K \subseteq X \setminus B$, so let $O$ be the connected component of $X \setminus B$ such that $ K \subseteq O \subseteq X \setminus B$. Since $B= \bigcup_{j=1}^n B_j \subseteq X \setminus O$, $B$ is contained in the union of unbounded components of $X \setminus O$. Hence, each bounded component of $ X \setminus O$ is disjoint from $B$, and so $\tilde O \subseteq X \setminus B$. Thus $$ K \subseteq O \subseteq \tilde O \subseteq X \setminus B \subseteq U.$$ By (\ref{V}) we see that $$ K \subseteq \tilde O \subseteq U \subseteq \overline U \subseteq V$$ and we may take $W =\tilde O$. \end{proof} \begin{remark} The closure of a solid set need not be solid. For example, in the infinite strip $X = \R \times [0,1] $ the open set $ U = ((1,3) \times (0,1)) \cup ((5,7) \times(0,1)) \cup ((2,6) \times (0.25 , 0.75))$ is solid, while its closure is not. \end{remark} \begin{lemma} \label{ossreg} Let $X$ be locally compact, connected, locally connected. Suppose $K \subseteq W, \ K \in \calK_{c}(X), \ W \in \calO_{ss}(X)$. 
Then there exist $V \in \calO_{ss}^{*}(X)$ and $ D \in \calK_{ss}(X)$ such that $$ K \subseteq V \subseteq D \subseteq W.$$ \end{lemma} \begin{proof} By Lemma \ref{LeConLC} choose $U \in \calO_{c}^{*}(X)$ and $ C \in \calK_{c}(X)$ such that $$ K \subseteq U \subseteq C \subseteq W.$$ Let $X \setminus W = \bigsqcup_{i=1}^n E_i, \ X \setminus C = \bigsqcup_{t \in T} V_t, \ X \setminus U = \bigsqcup_{s \in S} D_s$ be decompositions into connected components of $X \setminus W, \ X \setminus C, \ X \setminus U$ respectively. Then $$ \bigsqcup_{i=1}^n E_i \subseteq \bigsqcup_{t \in T} V_t \subseteq \bigsqcup_{s \in S} D_s.$$ Let $T_0 = \{ t \in T: \ V_t \mbox{ is unbounded } \}$. Let us index by $T'$ the family of all bounded components of $X \setminus C$ each of which contains a component of $X \setminus W$. So $ \bigsqcup_{i=1}^n E_i \subseteq \bigsqcup_{t \in T_0} V_t \sqcup \bigsqcup_{t \in T'} V_t$. Note that $T'$ is a finite index set. Now let us index by $S'$ the family of all bounded components of $X \setminus U$ each of which contains a component $V_t$ for some $t \in T' $. Note that $S'$ is a finite index set and $$ \bigsqcup_{t \in T'} V_t \subseteq \bigsqcup_{s \in S'} D_s.$$ Consider $$ V = \tilde U \setminus \bigsqcup_{s \in S'} D_s.$$ Then $V$ is bounded. Also, $V$ is open. By Lemma \ref{CmpCmplBdA} $V$ is connected. Since $$ X \setminus V = (X \setminus \tilde U) \sqcup\bigsqcup_{s \in S'} D_s \subseteq \bigsqcup_{s \in S} D_s = X \setminus U$$ we see that $V \in \calO_{ss}^{*}(X)$ (as the first equality indicates that $X \setminus V$ has finitely many components), and that $U \subseteq V$. Now consider $$ D = \tilde C \setminus \bigsqcup_{t \in T' } V_t.$$ Then $D$ is compact. By Lemma \ref{CmpCmplBdA} $D$ is connected. We have $$ X \setminus D = (X \setminus \tilde C) \sqcup \bigsqcup_{t \in T'} V_t \subseteq (X \setminus \tilde U) \sqcup \bigsqcup_{s \in S'} D_s = X \setminus V,$$ so $X \setminus D$ has finitely many components, and $V \subseteq D$. 
Thus, $D \in \calK_{ss}(X)$. Also, $$ X \setminus W = \bigsqcup_{i=1}^n E_i \subseteq \bigsqcup_{t \in T_0} V_t \sqcup \bigsqcup_{t \in T'} V_t = (X \setminus \tilde C) \sqcup \bigsqcup_{t \in T'} V_t = X \setminus D.$$ Therefore, $ D \subseteq W$. Then we have: $$ K \subseteq U \subseteq V \subseteq D \subseteq W,$$ where $ V \in \calO_{ss}^{*}(X)$ and $ D \in \calK_{ss}(X)$. \end{proof} Let $V$ be an open subset of $X$ endowed with the subspace topology. Let $D \subseteq V$. By $\overline D^V$ we denote the closure of $D$ in $V$ with the subspace topology. As before, $\overline D$ stands for the closure of $D$ in $X$. \begin{lemma} \label{76a} Let $V \in \calO(X), \ D \subseteq V$. Suppose $V$ is endowed with the subspace topology. \begin{itemize} \item[a)] If $D$ is bounded in $V$ with the subspace topology then $\overline D^V = \overline D$ and $\overline D \subseteq V$. \item[b)] If $D$ is bounded in $X$ and $\overline D \subseteq V$ then $D$ is bounded in $V$. \end{itemize} \end{lemma} \begin{proof} \begin{itemize} \item[a)] If $D$ is bounded in $V$ (with the subspace topology) then $\overline D^V$ is a compact subset of $V$, and so is compact in $X$, hence closed in $X$. That is, $\overline{ \overline D^V} = \overline D^V$. Since clearly $\overline D^V \subseteq \overline D$ and $ D \subseteq \overline D^V$, we have: $$ \overline D \subseteq \overline{ \overline D^V} = \overline D^V \subseteq \overline D.$$ It follows that $ \overline D = \overline D^V \subseteq V$. \item[b)] Since $\overline D$ is compact in $X$ it is easy to see that $\overline D^V$ is compact in $V$. \end{itemize} \end{proof} \begin{remark} \label{bddInV} Let $V \in \calO^{*}(X)$ be endowed with the subspace topology. From Lemma \ref{76a} we see that $D$ is bounded in $V$ iff $\overline D \subseteq V$. Hence, $D$ is unbounded in $V$ iff $\overline D \cap (X \setminus V) \neq \O$.
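For a simple illustration of this criterion (an example added here for concreteness), take $X = \R$, $V = (0,1)$, and $D = (0, \tfrac{1}{2})$. Then
$$ \overline D = [0, \tfrac{1}{2}] \not\subseteq V, \qquad \overline D \cap (X \setminus V) = \{ 0 \} \ne \O, $$
so $D$ is bounded in $X$ but unbounded in $V$ with the subspace topology.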
\end{remark} The next two results give relations between being a solid set in a subspace of $X$ and being a solid set in $X$. \begin{lemma} \label{LeSolidInV} Let $X$ be locally connected. Let $C \subseteq V, \ C \in \calC_{s}(X), \ V \in \calO(X)$. Then $C \in \calC_{s}(V)$, i.e. connected components of $V \setminus C$ are unbounded subsets of $V$. \end{lemma} \begin{proof} Suppose $V \setminus C = \bigsqcup_{s \in S} V_s$ is the decomposition into connected components in $V$. Note that $$ X \setminus C = (X \setminus V) \sqcup (V \setminus C) =(X \setminus V) \sqcup \bigsqcup_{s \in S} V_s. $$ Assume that there exists $r \in S$ such that $V_r$ is bounded in $V$. By Lemma \ref{76a} $\overline{ V_r} \cap (X \setminus V) =\O$. Also, by Remark \ref{OpenComp} $\overline{V_r} \cap V_s = \O$ for each $s \ne r$. Thus, $ \overline{ V_r} \subseteq C \sqcup V_r$. Since $V_r \subseteq X \setminus C$ and $V_r$ is connected in $X$, $V_r$ is contained in some component $U$ of $X \setminus C$. Then $ V_r \subseteq U \cap \overline{V_r} \subseteq U \cap (C \sqcup V_r) \subseteq V_r$, so $ U \cap \overline{V_r} \subseteq V_r$. Thus, $U =(U \cap \overline{V_r}) \sqcup (U \setminus \overline{V_r}) = V_r \sqcup (U \setminus \overline{V_r})$ is a disconnection of $U$, unless $U = V_r$. This shows that $U=V_r$ is a component of $X \setminus C$. But this is impossible, since $V_r$ is bounded and $C$ is solid. \end{proof} \begin{lemma} Let $A \subseteq V, \ V \in \calO_{s}^{*}(X)$. If $A \in \calA_{s}(V) $ then $A \in \calA_{s}^{*}(X)$. \end{lemma} \begin{proof} If $ A \in \calA_{s}(V)$ then $A$ is connected in $X$ and bounded in $X$. Since $V \in \calO_{s}^{*}(X)$, we may write $X \setminus V = \bigsqcup_{i \in I} F_i$ where $F_i$ are unbounded connected components. Let $V \setminus A = \bigsqcup_{s \in S} E_s$ be the decomposition into connected components in $V$.
Each $E_s$ is unbounded in $V$, i.e., $\overline{ E_s} \cap (X \setminus V) \ne \O$, hence, $\overline{ E_s} \cap F_i \ne \O$ for some $ i \in I$. For each $s \in S$ pick one index $i(s) \in I$ with $\overline{ E_s} \cap F_{i(s)} \ne \O$. Let $I' = \{ i(s) : \ s \in S \}$, and for $i \in I'$ let $S_i = \{ s \in S : \ i(s) = i \}$. For $i \in I'$ the set $ F_i \cup \bigsqcup_{s \in S_i} E_s$ is unbounded and connected. Since $$ X \setminus A = (X \setminus V) \sqcup (V \setminus A) = \bigsqcup_{i \in I'} (F_i \cup \bigsqcup_{s \in S_i} E_s) \sqcup \bigsqcup_{i \in I \setminus I'} F_i $$ is a disjoint union of unbounded connected sets, each component of $X \setminus A$ contains one of these sets and is therefore unbounded, and the proof is complete. \end{proof} Now we shall take a closer look at the structure of an open solid or semi-solid set that contains a closed solid or closed connected set. \begin{lemma} \label{LeDecompV} Let $X$ be locally compact, connected, locally connected. Let $ C \subseteq V, \ C \in \calK_{s}(X)$. \begin{enumerate}[label=(\roman*),ref=(\roman*)] \item Suppose $V \in \calO_{s}^{*}(X)$. If $V \setminus C$ is connected then $$V = C \sqcup W \mbox{ where } W \in \calO_{ss}^{*}(X).$$ If $ V \setminus C$ is disconnected then $$ V = C \sqcup \bigsqcup_{i=1}^n V_i \mbox{ where } V_i \in \calO_{s}^{*}(X), \ i=1, \ldots, n. $$ \item Suppose $ V \in \calO_{ss}^{*}(X)$. Then $$ V = C \sqcup \bigsqcup_{i=1}^n V_i \mbox{ where } V_i \in \calO_{ss}^{*}(X), \ i=1, \ldots, n. $$ \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate}[label=(\roman*),ref=(\roman*)] \item Suppose $V \in \calO_{s}^{*}(X)$ and let $$X \setminus V = \bigsqcup_{s \in S} F_s$$ be the decomposition into connected components, so $S$ is a finite index set and each $F_s$ is unbounded. If $V \setminus C$ is connected then taking $W = V \setminus C$ we see that $$X \setminus W = X \setminus (V \setminus C) = (X \setminus V) \sqcup C =C \sqcup \bigsqcup_{s \in S} F_s $$ has finitely many components, i.e. $W \in \calO_{ss}^{*}(X)$. \\ Now assume that $V \setminus C$ is not connected.
By Lemma \ref{LeSolidInV} and Remark \ref{bddInV} $C \in \calC_{s}(V)$ and is bounded in $V$. The set $ V \setminus C$ is also disconnected in $V$, so using Remark \ref{ReFinNoComp} let $$V \setminus C = \bigsqcup_{i=1}^n V_i, \ n \ge 2 $$ be the decomposition into connected (unbounded in $V$) components in $V$. Each $V_i$ is connected in $X$. To show that each $V_i \in \calO_{s}^{*}(X)$ we only need to check that the components of $X \setminus V_i$ are unbounded. For simplicity, we shall show it for $V_1$. For $2 \le j \le n$ by Lemma \ref{LeSolidInV} and Remark \ref{bddInV} $\overline{ V_j}$ intersects $X\setminus V$, hence, intersects some $F_s$. Let $S_1 = \{ s\in S: F_s \cap \overline{ V_j} \neq \O \mbox{ for some } 2 \le j \le n \} $. By Remark \ref{OpenComp} and Lemma \ref{CmpCmplBdA} the set $(\bigsqcup_{s \in S_1} F_s \sqcup C \sqcup \bigsqcup_{j=2}^n V_j )$ is connected. It is also unbounded. Now \begin{eqnarray*} X \setminus V_1 &=& (X \setminus V) \sqcup (V \setminus V_1) \\ &=& \bigsqcup_{s \in S} F_s \sqcup C \sqcup \bigsqcup_{j=2}^n V_j \\ &=& (\bigsqcup_{s \in S_1} F_s \sqcup C \sqcup \bigsqcup_{j=2}^n V_j ) \sqcup \bigsqcup_{s \in S \setminus S_1} F_s \end{eqnarray*} Since $X \setminus V_1$ is the disjoint union of connected unbounded sets, it follows that $V_1$ is solid. \item Suppose $V \in \calO_{ss}^{*}(X)$ and let $\bigsqcup_{j=1}^k F_j$ be the components of $X \setminus V$. By Lemma \ref{LeSolidInV} and Remark \ref{bddInV} $C \in \calC_{s}(V)$ and is bounded in $V$. Let $$V \setminus C = \bigsqcup_{i=1}^n V_i, \ n \ge 1 $$ be the decomposition into connected components in $V$ according to Remark \ref{ReFinNoComp}. Each $V_i$ is connected in $X$, and to show that each $V_i \in \calO_{ss}^{*}(X)$ we only need to check that $X \setminus V_i$ has finitely many components. For simplicity, we shall show it for $V_1$. 
We have: $$ X \setminus V_1 = (X \setminus V) \sqcup (V \setminus V_1) = \bigsqcup_{j=1}^k F_j \sqcup C \sqcup \bigsqcup_{i\ne 1} V_i .$$ Since $X \setminus V_1$ is a finite disjoint union of connected sets, the number of components of $ X \setminus V_1$ is finite, so $V_1 \in \calO_{ss}^{*}(X)$. \end{enumerate} \end{proof} \begin{lemma} \label{LeDecompU} Let $X$ be locally compact, connected, locally connected. Suppose $ C \subseteq U, \ \ C \in \calK_{c}(X), \ \ U \in \calO_{s}^{*}(X)$. If $\ U \setminus \tilde C$ is disconnected then $$ U = C \sqcup \bigsqcup_{s \in S} V_s, \ \ \ V_s \in \calO_{s}^{*}(X).$$ If $ \ U \setminus \tilde C$ is connected then $$ U = C \sqcup \bigsqcup_{s \in S} V_s \sqcup W, \ \ \ V_s \in \calO_{s}^{*}(X), \ W \in \calO_{ss}^{*}(X).$$ \end{lemma} \begin{proof} Note first that $\tilde C \in \calK_{s}(X)$ and $\tilde C \subseteq U$ by Lemma \ref{PrSolidHuLC}. Assume that $U \setminus \tilde C$ is disconnected. By Lemma \ref{LeDecompV} we may write $ U = \tilde C \sqcup \bigsqcup_{i=1}^n U_i, \ \ U_i \in \calO_{s}^{*}(X)$. But $\tilde C = C \sqcup \bigsqcup_{\alpha} V_{\alpha}$, where $V_{\alpha}$ are bounded components of $ X \setminus C$, so by Lemma \ref{SolidCompoLC} each $V_{\alpha} \in \calO_{s}^{*}(X)$. After reindexing, one may write $$ U = C \sqcup \bigsqcup_{s \in S} V_s, \ \ \ V_s \in \calO_{s}^{*}(X).$$ The proof for the case when $U \setminus \tilde C$ is connected follows similarly from Lemma \ref{LeDecompV}. \end{proof} \begin{lemma} \label{finiteT} Let $X$ be locally compact, connected, locally connected. Suppose that \[ V = \bigsqcup_{j=1}^m C_j \sqcup \bigsqcup_{t \in T} U_t \] where $V \in \calO_{ss}^{*}(X), \ C_j \in \calK_{s}(X), \ U_t \in \calO_{c}^{*}(X)$. Then $T$ is finite. \end{lemma} \begin{proof} The proof is by induction on $m$. Let $m=1$. Using Lemma \ref{LeDecompV} we have \[ V \setminus C_1 = \bigsqcup_{i=1}^n V_i = \bigsqcup_{t \in T} U_t.\] Since the sets $V_i$ and $U_t$ are open and connected, each $V_i$ must coincide with a single $U_t$ (a connected set cannot be partitioned into two or more non-empty disjoint open sets), so $T$ must be finite.
Now let $V = \bigsqcup_{j=1}^m C_j \sqcup \bigsqcup_{t \in T} U_t $ and assume that the result holds for any bounded open semi-solid set which contains fewer than $m$ compact solid sets. Using Lemma \ref{LeDecompV} we see that \[ V = C_1 \sqcup \bigsqcup_{i=1}^n V_i =C_1 \sqcup \bigsqcup_{j=2}^m C_j \sqcup \bigsqcup_{t \in T} U_t,\] where $V_i \in \calO_{ss}^{*}(X)$. All involved sets are connected, so each set $V_i$ is the disjoint union of sets from the collection $\{ C_2, \ldots, C_m, U_t, t \in T \}$. By the induction hypothesis each $V_i$ contains only finitely many sets of the collection, and it follows that $T$ is finite. \end{proof} \begin{lemma} \label{finiteSP} Let $X$ be locally compact, connected, locally connected. If $A = \bigsqcup_{t \in T} A_t, \ \ A , A_t \in \calA_{s}^{*}(X)$ with at most finitely many $A_t \in \calK_{s}(X)$ then $T$ is finite. \end{lemma} \begin{proof} Assume first that $A \in \calO_{s}^{*}(X)$. If $|T| > 1$ then there must be a compact solid set among the $A_t$ (otherwise the connected set $A$ would be a disjoint union of more than one non-empty open set), and the result follows from Lemma \ref{finiteT}. Assume now that $A \in \calK_{s}(X)$ and write \[ A = \bigsqcup_{j=1}^m C_j \sqcup \bigsqcup_{t \in T} U_t, \] where $C_j \in \calK_{s}(X), \ U_t \in \calO_{s}^{*}(X)$. We need to show that $T$ is finite. By Lemma \ref{opensolid} choose $ V \in \calO_{s}^{*}(X)$ such that $A \subseteq V$. Then from Lemma \ref{LeDecompV} we may write $V \setminus A = \bigsqcup_{i=1}^n V_i$, where $ V_i \in \calO_{ss}^{*}(X)$. Then \[ V =\bigsqcup_{j=1}^m C_j \sqcup \bigsqcup_{t \in T} U_t \sqcup \bigsqcup_{i=1}^n V_i ,\] and by Lemma \ref{finiteT} $T$ is finite. \end{proof} \begin{remark} Lemma \ref{LeNoUnbdComp}, Lemma \ref{PrSolidHuLC}, Lemma \ref{opensolid}, and Lemma \ref{LeSolidInV} are close to Lemmas 3.5, 3.6, 3.8, 3.9, and 4.2 in \cite{Aarnes:LC}. Lemma \ref{LeCleverSet} is related to a part in the proof of Lemma 5.9 in \cite{Aarnes:LC}.
The case ``$V \setminus C$ is disconnected'' in the first part of Lemma \ref{LeDecompV} is Lemma 4.3 in \cite{Aarnes:LC}, and Lemma \ref{finiteSP} is an expanded (to compact sets as well) version of Lemma 4.4 in \cite{Aarnes:LC}. In all instances our proofs are modified, expanded, or different, compared to the proofs in \cite{Aarnes:LC}. \end{remark} \section{Definition and basic properties of topological measures on locally compact spaces} \label{TM} \begin{Definition}\label{DeTMLC} A topological measure on $X$ is a set function $\mu: \calC(X) \cup \calO(X) \to [0,\infty]$ satisfying the following conditions: \begin{enumerate}[label=(TM\arabic*),ref=(TM\arabic*)] \item \label{TM1} if $A,B, A \sqcup B \in \calK(X) \cup \calO(X) $ then $ \mu(A\sqcup B)=\mu(A)+\mu(B); $ \item \label{TM2} $ \mu(U)=\sup\{\mu(K):K \in \kx, \ K \subseteq U\} $ for $U\in\calO(X)$; \item \label{TM3} $ \mu(F)=\inf\{\mu(U):U \in \calO(X), \ F \subseteq U\} $ for $F \in \calC(X)$. \end{enumerate} \end{Definition} \begin{remark} It is important that in Definition \ref{DeTMLC} condition \ref{TM1} holds for sets from $\calK(X) \cup \calO(X)$. In fact, \ref{TM1} may fail on $\calC(X) \cup \calO(X)$. See Example \ref{puncdisk} or Example \ref{linetm} below. \end{remark} The following result gives some immediate properties of topological measures on locally compact spaces. \begin{lemma} \label{propTMLC} The following is true for a topological measure $\mu$: \begin{enumerate}[label=(t\arabic*),ref=(t\arabic*)] \item \label{l1} $\mu$ is monotone, i.e. if $ A \subseteq B, \ A, B \in \calC(X) \cup \calO(X)$ then $\mu(A) \le \mu(B)$. \item \label{smooth} If $U_s \nearrow U$ is an increasing net, where $U_s, U \in \calO(X)$, then $\mu(U_s) \nearrow \mu(U)$. In particular, $\mu$ is additive on $\calO(X)$. \item \label{l5} $\mu( \O) = 0$.
\item \label{kl} If $V \sqcup K \subseteq U$, where $U , V \in \calO(X), \ K \in \kx$ then $\mu(V) + \mu(K) \le \mu(U).$ \item \label{l2} If $\mu$ is compact-finite then $\mu(A) < \infty$ for each $A \in \calA^{*}(X)$. $\mu$ is finite (i.e. $\mu(X) < \infty$) iff $\mu$ is real-valued. \item \label{CoRegulLC} If $X$ is locally compact, locally connected then for any $U \in \calO(X)$ $$\mu(U)=\sup \{ \mu(C): \ C \in \calK_{0}(X), \ C\subseteq U\}.$$ \item \label{l8} If $X$ is connected then $$ \mu(X) = \sup\{ \mu(K) : \ K \in \calK_{c}(X) \} .$$ If $X$ is locally compact, connected, locally connected then also $$ \mu(X) = \sup\{ \mu(K) : \ K \in \calK_{s}(X) \} .$$ \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate}[label=(t\arabic*),ref=(t\arabic*)] \item The monotonicity is immediate from Definition \ref{DeTMLC} if the sets $A$ and $B$ are both open or both closed. It is also easy to show the monotonicity in the case when one of the sets is open and the other one is closed. \item Suppose $U_s \nearrow U, \ U_s, U \in \calO(X)$. Let $K \subseteq U$ be compact. By Remark \ref{netsSETS}, there is $t \in S$ such that $K \subseteq U_s$ for all $s \ge t$. Then $\mu(K) \le \mu(U_s) \le \mu(U)$ for all $s \ge t$, and we see from the inner regularity (whether $ \mu(U) < \infty$ or $ \mu(U) =\infty$) that $\mu(U_s) \nearrow \mu(U)$. \item Easy to see since $ \mu$ is not identically $ \infty$. \item Easy to see from part \ref{TM2} of Definition \ref{DeTMLC}. \item If $U$ is an open bounded set then $ \mu(U) \le \mu(\overline U) < \infty$. The second statement is obvious. \item By Lemma \ref{LeCCoU} for arbitrary $K \subseteq U, \ K \in \kx, \ U \in \calO(X)$ there is $C \in \calK_{0}(X)$ with $ K \subseteq C \subseteq U$. 
By monotonicity $ \mu(K) \le \mu(C) \le \mu(U).$ Then \begin{eqnarray*} \mu(U) &=& \sup\{ \mu(K): \ K \in \kx, \ K \subseteq U\} \\ &\le& \sup\{ \mu(C): \ C \in \calK_{0}(X), \ C \subseteq U\} \le \mu(U). \end{eqnarray*} \item Follows from Lemma \ref{LeConLC} and Lemma \ref{PrSolidHuLC}. \end{enumerate} \end{proof} \begin{proposition} \label{PrFinAddLC} Let $X$ be locally compact. A set function $\mu: \calO(X) \cup \calC(X) \rightarrow [0,\infty] $ satisfying \ref{TM2} and \ref{TM3} of Definition \ref{DeTMLC} also satisfies \ref{TM1} if the following conditions hold: \begin{enumerate}[label=(c\arabic*),ref=(c\arabic*)] \item \label{usl1} $\mu(U \sqcup V ) = \mu(U) + \mu(V) $ for any disjoint open sets $U,V$; \item \label{usl2} $\mu(U) = \mu(K) + \mu(U \setminus K) $ whenever $K \subseteq U, \ K \in \kx, \ U \in \calO(X).$ \end{enumerate} \end{proposition} \begin{proof} Our proof is an expanded version of the proof of Proposition 2.2 in \cite{Alf:ReprTh}, where the result first appeared for compact-finite topological measures. Suppose that $\mu$ is a set function satisfying \ref{TM2}, \ref{TM3} as well as conditions \ref{usl1} and \ref{usl2}. We need to show that $\mu$ satisfies \ref{TM1}. $X$ is completely regular, so it is evident from \ref{usl1} and \ref{TM3} that $\mu$ is finitely additive on $\calO(X)$ and on $\kx$. Hence, we only need to check \ref{TM1} in the situation when $ A \in \kx, \ B \in \calO(X)$, and $ A \sqcup B$ is either compact or open. If $ A \sqcup B$ is open then using condition \ref{usl2} we get: $$ \mu(A \sqcup B) = \mu((A \sqcup B) \setminus A) + \mu(A) = \mu(B) + \mu(A) .$$ Now suppose $A \sqcup B \in \kx$. Note that \ref{TM3} implies monotonicity of $\mu$ on $\kx$. Let $ C \in \kx, \ C \subseteq B$. Then finite additivity and monotonicity of $\mu$ on $\kx$ give: $$ \mu(A) + \mu(C) = \mu(A \sqcup C) \le \mu(A \sqcup B).$$ By \ref{TM2} $$ \mu(A) + \mu(B) \le \mu(A \sqcup B).$$ Now we will show the opposite inequality. 
It is obvious if $\mu(A) = \infty$, so let $\mu(A) < \infty$, and for $ \epsilon >0$ pick $ U \in \calO(X)$ such that $A \subseteq U$ and $\mu(U) < \mu(A) + \epsilon$. Then the compact set $A \sqcup B$ is contained in the open set $B \cup U$. Also, the compact set $ (A \sqcup B) \setminus U = B \setminus U$ is contained in $ B \cup U$, and $ (B \cup U) \setminus (B \setminus U) = U$. Applying \ref{TM2} and then condition \ref{usl2} we see that \begin{eqnarray*} \mu(A \sqcup B) &\le& \mu(B \cup U) = \mu((B \cup U) \setminus (B\setminus U)) + \mu(B \setminus U) \\ &=& \mu(U) + \mu(B \setminus U) \le \mu(U) + \mu(B) \\ &\le& \mu(A) + \mu(B) + \epsilon. \end{eqnarray*} Since $\epsilon > 0$ was arbitrary, $$ \mu(A \sqcup B) \le \mu(A) + \mu(B) .$$ This finishes the proof. \end{proof} \begin{remark} \label{dualwrong} The condition \ref{usl2} of Proposition \ref{PrFinAddLC}, $ \mu(U) = \mu(K) + \mu(U \setminus K) $ for $U$ open and $K$ compact, is a very useful one. Of course, any topological measure satisfies this condition. It is interesting to note that the analogous condition for a bounded open subset of a closed set fails for topological measures: the equality \[ \mu(F) = \mu(U) + \mu(F \setminus U), \] where $F$ is closed and $U$ is open and bounded, is not true in general, as Example \ref{linetm} below shows. \end{remark} \section{Solid set functions} \label{SSF} Our goal now is to extend a set function defined on a smaller collection of subsets of $X$ than $\calO(X) \cup \calC(X)$ to a topological measure on $X$. One convenient such collection is the collection of bounded open solid sets and compact solid sets; the corresponding set function is called a solid set function. 
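To fix ideas, here is one simple example (a sketch only; it is not used in the sequel). Let $X = \mathbb{R}^2$, fix a point $x_0 \in X$, and for $A \in \calA_{s}^{*}(X)$ set \[ \lambda(A) = \begin{cases} 1, & x_0 \in A,\\ 0, & x_0 \notin A. \end{cases} \] Conditions \ref{superadd} and \ref{solidparti} of Definition \ref{DeSSFLC} below hold because at most one set of a disjoint collection of solid sets, and exactly one piece of a solid partition, contains $x_0$, and the regularity conditions \ref{regul} and \ref{regulo} can be checked directly in $\mathbb{R}^2$. The topological measure to which this $\lambda$ extends by the construction below is the point mass at $x_0$.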
\begin{definition} \label{DeSSFLC} A function $ \lambda: \calA_{s}^{*}(X) \rightarrow [0, \infty) $ is a solid set function on $X$ if \begin{enumerate}[label=(s\arabic*),ref=(s\arabic*)] \item \label{superadd} whenever $\bigsqcup\limits_{i=1}^n C_i \subseteq C, \ \ C, C_i \in \calK_{s}(X)$, we have $ \sum\limits_{i=1}^n \lambda(C_i) \le \lambda(C)$; \item \label{regul} $ \lambda(U) = \sup \{ \lambda(K): \ K \subseteq U , \ K \in \calK_{s}(X) \}$ for $U \in \calO_{s}^{*}(X)$; \item \label{regulo} $ \lambda(K) = \inf \{ \lambda(U) : \ K \subseteq U, \ U \in \calO_{s}^{*}(X) \}$ for $ K \in \calK_{s}(X)$; \item \label{solidparti} if $A = \bigsqcup_{i=1}^n A_i, \ \ A , A_i \in \calA_{s}^{*}(X)$ then $ \lambda(A) = \sum\limits_{i=1}^n \lambda (A_i)$. \end{enumerate} \end{definition} \begin{lemma} \label{PrPropSsfLC} Let $X$ be locally compact, connected, locally connected. Suppose $\lambda$ is a solid set function on $X$. Then \begin{itemize} \item[(i)] $\lambda(\O) = 0$ \item[(ii)] if $ \bigsqcup_{s \in S} A_s \subseteq A, $ where $A_s, A \in \calA_{s}^{*}(X)$, then $\sum_{s \in S } \lambda(A_s)\le \lambda(A)$ \end{itemize} \end{lemma} \begin{proof} From Definition \ref{DeSSFLC} we see that $\lambda(\O) = 0$. Now let $ \bigsqcup_{s \in S} A_s \subseteq A, $ where $A_s, A \in \calA_{s}^{*}(X)$. Since $\sum_{s \in S } \lambda(A_s) = \sup \{ \sum_{s \in S'} \lambda(A_s) : \ S' \subseteq S, \ S' \mbox{ is finite } \}$, it is enough to assume that $S$ is finite. By regularity in Definition \ref{DeSSFLC} we may take all sets $A_s$ to be disjoint compact solid. If also $A \in \calK_{s}(X)$, the assertion is just part \ref{superadd} of Definition \ref{DeSSFLC}. If $A \in \calO_{s}^{*}(X)$ then there exists $ C \in \calK_{s}(X)$ such that $\bigsqcup_{s \in S } A_s \subseteq C \subseteq A$ by Lemma \ref{LeCsInside}. Now the assertion follows from parts \ref{superadd} and \ref{regul} of Definition \ref{DeSSFLC}. 
\end{proof} \section{Extension to $\calA_{ss}^{*}(X) \cup \calK_{c}(X)$} \label{ExtBssKc} We start with a solid set function $ \lambda: \calA_{s}^{*}(X) \rightarrow [0, \infty)$ on a locally compact, connected, locally connected space $X$. Our goal is to extend $\lambda$ to a topological measure on $X$. We shall do this in steps, each time extending the current set function to a new set function defined on a larger collection of sets. \begin{definition} \label{la1LC} Let $X$ be locally compact, connected, locally connected. For $A \in\calA_{ss}^{*}(X) \cup \calK_{c}(X)$ define $$ \lambda_1(A) = \lambda(\tilde{A}) - \sum_{i \in I} \lambda(B_i),$$ where $\{ B_i : \ i \in I\} $ is the family of bounded components of $X \setminus A$. \end{definition} By Lemma \ref{SolidCompoLC} each $B_i \in \calA_{s}^{*}(X)$. If $A \in\calA_{ss}^{*}(X) \cup \calK_{c}(X)$ then $\bigsqcup_{i \in I} B_i \subseteq \tilde A$ and by Lemma \ref{PrPropSsfLC} $$\sum_{i \in I} \lambda(B_i) \le \lambda(\tilde A).$$ \begin{lemma} \label{Prla1LC} The set function $\lambda_1: \calA_{ss}^{*}(X) \cup \calK_{c}(X) \rightarrow [0, \infty) $ defined in Definition \ref{la1LC} satisfies the following properties: \begin{enumerate}[label=(\roman*),ref=(\roman*)] \item \label{pa1} $\lambda_1$ is real-valued and $\lambda_1 = \lambda$ on $\calA_{s}^{*}(X).$ \item Suppose $\bigsqcup_{i=1}^n A_i \sqcup \bigsqcup_{s \in S} B_s \subseteq A$, where $A, A_i \in \calA_{ss}^{*}(X) \cup \calK_{c}(X)$ and $B_s \in \calA_{s}^{*}(X)$. 
Then $$ \sum_{i=1}^n \lambda_1(A_i) + \sum_{s \in S} \lambda_1(B_s) \le \lambda_1(A).$$ In particular, if $\bigsqcup_{i=1}^n C_i \subseteq C$ where $C_i, C \in \calK_{c}(X)$ then $$ \sum_{i=1}^n \lambda_1(C_i) \le \lambda_1(C)$$ and if $A \subseteq B, \ A,B \in \calA_{ss}^{*}(X) \cup \calK_{c}(X) $ then $$\lambda_1(A) \le \lambda_1(B).$$ \item Suppose that $\bigsqcup_{i=1}^n A_i \sqcup \bigsqcup_{s \in S} B_s = A$, where $A, A_i \in \calA_{ss}^{*}(X) \cup \calK_{c}(X)$ and $B_s \in \calA_{s}^{*}(X)$ with at most finitely many of $B_s \in \calK_{s}(X)$. Then $$ \sum_{i=1}^n \lambda_1(A_i) + \sum_{s \in S} \lambda_1(B_s) = \lambda_1(A).$$ \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate}[label=(\roman*),ref=(\roman*)] \item Obvious from Lemma \ref{PrSolidHuLC}, Definition \ref{la1LC}, and Lemma \ref{PrPropSsfLC}. \item Suppose that $\bigsqcup_{i=1}^n A_i \sqcup \bigsqcup_{s \in S} B_s \subseteq A$, where $A, A_i \in \calA_{ss}^{*}(X) \cup \calK_{c}(X)$ and $B_s \in \calA_{s}^{*}(X)$. We may assume that $A \in \calA_{s}^{*}(X)$, since the inequality \begin{eqnarray} \label{number1} \sum_{i=1}^n \lambda_1(A_i) + \sum_{s \in S} \lambda_1(B_s) \le \lambda_1(A) \end{eqnarray} is equivalent to \begin{eqnarray} \label{number2} \sum_{i=1}^n \lambda_1(A_i) + \sum_{s \in S} \lambda_1(B_s) + \sum_{t \in T} \lambda_1 (D_t) \le \lambda_1(\tilde{A}), \end{eqnarray} where $\{D_t: \ t \in T \}$ is the disjoint family of bounded components of $X \setminus A$, and by Lemma \ref{SolidCompoLC} each $D_t \in \calA_{s}^{*}(X)$. The proof is by induction on $n$. For $n=0$ the statement is Lemma \ref{PrPropSsfLC}. Suppose now $n \ge 1$ and assume the result is true for any disjoint collection (contained in a bounded solid set) of bounded semi-solid or compact connected sets among which there are less than $n$ non-solid sets. Assume now that we have $n$ disjoint sets $A_1, \ldots, A_n$ from the collection $\calA_{ss}^{*}(X) \cup \calK_{c}(X)$. 
Consider a partial order on $\{ A_1, A_2, \ldots, A_n \}$ where $A_i \le A_j$ iff $\tilde{A_i} \subseteq \tilde{A_j}$. (See Lemma \ref{PrSolidHuLC}.) Let $ A_1, \ldots, A_p$, where $ p \le n$, be the maximal elements in $\{ A_1, A_2, \ldots, A_n \}$ with respect to this partial order (after relabeling). For a maximal element $A_k, k \in \{ 1, \ldots, p\}$ define the following index sets: $$ I_k = \{ i \in \{ p+1, \ldots, n\}: \, A_i \mbox{ is contained in a bounded component of } X\setminus A_k \}, $$ $$ S_k = \{ s \in S : \ B_s \mbox{ is contained in a bounded component of } X\setminus A_k \}. $$ Let $\{ E_{\alpha} \}_{\alpha \in H} $ be the disjoint family of bounded components of $X \setminus A_k$. Then we may say that $$ I_k = \bigsqcup_{\alpha \in H} I_{k,\alpha}, \ S_k = \bigsqcup_{\alpha \in H} S_{k,\alpha} $$ where $$ I_{k, \alpha} = \{ i \in \{ p+1, \ldots, n\} : \ A_i \subseteq E_\alpha \}, $$ $$ S_{k, \alpha} = \{ s \in S : \ B_s \subseteq E_\alpha \}. $$ The set $I_k$ and each set $I_{k, \alpha}$ have cardinality less than $n$. The set $E_\alpha$ is a solid set according to Lemma \ref{SolidCompoLC}, and \begin{eqnarray} \label{zv1} \bigsqcup_{i \in I_{k, \alpha}} A_i \sqcup \bigsqcup_{s \in S_{k, \alpha}} B_s \subseteq E_\alpha. \end{eqnarray} By the induction hypothesis $$ \sum_{i \in I_{k, \alpha}} \lambda_1(A_i) + \sum_{s \in S_{k, \alpha}} \lambda_1(B_s) \le \lambda_1(E_\alpha). $$ It follows that \begin{align*} \sum_{i \in I_k} \lambda_1(A_i) + \sum_{s \in S_k} \lambda_1(B_s) &= \sum_{\alpha \in H}\left( \sum_{i \in I_{k, \alpha}} \lambda_1(A_i) + \sum_{s \in S_{k, \alpha}} \lambda_1(B_s) \right) \\ &\le \sum_{\alpha \in H} \lambda_1 (E_\alpha). \end{align*} Then using part \ref{pa1} and Definition \ref{la1LC} we have: \begin{align} \label{Aktilde} \lambda_1 (A_k) + \sum_{i \in I_k} \lambda_1(A_i) + \sum_{s \in S_k} \lambda_1(B_s) \le \lambda_1 (A_k) +\sum_{\alpha \in H} \lambda_1 (E_\alpha) = \lambda_1(\tilde{A_k}). 
\end{align} Notice that $\tilde{A_1}, \ldots, \tilde{A_p}$, the solid hulls of the maximal elements, are all disjoint by part \ref{part5} of Lemma \ref{PrSolidHuLC}. This also implies that the sets $I_k, \ k=1, \ldots, p$ are disjoint (otherwise, if $ i \in I_k$ and also $ i \in I_m, \ 1\le k \ne m \le p$ then $\tilde{A_k} \cap \tilde{A_m} \ne \O$). Similarly, the sets $S_k, \ k=1, \ldots, p$ are also disjoint. Consider the index set $$ S' = S \setminus \bigsqcup_{k=1}^p S_k.$$ Note that $ \{1, \ldots, n\} = \{1, \ldots, p\} \sqcup \bigsqcup_{k=1}^p I_k $. Indeed, if $i \in \{1, \ldots, n\} \setminus \{ 1, \ldots, p\}$ we must have $ A_i \subseteq \tilde A_i \subseteq \tilde A_k$ for some maximal element $A_k$ (where $ k \in \{1, \ldots, p\}$), and since $A_i$ and $A_k$ are disjoint, $A_i$ must be contained in a bounded component of $X \setminus A_k$, i.e. $ i \in I_k$. Now we have: \begin{eqnarray*} \sum_{i=1}^n \lambda_1(A_i) &+& \sum_{s \in S} \lambda(B_s) \\ &=& \sum_{k=1}^p \left( \lambda_1(A_k) + \sum_{i \in I_k} \lambda_1(A_i) + \sum_{s \in S_k} \lambda(B_s) \right) + \sum_{s \in S'} \lambda(B_s) \\ &\le& \sum_{k=1}^p \lambda(\tilde{A_k}) + \sum_{s \in S'} \lambda(B_s) \\ &\le& \lambda(\tilde{A}). \end{eqnarray*} The first inequality is by formula (\ref{Aktilde}), and for the last inequality we applied Lemma \ref{PrPropSsfLC}, since $ \{\tilde{A_k}\}_{k=1}^p \cup \{B_s\}_{ s \in S'}$ is a collection of disjoint solid sets contained in the solid set $A$. \item The proof is almost identical to the proof of the previous part, and we keep the same notations. Again, we may assume that $A \in \calA_{s}^{*}(X)$, since the inequalities (\ref{number1}) and (\ref{number2}) become equalities. The proof is by induction on $n$, and the case $n=0$ is given by Lemma \ref{finiteSP} and part \ref{solidparti} of Definition \ref{DeSSFLC}. 
The inequalities in the induction step become equalities once one observes that (\ref{zv1}) above becomes $\bigsqcup_{i \in I_{k, \alpha}} A_i \sqcup \bigsqcup_{s \in S_{k , \alpha}} B_s =E_{\alpha}$ (note that $\tilde A_k \subseteq A$, so $E_{\alpha} \subseteq A$). Since $\bigsqcup_{k=1}^p \tilde{A_k} \sqcup \bigsqcup_{s \in S'} B_s = A$, the last inequality in the proof of the previous part becomes an equality by Lemma \ref{finiteSP} and part \ref{solidparti} of Definition \ref{DeSSFLC}. \end{enumerate} \end{proof} \section{Extension to $\calK_{0}(X)$} \label{BCOX} Our goal now is to extend the set function $\lambda_1$ to a set function $\lambda_2$ defined on $\calK_{0}(X).$ Recall that $ K \in \calK_{0}(X)$ if $ K = \bigsqcup_{i=1}^n K_i$ where $ n \in \N$ and $K_i \in \calK_{c}(X)$ for $ i=1, \ldots, n.$ \begin{definition} \label{la2LC} For $K = \bigsqcup\limits_{i=1}^n K_i,$ where $ K_i \in \calK_{c}(X)$, let $$ \lambda_2(K ) = \sum_{i=1}^n \lambda_1(K_i). $$ \end{definition} \begin{lemma} \label{Lela2LC} The set function $\lambda_2$ from Definition \ref{la2LC} satisfies the following properties: \begin{itemize} \item[(i)] $\lambda_2$ is real-valued, $\lambda_2 = \lambda_1 $ on $\calK_{c}(X)$ and $\lambda_2 = \lambda$ on $ \calK_{s}(X)$. \item[(ii)] $\lambda_2$ is finitely additive on $\calK_{0}(X)$ \item[(iii)] $\lambda_2$ is monotone on $\calK_{0}(X).$ \end{itemize} \end{lemma} \begin{proof} The first part easily follows from the definition of $\lambda_2$ and Lemma \ref{Prla1LC}. The second part is obvious. To prove the third one, let $ C \subseteq K, $ where $C, \ K \in \calK_{0}(X)$. Write $ C= \bigsqcup\limits_{i=1}^n C_i, \ K = \bigsqcup\limits_{j=1}^m K_j,$ where the sets $C_i (i=1, \ldots, n)$ and $ K_j (j =1, \ldots, m)$ are compact connected. 
By connectivity, each $C_i$ is contained in one of the sets $K_j.$ Consider index sets $ I_j = \{ i : \ C_i \subseteq K_j \} $ for each $ j = 1, \ldots, m.$ By Lemma \ref{Prla1LC} we have $\sum_{i \in I_j} \lambda_1(C_i) \le \lambda_1(K_j).$ Then \begin{eqnarray*} \lambda_2(C) & = & \sum_{i=1}^n \lambda_1(C_i) = \sum_{j=1}^m \sum_{i \in I_j} \lambda_1(C_i) \le \sum_{j=1}^m \lambda_1 (K_j) = \lambda_2 (K) \end{eqnarray*} \end{proof} \section{Extension to $\calO(X) \cup \calC(X)$} \label{ExttoTM} We are now ready to extend the set function $\lambda_2$ to a set function $\mu$ defined on $\calO(X) \cup \calC(X).$ \begin{definition} \label{muLC} For an open set $U$ define $$ \mu(U) = \sup\{ \lambda_2(K) : \ K \subseteq U , \ K \in \calK_{0}(X) \}, $$ and for a closed set $F$ let $$ \mu(F) = \inf \{ \mu(U): \ F \subseteq U, \ U \in \calO(X) \}.$$ \end{definition} Note that $ \mu$ may assume $ \infty$. \begin{lemma} \label{PropMuLC} The set function $\mu$ in Definition \ref{muLC} satisfies the following properties: \begin{enumerate}[label=(p\arabic*),ref=(p\arabic*)] \item \label{monotoneLC} $\mu$ is monotone, i.e. if $A \subseteq B, \, A,B \in \calO(X) \cup \calC(X)$ then $ \mu(A) \le \mu(B)$. \item \label{finiteness} $\mu(A) < \infty$ for each $ A \in \calA^{*}(X)$, so $ \mu$ is compact-finite. \item \label{ineqla2} $\mu \ge \lambda_2$ on $\calK_{0}(X).$ \item \label{CoAppr} Let $ K \subseteq V, K \in \kx, \ V \in \calO(X)$. Then for any positive $\epsilon$ there exists $ K_1 \in \calK_{0}(X)$ such that $ K \subseteq K_1 \subseteq V$ and $ \mu(K_1) - \mu(K) < \epsilon.$ \item \label{extla2} $\mu = \lambda$ on $ \calA_{s}^{*}(X).$ \item\label{OpenFinAddLC} $ \mu$ is finitely additive on open sets. \item\label{CloFinAddLC} If $G = F \sqcup K$, where $G, F \in \calC(X), \ K \in \calK(X)$ then $\mu(G) = \mu(F) + \mu(K).$ In particular, $\mu$ is finitely additive on compact sets. \item \label{AddBox} $\mu$ is additive on $\calO(X)$, i.e. 
if $V = \bigsqcup\limits_{i \in I} V_i$, where $ V, \ V_i \in \calO(X)$ for all $ i \in I$, then $\mu(V) = \sum\limits_{i \in I} \mu(V_i). $ \item \label{superaddF} If $G \sqcup V = F$ where $ G, F \in \calC(X), \ V \in \calO(X)$ then $ \mu(G) + \mu(V) \le \mu(F).$ \item \label{superaddU} If $G \sqcup V \subseteq U$ where $ G \in \calC(X), \ V,U \in \calO(X)$ then $ \mu(G) + \mu(V) \le \mu(U).$ \item \label{mula1} $\mu = \lambda_1$ on $\calK_{c}(X)$ and $\mu = \lambda_2$ on $\calK_{0}(X)$. \item \label{regularityLC} $\mu(U) = \sup\{\mu(C): \ C \subseteq U , \ C \in \kx \} , \ \ \ U \in \calO(X).$ \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate}[label=(p\arabic*)] \item It is obvious that $\mu$ is monotone on open sets and on closed sets. Let $ V \in \calO(X), F \in \calC(X)$. The monotonicity in the case $ F \subseteq V$ is obvious. Suppose $ V \subseteq F$. For any open set $U $ with $ F \subseteq U$ we have $V \subseteq U$, so $ \mu(V) \le \mu(U)$. Then, taking the infimum over such sets $U$, we obtain $ \mu(V) \le \mu(F)$. \item Let $K \in \calK(X)$. By Lemma \ref{LeConLC} choose $V \in \calO_{c}^{*}(X)$ and $C \in \calK_{c}(X)$ such that $ K \subseteq V \subseteq C$. For any $D \in \calK_{0}(X)$ with $D \subseteq V$, by Lemma \ref{Lela2LC} we have $\lambda_2(D) \le \lambda_2(C)$, and $\lambda_2(C) < \infty$. By Definition \ref{muLC} $\mu(V) \le \lambda_2(C)$, and then $ \mu(K) \le \mu(V) \le \lambda_2(C) < \infty$. Thus, $ \mu$ is compact-finite. If $U$ is an open bounded set then $ \mu(U) \le \mu(\overline{U}) < \infty$. \item Let $K \in \calK_{0}(X).$ For any open set $U$ containing $K$ we have $\mu(U) \ge \lambda_2(K)$ by the definition of $\mu$. Then, again from the definition of $\mu$, $\mu(K) \ge \lambda_2(K).$ \item $\mu(K) < \infty$, so by Definition \ref{muLC} find $U \in \calO(X)$ such that $ K \subseteq U \subseteq V$ and $\mu(U) - \mu(K) < \epsilon$. Let $U_1, \ldots, U_n$ be finitely many connected components of $U$ that cover $K$. 
By Lemma \ref{LeConLC} pick $V_i \in \calO_{c}^{*}(X)$ such that $K \cap U_i \subseteq V_i \subseteq \overline{ V_i} \subseteq U_i$ for $ i =1, \ldots, n$. We may take $K_1 = \bigsqcup_{i=1}^n \overline{ V_i}$, since then $ K \subseteq K_1 \subseteq V$ and $$ \mu(K_1) - \mu(K) \le \mu(\bigsqcup_{i=1}^n U_i) - \mu(K) \le \mu(U) - \mu(K) < \epsilon .$$ \item First we shall show that $\mu= \lambda$ on $\calO_{s}^{*}(X)$. Let $U \in \calO_{s}^{*}(X)$, so by part \ref{finiteness} $ \mu(U) < \infty$. By Definition \ref{muLC}, given $\epsilon > 0$, choose $K \in \calK_{0}(X)$ such that $K \subseteq U$ and $ \mu(U) -\epsilon < \lambda_2(K)$. By Lemma \ref{LeCsInside} there exists $ C \in \calK_{s}(X)$ such that $ K \subseteq C \subseteq U$. Now using Lemma \ref{Lela2LC} and Definition \ref{DeSSFLC} we have: \begin{eqnarray*} \mu(U) - \epsilon &<& \lambda_2(K) \le \lambda_2(C) \\ &\le& \sup \{ \lambda_2(C) : \ \ C \subseteq U, \ C \in \calK_{s}(X)\} \\ &=& \sup \{ \lambda(C) : \ \ C \subseteq U, \ C \in \calK_{s}(X)\} = \lambda(U). \end{eqnarray*} Hence, $\mu(U) \le \lambda(U)$. For the opposite inequality, observe that by Lemma \ref{Lela2LC} $\lambda = \lambda_2$ on $\calK_{s}(X)$, so by Definition \ref{DeSSFLC} \begin{eqnarray*} \lambda(U) &=& \sup\{ \lambda(C): \ C \subseteq U, C \in \calK_{s}(X) \} \\ &=& \sup\{ \lambda_2 (C): \ C \subseteq U, C \in \calK_{s}(X) \} \\ &\le& \sup\{ \lambda_2 (C): \ C \subseteq U, C \in \calK_{0}(X) \} = \mu(U). \end{eqnarray*} Therefore, $\mu(U) = \lambda(U)$ for any $ U \in \calO_{s}^{*}(X)$. Now we shall show that $\mu = \lambda$ on $\calK_{s}(X)$. From part \ref{ineqla2} above and Lemma \ref{Lela2LC} we have $\mu \ge \lambda_2 = \lambda$ on $\calK_{s}(X)$. 
Since $\mu = \lambda$ on $\calO_{s}^{*}(X)$, for $C \in \calK_{s}(X)$ we have by Definition \ref{DeSSFLC} and Definition \ref{muLC}: \begin{eqnarray*} \lambda(C) &=& \inf \{ \lambda(U): \ \ U \in \calO_{s}^{*}(X), \ C \subseteq U \} \\ &=& \inf \{ \mu(U): \ \ U \in \calO_{s}^{*}(X), \ C \subseteq U \} \\ &\ge& \inf \{ \mu(U): \ \ U \in \calO(X) , \ C \subseteq U \} = \mu(C). \end{eqnarray*} Therefore, $\mu = \lambda $ on $\calK_{s}(X)$. \item Let $U_1 , U_2 \in \calO(X)$ be disjoint. For any $C_1, C_2 \in \calK_{0}(X)$ with $ C_i \subseteq U_i, \ i=1,2$ we have by Lemma \ref{Lela2LC} and Definition \ref{muLC} $$ \lambda_2(C_1) + \lambda_2(C_2) = \lambda_2(C_1 \sqcup C_2) \le \mu(U_1 \sqcup U_2).$$ Then by Definition \ref{muLC} we obtain $$ \mu(U_1) + \mu(U_2) \le \mu(U_1 \sqcup U_2).$$ For the converse inequality, note that given $ C \subseteq U_1 \sqcup U_2, \ C \in \calK_{0}(X)$ we have $C_i = C \cap U_i \in \calK_{0}(X), \ i =1,2$ (since each connected component of $C$ must be contained either in $U_1$ or in $U_2$) and $C = C_1 \sqcup C_2$. Then $$ \lambda_2(C) = \lambda_2(C_1) + \lambda_2(C_2) \le \mu(U_1) + \mu(U_2),$$ giving $$ \mu(U_1 \sqcup U_2) \le \mu(U_1) + \mu(U_2).$$ \item Let $C_1$ be a compact set and $C_2$ a closed set with $C_1 \cap C_2 = \O$. Given $ U \in \calO(X), \ C_1 \sqcup C_2 \subseteq U$ we may find disjoint open sets $U_1, U_2$ such that $$ U_1 \sqcup U_2 \subseteq U, \ \ \ C_i \subseteq U_i, \ \ i=1,2.$$ Then by parts \ref{OpenFinAddLC} and \ref{monotoneLC} $$ \mu(C_1) + \mu(C_2) \le \mu(U_1) + \mu(U_2) = \mu(U_1 \sqcup U_2) \le \mu(U), $$ so using Definition \ref{muLC} we have $$ \mu(C_1) + \mu(C_2) \le \mu(C_1 \sqcup C_2).$$ For the converse inequality, observe that for any $U_1, U_2 \in \calO(X)$ such that $C_i \subseteq U_i, \ i=1,2$ one may find disjoint open sets $V_1, V_2$ with $C_i \subseteq V_i \subseteq U_i, \ i=1,2$. 
Then by parts \ref{OpenFinAddLC} and \ref{monotoneLC} $$ \mu(C_1 \sqcup C_2) \le \mu(V_1 \sqcup V_2) = \mu(V_1) + \mu(V_2) \le \mu(U_1) + \mu(U_2), $$ which gives by Definition \ref{muLC} $$ \mu(C_1 \sqcup C_2) \le \mu(C_1) + \mu(C_2).$$ \item Let $ V = \bigsqcup\limits_{i \in I} V_i$ with $V, V_i \in \calO(X)$ for all $ i \in I$. By parts \ref{OpenFinAddLC} and \ref{monotoneLC} for any finite $I' \subseteq I$ $$ \sum_{i \in I'} \mu(V_i) = \mu(\bigsqcup_{i \in I'} V_i) \le \mu(V). $$ Then $\sum_{i \in I} \mu(V_i) \le \mu(V).$ To prove the opposite inequality, first assume that $ \mu(V) < \infty$. For $\epsilon >0 $ find a compact $C \in \calK_{0}(X)$ contained in $V$ such that $ \mu(V) - \epsilon < \lambda_2(C).$ By compactness, $ C \subseteq \bigsqcup\limits_{i \in I'} V_i$ for some finite subset $I'$ of $I$. Then $C = \bigsqcup\limits_{i \in I'} C_i$ where $C_i = C \cap V_i \subseteq V_i$, and $C_i \in \calK_{0}(X)$ for each $i \in I'.$ By Lemma \ref{Lela2LC} and part \ref{ineqla2} we have: \begin{eqnarray*} \mu(V)-\epsilon &<& \lambda_2(C) = \lambda_2(\bigsqcup_{i \in I'} C_i) = \sum_{i \in I'} \lambda_2(C_i) \le \sum_{i \in I'} \mu(C_i) \\ & \le & \sum_{i \in I'} \mu(V_i) \le \sum_{i \in I} \mu(V_i). \end{eqnarray*} Therefore, $\mu(V) \le \sum\limits_{i \in I} \mu(V_i)$. This shows that $\mu(V) = \sum\limits_{i \in I} \mu(V_i)$ when $ \mu(V) < \infty$. Now suppose $ \mu(V) = \infty$. For $ n \in \N$ find a compact $ K \subseteq V$ such that $ \mu(K) > n$. Choose a finite index set $I_n \subseteq I$ such that $ K \subseteq \bigsqcup_{i \in I_n} V_i$. Then $$ \sum_{i \in I} \mu(V_i) \ge \sum_{i \in I_n} \mu(V_i) = \mu (\bigsqcup_{i \in I_n} V_i) \ge \mu(K) > n.$$ It follows that $ \sum_{i \in I} \mu(V_i) = \infty = \mu(V)$. \item It is enough to show the statement for the case $ \mu(F) < \infty$. If $K \subseteq V, \ K \in \calK_{0}(X)$ then $ G \sqcup K \subseteq F$. 
By parts \ref{ineqla2}, \ref{CloFinAddLC} and \ref{monotoneLC} $\ \mu(G) + \lambda_2(K) \le \mu(G) + \mu(K) \le \mu(F)$. Then $ \mu(G) + \mu(V) \le \mu(F)$. \item It is enough to show the statement for the case $ \mu(U) < \infty$. If $K \subseteq V, \ K \in \calK_{0}(X)$ then $F= G \sqcup K \subseteq U$. By parts \ref{ineqla2}, \ref{CloFinAddLC}, and Definition \ref{muLC} $\mu(G) + \lambda_2(K) \le \mu(G) + \mu(K) = \mu(F) \le \mu(U)$. Then $ \mu(G) + \mu(V) \le \mu(U)$. \item Let $C \in \calK_{c}(X)$. According to Lemma \ref{SolidCompoLC} and Definition \ref{solid hull} write $\tilde C \in \calK_{s}(X)$ as $ \tilde C = C \sqcup \bigsqcup_{i \in I} U_i$ where $U_i \in \calO_{s}^{*}(X)$ are the bounded components of $X \setminus C$. Given $\epsilon>0$ choose by Definition \ref{DeSSFLC} $V \in \calO_{s}^{*}(X)$ such that $\tilde C \subseteq V$ and $ \lambda(V) -\lambda(\tilde C) < \epsilon$. By parts \ref{AddBox}, \ref{superaddF}, and \ref{monotoneLC} $$ \mu(C) + \sum_{i\in I} \mu(U_i) = \mu(C) + \mu (\bigsqcup_{i\in I} (U_i)) \le \mu(\tilde C) \le \mu(V).$$ Then using part \ref{extla2} and Definition \ref{la1LC} we have: \begin{align*} \mu(C) &\le \mu(V) - \sum_{i \in I} \mu(U_i) = \lambda(V) - \sum_{i\in I} \lambda(U_i) \\ &\le \lambda(\tilde C) - \sum_{i\in I} \lambda(U_i) + \epsilon = \lambda_1(C) + \epsilon \end{align*} Thus, $\mu(C) \le \lambda_1(C)$. By part \ref{ineqla2} and Lemma \ref{Lela2LC} $\mu(C) \ge \lambda_2(C) =\lambda_1(C)$. So $\mu =\lambda_1$ on $\calK_{c}(X)$. From part \ref{CloFinAddLC} and Definition \ref{la2LC} we have $\mu = \lambda_2$ on $\calK_{0}(X)$. 
\item Using part \ref{ineqla2}, \begin{eqnarray*} \mu(U) &=& \sup \{\lambda_2(C) : C \subseteq U , \ C \in \calK_{0}(X) \} \\ &\le& \sup \{\mu(C) : C \subseteq U , \ C \in \calK_{0}(X) \} \\ &\le& \sup \{\mu(C) : C \subseteq U , \ C \in \kx \}. \end{eqnarray*} For the converse inequality, given $ C \subseteq U, \ U \in \calO(X), \ C \in \kx$ choose by Lemma \ref{LeCCoU} $K \in \calK_{0}(X)$ with $ C \subseteq K \subseteq U$. Then by parts \ref{monotoneLC} and \ref{mula1} $\mu(C) \le \mu(K) = \lambda_2(K)$, so \begin{eqnarray*} \sup \{\mu(C) : C \subseteq U , \ C \in \kx \} &\le& \sup \{\lambda_2(K) : K \subseteq U , \ K \in \calK_{0}(X) \} \\ &=& \mu(U). \end{eqnarray*} \end{enumerate} \end{proof} \begin{lemma}\label{BigLemmaLC} For the set function $\mu$ in Definition \ref{muLC} $$ \mu(U) = \mu(K) + \mu(U \setminus K) $$ whenever $ K \subseteq U, \ K \in \kx, \ U \in \calO(X)$. \end{lemma} \begin{proof} We shall prove the statement in steps. Recall that $\mu = \lambda_1$ on $\calK_{c}(X)$ and $ \mu = \lambda_2$ on $\calK_{0}(X)$ by part \ref{mula1} of Lemma \ref{PropMuLC}. \\ STEP 1. We shall show that $\mu(U) = \mu(C) + \mu(U \setminus C) $ whenever $C \subseteq U, \ U \in \calO_{ss}^{*}(X), \ C = C_1 \sqcup \ldots \sqcup C_n, \ C_j \in \calK_{s}(X).$ \\ Let $C = C_1 \sqcup C_2 \sqcup \ldots \sqcup C_n$, where each $C_j \in \calK_{s}(X)$. The proof is by induction on $n$. Suppose $n=1$, i.e. $C \in \calK_{s}(X)$. By Lemma \ref{LeDecompV} \begin{eqnarray*} U = C \sqcup \bigsqcup_{i=1}^m U_i \end{eqnarray*} where each $U_i \in \calO_{ss}^{*}(X)$. By Lemma \ref{Prla1LC} $$ \mu(U) = \mu(C) + \sum_{i=1}^m \mu (U_i). $$ Then $$ \mu(U) - \mu(C) = \sum_{i=1}^m \mu (U_i) = \mu( U \setminus C), $$ where the last equality follows from additivity of $\mu$ on $\calO(X) $ in Lemma \ref{PropMuLC}. Suppose that the result holds for all $ C \subseteq U, \ U \in \calO_{ss}^{*}(X)$ where $C$ is the disjoint union of fewer than $n$ sets $C_j \in \calK_{s}(X)$. 
Now let $C = C_1 \sqcup C_2 \sqcup \ldots \sqcup C_n$, where each $C_j \in \calK_{s}(X)$. By Lemma \ref{LeDecompV} \begin{eqnarray} \label{UCS1} U \setminus C_1 = \bigsqcup_{i=1}^m U_i \end{eqnarray} where each $U_i \in \calO_{ss}^{*}(X)$. By connectivity each $C_j, \ j=2, \ldots, n$ is contained in one of the sets $U_i$. For $ i =1, \ldots, m$ let $K_i$ be the disjoint union of those $C_j, \ j \in \{2, \ldots, n\} $ that are contained in $U_i$. Notice that each $K_i$ is the union of no more than $n-1$ disjoint sets, and $\bigsqcup_{i=1}^m K_i = \bigsqcup_{j=2}^n C_j. $ By induction hypothesis, \begin{align} \label{inductS1} \mu(U_i) = \mu(U_i \setminus K_i) + \mu(K_i). \end{align} By finite additivity of $\mu$ on compact sets in Lemma \ref{PropMuLC} \begin{align} \label{unionS1} \mu(C) = \mu(\bigsqcup_{j=2}^n C_j) + \mu(C_1) = \mu(\bigsqcup_{i=1}^m K_i) + \mu(C_1) = \sum_{i=1}^m \mu(K_i) + \mu(C_1). \end{align} Also we have \begin{align} \label{UKS1} U \setminus C = (U \setminus C_1) \setminus \bigsqcup_{j=2}^n C_j = (\bigsqcup_{i=1}^m U_i) \setminus (\bigsqcup_{i=1}^m K_i ) = \bigsqcup_{i=1}^m (U_i \setminus K_i ). \end{align} By the first part of the induction proof $$ \mu(U) = \mu(U \setminus C_1) + \mu (C_1). $$ Using (\ref{UCS1}), additivity of $\mu$ on $\calO^{*}(X)$ in Lemma \ref{PropMuLC}, (\ref{inductS1}), (\ref{unionS1}), and (\ref{UKS1}) we obtain: \begin{eqnarray*} \mu(U) &=& \mu(U \setminus C_1) + \mu(C_1) \\ &=& \mu(\bigsqcup_{i=1}^m U_i) + \mu(C_1) \\ &=& \sum_{i=1}^m \mu(U_i) + \mu(C_1) \\ &=& \sum_{i=1}^m \mu(U_i \setminus K_i) +\sum_{i=1}^m \mu(K_i) + \mu(C_1) \\ &=& \sum_{i=1}^m \mu(U_i \setminus K_i) + \mu(C) \\ &=& \mu(U\setminus C) + \mu(C) \end{eqnarray*} STEP 2. We shall show that $\mu(U) = \mu(C) + \mu(U \setminus C) $ whenever $C \subseteq U, \ C \in \calK_{0}(X), \ U \in \calO_{s}^{*}(X).$ \\ Let $C = C_1 \sqcup C_2 \sqcup \ldots \sqcup C_n$, where each $C_i \in \calK_{c}(X)$. The proof is by induction on $n$. Suppose $n=1$, i.e. 
$C \in \calK_{c}(X)$. By Lemma \ref{LeDecompU} $$ U = C \sqcup W \sqcup \bigsqcup_{s \in S} V_s $$ where $V_s \in \calO_{s}^{*}(X), \ W \in \calO_{ss}^{*}(X)$ ($W$ may be empty). By Lemma \ref{Prla1LC} $$ \mu(U) = \mu(C) + \sum_{s \in S} \mu (V_s) + \mu(W). $$ Then \[ \mu(U) - \mu(C) = \sum_{s \in S} \mu (V_s) + \mu(W) = \mu( U \setminus C), \] where the last equality follows from additivity of $\mu$ on $\calO(X)$ in Lemma \ref{PropMuLC}. Suppose that the result holds for all $ C \subseteq U, \ U \in \calO_{s}^{*}(X), \ C \in \calK_{0}(X)$ where $C$ is the disjoint union of fewer than $n$ sets $C_i \in \calK_{c}(X)$. Now assume that $C = C_1 \sqcup \ldots \sqcup C_n, \ C_i \in \calK_{c}(X)$. As in the proof of Lemma \ref{Prla1LC}, consider the partial order on $\{ C_1, \ldots, C_n \}$ where $C_i \le C_j$ iff $ \tilde C_i \subseteq \tilde C_j$. Some parts of the argument here are as in the proof of Lemma \ref{Prla1LC}. Let $C_1, \ldots, C_p, \ p \le n$ be the maximal elements in $\{ C_1, \ldots, C_n \}$ with respect to this partial order (after relabeling). Then $\tilde C_1, \ldots, \tilde C_p$ are disjoint. This implies that the family \[ \{W_s: \ s \in S\} = \bigcup_{k=1}^p \{ \mbox{ bounded components of } X \setminus C_k\} \] is a disjoint family of sets. Each $ W_s \in \calO_{s}^{*}(X)$ by Lemma \ref{SolidCompoLC} and $\bigsqcup_{s \in S} W_s \in \calO^{*}(X)$, because $\bigsqcup_{s \in S} W_s \subseteq U$. Let $I = \{1, \ldots, n\} \setminus \{1, \ldots, p \}$. For each $ i \in I$, $C_i$ is a non-maximal element, so there exists $k \in \{ 1, \ldots, p\}$ such that $ C_i \subseteq \tilde C_k$. In other words, each non-maximal set $C_i, \ i \in I$ is contained in a bounded component of $X \setminus C_k$ for some maximal element $C_k$ (for some $ k \in \{1, \ldots, p\}$), that is $C_i \subseteq W_s$ for some $s \in S$. Let $S_1$ be the finite subset of $S$ consisting of those $s$ for which $W_s$ contains some $C_i, \ i \in I$. Let $S' = S \setminus S_1$. 
For each $s \in S_1$ let $C_s$ be the disjoint union of those sets $C_i, i \in I$ that are contained in $W_s$. Since $|I| \le n-1$, each $C_s$ is the union of no more than $n-1$ disjoint sets, and by induction hypothesis for each $ s \in S_1$ \begin{align} \label{2sh} \mu(W_s ) = \mu( W_s \setminus C_s) + \mu(C_s). \end{align} Note also that \begin{align} \label{1sh} \bigsqcup_{s \in S_1} C_s = \bigsqcup_{i \in I} C_i. \end{align} Then using Definition \ref{solid hull} and (\ref{1sh}) we see that: \begin{eqnarray*} \tilde C_1 \sqcup \ldots \sqcup \tilde C_p &=& C_1 \sqcup \ldots \sqcup C_p \sqcup \bigsqcup_{s \in S} W_s \\ &=& C_1 \sqcup \ldots \sqcup C_p \sqcup \bigsqcup_{s \in S_1} W_s \sqcup \bigsqcup_{s \in S'} W_s \\ &=& C_1 \sqcup \ldots \sqcup C_p \sqcup \bigsqcup_{s \in S_1} C_s \sqcup \bigsqcup_{s \in S_1} (W_s \setminus C_s) \sqcup \bigsqcup_{s \in S'} W_s \\ &=& C_1 \sqcup \ldots \sqcup C_p \sqcup \bigsqcup_{i \in I} C_i \sqcup \bigsqcup_{s \in S_1} (W_s \setminus C_s) \sqcup \bigsqcup_{s \in S'} W_s \\ &=& C_1 \sqcup \ldots \sqcup C_n \sqcup \bigsqcup_{s \in S_1} (W_s \setminus C_s) \sqcup \bigsqcup_{s \in S'} W_s \end{eqnarray*} We write \begin{eqnarray} \label{CtildeUnion} \tilde C_1 \sqcup \ldots \sqcup \tilde C_p = C_1 \sqcup \ldots \sqcup C_n \sqcup W = C \sqcup W \end{eqnarray} where $$W = \bigsqcup_{s \in S_1} (W_s \setminus C_s) \sqcup \bigsqcup_{s \in S'} W_s$$ is an open bounded set (since $W \subseteq \bigsqcup_{s \in S} W_s \subseteq U$). 
Using Definition \ref{solid hull} and Definition \ref{la1LC}, (\ref{2sh}), (\ref{1sh}), finite additivity of $\mu$ on $\kx$ and additivity of $\mu$ on $\calO(X)$ in Lemma \ref{PropMuLC} we have: \begin{eqnarray*} \mu(\bigsqcup_{k=1}^p \tilde C_k) &=& \sum_{k=1}^p \mu(C_k) + \sum_{s \in S} \mu(W_s) \\ &=& \sum_{k=1}^p \mu(C_k) + \sum_{s \in S_1} \mu(W_s) + \sum_{s \in S'} \mu(W_s) \\ &=& \sum_{k=1}^p \mu(C_k) + \sum_{s \in S_1} \mu(C_s) + \sum_{s \in S_1} \mu(W_s \setminus C_s) + \sum_{s \in S'} \mu(W_s) \\ &=& \sum_{k=1}^p \mu(C_k) + \sum_{i \in I} \mu(C_i) + \sum_{s \in S_1} \mu(W_s \setminus C_s) + \sum_{s \in S'} \mu(W_s) \\ &=& \mu(C_1 \sqcup \ldots \sqcup C_n) + \mu(W) =\mu(C) + \mu(W). \end{eqnarray*} The sets $ U \setminus (C \sqcup W) = U \setminus (\tilde C_1 \sqcup \ldots \sqcup \tilde C_p)$ and $W$ are disjoint open bounded sets whose union is $U \setminus C$. Now using the result of Step 1, (\ref{CtildeUnion}), the equality $\mu(\bigsqcup_{k=1}^p \tilde C_k) = \mu(W) + \mu(C)$ just obtained, and additivity of $\mu$ on $\calO(X)$ in Lemma \ref{PropMuLC} we have: \begin{eqnarray*} \mu(U) &=& \mu(U \setminus \bigsqcup_{k=1}^p \tilde C_k) + \mu(\bigsqcup_{k=1}^p \tilde C_k) \\ &=& \mu(U \setminus (C \sqcup W )) + \mu(W) + \mu(C) \\ &=& \mu(U \setminus C) + \mu(C) \end{eqnarray*} STEP 3. We shall show that $\mu(U) = \mu(K) + \mu(U \setminus K) $ whenever $K \subseteq U, \ K \in \kx, \ U \in \calO_{c}^{*}(X).$ \\ Let $\epsilon > 0$. Using part \ref{regularityLC} of Lemma \ref{PropMuLC} and Lemma \ref{LeConLC} choose sets $ W \in \calO_{c}^{*}(X)$ and $ D \in \calK_{c}(X)$ such that \begin{eqnarray} \label{WD} K \subseteq W \subseteq D \subseteq U \mbox{ and } \mu(U) - \mu(W) < \epsilon. \end{eqnarray} Let $B$ be the union of the bounded components of $ X \setminus U$ and let the open set $V$ be the union of the bounded components of $ X \setminus D$. Set $$C = B \cap V.$$ By Lemma \ref{LeCleverSet} $C$ is compact and $ U \sqcup C$ is open. The solid hull of $D$ is $\tilde D= D \sqcup V$.
Then by part \ref{superaddF} of Lemma \ref{PropMuLC} $\mu(D) + \mu(V) \le \mu(\tilde D)$. Note that by Lemma \ref{PrSolidHuLC} $V \subseteq \tilde D \subseteq \tilde U = U \sqcup B$. Then $$ V \subseteq U \sqcup (B \cap V) = U \sqcup C. $$ It follows that $$ K \sqcup C \subseteq D \sqcup V = \tilde D \subseteq U \sqcup C .$$ Since $U \sqcup C$ is open, by Lemma \ref{opensolid} we may find $ W' \in \calO_{s}^{*}(X)$ such that \begin{eqnarray} \label{sha} K \sqcup C \subseteq \tilde D \subseteq W' \subseteq U \sqcup C. \end{eqnarray} Then \begin{eqnarray} \label{W'} W' \setminus(K \sqcup C) \subseteq U \setminus K. \end{eqnarray} According to part \ref{CoAppr} of Lemma \ref{PropMuLC}, pick $K_1 \in \calK_{0}(X)$ such that \begin{eqnarray} \label{K1} K \sqcup C \subseteq K_1 \subseteq W' \mbox{ and } \mu(K_1) \le \mu(K \sqcup C) + \epsilon. \end{eqnarray} By Step 2, $\mu(W') = \mu(W' \setminus K_1) + \mu(K_1)$. Now using (\ref{WD}), Definition \ref{la1LC}, (\ref{sha}), (\ref{K1}), (\ref{W'}), additivity on $\calO(X)$ and finite additivity of $\mu$ on $\kx$ in Lemma \ref{PropMuLC} we have: \begin{eqnarray*} \mu(U) - \epsilon &<& \mu(W) \le \mu(D)= \mu(\tilde D) - \mu(V) \\ &\le& \mu(\tilde D) - \mu(C) \\ &\le& \mu(W') - \mu(C) \\ &=& \mu(W' \setminus K_1) + \mu(K_1) - \mu(C) \\ &\le& \mu(W' \setminus (K \sqcup C)) + \mu(K \sqcup C) + \epsilon - \mu(C) \\ &\le& \mu(U \setminus K) + \mu(K) + \mu(C) -\mu(C) + \epsilon\\ &=& \mu(U \setminus K) + \mu(K) + \epsilon \end{eqnarray*} It follows that $\mu(U) \le \mu(U \setminus K) + \mu(K)$. The opposite inequality is part \ref{superaddU} of Lemma \ref{PropMuLC}. \\ STEP 4. We shall show that $\mu(U) = \mu(K) + \mu(U \setminus K) $ whenever $K \subseteq U, \ K \in \kx, \ U \in \calO^{*}(X).$ \\ Let $ U = \bigsqcup_{i \in I} U_i$ be the decomposition of $U$ into connected components, and let $I'$ be a finite subset of $I$ such that $ K \subseteq \bigsqcup_{i \in I'} U_i$. 
For each $i \in I'$ the set $K_i = K \cap U_i = K \setminus \bigsqcup_{j \in I' \setminus \{i\}} U_j \in \kx$ and \begin{eqnarray} \label{Bb} K = \bigsqcup_{i \in I'} K_i. \end{eqnarray} By Step 3 we know that \begin{eqnarray} \label{A} \mu(K_i) + \mu(U_i \setminus K_i) = \mu(U_i) \ \ \ \ \ \mbox{for each} \ \ i \in I'. \end{eqnarray} Then using (\ref{Bb}), finite additivity of $\mu$ on $\kx$ and additivity of $\mu$ on $\calO(X)$ in Lemma \ref{PropMuLC}, and (\ref{A}) we have: \begin{eqnarray*} \mu(K) &+& \mu(U \setminus K) = \mu(\bigsqcup_{i \in I'} K_i) + \mu (U \setminus \bigsqcup_{i \in I'} K_i) \\ &=& \sum_{i \in I'} \mu(K_i) + \sum_{i \in I'} \mu(U_i \setminus K_i) + \sum_{i \in I \setminus I'} \mu(U_i) \\ &=& \sum_{i \in I'} \mu(U_i) + \sum_{i \in I \setminus I'} \mu(U_i) = \sum_{i \in I} \mu(U_i) = \mu(U) \end{eqnarray*} STEP 5. We shall show that $\mu(U) = \mu(K) + \mu(U \setminus K) $ whenever $K \subseteq U, \ K \in \kx, \ U \in \calO(X).$ \\ First assume that $ \mu(U) < \infty$. Given $\epsilon >0$, by Definition \ref{muLC} find $C \in \kx $ such that $ K \subseteq C$ and $\mu(U) - \mu(C) < \epsilon $. Using Lemma \ref{easyLeLC} find $ V \in \calO^{*}(X)$ such that $$ K \subseteq C \subseteq V \subseteq U.$$ By Step 4, $\mu(V) = \mu(V \setminus K) + \mu(K)$. Then using monotonicity of $\mu$ in Lemma \ref{PropMuLC} we see that \begin{align} \label{muCVK} \mu(C) \le \mu(V) = \mu(V \setminus K) + \mu(K) \le \mu(U \setminus K) + \mu(K). \end{align} Then $ \mu(U) - \epsilon < \mu(C) \le \mu(U \setminus K) + \mu(K)$. Therefore, $ \mu(U) \le \mu(U \setminus K) + \mu(K)$. The opposite inequality is part \ref{superaddU} of Lemma \ref{PropMuLC}. Therefore, if $ \mu(U) < \infty$ then $$ \mu(U) = \mu(U \setminus K) + \mu(K).$$ Now assume $ \mu(U) = \infty$. For $n \in \N$ choose $ C \in \calK(X)$ such that $ K \subseteq C$ and $ \mu(C) > n$.
By Lemma \ref{easyLeLC} find $ V \in \calO^{*}(X)$ such that $$ K \subseteq C \subseteq V \subseteq U.$$ Using (\ref{muCVK}) again we have: $$ n < \mu(C) \le \mu(V \setminus K) + \mu(K),$$ i.e. $n - \mu(K) \le \mu(V \setminus K) \le \mu(U \setminus K)$. Since $ \mu(K) \in \R$ by part \ref{finiteness} of Lemma \ref{PropMuLC}, it follows that $ \mu (U \setminus K) = \infty$, and $ \mu(U \setminus K) + \mu(K) = \mu(U)$. \end{proof} \begin{theorem} \label{extThLC} Let $X$ be locally compact, connected, locally connected. A solid set function on $X$ extends uniquely to a compact-finite topological measure on $X$. \end{theorem} \begin{proof} Definitions \ref{la1LC}, \ref{la2LC} and \ref{muLC} extend the solid set function $\lambda$ to a set function $\mu$. We would like to show that $\mu$ is a topological measure. Definition \ref{muLC} and part \ref{regularityLC} of Lemma \ref{PropMuLC} show that $\mu$ satisfies \ref{TM2} and \ref{TM3} of Definition \ref{DeTMLC}. Proposition \ref{PrFinAddLC}, part \ref{OpenFinAddLC} of Lemma \ref{PropMuLC}, and Lemma \ref{BigLemmaLC} show that $\mu$ is a topological measure. By part \ref{finiteness} of Lemma \ref{PropMuLC} $\mu$ is compact-finite. To show that the extension from a solid set function to a topological measure is unique, suppose that $\nu$ is a topological measure on $X$ such that $ \mu = \nu = \lambda$ on $ \calA_{s}^{*}(X)$. If $A \in \calK_{c}(X)$ then by Definition \ref{solid hull} $A = \tilde A \setminus (\bigsqcup_{s \in S} B_s)$, where $ \tilde A, B_s \in \calA_{s}^{*}(X)$, so from Definition \ref{la1LC} it follows that $\mu = \nu$ on $ \calK_{c}(X)$, and, hence, on $\calK_{0}(X)$. From part \ref{CoRegulLC} of Lemma \ref{propTMLC} it then follows that $\mu = \nu$ on $\calO(X)$, so $ \mu = \nu$. \end{proof} \begin{remark} \label{extsumme} We summarize the extension procedure for obtaining a topological measure $\mu$ from a solid set function $\lambda$ on a locally compact, connected, locally connected space.
First, for a compact connected set $C$ we have: $$ \mu(C) = \lambda(\tilde C) - \sum_{i=1}^n \lambda(B_i), $$ where $\tilde C$ is the solid hull of $C$ and the $B_i$ (open solid sets) are the bounded components of $X \setminus C$. For $C \in \calK_{0}(X)$, i.e. for a compact set which is the union of finitely many disjoint compact connected sets $C_1, \ldots, C_n$, we have: $$ \mu (C) = \sum_{i=1}^n \mu(C_i). $$ For an open set $U$ we have: $$ \mu(U) = \sup\{ \mu(K) : \ K \subseteq U , \ K \in \calK_{0}(X) \}, $$ and for a closed set $F$ let $$ \mu(F) = \inf \{ \mu(U): \ F \subseteq U, \ U \in \calO(X) \}.$$ \end{remark} \begin{theorem} \label{Tpart2} If a solid set function $\lambda$ is extended to a topological measure $\mu$ then the following hold: if $\lambda: \calA_{s}^{*}(X) \rightarrow \{0,1\}$ then $\mu$ also assumes only the values $0$ and $1$; if $\sup \{ \lambda(K): \ K \in \calK_{s}(X)\} = M < \infty$ then $\mu$ is finite and $ \mu(X) = M.$ \end{theorem} \begin{proof} Follows from Remark \ref{extsumme}, part \ref{extla2} of Lemma \ref{PropMuLC}, and part \ref{l8} of Lemma \ref{propTMLC}. \end{proof} \begin{theorem} \label{ExtUniq} The restriction $\lambda$ of a compact-finite topological measure $\mu$ to $\calA_{s}^{*}(X)$ is a solid set function, and $\mu$ is uniquely determined by $\lambda$. \end{theorem} \begin{proof} Let $\lambda$ be the restriction of $\mu$ to $ \calA_{s}^{*}(X)$. Monotonicity of a topological measure (see Lemma \ref{propTMLC}) and \ref{TM1} of Definition \ref{DeTMLC} show that $\lambda$ satisfies conditions \ref{superadd} and \ref{solidparti} of Definition \ref{DeSSFLC}. For $ U \in \calO_{s}^{*}(X)$ and $\epsilon > 0$, by \ref{TM2} let $ K \in \calK(X)$ be such that $\mu(U) - \mu(K) < \epsilon$, and by Lemma \ref{LeCsInside} we may assume that $K \in \calK_{s}(X)$. Part \ref{regul} of Definition \ref{DeSSFLC} follows. Part \ref{regulo} of Definition \ref{DeSSFLC} follows from \ref{TM3} and Lemma \ref{opensolid}.
Since $\mu$ is compact-finite, $\lambda$ is real-valued. Therefore, $\lambda$ is a solid set function. \end{proof} \begin{remark} \label{additionalPropMu} Lemma \ref{PrPropSsfLC}, Lemma \ref{Prla1LC}, and Lemma \ref{PropMuLC} give us some additional properties of topological measures. For example, by part \ref{CloFinAddLC} of Lemma \ref{PropMuLC}, if a closed set $F$ and a compact set $K$ are disjoint, then $\mu(F \sqcup K) = \mu(F) + \mu(K)$. \end{remark} \section{Examples} \label{examplesTmLC} When $X$ is compact, a set is called solid if it and its complement are both connected. For a compact space $X$ we define a certain topological characteristic, the genus. See \cite{Aarnes:ConstructionPaper} for more information about the genus $g$ of a space. We are particularly interested in spaces with genus 0. One way to describe the ``$g=0$'' condition is the following: if the union of two open solid sets in $X$ is the whole space, their intersection must be connected. (See \cite{Grubb:IrrPart}.) Intuitively, $X$ does not have holes or loops. In the case where $X$ is locally path connected, $g=0$ if the fundamental group $\pi_1(X)$ is finite (in particular, if $X$ is simply connected). Knudsen \cite{Knudsen} was able to show that if $H^1(X) = 0 $ then $g(X) = 0$, and in the case of CW-complexes the converse also holds. The following two remarks for a compact space follow from results in \cite{Aarnes:ConstructionPaper}: \begin{remark} \label{genconn} $g(X) =0$ if and only if $X \setminus \bigsqcup\limits_{i=1}^n C_i$ is connected for any finite disjoint family $\{C_i\}_{i=1}^n$ of closed solid sets. \end{remark} \begin{remark} \label{ReIrrPart} If there is only one open (closed) solid set in a solid partition of $X$ (i.e. a partition of $X$ into a union of disjoint sets each of which is open solid or closed solid), then there is only one closed (open) solid set in this partition.
\end{remark} \begin{remark} When $X$ is compact, a solid-set function on $X$ extends in a unique way to a topological measure on $X$. For precise definitions and the extension procedure see \cite{Aarnes:ConstructionPaper}. \end{remark} The majority of existing examples of topological measures on compact spaces are given for spaces with genus 0. Here is one: \begin{example} [Aarnes circle measure] \label{Aatm} Let $X$ be the unit square and $B$ be the boundary of $X.$ Fix a point $p$ in $X \setminus B$. Define $\mu $ on solid sets as follows: $\mu (A) = 1$ if (i) $B \subset A$ or (ii) $ p \in A $ and $A \cap B \ne \O$. Otherwise, we let $ \mu(A) = 0 $. Then $ \mu $ is a solid set function and, hence, extends to a topological measure on $X$. Note that $\mu$ is not a point mass. To demonstrate that $\mu $ is not a measure we shall show that $\mu$ is not subadditive. Let $A_1$ be a closed solid set consisting of two adjacent sides of $B$, let $A_2$ be the closed solid set consisting of the other two adjacent sides of $B$, and let $A_3 = X \setminus B$ be an open solid subset of $X$. Then $X = A_1 \cup A_2 \cup A_3, \ \mu(X) = 1 $, but $ \ \mu (A_1) + \mu(A_2) + \mu(A_3) = 0$. \end{example} The reason that the majority of existing examples of topological measures on compact spaces are given for spaces with genus 0 is the following. To obtain a topological measure it is enough to define a solid-set function. When a space has genus 0, the hardest condition to verify in the definition of a solid-set function, the irreducible partition condition, becomes easy to verify. When $X$ is locally compact, the hardest condition to verify in Definition \ref{DeSSFLC} is condition \ref{solidparti}, which deals with solid partitions. But, as we shall see in this section, it turns out that this condition holds trivially for spaces whose one-point compactification has genus $0$. In this section we denote by $\hat X$ the one-point compactification of $X$.
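The non-subadditivity computation in Example \ref{Aatm} depends only on three membership facts about each set $A$: whether $A$ contains all of $B$, whether $p \in A$, and whether $A$ meets $B$. As a quick sanity check (our own illustration, not part of the extension machinery; the encoding of the sets by boolean flags is an assumption made for this sketch), one can tabulate these facts and evaluate $\mu$ mechanically:

```python
# Sketch: the Aarnes circle measure of Example (Aatm), evaluated from the
# three membership facts its definition consults. The boolean encoding of
# the sets below is our own illustrative assumption.
def mu(contains_B, contains_p, meets_B):
    # mu(A) = 1 if B is contained in A, or if p lies in A and A meets B;
    # otherwise mu(A) = 0.
    return 1 if contains_B or (contains_p and meets_B) else 0

# A1, A2: the two pairs of adjacent sides of the boundary B (closed solid);
# A3 = X \ B: the open interior, which contains p but does not meet B.
A1 = dict(contains_B=False, contains_p=False, meets_B=True)
A2 = dict(contains_B=False, contains_p=False, meets_B=True)
A3 = dict(contains_B=False, contains_p=True,  meets_B=False)
X  = dict(contains_B=True,  contains_p=True,  meets_B=True)

assert mu(**X) == 1                         # mu(X) = 1
assert mu(**A1) + mu(**A2) + mu(**A3) == 0  # yet the cover has total mu = 0
```

Subadditivity would require $\mu(X) \le \mu(A_1) + \mu(A_2) + \mu(A_3)$, and the two assertions exhibit its failure, so $\mu$ cannot be a measure.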
\begin{lemma} \label{hatXsoli} Let $X$ be locally compact and $\hat X$ be its one-point compactification. If $A \in \calA_{s}^{*}(X)$ then $A$ is solid in $\hat X$. \end{lemma} \begin{proof} Since $A $ is connected in $X$, it is also connected in $\hat X$. Let $X \setminus A = \bigsqcup_{i=1}^n B_i$ be the decomposition into connected components. Each $B_i$ is an unbounded subset of $X$. We can write $\hat X \setminus A = \bigcup_{i=1}^n E_i$ where each $E_i = B_i \cup \{ \infty\}$. It is easy to see that each $E_i$ is connected in $\hat X$. Thus, $\hat X \setminus A$ is connected, and so $A$ is solid in $\hat X$. \end{proof} \begin{lemma} \label{nosopart} Let $X$ be a locally compact space whose one-point compactification $\hat X$ has genus 0. If $A \in \calA_{s}^{*}(X)$ then any solid partition of $A$ is the set $A$ itself. \end{lemma} \begin{proof} Suppose first that $V \in \calO_{s}^{*}(X)$ and its solid partition is given by \[ V = \bigsqcup_{i=1}^n C_i \sqcup \bigsqcup_{j=1}^m U_j \] where each $C_i \in \calK_{s}(X)$ and each $U_j \in \calO_{s}^{*}(X)$. From Lemma \ref{hatXsoli} it follows that $\hat X \setminus V$ and each $C_i$ are closed solid sets in $\hat X$. Since $\hat X$ has genus $0$, by Remark \ref{genconn} \[ \hat X \setminus ((\hat X \setminus V) \sqcup \bigsqcup_{i=1}^n C_i) = \bigsqcup_{j=1}^m U_j \] must be connected in $\hat X$. It follows that $m=1$ and we may write \[ V = \bigsqcup_{i=1}^n C_i \sqcup U_1. \] Then $\{ U_1, \hat X \setminus V, C_1, \ldots, C_n\} $ is a solid partition of $\hat X$, and it has only one open set. By Remark \ref{ReIrrPart} this solid partition also has only one closed set in it, and it must be $\hat X \setminus V$. So each $C_i= \O$, and the solid partition of $V$ is $V = U_1$, i.e. the set itself. Now suppose that $C \in \calK_{s}(X)$ and its solid partition is given by \[ C = \bigsqcup_{i=1}^n C_i \sqcup \bigsqcup_{j=1}^m U_j \] where each $C_i \in \calK_{s}(X)$ and each $U_j \in \calO_{s}^{*}(X)$.
Then $\{ \hat X \setminus C, U_1, \ldots, U_m, C_1, \ldots, C_n\}$ is a solid partition of $\hat X$. Again by Remark \ref{genconn} \[ \hat X \setminus \bigsqcup_{i=1}^n C_i = (\hat X \setminus C) \sqcup U_1 \sqcup \ldots \sqcup U_m \] must be connected in $\hat X$. It follows that $U_j = \O$ for $j=1, \ldots, m$. Then by connectivity of $C$ we see that the solid partition of $C$ must be the set itself. \end{proof} \begin{remark} \label{easyRn} From Lemma \ref{nosopart} it follows that for any locally compact space whose one-point compactification has genus 0 the last condition of Definition \ref{DeSSFLC} holds trivially. This is true, for example, for $X = \R^n $, for a half-plane in $\R^n$ with $n \ge 2$, or for a punctured ball in $\R^n$ with the relative topology. \end{remark} \begin{example} Lemma \ref{nosopart} may not be true for spaces whose one-point compactification has genus greater than $0$. For example, let $X$ be the infinite strip $\R \times [0,1]$ with the ball of radius $1/4$ centered at $(-1/2, 1/2)$ removed, so that $\hat X$ has genus greater than $0$. It is easy to give an example of a solid partition of a bounded solid set (say, the rectangle $[0,n] \times [0,1]$ or $(0,n) \times [0,1]$) which consists of $n$ solid sets (rectangles of the type $(i, i+1) \times [0,1]$ or $[i, i+1] \times [0,1]$) for any given odd $n \in \N, \, n >1$. \end{example} We are ready to give examples of topological measures on locally compact spaces. \begin{example} \label{ExDan2pt} Let $X$ be a locally compact space whose one-point compactification has genus 0. Let $\lambda$ be a real-valued topological measure on $X$ (or, more generally, a real-valued deficient topological measure on $X$; for the definition and properties of deficient topological measures on locally compact spaces see \cite{Butler:DTMLC}). Let $P$ be a set of two distinct points.
For each $A \in \calA_{s}^{*}(X)$ let $ \nu(A) = 0$ if $\sharp A = 0$, $ \nu(A) = \lambda(A) $ if $\sharp A = 1$, and $ \nu(A) = 2 \lambda(A)$ if $\sharp A = 2$, where $ \sharp A$ is the number of points in $ A \cap P$. We claim that $\nu$ is a solid set function. By Remark \ref{easyRn} we only need to check the first three conditions of Definition \ref{DeSSFLC}. The first one is easy to see. Using Lemma \ref{LeCsInside} and Lemma \ref{opensolid} it is easy to verify conditions \ref{regul} and \ref{regulo} of Definition \ref{DeSSFLC}. The solid set function $\nu$ extends to a unique finite topological measure on $X$. Suppose, for example, that $ \lambda$ is the Lebesgue measure on $X = \R^2$ and that the set $P$ consists of the two points $p_1 = (0,0)$ and $p_2 = (2,0)$. Let $K_i$ be the closed ball of radius $1$ centered at $p_i$ for $i=1,2$. Then $K_1, K_2$ and $ C= K_1 \cup K_2$ are compact solid sets, $\nu(K_1) = \nu(K_2) = \pi, \, \nu(C) = 4 \pi$. Since $\nu$ is not subadditive, $\nu$ is a topological measure that is not a measure. \end{example} The next two examples are adapted from Example 2.2 in \cite{Aarnes:LC} and are related to Example \ref{Aatm}. \begin{example} \label{puncdisk} Let $X$ be the unit disk in the plane with the origin removed. $X$ is a locally compact Hausdorff space with respect to the relative topology. Any subset of $X$ whose closure in $\R^2$ contains the origin is unbounded in $X$. For $A \in \calA_{s}^{*}(X)$ (since $A$ is also a solid subset of the unit disk by Lemma \ref{hatXsoli}) we define $\mu' (A) = \mu(A)$ where $\mu$ is the solid set function on the unit disk from Example \ref{Aatm}. From Remark \ref{easyRn}, Lemma \ref{LeCsInside}, Lemma \ref{opensolid} and the fact that $\mu$ is a solid set function on $\hat X$ we see that $\mu'$ is a solid-set function on $X$. By Theorem \ref{extThLC} $\mu'$ extends uniquely to a topological measure on $X$, which we also call $\mu'$. Note that $\mu'$ is simple. We claim that $\mu'$ is not a measure.
Let $U_1 = \{ z \in X: \ \mathrm{Im}\, z > 0\}, \ U_2 = \{ z \in X: \ \mathrm{Im}\, z < 0\}$ and $F = \{ z \in X: \ \mathrm{Im}\, z = 0\}$. Then $U_1, U_2$ are open (unbounded) in $X$ and $F$ is a closed (unbounded) set in $X$ consisting of two disjoint segments. Note that $X= F \cup U_1 \cup U_2$. Using Remark \ref{extsumme} we calculate $\mu'(F) = \mu'(U_1) = \mu'(U_2) =0$. The boundary of the disk, $C$, is a compact connected set, $X \setminus C$ is unbounded in $X$, so $C \in \calK_{s}(X)$. Since $\mu'(C) = 1$, we have $\mu'(X) =1$. Thus, $\mu'$ is not subadditive, so it is not a measure. This example also shows that on a locally compact space finite additivity of topological measures holds on $\calK(X) \cup \calO(X)$ by Definition \ref{DeTMLC}, but fails on $\calC(X) \cup \calO(X)$. This is in contrast to topological measures on compact spaces, where finite additivity holds on $\calC(X) \cup \calO(X)$. \end{example} \begin{example} \label{linetm} Let $X = \R^2$, let $l$ be a straight line, and let $p$ be a point of $X$ not on the line $l$. For $A \in \calA_{s}^{*}(X)$ define $\mu(A) = 1$ if $A \cap l \neq \O$ and $p \in A$; otherwise, let $\mu(A) =0$. Using Lemma \ref{LeCsInside} and Lemma \ref{opensolid} it is easy to verify the first three conditions of Definition \ref{DeSSFLC}. From Remark \ref{easyRn} it follows that $\mu$ is a solid set function on $X$. By Theorem \ref{extThLC} $\mu$ extends uniquely to a topological measure on $X$, which we also call $\mu$. Note that $\mu$ is simple. We claim that $\mu$ is not a measure. Let $F$ be the closed half-plane determined by $l$ which does not contain $p$. Then using Remark \ref{extsumme} we calculate $\mu(F) = \mu(X \setminus F) = 0$, and $\mu(X) = 1$. The failure of subadditivity shows that $\mu$ is not a measure. The sets $F$ and $X \setminus F$ are both unbounded. Now take a bounded open disk $V$ around $p$ that does not intersect $l$.
Then \[ X = V \sqcup (X \setminus V), \] where $V \in \calO^{*}(X), \ \mu(V) = \mu(X \setminus V) = 0$, while $\mu(X) =1$. This example also shows that on a locally compact space finite additivity of topological measures holds on $\calK(X) \cup \calO(X)$ by Definition \ref{DeTMLC}, but fails on $\calC(X) \cup \calO(X)$. It fails even in the situation $X = V \sqcup F$, where $ V $ is a bounded open set, and $F$ is a closed set. \end{example} The last two examples suggest that having a topological measure on $\hat X$ helps us to get a topological measure on $X$. In fact, we have the following result. \begin{theorem} \label{tmXtoXha} Let $X$ be a locally compact, connected, locally connected space whose one-point compactification $\hat X$ has genus 0. Suppose $\nu$ is a solid set function on $\hat X$. For $A \in \calA_{s}^{*}(X)$ define $\mu(A) = \nu(A)$. Then $\mu$ is a solid set function on $X$ and, thus, extends uniquely to a topological measure on $X$. \end{theorem} \begin{proof} Let $A \in \calA_{s}^{*}(X)$. By Lemma \ref{hatXsoli}, $A$ is a solid set in $\hat X$. Using Lemma \ref{LeCsInside}, Lemma \ref{opensolid}, the fact that $\nu$ is a solid set function on $\hat X$, and the fact that a bounded solid set does not contain $\infty$, it is easy to verify the first three conditions of Definition \ref{DeSSFLC}. By Remark \ref{easyRn} $\mu$ is a solid set function on $X$. \end{proof} Theorem \ref{tmXtoXha} allows us to obtain a large variety of topological measures on a locally compact space from examples of topological measures on compact spaces. \begin{example} \label{nvssf} Let $X$ be a locally compact space whose one-point compactification has genus 0. Let $n$ be a natural number. Let $P$ be a set of $2n+1$ distinct points. For each $A \in \calA_{s}^{*}(X)$ let $ \nu(A) = i/n$ if $\sharp A = 2i$ or $2i+1$, where $ \sharp A$ is the number of points in $ A \cap P$.
When $X$ is compact, a set function defined in this way is a solid-set function (see Example 2.1 in \cite{Aarnes:Pure}, Examples 4.14 and 4.15 in \cite{QfunctionsEtm}). By Theorem \ref{tmXtoXha} $\nu$ is a solid-set function on $X$; it extends to a unique topological measure on $X$ that assumes the values $0, 1/n, \ldots, 1$. \end{example} We conclude with an example of another infinite topological measure. \begin{example} \label{mojexLC} Let $X=\R^n$ for some $n \ge 2$, and let $\lambda$ be the Lebesgue measure on $X$. For $U \in \calO_{s}^{*}(X)$ define $\mu(U) =0$ if $0 \le \lambda(U) \le 1$ and $\mu(U) = \lambda(U)$ if $\lambda(U) >1$. For $C \in \calK_{s}(X)$ define $\mu(C) = 0$ if $0 \le \lambda(C) < 1$ and $\mu(C) =\lambda(C)$ if $\lambda(C) \ge 1$. It is not hard to check the first three conditions of Definition \ref{DeSSFLC}. From Remark \ref{easyRn} it follows that $\mu$ is a solid set function on $X$. By Theorem \ref{extThLC} $\mu$ extends uniquely to a topological measure on $X$, which we also call $\mu$. Note that $\mu(X) = \infty$. $\mu$ is not subadditive, for we may cover a compact ball of Lebesgue measure greater than 1 by finitely many balls of Lebesgue measure less than 1. Hence, $\mu$ is not a measure. \end{example} {\bf{Acknowledgments}}: This work was conducted at the Department of Mathematics at the University of Illinois at Urbana-Champaign and the Department of Mathematics at the University of California Santa Barbara. The author would like to thank both departments for their hospitality and supportive environments.
\section{Introduction} \label{sec:introduction} The field of ferromagnetism of thin films is currently undergoing a renaissance driven by advances in theory, experiment and technology \cite{dennis02,desimone06r, braun12,vonbergmann14,stamps14,kent15,manipatruni16}. The study of magnetic domain walls remains at the forefront of this activity and attracts a lot of attention from the engineering, physics and mathematics communities. These studies are in part motivated by a new field of applied physics -- {\it spintronics} -- offering great promise for creating the next generation of data storage and logic devices combining spin-dependent effects with conventional charge-based electronics \cite{allwood05, bader10}. The two most common types of domain walls connecting the distinct preferred directions of magnetization in uniaxial materials are {\it Bloch and N\'eel walls} \cite{hubert}. Bloch walls appear in bulk ferromagnets, where the magnetization profile connecting the two opposite easy axis directions prefers an out-of-plane rotation with respect to the plane spanned by the wall direction and the easy axis. In ultrathin ferromagnetic films, on the other hand, the stray field energy penalizes out-of-plane rotations, and as a result the magnetization profile is constrained to the film plane. A domain wall profile connecting the two distinct preferred directions of magnetization via an in-plane rotation is called a {\it N\'eel wall}. N\'eel walls have been thoroughly investigated theoretically since their discovery, and their internal structure is currently fairly well understood. The main characteristics of the one-dimensional N\'eel wall profile typically include an inner core, logarithmically decaying intermediate regions and algebraic tails. These features have been predicted theoretically, using micromagnetic treatments \cite{hubert, dietze61, riedel71, garcia99, mo:jcp06}, and verified experimentally \cite{berger92}.
Recent rigorous mathematical studies of N\'eel walls confirmed these predictions and provided more refined information about the profile of the N\'eel wall, including uniqueness, regularity, monotonicity, symmetry, stability and the precise rate of decay \cite{melcher03, garcia04, desimone06, capella07, cm:non13,my:prsla16}. Another type of domain wall has recently been observed in ultrathin ferromagnetic films with perpendicular anisotropy and strong antisymmetric exchange, referred to as the Dzyaloshinskii-Moriya interaction (DMI). The presence of DMI significantly alters the structure of domain walls, leading to the formation of {\it chiral domain walls} in the interior and {\it chiral edge domain walls} at the boundary of the ferromagnetic sample \cite{thiaville12, rohart13, ms:prsla16}. These chiral domain walls and chiral edge domain walls play a crucial role in producing new types of magnetization patterns inside a ferromagnet and have been rigorously analyzed in \cite{ms:prsla16}. It is well known that magnetization configurations in ferromagnets are significantly affected by the presence of material boundaries \cite{hubert, dennis02, desimone06r}. To reduce the stray field, the magnetization vector tries to stay tangential to the material boundary, thus minimizing the presence of boundary magnetic charges. In ultrathin films, this forces the magnetization vector to lie almost entirely in the film plane and align tangentially along the film's lateral edges \cite{kohn05arma}. At the same time, these geometrically preferred directions may disagree with the intrinsic directions in the bulk film, determined by either a strong in-plane uniaxial crystalline anisotropy or an external in-plane magnetic field. The result of this incompatibility is another type of magnetic domain wall -- {\it edge domain walls}.
These domain walls have been observed experimentally in magnetically coupled bilayers in the shape of strips with an easy axis normal to the strip \cite{ruhrig90}, and in single-layer strips with negligible crystalline anisotropy and a varying in-plane uniform magnetic field \cite{mattheis97}. The origin of edge domain walls lies in the competition between the magnetostatic, exchange and anisotropy energies. Strong uniaxial anisotropy defines the two preferred magnetization directions within the ferromagnetic film plane. On the other hand, at the film edge the magnetostatic energy penalizes the magnetization component normal to the edge, and consequently the magnetization prefers to lie in-plane and tangentially to the edge of the ferromagnetic film. The exchange energy allows for a continuous transition between these states, and as a result an edge domain wall connecting the direction tangent to the film edge and the anisotropy easy axis direction is created. Experimental observations in soft ferromagnetic thin films and bilayers \cite{mattheis97,ruhrig90} indicate that edge domain walls, formed near the boundary of the sample due to a misalignment of the tangential and applied field directions, have an essentially one-dimensional character. This is confirmed by micromagnetic simulations in extended ferromagnetic strips performed in several regimes, including strong in-plane uniaxial anisotropy with no applied field (see Fig.~\ref{f:strip2d}) and no crystalline anisotropy with a strong in-plane applied field (results not shown). Numerical simulations suggest that away from the side edges the domain walls have essentially one-dimensional profiles. Therefore, in order to investigate these profiles it is enough to model their behavior using a simplified one-dimensional micromagnetic energy capturing the essential features of the wall profiles.
Such a description is expected to be appropriate for strips of soft ferromagnetic materials whose thickness does not significantly exceed the exchange length and whose width is much larger than the N\'eel wall width. The goal of this paper is to understand the formation of edge domain walls viewed as global energy minimizers of a reduced one-dimensional micromagnetic energy. We begin our analysis by deriving a one-dimensional energy functional describing edge domain walls (see \eqref{Ethb2}). Since we are specifically interested in one-dimensional domain wall profiles, we consider the problem on an unbounded domain consisting of a ferromagnetic film occupying a half-plane times a fixed interval of small thickness. However, this setup makes the energy of the wall infinite due to the inconsistency between the preferred magnetization directions at the film edge and inside the film (see section~\ref{sec:statement-results}). Therefore, in order to have a well-defined minimization problem, we need to renormalize the one-dimensional energy per unit edge length in a suitable way (see \eqref{Eb}). We show existence of a minimizer for this energy, using standard methods of the calculus of variations; see Theorem~\ref{t:exist}. The main difficulty lies in dealing with the nonlocal magnetostatic energy term and identifying the proper space where the minimization problem makes sense. \begin{figure} \centering \includegraphics[width=6.5in]{strip2d} \caption{A magnetization configuration containing edge domain walls in a strip obtained from micromagnetic simulations of a $20.7 \mu$m$\times 5.2\mu$m$\times 4$nm permalloy sample with vertical uniaxial anisotropy and no applied field (for further details, see section~\ref{sec:num}). The colormap corresponds to the angle between the magnetization vector (also shown by arrows) and the $y$-axis.
Inside the dashed box (i.e., far from the side edges) the edge wall profiles are essentially one-dimensional.} \label{f:strip2d} \end{figure} We continue our analysis by deriving the Euler-Lagrange equation characterizing the profile of the edge domain wall; see Theorem \ref{t:regular}. This seemingly straightforward task, however, requires a careful treatment of the nonlocal energy term. The main difficulty is related to the fact that we have only limited a priori regularity of the energy minimizing solutions. The information about further regularity is usually recovered through the use of the Euler-Lagrange equation and a bootstrap argument. As this information is not yet available, we need to carefully analyze the nonlocal term using methods from fractional Sobolev spaces and recover a weak form of the Euler-Lagrange equation. After deriving the Euler-Lagrange equation, we can prove higher regularity of the solutions using an adaptation of the standard elliptic regularity techniques. However, due to the difficulties arising from the nonlocality we can only show $C^2$ regularity of the solutions. Further application of the bootstrap argument is then hindered by the lack of integrability of the contribution of the nonlocal term to the Euler-Lagrange equation when higher derivatives of the solution are considered, and the second derivative of the solution indeed blows up at the film edge. After establishing existence and regularity of the edge domain wall profiles, we investigate two specific regimes where we can provide refined information about the properties of the energy minimizing solutions of the Euler-Lagrange equation; see Theorem \ref{t:gammanu} and Theorem \ref{t:gammabeta}. The first regime that we consider is the regime of relatively small magnetostatic energy, which corresponds to very thin films. In this regime we show that all minimizers of the energy \eqref{Eb} are close to the standard local N\'eel wall-type profile. 
The second regime is the regime in which the boundary tangent and the easy axis directions are nearly parallel. In this case we show that there is a unique minimizer of the energy in \eqref{Eb}, and this minimizer is close to the uniform state. We corroborate our analytical findings and provide further information about the profiles of edge domain walls using one-dimensional numerical simulations that employ the method from \cite{mo:jcp06}. Our paper is organized as follows. In section~\ref{sec:model}, starting from the full three-dimensional micromagnetic model we derive a variational model for edge domain walls that we intend to investigate in this paper. Section~\ref{sec:statement-results} is devoted to a rigorous formulation of the problem and includes the statements of the main results about existence, regularity and the qualitative features of edge domain walls. In sections~\ref{sec:proof-theor-reft}, \ref{sec:proof-theor-reft1}, \ref{sec:proof-theor-gammanu} and \ref{sec:proof-theor-gammabeta} we prove the main theorems formulated in section~\ref{sec:statement-results}. In section~\ref{sec:num}, we present the results of numerical simulations, compare them with our analytical findings and discuss open problems. Finally, in Appendix~\ref{append} we provide a rigorous derivation of the one-dimensional micromagnetic energy in magnetic strips under natural assumptions on the one-dimensional magnetization profile. \section{Model} \label{sec:model} Consider a uniaxial ferromagnet occupying a domain $\Omega \subset \mathbb R^3$, with the easy axis oriented along the second coordinate direction. 
Then the micromagnetic energy associated with the magnetization state of the sample reads, in SI units \cite{hubert,landau8}: \begin{multline} \label{Ephys} E(\mathbf M) = {A \over M_s^2} \int_\Omega |\nabla \mathbf M|^2 \, d^3 r + {K \over M_s^2} \int_\Omega (M_1^2 + M_3^2) d^3 r \\ - \mu_0 \int_\Omega \mathbf M \cdot \mathbf H \, d^3 r + \mu_0 \int_{\mathbb R^3} \int_{\mathbb R^3} {\nabla \cdot \mathbf M(\mathbf r) \, \nabla \cdot \mathbf M(\mathbf r') \over 8 \pi | \mathbf r - \mathbf r'|} \, d^3 r \, d^3 r'. \end{multline} Here $\mathbf M = (M_1, M_2, M_3)$ is the magnetization vector that satisfies $|\mathbf M|=M_s$ in $\Omega$ and $\mathbf M = 0$ in $\mathbb R^3\setminus \Omega$, the positive constants $M_s$, $A$ and $K$ are the saturation magnetization, the exchange constant and the anisotropy constant, respectively, $\mathbf H$ is an applied external field, and $\mu_0$ is the permeability of vacuum. In \eqref{Ephys}, the terms in the order of appearance are the exchange, crystalline anisotropy, Zeeman and stray field terms, respectively, and $\nabla \cdot \mathbf M$ is understood distributionally. In this paper, we are interested in the situation in which $\Omega$ is a flat ultra-thin film domain, i.e., we have $\Omega = D \times (0, d)$, where $D \subset \mathbb R^2$ is a planar domain specifying the film shape and $d$ is the film thickness of a few nanometers. In this case the magnetization is expected to be essentially independent of the third coordinate, and the full three-dimensional micromagnetic energy admits a reduction to an energy functional that depends only on the average of the magnetization over the film thickness (see, e.g., \cite[Lemma 3]{kohn05arma}; for an analytical treatment in a closely related context, see \cite{kmn:arma,m:cmp}). 
Therefore, we introduce an ansatz $\mathbf M(x_1, x_2, x_3) = M_s (\mathbf m(x_1, x_2), 0) \chi_{(0,d)}(x_3)$, where $\mathbf m : \mathbb R^2 \to \mathbb R^2$ is a two-dimensional in-plane magnetization vector satisfying $|\mathbf m| = 1$ in $D$ and $|\mathbf m| = 0$ outside $D$, and $\chi_{(0,d)}$ is the characteristic function of $(0,d)$. Next, we define the exchange length $\ell$, the Bloch wall width $L$, and the {\em thin film parameter} $\nu$ measuring the relative strength of the magnetostatic energy \cite{mo:jcp06}: \begin{align} \label{lLQ} \ell = \sqrt{2 A \over \mu_0 M_s^2}, \qquad L = \sqrt{A \over K}, \qquad \nu = {\mu_0 M_s^2 d \over 2 \sqrt{A K}}, \end{align} and note that the above ansatz is relevant when $d \lesssim \ell$ \cite{desimone00,kohn05arma,mo:jcp06,doering14,doering16}. Then, measuring the energy in the units of $2 A d$ and lengths in the units of $L$, we obtain the following expression for the energy as a function of $\mathbf m$ \cite{garcia99}: \begin{align} \label{Em} E(\mathbf m) = \frac12 \int_D \left( |\nabla \mathbf m|^2 + m_1^2 - 2 \mathbf h \cdot \mathbf m \right) d^2 r + {\nu \over 2} \int_{\mathbb R^2} \int_{\mathbb R^2} K_\delta(|\mathbf r - \mathbf r'|) \nabla \cdot \mathbf m(\mathbf r) \, \nabla \cdot \mathbf m(\mathbf r') \, d^2 r \, d^2 r', \end{align} where $\delta = d / L$ is the dimensionless film thickness, \begin{align} \label{Kd} K_\delta(r) = {1 \over 2 \pi \delta} \left\{ \ln \left( {\delta + \sqrt{\delta^2 + r^2} \over r} \right) - \sqrt{1 + { r^2 \over \delta^2}} + { r \over \delta} \right\}, \end{align} and we set $\mathbf H = K / (\mu_0 M_s) (\mathbf h, 0)$ for $\mathbf h : \mathbb R^2 \to \mathbb R^2$, assuming that the applied field lies in the film plane. 
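As a quick numerical illustration of the scales involved, the sketch below evaluates $\ell$, $L$ and $\nu$ from \eqref{lLQ}. The material parameters are illustrative assumptions on our part: $A$ and $M_s$ are typical of permalloy, $K$ is a hypothetical uniaxial anisotropy constant, and $d$ matches the film thickness of the simulation in Fig.~\ref{f:strip2d}; none of these numbers are prescribed by the derivation above.

```python
import math

# Illustrative material parameters (our assumptions, not from the text):
A   = 1.3e-11           # exchange constant, J/m (typical permalloy value)
Ms  = 8.0e5             # saturation magnetization, A/m (typical permalloy value)
K   = 1.0e3             # hypothetical uniaxial anisotropy constant, J/m^3
d   = 4.0e-9            # film thickness, m
mu0 = 4.0e-7 * math.pi  # permeability of vacuum, H/m

# Dimensionless groups defined in the nondimensionalization above:
ell = math.sqrt(2.0 * A / (mu0 * Ms**2))          # exchange length
L   = math.sqrt(A / K)                            # Bloch wall width
nu  = mu0 * Ms**2 * d / (2.0 * math.sqrt(A * K))  # thin film parameter

print(f"ell = {ell * 1e9:.2f} nm, L = {L * 1e9:.0f} nm, nu = {nu:.1f}")
```

For these values $d \lesssim \ell$, so the thin-film ansatz is applicable, and the resulting $\nu$ is of order ten.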
More explicitly, assuming that $\partial D$ is of class $C^2$, we have \begin{multline} \label{Emm} E(\mathbf m) = \frac12 \int_D \left( |\nabla \mathbf m|^2 + m_1^2 - 2 \mathbf h \cdot \mathbf m \right) d^2 r + {\nu \over 2} \int_D \int_D K_\delta(|\mathbf r - \mathbf r'|) \nabla \cdot \mathbf m(\mathbf r) \, \nabla \cdot \mathbf m(\mathbf r') \, d^2 r \, d^2 r' \\ - \nu \int_D \int_{\partial D} K_\delta(|\mathbf r - \mathbf r'|) \nabla \cdot \mathbf m(\mathbf r) (\mathbf m(\mathbf r') \cdot \mathbf n(\mathbf r')) \, d \mathcal H^1(\mathbf r') \, d^2 r \\ + {\nu \over 2} \int_{\partial D} \int_{\partial D} K_\delta(|\mathbf r - \mathbf r'|) (\mathbf m(\mathbf r) \cdot \mathbf n(\mathbf r)) (\mathbf m(\mathbf r') \cdot \mathbf n(\mathbf r')) \, d \mathcal H^1(\mathbf r') \, d \mathcal H^1(\mathbf r), \end{multline} where $\mathbf n$ is the outward unit normal vector to $\partial D$, and we took into account that the distributional divergence of $\mathbf m$ is the sum of the absolutely continuous part in $D$ and a jump part on $\partial D$. We now consider the thin film limit introduced in \cite{mo:jcp06} by sending $\delta$ to zero with $\nu$ and $D$ fixed. Observe that when $\delta$ is small, we have \begin{align} \label{Kdd} K_\delta(r) \simeq {1 \over 4 \pi r} \qquad \text{and} \qquad \int_{\partial D} K_\delta(|\mathbf r - \mathbf r'|) \, d \mathcal H^1(\mathbf r') \simeq {1 \over 2 \pi} \ln \delta^{-1}. 
\end{align} Therefore, when $\mathbf m$ does not vary appreciably on the scale of $\delta$, to the leading order we have $E(\mathbf m) \simeq E_\delta(\mathbf m)$, where \begin{multline} \label{Emd} E_\delta(\mathbf m) = \frac12 \int_D \left( |\nabla \mathbf m|^2 + m_1^2 - 2 \mathbf h \cdot \mathbf m \right) d^2 r + {\nu \over 8 \pi} \int_D \int_D {\nabla \cdot \mathbf m(\mathbf r) \, \nabla \cdot \mathbf m(\mathbf r') \over |\mathbf r - \mathbf r'|} \, d^2 r \, d^2 r' \\ - {\nu \over 4 \pi} \int_D \int_{\partial D} {\nabla \cdot \mathbf m(\mathbf r) (\mathbf m(\mathbf r') \cdot \mathbf n(\mathbf r')) \over |\mathbf r - \mathbf r'|} \, d \mathcal H^1(\mathbf r') \, d^2 r + {\nu \ln \delta^{-1} \over 4 \pi} \int_{\partial D} (\mathbf m(\mathbf r) \cdot \mathbf n(\mathbf r))^2 \, d \mathcal H^1(\mathbf r). \end{multline} Since the last term in \eqref{Emd} blows up as $\delta \to 0$ unless $\mathbf m \cdot \mathbf n = 0$ a.e. on $\partial D$, in the limit we recover \begin{align} \label{E0} E_0(\mathbf m) = \frac12 \int_D \left( |\nabla \mathbf m|^2 + m_1^2 - 2 \mathbf h \cdot \mathbf m \right) d^2 r + {\nu \over 8 \pi} \int_D \int_D {\nabla \cdot \mathbf m(\mathbf r) \, \nabla \cdot \mathbf m(\mathbf r') \over |\mathbf r - \mathbf r'|} \, d^2 r \, d^2 r', \end{align} with admissible configurations $\mathbf m \in H^1(D; \mathbb S^1)$ satisfying the Dirichlet boundary condition $\mathbf m = s \mathbf t$ on $\partial D$, where $\mathbf t$ is the positively oriented unit tangent vector to $\partial D$ and $s : \partial D \to \{-1,1\}$. In fact, since the trace of $\mathbf m$ belongs to $H^{1/2}(\partial D; \mathbb R^2)$, the function $s$ is necessarily constant on each connected component of $\partial D$. Note that this creates a topological obstruction in the case when $D$ is simply connected, giving rise to boundary vortices at the level of $E_\delta$ \cite{moser04,kohn05arma,kurzke06}. 
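The first asymptotic relation in \eqref{Kdd} is easy to verify numerically. The following minimal sketch (the test values $r = 1$ and $\delta = 10^{-3}$ are our own choice) checks that $K_\delta(r)$ approaches the Coulomb-type kernel $1/(4\pi r)$:

```python
import math

def K_delta(r, delta):
    """Dimensionless reduced magnetostatic kernel K_delta(r) from the text."""
    return (1.0 / (2.0 * math.pi * delta)) * (
        math.log((delta + math.sqrt(delta**2 + r**2)) / r)
        - math.sqrt(1.0 + r**2 / delta**2)
        + r / delta)

# small-delta limit: K_delta(r) -> 1 / (4 pi r), with an O(delta^2) error
r, delta = 1.0, 1.0e-3
coulomb = 1.0 / (4.0 * math.pi * r)
rel_err = abs(K_delta(r, delta) - coulomb) / coulomb
print(rel_err)
```

The relative error is of the order $\delta^2$, consistent with the expansion of $K_\delta$ for $\delta \ll r$.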
At the same time, it is clear that for suitable multiply connected domains the considered admissible class is non-empty. A canonical example of the latter is an annulus (for a physics overview, see \cite{klaui03}). In the absence of crystalline anisotropy and applied field, the ground state of the magnetization in an annulus is easily seen to be a vortex state. However, this result no longer holds in the presence of crystalline anisotropy, since the latter does not favor alignment of $\mathbf m$ with the boundaries. In large annuli, this would lead to the formation of a boundary layer, in which the magnetization rotates from the direction tangential to the boundary to the direction of the easy axis. We call such magnetization configurations {\em edge domain walls} (for similar objects in a different micromagnetic context, see \cite{ms:prsla16}). \begin{figure} \centering \centering \includegraphics[width=3in]{strip} \caption{Illustration of the strip geometry.} \label{f:strip} \end{figure} Focusing on one-dimensional transition profiles in the vicinity of the boundary, we now consider $D$ to be a strip of width $w$ oriented at an angle $\beta \in [0, \pi/2]$ with respect to the easy axis (see Fig. \ref{f:strip}). We define \begin{align} \label{x} x = x_1 \cos \beta + x_2 \sin \beta \end{align} to be the variable in the direction normal to the strip axis. Then, with the applied field $\mathbf h$ set to zero, the energy of a magnetization configuration $\mathbf m = \mathbf m(x)$ per unit length of the strip is equal to (see Appendix~\ref{append}) \begin{align} \label{Ebm} E_{\beta,w}(\mathbf m) = \frac12 \int_0^w \left( |m_1'|^2 + |m_2'|^2 + m_1^2 \right) dx + {\nu \over 4 \pi} \int_0^w \int_0^w \ln |x - y|^{-1} \, m_\beta'(x) m_\beta'(y) \, dx \, dy, \end{align} where $m_\beta = \mathbf e_\beta \cdot \mathbf m$, with $\mathbf e_\beta = (\cos \beta, \sin \beta)$, provided that \begin{align} \label{mbbc} m_\beta(0) = m_\beta(w) = 0. 
\end{align} The energy in \eqref{Ebm} may also be rewritten using the operator $\left( - {d^2 \over dx^2} \right)^{1/2}$ (acting from $H^1(\mathbb R)$ to $L^2(\mathbb R)$ and understood via Fourier space, see Appendix~\ref{append}) \begin{align} \label{Ebm2} E_{\beta,w}(\mathbf m) = \frac12 \int_0^w \left( |m_1'|^2 + |m_2'|^2 + m_1^2 \right) dx + {\nu \over 4} \int_{-\infty}^\infty m_\beta \left( - {d^2 \over dx^2} \right)^{1/2} m_\beta \, dx. \end{align} Another useful representation of the energy in \eqref{Ebm} that expresses the double integral in terms of $m_\beta$ rather than its derivative is (see Appendix~\ref{append}) \begin{align} \label{Ebm3} E_{\beta,w}(\mathbf m) = \frac12 \int_0^w \left( |m_1'|^2 + |m_2'|^2 + m_1^2 \right) dx + {\nu \over 8 \pi} \int_{-\infty}^\infty \int_{-\infty}^\infty {(m_\beta(x) - m_\beta(y))^2 \over (x - y)^2} \, dx \, dy. \end{align} Lastly, we express the energy in \eqref{Ebm3} in terms of the angle $\theta$ between $\mathbf m$ and the easy axis in the counter-clockwise direction: \begin{align} \label{mth} \mathbf m = (-\sin \theta, \cos \theta). \end{align} With a slight abuse of notation, we get that the energy associated with $\mathbf m$ is given by \begin{align} \label{Ethb2} E_{\beta,w}(\theta) = \frac12 \int_0^w \left( |\theta'|^2 + \sin^2 \theta \right) dx + {\nu \over 8 \pi} \int_{-\infty}^\infty \int_{-\infty}^\infty {(\sin(\theta(x) - \beta) - \sin(\theta(y) - \beta))^2 \over (x - y)^2} \, dx \, dy, \end{align} where we set $\theta(x) = \beta$, for all $x \not\in (0, w)$. The energy functional in \eqref{Ethb2} is the starting point of our analysis throughout the rest of this paper. 
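For readers who wish to experiment with \eqref{Ethb2} directly, here is a crude quadrature sketch of the energy $E_{\beta,w}(\theta)$. The grid sizes, the truncation of the plane integral and the test profile are our own choices, not prescribed by the text:

```python
import numpy as np

def energy(theta_fn, beta, w, nu, X=40.0, n=801):
    """Crude quadrature for the one-dimensional energy E_{beta,w}: local
    exchange + anisotropy part on (0, w) plus the nonlocal double integral,
    with theta = beta outside (0, w) so that u = sin(theta - beta) vanishes
    there."""
    # local part on (0, w), trapezoidal rule
    x = np.linspace(0.0, w, n)
    th = theta_fn(x)
    dth = np.gradient(th, x)
    f = 0.5 * dth**2 + 0.5 * np.sin(th)**2
    local = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))

    # nonlocal part, truncated to [-X, X]^2; the diagonal cell is dropped,
    # which is harmless since the integrand is bounded near x = y
    y = np.linspace(-X, X, n)
    u = np.where((y > 0.0) & (y < w), np.sin(theta_fn(y) - beta), 0.0)
    h = y[1] - y[0]
    diff = u[:, None] - u[None, :]
    dist2 = (y[:, None] - y[None, :]) ** 2
    np.fill_diagonal(dist2, 1.0)
    np.fill_diagonal(diff, 0.0)
    return local + nu / (8.0 * np.pi) * np.sum(diff**2 / dist2) * h * h

# consistency check: for theta == beta the nonlocal term vanishes exactly
# and the energy reduces to (w/2) sin^2(beta)
beta, w, nu = 0.5, 10.0, 1.0
E_const = energy(lambda s: np.full_like(np.asarray(s, dtype=float), beta),
                 beta, w, nu)
print(E_const, 0.5 * w * np.sin(beta) ** 2)
```

The constant profile $\theta \equiv \beta$ serves as a basic consistency check, since its nonlocal part vanishes identically.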
In particular, it is straightforward to show that minimizers of \eqref{Ethb2} exist among all $\theta - \beta \in H^1_0(0, w)$, are smooth in the interior and satisfy the Euler-Lagrange equation \begin{align} \label{Ethb2EL} 0 = {d^2 \theta \over dx^2} - \sin\theta\cos\theta -\frac{\nu}{2}\cos(\theta-\beta) \left( - {d^2 \over dx^2} \right)^{1/2} \sin(\theta-\beta) \qquad x \in (0, w), \end{align} where \cite{dinezza12} \begin{align} \label{halflapl} \left( - {d^2 \over dx^2} \right)^{1/2} u(x) = {1 \over \pi} \, \Xint-_{-\infty}^\infty {u(x) - u(y) \over (x - y)^2} \, dy, \end{align} and here and everywhere below $\Xint-$ denotes the principal value of the integral. Notice that the Euler-Lagrange equation in \eqref{Ethb2EL} coincides with the one for the classical problem of the N\'eel wall \cite{cm:non13}. \section{Statement of results} \label{sec:statement-results} We now turn to the problem of our main interest in this paper, which is to characterize a single edge domain wall. For this purpose, we would like to send the parameter $w$ to infinity and obtain an energy minimizing profile $\theta(x)$ solving \eqref{Ethb2EL} for all $x > 0$ and satisfying $\theta(0) = \beta$ (also setting $\theta(x) = \beta$ for all $x < 0$ in the definition of the last term in \eqref{Ethb2EL}). We note that for the problem on the semi-infinite domain with $\beta \in [0, \pi/2]$ the boundary condition at $x = 0$ is equivalent to that in \eqref{mbbc} because of the reflection symmetry, which makes the energy invariant with respect to the transformation \begin{align} \theta \to -\theta, \qquad \beta \to -\beta. \end{align} We also note that for $\beta = 0$ we clearly have $\theta = 0$ as the unique global minimizer for the energy in \eqref{Ethb2}. Therefore, in the following we always assume that $\beta > 0$. 
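The integral representation \eqref{halflapl} can be sanity-checked against the Fourier definition, for which $\left(-d^2/dx^2\right)^{1/2} \cos(kx) = |k| \cos(kx)$. The sketch below (all quadrature parameters are our own choices) evaluates the principal value integral at $x = 0$:

```python
import numpy as np

def half_laplacian_at_zero(u, H=400.0, n=400001):
    """Quadrature for (1/pi) PV int_R (u(0) - u(y)) / (0 - y)^2 dy.
    For smooth u the symmetrized integrand (2 u(0) - u(h) - u(-h)) / h^2
    is regular at h = 0, so no extra regularization is needed numerically."""
    h = np.linspace(1e-6, H, n)
    g = (2.0 * u(0.0) - u(h) - u(-h)) / h**2
    # trapezoidal rule; the tail beyond H contributes O(1/H)
    return (1.0 / np.pi) * np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(h))

# Fourier check: the half-Laplacian of cos(k x) evaluated at x = 0 is |k|
k = 1.5
val = half_laplacian_at_zero(lambda x: np.cos(k * x))
print(val)
```

With the truncation $H = 400$ the result matches $|k|$ up to an $O(1/H)$ tail error, of the order of $10^{-3}$ here.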
As can be seen from standard phase plane analysis, in the absence of the nonlocal term due to stray field, i.e., when $\nu = 0$, the edge domain wall solution is explicitly \begin{align} \label{nostray} \theta(x) = 2 \arctan \left( e^{-x} \tan {\beta \over 2} \right) \qquad \text{for} \qquad x > 0, \end{align} noting that for $\beta = \pi/2$ there is also another solution which is obtained from the one in \eqref{nostray} by a reflection with respect to $\theta = \pi/2$. Furthermore, for all $\beta \in (0, \pi/2)$ and $\nu = 0$ this is the unique solution of \eqref{Ethb2EL} satisfying $\theta(0) = \beta$ and approaching a constant as $x \to +\infty$. The profile $\theta(x)$ is decreasing monotonically from $\theta = \beta$ at $x = 0$ to $\theta = 0$ at $x = +\infty$ and decays exponentially at infinity. It also minimizes the energy in \eqref{Ethb2} with $\nu = 0$ and $w = \infty$ among all $\theta - \beta \in \mathring{H}^1_0(\mathbb R^+)$. By $\mathring{H}^1_0(\mathbb R^+)$ we mean the Hilbert space obtained as the completion of the space $C^\infty_c(\mathbb R^+)$ with respect to the homogeneous Sobolev norm \begin{align} \label{H10norm} \| u \|_{\mathring{H}^1_0(\mathbb R^+)}^2 := \int_0^\infty |u'|^2 dx. \end{align} Note that by Sobolev embedding the elements of $\mathring{H}^1_0(\mathbb R^+)$ may be identified with continuous functions vanishing at $x = 0$ (cf. \cite[Section 8.3]{brezis}). The minimizing property of $\theta(x)$ in \eqref{nostray} may be seen directly from the Modica-Mortola type inequality for the energy with $\nu = 0$ and $w = \infty$: \begin{align} \label{Eb00} E_\beta^0(\theta) := \frac12 \int_0^\infty \left( |\theta'|^2 + \sin^2 \theta \right) dx = \int_0^\infty | (\cos \theta)'| \, dx + \frac12 \int_0^\infty \left( |\theta'| - |\sin \theta| \right)^2 dx \notag \\ \geq \left| \int_0^\infty (\cos \theta)' \, dx \right| = | \cos \beta - \cos \theta_\infty| \geq 1 - \cos \beta. 
\end{align} In writing \eqref{Eb00}, we used the weak chain rule \cite[Corollary 8.11]{brezis} and the fact that $\sin \theta \in H^1(\mathbb R^+)$ whenever $E_\beta^0(\theta) < +\infty$ and, hence, $\sin \theta(x) \to 0$ as $x \to +\infty$, implying that $\theta(x) \to \theta_\infty \in \pi \mathbb Z$ \cite[Corollary 8.9]{brezis}. Furthermore, by inspection equality holds if and only if $\theta$ is given by \eqref{nostray}. A natural question is whether this type of boundary layer solution also exists for $\nu > 0$. We point out from the outset that if one formally sets $w = \infty$ in \eqref{Ethb2}, one runs into the difficulty that the nonlocal term in the energy evaluated on the function in \eqref{nostray} is infinite. Indeed, by positivity of the nonlocal and anisotropy terms, for any configuration with bounded energy we would have $\sin \theta \in H^1(\mathbb R^+)$ and, therefore, $\lim_{x \to \infty} \theta(x) = \theta_\infty \in \pi \mathbb Z$, as before. On the other hand, if $\theta_\infty - \beta \not\in 2 \pi \mathbb Z$, the nonlocal part of the energy becomes infinite: \begin{align} \label{log} \int_{-\infty}^\infty \int_{-\infty}^\infty {(\sin(\theta(x) - \beta) - \sin(\theta(y) - \beta))^2 \over (x - y)^2} \, dx \, dy \geq \int_R^\infty \int_{-\infty}^0 {\sin^2(\theta(y) - \beta) \over (x - y)^2} \, dx \, dy \qquad \notag \\ \geq \int_R^\infty {\sin^2(\theta(y) - \beta) \over y} \, dy \geq {1 \over 2} \sin^2 \beta \int_R^\infty {dy \over y} = +\infty, \end{align} where we chose a sufficiently large $R > 0$, such that $\sin^2(\theta(y) - \beta) \geq \frac12 \sin^2(\theta_\infty - \beta) = \frac12 \sin^2 \beta > 0$ for all $y > R$. This phenomenon has to do with the divergence of the energy of a pair of edge domain walls minimizing $E_{\beta,w}$ in \eqref{Ethb2} as $w \to \infty$. Indeed, for $\beta \not= 0$ an edge domain wall carries a net magnetic charge spread over a region of width of order 1 near the edge. 
Therefore, the self-interaction energy per unit length of a single edge domain wall diverges logarithmically with $w$, as can be seen by examining the argument in \eqref{log}. Thus, in order to concentrate on a single edge domain wall, we need to appropriately renormalize the wall energy by ``subtracting'' the infinite self-interaction energy of a single wall. To this end, we introduce a smooth cutoff function $\eta_\beta:\mathbb R \to [0,\beta]$ that satisfies $\eta_\beta(x) = \beta$ when $x\leq 0$, $\eta_\beta(x) = 0$ when $x \geq 1$, and $\eta_\beta'(x) \leq 0$ for all $x \in \mathbb R$, and formally subtract its contribution from the integrand in the last term in \eqref{Ethb2}. This produces the following {\em renormalized energy} \begin{multline} \label{Eb0} E_\beta(\theta) := \frac12 \int_0^\infty \left( |\theta'|^2 + \sin^2 \theta \right) dx \\ + {\nu \over 8 \pi} \int_{-\infty}^\infty \int_{-\infty}^\infty {(\sin (\theta(x) - \beta) - \sin (\theta(y) - \beta))^2 - (\sin (\eta_\beta(x) - \beta) - \sin (\eta_\beta(y) - \beta))^2 \over (x - y)^2} \, dx \, dy, \end{multline} which is clearly finite when $\theta = \eta_\beta$. Notice that $\eta_\beta(x)$ mimics the edge domain wall profile near the edge and thus has the same leading order self-energy as the edge wall, which is what motivates its introduction in \eqref{Eb0}. Care is needed in defining a suitable admissible class of functions $\theta(x)$ in order to make the last term in \eqref{Eb0} meaningful, as the integrand there may not be in $L^1(\mathbb R^2)$. The latter is related to the logarithmic divergence at infinity of the respective integrals mentioned earlier. 
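One admissible choice of the cutoff $\eta_\beta$ introduced above can be written down explicitly. The construction below is our own, based on the standard $C^\infty$ transition function; the text only fixes the listed properties, and any cutoff satisfying them may be used:

```python
import numpy as np

def eta(x, beta):
    """A concrete admissible cutoff eta_beta: equals beta for x <= 0,
    vanishes for x >= 1 and is nonincreasing.  Built from the standard
    C-infinity transition f(t) = exp(-1/t) for t > 0; this particular
    formula is our own choice."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    def f(t):
        out = np.zeros_like(t)
        pos = t > 0.0
        out[pos] = np.exp(-1.0 / t[pos])
        return out
    s = f(x) / (f(x) + f(1.0 - x))  # smooth step: 0 for x <= 0, 1 for x >= 1
    return beta * (1.0 - s)

beta = 0.7
grid = np.linspace(-1.0, 2.0, 3001)
vals = eta(grid, beta)
print(vals[0], vals[-1])
```

By construction $\eta_\beta(x) = \beta$ for $x \leq 0$, $\eta_\beta(x) = 0$ for $x \geq 1$, and $\eta_\beta' \leq 0$ everywhere, as required.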
Therefore, we rewrite the energy $E_\beta(\theta)$ in an equivalent form for smooth functions $\theta(x)$ satisfying $\theta(x) = \beta$ for all $x < 0$ and $\theta(x) = \pi n$ for some $n \in \mathbb Z$ and all $x > R \gg 1$: \begin{multline} \label{Eb} E_\beta(\theta) = \int_0^\infty \left( \frac12 |\theta'|^2 + \frac12 \sin^2 \theta + {\nu \over 4 \pi} \cdot {\sin^2 (\theta - \beta) - \sin^2 (\eta_\beta - \beta) \over x} \right) dx \\ + {\nu \over 8 \pi} \int_0^\infty \int_0^\infty {(\sin (\theta(x) - \beta) - \sin (\theta(y) - \beta))^2 \over (x - y)^2} \, dx \, dy \\ - {\nu \over 8 \pi} \int_0^\infty \int_0^\infty {(\sin (\eta_\beta(x) - \beta) - \sin (\eta_\beta(y) - \beta))^2 \over (x - y)^2} \, dx \, dy, \end{multline} as can be verified by a direct computation. We observe that this energy is well-defined, possibly taking the value $+\infty$, on the admissible class \begin{align} \label{A} \mathcal{A} := \left\{ \theta \in C \big( \overline{\mathbb R^+} \big) : \theta - \beta \in \mathring{H}^1_0(\mathbb R^+) \right\}. \end{align} Indeed, the last term in \eqref{Eb} is independent of $\theta$ and finite (see Lemma \ref{l:Jetab}). Therefore, the main difficulty with the definition of $E_\beta$ comes from the last term in the first line of \eqref{Eb}. Nevertheless, as we show in Lemma \ref{l:sinx}, the integrand in the first line of \eqref{Eb} may be bounded from below by an integrable function that does not depend on $\theta$. This makes the definition of $E_\beta$ in \eqref{Eb} meaningful. We are now in a position to state our existence result for the edge domain walls, viewed as minimizers of the one-dimensional energy $E_\beta$ in \eqref{Eb} over the admissible class $\mathcal A$ in \eqref{A}. \begin{theorem} \label{t:exist} For each $\beta \in (0, \pi/2]$ and each $\nu > 0$, there exists $\theta \in \mathcal A$ such that $E_\beta(\theta) = \inf_{\tilde \theta \in \mathcal A} E_\beta(\tilde \theta)$. 
Furthermore, we have $\theta \in L^\infty(\mathbb R^+)$ and $\lim_{x \to \infty} \theta(x) = \theta_\infty$ for some $\theta_\infty \in \pi \mathbb Z$. \end{theorem} \noindent We remark that the minimizers obtained in Theorem \ref{t:exist} do not depend on the specific choice of $\eta_\beta$. Indeed, denoting by $E_\beta(\theta, \eta_\beta)$ the value of the energy for a given $\theta$ and $\eta_\beta$, we have for any $\theta$ and $ \eta_\beta^{(1,2)}$ such that $E_\beta(\theta, \eta_\beta^{(1,2)}) < +\infty$: \begin{multline} E_\beta(\theta, \eta_\beta^{(2)}) - E_\beta(\theta, \eta_\beta^{(1)}) = {\nu \over 4 \pi} \int_0^\infty {\sin^2 (\eta_\beta^{(2)} - \beta) - \sin^2 (\eta_\beta^{(1)} - \beta) \over x} \, dx \\ + {\nu \over 8 \pi} \int_0^\infty \int_0^\infty {(\sin (\eta_\beta^{(1)}(x) - \beta) - \sin (\eta_\beta^{(1)} (y) - \beta))^2 - (\sin (\eta_\beta^{(2)}(x) - \beta) - \sin (\eta_\beta^{(2)}(y) - \beta))^2 \over (x - y)^2} \, dx \, dy, \end{multline} which is a constant independent of $\theta$. In particular, minimizers of $E_\beta(\cdot, \eta_\beta^{(1)})$ over $\mathcal A$ coincide with those of $E_\beta(\cdot, \eta_\beta^{(2)})$. The result in Theorem \ref{t:exist} should be contrasted with that for the case $\nu = 0$ discussed at the beginning of this section. For the latter, for all $\beta \in (0, \pi/2)$ we have existence of a unique minimizer in $\mathcal A$, which is monotone decreasing and converges to zero exponentially at infinity. In the case $\nu > 0$, on the other hand, our result does not exclude the possibility of winding near the film edge, expressed in the fact that one may have $\theta \to \pi n$ for some $n \not= 0$ as $x \to +\infty$. Similarly, neither uniqueness nor monotonicity of the energy minimizing profile is guaranteed a priori, and the decay at infinity is expected to follow a power law (cf. \cite{cm:non13} and section \ref{sec:num} below). 
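The explicit $\nu = 0$ profile \eqref{nostray} referred to above is easy to verify numerically: it satisfies the first integral $\theta' = -\sin\theta$, which upon differentiation gives $\theta'' = \sin\theta\cos\theta$, together with $\theta(0) = \beta$. A minimal check by central finite differences (the value of $\beta$ is our own test choice):

```python
import numpy as np

beta = 1.0  # test value of the wall angle at the edge (our choice)
theta = lambda x: 2.0 * np.arctan(np.exp(-x) * np.tan(beta / 2.0))

# check theta(0) = beta and the first-order reduction theta' = -sin(theta),
# which implies the nu = 0 Euler-Lagrange equation theta'' = sin(theta) cos(theta)
x = np.linspace(0.0, 20.0, 2001)
h = 1e-6
dtheta = (theta(x + h) - theta(x - h)) / (2.0 * h)  # central difference
residual = np.max(np.abs(dtheta + np.sin(theta(x))))
print(theta(0.0), residual)
```

The residual is at the level of the finite-difference error, confirming the monotone decay of the profile from $\beta$ to $0$.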
We now turn to the questions of further regularity and the Euler-Lagrange equation satisfied by the minimizers obtained in Theorem \ref{t:exist}. Formally, the Euler-Lagrange equation associated with \eqref{Eb} coincides with \eqref{Ethb2EL} for all $x \in \mathbb R^+$ (again, extending $\theta$ to $\theta(x) = \beta$ for $x < 0$). However, care needs to be exercised once again, since the function $\sin (\theta - \beta)$ no longer belongs to $L^2(\mathbb R)$, so the standard approach to the definition of $\left( -{d^2 \over dx^2} \right)^{1/2}$ via Fourier space no longer applies directly. Nevertheless, we show that \eqref{Ethb2EL} still holds for the minimizers, provided that one uses the integral representation in \eqref{halflapl} as the definition for $\left( -{d^2 \over dx^2} \right)^{1/2}$. The latter is meaningful whenever $\theta$ is smooth, and we have explicitly \begin{multline} \label{EL} \theta''(x) = \sin \theta(x) \cos \theta(x) + {\nu \over 2 \pi} \cdot {\sin (\theta(x) - \beta) \cos (\theta(x) - \beta) \over x} \\ + {\nu \over 2 \pi} \cos (\theta(x) - \beta) \left( \Xint-_0^\infty {\sin (\theta(x) - \beta) - \sin ( \theta(y) - \beta) \over (x - y)^2} \, dy \right) \qquad \forall x > 0. \end{multline} This picture is made precise in the following theorem. \begin{theorem} \label{t:regular} For each $\beta \in (0, \pi/2]$ and each $\nu > 0$, let $\theta$ be a minimizer from Theorem \ref{t:exist}. Then $\theta \in C^2(\mathbb R^+) \cap C^1(\overline{\mathbb R^+}) \cap W^{1,\infty}(\mathbb R^+)$ and \eqref{EL} holds. In addition, we have $|\theta'(0)| = \sin \beta$ and $\lim_{x \to 0^+} |\theta''(x)| = \infty$. \end{theorem} We remark that the last statement in Theorem \ref{t:regular} prevents the minimizer in Theorem \ref{t:exist} from being smooth up to $x = 0$, in contrast with the case $\nu = 0$ (see \eqref{nostray}). 
In turn, because of the presence of the nonlocal term in \eqref{EL} further regularity of the minimizer for $x > 0$ cannot be established by a standard bootstrap argument. A further study of higher regularity of the domain wall profiles in the film interior would require a finer simultaneous treatment of the exchange and stray field terms and goes beyond the scope of the present paper. We end with a consideration of two parameter regimes in which further information can be obtained about the detailed structure of the energy minimizing profiles. In both these regimes the nonlocal term in the equation may be viewed as a perturbation. The first is the regime when $\beta \in (0, \pi / 2)$ is arbitrary, but $\nu$ is sufficiently small depending on $\beta$. Then we have the following result about the behavior of minimizers. \begin{theorem} \label{t:gammanu} Let $\beta \in (0, \pi/2)$ and let $\theta_\nu$ be a minimizer of \eqref{Eb} for a given $\nu > 0$. Then, as $\nu \to 0$ the minimizers $\theta_\nu$ converge uniformly on $[0, +\infty)$ to the minimizer $\theta_0$ of \eqref{Eb00}, defined in \eqref{nostray}. \end{theorem} \noindent We note that we need to avoid the value of $\beta = \pi/2$ in the statement of Theorem \ref{t:gammanu}, because even in the case $\nu = 0$ there are two possible minimizers: one is given by \eqref{nostray} and the other by its reflection with respect to $\theta = \pi / 2$. On the other hand, it is easy to see by an inspection of the proof of Theorem \ref{t:gammanu} that convergence in its statement is uniform in $\beta$ for all $0 < \beta \leq \beta_0 < \pi/2$. Note also that the statement of Theorem \ref{t:gammanu} implies that $\theta_\nu(x) \to 0$ as $x \to +\infty$ for all $\nu > 0$ sufficiently small depending on $\beta$. In other words, the domain wall profiles cannot exhibit winding in this parameter range. The second regime is for fixed values of $\nu > 0$ and sufficiently small $\beta > 0$. Here we have the following result. 
\begin{theorem} \label{t:gammabeta} Let $\nu >0$, let $0<\beta \leq \beta_0$ for some $\beta_0(\nu) > 0$, and let $\theta_\beta$ be a minimizer of \eqref{Eb}. Then $\theta_\beta$ is unique, and $\theta_\beta \to 0$ uniformly on $[0, +\infty)$ as $\beta \to 0$. \end{theorem} \noindent Again, the statement of Theorem \ref{t:gammabeta} implies that $\theta_\beta(x) \to 0$ as $x \to +\infty$ for all $\beta > 0$ sufficiently small depending on $\nu$. \section{Proof of Theorem \ref{t:exist}} \label{sec:proof-theor-reft} We begin by defining \begin{align} \label{Jbeta} J_\beta(\theta) := \int_{0}^{\infty}\int_{0}^{\infty} \frac{\brackets{\sin(\theta(x)-\beta) - \sin(\theta(y)-\beta)}^2}{(x-y)^2} \, dx \, dy, \end{align} and making the following basic observation. \begin{lemma} \label{l:Jetab} We have \begin{align} J_\beta(\eta_\beta) < +\infty. \end{align} \end{lemma} \begin{proof} Since $\eta_\beta(x) = 0$ for all $x \geq 1$, so that $\sin(\eta_\beta(x) - \beta) = -\sin\beta$ there, we may write \begin{align} J_\beta(\eta_\beta) & = \int_0^1 \int_0^1 \frac{\brackets{\sin(\eta_\beta (x)-\beta) - \sin(\eta_\beta (y)-\beta)}^2}{(x-y)^2} \, dx \, dy \notag \\ & \qquad + 2 \int_0^1 \int_1^\infty \frac{(\sin (\eta_\beta (y)-\beta) + \sin \beta)^2 }{(x-y)^2} \, dx \, dy \notag \\ & = \int_0^1 \int_0^1 \frac{\brackets{\sin(\eta_\beta (x)-\beta) - \sin(\eta_\beta (y)-\beta)}^2}{(x-y)^2} \, dx \, dy \notag \\ & \qquad + 2 \int_0^1 \frac{( \sin(\eta_\beta (y)-\beta) + \sin \beta)^2 }{1-y} \, dy. \label{Jbetaest} \end{align} By smoothness of $\eta_\beta$ we have \begin{align} |\sin(\eta_\beta (x)-\beta) - \sin(\eta_\beta (y)-\beta)| \leq C |x - y|, \end{align} for some $C > 0$ depending only on $\eta_\beta$. Therefore, the integrands in the right-hand side of \eqref{Jbetaest} are essentially bounded, yielding the result. 
\end{proof} With the result of Lemma \ref{l:Jetab} in hand, we can write the energy in \eqref{Eb} as \begin{align} E_\beta(\theta) = F_\beta(\theta) + {\nu \over 8 \pi} J_\beta(\theta) - {\nu \over 8 \pi} J_\beta(\eta_\beta), \end{align} where \begin{align} \label{Fb} F_\beta(\theta) := \int_0^\infty \left( \frac12 |\theta'|^2 + \frac12 \sin^2 \theta + {\nu \over 4 \pi} \cdot {\sin^2 (\theta - \beta) - \sin^2 (\eta_\beta - \beta) \over x} \right) dx. \end{align} While $J_\beta(\theta) \geq 0$ by definition, we also have the following result concerning $F_\beta(\theta)$. \begin{lemma} \label{l:sinx} For every $\theta \in \mathcal A$ we have \begin{align} \label{intsinx} \frac12 \sin^2 \theta(x) + {\nu \over 4 \pi} \cdot {\sin^2 (\theta(x) - \beta) - \sin^2 (\eta_\beta(x) - \beta) \over x} \geq - {C \over 1 + x^2} \qquad \forall x > 0, \end{align} for some $C > 0$ depending only on $\beta$, $\nu$ and $\eta_\beta$. Furthermore, $F_\beta(\theta)$ is bounded below independently of $\theta$. \end{lemma} \begin{proof} Since by the definition of $\eta_\beta$ we have $|\eta_\beta(x) - \beta| \leq C x$ for some $C > 0$ and all $x \in (0,1)$, the left-hand side of \eqref{intsinx} is bounded below on this interval. At the same time, by trigonometric identities and Young's inequality we have for all $x \geq 1$ and any $\varepsilon > 0$ \begin{align} {\sin^2 (\theta - \beta) - \sin^2 (\eta_\beta - \beta) \over x} = { \sin(\theta - 2 \beta) \sin \theta \over x} \geq -\varepsilon \sin^2 \theta - {1 \over 4 \varepsilon x^2}. \end{align} Hence, choosing $\varepsilon$ sufficiently small depending only on $\nu$, we can control the left-hand side of \eqref{intsinx} from below by $-C / x^2$ for all $x \in (1, \infty)$, where the constant $C > 0$ depends only on $\nu$. Combining the estimates on the two intervals then yields \eqref{intsinx}. Finally, the last statement follows by integrating \eqref{intsinx}, since the function $(1 + x^2)^{-1}$ is integrable on $\mathbb R^+$. 
\end{proof} \begin{proof}[Proof of Theorem \ref{t:exist}] Since $\inf_{\tilde\theta \in \mathcal A} E_\beta(\tilde\theta) \leq E_\beta(\eta_\beta) < +\infty$, and since $E_\beta(\theta) \geq -C$ for some $C > 0$ and all $\theta \in \mathcal A$ by Lemmas \ref{l:Jetab} and \ref{l:sinx}, there exists a sequence of $\theta_n \in \mathcal A$ such that \begin{align} -\infty < \inf_{\tilde\theta \in \mathcal A} E_\beta(\tilde\theta) = \liminf_{n \to \infty} E_\beta(\theta_n) < +\infty. \end{align} Furthermore, by Lemma \ref{l:sinx} and positivity of $J_\beta$ we have $\| \theta_n' \|_{L^2(\mathbb R^+)} \leq C$ for some $C > 0$. Therefore, upon extraction of subsequences we have $\theta_n' \rightharpoonup \theta'$ in $L^2(\mathbb R^+)$ for some $\theta \in H^1_{loc}(\mathbb R^+)$ and $\theta_n \to \theta$ in $C([0, R])$ for every fixed $R > 0$, by Sobolev embedding \cite[Theorem 8.8 and Proposition 8.13]{brezis}. By a diagonal argument we then conclude that upon further extraction of a subsequence we have $\theta_n(x) \to \theta(x)$ for every $x > 0$. Now, by weak lower semicontinuity of the norm we have $\liminf_{n \to \infty} \| \theta_n' \|_{L^2(\mathbb R^+)} \geq \| \theta' \|_{L^2(\mathbb R^+)}$. Therefore, by Lemma \ref{l:sinx} and Fatou's lemma we get $\liminf_{n \to \infty} F_\beta(\theta_n) \geq F_\beta(\theta)$. Again, by Fatou's lemma we also get $\liminf_{n \to \infty} J_\beta(\theta_n) \geq J_\beta(\theta)$. In fact, since $(\theta_n)$ is a minimizing sequence, the inequalities above are equalities. This implies that $\theta_n - \beta \to \theta - \beta$ strongly in $\mathring{H}^1_0(\mathbb R^+)$, and, hence, we have $\theta \in \mathcal A$. Thus, $\theta$ is a minimizer of $E_\beta$ over $\mathcal A$. Finally, observe that by weak chain rule \cite[Corollary 8.11]{brezis} and Lemma \ref{l:sinx} we have $\sin \theta \in H^1(\mathbb R^+)$.
Therefore, by \cite[Corollary 8.9]{brezis} we also have $\sin \theta \in C\big(\overline{\mathbb R^+} \big)$ and $\sin \theta(x) \to 0$ as $x \to +\infty$. On the other hand, from the Modica-Mortola type inequality and Lemmas \ref{l:Jetab} and \ref{l:sinx} we obtain \begin{align} \label{mm} \int_0^\infty |\sin \theta| \, |\theta'| \, dx \leq \frac12 \int_0^\infty \brackets{|\theta'|^2 + \sin^2\theta} d x < +\infty, \end{align} which implies that $\theta(x) \to \theta_\infty$ for some $\theta_\infty \in \pi \mathbb Z$ as $x \to +\infty$. This concludes the proof. \end{proof} \begin{remark} We have shown existence of a minimizer for the energy in \eqref{Eb} since a priori the energy in \eqref{Eb0} did not make sense for all $\theta \in \mathcal A$. However, if $\theta \in \mathcal A$ satisfies $E_\beta(\theta) < +\infty$ (in the sense of \eqref{Eb}) then it is not difficult to see that the energy in \eqref{Eb0} also makes sense and coincides with that in \eqref{Eb}. \end{remark} \section{Proof of Theorem \ref{t:regular}} \label{sec:proof-theor-reft1} We now proceed to the proof of Theorem \ref{t:regular}, where we have to compute a variation of the energy in \eqref{Eb}. It is clear how to deal with all the terms except $J_\beta(\theta)$. In order to find the variation of $J_\beta(\theta)$, for notational convenience we define \begin{align} \label{u} u(x) := \sin (\theta(x) - \beta), \end{align} for $\theta \in \mathcal A$ and notice that \begin{align} \label{Jb2} \int_0^\infty \int_0^\infty {(u(x) - u(y))^2 \over (x - y)^2} \, dx \, dy = J_\beta(\theta). \end{align} We are taking a variation with respect to $\theta$ as follows: $\theta(x) \mapsto \theta(x) + \varepsilon \phi(x)$, with $\phi \in C^\infty_c(\mathbb R^+)$ and $\varepsilon \in \mathbb R$.
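Indeed, for the local part \eqref{Fb} a formal differentiation under the integral sign (easily justified for $\phi \in C^\infty_c(\mathbb R^+)$) gives
\begin{align}
\left. {d \over d\varepsilon} \right|_{\varepsilon = 0} F_\beta(\theta + \varepsilon \phi) = \int_0^\infty \left( \theta' \phi' + \phi \sin \theta \cos \theta + {\nu \over 2 \pi} \cdot {\phi \sin (\theta - \beta) \cos (\theta - \beta) \over x} \right) dx,
\end{align}
which accounts for all the local terms appearing in \eqref{ELw} below.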
It is convenient to introduce the corresponding variation in $u$, defined as $u(x) \mapsto u(x) + \varepsilon \psi_\varepsilon(x)$, where \begin{align} \label{psi} \psi_\varepsilon(x) := \begin{cases} {\sin (\theta(x) + \varepsilon \phi(x) - \beta) - \sin (\theta(x) - \beta) \over \varepsilon} & \varepsilon \not= 0, \\ \cos (\theta(x) - \beta) \phi(x) & \varepsilon = 0. \end{cases} \end{align} Note that for every $x \in \mathbb R^+$ we have \begin{align} \label{psi0} \psi_\varepsilon(x) \to \psi_0(x) = : \psi(x) \ \text{as} \ \varepsilon \to 0. \end{align} Before computing the variation of $J_\beta(\theta)$ we will need two technical lemmas concerning the properties of $\psi_\varepsilon$. \begin{lemma} \label{l:trig} Let $\varepsilon \in \mathbb R$, $\theta \in \mathcal A$, $\phi \in C^\infty_c(\mathbb R^+)$, and let $\psi_\varepsilon$ be defined in \eqref{psi}. Then $\psi_\varepsilon \in H^1(\mathbb R^+)$, and for almost every $x \in \mathbb R^+$ we have \begin{align} \label{phitr} |\psi_\varepsilon(x)| \leq |\phi(x)| \qquad \text{and} \qquad |\psi_\varepsilon'(x)| \leq |\phi'(x)| + |\theta'(x)| \, |\phi(x)|. \end{align} \end{lemma} \begin{proof} By mean value theorem, we have \begin{align} \psi_\varepsilon(x) = \cos(\theta(x) + \varepsilon \lambda_\varepsilon(x) \phi(x) - \beta) \phi(x), \end{align} for some $\lambda_\varepsilon(x) \in (0,1)$, which yields the first inequality in \eqref{phitr}. Next, applying the weak chain rule \cite[Corollary 8.11]{brezis}, we obtain $\psi_\varepsilon\in H^1_{loc}(\mathbb R^+)$, and for almost every $x \in \mathbb R^+$ we have \begin{align} \psi_\varepsilon'(x) = \cos (\theta(x) + \varepsilon \phi(x) - \beta) \phi'(x) + {\cos(\theta(x) + \varepsilon \phi(x) - \beta) - \cos(\theta(x) - \beta) \over \varepsilon} \, \theta'(x). \end{align} In particular, $\psi_\varepsilon'$ has compact support and, hence, $\psi_\varepsilon \in H^1(\mathbb R^+)$. 
Therefore, again by mean value theorem and triangle inequality we have for some $\lambda_\varepsilon(x) \in (0, 1)$ \begin{align} |\psi_\varepsilon'(x)| \leq |\cos (\theta(x) + \varepsilon \phi(x) - \beta)| \, |\phi'(x)| + |\sin (\theta(x) + \varepsilon \lambda_\varepsilon(x) \phi(x) - \beta) | \, |\phi(x)| \, |\theta'(x)|, \end{align} yielding the result. \end{proof} \begin{lemma} \label{l:h12} Let $\varepsilon \in \mathbb R$, $\theta \in \mathcal A$, $\phi \in C^\infty_c(\mathbb R^+)$, and let $\psi_\varepsilon$ be defined in \eqref{psi}. Then there exists $C > 0$ independent of $\varepsilon$ such that for every $\delta > 0$ there holds \begin{align} \label{psiepsh12} \iint_{\{ |x - y| \leq \delta \} } {(\psi_\varepsilon(x) - \psi_\varepsilon(y))^2 \over (x - y)^2} \, dx \, dy \leq C \delta \quad \text{and} \quad \iint_{ \{ |x - y| \geq \delta \} } {(\psi_\varepsilon(x) - \psi_\varepsilon(y))^2 \over (x - y)^2} \, dx \, dy \leq C \delta^{-1}. \end{align} In particular \begin{align} \label{psiepsh3} \int_0^\infty \int_0^\infty {(\psi_\varepsilon(x) - \psi_\varepsilon(y))^2 \over (x - y)^2} \, dx \, dy \leq 2 C. \end{align} \end{lemma} \begin{proof} First of all, since $\psi_\varepsilon$ has compact support lying in $\mathbb R^+$, we can extend $\psi_\varepsilon$ by zero to the whole real line. By Lemma \ref{l:trig}, $\psi_\varepsilon$ is absolutely continuous and, hence, for all $x \not= y$ we have \begin{align} {\psi_\varepsilon(x) - \psi_\varepsilon(y) \over x - y} = \int_0^1 \psi_\varepsilon'(y + t (x - y)) dt. \end{align} Therefore \begin{align} & \iint_{ \{ |x - y| \leq \delta \} } {(\psi_\varepsilon(x) - \psi_\varepsilon(y))^2 \over (x - y)^2} \, dx \, dy \notag \\ & \hspace{2cm} = \iint_{ \{ |x - y| \leq \delta \} } \int_0^1 \int_0^1 \psi_\varepsilon'(y + t (x - y)) \psi_\varepsilon'(y + s (x - y)) \, dt \, ds \, dx \, dy. 
\label{iintpsi} \end{align} Interchanging the order of integration and applying Cauchy-Schwarz inequality, from \eqref{iintpsi} we obtain \begin{align} \iint_{ \{ |x - y| \leq \delta \} } {(\psi_\varepsilon(x) - \psi_\varepsilon(y))^2 \over (x - y)^2} \, dx \, dy \leq \int_0^1 \iint_{ \{ |x - y| \leq \delta \} } |\psi_\varepsilon'(y + s (x - y))|^2 \, dx \, dy \, ds. \end{align} Finally, using the new variable $z = x - y$ in place of $x$ yields \begin{align} \iint_{ \{ |x - y| \leq \delta \} } {(\psi_\varepsilon(x) - \psi_\varepsilon(y))^2 \over (x - y)^2} \, dx \, dy\leq \int_0^1 \int_{- \delta}^{+ \delta} \int_{-\infty}^\infty |\psi_\varepsilon'(y + s z)|^2 \, dy \, dz \, ds = 2 \delta \int_0^\infty |\psi_\varepsilon'(y)|^2 \, dy, \end{align} which in view of Lemma \ref{l:trig} gives the first estimate in \eqref{psiepsh12}. To obtain the second estimate in \eqref{psiepsh12}, simply note that \begin{multline} \iint_{ \{ |x - y| \geq \delta \} } {(\psi_\varepsilon(x) - \psi_\varepsilon(y))^2 \over (x - y)^2} \, dx \, dy \leq 2 \iint_{ \{ |x - y| \geq \delta \} } {|\psi_\varepsilon(x)|^2 + |\psi_\varepsilon(y)|^2 \over (x - y)^2} \, dx \, dy \\ = 4 \iint_{ \{ |x - y| \geq \delta \} } {|\psi_\varepsilon(y)|^2 \over (x - y)^2} \, dx \, dy = {8 \over \delta} \int_0^\infty |\psi_\varepsilon(y)|^2 dy, \end{multline} yielding the claim, once again, by Lemma \ref{l:trig}. Lastly, \eqref{psiepsh3} is an immediate corollary to \eqref{psiepsh12} with $\delta = 1$. \end{proof} We now establish G\^ateaux differentiability of $J_\beta(\theta)$ with respect to compactly supported smooth perturbations of $\theta$. \begin{lemma} \label{l:gateaux} Let $\theta \in \mathcal A$ be such that $J_\beta(\theta) < \infty$, let $\phi \in C^\infty_c(\mathbb R^+)$, and let $u$ and $\psi$ be defined in \eqref{u} and \eqref{psi0}, respectively. 
Then \begin{align} \lim_{\varepsilon \to 0} {J_\beta(\theta + \varepsilon \phi) - J_\beta(\theta) \over \varepsilon} = 2 \int_0^\infty \int_0^\infty {(u(x) - u(y)) (\psi(x) - \psi(y)) \over (x - y)^2} \, dx \, dy. \end{align} \end{lemma} \begin{proof} Observe that, using $\psi_\varepsilon$ defined in \eqref{psi}, we can write \begin{align} {J_\beta(\theta + \varepsilon \phi) - J_\beta(\theta) \over \varepsilon} = 2 \int_0^\infty \int_0^\infty {(u(x) - u(y)) (\psi_\varepsilon(x) - \psi_\varepsilon(y)) \over (x - y)^2} \, dx \, dy \notag \\ + \varepsilon \int_0^\infty \int_0^\infty { (\psi_\varepsilon(x) - \psi_\varepsilon(y))^2 \over (x - y)^2} \, dx \, dy. \end{align} Next, for $\delta \in (0, 1)$ we split the integrals above into those over $\{ |x - y| \leq \delta\}$ and those over $\{ |x - y| > \delta\}$. Since $J_\beta(\theta) < \infty$, by Cauchy-Schwarz inequality and Lemma \ref{l:h12} the former are bounded by $C \sqrt{\delta}$ with $C \geq 0$ independent of $\varepsilon \in (-1,1) \backslash\{0\}$. To compute the latter, we use Lebesgue dominated convergence theorem to pass to the limit as $\varepsilon \to 0$ with $\delta$ fixed. Observe that since \begin{align} \label{sqdom} 2 \iint_{\{ |x - y| > \delta \} } {|u(x) - u(y)| \, |\psi_\varepsilon(x) - \psi_\varepsilon(y)| \over (x - y)^2} \, dx \, dy \leq \iint_{\{ |x - y| > \delta \} } {(u(x) - u(y))^2 \over (x - y)^2} \, dx \, dy \notag \\ + \iint_{\{ |x - y| > \delta \} } { (\psi_\varepsilon(x) - \psi_\varepsilon(y))^2 \over (x - y)^2} \, dx \, dy, \end{align} and since by our assumption and \eqref{Jb2} the first integral is bounded, it is sufficient to dominate the integrand in the second term of the right-hand-side of \eqref{sqdom} by an integrable function independent of $\varepsilon$. 
Using Lemma \ref{l:trig}, we can write for all $|x - y| > \delta$: \begin{align} { (\psi_\varepsilon(x) - \psi_\varepsilon(y))^2 \over (x - y)^2} \leq { (|\psi_\varepsilon(x)| + |\psi_\varepsilon(y)|)^2 \over (x - y)^2} \leq 2 \, {|\phi(x)|^2 + |\phi(y)|^2 \over (x - y)^2} \chi_{\{ |x - y| > \delta\} }(x, y) =: G_\delta(x, y), \end{align} where $\chi_{\{ |x - y| > \delta\} }$ is the characteristic function of the set $\{ |x - y| > \delta\}$. Since $\phi$ is bounded and has compact support, we have $G_\delta \in L^1(\mathbb R^+ \times \mathbb R^+)$. Therefore, since $\psi_\varepsilon(x) \to \psi(x)$ as $\varepsilon \to 0$ for all $x \in \mathbb R^+$, by Lebesgue dominated convergence theorem we have \begin{align} \lim_{\varepsilon \to 0} & \iint_{\{ |x - y| > \delta \} } {(u(x) - u(y)) (\psi_\varepsilon(x) - \psi_\varepsilon(y)) \over (x - y)^2} \, dx \, dy = \iint_{\{ |x - y| > \delta \} } {(u(x) - u(y)) (\psi(x) - \psi(y)) \over (x - y)^2} \, dx \, dy \\ \lim_{\varepsilon \to 0} & \iint_{\{ |x - y| > \delta \} } { (\psi_\varepsilon(x) - \psi_\varepsilon(y))^2 \over (x - y)^2} \, dx \, dy = \iint_{\{ |x - y| > \delta \} } { (\psi(x) - \psi(y))^2 \over (x - y)^2} \, dx \, dy < \infty. \end{align} Lastly, combining this result with the estimates of the integrals over $\{ |x - y| \leq \delta\}$ and sending $\delta \to 0$ completes the proof, once again, by Lebesgue dominated convergence theorem, Lemma \ref{l:h12} and Cauchy-Schwarz inequality. \end{proof} With the differentiability of $J_\beta$ established in Lemma \ref{l:gateaux}, differentiability of $E_\beta$ then follows by a standard argument. Thus, we arrive at the following result that yields the Euler-Lagrange equation for a minimizer of $E_\beta$ over $\mathcal A$ in weak form. \begin{proposition} \label{p:ELw} Let $\theta$ be a minimizer of $E_\beta$ over $\mathcal A$. 
Then \begin{align} \label{ELw} {\nu \over 4 \pi} \int_0^\infty \int_0^\infty {( \sin (\theta(x) - \beta) - \sin(\theta(y) - \beta)) (\cos (\theta(x) - \beta) \phi(x) - \cos (\theta(y) - \beta) \phi(y)) \over (x - y)^2} \, dx \, dy\notag \\ + \int_0^\infty \left( \theta' \phi' + \phi \sin \theta \cos \theta \right) dx + {\nu \over 2 \pi} \int_0^\infty {\phi \sin (\theta - \beta) \cos (\theta - \beta) \over x} \, dx = 0 \qquad \forall \phi \in C^\infty_c\big(\mathbb R^+ \big). \end{align} \end{proposition} Our next goal is to find an alternative representation of the nonlocal term in \eqref{ELw} that would allow us to proceed with establishing higher regularity of the minimizers of $E_\beta$, ultimately obtaining the classical form of the Euler-Lagrange equation in \eqref{EL}. \begin{lemma} \label{l:h2} Let $\theta \in \mathcal A$ be such that $J_\beta(\theta) < \infty$. Then for every $\phi \in C^\infty_c\big(\mathbb R^+ \big)$ we have \begin{align} \label{ELs} \int_0^\infty \int_0^\infty {( \sin (\theta(x) - \beta) - \sin(\theta(y) - \beta)) (\cos (\theta(x) - \beta) \phi(x) - \cos (\theta(y) - \beta) \phi(y)) \over (x - y)^2} \, dx \, dy\notag \\ + 2 \int_0^\infty {\sin(\theta(x) - \beta) \cos(\theta(x) - \beta) \over x} \, \phi(x) \, dx \notag \\ = 2 \int_0^\infty \left( \Xint-_0^\infty {\cos (\theta(y) - \beta) \theta'(y) \over x - y} \, dy \right) \cos (\theta(x) - \beta) \phi(x) \, dx. \end{align} \end{lemma} \begin{proof} We begin by defining $u$ and $\psi$ as in \eqref{u} and \eqref{psi}, respectively, and extending them by zero to the whole of $\mathbb R$. We also similarly extend $\phi$. To simplify the notations, we still denote those extensions as $u$, $\psi$ and $\phi$, respectively. 
Next, we define \begin{align} \label{I} I := \int_{-\infty}^\infty \int_{-\infty}^\infty {( u(x) - u(y)) (\psi(x) - \psi(y)) \over (x - y)^2} \, dx \, dy, \end{align} and for $\delta > 0$ we write $I = I_1^\delta + I_2^\delta$, where \begin{align} I_1^\delta := \iint_ {\{ |x - y| > \delta \} }{ ( u(x) - u(y)) (\psi(x) - \psi(y)) \over (x - y)^2} \, dx \, dy, \label{I1} \\ I_2^\delta := \iint_ {\{ |x - y| \leq \delta \} } { ( u(x) - u(y)) (\psi(x) - \psi(y) ) \over (x - y)^2} \, dx \, dy. \label{I2} \end{align} Note that by our assumptions \eqref{I} and, hence, \eqref{I1} and \eqref{I2}, define absolutely convergent integrals. Indeed, by Lemma \ref{l:trig} and Cauchy-Schwarz inequality we have \begin{align} |I| & \leq J_\beta^{1/2}(\theta) \left( \int_{-\infty}^\infty \int_{-\infty}^\infty {(\psi(x) - \psi(y))^2 \over (x - y)^2} \, dx \, dy \right)^{1/2} \notag \\ & = J_\beta^{1/2}(\theta) \left( \int_0^\infty \int_0^\infty {(\psi(x) - \psi(y))^2 \over (x - y)^2} \, dx \, dy + 2 \int_0^\infty {|\psi(x)|^2 \over x} \, dx \right)^{1/2}, \end{align} which is finite by Lemma \ref{l:h12}. Let now $w_n \in C^\infty_c(\mathbb R^+)$ be such that $w_n \to \theta - \beta$ in $\mathring{H}^1_0(\mathbb R^+)$, and extend $w_n$ by zero for $x < 0$. We claim that if $u_n := \sin w_n$, then we have $I_n \to I$ as $n \to \infty$, where \begin{align} I_n := \int_{-\infty}^\infty \int_{-\infty}^\infty {( u_n(x) - u_n(y)) (\psi(x) - \psi(y) ) \over (x - y)^2} \, dx \, dy. \end{align} Indeed, define $I^\delta_{1,n}$ and $I^\delta_{2,n}$ as in \eqref{I1} and \eqref{I2} with $u$ replaced by $u_n$. Arguing as in the proof of Lemma \ref{l:gateaux}, in view of the pointwise convergence of $u_n$ to $u$ we have $I^\delta_{1,n} \to I_1^\delta$. At the same time, by the argument in the proof of Lemma \ref{l:h12} and boundedness of $u_n'$ in $L^2(\mathbb R)$, we also have $|I^\delta_{2,n}| \leq C \delta$ for some $C > 0$ independent of $n$. Thus, the claim follows by arbitrariness of $\delta$. 
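The passage to Fourier space carried out next rests on a standard Plancherel-type identity: for any real $v \in H^1(\mathbb R)$, with the convention $\widehat v(k) = \int_{-\infty}^\infty e^{-ikx} v(x) \, dx$, writing $v(x) - v(y)$ in Fourier space and using $\int_{-\infty}^\infty (1 - \cos t) \, t^{-2} \, dt = \pi$ yields
\begin{align}
\int_{-\infty}^\infty \int_{-\infty}^\infty {(v(x) - v(y))^2 \over (x - y)^2} \, dx \, dy = {1 \over 2 \pi} \int_{-\infty}^\infty |\widehat v(k)|^2 \int_{-\infty}^\infty {2 (1 - \cos k z) \over z^2} \, dz \, dk = \int_{-\infty}^\infty |k| \, |\widehat v(k)|^2 \, dk,
\end{align}
and the bilinear version used below follows by polarization.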
Now, since both $u_n$ and $\psi $ belong to $H^1(\mathbb R)$ by Lemma \ref{l:trig} and \cite[Corollary 8.10 and Corollary 8.11]{brezis}, we may proceed by expressing $I_n$ in Fourier space \cite[Theorem 7.12]{lieb-loss}: \begin{align} \label{InF} I_n = \int_{-\infty}^\infty |k| \mathcal F^*[ u_n] \mathcal F[ \psi ] dk, \end{align} where ``$*$'' stands for complex conjugation and \begin{align} \label{F} \mathcal F[u] := \int_{-\infty}^\infty e^{-i k x} u(x) \, dx. \end{align} Alternatively, we can write \eqref{InF} as \begin{align} I_n = \int_{-\infty}^\infty (-i \, \text{sgn} (k))^{*} \mathcal F^*[ u_n'] \mathcal F[ \psi ] dk. \end{align} We now pass to the limit $n \to \infty$ by taking advantage of the fact that $u_n' \to u'$ in $L^2(\mathbb R)$ and, therefore, we have \begin{align} \label{hilb} I = \int_{-\infty}^\infty (-i \, \text{sgn} (k))^{*} \mathcal F^*[ u'] \mathcal F[ \psi ] dk = 2 \int_{-\infty}^\infty \left( \Xint-_{-\infty}^\infty { u'(y) \over x - y} \, dy \right) \psi(x) \, dx, \end{align} where we inserted the definition of Hilbert transform \cite[Section 5.1.1]{grafakos}. Finally, to obtain the desired formula, we separate out the contribution of the negative real axis in the integral over $y$ in \eqref{I}. We have \begin{align} I = \int_0^\infty \int_0^\infty {( u(x) - u(y)) (\psi(x) - \psi(y) ) \over (x - y)^2} \, dx \, dy + 2 \int_{-\infty}^0 \int_0^\infty { u(x) \psi(x) \over (x - y)^2} \, dx \, dy \notag \\ = \int_0^\infty \int_0^\infty {( u(x) - u(y)) (\psi(x) - \psi(y) ) \over (x - y)^2} \, dx \, dy + 2 \int_0^\infty {u(x) \psi(x) \over x} \, dx, \end{align} where we took into account that since $\phi \in C^\infty_c\big( \mathbb R^+ \big)$ (see \eqref{psi0} for the definition of $\psi$), there is no singularity in the integral near $x = 0$. At the same time, from \eqref{hilb} we get \begin{align} I = 2 \int_0^\infty \left( \Xint-_0^\infty { u'(y) \over x - y} \, dy \right) \psi(x) \, dx, \end{align} which concludes the proof.
\end{proof} \begin{proof}[Proof of Theorem \ref{t:regular}] By Proposition \ref{p:ELw}, $\theta$ solves \eqref{ELw}. At the same time, by Lemma \ref{l:h2} we have \begin{align} \label{th2} \int_0^\infty \theta \phi'' \, dx = \int_0^\infty g \phi \, dx \qquad \forall \phi \in C^\infty_c(\mathbb R^+), \end{align} where \begin{align} \label{gH} g(x) := \sin \theta(x) \cos \theta(x) + {\nu \over 2 \pi} \cos(\theta(x) - \beta) \, \Xint-_0^\infty {\cos (\theta(y) - \beta) \theta'(y) \over x - y} \, dy. \end{align} Notice that since $\sin \theta \in L^2(\mathbb R^+)$ and $\theta' \in L^2(\mathbb R^+)$, by the properties of Hilbert transform \cite[Theorem 5.1.7 and Theorem 5.1.12]{grafakos} we have that $g \in L^2(\mathbb R^+)$ as well. Hence $\theta'' \in L^2(\mathbb R^+)$, which by Sobolev embedding \cite[Theorem 8.8]{brezis} implies that $\theta \in C^1(\overline{\mathbb R^+})$ and $\theta' \in L^\infty(\mathbb R^+)$. Focusing now on the nonlocal term, let $u(x)$ be defined by \eqref{u}, extended, as usual, by zero to $x < 0$, and let $h(x)$ denote the integral in \eqref{gH}, with $x \in \mathbb R$. Note that by chain rule we have $u \in C^1(\overline{\mathbb R^+})$, and $u'(x)$ experiences a jump discontinuity at $x = 0$ whenever $u'(0^+) \not= 0$. Also, by weak chain rule \cite[Corollary 8.11]{brezis} we have $u'' \in L^2(\mathbb R^+)$. 
Passing to the Fourier space as in the proof of Lemma \ref{l:h2} (but treating $h$ as a member of $\mathcal S'(\mathbb R)$ now), for any $\phi \in C^\infty_c(\mathbb R^+)$, extended by zero to $x < 0$, we can write \begin{multline} -\int_{-\infty}^\infty h \phi' \, dx = \frac12 \int_{-\infty}^\infty |k| \mathcal F^*[\phi] \mathcal F[u'] \, dk = \frac12 \int_{-\infty}^\infty (-i \, \text{sgn} (k)) \mathcal F^*[\phi] \mathcal F[u''] \, dk \\ + \frac{u'(0^+)}{2} \int_{-\infty}^\infty (-i \, \text{sgn} (k)) \mathcal F^*[\phi] \, dk = \int_0^\infty \left( \, \Xint-_0^\infty {u''(y) \over x - y} \, dy \right) \phi(x) \, dx + u'(0^+) \int_0^\infty {\phi(x) \over x} \, dx, \end{multline} where by $u''$ we mean the absolutely continuous part of the distributional derivative of $u'$ on $\mathbb R$. Thus, we have \begin{align} h'(x) = \Xint-_0^\infty {u''(y) \over x - y} \, dy + {u'(0) \over x} \qquad \text{in} \quad \mathcal D'(\mathbb R^+). \end{align} In this expression, the first term is still in $L^2(\mathbb R)$. Similarly, the second term is in $L^2_{loc}(\mathbb R^+)$. By \eqref{th2} and weak product and chain rules \cite[Corollary 8.9 and Corollary 8.11]{brezis}, this then yields that $\theta''' \in L^2_{loc}(\mathbb R^+)$, implying, in particular, that $\theta \in C^2(\mathbb R^+)$. Furthermore, by \eqref{th2} we have \begin{align} \label{th3} \theta''(x) = g(x) \qquad \forall x > 0. \end{align} Finally, again by passing to Fourier space \cite[Section 3]{dinezza12} we have \begin{align} \Xint-_0^\infty {u'(y) \over x - y} \, dy = \Xint-_{-\infty}^\infty {u(x) - u(y) \over (x - y)^2} \, dy = \Xint-_0^\infty {u(x) - u(y) \over (x - y)^2} \, dy + {u(x) \over x}, \end{align} where we took into account that $u(0) = 0$. Substituting this expression to \eqref{th3} yields \eqref{EL}. To prove the remaining statements about the behavior of $\theta(x)$ near $x = 0$, we multiply \eqref{th3} by $\theta'(x)$ and integrate over $\mathbb R^+$. 
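For the local terms this integration is explicit: for every $R > 0$ we have
\begin{align}
\int_0^R \theta'' \theta' \, dx = \frac12 \left( |\theta'(R)|^2 - |\theta'(0)|^2 \right) \qquad \text{and} \qquad \int_0^R \sin \theta \cos \theta \, \theta' \, dx = \frac12 \left( \sin^2 \theta(R) - \sin^2 \theta(0) \right),
\end{align}
so it remains to pass to the limit $R \to \infty$ and to account for the contribution of the nonlocal term in \eqref{gH}.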
Since the Hilbert transform is an anti-hermitian operator from $L^2(\mathbb R)$ to $L^2(\mathbb R)$ \cite[Section 5.1.1]{grafakos}, and since $\cos (\theta(x) - \beta) \theta'(x)$, extended by zero for $x < 0$, belongs to $L^2(\mathbb R)$, the contribution of the last term in the right-hand side of \eqref{gH} vanishes. At the same time, since $\theta' \in H^1(\mathbb R^+)$ we have $\theta'(x) \to 0$ as $x \to +\infty$ \cite[Corollary 8.9]{brezis}. Since also $\sin \theta(x) \to 0$ as $x \to +\infty$, by \cite[Theorem 8.2]{brezis} we have \begin{align} |\theta'(0)|^2 = \sin^2 \theta(0). \end{align} The boundary condition then implies that $ |\theta'(0)| = \sin \beta$. In particular, $\theta'(0) \not= 0$. Thus, the function $u'(x)$ defined above experiences a jump discontinuity at $x = 0$, leading to a logarithmic divergence of the integral in \eqref{gH} as $x \to 0^+$. This concludes the proof. \end{proof} \section{Proof of Theorem~\ref{t:gammanu}} \label{sec:proof-theor-gammanu} With a slight abuse of notation we denote the energy in \eqref{Eb} by $E_\beta^\nu$. We first show that $\theta_\nu - \beta \to \theta_0 -\beta$ in $\mathring{H}^1_0(\mathbb R^+)$ as $\nu \to 0$. Let us assume that $E_\beta^\nu (\theta_\nu) \leq C$ and $0<\nu<1$. Using the same arguments as in Lemma~\ref{l:sinx}, we obtain \begin{align} \frac14 \sin^2 \theta(x) + {\nu \over 4 \pi} \cdot {\sin^2 (\theta(x) - \beta) - \sin^2 (\eta_\beta(x) - \beta) \over x} \geq - {C \over 1 + x^2} \qquad \forall x > 0, \end{align} where $C > 0$ depends only on $\beta$ and $\eta_\beta$. Therefore we have \begin{equation} E_\beta^0 (\theta_\nu) = \frac12 \int_0^\infty \left( |\theta_\nu'|^2 + \sin^2 \theta_\nu \right) dx \leq C, \end{equation} which immediately implies (see the proof of Theorem~\ref{t:exist}) that $\theta_\nu - \beta \rightharpoonup \theta -\beta $ in $\mathring{H}^1_0(\mathbb R^+)$ and $\theta \in \mathcal A$. 
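Recall that, for a sequence $\nu_n \to 0$, $\Gamma$-convergence of $E^{\nu_n}_\beta$ to $E^0_\beta$ with respect to weak convergence in $\mathring{H}^1_0(\mathbb R^+)$ amounts to verifying the lower bound
\begin{align}
\theta^n - \beta \rightharpoonup \theta - \beta \ \text{in} \ \mathring{H}^1_0(\mathbb R^+) \qquad \Longrightarrow \qquad \liminf_{n \to \infty} E^{\nu_n}_\beta(\theta^n) \geq E^0_\beta(\theta),
\end{align}
together with the existence, for every admissible $\theta$, of a recovery sequence $\theta^n$ with $\theta^n - \beta \rightharpoonup \theta - \beta$ and $\limsup_{n \to \infty} E^{\nu_n}_\beta(\theta^n) \leq E^0_\beta(\theta)$; these are exactly the two steps carried out next.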
Now we prove $\Gamma$-convergence of energies with respect to the weak convergence in $\mathring{H}^1_0(\mathbb R^+)$ (for a general introduction to $\Gamma$-convergence, see, e.g., \cite{braides}). Let us assume that $\nu_n \to 0$ and $\theta^n - \beta \rightharpoonup \theta -\beta$ in $\mathring{H}^1_0(\mathbb R^+)$. Then by Sobolev embedding \cite[Theorem 8.8]{brezis}, upon extraction of a subsequence we also have $\theta^n(x) \to \theta(x)$ for all $x > 0$. Therefore, by lower semicontinuity of the norm, Fatou's lemma and positivity of $J_\beta$ we have \begin{equation} \liminf_{n \to \infty} E_\beta^{\nu_n} (\theta^n) \geq E_\beta^0 (\theta). \end{equation} Now, taking any $\theta - \beta \in \mathring{H}^1_0(\mathbb R^+)$ such that $E_\beta^{\nu_n}(\theta) < +\infty$, we can construct a sequence $\theta^n \equiv \theta$ that trivially satisfies \begin{equation} \limsup_{n \to \infty} E_\beta^{\nu_n} (\theta^n) = E_\beta^0 (\theta), \end{equation} establishing the $\Gamma$-limit sought. Using the properties of $\Gamma$-convergence \cite{braides}, we then have $\lim_{\nu \to 0} E_\beta^\nu (\theta_\nu) = E_\beta^0 (\theta_0)$, where $\theta_0$ is given by the right-hand side of \eqref{nostray}, which implies $\theta_\nu - \beta \to \theta_0 -\beta$ in $\mathring{H}^1_0(\mathbb R^+)$ and $\sin\theta_\nu \to \sin\theta_0$ in $H^1(\mathbb R^+) \cap C(\overline{\mathbb R^+})$. However, from the Modica-Mortola type inequality in \eqref{mm} we also have for some $C>0$ depending only on $\beta$ and $\eta_\beta$: \begin{align} \int_0^\infty |\sin\theta_\nu||\theta_\nu'|\, dx \leq E_\beta^\nu(\theta_\nu) + \frac{C}{2} \nu \leq E_\beta^0 (\theta_0) + C\nu = 1 -\cos\beta + C\nu. \end{align} This implies that $\theta_\nu(x) \in (-C \nu, \beta + C \nu)$ for all $x > 0$ and some $C>0$ depending only on $\beta$ and $\eta_\beta$.
Hence for any fixed $\beta \in (0, \frac12 \pi)$ we can always choose $\nu_0 > 0$ such that for all $\nu<\nu_0$ we have $\theta_\nu(x) \in [-\frac12 \beta - \frac14 \pi, \frac12 \beta + \frac14 \pi] \subset (-\frac12 \pi, \frac12 \pi)$ for all $x > 0$. Recall also that $\theta_0(x) \in (0, \beta) \subset [-\frac12 \beta - \frac14 \pi, \frac12 \beta + \frac14 \pi]$ for all $x > 0$. Thus, using mean value theorem, with some $\tilde\theta(x)$ between $\theta_\nu(x)$ and $\theta_0(x)$, we arrive at \begin{align} |\sin \theta_\nu(x) - \sin \theta_0(x)| = |\cos \tilde\theta(x)|\, | \theta_\nu(x) - \theta_0(x)| \geq C | \theta_\nu(x) - \theta_0(x)| \qquad \forall x > 0, \end{align} for some $C > 0$ depending only on $\beta$. In particular, in view of the uniform convergence of $\sin\theta_\nu$ to $\sin\theta_0$, for any $\varepsilon > 0$ and all $\nu$ sufficiently small we have \begin{align} \sup_{x > 0} |\theta_\nu(x) - \theta_0(x)| \leq C \sup_{x > 0} |\sin\theta_\nu(x) - \sin\theta_0(x)| <\varepsilon. \end{align} This concludes the proof. \qed \section{Proof of Theorem~\ref{t:gammabeta}} \label{sec:proof-theor-gammabeta} Let us first show that $\theta_\beta \to 0$ as $\beta \to 0$ uniformly in $C(\overline{\mathbb R^+})$. We define $\phi_\beta(x) = \max\{ \beta (1- x), 0\}$ for $0<\beta<{\pi \over 4}$ and all $x \in \mathbb R$.
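Note that, since $\phi_\beta'(x) = -\beta$ and $0 \leq \sin \phi_\beta(x) \leq \phi_\beta(x) \leq \beta$ for $x \in (0,1)$, while $\phi_\beta(x) = 0$ for $x \geq 1$, a direct computation gives
\begin{align}
\frac12 \int_0^\infty \left( |\phi_\beta'|^2 + \sin^2 \phi_\beta \right) dx \leq \frac12 \left( \beta^2 + \beta^2 \right) = \beta^2,
\end{align}
which is the elementary estimate used in the second inequality of \eqref{Itf} below.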
It is clear that $E_\beta (\theta_\beta) \leq E_\beta (\phi_\beta)$ and, therefore, after a straightforward computation, \begin{align} \frac12 \int_0^\infty \left( |\theta_\beta'|^2 + \sin^2 \theta_\beta \right)\, dx &\leq \frac12 \int_0^\infty \left( |\phi_\beta'|^2 + \sin^2 \phi_\beta \right)\, dx + {\nu \over 4 \pi} \int_0^\infty {\sin^2 (\phi_\beta - \beta) - \sin^2 (\theta_\beta - \beta) \over x} \, dx \notag\\ &+ {\nu \over 8 \pi} \int_0^\infty \int_0^\infty {(\sin (\phi_\beta(x) - \beta) - \sin (\phi_\beta(y) - \beta))^2 \over (x - y)^2} \, dx \, dy \notag \\ & \leq \beta^2 + {\nu \over 4 \pi} \int_0^\infty {\sin^2 (\phi_\beta - \beta) - \sin^2 (\theta_\beta - \beta) \over x} \, dx \notag\\ &+ {\nu \over 8 \pi} \int_0^\infty \int_0^\infty {(\phi_\beta(x) - \phi_\beta(y))^2 \over (x - y)^2} \, dx \, dy \notag \\ &\leq \left( {\nu \over 4 \pi} +1 \right) \beta^2 + {\nu \over 4 \pi} \int_0^\infty {\sin^2 (\phi_\beta - \beta) - \sin^2 (\theta_\beta - \beta) \over x} \, dx. \label{Itf} \end{align} Now, with the help of trigonometric identities and Young's inequality we estimate the last integral in \eqref{Itf}: \begin{align} & {\nu \over 4 \pi} \int_0^\infty {\sin^2 (\phi_\beta - \beta) - \sin^2 (\theta_\beta - \beta) \over x} \, dx \notag \\ & \leq {\nu \over 4 \pi} \int_0^1 {\sin^2 (\phi_\beta - \beta) \over x} \, dx + {\nu \over 4 \pi} \int_1^\infty {\sin (2\beta -\theta_\beta) \sin \theta_\beta \over x} \, dx \notag \\ & \leq {\nu \beta^2 \over 4 \pi} + {\nu \over 4 \pi} \int_1^\infty {\sin 2\beta \cos \theta_\beta \sin \theta_\beta - \cos 2\beta \sin^2 \theta_\beta \over x} \, dx \notag \\ & \leq {\nu \beta^2 \over 4 \pi} + {\nu \beta \over 2 \pi} \int_1^\infty {|\sin \theta_\beta | \over x} \, dx \leq {\nu (\nu + \pi) \beta^2 \over 4 \pi^2} + \frac{1}{4} \int_1^\infty \sin^2\theta_\beta \, dx, \end{align} recalling that $0 < \beta < {\pi \over 4}$ and, therefore, $\cos 2 \beta > 0$. 
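The final inequality above is obtained by applying Young's inequality pointwise: for every $x \geq 1$,
\begin{align}
{\nu \beta \over 2 \pi} \cdot {|\sin \theta_\beta| \over x} \leq \frac14 \sin^2 \theta_\beta + {\nu^2 \beta^2 \over 4 \pi^2 x^2},
\end{align}
and then integrating, using $\int_1^\infty x^{-2} \, dx = 1$ and ${\nu \beta^2 \over 4 \pi} + {\nu^2 \beta^2 \over 4 \pi^2} = {\nu (\nu + \pi) \beta^2 \over 4 \pi^2}$.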
Combining the above inequalities, we obtain \begin{align} \label{Cb214} \frac14 \int_0^\infty \left( |\theta_\beta'|^2 + \sin^2 \theta_\beta \right)\, dx \leq C \beta^2, \end{align} for some $C > 0$ depending only on $\nu$. Hence the left-hand side of \eqref{Cb214} vanishes as $\beta \to 0$, and by \eqref{mm} this implies $\theta_\beta \to 0$ uniformly in $C(\overline{\mathbb R^+})$ in this limit. Now we prove uniqueness of minimizers when $\beta$ is small enough. From the arguments above it is clear that for all $\delta > 0$ and all $\beta$ sufficiently small we have $\theta_\beta \in (-\delta, \delta)$. Therefore, if $u_\beta := \sin(\theta_\beta -\beta)$, then $\theta_\beta = \beta + \arcsin u_\beta \in C^1(\overline{\mathbb R^+})$ and $u_\beta \in (-2 \delta, 2 \delta)$, provided also that $\beta \leq \delta$. We rewrite the energy $E_\beta(\theta_\beta)$ in terms of $u_\beta$: \begin{align} E_\beta(\theta_\beta) &= \frac12 \int_0^\infty \left( \frac{|u_\beta'|^2}{1-u_\beta^2} + \sin^2(\beta + \arcsin u_\beta) + {\nu \over 2 \pi} \cdot \frac{u_\beta^2 - \sin^2(\eta_\beta -\beta)}{x} \right) dx \notag \\ &+ {\nu \over 8 \pi} \int_0^\infty \int_0^\infty {(u_\beta(x) - u_\beta(y))^2 \over (x - y)^2} \, dy \, dx \notag \\ & - {\nu \over 8 \pi} \int_0^\infty \int_0^\infty {(\sin (\eta_\beta(x) - \beta) - \sin (\eta_\beta(y) - \beta))^2 \over (x - y)^2} \, dy \, dx. \label{Econvex} \end{align} It is straightforward to show that the right-hand side of \eqref{Econvex} is a strictly convex functional of $u_\beta$ for all $\delta$ sufficiently small and, therefore, the minimizer of $E_\beta$ is unique when $\beta$ is small enough. \qed \section{Numerics and discussion} \label{sec:num} We conclude this paper by presenting the results of some numerical simulations that exhibit edge domain walls and discussing some of their distinctive characteristics.
We first perform a micromagnetic simulation of the remanent magnetization configuration in a ferromagnetic strip, using the simplified two-dimensional thin film model from section \ref{sec:model} (for details of the numerical algorithm, see \cite{mo:jcp06}). The result of the simulation is presented in Figure~\ref{f:strip2d} and shows the long-time asymptotic stationary magnetization configuration formed as the result of solving the overdamped Landau-Lifshitz-Gilbert equation with the energy from \eqref{Emd} in a strip of lateral dimensions $128.25 \times 32.25$ (in the units of $L$). The initial condition was the magnetization saturated in a direction slightly away from the vertical. The thin film parameter was fixed at $\nu = 20$. The dimensionless parameters above correspond, for example, to those of a permalloy strip ($M_s = 8 \times 10^5$ A/m, $A = 1.3 \times 10^{-11}$ J/m and $K = 5 \times 10^2$ J/m$^3$ \cite{nistmumag}) with dimensions $20.7 \mu$m$\times 5.2\mu$m$\times 4$nm, for which $\ell = 5.69$ nm and $L = 161$ nm, giving an edge domain wall width of order 1 $\mu$m. To obtain the one-dimensional domain wall profiles numerically, we solve a parabolic version of \eqref{EL} for $\theta = \theta(x, t)$ in $\mathbb R^+ \times \mathbb R^+$: \begin{align} \label{ELt} \theta_t = \theta_{xx} - \sin \theta \cos \theta - {\nu \over 2} \cos (\theta - \beta) \left( -{d^2 \over dx^2} \right)^{1/2} \sin (\theta - \beta) , \end{align} with Dirichlet boundary condition \begin{align} \label{th0t} \theta(0, t) = \beta, \end{align} and initial data \begin{align} \label{th0x} \theta(x,0) = \frac{2 \beta}{1+e^{x/2}}, \end{align} which is a monotonically decreasing function that asymptotes to zero as $x \to +\infty$. To solve the above problem, we employ a finite-difference discretization and an optimal-grid-based method to compute the stray field, extending $\theta$ by its boundary value to $x < 0$.
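Here $\left( -d^2/dx^2 \right)^{1/2}$ denotes the half-Laplacian, i.e., the operator acting in Fourier space as multiplication by $|k|$, or, equivalently,
\begin{align}
\left( -{d^2 \over dx^2} \right)^{1/2} v(x) = {1 \over \pi} \, \Xint-_{-\infty}^\infty {v'(y) \over x - y} \, dy,
\end{align}
which is consistent with the form of the nonlocal term in \eqref{gH}.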
The details of the numerical method can be found, once again, in \cite{mo:jcp06}. The domain wall profiles are then identified with the steady states of \eqref{ELt} reached as $t \to \infty$. \begin{figure}[t] \begin{center} \includegraphics[width=.9\textwidth]{boundary-walls-tails.pdf} \caption{Computed boundary wall profiles for $\beta = \pi/2, \pi/4, \pi/8$. In panels (a) and (d), $\nu=1$; in panels (b) and (e), $\nu=10$; in panels (c) and (f), $\nu=50$. The upper panels (a)-(c) show the computed profiles for the given values of $\beta$ and $\nu$. The lower panels (d)-(f) show the same profiles in log-log coordinates, with the dashed lines indicating the $1/x$ decay.} \label{boundarywalls} \end{center} \end{figure} We collect the results of our numerical solution of the above problem for a range of physically relevant values of $\beta$ and $\nu$ in Figs. \ref{boundarywalls} and \ref{windingnonmono}. The upper panels (a)-(c) of Fig. \ref{boundarywalls} show the profiles of edge domain walls with $\beta$ equal to $\pi/2$ (red curves), $\pi/4$ (green curves) and $\pi/8$ (blue curves). The lower panels show the same profiles in log-log coordinates, with the dashed lines indicating $1/x$ decay. Each pair of panels corresponds to a different value of $\nu$: $\nu = 1$ in panels (a) and (d); $\nu = 10$ in panels (b) and (e); $\nu = 50$ in panels (c) and (f). In all the simulations, a discretization step $\Delta x = 0.125$ was used near the edge on a non-uniform grid with a stretch factor $b = 20$ \cite{mo:jcp06} and terminating at $x \simeq 6 \times 10^3$. A 16-node optimal geometric grid was used in the transverse direction to compute the stray field \cite{ingerman00,mo:jcp06}. \begin{figure} \centering \includegraphics[width=\textwidth]{winding-nonmono.pdf} \caption{Edge domain walls exhibiting winding and lack of monotonicity obtained by solving \eqref{ELt} for $\nu = 10$ and different values of $\beta$. In (a), $\beta = -\pi/2, \pi/2, 3 \pi/2, 5 \pi/2$. 
In (b), $\beta = -3 \pi / 4, \pi/4, 5 \pi / 4, 9 \pi/4$. In (c), the non-monotone decay in the tails of the solutions for $\beta = -5\pi/8$ (red), $\beta = -3\pi/4$ (green) and $\beta = -7\pi/8$ (blue) at large $x$ is emphasized. } \label{windingnonmono} \end{figure} For $0 < \beta \leq \pi/2$ the obtained domain wall profiles exhibit monotonic decay from $\theta = \beta$ at $x = 0$ to $\theta = 0$ at $x = +\infty$. Thus, qualitatively the profiles are similar to those in \eqref{nostray} corresponding to the case $\nu = 0$. This is in agreement with the predictions of Theorem \ref{t:gammanu} and Theorem \ref{t:gammabeta} for the cases $\nu \lesssim 1$ and $\beta \lesssim {\pi \over 2}$, respectively. At the same time, one can see from Figs. \ref{boundarywalls}(b) and \ref{boundarywalls}(c) that as the value of $\nu$ is increased, the profiles develop a multiscale structure, whereby an inner core forms near the edge on the $O(\nu)$ length scale for $\nu \gg 1$, followed by either an exponential ($\beta = \pi/2$) or an algebraic ($\beta \not= \pi/2$) tail. Heuristically, this scaling may be derived by balancing the anisotropy and the stray field terms in \eqref{EL}. A structure similar to this was reported previously for N\'eel walls at large values of $\nu$ \cite{garcia99,melcher03,garcia04,mo:jcp06,hubert}. Notice that all the profiles are regular and exhibit a finite slope near the edge, in agreement with Theorem \ref{t:regular}. Focusing on the decay of the domain wall profiles, we observe that even though their overall shapes may be qualitatively similar to that in \eqref{nostray}, they exhibit slow algebraic decay for all $\beta \not= \pi/2$, even for small values of $\nu$, see Figs. \ref{boundarywalls}(d)--(f). 
In all those cases, the profiles exhibit a decay rate proportional to $1/x$, which can be explained by the fact that there is a build-up of magnetic charge near the material edge, which results in a stray field decaying like $1/x$ away from the edge. This should be contrasted to the asymptotic $1/x^2$ decay observed in N\'eel walls \cite{cm:non13}. At the same time, for the special value of $\beta = \pi/2$ the decay becomes exponential, which can also be seen directly from \eqref{EL}. Indeed, when $\beta = \pi/2$, the $\cos (\theta(x) - \beta)$ factor multiplying the contribution of the stray field vanishes as $x \to +\infty$, making the anisotropy term dominate at large values of $x$ and, therefore, resulting in exponential decay. We note that the domain wall profiles obtained by us numerically in Fig. \ref{boundarywalls} are not guaranteed to be those corresponding to the global energy minimizers in Theorem \ref{t:exist}. Instead, they may correspond only to local energy minimizers. In fact, neither monotonicity, nor uniqueness of the energy minimizing edge domain wall profiles are known a priori, in contrast to the N\'eel walls in the bulk of the material \cite{cm:non13,my:prsla16}. In order to assess whether other types of local minimizers may exist in the problem under consideration, we performed further numerical studies of solutions of \eqref{ELt}--\eqref{th0x} by considering values of $\beta$ outside the interval $(0, \pi/2]$. The obtained stationary solutions for $\nu = 10$ are shown in Fig. \ref{windingnonmono}. All these solutions decay to zero as $x \to \infty$, indicating a nontrivial winding (i.e., a variation of $\theta$ by more than $\pi/2$) in each one for $\beta \not\in [0, \pi/2]$. We emphasize that these domain wall profiles are stabilized by nonlocal effects, since in the absence of stray field, i.e., when $\nu = 0$, such solutions do not exist. 
At the same time, the solutions with winding appear to have higher energy than the corresponding ones in Fig. \ref{boundarywalls} for the same value of $\nu$, indicating that the global energy minimizers do not exhibit winding. Furthermore, for $-\pi < \beta < -\pi/2$ the solutions exhibit overshoot and a non-monotone decay to zero as $x \to +\infty$, see Fig. \ref{windingnonmono}(c). Thus, monotonicity of the initial data in \eqref{th0x} is not preserved under \eqref{ELt}. Let us also mention that we tried different non-monotone initial conditions, but did not obtain any other solutions than those shown in Fig. \ref{windingnonmono}. However, monotone solutions with larger winding were observed numerically for still larger values of $\beta$. According to our numerics, it appears that edge domain wall solutions with arbitrarily large winding are possible. To conclude, we note that an a priori lack of monotonicity is an obstacle for proving the precise asymptotic decay of the profiles, using the methods of \cite{cm:non13}. A broader question of interest is whether the one-dimensional domain wall profiles in Theorem \ref{t:exist} are also minimizers, in some suitable sense, of the two-dimensional micromagnetic energy in \eqref{E0}. It is well known that magnetic domains often exhibit spatially modulated exit structures near the material boundary \cite{hubert}, and spatially periodic and more complicated two-dimensional edge domains have been observed experimentally in thin films with strong in-plane crystalline anisotropy \cite{dennis02}.
\section{Introduction} Interstellar travel became technologically plausible in the 1950s, when the energy release of thermonuclear fusion was observed in the first hydrogen bombs. The first studies were based on the idea of a pulse drive, directly propelled by the explosions of atomic bombs behind the craft \citep{1965Sci...149..141D, 1968PhT....21j..41D}, evolving into a direct fusion rocket \citep{1978JBIS...31S...5B}. These designs were manned interstellar arks with masses of order 10 million tonnes and speeds of 10\,\% of the speed of light. Classical rockets, both chemical and nuclear, suffer from the limitations imposed by Tsiolkovsky's rocket equation \citep{1992CeMDA..53..227P}: if a rocket carries its own fuel, the total rocket mass grows exponentially with the final velocity, making high speeds extremely expensive. A different method, which does not require the fuel to be accelerated with the ship, was proposed by Johannes \citet{kepler1604}. After observing a comet, he suggested that the cometary tail points away from the sun due to a ``breeze'', and proposed to ``provide ships or sails adapted to the heavenly breezes, and there will be some who will brave even that void''. James Clerk Maxwell predicted that radiation carries momentum and exerts pressure: ``Hence in a Medium in which waves are propagated there is a pressure in the direction normal to the wave, and numerically equal to the energy in unit of volume'' \citep{Maxwell1873,maxwell1990scientific}. \citet{1967Natur.213..588R} noted that there was no obvious way to decelerate a light-sail spacecraft at the target star system. Only recently, \citet{2017ApJ...835L..32H} and \citet{2017arXiv170403871H} suggested decelerating using the stellar radiation and gravitation in a maneuver they referred to as a photogravitational assist. 
A project by the ``Breakthrough Initiatives''\footnote{\url{http://breakthroughinitiatives.org}} provides monetary support (of order 100 million USD) for research on gram-scale robotic spacecraft, using a light sail for propulsion \citep{2016arXiv160401356L,2017Natur.542...20P}. Between ``Project Orion'' and the ``nanocraft concept'', there is a factor of $10^{13}$ in weight. The smaller weight results in lower build and launch costs, a benefit that could make such a mission affordable within the current century. When we compare the early studies with the most recent concept, we note that the main purpose of interstellar travel has shifted from the colonization of exoplanets with human (biological) settlers to unmanned research probes, taking spectroscopic and photographic measurements of the putative biological environment on potentially habitable exoplanets. Software and hardware engineering has made sufficient progress since the 1960s that such probes can be highly autonomous. Consequently, the required mass for probes can be reduced. Our benefit from autonomous interstellar probes is purely in the information they send back to us. Thus we shall seek to maximize the amount of information we can obtain from them. A major issue is that these probes are designed to be very light-weight, and are thus limited in terms of power. While traditional, fusion-based concepts proposed the use of high \citep[MW,][]{2016JBIS...69...278G} power at GHz frequencies for data transfer, small sailing probes cannot have a fusion reactor on board and will have to rely on photovoltaic energy, which delivers of order a kW per square meter of surface area. In the current era of high-resolution video, a high data rate to transfer spectacular observations of an alien world could be important for the public reception of such a mission, and thus its financial funding. 
It is therefore crucial to optimize interstellar communication, specifically the data rate, to maximize the volume of scientific and public data. In this work, paper I of the series, our contributions are: (1) to introduce the variables in the framework of data transfer between telescopes; (2) to assess limiting factors such as extinction, noise, and technological constraints; and (3) to calculate optimal frequencies and achievable data rates for exemplary cases. \section{Method to calculate data rates} The free-space photon flux $F$ received from a telescope at distance $d$ can be calculated as \citep{kaushal2017free}: \begin{equation} \label{eq1} F = \frac{P_{\rm t}}{\pi h f (\theta d)^2} \end{equation} where $P_{\rm t}$ is the transmitted power, $f$ the photon frequency, and $h$ Planck's constant ($\approx6.626\times10^{-34}$\,J\,s). The (half) opening angle of the diverging light beam is $\theta = \theta_{\rm d} = Q_{\rm R} \lambda/D_{\rm t}$ (in radians) with $Q_{\rm R}\approx1.22$ for a diffraction limited circular transmitting telescope of diameter $D_{\rm t}$ \citep{rayleigh}, and $\lambda=c/f$ with $c$ as the speed of light in vacuum ($299,792,458$\,m\,s$^{-1}$). In a receiving telescope with aperture $A_{\rm R}=\pi D_{\rm r}^2/4$ we obtain the flux \begin{align} \label{eq2} F_{\rm r} &= \frac{P_{\rm t}}{\pi h f (Q_{\rm R} \lambda / D_{\rm t})^2 d^2} \times \frac{\pi D_{\rm r}^2}{4} \nonumber \\ &= \frac{P_{\rm t} D_{\rm t}^2 D_{\rm r}^2}{4 h f Q_{\rm R}^2 \lambda^2 d^2} \quad ({\rm s}^{-1}). \end{align} This assumes a uniform plane-wave illumination. A telescope with central obscuration and plane-wave gaussian-beam illumination has been calculated by \citet{1974ApOpt..13.2134K}, and the flux loss from pointing errors by \citet{1987ApOpt..26.2055M}; but these secondary effects will be neglected here. 
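Eq.~\ref{eq2} is straightforward to evaluate; the sketch below uses purely illustrative link parameters (a hypothetical 1\,W transmitter at $\lambda = 1\,\mu$m between two 1\,m telescopes over $4\times10^{16}$\,m, roughly the distance to $\alpha$\,Cen), not values advocated in this paper.

```python
H_PLANCK = 6.626e-34     # Planck constant [J s]
C_LIGHT = 299792458.0    # speed of light in vacuum [m/s]

def received_flux(P_t, D_t, D_r, lam, d, Q_R=1.22):
    # Eq. (2): received photon flux [photons/s] for transmitted power P_t [W],
    # telescope diameters D_t and D_r [m], wavelength lam [m], distance d [m].
    f = C_LIGHT / lam
    return P_t * D_t**2 * D_r**2 / (4.0 * H_PLANCK * f * Q_R**2 * lam**2 * d**2)

flux = received_flux(1.0, 1.0, 1.0, 1e-6, 4.0e16)  # ~5e-4 photons/s here
```

Note the quadratic gain in both aperture diameters: doubling the transmitter diameter quadruples the received flux.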
For a laserbeam, the narrower ``waist'' leads to an intensity pattern with a characteristic angular beam size given by \citep{duarte2015tunable,2015PASP..127..540T} $\theta_{\rm L} = Q_{\rm L} (2/\pi) \lambda/D$, or $\theta_{\rm L} / \theta_{\rm d} \approx 0.5$, which leads to a tightening of the beam. Note that a laserbeam shape is not maintained in systems where laser light is broadened with a beam expander and then focused with a telescope, and so we neglect this possibility here. The widely used approximation\footnote{Approximations and mistakes in the literature will be discussed in section~\ref{lit}.} of the diffraction-limited aperture, $\theta \approx \lambda / D$, leads to an overestimate of the received photon flux by $\approx49$\%. This can be verified by setting $Q_{\rm R}=1$ versus $Q_{\rm R}=1.22$ numerically. The considerable difference comes from the fact that $\theta$ enters the equation through the inverse square law. The precise value, $\theta = 1.2196\ldots\lambda / D$, comes from Fraunhofer diffraction, where this number is the first zero of the order-one Bessel function of the first kind, $J_1(x)$, divided by $\pi$. Several factors will constrain the achievable data rates. Regarding the loss of photons, the most important is interstellar extinction (section~\ref{loss_ext}), whose surviving fraction we denote as $0<S_{\rm E}<1$. For ground-based telescopes, atmospheric transmission allows for the reception of another fraction of photons, $0<S_{\rm A}<1$ (section~\ref{atmo}). The receiver efficiency is denoted as $0<\eta<1$. Technological constraints on the telescopes will be denoted as $Q$ (section~\ref{sec_tech_limit}). Other small factors, such as scintillation and scattering (section~\ref{scatter}), might play a role and can be included in calculations in a similar manner, but we neglect them here for brevity. 
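The numbers quoted above are easy to verify; the sketch below locates the first zero of $J_1$ from its power series (avoiding any external dependency) and recovers both $Q_{\rm R}\approx1.2197$ and the $\approx49$\% flux overestimate.

```python
import math

def J1(x, terms=30):
    # Power series of the order-one Bessel function of the first kind.
    return sum((-1)**m / (math.factorial(m) * math.factorial(m + 1))
               * (x / 2.0)**(2 * m + 1) for m in range(terms))

def first_zero(f, a=3.0, b=4.0, tol=1e-12):
    # Bisection: the first zero of J1 lies between 3 and 4.
    while b - a > tol:
        mid = 0.5 * (a + b)
        if f(a) * f(mid) <= 0.0:
            b = mid
        else:
            a = mid
    return 0.5 * (a + b)

x0 = first_zero(J1)        # ~3.8317
Q_R = x0 / math.pi         # ~1.2197, the Rayleigh factor
overestimate = Q_R**2 - 1  # ~0.49: flux overestimate when setting Q_R = 1
```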
The major noise sources are atmospheric sky background (section~\ref{sky}), zodiacal light (section~\ref{noise_zodi}), and others (sections~\ref{star},~\ref{instrumental_noise}). \subsection{Channel capacity for a coherent wave} \label{channel_capacity} We now define the theoretical maximum data rates based on frequency, signal and noise. For completeness, we will first discuss the optimum case where the number of photons received is sufficiently large to form a coherent wave. While this might not be realistic for most schemes of interstellar communication (section~\ref{results}), it is useful to define the classical upper bound. The maximum rate at which information can be transmitted over a communications channel is \citep{Shannon1949}: \begin{equation} \label{shannon} C=B \log_2 \left(1+\frac{S}{N}\right) \end{equation} where $C$ is the channel capacity (in bits per second), $B$ is the bandwidth of the channel (in Hertz), $S$ is the average signal power (in Watt) and $N$ is the average gaussian noise (in Watt). The bandwidth is the difference between the highest ($f_{\rm H}$) and lowest ($f_{\rm L}$) frequency in a continuous set of frequencies. To compare data rates for different frequencies, we can approximate bandwidth with frequency by taking a constant fractional bandwidth, $b$. With $f_{\rm C}$ as the center frequency, we can define $b=(f_{\rm H}-f_{\rm L})/f_{\rm C}$. With a value of e.g. $b=0.1$, we can approximate $B\approx c/\lambda$ (in Hz). Channel capacity is proportional to frequency and to the logarithm of S/N. These relations suggest that the frequency should be increased to the practical maximum, and that the signal power should merely be increased to overcome noise, with little benefit beyond. 
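A minimal numerical form of Eq.~\ref{shannon}, illustrating that capacity scales linearly with bandwidth but only logarithmically with signal power:

```python
import math

def shannon_capacity(B, S, N):
    # Eq. (Shannon): capacity [bits/s] for bandwidth B [Hz],
    # average signal power S and average noise power N (same units).
    return B * math.log2(1.0 + S / N)
```

For example, doubling $B$ doubles the capacity, while doubling $S$ at high SNR adds only about $B$ bits/s.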
If the received number of photons (after extinction and other losses) is sufficiently large to form a coherent wave, we can plug Eq.~\ref{eq2} (as the signal $S$ in photons per second) into Eq.~\ref{shannon}, and define the noise equally in photons per second: \begin{equation} \text{DSR}\textsubscript{c} = \frac{c}{\lambda} \log_2 \left(1+\frac{F_{\rm r}}{N_{{\rm \gamma}}}\right) \end{equation} where $N_{\rm \gamma}$ is the number of photons ($\gamma$) from noise per second (physical and instrumental). Then, the data signaling rate for the case of a continuous wave, $\text{DSR}\textsubscript{c}$, can be conveniently calculated in units of bits/s if $P_{\rm t}$ is in Watt. Intuitively, one would assume that at least one photon is required to transfer one bit of information, but this is incorrect: more than one bit can be encoded per photon. This is done with a modulation scheme to define an alphabet, often using a combination of polarization, phase, frequency and amplitude modulation \citep[e.g.,][]{1995ASPC...74..369J}. Each symbol of such an alphabet can encode several bits, scaling with the logarithm to base 2 of the number of members. This is called spectral efficiency and is measured in (bits/s)/Hz. Modulation rate, spectral efficiency and data rate can be increased for a constant bandwidth at the cost of an exponential rise in SNR or, for a constant noise level, an exponential increase in $P_{\rm t}$. For the extreme case of communication with negligible losses (e.g., $d \to 0$), Eq.~\ref{shannon} suggests the use of infinitely high bandwidth. However, infinite frequencies (and infinite capacity) are unphysical. In the classical sense, the limit comes from the fact that an increase in bandwidth also increases noise power (Shannon's power efficiency limit). A noiseless channel has infinite capacity: with Eq.~\ref{shannon} we have $C=B \, \log_2 \, (1+\infty)=\infty$. However, in reality noise is never zero because photons are quantized (section~\ref{instrumental_noise}). 
Then, the capacity of Shannon's limit becomes \citep[][p. 5-117]{chitodecommunication}: \begin{equation} \lim_{B \to \infty} C = \frac{S}{N_0} \log_2 e \approx 1.44\, \frac{S}{N_0}, \end{equation} where $N_0 = N/B$ is the noise power per unit bandwidth. In the framework of quantum state propagation, any transmission system can exchange only a limited (quantized) amount of information in a given time frame \citep{1978ITIT...24..657Y}, and is thus limited by physical resources \citep{1981PhRvD..23..287B}. Therefore, increasing frequency to infinity does not increase data rate to infinity \citep{2004PhRvL..92b7902G}. \begin{figure} \includegraphics[width=\linewidth]{figure_holevo} \caption{\label{figure_holevo}Capacity $C_{\rm th}$ in bits per photon as a function of the number of photons per mode, $M$. The larger the number of modes, the more bits can be encoded per photon; however, the ultimate bound (black) is logarithmic. When accounting for thermal noise per mode $N_{\rm M}$ (fractions in the plot), the limits are even tighter (red lines).} \end{figure} \subsection{The photon limited case} The limit for Eq.~\ref{shannon} only applies if the number of photons is sufficiently large to form a coherent wave. In many schemes for interstellar communication (section~\ref{results}), the data rate is photon-limited. Then, Holevo's bound \citep{holevo1973bounds} establishes the upper limit to the amount of information which can be transmitted with a quantum state. It applies independently of the frequency of the wave, and assumes that a number (quantity) of modes can be used per photon, which originate from the photons' dimensions, namely polarization, frequency and time of arrival. The inverse of this quantity, $M$, is the number of photons per mode. 
Then, as shown by \citet{2004PhRvL..92b7902G}, the ultimate quantum limit of bits per photon can be expressed as: \begin{equation} C_{\rm ult}=g(\eta M) \end{equation} where $\eta$ is the receiver efficiency and $g(x)=(1+x) \log_2 (1+x)-x \log_2 x$, evaluated\footnote{An introduction into quantum information theory and the usual notation can be found in \citet{2014PhRvA..89d2309T}.} at $x = \eta M$. In the presence of thermal noise, it was conjectured \citep{2004PhRvA..70c2315G} and recently proven \citep{2014NaPho...8..796G} that the capacity is: \begin{equation} \label{thermal_holevo} C_{\rm th}=g(\eta M + (1-\eta) N_{\rm M}) - g((1-\eta)N_{\rm M}) \end{equation} where $N_{\rm M}$ is the average number of noise photons per mode. It is an open question whether the maximum can be fully, or only approximately, achieved in practice \citep{2012arXiv1202.0518W,2012arXiv1202.0533G}. The achievable capacity is shown for a wide range of modes in Figure~\ref{figure_holevo}. It is clear that even large numbers of modes and small fractional noise increase the number of bits per photon only within a factor of a few. We can multiply Eq.~\ref{eq2} and Eq.~\ref{thermal_holevo} to calculate the data rate for the photon-limited case of two communicating telescopes: \begin{equation} \label{bits_holevo} \text{DSR}\textsubscript{$\gamma$} = C_{\rm th} F_{\rm r} \end{equation} where \text{DSR}\textsubscript{$\gamma$} is in units of bits/s when $P_{\rm t}$ is in Watt. It assumes that the losses caused by $\eta, d, S_{\rm E}, S_{\rm A}$ are known and accounted for in the encoding scheme. Variations and uncertainties in the number of received photons can be treated as an additional noise source, but optimal encoding schemes will be neglected in this paper. 
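The function $g$ and the thermal capacity of Eq.~\ref{thermal_holevo} are simple to evaluate numerically; for instance, $g(1) = 2$ bits per photon at one photon per mode and perfect efficiency:

```python
import math

def g(x):
    # g(x) = (1 + x) log2(1 + x) - x log2(x), continuously extended by g(0) = 0.
    if x == 0.0:
        return 0.0
    return (1.0 + x) * math.log2(1.0 + x) - x * math.log2(x)

def holevo_thermal(eta, M, N_M):
    # Thermal Holevo capacity: bits per photon for receiver efficiency eta,
    # M photons per mode and N_M average thermal noise photons per mode.
    return g(eta * M + (1.0 - eta) * N_M) - g((1.0 - eta) * N_M)
```

At perfect efficiency ($\eta = 1$) the noise term cancels; any thermal noise at $\eta < 1$ strictly reduces the bits per photon.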
\begin{figure} \includegraphics[width=\linewidth]{extinction} \caption{\label{extinction}Fraction of photons that survives interstellar extinction ($S_{\rm E}$), as a function of wavelength $\lambda$, shown for different distances. The shaded area represents the Lyman continuum ($\approx50-91.2$\,nm) which is opaque even for the closest stars due to the ionization of neutral hydrogen \citep{1959PASP...71..324A,2000ApJ...542..914W,2007eua..book.....B}.} \end{figure} \section{Signal losses} \subsection{Loss of photons from extinction} \label{loss_ext} From the IR to the UV, extinction is caused by the scattering of radiation by dust, while at wavelengths shorter than the Lyman limit (91.2\,nm), extinction is dominated by photo-ionisation of atoms \citep{1996Ap&SS.236..285R}. For short interstellar distances, extinction in the optical is small, $\approx0.1$\,mag within 100\,pc, $0.05-0.15$\,mag out to 200\,pc \citep{1998A&A...340..543V}. It is much larger towards the galactic center, with $E(B-V)\approx3$ and $A(V)>44$\,mag at 550\,nm \citep{2008A&A...488..549P,2011ApJ...737...73F}, an attenuation by a factor of $10^{-18}$. Another prominent feature in measured extinction curves is a ``bump'' in the UV at 217.5\,nm \citep{1965ApJ...142.1683S,1969ApJ...157L.125S}, where extinction is about an order of magnitude higher. It is attributed to organic carbon and amorphous silicates present in the grains \citep{2005Sci...307..244B}. Other features are the water ice absorption at $3.1\,\mu$m and the 10 and $18\,\mu$m silicate absorption. While higher frequencies have higher channel capacities for coherent waves, and allow for tighter beams (at a given telescope size), they also generally suffer from higher extinction between UV and IR. To analyze this trade-off (section~\ref{sec_ext}), we use the synthetic extinction curve presented in \citet{2003ARA&A..41..241D,2003ApJ...598.1017D,2003ApJ...598.1026D} which covers wavelengths from 1\,cm (30\,GHz) to 1\,\r{A} (12.4\,keV). 
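Extinction in magnitudes is converted to the surviving photon fraction via the standard relation $S_{\rm E} = 10^{-0.4 A}$; the $A(V) > 44$\,mag figure quoted above indeed gives an attenuation of order $10^{-18}$:

```python
def surviving_fraction(A_mag):
    # Standard magnitude-to-flux conversion: S_E = 10^(-0.4 * A).
    return 10.0 ** (-0.4 * A_mag)

nearby = surviving_fraction(0.1)          # ~0.91: optical within 100 pc
galactic_center = surviving_fraction(44)  # ~2.5e-18, i.e. of order 1e-18
```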
We scale this curve for different distances using $A(V)=1.8$\,mag per kpc in the galactic plane \citep{2003dge..conf.....W}, equivalent to $E(B-V)=0.28$\,mag per kpc \citep{2002A&A...389..871D}. For the highest extinction values towards the galactic center, where $E(B-V)\approx3$, we use measurements for the optical and IR \citep{2011ApJ...737...73F} and the UV \citep{2004ApJ...616..912V,2016ApJ...828...69M} and interpolate in between individual data points with a spline. This extinction curve covers the wavelength range from $0.1-27\,\mu$m. While extinction is typically given in astronomical magnitudes, we convert it to the fraction of photons received over distance ($S_{\rm E}$), and show examples in Figure~\ref{extinction}. \begin{figure*} \includegraphics[width=.5\linewidth]{figure_atmo} \includegraphics[width=.5\linewidth]{figure_atmo_zoom} \caption{\label{figure_atmo}Left: Surviving fraction of photons after atmospheric transmission ($S_{\rm A}$) as a function of wavelength. Shortward of 291\,nm (UV), transmission remains at zero. Data is for Mauna Kea in best (20-percentile) conditions. Right: Zoom into the IR with fluctuations from 0.2 to unity transmission with typical line widths of $2\,{\rm \r{A}}=0.2$\,nm.} \end{figure*} \subsection{Loss of photons from atmospheric transmission} \label{atmo} The earth is surrounded by an atmosphere \citep{1842RSPT..132..225F}, which is essential for almost all life on this planet \citep{2007Sci...315...92C}, but of the greatest annoyance for almost all astronomers \citep{1950PASP...62..133K}. For a space telescope there is no loss of photons from a surrounding cloud of gas, dust and water, so that the surviving fraction of photons is $S_{\rm A}=1$. On earth, atmospheric transmission depends on the wavelength and varying characteristics, such as the content of water vapor in the air. 
As an example, we use a transmission curve $S_{\rm A}(\lambda)$ for Mauna Kea with a water vapor column of 1\,mm, which represents excellent observing conditions, occurring in the best 20\% of nights of an average year \citep{1992nstc.rept.....L,2009JGRD..11418105G}. This curve covers the wavelength range of 200\,nm--10\,cm (3\,GHz). Figure~\ref{figure_atmo} shows the part up to 10\,mm (30\,GHz), after which transmission reaches near unity. Transmission is zero for all practical purposes for wavelengths below 291\,nm, above 20\,m, and between $30-200$\,$\mu$m. In the optical and infrared, transmission is highly variable due to numerous absorption lines from water, carbon dioxide, ozone and other gases. When communicating with photons in a narrow (nm) bandwidth, as is common with lasers, the exact wavelength must be chosen carefully, because transmission fluctuates rapidly. For example, $S_{\rm A}=0.98$ at $\lambda=934.36$\,nm, but $S_{\rm A}=0.22$ at $\lambda=934.52$\,nm, a spectral distance of only 0.16\,nm. Under good atmospheric conditions, transmission can be close to unity for many wavelengths in the optical and near- to mid-infrared. For brevity, we neglect other atmospheric effects such as scattering and turbulence \citep[``seeing'',][]{1995ApOpt..34.5461C}, which is a variation of the optical refractive index that enlarges the point spread function of the telescope, if not corrected for with adaptive optics \citep{1998aoat.book.....H}. \begin{figure} \includegraphics[width=\linewidth]{figure_lambda_D} \caption{\label{figure_lambda_D}Technologically achieved resolution for space telescopes (earth 2017) as a function of wavelength. Focusing high-energy waves is increasingly difficult.} \end{figure} \subsection{Technological limits of telescopes} \label{sec_tech_limit} The angular beam size is limited to $Q_{\rm R}\geq1.22$ (Rayleigh limit), or $Q_{\rm L}\geq1$ for a laserbeam. Technology may place a stricter limit. 
We have examined the angular resolution of current (earth 2017) space telescopes for different wavelengths. As can be seen in Figure~\ref{figure_lambda_D}, $Q_{\rm real}/Q_{\rm R}$ grows exponentially with decreasing wavelength for $\lambda < 300$\,nm, indicating the technological difficulty of focusing wavelengths in the UV and shorter. For diffraction-limited telescope mirrors, the polished surfaces need to have surface smoothness $< \lambda/4$ \citep{1935lett.book.....D}, which makes the production of telescopes for UV, X-ray and $\gamma$-ray increasingly difficult. Additionally, the refractive index of all known materials is close to 1 at high (keV) energies, making it difficult to focus photons efficiently and avoid absorption \citep{2008PhyU...51...57A}. With today's technology, resolution in the milli-arcsec regime is possible at optical wavelengths, but X-rays are limited to angular resolutions of 20\,arcsec \citep{2014SPIE.9151E..2WS}, a difference of 4 orders of magnitude. For example, the Swift X-Ray satellite has an angular resolution of 18 arcsec at $\lambda=1$\,nm (1.5\,keV) from a 30\,cm aperture \citep{2005SSRv..120..165B}, while the diffraction limit would be $1.22\lambda/D=8\times10^{-4}$\,arcsec, so that $Q_{\rm R}/Q_{\rm real} = 4\times10^{-5}$. Technology is believed to eventually achieve sub-arcsec resolution at X-rays, but at the expense of large designs, with focal lengths of $10^5$\,km \citep{2004SPIE.5168..411G}. \subsection{Technological limits of the receiver} \label{tech_limit} Photon energy depends on wavelength, $E=hc/ \lambda$, which should make it easier to detect higher energy photons in theory. In practice, single photon detection with high quantum efficiency is possible throughout a wide range of wavelengths, from X-Rays \citep{2013MedPh..40d1913T,2015MedPh..42..491T} to microwaves \citep{2012PhRvB..86q4506P,2015arXiv151206939W}. Interestingly, even the human eye can detect single photons in the visible light \citep{2016NatCo...712172T}. 
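The Swift comparison in the previous subsection can be reproduced directly; the sketch evaluates the Rayleigh limit for a 30\,cm aperture at $\lambda = 1$\,nm against the achieved 18\,arcsec resolution.

```python
import math

ARCSEC_PER_RAD = 180.0 * 3600.0 / math.pi   # ~206265

def diffraction_limit_arcsec(lam, D, Q_R=1.22):
    # Rayleigh diffraction limit Q_R * lambda / D, converted to arcsec.
    return Q_R * lam / D * ARCSEC_PER_RAD

swift_ideal = diffraction_limit_arcsec(1e-9, 0.3)  # ~8e-4 arcsec
swift_ratio = swift_ideal / 18.0                   # ~4e-5: ideal vs. achieved
```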
We will neglect a possible wavelength dependence of the quantum efficiency of photon detectors in this paper. This is supported by the much stronger influence from technological limits in focusing beams (section~\ref{sec_tech_limit}), and the influence of interstellar extinction on photon throughput (section~\ref{loss_ext}), so that detector differences (of a few percent) will be negligible for most practical cases. \begin{figure} \includegraphics[width=\linewidth]{figure_atmo_glow} \caption{\label{figure_atmo_glow}Atmospheric sky background on Mauna Kea as a function of wavelength.} \end{figure} \section{Noise} \label{noise} Noise sources can be astrophysical (scattering of the signal, background light) or instrumental (shot noise and read noise). For ground-based telescopes, the total noise has been measured (section~\ref{sky}); for space-based telescopes, it will be discussed in sections 4.2--4.6. \subsection{Atmospheric sky background} \label{sky} For a telescope located on earth, the total sky background which enters as noise into the receiver can be measured by observing a (maximally) empty sky area. Naturally, it includes all sources: terrestrial, solar system, and interstellar. Precise raw sky emission data is available for many observatory sites, and as in section~\ref{atmo} we use Mauna Kea as an example. The measurements are for the sky background only and do not include the emission from a telescope or sensor (which has been subtracted out). The data were generated from a synthetic sky transmission curve \citep{1992nstc.rept.....L} subtracted from unity. This gives an emissivity which is then multiplied by a blackbody function with a temperature of 273\,K \citep{2009JGRD..11418105G}. The authors added emission spectra based on observations from Mauna Kea, and the dark sky continuum mostly from zodiacal light. Finally, the curve has been scaled to produce 18.2\,mag\,arcsec$^{-2}$ in the H band, as observed on Mauna Kea by \citet{1993PASP..105..940M}. 
The resolution of the final data product is 0.1\,nm\footnote{Data files from \url{http://www.gemini.edu/sciops/telescopes-and-sites/observing-condition-constraints/optical-sky-background}}. These values are in agreement with measurements from the darkest observatory sites on earth, which have an optical sky background minimum of $22$\,mag\,arcsec$^{-2}$ \citep{2008ASPC..400..152S}, corresponding to an optical flux of a few $\gamma$\,arcsec$^{-2}$\,s$^{-1}$ from unresolved sky sources, air glow, and zodiacal light. The sky background at Mauna Kea is shown in Figure~\ref{figure_atmo_glow} and covers the band from 300\,nm to 30\,$\mu$m. Similarly to the transmission (section~\ref{atmo}), background levels vary by up to 3 orders of magnitude over a few nm. Generally, the flux is $\approx1\,\gamma$\,s$^{-1}$\,nm$^{-1}$\,arcsec$^{-2}$\,m$^{-2}$ in the optical and NIR, with a steep increase for $\lambda>2.5$\,$\mu$m, and reaches $10^7\,\gamma$\,s$^{-1}$\,nm$^{-1}$\,arcsec$^{-2}$\,m$^{-2}$ at 10\,$\mu$m. This indicates that earth-based interstellar communication is favorable for $\lambda<2.5$\,$\mu$m. For telescopes on other planets, we would need to know precisely the exoplanet atmospheres, exozodiacal dust, etc., which may result in a different noise structure; a detailed discussion is beyond the scope of this paper. \begin{figure} \includegraphics[width=\linewidth]{zodi2} \caption{\label{zodi2}All-sky map at 12\,$\mu$m taken by the COBE satellite \citep{1992ApJ...397..420B,1998ApJ...508...44K}. The horizontal line is the galactic plane, the S-shaped band represents the solar system ecliptic, where zodiacal light is $>100\times$ higher than near the ecliptic poles \citep[blue colors,][]{1980A&A....84..277L}.} \end{figure} \subsection{Background light from zodiacal light} \label{noise_zodi} Space telescopes are not affected by the strong atmospheric light. However, they still collect undesired photons. 
The strongest source is sunlight which is scattered off dust grains in the solar system, an effect called zodiacal light. In the ecliptic plane, it can be as bright as $1.5\times10^{-6}$\,ergs\,s$^{-1}$cm$^{-2}$\r{A}$^{-1}$. It is faintest at heliocentric longitudes 130$^{\circ}-170^{\circ}$ away from the sun because of larger scattering angles, and at high ecliptic latitudes $>30^{\circ}$ because of the minimum in the interplanetary dust column density, at levels $<10^{-7}$\,ergs\,s$^{-1}$cm$^{-2}$\r{A}$^{-1}$ \citep{2002ApJ...571...56B}. The scattering strength only weakly depends on wavelength and closely resembles the solar spectrum between 150\,nm and 10\,$\mu$m \citep{1981A&A...103..177L,1995Icar..115..199M}. These levels contribute a flux of order 3\,$\gamma$\,nm$^{-1}$\,arcsec$^{-2}$\,m$^{-2}$ at 1\,$\mu$m in the ecliptic, and decrease to 0.1 (0.03) photons at latitude 45$^{\circ}$ (90$^{\circ}$). We show an all-sky map in Figure~\ref{zodi2}, which makes it clear that the source's location on the sky is important, in addition to the wavelength. \begin{figure} \includegraphics[width=\linewidth]{figure_background_sources} \caption{\label{figure_background_sources}Intensity of the extragalactic background after removal of the zodiacal light foreground (which is strongest in the visible and IR). The peak in the optical is from nuclear fusion, the peak in the FIR from re-radiated dust. The UV/soft X-ray background at a wavelength of 10--100\,nm is unknown. Data from \citet{1998ASPC..139...17L,2016RSOS....350555C,2016ApJ...827....6S}.} \end{figure} \subsection{Background light from galactic and extragalactic sources} \label{noise_atmo} The Galactic light comes from stars, starlight scattered from interstellar dust, and line emission from the warm interstellar medium. Its levels are of order $10^{-9}$\,ergs\,s$^{-1}$cm$^{-2}$\r{A}$^{-1}$ between 200\,nm and 1\,$\mu$m. 
The mean flux of the optical extragalactic background light has been measured as $4.0\pm2.5$, $2.7\pm1.4$ and $2.2\pm1.0 \times 10^{-9}$\,ergs\,s$^{-1}$cm$^{-2}$\r{A}$^{-1}$ at wavelengths of 300\,nm, 550\,nm and 800\,nm \citep{2002ApJ...571...56B}. Compared to the zodiacal light, these sources are weaker by two orders of magnitude and are only relevant if the source is near the ecliptic poles, where zodi is smallest; and for wavelengths outside the zodi-band of $\approx0.3-300$\,$\mu$m. \subsection{Scintillation and scattering of photons} \label{scatter} Extinction causes not only a loss of photons from absorption, but also scattering. The latter reduces the ``prompt'' pulse height and produces an exponential tail \citep{2000ASPC..213..545H}. Scatter broadening is well known from pulsars and magnetars. As an extreme example, magnetars close (0.1\,pc) to the galactic center with dispersion measures $\text{DM}=1778$\,pc\,cm$^{-3}$ have their pulses broadened to $1.423\pm0.32$\,s at 1.2\,GHz, and $0.2\pm0.07$\,ms at 18.95\,GHz, following a power law with index $-2.8$ \citep{2014ApJ...780L...3S}. A single pulse which is broadened to a width of one second results in a very low data rate of order bit/s. Extrapolating with the power law indicates that nanosecond pulse widths ($10^{-9}$\,s) can be expected for frequencies $>500$\,GHz ($\lambda<0.6$\,mm), and the broadening should become shorter than the wavelength at $\lambda \approx \mu$m. For these higher frequencies, the amplitude level of the scattering tail, and its length, is unknown in practice. Limits from the Crab pulsar show no detectable scattering tail at UV and optical wavelengths for an optical millisecond pulse width and $E(B-V)=0.52$ \citep{2000ApJ...537..861S}, consistent with the power law scaling from radio observations. These results indicate that the impact of extinction is mainly on the absorption for frequencies $>500$\,GHz ($<0.6$\,mm), and not on pulse broadening. 
Therefore, we neglect this effect in our calculations, but suggest further research in this area. \subsection{Background light from the target star and celestial bodies} \label{star} On the direct path, even modest-sized telescopes receive a relevant number of photons from nearby stars. For example, $5\times10^{10}\,\gamma$\,sec$^{-1}$\,m$^{-2}$ from $\alpha$\,Cen~A \citep[distance 1.3 pc,][$L= 1.522 L_{\odot}$]{2016A&A...594A.107K}. From Proxima Centauri ($L=1.38\times10^{-4} L_{\odot}$), it is $4.25\times10^{6}\,\gamma$\,sec$^{-1}$\,m$^{-2}$, or $3.5\times10^9$ ($3.5\times10^{5}$)\,$\gamma$\,sec$^{-1}$\,m$^{-2}$ from a sun-like star at a distance of 10 LY (1000 LY). A coronagraph or occulter could be used to block a significant part of this flux \citep[$10^{-9}$,][]{2006ApJS..167...81G,2015RAA....15..453L}. Additionally, a filter with a small band-pass, e.g. 1\,nm, would reduce the flux further by $>10^3$. A good angular resolution of the receiving telescope would be helpful to separate the transmitter from the nearby target star. For comparison, a probe at a distance of 1\,au from the star $\alpha$\,Cen~A would appear at an angular separation of 1.42\,arcsec as seen from earth, resolvable even with small telescopes, assuming sufficient contrast. The flux levels from reflected exoplanet light and exozodiacal dust are many orders of magnitude fainter than the flux in the home solar system, and can thus be neglected. \subsection{Instrumental noise} \label{instrumental_noise} Apart from a loss of photons from imperfect reflection or transmission in the receiver, the conversion from photons to electrons (e.g. with CCDs or photomultiplier tubes, which are analogue devices) causes a small but nonzero amount of noise. Even a perfect instrument will produce some noise. Fundamentally, this originates from the fact that photons and electrons are quantized \citep{1905AnP...322..132E}, so that only a finite number can be counted in a given time. 
This phenomenon is the shot noise \citep{1918AnP...362..541S}, and is correlated with the brightness of the target. \begin{figure*} \includegraphics[width=.5\linewidth]{proxima_earth} \includegraphics[width=.5\linewidth]{proxima_space} \caption{\label{proxima}Data rate to a probe at Proxima, as a function of wavelength, for the listed parameters. Left: Receiver on earth peaks for $\lambda=429.6$\,nm. Right: Receiver in space peaks at 300\,nm. See text for discussion.} \end{figure*} \section{Results} \label{results} \subsection{A Starshot-like probe at $\alpha$\,Centauri} We will now calculate exemplary quantitative data rates. Our default example maximizes the data rate for a probe at $\alpha$\,Cen ($d=1.3$\,pc); we then examine the influence of the variables presented in the previous section. Our standard example probe uses a telescope with a circular aperture $D_{\rm t}=1$\,m, through which it transmits with a power of $P=1,000$\,W. The telescope quality $Q_{\rm R}\approx1.22$ for $\lambda>300$\,nm is achievable with current (earth 2017) technology, and the transmitter is positioned in space. The hypothetical receiver has $D_{\rm r}=39$\,m, comparable to the upcoming generation of ``extremely large telescopes'' (E-ELTs). It must be located in the southern hemisphere, e.g. in Chile, because $\alpha$\,Cen is not observable from Mauna Kea's northern latitude, which served as an example in previous sections. The total receiver efficiency is $\eta=0.5$. It uses $N=10^5$ modes, which could be realized with a $R=100,000$ spectrograph, $10^5$ time slots, or a combination of both. The atmospheric sky background represents very good (20-percentile) conditions as described in section~\ref{sky}. From the transmitted $P=1,000$\,W ($2.2\times10^{21}$\,$\gamma$\,s$^{-1}$ at $\lambda=429.6$\,nm), the theoretical flux near earth after free-space loss is $1,860$\,$\gamma$\,s$^{-1}$ in the receiver aperture. 
Interstellar extinction for this wavelength and distance is $\approx0.3$\%, causing a loss of 6 photons, or a reduction to $1,854$\,$\gamma$\,s$^{-1}$. Sky transparency is 0.74, so that $1,369$\,$\gamma$\,s$^{-1}$ survive. This is the signal before receiver efficiency. Regarding the total sky background, we assume that the filter width at the receiver has a bandpass of 1\,nm, and the on-sky resolution is 1\,arcsec. We neglect the photon flux from $\alpha$\,Cen as it can be effectively suppressed (section~\ref{star}), and is then negligible compared to the atmospheric background of 0.6\,$\gamma$\,nm$^{-1}$\,s$^{-1}$\,arcsec$^{-2}$\,m$^{-2}$ (section~\ref{sky}), resulting in 702 noise photons per second in the telescope. We will discuss the case of blended sources (probe and star) in section~\ref{sec:blend}. We also neglect the noise flux from the receiver itself. Following Eq.~\ref{thermal_holevo}, the Holevo bound with our noise is then 1.81 bits per photon. This includes the receiver efficiency of $\eta=0.5$. We can now multiply the received photons by the encoding limit and estimate $1,369$\,$\gamma$\,s$^{-1}$ at $1.81$\,bits\,$\gamma^{-1}$ $=2480$\,bits/s. This is also the peak value at $\lambda=429.6$\,nm in Figure~\ref{proxima} (left), indicating that any other wavelength decreases the effective data rate. In practice, this is an upper bound; realistic data rates including sensor noise, margin for error, etc. will be smaller by a factor of a few. The cut-off for $\lambda<290$\,nm comes from the atmospheric intransparency (Figure~\ref{figure_atmo}). The decline in data rate towards longer wavelengths comes from two effects: the decrease in telescope focusing (section~\ref{sec_tech_limit}), and increasing atmospheric noise (Figure~\ref{figure_atmo_glow}). Individual atmospheric absorption lines, which should be avoided for communication, can be clearly seen (Figures~\ref{figure_atmo},~\ref{figure_atmo_glow}). 
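The photon budget above can be reproduced in a few lines. This is a sketch, not the \texttt{PyCom} code itself: the half-angle divergence $Q\lambda/D_{\rm t}$ (i.e. a beam diameter of $2Q\lambda d/D_{\rm t}$ at the receiver) is our assumed convention, chosen because it reproduces the quoted $1{,}860\,\gamma$\,s$^{-1}$, and the value of 1.81 bits per photon is taken from the text rather than recomputed:

```python
h, c = 6.62607e-34, 2.99792458e8   # Planck constant [J s], speed of light [m/s]
pc = 3.0857e16                     # parsec [m]

P, lam = 1000.0, 429.6e-9          # transmitted power [W], wavelength [m]
D_t, D_r = 1.0, 39.0               # transmitter / receiver apertures [m]
Q, d = 1.22, 1.3 * pc              # beam quality, distance to alpha Cen [m]

n_tx = P * lam / (h * c)                     # ~2.2e21 photons/s transmitted
frac = (D_r * D_t / (2 * Q * lam * d)) ** 2  # fraction entering the aperture
n_cap = n_tx * frac                          # ~1,860 photons/s at the receiver
n_sig = n_cap * (1 - 0.003) * 0.74           # extinction and sky transparency
rate = n_sig * 1.81                          # ~2,480 bits/s at 1.81 bits/photon
```
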
\begin{figure} \includegraphics[width=\linewidth]{figure_holevo_noise} \caption{\label{hnoise}Capacity $C_{\rm th}$ (in bits/photon) is a logarithmic function of thermal noise $N_{\rm M}$ (in photons per mode).} \end{figure} \subsection{Space-based receiver} For the space-based analysis, we restrict the receiver size to $D_{\rm r}=10$\,m to make it more realistic for the current technological level. The optimal wavelength is now $\lambda \approx 300$\,nm, limited by the telescope quality (Figure~\ref{figure_lambda_D}). Noise levels are dominated by zodiacal light; $\alpha$\,Cen is $42^{\circ}$ from the ecliptic, resulting in noise levels of $\approx0.1$\,$\gamma$\,nm$^{-1}$\,s$^{-1}$\,arcsec$^{-2}$\,m$^{-2}$ and a higher capacity of 2.83 bits per photon. The signal decreases to 174\,$\gamma$\,s$^{-1}$, for a maximum data rate of 494\,bits/s. \begin{figure*} \includegraphics[width=.5279\linewidth]{figure_grid_photons_space} \includegraphics[width=.4721\linewidth]{figure_grid_photons_earth} \caption{\label{fig_photons}Best frequency (brightest color) as a function of distance. The influence of free space loss has been subtracted out as it would have overpowered all other parameters. Left: Space-based telescope. Right: earth-based, including atmospheric transmission. The optimal wavelength is close to $0.3$\,$\mu$m for distances $<200$\,pc and increases to the mid-IR for larger distances.} \end{figure*} \subsection{Power} To first approximation, the data rate is a linear function of power, $\text{DSR}\textsubscript{$\gamma$} \propto P$. This holds for constant capacity $C_{\rm th}$, which however depends on the ratio of signal to noise, and thus decreases for decreasing signal. The effect is small for $S \gg N$ but becomes very considerable for $N>S$. As shown in Figure~\ref{hnoise}, a capacity $C_{\rm th}=1$\,bit per photon is possible for $N_{\rm M} \leq 0.13$ (noise photons per mode) in our standard example using $M=10^{5}$ modes and receiver efficiency $\eta=0.5$. 
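The behavior in Figure~\ref{hnoise} can be sketched with the textbook Holevo expression for a lossy bosonic channel with additive thermal noise. This is a generic reconstruction under stated assumptions; the exact form and normalization of Eq.~\ref{thermal_holevo} may differ:

```python
import math

def g(x):
    """Entropy (in bits) of a thermal state with mean photon number x."""
    return 0.0 if x <= 0 else (1 + x) * math.log2(1 + x) - x * math.log2(x)

def holevo_bits_per_mode(S, N_M, eta=0.5):
    """Generic Holevo-type bound per mode, for signal S and thermal noise
    N_M (both in mean photons per mode) and receiver efficiency eta.
    The placement of eta and N_M here is an assumption."""
    return g(eta * S + N_M) - g(N_M)

# Capacity per mode falls roughly logarithmically as noise per mode grows.
noise_grid = (1e-6, 1e-3, 0.13, 10.0)
caps = [holevo_bits_per_mode(S=0.01, N_M=n) for n in noise_grid]
```

With these definitions the capacity decreases monotonically, and roughly logarithmically, with $N_{\rm M}$, as in the figure.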
Capacity is a logarithmic function of SNR, and the sweet spot appears between 0.1--5 bits/photon, which is achievable for $10^{-6}<N_{\rm M}<10$ assuming $10^{5}$ modes and $\eta=0.5$. \subsection{Transmitter size} For a circular transmitter aperture, the data rate scales as $\text{DSR}\textsubscript{$\gamma$} \propto D_{\rm t}^2$ assuming no technological limitations, which we identify as realistic for current (earth 2017) technology at $\lambda>300$\,nm. Increasing the dish size to focus optical lasers is thus very beneficial for the data rate, and it is recommended to make the aperture as large and of as high quality as possible. \subsection{Receiver size} For a circular receiver aperture, the data rate scales as $\text{DSR}\textsubscript{$\gamma$} \propto D_{\rm r}^2$, and here we relax the technological limitations: imperfect focusing will still collect all signal photons, but collects more noise due to the larger beam width; the total effect is, however, much smaller. For a real application, this additional noise factor can be modeled. \subsection{Interstellar Extinction} \label{sec_ext} Extinction is largely irrelevant for the shortest interstellar distances, $<1$\% in the optical to $\alpha$\,Cen. Outside of the Lyman continuum ($\approx50<\lambda<91.2$\,nm), any frequency is equally suitable. The situation changes significantly for distances $>200$\,pc, where optical extinction is $>0.5$\,mag (compare Figure~\ref{extinction}). To examine the optimal choice of wavelength versus distance due to extinction, we have plotted the normalized photon rate in Figure~\ref{fig_photons}, with the free-space loss subtracted out. The optimal wavelength for space-based communication is limited by technology at 300\,nm out to 200\,pc, and increases to 3\,$\mu$m for the longest paths in the galaxy. For an earth-based receiver, the lower limit is 420\,nm due to limited atmospheric transmission, and special care must be taken not to select a narrow absorption line. 
In this calculation, we assumed uniform extinction of $A(V)=1.8$\,mag per kpc in the galactic plane \citep{2003dge..conf.....W}. In reality, however, the situation is much more complex. Extinction in the galactic plane can vary on small scales (because of individual molecular clouds), and on large (degree) scales \citep{2016ApJ...821...78S}. Galactic communication with maximized data rates will require a precise measurement along each line of sight (communication path) to choose the best wavelength. If a civilization, or a club of civilizations, prefers to choose a single frequency for all distances, it will be at $\approx3\,\mu$m. Then, long distance communication is near optimal (it would be prohibitive at shorter wavelengths), while data rates for short-distance communication are smaller by a factor of a few compared to individual optima. \begin{figure*} \includegraphics[width=\linewidth]{pluto_resized} \caption{\label{pluto}Pluto image taken by ``New Horizons'' with a compressed (lossy) data volume of $\approx400$\,kbits for the shown quality.} \end{figure*} \section{Discussion} \subsection{Assessment of achievable data rates} Achievable data rates are of order kbits/s per kW for a meter-sized probe at $\alpha$\,Cen. For comparison, the NASA probe ``New Horizons'' achieved a data rate of 1\,kbits/s at $P=13$\,W from Pluto, and transmitted a total of 50\,Gbits ($5\times10^{10}$\,bits, buffered) over the course of 15 months. The transfer of an image as shown in Figure~\ref{pluto}, with a compressed volume of $\approx400$\,kbits, takes 7\,min at 1\,kbits/s for $P=1$\,kW at $\alpha$\,Cen, or days (to weeks with problematic SNR) at $P=1$\,W, which might be regarded as acceptable. Photovoltaic energy is available at a level of kW\,m$^{-2}$ at au distance from the star, so that a probe in orbit \citep[perhaps decelerated by stellar photons,][]{2017ApJ...835L..32H,2017arXiv170403871H} has no power issues for transmissions. 
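The photovoltaic budget, both for an orbiting probe and for the fly-by discussed below, can be checked with a short integral over a straight trajectory. This is a rough sketch with assumed values (solar-constant flux scale at 1\,au, closest approach $b=1$\,au, speed $v=0.2c$), not a mission calculation:

```python
import math

au = 1.496e11   # astronomical unit [m]
c = 2.998e8     # speed of light [m/s]
S0 = 1361.0     # assumed stellar flux at 1 au [W/m^2] (solar value)
v = 0.2 * c     # fly-by speed
b = 1.0 * au    # closest approach

# Straight line: r(t)^2 = b^2 + (v t)^2, flux(t) = S0 * (au / r(t))^2.
# Integrating over all t gives the closed form pi * S0 * au^2 / (v * b).
E = math.pi * S0 * au**2 / (v * b)   # collected energy per unit area [J/m^2]
E_kwh = E / 3.6e6                    # ~3 kWh/m^2, i.e. "of order kWh/m^2"
```
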
A fly-by probe at 20\%\,c, however, traverses the au distance in 17 minutes, translating into a data volume of order Mbits\,m$^{-2}$ if the whole time were used for transmission (which is unrealistic, given that the target exoplanet is to be observed). Available photovoltaic energy decreases with the inverse square of the distance from the star, and by integrating over an exemplary trajectory with a closest encounter of 1\,au to the star we can estimate the total collected photovoltaic energy during the fly-by to be of order kWh\,m$^{-2}$. With this energy, perhaps stored on-board and used for later transmission, the probe can send a few Mbits\,m$^{-2}$, i.e. a few high-quality images (Figure~\ref{pluto}). Alternative options would require an onboard energy source. \subsection{Onboard storage requirements} The data volume during the fly-by governs the size of the transmission buffer. ``New Horizons'' carried a total of 16\,GB, which contained all the data it recorded during the fly-by, and which was transferred afterwards. The same scheme could be used for a fly-by at $\alpha$\,Cen. If the probe starts to transmit after the observations, and transmits 1\,bit/s (1\,kbit/s) for a total of 10 years remaining lifetime, it can transfer (and needs to store) a total of 31\,Mbits (31\,Gbits). Both are low numbers and can be stored with current (Earth 2017) technology at millimeter sizes and milligram masses. \subsection{Earth's rotation} A space-based receiver, for example at a Lagrangian point, can be used near-continuously. Earth-based telescopes, however, suffer from Earth's rotation (daylight) and weather. When ``New Horizons'' encountered Pluto, the entire NASA Deep Space Network was online to ensure there were no breaks in reception. If the communication scheme with $\alpha$\,Cen is the same, a large number of telescopes will be required. We can, however, replace (expensive) 39\,m E-ELTs with a number of smaller telescopes. 
To replace one E-ELT in terms of aperture, $\approx1,500$\,telescopes with $d=1$\,m are required, or 24 telescopes with $d=8$\,m. \subsection{Laser line width, orbital motion, beam sizes} \label{orbital_motion} Transmitter and receiver are in relative motion, which results in a change of path length, as already noted by \citet{2013arXiv1305.4684M,2015AcAau.107...20M}. If the sender (receiver) is located on a planet which orbits a star, the Doppler shift will cause a shift in the sender (receiver) frequency. For example, earth's equatorial speed is 465.1\,m/s, corresponding to a relative frequency shift of $1.55\times10^{-6}$. This is an order of magnitude smaller than the resolution of current spectrographs ($R=100,000$), but larger than typical high-power laser line-widths (350\,MHz, or $6\times10^{-7}$, \citet{1999ApOpt..38.6347D}) by a factor of a few. Laser line widths in the mHz range, although at low ($10^{-12}$\,W) power, have been demonstrated \citep{2009PhRvL.102p3601M,2012NaPho...6..687K}. For such small line widths, the shift would need to be modeled and compensated. Regarding noise per mode (atmospheric, zodiacal etc.), very narrow line widths are preferred. Narrow line widths may give rise to additional noise sources, namely instrumental frequency shifts in the sender and/or receiver, or a change in the interstellar scattering geometry, which may also result in non-Gaussian noise per sub-channel. For the closest stars within a few pc, large optical telescopes (10--100\,m) have diffraction-limited (adaptive optics) beam sizes smaller than typical orbits (au) of exoplanets. When using such tight beams in the transmitter, the position of the receiving telescope (e.g. on a planet in motion) needs to be known with high accuracy at the time of arrival of the photons \citep{1992lbsa.conf..637S,2016QuEle..46..966M}. \subsection{Blend of probe and star} \label{sec:blend} In the previous sections, we have neglected the noise flux from the target star. 
This is justified for sky-projected separations which allow for the use of coronagraphs, which suppress all but $10^{-9}$ \citep{2006ApJS..167...81G,2015RAA....15..453L} of the starlight ($4.25\times10^{6}\,\gamma$\,sec$^{-1}$\,m$^{-2}$) at a separation of 1\,au at Proxima Centauri. During most of the flight, the problem is much less severe because of the large proper motion of $\alpha$\,Cen \citep[3.7\,arcsec\,yr$^{-1}$,][]{2016A&A...594A.107K}. We can estimate data rates for this increased noise level within the Holevo bound, and obtain a data rate of order $10^{-5}$\,bits per second per Watt. Such a low data rate is insufficient for the transfer of images or other observational data, but may be sufficient for simple telemetry and onboard health status. \subsection{Current technological level and photon dimensions} The Holevo bound assumes the use of a number of modes to encode information into photons. The available modes in photons are their time of arrival (sometimes called phase modulation), their frequency (or color), and their polarization. Realizing $10^{-5}$ photons per mode will require many ($>10^5$) modes to encode the information. This can be done with a combination of color, timing and polarization. Commonly used are time-frequency modulations. The usage of polarized light is less common, but might be beneficial for our case. Starlight is polarized only by a few percent \citep{2002AIPC..609...44F}, so that the use of polarization, which is possible for lasers, can reduce noise levels by a factor of two. We now examine currently available technology. For the sender, the shortest possible laser pulse length has decreased by 11 orders of magnitude during the last 50 years, from $100\,\mu$s in the free-running laser of \citet{1960Natur.187..493M} to 67 attoseconds \citep[$10^{-18}$\,s,][]{2012OptL...37.3891Z}. For a detailed history of the exponential improvements, see \citet{2004RPPh...67..813A}. 
While the pulse length is very short, the repetition rate is slower by many orders of magnitude. The highest data rates are currently found in fiber-optic communication, sending pulses of light through an optical fiber, with a current record of order Tbits/s ($10^{12}$\,bits/s) on one glass fiber \citep{2016NatSR...621278M}. Commercial products are available with data rates $1-3$ orders of magnitude below this value. The industry standard employs 100 channels with a channel spacing of 100\,GHz (0.8\,nm) between $1530-1612$\,nm, covering a frequency range of $186-196$\,THz \citep{ITU2012}. Limiting factors are the small bandwidth (82\,nm, or $b=5$\%), the wavelength stability of lasers with thermal changes, signal degradation from nonlinear effects in optical fibers, inter-channel crosstalk and (clock) timing jitter. On the receiver side, current photon-counting detectors can be relatively fast (timings below $10^{-10}$\,s) and efficient ($>90$\%) with a low dark count rate ($<1$\,c.p.s.), but suffer from longer ($10^{-7}$\,s) reset times \citep{2013NaPho...7..210M}. Classical photomultiplier tubes offer timings (and reset times) of $10^{-9}$\,s \citep{2006NIMPA.563..368D}. Current photon detectors are fast enough to sample 10\,GHz frequencies at the Nyquist limit \citep[$B<f/2$,][]{1928TAIEE..47..617N}. These limits, however, are technological, and further improvements can be expected. The ideal instrument for high-mode communication would be a high-throughput, high-resolution spectrograph with low-noise, high-speed photon counters on each sub-channel. \subsection{Bi-directional communication} The focus in this paper was the communication from a distant, small, power-limited probe towards home-base. The opposite way, perhaps to send new instructions, is comparably easier: home-base has less stringent limits on aperture size and power. Telescope diameters might be larger by 1--2 orders of magnitude, and power by several orders of magnitude. 
A major issue might be that the probe needs to ``listen'' at the moment the photons arrive, and not spend the time sending, making observations, or in hibernation. A simple solution would be pre-arranged timeslots. \subsection{Comparison to the literature} \label{lit} In his ``Roadmap to Interstellar Flight'', \citet{2016arXiv160401356L} recently approximated the communication flux as (his section 5.6, our notation) $F=D^2 P / (4 d^2\lambda^2)$, which yields an overestimate of $\approx11.7$\% compared to our Eq.~\ref{eq2}. In their ``Search for nanosecond optical pulses'', \citet{2000ASPC..213..545H} and \citet{2004ApJ...613.1270H} describe the received photon flux as \begin{equation} N_{\rm d}= \frac{\pi^2 D_{\rm t}^2 D_{\rm r}^2 E_{\rm p}}{16 \lambda d^2 h c} \end{equation} (their Eq. 2, neglecting extinction; they set $D=D_{\rm r}=D_{\rm t}$). Numerically, this produces a received photon flux which is too high by $\approx3.67\times$. In their ``Search for Optical Laser Emission'', \citet{2015PASP..127..540T} define the received photon flux in the same form as our Eq.~\ref{eq2} (their Eq.~5), but with an incorrect divisor of 2, resulting in $4\times$ too many photons received. The work by \citet{1996SPIE.2704...61H} discusses beam widths and frequencies of interstellar laser communications, but neglects extinction, and consequently proposes laser communication in the hydrogen Lyman-$\alpha$ line at 126\,nm over distances of $3,000$\,LY, which is impossible because of the very high UV extinction (Figure~\ref{extinction}). A more traditional interstellar radio communication design from $\alpha$\,Cen has recently been published by \citet{2016JBIS...69...278G}. It presents scenarios for antennas with sizes of 1--15\,km on both sides, transmitting MW power at 32\,GHz, achieving a data rate of Gbits/s ($10^9$\,bits/s). The antenna weight is mentioned as $40,000$\,kg, and the total space-ship weight is $10^7$\,kg. 
Clearly, if such masses and power can be sent to other stars, the question of communication will be trivial in comparison. \subsection{\texttt{PyCom} software package} We provide the Python-based software package \texttt{PyCom} as open source under the free MIT license\footnote{\url{http://github.com/hippke/communication/}}. The repository provides function calls for the equations in this paper, a tutorial, and scripts to reproduce all figures. \section{Conclusion} In this work (paper I of the series), we have set the framework for data transfer between telescopes, using the example of a light-weight, power-limited probe at $\alpha$\,Cen. We have explored limiting factors such as extinction, noise, and technological constraints. We have calculated optimal frequencies and achievable data rates. The Holevo bound gives an upper limit of a few bits per photon for realistic signal and noise levels for communication between a meter-sized probe at $\alpha$\,Cen and a large (39\,m) telescope on earth. The achievable data rate is of order bits per second per Watt. For a probe with a size of a few meters, and photovoltaic energy of kW\,m$^{-2}$, power levels might be kW, resulting in data rates of order kbits/s. The optimal wavelength for communication with $\alpha$\,Cen, at current technological levels, is 300\,nm (space-based receiver) to 400\,nm (earth-based) and increases with distance, due to extinction, to a maximum of $\approx3$\,$\mu$m to the center of the galaxy at 8\,kpc. A critical requirement in this scheme is the coronagraphic suppression of the stellar background at the level of $10^{-9}$ within a few tenths of an arcsecond of the bright star, which has not been demonstrated yet. Further research on this topic is encouraged. In paper II, the use of a stellar gravitational lens will be discussed. 
In paper III, we will relax technological constraints to explore the ultimate, most efficient interstellar communication scheme which yields insight into communication of more advanced life in the universe, if it exists. \acknowledgments The author is thankful to Ren\'{e} Heller, Duncan Forgan, John Learned and Tony Zee for helpful discussions, and to the Breakthrough Initiatives for an invitation to the Breakthrough Discuss 2017 conference at Stanford University. \bibliographystyle{yahapj}
\section{Introduction} \label{intro} \begin{wrapfigure}{R}{8cm} \vspace{-25pt} {\includegraphics[width=0.48\textwidth, clip]{tease}} \caption{A simple example illustrating the goal of our work, which is to learn temporal attention between nodes by observing the dynamics of events. The learned attention is then used to make better future predictions. Dotted edges denote attention values yet to be updated.} \label{fig:tease} \end{wrapfigure} Graph structured data arise from fields as diverse as social network analysis, epidemiology, finance, and physics, among others ~\cite{bronstein2017geometric, hamilton2017representation, zhou2018graph, battaglia2018relational}. A graph $\mathcal{G} = (\cal V, \cal E )$ is comprised of a set of $N$ nodes, $\cal V$, and the edges, $\cal E$, between them. For example, a social network graph may consist of a set of people (nodes), and the edges may indicate whether two people are friends. Recently, graph neural networks (GNNs)~\cite{Scarsellietal2009,li2015gated,bruna2013spectral, kipf2016semi, Santoroetal2017, Hamiltonetal2017} have emerged as a key modeling technique for learning representations of such data. These models use recursive neighborhood aggregation to learn latent features, ${Z}^{(t)} \in \mathbb{R}^{N \times c}$, of the nodes at state $t$, given features, ${Z}^{(t-1)} \in \mathbb{R}^{N \times d}$, at the previous state, $t-1$: \begin{equation} \label{eq:gcn} {Z}^{(t)}= f \left({Z}^{(t-1)}, A, W^{(t)} \right), \end{equation} \noindent where $A \in \mathbb{R}^{N\times N}$ is an adjacency matrix of graph $\cal G$. $W^{(t)}$ are trainable parameters; $d, c$ are input and output dimensionalities, and $f$ is some differentiable function, which is typically an aggregation operator followed by a nonlinearity. State $t$ can correspond to either a layer in a feedforward network or a time step in a recurrent net~\cite{li2015gated}. 
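A minimal numerical instance of Eq.~\eqref{eq:gcn} can be written with $f$ chosen as a nonlinearity applied to a degree-normalized neighborhood aggregation; this is one common choice, and the normalization, activation, and toy dimensions below are illustrative rather than the exact model used here:

```python
import numpy as np

def gnn_layer(Z, A, W, act=np.tanh):
    """One step of Eq. (1): act(normalized-adjacency @ Z @ W)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)  # node degrees for normalization
    return act((A_hat / deg) @ Z @ W)       # mean aggregation, then nonlinearity

# Toy graph: a path 0-1-2, with d=4 input and c=2 output features.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
Z = np.random.RandomState(0).randn(3, 4)
W = np.random.RandomState(1).randn(4, 2)
Z1 = gnn_layer(Z, A, W)   # new node features, shape (3, 2)
```
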
The focus of GNNs thus far has been on static graphs~\cite{bronstein2017geometric}---graphs with a fixed adjacency matrix $A$. However, a key component of network analysis is often to predict the state of a graph as $A$ evolves over time. For example, knowledge of the evolution of person-to-person interactions during an epidemic facilitates analysis of how a disease spreads \cite{bansal2010dynamic}, and can be expressed in terms of the links between people in a dynamic graph. Other applications include predicting friendships, player locations, or interactions between players in team sports, such as soccer~\cite{minderer2019unsupervised, GraphVRNN}. In many dynamic graph applications, edges can be: 1) short term (and usually frequent) interactions, such as direct contacts, messaging, or passes in sports, or 2) long term intrinsic connections, such as sibling connections, affiliations, and formations. In practice, these long term edges are often specified by humans, and can be suboptimal and expensive to obtain. The performance of methods such as DyRep~\cite{DyRep} relies heavily on the quality of long term edge information. To alleviate this limitation, we take a different approach and infer long term structure jointly with the target task. The inferred structure is modelled as temporal attention between nodes (Fig.~\ref{fig:tease}). We use DyRep~\cite{DyRep} as a backbone model and Neural Relational Inference (NRI)~\cite{NRI} to learn attention. We also propose a bilinear transformation layer for pairs of node features, instead of the concatenation typically employed in prior work, including DyRep and NRI. On two dynamic graph datasets, Social Evolution~\cite{madan2012sensing} and GitHub\footnote{https://www.gharchive.org/}, we achieve strong performance on the task of dynamic link prediction, and show interpretability of learned temporal attention sliced at particular time steps. 
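The bilinear pairwise transformation can be contrasted with the concatenation baseline in a few lines. This sketch is illustrative only: the dimensions, initialization, and scalar-score form are placeholders, not the exact layer used in our model:

```python
import numpy as np

rng = np.random.RandomState(0)
d = 8                      # node feature dimensionality (illustrative)
W = 0.1 * rng.randn(d, d)  # learnable bilinear interaction matrix
w = rng.randn(2 * d)       # learnable weights for the concatenation baseline

def score_bilinear(z_u, z_v):
    # z_u^T W z_v: models interactions between every pair of feature dims
    return float(z_u @ W @ z_v)

def score_concat(z_u, z_v):
    # baseline used in prior work: a linear layer on [z_u; z_v]
    return float(np.concatenate([z_u, z_v]) @ w)

z_u, z_v = rng.randn(d), rng.randn(d)
s_bi, s_cat = score_bilinear(z_u, z_v), score_concat(z_u, z_v)
```

The bilinear form couples every feature dimension of $\mathbf{z}_u$ with every dimension of $\mathbf{z}_v$, whereas the concatenation baseline is purely additive in the two embeddings.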
\section{Related Work} \label{sec:relwork} Prior work \cite{sarkar2007latent, loglisci2017leveraging, yuan2017temporal, meng2018subgraph, sanchez2018graph, GraphVRNN, NRI} addressing the problem of learning representations of entire dynamic graphs has tended to develop methods that are very specific to the application, with only a few shared ideas. This is primarily due to the difficulty of learning from temporal data in general and temporal graph-structured data in particular, which remains an open problem~\cite{zhou2018graph} that we address in this work. \textbf{RNN-based methods.} Given an evolving graph $[...,{\cal{G}}_{t-1}, {\cal{G}}_t, {\cal{G}}_{t+1},...]$, where $t \in [0,T-1]$ is a discrete index at a continuous time point $\tau$, most prior work uses some variant of a recurrent neural network (RNN) to update node embeddings over time \cite{chang2016compositional, yuan2017temporal, sanchez2018graph, NRI, GraphVRNN}. The weights of RNNs are typically shared across nodes~\cite{chang2016compositional, yuan2017temporal, sanchez2018graph, NRI}; however, in the case of smaller graphs, a separate RNN per-node can be learned to better tune the model~\cite{GraphVRNN}. RNNs are then combined with some function that aggregates features of nodes. While this can be done without explicitly using GNNs (e.g., using the sum or average of all nodes instead), GNNs impose an inductive relational bias specific to the domain, which often improves performance~\cite{yuan2017temporal, NRI, sanchez2018graph, GraphVRNN}. \textbf{Temporal knowledge graphs.} The problem of inferring missing links in a dynamic graph is analogous to the problem of inferring connections in a temporal knowledge graph (TKG). In a TKG, nodes and labelled edges represent entities and the relationships between them, respectively \cite{goel2019diachronic}. Each edge between two nodes encodes a real-world fact that can be represented as a tuple, such as: \texttt{(Alice, Liked, Chopin, 2019)}. 
In this example, Alice and Chopin are the nodes, the fact that Alice liked Chopin is the relationship, and 2019 is the timestamp associated with that fact. It may be that Alice no longer likes Chopin, as tastes change over time. To allow these timestamps to inform link prediction, previous work has either scored the entire tuple together \cite{goel2019diachronic} or extended static embedding methods to incorporate the time step separately \cite{jiang2016towards, dasgupta2018hyte, ma2019embedding, garcia2018learning}. These methods tend to focus on learning embeddings of particular relationships, instead of learning the entire TKG. \textbf{Dynamic link prediction}. Several recent papers have explored dynamic link prediction. In DynGem \cite{goyal2018dyngem}, graph autoencoders are updated at each time step. The skip-gram model in \cite{nguyen2018continuous} incorporates temporal random walks. Temporal attention is revisited in DySAT \cite{sankar2018dynamic}, whereas \cite{goyal2018dyngraph2vec} learns the next graph in a sequence based on a reconstruction error term. DynamicTriad \cite{zhou2018dynamic} uses triad closure to learn temporal node embeddings. LSTMs are featured in both GC-LSTM \cite{chen2018gc} and DyGGNN \cite{taheri2019learning}, and both involve graph neural networks, although only the latter uses whole-graph encoding. DyRep~\cite{DyRep} is a method for learning from dynamic graphs based on temporal point processes~\cite{aalen2008survival}. This method: \vspace{-5pt} \begin{itemize} \item supports two time scales of graph evolution (i.e., long-term and short-term edges); \item operates in continuous time; \item scales well to large graphs; and \item is data-driven due to employing a GNN similar to~\eqref{eq:gcn}. \end{itemize} \vspace{-5pt} These key advantages make DyRep favorable compared to the other methods discussed above. 
Closely related to our work, there are a few applications where graph ${\cal G}_t$ is considered to be either unknown or suboptimal, so it is inferred simultaneously with learning the model~\cite{yuan2017temporal, xu2019unsupervised, NRI}. Among them, \cite{yuan2017temporal, xu2019unsupervised} focus on visual data, while NRI~\cite{NRI} proposes a more general framework and, therefore, is adopted in this work. For a review of other approaches for learning from dynamic graphs we refer to~\cite{kazemi2019relational}. \section{Background: DyRep} \label{sec:background} Here we describe relevant details of the DyRep model. A complete description can be found in~\cite{DyRep}. DyRep is a representation framework for dynamic graphs evolving according to two elementary processes: \begin{itemize} \setlength\itemsep{0em} \item $k=0$: \textbf{Long-term association}, in which nodes and edges are added or removed from the graph, affecting the evolving adjacency matrix $A^{t} \in \real^{N \times N}$. \item $k=1$: \textbf{Communication}, in which nodes communicate over a short time period, whether or not there is an edge between them. \end{itemize} \begin{wrapfigure}{R}{8cm} \centering \vspace{-20pt} \includegraphics[width=0.48\textwidth, trim={0cm 0cm 0cm 0cm}, clip]{dyrep_update.pdf} \vspace{-20pt} \caption{A recursive update of node embeddings $\mathbf{z}$ and temporal attention $S$. An event between nodes $u=1$ and $v=3$ creates a temporary edge that allows information to flow (pink arrows) from neighbors of node $u$ to node $v$. Orange edges denote updated attention values. Normalization of attention values to sum to one can affect attention of neighbors of node $v$. Dotted edges denote absent edges (or edges with small values).} \vspace{-25pt} \label{fig:update} \end{wrapfigure} For example, in a social network, association may be represented as one person adding another as a friend. 
A communication event may be an SMS message between two friends (an association edge exists) or an SMS message between people who are not friends yet (an association edge does not exist). These two processes interact to fully describe information transfer between nodes. Formally, an event is a tuple $o^t=(u, v, \tau, k)$ of type $k \in \{0, 1\}$ between nodes $u$ and $v$ at continuous time point $\tau$ with time index $t$. For example, the tuple $o^1=(1, 3, \text{9:10AM}, 1)$ would indicate that node $u=1$ communicates with node $v=3$ at time index $t=1$, corresponding to time point $\tau=$ 9:10 in the morning. \subsection{Node update} Each event between any pair of nodes $u,v$ triggers an update of node embeddings $\mathbf{z}_{u}^t, \mathbf{z}_{v}^t \in \real^d$ followed by an update of temporal attention $S^t\in \real^{N \times N}$ in a recursive way (Fig.~\ref{fig:update}). That is, updated node embeddings affect temporal attention, which in turn affects node embeddings at the next time step. In particular, the embedding of node $v$, and analogously node $u$, is updated based on the following three terms: \begin{equation} \label{eq:node_update} \mathbf{z}_v^t = \sigma \left[ \right. \underbrace{W^{\text{S}}\mathbf{h}^{\text{S}, t-1}_u}_{\textbf{Attention-based}} + \underbrace{W^{\text{R}} \mathbf{z}_v^{t_v-1}}_{\text{Self-propagation}} + \underbrace{W^{\text{T}}(\tau - \tau^{t_v-1})}_{\text{Temporal shift}} \left. \right], \end{equation} where $W^{\text{S}} \in \real^{d \times d}$, $W^{\text{R}} \in \real^{d \times d}$ and $W^{\text{T}} \in \real^{d}$ are learned parameters and $\sigma$ is a nonlinearity. \begin{figure*}[t] \centering \includegraphics[width=0.95\textwidth, trim={0cm 0cm 0cm 0cm}, clip]{overview} \caption{Overview of our approach relative to DyRep~\cite{DyRep}, in the context of dynamic link prediction. During training, events $o^t$ are observed, affecting node embeddings ${Z}$. 
In contrast to DyRep, which updates attention weights $S^t$ in a predefined hard-coded way based on associative connections $A^t$, such as \textsc{CloseFriend}, we assume that graph $A^t$ is unknown and our latent dynamic graph (LDG) model based on NRI~\cite{NRI} infers $S^t$ by observing how nodes communicate. We show that the learned $S^t$ has a close relationship to certain associative connections. Best viewed in colour.} \label{fig:overview} \end{figure*} \paragraph{Attention-based term.} We highlight the importance of the first term, which is affected by temporal attention $S^{t-1}$ between node $u$ and all its one-hop neighbors ${\cal{N}}^{t-1}_u$ \cite{DyRep}: \begin{equation} \label{eq:h_struct} \mathbf{h}^{\text{S}, t-1}_u = f\left[\text{softmax}(S^{t-1}_{u})_i \cdot (W^{\text{h}}\mathbf{z}^{t-1}_i), \forall i \in {\cal{N}}^{t-1}_u \right], \end{equation} where $f$ is an aggregator and $W^{\text{h}} \in \real^{d \times d}$ are learned parameters. Note that features of node $u$'s neighbors are used to update node $v$'s embedding, which can be interpreted as creating a temporal edge by which node features propagate between the two nodes. The amount of information propagated from node $u$'s neighbors is controlled by attention $S^{t-1}$, which we propose to learn as described in Section~\ref{sec:model}. The other two terms in \eqref{eq:node_update} include a recurrent update of node $v$'s features and a function of the waiting time between the current event and the previous event involving node $v$. 
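For concreteness, the update in \eqref{eq:node_update} together with the attention-based term \eqref{eq:h_struct} can be sketched in a few lines of NumPy. The choice of a sum as the aggregator $f$, $\tanh$ as $\sigma$, and all variable names and shapes below are illustrative assumptions rather than the exact DyRep implementation.

```python
import numpy as np

def node_update(Z, S_prev, W_S, W_h, W_R, W_T, u, v, nbrs_u, tau, tau_prev_v):
    """Sketch of the DyRep node update: the embedding of node v is refreshed
    from (a) attention-weighted features of u's neighbors, (b) v's previous
    embedding, and (c) the waiting time since v's last event."""
    # (a) attention-based term: softmax of S_prev over u's one-hop neighborhood,
    # with a sum chosen as the aggregator f (an assumption)
    att = np.exp(S_prev[u, nbrs_u] - S_prev[u, nbrs_u].max())
    att /= att.sum()
    h_struct = (att[:, None] * (Z[nbrs_u] @ W_h.T)).sum(axis=0)
    # (b) self-propagation and (c) temporal shift, squashed by sigma = tanh
    return np.tanh(W_S @ h_struct + W_R @ Z[v] + W_T * (tau - tau_prev_v))
```

Note how the neighborhood of $u$, not of $v$, drives the first term: the event itself acts as the temporary edge along which those features reach $v$.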
\subsection{Attention update} In DyRep, attention $S^t$ between nodes $u,v$ depends on three terms: 1) long term associations ${A}^{t-1}$; 2) attention $S^{t-1}$ at the previous time step; and 3) \textit{conditional intensity} $\lambda^t_{k,u,v}$ (for simplicity denoted as $\lambda^t_k$ hereafter) of events between nodes $u$ and $v$, which in turn depends on their embeddings: \begin{equation} \label{eq:mp_S} S^t_{uv} = f_{S}({A}^{t-1}, S^{t-1}, \lambda^t_k), \end{equation} where $f_{S}$ is \textsc{Algorithm 1} in \cite{DyRep}. Let us briefly describe the last term.\looseness-1 \paragraph{Conditional intensity $\lambda$.} Conditional intensity $\lambda^t_k$ represents the instantaneous rate at which an event of type $k$ (i.e.,~association or communication) occurs between nodes $u$ and $v$ in the infinitesimally small interval $(\tau, \tau + \delta \tau]$ \cite{schoenberg2010introduction}. DyRep formulates the conditional intensity as a softplus function of the concatenated learned node representations ${\mathbf{z}}^{t-1}_{u}, {\mathbf{z}}^{t-1}_{v} \in \real^d$: \begin{equation} \label{eq:lambda_org} \lambda^t_k = \psi_k \log \left(1 + \exp \left\{ \frac{\tp{\omega_k} \left[{\mathbf{z}}^{t-1}_{u} , {\mathbf{z}}^{t-1}_{v} \right]}{\psi_k} \right\} \right), \end{equation} \noindent where $\psi_k$ is the scalar trainable rate at which events of type $k$ occur, and $\omega_k \in \real^{2d}$ is a trainable vector that represents the compatibility between nodes $u$ and $v$ at time $t$; $[\cdot, \cdot]$ is the concatenation operator. Combining~\eqref{eq:lambda_org} with~\eqref{eq:mp_S}, we see that $S^t_{u,v}$ implicitly depends on the embeddings of nodes $u$ and $v$. In DyRep, $\lambda$ is mainly used to compute the loss and optimize the model: its value should be high between nodes involved in events and low between all other pairs of nodes. 
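As an illustration, the scaled softplus in \eqref{eq:lambda_org} can be written as follows (a sketch; the function and variable names are ours):

```python
import numpy as np

def intensity_concat(z_u, z_v, omega_k, psi_k):
    """Conditional intensity as a scaled softplus of a linear score on the
    concatenated embeddings of nodes u and v (a sketch of the formulation)."""
    score = omega_k @ np.concatenate([z_u, z_v])
    # psi_k * log(1 + exp(score / psi_k)); log1p for numerical stability
    return psi_k * np.log1p(np.exp(score / psi_k))
```

The softplus keeps $\lambda^t_k$ positive, as required of an intensity, while $\psi_k$ controls how quickly it saturates.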
But $\lambda$ also affects attention values via a hard-coded algorithm, which makes several assumptions (see \textsc{Algorithm 1} in \cite{DyRep}). In particular, in the case of a communication event ($k=1$) between nodes $u$ and $v$, attention $S^t$ is only updated if an association exists between them ($A_{u,v}^{t-1} = 1$). Second, to compute attention between nodes $u$ and $v$, only their one-hop neighbors at time $t-1$ are considered. Finally, no learned parameters directly contribute to attention values, which can limit information propagation, especially in the case of imperfect associations $A^{t-1}$. In this paper, we extend the DyRep model in two ways. First, we examine the benefits of learned attention instead of DyRep's algorithm in~\eqref{eq:mp_S} by using the encoder of the variational autoencoder from NRI~\cite{NRI}. This permits learning a sparse representation of the interactions between nodes instead of using a hard-coded function $f_S$ and a known adjacency matrix $A$ (Section~\ref{sec:encoder}). Second, both the original DyRep work~\cite{DyRep} and \cite{NRI} use concatenation to make predictions for a pair of nodes (see \eqref{eq:lambda_org} above and (6) in \cite{NRI}), which only captures relatively simple relationships. We are interested in the effect of allowing a more expressive relationship to drive graph dynamics, and specifically to drive temporal attention (Sections~\ref{sec:encoder} and~\ref{sec:bilinear}). We demonstrate the utility of our model by applying it to the task of dynamic link prediction on two graph datasets (Section~\ref{sec:exp}). \section{Latent Dynamic Graph Model} \label{sec:model} In~\cite{NRI}, Neural Relational Inference (NRI) was proposed, showing that in some settings, models that use a learned representation, in which the human-specified graph structure is discarded, can outperform models that use the explicit graph. 
A learned sparse graph representation can retain the most salient features, i.e., only those connections that are necessary for the downstream task, whereas the human-specified graph may have redundant connections. While NRI learns the latent graph structure from observing node movement, we learn the graph by observing how nodes communicate. In this spirit, we repurpose the encoder of NRI, combining it with DyRep, which gives rise to our latent dynamic graph (LDG) model, described below in more detail (Fig.~\ref{fig:overview}). We also summarize our notation in Table~\ref{table:symbols} at the end of this section. \subsection{Bilinear Encoder} \label{sec:encoder} \begin{figure*}[btph] \centering \includegraphics[width=\textwidth, trim={0cm 0cm 0cm 0cm}, clip]{nri.pdf} \vspace{-10pt} \caption{Inferring an edge $S^t_{u,v}$ of our latent dynamic graph (LDG) using two passes, according to~\eqref{eq:mp_enc1}-\eqref{eq:mp_enc4}, assuming an event between nodes $u=1$ and $v=3$ has occurred. Even though only nodes $u$ and $v$ have been involved in the event, to infer the edge $S^t_{u,v}$ between them, interactions with all nodes in a graph are considered.\looseness-1} \label{fig:enc} \end{figure*} DyRep's encoder \eqref{eq:mp_S} requires a graph structure represented as an adjacency matrix $A$. We propose to replace this with a sequence of learnable functions $f_{S}^{\text{enc}}$, borrowed from~\cite{NRI}, that only require node embeddings as input: \begin{equation} \label{eq:enc_general} S^t = f_{S}^{\text{enc}}({Z}^{t-1}). 
\end{equation} Given an event between nodes $u$ and $v$, our encoder takes the embedding of each node $j \in \cal V $ at the previous time step $\mathbf{z}_j^{t-1}$ as an input, and returns an edge embedding $\mathbf{h}_{(u,v)}^2$ between nodes $u$ and $v$ using two passes of node and edge mappings, denoted by the superscripts $^1$ and $^2$ (Fig.~\ref{fig:enc}): \begin{numcases}{\text{1$^{st}$ pass}} \ \ \ \forall j: \mathbf{h}_j^1 &= $f_{\text{node}}^1(\mathbf{z}_j^{t-1})$ \label{eq:mp_enc1} \\[0.2em] \forall i,j: \mathbf{h}_{(i,j)}^1 &= $f^1_{\text{edge}}(\tp{\mathbf{h}_i^1} W^1 \mathbf{h}_j^1)$ \label{eq:mp_enc2} \end{numcases} \begin{numcases}{\hspace{0pt}\text{2$^{nd}$ pass}} \ \ \forall j: \mathbf{h}_j^2 &= $f_{\text{node}}^2(\sum_{i \neq j} \mathbf{h}_{(i,j)}^1)$ \label{eq:mp_enc3} \\[0.2em] u,v : \mathbf{h}_{(u,v)}^2 &= $f_{\text{edge}}^2(\tp{\mathbf{h}_u^2} W^2 \mathbf{h}^2_v)$ \label{eq:mp_enc4} \end{numcases} \noindent where $f_{\text{node}}^1, f_{\text{edge}}^1, f_{\text{node}}^2, f_{\text{edge}}^2$ are two-layer, fully-connected neural networks, as in~\cite{NRI}; $W^1$ and $W^2$ are trainable parameters implementing bilinear layers. In detail: \begin{itemize} \item \eqref{eq:mp_enc1}: $f_{\text{node}}^1$ transforms embeddings of all nodes; \item \eqref{eq:mp_enc2}: $f_{\text{edge}}^1$ is a ``node to edge'' mapping that returns an edge embedding $\mathbf{h}_{(i,j)}^1$ for all pairs of nodes $(i,j)$; \item \eqref{eq:mp_enc3}: $f_{\text{node}}^2$ is an ``edge to node'' mapping that updates the embedding of node $j$ based on all of its incident edges; \item \eqref{eq:mp_enc4}: $f_{\text{edge}}^2$ is similar to the ``node to edge'' mapping in the first pass, $f_{\text{edge}}^1$, but only the edge embedding between nodes $u$ and $v$ involved in the event is used. 
\end{itemize} The softmax function is applied to the edge embedding $\mathbf{h}_{(u,v)}^2$, which yields the edge type posterior as in NRI~\cite{NRI}: \begin{equation} q_\phi({S}^t_{u,v} | {Z}^{t-1}) \equiv \textrm{softmax}\left(\mathbf{h}_{(u,v)}^2\right), \end{equation} \noindent where $S_{u,v}^t \in \real^r$ are temporal one-hot attention weights sampled from the multirelational conditional multinomial distribution $q_\phi(S^t_{u,v} | {Z}^{t-1})$, hereafter denoted as $q_\phi(S | Z)$ for brevity; $r$ is the number of edge types (note that in DyRep $r=1$); and $\phi$ are parameters of the neural networks in~\eqref{eq:mp_enc1}-\eqref{eq:mp_enc4}. $S_{u,v}^t$ is then used to update node embeddings at the next time step, according to~\eqref{eq:node_update} and~\eqref{eq:h_struct}. Replacing \eqref{eq:mp_S} with \eqref{eq:enc_general} means that it is not necessary to maintain an explicit representation of the human-specified graph in the form of an adjacency matrix. The evolving graph structure is implicitly captured by $S^t$. While $S^t$ represents temporal attention weights between nodes, it can be thought of as a graph evolving over time; we therefore call our model a Latent Dynamic Graph (LDG). This graph, as we show in our experiments, can have a particular semantic interpretation. The two passes in~\eqref{eq:mp_enc1}-\eqref{eq:mp_enc4} are important to ensure that attention $S^t_{u,v}$ depends not only on the embeddings of nodes $u$ and $v$, $\mathbf{z}_u^{t-1}$ and $\mathbf{z}_v^{t-1}$, but also on how they interact with other nodes in the entire graph. With one pass, the values of $S^t_{u,v}$ would be predicted based only on local information, as only the previous node embeddings influence the new edge embeddings in the first pass \eqref{eq:mp_enc2}. This is one of the core differences of our LDG model compared to DyRep, where $S^t_{u,v}$ only depends on $\mathbf{z}_u^{t-1}$ and $\mathbf{z}_v^{t-1}$ (see \eqref{eq:mp_S}). 
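A minimal sketch of the two passes \eqref{eq:mp_enc1}--\eqref{eq:mp_enc4}, assuming the two-layer networks $f_{\text{node}}^1, f_{\text{edge}}^1, f_{\text{node}}^2, f_{\text{edge}}^2$ are replaced by simple $\tanh$ maps, and $W^1, W^2$ are bilinear three-way tensors (as in a standard bilinear layer); all shapes and names are our assumptions:

```python
import numpy as np

def encode_edge(Z, u, v, W1, W2):
    """Two-pass encoder sketch: node -> edge -> node -> edge, ending with a
    categorical posterior over r edge types for the event pair (u, v)."""
    N = Z.shape[0]
    h1 = np.tanh(Z)                                        # 1st pass: node map
    # 1st pass: bilinear "node -> edge" map for every ordered pair (i, j)
    e1 = np.tanh(np.einsum('ia,abk,jb->ijk', h1, W1, h1))  # (N, N, k)
    # 2nd pass: "edge -> node" map, summing incident edges without self-loops
    mask = 1.0 - np.eye(N)
    h2 = np.tanh(np.einsum('ij,ijk->jk', mask, e1))        # (N, k)
    # 2nd pass: edge embedding only for the pair (u, v) involved in the event
    logits = np.einsum('a,abr,b->r', h2[u], W2, h2[v])     # (r,)
    e = np.exp(logits - logits.max())
    return e / e.sum()                                     # posterior over types
```

Because the second pass aggregates over all incident edges, the returned posterior for $(u,v)$ reflects interactions with the whole graph, not just the two event nodes.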
Unlike DyRep, the proposed encoder generates multiple edges between nodes, i.e., $S_{u,v}^t$ are one-hot vectors of length $r$. We therefore modify the ``Attention-based'' term in~\eqref{eq:node_update}, such that features $\mathbf{h}_u^{\text{S}, t-1}$ are computed for each edge type and parameters $W^{\text{S}}$ act on concatenated features from all edge types, i.e., $W^{\text{S}} \in \real^{rd \times d} $. \subsection{Bilinear Intensity Function} \label{sec:bilinear} Earlier in~\eqref{eq:lambda_org} we introduced conditional intensity $\lambda^t_k$, computed based on concatenating node embeddings. We propose to replace this concatenation with a bilinear interaction: \begin{equation} \label{eq:lambda_bilinear} \lambda^t_k = \psi_k \log \left(1 + \exp \left\{ \frac{ \left[\tp{{\mathbf{z}}^{t-1}_{u}} \Omega_k {\mathbf{z}}^{t-1}_{v} \right]}{\psi_k} \right\} \right), \end{equation} \noindent where $\Omega_k \in \real^{d \times d}$ is a trainable matrix that allows more complex interactions between evolving node embeddings. \textbf{Why bilinear?} Bilinear layers have proven to be advantageous in settings like Visual Question Answering (e.g.,~\cite{kim2016hadamard}), where multi-modal embeddings interact. A bilinear layer takes the form $\tp{\mathbf{x}} W \mathbf{y}$, so that each weight $W_{ij}$ is associated with a pair of features $\mathbf{x}_i, \mathbf{y}_j$, where $i,j \in \{1,\ldots,d\}$ and $d$ is the dimensionality of features $\mathbf{x}$ and $\mathbf{y}$. That is, the result includes the products $\mathbf{x}_iW_{ij}\mathbf{y}_j$. In the concatenation case, the result only includes $\mathbf{x}_iW_{i}$ and $\mathbf{y}_jW_{j}$, i.e., each weight is associated with a feature from either $\mathbf{x}$ or $\mathbf{y}$, not both. As a result, bilinear layers have some useful properties, such as separability~\cite{tenenbaum2000separating}. In our case, they permit a richer interaction between embeddings of different nodes. 
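The difference can be seen on toy vectors (our own construction): a concatenation-based score is additive in $\mathbf{x}$ and $\mathbf{y}$, so the effect of replacing $\mathbf{y}$ is the same for every $\mathbf{x}$, whereas a bilinear score couples each $\mathbf{x}_i$ with each $\mathbf{y}_j$:

```python
import numpy as np

w = np.arange(6.0)      # weights acting on the concatenation [x, y]
W = np.eye(3)           # bilinear weights: x^T W y = dot(x, y) here

def concat_score(x, y):
    return w @ np.concatenate([x, y])   # additive: f(x) + g(y)

def bilinear_score(x, y):
    return x @ W @ y                    # W[i, j] couples x[i] with y[j]

x1, x2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
y1, y2 = np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])

# Effect of swapping y1 -> y2 under each score:
d_concat_x1 = concat_score(x1, y1) - concat_score(x1, y2)   # -> 2.0
d_concat_x2 = concat_score(x2, y1) - concat_score(x2, y2)   # -> 2.0 (same)
d_bilin_x1 = bilinear_score(x1, y1) - bilinear_score(x1, y2)  # -> -1.0
d_bilin_x2 = bilinear_score(x2, y1) - bilinear_score(x2, y2)  # -> 0.0
```

Under concatenation the change is independent of the partner embedding; under the bilinear form it is modulated by it, which is exactly the interaction exploited here.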
So, we replace NRI's and DyRep's concatenation layers with bilinear ones in \eqref{eq:mp_enc2}, \eqref{eq:mp_enc4} and \eqref{eq:lambda_bilinear}. \begin{table}[tpbh] \vspace{10pt} \caption{Mathematical symbols used in model description. Number of relation types $r=1$ in DyRep and $r > 1$ in the proposed LDG.} \vspace{-10pt} \label{table:symbols} \begin{center} \begin{tabular}{llp{0.7\linewidth}} \hline Notation & Dimensionality & Definition\\ \hline \hline \\[-0.1cm] $\tau$ & $1$ & Point in continuous time \\%& DyRep \& LDG \\ $t$ & $1$ & Time index \\ $t_v -1$ & $1$ & Time index of the previous event involving node $v$ \\ $\tau^{t_v -1}$ & $1$ & Time point of the previous event involving node $v$ \\[0.1cm] \hline \\[-0.1cm] $i,j$ & $1$ & Index of an arbitrary node in the graph \\ $v,u$ & $1$ & Index of a node involved in the event \\ $A^t$ & ${N \times N}$ & Adjacency matrix at time $t$ \\%& DyRep \& LDG \\ $\mathcal{N}_u(t)$ & $\mid\mathcal{N}_u(t)\mid$ & One-hop neighbourhood of node $u$ \\ $\text{Z}^{t}$ & ${N \times d}$ & Node embeddings at time $t$ \\%& DyRep \& LDG \\ ${\mathbf{z}}_v^t$ & $d$ & Embedding of node $v$ at time $t$ \\[0.1cm] $\mathbf{h}_u^{S,t-1}$ & $d$ & Attention-based embedding of node $u$'s neighbors at time $t-1$ \\ \hline \\[-0.1cm] $\mathbf{h}_j^1$ & $d$ & Learned hidden representation of node $j$ after the first pass \\ $\mathbf{h}_{i,j}^1$ & $d$ & Learned hidden representation of an edge between nodes $i$ and $j$ after the first pass\\ $\mathbf{h}_j^2$ & $d$ & Learned hidden representation of node $j$ after the second pass \\ $\mathbf{h}_{(u,v)}^2$ & $r$ & Learned hidden representation of an edge between nodes $u$ and $v$ involved in the event after the second pass (Fig.~\ref{fig:enc})\\[0.1cm] \hline\\[-0.1cm] $S^t$ & ${N \times N \times r}$ & Attention at time $t$ with $r$ multirelational edges \\ $S_{u}^{t}$ & ${N \times r}$ & Attention between node $u$ and its one-hop neighbors at time $t$ \\ $S_{u,v}^{t}$ & ${r}$ & Attention 
between nodes $u$ and $v$ at time $t$ for all edge types $r$ \\ $\lambda_k^t$ & $1$ & Conditional intensity of edges of type $k$ at time $t$ between nodes $u$ and $v$ \\ $\psi_k$ & $1$ & Trainable rate at which edges of type $k$ occur \\ $\omega_k$ & $2d$ & Trainable compatibility of nodes $u$ and $v$ at time $t$ \\ $\Omega_k$ & $d \times d$ & Trainable bilinear interaction matrix between nodes $u$ and $v$ at time $t$ \\ \hline \end{tabular} \end{center} \vspace{0pt} \end{table} \subsection{Training the LDG Model} Given a minibatch with a sequence of $\mathcal{P}$ events, we optimize the model by minimizing the following cost function: \begin{equation} \label{eq:loss} \mathcal{L} = \mathcal{L}_{\textsubscript{events}} + \mathcal{L}_{\textsubscript{nonevents}} + \textrm{KL}[q_\phi(S | Z) || p_\theta(S)], \end{equation} where $\mathcal{L}_{\textsubscript{events}} = -\sum_{p=1}^\mathcal{P} \log (\lambda^{t_p}_{k_p})$ is the total negative log of the intensity rate for all events between nodes $u_p$ and $v_p$ (i.e., all nodes that experience events in the minibatch); and $\mathcal{L}_{\textsubscript{nonevents}} = \sum_{m=1}^\mathcal{M} \lambda^{t_m}_{k_m} $ is the total intensity rate of all nonevents between nodes $u_m$ and $v_m$ in the minibatch. Since the sum in the second term is combinatorially intractable in many applications, we sample a subset of nonevents according to the Monte Carlo method as in~\cite{DyRep}, following their choice of ${\cal M} = 5 {\cal P}$. The first two terms, $\mathcal{L}_{\textsubscript{events}}$ and $\mathcal{L}_{\textsubscript{nonevents}}$, were proposed in DyRep~\cite{DyRep} and we use them to train our baseline models. The KL divergence term, adopted from NRI~\cite{NRI} to train our LDG models, regularizes the model to align the predicted $q_\phi(S | Z)$ and prior $p_\theta(S)$ distributions of attention over edges. 
Here, $p_\theta(S)$ can, for example, be defined as $[\theta_1, \theta_2, ..., \theta_r]$ in the case of $r$ edge types. Following~\cite{NRI}, we consider uniform and sparse priors. In the uniform case, $\theta_i=1/r, i =1,\ldots,r$, such that the KL term becomes, up to an additive constant, the negative sum of entropies $H$ over the events $p = 1, \ldots, {\cal P}$ and over the generated edges excluding self-loops ($u \neq v$): \begin{equation} \textrm{KL}[q_\phi(S | Z) || p_\theta(S)] = - \sum_p \sum_{u \neq v} H_\phi^p, \end{equation} where entropy is defined as a sum over edge types $r$: $H_\phi^p =-\sum q_\phi^p \log{q_\phi^p}$ and $q_\phi^p$ denotes distribution $q_\phi(S^{t_p}_{u,v} | {Z}^{t_p-1})$. In the case of a sparse $p_\theta(S)$ with, for instance, $r=2$ edge types, we set $p_\theta(S)=[0.90, 0.05, 0.05]$, meaning that we generate $r + 1$ edge types, but do not use the non-edge type corresponding to the high probability, leaving only $r$ sparse edge types. In this case, the KL term becomes: \begin{equation} \textrm{KL}[q_\phi(S | Z) || p_\theta(S)] = - \sum_p \sum_{u \neq v} ( H_\phi^p -H_{\phi,q}^p ), \end{equation} where $H_{\phi,q}^p = -\sum q_\phi^p \log{p_\theta(S)} $. During training, we update $S^{t_p}_{u,v}$ after each $p$-th event and backpropagate through the entire sequence in a minibatch. To backpropagate through the process of sampling discrete edge values, we use the Gumbel reparametrization~\cite{maddison2016concrete}, as in~\cite{NRI}. 
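Both variants of the KL term reduce to a standard categorical KL divergence per generated edge; the toy sketch below (with our own numbers) makes the uniform-prior identity $\textrm{KL} = \log r - H(q)$ explicit:

```python
import numpy as np

def kl_categorical(q, p):
    """KL[q || p] for one edge's categorical posterior q over r edge types
    (a sketch of the per-edge KL term; the vectors are illustrative)."""
    q, p = np.asarray(q, float), np.asarray(p, float)
    return float(np.sum(q * (np.log(q) - np.log(p))))

q = np.array([0.7, 0.2, 0.1])        # posterior over r = 3 edge types
uniform = np.full(3, 1 / 3)
entropy = -float(np.sum(q * np.log(q)))
# With a uniform prior the KL is the entropy term up to the constant log r:
assert np.isclose(kl_categorical(q, uniform), np.log(3) - entropy)

# A sparse prior as in the text (r = 2 usable types plus a non-edge type):
sparse = np.array([0.90, 0.05, 0.05])
kl_sparse = kl_categorical(q, sparse)  # = cross-entropy(q, p) - H(q)
```

The second case corresponds to the cross-entropy-minus-entropy form of the sparse-prior KL above.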
\begin{figure}[tpbh] \centering \vspace{-10pt} \begin{small} \begin{tabular}{ccc} \rotatebox[]{90}{\textsc{Social Evolution}} & \includegraphics[width=0.3\textwidth, align=c]{social_baseline_bilinear_train_MAR} & \includegraphics[width=0.3\textwidth, align=c]{social_enc_sp_bilinear_train_MAR} \\ \rotatebox[]{90}{\textsc{GitHub}} & \includegraphics[width=0.3\textwidth, align=c]{github_baseline_bilinear_train_MAR} & \includegraphics[width=0.3\textwidth, align=c]{github_enc_sp_bilinear_train_MAR} \\ & (a) DyRep and Bilinear DyRep & (b) Bilinear sparse LDG \\ \end{tabular} \end{small} \vspace{-5pt} \caption{(a) Training curves and test MAR for \textbf{baseline DyRep}, \textbf{bilinear DyRep}, and \textbf{larger baseline DyRep}, with the number of trainable parameters equal to that of the bilinear DyRep. (b) Training curves and test MAR for the \textbf{bilinear LDG with sparse prior}, showing that all three components of the loss (see~\eqref{eq:loss}) generally decrease or plateau, reducing test MAR.} \label{fig:train_curves} \vspace{-10pt} \end{figure} \begin{wraptable}{R}{8cm} \vskip -0.2in \caption{Dataset statistics used in the experiments.} \vspace{-15pt} \label{table:stats} \begin{center} \begin{footnotesize} \begin{sc} \begin{tabular}{lcc} \toprule Dataset & Social Evolution & GitHub \\ \midrule \# nodes & 83 & 284 \\ \# initial associations & 575 & 149 \\ \# final associations & 708 & 710 \\ \# train comm events & 43,469 & 10,604 \\ \# train assoc events & 365 & 1,040 \\ \# test comm events & 10,462 & 8,667 \\ \# test assoc events & 73 & 415 \\ \bottomrule \end{tabular} \end{sc} \end{footnotesize} \end{center} \vskip -0.1in \end{wraptable} \section{Experiments} \label{sec:exp} \subsection{Datasets} We evaluate our model on two datasets (Table~\ref{table:stats}). \textbf{Social Evolution~\cite{madan2012sensing}.} This dataset consists of over one million events $o^t=(u, v, \tau, k)$. We preprocess this dataset in a way similar to~\cite{DyRep}. 
A communication event ($k=1$) is represented by the sending of an \textsc{SMS} message, or a \textsc{Proximity} or \textsc{Call} event from node $u$ to node $v$; an association event ($k=0$) is a \textsc{CloseFriend} record between the nodes. We also experiment with other associative connections (Fig.~\ref{fig:stats_baselines}). As \textsc{Proximity} events are noisy, we filter them by the probability that the event occurred, which is available in the dataset annotations. The filtered dataset on which we report results includes 83 nodes with around 43k training and 10k test communication events. As in~\cite{DyRep}, we use events from September 2008 to April 2009 for training, and from May to June 2009 for testing. \textbf{GitHub.} This dataset is provided by GitHub Archive and, compared to Social Evolution, is a very large network with sparse association and communication events. Our model expects frequent interactions between nodes, so we extract a dense subnetwork in which each user initiated at least 200 communication and 7 association events during 2013. Similarly to~\cite{DyRep}, we consider \textsc{Follow} events in 2011-2012 as initial associations, but to allow denser communications we consider more event types in addition to \textsc{Watch} and \textsc{Star}: \textsc{Fork}, \textsc{Push}, \textsc{Issues}, \textsc{IssueComment}, \textsc{PullRequest}, \textsc{Commit}. This results in a dataset of 284 nodes and around 10k training events (from December to August 2013) and 8k test events (from September to December 2013). We evaluate models only on communication events, since the number of association events is small, but we use both for training. At test time, given tuple $(u, \tau, k)$, we compute the conditional density of $u$ with all other nodes and rank them~\cite{DyRep}. We report Mean Average Ranking (MAR) and HITS@10: the proportion of test tuples for which the ground-truth node appears in the top 10. 
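Concretely, MAR and the top-10 hit rate can be computed from per-event ranks as in the following minimal sketch (our own implementation of the standard definitions):

```python
import numpy as np

def rank_of(scores, true_idx):
    """1-based rank of the ground-truth partner among all candidate nodes,
    when candidates are sorted by decreasing score (a sketch)."""
    order = np.argsort(-np.asarray(scores))
    return int(np.where(order == true_idx)[0][0]) + 1

# Per test event we rank the true partner node; MAR averages the ranks and
# HITS@10 counts how often the true partner lands in the top 10.
events = [([0.9, 0.1, 0.5], 0),   # true partner scored highest -> rank 1
          ([0.2, 0.8, 0.1], 2)]   # true partner scored lowest  -> rank 3
ranks = [rank_of(s, t) for s, t in events]
mar = float(np.mean(ranks))                          # -> 2.0
hits_at_10 = float(np.mean([r <= 10 for r in ranks]))  # -> 1.0
```

Lower MAR and higher HITS@10 are better, matching the arrows in the results table.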
\vspace{-7pt} \subsection{Implementation Details} \vspace{-7pt} We train models with the Adam optimizer~\cite{kingma2014adam}, with a learning rate of $2 \times 10^{-4}$, minibatch size $\mathcal{P} = 200$ events, and $d=32$ hidden units per layer, including those in the encoder~\eqref{eq:mp_enc1}-\eqref{eq:mp_enc4}. We consider two priors, $p_\theta(S)$, to train the encoder: uniform and sparse, with $r=2$ edge types in each case. For the sparse case, we generate $r + 1$ edge types, but do not use the non-edge type corresponding to the high probability, leaving only $r$ sparse edge types. We run each experiment 10 times and report the average and standard deviation of MAR and HITS@10 in Table~\ref{table:results}. We train for 5 epochs with early stopping. To run experiments with random graphs, we generate $S$ once at the beginning and keep it fixed during training. For the models with learned temporal attention, we use random attention values for initialization, which are then updated during training (Fig.~\ref{fig:edges}). 
\begin{figure}[tpbh] \vskip 0.1in \begin{center} \begin{tiny} \setlength{\tabcolsep}{2pt} \begin{tabular}{c|ccccc|c} \multicolumn{6}{c|}{\small \textsc{Timestamped associative connections available in the dataset}} & \small \textsc{Our LDG} \\ \multicolumn{6}{c|}{} \\ & \textsc{BlogLivejournalTwitter} & \textsc{CloseFriend} & \textsc{FacebookAllTaggedPhotos} & \textsc{PoliticalDiscussant} & \textsc{SocializeTwicePerWeek} & \textsc{Random} \\ \rotatebox[origin=c]{90}{\parbox{2.2cm}{\centering \textsc{September 2008}}} & {\includegraphics[width=0.23\textwidth, trim={2cm 0.5cm 3cm 0cm}, clip, align=c]{BlogLivejournalTwitter_2008_9_9}} & {\includegraphics[width=0.23\textwidth, trim={2cm 0.5cm 3cm 0cm}, clip, align=c]{CloseFriend_2008_9_9}} & {\includegraphics[width=0.23\textwidth, trim={2cm 0.5cm 3cm 0cm}, clip, align=c]{FacebookAllTaggedPhotos_2008_9_9}} & {\includegraphics[width=0.23\textwidth, trim={2cm 0.5cm 3cm 0cm}, clip, align=c]{PoliticalDiscussant_2008_9_9}} & {\includegraphics[width=0.23\textwidth, trim={2cm 0.5cm 3cm 0cm}, clip, align=c]{SocializeTwicePerWeek_2008_9_9}} & {\includegraphics[width=0.23\textwidth, trim={2cm 0.5cm 3cm 0cm}, clip, align=c]{Random_sparse_0}}\\ \hline & \multicolumn{5}{c|}{} \\ & \multicolumn{5}{c|}{} & \textsc{Learned} \\ \rotatebox[origin=c]{90}{\parbox{1.5cm}{\centering \textsc{April 2009}}} & {\includegraphics[width=0.23\textwidth, trim={2cm 0.5cm 3cm 0cm}, clip, align=c]{BlogLivejournalTwitter_2009_4_17}} & {\includegraphics[width=0.23\textwidth, trim={2cm 0.5cm 3cm 0cm}, clip, align=c]{CloseFriend_2009_4_17}} & {\includegraphics[width=0.23\textwidth, trim={2cm 0.5cm 3cm 0cm}, clip, align=c]{FacebookAllTaggedPhotos_2009_4_17}} & {\includegraphics[width=0.23\textwidth, trim={2cm 0.5cm 3cm 0cm}, clip, align=c]{PoliticalDiscussant_2009_4_17}} & {\includegraphics[width=0.23\textwidth, trim={2cm 0.5cm 3cm 0cm}, clip, align=c]{SocializeTwicePerWeek_2009_4_17}} & 
{\includegraphics[width=0.23\textwidth, trim={2cm 0.5cm 3cm 0cm}, clip, align=c]{Learned_sparse_0}} \\ \end{tabular} \end{tiny} \end{center} \caption{An adjacency matrix of the latent dynamic graph (LDG), $S^t$, for one of the sparse edge types generated at the end of training, compared to randomly initialized $S^t$ and associative connections available in the Social Evolution dataset at the beginning (top row) and end of training (bottom row). A quantitative comparison is presented in Table~\ref{table:results}.} \label{fig:edges} \end{figure} \subsection{Quantitative Evaluation} \vspace{-7pt} On Social Evolution, we report results of the baseline DyRep with different human-specified associations and compare them to the models with learned temporal attention (LDG) (Table~\ref{table:results}, Fig.~\ref{fig:stats_baselines}). Models with learned attention perform better than most of the human-specified associations, further confirming the finding from~\cite{NRI} that the underlying graph can be suboptimal. However, these models are still slightly worse than the model using \textsc{CloseFriend} associations, which means that this graph is a strong prior. We also show that bilinear models consistently improve results and exhibit better training behaviour, including when compared to a larger linear model with an equivalent number of parameters (Fig.~\ref{fig:train_curves}). GitHub is a more challenging dataset with more nodes and complex interactions, such that the baseline DyRep model has a very large MAR of 100 (random predictions yield a MAR of around 140). On this dataset, bilinear models provide a much larger gain, while models with learned attention further improve results. The baseline model using \textsc{Follow} associations performs poorly, even compared to our model with randomly initialized graphs. These results imply that we should either carefully design graphs or learn them jointly with the rest of the model, as we do in this work. 
\begin{table*}[tpbh] \caption{Results on the Social Evolution and GitHub datasets in terms of MAR and HITS@10 metrics. We proposed models with bilinear interactions and learned temporal attention. Bolded results denote best performance for each dataset.} \label{table:results} \vskip 0.15in \begin{center} \begin{scriptsize} \begin{sc} \begin{tabular}{p{0.5cm}lccccc} \toprule & Model & \multirow{2}{*}{\parbox{1.2cm}{Learned \newline attention}} & \multicolumn{2}{c}{MAR $\downarrow$} & \multicolumn{2}{c}{HITS@10 $\uparrow$} \\ & & & Concat & Bilinear & Concat & Bilinear \\ \midrule \multirow{7}{*}{\rotatebox[]{90}{{Social Evolution}}} & & & & & & \\ & DyRep (\textsc{CloseFriend}) & \xmark & 16.0 $\pm$ 3.0 & \textbf{11.0 $\pm$ 1.2} & 0.47 $\pm$ 0.05 & \textbf{0.59 $\pm$ 0.06} \\ & DyRep (\textsc{Fb}) & \xmark & 20.7 $\pm$ 5.8 & 15.0 $\pm$ 2.0 & 0.29 $\pm$ 0.21 & 0.38 $\pm$ 0.14 \\ \cline{2-7} \\ & LDG (random, uniform) & \xmark & 19.5 $\pm$ 4.9 & 16.0 $\pm$ 3.3 & 0.28 $\pm$ 0.19 & 0.35 $\pm$ 0.17 \\ & LDG (random, sparse) & \xmark & 21.2 $\pm$ 4.4 & 17.1 $\pm$ 2.6 & 0.26 $\pm$ 0.10 & 0.37 $\pm$ 0.08 \\ & LDG (learned, uniform) & \checkmark & 22.6 $\pm$ 6.1 & 16.9 $\pm$ 1.1 & 0.18 $\pm$ 0.09 & 0.37 $\pm$ 0.06 \\ & LDG (learned, sparse) & \checkmark & 17.0 $\pm$ 5.8 & 12.7 $\pm$ 0.9 & 0.37 $\pm$ 0.14 & 0.50 $\pm$ 0.06 \\ \midrule \midrule \multirow{7}{*}{\rotatebox[]{90}{{GitHub}}} & & & & & & \\ & DyRep (\textsc{Follow}) & \xmark & 100.3 $\pm$ 10 & 73.8 $\pm$ 5 & 0.187 $\pm$ 0.011 & 0.221$\pm$ 0.02 \\ \cline{2-7} \\ & LDG (random, uniform) & \xmark & 90.3 $\pm$ 17.1 & 68.7 $\pm$ 3.3 & 0.21 $\pm$ 0.02 & 0.24 $\pm$ 0.02 \\ & LDG (random, sparse) & \xmark & 95.4 $\pm$ 14.9 & 71.6 $\pm$ 3.9 & 0.20 $\pm$ 0.01 & 0.23 $\pm$ 0.01 \\ & LDG (learned, uniform) & \checkmark & 92.1 $\pm$ 15.1 & 66.6 $\pm$ 3.6 & 0.20 $\pm$ 0.02 & 0.27 $\pm$ 0.03 \\ & LDG (learned, sparse) & \checkmark & 90.9 $\pm$ 16.8 & 67.3 $\pm$ 3.5 & 0.21 $\pm$ 0.03 & 0.28 $\pm$ 0.03 \\ & LDG (learned, sparse) + 
Freq & \checkmark & 47.8 $\pm$ 5.7 & \textbf{43.0 $\pm$ 2.8} & 0.48 $\pm$ 0.03 & \textbf{0.50 $\pm$ 0.02 }\\ \bottomrule \end{tabular} \end{sc} \end{scriptsize} \end{center} \vskip -0.05in \end{table*} \vspace{-5pt} \subsection{Interpretability of Learned Attention} \vspace{-5pt} While the models with a uniform prior have better test performance than those with a sparse prior in some cases, sparse attention is typically more interpretable. This is because the model is forced to infer only a few edges, which must be strong since that subset defines how node features propagate (Table~\ref{table:edges}). In addition, relationships between people in the dataset tend to be sparse. To estimate agreement of our learned temporal attention matrix with the underlying association connections, we take the matrix $S^t$ generated after the last event in the training set and compute the area under the ROC curve (AUC) between $S^t$ and each of the associative connections present in the dataset. \definecolor{highlight}{rgb}{0.8,0.9,0.85} \begin{wraptable}{R}{8cm} \vspace{-15pt} \caption{Edge analysis for the LDG model with a learned graph using the area under the ROC curve (AUC, \%); random chance AUC=50\%. 
\textsc{CloseFriend} is highlighted as the relationship closest to our learned graph.} \vspace{-15pt} \label{table:edges} \begin{center} \begin{scriptsize} \begin{sc} \setlength{\tabcolsep}{3pt} \begin{tabular}{lcc|cc} \toprule Associative Connection & \multicolumn{2}{c|}{Initial} & \multicolumn{2}{c}{Final} \\ & \tiny Uniform & \tiny Sparse & \tiny Uniform & \tiny Sparse \\ \midrule & \multicolumn{2}{c|}{} & & \\ BlogLivejournalTwitter & 53 & 57 & 57 & 69 \\ \cellcolor{highlight}CloseFriend & \cellcolor{highlight}65 & \cellcolor{highlight}69 & \cellcolor{highlight}76 & \cellcolor{highlight}84 \\ FacebookAllTaggedPhotos & 55 & 57 & 58 & 62 \\ PoliticalDiscussant & 59 & 63 & 61 & 68 \\ SocializeTwicePerWeek & 60 & 66 & 63 & 70 \\ \midrule \midrule Github Follow & 79 & 80 & 85 & 86 \\ \bottomrule \end{tabular} \end{sc} \end{scriptsize} \end{center} \vspace{-15pt} \end{wraptable} These associations evolve over time, so we consider initial and final associations corresponding to the first and last training events. AUC is used as opposed to other metrics, such as accuracy, to take into account the sparsity of true positive edges, as accuracy would give overoptimistic estimates. We observe that LDG learns a graph most similar to \textsc{CloseFriend}. This is an interesting phenomenon, given that the model only observes communication events, many of which occur between non-friends. Thus, the learned temporal attention matrix captures information related to the associative connections. We also visualize temporal attention sliced at different time steps and observe that the model generally learns structure similar to human-specified associations (Fig.~\ref{fig:graphs_github}). However, attention can evolve much faster than associations, which makes it hard to analyze in detail. Node embeddings of bilinear models tend to form more distinct clusters, with frequently communicating nodes generally residing closer to each other after training (Fig.~\ref{fig:embeddings}).
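The edge-agreement measure above can be sketched in a few lines of pure Python; the matrices below are toy values, not the learned $S^t$ from our experiments. The AUC is computed as the probability that a true association edge receives a higher attention score than a non-edge, with ties counting one half:

```python
def roc_auc(scores, labels):
    # AUC = probability that a random positive (true edge) is scored
    # above a random negative (non-edge); ties count 1/2.
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy 3x3 attention matrix and binary association matrix, both
# flattened row by row (illustrative values only).
S = [0.9, 0.1, 0.2, 0.1, 0.8, 0.3, 0.2, 0.4, 0.7]
A = [1, 0, 0, 0, 1, 0, 0, 0, 1]
```

Here every true edge outscores every non-edge, so the AUC is 1.0; random attention would give roughly 0.5.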
Using the random edges approach also clusters nodes well and embeds frequently communicating nodes close together, because the embeddings are mainly driven by the dynamics of communication events. \begin{figure}[t] \centering \setlength{\tabcolsep}{0pt} \begin{small} \begin{tabular}{ccc} {\includegraphics[width=0.2\textwidth, trim={2cm 1.5cm 2cm 1.5cm}, clip]{social_friends_last}} & {\includegraphics[width=0.2\textwidth, trim={2cm 1.5cm 2cm 1.5cm}, clip]{social_learned_attn_it1_rel0}} & {\includegraphics[width=0.2\textwidth, trim={2cm 1.5cm 2cm 1.5cm}, clip]{social_learned_attn_it2_rel0}} \\ \\ {\includegraphics[width=0.2\textwidth, trim={2cm 1.5cm 2cm 1.5cm}, clip]{github_follow_last_freq}} & {\includegraphics[width=0.2\textwidth, trim={2cm 1.5cm 2cm 1.5cm}, clip]{github_learned_attn_it1}} & {\includegraphics[width=0.2\textwidth, trim={2cm 1.5cm 2cm 1.5cm}, clip]{github_learned_attn_it2}} \\ (a) Final associations & (b) Learned $S^{t_1}$ & (c) Learned $S^{t_2}$ \\ \end{tabular} \end{small} \vspace{-5pt} \caption{Final associations of the subgraphs for the Social Evolution (top row) and GitHub (bottom row) datasets compared to attention values at randomly selected time steps.} \label{fig:graphs_github} \vspace{-10pt} \end{figure} \begin{figure}[tbph] \centering \begin{tabular}{cccc} \includegraphics[height=3cm, trim={0cm 0cm 0cm 0cm}, clip]{stats_baselines_MAR_social} & \includegraphics[height=3cm, trim={0cm 0cm 0cm 0cm}, clip]{stats_baselines_HITS_social} & \includegraphics[height=3cm, trim={0cm 0cm 0cm 0cm}, clip]{stats_baselines_MAR_github} & \includegraphics[height=3cm, trim={0cm 0cm 0cm 0cm}, clip]{stats_baselines_HITS_github} \\ \multicolumn{2}{c}{Social Evolution} & \multicolumn{2}{c}{GitHub} \\ \end{tabular} \vspace{-5pt} \caption{Predicting links by leveraging training data statistics without any learning (``\texttt{no learn}'') turned out to be a strong baseline.
We compare it to learned models with different human-specified graphs used for associations. Here, for the Social Evolution dataset the abbreviations are as follows: \textsc{Blog}: BlogLivejournalTwitter, \textsc{Cf}: CloseFriend, \textsc{Fb}: FacebookAllTaggedPhotos, \textsc{Pol}: PoliticalDiscussant, \textsc{Soc}: SocializeTwicePerWeek, \textsc{Random}: Random association graph.} \label{fig:stats_baselines} \vskip -0.1in \end{figure} \subsection{Leveraging the Dataset Bias} \label{sec:freq} While there can be complex interactions between nodes, some nodes tend to communicate only with certain nodes, which can create a strong bias. To understand this, we report results on the test set that were obtained simply by computing statistics from the training set (Fig.~\ref{fig:stats_baselines}). \begin{wrapfigure}{R}{8cm} \begin{center} \tiny \vspace{-20pt} \setlength{\tabcolsep}{0pt} \begin{tabular}{ccc} & \textsc{Linear} & \textsc{Bilinear} \\ \rotatebox[origin=c]{90}{\textsc{DyRep (CloseFriend)}} & \includegraphics[width=0.23\textwidth, align=c]{baseline} & \includegraphics[width=0.23\textwidth, align=c]{bilinear} \\ \rotatebox[origin=c]{90}{\textsc{LDG (random, sparse)}} & \includegraphics[width=0.23\textwidth, align=c]{rand_sp} & \includegraphics[width=0.23\textwidth, align=c]{rand_sp_bilinear} \\ \rotatebox[origin=c]{90}{\textsc{LDG (learned, sparse)}} & \includegraphics[width=0.23\textwidth, align=c]{mlp_sp} & \includegraphics[width=0.23\textwidth, align=c]{mlp_sp_bilinear} \\ & \multicolumn{2}{c}{{\includegraphics[width=0.3\textwidth,trim={2cm 8.7cm 0cm 0cm}, clip]{legend}}} \\ \end{tabular} \vspace{-10pt} \end{center} \caption{tSNE node embeddings after training (coordinates are scaled for visualization) on the Social Evolution dataset. Lines denote associative or sampled edges. Darker points denote overlapping nodes.
Red, green, and cyan nodes correspond to the three most frequently communicating pairs of nodes, respectively.\looseness-1} \label{fig:embeddings} \vspace{-10pt} \end{wrapfigure} For example, to predict a link for node $u$ at time $\tau$, we randomly sample node $v$ from those associated with $u$. In the \textsc{Random} case, we sample node $v$ from all nodes, except for $u$. This way we achieve high performance in some cases, e.g., MAR=11 by exploiting \textsc{SocializeTwicePerWeek}, which matches the learned DyRep model. This result aligns with our intuition that people who socialize with each other tend to communicate more, whereas associations such as friendship tend to be stronger, longer-term relationships that do not necessarily involve frequent communication. Another bias present in the datasets is the frequency bias, which exists in many domains, e.g., visual relationship detection~\cite{zellers2018neural}. In this case, to predict a link for node $u$ at time $\tau$, we can predict the node with which it had the most communications in the training set. On GitHub this creates a very strong bias with a performance of MAR=58. We combine this bias with our LDG model by averaging model predictions with the frequency distribution and achieve our best performance of MAR=45. Note that this bias should be used carefully, as it may not generalize to another test set. A good example is the Social Evolution dataset, where communications between just 3--4 nodes correspond to more than 90\% of all events, so that a rank of 1--2 can easily be achieved. However, such a model would ignore potentially changing dynamics of events at test time. Therefore, in this case the frequency bias exploits a limitation of the dataset and the result would be overoptimistic. In contrast, the DyRep model and our LDG model allow node embeddings to evolve over time based on the current dynamics of events, so that they can adapt to such changes.
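The frequency-bias baseline described above can be sketched in a few lines; the helper name and toy event list are ours, for illustration only. For each node it remembers the most frequent communication partner seen during training and always predicts that partner at test time:

```python
from collections import Counter

def frequency_baseline(train_events):
    # For each source node u, count its partners v over the training
    # events (a list of (u, v) pairs) and keep the most frequent one.
    counts = {}
    for u, v in train_events:
        counts.setdefault(u, Counter())[v] += 1
    return {u: c.most_common(1)[0][0] for u, c in counts.items()}

# Toy training events: "a" talks to "b" twice and "c" once.
train = [("a", "b"), ("a", "b"), ("a", "c"), ("d", "e")]
most_frequent = frequency_baseline(train)
```

Averaging such a frequency distribution with the model's predictions is how we combine the bias with LDG.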
\section{Conclusion} \label{sec:conclusion} \vspace{-5pt} We introduced a novel model extending DyRep and NRI for dynamic link prediction. We found that, in many cases, attention learned using our model can be advantageous compared to attention computed from a human-specified graph, which can be suboptimal and expensive to produce. Furthermore, we showed that bilinear layers can capture essential pairwise interactions, outperforming the more common concatenation-based layer in all evaluated cases. \subsubsection*{Acknowledgements} \vspace{-5pt} This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA). The views, opinions and/or findings expressed are those of the author and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government. The authors also acknowledge support from the Canadian Institute for Advanced Research and the Canada Foundation for Innovation. The authors thank Elahe Ghalebi and Brittany Reiche for their helpful comments.
\subsection{Workflow of \bp} Figure~\ref{fig:workflow} illustrates the workflow of \bp. \bp ingests a sequence of web requests from web applications as its input, \textit{e}.\textit{g}.\@\xspace, of length $n-1$. It then extracts the contents of the web requests and converts them into a sequence of events $\{e_1, e_2, \ldots, e_{n-1}\}$. After event extraction, \bp performs context-based modeling, where it first encodes the events into event embeddings and sequence embeddings. The event embedding represents the content of each event; the sequence embedding represents the position of each event in the sequence. The embedding output is then used to build a neural network (as described in Section~\ref{subsec:contex-based-model}) that learns long-term dependencies between the events. The trained context-based model calculates the probability distribution of possible events to appear next given $\{e_1, e_2, \ldots, e_{n-1}\}$ and forecasts the upcoming event $e_n$. In addition, \bp uses the predicted probability to compute the anomaly score of $e_n$. We describe the three main components in the following sections. \subsection{Event Extraction} \label{sec:data_pre} The purpose of event extraction is to extract semantic events from web requests. We use sequences of events to characterize web requests. For well-formatted web requests (\textit{e}.\textit{g}.\@\xspace, following REST API design), one can easily map these requests to events by extracting their HTTP methods and well-defined endpoints. However, most web requests may not share a well-organized representation or follow a consistent format. For example, URI paths may not be named around resources. Web developers may use various URI paths for the same resource on the web server. For instance, WordPress provides five different URIs and a custom one for users to access a post\footnote{\url{https://codex.wordpress.org/Using_Permalinks}}.
URI paths can be generated by randomized algorithms or encoding algorithms. Diverse web requests impede the effort to extract semantic events from URIs. We propose a three-step event extraction method consisting of content extraction, path uniformization, and ``rare'' event identification. \begin{itemize}[leftmargin=*] \item \textbf{Extract Content}. We extract three components from web requests: HTTP methods (\textit{i}.\textit{e}.\@\xspace, GET, POST, UPDATE, etc.), URI paths, and the number of URI parameters. Our experimental results show that using merely these three components is effective in representing user behavior. \item \textbf{Uniform Path}. We apply a two-character Markov Chain model to detect the ``random'' elements in URI paths. We first segment URI paths into ``elements'' separated by special characters such as ``/'' and ``-''. We then investigate every character in the element from left to right. If the likelihood of the upcoming character based on the preceding two characters is lower than a certain threshold, then we consider the element ``random.''~\footnote{We detect randomness in URIs based on a gibberish detection tool (\url{https://github.com/rrenaud/Gibberish-Detector}).} \item \textbf{Identify ``RARE'' Events}. We consider events occurring fewer than $T$ times in the training data as ``RARE'' events. In this way, we learn the information of ``RARE'' events during training, which helps us understand whether such rare events are anomalous.
\end{itemize} The entire pre-processing procedure proceeds as follows: \begin{enumerate}[leftmargin=*] \item Extract HTTP methods, URI paths, and URI queries from web requests; \item Segment URI paths into ``elements'' using special characters as delimiters; \item Flag the ``elements'' as ``RANDOM'' if they are randomly generated or encoded; \item Calculate the number of key-value pairs in the URI query; \item Concatenate the HTTP method, the ``derandomized'' URI path, and the number of URI query pairs as an event; \item If an event has never been seen in the training set or has occurred fewer than $T$ times, convert the event into a ``RARE'' event. \end{enumerate} \subsection{Context-Based Modeling} \label{subsec:contex-based-model} We propose context-based web request modeling, which takes a sequence of contextual events as input and outputs a sequence of corresponding events. We can mask the event of interest in the input sequence and train a model to predict it for the given sequence. Recurrent Neural Networks (RNNs), as well as their variants such as Long Short-Term Memory (LSTM)~\cite{lstm}, have been proposed for security analytics in sequential analysis due to their outstanding performance. Recently, self-attention neural networks~\cite{devlin2018bert} have been shown to be much more effective than RNNs at capturing long-term dependencies in a sequence. Specifically, an RNN propagates information sequentially through hidden states, while self-attention neural networks construct direct links between events within the context, which is a great advantage for learning from long-distance context. \textbf{Self-supervised learning.} \label{sec:pretrain} To compensate for the lack of labeled data, we design a self-supervised learning task for event forecasting. Most existing supervised models are limited by the availability of high-quality labeled data. In this paper, we leverage the existing event requests as labels without any manual annotations.
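As a concrete illustration of the pre-processing steps in Section~\ref{sec:data_pre}: the sketch below is ours, with a simple vowel-ratio heuristic standing in for the two-character Markov-chain likelihood test, and all names and thresholds illustrative rather than taken from the actual system.

```python
import re
from urllib.parse import urlsplit, parse_qsl

def looks_random(element):
    # Stand-in for the two-character Markov-chain test: flag long,
    # vowel-poor alphanumeric elements as random.  The real system
    # thresholds a character-bigram likelihood instead.
    letters = [c for c in element.lower() if c.isalpha()]
    if len(element) < 8 or not letters:
        return False
    return sum(c in "aeiou" for c in letters) / len(letters) < 0.2

def to_event(method, uri, counts, T=5):
    # Steps 1-6: extract method/path/query, segment the path, replace
    # random elements, append the number of query pairs, and map
    # infrequent events to "RARE".
    parts = urlsplit(uri)
    elements = [e for e in re.split(r"[/\-]", parts.path) if e]
    norm = "/".join("RANDOM" if looks_random(e) else e for e in elements)
    event = f"{method} /{norm} ?{len(parse_qsl(parts.query))}"
    return event if counts.get(event, 0) >= T else "RARE"
```

For example, a request to a path containing a randomized token is normalized before the frequency check, so unseen randomized URIs collapse into the ``RARE'' bucket instead of exploding the event vocabulary.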
In practice, we randomly mask 25\% of the events in the input sequence and replace these masked events with ``mask'' labels. We train the neural network to predict the masked events. In this way, the neural network learns the relationships among events and their dependencies within sequences. We use this neural network as a pretrained model for further event forecasting and anomaly detection. Our experimental results show that self-supervised pre-training significantly improves the prediction performance. \subsection{Anomaly Detection} \label{sec:detection} We propose a way to calculate an anomaly score that quantitatively measures the likelihood of a new web request being anomalous. Given the current context, we predict a set of web requests that are likely to appear, with associated probabilities. For a received web request, we rank it against the predicted set of web requests based on their associated probabilities calculated by the trained neural network. We calculate the anomaly score for the incoming web request as follows: \begin{equation} \label{eq:anomaly_score} s = 1-\frac{1}{\tau + 1}, \end{equation} where $\tau$ denotes the rank of the newly received request based on its likelihood to appear (\textit{i}.\textit{e}.\@\xspace, the probability calculated by the context-based model), and $s$ is in the range $(0, 1)$. A higher anomaly score indicates higher confidence in classifying the event as an anomaly. The anomaly score indicates the degree of deviation of the incoming web request from the expected normal requests. In Section~\ref{sec:anomaly_eval}, we show that the proposed anomaly score is able to differentiate anomalous web requests (\textit{e}.\textit{g}.\@\xspace, those produced by various real-world web-based attacks) from normal requests.
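The ranking and scoring in Equation~\ref{eq:anomaly_score} can be sketched as follows; the candidate events and probabilities are hypothetical, standing in for the model's predicted distribution:

```python
def anomaly_score(probs, observed):
    # Rank the observed event among all candidates by predicted
    # probability (rank 1 = most likely), then s = 1 - 1/(tau + 1).
    ranked = sorted(probs, key=probs.get, reverse=True)
    tau = ranked.index(observed) + 1
    return 1.0 - 1.0 / (tau + 1)

# Hypothetical next-event distribution from the context-based model.
probs = {"GET /login ?0": 0.7, "GET /home ?0": 0.2, "POST /admin ?3": 0.1}
```

The most likely event receives the minimum score of 0.5, and the score approaches 1 as the observed request falls further down the ranking.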
\subsection{Self-Attention Based Modeling} \begin{figure}[!tb] \centering \includegraphics[width=0.6\linewidth]{figures/single_bp.png} \caption{Self-Attention Based Modeling.} \label{fig:single_model} \vspace{-20pt} \end{figure} The design of self-attention based modeling is shown in Figure~\ref{fig:single_model}. We first embed input events using the embedding layer, then use a self-attention neural network to encode the sequence and learn the dependencies between events. In the output layer, we apply a Softmax function to squash the network outputs into a probability distribution and predict future events with associated probabilities. We introduce two critical components of our adaptation: 1) event embedding and sequence embedding; and 2) a self-attention neural network. \textbf{Event embedding and sequence embedding.} Word embedding is widely used to represent the semantic meaning of tokens in NLP tasks. We adopt the basic idea of word embedding to extract semantic representations of web events and generate event embeddings. Sequential information represents the relative positions of events in the sequence. However, self-attention neural networks do not inherently capture the sequential order of events, because all positions are treated as equidistant. Therefore, we add sequential information into the neural network using a sequence embedding layer. Specifically, our embedding layer maps each web event and its position in the sequence into two 128-dimensional vectors: $\{EE_1, EE_2, \ldots, EE_n\}$ and $\{SE_1, SE_2,$ $\ldots, SE_n\}$. After generating the embeddings of events and their positions, we sum these two sequences into an embedded sequence $\{em_1, em_2, \ldots, em_n\}$ and feed it to the encoding layers of the self-attention network. \textbf{Self-attention neural network.} We use a self-attention neural network~\cite{vaswani2017attention} to solve the sequential prediction problem. Specifically, we adapt BERT~\cite{devlin2018bert}, a self-attention based neural network.
BERT has been shown to outperform RNNs in almost all NLP tasks and achieve state-of-the-art performance~\cite{devlin2018bert}. The success of BERT mainly comes from a scaled dot-product attention neural network: \begin{equation} \mathrm{Attention}(Q, K, V) = \mathrm{Softmax}(\frac{QK^T}{\sqrt{d_k}})V, \end{equation} where $Q$, $K$, $V$ are the query, key, and value of dimension $d_k$. In the self-attention neural network of \bp, $Q$, $K$, $V$ come from the same sequence of embedded events. The network learns to pay attention to specific events in the sequence and captures the dependencies between events. We use the Transformer model, which includes a multi-head attention neural network (stacking several self-attention neural networks), a normalization layer, and a Softmax function. For the details of the self-attention mechanism and the Transformer, we refer the reader to~\cite{vaswani2017attention}. \vspace{-0.3cm} \subsection{Bi-LSTM Based Modeling} \vspace{-0.3cm} In addition to the self-attention neural network, we implement a bidirectional LSTM neural network, called Bi-LSTM, which is widely used in sequential analysis. Similarly to the self-attention neural network, we apply event embedding here to encode events and use LSTM to represent their sequential relationship. Figure~\ref{fig:lstm_model} illustrates the architecture of Bi-LSTM based modeling. Bi-LSTM uses an embedding layer to convert an input event into a 128-dimensional vector, then deploys multiple bidirectional LSTM layers to extract semantic information from the events. We test 1, 2, and 3 LSTM layers per direction in the experiments. A fully connected layer with a Softmax function is stacked on top of the LSTM layers to output the final prediction for the event of interest.
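The scaled dot-product attention above can be sketched in pure Python for a single head with no learned projections; this is a didactic sketch, not the BERT implementation:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    # softmax(Q K^T / sqrt(d_k)) V for lists of row vectors: each query
    # produces a convex combination of the value rows.
    d_k = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, V))
                    for j in range(len(V[0]))])
    return out
```

When a query aligns strongly with one key, nearly all attention weight concentrates on the corresponding value row, which is how the network forms direct links to specific events in the sequence.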
\begin{figure}[!tb] \centering \begin{subfigure}{0.47\linewidth} \centering \includegraphics[width=\linewidth]{figures/lstm.png} \caption{Bi-LSTM Based Modeling.} \label{fig:lstm_model} \end{subfigure} \begin{subfigure}{0.47\linewidth} \centering \includegraphics[width=\linewidth]{figures/attention.png} \caption{LSTM-Attention Based Modeling.} \label{fig:attention_model} \end{subfigure} \caption{Bi-LSTM and LSTM-Attention Based Modeling.} \vspace{-20pt} \end{figure} \subsection{LSTM-Attention Based Modeling} The attention mechanism was recently proposed to surpass recurrent neural networks in remembering longer sequences. It aims to pinpoint key events in a long sequence. We adapt an LSTM-Attention neural network using an additive attention mechanism. Figure~\ref{fig:attention_model} illustrates the architecture of LSTM-Attention based modeling. The additive attention mechanism sums up the outputs of the LSTM with their weights and outputs the weighted sum as the prediction. The LSTM-Attention neural network consists of an embedding layer, bidirectional LSTM layers, an additive attention layer, and a fully connected layer with Softmax. The embedding layer and LSTM layers follow the setting of the Bi-LSTM neural network. The attention layer learns the weight of each event and applies the weights to the final output. Note that the neural network in~\cite{yu2018deephttp} detects anomalies based on the content of a single request, whereas we use LSTM-Attention to predict an event in a sequence. We set the hidden dimension of the embedding layer in Bi-LSTM and LSTM-Attention to 128 and apply 20\% dropout in the LSTM layers to avoid overfitting.
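A minimal sketch of the additive attention layer; for brevity we fold the learned projection and bias into a single vector $w$, so this illustrates the mechanism rather than the exact layer:

```python
import math

def additive_attention(H, w):
    # Score each Bi-LSTM output h_t with w . tanh(h_t), softmax the
    # scores into weights alpha_t, and return the weighted sum of the
    # h_t (the context vector passed to the output layer).
    scores = [sum(wi * math.tanh(hi) for wi, hi in zip(w, h)) for h in H]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    alphas = [e / total for e in exps]
    return [sum(a * h[j] for a, h in zip(alphas, H))
            for j in range(len(H[0]))]
```

Because the weights are a softmax, the output is always a convex combination of the LSTM outputs, with higher-scoring events contributing more.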
\vspace{-0.3cm} \subsection{Analysis on Misclassified Events} \subsubsection{Context Modeling for Web requests of Correlated Web Applications} \subsection{Experimental Setup} \label{sec:setting} \input{4_setup.tex} \vspace{-0.5cm} \subsection{Evaluation of \bp on Web Event Forecast} \label{sec:model_compare} \input{4_prediction.tex} \vspace{-0.5cm} \subsection{Evaluation of \bp on Anomaly Detection} \label{sec:anomaly_eval} \input{4_anomaly.tex} \vspace{-0.5cm} \subsection{Neural Network Comparison} \label{sec:network_compare} \input{4_compare.tex} \subsection{Evaluation of Different Model Settings} \label{sec:setting_compare} \input{4_settings.tex} \subsection{Instant Predicting vs. Predicting with More Events} \label{sec:center} \begin{table}[!tb] \centering \caption{Performance of Predicting Centered Events.} \label{tab:centered} \begin{tabular}{llrr} \toprule \textbf{Application} & \textbf{Model} & \textbf{Top-1 Accuracy (\%)} & \textbf{Top-10 Accuracy (\%)} \\ \hline \multirow{3}{*}{Workqueue} & Bi-LSTM & 97.07 & 99.79 \\ & LSTM-Attention & 97.27 & 99.81 \\ & Self-Attention & \textbf{98.33} & \textbf{99.91}\\ \hline \multirow{3}{*}{DataRepo1} & Bi-LSTM & 89.35 & 98.44\\ & LSTM-Attention & 89.78 & 98.06\\ & Self-Attention & \textbf{90.33} & \textbf{99.16}\\ \hline \multirow{3}{*}{DevOpsApp} & Bi-LSTM & 74.00 & 91.72 \\ & LSTM-Attention & 71.85 & 89.41 \\ & Self-Attention & \textbf{81.25} & \textbf{94.67}\\ \hline \multirow{3}{*}{DataAnalyzer1} & Bi-LSTM & 84.31 & 96.00 \\ & LSTM-Attention & 81.70 & \textbf{96.78} \\ & Self-Attention & \textbf{85.01} & 96.61\\ \hline \multirow{3}{*}{DataAnalyzer2} & Bi-LSTM & 86.87 & 97.93\\ & LSTM-Attention & 86.83 & 97.71\\ & Self-Attention & \textbf{87.79} & \textbf{98.57}\\ \hline \multirow{3}{*}{DataRepo2} & Bi-LSTM & 96.87 & 99.87\\ & LSTM-Attention & 97.23 & 99.90\\ & Self-Attention & \textbf{97.96} & \textbf{99.96}\\ \bottomrule \end{tabular} \vspace{-20pt} \end{table} \textbf{Evaluation of predicting centered events.} 
In the previous experiments, we predicted the last event in a sequence. For many web applications, requests are generated concurrently by a single action. The concurrent requests make it possible to leverage contextual events following the event of interest. We study the case where the event of interest to be predicted is centered among contextual events. Table~\ref{tab:centered} shows the performance of \bp in predicting centered events. Comparing Tables~\ref{tab:model_comparison} and~\ref{tab:centered}, we observe that the prediction performance on centered events improves for all three models in general. For example, for Workqueue, the Top-1 accuracy achieved by the self-attention based model increases from 75.21\% to 98.33\%. The self-attention based model achieves a larger improvement than the Bi-LSTM and LSTM-Attention based models when the event of interest is centered among contextual events. The performance improves significantly because, when predicting a centered event, events located after the event of interest provide important information; the model thus incorporates context from both directions (i.e., left and right). On average over the six applications, \bp reduces the Top-1 error rate by 52.56\% and the Top-5 error rate by 57.84\% when predicting centered events. \subsection{Web Event Forecasting} \vspace{-0.2cm} Web event forecasting has been investigated for many years. Su \textit{et} \textit{al}.\@\xspace extracted access paths from server logs and used n-gram models to predict web events for web caching and prefetching~\cite{su2000whatnext}. Awad \textit{et} \textit{al}.\@\xspace analyzed various supervised machine learning approaches for forecasting web events, such as Support Vector Machines, Markov Models and their variant, the All-Kth Markov Model~\cite{awad2008predicting,awad2012prediction}.
Da \textit{et} \textit{al}.\@\xspace summarized several clustering and Markov-based approaches for predicting web page access~\cite{da2018survey}. In this work, we target enterprise web applications and demonstrate superior performance in forecasting web events compared to the existing approaches. \vspace{-0.3cm} \subsection{Web Anomaly Detection} \vspace{-0.2cm} Many statistical models have been used to detect anomalies in web applications~\cite{kruegel2003anomaly,et2004anomaly,robertson2006using}. Kruegel \textit{et} \textit{al}.\@\xspace \cite{kruegel2003anomaly,kruegel2005multi} leveraged statistical models for characterizing HTTP query attributes such as query attribute length and attribute character distribution. Statistical models output probability values of a query and its individual attributes. The probability values reflect the likelihood of the occurrence with respect to an established profile. Juan \textit{et} \textit{al}.\@\xspace conducted Kruskal-Wallis and Kolmogorov-Smirnov tests on payload length and payload histograms and modeled the payload of normal web requests using a Markov Chain~\cite{et2004anomaly}. Sakib and Huang detected HTTP-based Botnet C\&C traffic based on features from web request URLs and DNS responses. Three anomaly detection methods were used in the detection system: Chebyshev's Inequality, One-class SVM, and Nearest Neighbor based Local Outlier Factor. Many supervised machine learning approaches have been used to detect anomalies in web applications by providing a binary prediction of normal or abnormal web requests, learning from historical data. Pham \textit{et} \textit{al}.\@\xspace surveyed different machine learning algorithms such as random forest, logistic regression, decision tree, AdaBoost, and SGD that are used to build Web intrusion detection systems~\cite{pham2016anomaly}. Oprea \textit{et} \textit{al}.\@\xspace detected malware in enterprises based on malicious HTTP traffic~\cite{oprea2018made}.
They leveraged 89 features extracted from enterprise networks and applied several supervised machine learning algorithms (\textit{e}.\textit{g}.\@\xspace, logistic regression, decision trees, random forest, and SVM) to learn from these features. Clustering and dimension-reduction are common techniques used in unsupervised learning based solutions~\cite{sipola2011anomaly,juvonen2012adaptive,Juvonen12015anomaly}. These solutions first extracted features from HTTP GET parameters and URLs, and then used Random Projection (RP), Principal Component Analysis (PCA), and Diffusion Map (DM) to reduce the dimensionality of the data. Clustering algorithms (\textit{e}.\textit{g}.\@\xspace, K-means) have been applied to identify abnormal behavior. Zolotukhin \textit{et} \textit{al}.\@\xspace \cite{zolotukhin2014anomaly} used several unsupervised learning algorithms such as PCA, K-means, Density-Based Spatial Clustering (DBSCAN) to model URL and User-Agent in HTTP headers and detect anomalies in web requests. Recently deep learning approaches, in particular RNNs, have been established as state-of-the-art approaches in anomaly detection tasks. Liang \textit{et} \textit{al}.\@\xspace considered URLs as natural language sequences and applied LSTM and GRU to classify URLs as normal or abnormal requests~\cite{liang2017anomaly}. Yu \textit{et} \textit{al}.\@\xspace proposed a neural network consisting of Bidirectional LSTMs and an attention model to extract critical components from URI path and body~\cite{yu2018deephttp}. Liu \textit{et} \textit{al}.\@\xspace proposed an attention-based deep neural network, which located the malicious regions from HTTP requests and posts, and classified the malicious HTTP requests~\cite{liu2019locate}. These approaches focus on analyzing the contents in a single web request. We focus on a sequence of web requests, which involves connections among requests and represents users' normal patterns and web application flow characteristics. 
\vspace{-0.3cm} \subsection{Deep Neural Networks for Log Data Analysis} \vspace{-0.2cm} Deep neural networks have been used to analyze log data. Du \textit{et} \textit{al}.\@\xspace proposed to model a sequence of system logs using LSTM and identified abnormal logs deviating from normal execution~\cite{deeplog17}. An event is flagged as abnormal if it is not within the top-K probabilities to appear next. Shen \textit{et} \textit{al}.\@\xspace leveraged RNNs to predict future events based on previous observations using security logs collected from an intrusion prevention system~\cite{shenccs18}. The work focuses on predicting the upcoming security event given a sequence of events. Recently, Recurrent Neural Networks (RNNs) and their variants, Long Short-Term Memory (LSTM) \cite{lstm} and gated recurrent neural networks \cite{gatedrnn}, have been established as compelling techniques in security analytics research. These RNN-based methods analyze the behavior of security event logs or system logs within a session. However, applying these models to web anomaly detection is non-trivial. Logs generated by machines (\textit{e}.\textit{g}.\@\xspace, heartbeats) are much easier to detect and predict than web events generated by humans, whose behavior is far less predictable. To analyze web events, we adapt a self-attention mechanism to learn from the contextual events. With the proposed event and sequence embedding techniques, the adapted self-attention mechanism captures long-distance dependencies among events arising from human behavior.
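As a minimal, generic illustration of the mechanism referred to above, scaled dot-product self-attention over a sequence of event embeddings can be sketched as follows. This is a numpy sketch with illustrative dimensions, not our actual model; the projection matrices here are random stand-ins for learned parameters:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of event embeddings.

    X: (seq_len, d_model) event embeddings; Wq/Wk/Wv: (d_model, d_k) projections.
    Returns the attended representations and the attention weight matrix.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])            # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over the sequence
    return weights @ V, weights

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                 # 5 events, 8-dim embeddings (illustrative)
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
out, w = self_attention(X, Wq, Wk, Wv)
```

Each row of the weight matrix sums to one, so every event's representation is a convex combination of all events in the sequence, including distant ones.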
\vspace{-0.4cm} \subsection{Neural Network Comparison - False Positive} \subsection{Evaluation on Predicting Centered Events} \begin{figure*}[!tb] \centering \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=0.85\linewidth]{figures/fpr_workqueue_m.png} \caption{Workqueue} \end{subfigure} \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=0.85\linewidth]{figures/fpr_zaim_m.png} \caption{DataRepo1} \end{subfigure} \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=0.85\linewidth]{figures/fpr_jenkins_m.png} \caption{DevOpsApp} \end{subfigure} \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=0.85\linewidth]{figures/fpr_ig_m.png} \caption{DataAnalyzer1} \end{subfigure} \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=0.85\linewidth]{figures/fpr_ripjar_m.png} \caption{DataAnalyzer2} \end{subfigure} \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=0.85\linewidth]{figures/fpr_ti_m.png} \caption{DataRepo2} \end{subfigure} \caption{False Positive Rate Comparison (Predicting Centered Events).} \label{fig:fpr_offline} \end{figure*} \begin{figure*}[!tb] \centering \begin{subfigure}{0.32\linewidth} \includegraphics[width=0.85\linewidth]{figures/roc_workqueue_m.jpg} \caption{Workqueue} \end{subfigure} \begin{subfigure}{0.32\linewidth} \includegraphics[width=0.85\linewidth]{figures/roc_zaim_m.jpg} \caption{DataRepo1} \end{subfigure} \begin{subfigure}{0.32\linewidth} \includegraphics[width=0.85\linewidth]{figures/roc_jenkins_m.jpg} \caption{DevOpsApp} \end{subfigure} \begin{subfigure}{0.32\linewidth} \includegraphics[width=0.85\linewidth]{figures/roc_ig_m.jpg} \caption{DataAnalyzer1} \end{subfigure} \begin{subfigure}{0.32\linewidth} \includegraphics[width=0.85\linewidth]{figures/roc_ripjar_m.jpg} \caption{DataAnalyzer2} \end{subfigure} \begin{subfigure}{0.32\linewidth} \includegraphics[width=0.85\linewidth]{figures/roc_ti_m.jpg} \caption{DataRepo2} \end{subfigure} 
\caption{ROC Curve Comparison (Predicting Centered Events).} \label{fig:roc_offline} \end{figure*} \begin{figure*}[!tb] \centering \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=0.85\linewidth]{figures/ws_workqueue_0.png} \caption{Workqueue} \end{subfigure} \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=0.85\linewidth]{figures/ws_zaim_0.png} \caption{DataRepo1} \end{subfigure} \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=0.85\linewidth]{figures/ws_jenkins_0.png} \caption{DevOpsApp} \end{subfigure} \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=0.85\linewidth]{figures/ws_ig_0.png} \caption{DataAnalyzer1} \end{subfigure} \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=0.85\linewidth]{figures/ws_ripjar_0.png} \caption{DataAnalyzer2} \end{subfigure} \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=0.85\linewidth]{figures/ws_ti_0.png} \caption{DataRepo2} \end{subfigure} \caption{Model Comparison with Different Settings (Sequence Length and Pre-training) (Predicting Centered Events).} \label{fig:setting_offline} \end{figure*} \section{Introduction} \label{sec:intro} \input{1_intro_new.tex} \section{Workflow of \bp} \label{sec:workflow} \input{2_workflow.tex} \section{Methodology of Context-Based Modeling} \label{sec:methods} \input{3_method.tex} \section{Evaluation} \label{sec:eval} \input{4_exp.tex} \section{Related Work} \input{5_related.tex} \section{Conclusion} \input{6_conclusion.tex} \bibliographystyle{splncs04}
\section{Introduction} \par Social media services such as Twitter, Facebook, and WeChat\footnote{https://www.wechat.com/en/} empower millions of users to consume content from and disseminate information to their social counterparts. For example, WeChat, one of the most popular friend-based social media services in China, generates and circulates over $1.5$ million articles in the form of embedded posts or external links~\cite{knobloch2005impact,li2018weseer}. Given the abundant content on social media, users face the challenge of identifying authentic and high-quality information~\cite{potter2007media}. Taking WeChat as an example, some public accounts on it mainly publish content that grabs the audience's attention and can be easily monetized, but lacks substance~\cite{zeng2017social}. \par One way to safeguard the integrity of information for users is to develop algorithms that remove fake news and promote high-quality content~\cite{eslami2015always,rader2015understanding}. However, these algorithms cannot precisely capture what a specific user is currently interested in. As another means, many social media services offer ``share'' features for users to spread information in their social circles. These users are called ``gatekeepers'': they pass along and comment on already available news items based on their interests \cite{twitter2012, gatekeeper2008}. Previous research on social media has shown how gatekeepers on Twitter affect audiences' information selection during a special event (e.g., the 2009 Israel-Gaza conflict \cite{twitter2012}), and how users play the role of gatekeepers to control information flows on Reddit~\cite{leavitt2017role}. However, unlike Twitter and Reddit, where gatekeepers often have no close relationship with users (e.g., a famous star as the gatekeeper), WeChat builds a friend-based social medium in which a user ideally knows all members in his/her circle. Bakshy et al.
demonstrated that friends can expose individuals to cross-cutting content on Facebook \cite{facebook2015}. Nevertheless, WeChat does not have an algorithmically ranked News Feed as Facebook does, but presents content shared by friends in a chronologically ordered timeline. Moreover, most WeChat users are from China and have a different cultural background than Facebook users, who mainly come from Western countries. There is a lack of understanding of how the friend network, acting as a gatekeeper on WeChat, affects users' content curation behaviors. Such an understanding is important, as it can not only help content creators learn how their work spreads in the friend-based social network but also help such social media platforms manage the information flow. \par In this paper, we use WeChat as a lens to investigate how users leverage their friend networks as latent gatekeepers for content curation on friend-based social media. Specifically, we reveal how WeChat users exploit the composition and tie strength of their friend network to safeguard the relevance, importance, popularity, and/or quality of the information they consume. We further examine how users adapt the gatekeeping mechanism to changes in their friend networks and interests over an extended period of time. We take a mixed-methods approach~\cite{wisdom2013mixed} to study the possible ``friend network as a latent gatekeeper'' phenomenon on WeChat. On the one hand, we quantitatively analyze over seven million WeChat users' reading behaviors and infer how these users accommodate and safeguard their varying information needs through different friend communities and social ties. On the other hand, we conduct an online survey with $216$ participants to qualitatively understand how and why users view the gatekeepers in their networks. In general, we find that users tend to turn to the friend network for information consumption when facing information overload.
They tend to exploit weak-ties to get exposed to new domains and turn to strong-ties when demanding credible and reliable information. Older users with shorter WeChat experience and fewer friends depend primarily on their friend network for information consumption. Users leverage social circles to gatekeep their information interests, and the interests and attention curated by one social circle can shift to another circle over time. The major contributions of this paper are as follows: \begin{itemize} \item We qualitatively and quantitatively study the potential phenomenon of WeChat users leveraging their friend network as a collective and dynamic latent gatekeeper. \item We discuss the insights derived from our approach to inspire the future design of socio-technical systems. \end{itemize} \section{Background and Related Work} \subsection{Gatekeeping in Social Media Era} \par Unlike traditional media, today's social media can be characterized as We Media~\cite{bowman2003we}, i.e., user-operated media, in which there is no clear boundary between information producers, disseminators, and consumers, and the published content is no longer constrained by length, timeliness, or relevance to readers in a geographical and cultural sense~\cite{althaus2000patterns,galtung1965structure,mccombs1977predicting}. Although social media users play an active role in shaping the online information landscape~\cite{Shapiro1999Loneliness}, they might not have the same level of professional qualities as experts in the media industry for ``gatekeeping'', i.e., scrutinizing content and safeguarding its validity, veracity, and integrity before it reaches the public~\cite{lewin1943forces}. Ira Basen~\cite{basen2011news} pointed out that digital media platforms have fewer filters and gates than traditional media, making it challenging for users to determine what is new and what is important.
Keen~\cite{keen2011cult} mentioned that Web 2.0 has a negative impact on gatekeeping because of the reduction in gates and in accountable, professional official gatekeepers. He maintained that ``\textit{gatekeepers are a necessity due to the flood of information coming digitally.}'' Clark~\cite{Clark2015Gatekeeping} interviewed several news professionals about how social media plays a role in their daily professional lives, showing that the downsizing of newsrooms has made an impact on the traditional role of editors as gatekeepers. Besides, unlike in traditional media, where ``gatekept'' content never reaches the public at all, many social media services offer ``share'' features for users to spread information in their social circles. In other words, even if a user decides not to share a piece of content, the content can still reach his/her friends through other friends (if they choose to share it). The above studies mainly focus on the changes in gatekeeping from the traditional media era to today's social media era, in a context where social media consumers face a flood of fake news and information. Leavitt et al.~\cite{leavitt2017role} looked at Reddit to understand how the design of Reddit's platform impacts information visibility in response to ongoing events in the context of controlling information flows (through gatekeeping). Related to the functions of gatekeeping on social media platforms, social media influencers (SMIs) represent a new type of independent third-party endorser who shape audience attitudes through blogs, tweets, and other social media~\cite{abidin2017familygoals,freberg2011social,khamis2017self,matias2017followbias}, and technologies have been developed to identify and track such influencers. However, unlike social influencers, gatekeeping on social platforms plays a latent role in a collective and dynamic manner.
In our work, we focus on Moments, a distinguishing friend-network feature of WeChat, and study how WeChat users utilize their friend networks as latent gatekeepers, collectively and dynamically, to safeguard the information they consume. \subsection{Algorithmic Content Curation on Social Media} \par People increasingly rely on online socio-technical systems that employ algorithmic content curation to organize, select, and present information. Several studies have addressed users' perception of automated curation~\cite{eslami2015always,rader2015understanding}. For example, Rader et al.~\cite{rader2015understanding} investigated user understanding of algorithmic curation in Facebook's News Feed through an online survey. They found that over $60$\% of the respondents indicated that the algorithmic News Feed caused them to miss posts from friends, yet they still trust the algorithm that prioritizes posts for display in the News Feed. Similarly, Eslami et al.~\cite{eslami2015always} conducted a user study with $40$ Facebook users to examine their perceptions of the Facebook News Feed curation algorithm. In contrast with the above work on algorithmic curation, in this paper we study social media users' consumption of content that comes directly from their friend networks in the absence of any automated curation. Unlike algorithmic curation, which arranges and ranks news items based on designated features, consumption of friend-curated content on WeChat offers users complete control over information selection. It is therefore interesting to conduct an in-depth analysis of users' internal ranking scheme for friend-curated content. \subsection{Factors that Affect Users' Content Curation} \par Social media is one essential way for people to curate information, and previous literature has studied several factors that affect users' content curation behaviors.
For example, Leskovec et al.~\cite{leskovec2009meme} studied the spread of news across websites and found that blogs generally lag only a few hours behind mainstream news sites. Agrawal et al.~\cite{agarwal2008identifying} proposed a model to identify influential blog contributors. They found that the number of times a blog post is shared and the number of comments it generates are positively related to the influence of its contributors. Khan~\cite{khan2017social} conducted an online survey covering 1143 registered YouTube users and identified the factors that motivate user participation and consumption on YouTube through regression analysis. User-user relationships of various strengths are also an important factor that influences users' information-seeking experience~\cite{arnaboldi2013egocentric,berkovsky2012personalized,gilbert2012predicting,gilbert2009predicting,kahanda2009using,panovich2012tie,petroczi2007measuring,wu2010detecting,xiang2010modeling,zhuang2011modeling}. Gilbert et al.~\cite{gilbert2009predicting} bridged the gap between social theory and social practice by predicting the strength of interpersonal relationships in social media, conducting user-study-based experiments on over 2000 social media relationships. Wu et al.~\cite{wu2010detecting} identified two different types of intimate relationships among employees in enterprise social networks. Granovetter, M.S.~\cite{granovetter1977strength} proposed the notion of ``weak-ties'', which he believed can break through the social circles formed by strong-ties, enabling us to reach a diverse group of people and information. On the contrary, Krackhardt, D.~\cite{krackhardt2003strength} argued that strong-ties are bonds of trust between people, who are therefore more willing to accept information brought by strong-ties than by weak ones.
In addition to qualitative research, many scholars have leveraged data models to quantify the relationship between social influence and the scale of information propagation~\cite{leskovec2007dynamics,sun2009gesundheit,wei2010diffusion}. However, similar user-user relationship research on WeChat is still relatively scarce. As a complement to the works above, we leverage a mixed-methods approach to empirically explore how WeChat users exploit the composition and tie strength of their friend network to safeguard the relevance, importance, popularity, and/or quality of the information they consume. It is particularly interesting to see whether and how different social tie strengths are reflected in gatekeeping during content curation. \section{Methods} \subsection{Article Reading and Gatekeeping on WeChat} \par As a prevailing social media app, WeChat has its own distinctive features. First, it has the characteristics of traditional media. Users can read articles directly from the homepages of the subscription accounts on the WeChat Official Account Platform (\autoref{fig:one}(1)), which serves as the main source of published articles. Subscription accounts are often used similarly to daily news feeds because they can push one or several new update(s) to their followers every day~\cite{li2018weseer}. An update may contain a single article or multiple articles bundled together. Users may subscribe to as many accounts as they like. All subscription accounts are placed together in a subscription accounts folder on the user's timeline. Similar to bloggers on Twitter, a WeChat subscription account also has its fixed author(s). Second, it has the typical features of social media. The most intuitive one is that users can read articles shared by, and forward them to, their friend network (\autoref{fig:one}(2)). WeChat provides three channels, namely Moments, private chatting, and group chatting, for users to access and read articles curated by their friends.
In this work, we focus on content curation through Moments. Users can share articles through their Moments, an immensely popular feature used to share pictures, short videos, texts, and links with friends. Users can scroll through this stream of content, similar to the Facebook News Feed, except that posts appear in chronological order. Users can also share articles with a specified friend or a group of friends via direct private or group chatting (\autoref{fig:one}(3)). \begin{figure}[h] \includegraphics[width=\linewidth]{figures/wechat.png} \caption{Article reading on WeChat. (1) Subscription accounts publish articles like a news feed. (2) Users can repost articles on their Moments. (3) Users can repost articles via private chatting or group chatting.} \label{fig:one} \end{figure} \par The gatekeeping process on WeChat is depicted in Figure~\ref{fig:workflow}. Given the large amount of information from the Internet, the hosts of subscription accounts act as the first level of gatekeepers. Users who want to curate content can directly read articles from these accounts, or they can read articles shared by their friends in Moments. In the latter case, the user's friend network acts as the second level of gatekeepers. The subscription accounts and the friend network collectively determine which articles on the whole platform get to appear on an individual user's news feed. \begin{figure*}[h] \includegraphics[width=\linewidth]{figures/RQ1-3.png} \caption{Gatekeeping process on WeChat and research questions in our study.} \label{fig:workflow} \end{figure*} \subsection{Research Questions} \par To understand how users leverage their friend networks as latent gatekeepers for content curation, we first need to examine to what extent and in what way the subscription accounts and friend networks act collectively as gatekeepers for users.
Therefore we have our first research question: \begin{wrapfigure}{l}{0.03\textwidth} \vspace{-10pt} \begin{center} \includegraphics[width=0.08\textwidth]{figures/balance.png} \end{center} \vspace{0pt} \vspace{-10pt} \end{wrapfigure} \textbf{RQ1. How do WeChat users utilize friend networks and subscriptions collectively as latent gatekeepers for content consumption?} \par Li et al. revealed that WeChat users are often confronted by abundant friend-curated content from a wide variety of sources~\cite{li2018weseer}. Users may need additional cues to reduce the cognitive burden of deciding what to read. One potentially useful and always available cue is the composition (e.g., classmate, relative) and tie strength (e.g., how close the relationship is) of their friend networks. If a close friend is believed to be highly knowledgeable or trustworthy about public affairs, these positive evaluations may transfer to the information curated by him/her. We thus have our second research question regarding the composition and tie strength of the friend network: \begin{wrapfigure}{l}{0.03\textwidth} \vspace{-10pt} \begin{center} \includegraphics[width=0.08\textwidth]{figures/group_spring.png} \end{center} \vspace{-10pt} \end{wrapfigure} \textbf{RQ2. Are there differences between a) social circles and b) social ties when acting as gatekeepers?} \par Note that social contacts and information interests can change noticeably over time. Understanding these dynamics allows us to leverage those facets to improve relevance and better manage influence and different ``gatekeepers'' in information dissemination~\cite{wang2016measuring}. Therefore, we study a third research question regarding temporal dynamics in the gatekeeping process: \begin{wrapfigure}{l}{0.03\textwidth} \vspace{-10pt} \begin{center} \includegraphics[width=0.08\textwidth]{figures/time.png} \end{center} \vspace{0pt} \vspace{-15pt} \end{wrapfigure} \par \textbf{RQ3.
How do WeChat users adapt gatekeeping for content consumption over time?} \subsection{Quantitative Analysis Method} \par Archival data obtained through collaboration with WeChat reveal that messages coming from all WeChat channels, i.e., subscription accounts, private chatting, group chatting, and the friend network (i.e., Moments), collectively create users' information landscape on the platform, each taking up a different proportion. On average, 57\% of the articles consumed by a WeChat user come directly from subscription accounts. The remaining 43\% are shared by friends through private chatting (11\%), group chatting (18\%), and Moments (71\%) under different social scenes~\cite{zhang2018mobile}. Private chatting ``digests'' the articles exchanged in the communication~\cite{wu2014wechat}. Group chatting is a private conversation among a group of users pre-gathered for certain purposes. Note that members of a WeChat group may not necessarily be friends of one another; it is therefore a social environment with complicated and unpredictable factors~\cite{qiu2016lifecycle}. Comparatively, Moments is like a public bulletin for one's entire friend network, publishing all the content posted by friends in chronological order. Theoretically, people can browse content in their Moments at will, similar to how they can treat posts curated in the subscription accounts folder (if we consider subscription accounts a special type of ``friend''). As the focus of this paper is on how users proactively leverage their friend network as a latent gatekeeper of their information landscape, we only consider voluntary reading behaviors related to the two broadcasting channels, i.e., subscription accounts and Moments. \subsubsection{Data Collection and Description} \par The dataset in this work was collected by our collaborating colleagues from WeChat, Tencent.
Particularly, the dataset used for \textbf{RQ1} and \textbf{RQ2} contains a one-week log of article-reading activities, from March $12^{th}$ to $18^{th}$, 2018, curated via the subscription accounts and friend networks of 7,234,753 users, obtained by stochastic sampling over all users. The dataset is anonymized, with all identifiable information removed. It consists of three parts: \begin{itemize} \item \textbf{A1. User Attributes} include user information within the selected time frame, such as age, registration duration, the number of friends, and the list of official accounts subscribed to. \item \textbf{A2. User Social Relationship} contains the list of friends of each user and their social relationships with the user. In this paper, we describe a social relationship through the following two dimensions: \begin{itemize} \item \textbf{\textit{D1. Social Similarity}} counts the number of common friends between two users, a common practice for indicating to what extent two users are similar in social network analysis~\cite{banks2008social}. We thereby adopt it to measure the social similarity between two users. \item \textbf{\textit{D2. Social Circle}} represents the social community in this work. Our collaborating experts generate four types of labels for communities in one's friend network: colleagues, family, schoolmates, and others (e.g., real estate agency and WeChat business) by adopting the \textit{Fast Unfolding} community detection algorithm~\cite{blondel2008fast}. \end{itemize} \item \textbf{A3. User Article Consumption} includes (1) the list of articles published by all the subscription accounts that a user follows; (2) the list of articles consumed by the user from the subscription accounts; (3) the list of articles curated by the friend network; and (4) the list of articles consumed by the user from the friend network. Note that these data can be filtered based on the attributes D1 and D2.
\end{itemize} \par To answer \textbf{RQ3}, we collected an additional one year of article-reading data (201709 - 201807) from $10,000$ users via stochastic sampling over the WeChat user pool. Compared with the one-week dataset for \textbf{RQ1} and \textbf{RQ2}, this one-year sample only contains \textit{A1. User Attributes} and \textit{D2. Social Circles}. It has an additional attribute that is not included in the one-week dataset: \begin{itemize} \item \textbf{A4. Article Categories} specify the aggregated number of articles each user consumes in each article category. \end{itemize} \subsubsection{Preliminaries and Computational Metrics} \par In this subsection, we provide a brief overview of the metrics and terms used in the quantitative analysis for \textbf{RQ1} and \textbf{RQ2} (note: metrics for \textbf{RQ3} are described in the \textbf{RQ3} subsection of the \textit{Results} section). We define information consumption on WeChat as the behavior of a user clicking on an article curated by friends that shows up in the user's Moments. To the user, these friends are the gatekeepers of his/her Moments, determining what can circulate in it. We define: \begin{itemize} \item \textbf{M1. Click-through Rate (CTR)} is the ratio of the number of consumed articles to the number of exposed articles for a WeChat user. \item \textbf{M2. Influence Ratio (IR)} is the influence ratio of user $i$ on user $j$, measured as: \begin{equation} r_{i \to j} = \frac{m_{i \to j}}{n_i} \end{equation} where $m_{i \to j}$ is the number of times user $j$ read articles shared by user $i$, and $n_i$ is the total number of articles that user $i$ shared. The IR quantifies the pairwise influence between a user and his/her friend. The larger the IR, the more attention user $j$ pays to content curated by user $i$, and thus the greater influence user $i$ has on user $j$. \item \textbf{M3.
Total influence of friends' gatekeeping} indicates the total effect of friends' gatekeeping on user $j$, defined as: \begin{equation} r_j = \frac{\sum_{i \in F_j}{m_{i \to j}}}{\sum_{i \in F_j}n_i} \end{equation} where $F_j$ is the set of friends of user $j$. \item \textbf{M4. Influence of a social circle} indicates the influence of a certain type of social circle $E$, defined as: \begin{equation} r_E = \frac{m_E}{n_E} = \frac{\sum_{(i,j)\in E}(m_{i \to j}+m_{j \to i})}{\sum_{i \in V(E)}n_i} \end{equation} where $V(E)$ is the set of users involved in the friend-pair set $E$. \item \textbf{M5. Influence of subscription accounts on user} $j$ is defined as: \begin{equation} s_j = \frac{k_j}{l_j} \end{equation} where $l_j$ is the total number of articles published by all the subscription accounts that user $j$ follows, and $k_j$ is the total number of articles that user $j$ reads directly from these subscription accounts. \item \textbf{M6. The ratio of the influence of subscriptions $s$ over the influence of friends $r$}: $\frac{s}{r}$ describes how users split the gatekeeping responsibilities between the subscription accounts and the friend network. \end{itemize} \subsection{Qualitative Analysis Method} \par To verify the quantitative results with users and to understand why users exhibit specific gatekeeping behaviors toward their friend networks, we conduct an additional qualitative analysis with WeChat users. \subsubsection{Participants} \par We recruited $216$ participants (males: 53.7\%, females: 46.3\%; age (19-25): 24.0\%, age (26-30): 28.2\%, age (31-40): 24.5\%, age (41-60): 20.8\%, and age (60+): 2.50\%) via an online survey service to understand their information consumption experience on WeChat. All the survey participants have a good knowledge of WeChat, as well as of information consumption on WeChat. In particular, we chose participants who operate WeChat proficiently, as they could provide us with more comprehensive insights.
We further invited $10$ survey respondents (P1-P10) (males: 60\%, females: 40\%; age (19-25): 20\%, age (26-30): 20\%, age (31-40): 30\%, age (41-60): 20\%, and age (60+): 10\%) for follow-up interviews about their choices in the survey. Each interview took $15$ minutes and was audio recorded. \subsubsection{Design of Questionnaires} \par The questions used in the questionnaires take the form of multiple choice. As a supplement to \textbf{RQ1}, we ask participants to choose what kinds of articles are most welcome from their friend network and what kinds of articles they would further curate and repost. For \textbf{RQ2}, they are asked to indicate from which social circle(s) (options: family, colleague, schoolmate) they curate information in their friend networks and to choose among the possible reasons we provide. For \textbf{RQ3}, we ask them whether and when their social circles, acting as ``gatekeepers'', experienced changes. \section{Results} \par In this section, we summarize the results for the research questions one by one, first presenting the quantitative results and then the qualitative ones where applicable. \begin{figure}[h] \includegraphics[width=\linewidth]{figures/RQ11.png} \caption{The x-axis shows the ratio of the number of articles $l$ published by the subscription accounts over the number of articles $n$ curated by the friend network; the y-axis shows the ratio of the influence of the subscription accounts $s$ over the influence of the friend network $r$.} \label{fig:rq11} \end{figure} \subsection{RQ1: Gatekeeping by Subscription Accounts and Friend Network Collectively} \par \autoref{fig:rq11} shows the relationship between $\frac{s}{r}$ (the ratio of the influence of the subscriptions $s$ over the influence of friends $r$) and $\frac{l}{n}$ (the ratio of the number of articles $l$ published by the subscriptions over the number of articles $n$ shared by friends). We can see that $\frac{s}{r}$ decreases as $\frac{l}{n}$ increases.
Note that in \autoref{fig:rq11} we adopt logarithmic axes. One can see that the ratio of the influence of subscription accounts over the friend network has a power-law dependence on the ratio of the articles published by the subscription accounts over the ones by the friend network: \begin{equation} \frac{s}{r} \approx \beta(\frac{l}{n})^{\alpha} \label{equ:5} \end{equation} \begin{figure*}[h] \includegraphics[width=\linewidth]{figures/RQ13.png} \caption{Distribution of different age, registration duration, and number of friends over the four levels of click-through rate (CTR).} \label{fig:rq13} \end{figure*} \par Through linear regression, we obtain $\alpha \approx -0.48$ and $\beta \approx 0.75$ at a significance level exceeding $99$\%. Solving Equation~\ref{equ:5}, we can infer that when $\frac{l}{n} > 0.55$, $\frac{s}{r} < 1$. That is, when the number of articles published by the subscription accounts exceeds 55\% of the number of articles shared by friends, the influence of the friend network will be greater than that of the subscription accounts (\textbf{Finding 1 (\textbf{F1})}). \par Due to the zero-sum nature of attention~\cite{zhu1992issue}, WeChat users rely primarily on the subscription accounts and the friend network to filter information within their reach. \textbf{F1} suggests that users are likely to adjust their degree of reliance on each channel based on the quantity and quality of its content supply. If a user only subscribes to a few official accounts selectively, articles received from this channel are limited in quantity and more likely to catch the user's attention upon arrival. On the contrary, when articles from the subscription accounts flood the user's wall, the subscription channel can no longer effectively separate attention-worthy content from the rest. The user may instead turn to the friend network with finer ``gates'' to control the information flow.
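The crossover value above follows directly from the fitted model. The sketch below is our own illustration (only the fitted constants $\alpha \approx -0.48$ and $\beta \approx 0.75$ are taken from the regression reported above); it solves $\beta(l/n)^{\alpha} = 1$ for the point where the two influences balance:

```python
import math

# Fitted power-law constants from the regression above
alpha = -0.48
beta = 0.75

def influence_ratio(l_over_n: float) -> float:
    """Predicted s/r for a given l/n under the power-law model."""
    return beta * l_over_n ** alpha

# Crossover: beta * x**alpha = 1  =>  x = beta ** (-1 / alpha)
crossover = beta ** (-1 / alpha)
print(f"l/n crossover: {crossover:.2f}")  # ~0.55

# Below the crossover, subscriptions dominate (s/r > 1); above it, friends do.
print(influence_ratio(0.3) > 1)   # True
print(influence_ratio(1.0) < 1)   # True
```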
\par To conduct an in-depth analysis of the target users who are likely to consume content from the friend network, we divide the value of \textbf{M1} CTR into four ranges: $0$ - $0.05$ as low CTR, $0.05$ - $0.15$ as medium CTR, $0.15$ - $0.3$ as high CTR, and over $0.3$ as extra-high CTR, based on input from domain experts at our collaborator. We then group users by their CTR range and plot the distribution of three user attributes \textbf{A1} in each group: \textit{age} ($1$ - $18$, $19$ - $25$, $26$ - $30$, $31$ - $40$, $41$ - $60$ and over $60$), \textit{registration duration} (years), and \textit{the number of friends} with the ranges of $0$, $1$ - $20$, $21$ - $50$, $51$ - $100$, $101$ - $200$, $201$ - $400$, and $401$ - $800$. \autoref{fig:rq13} (left) shows the distribution of different age groups over the four CTR groups. The $41$-$60$ age group has the highest percentage in each CTR group, especially in high ($42.57$\%) and extra-high CTR ($41.44$\%), far exceeding its share in the lower CTR groups. The distribution of registration duration over the four CTR categories (\autoref{fig:rq13} (middle)) shows that the higher CTR user groups tend to have a shorter registration history. We also find that users of high or extra-high CTR tend to have fewer friends on WeChat (noted as \textbf{F2}) (\autoref{fig:rq13} (right)). \par The qualitative results for \textbf{RQ1} show the diversity of the curated and gatekept content. 79.6\% of the survey respondents stated that they often consumed articles on WeChat. Articles with \textit{attractive titles} (42.6\%), \textit{news and events} (38\%), \textit{practical knowledge} (34.7\%), \textit{financial and investment knowledge} (26.4\%), and \textit{funny stories} (25.5\%) are the most welcome from the friend network.
Users would further curate and repost the articles about \textit{practical knowledge} (56\%), \textit{insightful stories} (28.7\%), \textit{industry trends} (27.8\%), \textit{``chicken-soup'' articles} (nourishing stories for one's soul) (26.4\%), and \textit{current events} (21.8\%), etc., which differs somewhat from what they consume from the friend network. Respondents (P7, P9, females; P3, P5, males) stated that ``\textit{sometimes, these articles represent what we thought,}'' and ``\textit{are well responsive to our current status.}'' Responses from different participants indicate different levels of media literacy, showing that content curated by different friends can be quite diverse. \subsection{RQ2: Gatekeeping by Composition and Tie Strength of Friend Network} \begin{figure}[h] \includegraphics[width=\linewidth]{figures/RQ21.png} \caption{Overall, CTR (y-axis) increases with social similarity (x-axis).} \label{fig:rq21} \end{figure} \subsubsection{RQ2a: About Social Circle} \par To explore how the composition of the friend network helps WeChat users filter information in Moments, we analyze to what extent friends with different social attributes (\textit{D1. Social Similarity} and \textit{D2. Social Circle}) serve as users' channel of choice for content consumption (measured by CTR). We divide the value range of social similarity (number of common friends) into seven intervals: $0$, $1$ - $3$, $4$ - $5$, $6$ - $10$, $11$ - $20$, $21$ - $40$, and over $40$, based on input from domain experts at our collaborator. As shown in \autoref{fig:rq21}, CTR goes up as social similarity increases. Among individuals having over 40 common friends with a user, those from the user's schoolmates circle achieve the highest CTR. In the rest of the social similarity intervals, CTR of the family circle is the highest (noted as \textbf{F3}).
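The binning-and-aggregation step described above can be sketched as follows. This is a simplified, stdlib-only illustration with invented toy records; only the interval boundaries follow the ones listed above, while the record fields are our own assumptions:

```python
from bisect import bisect_right
from collections import defaultdict

# Lower bounds of the seven similarity bins: 0, 1-3, 4-5, 6-10, 11-20, 21-40, >40
LOWER_BOUNDS = [1, 4, 6, 11, 21, 41]

def similarity_bin(common_friends: int) -> int:
    """Index (0-6) of the bin a common-friend count falls into."""
    return bisect_right(LOWER_BOUNDS, common_friends)

# Toy reading records: (common_friends, articles_seen, articles_clicked)
records = [
    (0, 50, 1), (2, 40, 2), (8, 30, 3), (15, 25, 4), (30, 20, 5), (60, 10, 4),
]

seen = defaultdict(int)
clicked = defaultdict(int)
for common, n_seen, n_clicked in records:
    b = similarity_bin(common)
    seen[b] += n_seen
    clicked[b] += n_clicked

# CTR per similarity bin: clicks divided by impressions
ctr_by_bin = {b: clicked[b] / seen[b] for b in seen}
print(ctr_by_bin)
```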
When rendering the CTR values of friends from various social circles for users at different ages (\autoref{fig:rq24}), we find that users across all age groups generally prefer to consume articles curated by the family circle, followed by colleagues and schoolmates. Interestingly, the influence of schoolmates first declines among users at age 20 and then rises again among users aged 32 and over, eventually surpassing the CTR of colleagues among users at age 54 and exceeding the family CTR for users aged 60 or older. Colleagues have about the same level of influence as (sometimes slightly higher than) family for users in their 20s and 30s, and gradually lose the leading position to family and then to schoolmates among users over 40. In a word, the proportion of information intake from different social circles varies across WeChat users in different age groups (noted as \textbf{F4}). \begin{figure}[h] \includegraphics[width=\linewidth]{figures/RQ24.png} \caption{CTR of four social circles (i.e., colleagues, family, schoolmates, and others) over the ages from 18 to 69.} \label{fig:rq24} \end{figure} \par \textbf{F3-4} suggest that people generally care about family-curated content. However, 60.7\% of the survey respondents do not consume information from the family circle, especially from elder family members. Among them, 72.5\% reported that they are not interested in the topics curated by the elder family members, and 29\% indicated that their reading interests are not similar. ``\textit{Family members like to share articles about inspiring stories, health maintenance, and festival-greetings, etc.,}'' said P3 (male). 29.4\% of those who consume family-curated articles do so ``\textit{because of emotional support}'' and 74.1\% of them ``\textit{care about what my family members are interested in.}'' Only 22.4\% reported that they share similar interests with their family.
``\textit{We may just take a glance at the title or quickly go through the contents,}'' said P7 and P9 (females). In this case, it seems that people may already hold preconceptions about the information curated by their families. Consuming information from them may be largely due to emotional support, rather than the relevance or importance of the information. To further verify this hypothesis, we would need more data, such as the average time spent reading an article curated by different circles. Apart from the family circle, 74\% and 76\% of the survey participants like to consume information from colleagues and schoolmates, respectively, because of the topics (71.5\%) and similar hobbies (46.7\%). 27.3\% of users reading articles from colleagues stated that ``\textit{these articles can be conversation-makers in the company.}'' 23.6\% of users (with 34.43\% from the age of $26$-$30$) who do not like to read articles from schoolmates stated that ``\textit{our life becomes different.}'' \subsubsection{RQ2b: About Social Tie Strength} \par We then explore how users leverage the tie strength of their friend network to filter information in Moments. When Granovetter first proposed the concepts of strong and weak ties, he provided not a strict definition but a qualitative description: strong ties refer to frequent connections and close relationships, and weak ties are incidental connections with infrequent communication~\cite{granovetter1977strength}. In this study, we follow the approach proposed by Gupte et al.~\cite{gupte2012measuring} and employ the Jaccard Index\footnote{https://en.wikipedia.org/wiki/Jaccard\_index} of \textit{D1: Social Similarity} to measure tie strength quantitatively: \begin{equation} J(F_i, F_j) = \frac{\lvert F_i \cap F_j \rvert}{\lvert F_i \cup F_j \rvert} \end{equation} where $F_i$ is the set of user $i$'s friends and $F_j$ is the set of user $j$'s friends.
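This tie-strength measure has a direct implementation; the following is a minimal sketch of our own (friend lists represented as plain Python sets, with invented toy data):

```python
def jaccard_tie_strength(friends_i: set, friends_j: set) -> float:
    """Jaccard index of two users' friend sets, used here as tie strength."""
    union = friends_i | friends_j
    if not union:
        return 0.0
    return len(friends_i & friends_j) / len(union)

# Toy example: two users sharing two of their five distinct friends
f_i = {"a", "b", "c"}
f_j = {"b", "c", "d", "e"}
print(jaccard_tie_strength(f_i, f_j))  # 2/5 = 0.4
```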
\begin{figure}[h] \includegraphics[width=\linewidth]{figures/RQ22.png} \caption{Bars show relationships between tie strength (x-axis) and normalized influence ratio (left y-axis) for the subscribed and unsubscribed cases. The curve shows the relationship between tie strength and repetition rate (right y-axis).} \label{fig:rq22} \end{figure} \begin{figure*}[h] \includegraphics[width=\linewidth]{figures/RQ31.png} \caption{T-SNE projection of all the sampled users. The average ratio is computed for each social circle in each cluster.} \label{fig:rq31} \end{figure*} \par \textbf{Strong Ties Bring Trust.} Articles curated by friends on WeChat may be either from the subscription accounts that \textbf{(Case 1)} a user has already followed or from those \textbf{(Case 2)} the user has not yet followed. We utilize \textbf{A2-3} to compute and compare \textbf{M2. Influence Ratio} in the two cases, and apply normalization as follows. For each case, we divide the influence ratio in each segment of tie strength by the minimum influence ratio among all segments in that case, obtaining the normalized influence ratio of the corresponding case in each segment. As shown in \autoref{fig:rq22}, we find that the influence of strong ties in most segments of Case 2 is much higher than that in Case 1. For example, in Case 2, the influence ratio of the segment $[0.05, 0.1)$ is six times higher than that of the segment $[0, 0.001)$, whereas in Case 1, it is only two times higher. Case 2 confirms the ``strong-ties theory'' proposed by Krackhardt: ``\textit{consuming articles from unknown sources means making changes (e.g., exposure to new knowledge, cognitive changes), and it comes with discomfort; however, strong-ties can help overcome this discomfort.}''~\cite{krackhardt2003strength} In Case 1, users have direct exposure to the articles from the subscription account.
If they have already decided whether or not to read the articles, seeing the articles in their friends' Moments may not change their decision. This is perhaps why the normalized influence ratios for Case 1 are fairly similar from the $2^{nd}$ to the $7^{th}$ bar. However, there is a noticeable increase of Case 1 influence ratios in the last two bars, suggesting a likely persuasion effect of the strongest ties. This finding indicates that strong ties bring a sense of trust as a gatekeeper. In other words, if an article comes from a subscription account that a user has not followed, the article alone carries little trust. However, the friend's curation behavior makes up for this lack of trust, and thus this behavior can be considered a form of trusted gatekeeping (noted as \textbf{F5}). \par \textbf{Weak Ties Bring Serendipity.} We investigate the repetition rate of the articles curated by the friend network in each segment of tie strength (the green curve in \autoref{fig:rq22}); based on \textit{A2-3}, the repetition rate indicates the percentage of friends who have ever curated the same articles. We find that the repetition rate rises gradually with the increase of tie strength. This indicates that weak ties are more likely to bring information that users have not seen before, and can act as ``gates'' leading to ``unexpectedness'' (noted as \textbf{F6}). This finding also corroborates the ``weak-ties theory''~\cite{granovetter1977strength} that ``\textit{information curated by strong-ties is likely to be similar and redundant, whereas weak-ties can break the boundaries of people's inherent social circles and bring new information.}'' \par In the qualitative study, we asked the interviewees how they treat the information curated by strong and weak ties.
``\textit{I want to learn why my close friends read this article,}'' said P6 (female), ``\textit{I will consume information from people I occasionally meet because of their comparatively fresh information.}'' ``\textit{I pay special attention to some friends who have special ideas, or opinion leaders,}'' said P9 (female). ``\textit{I'm interested in articles from strong-ties, but sometimes, weak relationships will also bring some current affairs-related articles which I am interested in,}'' said P5 (male). ``\textit{Sometimes, I have a clear idea of what I want and go straight to appropriate friends for contents known to fulfill my consumption demand; other times, I am open to new information and just click around,}'' said P4 (male). When we further asked him whether these friends could act as a ``filter'', he said, ``\textit{definitely, with so much information, I will choose the information curated by my close friends with trustworthiness.}'' \par 49.5\% of survey respondents stated that they would follow up on a popular event only after it had exploded among their friends, whereas 32.4\% of the respondents would proactively seek relevant information right away. From the interviews, we confirm that people may transfer positive evaluations from trusted friends to their curated information, ``\textit{I tend to believe what my friends believe,}'' said P7-8 (females), which is also consistent with our quantitative analysis of ``strong-ties bring trust'' (\textbf{F5}), i.e., an article curated by a close friend will increase the trust in the corresponding unfollowed subscription account. To further verify whether users will follow and consume information from these unfollowed subscription accounts, more data are needed. \begin{figure*}[h] \includegraphics[width=\linewidth]{figures/RQ32.png} \caption{Color blocks indicate feature importance of four social circles for different article categories.
Cluster 1 (in which colleagues occupy the largest share) and Cluster 2 (in which family occupies the largest share) are compared using data from 201709.} \label{fig:rq32} \end{figure*} \subsection{RQ3: Friend Network Gatekeeping Mingling with Temporal Dynamics} \par We take two steps to address this research question. \par \textbf{Step 1: Computing feature importance at each time frame.} We first derive a computational model to infer how WeChat users leverage different social circles to gatekeep the relevance/types of articles within their reach at each one-month time frame. We start by representing each user $u$ by his or her friend network composition (\textbf{D2}), $v_u = (r_{c}, r_{f}, r_{s}, r_{o})$, where $r_c$, $r_f$, $r_s$, $r_o$ represent the ratio of the number of friends in \textit{colleagues}, \textit{family}, \textit{schoolmates}, and \textit{others}, respectively, to all friends. Based on this vector representation $v_u$, we use K-Means to cluster all sampled users into four clusters. Each cluster indicates a different friend network composition. The number of clusters can be dynamically adjusted. As shown in \autoref{fig:rq31}, when $k = 4$, we achieve a balanced distribution among all clusters. Next, via regression analysis, we fit the friend network composition space to the article consumption space. Regression analysis has long been used to model the relationship between variables and to estimate how a dependent variable responds to changes~\cite{li2018embeddingvis}. Under the assumption that network composition can affect article consumption behaviors, we regard each social circle as an observed variable and use their combination to regress the consumption space for a certain article category (\textbf{A4}).
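The vectorization-and-clustering step can be sketched as follows. This is a minimal, dependency-free illustration of our own (the paper presumably uses a standard K-Means implementation; the toy users and circle counts are invented):

```python
import random
from math import dist

def composition_vector(circle_counts: dict) -> tuple:
    """v_u = (r_c, r_f, r_s, r_o): friend share per social circle."""
    total = sum(circle_counts.values())
    return tuple(circle_counts.get(c, 0) / total
                 for c in ("colleagues", "family", "schoolmates", "others"))

def kmeans(points, k, iters=50, seed=0):
    """Plain K-Means over equal-length ratio vectors (stdlib only)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: dist(p, centers[c]))
            clusters[nearest].append(p)
        # Recompute each center as the mean of its cluster
        centers = [tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else centers[c]
                   for c, cl in enumerate(clusters)]
    return centers, clusters

# Toy users: colleague-heavy vs. family-heavy friend networks
users = [
    {"colleagues": 60, "family": 25, "schoolmates": 10, "others": 5},
    {"colleagues": 55, "family": 30, "schoolmates": 10, "others": 5},
    {"colleagues": 10, "family": 80, "schoolmates": 5, "others": 5},
    {"colleagues": 15, "family": 75, "schoolmates": 5, "others": 5},
]
points = [composition_vector(u) for u in users]
centers, clusters = kmeans(points, k=2)
print([len(c) for c in clusters])
```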
For each cluster, we weight each social circle by its contribution to the article consumption using feature importance, i.e., we approximate the two spaces by $ f(n_k, n) \approx \mathit{reg}(w_{c} \cdot r_{c}, w_{f} \cdot r_{f}, w_{s} \cdot r_{s}, w_{o} \cdot r_{o}) $ where $f$ computes the ratio of the number of consumed articles ($n_k$) of category $k$ to all consumed articles ($n$) and $w$ is the feature importance. \par In this step, we apply five widely-used machine learning algorithms, including \textit{Linear Regression (LR)}, \textit{Lasso}, \textit{Multiple-layer Perceptron (MLP)}, \textit{Decision Tree (DT)}, and \textit{Random Forest (RF)}, to conduct the regression analysis. Among them, LR and Lasso are linear regressors and the rest fit data with non-linear kernels. We use the coefficient of determination ($R^2$) commonly employed in regression analysis to assess model performance (Table~\ref{tab:result}). The results indicate that DT performs the best with a sufficiently high $R^2$ score. \begin{table}[h] \begin{tabular}{l|l|l|l|l} {Regression} & {$R^2_{id=0}$} & {$R^2_{id=1}$} & {$R^2_{id=2}$} & {$R^2_{id=3}$} \\ \hline {LR} & 0.102 & 0.145 & 0.079 & 0.089 \\ \hline {Lasso} & 0.051 & 0.072 & 0.052 & 0.062 \\ \hline {MLP} & 0.252 & 0.225 & 0.191 & 0.430 \\ \hline \textbf{DT} & \textbf{0.761} & \textbf{0.742} & \textbf{0.543} & \textbf{0.801} \\ \hline {RF} & {0.540} & {0.680} & {0.601} & {0.762} \end{tabular} \caption{Results for different regression models.} \label{tab:result} \end{table} \begin{figure*}[h] \includegraphics[width=\linewidth]{figures/RQ33.png} \caption{Word clouds indicate topics that social circles of Cluster 1 contribute to over time. The font size in word clouds encodes feature importance of the corresponding circle that contributes to the topic.} \label{fig:rq33} \end{figure*} \par We then apply DT to extract the feature importance of each social circle for each cluster to reflect their contribution to the circulation of each type of article.
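The regression-and-importance extraction can be sketched with scikit-learn (an assumption on our part; the paper does not name its implementation). The composition vectors and the consumption target below are synthetic, constructed so that the family share drives the target:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Synthetic users: rows are (r_c, r_f, r_s, r_o) friend-composition vectors
raw = rng.random((200, 4))
X = raw / raw.sum(axis=1, keepdims=True)   # shares sum to 1 per user

# Synthetic target: consumption ratio of one article category, driven
# mostly by the family share, so "family" should dominate the importance
y = 0.8 * X[:, 1] + 0.1 * X[:, 0] + 0.02 * rng.random(200)

model = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, y)
r2 = r2_score(y, model.predict(X))

circles = ["colleagues", "family", "schoolmates", "others"]
importance = dict(zip(circles, model.feature_importances_))
print(f"R^2 = {r2:.2f}")
print(max(importance, key=importance.get))  # most influential circle
```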
\autoref{fig:rq32} gives an example. In Cluster 1, colleagues occupy about 60\% of the friend network, followed by the family circle (25\%). Color blocks indicate the feature importance of the corresponding circles for different article categories. One can see that for Cluster 1, topics with high feature importance are \textit{practical knowledge} from family as well as \textit{baby and child}, \textit{holidays}, and \textit{career events} from colleagues. For Cluster 2, in which the family circle dominates (80\%), the distribution of topics with high feature importance (e.g., \textit{traditional culture} and \textit{traveling} from family, \textit{games} and \textit{shopping} from schoolmates) is different from that in Cluster 1. For a specific cluster of users, different social circles curate different topics (noted as \textbf{F7}). For example, \textit{alcohol and tobacco}-related articles come mostly from the schoolmate circle, and the schoolmate circle tends to circulate information about \textit{career events} and \textit{entertainment}. There is also the phenomenon that information about the same topic is curated by different circles in different clusters. For example, in Cluster 1, the family is the main source of \textit{sports} and \textit{food}-related information, while such articles come mostly from ``others'' in Cluster 2. Another example is that users in Cluster 1 take in \textit{holiday}-related content primarily from colleagues, while those in Cluster 2 read about \textit{holidays} from their schoolmates' posts. This is likely because different clusters of users may have quite different media literacy~\cite{potter2007media}. \par \textbf{Step 2: Visualizing feature importance over time.} After calculating the feature importance for each month, we visualize the topics in a word cloud, with feature importance encoded by font size.
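The font-size encoding can be sketched as a simple linear rescaling of feature-importance scores (our own illustration; the actual rendering would be delegated to a word-cloud library, and the topic scores below are invented):

```python
def font_sizes(importance: dict, min_pt: int = 10, max_pt: int = 48) -> dict:
    """Map feature-importance scores to font sizes for a word cloud."""
    lo, hi = min(importance.values()), max(importance.values())
    span = (hi - lo) or 1.0
    return {topic: round(min_pt + (score - lo) / span * (max_pt - min_pt))
            for topic, score in importance.items()}

# Invented topic importances for one social circle in one month
scores = {"traveling": 0.45, "housing": 0.30, "tea art": 0.15, "plants": 0.10}
print(font_sizes(scores))
```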
Take three social circles of users from Cluster 1 at four different time frames (i.e., 201709, 201712, 201803, and 201806) as an example (\autoref{fig:rq33}). Apart from some topics that are always dominated by certain social circles, e.g., \textit{traveling} for colleagues and \textit{plants} for schoolmates, \textit{housing} emerges in Dec.\ 2017 in the schoolmate circle, which then contributes significantly to this topic, followed later by the family circle. \textit{Tea art}-related articles shift between the family circle and the schoolmate circle. We also inspect other clusters and identify similar phenomena, i.e., although different social circles tend to curate some relatively stable topics (e.g., traveling) with strong characteristics, some interests can shift between circles and the attention paid to them has its own ups and downs (noted as \textbf{F8}). \par In the qualitative results for \textbf{RQ3}, 50.5\% of survey participants reported that their use of different social circles as ``gatekeepers'' had changed over time: 32.4\% of them after they entered college, 36.1\% after they started to work, and 37.1\% when their lives shifted to a new stage, such as getting married, becoming parents, or retiring. The follow-up interviews provide some detailed explanations. ``\textit{now I prefer to read professional articles related to my major, and I will explore them by subscription accounts or my friend network,}'' said P1 (student, male); ``\textit{I read more articles related to my field so I pay more attention to my colleagues,}'' said P4 (male). ``\textit{After getting married, I consume more information about life, emotions, personal growth or the contents curated by my friends in similar circumstances. In fact, I have unsubscribed from some subscription accounts about my original interests,}'' said P6 (recently married, female).
\section{Discussion} \par The insights found in this study can provide implications for the future design of socio-technical systems, e.g., social recommenders. Users tend to adjust their degree of reliance on subscription accounts or friend networks based on the quantity and quality of their content suppliers (\textbf{F1}). Meanwhile, elder users with less WeChat experience are more likely to consume friend-curated content (\textbf{F2}). For one thing, having more spare time, most of them enjoy socializing with old friends and classmates on social media (\textbf{F4}). As indicated in~\cite{madden2010older}, ``\textit{social media users are more likely to reconnect with people from their past, and these renewed connections provide a strong support network when people are near retirement.}'' For another, users in this group usually have strong information needs. Therefore, social recommendation strategies for their content, if applied, need to be tailored. \par Regarding how gatekeeping gets reflected in different social circles and ties, survey participants indicate that the family circle cannot fully function as a gatekeeper, in contrast to the quantitative analysis (\textbf{F3}). This may be because of emotional support, or because people already hold preconceptions about the content curated by family members. The qualitative study also finds that people's reading interests may shift over time due to (1) changes in their information needs and tastes; and (2) changes in the composition of their social circles around the same time, which is in accordance with \textbf{F7-8}: although users with similar friend composition tend to get stable curation for some information, some articles and the attention paid to them can shift between different social circles. The performance of DT regression indicates that the article consumption space depends on the different social circles in a non-linear way, i.e., different social circles can share common information interests.
People with a clear idea of what they want to read will go straight to appropriate gatekeepers known to fulfill their consumption demands. These gatekeepers can be either close friends with strong ties or trusted, followed subscription accounts. In addition, the quantitative analysis indicates that the information curation behaviors of friends with strong ties hold implications for trust in unfollowed subscription outlets (\textbf{F5}). With a little help from these friends, people may be able to connect with new subscription accounts and improve their readiness to participate in an informed democracy. People are also willing to acquire new knowledge from weak ties (\textbf{F6}), which bring serendipity. In either case, users manage different gatekeepers as content curators. We can, therefore, model pairwise friend influence and apply it to potential recommendation scenarios, such as social advertisements that give more exposure to users who are more likely to consume friend-curated content. \subsection{Limitation} \par There are several limitations to this research. First, in \textbf{RQ3}, since we only consider the relationship between social circles and article categories, the regression models may fail to capture other factors with a possibly high correlation between the social circle and article consumption spaces. As we only have one year of longitudinal data due to the high maintenance cost for our collaborators, and users are clustered only once based on their social composition, we cannot conclude that changes of social circles would influence information gatekeeping, since dramatic changes in users' social networks or interests are unlikely to happen overnight. Second, in \textbf{RQ2}, we only use the number of common friends to measure tie strength. A more objective metric may be the chatting frequency between two users.
Third, in most cases, we only employ CTR to infer users' reading interests and do not dig deeper into the motivations behind their clicks using other metrics. \section{Conclusion} \par In this paper, we adopt a mixed-methods approach to study the ``friend network as a latent gatekeeper'' phenomenon on WeChat, a friend-based social media platform. We analyze over seven million users to infer how they accommodate and safeguard their information through social circles and social ties. We also conduct a survey of 216 WeChat users about their reading activities on WeChat. Results indicate that WeChat users prefer the friend network when information is overloaded. They like to leverage weak ties to get exposed to new domains, and turn to strong ties when demanding credible and reliable information. Elder users with less experience using WeChat are more likely to consume friend-curated content. Users leverage social circles to gatekeep their information interests, and these interests and the attention paid to them can shift from one social circle to another. \begin{acks} We are grateful for the valuable feedback and comments provided by the anonymous reviewers. This research was supported by WeChat-HKUST Joint Lab on AI Technology (WHATLAB) grant\#1617170-0 and HKUST-WeBank Joint Laboratory Project Grant No.: WEB19EG01-d. \end{acks} \bibliographystyle{ACM-Reference-Format}
\section{Decidability} \label{decide} As is well-known, the logical theory $\Th({\mathbb{N}},+)$, sometimes called Presburger arithmetic, is decidable \cite{Presburger:1929,Presburger:1991}. B\"uchi \cite{Buchi:1960} showed that if we add the function $V_k(n) = k^e$, for some fixed integer $k \geq 2$, where $e = \max \{ i \ : \ k^i \, | \, n \}$, then the resulting theory is still decidable. This theory is powerful enough to define finite automata; for a survey, see \cite{Bruyere&Hansel&Michaux&Villemaire:1994}. As a consequence, we have the following theorem (see, e.g., \cite{Shallit:2013}): \begin{theorem} There is an algorithm that, given a proposition phrased using only the universal and existential quantifiers, indexing into one or more $k$-automatic sequences, addition, subtraction, logical operations, and comparisons, will decide the truth of that proposition. \label{one} \end{theorem} Here, by a $k$-automatic sequence, we mean a sequence $\bf a$ computed by a deterministic finite automaton with output (DFAO) $M = (Q, \Sigma_k, \Delta, \delta, q_0, \kappa) $. Here $\Sigma_k := \lbrace 0,1,\ldots, k-1 \rbrace$ is the input alphabet, $\Delta$ is the output alphabet, and outputs are associated with the states given by the map $\kappa:Q \rightarrow \Delta$ in the following manner: if $(n)_k$ denotes the canonical expansion of $n$ in base $k$, then ${\bf a}[n] = \kappa(\delta(q_0, (n)_k))$. The prototypical example of an automatic sequence is the Thue-Morse sequence ${\bf t} = t_0 t_1 t_2 \cdots$, the fixed point (starting with $0$) of the morphism $0 \rightarrow 01$, $1 \rightarrow 10$. It turns out that many results in the literature about properties of automatic sequences, some of which previously had only long and involved proofs, can be proved purely mechanically using a decision procedure.
It suffices to express the property as an appropriate logical predicate, convert the predicate into an automaton accepting representations of integers for which the predicate is true, and examine the automaton. See, for example, the recent papers \cite{Allouche&Rampersad&Shallit:2009,Goc&Henshall&Shallit:2012,Goc&Saari&Shallit:2013,Goc&Mousavi&Shallit:2013,Goc&Schaeffer&Shallit:2013}. Furthermore, in many cases we can explicitly enumerate various aspects of such sequences, such as subword complexity \cite{Charlier&Rampersad&Shallit:2012}. Beyond base $k$, more exotic numeration systems are known, and one can define automata taking representations in these systems as input. It turns out that in the so-called Pisot numeration systems, addition is computable \cite{Frougny:1992a,Frougny&Solomyak:1996}, and hence a theorem analogous to Theorem~\ref{one} holds for these systems. See, for example, \cite{Bruyere&Hansel:1997}. It is our contention that the power of this approach has not been widely appreciated, and that many results, previously proved using long and involved ad hoc techniques, can be proved with much less effort by phrasing them as logical predicates and employing a decision procedure. Furthermore, many enumeration questions can be solved with a similar approach. We have implemented a decision algorithm for one such system; namely, Fibonacci representation. In this paper we report on the results obtained with this implementation: we have reproved many results in the literature purely mechanically, as well as obtained new ones. The paper is organized as follows. In Section~\ref{fibrep}, we briefly recall the details of Fibonacci representation. In Section~\ref{proofsf} we report on our mechanical proofs of properties of the infinite Fibonacci word; we reprove many old results and we prove some new ones. In Section~\ref{finitefib} we apply our ideas to prove results about the finite Fibonacci words.
In Section~\ref{rotefib} we study a special infinite word, the Rote-Fibonacci word, and prove many properties of it, including a new avoidability result. In Section~\ref{other} we look briefly at another sequence, the Fibonacci analogue of the Thue-Morse sequence. In Section~\ref{additive} we apply our methods to another avoidability problem involving additive squares. In Section~\ref{enumer} we report on mechanical proofs of some enumeration results. Some details about our implementation are given in the last section. \section{Fibonacci representation} \label{fibrep} Let the Fibonacci numbers be defined, as usual, by $F_0 = 0$, $F_1 = 1$, and $F_n = F_{n-1} + F_{n-2}$ for $n \geq 2$. (We caution the reader that some authors use a different indexing for these numbers.) It is well-known, and goes back to Ostrowski \cite{Ostrowski:1922}, Lekkerkerker \cite{Lekkerkerker:1952}, and Zeckendorf \cite{Zeckendorf:1972}, that every non-negative integer can be represented, in an essentially unique way, as a sum of Fibonacci numbers $(F_i)_{i\geq 2}$, subject to the constraint that no two consecutive Fibonacci numbers are used. For example, $43 = F_9 + F_6 + F_2$. Also see \cite{Carlitz:1968,Fraenkel:1985}. Such a representation can be written as a binary string $a_1 a_2 \cdots a_n$ representing the integer $\sum_{1 \leq i \leq n} a_i F_{n+2-i}$. For example, the binary string $10010001$ is the Fibonacci representation of $43$. For $w = a_1 a_2 \cdots a_n \in \Sigma_2^*$, we define $[a_1 a_2 \cdots a_n]_F := \sum_{1 \leq i \leq n} a_i F_{n+2-i}$, even if $a_1 a_2 \cdots a_n$ has leading zeroes or consecutive $1$'s. By $(n)_F$ we mean the {\it canonical} Fibonacci representation for the integer $n$, having no leading zeroes or consecutive $1$'s. Note that $(0)_F = \epsilon$, the empty string. The language of all canonical representations of elements of ${\mathbb{N}}$ is $\epsilon + 1(0+01)^*$. 
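The canonical (Zeckendorf) representation can be computed by the standard greedy algorithm: repeatedly subtract the largest Fibonacci number not exceeding what remains. The following Python sketch is our own illustration, not the authors' implementation:

```python
def fibs_up_to(n):
    """Fibonacci numbers F_2, F_3, ... up to n (F_2 = 1, F_3 = 2, ...)."""
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    return fibs

def zeckendorf(n: int) -> str:
    """Canonical Fibonacci (Zeckendorf) representation (n)_F as a bit string."""
    if n == 0:
        return ""            # (0)_F is the empty string
    bits = []
    for f in reversed(fibs_up_to(n)):
        if f <= n:
            bits.append("1")
            n -= f
        elif bits:           # skip leading zeroes
            bits.append("0")
    return "".join(bits)

print(zeckendorf(43))  # 10010001, i.e., 43 = F_9 + F_6 + F_2
```

The greedy choice guarantees that no two consecutive Fibonacci numbers are ever used, so the output never contains the factor `11`.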
Just as Fibonacci representation is the analogue of base-$k$ representation, we can define the notion of {\it Fibonacci-automatic sequence} as the analogue of the more familiar notion of $k$-automatic sequence \cite{Cobham:1972,Allouche&Shallit:2003}. We say that an infinite word ${\bf a} = (a_n)_{n \geq 0}$ is Fibonacci-automatic if there exists an automaton with output $M = (Q, \Sigma_2, q_0, \delta, \kappa, \Delta)$ such that $a_n = \kappa(\delta(q_0, (n)_F))$ for all $n \geq 0$. An example of a Fibonacci-automatic sequence is the infinite Fibonacci word, $${\bf f} = f_0 f_1 f_2 \cdots = 01001010\cdots$$ which is generated by the following 2-state automaton: \begin{figure}[H] \begin{center} \begin{tikzpicture}[node distance=2cm,on grid,>=stealth',initial text=,auto, every state/.style={inner sep=1pt,minimum size=1cm}, every loop/.style={shorten >=0,looseness=0}] \node[state,initial] (q_0) {$q_0/{\tt 0}$}; \node[state] (q_1) [right=of q_0] {$q_1/{\tt 1}$}; \path[->] (q_0) edge [loop above] node {\tt 0} () (q_0.10) edge node {\tt 1} (q_1.170) (q_1.190) edge node {\tt 0} (q_0.350); \end{tikzpicture} \end{center} \caption{Canonical Fibonacci representation DFAO generating the Fibonacci word} \label{fig:f-dfao} \end{figure} To compute $f_i$, we express $i$ in canonical Fibonacci representation, and feed it into the automaton. Then $f_i$ is the output associated with the last state reached (denoted by the symbol after the slash). Another characterization of Fibonacci-automatic sequences can be found in \cite{Shallit:1988a}. A basic fact about Fibonacci representation is that addition can be performed by a finite automaton. To make this precise, we need to generalize our notion of Fibonacci representation to $r$-tuples of integers for $r \geq 1$. A representation for $(x_1, x_2,\ldots, x_r)$ consists of a string of symbols $z$ over the alphabet $\Sigma_2^r$, such that the projection $\pi_i(z)$ over the $i$'th coordinate gives a Fibonacci representation of $x_i$. 
Notice that since the canonical Fibonacci representations of the individual $x_i$ may have different lengths, padding with leading zeroes will often be necessary. A representation for $(x_1, x_2, \ldots, x_r)$ is called canonical if it has no leading $[0,0,\ldots, 0]$ symbols and the projections into individual coordinates have no occurrences of $11$. We write the canonical representation as $(x_1, x_2, \ldots, x_r)_F$. For example, the canonical representation for $(9,16)$ is $[0,1][1,0][0,0][0,1][0,0][1,0]$. Our claim about addition in Fibonacci representation, then, is that there exists a deterministic finite automaton (DFA) $M_{\rm add}$ that takes input words of the form $[0,0,0]^* (x,y,z)_F$, and accepts if and only if $x + y = z$. For example, $M_{\rm add}$ accepts $[0,0,1][1,0,0][0,1,0][1,0,1]$, since the three strings obtained by projection are $0101, 0010, 1001$, which represent, respectively, $4$, $2$, and $6$ in Fibonacci representation. This result is apparently originally due to Berstel \cite{Berstel:1982}; also see \cite{Berstel:1986b,Frougny:1988,Frougny:1991b,Ahlbach&Usatine&Frougny&Pippenger:2013}. Since this automaton does not appear to have been given explicitly in the literature and it is essential to our implementation, we give it here. The states of $M_{\rm add}$ are $Q = \lbrace 0,1,2,\ldots, 16 \rbrace$, the input alphabet is $\Sigma_2 \times \Sigma_2 \times \Sigma_2$, the final states are $F = \lbrace 1,7,11 \rbrace$, the initial state is $q_0 = 1$, and the transition function $\delta$ is given below. The automaton is incomplete, with any unspecified transitions going to a non-accepting dead state that transitions to itself on all inputs. This automaton actually works even for non-canonical expansions having consecutive $1$'s; an automaton working only for canonical expansions can easily be obtained by intersection with the appropriate regular languages. The state $0$ is a ``dead state'' that can safely be ignored. 
\begin{table}[H] \begin{center} \begin{tabular}{c|cccccccc} & [0,0,0] & [0,0,1] & [0,1,0] & [0,1,1] & [1,0,0] & [1,0,1] & [1,1,0] & [1,1,1] \\ \hline 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 2 & 3 & 1 & 3 & 1 & 0 & 3 \\ 2 & 4 & 5 & 6 & 4 & 6 & 4 & 7 & 6 \\ 3 & 0 & 8 & 0 & 0 & 0 & 0 & 0 & 0 \\ 4 & 5 & 0 & 4 & 5 & 4 & 5 & 6 & 4 \\ 5 & 0 & 0 & 0 & 0 & 0 & 0 & 9 & 0 \\ 6 & 2 & 10 & 1 & 2 & 1 & 2 & 3 & 1 \\ 7 & 8 & 11 & 0 & 8 & 0 & 8 & 0 & 0 \\ 8 & 3 & 1 & 0 & 3 & 0 & 3 & 0 & 0 \\ 9 & 0 & 0 & 5 & 0 & 5 & 0 & 4 & 5 \\ 10 & 0 & 0 & 9 & 0 & 9 & 0 & 12 & 9 \\ 11 & 6 & 4 & 7 & 6 & 7 & 6 & 13 & 7 \\ 12 & 10 & 14 & 2 & 10 & 2 & 10 & 1 & 2 \\ 13 & 0 & 15 & 0 & 0 & 0 & 0 & 0 & 0 \\ 14 & 0 & 0 & 0 & 0 & 0 & 0 & 16 & 0 \\ 15 & 0 & 3 & 0 & 0 & 0 & 0 & 0 & 0 \\ 16 & 0 & 0 & 0 & 0 & 0 & 0 & 5 & 0 \\ \end{tabular} \end{center} \caption{Transition table for $M_{\rm add}$ for Fibonacci addition} \end{table} We briefly sketch a proof of the correctness of this automaton. States can be identified with certain sequences, as follows: if $x,y,z$ are the identical-length strings arising from projection of a word that takes $M_{\rm add}$ from the initial state $1$ to the state $t$, then $t$ is identified with the integer sequence $([x0^n]_F + [y0^n]_F - [z0^n]_F)_{n \geq 0}$. With this correspondence, we can verify the following table by a tedious induction. In the table $L_n$ denotes the familiar Lucas numbers, defined by $L_n = F_{n-1} + F_{n+1}$ for $n \geq 0$ (assuming $F_{-1} = 1$). If a sequence $(a_n)_{n \geq 0}$ is the sequence identified with a state $t$, then $t$ is accepting iff $a_0 = 0$. 
\begin{table}[H] \begin{center} \begin{tabular}{c|c} state & sequence \\ \hline 1 & {\bf 0} \\ 2 & $(-F_{n+2})_{n \geq 0}$ \\ 3 & $(F_{n+2})_{n \geq 0}$ \\ 4 & $(-F_{n+3})_{n \geq 0}$ \\ 5 & $(-F_{n+4})_{n \geq 0}$ \\ 6 & $(-F_{n+1})_{n \geq 0}$ \\ 7 & $(F_n)_{n \geq 0}$ \\ 8 & $(F_{n+1})_{n \geq 0}$ \\ 9 & $(-L_{n+2})_{n \geq 0}$ \\ 10 & $(-2F_{n+2})_{n \geq 0}$ \\ 11 & $(-F_n)_{n \geq 0}$ \\ 12 & $(-2F_{n+1})_{n \geq 0}$ \\ 13 & $(L_{n+1})_{n \geq 0}$ \\ 14 & $(-3F_{n+2})_{n \geq 0}$ \\ 15 & $(2F_{n+1})_{n \geq 0}$ \\ 16 & $(-3F_n-2L_n)_{n \geq 0}$ \end{tabular} \end{center} \caption{Identification of states with sequences} \end{table} Note that the state $0$ actually represents a set of sequences, not just a single sequence. The set corresponds to those representations that are so far ``out of synch'' that they can never ``catch up'' to have $x +y = z$, no matter how many digits are appended. \begin{remark} We note that, in the spirit of the paper, this adder itself can, in principle, be checked mechanically (in $\Th({\mathbb{N}}, 0)$, of course!), as follows: First we show the adder $\cal A$ specifies a function of $x$ and $y$. To do so, it suffices to check that $$ \forall x \ \forall y \ \exists z \ {\cal A}(x,y,z)$$ and $$ \forall x \ \forall y \ \forall z \ \forall z' \ {\cal A}(x,y,z) \wedge {\cal A}(x,y,z') \implies z = z' .$$ The first predicate says that there is at least one sum of $x$ and $y$ and the second says that there is at most one. If both of these are verified, we know that $\cal A$ computes a function $A= A(x,y)$. Next, we verify associativity, which amounts to checking that $$ \forall x \ \forall y \ \forall z \ A(A(x,y),z) = A(x,A(y,z)) .$$ We can do this by checking that $$ \forall x\ \forall y \ \forall z \ \forall r \ \forall s \ \forall t \ ({\cal A}(x,y,r) \ \wedge \ {\cal A}(r,z,t) \ \wedge \ {\cal A}(y,z,s) ) \ \implies \ {\cal A}(x,s,t) . $$ Finally, we ensure that $\cal A$ is an adder by induction. 
First, we check that $\forall x \ A(x,0) = x$, which amounts to $$ \forall x\ \forall y \ {\cal A}(x,0,y) \iff x = y .$$ Second, we check that if $A(x,1) = y$ then $x < y$ and there does not exist $z$ such that $x < z < y$. This amounts to $$ \forall x \ \forall y \ {\cal A}(x,1,y) \implies ((x < y) \ \wedge \ \neg \exists z \ ((x<z) \ \wedge \ (z<y))) .$$ This last condition shows that $A(x,1) = x+1$. By associativity $A(x,y+1) = A(x,A(y,1)) = A(A(x,y),1) = A(x,y) + 1$. By induction, $A(x,y) = A(x,0)+y = x+y$, so we are done. \end{remark} Another basic fact about Fibonacci representation is that, for canonical representations containing no two consecutive $1$'s or leading zeroes, the radix order on representations is the same as the ordinary ordering on ${\mathbb{N}}$. It follows that a very simple automaton can, on input $(x,y)_F$, decide whether $x < y$. Putting this all together, we get the analogue of Theorem~\ref{one}: \begin{proc}[Decision procedure for Fibonacci-automatic words] \label{proc:Fib-auto-decide} \ \\ {\bf Input:} $m,n \in {\mathbb{N}}$, $m$ DFAOs witnessing Fibonacci-automatic words ${\bf w}_1,{\bf w}_2,\dots,{\bf w}_m$, a first-order proposition with $n$ free variables $\varphi(v_1,v_2,\dots,v_n)$ using constants and relations definable in $\Th({\mathbb{N}},0,1,+)$ and indexing into ${\bf w}_1,{\bf w}_2,\dots,{\bf w}_m$. \\ {\bf Output:} DFA with input alphabet $\Sigma_2^n$ accepting $\{ (k_1,k_2,\dots,k_n)_F \;:\; \varphi(k_1,k_2,\dots,k_n) \text{ holds} \}$. 
\end{proc} We remark that there was substantial skepticism that any implementation of a decision procedure for Fibonacci-automatic words would be practical, for two reasons: \begin{itemize} \item first, because the running time is bounded above by an expression of the form $$2^{2^{\Ddots^{ 2^{p(N)}}}}$$ where $p$ is a polynomial, $N$ is the number of states in the original automaton specifying the word in question, and the number of exponents in the tower is one less than the number of quantifiers in the logical formula characterizing the property being checked. \item second, because of the complexity of checking addition (17 states) compared to the analogous automaton for base-$k$ representation (2 states). \end{itemize} Nevertheless, we were able to carry out nearly all the computations described in this paper in a matter of a few seconds on an ordinary laptop. \section{Mechanical proofs of properties of the infinite Fibonacci word} \label{proofsf} Recall that a word $x$, whether finite or infinite, is said to have period $p$ if $x[i] = x[i+p]$ for all $i$ for which this equality is meaningful. Thus, for example, the English word ${\tt alfalfa}$ has period $3$. The {\it exponent} of a finite word $x$, written $\exp(x)$, is $|x|/P$, where $P$ is the smallest period of $x$. Thus $\exp({\tt alfalfa}) = 7/3$. If some suffix of an infinite word $\bf x$ has a period, we say $\bf x$ is {\it ultimately periodic}. Equivalently, an infinite word $\bf x$ is ultimately periodic if and only if there are finite words $u, v$, with $v$ nonempty, such that ${\bf x} = uv^\omega$, where $v^\omega= vvv \cdots$. A nonempty word of the form $xx$ is called a {\it square}, and a nonempty word of the form $xxx$ is called a {\it cube}. More generally, a nonempty word of the form $x^n$ is called an $n$'th power. By the {\it order} of a square $xx$, cube $xxx$, or $n$'th power $x^n$, we mean the length $|x|$. The infinite Fibonacci word ${\bf f} = 01001010 \cdots = f_0 f_1 f_2 \cdots$ can be described in many different ways. 
In addition to our definition in terms of automata, it is also the fixed point of the morphism $\varphi(0) = 01$ and $\varphi(1) = 0$. This word has been studied extensively in the literature; see, for example, \cite{Berstel:1980b,Berstel:1986b}. In the next subsection, we use our implementation to prove a variety of results about repetitions in $\bf f$. \subsection{Repetitions} \label{repe-subsec} \begin{theorem} The word $\bf f$ is not ultimately periodic. \end{theorem} \begin{proof} We construct a predicate asserting that the integer $p \geq 1$ is a period of some suffix of $\bf f$: $$ (p \geq 1) \ \wedge \ \exists n \ \forall i \geq n\ {\bf f}[i] = {\bf f}[i+p] . $$ (Note: unless otherwise indicated, whenever we refer to a variable in a predicate, the range of the variable is assumed to be ${\mathbb{N}} = \lbrace 0, 1, 2, \ldots \rbrace$.) From this predicate, using our program, we constructed an automaton accepting the language $$ L = 0^*\ \lbrace (p)_F \ : \ (p \geq 1) \ \wedge \ \exists n \ \forall i \geq n \ {\bf f}[i] = {\bf f}[i+p] \rbrace .$$ This automaton accepts the empty language, and so it follows that ${\bf f}$ is not ultimately periodic. Here is the log of our program: \begin{verbatim} p >= 1 with 4 states, in 60ms i >= n with 7 states, in 5ms F[i] = F[i + p] with 12 states, in 34ms i >= n => F[i] = F[i + p] with 51 states, in 15ms Ai i >= n => F[i] = F[i + p] with 3 states, in 30ms p >= 1 & Ai i >= n => F[i] = F[i + p] with 2 states, in 0ms En p >= 1 & Ai i >= n => F[i] = F[i + p] with 2 states, in 0ms overall time: 144ms \end{verbatim} The largest intermediate automaton during the computation had 63 states. A few words of explanation are in order: here ``{\tt F}'' refers to the sequence $\bf f$, and ``{\tt E}'' is our abbreviation for $\exists$ and ``{\tt A}'' is our abbreviation for $\forall$. The symbol ``{\tt =>}'' is logical implication, and ``{\tt \&}'' is logical and. 
\end{proof} From now on, whenever we discuss the language accepted by an automaton, we will omit the $0^*$ at the beginning. We recall an old result of Karhum\"aki \cite[Thm.~2]{Karhumaki:1983}: \begin{theorem} $\bf f$ contains no fourth powers. \end{theorem} \begin{proof} We create a predicate for the orders of all fourth powers occurring in $\bf f$: $$(n > 0) \ \wedge \ \exists i \ \forall t<3n \ {\bf f}[i+t] = {\bf f}[i+n+t] . $$ The resulting automaton accepts nothing, so there are no fourth powers. \begin{verbatim} n > 0 with 4 states, in 46ms t < 3 * n with 30 states, in 178ms F[i + t] = F[i + t + n] with 62 states, in 493ms t < 3 * n => F[i + t] = F[i + t + n] with 352 states, in 39ms At t < 3 * n => F[i + t] = F[i + t + n] with 3 states, in 132ms Ei At t < 3 * n => F[i + t] = F[i + t + n] with 2 states, in 0ms n > 0 & Ei At t < 3 * n => F[i + t] = F[i + t + n] with 2 states, in 0ms overall time: 888ms \end{verbatim} \end{proof} The largest intermediate automaton in the computation had 952 states. Next, we move on to a description of the orders of squares occurring in $\bf f$. An old result of S\'e\'ebold \cite{Seebold:1985b} (also see \cite{Iliopoulos&Moore&Smyth:1997,Fraenkel&Simpson:1999}) states \begin{theorem} All squares in $\bf f$ are of order $F_n$ for some $n \geq 2$. Furthermore, for all $n \geq 2$, there exists a square of order $F_n$ in $\bf f$. \label{squares} \end{theorem} \begin{proof} We create a predicate for the lengths of squares: $$(n > 0) \ \wedge \ \exists i \ \forall t<n \ {\bf f}[i+t] = {\bf f}[i+n+t] .$$ When we run this predicate, we obtain an automaton that accepts exactly the language $10^*$. 
Here is the log file: \begin{verbatim} n > 0 with 4 states, in 38ms t < n with 7 states, in 5ms F[i + t] = F[i + t + n] with 62 states, in 582ms t < n => F[i + t] = F[i + t + n] with 92 states, in 12ms At t < n => F[i + t] = F[i + t + n] with 7 states, in 49ms Ei At t < n => F[i + t] = F[i + t + n] with 3 states, in 1ms n > 0 & Ei At t < n => F[i + t] = F[i + t + n] with 3 states, in 0ms overall time: 687ms \end{verbatim} \end{proof} The largest intermediate automaton had 236 states. We can easily get much, much more information about the square occurrences in $\bf f$. The positions of all squares in $\bf f$ were computed by Iliopoulos, Moore, and Smyth \cite[\S~2]{Iliopoulos&Moore&Smyth:1997}, but their description is rather complicated and takes 5 pages to prove. Using our approach, we created an automaton accepting the language $$ \{ (n,i)_F \ : \ (n > 0) \ \wedge \ \forall t<n \ {\bf f}[i+t] = {\bf f}[i+n+t] \} . $$ This automaton has only 6 states and efficiently encodes the orders and starting positions of each square in $\bf f$. During the computation, the largest intermediate automaton had 236 states. Thus we have proved \begin{theorem} The language $$ \{ (n,i)_F \ : \ \text{there is a square of order $n$ beginning at position $i$ in {\bf f}} \}$$ is accepted by the automaton in Figure~\ref{squareorders}. \begin{figure}[H] \begin{center} \includegraphics[width=6.5in]{fibsquares.pdf} \caption{Automaton accepting orders and positions of all squares in $\bf f$} \label{squareorders} \end{center} \end{figure} \end{theorem} Next, we examine the cubes in $\bf f$. Evidently Theorem~\ref{squares} implies that any cube in $\bf f$ must be of order $F_n$ for some $n$. However, not every order occurs. \begin{theorem} The cubes in $\bf f$ are of order $F_n$ for $n \geq 4$, and a cube of each such order occurs. 
\end{theorem} \begin{proof} We use the predicate $$(n > 0) \ \wedge \ \exists i \ \forall t<2n \ {\bf f}[i+t] = {\bf f}[i+n+t] .$$ When we run our program, we obtain an automaton accepting exactly the language $(100)0^*$, which corresponds to $F_n$ for $n \geq 4$. \begin{verbatim} n > 0 with 4 states, in 34ms t < 2 * n with 16 states, in 82ms F[i + t] = F[i + t + n] with 62 states, in 397ms t < 2 * n => F[i + t] = F[i + t + n] with 198 states, in 17ms At t < 2 * n => F[i + t] = F[i + t + n] with 7 states, in 87ms Ei At t < 2 * n => F[i + t] = F[i + t + n] with 5 states, in 1ms n > 0 & Ei At t < 2 * n => F[i + t] = F[i + t + n] with 5 states, in 0ms overall time: 618ms \end{verbatim} \end{proof} The largest intermediate automaton had 674 states. Next, we encode the orders and positions of all cubes. We build a DFA accepting the language $$ \{ (n,i)_F \ : \ (n > 0) \ \wedge \ \forall t<2n \ {\bf f}[i+t] = {\bf f}[i+n+t] \} . $$ \begin{theorem} The language $$ \{ (n,i)_F \ : \ \text{there is a cube of order $n$ beginning at position $i$ in {\bf f}} \}$$ is accepted by the automaton in Figure~\ref{cubeorders}. \begin{figure}[H] \begin{center} \includegraphics[width=6.5in]{fibcubes.pdf} \caption{Automaton accepting orders and positions of all cubes in $\bf f$} \label{cubeorders} \end{center} \end{figure} \end{theorem} Finally, we consider all the maximal repetitions in $\bf f$. Let $p(x)$ denote the length of the least period of $x$. If ${\bf x} = a_0 a_1 \cdots$, by ${\bf x}[i..j]$ we mean $a_i a_{i+1} \cdots a_j$. Following Kolpakov and Kucherov \cite{Kolpakov&Kucherov:1999a}, we say that ${\bf f}[i..i+n-1]$ is a {\it maximal repetition} if \begin{itemize} \item[(a)] $p({\bf f}[i..i+n-1]) \leq n/2$; \item[(b)] $p({\bf f}[i..i+n-1]) < p({\bf f}[i..i+n]) $; \item[(c)] If $i > 0$ then $p({\bf f}[i..i+n-1]) < p({\bf f}[i-1..i+n-1])$. 
\end{itemize} \begin{theorem} The factor ${\bf f}[i..i+n-1]$ is a maximal repetition of $\bf f$ iff $(n,i)_F$ is accepted by the automaton depicted in Figure~\ref{maxreps2}. \begin{figure}[H] \begin{center} \includegraphics[width=3.5in]{output_maxreps2.pdf} \caption{Automaton accepting occurrences of maximal repetitions in $\bf f$} \label{maxreps2} \end{center} \end{figure} \end{theorem} An {\it antisquare} is a nonempty word of the form $x \overline{x}$, where $\overline{x}$ denotes the complement of $x$ ($1$'s changed to $0$'s and vice versa). Its order is $|x|$. For a new (but small) result we prove \begin{theorem} The Fibonacci word $\bf f$ contains exactly four antisquare factors: $01, 10, 1001, $ and $10100101$. \end{theorem} \begin{proof} The predicate for having an antisquare of length $n$ is $$ \exists i \ \forall k < n \ {\bf f}[i+k] \not= {\bf f}[i+k+n] .$$ When we run this we get the automaton depicted in Figure~\ref{antisquare}, specifying that the only possible orders are $1$, $2$, and $4$, which correspond to words of length $2$, $4$, and $8$. \begin{figure}[H] \begin{center} \includegraphics[width=5.5in]{fib-antisquares.pdf} \caption{Automaton accepting orders of antisquares in $\bf f$} \label{antisquare} \end{center} \end{figure} Inspection of the factors of these lengths proves the result. \end{proof} \subsection{Palindromes and antipalindromes} We now turn to a characterization of the palindromes in $\bf f$. Using the predicate $$ \exists i \ \forall j<n \ {\bf f}[i+j] = {\bf f}[i+n-1-j], $$ we specify those lengths $n$ for which there is a palindrome of length $n$. Our program then recovers the following result of Chuan \cite{Chuan:1993b}: \begin{theorem} There exist palindromes of every length $\geq 0$ in $\bf f$. \end{theorem} We could also characterize the positions of all nonempty palindromes. The resulting 21-state automaton is not particularly enlightening, but is included here to show the kind of complexity that can arise. 
\begin{figure}[H] \begin{center} \includegraphics[width=4.8in]{output_palindrome-positions.pdf} \caption{Automaton accepting orders and positions of all nonempty palindromes in $\bf f$} \label{palindrome-orders} \end{center} \end{figure} Although the automaton in Figure~\ref{palindrome-orders} encodes all palindromes, more specific information is a little hard to deduce from it. For example, let's prove a result of Droubay \cite{Droubay:1995}: \begin{theorem} The Fibonacci word $\bf f$ has exactly one palindromic factor of length $n$ if $n$ is even, and exactly two palindromic factors of length $n$ if $n$ is odd. \end{theorem} \begin{proof} First, we obtain an expression for the lengths $n$ for which there is exactly one palindromic factor of length $n$. \begin{multline*} \exists i \ (\forall t<n \ {\bf f}[i+t] = {\bf f}[i+n-1-t]) \ \wedge \ \\ \forall j \ (\forall s<n\ {\bf f}[j+s] = {\bf f}[j+n-1-s]) \implies ( \forall u<n\ {\bf f}[i+u] = {\bf f}[j+u]) \end{multline*} The first part of the predicate asserts that ${\bf f}[i..i+n-1]$ is a palindrome, and the second part asserts that any palindrome ${\bf f}[j..j+n-1]$ of the same length must in fact be equal to ${\bf f}[i..i+n-1]$. When we run this predicate through our program we get the automaton depicted below in Figure~\ref{onepal}. \begin{figure}[H] \begin{center} \includegraphics[width=6.5in]{1-pal-lengths.pdf} \caption{Automaton accepting lengths with exactly one palindrome} \label{onepal} \end{center} \end{figure} It may not be obvious, but this automaton accepts exactly the Fibonacci representations of the even numbers. The easiest way to check this is to use our program on the predicate $\exists i \ n = 2i$ and verify that the resulting automaton is isomorphic to that in Figure~\ref{onepal}. Next, we write down a predicate for the existence of exactly two distinct palindromes of length $n$. 
The predicate asserts the existence of two palindromes ${\bf x}[i..i+n-1]$ and ${\bf x}[j..j+n-1]$ that are distinct and for which any palindrome of the same length must be equal to one of them. \begin{multline*} \exists i\ \exists j\ (\forall t<n\ {\bf f}[i+t] = {\bf f}[i+n-1-t]) \ \wedge \ (\forall s<n\ {\bf f}[j+s] = {\bf f}[j+n-1-s]) \ \wedge \ \\ (\exists m<n\ {\bf f}[i+m] \not= {\bf f}[j+m]) \ \wedge \ \\ ( \forall u (\forall k < n\ {\bf f}[u+k] = {\bf f}[u+n-1-k]) \implies (( \forall l<n\ {\bf f}[u+l] = {\bf f}[i+l]) \ \vee \ (\forall p<n \ {\bf f}[u+p] = {\bf f}[j+p]))) \end{multline*} Again, running this through our program gives us an automaton accepting the Fibonacci representations of the odd numbers. We omit the automaton. \end{proof} The prefixes are factors of particular interest. Let us determine which prefixes are palindromes: \begin{theorem} The prefix ${\bf f}[0..n-1]$ of length $n$ is a palindrome if and only if $n = F_i - 2$ for some $i \geq 3$. \end{theorem} \begin{proof} We use the predicate $$ \forall i<n\ {\bf f}[i] = {\bf f}[n-1-i]$$ obtaining an automaton accepting $\epsilon + 1 + 10(10)^*(0+01)$, which are precisely the representations of $F_i - 2$. \end{proof} Next, we turn to the property of ``mirror invariance''. We say an infinite word $\bf w$ is mirror-invariant if whenever $x$ is a factor of $\bf w$, then so is $x^R$. We can check this for $\bf f$ by creating a predicate for the assertion that for each factor $x$ of length $n$, the factor $x^R$ appears somewhere else: $$\forall i \geq 0 \ \exists j \text{ such that } {\bf f}[i..i+n-1] = {\bf f}[j..j+n-1]^R .$$ When we run this through our program we discover that it accepts the representations of all $n \geq 0$. 
Here is the log: \begin{verbatim} t < n with 7 states, in 99ms F[i + t] = F[j + n - 1 - t] with 264 states, in 7944ms t < n => F[i + t] = F[j + n - 1 - t] with 185 states, in 89ms At t < n => F[i + t] = F[j + n - 1 - t] with 35 states, in 182ms Ej At t < n => F[i + t] = F[j + n - 1 - t] with 5 states, in 2ms Ai Ej At t < n => F[i + t] = F[j + n - 1 - t] with 3 states, in 6ms overall time: 8322ms \end{verbatim} Thus we have proved: \begin{theorem} The word ${\bf f}$ is mirror invariant. \end{theorem} An {\it antipalindrome} is a word $x$ satisfying $x = \overline{x^R}$. For a new (but small) result, we determine all possible antipalindromes in $\bf f$: \begin{theorem} The only nonempty antipalindromes in $\bf f$ are $01$, $10$, $(01)^2$, and $(10)^2$. \end{theorem} \begin{proof} Let us write a predicate specifying that ${\bf f}[i..i+n-1]$ is a nonempty antipalindrome, and further that it is a first occurrence of such a factor: $$ (n > 0) \ \wedge\ (\forall j<n \ {\bf f}[i+j] \not= {\bf f}[i+n-1-j]) \ \wedge \ (\forall i' < i \ \exists j<n\ {\bf f}[i'+j] \not= {\bf f}[i+j]) . $$ When we run this through our program, the language of $(n,i)_F$ satisfying this predicate is accepted by the following automaton: \begin{figure}[H] \begin{center} \includegraphics[width=5in]{antipal.pdf} \caption{Automaton accepting orders and positions of first occurrences of nonempty antipalindromes in $\bf f$} \label{antipal} \end{center} \end{figure} It follows that the only $(n,i)$ pairs accepted are $(2,0), (2,1), (4,3), (4,4)$, corresponding, respectively, to the strings $01$, $10$, $(01)^2$, and $(10)^2$. \end{proof} \subsection{Special factors} Next we turn to special factors. It is well-known (and we will prove it in Theorem~\ref{sturmcomp} below), that ${\bf f}$ has exactly $n+1$ distinct factors of length $n$ for each $n \geq 0$. This implies that there is exactly one factor $x$ of each length $n$ with the property that both $x0$ and $x1$ are factors. 
Such a factor is called {\it right-special} or sometimes just {\it special}. We can write a predicate that expresses the assertion that the factor ${\bf f}[i..i+n-1]$ is the unique special factor of length $n$, and furthermore, that it is the first occurrence of that factor, as follows: \begin{multline*} (\forall i' < i \ \exists s < n \ {\bf f}[i'+s] \not= {\bf f}[i+s]) \ \wedge \ \exists j \ \exists k \ ((\forall t < n\ {\bf f}[j+t] = {\bf f}[i+t]) \\ \wedge \ (\forall u < n\ {\bf f}[k+u] = {\bf f}[i+u]) \ \wedge \ ({\bf f}[j+n] \not= {\bf f}[k+n])) . \end{multline*} \begin{theorem} The automaton depicted below in Figure~\ref{special} accepts the language $$\{ (i,n)_F \ : \ \text{the factor } {\bf f}[i..i+n-1] \text{ is the first occurrence of the unique special factor of length $n$} \} .$$ \begin{figure}[H] \begin{center} \includegraphics[width=3.5in]{output_special-factors.pdf} \caption{Automaton accepting first positions and lengths of special factors in $\bf f$} \label{special} \end{center} \end{figure} \end{theorem} Furthermore it is known (e.g., \cite[Lemma 5]{Pirillo:1997}) that \begin{theorem} The unique special factor of length $n$ is ${\bf f}[0..n-1]^R$. \end{theorem} \begin{proof} We create a predicate that says that if a factor is special then it matches ${\bf f}[0..n-1]^R$. When we run this we discover that all lengths are accepted. \end{proof} \subsection{Least periods} We now turn to least periods of factors of ${\bf f}$; see \cite{Saari:2007} and \cite{Epple&Siefken:2014} and \cite[Corollary 4]{Currie&Saari:2009}. Let $P$ denote the assertion that $n$ is a period of the factor ${\bf f}[i..j]$, as follows: \begin{eqnarray*} P(n,i,j) &=& {\bf f}[i..j-n] = {\bf f}[i+n..j] \\ &=& \forall \ t\ \text{ with $i \leq t \leq j-n$ we have } {\bf f}[t] = {\bf f}[t+n] . 
\end{eqnarray*} Using this, we can express the predicate $LP$ that $n$ is the least period of ${\bf f}[i..j]$: $$ LP(n,i,j) = P(n,i,j) \text{ and } \forall n' \text{ with } 1 \leq n' < n \ \neg P(n',i,j).$$ Finally, we can express the predicate that $n$ is a least period as follows $$L(n) = \exists i, j \geq 0 \text{ with $0 \leq i+n \leq j$ } LP(n, i, j) .$$ Using an implementation of this, we can reprove the following theorem of Saari \cite[Thm.~2]{Saari:2007}: \begin{theorem} If a word $w$ is a nonempty factor of the Fibonacci word, then the least period of $w$ is a Fibonacci number $F_n$ for $n \geq 2$. Furthermore, each such period occurs. \end{theorem} \begin{proof} We ran our program on the appropriate predicate and found the resulting automaton accepts $10^*$, corresponding to $F_n$ for $n \geq 2$. \end{proof} Furthermore, we can actually encode information about all least periods. The automaton depicted in Figure~\ref{leastp} accepts triples $(n,p,i)$ such that $p$ is a least period of ${\bf f}[i..i+n-1]$. \begin{figure}[H] \begin{center} \includegraphics[width=5.5in]{output_all-least-periods.pdf} \caption{Automaton encoding least periods of all factors in $\bf f$} \label{leastp} \end{center} \end{figure} We also have the following result, which seems to be new. \begin{theorem} Let $n \geq 1$, and define $\ell(n)$ to be the smallest integer that is the least period of some length-$n$ factor of $\bf f$. Then $\ell(n) = F_j$ for $j \geq 1$ if $L_j-1 \leq n \leq L_{j+1}-2$, where $L_j$ is the $j$'th Lucas number defined in Section~\ref{fibrep}. \label{allpers} \end{theorem} \begin{proof} We create an automaton accepting $(n,p)_F$ such that (a) there exists at least one length-$n$ factor of period $p$ and (b) for all length-$n$ factors $x$, if $q$ is a period of $x$, then $q \geq p$. This automaton is depicted in Figure~\ref{least-period-over} below. 
\begin{figure}[H] \begin{center} \includegraphics[width=6.5in]{leastper.pdf} \caption{Automaton encoding smallest period over all length-$n$ factors in $\bf f$} \label{least-period-over} \end{center} \end{figure} The result now follows by inspection and the fact that $(L_j-1)_F = 10 (01)^{(j-2)/2}$ if $j \geq 2$ is even, and $100 (10)^{(j-3)/2}$ if $j \geq 3$ is odd. \end{proof} \subsection{Quasiperiods} We now turn to quasiperiods. An infinite word $\bf a$ is said to be {\it quasiperiodic} if there is some finite nonempty word $x$ such that ${\bf a}$ can be completely ``covered'' with translates of $x$. Here we study the stronger version of quasiperiodicity where the first copy of $x$ used must be aligned with the left edge of $\bf a$ and is not allowed to ``hang over''; these are called {\it aligned covers} in \cite{Christou&Crochemore&Iliopoulos:2012}. More precisely, for us ${\bf a} = a_0 a_1 a_2 \cdots$ is quasiperiodic if there exists $x$ such that for all $i \geq 0$ there exists $j\geq 0$ with $i-n < j \leq i$ such that $a_j a_{j+1} \cdots a_{j+n-1} = x$, where $n = |x|$. Such an $x$ is called a {\it quasiperiod}. Note that the condition $j \geq 0$ implies that, in this interpretation, any quasiperiod must actually be a prefix of $\bf a$. The quasiperiodicity of the Fibonacci word $\bf f$ was studied by Christou, Crochemore, and Iliopoulos \cite{Christou&Crochemore&Iliopoulos:2012}, where we can (more or less) find the following theorem: \begin{theorem} A nonempty length-$n$ prefix of $\bf f$ is a quasiperiod of $\bf f$ if and only if $n$ is not of the form $F_k - 1$ for some $k \geq 3$. \end{theorem} In particular, the following prefix lengths are not quasiperiods: $1$, $2$, $4$, $7$, $12$, and so forth. 
\begin{proof} We write a predicate for the assertion that the length-$n$ prefix is a quasiperiod: $$\forall i \geq 0 \ \exists j \text{ with } i-n < j \leq i \text{ such that } \forall t<n \ {\bf f}[t] = {\bf f}[j+t] .$$ When we do this, we get the automaton in Figure~\ref{quasi} below. Inspection shows that this DFA accepts all canonical representations, except those of the form $1(01)^*(\epsilon + 0)$, which are precisely the representations of the numbers $F_k - 1$ for $k \geq 3$. \begin{figure}[H] \begin{center} \includegraphics[width=4in]{output_quasiperiods.pdf} \caption{Automaton accepting lengths of prefixes of $\bf f$ that are quasiperiods} \label{quasi} \end{center} \end{figure} \end{proof} \subsection{Unbordered factors} Next we look at unbordered factors. A word $y$ is said to be a {\it border} of $x$ if $y$ is both a nonempty proper prefix and suffix of $x$. A word $x$ is {\it bordered} if it has at least one border. It is easy to see that a word $y$ is bordered iff it has a border of length $\ell$ with $0 < \ell \leq |y|/2$. \begin{theorem} The only unbordered nonempty factors of $\bf f$ are of length $F_n$ for $n \geq 2$, and there are two for each such length. For $n \geq 3$ these two unbordered factors have the property that one is a reverse of the other. 
\end{theorem} \begin{proof} We can express the property of having an unbordered factor of length $n$ as follows $$ \exists i\ \forall j, 1 \leq j \leq n/2, \ \exists t<j\ {\bf f}[i+t] \not= {\bf f}[i+n-j+t] .$$ Here is the log: {\footnotesize \begin{verbatim} j >= 1 with 4 states, in 155ms 2 * j <= n with 16 states, in 91ms j >= 1 & 2 * j <= n with 21 states, in 74ms t < j with 7 states, in 17ms F[i + t] != F[i + n - j + t] with 321 states, in 10590ms t < j & F[i + t] != F[i + n - j + t] with 411 states, in 116ms Et t < j & F[i + t] != F[i + n - j + t] with 85 states, in 232ms j >= 1 & 2 * j <= n => Et t < j & F[i + t] != F[i + n - j + t] with 137 states, in 19ms Aj j >= 1 & 2 * j <= n => Et t < j & F[i + t] != F[i + n - j + t] with 7 states, in 27ms Ei Aj j >= 1 & 2 * j <= n => Et t < j & F[i + t] != F[i + n - j + t] with 3 states, in 0ms overall time: 11321ms \end{verbatim} } The automaton produced accepts the Fibonacci representation of $0$ and $F_n$ for $n \geq 2$. Next, we make the assertion that there are exactly two such factors for each appropriate length. We can do this by saying there is an unbordered factor of length $n$ beginning at position $i$, another one beginning at position $k$, and these factors are distinct, and for every unbordered factor of length $n$, it is equal to one of these two. When we do this we discover that the representations of all $F_n$ for $n \geq 2$ are accepted. Finally, we make the assertion that for any two unbordered factors of length $n$, either they are equal or one is the reverse of the other. When we do this we discover all lengths except length $1$ are accepted. (That is, for all lengths other than $F_n$, $n \geq 2$, the assertion is trivially true since there are no unbordered factors; for $F_2 = 1$ it is false since $0$ and $1$ are the unbordered factors and one is not the reverse of the other; and for all larger $F_i$ the property holds.) 
\end{proof} \subsection{Recurrence, uniform recurrence, and linear recurrence} We now turn to various questions about recurrence. A factor $x$ of an infinite word $\bf w$ is said to be {\it recurrent} if it occurs infinitely often. The word $\bf w$ is recurrent if every factor that occurs at least once is recurrent. A factor $x$ is {\it uniformly recurrent} if there exists a constant $c = c(x)$ such that any factor ${\bf w}[i..i+c]$ is guaranteed to contain an occurrence of $x$. If all factors are uniformly recurrent then $\bf w$ is said to be uniformly recurrent. Finally, ${\bf w}$ is {\it linearly recurrent} if the constant $c(x)$ is $O(|x|)$. \begin{theorem} The word {\bf f} is recurrent, uniformly recurrent, and linearly recurrent. \end{theorem} \begin{proof} A predicate for all length-$n$ factors being recurrent: $$ \forall i \geq 0\ \forall j \geq 0\ \exists k > j\ \forall t<n \ {\bf f}[i+t] = {\bf f}[k+t] .$$ This predicate says that for every factor $z = {\bf f}[i..i+n-1]$ and every position $j$ we can find another occurrence of $z$ beginning at a position $k > j$. When we run this we discover that the representations of all $n \geq 0$ are accepted. So $\bf f$ is recurrent. A predicate for uniform recurrence: $$ \forall i\ \exists \ell\ \forall j \ \exists s, \ j \leq s \leq j+\ell-n \ \forall p<n \ {\bf f}[s+p] = {\bf f}[i+p] .$$ Once again, when we run this we discover that the representations of all $n \geq 0$ are accepted. So $\bf f$ is uniformly recurrent. A predicate for linear recurrence with constant $C$: $$ \forall i\ \forall j \ \exists s, \ j \leq s \leq j+Cn \ \forall p<n \ {\bf f}[s+p] = {\bf f}[i+p] .$$ When we run this with $C = 4$, we discover that the representations of all $n \geq 0$ are accepted (but, incidentally, not for $C = 3$). So $\bf f$ is linearly recurrent. \end{proof} \begin{remark} We can decide the property of linear recurrence for Fibonacci-automatic words even without knowing an explicit value for the constant $C$.
The idea is to accept those pairs $(n,t)$ such that there exists a factor of length $n$ with two consecutive occurrences separated by distance $t$. Letting $S$ denote the set of such pairs, a sequence is linearly recurrent iff $\limsup_{(n,t)\in S} t/n < \infty$, which can be decided using an argument like that in \cite[Thm.~8]{Schaeffer&Shallit:2012}. However, we do not know how to compute, in general, the exact value of the $\limsup$ for Fibonacci representation (which we do indeed know for base-$k$ representation), although we can approximate it arbitrarily closely. \end{remark} \subsection{Lyndon words} Next, we turn to some results about Lyndon words. Recall that a nonempty word $x$ is a {\it Lyndon word\/} if it is lexicographically less than all of its nonempty proper suffixes.\footnote{There is also a version where ``suffixes'' is replaced by ``prefixes''.} We reprove some recent results of Currie and Saari \cite{Currie&Saari:2009} and Saari \cite{Saari:2014}. \begin{theorem} Every Lyndon factor of $\bf f$ is of length $F_n$ for some $n \geq 2$, and each of these lengths has a Lyndon factor. \end{theorem} \begin{proof} Here is the predicate specifying that there is a factor of length $n$ that is Lyndon: $$ \exists i\ \forall j, 1 \leq j < n, \ \exists t < n-j \ (\forall u<t \ {\bf f}[i+u]={\bf f}[i+j+u]) \ \wedge \ {\bf f}[i+t] < {\bf f}[i+j+t] .$$ When we run this we get the representations $10^*$, which proves the result. \end{proof} \begin{theorem} For $n \geq 2$, every length-$n$ Lyndon factor of $\bf f$ is a conjugate of ${\bf f}[0..n-1]$. \end{theorem} \begin{proof} Using the predicate from the previous theorem as a base, we can create a predicate specifying that every length-$n$ Lyndon factor is a conjugate of ${\bf f}[0..n-1]$. When we do this we discover that all lengths except $1$ are accepted. (The only lengths having a Lyndon factor are $F_n$ for $n \geq 2$, so all but $F_2$ have the desired property.)
\end{proof} \subsection{Critical exponents} Recall from Section~\ref{proofsf} that $\exp(w) = |w|/P$, where $P$ is the smallest period of $w$. The {\it critical exponent} of an infinite word $\bf x$ is the supremum, over all factors $w$ of $\bf x$, of $\exp(w)$. A classic result of \cite{Mignosi&Pirillo:1992} is \begin{theorem} The critical exponent of $\bf f$ is $2+ \alpha$, where $\alpha = (1+\sqrt{5})/2$. \end{theorem} Although it is known that the critical exponent is computable for $k$-automatic sequences \cite{Schaeffer&Shallit:2012}, we do not yet know this for Fibonacci-automatic sequences (and more generally Pisot-automatic sequences). However, with a little inspired guessing about the maximal repetitions, we can complete the proof. \begin{proof} For each length $n$, the smallest possible period $p$ of a factor is given by Theorem~\ref{allpers}. Hence the critical exponent is given by $\lim_{j \rightarrow \infty} (L_{j+1}-2)/F_j$, which is $2+\alpha$. \end{proof} We can also ask the same sort of questions about the {\it initial critical exponent} of a word $\bf w$, which is the supremum over the exponents of all prefixes of $\bf w$. \begin{theorem} The initial critical exponent of $\bf f$ is $1+\alpha$. \end{theorem} \begin{proof} We create an automaton $M_{\rm ice}$ accepting the language $$L = \{ (n,p)_F \ : \ {\bf f}[0..n-1] \text{ has least period } p \} .$$ It is depicted in Figure~\ref{ice} below. From the automaton, it is easy to see that the least period of the prefix of length $n \geq 1$ is $F_j$ for $j \geq 2$ and $F_{j+1}-1 \leq n \leq F_{j+2} - 2$. Hence the initial critical exponent is given by $\limsup_{j \rightarrow \infty} (F_{j+2} - 2)/F_j$, which is $1+\alpha$. 
\begin{figure}[H] \begin{center} \includegraphics[width=6.5in]{leastperprefixes.pdf} \caption{Automaton accepting least periods of prefixes of length $n$} \label{ice} \end{center} \end{figure} \end{proof} \subsection{The shift orbit closure} The {\it shift orbit closure} of a sequence $\bf x$ is the set of all sequences $\bf t$ with the property that each prefix of $\bf t$ appears as a factor of $\bf x$. Note that this set can be much larger than the set of all suffixes of $\bf x$. The following theorem is well known \cite[Prop.~3, p.~34]{Borel&Laubie:1993}: \begin{theorem} The lexicographically least sequence in the shift orbit closure of $\bf f$ is $0{\bf f}$, and the lexicographically greatest is $1 {\bf f}$. \end{theorem} \begin{proof} We handle only the lexicographically least, leaving the lexicographically greatest to the reader. The idea is to create a predicate $P(n)$ for the lexicographically least sequence ${\bf b} = b_0 b_1 b_2 \cdots$ which is true iff $b_n = 1$. The following predicate encodes, first, that $b_n = 1$, and second, that if one chooses any length-($n+1$) factor $t$ of $\bf f$, then $b_0 \cdots b_n$ is equal to $t$ or lexicographically smaller than $t$. \begin{multline*} \exists j \ {\bf f}[j+n]=1 \ \wedge \ \forall k \ (( \forall s \leq n \ {\bf f}[j+s] = {\bf f}[k+s] ) \ \vee \ \\ (\exists i\leq n \text{ such that } {\bf f}[j+i] < {\bf f}[k+i] \ \wedge \ ( \forall t<i \ {\bf f}[j+t]={\bf f}[k+t] ))) \end{multline*} When we do this we get the following automaton, which is easily seen to generate the sequence $0 {\bf f}$. \begin{figure}[H] \begin{center} \includegraphics[width=6.5in]{lexleastorbit.pdf} \caption{Automaton accepting lexicographically least sequence in shift orbit closure of ${\bf f}$} \label{lexleastorbit} \end{center} \end{figure} \end{proof} \subsection{Minimal forbidden words} Let ${\bf x}$ be an infinite word.
A finite word $z = a_0 \cdots a_n$ is said to be {\it minimal forbidden} if $z$ is not a factor of $\bf x$, but both $a_1 \cdots a_n$ and $a_0 \cdots a_{n-1}$ are \cite{Currie&Rampersad&Saari:2013}. We can characterize all minimal forbidden words as follows: we create an automaton accepting the language \begin{multline*} \{ (i,n)_F \ : \ {\bf f}[i..i+n-1] \, \overline{{\bf f}[i+n]} \text{ is not a factor of $\bf f$ and } \\ {\bf f}[i+1..i+n-1] \, \overline{{\bf f}[i+n]} \text{ is a factor } \text{and } i \text{ is as small as possible } \}. \end{multline*} When we do so we find the words accepted are $$ [1,1] ([0,0][1,1])^* (\epsilon + [0,0]) .$$ This corresponds to the words $$ {\bf f}[F_n - 1..2F_n -3] \, \overline{{\bf f}[2F_n -2]} $$ for $n \geq 3$. The first few are $$ 11, 000, 10101, 00100100, 1010010100101, \ldots .$$ \subsection{Grouped factors} Cassaigne \cite{Cassaigne:1998} introduced the notion of \textit{grouped factors}. A sequence ${\bf a} = (a_i)_{i \geq 0}$ has grouped factors if, for all $n \geq 1$, there exists some position $m = m(n)$ such that ${\bf a}[m..m+\rho(n)+n-2]$ contains all the $\rho(n)$ length-$n$ blocks of $\bf a$, each block occurring exactly once. One consequence of his result is that the Fibonacci word has grouped factors. We can write a predicate for the property of having grouped factors, as follows: \begin{multline*} \forall n \geq 1 \quad \exists m, s \geq 0 \quad \forall i \geq 0 \\ \exists j \text{ s.t. } m \leq j \leq m+s \text{ and } {\bf a}[i..i+n-1] = {\bf a}[j..j+n-1] \text{ and } \\ \forall j', \ m \leq j' \leq m+s, \quad j \not= j' \text{ we have } {\bf a}[i..i+n-1] \not= {\bf a}[j'..j'+n-1] . \end{multline*} The first part of the predicate says that every length-$n$ block appears somewhere in the desired window, and the second says that it appears exactly once.
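For the Fibonacci word we have $\rho(n) = n+1$, so the window in question has length $2n$. The following brute-force sketch (ours, independent of the decision method) searches a long finite prefix of $\bf f$ for such a window, for small $n$; the function names are ours, and a `None` result would only mean no witness lies within the finite prefix examined.

```python
# Brute-force search for a "grouped factors" window in a finite prefix of
# the Fibonacci word: a window of length rho(n)+n-1 = 2n containing each
# distinct length-n factor exactly once.  Illustrative only.

def fib_word(nchars: int) -> str:
    a, b = "01", "0"
    while len(a) < nchars:
        a, b = a + b, a
    return a[:nchars]

def grouped_window(f: str, n: int):
    """Least m such that f[m : m + rho(n)+n-1] contains each distinct
    length-n factor of f exactly once, or None if no such window occurs
    in this finite prefix."""
    factors = sorted({f[i:i + n] for i in range(len(f) - n + 1)})
    width = len(factors) + n - 1          # = 2n, since rho(n) = n+1 here
    for m in range(len(f) - width + 1):
        w = f[m:m + width]
        if sorted(w[i:i + n] for i in range(width - n + 1)) == factors:
            return m
    return None

f = fib_word(3000)
for n in range(1, 8):
    print(n, grouped_window(f, n))
```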
(This five-quantifier definition can be viewed as a response to the question of Homer and Selman \cite{Homer&Selman:2011}, ``...in what sense would a problem that required at least three alternating quantifiers to describe be natural?") Using this predicate and our decision method, we verified that the Fibonacci word does indeed have grouped factors. \section{Mechanical proofs of properties of the finite Fibonacci words} \label{finitefib} Although our program is designed to answer questions about the properties of the infinite Fibonacci word $\bf f$, it can also be used to solve problems concerning the finite Fibonacci words $(X_n)$, defined as follows: $$ X_n = \begin{cases} \epsilon, & \text{if $n = 0$}; \\ 1, & \text{if $n = 1$}; \\ 0, & \text{if $n = 2$}; \\ X_{n-1} X_{n-2}, & \text{if $n > 2$}. \end{cases} $$ Note that $|X_n| = F_n$ for $n \geq 1$. (We caution the reader that there exist many variations on this definition in the literature, particularly with regard to indexing and initial values.) Furthermore, we have $\varphi(X_n) = X_{n+1}$ for $n \geq 1$. Our strategy for the finite Fibonacci words has two parts: \begin{itemize} \item[(i)] Instead of phrasing statements in terms of factors, we phrase them in terms of occurrences of factors (and hence in terms of the indices defining a factor). \item[(ii)] Instead of phrasing statements about finite Fibonacci words, we phrase them instead about {\it all\/} length-$n$ prefixes of $\bf f$. Then, since $X_i = {\bf f}[0..F_i - 1]$, we can deduce results about the finite Fibonacci words by considering the case where $n$ is a Fibonacci number $F_i$.
\end{itemize} To illustrate this idea, consider one of the most famous properties of the Fibonacci words, the {\it almost-commutative} property: letting $\eta(a_1 a_2 \cdots a_n) = a_1 a_2 \cdots a_{n-2} a_n a_{n-1}$ be the map that interchanges the last two letters of a string of length at least $2$, we have \begin{theorem} $X_{n-1} X_n = \eta(X_n X_{n-1})$ for $n \geq 2$. \end{theorem} We can verify this, and prove even more, using our method. \begin{theorem} Let $x = {\bf f}[0..i-1]$ and $y = {\bf f}[0..j-1]$ for $i > j > 1$. Then $xy = \eta(yx)$ if and only if $i = F_n$, $j = F_{n-1}$ for $n \geq 4$. \end{theorem} \begin{proof} The idea is to check, for each $i > j > 1$, whether $${\bf f}[0..i-1] {\bf f}[0..j-1] = \eta({\bf f}[0..j-1] {\bf f}[0..i-1]).$$ We can do this with the following predicate: \begin{multline*} (i>j) \ \wedge \ (j\geq 2) \ \wedge \ (\forall t,\ j\leq t<i,\ {\bf f}[t]= {\bf f}[t-j]) \ \wedge \\ (\forall s \leq j-3\ {\bf f}[s]={\bf f}[s+i-j]) \ \wedge \ ({\bf f}[j-2]={\bf f}[i-1]) \ \wedge \ ({\bf f}[j-1]={\bf f}[i-2]) . 
\end{multline*} The log of our program is as follows: {\tiny \begin{verbatim} i > j with 7 states, in 49ms j >= 2 with 5 states, in 87ms i > j & j >= 2 with 12 states, in 3ms j <= t with 7 states, in 3ms t < i with 7 states, in 17ms j <= t & t < i with 19 states, in 6ms F[t] = F[t - j] with 16 states, in 31ms j <= t & t < i => F[t] = F[t - j] with 62 states, in 31ms At j <= t & t < i => F[t] = F[t - j] with 14 states, in 43ms i > j & j >= 2 & At j <= t & t < i => F[t] = F[t - j] with 12 states, in 9ms s <= j - 3 with 14 states, in 72ms F[s] = F[s + i - j] with 60 states, in 448ms s <= j - 3 => F[s] = F[s + i - j] with 119 states, in 14ms As s <= j - 3 => F[s] = F[s + i - j] with 17 states, in 58ms i > j & j >= 2 & At j <= t & t < i => F[t] = F[t - j] & As s <= j - 3 => F[s] = F[s + i - j] with 6 states, in 4ms F[j - 2] = F[i - 1] with 20 states, in 34ms i > j & j >= 2 & At j <= t & t < i => F[t] = F[t - j] & As s <= j - 3 => F[s] = F[s + i - j] & F[j - 2] = F[i - 1] with 5 states, in 1ms F[j - 1] = F[i - 2] with 20 states, in 29ms i > j & j >= 2 & At j <= t & t < i => F[t] = F[t - j] & As s <= j - 3 => F[s] = F[s + i - j] & F[j - 2] = F[i - 1] & F[j - 1] = F[i - 2] with 5 states, in 1ms overall time: 940ms \end{verbatim} } The resulting automaton accepts $[1,0][0,1][0,0]^+$, which corresponds to $i = F_n$, $j = F_{n-1}$ for $n \geq 4$. \end{proof} An old result of S\'e\'ebold \cite{Seebold:1985b} is \begin{theorem} If $uu$ is a square occurring in $\bf f$, then $u$ is conjugate to some finite Fibonacci word. 
\end{theorem} \begin{proof} Assertion $\conj(i,j,k,\ell)$ means ${\bf f}[i..j]$ is a conjugate of ${\bf f}[k..\ell]$ (assuming $j-i = \ell-k$) $$\conj(i,j,k,\ell) := \exists m \ {\bf f}[i..i+\ell-m] = {\bf f}[m..\ell] \text{ and } {\bf f}[i+\ell-m+1..j] = {\bf f}[k..m-1].$$ Predicate: $$ ({\bf f}[i..i+n-1] = {\bf f}[i+n..i+2n-1]) \implies \conj(i,i+n-1,0,n-1) $$ This asserts that any square $uu$ of order $n$ appearing in $\bf f$ is conjugate to ${\bf f}[0..n-1]$. When we implement this, we discover that all lengths are accepted. This makes sense since the only lengths corresponding to squares are $F_n$, and for all other lengths the hypothesis of the implication is false. \end{proof} We now reprove an old result of de Luca \cite{deLuca:1981}. Recall that a primitive word is a non-power; that is, a word that cannot be written in the form $x^n$ where $n$ is an integer $\geq 2$. \begin{theorem} All finite Fibonacci words are primitive. \end{theorem} \begin{proof} The factor ${\bf f}[i..j]$ is a power if and only if there exists $d$, $0 < d < j-i+1 $, such that ${\bf f}[i..j-d] = {\bf f}[i+d..j]$ and ${\bf f}[j-d+1..j] = {\bf f}[i..i+d-1]$. Letting $\pow(i,j)$ denote this predicate, the predicate $$ \neg \pow(0,n-1) $$ expresses the claim that the length-$n$ prefix ${\bf f}[0..n-1]$ is primitive. When we implement this, we discover that the prefix of every length is primitive, except those prefixes of length $2 F_n$ for $n \geq 4$. Since $2F_n$ is never a Fibonacci number for $n \geq 4$, every prefix of length $F_m$ is primitive, and the theorem follows. \end{proof} A theorem of Chuan \cite[Thm.~3]{Chuan:1993b} states that the finite Fibonacci word $X_n$, for $n \geq 5$, is the product of two palindromes in exactly one way: with the first factor of length $F_{n-1} -2$ and the second of length $F_{n-2} + 2$. (Actually, Chuan claimed this was true for all Fibonacci words, but, for example, for $010$ there are evidently two different factorizations of the form $(\epsilon)(010)$ and $(010)\epsilon$.)
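Chuan's claim is easy to check directly on the words $X_n$ themselves; the following brute-force sketch (ours, not part of the decision method) counts the palindrome-pair factorizations of each $X_n$ for small $n$.

```python
# Brute-force count of factorizations X_n = p q with p, q (possibly empty)
# palindromes, for the finite Fibonacci words as defined in the text.

def finite_fib_words(m: int):
    X = ["", "1", "0"]                    # X_0, X_1, X_2 as in the text
    while len(X) <= m:
        X.append(X[-1] + X[-2])           # X_n = X_{n-1} X_{n-2}
    return X

def pal_splits(w: str):
    """Cut positions p such that w[:p] and w[p:] are both palindromes."""
    return [p for p in range(len(w) + 1)
            if w[:p] == w[:p][::-1] and w[p:] == w[p:][::-1]]

X = finite_fib_words(12)
F = [len(x) for x in X]                   # F[n] = |X_n| for n >= 1
for n in range(5, 13):
    assert pal_splits(X[n]) == [F[n - 1] - 2]   # unique split, as claimed
print(pal_splits(X[4]))                   # X_4 = 010: two splits, [0, 3]
```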
We can prove something more general using our method: \begin{theorem} If the length-$n$ prefix ${\bf f}[0..n-1]$ of $\bf f$ is the product of two (possibly empty) palindromes, then $(n)_F$ is accepted by the automaton in Figure~\ref{pal2} below. \begin{figure}[H] \begin{center} \includegraphics[width=6.5in]{product2pal.pdf} \caption{Automaton accepting lengths of prefixes that are the product of two palindromes} \label{pal2} \end{center} \end{figure} Furthermore, if the length-$n$ prefix ${\bf f}[0..n-1]$ of $\bf f$ is the product of two (possibly empty) palindromes in exactly one way, then $(n)_F$ is accepted by the automaton in Figure~\ref{pal2u} below. \begin{figure}[H] \begin{center} \includegraphics[width=6.5in]{product2pals1way.pdf} \caption{Automaton accepting lengths of prefixes that are the product of two palindromes in exactly one way} \label{pal2u} \end{center} \end{figure} Evidently, this includes all $n$ of the form $F_j$ for $j \geq 5$. \end{theorem} \begin{proof} For the first, we use the predicate $$ \exists p\leq n\ \left( (\forall t<p\ {\bf f}[t] = {\bf f}[p-1-t]) \ \wedge\ (\forall u< n-p\ {\bf f}[p+u] = {\bf f}[n-1-u]) \right) .$$ For the second, we use the predicate \begin{multline*} \exists p\leq n\ ( (\forall t<p\ {\bf f}[t] = {\bf f}[p-1-t]) \ \wedge\ (\forall u< n-p\ {\bf f}[p+u] = {\bf f}[n-1-u]) ) \ \wedge \ \\ (\forall q\leq n \ ( (\forall m<q\ {\bf f}[m] = {\bf f}[q-1-m]) \ \wedge\ (\forall v < n-q\ {\bf f}[q+v] = {\bf f}[n-1-v]) ) \implies p=q ) . \end{multline*} \end{proof} A result of Cummings, Moore, and Karhum\"aki \cite{Cummings&Moore&Karhumaki:1996} states that the borders of the finite Fibonacci word ${\bf f}[0..F_n - 1]$ are precisely the words ${\bf f}[0..F_{n-2k} - 1]$ for $2k < n$. We can prove this, and more: \begin{proof} Consider the pairs $(n,m)$ such that $1 \leq m < n$ and ${\bf f}[0..m-1]$ is a border of ${\bf f}[0..n-1]$. 
Their Fibonacci representations are accepted by the automaton below in Figure~\ref{borders}. \begin{figure}[H] \begin{center} \includegraphics[width=3in]{output_borders.pdf} \caption{Automaton encoding borders of prefixes of $\bf f$} \label{borders} \end{center} \end{figure} We use the predicate $$ (n > m) \ \wedge \ (m \geq 1) \ \wedge \ \forall i<m \ {\bf f}[i] = {\bf f}[n-m+i] .$$ By following the paths with first coordinate of the form $10^+$ we recover the result of Cummings, Moore, and Karhum\"aki as a special case. \end{proof} \section{Avoiding the pattern $x x x^R$ and the Rote-Fibonacci word} \label{rotefib} In this section we show how to apply our decision method to an interesting and novel avoidance property: avoiding the pattern $x x x^R$ . An example matching this pattern in English is a factor of the word {\tt bepepper}, with $x = {\tt ep}$. Here, however, we are concerned only with the binary alphabet $\Sigma_2 = \lbrace 0, 1 \rbrace$. Although avoiding patterns with reversal has been considered before (e.g., \cite{Rampersad&Shallit:2005,Bischoff&Nowotka:2011,Currie:2011,Bischoff&Currie&Nowotka:2012}), it seems our particular problem has not been studied. If our goal is just to produce some infinite word avoiding $x x x^R$, then a solution seems easy: namely, the infinite word $(01)^\omega$ clearly avoids $x x x^R$, since if $|x| = n$ is odd, then the second factor of length $n$ cannot equal the first (since the first symbol differs), while if $|x| = n$ is even, the first symbol of the third factor of length $n$ cannot be the last symbol of $x$. In a moment we will see that even this question seems more subtle than it first appears, but for the moment, we'll change our question to \medskip \centerline{\it Are there infinite aperiodic binary words avoiding $x x x^R$?} \medskip To answer this question, we'll study a special infinite word, which we call the {\it Rote-Fibonacci word}. 
(The name comes from the fact that it is a special case of a class of words discussed in 1994 by Rote \cite{Rote:1994}.) Consider the following transducer $T$: \begin{figure}[H] \begin{center} \begin{tikzpicture}[node distance=3cm,on grid,>=stealth',initial text=,auto, every state/.style={inner sep=1pt,minimum size=1cm}, every loop/.style={shorten >=0,looseness=0}] \node[state,initial] (q_0) {$q_0$}; \node[state] (q_1) [right=of q_0] {$q_1$}; \path[->] (q_0.10) edge node {{\tt 0}/{\tt 00}, {\tt 1}/{\tt 0}} (q_1.170) (q_1.190) edge node {{\tt 0}/{\tt 11}, {\tt 1}/{\tt 1}} (q_0.350); \end{tikzpicture} \end{center} \caption{Transducer converting Fibonacci words to Rote-Fibonacci words} \label{fig:trans-f-rf} \end{figure} This transducer acts on words by following the transitions and outputting the concatenation of the outputs associated with each transition. Thus, for example, the input $01001$ gets transduced to the output $00100110$. \begin{theorem} The Rote-Fibonacci word $${\bf r} = 001001101101100100110110110010010011011001001001101100100100 \cdots = r_0 r_1 r_2 \cdots$$ has the following equivalent descriptions: \bigskip 0. As the output of the transducer $T$, starting in state $0$, on input $\bf f$. \bigskip 1. As $\tau(h^\omega(a))$ where $h$ and $\tau$ are defined by \begin{align*} h(a) &= a b_1 &\quad \tau(a) = 0 \\ h(b) &= a & \quad \tau(b) = 1 \\ h(a_0) &= a_2 b & \quad \tau(a_0) = 0 \\ h(a_1) &= a_0 b_0 & \quad \tau(a_1) = 1 \\ h(a_2) &= a_1 b_2 & \quad \tau(a_2) = 1 \\ h(b_0) &= a_0 & \quad \tau(b_0) = 0 \\ h(b_1) &= a_1 & \quad \tau(b_1) = 0 \\ h(b_2) &= a_2 & \quad \tau(b_2) = 1 \\ \end{align*} 2. As the binary sequence generated by the following DFAO, with outputs given in the states, and inputs in the Fibonacci representation of $n$. 
\begin{figure}[H] \begin{center} \begin{tikzpicture}[node distance=2cm,on grid,>=stealth',initial text=,auto, every state/.style={inner sep=1pt,minimum size=1cm}, every loop/.style={shorten >=0,looseness=0}] \node[state,initial] (a) {$a/{\tt 0}$}; \node[state] (b_1) [right=of a] {$b_1/{\tt 0}$}; \node[state] (a_1) [right=of b_1] {$a_1/{\tt 1}$}; \node[state] (b_0) [right=of a_1] {$b_0/{\tt 0}$}; \node[state] (b) [right=of b_0] {$b/{\tt 1}$}; \node[state] (a_0) [right=of b] {$a_0/{\tt 0}$}; \node[state] (a_2) [right=of a_0] {$a_2/{\tt 1}$}; \node[state] (b_2) [right=of a_2] {$b_2/{\tt 1}$}; \path[->] (a) edge [loop above] node {\tt 0} () edge node {\tt 1} (b_1) (b_1) edge node {\tt 0} (a_1) (a_1) edge node {\tt 1} (b_0) (a_1.315) edge [bend right=20]node [pos=0.1,swap] {\tt 0} (a_0.225) (b_0.45) edge [bend left=45] node [pos=0.5,swap] {\tt 0} (a_0.135) (b.210) edge [bend left=24] node [pos=0.1,swap] {\tt 0} (a_1.330) (a_0) edge node [swap] {\tt 1} (b) (a_0) edge node [swap] {\tt 0} (a_2) (a_2.135) edge [bend right=24]node [pos=0.1,swap] {\tt 0} (a_1.45) (a_2.10) edge node {\tt 1} (b_2.170) (b_2.190) edge node {\tt 0} (a_2.350); \end{tikzpicture} \end{center} \caption{Canonical Fibonacci representation DFAO generating the Rote-Fibonacci word} \label{fig:rf-dfao} \end{figure} 3. As the limit, as $n \rightarrow \infty$, of the sequence of finite Rote-Fibonacci words $(R_n)_n$ defined as follows: $R_0 = 0$, $R_1 = 00$, and for $n \geq 2$ $$R_n = \begin{cases} R_{n-1} R_{n-2}, & \text{ if $n \equiv 0$ (mod 3);} \\ R_{n-1} \overline{R_{n-2}}, & \text{ if $n \equiv 1, 2$ (mod 3).} \end{cases}$$ 4. As the sequence obtained from the Fibonacci sequence ${\bf f} = f_0 f_1 f_2 \cdots = 0100101001001 \cdots$ as follows: first, change every $0$ to $1$ and every $1$ to $0$ in ${\bf f}$, obtaining $\overline{\bf f} = 1011010110110 \cdots$. 
Next, in $\overline{\bf f}$ change every second $1$ that appears to $-1$ (which we write as ${\overline{1}}$ for clarity): $1 0 {\overline{1}} 1 0 {\overline{1}} 0 1 {\overline{1}} 0 1 {\overline{1}} 0 \cdots$. Now take the running sum of this sequence, obtaining $1101100100100 \cdots$, and finally, complement it to get $\bf r$. \bigskip 5. As $\rho(g^\omega (a))$, where $g$ and $\rho$ are defined as follows \begin{align*} g(a) &= abcab \quad & \rho(a) = 0 \\ g(b) &= cda \quad & \rho(b) = 0 \\ g(c) &= cdacd \quad & \rho(c) = 1 \\ g(d) &= abc \quad & \rho(d) = 1 \end{align*} \end{theorem} \begin{proof} $(0) \iff (3)$: Let $T_0 (x)$ (resp., $T_1 (x)$) denote the output of the transducer $T$ starting in state $q_0$ (resp., $q_1$) on input $x$. Then a simple induction on $n$ shows that $T_0 (X_{n+1}) = R_n$ and $T_1(X_{n+1}) = \overline{R_n}$. We give only the induction step for the first claim: \begin{align*} T_0 (X_{n+1}) &= T_0 (X_n X_{n-1}) \\ &= \begin{cases} T_0 (X_n) T_0 (X_{n-1}), & \text{if $|X_n|$ is even}; \\ T_0 (X_n) T_1 (X_{n-1}), & \text{if $|X_n|$ is odd}; \end{cases} \\ &= \begin{cases} R_{n-1} R_{n-2}, & \text{if $n \equiv 0$ (mod 3)}; \\ R_{n-1} \overline{R_{n-2}}, & \text{if $n \not\equiv 0$ (mod 3)}; \end{cases} \\ &= R_n . \end{align*} Here we have used the easily-verified fact that $|X_n|= F_n$ is even iff $n \equiv 0$ (mod $3$). \bigskip $(1) \iff (3)$: we verify by a tedious induction on $n$ that for $n \geq 0$ we have \begin{align*} \tau(h^n(a)) &= \tau(h^{n+1} (a)) = R_n \\ \tau(h^n(a_i)) &= \tau(h^{n+1} (b_i)) = \begin{cases} R_n, & \text{if $n \equiv i$ (mod 3)}; \\ \overline{R_n}, & \text{if $n \not\equiv i$ (mod 3)}. \end{cases} \end{align*} \bigskip $(2) \iff (4)$: Follows from the well-known transformation from automata to morphisms and vice versa (see, e.g., \cite{Holton&Zamboni:2001}). 
\bigskip $(3) \iff (4)$: We define some transformations on sequences, as follows: \begin{itemize} \item $C(x)$ denotes $\overline{x}$, the complement of $x$; \item $s(x)$ denotes the sequence arising from a binary sequence $x$ by changing every second $1$ to $-1$; \item $a(x)$ denotes the running sum of the sequence $x$; that is, if $x = a_1 a_2 a_3 \cdots $ then $a(x)$ is $a_1 (a_1 + a_2) (a_1 + a_2 +a_3) \cdots$. \end{itemize} Note that $$ a (s (xy)) = \begin{cases} a(s(x)) \ a(s(y)), & \text{if $|x|_1$ even}; \\ a(s(x)) \ C(a(s(y))), & \text{if $|x|_1$ odd}. \end{cases} $$ Then we claim that $ C(R_n) = a(s(C(X_{n+2})))$. This can be verified by induction on $n$. We give only the induction step: \begin{align*} a(s(C(X_{n+2}))) &= a(s( C(X_{n+1}) C(X_{n}) )) \\ &= \begin{cases} a(s(C(X_{n+1}))) \ a(s(C(X_{n}))), & \text{ if $|C(X_{n+1})|_1$ even}; \\ a(s(C(X_{n+1}))) \ C(a(s(C(X_{n})))), & \text{ if $|C(X_{n+1})|_1$ odd}; \end{cases} \\ &= \begin{cases} C(R_{n-1}) \ C(R_{n-2}), & \text{ if $n \equiv 0$ (mod 3)}; \\ C(R_{n-1}) \ R_{n-2}, & \text{ if $n \not\equiv 0$ (mod 3)}; \end{cases} \\ &= C(R_{n}). \end{align*} \bigskip $(3) \iff (5)$: Define $\gamma$ by \begin{align*} \gamma(a) &= \gamma(a_0) = a \\ \gamma(b_0) &= \gamma(b_1) = b \\ \gamma(a_1) &= \gamma(a_2) = c \\ \gamma(b) &= \gamma(b_2) = d . \end{align*} We verify by a tedious induction on $n$ that for $n \geq 0$ we have \begin{align*} g^n (a) &= \gamma(h^{3n} (a)) = \gamma(h^{3n} (a_0)) \\ g^n (b) &= \gamma(h^{3n} (b_0)) = \gamma(h^{3n} (b_1)) \\ g^n (c) &= \gamma(h^{3n} (a_1)) = \gamma(h^{3n} (a_2)) \\ g^n (d) &= \gamma(h^{3n} (b)) = \gamma(h^{3n} (b_2)) . \end{align*} \end{proof} \begin{corollary} The first differences $\Delta {\bf r}$ of the Rote-Fibonacci word $\bf r$, taken modulo $2$, give the complement of the Fibonacci word $\overline{\bf f}$, with its first symbol omitted. 
\label{rotecor} \end{corollary} \begin{proof} Note that if ${\bf x} = a_0 a_1 a_2 \cdots$ is a binary sequence, then $\Delta(C({\bf x})) = -\Delta({\bf x})$. Furthermore $\Delta(a(x)) = a_1 a_2 \cdots$. Now from the description in part 4, above, we know that ${\bf r} = C(a(s(C({\bf f}))))$. Hence $\Delta({\bf r}) = \Delta ( C(a(s(C({\bf f}))))) = -\Delta( a(s(C({\bf f})))) = \dr(-s(C({\bf f})))$, where $\dr$ drops the first symbol of its argument. Taking the last result modulo $2$ gives the result. \end{proof} We are now ready to prove our avoidability result. \begin{theorem} The Rote-Fibonacci word $\bf r$ avoids the pattern $x x x^R$. \label{rote-avoid-thm} \end{theorem} \begin{proof} We use our decision procedure to prove this. A predicate is as follows: $$ \exists i\ \forall t<n\ ({\bf r}[i+t]={\bf r}[i+t+n]) \ \wedge \ ({\bf r}[i+t]={\bf r}[i+3n-1-t]) .$$ When we run this on our program, we get the following log: {\footnotesize \begin{verbatim} t < n with 7 states, in 36ms R[i + t] = R[i + t + n] with 245 states, in 1744ms R[i + t] = R[i + 3 * n - 1 - t] with 1751 states, in 14461ms R[i + t] = R[i + t + n] & R[i + t] = R[i + 3 * n - 1 - t] with 3305 states, in 565ms t < n => R[i + t] = R[i + t + n] & R[i + t] = R[i + 3 * n - 1 - t] with 2015 states, in 843ms At t < n => R[i + t] = R[i + t + n] & R[i + t] = R[i + 3 * n - 1 - t] with 3 states, in 747ms Ei At t < n => R[i + t] = R[i + t + n] & R[i + t] = R[i + 3 * n - 1 - t] with 2 states, in 0ms overall time: 18396ms \end{verbatim} } Then the only length $n$ accepted is $n = 0$, so the Rote-Fibonacci word $\bf r$ contains no occurrences of the pattern $x x x^R$. \end{proof} We now prove some interesting properties of $\bf r$. 
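Theorem~\ref{rote-avoid-thm} can also be spot-checked by brute force, generating a prefix of $\bf r$ via the recurrence for the finite Rote-Fibonacci words $R_n$ from description 3 of the theorem above (our reading of that recurrence, applied from $n = 2$ on); the sketch below is an illustration only, since it examines just a finite prefix rather than all of $\bf r$.

```python
# Brute-force check that a long prefix of the Rote-Fibonacci word contains
# no occurrence of the pattern x x x^R.  R_0 = 0, R_1 = 00, and
# R_n = R_{n-1} R_{n-2} if n = 0 (mod 3), else R_{n-1} followed by the
# complement of R_{n-2}.

def rote_fib(m: int) -> str:
    comp = str.maketrans("01", "10")
    R = ["0", "00"]
    for n in range(2, m + 1):
        R.append(R[-1] + (R[-2] if n % 3 == 0 else R[-2].translate(comp)))
    return R[-1]

r = rote_fib(12)                  # |R_12| = F_14 = 377 symbols
assert r.startswith("001001101101100100110110110010010011")

hits = [(i, n)
        for n in range(1, len(r) // 3 + 1)
        for i in range(len(r) - 3 * n + 1)
        if r[i:i+n] == r[i+n:i+2*n] == r[i+2*n:i+3*n][::-1]]
print(hits)   # the theorem predicts no occurrences, i.e., []
```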
\begin{theorem} The smallest period $q(n)$ over all length-$n$ factors of the Rote-Fibonacci word is as follows: $$ q(n) = \begin{cases} 1, & \text{if $1 \leq n \leq 2$;} \\ 2, & \text{if $n = 3$;} \\ F_{3j+1}, & \text{if $j \geq 1$ and $L_{3j} \leq n < L_{3j+2}$;} \\ L_{3j+1}, & \text{if $j \geq 1$ and $L_{3j+2} \leq n < L_{3j+2}+F_{3j-2}$;} \\ F_{3j+2}+L_{3j}, & \text{if $j \geq 2$ and $L_{3j+2} + F_{3j-2} \leq n < L_{3j+2}+ F_{3j-1}$;} \\ 2F_{3j+2}, & \text{if $L_{3j+2}+F_{3j-1} \leq n < L_{3j+3}$} . \end{cases} $$ \label{rfperiods-thm} \end{theorem} \begin{proof} To prove this, we mimic the proof of Theorem~\ref{allpers}. The resulting automaton is displayed below in Figure~\ref{leastper-rote}. \begin{figure}[H] \begin{center} \includegraphics[width=4in]{output_least-period-over-all-factors-rote.pdf} \caption{Automaton encoding the smallest period over all length-$n$ factors of the Rote-Fibonacci word} \label{leastper-rote} \end{center} \end{figure} \end{proof} \begin{corollary} The critical exponent of the Rote-Fibonacci word is $2+\alpha$. \label{critical-rote} \end{corollary} \begin{proof} An examination of the cases in Theorem~\ref{rfperiods-thm} shows that the words of maximum exponent are those corresponding to $n = L_{3j+2}-1$, $p = F_{3j+1}$. As $j \rightarrow \infty$, the quantity $n/p$ approaches $2 + \alpha$ from below. \end{proof} \begin{theorem} All squares in the Rote-Fibonacci word are of order $F_{3n+1}$ for $n \geq 0$, and each such order occurs. \label{rote3n} \end{theorem} \begin{proof} We use the predicate $$ (n \geq 1) \ \wedge \ \exists i \ \forall j<n\ ({\bf r}[i+j] = {\bf r}[i+j+n]) .$$ The resulting automaton is depicted in Figure~\ref{rotesquares}. The accepted words correspond to $F_{3n+1}$ for $n \geq 0$. 
\begin{figure}[H] \begin{center} \includegraphics[width=6.5in]{rotesquares.pdf} \caption{Automaton accepting orders of squares in the Rote-Fibonacci word} \label{rotesquares} \end{center} \end{figure} \end{proof} We now turn to problems concerning prefixes of the Rote-Fibonacci word $\bf r$. \begin{theorem} A length-$n$ prefix of the Rote-Fibonacci word $\bf r$ is an antipalindrome iff $n = F_{3i+1} - 3$ for some $i \geq 1$. \end{theorem} \begin{proof} We use our decision method on the predicate $$ \forall j<n\ {\bf r}[j] \not= {\bf r}[n-1-j] .$$ The result is depicted in Figure~\ref{rote-antipal}. The only accepted expansions are given by the regular expression $\epsilon + 1(010101)^* 0 (010+101000)$, which corresponds to $n = F_{3i+1} - 3$. \begin{figure}[H] \begin{center} \includegraphics[width=6.5in]{rote-antipal-prefixes.pdf} \caption{Automaton accepting lengths of antipalindrome prefixes in the Rote-Fibonacci word} \label{rote-antipal} \end{center} \end{figure} \end{proof} \begin{theorem} A length-$n$ prefix of the Rote-Fibonacci word is an antisquare if and only if $n = 2F_{3k+2}$ for some $k \geq 1$. \end{theorem} \begin{proof} The predicate for having an antisquare prefix of length $2n$ is $$ \forall k < n \ {\bf r}[k] \not= {\bf r}[k+n] .$$ When we run this we get the automaton depicted in Figure~\ref{rote-antisquare-prefix}. \begin{figure}[H] \begin{center} \includegraphics[width=5.5in]{rote-antisquare-prefix.pdf} \caption{Automaton accepting orders of antisquares that are prefixes of $\bf r$} \label{rote-antisquare-prefix} \end{center} \end{figure} \end{proof} \begin{theorem} The Rote-Fibonacci word has subword complexity $2n$. 
\end{theorem} \begin{proof} Follows from Corollary~\ref{rotecor} together with \cite[Thm.~3]{Rote:1994}. \end{proof} \begin{theorem} The Rote-Fibonacci word is mirror invariant. That is, if $z$ is a factor of $\bf r$ then so is $z^R$. \label{rotemi} \end{theorem} \begin{proof} We use the predicate $$ \forall i \ \exists j \ \forall t < n \ {\bf r}[i+t] = {\bf r}[j+n-1-t] .$$ The resulting automaton accepts all $n$, so the conclusion follows. The largest intermediate automaton has 2300 states and the calculation took about 6 seconds on a laptop. \end{proof} \begin{corollary} The Rote-Fibonacci word avoids the pattern $x x^R x^R$. \end{corollary} \begin{proof} Suppose $x x^R x^R$ occurs in $\bf r$. Then by Theorem~\ref{rotemi} we know that $(x x^R x^R)^R = x x x^R$ occurs in $\bf r$. But this is impossible, by Theorem~\ref{rote-avoid-thm}. \end{proof} As it turns out, the Rote-Fibonacci word has (essentially) appeared before in several places. For example, in a 2009 preprint of Monnerot-Dumaine \cite{Monnerot-Dumaine:2009}, the author studies a plane fractal called the ``Fibonacci word fractal'', specified by certain drawing instructions, which can be coded over the alphabet $S, R, L$ by taking the fixed point $g^\omega (a)$ and applying the coding $\gamma(a) = S$, $\gamma(b) = R$, $\gamma(c) = S$, and $\gamma(d) = L$. Here $S$ means ``move straight one unit'', ``$R$'' means ``right turn one unit'' and ``$L$'' means ``left turn one unit''. More recently, Blondin Mass\'e, Brlek, Labb\'e, and Mend\`es France studied a remarkable sequence of words closely related to $\bf r$ \cite{BlondinMasse&Brlek&Garon&Labbe:2011,BlondinMasse&Brlek&Labbe&MendesFrance:2011,BlondinMasse&Brlek&Labbe&MendesFrance:2012}. For example, in their paper ``Fibonacci snowflakes'' \cite{BlondinMasse&Brlek&Garon&Labbe:2011} they defined a certain sequence $q_i$ which has the following relationship to $g$: let $\xi(a) = \xi(b) = L$, $\xi(c) = \xi(d) = R$. 
Then $$ R \xi(g^n(a)) = q_{3n+2} L .$$ \subsection{Conjectures and open problems about the Rote-Fibonacci word} In this section we collect some conjectures we have not yet been able to prove. We have made some progress and hope to completely resolve them in the future. \begin{conjecture} Every infinite binary word avoiding the pattern $x x x^R$ has critical exponent $\geq 2+\alpha$. \end{conjecture} \begin{conjecture} Let $z$ be a finite nonempty primitive binary word. If $z^\omega$ avoids $x x x^R$, then $|z| = 2 F_{3n+2}$ for some integer $n \geq 0$. Furthermore, $z$ is a conjugate of the prefix ${\bf r}[0..2F_{3n+2} - 1]$, for some $n \geq 0$. Moreover, for $n \geq 1$ we have that $z$ is a conjugate of $y \overline{y}$, where $y = \tau(h^{3n} (a))$. \end{conjecture} We can make some partial progress on this conjecture, as follows: \begin{theorem} Let $k \geq 0$ and define $n = 2F_{3k+2}$. Let $z = {\bf r}[0..n-1]$. Then $z^\omega$ contains no occurrence of the pattern $x x x^R$. \end{theorem} \begin{proof} We have already seen this for $k = 0$, so assume $k \geq 1$. Suppose that $z^\omega$ does indeed contain an occurrence of $x x x^R$ for some $|x| = \ell > 0$. We consider each possibility for $\ell$ and eliminate them in turn. \bigskip Case I: $\ell \geq n$. There are two subcases: \bigskip Case Ia: $n \nmid \ell$: In this case, by considering the first $n$ symbols of each of the two occurrences of $x$ in $x x x^R$ in $z^\omega$, we see that there are two different cyclic shifts of $z$ that are identical. This can only occur if ${\bf r}[0..n-1]$ is a power, and we know from Theorem~\ref{rote3n} and Corollary~\ref{critical-rote} that this implies that $n = 2F_{3k+1}$ or $n = 3F_{3k+1}$ for some $k \geq 0$. But $2F_{3k+1} \not= 2F_{3k'+2}$ and $3F_{3k+1} \not= 2F_{3k'+2}$ provided $k, k' > 0$, so this case cannot occur. Case Ib: $n \mid \ell$: Then $x$ is a conjugate of $z^e$, where $e = \ell/n$. 
By a well-known result, a conjugate of a power is a power of a conjugate; hence there exists a conjugate $y$ of $z$ such that $x = y^e$. Then $x^R = y^e$, so $x$, and hence $y$, is a palindrome. We can now create a predicate that says that some conjugate of ${\bf r}[0..n-1]$ is a palindrome: $$\exists i<n \ (\forall j<n \ \cmp(i+j,n+i-1-j))$$ where \begin{multline*} \cmp(k,k') := (((k<n) \ \wedge\ (k'<n)) \implies ({\bf r}[k] = {\bf r}[k'])) \ \wedge \ \\ (((k<n)\ \wedge \ (k' \geq n)) \implies ({\bf r}[k] = {\bf r}[k'-n])) \ \wedge \ \\ (((k \geq n)\ \wedge \ (k'<n)) \implies ({\bf r}[k-n] = {\bf r}[k'])) \ \wedge \ \\ (((k \geq n)\ \wedge \ (k' \geq n)) \implies ({\bf r}[k-n] = {\bf r}[k'-n])) . \end{multline*} When we do this we discover that the only $n$ with Fibonacci representation of the form $10010^i$ accepted are those with $i \equiv 0, 2$ (mod $3$), which means that $2F_{3k+2}$ is not among them. So this case cannot occur. \bigskip Case II: $\ell < n$. There are now four subcases to consider, depending on the number of copies of $z$ needed to ``cover'' our occurrence of $x x x^R$. In Case II.$j$, for $1 \leq j \leq 4$, we consider $j$ copies of $z$ and the possible positions of $x x x^R$ inside them. Because of the complicated nature of comparing one copy of $x$ to itself in the case that one or both overlaps a boundary between different copies of $z$, it would be very helpful to be able to encode statements like ${\bf r}[k \bmod n] = {\bf r}[k' \bmod n]$ in our logical language. Unfortunately, we cannot do this if $n$ is arbitrary. So instead, we use a trick: assuming that the indices $k, k'$ satisfy $0 \leq k, k' < 2n$, we can use the $\cmp(k,k')$ predicate introduced above to simulate the assertion ${\bf r}[k \bmod n] = {\bf r}[k' \bmod n]$. Of course, for this to work we must ensure that $0 \leq k, k' < 2n$ holds. The cases are described in Figure~\ref{rotecon}. We assume that $|x| = \ell$ and $x x x^R$ begins at position $i$ of $z^\omega$. 
We have the inequalities $i < n$ and $\ell < n$ which apply to each case. Our predicates are designed to compare the first copy of $x$ to the second copy of $x$, and the first copy of $x$ to the $x^R$. \begin{figure}[H] \begin{center} \includegraphics[width=5in]{rotecases2.pdf} \caption{Cases of the argument} \label{rotecon} \end{center} \end{figure} \medskip Case 1: If $xxx^R$ lies entirely within one copy of $z$, it also lies in $\bf r$, which we have already seen cannot happen, by Theorem~\ref{rote-avoid-thm}. This case therefore cannot occur. \medskip Case 2: We use the predicate $$ \exists i \ \exists \ell \ (i+3\ell \geq n) \ \wedge \ (i+3\ell < 2n) \ \wedge \ (\forall j < \ell\ \cmp(i+j, i+\ell+j ) ) \ \wedge \ (\forall k < \ell\ \cmp(i+k,i+3\ell-1-k) ) $$ to assert that there is a repetition of the form $x x x^R$. \medskip Case 3: We use the predicate $$ \exists i \ \exists \ell \ (i + 3\ell \geq 2n) \ \wedge \ (i+3\ell < 3n) \ \wedge \ (\forall j < \ell\ \cmp(i+j, i+\ell+j-n) ) \ \wedge \ (\forall k < \ell\ \cmp(i+k,i+3\ell-1-k-n) ) .$$ \medskip Case 4: We use the predicate $$ \exists i \ \exists \ell \ (i+3 \ell \geq 3n) \ \wedge \ (i+3\ell < 4n) \ \wedge \ (\forall j < \ell\ \cmp(i+j, i+\ell+j-n) ) \ \wedge \ (\forall k < \ell\ \cmp(i+k, i+3\ell-1-k-2n ) ) .$$ When we checked each of the cases 2 through 4 with our program, we discovered that $n = 2F_{3k+2}$ is never accepted. Actually, for cases (2)--(4) we had to employ one additional trick, because the computation for the predicates as stated required more space than was available on our machine. Here is the additional trick: instead of attempting to run the predicate for all $n$, we ran it only for $n$ whose Fibonacci representation was of the form $10010^*$. This significantly restricted the size of the automata we created and allowed the computation to terminate. In fact, we propagated this condition throughout the predicate. 
We have therefore eliminated all possibilities, and it follows that no occurrence of $x x x^R$ appears in $z^\omega$. \end{proof} \begin{openproblem} How many binary words of length $n$ avoid the pattern $x x x^R$? Is it polynomial in $n$ or exponential? How about the number of binary words of length $n$ avoiding $x x x^R$ and simultaneously avoiding $(2+\alpha)$-powers? \end{openproblem} Consider finite words of the form $x x x^R$ having no proper factor of the form $w w w^R$. \begin{conjecture} For $n = F_{3k+1}$ there are $4$ such words of length $n$. For $n = F_{3k+1} \pm F_{3k-2}$ there are $2$ such words. Otherwise there are none. For $k \geq 3$ the $4$ words of length $n = F_{3k+1}$ are given by ${\bf r}[p_i..p_i+n-1]$, $i = 1,2,3,4$, where \begin{align*} (p_1)_F &= 1000 (010)^{k-3} 001 \\ (p_2)_F &= 10 (010)^{k-2} 001 \\ (p_3)_F &= 1001000 (010)^{k-3} 001 \\ (p_4)_F &= 1010 (010)^{k-2} 001 \end{align*} For $k \geq 3$ the $2$ words of length $n = F_{3k+1}-F_{3k-2}$ are given by ${\bf r}[q_i..q_i+n-1]$, $i = 1,2$, where \begin{align*} (q_1)_F &= 10 (010)^{k-3} 001 \\ (q_2)_F &= 10000 (010)^{k-3} 001 \end{align*} For $k \geq 3$ the $2$ words of length $n = F_{3k+1}+F_{3k-2}$ are given by ${\bf r}[s_i..s_i+n-1]$, $i = 1,2$, where \begin{align*} (s_1)_F &= 10 (010)^{k-3} 001 \\ (s_2)_F &= 1000 (01)^{k-2} 001 \end{align*} \end{conjecture} \section{Other sequences} \label{other} In this section we briefly apply our method to some other Fibonacci-automatic sequences, obtaining several new results. Consider a Fibonacci analogue of the Thue-Morse sequence $${\bf v} = (v_n)_{n \geq 0} = 0111010010001100010111000101 \cdots$$ where $v_n$ is the sum of the bits, taken modulo $2$, of the Fibonacci representation of $n$. This sequence was introduced in \cite[Example 2, pp.\ 12--13]{Shallit:1988a}. We recall that an {\it overlap} is a word of the form $axaxa$, where $a$ is a single letter and $x$ is a possibly empty word; its order is defined to be $|ax|$. 
Similarly, a {\it super-overlap} is a word of the form $abxabxab$; an example of a super-overlap in English is the word {\tt tingalingaling} with the first letter removed. \begin{theorem} The only squares in $\bf v$ are of order $4$ and $F_n$ for $n \geq 2$, and a square of each such order occurs. The only cubes in $\bf v$ are the strings $000$ and $111$. The only overlaps in $\bf v$ are of order $F_{2n}$ for $n \geq 1$, and an overlap of each such order occurs. There are no super-overlaps in $\bf v$. \end{theorem} \begin{proof} As before. We omit the details. \end{proof} We might also like to show that $\bf v$ is recurrent. The obvious predicate for this property holding for all words of length $n$ is $$ \forall i\ \exists j\ ((j>i) \wedge ( \forall t \ ((t<n) \implies ({\bf v}[i+t]={\bf v}[j+t])))) .$$ Unfortunately, when we attempt to run this with our prover, we get an intermediate NFA of 1159 states that we cannot determinize within the available space. Instead, we rewrite the predicate, setting $k := j-i$ and $u := i+t$. This gives $$ \forall i\ \exists j \ (j>i) \wedge \forall k \ \forall u \ ((k \geq 1) \wedge (j=i+k) \wedge (u \geq i) \wedge (u < n+i)) \implies {\bf v}[u]={\bf v}[u+k] .$$ When we run this we discover that $\bf v$ is indeed recurrent. Here the computation takes a nontrivial 814007 ms, and the largest intermediate automaton has 625176 states. This proves \begin{theorem} The word $\bf v$ is recurrent. \end{theorem} Another quantity of interest for the Thue-Morse-Fibonacci word $\bf v$ is its subword complexity $\rho_{\bf v}(n)$. It is not hard to see that it is linear. To obtain a deeper understanding of it, let us compute the first difference sequence $d(n) = \rho_{\bf v}(n+1) - \rho_{\bf v}(n)$. It is easy to see that $d(n)$ is the number of words $w$ of length $n$ with the property that both $w0$ and $w1$ appear in $\bf v$. 
The natural way to count this is to count those $i$ such that $t:= {\bf v}[i..i+n-1]$ is the first appearance of that factor in $\bf v$, and there exists a factor ${\bf v}[k..k+n]$ of length $n+1$ whose length-$n$ prefix equals $t$ and whose last letter ${\bf v}[k+n]$ differs from ${\bf v}[i+n]$. $$ (\forall j<i \ \exists t<n \ {\bf v}[i+t] \not= {\bf v}[j+t]) \ \wedge \ (\exists k\ (\forall u <n\ {\bf v}[i+u]={\bf v}[k+u]) \wedge {\bf v}[i+n] \not= {\bf v}[k+n]).$$ Unfortunately the same blowup appears as in the recurrence predicate, so once again we need to substitute, resulting in the predicate \begin{multline*} (\forall j<i \ \exists k\geq 1\ \exists u\ (i=j+k) \wedge (u \geq j) \wedge (u<n+j) \wedge {\bf v}[u] \not= {\bf v}[u+k] ) \wedge \\ (\exists l>i \ ({\bf v}[i+n] \not= {\bf v}[l+n] \ \wedge \ \\ (\forall k' \ \forall u' \ ((k'\geq 1) \wedge (l = i+k') \wedge (u' \geq i) \wedge (u' < n+i)) \implies {\bf v}[k'+u']={\bf v}[u'] ))) . \end{multline*} From this we obtain a linear representation of rank $46$. We can now consider all vectors of the form $u \{ M_0, M_1 \}^*$. There are only finitely many and we can construct an automaton out of them computing $d(n)$. \begin{theorem} The first difference sequence $(d(n))_{n \geq 0}$ of the subword complexity of $\bf v$ is Fibonacci-automatic, and is computed by the following machine. \begin{figure}[H] \begin{center} \includegraphics[width=6.5in]{tmf-specialf.pdf} \caption{Automaton computing $d(n)$} \label{tmf-specialf} \end{center} \end{figure} \end{theorem} \section{Combining two representations and avoidability} \label{additive} In this section we show how our decidability method can be used to handle an avoidability question where two different representations arise. Let $x$ be a finite word over the alphabet ${\mathbb{N}}^* = \lbrace 1, 2, 3, \ldots \rbrace$. We say that $x$ is an {\it additive square\/} if $x = x_1 x_2$ with $|x_1| = |x_2|$ and $\sum x_1 = \sum x_2$. 
For example, with the usual association of ${\tt a} = 1$, ${\tt b} = 2$, and so forth, up to ${\tt z} = 26$, we have that the English word {\tt baseball} is an additive square, as {\tt base} and {\tt ball} both sum to $27$. An infinite word ${\bf x}$ over ${\mathbb{N}}^*$ is said to {\it avoid additive squares} if no factor is an additive square. It is currently unknown, and a relatively famous open problem, whether there exists an infinite word over a {\it finite\/} subset of ${\mathbb{N}}^*$ that avoids additive squares \cite{Brown&Freedman:1987,Pirillo&Varricchio:1994,Halbeisen&Hungerbuhler:2000}, although it is known that additive cubes can be avoided over an alphabet of size $4$ \cite{Cassaigne&Currie&Schaeffer&Shallit:2013}. (Recently this was improved to alphabet size $3$; see \cite{Rao:2013}.) However, it is easy to avoid additive squares over an {\it infinite} subset of ${\mathbb{N}}^*$; for example, any sequence that grows sufficiently quickly will have the desired property. Hence it is reasonable to ask about the {\it lexicographically least} sequence over ${\mathbb{N}}^*$ that avoids additive squares. Such a sequence begins $$ 1 2 1 3 1 2 1 4 2 1 2 5 2 1 3 1 2 1 3 4 1 2 1 7 2 \cdots ,$$ but we do not even know if this sequence is unbounded. Here we consider the following variation on this problem. Instead of considering arbitrary sequences, we start with a sequence ${\bf b} = b_0 b_1 b_2 \cdots$ over ${\mathbb{N}}^*$ and from it construct the sequence $S({\bf b}) = a_1 a_2 a_3 \cdots$ defined by $$ {\bf a}[i] = {\bf b}[\nu_2 (i)]$$ for $i \geq 1$, where $\nu_2(i)$ is the exponent of the largest power of $2$ dividing $i$. (Note that ${\bf a}$ and ${\bf b}$ are indexed differently.) For example, if ${\bf b} = 123\cdots$, then ${\bf a} = 1213121412131215 \cdots$, the so-called ``ruler sequence''. 
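These definitions are easy to experiment with. The sketch below (our code; the helper names are ours) constructs $S({\bf b})$, reproduces the ruler sequence, and tests prefixes for additive squares by brute force:

```python
def nu2(i):
    """nu_2(i): exponent of the largest power of 2 dividing i (i >= 1)."""
    c = 0
    while i % 2 == 0:
        i //= 2
        c += 1
    return c

def S(b, length):
    """First `length` terms a_1, ..., a_length of S(b), where a_i = b(nu_2(i))."""
    return [b(nu2(i)) for i in range(1, length + 1)]

def has_additive_square(a):
    """Brute force: does the list a contain consecutive blocks x1 x2
    with |x1| = |x2| and equal sums?"""
    for i in range(len(a)):
        for n in range(1, (len(a) - i) // 2 + 1):
            if sum(a[i:i + n]) == sum(a[i + n:i + 2 * n]):
                return True
    return False

ruler = S(lambda j: j + 1, 16)        # b = 1 2 3 ... gives the ruler sequence
fib = [1, 2]                          # fib[j] = F_{j+2}: 1, 2, 3, 5, 8, ...
while len(fib) < 12:
    fib.append(fib[-1] + fib[-2])
a_fib = S(lambda j: fib[j], 128)      # the choice b[j] = F_{j+2}
```

Here `ruler` comes out as $1, 2, 1, 3, 1, 2, 1, 4, \ldots$; although squarefree, it is not additive-squarefree (the adjacent blocks $3\,1\,2$ and $1\,4\,1$ have equal length and equal sum $6$), whereas no additive square appears in the tested prefix when ${\bf b}[j] = F_{j+2}$. This is only a finite spot check, of course, not a proof.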
It is known that this sequence is squarefree and is, in fact, the lexicographically least sequence over ${\mathbb{N}}^*$ avoiding squares \cite{Guay-Paquet&Shallit:2009}. We then ask: what is the lexicographically least sequence avoiding additive squares that is of the form $S({\bf b})$? The following theorem gives the answer. \begin{theorem} The lexicographically least sequence over ${\mathbb{N}} \setminus \{0\}$ of the form $S({\bf b})$ that avoids additive squares is defined by ${\bf b}[i] \mathrel{\mathop:}= F_{i+2}$. \label{additive-thm} \end{theorem} \begin{proof} First, we show that ${\bf a} \mathrel{\mathop:}= S({\bf b}) = \prod_{k=1}^\infty {\bf b}[\nu_2(k)] = \prod_{k=1}^\infty F_{\nu_2(k)+2}$ avoids additive squares. For $m,n,j \in {\mathbb{N}}$, let $A(m,n,j)$ denote the number of occurrences of $j$ in $\nu_2(m+1), \dots, \nu_2(m+n)$. Consider two consecutive blocks of the same size, say $a_{i+1} \cdots a_{i+n}$ and $a_{i+n+1} \cdots a_{i+2n}$. Our goal is to compare the sums $\sum_{i < j \leq i+n} a_j$ and $\sum_{i+n < j \leq i+2n} a_j$. First we prove \begin{lemma} Let $m,j \geq 0$ and $n \geq 1$ be integers. Let $A(m, n,j)$ denote the number of occurrences of $j$ in $\nu_2 (m+1), \ldots, \nu_2 (m+n)$. Then for all $m, m' \geq 0$ we have $|A(m', n, j) - A(m, n, j)| \leq 1$. \label{flemm} \end{lemma} \begin{proof} We start by observing that the number of positive integers $\leq n$ that are divisible by $t$ is exactly $\lfloor n/t \rfloor$. It follows that the number $B(n,j)$ of positive integers $\leq n$ that are divisible by $2^j$ but not by $2^{j+1}$ is \begin{equation} B(n,j) = \lfloor {n \over {2^j}} \rfloor - \lfloor {n \over {2^{j+1}}} \rfloor . 
\label{fl} \end{equation} Now from the well-known identity $$ \lfloor x \rfloor + \lfloor x + {1 \over 2} \rfloor = \lfloor 2x \rfloor,$$ valid for all real numbers $x$, substitute $x = n/2^{j+1}$ to get $$ \lfloor {n \over {2^{j+1}}} \rfloor + \lfloor {n \over {2^{j+1}}} + {1 \over 2} \rfloor = \lfloor {n \over {2^j}} \rfloor ,$$ which, combined with \eqref{fl}, shows that $$B(n,j) = \lfloor {n \over {2^{j+1}}} + {1 \over 2} \rfloor .$$ Hence \begin{equation} {n \over {2^{j+1}}} - {1 \over 2} \leq B(n,j) < {n \over {2^{j+1}}} + {1 \over 2} . \label{flooreq} \end{equation} Now the number of occurrences of $j$ in $\nu_2(m+1), \ldots, \nu_2(m+n)$ is $A(m,n,j) = B(m+n,j)-B(m,j)$. From \eqref{flooreq} we get \begin{equation} {n \over {2^{j+1}}} - 1 < A(m,n,j) < {n \over {2^{j+1}}} + 1 \label{flreq2} \end{equation} for all $m \geq 0$. Since $A(m,n,j)$ is an integer, the inequality \eqref{flreq2} implies that $|A(m',n,j)-A(m,n,j)| \leq 1$ for all $m, m'$. \end{proof} Note that for all $i,n \in {\mathbb{N}}$, we have $\sum_{k=i}^{i+n-1} {\bf a}[k] = \sum_{j=0}^{\floor{\log_2(i+n)}} A(i,n,j) F_{j+2}$, so for adjacent blocks of length $n$, $\sum_{k=i+n}^{i+2n-1} {\bf a}[k] - \sum_{k=i}^{i+n-1} {\bf a}[k] = \sum_{j=0}^{\floor{\log_2(i+2n)}} (A(i+n,n,j)-A(i,n,j)) F_{j+2}$. Hence, ${\bf a}[i \ldotp\ldotp i+2n-1]$ is an additive square iff $\sum_{j=0}^{\floor{\log_2(i+2n)}} (A(i+n,n,j)-A(i,n,j)) F_{j+2} = 0$, and by above, each $A(i+n,n,j)-A(i,n,j) \in \{-1,0,1\}$. The above suggests that we can take advantage of ``unnormalized'' Fibonacci representation in our computations. For $\Sigma \subseteq {\mathbb{Z}}$ and $w \in \Sigma^*$, we let the unnormalized Fibonacci representation $\ip{w}_{uF}$ be defined in the same way as $\ip{w}_F$, except over the alphabet $\Sigma$. 
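Concretely, unnormalized evaluation just weights each digit by the corresponding Fibonacci number. The following sketch (our code, with the digit alphabet $\{-1,0,1\}$ and an indexing convention we have assumed, $F_1 = F_2 = 1$) evaluates such digit words:

```python
def uF_value(digits):
    """Evaluate <w>_uF for a digit list (most significant digit first):
    with t = len(digits), the digit at position i gets weight F_{t+1-i},
    assuming the indexing F_1 = F_2 = 1."""
    F = [0, 1, 1]                     # F[k] = F_k
    while len(F) < len(digits) + 2:
        F.append(F[-1] + F[-2])
    t = len(digits)
    return sum(d * F[t + 1 - i] for i, d in enumerate(digits))
```

For instance, `uF_value([1, 0, 0]) == 3` but also `uF_value([0, 1, 1]) == 3`, so unnormalized representations are not unique; and `uF_value([1, -1, -1]) == 0` (that is, $F_4 - F_3 - F_2 = 0$) gives an example of a nonempty digit word that a zero-recognizing DFA such as the one described below must accept.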
In order to use Procedure~\ref{proc:Fib-auto-decide}, we need two auxiliary DFAs: one that, given $i,n \in {\mathbb{N}}$ (in any representation; we found that base 2 works), computes $\ip{A(i+n,n,\_)-A(i,n,\_)}_{uF}$, and another that, given $w \in \{{\tt -1},{\tt 0},{\tt 1}\}^*$, decides whether $\ip{w}_{uF} = 0$. The first task can be done by a 6-state (incomplete) DFA $M_\text{add22F}$ that accepts the language $\{ z \in (\Sigma_2^2 \times \{{\tt -1},{\tt 0},{\tt 1}\})^* \;:\; \forall j (\pi_3(z)[j] = A(\ip{\pi_1(z)}_2+\ip{\pi_2(z)}_2,\ip{\pi_2(z)}_2,j) - A(\ip{\pi_1(z)}_2,\ip{\pi_2(z)}_2,j))\}$. The second task can be done by a 5-state (incomplete) DFA $M_\text{1uFisZero}$ that accepts the language $\{ w \in \{{\tt -1},{\tt 0},{\tt 1}\}^* \;:\; \ip{w}_{uF} = 0 \}$. We applied a modified Procedure~\ref{proc:Fib-auto-decide} to the predicate $n \geq 1 \wedge \exists w ({\tt add22F}(i,n,w) \wedge {\tt 1uFisZero}(w))$ and obtained as output a DFA that accepts nothing, so ${\bf a}$ avoids additive squares. Next, we show that ${\bf a}$ is the lexicographically least sequence over ${\mathbb{N}} \setminus \{0\}$ of the form $S({\bf b})$ that avoids additive squares. Note that for all sequences ${\bf x},{\bf y}$ over ${\mathbb{N}} \setminus \{0\}$, we have $S({\bf x}) < S({\bf y})$ iff ${\bf x} < {\bf y}$ in the lexicographic ordering. Thus, we show that if any entry ${\bf b}[s]$ with ${\bf b}[s] > 1$ is changed to some $t \in [1,{\bf b}[s]-1]$, then ${\bf a} = S({\bf b})$ contains an additive square using only the first occurrence of the change at ${\bf a}[2^s-1]$. More precisely, we show that for all $s,t \in {\mathbb{N}}$ with $t \in [1,F_{s+2}-1]$, there exist $i,n \in {\mathbb{N}}$ with $n \geq 1$ and $i+2n < 2^{s+1}$ such that either ($2^s-1 \in [i,i+n-1]$ and $\sum_{k=i+n}^{i+2n-1} {\bf a}[k] - \sum_{k=i}^{i+n-1} {\bf a}[k] + t = 0$) or ($2^s-1 \in [i+n,i+2n-1]$ and $\sum_{k=i+n}^{i+2n-1} {\bf a}[k] - \sum_{k=i}^{i+n-1} {\bf a}[k] - t = 0$). 
Setting up for a modified Procedure~\ref{proc:Fib-auto-decide}, we use the following predicate, which says ``$r$ is a power of $2$ and changing ${\bf a}[r-1]$ to any smaller number results in an additive square in the first $2r$ positions", and six auxiliary DFAs. Note that all arithmetic and comparisons are in base 2. \begin{align*} &{\tt powOf2}(r) \wedge \forall t ((t \geq 1 \wedge t<r \wedge {\tt canonFib}(t)) \rightarrow \exists i \exists n (n \geq 1 \wedge i+2n < 2r \wedge {} \\ &\quad ((i<r \wedge r\leq i+n \wedge \forall w ({\tt add22F}(i,n,w) \rightarrow \forall x ({\tt bitAdd}(t,w,x) \rightarrow {\tt 2uFisZero}(x)))) \vee {} \\ &\quad \hphantom{(}(i+n<r \wedge r \leq i+2n \wedge \forall w ({\tt add22F}(i,n,w) \rightarrow \forall x ({\tt bitSub}(t,w,x) \rightarrow {\tt 2uFisZero}(x))))))). \end{align*} \vspace{-2em} \begin{align*} L(M_\text{powOf2}) &= \{w \in \Sigma_2^* \;:\; \exists n (w=(2^n)_2)\}. \\ L(M_\text{canonFib}) &= \{w \in \Sigma_2^* \;:\; \exists n (w=(n)_F)\}. \\ L(M_\text{bit(Add/Sub)}) &= \{z \in (\Sigma_2 \times \{{\tt -1},{\tt 0},{\tt 1}\} \times \{{\tt -1},{\tt 0},{\tt 1},{\tt 2}\})^* \;:\; \forall i (\pi_1(z)[i] \pm \pi_2(z)[i] = \pi_3(z)[i]) \}. \\ L(M_\text{2uFisZero}) &= \{w \in \{{\tt -1},{\tt 0},{\tt 1},{\tt 2}\}^* \;:\; \ip{w}_{uF} = 0\}. \end{align*} We applied a modified Procedure~\ref{proc:Fib-auto-decide} to the above predicate and auxiliary DFAs and obtained as output $M_\text{powOf2}$, so ${\bf a}$ is the lexicographically least sequence over ${\mathbb{N}} \setminus \{0\}$ of the form $S({\bf b})$ that avoids additive squares. \end{proof} \section{Enumeration} \label{enumer} Mimicking the base-$k$ ideas in \cite{Charlier&Rampersad&Shallit:2012}, we can also mechanically enumerate many aspects of Fibonacci-automatic sequences. We do this by encoding the factors having the property in terms of paths of an automaton. 
This gives the concept of {\it Fibonacci-regular sequence} as previously studied in \cite{Allouche&Scheicher&Tichy:2000}. Roughly speaking, a sequence $(a(n))_{n \geq 0}$ taking values in ${\mathbb{N}}$ is Fibonacci-regular if the set of sequences $$ \{ (a([xw]_F))_{w \in \Sigma_2^*} \ : \ x \in \Sigma_2^* \} $$ is finitely generated. Here we assume that $a([xw]_F)$ evaluates to $0$ if $xw$ contains the string $11$. Every Fibonacci-regular sequence $(a(n))_{n \geq 0}$ has a {\it linear representation} of the form $(u, \mu, v)$ where $u$ and $v$ are row and column vectors, respectively, and $\mu:\Sigma_2 \rightarrow {\mathbb{N}}^{d \times d}$ is a matrix-valued morphism, where $\mu(0) = M_0$ and $\mu(1) = M_1$ are $d \times d$ matrices for some $d \geq 1$, such that $$a(n) = u \cdot \mu(x) \cdot v$$ whenever $[x]_F = n$. The {\it rank} of the representation is the integer $d$. As an example, we exhibit a rank-$6$ linear representation for the sequence $a(n) = n+1$: \begin{align*} u &= [1 \ 2 \ 2 \ 3 \ 3 \ 2] \\ M_0 &= \left[ \begin{array}{cccccc} 1 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right ] \\ M_1 &= \left[ \begin{array}{cccccc} 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \end{array} \right ] \\ v &= [1 \ 0 \ 0 \ 0 \ 0 \ 0 ]^T . \end{align*} This can be proved by a simple induction on the claim that $$u \cdot \mu(x) = [ x_F + 1, \ (1x)_F + 1, \ (10x)_F - x_F, \ (100x)_F - x_F, \ (101x)_F - (1x)_F, \ (1001x)_F - (101x)_F ] $$ for all strings $x$, with the same convention as above for representations containing the string $11$. Recall that if $\bf x$ is an infinite word, then the subword complexity function $\rho_{\bf x} (n)$ counts the number of distinct factors of length $n$. 
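The rank-$6$ representation above can be verified numerically. In the sketch below (our code), we assume the matrix product consumes the Fibonacci representation starting from the least significant digit; under that reading, the product reproduces $n+1$:

```python
u  = [1, 2, 2, 3, 3, 2]
M0 = [[1,1,0,0,0,0],
      [0,0,0,0,0,0],
      [0,1,0,1,1,0],
      [0,0,1,1,1,1],
      [0,0,0,0,0,0],
      [0,0,0,0,0,0]]
M1 = [[0,0,0,0,0,0],
      [1,0,0,0,0,0],
      [0,0,0,0,0,0],
      [0,0,0,0,0,0],
      [0,0,1,1,0,0],
      [0,0,0,1,0,0]]
v  = [1, 0, 0, 0, 0, 0]

def fib_rep(n):
    """Canonical (Zeckendorf) Fibonacci representation, most significant digit first."""
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    digits = []
    for f in reversed(fibs[:-1]):
        if f <= n:
            digits.append(1)
            n -= f
        else:
            digits.append(0)
    while digits and digits[0] == 0:
        digits.pop(0)
    return digits

def row_times(row, M):
    return [sum(row[i] * M[i][j] for i in range(6)) for j in range(6)]

def a(n):
    row = u[:]
    for d in reversed(fib_rep(n)):    # feed the representation least significant digit first
        row = row_times(row, M1 if d else M0)
    return sum(x * y for x, y in zip(row, v))

assert all(a(n) == n + 1 for n in range(200))
```

The reading direction is an assumption of this sketch (the text above does not fix a convention for $\mu$), but it is the one consistent with the induction claim just given.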
Then, in analogy with \cite[Thm.~27]{Charlier&Rampersad&Shallit:2012}, we have \begin{theorem} If $\bf x$ is Fibonacci-automatic, then the subword complexity function of $\bf x$ is Fibonacci-regular. \end{theorem} Using our implementation, we can obtain a linear representation of the subword complexity function for $\bf f$. To do so, we use the predicate $$ \{ (n,i)_F \ : \ \forall i' < i \ {\bf f}[i..i+n-1] \not= {\bf f}[i'..i'+n-1] \} ,$$ which expresses the assertion that the factor of length $n$ beginning at position $i$ has never appeared before. Then, for each $n$, the number of corresponding $i$ gives $\rho_{\bf f}(n)$. When we do this for $\bf f$, we get the following linear representation $(u', \mu', v')$ of rank $10$: \begin{align*} u' &= [0 \ 0 \ 0 \ 1 \ 0 \ 0 \ 0 \ 0 \ 0\ 0] \\ M'_0 &= \left[ \begin{array}{ccccccccccc} 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \end{array} \right] \\ M'_1 &= \left[ \begin{array}{ccccccccccc} 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right] \\ v' &= [1\ 0\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1]^T \end{align*} To show that this computes the function $n+1$, it suffices to compare the values of the linear representations $(u, \mu, v)$ and $(u', \mu', v')$ for all strings of length $\leq 10 + 6 = 16$ (using 
\cite[Corollary 3.6]{Berstel&Reutenauer:2011}). After checking this, we have reproved the following classic theorem of Morse and Hedlund \cite{Morse&Hedlund:1940}: \begin{theorem} The subword complexity function of $\bf f$ is $n+1$. \label{sturmcomp} \end{theorem} We now turn to a result of Fraenkel and Simpson \cite{Fraenkel&Simpson:1999}. They computed the exact number of squares appearing in the finite Fibonacci words $X_n$; this was previously estimated by \cite{Crochemore:1981}. There are two variations: we could count the number of distinct squares in $X_n$, or what Fraenkel and Simpson called the number of ``repeated squares'' in $X_n$ (i.e., the total number of {\it occurrences} of squares in $X_n$). To solve this using our approach, we generalize the problem to consider any length-$n$ prefix of $\bf f$, and not simply the prefixes of length $F_n$. We can easily write down predicates for these. The first represents the number of distinct squares in ${\bf f}[0..n-1]$: \begin{multline*} L_{\rm ds} := \{ (n,i,j)_F \ : \ (j \geq 1) \text{ and } (i+2j \leq n) \text{ and } {\bf f}[i..i+j-1] = {\bf f}[i+j..i+2j-1] \\ \text{ and } \forall i' < i \ {\bf f}[i'..i'+2j-1] \not= {\bf f}[i..i+2j-1] \} . \end{multline*} This predicate asserts that ${\bf f}[i..i+2j-1]$ is a square occurring in ${\bf f}[0..n-1]$ and that furthermore it is the first occurrence of this particular string in ${\bf f}[0..n-1]$. The second represents the total number of occurrences of squares in ${\bf f}[0..n-1]$: $$ L_{\rm dos} := \{ (n,i,j)_F \ : \ (j \geq 1) \text{ and } (i+2j \leq n) \text{ and } {\bf f}[i..i+j-1] = {\bf f}[i+j..i+2j-1] \} .$$ This predicate asserts that ${\bf f}[i..i+2j-1]$ is a square occurring in ${\bf f}[0..n-1]$. We apply our method to the second example, leaving the first to the reader. Let $b(n)$ denote the number of occurrences of squares in ${\bf f}[0..n-1]$. First, we use our method to find a DFA $M$ accepting $L_{\rm dos}$. This (incomplete) DFA has 27 states. 
Next, we compute matrices $M_0$ and $M_1$, indexed by states of $M$, such that $(M_a)_{k,l}$ counts the number of edges (corresponding to the variables $i$ and $j$) from state $k$ to state $l$ on the digit $a$ of $n$. We also compute a vector $u$ corresponding to the initial state of $M$ and a vector $v$ corresponding to the final states of $M$. This gives us the following linear representation of the sequence $b(n)$: if $x = a_1 a_2 \cdots a_t$ is the Fibonacci representation of $n$, then \begin{equation} b(n) = u M_{a_1} \cdots M_{a_t} v , \label{linrep} \end{equation} which, incidentally, gives a fast algorithm for computing $b(n)$ for any $n$. Now let $B(n)$ denote the number of square occurrences in the finite Fibonacci word $X_n$. This corresponds to considering the Fibonacci representation of the form $10^{n-2}$; that is, $B(n+1) = b([10^{n-1}]_F)$. The matrix $M_0$ is the following $27 \times 27$ array \begin{equation} \left[ \begin{array}{ccccccccccccccccccccccccccc} 1&1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0&0 \\ 1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0 \\ 1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0&0&0&0&0&0 \\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0 \\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1 \\ 1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0&0&0&0&0&0 \\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0&1&0&0&0&0 \\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0&0&0&0 \\ 0&0&0&1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0 \\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0&0&0&0 \\ 1&0&0&0&0&1&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0&0 \\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0 \\ 1&1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0&0 \\ 0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0&0&0&0&0&0&1&0&0&1&0 \\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&1&0&0&0&0&0&0 \\ 0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0 \\ 1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0 \\ 
0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0&0&0&0 \\ 0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0&0&0&0&0&0&1&0&0&1&0 \\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1 \\ 0&0&0&1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0 \\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0 \\ 0&0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0 \\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0 \\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0 \\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0&0&0 \\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&1&0&0&0&0&0&0 \end{array} \right] \end{equation} and has minimal polynomial $$ X^4 (X-1)^2(X+1)^2(X^2-X-1)^2.$$ It now follows from the theory of linear recurrences that there are constants $c_1, c_2, \ldots, c_8$ such that $$ B(n+1) = (c_1n + c_2) \alpha^n + (c_3n+c_4) \beta^n + c_5n+c_6 + (c_7n+c_8)(-1)^n $$ for $n \geq 3$, where $\alpha = (1+\sqrt{5})/2$, $\beta = (1-\sqrt{5})/2$ are the roots of $X^2 - X - 1$. We can find these constants by computing $B(4), B(5), \ldots, B(11)$ (using Eq.~\eqref{linrep}) and then solving for the values of the constants $c_1, \ldots, c_8$. When we do so, we find \begin{align*} c_1 &= {2 \over 5} \quad\quad & c_2 &= {-{2\over{25}}}\sqrt{5} - 2 \\ c_3 &= {2 \over 5} \quad\quad & c_4 &= {{2\over{25}}}\sqrt{5} - 2 \\ c_5 &= 1 \quad\quad & c_6 &= 1 \\ c_7 &= 0 \quad\quad & c_8 &= 0 \end{align*} A little simplification, using the fact that $F_n = (\alpha^n - \beta^n)/(\alpha - \beta)$, leads to \begin{theorem} Let $B(n)$ denote the number of square occurrences in $X_n$. Then $$B(n+1) = {4 \over 5} n F_{n+1} - {2 \over 5} (n+6) F_{n} - 4F_{n-1} + n + 1 $$ for $n \geq 3$. \end{theorem} This statement corrects a small error in Theorem 2 in \cite{Fraenkel&Simpson:1999} (the coefficient of $F_{n-1}$ was wrong; note that their $F$ and their Fibonacci words are indexed differently from ours), which was first pointed out to us by Kalle Saari. 
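Concretely, evaluating a linear representation as in Eq.~\eqref{linrep} amounts to computing the Zeckendorf digits of $n$ and multiplying out the corresponding matrices. The sketch below uses a toy $2$-dimensional representation (it counts the $1$ digits of the Fibonacci representation) rather than the actual $27 \times 27$ matrices, which are handled in exactly the same way:

```python
# Evaluate u * M_{a_1} * ... * M_{a_t} * v, where a_1 ... a_t are the
# Zeckendorf (Fibonacci) digits of n.  The 2x2 matrices below are a toy
# stand-in counting the 1 digits of the representation; they are NOT the
# 27x27 matrices computed for b(n).

def zeckendorf(n):
    """Zeckendorf digits of n, most significant first (no adjacent 1s)."""
    fibs = [1, 2]                       # F_2, F_3, ... in the paper's indexing
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    digits = []
    for f in reversed(fibs[:-1]):       # greedy choice of Fibonacci numbers
        if f <= n:
            digits.append(1)
            n -= f
        else:
            digits.append(0)
    while len(digits) > 1 and digits[0] == 0:
        digits.pop(0)                   # strip leading zeros
    return digits

def evaluate(u, mats, v, n):
    """Row vector u, dict of matrices mats[digit], column vector v."""
    vec = list(u)
    for d in zeckendorf(n):
        m = mats[d]
        vec = [sum(vec[k] * m[k][l] for k in range(len(vec)))
               for l in range(len(m[0]))]
    return sum(x * y for x, y in zip(vec, v))

# Toy representation: counts the 1 digits of the Zeckendorf representation.
U = [0, 1]
M = {0: [[1, 0], [0, 1]],   # digit 0: leave the counter alone
     1: [[1, 0], [1, 1]]}   # digit 1: increment the counter
V = [1, 0]
```

For instance, `evaluate(U, M, V, 11)` returns $2$, since $(11)_F = 10100$; the same loop with the $27$-dimensional matrices computes $b(n)$ in time linear in the length of $(n)_F$.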
In a similar way, we can count the cube occurrences in $X_n$. Using analysis exactly like the square case, we easily find \begin{theorem} Let $C(n)$ denote the number of cube occurrences in the Fibonacci word $X_n$. Then for $n \geq 3$ we have $$ C(n) = (d_1 n+ d_2) \alpha^n + (d_3 n+d_4) \beta^n + d_5n + d_6$$ where \begin{align*} d_1 &= {{3-\sqrt{5}}\over {10}} \quad\quad & d_2 &= {{17}\over {50}}\sqrt{5} - {3 \over 2} \\ d_3 &= {{3+\sqrt{5}}\over {10}} \quad\quad & d_4 &= -{{17}\over {50}}\sqrt{5} - {3 \over 2} \\ d_5 &= 1 \quad\quad & d_6 &= -1 . \end{align*} \end{theorem} We now turn to a question of Chuan and Droubay. Let us consider the prefixes of $\bf f$. For each prefix of length $n$, form all of its $n$ shifts, and let us count the number of these shifts that are palindromes; call this number $d(n)$. (Note that in the case where a prefix is a power, two different shifts could be identical; we count these with multiplicity.) Chuan \cite[Thm.~7, p.~254]{Chuan:1993b} proved \begin{theorem} For $i > 2$ we have $d(F_i) = 0$ iff $i \equiv \modd{0} {3}$. \label{chuan93-thm} \end{theorem} \begin{proof} Along the way we actually prove a lot more, characterizing $d(n)$ for all $n$, not just those $n$ equal to a Fibonacci number. We start by showing that $d(n)$ takes only three values: $0$, $1$, and $2$. To do this, we construct an automaton accepting the language $$ \{ (n,i)_F \ : \ (0 \leq i < n) \ \wedge\ {\bf f}[i..n-1]{\bf f}[0..i-1] \text{ is a palindrome } \} .$$ From this we construct the linear representation $(u, M_0, M_1, v)$ of $d(n)$ as discussed above; it has rank $27$. The range of $d$ is finite if the monoid ${\cal M} = \langle M_0, M_1 \rangle$ is finite. This can be checked with a simple queue-based algorithm, and $\cal M$ turns out to have cardinality $151$. From these a simple computation proves $$\lbrace uMv \ : \ M \in {\cal M} \rbrace = \lbrace 0, 1, 2 \rbrace,$$ and so our claim about the range of $d$ follows.
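The queue-based finiteness check can be sketched as follows (our reconstruction of the idea, shown with small stand-in generators; applied to the actual $M_0, M_1$ the same loop enumerates the $151$-element monoid $\cal M$):

```python
# Breadth-first closure of a matrix monoid: starting from the identity,
# repeatedly multiply by the generators until no new products appear.
# If the loop terminates, the monoid is finite and fully enumerated.
from collections import deque

def mat_mul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(len(b)))
                       for j in range(len(b[0])))
                 for i in range(len(a)))

def monoid(generators):
    dim = len(generators[0])
    identity = tuple(tuple(int(i == j) for j in range(dim))
                     for i in range(dim))
    seen = {identity}
    queue = deque([identity])
    while queue:
        m = queue.popleft()
        for g in generators:
            p = mat_mul(m, g)
            if p not in seen:
                seen.add(p)
                queue.append(p)
    return seen
```

As a small test, the two permutation matrices swapping coordinates $(1,2)$ and $(2,3)$ generate a monoid of size $6$ (the symmetric group $S_3$). Once the monoid is known to be finite, ranging over $uMv$ for every $M$ in it determines the full range of the counting function.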
Now that we know the range of $d$ we can create predicates $P_0(n), P_1(n), P_2(n)$ asserting that (a) there are no length-$n$ shifts that are palindromes, (b) there is exactly one shift that is a palindrome, and (c) more than one shift is a palindrome, as follows: $$P_0 : \neg \exists i, (0 \leq i < n), {\bf f}[i..n-1]{\bf f}[0..i-1] \text{ is a palindrome } $$ $$P_1 : \exists i, (0 \leq i < n), {\bf f}[i..n-1]{\bf f}[0..i-1] \text{ is a palindrome and } \neg\exists j \not= i, (0 \leq j < n), {\bf f}[j..n-1]{\bf f}[0..j-1] \text{ is a palindrome} $$ $$ P_2 : \exists i, j, (0 \leq i < j < n), {\bf f}[i..n-1]{\bf f}[0..i-1] \text{ and } {\bf f}[j..n-1]{\bf f}[0..j-1] \text{ are both palindromes }$$ For each one, we can compute a finite automaton characterizing the Fibonacci representations of those $n$ for which $d(n)$ equals, respectively, $0$, $1$, and $2$. For example, we computed the automaton corresponding to $P_0$, and it is displayed in Figure~\ref{noshifts} below. \begin{figure}[H] \begin{center} \includegraphics[width=4in]{output_no-shifts-are-pals.pdf} \caption{Automaton accepting lengths of prefixes for which no shifts are palindromes} \label{noshifts} \end{center} \end{figure} By tracing the path labeled $10^*$ starting at the initial state labeled $18$, we see that the ``finality'' of the states encountered is ultimately periodic with period $3$, proving Theorem~\ref{chuan93-thm}. \end{proof} To finish this section, we reprove a result of Kolpakov and Kucherov \cite{Kolpakov&Kucherov:1999a}. Recalling the definition of maximal repetition from Section~\ref{repe-subsec}, they counted the number $\mr(F_n)$ of occurrences of maximal repetitions in the prefix of $\bf f$ of length $F_n$: \begin{theorem} For $n \geq 5$ we have $\mr(F_n) = 2F_{n-2} - 3$.
\end{theorem} \begin{proof} We create an automaton for the language $$ \lbrace (n,i,j)_F \ : \ 0 \leq i \leq j < n \text{ and } {\bf f}[i..j] \text{ is a maximal repetition of } {\bf f}[0..n-1] \rbrace ,$$ using the predicate \begin{multline*} (i \leq j) \ \wedge\ (j<n)\ \wedge \ \exists p \text{ with } 1 \leq p \leq (j+1-i)/2 \text{ such that } \\ ( (\forall k\leq j-i-p \ {\bf f}[i+k]= {\bf f}[i+k+p]) \ \wedge \ \\ (i \geq 1) \implies (\forall q \text{ with } 1 \leq q \leq p \ \exists \ell \leq j-i-q+1 \ {\bf f}[i-1+\ell] \not= {\bf f}[i-1+\ell+q]) \ \wedge\ \\ (j+1\leq n-1) \implies (\forall r \text{ with } 1 \leq r \leq p\ \exists m \leq j+1-r-i \ {\bf f}[i+m] \not= {\bf f}[i+m+r] ) ) . \end{multline*} Here the second line of the predicate specifies that there is a period $p$ of ${\bf f}[i..j]$ corresponding to a repetition of exponent at least $2$. The third line specifies that no period $q$ of ${\bf f}[i-1..j]$ (when this makes sense) can be $\leq p$, and the fourth line specifies that no period $r$ of ${\bf f}[i..j+1]$ (when $j+1 \leq n-1$) can be $\leq p$. From the automaton we deduce a linear representation $(u, \mu, v)$ of rank 59. Since $(F_n)_F = 10^{n-2}$, it suffices to compute the minimal polynomial of $M_0 = \mu(0)$. When we do this, we discover it is $X^4(X^2 - X - 1)(X-1)^2(X+1)^2$. It follows from the theory of linear recurrences that $$\mr(F_n) = e_1 \alpha^n + e_2 \beta^n + e_3 n + e_4 + (e_5n + e_6)(-1)^n $$ for constants $e_1, e_2, e_3, e_4, e_5, e_6$ and $n \geq 6$. When we solve for $e_1, \ldots, e_6$ by using the first few values of $\mr(F_n)$ (computed from the linear representation or directly) we discover that $e_1 = (3\sqrt{5} - 5)/5$, $e_2 = (-3\sqrt{5} -5)/5$, $e_3 = e_5 = e_6 = 0$, and $e_4 = -3$. From this the result immediately follows. \end{proof} In fact, we can prove even more. \begin{theorem} For $n \geq 0$ the difference $\mr(n+1) - \mr(n)$ is either $0$ or $1$.
Furthermore there is a finite automaton with 10 states that accepts $(n)_F$ precisely when $\mr(n+1) - \mr(n) = 1$. \end{theorem} \begin{proof} Every maximal repetition ${\bf f}[i..j]$ of ${\bf f}[0..n-1]$ is either a maximal repetition of ${\bf f}[0..n]$ with $j \leq n-1$, or is a maximal repetition with $j = n-1$ that, when considered in ${\bf f}[0..n]$, can be extended one character to the right to become one with $j = n$. So the only maximal repetitions of ${\bf f}[0..n]$ not (essentially) counted by $\mr(n)$ are those such that \begin{multline} {\bf f}[i..n] \text{ is a maximal repetition of } {\bf f}[0..n] \text{ and } \\ {\bf f}[i..n-1] \text{ is {\it not\/} a maximal repetition of } {\bf f}[0..n-1]. \label{condit} \end{multline} We can easily create a predicate asserting this latter condition, and from this obtain the linear representation of $\mr(n+1) - \mr(n)$: \begin{align*} u &= [0\ 0\ 0\ 0\ 0\ 1\ 0\ 0\ 0\ 0\ 0\ 0\ ] \\ \mu(0) &= \left[ \begin{array}{cccccccccccc} 0&0&0&0&0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&0&0&0&0&1&0\\ 0&0&0&1&0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&0&1&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&1&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&1&0\\ 0&0&0&0&1&0&0&0&0&0&0&0\\ 0&0&0&0&1&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&1\\ 0&0&0&0&0&0&1&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&1&0&0 \end{array} \right] \\ \mu(1) &= \left[ \begin{array}{cccccccccccc} 0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&1&0&0&0\\ 0&1&1&0&0&0&0&0&0&0&0&0\\ 1&1&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0\\ 0&1&1&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&1&0&0&0 \end{array} \right] \\ v &= [0\ 0\ 0\ 0\ 1\ 0\ 0\ 0\ 1\ 0\ 0\ 1 ] \\ \end{align*} We now use the trick we previously used for the proof of Theorem~\ref{chuan93-thm}; the monoid generated by $\mu(0)$ and $\mu(1)$ has size $61$ and for each matrix $M$ in this monoid we have $u M v \in \lbrace 0, 1 \rbrace$. 
It follows that $\mr(n+1) - \mr(n) \in \lbrace 0, 1 \rbrace$ for all $n \geq 0$. Knowing this, we can now build an automaton accepting those $n$ for which there exists an $i$ for which \eqref{condit} holds. When we do so we get the automaton depicted below in Figure~\ref{maxrepp}. \begin{figure}[H] \begin{center} \includegraphics[width=6in]{kk-prefixes.pdf} \caption{Automaton accepting $(n)_F$ such that $\mr(n+1) - \mr(n) = 1$} \label{maxrepp} \end{center} \end{figure} \end{proof} \section{Abelian properties} Our decision procedure does not apply, in complete generality, to abelian properties of infinite words. This is because there is no obvious way to express assertions like $\psi(x) = \psi(x')$ for two factors $x, x'$ of an infinite word. (Here $\psi:\Sigma^* \rightarrow {\mathbb{N}}^{|\Sigma|}$ is the Parikh map that sends a word to the number of occurrences of each letter.) Indeed, in the $2$-automatic case it is provable that there is at least one abelian property that is inexpressible \cite[\S 5.2]{Schaeffer:2013}. However, the special nature of the Fibonacci word $\bf f$ allows us to mechanically prove some assertions involving abelian properties. In this section we describe how we did this. By an {\it abelian square of order $n$} we mean a factor of the form $x x'$ where $\psi(x) = \psi(x')$, where $n = |x|$. In a similar way we can define abelian cubes and higher powers. We start with the elementary observation that $\bf f$ is defined over the alphabet $\lbrace 0, 1 \rbrace$. Hence, to understand the abelian properties of a factor $x$ it suffices to know $|x|$ and $|x|_0$. Next, we observe that the map that sends $n$ to $a_n := |{\bf f}[0..n-1]|_0$ (that is, the number of $0$'s in the length-$n$ prefix of $\bf f$), is actually {\it synchronized} (see \cite{Carpi&Maggi:2001,Carpi&DAlonzo:2009,Carpi&DAlonzo:2010,Goc&Schaeffer&Shallit:2013}). That is, there is a DFA accepting the Fibonacci representation of the pairs $(n,a_n)$. 
In fact, we have the following \begin{theorem} Suppose the Fibonacci representation of $n$ is $e_1 e_2 \cdots e_i$. Then $a_n = [e_1 e_2 \cdots e_{i-1}]_F + e_i$. \label{fibr} \end{theorem} \begin{proof} First, we observe that an easy induction on $m$ proves that $|X_m|_0 = F_{m-1}$ for $m \geq 2$. We will use this in a moment. The theorem's claim is easily checked for $n = 0,1$. We prove it for $F_{m+1} \leq n < F_{m+2}$ by induction on $m$. The base case is $m = 1$, which corresponds to $n = 1$. Now assume the theorem's claim is true for $m-1$; we prove it for $m$. Write $(n)_F = e_1 e_2 \cdots e_m$. Then, using the fact that ${\bf f}[0..F_{m+2}-1] = X_{m+2} = X_{m+1} X_m$, we get \begin{align*} |{\bf f}[0..n-1]|_0 &= |{\bf f}[0..F_{m+1}-1]|_0 + |{\bf f}[F_{m+1}..n-1]|_0 \\ &= |X_{m+1}|_0 + |{\bf f}[0..n-1-F_{m+1}]|_0 \\ &= F_m + |{\bf f}[0..n-1-F_{m+1}]|_0 \\ &= F_m + [e_2\cdots e_{m-1}]_F + e_m \\ &= [e_1 \cdots e_{m-1}]_F + e_m , \end{align*} as desired. \end{proof} In fact, the synchronized automaton for $(n,a_n)_F$ is given in the following diagram: \begin{figure}[H] \begin{center} \includegraphics[width=6in]{synchrofib.pdf} \caption{Automaton accepting $(n,a_n)_F$} \label{synchro} \end{center} \end{figure} Here the missing state numbered $2$ is a ``dead'' state that is the target of all undrawn transitions. The correctness of this automaton can be checked using our prover.
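Independently, Theorem~\ref{fibr} can be sanity-checked numerically for small $n$: the sketch below (a brute-force check, not a substitute for the induction) compares the formula against a direct count of $0$'s in the prefix.

```python
# Check a_n = [e_1 ... e_{i-1}]_F + e_i against a direct count of the 0's
# in the length-n prefix of the Fibonacci word f, for small n.

def fib_word(length):
    w = "0"                    # iterate the morphism 0 -> 01, 1 -> 0
    while len(w) < length:
        w = w.replace("1", "2").replace("0", "01").replace("2", "0")
    return w[:length]

def zeckendorf(n):
    fibs = [1, 2]              # F_2, F_3, ...
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    digits = []
    for f in reversed(fibs[:-1]):
        if f <= n:
            digits.append(1)
            n -= f
        else:
            digits.append(0)
    while len(digits) > 1 and digits[0] == 0:
        digits.pop(0)
    return digits

def value(digits):
    """[e_1 ... e_k]_F: the last digit has weight F_2 = 1, the next F_3 = 2, ..."""
    fibs = [1, 2]
    while len(fibs) < len(digits):
        fibs.append(fibs[-1] + fibs[-2])
    return sum(d * f for d, f in zip(reversed(digits), fibs))

f = fib_word(200)
for n in range(1, 150):
    e = zeckendorf(n)
    assert f[:n].count("0") == value(e[:-1]) + e[-1]
```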
Letting $\zc(x,y)$ denote $1$ if $(x,y)_F$ is accepted and $0$ otherwise, it suffices to check that \begin{enumerate} \item $ \forall x\ \exists y\ \zc(x,y)= 1$ (that is, for each $x$ there is at least one corresponding $y$ accepted); \item $\forall x\ \forall y\ \forall z\ ((\zc(x,y) = 1) \ \wedge\ (\zc(x,z) = 1)) \implies y = z$ (that is, for each $x$ at most one corresponding $y$ is accepted); \item $\forall x \ \forall y \ ((\zc(x,y)=1) \ \wedge \ ({\bf f}[x] = 1)) \implies (\zc(x+1,y+1) = 1)$; \item $\forall x \ \forall y \ ((\zc(x,y)=1) \ \wedge \ ({\bf f}[x] = 0)) \implies (\zc(x+1,y) = 1)$. \end{enumerate} Another useful automaton computes, on input $n, i, j$, the function $$\fab(n,i,j) := |{\bf f}[i..i+n-1]|_0 - |{\bf f}[j..j+n-1]|_0 = a_{i+n}-a_i - a_{j+n}+a_j.$$ From the known fact that the factors of $\bf f$ are ``balanced'' we know that $\fab$ takes only the values $-1, 0, 1$. This automaton can be deduced from the one above. However, we calculated it by ``guessing'' the right automaton and then verifying the correctness with our prover. The automaton for $\fab(n,i,j)$ has 30 states, numbered from $1$ to $30$. Inputs are in $\Sigma_2^3$. The transitions, as well as the outputs, are given in the table below.
\begin{table}[H] \begin{center} \begin{tabular}{|c|cccccccc|c}
$q$ & $[0,0,0]$ & $[0,0,1]$ & $[0,1,0]$ & $[0,1,1]$ & $[1,0,0]$ & $[1,0,1]$ & $[1,1,0]$ & $[1,1,1]$ & $\tau(q)$ \\ \hline
1& 1& 2& 3& 4& 4& 5& 6& 7& 0\\
2& 8& 1& 9& 3& 3& 4&10& 6& 0\\
3&11&12& 1& 2& 2&13& 4& 5& 0\\
4&14&11& 8& 1& 1& 2& 3& 4& 0\\
5&15&11&16& 1& 1& 2& 3& 4& 1\\
6&17&18& 8& 1& 1& 2& 3& 4&$-1$\\
7&19&18&16& 1& 1& 2& 3& 4& 0\\
8& 1& 2& 3& 4& 4&20& 6&21& 0\\
9&11&12& 1& 2& 2&22& 4&20& 0\\
10&18&23& 1& 2& 2&13& 4& 5&$-1$\\
11& 1& 2& 3& 4& 4& 5&24&25& 0\\
12& 8& 1& 9& 3& 3& 4&26&24& 0\\
13&16& 1&27& 3& 3& 4&10& 6& 1\\
14& 1& 2& 3& 4& 4&20&24&28& 0\\
15& 2&13& 4& 5& 5&20&25&28&$-1$\\
16& 2&13& 4& 5& 5&20& 7&21&$-1$\\
17& 3& 4&10& 6& 6&21&24&28& 1\\
18& 3& 4&10& 6& 6& 7&24&25& 1\\
19& 4& 5& 6& 7& 7&21&25&28& 0\\
20&15&14&16& 8& 8& 1& 9& 3& 1\\
21&19&17&16& 8& 8& 1& 9& 3& 0\\
22&16& 8&27& 9& 9& 3&29&10& 1\\
23& 9& 3&29&10&10& 6&26&24& 1\\
24&17&18&14&11&11&12& 1& 2&$-1$\\
25&19&18&15&11&11&12& 1& 2& 0\\
26&18&23&11&12&12&30& 2&13&$-1$\\
27&12&30& 2&13&13&22& 5&20&$-1$\\
28&19&17&15&14&14&11& 8& 1& 0\\
29&18&23& 1& 2& 2&22& 4&20&$-1$\\
30&16& 1&27& 3& 3& 4&26&24& 1
\end{tabular} \end{center} \caption{Automaton to compute $\fab$} \end{table} Once we have guessed the automaton, we can verify it as follows: \begin{enumerate} \item $\forall i \ \forall j\ \fab[0][i][j]=0$. This is the basis for an induction. \item Induction steps: \begin{itemize} \item $\forall i\ \forall j\ \forall n \ ({\bf f}[i+n]={\bf f}[j+n]) \implies (\fab[n][i][j]=\fab[n+1][i][j])$. \item $\forall i\ \forall j\ \forall n\ (({\bf f}[i+n]=0) \wedge ({\bf f}[j+n]=1)) \implies (((\fab[n][i][j]=-1) \wedge (\fab[n+1][i][j]=0)) \vee ((\fab[n][i][j]=0) \wedge (\fab[n+1][i][j]=1))) $ \item $\forall i\ \forall j\ \forall n\ (({\bf f}[i+n]=1) \wedge ({\bf f}[j+n]=0)) \implies (((\fab[n][i][j]=1) \wedge (\fab[n+1][i][j]=0)) \vee ((\fab[n][i][j]=0) \wedge (\fab[n+1][i][j]=-1))) $.
\end{itemize} \end{enumerate} As the first application, we prove \begin{theorem} The Fibonacci word $\bf f$ has abelian squares of all orders. \end{theorem} \begin{proof} We use the predicate $$ \exists i \ (\fab[n][i][i+n] = 0) .$$ The resulting automaton accepts all $n \geq 0$. The total computing time was 141 ms. \end{proof} Cummings and Smyth \cite{Cummings&Smyth:1997} counted the total number of all occurrences of (nonempty) abelian squares in the Fibonacci words $X_i$. We can do this by using the predicate $$ (k>0) \wedge (i+2k \leq n) \wedge (\fab[k][i][i+k]=0),$$ using the techniques in Section~\ref{enumer} and considering the case where $n = F_i$. When we do, we get a linear representation of rank 127 that counts the total number $w(n)$ of occurrences of abelian squares in the prefix of length $n$ of the Fibonacci word. To recover the Cummings-Smyth result we compute the minimal polynomial of the matrix $M_0$ corresponding to the predicate above. It is $$x^4 (x-1)(x+1)(x^2+x+1)(x^2-3x+1)(x^2-x+1)(x^2+x-1)(x^2-x-1).$$ This means that $w(F_n)$, that is, $w$ evaluated at $10^{n-2}$ in Fibonacci representation, is a linear combination of the roots of this polynomial to the $n$'th power (more precisely, the $(n-2)$th, but this detail is unimportant). 
The roots of the polynomial are $$ -1, 1, (-1 \pm i \sqrt{3})/2, (3 \pm \sqrt{5})/2, (1 \pm i \sqrt{3})/2, (-1 \pm \sqrt{5})/2, (1 \pm \sqrt{5})/2.$$ Solving for the coefficients as we did in Section~\ref{enumer} we get \begin{theorem} For all $n \geq 0$ we have \begin{multline*} w(F_n) = c_1 \left({{3+\sqrt{5}}\over 2}\right)^n + c_1 \left({{3-\sqrt{5}}\over 2}\right)^n + c_2 \left( {{1+\sqrt{5}}\over 2} \right)^n + c_2 \left( {{1-\sqrt{5}}\over 2} \right)^n + \\ c_3 \left({{1+i\sqrt{3}}\over 2} \right)^n + \overline{c_3} \left({{1-i\sqrt{3}}\over 2} \right)^n + c_4 \left({{-1+i\sqrt{3}}\over 2} \right)^n + \overline{c_4} \left({{-1-i\sqrt{3}}\over 2} \right)^n + c_5 (-1)^n, \end{multline*} where \begin{align*} c_1 &= 1/40 \\ c_2 &= -\sqrt{5}/20 \\ c_3 &= (1 - i\sqrt{3})/24 \\ c_4 &= i\sqrt{3}/24 \\ c_5 &= -2/15, \end{align*} and here $\overline{x}$ denotes complex conjugate. Here the parts corresponding to the constants $c_3, c_4, c_5$ form a periodic sequence of period 6. \end{theorem} Next, we turn to what is apparently a new result. Let $h(n)$ denote the total number of distinct factors (not occurrences of factors) that are abelian squares in the Fibonacci word $X_n$. 
In this case we need the predicate $$ (k \geq 1) \wedge (i+2k \leq n) \wedge (\fab[k][i][i+k]=0) \wedge (\forall j<i \ (\exists t<2k\ ({\bf f}[j+t]\not= {\bf f}[i+t]))).$$ We get the minimal polynomial $$ x^4(x+1)(x^2+x+1)(x^2-3x+1)(x^2-x+1)(x^2+x-1)(x^2-x-1)(x-1)^2.$$ Using the same technique as above we get \begin{theorem} For $n \geq 2$ we have $h(n) = a_1c_1^n + \cdots + a_{10}c_{10}^n $ where \begin{align*} a_1 &= (-2+\sqrt{5})/20 \\ a_2 &= (-2-\sqrt{5})/20 \\ a_3 &= (5-\sqrt{5})/20 \\ a_4 &= (5+\sqrt{5})/20 \\ a_5 &= 1/30 \\ a_6 &= -5/6 \\ a_7 &= (1/12)-i \sqrt{3}/12 \\ a_8 &= (1/12)+i \sqrt{3}/12 \\ a_9 &= (1/6) + i \sqrt{3}/12 \\ a_{10}&= (1/6) - i \sqrt{3}/12 \end{align*} and \begin{align*} c_1 &= (3+\sqrt{5})/2 \\ c_2 &= (3-\sqrt{5})/2 \\ c_3 &= (1+\sqrt{5})/2 \\ c_4 &= (1-\sqrt{5})/2 \\ c_5 &= -1 \\ c_6 &= 1 \\ c_7 &= (1/2)+i \sqrt{3}/2 \\ c_8 &= (1/2)-i \sqrt{3}/2 \\ c_9 &= (-1/2)+i \sqrt{3}/2 \\ c_{10} &= (-1/2)-i \sqrt{3}/2 . \end{align*} \end{theorem} For another new result, consider counting the total number $a(n)$ of distinct factors of length $2n$ of the infinite word $\bf f$ that are abelian squares. This function is rather erratic. The following table gives the first few values: \begin{table}[H] \begin{center} \begin{tabular}{c|cccccccccccccccccccccccccccccc} $n$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 \\ \hline $a(n)$ & 1&3&5&1&9&5&5&15&3&13&13&5&25&9&15&25&1&27&19&11 \end{tabular} \end{center} \end{table} We use the predicate $$ (n \geq 1) \wedge (\fab[n][i][i+n]=0) \wedge (\forall j < i\ ( \exists t<2n \ ({\bf f}[j+t]\not= {\bf f}[i+t])))$$ to create the matrices and vectors. \begin{theorem} $a(n) = 1$ infinitely often and $a(n) = 2n-1$ infinitely often. More precisely $a(n) = 1$ iff $(n)_F = 1$ or $(n)_F = (100)^i 101$ for $i \geq 0$, and $a(n) = 2n-1$ iff $(n)_F = 10^i$ for $i \geq 0$.
\end{theorem} \begin{proof} For the first statement, we create a DFA accepting those $(n)_F$ for which $a(n) = 1$, via the predicate $$ \forall i\ \forall j \ ((\fab[n][i][i+n]=0) \wedge (\fab[n][j][j+n]=0)) \implies (\forall t<2n \ ({\bf f}[j+t] = {\bf f}[i+t])).$$ The resulting $6$-state automaton accepts the set specified. For the second result, we first compute the minimal polynomial of the matrix $M_0$ of the linear representation. It is $x^5 (x-1)(x+1)(x^2-x-1)$. This means that, for $n \geq 5$, we have $a(F_n) = c_1 + c_2 (-1)^n + c_3 \alpha^n + c_4 \beta^n$ where, as usual, $\alpha = (1+\sqrt{5})/2$ and $\beta=(1-\sqrt{5})/2$. Solving for the constants, we determine that $a(F_n) = 2F_n - 1$ for $n \geq 2$, as desired. To show that these are the only cases for which $a(n) = 2n-1$, we use a predicate that says that there are not at least three different factors of length $2n$ that are not abelian squares. Running this through our program results in only the cases previously discussed. \end{proof} Finally, we turn to abelian cubes. Unlike the case of squares, some orders do not appear in $\bf f$. \begin{theorem} The Fibonacci word $\bf f$ contains, as a factor, an abelian cube of order $n$ iff $(n)_F$ is accepted by the automaton below. \end{theorem} \begin{figure}[H] \begin{center} \includegraphics[width=6.5in]{fibabelcube.pdf} \caption{Automaton accepting orders of abelian cubes in $\bf f$} \label{fibabelcube} \end{center} \end{figure} Theorem~\ref{fibr} has the following interesting corollary. \begin{corollary} Let $h:\lbrace 0, 1 \rbrace^* \rightarrow \Delta^*$ be an arbitrary morphism such that $h(01) \not= \epsilon$. Then $h({\bf f})$ is an infinite Fibonacci-automatic word. \end{corollary} \begin{proof} From Theorem~\ref{fibr} we see that there is a predicate $\zc(n,n')$ which is true if $n' = |{\bf f}[0..n-1]|_0$ and false otherwise, and this predicate can be implemented as a finite automaton taking the inputs $n$ and $n'$ in Fibonacci representation. 
Suppose $h(0) = w$ and $h(1) = x$. Now, to show that $h({\bf f})$ is Fibonacci-automatic, it suffices to show that, for each letter $a \in \Delta$, the language of ``fibers'' $$ L_a = \{ (n)_F : (h({\bf f}))[n] = a \} $$ is regular. To see this, we write a predicate for the $n$ in the definition of $L_a$, namely \begin{multline*} \exists q\ \exists r_0 \ \exists r_1 \ \exists m \ (q \leq n < q+ |h({\bf f}[m])|) \ \wedge \ \zc(m,r_0) \ \wedge \ (r_0+r_1=m) \wedge \\ (r_0 |w| + r_1 |x| = q) \ \wedge \ (( {\bf f}[m]=0 \ \wedge \ w[n-q] = a) \ \vee \ ({\bf f}[m] = 1 \ \wedge\ x[n-q] = a) ) . \end{multline*} Notice that the predicate looks like it uses multiplication, but this multiplication can be replaced by repeated addition since $|w|$ and $|x|$ are constants here. Unpacking this predicate we see that it asserts the existence of $m$, $q$, $r_0$, and $r_1$ having the meaning that \begin{itemize} \item the $n$'th symbol of $h({\bf f})$ lies inside the block $h({\bf f}[m])$ and is in fact the $(n-q)$'th symbol in the block (with the first symbol being symbol 0) \item ${\bf f}[0..m-1]$ has $r_0$ 0's in it \item $ {\bf f}[0..m-1]$ has $r_1$ 1's in it \item the length of $h({\bf f}[0..m-1])$ is $q$ \end{itemize} Since everything in this predicate is in the logical theory $({\mathbb{N}}, +, <, F)$ where $F$ is the predicate for the Fibonacci word, the language $L_a$ is regular. \end{proof} \begin{remark} Notice that everything in this proof goes through for other numeration systems, provided the original word has the property that the Parikh vector of the prefix of length $n$ is synchronized.
\end{remark} \section{Details about our implementation} Our program is written in Java, and was developed using the {\tt Eclipse} development environment.\footnote{Available from {\tt http://www.eclipse.org/ide/} .} We used the {\tt dk.brics.automaton} package, developed by Anders M{\o}ller at Aarhus University, for automaton minimization.\footnote{Available from {\tt http://www.brics.dk/automaton/} .} {\tt Maple 15} was used to compute characteristic polynomials.\footnote{Available from {\tt http://www.maplesoft.com} .} The {\tt GraphViz} package was used to display automata.\footnote{Available from {\tt http://www.graphviz.org} .} Our program consists of about 2000 lines of code. We used Hopcroft's algorithm for DFA minimization. A user interface is provided to enter queries in a language very similar to the language of first-order logic. The intermediate and final results of a query are all automata. At every intermediate step, we chose to do minimization and determinization, if necessary. Each automaton accepts tuples of integers in the numeration system of choice. The built-in numeration systems are ordinary base-$k$ representations and Fibonacci base. However, the program can be used with any numeration system for which automata for addition and ordering can be provided. These numeration system-specific automata can be declared in text files following a simple syntax. For the automaton resulting from a query it is always guaranteed that if a tuple $t$ of integers is accepted, then all tuples obtained from $t$ by adding or truncating leading zeros are also accepted. In Fibonacci representation, we make sure that the accepted representations do not contain consecutive $1$'s. The program was tested against hundreds of different test cases, varying in complexity from basic cases testing only one feature at a time to more comprehensive ones with many alternating quantifiers.
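As an illustration of the determinization step mentioned above, the program uses the classical subset construction; a generic sketch of that construction (our illustration, not the program's actual Java code):

```python
# Subset construction: determinize an NFA whose transition map is
# delta: (state, symbol) -> set of states.  Returns the reachable DFA
# state sets, the DFA transition map, the start set, and the accepting sets.

def determinize(alphabet, delta, start, accepting):
    start_set = frozenset([start])
    seen = {start_set}
    todo = [start_set]
    dfa_delta = {}
    dfa_accepting = set()
    while todo:
        s = todo.pop()
        if s & accepting:               # a DFA set accepts iff it contains
            dfa_accepting.add(s)        # an accepting NFA state
        for a in alphabet:
            t = frozenset(q for p in s for q in delta.get((p, a), ()))
            dfa_delta[(s, a)] = t
            if t not in seen:
                seen.add(t)
                todo.append(t)
    return seen, dfa_delta, start_set, dfa_accepting
```

For example, the three-state NFA over $\{0,1\}$ accepting strings whose second-to-last symbol is $1$ determinizes to four reachable DFA states.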
We also used known facts about automatic sequences and the Fibonacci word from the literature to test our program, and in all those cases we obtained the same results as in the literature. In a few cases, we were even able to find small errors in those earlier results. The source code and manual will soon be available for free download. \section{Acknowledgments} We thank Kalle Saari for calling our attention to the small error in \cite{Fraenkel&Simpson:1999}. We thank Narad Rampersad and Michel Rigo for useful suggestions. Eric Rowland thought about the proof of Theorem~\ref{additive-thm} with us in 2010, and was able to prove at that time that the word $1213121512131218\cdots$ avoids additive squares. We acknowledge his prior work on this problem and thank him for allowing us to quote it here. \newcommand{\noopsort}[1]{} \newcommand{\singleletter}[1]{#1}
\section{Appendices} \label{sec:appendix} \subsection{Datasets: Details} \label{appendix:sec:datasets} We evaluate our \textsf{MultiFormatQA} on 19 existing datasets that target various formats, as well as various complex linguistic phenomena. Table~\ref{fig:datasets:properties} shows different properties for our datasets (whether a dataset comes with a paragraph, whether the paragraph explicitly contains the answer, whether there are candidate answers as part of the input, etc.) Most importantly, they are grouped into several formats/categories described below. Table~\ref{tab:statitstics} gives summary statistics of these datasets. \paragraph{Extractive QA (EX).} All the datasets in this format require models to extract the answer to a given question as a substring from a context paragraph. SQuAD 1.1~\cite{rajpurkar-etal-2016-squad} contains questions about Wikipedia paragraphs. A later version of this dataset, SQuAD 2~\cite{rajpurkar-etal-2018-know}, includes unanswerable questions, which empirically makes the task much harder. For our evaluation, we use the development sets of SQuAD 1.1 and SQuAD 2. The NewsQA dataset~\cite{trischler-etal-2017-newsqa} focuses on paraphrased questions requiring predicate-argument structure understanding, collected from CNN/DailyMail news articles. Quoref~\cite{dasigi-etal-2019-quoref} contains questions that require coreference resolution in Wikipedia articles and can even have disjoint spans as answers. ROPES~\cite{lin-etal-2019-reasoning} centers around situation understanding, where the model must understand the causes and effects implicit in the given situation. \paragraph{Abstractive QA (AB).} All the datasets in this format require models to produce answers that are often not mere substrings of the given context paragraph. NarrativeQA~\cite{kocisky-etal-2018-narrativeqa} focuses on understanding various events that happen in a given movie plot, based on summaries of their movie adaptations from various web resources.
Many of the answers do not have high overlap with the context. DROP~\cite{dua-etal-2019-drop} contains questions that involve rudimentary mathematical skills (such as counting, addition, subtraction, maximum, minimum, etc.) and that query multiple parts of the paragraph. The answer can be either a number or a date that can be inferred from the paragraph, or several spans from the context paragraph. Finally, we use an open-domain version of NaturalQuestions~\cite{kwiatkowski-etal-2019-natural} where the paragraph that was used for creating the question is eliminated, and only the questions with short answers up to five tokens are taken. Instead, we follow~\cite{min2020ambigqa} and use a DPR retrieval engine~\cite{karpukhin2020dense} to augment each question with an additional context paragraph. We call this dataset NatQA. \paragraph{Multiple-choice QA (MC).} All the datasets in this format contain questions that come with candidate answers. MCTest~\cite{richardson-etal-2013-mctest} contains questions about simple, fictional stories. RACE~\cite{lai-etal-2017-race} is a challenging set of English comprehension multiple-choice exams given in Chinese middle and high schools. OpenBookQA~\cite{mihaylov-etal-2018-suit}, ARC \cite{clark2018think,clark2016combining}, and QASC~\cite{khot2019qasc} are different MC tests focusing on elementary/high-school-style science exams. We use several other datasets that are often framed as commonsense reasoning benchmarks: CommonsenseQA~\cite{talmor-etal-2019-commonsenseqa} is geared towards activity/concept questions, PIQA~\cite{bisk2019piqa} addresses physical interaction reasoning, SIQA~\cite{sap2019socialiqa} contains questions that require social reasoning (motivations, reactions, event orders), and finally Winogrande~\cite{sakaguchi2019winogrande} is a benchmark for hard pronoun resolution problems~\cite{Levesque2011TheWS,peng2015solving}.
Other than MCTest and RACE, the rest of the datasets do not come with accompanying paragraphs. On such datasets, a retrieval system is occasionally used to supplement each question with a relevant retrieved context paragraph. For most of this work, we keep the questions as-is with no additional retrieval (unless otherwise mentioned), except in \S\ref{subsec:sota} where we use IR to get numbers comparable to earlier work. One other source of variability among these datasets is their number of candidate answers. While many datasets have four candidates (see Figure~\ref{fig:datasets:properties}), others have more. Later, in \S\ref{subsec:generalization} we will see that our approach generalizes to datasets with different numbers of candidates, even if those numbers are not seen during training. \paragraph{Yes/No QA (YN).} All the datasets in this format contain questions that can be answered with yes or no. One can think of these as multiple-choice questions with 2 candidates; however, they are usually treated differently. The examples we use are BoolQ~\cite{clark-etal-2019-boolq}, a version of this dataset with natural perturbations, BoolQ-NP~\cite{khashabi2020naturalperturbations}, and the subset of MultiRC~\cite{khashabi-etal-2018-looking} that has binary (yes/no) answers. \paragraph{Contrast-sets.} Additionally, we use \emph{contrast-sets}~\cite{gardner2020evaluating} for several of our datasets (denoted with ``CS''): BoolQ-CS, ROPES-CS, Quoref-CS, DROP-CS. These evaluation sets are expert-generated perturbations that deviate from the patterns common in the original datasets. \subsection{Details on the experiments} \label{appendix:subsec:hyperparams} Below are several details on the experiments: \begin{itemize} \item \underline{Models: } we use two text-to-text frameworks: T5 and BART. \item \underline{Model sizes: } Most of the experiments are done on T5 (11B), which has 11 billion parameters. We also report experiments with BART (large) with 440 million parameters.
\item \underline{Input/output size: } For all experiments, we use token limits of 512 and 100 for input and output sequences, respectively. \item \underline{\# of iterations for pretraining on the seed datasets (\S\ref{sec:multiformat-training}):} All models are trained for $100k$ steps on the seed datasets. \item \underline{Learning rates:} we use 1e-3 and 1e-5 for T5 and BART, respectively, following the original works on each framework. \item \underline{Batch sizes:} We use batch sizes of 8 and 120 for the T5 (11B) and BART models, respectively. \item \underline{Infrastructure: } In the experiments, we use v3-8 TPUs for T5 models, and eight 32GB GPUs for BART models. \item \underline{Time spent to build \textsf{MultiFormatQA}:} pretraining \textsf{MultiFormatQA}\ takes approximately 36 and 55 hours on the T5 (11B) and BART models, respectively. \item \underline{Finetuning on datasets (\S\ref{subsec:sota}):} the only hyperparameter we iterated over is the number of training steps. Each model was fine-tuned for 60$k$ steps and checkpoints were saved every 2$k$ steps. The model with the highest score on the dev set is our selected model. \end{itemize} \clearpage \subsection{\textbf{\textsc{UnifiedQA}}\xspace: Different Sizes} \label{appendix:subsec:unified:qa:sizes} For completeness, we also show the scores of \textsf{MultiFormatQA} of different sizes on each dataset. Here, each row corresponds to a single system. \begin{table}[h] \centering \includegraphics[scale=0.66,trim=1.8cm 15.7cm 0cm 2.2cm]{figures/sizesa1.pdf} \includegraphics[scale=0.66,trim=1.8cm 16.2cm 0cm 2.2cm]{figures/sizesb2.pdf} \caption{\textsf{MultiFormatQA} of different sizes on our datasets. } \label{tab:unifiedqa:different:model:sizes} \end{table} \subsection{Comparison with the Dedicated Models: extended results} \label{appendix:subsec:union:vs:single:dataset} Here we summarize an extension of the results in \S\ref{subsec:union:vs:single:dataset}.
Table~\ref{tab:appendix:union:vs:single:dataset} summarizes the results of the relevant experiment. In the top portion of the table we have evaluations of T5 models fine-tuned for individual datasets, followed by \textsf{MultiFormatQA}. As can be observed from the table, \textsf{MultiFormatQA} performs almost as well as the best single-dataset experts. In some cases \textsf{MultiFormatQA} performs even better than the single-dataset experts (e.g., on OBQA or NQA). On average (last column), \textsf{MultiFormatQA} performs much better than the dataset/format-specific systems. In conclusion, \textsf{MultiFormatQA} offers flexibility across multiple QA formats while compromising almost nothing compared to dataset-specific experts. \begin{table*}[h] \centering \includegraphics[scale=0.665,trim=2.1cm 13.9cm 2cm 1.5cm, clip=false]{figures/union_vs_dataset5.pdf} \caption{ \textsf{MultiFormatQA} is on-par with systems tailored to individual datasets (the diagonal cells vs the last row) while functioning across a wide range of datasets (the last column). } \label{tab:appendix:union:vs:single:dataset} \end{table*} \subsection{Pairwise Mixing: extended results} \label{appendix:subsec:pairwise} Here we summarize an extension of the results in \S\ref{subsec:pair}. The question addressed here is whether there is value in mixing datasets with different formats. We evaluated this by adding one dataset of a different format to four different datasets (one for each format). The results are summarized in Table~\ref{tab:appendix:pairwise_table}. The goal of each sub-table is to measure the \emph{within-format} generalization one can gain via \emph{out-of-format} training. Each sub-table has an \emph{anchor} dataset, indicated in the first column. For example, in the first table the anchor dataset is SQuAD. Each sub-table combines datasets of other formats with the anchor dataset (e.g., SQuAD + RACE, etc.).
The columns of the sub-tables contain evaluations on the datasets with the same format as the anchor dataset. For example, in the first table, the evaluation is done on SQuAD 1.1/2.0, NewsQA, and Quoref, which have the same format as SQuAD 1.1, the anchor dataset. The results show that one can achieve gains for question answering in a certain format by incorporating resources in other formats. In the first two sub-tables, we see that NarQA (AB) and OBQA (MC) help a SQuAD model generalize better to other EX datasets. In the third sub-table, where the anchor dataset is NQA (AB), EX datasets help an NQA model generalize better to other AB datasets. In the 4th/5th sub-tables, EX and AB datasets help RACE/OBQA (MC) models generalize better to other MC datasets. Similarly, in the final sub-table, MC datasets help improve the scores on YN datasets. \begin{table*}[!htbp] \centering \includegraphics[scale=0.64,trim=3cm 13cm 2cm 1.5cm, clip=false]{figures/squad11_pairwise_table2.pdf} \includegraphics[scale=0.64,trim=3cm 14.7cm 2cm 1.5cm, clip=false]{figures/nqa_pairwise_table2.pdf} \includegraphics[scale=0.64,trim=3cm 12.5cm 2cm 1.5cm, clip=false]{figures/race_pairwise_table3.pdf} \includegraphics[scale=0.64,trim=3cm 14.7cm 2cm 1.5cm, clip=false]{figures/boolq_pairwise_table2.pdf} \caption{Pairwise mixing of formats: mixing with QA datasets of different formats helps. } \label{tab:appendix:pairwise_table} \end{table*} \clearpage \subsection{Extended Results of Fine-tuning on Winogrande} \label{appendix:subsec:winogrande} Here we provide extended results for the Winogrande dataset. The results are summarized in Table~\ref{tab:winograd:table}. The table includes results of fine-tuning \textsc{UnifiedQA}$_\text{T5}$\ and \textsc{UnifiedQA}$_\text{BART}$, as well as fine-tuning of the vanilla language models, T5 and BART. As can be observed, on this dataset, fine-tuning \textsf{MultiFormatQA} gives stronger results when the size of the training data is limited.
With respect to the overall metric AUC, \textsf{MultiFormatQA} has a slight edge over fine-tuning the vanilla language models. \begin{table}[ht] \centering \includegraphics[scale=0.8,trim=4cm 16cm 0cm 2cm]{figures/winograd-1.pdf} \caption{Extended results on the Winogrande dataset} \label{tab:winograd:table} \end{table} \section{Introduction} Question answering is a common tool for assessing how well computers can understand language and reason with it. To this end, the NLP community has introduced several distinct datasets, with four popular \emph{QA formats} illustrated in Fig.~\ref{fig:intro_figure}. For instance, some datasets expect the answer to be ``yes'' or ``no'', or a unique answer span in the associated paragraph (as opposed to multiple or no spans). These differences have motivated their study in silos, often encoding QA format into the model architecture itself. Efforts to exploit multiple datasets remain largely restricted to a single format. For example, \citet{clark2019f} limit consideration to multiple-choice datasets, while \citet{talmor-berant-2019-multiqa} focus their generalization study on extractive span prediction models. To the best of our knowledge, no single QA system targets, not to mention excels at, all of these formats. \begin{figure}[t] \centering \includegraphics[scale=0.69,trim=1.65cm 15.2cm 6.5cm 2cm, clip=true]{figures/formats4.pdf} \caption{ Four formats (color-coded throughout the paper) commonly used for posing questions and answering them: \bluetext{Extractive (EX)}, \redtext{Abstractive (AB)}, \purpletext{Multiple-Choice (MC)}, and \greentext{Yes/No (YN)}. Sample dataset names are shown in square brackets. We study generalization and transfer across these formats.
} \label{fig:intro_figure} \end{figure} \begin{figure*}[t] \centering \includegraphics[scale=0.66,trim=10.1cm 16.5cm 10cm 2.5cm]{figures/datasets6.pdf} \caption{Properties of various QA datasets included in this study: 5 extractive (EX), 3 abstractive (AB), 9 multiple-choice (MC), and 3 yes/no (YN). `idk' denotes `I don't know' or unanswerable questions. BoolQ represents both the original dataset and its \emph{contrast-sets} extension BoolQ-CS; similarly for ROPES, Quoref, and DROP. } \label{fig:datasets:properties} \end{figure*} This raises the question: \emph{Can QA models learn linguistic reasoning abilities that generalize across formats?} Our intuition is simple: while question format and relevant knowledge may vary across QA datasets, the underlying linguistic understanding and reasoning abilities are largely common. A multiple-choice model may, therefore, benefit from training on an extractive answers dataset. Building upon this intuition, we present a single pre-trained QA system, named \textsf{MultiFormatQA}, that exploits information across 4 different QA formats to achieve strong performance across 20 different factoid and commonsense QA datasets listed in Fig.~\ref{fig:datasets:properties}. In this work, we advocate for a unifying view of QA formats by building a format-agnostic QA system. Our work leverages recent progress in text-to-text pre-trained neural models, specifically T5~\cite{raffel2019exploring} and BART~\cite{lewis2019bart}, but with a strong focus on differing QA formats. This paradigm allows unifying many NLP models, which formerly had task-specific designs, into a single text-to-text framework. Previous work uses textual prefixes to explicitly define the task associated with each input instance~\cite{raffel2019exploring,radford2019language}; often such attempts to build a single model for multiple NLP tasks \underline{underperform} the standard pre-training plus fine-tuning setup (a model per task)~\cite{raffel2019exploring}. 
Our work narrows down the scope to tasks that stay within the boundaries of QA, demonstrating that a unified text-to-text paradigm can, in fact, be successful across different QA tasks and formats. We develop a single pre-trained QA model by training text-to-text models on a set of seed QA datasets of multiple formats, taking natural text as input, without using format-specific prefixes. Our experiments show that \textsf{MultiFormatQA} can be applied as-is to different QA tasks, generalizes well to other unseen datasets (zero-shot), and with further fine-tuning achieves state-of-the-art results on many QA tasks including commonsense and factual datasets. \paragraph{Contributions.} This work advocates for a unified view of different QA formats, and for building format-agnostic QA systems. To support this view, we present \textsf{MultiFormatQA}, a single pre-trained QA system that works well on and generalizes to datasets with different formats (\S\ref{subsec:generalization}), while performing on par with state-of-the-art dedicated systems tailored to each dataset (\S\ref{subsec:union:vs:single:dataset}). Additionally, fine-tuning \textsf{MultiFormatQA} into specialized systems sets a new state of the art for 10 datasets (\S\ref{subsec:sota}), establishing it as a powerful starting point for QA research. Our findings demonstrate that crossing QA format boundaries is not only qualitatively desirable but also quantitatively beneficial. \section{Related Work} Several QA efforts have studied generalization across datasets of a \emph{single} format. For instance, in MultiQA, \citet{talmor-berant-2019-multiqa} study generalization and transfer, but only across extractive span selection datasets. Further, while they show strong leave-one-out style results, they find a single system performs substantially worse than one tuned to each dataset. In ORB, \citet{dua2019orb} propose a multi-dataset evaluation benchmark spanning extractive and abstractive formats. 
However, that study is limited to an \emph{evaluation} of systems, falling short of addressing how to build such generalized models. The MRQA shared task~\cite{fisch-etal-2019-mrqa} focuses on span-prediction datasets. Unlike all these efforts, our goal is to investigate transfer and generalization across different QA formats, as well as to build a single system that does this well. Exploiting commonality across machine learning tasks has a rich history studied under transfer learning~\cite{caruana1997multitask,clark2019bam}. \citet{mccann2018natural} and \citet{keskar2019unifying} study transfer among various NLP tasks by casting them into a single QA format---an elegant transfer learning approach but orthogonal to the goal of this work. As noted earlier, \citet{raffel2019exploring} investigate the transfer between several diverse NLP tasks (machine translation, summarization, etc). Their key contribution is a text-to-text framework, and a powerful model called T5, that makes it easier to mix multiple tasks by encoding both inputs and outputs as text. They rely on textual prefixes to explicitly define the task corresponding to each input instance. While we build upon their framework, we narrow our focus to variations of QA. This allows us to achieve strong results while avoiding reliance on any format-specific prefixes. Our models \emph{learn to infer} the format of each input question based on its content (e.g., whether the phrasing of the question demands a yes/no answer). Moreover, we are able to demonstrate generalization across QA tasks, which prior work failed to achieve presumably due to its focus on too broad a set of NLP tasks. \section{\textbf{\textsc{UnifiedQA}}\xspace: Multi-format Training} \label{sec:multiformat-training} Suppose we would like to train a unified QA model that can operate over $k$ formats $F_1, F_2, \ldots, F_k$. 
For each format $F_i$, suppose we have $\ell_i$ datasets $D^i_1, D^i_2, \ldots, D^i_{\ell_i}$ where $D^i_j = (T^i_j, E^i_j)$ includes both training and evaluation examples. In some cases, the training set $T^i_j$ may be empty or we may want to ignore it in order to treat $D^i_j$ as an `unseen', \emph{evaluation-only} dataset and assess a model's generalization to it. We use the text-to-text paradigm to convert each training question $q$ in format $F_i$ into a \emph{plain-text} input representation $\mathit{enc}_i(q)$. This conversion uses a natural encoding process that will be described shortly (\S\ref{sec:datasets:encoding}) for four common QA formats, and is easily extensible to other formats as well. We follow a simple approach of creating a mixed training pool consisting of all available training instances: $$ \tilde{T} = \bigcup_{i=1}^k \bigcup_{j=1}^{\ell_i} \Big\{ \mathit{enc}_i(q) \mid q \in T^i_j \Big\} $$ Training batches are drawn from this pooled data, $\tilde{T}$, by including each $q \in T^i_j$ with a probability proportional to $1 / |T^i_j|$. Each batch thus, on average, contains the same number of instances from each training set, regardless of its size. Similar treatments of task mixing have also been adopted by~\citet{arivazhagan2019massively} and \citet{raffel2019exploring}. As our experiments will show, our multi-format mixing approach works well. It clearly highlights the value of training on out-of-format data and confirms our intuition that there are strong ties across QA formats in terms of the underlying reasoning abilities.\footnote{A more sophisticated teaching curriculum~\cite{sachan2016easy} or approaches such as model distillation and teacher annealing~\cite{clark2019bam} are likely to further improve the performance of the resulting unified model, bolstering the strength of our advocacy for a unified view of all QA formats.
We leave their exploration to future work.} Our unified question-answering system is based on recent text-to-text frameworks, particularly T5~\cite{raffel2019exploring} and BART~\cite{lewis2019bart}. We first define a unifying encoding of the instances across various formats (\S\ref{sec:datasets:encoding}). We then introduce \textsf{MultiFormatQA} (\S\ref{subsec:unifiedQA}), a QA system trained on datasets in multiple formats, achieving new state-of-the-art results on 10 datasets and generalization to unseen datasets. \subsection{Text-to-Text Encoding} \label{sec:datasets:encoding} We convert each of our target datasets into a text-in/text-out format~\cite{raffel2019exploring,lewis2019bart,radford2019language}. The question always comes first, followed by some additional information (a context paragraph or candidate answers, or both). We use ``\script{\textbackslash n}'' separators between different parts of the input. This ensures a human-like encoding that is not overly specific to a certain format. Our unified model incorporates the following four common question-answering formats. Specific datasets within them are deferred to Section~\ref{sec:datasets}. \begin{description}[topsep=1pt,itemsep=1pt,leftmargin=0pt] \item [Extractive (EX)] questions $Q$ include a context $C$ (typically a paragraph) and require models to extract the answer as a substring from the context. In some datasets, `unanswerable' can be the correct response. \item [Abstractive (AB)] questions $Q$ require models to produce answers that are often not mere substrings of the provided context paragraph $C$. \item [Multiple-choice (MC)] questions $Q$ come with a set of candidate answers $\{A_i\}$, of which generally exactly one is correct. In some cases, they also include a context paragraph $C$. \item [Yes/No (YN)] questions $Q$ expect a `yes' or `no' answer as the response and may include a context paragraph $C$. \end{description} Table~\ref{tab:example:encodings} provides examples of the natural input and output encoding for each of these formats, where both input and output representations are raw text. There is no explicit information regarding a question being an MC question or having exactly four candidate answers. Specifically, MC questions without any context paragraph are encoded as \script{question \textbackslash n (A) c1 (B) c2 $\hdots$} where \script{c1}, \script{c2}, $\hdots$ are the candidate answers (see the example from the ARC dataset).
If the question includes a context paragraph, it is appended after the candidate answers: \script{question \textbackslash n (A) c1 (B) c2 $\hdots$ \textbackslash n paragraph}, as shown in the example from the MCTest dataset. Questions in the other three formats (EX, AB, and YN) are encoded simply as \script{question \textbackslash n paragraph}. To re-emphasize, unlike prior work \cite{raffel2019exploring}, we do not specify any task-, dataset-, or format-specific prefixes in the input representation. Whether the answer should be extracted or abstracted, and whether from the provided context paragraph or candidate answers (or the fact that these even are candidate answers), is expected to be inferred by the system. \begin{table}[t] \centering \includegraphics[scale=0.65,trim=8cm 6.3cm 0cm 2cm]{figures/encoding-examples.pdf} \caption{Example text-to-text encoding of instances.} \label{tab:example:encodings} \end{table} \subsection{\textbf{\textsc{UnifiedQA}}\xspace: The Pre-Trained Model} \label{subsec:unifiedQA} The specific pre-trained QA model we provide and use in all our experiments is trained on representative datasets for each of the 4 formats discussed earlier. We empirically chose the following 8 \emph{seed datasets} for training \textsf{MultiFormatQA},\footnote{Future references to `\emph{seed dataset}' point to the QA datasets used in this section.} based on their effectiveness in our pilot study (details deferred to Section~\ref{subsec:pair}) assessing which datasets are most valuable for out-of-format training: \begin{itemize}[noitemsep] \item EX: SQuAD 1.1, SQuAD 2.0 \item AB: NarrativeQA \item MC: RACE, ARC, OBQA, MCTest \item YN: BoolQ \end{itemize} One can easily use other combinations of formats and datasets to create variants of our \textsf{MultiFormatQA} model, or extend it as future datasets become available or new formats are introduced.
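The text-to-text encoding of \S\ref{sec:datasets:encoding} is simple enough to sketch in code. The following is our own illustrative reconstruction, not the released implementation; the function and argument names are hypothetical, and we use a literal newline character for the ``\script{\textbackslash n}'' separator:

```python
def encode(question, candidates=None, paragraph=None):
    """Sketch of the text-to-text input encoding: the question comes
    first, then "(A) c1 (B) c2 ..." for MC candidates (if any), then
    the context paragraph (if any), joined by newline separators."""
    parts = [question]
    if candidates:
        labels = "ABCDEFGHIJ"  # assumes at most 10 candidates
        parts.append(" ".join(
            f"({labels[i]}) {c}" for i, c in enumerate(candidates)))
    if paragraph:
        parts.append(paragraph)
    return "\n".join(parts)
```

Note that nothing in the encoding marks the format explicitly: an EX/AB/YN question simply omits the candidate line, matching the prefix-free design described above.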
Unless otherwise noted, we use the largest available T5 model (11B parameters) as the starting point for training our model and call the system \textsf{MultiFormatQA}. We also report results of training our system with BART$_\text{large}$, referred to as \textsc{UnifiedQA}$_\text{BART}$\ (see \S\ref{subsec:sota}). Details on the parameters of the models used are deferred to Appendix~\ref{appendix:subsec:hyperparams}. Similar to pre-trained language models, the resulting pre-trained QA model can be used as a starting point for fine-tuning on other QA datasets. \section{Formats and Datasets} \subsection{Datasets} \label{sec:datasets} We evaluate \textsf{MultiFormatQA} on 20 existing datasets that target different formats as well as various complex linguistic phenomena. Fig.~\ref{fig:datasets:properties} summarizes key properties of our datasets (whether each comes with a paragraph or answer candidates, whether the paragraph explicitly contains the answer, etc.). Most importantly, they are grouped into several formats/categories as described below. Table~\ref{tab:statitstics} gives certain statistics of these datasets. We next provide a summary enumerating these datasets, with additional details deferred to Appendix~\ref{appendix:sec:datasets}. \paragraph{Extractive QA (EX).} Among the datasets in this popular format, we adopt SQuAD 1.1~\cite{rajpurkar-etal-2016-squad}, SQuAD 2.0~\cite{rajpurkar-etal-2018-know}, NewsQA~\cite{trischler-etal-2017-newsqa}, Quoref~\cite{dasigi-etal-2019-quoref}, and ROPES~\cite{lin-etal-2019-reasoning}. \paragraph{Abstractive QA (AB).} The datasets used from this format are: NarrativeQA/NarQA~\cite{kocisky-etal-2018-narrativeqa}, the open-domain version of NaturalQuestions/NatQA~\cite{kwiatkowski-etal-2019-natural}, and DROP~\cite{dua-etal-2019-drop}. \begin{table}[t] \centering \small \resizebox{\linewidth}{!}{ \begin{tabular}{lC{0.9cm}C{0.9cm}C{1.3cm}C{0.9cm}C{0.7cm}C{0.7cm}} \toprule Dataset & Train set size & Eval.
set size & Best published & 95\% CI (\%) & Input length & Output length \\ \midrule SQuAD 1.1 & 87$k$ & 10$k$ & 95.6 & 0.4 & 136.2 & 3.0 \\ SQuAD 2.0 & 130$k$ & 11$k$ & 91.2 & 0.5 & 139.9 & 2.6 \\ NewsQA & 76$k$ & 4$k$ & 66.8 & 1.4 & 606.6 & 4.0 \\ Quoref & 22$k$ & 2$k$ & 86.1 & 1.5 & 352.7 & 1.7 \\ Quoref-CS & - & 700 & 55.4 & 3.6 & 324.1 & 2.2 \\ ROPES & 10$k$ & 1.4$k$ & 61.1 & 2.5 & 169.1 & 1.4 \\ ROPES-CS & - & 974 & 32.5 & 3.0 & 182.7 & 1.3 \\ \midrule NarQA & 65$k$ & 21$k$ & 58.9 & 0.7 & 563.6 & 6.2 \\ NatQA & 79$k$ & 3.6$k$ & 42.2 & 1.6 & 607.0 & 2.2 \\ DROP & 77$k$ & 9$k$ & 89.1 & 0.6 & 189.1 & 1.6 \\ DROP-CS & - & 947 & 54.2 & 3.2 & 206.0 & 2.1 \\ \midrule RACE & 87$k$ & 4$k$ & 89.5 & 0.9 & 317.9 & 6.9 \\ OBQA & 4$k$ & 501 & 80.0 & 3.3 & 28.7 & 3.6 \\ MCTest & 1.4$k$ & 320 & 86.5 & 3.4 & 245.4 & 4.0 \\ ARC (easy) & 2$k$ & 2$k$ & 80.0 & 1.7 & 39.4 & 3.7 \\ ARC (chal.) & 1$k$ & 1$k$ & 67.8 & 2.9 & 47.4 & 5.0 \\ CQA & 9.7$k$ & 1.2$k$ & 79.1 & 2.2 & 26.8 & 1.5 \\ WG & 40.3$k$ & 1.7$k$ & 67.5 & 2.2 & 25.2 & 3.0 \\ PIQA & 16.1$k$ & 3$k$ & 79.4 & 1.4 & 49.6 & 20.2 \\ SIQA & 33.4$k$ & 2.2$k$ & 78.0 & 1.7 & 37.3 & 4.7 \\ \midrule BoolQ & 9$k$ & 3$k$ & 91.0 & 1.0 & 105.1 & 1.0 \\ BoolQ-CS & - & 461 & 71.1 & 4.0 & 108.9 & 1.0 \\ NP-BoolQ & 10$k$ & 3$k$ & 78.4 & 1.4 & 106.2 & 1.0 \\ MultiRC & - & 312 & 91.7 & 2.6 & 293.3 & 1.0 \\ \bottomrule \end{tabular} } \caption{ Dataset Statistics. CQA, OBQA, WG, and NarQA refer to CommonsenseQA, OpenBookQA, Winogrande, and NarrativeQA, respectively. The CI column shows the \added{upper 95\% confidence interval for the evaluation set as a percentage, based on the Wilson test around the mean score listed as a percentage in the best known performance column.} Input and output representation lengths are measured in the number of tokens and averaged across the dataset. 
} \label{tab:statitstics} \end{table} \paragraph{Multiple-choice QA (MC).} We use the following MC datasets: MCTest~\cite{richardson-etal-2013-mctest}, RACE~\cite{lai-etal-2017-race}, OpenBookQA/OBQA~\cite{mihaylov-etal-2018-suit}, ARC~\cite{clark2018think,clark2016combining}, QASC~\cite{khot2019qasc}, CommonsenseQA/CQA~\cite{talmor-etal-2019-commonsenseqa}, PIQA~\cite{bisk2019piqa}, SIQA~\cite{sap2019socialiqa}, and Winogrande~\cite{sakaguchi2019winogrande}. Several of the MC datasets do not come with accompanying paragraphs (such as ARC, QASC, OBQA). For most of this work, we keep the questions as is with no additional retrieval (unless otherwise mentioned). One other variability among these datasets is their number of candidate answers. While many datasets have four candidates (see Fig.~\ref{fig:datasets:properties}), others have more. Later (in \S\ref{subsec:generalization}) we will see that our approach generalizes to datasets with different numbers of candidates, even if such questions have not been seen during training. \paragraph{Yes/No QA (YN).} The YN datasets we use are BoolQ~\cite{clark-etal-2019-boolq}, a naturally-perturbed version of this dataset, BoolQ-NP~\cite{khashabi2020naturalperturbations}, and the binary (yes/no) subset of MultiRC~\cite{khashabi-etal-2018-looking}. \paragraph{Contrast-sets.} Additionally, we use \emph{contrast-sets}~\cite{gardner2020evaluating} for several of our datasets (denoted with ``CS''): BoolQ-CS, ROPES-CS, Quoref-CS, DROP-CS. These evaluation sets are expert-generated perturbations that deviate from the patterns common in the original dataset.
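When these datasets are pooled for training as in \S\ref{sec:multiformat-training}, each instance is included with probability proportional to $1/|T^i_j|$. A minimal sketch of such a sampler follows; this is our own illustration under the stated proportionality assumption, and the names are hypothetical:

```python
import random

def sample_batch(training_sets, batch_size, rng=None):
    """Draw a batch so each instance q in training set T is picked with
    probability proportional to 1/|T|: choose a training set uniformly
    at random, then an instance uniformly within the chosen set."""
    rng = rng or random.Random()
    batch = []
    for _ in range(batch_size):
        t = rng.choice(training_sets)  # uniform over training sets
        batch.append(rng.choice(t))    # uniform within the chosen set
    return batch
```

Choosing a set uniformly and then an instance uniformly within it gives every instance in a set $T$ probability proportional to $1/|T|$, so in expectation each training set contributes equally to a batch regardless of its size, matching the equal-representation property described in \S\ref{sec:multiformat-training}.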
\begin{table*}[t] \centering \includegraphics[scale=0.64,trim=7.5cm 17.6cm 9.0cm 2cm, clip=false]{figures/squad11_pairwise_summary_table3.pdf} \includegraphics[scale=0.68,trim=7.5cm 17.6cm 8.1cm 2cm, clip=false]{figures/race_pairwise_summary_table3.pdf} \includegraphics[scale=0.70,trim=8.7cm 17.6cm 7.1cm 1.5cm, clip=false]{figures/boolq_pairwise_summary_table4.pdf} \includegraphics[scale=0.70,trim=10cm 17.6cm 9.1cm 1.5cm, clip=false]{figures/nqa_pairwise_summary_table3.pdf} \caption{ Pilot study showing that out-of-format training can help improve performance. Each table compares training on just the anchor dataset (e.g., BoolQ in the top-left table) with training also on an out-of-format dataset denoted `X'. Evaluation is on the anchor dataset as well as unseen datasets of that format. The last row identifies the out-of-format dataset that helped most on each evaluation dataset. All results are based on the ``small'' size T5 model. Color denotes QA format (see Fig.~\ref{fig:datasets:properties}). } \label{tab:pairwise_table} \end{table*} \subsection{Evaluation Metrics for Textual Output} We evaluate each dataset using the metric used most often for it in prior work. For the EX format, this is the F1 score of the extracted span relative to the gold label. For the AB format, we use the ROUGE-L metric~\cite{lin2006information,min-etal-2019-discrete,nishida-etal-2019-multi}. For NatQA we use the exact-match metric, following \citet{min2020ambigqa}. For the MC format, we match the generated text with the closest answer candidate based on token overlap and compute the accuracy. For the YN format, we follow \citet{clark-etal-2019-boolq} to measure whether the generated output matches the correct `yes' or `no' label.
In rare cases where the output is longer than one word (e.g., `yes it is'), we check if it contains the correct label but not the incorrect one.\footnote{The evaluation code \added{is available at the URL in Footnote~\ref{footnote:code}.}} \section{Pilot Study: Can Out-of-Format Training Help?} \label{subsec:pair} We first answer the question: \emph{Is the broad idea of benefiting from out-of-format training even viable?} For instance, is our intuition correct that an MC dataset can, in practice, benefit from training on an EX dataset? Before discussing our main experimental results, we briefly report on a pilot study that assesses the following basic question: Given a training set $T^i_1$ (the \emph{anchor} dataset) of QA format $F_i$, is there an out-of-format training set $T^j_1$ of format $F_j$ such that training jointly on $T^i_1 \cup T^j_1$ improves performance relative to training only on $T^i_1$? To this end, we evaluate both on the matching evaluation set $E^i_1$ as well as on `unseen' data $E^i_2, E^i_3, \ldots$ of the same format. The results are summarized in Table~\ref{tab:pairwise_table}. The two rows in each individual table correspond to training on $T^i_1$ (the \emph{anchor} dataset) and on $T^i_1 \cup X$, where $X$ is an out-of-format dataset corresponding to $T^j_1$ above. The columns represent various evaluation sets of format $F_i$. For each column, `$X = \ldots$' at the very bottom indicates the out-of-format dataset $X$ that was the most helpful in improving performance on the evaluation set in that column.\footnote{Appendix~\ref{appendix:subsec:pairwise} reports extended results, including the performance with various choices of $X$.} Consider the case of the anchor set $T^i_1$ being BoolQ and the evaluation set being NP-BoolQ, both of format YN. Here, including out-of-format training data $X{=}$SQuAD2 boosts performance from 51\% to as much as 59\%. 
The gain may be less in other cases, but across all anchor and evaluation datasets, we generally observe that there is at least one out-of-format training set whose inclusion improves performance. This pilot study thus provides a proof of concept that out-of-format training can indeed help a QA model in nearly every case. Of course, this study only shows the existence of such an out-of-format dataset, rather than providing a single unified model. Nevertheless, it helps identify \emph{representative training sets} from each format that were most helpful. As alluded to earlier, we used this empirical data to guide which training sets to include when building \textsf{MultiFormatQA} in Section~\ref{subsec:unifiedQA}. \begin{figure}[ht] \centering \includegraphics[scale=0.50,trim=4.65cm 4.6cm 2cm 2.5cm]{figures/bipartite-graph2.pdf} \caption{Bipartite graph showing the value of various datasets. The datasets on the left were used for training and on the right for evaluation. The wider the edge from a dataset $\ell$ (on the left) to a dataset $r$ (on the right), the higher the contribution of adding the out-of-format dataset $\ell$ to the training set of questions in $r$'s format. } \label{fig:bipartite-fig} \end{figure} \added{ The experimental results from this case study are summarized in the aggregated plot shown in Fig.~\ref{fig:bipartite-fig}. In this bipartite graph, the datasets used for training are on the left-hand side and the evaluation datasets are on the right-hand side. The weight of each edge, $w(\ell, r)$, indicates the contribution of a dataset $\ell$ when used for training jointly with an anchor dataset $d$ and evaluated on $r$ ($d$ and $r$ have the same format). Specifically, \\ \indent $w(\ell, r) = \mathrm{avg}_{d} \Big[ S\big(\ell \cup d; r\big) - S\big(d; r\big) \Big],$ \\ where $S(d; r)$ is the score achieved on $r$ after training on $d$.
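For concreteness, the edge weight just defined can be computed from a table of scores as follows. This is our own sketch; the \script{scores} mapping (from a set of training datasets plus an evaluation dataset to a score) is a hypothetical data structure:

```python
from statistics import mean

def edge_weight(scores, left, right, anchors):
    """w(left, right) = average over anchor datasets d (same format as
    `right`) of S(left + d; right) - S(d; right), where `scores` maps a
    (frozenset of training datasets, eval dataset) pair to a score."""
    return mean(
        scores[(frozenset({left, d}), right)]
        - scores[(frozenset({d}), right)]
        for d in anchors
    )
```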
Since we only focus on \emph{gains} from out-of-format training, we drop the edges that are negative or between two datasets of the same format. As expected, there are strong connections between the AB and EX datasets in Fig.~\ref{fig:bipartite-fig} since their definitions are quite similar. Apart from the edge weight, the overall width of a dataset $\ell$ on the left also depicts how much it contributes to out-of-format datasets. E.g., NQA (NarrativeQA) is the most helpful dataset and even helps multiple formats. Similarly, our extractive datasets (SQuAD 1.1, SQuAD 2, and NewsQA) are also relatively more helpful. While large datasets generally appear to help, RACE, another large-scale dataset, does not help that much. The least helpful dataset in the mix is BoolQ, which focuses on yes/no questions. In a similar vein, the wider the dataset on the right-hand side, the more it can benefit from out-of-format datasets. Among these beneficiary datasets, all four formats are equally represented. } \section{Experimental Results} \label{sec:experiments} We now discuss our main experimental results, evaluating \textsf{MultiFormatQA} on seed datasets (used for training the system) as well as unseen datasets. \begin{table*}[tb] \centering \includegraphics[scale=0.66,trim=1.95cm 15.5cm 2cm 2cm]{figures/generalization-table4.pdf} \caption{ Generalization to unseen datasets: Multi-format training (\textsf{MultiFormatQA}) often outperforms models trained the same way but solely on other in-format datasets (e.g., \textsf{MultiFormatQA}[EX], which is trained on all extractive training sets of \textsf{MultiFormatQA}). When averaged across all evaluation datasets (last column), \textsf{MultiFormatQA} shows strong generalization performance across all formats. Notably, the ``Previous best'' models (last row) were trained on the target dataset's training data, but are even then outperformed by \textsf{MultiFormatQA} (which has \underline{never seen these datasets} during training) on the YN tasks.
} \label{tab:generalization} \end{table*} \subsection{\textsf{MultiFormatQA} vs.\ 8 Dedicated Models} \label{subsec:union:vs:single:dataset} Is \textsf{MultiFormatQA}, a single pre-trained multi-format QA system, as good as dedicated systems trained for individual datasets? We emphasize that the answer to this question is not as simple as it may seem, since earlier works have observed that a system addressing multiple tasks often \emph{underperforms} a focused system~\cite{raffel2019exploring}. \begin{figure}[t] \centering \includegraphics[scale=0.41,trim=0.2cm 1.1cm 0cm 0.5cm]{figures/union_vs_dataset-2.pdf} \caption{ \textsf{MultiFormatQA} is on par with, and often outperforms, 9 different equally-sized T5-based systems tailored to individual datasets. The figure contains separate models for each of the two subsets of the ARC and Regents datasets. } \label{fig:union:vs:single:dataset} \end{figure} Fig.~\ref{fig:union:vs:single:dataset} summarizes the results of the relevant experiment. The gray bars belong to \textsf{MultiFormatQA}\ (a single system for multiple datasets of different formats). The colored bars are different T5-based systems tailored to individual datasets (a different system for each dataset). The results show that \textsf{MultiFormatQA} performs almost as well as the individual T5 models targeted to each dataset. In some cases \textsf{MultiFormatQA} performs even better than the single-dataset experts (e.g., on OBQA or NQA). On average (last column) \textsf{MultiFormatQA} clearly outperforms the ensemble of dataset/format-specific systems. \textsf{MultiFormatQA} thus offers flexibility across multiple QA formats while compromising almost nothing compared to dataset-specific experts.
\begin{table*}[t] \centering \includegraphics[scale=0.66, trim=1.9cm 14.55cm 1.8cm 1.85cm,clip=true]{figures/sota10.pdf} \includegraphics[scale=0.67, trim=1.0cm 10.30cm 3cm 6.9cm,clip=true]{figures/sota10.pdf} \caption{ Fine-tuning \textsf{MultiFormatQA} (last row) results in new state-of-the-art performance on 11 datasets. Further, it consistently improves upon fine-tuned T5 (2nd last row) by a margin ranging from 1\% for CommonsenseQA (CQA) to as much as 13\% for ARC-challenge. `(w/ IR)' denotes that relevant information is retrieved and appended as context sentences in the input encoding. Datasets marked with * are used in \textsf{MultiFormatQA}'s original training. } \label{tab:fine-tuning} \end{table*} \subsection{Generalization to Unseen Datasets} \label{subsec:generalization} We now explore whether \textsf{MultiFormatQA} generalizes well to other, unseen datasets. Table~\ref{tab:generalization} summarizes the results of experiments where we evaluate various models on datasets that were not used to train them. It compares \textsf{MultiFormatQA}\ (trained on multiple formats) with training on various datasets of a \emph{single} format (e.g., \textsf{MultiFormatQA}[EX], built by training the model on only extractive datasets). The first few rows of the table show T5 models trained for individual formats, followed by \textsf{MultiFormatQA}. For completeness, we include the highest previous scores for each dataset; one must be careful when reading these numbers, as the best previous numbers follow the fully \emph{supervised} protocol (for NewsQA~\cite{zhang2020retrospective}, Quoref~\cite{segal2019simple}, DROP~\cite{lan2019albert}, ROPES~\cite{lin-etal-2019-reasoning}, QASC~\cite{khot2019qasc}, CommonsenseQA~\cite{Zhu2020FreeLB} and x-CS datasets~\cite{gardner2020evaluating}). We make three key observations: (1) On average (last column), \textsf{MultiFormatQA} shows much stronger generalization across a wide range of datasets.
(2) On 9 (out of 12) datasets, \textsf{MultiFormatQA} shows better generalization than any single-format expert. For example, while the system is trained on multiple-choice questions with 4 candidate answers, it works quite well on datasets with more than 4 candidate answers (QASC and CommonsenseQA have 8 and 5 candidate answers per question, respectively). (3) Single-format experts are better at generalization only when the source and target datasets are very similar (for instance SQuAD and Quoref). \begin{table*}[t] \centering \includegraphics[scale=0.785,trim=3.8cm 13.8cm 2cm 1.4cm]{figures/leave-one-out-4.pdf} \caption{ The results of a leave-one-out ablation. The first row indicates the performance of \textsf{MultiFormatQA} on each dataset it was trained on. The rest of the rows exclude one dataset at a time. The rows are sorted based on the last column: datasets with the biggest contributions appear first. The \redtext{red highlights} indicate the top 3 performance drops for each column. } \label{tab:leave:one:out} \end{table*} \subsection{State-of-the-Art via Simple Fine-tuning} \label{subsec:sota} Fine-tuning of pre-trained language models has become the standard paradigm for building dataset-specific state-of-the-art systems~\cite{devlin-etal-2019-bert,liu2019roberta}. The question we address here is: when it comes to QA, is there value in using \textsf{MultiFormatQA} as a starting point for fine-tuning, as opposed to a vanilla language model that has not seen other QA datasets before? To address this question, we fine-tune each of \textsf{MultiFormatQA}, T5, and BART on several datasets by selecting the best checkpoint on the dev set, and evaluating on the test set. Table~\ref{tab:fine-tuning} summarizes the results of the experiments. The table shows two variants: \textsf{MultiFormatQA}$_\text{T5}$\ and \textsf{MultiFormatQA}$_\text{BART}$. All results are based on the 11B version of T5.
The columns indicate the evaluation on the test set corresponding to the data that was used for training. For each dataset, the first line of the table reports the best previously published work. For several MC datasets that do not come with evidence paragraphs, we include two variants: one where we use them as-is and another that uses paragraphs fetched via an Information Retrieval (IR) system as additional evidence, indicated with ``w/ IR'' tags. We use the same IR sentences as used by the baselines: Aristo corpus for ARC and OBQA datasets~\cite{clark2019f}, and 2-step IR for QASC~\cite{khot2019qasc}. For NatQA, following \cite{min2020ambigqa}, we use the DPR retrieval engine~\cite{karpukhin2020dense} to augment each question with additional paragraphs. \nocite{lan2019albert} \nocite{clark2019f} \nocite{Mitra2020HowAK} \nocite{banerjee2020knowledge} \nocite{Zhu2020FreeLB} \nocite{min2020ambigqa} \begin{comment} Additionally, we show the best published scores on each dataset: ALBERT~\cite{lan2019albert} (on RACE), RoBERTa~\cite{clark2019f} (on OBQA, ARC, Winogrande/WG, PIQA, SIQA~\cite{Mitra2020HowAK}), KF+SIR~\cite{banerjee2020knowledge} (on OBQA and QASC), FreeLB+RoBERTa~\cite{Zhu2020FreeLB} (on ARC-easy and CommonsenseQA), and DPR+BART~\cite{min2020ambigqa}. \end{comment} We see that fine-tuning on \textsf{MultiFormatQA} consistently dominates fine-tuning on T5 and BART. It also dominates the best previous scores on the datasets. Intuitively, since \textsf{MultiFormatQA} has seen different formats, it should be well positioned to achieve higher scores after a little fine-tuning, compared to fine-tuning on a vanilla T5 or BART model. This could be especially effective when a user has limited training data for a target QA task (also shown in Appendix~\ref{appendix:subsec:winogrande}). This also highlights that the effectiveness of cross-format training is not limited only to T5, but is rather a general trend for text-to-text architectures.
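The fine-tuning recipe described above (train, keep the checkpoint that scores best on the dev set, report that checkpoint's test score) is model-agnostic. In the sketch below, `train_one_epoch` and `evaluate` are placeholder callables standing in for the actual T5/BART training and evaluation code; they are assumptions for illustration, not part of the paper:

```python
def finetune_and_select(model, train_one_epoch, evaluate,
                        train_data, dev_data, test_data, num_epochs=3):
    """Fine-tune for several epochs, keep the checkpoint with the best
    dev-set score, and report that checkpoint's test score."""
    best_dev, best_model = float("-inf"), model
    for _ in range(num_epochs):
        model = train_one_epoch(model, train_data)  # returns updated model
        dev_score = evaluate(model, dev_data)
        if dev_score > best_dev:
            best_dev, best_model = dev_score, model
    return best_model, best_dev, evaluate(best_model, test_data)
```

The same loop applies whether the starting `model` is a vanilla language model or \textsf{MultiFormatQA}; only the initialization differs, which is exactly the comparison made in the table.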
\subsection{Ablation: Training Set Contributions} \label{subsec:leave-one-out} We now perform a leave-one-out experiment to better understand the contribution of each seed dataset to \textsf{MultiFormatQA}. We take the system from \S\ref{subsec:unifiedQA} and assess how strong the model is when individual seed training datasets are dropped from the union. The result of this experiment is summarized in Table~\ref{tab:leave:one:out}. It compares the performance of the full \textsf{MultiFormatQA} (the first row) with ablated variants that exclude one seed dataset at a time. The rows are sorted based on the last column: datasets with higher contributions appear first. Looking at the first few rows of the table, BoolQ, SQuAD 2.0, OBQA and NarQA are the top four contributing datasets, each with a different format. SQuAD 1.1 has the least importance, presumably because it is mostly covered by SQuAD 2.0. This study suggests that in order to build an effective unified QA system, it suffices to have a relatively small set of datasets, as long as the set includes representatives from each format. \added{ \section{Discussion} The key motivation for this work is the observation that nearly all prior efforts on QA research were limited to the boundaries defined by narrow \emph{formats}. A \emph{format-specific} design would not generalize across QA datasets with slightly different definitions (e.g., a model built for SQuAD would not work for RACE). Additionally, such a design would prevent us from benefiting from the labeled data available in other formats. We challenge this view by advocating for approaches that combine seemingly different datasets. We believe that developing QA systems targeted to a specific format is a conceptual barrier for progress in the field. \paragraph{Factors affecting generalization.} Format is not the only factor affecting generalization across datasets.
We additionally studied the value of other factors, including \emph{dataset size} and \emph{domain} (vocabulary, topic, and style), in improving generalization. We observed that larger datasets often help with generalization, but not always (\S\ref{subsec:pair}); e.g., RACE and OBQA show similar benefits (Fig.~\ref{fig:bipartite-fig}), even though RACE is much larger than OBQA. We observed a similar phenomenon with domain: similar domains help with transfer, but that is not always the case. For example, while BoolQ questions, similar to SQuAD, are accompanied by Wiki paragraphs, they barely benefit each other. Overall, the factors affecting generalization are not well understood, leaving room for future investigations. \paragraph{Unifying QA formats and text-to-text models.} While \textsf{MultiFormatQA} is built using existing text-to-text models~\cite{gpt2,raffel2019exploring}, we emphasize that the choice of tasks for multi-task learning plays a crucial role in achieving successful results. Previous studies \cite{raffel2019exploring} did \emph{not} observe gains when mixing {\it tasks} that are very different. The key intuition is that a more coherent choice of {\it tasks} is more likely to succeed. Further, focusing on a coherent space of QA tasks/formats allows us to simplify the input by not requiring ``prefixes'' to explicitly define tasks/formats. } \section{Conclusion} The question-answering community has fruitfully explored the design of strong models, but while staying within the boundaries of individual QA formats. We argued that such boundaries are artificial and can even limit the performance of systems, because the desired reasoning abilities being taught and probed are not tied to specific formats. Training data in one format should, in principle, help QA systems perform better even on questions in another format.
With this intuition in mind, we presented \textsf{MultiFormatQA}, a single pre-trained QA system based on the text-to-text paradigm, seeking to bring unification across four common QA formats. We showed that even with its simple multi-format training methodology, \textsf{MultiFormatQA} achieves performance on par with 8 dataset-specific expert models (\S\ref{subsec:union:vs:single:dataset}), while also generalizing well to many unseen datasets of seen formats (\S\ref{subsec:generalization}). At the same time, we demonstrated that \textsf{MultiFormatQA} is a strong starting point for building QA systems: it can achieve state-of-the-art performance by simply fine-tuning on target datasets (\S\ref{subsec:sota}). We hope this effort will inspire a future line of work in the QA and NLP communities, moving towards more general and broader system designs. We leave extensions of \textsf{MultiFormatQA} to other formats, such as direct-answer questions~\cite{Roberts2020HowMK}, as a promising avenue for future work. \subsection*{Acknowledgments} The authors would like to thank Collin Raffel, Adam Roberts, and Nicholas Lourie for their help with the T5 framework and for providing feedback on an earlier version of this work. The authors would like to acknowledge grants by ONR N00014-18-1-2826 and DARPA N66001-19-2-403, and gifts from the Sloan Foundation and the Allen Institute for AI. Moreover, the authors would like to thank members of the Allen Institute for AI, UW-NLP, and the H2Lab at the University of Washington for their valuable feedback and comments. TPU machines for conducting experiments were provided by Google.
\section{Introduction} In this paper an attempt is made to describe gravitational phenomena using a vector field approximation. Some attempts to describe gravity using vector models were made earlier \cite{1}; however, a number of difficulties arise in this approach. The main problems are: 1) the absence of the deflection of light in the gravitational field; 2) an incorrect value of the anomalous precession of Mercury's perihelion; 3) the problem of the sign of the gravitational energy \cite{2}. In this work the author tries to solve the mentioned problems. In particular, the correct value of the anomalous precession of Mercury's perihelion is obtained using the vector gravitational field Lagrangian up to second-order terms. To describe the deflection of light, a generalization of this Lagrangian to the case of ultrarelativistic velocities has been used; as a result, the obtained value for the deflection of light near the Sun coincides with the experimental one. The components giving a negative contribution to the stress-energy tensor of the gravitational field are excluded by imposing a specific relativistically invariant condition. An effective geometrization of the given theory is also carried out. \section{General model} We associate with the gravitational field the 4-potential $A^i=(\varphi, c\vec A)$, where $\varphi$ is the usual scalar potential, $\vec A$ is the vector potential, and $c$ is the speed of light. The Lagrangian of the gravitational field, including matter, has the form: \begin{equation}\label{1} L=-A_ij^i +\frac{1}{8\pi\gamma} \frac{\partial A_i}{\partial x^k}\frac{\partial A^i}{\partial x_k} \end{equation} where $\gamma$ is the gravitational constant, $j^i=\mu\frac{1}{c}\frac{dx^i}{dt}$ is the 4-vector of the mass current density, and $\mu$ is the mass density of the bodies. The first term describes the interaction of the field and matter; the second describes the free field.
Note that the second term is written without assuming the Lorenz condition; note also the plus sign in front of it, in contrast to electrodynamics. In electrodynamics the scalar Lorenz condition, together with the transversality condition, excludes the negative contribution of the scalar component to the energy-momentum tensor, as well as the contribution of the third (longitudinal) component. In the case of gravitation a different condition must be used, one that excludes the negative contribution of the vector components to the energy-momentum tensor. The relativistically invariant condition for the vector field can be written in 4-form: \begin{equation}\label{2} \biggl(\frac{\partial A^i}{\partial t}\frac{\partial A_i}{\partial t} - \frac{\partial A^i}{\partial x^i}\frac{\partial A_i}{\partial x_i}\biggr)\delta^i=0 \end{equation} where $\delta^i$ is a unit vector. For $i=0$ there is no condition on the scalar component. For $i=1,2,3$ the condition has the following form: \begin{equation}\label{3} \biggl(\frac{d \vec A}{dt}\biggr)^2-(div\vec A)^2=0 \end{equation} This condition can be introduced into the theory by the method of Lagrange multipliers. From the action (\ref{1}) one can obtain the system of gravitational field equations: \begin{equation}\label{4} \frac{\partial^2A^i}{\partial x_k\partial x^k}=4\pi\gamma j^i \end{equation} To obtain gravitational field equations similar to Maxwell's equations, we choose the Lagrangian in the following form: \begin{equation}\label{5} L=-A_ij^i+\frac{1}{16\pi\gamma}F_{ik}F^{ik}+\frac{1}{8\pi\gamma}\biggl(\frac{\partial A^k} {\partial x^k}\biggr)^2 \end{equation} which differs from (\ref{1}) by an inessential divergence. Here $F_{ik}=\frac{\partial A_k}{\partial x^i}- \frac{\partial A_i}{\partial x^k}$ is the antisymmetric tensor of the gravitational field.
As a result we obtain the gravitational field equations: \begin{equation}\label{6} \frac{\partial F^{ik}}{\partial x^k}+\frac{\partial^2 A^k} {\partial x^{k}\partial x^i}=4\pi\gamma j^i \end{equation} In the stationary case, equation (\ref{6}) for $i=0$ takes the three-dimensional form: \begin{equation}\label{7} \triangle\varphi=4\pi\gamma\mu \end{equation} The solution of (\ref{7}) has the form: \begin{equation}\label{8} \varphi=-\gamma\int\frac{\mu}{r}dV \end{equation} The potential of a single particle of mass $m$ is $\varphi=-\frac{\gamma m}{r}$. Consequently, the force acting in this field on another particle of mass $m^{\prime}$ is \begin{equation}\label{9} F=-\frac{\gamma m m^{\prime}}{r^2} \end{equation} Equation (\ref{9}) is the well-known Newton law of gravity. Because of the stationarity, from (\ref{3}) we have $div\vec A=0$. Then the equation for the vector potential can be written as \begin{equation} \label{Eq3} \triangle\vec A=4\pi\gamma\vec j. \end{equation} From here we obtain \begin{equation} \label{Eq4} \vec A= -\gamma\int\frac{\vec j}{r}dV. \end{equation} This field can be called cyclic. The induction of the field is \begin{equation} \label{Eq5} \vec C=rot\vec A= -\gamma\int\frac{[\vec j\vec r]}{r^3}dV=-\gamma \frac{[\vec p\vec r]}{r^3} \end{equation} where $\vec p$ is the momentum of a particle. Using the angular momentum $\vec M$ we can write $\vec A$ and $\vec C$ as \begin{equation} \label{Eq6} \vec A=-\gamma\frac{[\vec M\vec r]}{r^3}, \ \ \ \ \ \vec C=-\gamma\frac{3\vec n(\vec M\vec n)-\vec M}{r^3}, \end{equation} where $\vec n$ is a unit vector in the $\vec r$-direction. Thus two moving particles experience, besides the gravitational attraction, a cyclic force. The latter can be attractive or repulsive, depending on the mutual direction of the velocities of the particles. Note that the effects due to the vector potential are small; their values are almost inaccessible to experimental registration.
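The dipole-like structure of (\ref{Eq6}) is straightforward to evaluate numerically. The sketch below simply implements the induction formula in SI units; the point-source setup and the sample points are illustrative assumptions, not values from the text:

```python
import numpy as np

GAMMA = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def cyclic_induction(M_vec, r_vec):
    """Induction C = -gamma * (3 n (M·n) - M) / r^3 of the cyclic field
    of a point source with angular momentum M_vec, as in Eq. (Eq6)."""
    r = np.linalg.norm(r_vec)
    n = r_vec / r
    return -GAMMA * (3.0 * n * np.dot(M_vec, n) - M_vec) / r**3

# On the axis (n parallel to M) the field is -2*gamma*M/r^3;
# in the equatorial plane (n perpendicular to M) it is +gamma*M/r^3.
```

This reproduces the familiar dipole pattern: the field on the axis is twice as strong as, and opposite in sign to, the field in the equatorial plane, which is the source of the velocity-dependent attraction or repulsion mentioned above.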
\section{Gravitation experiments} We calculate the correction to the planetary perihelion and the angle of deflection of light in the gravitational field within the framework of the proposed approach. We first find the form of the Lagrange function of a particle, taking the gravitational field into account, in the second approximation. The Lagrange function of a body in an external gravitational field has the form: \begin{equation} \label{ex0} L=-mc^2\sqrt{(1-\frac{v^2}{c^2})}-m\varphi+m\vec v \vec A \end{equation} where $\varphi$ is the scalar potential of the gravitational field and $\vec A$ is the vector potential of the field which, by analogy with the magnetic field, may be called cyclic. Starting from the expressions for the retarded potentials, which follow from the solution of equations (\ref{4}), \begin{equation} \label{ex1} \varphi=-\gamma\int\frac{\rho_{t-R/c}}{R}dV, \ \ \ \vec A=-\gamma\int\frac{\vec j_{t-R/c}}{R}dV, \end{equation} and expanding the scalar potential in a series up to terms of second order, and the vector potential up to terms of first order \cite{3}, we find the Lagrange function in the second approximation. We consider this function for a system of two particles, excluding from it the motion of the system as a whole: \begin{equation} \label{ex2} L=T +\frac{\gamma m_1 m_2}{r}+\frac{\gamma m_1 m_2 v^2}{c^2 r} \end{equation} The energy of the system can be written as: \begin{equation} \label{ex3} E=E_0-\frac{\gamma M m}{r}-\frac{\gamma MJ^2}{mc^2r^3}=E_0-V \end{equation} where the velocity $v=r\frac{d\psi}{dt}$ is expressed through the angular momentum $J=mr^2\frac{d\psi}{dt}$, $\psi$ is the polar angle, and $M=m_1$, $m=m_2$. The calculations of the perihelion correction and of the angle of deflection of light in the gravitational field are conveniently carried out using the Runge--Lenz vector. The Runge--Lenz vector was first applied to the calculation of the corrections of the General theory of relativity in \cite{4}.
\begin{equation} \label{ex4} \vec X=\vec v\times\vec J-\gamma Mm\vec e_r \end{equation} where $\vec e_r$ is a unit vector in the $r$-direction. The time derivative of the Runge--Lenz vector is: \begin{equation} \label{ex5} \frac{d\vec X}{dt}=(r^2\frac{\partial V}{\partial r}-\gamma Mm)\frac{d\vec e_r}{dt}= (\frac{3\gamma MJ^2}{mr^2c^2})\frac{d\psi}{dt}\vec e_{\psi} \end{equation} The direction of $\vec X$ changes with the angular velocity: \begin{equation} \label{ex6} \vec\omega=\frac{\vec X\times\vec{\dot X}}{\vec X^2}= (\frac{3\gamma MJ^2}{mr^2c^2X^2})\frac{d\psi}{dt}\vec X\times\vec e_{\psi} \end{equation} Its total change as the particle moves from $\psi_1$ to $\psi_2$ (it is assumed that this change is small and that the vector $\vec X$ is initially oriented toward $\psi=0$) is: \begin{equation} \label{ex7} \Delta\alpha=\int\limits^{\psi_2}_{\psi_1}\omega dt=\frac{3\gamma MJ^2}{mc^2} \int\limits^{\psi_2}_{\psi_1}\frac{\cos\psi d\psi}{Xr^2} \end{equation} When $\vec X$ is constant and oriented toward $\psi=0$ we have \begin{equation} \label{ex8} \vec X\vec r=Xr\cos\psi=J^2-\gamma Mmr \end{equation} From the unperturbed orbit (\ref{ex8}) we express $r$ and substitute it into (\ref{ex7}). For bound orbits ($m\neq 0$) with eccentricity $e=A/M$ and semi-major axis $a=J^2/\gamma Mm^2(1-e^2)$, we find the perihelion precession: \begin{eqnarray} \label{ex9} \Delta\alpha=\frac{3\gamma Mm}{c^2J^2} \int\limits^{2\pi}_{0}\frac{(X\cos\psi+\gamma Mm)^2}{X}\cos\psi d\psi= \nonumber \\ =\frac{6\pi\gamma^2m^2M^2}{c^2J^2}=\frac{6\pi\gamma M}{c^2a(1-e^2)} \ \ \ \ \ \end{eqnarray} The resulting perihelion shift for Mercury is $\Delta\alpha=43^{\prime\prime}$. To calculate the deflection of light in the gravitational field, the Lagrangian (\ref{ex2}) should be written without the assumption of small velocities, so that it can be used in the ultrarelativistic case.
A similar Lagrangian, up to second-order terms, was found in \cite{5} for ultrarelativistic particles in the electromagnetic field. For the gravitational field this Lagrangian can be written as \begin{eqnarray} \label{ex10} L=\frac{m_1 m_2}{r_{21}}[\frac{f(\eta_1^2)+f(\eta_2^2)}{2}\beta_1\beta_2+ \frac{h(\eta_1^2)+h(\eta_2^2)}{2}(\beta_1r_{21})(\beta_2r_{21})+1] \end{eqnarray} where $\eta^2_a=(r_{ab}\times\beta_a)^2$, $\beta_a=v_a/c$. The functions $f$ and $h$ are defined as \cite{5} \begin{equation} \label{ex11} f(x) = \frac{1}{1+\sqrt{1-x}}\simeq\frac{1}{2}+\frac{1}{8}x+... \end{equation} \begin{equation} \label{ex12} h(x) = \frac{f(x)}{\sqrt{1-x}}\simeq\frac{1}{2}+\frac{3}{8}x+... \end{equation} The expression for the energy of particles moving with the speed of light (photons) takes the form (\ref{ex3}), but without the Newtonian interaction. The last term in (\ref{ex3}) can then be written as $\frac{\gamma MJ^2}{\varepsilon r^3}$, where $\varepsilon$ is the photon energy. Therefore for an unbound orbit with photon mass $m=0$ we have: \begin{equation} \label{ex13} \Delta\alpha=\frac{3\gamma M\varepsilon}{c^4J^2} \int\limits^{\pi/2}_{-\pi/2}X\cos^3\psi d\psi=\frac{4\gamma M}{c^2b} \end{equation} where $b=\frac{\varepsilon J^2}{Ac^2}$ is the impact parameter. Therefore, for a ray grazing the edge of the Sun, $\Delta\alpha=1.75^{\prime\prime}$. The results obtained here for the perihelion shift and the angle of deflection of light within the framework of the vector theory of the gravitational field agree with the corresponding results of the General theory of relativity \cite{2,3} and are confirmed experimentally \cite{1}. Note also that in the given theory, just as in General relativity, the so-called redshift effect exists, which appears when photons recede from massive objects. \section{Effective geometrization} To describe all the basic gravitational effects it is enough to geometrize the Lagrangian (\ref{ex2}).
We rewrite this Lagrangian in the following form: \begin{equation} \label{ef1} L=-mc^2(1-v^2/c^2)^{1/2}-m\varphi-m\varphi v^2/c^2 \end{equation} In the gravitational theory the Lagrangian from which the geodesic equations are derived is written as \begin{equation} \label{ef2} L = -mc^2\biggl(-g_{ik}\frac{dx^i}{dt}\frac{dx^k}{dt}\biggr)^{1/2} \end{equation} We write the metric tensor $g_{ik}$ in the form $g_{ik}=g^{0}_{ik}+h_{ik}$, where $g^{0}_{ik}$ is the metric of Minkowski spacetime and $h_{ik}$ are the corrections describing the gravitational field. Then the Lagrangian takes the form \begin{equation} \label{ef3} L = -mc^2(1-v^2/c^2-h_{00}-2h_{0j}v^j-h_{jk}v^jv^k)^{1/2} \end{equation} where $j,k = 1,2,3$. By expanding the expression under the square root and comparing (\ref{ef1}) and (\ref{ef3}), the metric $g_{ik}$ can be found up to second-order terms, which corresponds to the post-Newtonian approximation: \begin{eqnarray} \label{ef4} g_{00}=-1-2\varphi \nonumber \\ g_{\alpha\alpha}=1-2\varphi \nonumber \\ g_{0\alpha}=0. \nonumber \end{eqnarray} The anomalous precession of Mercury's perihelion and the deflection of light can be found by solving the Hamilton--Jacobi equation $$g^{ik}\frac{\partial S}{\partial x^i}\frac{\partial S}{\partial x^k}-m^2c^2=0.$$ As a result, the obtained values coincide with the experimental ones \cite{1}. \section{Conclusion} In this article a gravitational theory has been formulated in the framework of the special theory of relativity. In analogy with the electromagnetic interaction, the gravitational interaction is described by a vector 4-potential. A new field appears in this theory, conjugate to the gravitational field and resembling the magnetic field in its description. The field is generated by the momenta, the angular momenta and the spins of particles. The anomalous precession of Mercury's perihelion and the deflection of light near the Sun calculated in this article coincide with the experimental values.
This article does not concern the cosmological and cosmogonical models that appear in the framework of the theory of the vector gravitational field; this question is a subject of further study.
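As a numerical sanity check of the two quoted predictions, the closed-form results (\ref{ex9}) and (\ref{ex13}) can be evaluated directly. The values of the gravitational constant, the solar mass, Mercury's orbital elements and the solar radius below are standard reference numbers, not taken from this article:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg
RAD2ARCSEC = 180.0 / math.pi * 3600.0

# Mercury: semi-major axis, eccentricity, orbital period
a, e, T = 5.791e10, 0.2056, 87.97 * 86400.0

# Eq. (ex9): perihelion advance per orbit, accumulated over a century
per_orbit = 6.0 * math.pi * G * M_sun / (c**2 * a * (1.0 - e**2))
per_century = per_orbit * (100.0 * 365.25 * 86400.0 / T) * RAD2ARCSEC
# roughly 43 arcseconds per century

# Eq. (ex13): deflection of a ray grazing the Sun (b = solar radius)
R_sun = 6.96e8
deflection = 4.0 * G * M_sun / (c**2 * R_sun) * RAD2ARCSEC
# roughly 1.75 arcseconds
```

Both numbers reproduce the values $43^{\prime\prime}$ and $1.75^{\prime\prime}$ cited in the text.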
\section{Introduction} The study of diffusion phenomena \cite{cussler2009diffusion,tyrrell2013diffusion} is an important problem in flow analysis, with applications in diverse areas such as biology \cite{yu2005diffusion}, mechanical and chemical engineering \cite{wijmans1995solution} and environmental sciences \cite{choy2017diffusion}. An important class of techniques for diffusion measurements comprises the optical interferometric techniques, which offer several advantages such as full-field measurement, good resolution, non-invasive operation and flexibility for both qualitative and quantitative assessment \cite{ambrosini2008overview,ambrosini2012}. Some of the prominent optical techniques in this domain include holographic interferometry \cite{gabelmann1979holographic,ruiz1985holographic,anand2006diffusivity,he2015development}, electronic speckle pattern interferometry \cite{paoletti1997temperature,riquelme2007interferometric,axelsson}, speckle decorrelation \cite{ambrosini2002speckle}, common-path interferometry \cite{rashidnia2002development}, phase-shifting interferometry \cite{torres2012development} and projection moir\'e interferometry \cite{spagnolo2004liquid}. In particular, the method described by Spagnolo et al. \cite{spagnolo2004liquid} can be considered a type of background-oriented schlieren \cite{raffel2015background,settles2017review}. The central idea in most of these techniques involves reliable extraction of the phase distribution encoded in an interferogram or fringe pattern, since the phase distribution is directly mapped to the refractive index fluctuations caused by the diffusion process. However, phase retrieval is not a trivial problem because of the challenges associated with noise, the computational requirements associated with image size and the total number of images, and single-frame versus multi-frame operation; accordingly, several approaches have been proposed in the literature.
Phase-shifting \cite{creath1985phase} is a popular multi-frame phase retrieval approach; however, the need for capturing multiple phase-shifted interferograms poses technical difficulties for studies of dynamics. On the other hand, single-frame methods such as the Fourier transform \cite{takeda1982fourier}, windowed Fourier transform \cite{kemao2007two,kemao2004windowed} and wavelet transform \cite{watkins1999determination,watkins2012review} can operate without the requirement of multiple fringe patterns. Among these methods, the Fourier transform method has been extensively applied in the domain of flow analysis and visualization \cite{spagnolo1994fourier,huntley01}, mainly because of its operational simplicity. However, the Fourier transform is a global operation and its performance is severely affected by the presence of localized fringe abnormalities and noisy regions in the fringe pattern. For studying a dynamic phenomenon such as diffusion, another important challenge is that a large number of time-lapsed fringe patterns are captured during imaging, and processing them for phase retrieval imposes a significant computational burden. This problem is even more apparent when the size of the individual fringe patterns, measured in terms of number of pixels, is large. Further, the investigation of a fast diffusion process usually requires high-speed imaging, where the number of recorded fringe patterns is large, in order to avoid any phase discontinuity between two consecutive images and to keep the phase variation small between successive fringe patterns. In this case, the benefits of high-performance fringe processing methods can be influential in the overall diffusion study. The main aim of our work is to propose a fringe processing method for flow analysis that addresses the twin challenges of noise sensitivity and computational efficiency.
Accordingly, in this paper, we propose a root multiple signal classification method for robust phase retrieval and present a highly efficient implementation using a graphics processing unit based computing framework. In recent years, there has been immense interest in the domain of fringe analysis in methods with noise-resistant capabilities \cite{montresor2016quantitative,montresor2019comparative,xia2018comparative,xia2017robust} and high-performance operation \cite{gao2009real,van2016real,wang2019fast,vishnoi2019rapid}, and our work is a step in this direction. The outline of the paper is as follows. The experimental technique based on diffractive optical element (DOE) background-oriented schlieren is described in section \ref{sec_exp}. The theory of the proposed method and the graphics processing unit based implementation are described in section \ref{sec_theory}. The simulation and experimental results are described in section \ref{sec_results}. This is followed by discussions and conclusions. Finally, in the appendix, some details are given about phase retrieval, refractive index variation and the diffusion coefficient. \section{Experimental setup} \label{sec_exp} The schematic of the experimental setup is shown in Fig. \ref{fig:exp_set}. The setup consists of a fiber-coupled diode laser, a diffractive optical element, a ground glass plate (D), a diffusion cell and a camera. The diffractive element used in the system is a saw-tooth diffraction grating (G). The diode laser produces a coherent beam of wavelength $670$ nm, and the spherical wave originating from the tip of the single-mode optical fiber illuminates the saw-tooth grating. The diffraction efficiencies for the $+1$ order and the zeroth order of the grating are $0.31$ and $0.4$, respectively. Hence, most of the power incident on the grating (G) is divided between these two orders. 
Since the intensities of these two orders are almost the same, the interference pattern or grating pattern formed has high visibility. This grating pattern is projected onto a ground glass plate (D) as shown in the schematic. The diffusive behavior of the ground glass plate (D) makes the projected grating pattern visible at the camera plane. The binary liquid solution to be analyzed is kept inside a diffusion cell (S) made of glass, placed after the ground glass plate (D). The spectrophotometric glass cell, equipped with a Teflon shutting device to avoid evaporation phenomena, has internal dimensions of $10 \times 8$ mm$^2$; the path length along the optical axis is $10$ mm. In the figure, A and B represent pure water and an aqueous solution of common salt, respectively. The diffusion cell is first half filled with pure water (the solvent), then the solute (NaCl $1.75$ M/l aqueous solution) is injected from the bottom to reduce turbulence and mixing. A CMOS TV camera (Silicon Video 9T001C with PIXCI® D2X imaging board by EPIX Inc., resolution 2048 $\times$ 1536 pixels, 3.2 $\mu$m $\times$ 3.2 $\mu$m pixel size), equipped with a TEC-55 $55$ mm F/$2.8$ Telecentric Computar Lens, captures the grating pattern at different times from the start of the diffusion process. The TEC-55 lens reduces viewing angle error and magnification error while providing good resolution and contrast with low distortion. The refractive index distribution inside the diffusion cell is non-uniform, due to the concentration difference. Thus, the distorted grating patterns, as seen through the diffusion cell, encode information about the refractive index variation, and a set of sequentially captured patterns contains the dynamics of the diffusion process. More details about the setup for analyzing diffusion in liquids are outlined in \cite{spagnolo2004liquid}. 
\begin{figure} \centering {\includegraphics[width=0.5\textwidth]{1}} \\ \caption{Experimental setup.} \label{fig:exp_set} \end{figure} For our experiment, the recorded set of time-lapsed fringe patterns can be modeled as an image stack, where each image has a spatial carrier modulated cosinusoidal intensity variation. Subsequently, by using bandpass filtering and carrier removal, the analytic or complex fringe signal \cite{spagnolo2004liquid,kemao2007two} is obtained as \begin{equation} \Gamma(x,y,t) = A(x,y,t)e^{j\phi(x,y,t)} + \eta(x,y) \label{eqn:1} \end{equation} where $t$ indicates the time instant, $A$ indicates the amplitude and $\phi$ denotes the phase distribution induced by the refractive index fluctuations caused by diffusion process. Here, $\eta$ is the noise term which is assumed to be additive white Gaussian noise (AWGN). As the cell dimensions remain unchanged throughout the experiment, the phase change with respect to time is primarily because of the refractive index variation induced by the diffusion process. Consequently, to investigate the dynamics of diffusion process, a reliable method for extracting the phase information is important. However, as mentioned before, phase estimation accuracy is severely deteriorated in the presence of noise. Accordingly, to address this challenging problem, our proposed method is described in the next section. \section{Theory} \label{sec_theory} In the proposed method, we select a region of the fringe signal by applying a moving window of size $(2L + 1) \times (2L + 1)$ where $L$ is a length parameter. The moving window will slide over the fringe signal effectively creating multiple blocks. Within the block, we assume the phase distribution to have a linear form and the fringe amplitude to have minimal variations so as to be approximately constant. 
Subsequently, the fringe signal inside a block can be written as \begin{equation} \boldsymbol{\Gamma}_w(x,y) = A_we^{j\phi_w(x,y)} + \eta_w(x,y) \label{eqn:wind} \end{equation} where the subscript ``$w$'' indicates the block index or number. The linear phase $\phi_w$ inside the block can be modeled as \begin{equation} \phi_w = \alpha + \omega_x x + \omega_y y \end{equation} As a consequence, the phase at any pixel can be calculated using Eq.(\ref{eqn:wind}) by estimating the unknowns $\alpha$, $\omega_x$ and $\omega_y$. We apply the root multiple signal classification approach \cite{hayes2009statistical} to extract these parameters. Consider the auto-correlation matrix of $\boldsymbol{\Gamma}_w$, denoted by $\boldsymbol{R}$, \begin{equation} \boldsymbol{R} = E\{\boldsymbol{\Gamma}_w\boldsymbol{\Gamma}_w^H\} = \boldsymbol{R}_s + \boldsymbol{R}_n \label{eqn:autocor} \end{equation} where $E\{.\}$ and $(.)^H$ denote the expectation and conjugate transpose operations. In Eq.(\ref{eqn:autocor}), $\boldsymbol{R}_s$ is called the signal auto-correlation matrix and is given as \begin{equation} \boldsymbol{R}_s = A_w^2 \begin{bmatrix} 1 & e^{-j\omega_y} & \cdots & e^{-j(M-1)\omega_y} \\ e^{j\omega_y} & 1 & \cdots & e^{-j(M-2)\omega_y} \\ \vdots & \vdots & \ddots & \vdots \\ e^{j(M-1)\omega_y} & e^{j(M-2)\omega_y} & \cdots & 1 \end{bmatrix} \label{eqn:auto_matrix} \end{equation} and $\boldsymbol{R}_n = \sigma_w^2\boldsymbol{I}$ is the noise auto-correlation matrix, with $\sigma_w^2$ denoting the noise variance and $\boldsymbol{I}$ an identity matrix of size $M\times M$, where $M=2L+1$. 
Defining the vector \begin{equation} \boldsymbol{u}_1 = \begin{bmatrix} 1 & e^{j\omega_y} & \cdots & e^{j(M-1)\omega_y} \end{bmatrix}^T \end{equation} Eq.(\ref{eqn:auto_matrix}) can be written compactly as \begin{equation} \boldsymbol{R}_s = A_w^2\boldsymbol{u}_1\boldsymbol{u}_1^H \end{equation} Multiplying both sides of the above equation on the right by the vector $\boldsymbol{u}_1$, we get \begin{equation} \begin{aligned} \boldsymbol{R}_s\boldsymbol{u}_1 &= A_w^2\boldsymbol{u}_1(\boldsymbol{u}_1^H\boldsymbol{u}_1) \\ &= (MA_w^2)\boldsymbol{u}_1 \label{eqn:eigen} \end{aligned} \end{equation} which implies that the vector $\boldsymbol{u}_1$ is an eigenvector of $\boldsymbol{R}_s$ corresponding to the eigenvalue $MA_w^2$. Since $\boldsymbol{R}_s$ is a Hermitian matrix, the remaining eigenvectors $\boldsymbol{u}_2$, $\boldsymbol{u}_3$, $\cdots$, $\boldsymbol{u}_M$ are orthogonal to $\boldsymbol{u}_1$, that is, \begin{equation} \boldsymbol{u}_1^H\boldsymbol{u}_i = 0\quad;\quad i = 2,3,\cdots,M \label{eqn:ortho} \end{equation} If $\lambda_i^s$ are the eigenvalues of $\boldsymbol{R}_s$, then Eq.(\ref{eqn:autocor}) gives \begin{equation} \begin{aligned} \boldsymbol{R}\boldsymbol{u}_i &= (\boldsymbol{R}_s + \sigma_w^2\boldsymbol{I})\boldsymbol{u}_i \\ &= (\lambda_i^s+ \sigma_w^2)\boldsymbol{u}_i \end{aligned} \label{eqn:eigen_val} \end{equation} Thus the eigenvalues of the noisy auto-correlation matrix $\boldsymbol{R}$ are $\lambda_i = \lambda_i^s+ \sigma_w^2$. From equations (\ref{eqn:eigen}) and (\ref{eqn:eigen_val}), it is clear that $\boldsymbol{u}_1$ is the eigenvector corresponding to the largest eigenvalue, given by $\lambda_1 = MA_w^2+\sigma_w^2$. The vector $\boldsymbol{u}_1$ represents the signal subspace \cite{stoica2005spectral}. 
Similarly, the matrix containing all the remaining eigenvectors represents the noise subspace and is given as \begin{equation} \boldsymbol{U}_n = \begin{bmatrix} \boldsymbol{u}_2 & \boldsymbol{u}_3 & \cdots & \boldsymbol{u}_M \end{bmatrix}_{M\times (M-1)} \end{equation} From Eq.(\ref{eqn:ortho}), we can infer that the signal subspace and the noise subspace are orthogonal to each other. Using this property, the unknown $\omega_y$ can be estimated by solving the polynomial equation \cite{stoica2005spectral} \begin{equation} \lvert \boldsymbol{u}_1^H\boldsymbol{U}_n \rvert^2 = \boldsymbol{u}_1^H(z_y)\boldsymbol{U}_n\boldsymbol{U}_n^H\boldsymbol{u}_1(z_y) = 0 \label{eqn:poly1} \end{equation} where $ \boldsymbol{u}_1(z_y) = \begin{bmatrix} 1 & z_y & \cdots & z_y^{M-1} \end{bmatrix}^T $ and $z_y = e^{j\omega_y}$. Following a similar analysis for the auto-correlation matrix $E\{\boldsymbol{\Gamma}_w^H\boldsymbol{\Gamma}_w\}$, we obtain a polynomial equation in $\omega_x$, \begin{equation} \lvert \boldsymbol{v}_1^H\boldsymbol{V}_n \rvert^2 = \boldsymbol{v}_1^H(z_x)\boldsymbol{V}_n\boldsymbol{V}_n^H\boldsymbol{v}_1(z_x) = 0 \label{eqn:poly2} \end{equation} where $ \boldsymbol{v}_1(z_x) = \begin{bmatrix} 1 & z_x & \cdots & z_x^{M-1} \end{bmatrix}^T $ represents the signal subspace with $z_x = e^{-j\omega_x}$, and the noise subspace is given by \begin{equation} \boldsymbol{V}_n = \begin{bmatrix} \boldsymbol{v}_2 & \boldsymbol{v}_3 & \cdots & \boldsymbol{v}_M \end{bmatrix}_{M\times (M-1)} \end{equation} For our analysis, the eigenvectors of the auto-correlation matrices were computed using the singular value decomposition approach \cite{golub2012matrix}. Further, the polynomial equations in Eqs.(\ref{eqn:poly1}) and (\ref{eqn:poly2}) can be solved using the eigenvalue decomposition of the companion matrix, as discussed in \cite{chapra2012applied}. 
Among the $2(M-1)$ possible roots of each polynomial, we select as $z_y$ (and, respectively, $z_x$) the root with magnitude less than 1 that is closest to the unit circle on the complex plane. Finally, the unknown parameters are estimated from the roots of Eqs.(\ref{eqn:poly1}) and (\ref{eqn:poly2}) as \begin{equation} \begin{aligned} \omega_y &= \textit{arg}(z_y) \\ \omega_x &= -\textit{arg}(z_x) \\ \alpha &= \angle \left[\overline{\boldsymbol{\Gamma}_we^{-j(\omega_xx+\omega_yy)}}\right] \end{aligned} \end{equation} where $\angle(\cdot)$ denotes the angle operation and $\overline{(.)}$ denotes the mean operation. The above process is repeated for all blocks, followed by an unwrapping operation \cite{herraez2002fast}, to obtain the overall phase map. The main advantage of the proposed approach is that the signal and noise components can be effectively separated, as discussed above, which provides high robustness against noise. However, the matrix-based operations associated with the proposed approach are computationally intensive. Hence, to improve the computational efficiency, the proposed method is implemented using a graphics processing unit (GPU) \cite{nvidia2011nvidia, sanders2010cuda}. A graphics processing unit is specialized hardware designed for parallel execution of a given task on numerous threads and is rapidly emerging as a powerful tool for parallel computing in optical metrology \cite{gao2012parallel,wang2018parallel}. Any recursive and independent tasks can be effectively performed on a GPU, resulting in high computation speed. In general, a GPU is connected to a central processing unit (CPU), called the host computer, which controls the execution of GPU threads. A C/C++ based heterogeneous programming model called compute unified device architecture (CUDA) was created by NVIDIA to program both host computers and CUDA-enabled GPUs. 
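Before turning to the GPU mapping, the per-window estimation procedure just described can be summarized in a compact CPU-side reference sketch. The following Python/NumPy code is a simplified single-window illustration under the stated assumptions (rank-one signal model, noise subspaces from an SVD of the window, polynomial coefficients read off as diagonal sums of $\boldsymbol{U}_n\boldsymbol{U}_n^H$); it is not the GPU implementation used in the paper.

```python
import numpy as np

def root_music_window(gamma_w):
    """Estimate (omega_x, omega_y, alpha) from one M x M complex window,
    assuming gamma_w ~ A * exp(j*(alpha + omega_x*x + omega_y*y)) + noise."""
    M = gamma_w.shape[0]
    U, _, Vh = np.linalg.svd(gamma_w)
    Un = U[:, 1:]             # noise subspace along y (left singular vectors)
    Vn = Vh.conj().T[:, 1:]   # noise subspace along x (right singular vectors)

    def closest_inside_root(noise_sub):
        # z^(M-1) * u1(z)^H (Un Un^H) u1(z): the coefficient of each power of z
        # is a diagonal sum (trace with offset) of C = Un Un^H.
        C = noise_sub @ noise_sub.conj().T
        coeffs = [np.trace(C, offset=k) for k in range(M - 1, -M, -1)]
        roots = np.roots(coeffs)
        inside = roots[np.abs(roots) < 1.0]
        return inside[np.argmax(np.abs(inside))]   # closest to the unit circle

    z_y, z_x = closest_inside_root(Un), closest_inside_root(Vn)
    omega_y, omega_x = np.angle(z_y), -np.angle(z_x)
    # Offset alpha: angle of the mean after removing the estimated linear phase.
    y, x = np.mgrid[0:M, 0:M]
    alpha = np.angle(np.mean(gamma_w * np.exp(-1j * (omega_x * x + omega_y * y))))
    return omega_x, omega_y, alpha
```

Applying this function to every window of the fringe signal, followed by unwrapping, yields the overall phase map; the GPU version discussed next parallelizes exactly this per-pixel computation.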
The threads are grouped into multiple blocks, and each block is executed concurrently based on the availability of streaming multiprocessors. The code for the host CPU is written as C-like functions, which are executed sequentially, whereas a task to be executed in parallel on the GPU is written using a special function called a kernel. The user can specify the number of threads and blocks to be launched when calling a kernel function. To implement the proposed method, we used NVIDIA's Quadro M5000 GPU, which has 16 streaming multiprocessors and supports up to 1024 threads in each block. Also, native single-precision CUDA arithmetic was used in our computations. As the proposed method is based on independent pixel-wise operations, the processing of each window can be programmed in a GPU kernel function so that each thread estimates the phase at the corresponding pixel. This GPU computing based approach has the potential to significantly speed up the computations required in the proposed method. 
\begin{algorithm}[t] \caption{Pseudo-code of GPU kernel used for phase estimation}\label{Alg:kernel} \begin{algorithmic}[1] \Function{gpu\_kernel}{int $N_y$, int $N_x$} \State \quad (px, py) $\leftarrow$ Compute pixel indexes \State \quad \textbf{if} (px $<$ $N_x$ and py $<$ $N_y$) \State \qquad $\boldsymbol{\Gamma}_w$ $\leftarrow$ $(M\times M)$ window around (px, py) \State \qquad \textbf{U},\textbf{S},$\textbf{V}^H$ $\leftarrow$ SVD of $\boldsymbol{\Gamma}_w$ \State \qquad $\textbf{U}_n$ $\leftarrow$ $\begin{bmatrix} \boldsymbol{u}_2 & \boldsymbol{u}_3 & \cdots & \boldsymbol{u}_M \end{bmatrix}_{M\times (M-1)}$ \State \qquad $\textbf{V}_n$ $\leftarrow$ $\begin{bmatrix} \boldsymbol{v}_2 & \boldsymbol{v}_3 & \cdots & \boldsymbol{v}_M \end{bmatrix}_{M\times (M-1)}$ \State \qquad y\_poly $\leftarrow$ $\boldsymbol{u}_1^H(z_y)\boldsymbol{U}_n\boldsymbol{U}_n^H\boldsymbol{u}_1(z_y)$ \State \qquad x\_poly $\leftarrow$ $\boldsymbol{v}_1^H(z_x)\boldsymbol{V}_n\boldsymbol{V}_n^H\boldsymbol{v}_1(z_x)$ \State \qquad $z_y$ $\leftarrow$ Root of y\_poly which is inside and closest to the unit circle \State \qquad $z_x$ $\leftarrow$ Root of x\_poly which is inside and closest to the unit circle \State \qquad $\phi(px,\ py)$ $\leftarrow$ Compute phase using Eq. (15) \State \quad \textbf{end if} \EndFunction \end{algorithmic} \end{algorithm} Algorithm~\ref{Alg:kernel} lists the step-wise operations performed by the gpu\_kernel function for estimating the phase using the proposed method. When this kernel is called by the CPU, the GPU launches a user-specified number of threads in parallel, where each thread simultaneously performs the steps listed in Algorithm~\ref{Alg:kernel}. The pixel index of a particular thread can be calculated using CUDA-specific variables that are accessible in each active thread. Note that in steps 6 and 7, the vectors $\boldsymbol{u}_i$ and $\boldsymbol{v}_i$ represent the $i^{th}$ columns of $\boldsymbol{U}$ and $\boldsymbol{V}$ respectively. 
After computing the polynomial roots, the estimated phase at the pixel corresponding to the thread is calculated using Eq. (15). \section{Results} \label{sec_results} To demonstrate the applicability of the proposed method, we simulated a noisy complex fringe signal at a signal to noise ratio (SNR) equal to 0 dB. The size of the simulated fringe pattern is 512 $\times$ 512 pixels. All simulations were performed using Numpy \cite{van2011numpy}, which is an efficient scientific library for the Python programming language. The main advantage of this library is that it provides an easy implementation of a multi-dimensional array or matrix datatype similar to MATLAB, but is purely open source and free of cost. The real part of the signal, i.e., the fringe pattern, is shown in Fig.\ref{fig:sim_results}(a). Subsequently, we applied the proposed method for phase retrieval from the noisy fringe signal. The estimated phase in radians using the proposed method with $L=5$ is shown in Fig.\ref{fig:sim_results}(b), and the corresponding absolute phase estimation error (logarithmic values) is shown in Fig.\ref{fig:sim_results}(c). For comparison, the estimated phases using both the one-dimensional wavelet transform (1D-WT) \cite{watkins1999determination} and the two-dimensional wavelet transform (2D-WT) \cite{watkins2012review} are shown in Figs.\ref{fig:sim_results}(d) and (f). The wavelet transform based approach is popular for analyzing a non-stationary signal such as a fringe pattern. It offers an excellent tool for studying the local spatial frequencies associated with the fringe pattern and provides multi-resolution space-frequency analysis capability \cite{watkins2012review}. The corresponding estimation errors (logarithmic values) using the wavelet transform methods are given in Fig.\ref{fig:sim_results}(e) and (g). Note that border pixels are neglected to ignore the errors at the boundaries. \begin{figure}[t!] 
\centering {\includegraphics[width=0.5\textwidth]{2}} \caption{(a) Simulated fringe pattern of size $512\times 512$ at SNR of 0 dB. Estimated phases in radians using (b) proposed method, (d) 1D-WT method and (f) 2D-WT method. Corresponding absolute phase estimation errors (logarithmic values) are shown in (c), (e) and (g).} \label{fig:sim_results} \end{figure} \begin{figure}[t!] \centering {\includegraphics[width=0.5\textwidth]{3}} \caption{Top view of estimated phase using (a) proposed method, (c) 1D-WT method and (e) 2D-WT method with the corresponding line profiles along the dashed lines shown in (b), (d) and (f) respectively.} \label{fig:sim_results_3} \end{figure} \begin{figure}[t!] \centering {\includegraphics[width=0.5\textwidth]{4}} \\ \caption{Plot comparing the RMSE of the estimated phase using various methods.} \label{fig:sim_rmse} \end{figure} For better visualization, we also show the top view images of the estimated phases in radians using the proposed method, the one-dimensional wavelet transform method and the two-dimensional wavelet transform method in parts (a,c,e) of Figure \ref{fig:sim_results_3}. Without loss of generality, an arbitrary row (marked by a red dashed line) was chosen in the top view images, and the corresponding line profile plots of the estimated phases in radians are shown in parts (b,d,f) of Figure \ref{fig:sim_results_3}. It is evident that the proposed method offers a smoother line profile as compared to the other methods. For quantitative assessment, we also computed the root mean square error (RMSE) of the estimated phase using the different methods at various levels of noise. The RMSE versus SNR plot for the proposed method and the wavelet based methods is shown in Fig.\ref{fig:sim_rmse}. This figure clearly highlights the robustness of the proposed method against noise for fringe processing. 
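The noisy input for a simulation of this kind can be generated as in the following Python/NumPy sketch, which synthesizes the signal model of Eq.(\ref{eqn:1}) with complex additive white Gaussian noise at a prescribed SNR. The phase surface and all parameter values here are illustrative assumptions, not necessarily those used for Fig.\ref{fig:sim_results}.

```python
import numpy as np

def simulate_fringe(N=128, snr_db=0.0, seed=0):
    """Simulate Gamma = A * exp(j*phi) + eta (cf. Eq. (1)) with complex AWGN
    at the requested SNR in dB. Phase surface is illustrative (quadratic+linear)."""
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[0:N, 0:N] / N                  # normalized pixel coordinates
    phi = 20.0 * (x - 0.5) ** 2 + 30.0 * y         # smooth phase map in radians
    clean = 1.0 * np.exp(1j * phi)                 # unit fringe amplitude A = 1
    # Complex AWGN scaled so that 10*log10(P_signal / P_noise) = snr_db.
    p_noise = np.mean(np.abs(clean) ** 2) / 10.0 ** (snr_db / 10.0)
    eta = np.sqrt(p_noise / 2.0) * (rng.standard_normal((N, N))
                                    + 1j * rng.standard_normal((N, N)))
    return clean + eta, phi

gamma, phi_true = simulate_fringe(N=128, snr_db=0.0)
```

The real part of `gamma` plays the role of the recorded fringe pattern; at 0 dB the noise power equals the signal power, matching the most severe condition considered in the simulation study.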
\begin{table}[t] \centering \caption{Comparison of phase estimation errors for different window sizes} \label{tab:1} \begin{tabular}{|c|c|} \hline \makecell{L (in pixels)} & \makecell{RMSE (in radians)}\\ \hline 1 & 5.3823\\ \hline 2 & 0.4893\\ \hline 3 & 0.1090\\ \hline 4 & 0.0945\\ \hline 5 & 0.1316\\ \hline 6 & 0.1552\\ \hline 7 & 0.2185\\ \hline 8 & 0.2937\\ \hline \end{tabular} \end{table} Further, we also investigated the effect of window size on estimation accuracy. In Table \ref{tab:1}, the root mean square errors for phase estimation are displayed for varying values of the window size parameter $L$. For a small window, the linear phase approximation is more accurate; however, this brings high noise susceptibility, since fewer data samples are processed. On the other hand, a larger window size offers better robustness against noise, though the accuracy of the linear phase model deteriorates in this case. With respect to computational efficiency, as mentioned before, we implemented the proposed method using a graphics processing unit. For comparison, we also implemented C programming language based sequential processing using the gcc compiler with the optimization option (-O3) enabled. The comparison between sequential CPU processing and the parallel GPU computing approach is shown in Table \ref{tab:2}. It is evident that as the image size grows, tremendous improvements in execution runtime can be achieved using the graphics processing unit based implementation. 
\begin{table}[t] \centering \caption{Comparison of computation time for estimating phase from fringe patterns of varied sizes.} \label{tab:2} \begin{tabular}{|c|c|c|} \hline \makecell{} & \multicolumn{2}{c|}{\makecell{Execution time (sec)}} \\ \cline{2-3} \makecell{Image size\\ (in pixels)} & \makecell{CPU\\(using C)} & \makecell{GPU\\(using CUDA)} \\ \hline $256\times 256$ & 30.09 & 1.02 \\ \hline $512\times 512$ & 121.30 & 3.59 \\ \hline $1024\times 1024$ & 486.81 & 13.88 \\ \hline $2048\times 2048$ & 1952.65 & 55.36 \\ \hline \end{tabular} \end{table} \begin{figure}[t!] \centering {\includegraphics[width=0.5\textwidth]{5}} \caption{An experimentally recorded fringe pattern.} \label{fig:exp_results_4} \end{figure} \begin{figure}[t!] \centering {\includegraphics[width=0.5\textwidth]{6}} \caption{(a,c,e,g) Experimentally recorded fringe patterns and (b,d,f,h) corresponding phase estimated using the proposed method.} \label{fig:exp_results_1} \end{figure} \begin{figure}[t!] \centering {\includegraphics[width=0.5\textwidth]{7}} \caption{(a,c,e,g) Experimentally recorded fringe patterns and (b,d,f,h) corresponding phase estimated using the proposed method.} \label{fig:exp_results_2} \end{figure} \begin{figure}[t!] \centering {\includegraphics[width=0.5\textwidth]{8}} \caption{(a,c) Experimentally recorded fringe patterns and (b,d) corresponding phase estimated using the proposed method.} \label{fig:exp_results_3} \end{figure} The utility of the proposed method for practical applications is demonstrated using the experimentally recorded fringe patterns in diffractive optical element based background-oriented schlieren. Images (8-bit depth) were taken using an f-number of 4, an exposure time of $2.5$ ms and a magnification such that $1$ pixel = $9.1$ $\mu$m. The time-lapsed data set comprises 19 sequentially captured fringe patterns for the diffusion experiment. Frames 1-5 were recorded at time intervals of $120$ s (i.e. 
$120$ s, $240$ s, $360$ s, $480$ s and $600$ s from the beginning of diffusion) while Frames 6-19 were recorded at time intervals of $300$ s (i.e. $900$ s, $1200$ s, $1500$ s etc.). For the experimental analysis, fringe patterns with a size of 1850 $\times$ 700 pixels were processed using the proposed method for phase recovery with $L=7$. A representative fringe pattern from the data set is shown in Figure \ref{fig:exp_results_4}. The images marked by different frame numbers are shown in parts (a,c,e,g) of Figures \ref{fig:exp_results_1}, \ref{fig:exp_results_2} and parts (a,c) of Figure \ref{fig:exp_results_3}. After phase unwrapping, the estimated phases in radians using the proposed method for the different fringe patterns are shown in parts (b,d,f,h) of Figures \ref{fig:exp_results_1}, \ref{fig:exp_results_2}, and parts (b,d) of Figure \ref{fig:exp_results_3}. In these figures, the variation of the phase distribution with time is clearly evident. This is because, as the diffusion process progresses with time, the initially non-uniform refractive index distribution of the mixture proceeds towards a uniform distribution. These temporal fluctuations in refractive index lead to equivalent temporal phase variations exhibited in the shown images. \section{Discussion} Diffractive optical element based BOS provides a simple and straightforward technique for studying flow-based phenomena such as diffusion and can be easily applied by an unskilled operator. The applicability of this system for flow analysis can be tremendously improved by the infusion of robust data processing methods. Our work is a step in this direction, proposing a noise-resistant and fast method for fringe processing. The performance of the proposed method is validated using the presented simulation and experimental results. Further, the applied GPU computing methodology provides significant improvements in computational efficiency and paves the way for rapid flow analysis using optical interferometry. 
The utility of this framework is especially evident when a large number of fringe patterns with large image sizes need to be processed. In addition, the external factors which deteriorate the estimation accuracy in fringe processing include temperature effects, vibrations and air currents. To an extent, their effect is addressed by the single-shot fringe processing operation of the proposed method, where only one frame is required for extracting the desired phase map. In contrast, multi-frame methods such as phase-shifting algorithms require the recording of multiple fringe patterns, which can induce more susceptibility to these external disturbances. For more details about accuracy and reproducibility in diffusion measurements, the reader is referred to \cite{axelsson} and references therein. \section{Conclusions} In this paper, we proposed an elegant fringe processing method in DOE based background-oriented schlieren for flow analysis. The dual advantages of high noise robustness and good computational efficiency, attributed to graphics processing unit based computing, provide great practical utility to the proposed method. The authors believe that the proposed method has great potential for flow analysis and visualization. \section*{Acknowledgments} Dr. Gannavarpu Rajshekhar gratefully acknowledges the funding obtained from the Department of Science and Technology (DST), India under grant number DST/NM/NT/2018/2. \section*{Appendix} This section is about phase retrieval, refractive index retrieval and the diffusion coefficient. These topics are discussed in depth in the literature, e.g. \cite{ambrosini2008overview,gabelmann1979holographic,ruiz1985holographic,riquelme2007interferometric,axelsson, rashidnia2002development,spagnolo2004liquid}; therefore, only a very brief account is given here for the coherence of the paper. 
``Diffusion is the process by which matter is transported from one part of a system to another as a result of random molecular motion'' \cite{crank}; the final result of the process is complete mixing. A free diffusion process can be described by Fick's second law: for one-dimensional diffusion, \begin{equation} \pdv{c(x,t)}{t} =D \pdv[2]{c(x,t)}{x} \label{eqn:diff1} \end{equation} where $c$ is the concentration, $x$ and $t$ are position and time, respectively, and $D$ is the diffusion coefficient (independent of $c$). For dilute solutions, the refractive index $n$ can be treated as a linear function of the concentration $c$. The solution of Eq.(\ref{eqn:diff1}) for two binary liquid mixtures, initially ($t=0$) separated at $x=0$, is available in the literature \cite{ambrosini2008overview,gabelmann1979holographic,spagnolo2004liquid,crank}. A change in concentration induces a change in refractive index; this non-uniform $n$ deflects the beam travelling through the test section \cite{spagnolo2004liquid}. The change of the beam deflection angle can be considered as a local shift of the fringe pattern as seen through the diffusion cell, giving rise to the phase term $\phi$ of Eq.(\ref{eqn:1}). In particular, under the paraxial approximation, the expression relating the refractive index derivative to the phase term can be written as \cite{spagnolo2004liquid} \begin{equation} \pdv{n(x,t)}{x} =\frac{1}{2\mu f_x}\frac{n_0}{L^2} \phi(x,t) \label{eqn:diff2} \end{equation} where $f_x$ is the fringe frequency, $n_0$ is a suitable index of refraction and $L$ represents the test cell dimension along the propagation axis. Therefore the estimated phases, presented in parts (b,d,f,h) of Figures \ref{fig:exp_results_1}, \ref{fig:exp_results_2}, and parts (b,d) of Figure \ref{fig:exp_results_3}, are proportional to the derivative of $n$.
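For completeness, the classical solution of Eq.(\ref{eqn:diff1}) for two liquids initially separated at $x=0$ (with $c = c_0$ for $x<0$ and $c = 0$ for $x>0$) is the error-function profile $c(x,t) = \tfrac{c_0}{2}\,\mathrm{erfc}\!\left(x/(2\sqrt{Dt})\right)$, whose spatial derivative is a Gaussian centred at the initial boundary. The Python sketch below illustrates this; the coordinate convention and parameter values are assumptions for illustration only.

```python
from math import erfc, exp, pi, sqrt

def concentration(x, t, D, c0=1.0):
    """c(x, t) for free diffusion of two liquids initially separated at x = 0
    (c = c0 for x < 0, c = 0 for x > 0): the erfc solution of Fick's 2nd law."""
    return 0.5 * c0 * erfc(x / (2.0 * sqrt(D * t)))

def concentration_gradient(x, t, D, c0=1.0):
    """dc/dx: a Gaussian centred at x = 0. Since n is linear in c for dilute
    solutions, dn/dx (and hence the measured phase) has the same shape."""
    s = 2.0 * sqrt(D * t)
    return -c0 / (s * sqrt(pi)) * exp(-(x / s) ** 2)

# Illustrative values: D ~ 1e-9 m^2/s (order of magnitude for salts in water).
profile = [concentration(x * 1e-4, 600.0, 1e-9) for x in range(-10, 11)]
```

As $t$ grows, the Gaussian $\partial c/\partial x$ broadens and flattens, which is the temporal behaviour reflected in the phase maps of the experimental section.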
\section{Introduction} \label{sec:intro} The research community recognizes several important normative dimensions of information technology including privacy, transparency, and fairness. In this survey we focus on fairness --- a broad and inherently interdisciplinary topic of which the social and philosophical foundations are still unresolved~\cite{DBLP:journals/cacm/ChouldechovaR20}. Research on fair machine learning has mainly focused on classification and prediction tasks~\cite{fairMLbook,DBLP:journals/cacm/ChouldechovaR20}, while we focus on ranking. As is customary in fairness research, we assume that input data describes \emph{individuals} --- natural persons seeking education, employment, or financial opportunities, or being prioritized for access to goods and services. While some of the algorithmic techniques described here can be applied to entities other than people, we believe that the concept of fairness, along with the corresponding normative frameworks, applies predominantly to scenarios where data describes people. For consistency, we will refer to the set of individuals in the input to a ranking task as \emph{candidates}. We consider two types of ranking tasks: score-based and supervised learning. In score-based ranking, a given set of candidates is sorted on the score attribute, which may itself be computed on the fly, and returned in sorted order. In supervised learning, a preference-enriched training set of candidates is given, with preferences among them stated in the form of scores, preference pairs, or lists; this training set is used to train a model that predicts the ranking of unseen candidates. For both score-based and supervised learning tasks, we typically return the best-ranked $k$ candidates, the top-$k$. Set selection is a special case of ranking that ignores the relative order among the top-$k$, returning them as a set. 
To make this discussion concrete, we now present our running example from university admissions, a domain in which ranking and set selection are very natural and are broadly used. \subsection{Running example: university admissions} \label{sec:intro:example} Consider an admissions officer at a university who selects candidates from a large applicant pool. When making their decision, the officer pursues some or all of the goals listed below. Some of these goals may be legally mandated, while others may be based on the policies adopted by the university, and include admitting students who: \begin{itemize} \item are likely to succeed: complete the program with high marks and graduate on time; \item show strong interest in specific majors like computer science, art, or literature; and \item form a demographically diverse group, both overall and in each major. \end{itemize} \input{figs/example-supervised-ranker} Figure~\ref{fig:admissions} shows a dataset $\candidateSet{}$ of applicants and illustrates the admissions process. Each applicant submits several quantitative scores, all transformed here to a discrete scale of 1 (worst) through 5 (best) for ease of exposition: $X_1$ is the high school GPA (grade point average), $X_2$ is the verbal portion of the SAT (Scholastic Assessment Test) score, and $X_3$ is the mathematics portion of the SAT score. Attribute $X_4$ (choice) is a weighted feature vector extracted from the applicant's essay, with weights ranging between 0 and 1, where a higher value corresponds to stronger interest in a specific major. For example, candidate \val{b} is a White male with a high GPA (4 out of 5), perfect SAT verbal and SAT math scores (5 out of 5), a strong interest in studying computer science (feature weight 0.9), and some interest in studying art (weight 0.2). The admissions officer uses a suite of tools to sift through the applications and identify promising candidates. 
Many of these tools are \emph{rankers}, illustrated in Figure~\ref{fig:ranker_general}. A ranker takes a dataset of candidates, described by structured features, text, or both, as input and produces a permutation of these candidates, also called a \emph{ranking}. The admissions officer will take the order in which the candidates appear in a ranking under advisement when deciding whom to consider more closely, interview, and admit. These tools include score-based rankers (Section~\ref{sec:intro:score-based}) that compute the score of each candidate based on a formula that the admissions officer provides, and then return some number of highest-scoring applicants in ranked order. This \emph{scoring formula} may, for example, specify the score as a linear combination of the applicant's high-school GPA and the two components of their SAT score, each carrying an equal weight. This is done in Figure~\ref{fig:admissions}(a), where a candidate's score is computed as $\queryScore{1} = X_1 + X_2 + X_3$ and then ranking $\boldsymbol{\tau}_1$ in Figure~\ref{fig:admissions}(b) is produced. \input{figs/merge-ranker-db-example} Predictive analytics are also part of the admissions officer's toolkit (Section~\ref{sec:intro:learned}). For example, multiple ranking models may be trained, one per undergraduate major or set of majors, on features $X_1, X_2, X_3, X_4$ of the successful applicants from the past years, to predict an applicant's standing upon graduation (based, e.g.,\xspace on their GPA in the major). These ranking models are then used to predict a ranking of this year's applicants.
In our example in Figure~\ref{fig:admissions}(a), feature $\queryScore{2}$ predicts performance in a STEM (Science, Technology, Engineering, Mathematics) major such as computer science (\val{cs}), economics (\val{econ}), or mathematics (\val{math}) and leads to ranking $\boldsymbol{\tau}_2$ in Figure~\ref{fig:admissions}(c), while feature $\queryScore{3}$ predicts performance in a humanities major such as literature (\val{lit}) or fine arts (\val{art}) and leads to ranking $\boldsymbol{\tau}_3$ in Figure~\ref{fig:admissions}(d). The promising applicants identified in this way---with the help of either a score-based ranker or a predictive analytic---will then be considered more closely, \emph{in ranked order}: invited for an interview and potentially admitted. Let us recall that, in addition to incorporating quantitative scores and students' choice, an admissions officer also aims to admit a demographically diverse group of students to the university and to each major. Further, the admissions officer is increasingly aware that the data on which their decisions are based may be biased, in the sense that this data may carry results of historical discrimination or disadvantage, and that the computational tools at their disposal may be exacerbating or introducing new forms of bias, or even creating a kind of self-fulfilling prophecy. (See discussion of the types of bias in Section~\ref{sec:frame:bias}.) For this reason, the officer may elect to incorporate one or several fairness objectives into the ranking process. For example, they may assert, for legal or ethical reasons, that the proportion of female applicants among those selected for further consideration should match their proportion in the input. Applying this requirement to ranking $\boldsymbol{\tau}_1$ in Figure~\ref{fig:ad_example} (in which we elaborate on the already familiar example in Figure~\ref{fig:admissions}) yields ranking $\boldsymbol{\tau}_2$ in Figure~\ref{fig:ad_example}.
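One simple way to operationalize such a set-level representation requirement is sketched below in Python. This is an illustration of the idea on hypothetical data, not any specific algorithm surveyed later: greedily walk down the score-ordered list and cap each group's seats at its proportional share of the pool.

```python
import math
from collections import Counter

def proportional_top_k(ranked, group, k):
    """Greedily pick k candidates in score order, capping each group's seats
    at ceil(k * its share of the pool)."""
    share = Counter(group[c] for c in ranked)
    cap = {g: math.ceil(k * n / len(ranked)) for g, n in share.items()}
    picked, used = [], Counter()
    for c in ranked:
        if len(picked) == k:
            break
        if used[group[c]] < cap[group[c]]:
            picked.append(c)
            used[group[c]] += 1
    return picked

# hypothetical candidates in score order, and their sex
ranked = ["b", "a", "c", "d", "e", "f"]
group = {"b": "M", "a": "M", "c": "M", "d": "F", "e": "F", "f": "M"}
print(proportional_top_k(ranked, group, 3))  # ['b', 'a', 'd']: 'c' yields a seat to 'd'
```

The unconstrained top-3 would be all male; under the proportional cap, the highest-scoring woman replaces the lowest-scoring man in the selected set.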
Further, the admissions officer may assert that, because applicants are interviewed in ranked order, it is important to achieve proportional representation by sex in \emph{every prefix} of the produced ranking, which yields ranking $\boldsymbol{\tau}_3$ in Figure~\ref{fig:ad_example}. In this survey we give an overview of the technical work that would allow an admissions officer to compute ranked results under these and other fairness requirements. \subsection{Scope and contributions of the survey} \label{sec:intro:scope} In the past few years, there has been much work on incorporating fairness requirements into algorithmic rankers. Giving a systematic overview of this work is the primary goal of our survey. Which specific fairness requirements an admissions officer will assert depends on the values they are operationalizing and, thus, on the mitigation objectives. An important goal of this survey is to create an explicit mapping between mitigation objectives, which we will characterize in Section~\ref{sec:frame:mit_goal}, and existing technical methods for fairness in ranking, discussed in Sections~\ref{sec:fair_db},~\ref{sec:fair_ir}, and~\ref{sec:fair_recsys}. Without such a mapping, an admissions officer in our running example would have a difficult time selecting an appropriate fairness-enhancing intervention, and would not know which interventions are mutually comparable and which are not. In our survey we will present a selection of approaches for fairness in ranking that were developed in several subfields of computer science, including data management, algorithms, information retrieval, and recommender systems. We are aware of several recent tutorials on fairness in ranking at RecSys 2019~\cite{DBLP:conf/recsys/EkstrandBD19} and VLDB 2020~\cite{vldb20tutorial}, pointing to the need to systematize the work in this area and motivating our survey. Our goal is to offer a broad perspective, connecting work across subfields. 
The primary focus of this survey is on associational fairness measures for ranking, although we do include one recently proposed causal framework. The rest of this survey is organized as follows: We start with the preliminaries and fix notation in Section~\ref{sec:prelim}. We then go on to present classification frameworks along which we relate the technical methods surveyed in this paper in Section~\ref{sec:02-four-frameworks}. In Sections~\ref{sec:fair_db},~\ref{sec:fair_ir}, and~\ref{sec:fair_recsys}, we describe technical work on fairness in score-based ranking, supervised learning, and recommender systems, respectively. We discuss evaluation datasets and frameworks in Section~\ref{sec:eval}, present important directions of future work in Section~\ref{sec:discuss}, and conclude in Section~\ref{sec:conc}. \section{Four Classification Frameworks for Fairness-Enhancing Interventions} \label{sec:02-four-frameworks} Operationally, algorithmic approaches surveyed in this paper differ in how they represent candidates (e.g., whether they support one or multiple sensitive attributes, and whether these are binary), in the type of bias they aim to surface and mitigate, in what fairness measure(s) they adopt, in how they navigate the trade-offs between fairness and utility during mitigation, and, for supervised learning methods, at what stage of the pipeline a mitigation is applied. Conceptually, these operational choices correspond to normative statements about the types of bias being observed and mitigated, and the objectives of the mitigation. In this section we give four classification frameworks that allow us to relate the technical choices with the normative judgments they encode, and to identify the commonalities and the differences between the many algorithmic approaches. \subsection{Group structure} \label{sec:frame:group} Recall that fairness of a method is stated with respect to a set of categorical sensitive attributes (or features).
Individuals who have the same value of a particular sensitive attribute, such as gender or race, are called a \emph{group}. In this survey, we consider several orthogonal dimensions of group structure, based on the handling of sensitive attributes. \paragraph*{Cardinality of sensitive attributes} Some methods consider only \emph{binary} sensitive attributes (e.g.,\xspace binary gender, majority or minority ethnic group), while other methods handle higher-cardinality (\emph{multinary}) domains of values for sensitive attributes. If a multinary domain is supported, methods differ in whether they consider one of the values to be protected (corresponding to a designated group that has been experiencing discrimination), or if they treat all values of the sensitive attribute as potentially being subject to discrimination. \paragraph*{Number of sensitive attributes} Some methods are designed to handle a \emph{single sensitive attribute} at a time (e.g.,\xspace they handle gender or race, but not both), while other methods handle \emph{multiple sensitive attributes} simultaneously (e.g.,\xspace they handle both gender and race). \paragraph*{Handling of multiple sensitive attributes} Methods that support multiple sensitive attributes differ in whether they handle these \emph{independently} (e.g.,\xspace by asserting fairness constraints w.r.t. the treatment of both women and Blacks) or \emph{in combination} (e.g.,\xspace by requiring fairness w.r.t. Black women). Note that any method that supports a single multinary attribute can be used to represent multiple sensitive attributes with the help of a computed high-cardinality sensitive attribute. For example, a computed sensitive attribute \emph{gender-race-disability} can represent the Cartesian product $\{male,female,non\text{-}binary\}\cross\{White,Black,Asian\}\cross\{disabled,non\text{-}disabled\}$.
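Constructing such a computed attribute is straightforward; a small Python sketch follows (the attribute names and the string encoding are illustrative, not taken from any surveyed method):

```python
from itertools import product

genders = ["male", "female", "non-binary"]
races = ["White", "Black", "Asian"]
disability = ["disabled", "non-disabled"]

# every combination of values becomes one value of the computed attribute
combined_domain = ["-".join(v) for v in product(genders, races, disability)]
print(len(combined_domain))  # 3 * 3 * 2 = 18 values

def combined(candidate):
    """Map a candidate record to its computed gender-race-disability value."""
    return f"{candidate['gender']}-{candidate['race']}-{candidate['disability']}"

print(combined({"gender": "female", "race": "Black", "disability": "non-disabled"}))
# female-Black-non-disabled
```

A method that accepts one multinary sensitive attribute can then be run unchanged on the 18-valued computed attribute.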
We may be tempted to say that such methods take the point of view of intersectional discrimination~\cite{crenshaw1990mapping, makkonen2002multiple}. However, as we will discuss in Section~\ref{sec:frame:mit_goal}, detecting and mitigating intersectional discrimination is more nuanced, and so it is in general not true that if a method takes a Cartesian product of sensitive attribute values then it handles intersectional discrimination, and if a method treats sensitive attributes independently then it does not. \subsection{Type of bias} \label{sec:frame:bias} We study ranking systems with respect to the types of bias that they attempt to mitigate, namely, pre-existing bias, technical bias, and emergent bias, as defined by~\cite{DBLP:journals/tois/FriedmanN96}. \paragraph*{Pre-existing bias} This type of bias includes all biases that exist independently of an algorithm itself and have their origins in society. For an example of pre-existing bias in rankings, consider the Scholastic Assessment Test (SAT). College applicants in the US are commonly ranked on their SAT score, often in combination with other features. It has been documented that the mean score of the math section of the SAT differs across racial groups, as does the shape of the score distribution. According to a Brookings report that analyzed 2015 SAT test results, ``The mean score on the math section of the SAT for all test-takers is 511 out of 800, the average scores for blacks (428) and Latinos (457) are significantly below those of whites (534) and Asians (598). The scores of black and Latino students are clustered towards the bottom of the distribution, while white scores are relatively normally distributed, and Asians are clustered at the top''~\cite{brookings_race}. This disparity is often attributed to racial and class inequalities that are encountered early in life and present persistent obstacles to upward mobility and opportunity.
\paragraph{Technical bias} This type of bias arises from technical constraints or considerations, such as the screen size or a ranking's inherent position bias --- the geometric drop in visibility for items at lower ranks compared to those at higher ranks. Position bias arises because in Western cultures we read from top to bottom, and from left to right, and so items appearing in the top-left corner of the screen attract more attention~\cite{DBLP:journals/cacm/Baeza-Yates18}. A practical implication of position bias in rankings that do not admit ties is that, even if two items are equally suitable for a searcher, only one of them can be placed above the other in a ranking, suggesting to the searcher that it is better and should be prioritized. Note that, as all rankings carry an inherent position bias, any method that produces rankings with equalized candidate visibility implicitly addresses this technical bias. However, we will only assign a method to technical bias mitigation if the paper is explicitly concerned with it, such as~\cite{biega2018equity}. \paragraph{Emergent bias} This type of bias arises in a context of use and may be present if a system was designed with different users in mind or when societal concepts shift over time. In the context of ranking and recommendation it arises most notably because searchers tend to trust the systems to indeed show them the most suitable items at the top positions~\cite{pan2007google}, which in turn shapes a searcher's idea of a satisfactory answer for their search. These feedback loops can create a ``winner-takes-all'' situation in which consumers increasingly prefer one majority product over everything else. \subsection{Mitigation objectives} \label{sec:frame:mit_goal} \paragraph*{Worldviews} Friedler et al.
~\cite{friedler2016possibility} reflect on the impossibility of a purely objective interpretation of algorithmic fairness (in the sense of a lack of bias): ``In order to make fairness mathematically precise, we tease out the difference between beliefs and mechanisms to make clear what aspects of this debate are opinions and which choices and policies logically follow from those beliefs.'' They model the decision pipeline of a task as a sequence of mappings between three metric spaces: construct space ({\it CS}\xspace), observed space ({\it OS}\xspace), and decision space ({\it DS}\xspace), and define worldviews (belief systems) as assumptions about the properties of these mappings. The spaces and the mappings between them are illustrated in Figure~\ref{fig:wae_wys} for the college admissions example. Individuals are represented by points. {\it CS}\xspace represents the ``true'' unobservable properties of an individual (e.g.,\xspace intelligence and grit), while {\it OS}\xspace represents the properties that we can measure (e.g.,\xspace SAT score as a proxy for intelligence, high school GPA as a proxy for grit) and serves as the feature space of an algorithmic ranker. An observation process $g(p) = \hat{p}$ maps from an individual $p \in {\it CS}\xspace$ to an entity $\hat{p} \in {\it OS}\xspace$. An example of such a process is an SAT test. A decision process in turn maps from {\it OS}\xspace to the decision space {\it DS}\xspace; for rankings, a decision represents the degree of relevance of an entity $\hat{p}$ by placing it at a particular position in the ranking. \input{figs/merge-worldview} Note that the mappings between the spaces are prone to distortions, of which those that map from {\it CS}\xspace to either {\it OS}\xspace or {\it DS}\xspace are by definition unobservable. Because the properties of these mappings cannot be independently verified, a belief system has to be postulated.
\citet{friedler2016possibility} describe two extreme cases: WYSIWYG (``what you see is what you get'') and WAE (``we are all equal''). The former assumes that {\it CS}\xspace and {\it OS}\xspace are essentially the same and any distortion between the two is at most $\epsilon$. The latter assumes that any differences between the utility distributions of different groups are due to an erroneous or biased observation process $g$. In our college admissions example this would mean that any differences in the GPA or IQ distributions across different groups are solely caused by biased school systems and IQ tests. It is also assumed that $g$ shows different biases across groups, to which the authors refer as \emph{group skew}. The authors further define different terms from the Fairness, Accountability, Transparency, and Ethics (FATE) literature in terms of the underlying group skew. Their \emph{fairness} definition is inspired by~\citet{dwork2012fairness} and says that items that are close in construct space shall also be close in decision space, which is widely known as individual fairness. Group fairness, however, is defined indirectly through the terms \emph{direct discrimination} and \emph{non-discrimination}. Direct discrimination is absent if the group skew of a mapping between $OS$ and $DS$ is less than $\epsilon$. Non-discrimination is present if the group skew of a mapping between $CS$ and $DS$ is less than $\epsilon$. Note that the last definition requires a choice of worldview beforehand in order to be evaluated. If WYSIWYG is chosen, group fairness holds as soon as there is no direct discrimination, because $CS \approx OS$. We will classify the investigated algorithms in terms of which worldview they choose and which of the three terms (fairness, direct discrimination, non-discrimination) they aim to optimize.
When categorizing surveyed methods with respect to worldview, we consider whether their fairness objective aims for equality of outcome or equality of treatment. If the goal of a method is to achieve equality of outcome, and if it is asserted that {\it OS}\xspace is not trustworthy because of biased or erroneous distortion $g$ between {\it CS}\xspace and {\it OS}\xspace, then we consider this method to fall under the WAE worldview. If, on the other hand, the goal is to achieve equality of treatment and it is asserted that the mapping between {\it CS}\xspace and {\it OS}\xspace shows low distortion, then the method falls under the WYSIWYG worldview. \paragraph{Equality of Opportunity} \citet{heidari2019moral} show an application of equality of opportunity (EOP) frameworks to algorithmic fairness. EOP emphasizes the importance of personal qualifications, and seeks to minimize the impact of circumstances and arbitrary factors on individual outcomes. ``At a high level, in these models an individual's outcome/position is assumed to be affected by two main factors: his/her circumstance $c$ and effort $e$. Circumstance $c$ is meant to capture all factors that are deemed irrelevant, or for which the individual should not be held morally accountable; for instance $c$ could specify the socio-economic status they were born into. Effort $e$ captures all accountability factors---those that can morally justify inequality.''~\cite{heidari2019moral} Several conceptions of EOP have been proposed, differing in what features they consider to be relevant (or morally acceptable to use) and which they deem irrelevant. One of these is the \emph{libertarian} view that focuses on an individual's freedoms and liberties. According to this view, any information about an individual that was legally obtained can be used to make a decision. Under this view there is no attempt to equalize access to opportunity, and so it is not, properly speaking, an EOP framework~\cite{fairfriends}, despite being denoted as such by \citet{heidari2019moral}.
If a fairness definition assumes that all individuals are comparable in all dimensions as long as there are no gross violations of their privacy rights during the comparison, then we map this definition to the libertarian view. \emph{Formal EOP} considers a competition to be fair when candidates are evaluated on the basis of their relevant qualifications, and the most qualified candidate wins. This view rejects any qualifications that are irrelevant, such as hereditary privileges or social status, but it makes no attempt to correct for arbitrary privileges and disadvantages leading up to the competition that can lead to disparities between people's qualifications at the time of competition. Formal EOP is typically understood in the algorithmic fairness literature as fairness-through-blindness --- disallowing the direct impact from sensitive attributes (e.g.,\xspace gender and race) on the outcome but allowing them to impact the outcome through proxies. Limiting formal EOP to fairness through blindness has been challenged in recent work by~\citet{fairfriends}, who argue for a broader interpretation: ``For example, think of the SAT as a predictor of college success: when students can afford to do a lot of test prep, scores are an inflated reflection of students' college potential. When students don't have access to test prep, the SAT underestimates students' college potential. The SAT systematically overestimates more privileged students, while systematically underestimating less privileged students. The test's validity as a predictor of college potential varies across groups. That's also a violation of formal EOP. After all, in the college admissions contest, applicants should only be judged by 'college-relevant' qualifications--but this test's accuracy as a yardstick for college potential varies with students' irrelevant privilege.'' Despite this recent development, we will only map fairness-through-blindness definitions to formal EOP in this survey.
\emph{Substantive EOP} views, including Rawlsian and luck-egalitarian, assume that individuals are only comparable in the dimension of their \emph{intrinsic} effort that is not affected directly or indirectly by their sensitive attributes (e.g.,\xspace gender and race). In other words, when assigning opportunities, the comparison across individuals shall be conditioned on their effort. While Rawlsian EOP assumes that this effort itself is not affected by sensitive attributes, luck-egalitarian EOP states that the effort can also be affected by sensitive attributes. We map a fairness definition to Rawlsian EOP when its implied comparison across individuals is conditioned on a dimension of effort that is independent of sensitive attributes. We map a fairness definition to luck-egalitarian EOP when its implied comparison across individuals is conditioned on their circumstances (i.e.,\xspace conditioned on effort, where effort is in turn conditioned on circumstances). We will classify surveyed approaches with respect to the EOP framework based on how their fairness definition compares individuals according to some qualifications (e.g.,\xspace test scores, credit amount, and number of arrests). However, such a mapping is elusive if a paper does not clearly state its underlying concept of an individual's effort. Note also that we explicitly map the \textit{fairness definitions}, not the overall approaches. This is because many methods \textit{combine} a fairness and a utility objective into a single optimization problem and, by doing so, lose a clear association with a particular framework. As a result, many of the methods we survey fall between the WAE and WYSIWYG worldviews, and do not cleanly map to a single EOP category. Some are even designed to allow a continuous shift between the frameworks, by providing a tuning parameter~\cite{zehlike2017matching}. \paragraph{Worldviews vs.
Equality of Opportunity} There is an ambiguity between the worldviews~\cite{friedler2016possibility} and the EOP\xspace~\cite{heidari2019moral} classification frameworks: It is not clear at what point in time the features from construct space are to be considered, and therefore it is not clear whether WAE relates to Rawlsian EOP\xspace or luck-egalitarian EOP\xspace. If we interpret effort as an absolute term, then construct space features (e.g.,\xspace a student's grit) have to be considered by the time a decision is made. In this case WAE relates to Rawlsian EOP\xspace. If, instead, we interpret effort as a relative term and therefore allow for the fact that it might be shaped by our circumstances (e.g.,\xspace a student's grit may flourish when they grow up in a household where education is deemed important), we should consider features in {\it CS}\xspace much earlier, probably at the time of birth. In this case WAE relates to luck-egalitarian EOP\xspace. The definitions of construct space and WAE in~\citet{friedler2016possibility} suggest the former case; however, the authors do not explicitly state this, and, while they discuss the issue of when to consider construct space features, they do not state any assumption for their own work. We show in our method analysis that different methods are not free of this ambiguity, and we will work out which algorithms assume WAE as Rawlsian EOP\xspace and which assume WAE as luck-egalitarian EOP\xspace, whenever possible. We believe that more research and discussion is needed to resolve the ambiguity of the WAE worldview.
\paragraph*{Intersectional discrimination} The concept of intersectional discrimination~\cite{crenshaw1990mapping, makkonen2002multiple} holds that individuals who belong to several protected groups simultaneously (e.g.,\xspace Black women) experience stronger discrimination compared to individuals who belong to a single protected group (e.g.,\xspace White women or Black men), and that this disadvantage compounds more than additively. This effect has been demonstrated by numerous case studies, and by theoretical and empirical work~\cite{collins2002black,shields2008gender,d2020data,noble2018algorithms}. The most immediate interpretation for ranking is that, if fairness is taken to mean proportional representation among the top-$k$, then it is possible to achieve proportionality for each gender subgroup (e.g.,\xspace men and women) and for each racial subgroup (e.g.,\xspace Black and White), while still having inadequate representation for a subgroup defined by the intersection of both attributes (e.g.,\xspace Black women). Intersectional concerns also arise in more subtle ways. For example,~\citet{DBLP:conf/ijcai/YangGS19} observed that when representation constraints are stated on individual attributes, like race and gender, and when the goal is to maximize score-based utility subject to these constraints, then a particular kind of unfairness can arise, namely, utility loss can be particularly severe in historically disadvantaged intersectional groups. When discussing specific technical methods, we will speak to whether they consider intersectional discrimination and, if so, which specific concerns they aim to address. \subsection{Mitigation method} \label{sec:frame:mit_where} \input{figs/merge-flowchart} Score-based and supervised-learning-based rankers use different types of bias mitigation methods.
In score-based ranking, we categorize mitigation methods into those that intervene on the score distribution, or on the scoring function, or on the ranked outcome, as illustrated in Figure~\ref{fig:db-flowchart}. Methods that \emph{intervene on the score distribution} aim to mitigate disparities in candidate scores, either before these candidates are processed by an algorithmic ranker or during ranking. Methods that \emph{intervene on the scoring function} identify a function that is similar to the input function but that produces a ranked outcome that meets the specified fairness criteria. Methods that \emph{intervene on the ranked outcome} impose constraints to require a specific level of diversity or representation among the top-$k$ as a set, or in every prefix of the top-$k$. In supervised learning, we categorize mitigation methods into pre-processing, in-processing, and post-processing, illustrated in Figure~\ref{fig:ir-flowchart}. \emph{Pre-processing} methods seek to mitigate discriminatory bias in training data, and have the advantage of early intervention on pre-existing bias. \emph{In-processing} methods aim to learn a bias-free model. Finally, \emph{post-processing} methods re-rank candidates in the output subject to given fairness constraints~\cite{hajian2016algorithmic}. The advantage of post-processing methods in supervised learning is that they provide a guaranteed share of visibility for protected groups. In contrast, in-processing methods only consider fairness at training time and make no guarantees about fairness on the test set. However, post-processing methods may be subject to legal challenges because of due process concerns that may make it illegal to intervene at the decision stage (e.g.,\xspace Ricci v. DeStefano~\cite{ricci}).
Thus, like all technical choices, the choice of whether to use a pre-, in-, or post-processing fairness-enhancing intervention is not purely technical, but must also consider the social and legal context of use of the algorithmic ranker. \input{021-datasets} \section{Datasets} \label{sec:datasets} Before diving into a description of the fair ranking methods, we present the experimental datasets used by them. We summarize the datasets in Table~\ref{tab:datasets}, where we highlight the following aspects: size (usually the number of candidates), sensitive attributes, scoring attributes, and the surveyed papers that use this dataset in their evaluation. We then briefly describe each dataset, and refer the reader to the description of each method in Sections~\ref{sec:fair_db}-\ref{sec:fair_recsys} for details about that dataset's use. All datasets are publicly available under the referenced links unless otherwise indicated. The papers surveyed here rarely substantiate their choice of an experimental dataset, other than by the fact that the dataset was available, and that items in it have scores on which to rank. Both of these reasons can be seen as purely technical (or even syntactic) rather than conceptual. Unfortunately, little explicit attention has been paid to explaining whether a particular dataset was collected with a ranking task in mind, and \emph{why} it is deemed appropriate for the specific fairness definition, that is, whether and to what extent the task for which the dataset was collected or can plausibly be used exhibits unfairness of the kind that the proposed fairness definition is designed to address. We see this as an important limitation of empirical studies in fairness in ranking and, more generally, in algorithmic fairness research, and posit that the use of a dataset must be explicitly justified.
\paragraph{AirBnB~\cite{AirBnBData}} This dataset consists of 10,201 house listings from three major cities: Hong Kong (4,529 items), Boston (3,944 items), and Geneva (1,728 items). The gender of the hosts is used as the sensitive attribute, and the ranking score is computed as the ratio of the rating and the price. \paragraph{COMPAS (Correctional Offender Management Profiling for Alternative Sanctions)~\cite{COMPASData}} This dataset is derived from a recidivism risk assessment tool called COMPAS. The dataset contains the COMPAS scores from the Broward County Sheriff's Office in Florida in 2013 and 2014, and the profile of each person's criminal history collected by ProPublica~\cite{angwin2016machine}. In total there are 7,214 data points, with sensitive attributes gender and race. \paragraph{CS department rankings~\cite{CSData}} This dataset contains information about 51 computer science departments in the U.S. The methods in \cite{DBLP:conf/ijcai/YangGS19,yang2020causal} use the number of publications as the ranking score. Two categorical attributes are treated as sensitive: department size (with values ``large'' and ``small'') and geographic area (with values ``North East'', ``West'', ``Middle West'', ``South Center'', and ``South Atlantic''). \paragraph{DOT (Department of Transportation)~\cite{DOTData}} This dataset consists of about 1.3 million records of flights conducted by 14 U.S. airlines in the first three months of 2016. The dataset was collected by~\citet{asudeh2019designing} from the flight on-time database that is published by the U.S. Department of Transportation. Three scoring attributes are used in~\cite{asudeh2019designing}: departure delay, arrival delay, and taxi-in time. The name of the airline conducting the flight is treated as the sensitive attribute.
\paragraph{Engineering students~\cite{EngineeringData}} This dataset contains the results of a Chilean university admissions test taken by applicants to a large engineering school in five consecutive years. The task in~\cite{zehlike2018reducing} is to predict the students' academic performance after the first year based on their admissions test results and school grades. The sensitive attributes are gender and whether the applicants graduated from a private or public high school. This dataset is only accessible upon request; see the referenced link for details. \paragraph{Forbes richest Americans~\cite{ForbesRichesData}} This dataset consists of 400 individuals from the 2016 Forbes US Richest list\footnote{\url{https://www.forbes.com/forbes-400/list/}}, ranked by their net worth. Gender is the sensitive attribute, with 27 female vs. 373 male individuals in the dataset. \paragraph{German credit~\cite{GermanCreditData}} This dataset, hosted by the UCI machine learning repository~\cite{lichman_2013_uci}, contains financial information of 1,000 individuals, and is associated with a binary classification task that predicts whether an individual's credit is good or bad. The sensitive attributes are gender and age, where age is categorized into younger or older based on a threshold (25 or 35 years old is variably used as the threshold). Attributes credit amount and duration (how long an individual has had a line of credit) have been used as scoring attributes in fair ranking papers. \paragraph{IIT-JEE (The Joint Entrance Exam of Indian Institutes of Technology)~\cite{IITJEEData}} This dataset consists of scores of 384,977 students in the Mathematics, Physics, and Chemistry sections of IIT-JEE 2009, along with their gender, birth category (see~\cite{DBLP:journals/corr/abs-1904-06698}), disability status, and zip code.
The students are scored on a scale from $-35$ to $+160$ points in all three sections, with an average total score of $+28.36$, a maximum score of $+424$, and a minimum score of $-86$. \paragraph{LSAC~\cite{LSACData}} This dataset consists of U.S. national longitudinal bar exam passage data gathered from the class that started law school in Fall 1991. Data is provided by the students, their law schools, and state boards of bar examiners over a 5-year period~\cite{wightman1998lsac}. The dataset consists of 21,791 students, with the sensitive attributes sex and race. Rankings are produced based on LSAT scores. \paragraph{MEPS (Medical Expenditure Panel Survey)~\cite{MEPSData}} This dataset consists of 15,675 people and information regarding their health expenditures~\cite{cohen2009medical,coston2019fair}. The sensitive attributes are gender, race, and age of each individual, where age is categorized into younger or older based on a threshold (35 years old) in~\cite{DBLP:conf/ijcai/YangGS19,yang2020causal}. The ranking score is based on utilization, defined by the IBM AI Fairness 360 toolkit~\cite{bellamy2018ai} as the total number of trips requiring medical care. Utilization is computed as the sum of the number of office-based visits, the number of outpatient visits, the number of ER visits, the number of inpatient nights, and the number of home health visits. \paragraph{NASA astronauts~\cite{NASAData}} This dataset consists of 357 astronauts with their demographic information. The method in~\cite{DBLP:conf/edbt/StoyanovichYJ18} ranks this dataset by the number of space flight hours, and assigns individuals to categories based on their undergraduate major, treating it as the sensitive attribute.
A total of 83 majors are represented in the dataset; the 9 most frequent are assigned to their own categories --- Physics (35), Aerospace Engineering (33), Mechanical Engineering (30), etc. --- and the remaining 141 individuals are combined into the category ``Other'', resulting in 10 groups. \paragraph{SAT~\cite{SATData}} This dataset contains about 1.6 million data points, in which the score column corresponds to an individual's results in the US Scholastic Assessment Test (SAT) in 2014~\cite{sat_2014}. The sensitive attribute is gender. \paragraph{SSORC~\cite{SSORCData}} The Semantic Scholar Open Research Corpus contains the meta-data of 46,947,044 published research papers in computer science, neuroscience, and biomedicine from 1936 to 2019 on Semantic Scholar. The meta-data for each paper includes the list of authors of the paper, the year of publication, the list of papers citing it, and the journal of publication, along with other details. The sensitive attribute is the gender of the authors, collected by~\citet{celis2020interventions}. The ranking score is the number of citations of each paper. \paragraph{StackExchange~\cite{StackData}} This dataset contains a query log and a document collection using the data from the Stack Exchange Q\&A community (dump as of 13-06-2016)~\cite{biega2017privacy}. It consists of about 6 million posts of type ``Question'' or ``Answer'' in 142 diverse subforums (e.g.,\xspace Astronomy, Security, Christianity, Politics, Parenting, and Travel). The questions are translated into about 253,000 queries, and the respective answers serve as the documents for the queries. The sensitive attribute is the query domain. \paragraph{W3C experts~\cite{W3CData}} The task behind this dataset corresponds to a search for experts on a given topic based on a corpus of e-mails written by possible candidates. The sensitive attribute is the gender of the expert.
The experimental setup in~\cite{zehlike2018reducing} investigates situations in which bias is unrelated to relevance: expertise has been judged correctly, but ties have been broken in favor of the privileged group (i.e.,\xspace all male experts are followed by all female experts, followed by all male non-experts, followed finally by all female non-experts). \paragraph{Xing~\cite{XINGData}} This dataset was collected by~\citet{zehlike2017fa} from a German online job market website\footnote{\url{https://www.xing.com}}. The authors collected the top-40 profiles returned for 54 queries, and computed an ad-hoc score based on educational features, job experience, and profile popularity; items are ranked based on this score. The sensitive attribute is gender, which was inferred based on the first name associated with the profile and the profile picture, when available. \paragraph{Yahoo! LTR~\cite{YahooData}} This dataset consists of 19,944 training queries and 6,983 test set queries. Each query has a variable-sized candidate set of documents that needs to be ranked. There are 473,134 training and 165,660 test documents. The query-document pairs are represented by a 700-dimensional feature vector. For supervision, each query-document pair is assigned an integer relevance judgment from 0 (bad) to 4 (perfect). The dataset is used to evaluate the effectiveness of Learning to Rank methods in~\cite{singh2019policy}; thus, no sensitive attribute is specified. \paragraph{Yow news~\cite{YowData}} This dataset contains explicit and implicit feedback from a set of users for news articles in the ``people'' topic produced by Yow~\cite{zhang2005bayesian}. The ranking score is the explicitly given relevance field. The source of news is treated as the sensitive attribute. \section{Score-based Ranking} \label{sec:fair_db} \begin{table*}[ht!]
\small \begin{tabular}{p{0.2\textwidth}p{0.2\textwidth}p{0.12\textwidth}p{0.1\textwidth}p{0.11\textwidth}l} \toprule \textbf{Method} & \textbf{Group structure} & \textbf{Bias} & \textbf{Worldview} & \textbf{EOP} & \textbf{Intersectional} \\ \midrule \makecell[l]{Rank-aware proportional\\ representation~\cite{yang2017measuring}} & one binary sensitive attr. & pre-existing & WAE & luck-egalitarian & no\\ \rowcolor{Lightgray} \makecell[l]{Constrained ranking\\ maximization~\cite{celis2018ranking}} & \makecell[l]{multiple sensitive attrs.;\\ multinary; \\handled independently} & pre-existing & WAE & Rawlsian & no \\ \makecell[l]{Balancing utility loss\\ across groups~\cite{DBLP:conf/ijcai/YangGS19}} & \makecell[l]{multiple sensitive attrs.;\\ multinary; \\handled independently} & \makecell[l]{pre-existing;\\ technical} & WAE & luck-egalitarian & yes \\ \rowcolor{Lightgray} \makecell[l]{Diverse $k$-choice\\ secretary~\cite{DBLP:conf/edbt/StoyanovichYJ18}} & one multinary sensitive attr. & pre-existing & WAE & luck-egalitarian & no \\ \makecell[l]{Utility of selection with\\ implicit bias~\cite{kleinberg_et_al:LIPIcs:2018:8323}} & one binary sensitive attr. 
&\makecell[l]{pre-existing;\\ implicit} & WAE & N/A & no\\ \rowcolor{Lightgray} \makecell[l]{Utility of ranking with\\ implicit bias~\cite{celis2020interventions}} & \makecell[l]{multiple sensitive attrs.;\\ multinary;\\ handled independently} & \makecell[l]{pre-existing;\\ implicit} & WAE & N/A & yes\\ \makecell[l]{Causal intersectionally\\ fair ranking~\cite{yang2020causal}} & \makecell[l]{multiple sensitive attrs.;\\ multinary;\\ handled independently} & pre-existing & WAE & \makecell[l]{substantive:\\ Rawlsian or\\ luck-egalitarian} & yes\\ \rowcolor{Lightgray} \makecell[l]{Designing fair ranking\\ functions~\cite{asudeh2019designing}} & any & pre-existing & any & any & yes\\ \bottomrule \end{tabular} \caption{Classification of score-based ranking methods according to the frameworks in Section~\ref{sec:02-four-frameworks}.} \label{tbl:method-summary_score} \end{table*} In this section we present several methods for fairness in score-based ranking. Table~\ref{tbl:method-summary_score} summarizes the methods presented in this section according to the frameworks of Section~\ref{sec:02-four-frameworks}. Recall that, in score-based ranking, we categorize mitigation methods into those that intervene on the ranking process, on the score distribution, or on the scoring function. In Section~\ref{sec:fair_db:prop}, we describe methods that \emph{intervene on the ranked outcome} by ensuring proportional representation across groups. Next, in Section~\ref{sec:fair_db:bounds}, we discuss several methods that formulate \emph{fairness} and \emph{coverage-based diversity} constraints by specifying bounds on the number of candidates from groups of interest to be present in prefixes of a ranked list. These methods also intervene on the ranked outcome. Then, in Section~\ref{sec:fair_db:latent}, we describe methods that \emph{intervene on the score distributions}.
Finally, in Section~\ref{sec:fair_db:geo}, we present a method that treats the fairness objective as a black-box and proposes a geometric interpretation of score-based ranking to reach the objective by \emph{intervening on the ranking function}. \subsection{Intervening on the Ranked Outcome: Rank-aware Proportional Representation} \label{sec:fair_db:prop} To the best of our knowledge,~\citet{yang2017measuring} were the first to formalize rank-aware fairness, under the assumption that the scores based on which the ranking is produced encode pre-existing bias. Consider a ranking in which candidates are assigned to one of two groups, $\group{1}$ or $\group{2}$, according to a single binary sensitive attribute (e.g.,\xspace binary gender), and with one of these groups, $\group{1}$, corresponding to the protected group (e.g.,\xspace the female gender). The fairness measures proposed in this paper are based on the following intuition: Because it is more beneficial for an item to be ranked higher, it is also more important to achieve proportional representation at higher ranks. The idea, then, is to take several well-known proportional representation measures and to make them \emph{rank-aware}, by placing them within a framework that applies position-based discounts. \spara{Fairness definition and problem formalization.} Recall from Section~\ref{sec:prelim} that position-based discounting is commonly used to quantify utility (Eq.~\ref{eq:disc_agg_utility}) or prediction accuracy (Eq.~\ref{eq:ndcg}) in a ranking. In a similar vein, the use of position-based discounting in \citet{yang2017measuring} is a natural way to make set-wise proportional representation requirements rank-aware.
Specifically, the idea is to consider a series of prefixes of a ranking $\boldsymbol{\tau}$, for $k=10,20,\dots$, to treat each top-$k$ prefix $\boldsymbol{\tau}_{1 \ldots k}$ as a set, to compute \emph{statistical parity} at top-$k$, and to compare that value to the proportion of the protected group in the entire ranking. (Naturally, perfect statistical parity is achieved when $k=n$.) The values computed at each cut-off point are summed up with a position-based discount. Based on this idea, the authors propose three fairness measures that differ in the specific interpretation of statistical parity: normalized discounted difference (\textsf{rND}\xspace), ratio (\textsf{rRD}\xspace), and KL-divergence (\textsf{rKL}\xspace). Normalized discounted difference (\textsf{rND}\xspace) (Equation~\ref{eq:nd}) computes the difference between the proportions of the protected group $\group{1}$ at the top-$k$ and in the overall population. Normalizer $Z$ is computed as the highest possible value of \textsf{rND}\xspace for the given number of items $n$ and protected group size $|\group{1}|$. \begin{equation} \textsf{rND}\xspace(\boldsymbol{\tau})=\frac{1}{Z} \sum_{k=10,20,...}^{n}{ \frac{1}{\log_{2}{k}} \left(\frac{|\boldsymbol{\tau}_{1\ldots k} \cap \group{1}|}{k} - \frac{|\group{1}|}{n} \right)} \label{eq:nd} \end{equation} Normalized discounted ratio (\textsf{rRD}\xspace) is defined analogously, as follows: \begin{equation} \textsf{rRD}\xspace(\boldsymbol{\tau})=\frac{1}{Z} \sum_{k=10,20,...}^{n}{ \frac{1}{\log_{2}{k}} \left(\frac{|\boldsymbol{\tau}_{1\ldots k} \cap \group{1}|}{|\boldsymbol{\tau}_{1\ldots k} \cap \group{2}|} - \frac{|\group{1}|}{|\group{2}|} \right)} \label{eq:rd} \end{equation} When either the numerator or the denominator of a term in Eq.~\ref{eq:rd} is 0, the value of the term is set to 0.
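As a concrete illustration, the \textsf{rND}\xspace computation can be sketched in a few lines of Python. This is our own sketch, not code from~\citet{yang2017measuring}; the function and variable names are ours, and the normalizer $Z$ is assumed to be precomputed.

```python
import math

def rnd(ranking, protected, z, step=10):
    """Normalized discounted difference (rND): at cut-offs
    k = step, 2*step, ..., n, accumulate the discounted gap between the
    protected group's share in the top-k and its share overall.
    `ranking` is a list of candidate ids, `protected` a set of ids, and
    `z` the precomputed normalizer (the highest attainable value for the
    given n and protected-group size)."""
    n = len(ranking)
    share_all = len(protected & set(ranking)) / n
    total = 0.0
    for k in range(step, n + 1, step):
        share_topk = sum(1 for c in ranking[:k] if c in protected) / k
        total += (share_topk - share_all) / math.log2(k)
    return total / z
```

\textsf{rRD}\xspace would be computed analogously, replacing the two shares with the in-prefix and overall ratios of $\group{1}$ to $\group{2}$ counts, and setting a term to 0 whenever its numerator or denominator vanishes.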
Finally, normalized discounted KL-divergence (\textsf{rKL}\xspace) uses Kullback-Leibler (KL) divergence to quantify the expectation of the logarithmic difference between two discrete probability distributions: $P_k$, which quantifies the proportions in which groups are represented at the top-$k$: \begin{equation} P_k = \left( \frac{|\boldsymbol{\tau}_{1\ldots k} \cap \group{1}|}{k}, \frac{|\boldsymbol{\tau}_{1\ldots k} \cap \group{2}|}{k} \right) \label{eq:kl_pk} \end{equation} and $Q$, which quantifies the proportions in which groups are represented in the overall ranking: \begin{equation} Q = \left( \frac{|\group{1}|}{n}, \frac{|\group{2}|}{n} \right) \label{eq:kl_q} \end{equation} KL-divergence between $P_k$ and $Q$, denoted $D_{KL}(P_k||Q)$, is computed at every cut-off point $k$, and position-based discounting is applied as the values are compounded, with normalizer $Z$ defined analogously as for \textsf{rND}\xspace: \begin{equation} \textsf{rKL}\xspace (\boldsymbol{\tau})=\frac{1}{Z} \sum_{k=10,20,...}^{n}{ \frac{1}{\log_{2}{k}} D_{KL}(P_k||Q)} \label{eq:kl} \end{equation} Note that, unlike \textsf{rND}\xspace and \textsf{rRD}\xspace, which are limited to a binary sensitive attribute, \textsf{rKL}\xspace can handle a multinary sensitive attribute and so is more flexible. \spara{Experiments and observations.} The authors evaluate the empirical behavior of the proposed fairness measures using real and synthetic datasets. Real datasets used are COMPAS~\cite{COMPASData} and German Credit~\cite{GermanCreditData}, see Section~\ref{sec:datasets} for details. Synthetic datasets are generated using an intuitive data generation procedure described below. This procedure was later used in the work of~\citet{zehlike2017fa} and~\citet{DBLP:conf/kdd/WuZW18}, and is of independent interest.
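The \textsf{rKL}\xspace measure of Eq.~\ref{eq:kl} can be sketched as follows. Again, this is our own illustrative code; we take base-2 logarithms inside the KL-divergence to match the discount factor, although the choice of base is an assumption on our part.

```python
import math

def kl_divergence(p, q):
    """D_KL(P || Q) for discrete distributions given as aligned lists;
    assumes q has full support, and treats 0 * log 0 as 0."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def rkl(ranking, group_of, z, step=10):
    """Normalized discounted KL-divergence (rKL). `group_of` maps each
    candidate id to its group label; unlike rND and rRD, any number of
    groups is supported."""
    n = len(ranking)
    labels = sorted({group_of[c] for c in ranking})
    # Q: group proportions in the overall ranking.
    q = [sum(1 for c in ranking if group_of[c] == g) / n for g in labels]
    total = 0.0
    for k in range(step, n + 1, step):
        # P_k: group proportions in the top-k prefix.
        p = [sum(1 for c in ranking[:k] if group_of[c] == g) / k for g in labels]
        total += kl_divergence(p, q) / math.log2(k)
    return total / z
```

A ranking that mixes groups in their overall proportions at every cut-off yields $\textsf{rKL}\xspace = 0$, while concentrating one group at the top drives the value up.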
Recall that $\group{1}$ represents the protected group and $\group{2}$ represents the privileged group, and suppose for simplicity that each group constitutes one half of the candidates $\candidateSet{}$. An example is given in Figure~\ref{fig:exm_measure_fair_rank_data}, in which $\candidateSet{}$ contains 8 candidates, 4 female ($\group{1}$) and 4 male ($\group{2}$). The data generation procedure, presented in Algorithm~\ref{alg:rankgen}, takes two inputs: a ranking $\boldsymbol{\tau}$ of $\candidateSet{}$ and a ``fairness probability'' $p$, and it produces a ranking $\Tilde{\boldsymbol{\tau}}$. The input ranking $\boldsymbol{\tau}$ is assumed to be generated by the vendor according to their usual process (e.g.,\xspace based on candidate scores, as in Figure~\ref{fig:exm_measure_fair_rank_input}). Algorithm~\ref{alg:rankgen} splits up $\boldsymbol{\tau}$ into two rankings: $\boldsymbol{\tau}_1$ of candidates in $\group{1}$ and $\boldsymbol{\tau}_2$ of candidates in $\group{2}$. It then repeatedly considers pairs of candidates at the top of the lists, $\boldsymbol{\tau}_1(1)$ and $\boldsymbol{\tau}_2(1)$, and decides which of these should be ranked above the other, selecting $\boldsymbol{\tau}_1(1)$ with probability $p$ and $\boldsymbol{\tau}_2(1)$ with probability $1-p$, and appending the selected candidate to $\Tilde{\boldsymbol{\tau}}$. The parameter $p$ specifies the relative preference between candidates in $\group{1}$ and in $\group{2}$. When $p = 0.5$, groups are mixed in approximately equal proportion for as long as there are items in both groups. This is illustrated in Figure~\ref{fig:fig:alg1:gender_05} for the sensitive attribute $A_1$ (gender) and in Figure~\ref{fig:alg1:race_05} for the sensitive attribute $A_2$ (race). When $p > 0.5$, members of the protected group $\group{1}$ are preferred, and when $p < 0.5$, members of the privileged group $\group{2}$ are preferred.
In extreme cases, when $p=0$, all (or most) members of $\group{2}$ will be placed before any members of $\group{1}$, as shown in Figure~\ref{fig:exm_male_then_female} for the sensitive attribute $A_1$ (gender). Note that candidates within a group always remain in the same relative order in $\Tilde{\boldsymbol{\tau}}$ as in $\boldsymbol{\tau}$ (that is, there is \emph{no reordering within a group}), but there may be \emph{reordering between groups}. \begin{algorithm}[ht!] \caption{FairGen} \begin{algorithmic}[1] \REQUIRE Ranking $\boldsymbol{\tau}$, fairness probability $p$.\\ \COMMENT {Initialize the output ranking $\Tilde{\boldsymbol{\tau}}$.}\\ \STATE $\Tilde{\boldsymbol{\tau}} \leftarrow \emptyset$\\ \STATE $\boldsymbol{\tau}_{1} = \boldsymbol{\tau} \cap \group{1}$\\ \STATE $\boldsymbol{\tau}_{2} = \boldsymbol{\tau} \cap \group{2}$\\ \WHILE {$(\boldsymbol{\tau}_{1} \neq \emptyset) \wedge (\boldsymbol{\tau}_{2} \neq \emptyset)$} \STATE $r=random([0, 1])$\\ \COMMENT {Append the next selected item to $\Tilde{\boldsymbol{\tau}}$}\\ \IF {$r<p$} \STATE $\Tilde{\boldsymbol{\tau}} \leftarrow pop(\boldsymbol{\tau}_{1})$\\ \ELSE \STATE $\Tilde{\boldsymbol{\tau}} \leftarrow pop(\boldsymbol{\tau}_{2})$\\ \ENDIF \ENDWHILE\\ \COMMENT {If any items remain in $\boldsymbol{\tau}_{1}$ or $\boldsymbol{\tau}_{2}$, append them to $\Tilde{\boldsymbol{\tau}}$} \STATE $\Tilde{\boldsymbol{\tau}} \leftarrow \boldsymbol{\tau}_{1}$\\ \STATE $\Tilde{\boldsymbol{\tau}} \leftarrow \boldsymbol{\tau}_{2}$\\ \RETURN $\Tilde{\boldsymbol{\tau}}$\\ \end{algorithmic} \label{alg:rankgen} \end{algorithm} \begin{figure}[t!] 
\centering \small \setlength{\tabcolsep}{0.3em} \subfloat[] { \begin{tabular}{|c||c|c||c|} \hline \rowcolor[HTML]{C0C0C0} candidate & $A_1$ & $A_2$ & $Y$ \\ \hline \val{b} & \cellcolor[HTML]{CBCEFB} \val{male} & \cellcolor[HTML]{CBCEFB} \val{White} & 9 \\ \hline \val{c} & \cellcolor[HTML]{CBCEFB} \val{male} & \cellcolor[HTML]{FFCE93} \val{Black} & 8 \\ \hline \val{d} & \cellcolor[HTML]{FFCE93} \val{female} & \cellcolor[HTML]{CBCEFB} \val{White} & 7 \\ \hline \val{e} & \cellcolor[HTML]{CBCEFB} \val{male} & \cellcolor[HTML]{CBCEFB} \val{White} & 6 \\ \hline \val{f} & \cellcolor[HTML]{FFCE93} \val{female} & \cellcolor[HTML]{CBCEFB} \val{White} & 5 \\ \hline \val{k} & \cellcolor[HTML]{FFCE93} \val{female} & \cellcolor[HTML]{CBCEFB} \val{White} & 4 \\ \hline \val{l} & \cellcolor[HTML]{CBCEFB} \val{male} & \cellcolor[HTML]{CBCEFB} \val{White} & 3 \\ \hline \val{o} & \cellcolor[HTML]{FFCE93} \val{female} & \cellcolor[HTML]{FFCE93} \val{Black} & 2 \\ \hline \end{tabular} \label{fig:exm_measure_fair_rank_data} } \hspace{5mm} \subfloat[]{ \begin{tabular}{|c|} \hline \rowcolor[HTML]{C0C0C0} $\boldsymbol{\tau}$ \\ \hline \rowcolor[HTML]{CBCEFB} \val{b} \\ \hline \rowcolor[HTML]{CBCEFB} \val{c} \\ \hline \rowcolor[HTML]{FFCE93} \val{d} \\ \hline \rowcolor[HTML]{CBCEFB} \val{e} \\ \hline \rowcolor[HTML]{FFCE93} \val{f} \\ \hline \rowcolor[HTML]{FFCE93} \val{k} \\ \hline \rowcolor[HTML]{CBCEFB} \val{l} \\ \hline \rowcolor[HTML]{FFCE93} \val{o} \\ \hline \end{tabular} \label{fig:exm_measure_fair_rank_input} } \hspace{5mm} \subfloat[]{ \begin{tabular}{|c|} \hline \rowcolor[HTML]{C0C0C0} $\Tilde{\boldsymbol{\tau}}$ \\ \hline \rowcolor[HTML]{CBCEFB} \val{b} \\ \hline \rowcolor[HTML]{FFCE93} \val{d} \\ \hline \rowcolor[HTML]{CBCEFB} \val{c} \\ \hline \rowcolor[HTML]{FFCE93} \val{f} \\ \hline \rowcolor[HTML]{CBCEFB} \val{e} \\ \hline \rowcolor[HTML]{FFCE93} \val{k} \\ \hline \rowcolor[HTML]{CBCEFB} \val{l} \\ \hline \rowcolor[HTML]{FFCE93} \val{o} \\ \hline \end{tabular} 
\label{fig:fig:alg1:gender_05} } \hspace{5mm} \subfloat[]{ \begin{tabular}{|c|} \hline \rowcolor[HTML]{C0C0C0} $\Tilde{\boldsymbol{\tau}}$ \\ \hline \rowcolor[HTML]{CBCEFB} \val{b} \\ \hline \rowcolor[HTML]{CBCEFB} \val{c} \\ \hline \rowcolor[HTML]{CBCEFB} \val{e} \\ \hline \rowcolor[HTML]{CBCEFB} \val{l} \\ \hline \rowcolor[HTML]{FFCE93} \val{d} \\ \hline \rowcolor[HTML]{FFCE93} \val{f} \\ \hline \rowcolor[HTML]{FFCE93} \val{k} \\ \hline \rowcolor[HTML]{FFCE93} \val{o} \\ \hline \end{tabular} \label{fig:exm_male_then_female} } \hspace{5mm} \subfloat[]{ \begin{tabular}{|c|} \hline \rowcolor[HTML]{C0C0C0} $\Tilde{\boldsymbol{\tau}}$ \\ \hline \rowcolor[HTML]{CBCEFB} \val{b} \\ \hline \rowcolor[HTML]{FFCE93} \val{c} \\ \hline \rowcolor[HTML]{CBCEFB} \val{d} \\ \hline \rowcolor[HTML]{FFCE93} \val{o} \\ \hline \rowcolor[HTML]{CBCEFB} \val{e} \\ \hline \rowcolor[HTML]{CBCEFB} \val{f} \\ \hline \rowcolor[HTML]{CBCEFB} \val{k} \\ \hline \rowcolor[HTML]{CBCEFB} \val{l} \\ \hline \end{tabular} \label{fig:alg1:race_05} } \caption{{\bf (a)} A set of applicants for college admissions $\candidateSet{}$, with two binary sensitive attributes: $A_1$ (gender), with protected group $\group{\val{F}}=\{d,f,k,o\}$ and privileged group $\group{\val{M}}=\{b,c,e,l\}$; and $A_2$ (race), with protected group $\group{\val{B}}=\{c,o\}$ and privileged group $\group{\val{W}}=\{b,d,e,f,k,l\}$. Protected values of $A_1$ and $A_2$ are shown in orange, and privileged values---in blue. {\bf (b)} Ranking $\boldsymbol{\tau}$ sorts the applicants in descending order of their score $Y$, as shown in Figure~\ref{fig:exm_measure_fair_rank_input}, with male candidates appearing in higher proportion at the top ranks. 
{\bf (c)} Ranking $\Tilde{\boldsymbol{\tau}}$ for $A_1$ mixes candidates in approximately equal proportion by gender, with $p=0.5$ in Algorithm~\ref{alg:rankgen}, and is expected to achieve statistical parity for this attribute, since gender groups are represented in equal proportion in $\candidateSet{}$. {\bf (d)} Ranking $\Tilde{\boldsymbol{\tau}}$ for $A_1$, with $p=0$ in Algorithm~\ref{alg:rankgen}, places all, or most, male applicants above the female applicants. {\bf (e)} Ranking $\Tilde{\boldsymbol{\tau}}$ for $A_2$ (race), with $p=0.5$ in Algorithm~\ref{alg:rankgen}, is expected to achieve equal representation by race at top ranks, but not statistical parity, since $\candidateSet{}$ is not balanced by race.} \label{fig:exm_measure_fair_rank} \end{figure} The proposed fairness measures ---normalized discounted difference (\textsf{rND}\xspace), ratio (\textsf{rRD}\xspace), and KL-divergence (\textsf{rKL}\xspace)--- are evaluated on rankings produced by Algorithm~\ref{alg:rankgen} with a range of values for $p$ and with different relative proportions of $\group{1}$ and $\group{2}$ in $\candidateSet{}$. The authors conclude that \textsf{rKL}\xspace is the most promising measure both because it is smooth and because it naturally generalizes to multinary sensitive attributes. This paper also proposes a bias mitigation methodology, inspired by~\citet{DBLP:conf/icml/ZemelWSPD13}, that integrates fairness objectives into an optimization framework that balances fairness against utility, with an experimental evaluation on the German credit dataset~\cite{GermanCreditData} (see details in Sec.~\ref{sec:datasets}). \spara{Insights.} The fairness definitions of this paper aim to address pre-existing bias, per classification in Section~\ref{sec:frame:bias}. Fairness is interpreted as equality of outcomes, suggesting an underlying assumption of WAE, per classification in Section~\ref{sec:frame:mit_goal}.
Assuming the existence of indirect discrimination in candidate scores (i.e.,\xspace that the observation process between construct space {\it CS}\xspace and observable space {\it OS}\xspace is biased), the paper aims to ensure a similar representation of groups in the ranked outcomes. The approach is designed around a relative view of effort: candidates are ranked according to score within a demographic group, and a ranked outcome is considered fair if the groups are mixed in equal proportion when the input is balanced, as in Figure~\ref{fig:exm_measure_fair_rank_data} by $A_1$ (gender), or, more generally, when statistical parity is achieved at high ranks. This clearly links to luck-egalitarian EOP, per classification in Section~\ref{sec:frame:mit_goal}. \subsection{Intervening on the Ranked Outcome: Diversity Constraints} \label{sec:fair_db:bounds} In Section~\ref{sec:fair_db:prop} we discussed how fairness measures that are based on (set-wise) proportional representation can be made rank-aware. The methods described in this section start with the observation that if the total number of candidates in $\candidateSet{}$, and the number of candidates in each demographic group of interest, are available as input (i.e.,\xspace that these quantities are known a priori or can be estimated), then any measure that aims to equalize or bound the difference in proportions can be equivalently reformulated with the help of counts. Specifically, proportional representation constraints and coverage-based diversity constraints~\cite{drosou2017diversity} for \emph{set selection tasks} can be expressed by specifying a lower-bound $L_k^{\group{}}$ and an upper-bound $U_k^{\group{}}$ on the representation of group $\group{} \subseteq \candidateSet{}$ among the top-$k$ set of a ranking. Such constraints can be formulated for one or several demographic groups of interest, and also for their intersections, and a score-based ranker can then optimize utility under such constraints.
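Checking whether a selected top-$k$ set meets such count-based bounds is straightforward; the sketch below is our own, with hypothetical candidates and group labels, and allows a candidate to belong to several groups (one per sensitive attribute).

```python
from collections import Counter

def satisfies_bounds(top_k, group_of, lower, upper):
    """Check count-based representation constraints on a selected set:
    lower[g] <= |top_k intersect g| <= upper[g] for every group g that
    appears in the bounds. `group_of` maps a candidate to the collection
    of groups it belongs to (e.g., one group per sensitive attribute)."""
    counts = Counter()
    for c in top_k:
        for g in group_of(c):
            counts[g] += 1
    meets_lower = all(counts[g] >= lo for g, lo in lower.items())
    meets_upper = all(counts[g] <= hi for g, hi in upper.items())
    return meets_lower and meets_upper
```

The harder algorithmic question, taken up by the methods below, is not checking the bounds but maximizing utility subject to them.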
Generalizing beyond set selection, constraints $L_p^{\group{}}$ and $U_p^{\group{}}$ can be specified over every prefix of the top-$k$ of a ranked list, with $p \in [k]$, or, more practically, at some specific cut-off points within the top-$k$. Similarly to the methods of Section~\ref{sec:fair_db:prop}, the methods described in this section are designed to enforce fairness and diversity in the sense of representation. In contrast to the methods of Section~\ref{sec:fair_db:prop}, these methods are designed to handle multiple sensitive attributes simultaneously---individually or in combination. \subsubsection{\citet{celis2018ranking}} \spara{Fairness definition and problem formalization.} The authors formulate the \emph{constrained ranking maximization problem}: Consider a set of $n$ candidates $\candidateSet{}$, and the integer $k \ll n$, along with 1) the utility of placing a candidate in a particular position in the ranking, 2) the collection of sensitive attributes (e.g.,\xspace gender, race, or disability status) that map candidates to groups $\groupSet$, and 3) a collection of lower-bound constraints $L_p^{\group{}}$ and upper-bound constraints $U_p^{\group{}}$ that, for each prefix $p \in [k]$ and for each group $\group{} \in \groupSet$, bound the number of candidates from that group that are allowed to appear in the top-$p$ positions of the ranking. The goal is to output a ranking that maximizes overall utility with respect to the original utility metric, while respecting the constraints. Note that this problem formulation has the flexibility to explicitly associate a utility with an assignment of candidate $a \in \candidateSet{}$ to rank position $j \in [k]$, and may already incorporate position-based discounting (per Eq.~\ref{eq:disc_agg_utility}). However, for consistency and ease of exposition, we will assume that utility score $Y$ is fixed per candidate.
An example of the constrained ranking maximization problem is given in Figure~\ref{fig:crm}, where the goal is to select $k=4$ candidates, with at least two of each gender ($L_4^{\val{M}}=2$, $L_4^{\val{F}}=2$) and at least one of each race ($L_4^{\val{W}}=1$, $L_4^{\val{B}}=1$, $L_4^{\val{A}}=1$) among the top-$k$, and with no further constraints on the prefixes of the top-$k$. (For convenience, we refer to each group by the first letter of the attribute value that defines it, such as \val{M} for \val{male} and \val{A} for \val{Asian}.) Ranking $\boldsymbol{\tau}_1$ in Figure~\ref{fig:crm} is a ranked outcome of the top-$4$ candidates selected based on utility $Y$: two of them are male and two are female, and all are White. Applying diversity constraints on gender and race yields $\boldsymbol{\tau}_2$, a ranking of the top-$4$ in Figure~\ref{fig:crm}, selecting the top-scoring White male candidates $a$ and $b$, and two lower-scoring female candidates, $g$ and $k$. Computing total utility as the sum of scores of selected candidates (for simplicity), we observe that $U(\boldsymbol{\tau}_1)=68$ and $U(\boldsymbol{\tau}_2)=53$ in this example. Note that the example in Figure~\ref{fig:crm} is deliberately constructed to highlight disparities in scores due to pre-existing bias on gender and race: all male candidates are ranked above all female candidates of a given race, and all White candidates are ranked above all Black candidates, who are in turn ranked above all Asian candidates. For this reason, imposing diversity constraints leads to a substantial drop in score-utility of $\boldsymbol{\tau}_2$ in Figure~\ref{fig:crm}. \input{figs/merge-celis-secretary} In the example we constructed, diversity constraints are satisfiable. However, as was shown by~\citet{celis2018ranking}, the constrained ranking maximization problem can be seen to generalize various NP-hard problems such as independent set, hypergraph matching, and set packing, and so is hard in the general case.
It turns out that even checking if there is a complete feasible ranking is NP-hard. The authors show that a special case of the problem, in which each candidate is assigned to (at most) one group, and so the assignment induces a partitioning on $\candidateSet{}$, can be solved in polynomial time. In this case, diversity constraints can only be specified with respect to a single sensitive attribute, which may be binary or multinary, and so can represent multiple sensitive attributes in combination (see discussion on group structure in Section~\ref{sec:frame:group}). Recall that the problem formulation allows associating a utility with an assignment of candidate $a \in \candidateSet{}$ to rank position $j \in [k]$. While the nature of these assignments can in principle be arbitrary, many reasonable utility metrics, including NDCG, Bradley-Terry~\cite{bradley1952rank} or Spearman's rho~\cite{spearman1961proof}, are non-increasing with increasing rank position, and with decreasing utility score $Y$, which is intuitively interpreted to mean that, if $Y_a \geq Y_b$, then placing $a$ above $b$ cannot decrease the utility of the overall ranking. Such metrics are said to be monotone and to satisfy the Monge property. For this family of utility metrics, the authors propose an exact dynamic programming algorithm that solves the constrained ranking optimization problem in time polynomial in the number of candidates $n$ and the size of the selected set $k$, and exponential in the number of possible assignments of candidates to groups (typically the product of domain cardinalities of the sensitive attributes, $card(A_1) \times card(A_2) = 6$ in our example in Figure~\ref{fig:crm}). The authors also propose approximation algorithms that allow violations of diversity constraints, and study the quality of these approximations.
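The partitioned, set-level special case admits a simple greedy illustration: satisfy each group's lower bound with its highest-scoring members, then fill the remaining slots by score subject to upper bounds. The sketch below is our own and does not reproduce the authors' dynamic program, which additionally handles per-prefix constraints and position-dependent utilities.

```python
from collections import Counter

def constrained_topk(candidates, score, group, k, lower, upper):
    """Greedy sketch for the special case where each candidate belongs to
    exactly one group and the bounds apply to the top-k as a set.
    First reserve each group's lower bound for its highest-scoring
    members, then fill the remaining slots greedily, skipping candidates
    whose group has reached its upper bound."""
    by_group = {}
    for c in candidates:
        by_group.setdefault(group[c], []).append(c)
    for members in by_group.values():
        members.sort(key=score, reverse=True)
    selected = []
    for g, lo in lower.items():
        selected.extend(by_group.get(g, [])[:lo])
    counts = Counter(group[c] for c in selected)
    for c in sorted(candidates, key=score, reverse=True):
        if len(selected) == k:
            break
        if c in selected:
            continue
        if counts[group[c]] < upper.get(group[c], k):
            selected.append(c)
            counts[group[c]] += 1
    return sorted(selected, key=score, reverse=True)
```

Because members are taken within each group in score order, this greedy always selects a group's best candidates first, which is what makes the partitioned set-selection case tractable.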
\spara{Insights.} The focus of this work is on the formal properties of the constrained ranking maximization problem, including its hardness and approximability under different assumptions about the sensitive attributes, the diversity constraints, and the properties of the utility metric. The paper does not include an experimental evaluation. The paper states that ``[...] left unchecked, the output of ranking algorithms can result in decreased diversity in the type of content presented, promote stereotypes, and polarize opinions.'' The goal of imposing diversity constraints is to counteract pre-existing bias. Further, since these constraints enforce equality of outcomes, the method relates to the WAE worldview. When the candidates are partitioned by a single sensitive attribute (either binary or multinary), the method will select the highest-scoring members of each group to satisfy diversity constraints, suggesting that it operates under a luck-egalitarian EOP view. However, when candidates are associated with two or more sensitive attributes, as is the case in the example in Figure~\ref{fig:crm}, a single utility distribution is assumed, in the sense that a higher-scoring candidate will be preferred to a lower-scoring one irrespective of their group membership, whenever constraints permit. This was illustrated by ranking $\boldsymbol{\tau}_2$ in Figure~\ref{fig:crm}, where the highest-scoring White and male candidates were selected among the top-$k$, but the highest-scoring female, Black, and Asian candidates were skipped. Based on this observation, we map this method to Rawlsian EOP\xspace. We will elaborate on this specific concern in the next subsection.
\subsubsection{\citet{DBLP:conf/ijcai/YangGS19}} \spara{Fairness definition and problem formalization.} The authors further investigate the constrained ranking maximization problem with two or more sensitive attributes, and observe that members of multiple historically disadvantaged groups may still be treated unfairly by this process. For an intuition, consider again Figure~\ref{fig:crm}, and recall that the goal is to select the top-$4$ candidates, with at least two of each gender ($L_4^{\val{M}}=2$, $L_4^{\val{F}}=2$) and at least one of each race ($L_4^{\val{W}}=1$, $L_4^{\val{B}}=1$, $L_4^{\val{A}}=1$). Maximizing utility subject to these constraints yields ranking $\boldsymbol{\tau}_2$ in Figure~\ref{fig:crm}, which selects the best (according to score) White and male candidates $a$ and $b$, but does not select the best Black, Asian, or female candidates. Our example was deliberately constructed to highlight the following: if some population groups have systematically lower scores, then it costs less to skip their best-scoring members in the name of diversity. This runs contrary to the nature of the diversity objective, which is to equalize access to opportunity. It also represents unfairness under the luck-egalitarian view. To see why, suppose that scores represent effort (e.g.,\xspace how hard someone studied to do well on a test), and that we consider it important to reward effort. We may then take a relative view of effort, and assert that scores are more informative \emph{within} a group than \emph{across groups}. Taken together, this means that the best-scoring individuals from historically disadvantaged groups should have a chance to be selected among the top-$k$. Ranking $\boldsymbol{\tau}_3$ in Figure~\ref{fig:crm} gets closer to this objective: it is a top-$4$ ranking that contains the highest-scoring male, female, White, and Black candidates.
\citet{DBLP:conf/ijcai/YangGS19} formalize this intuition by stating that, when multiple sensitive attributes (e.g.,\xspace gender and race) are considered simultaneously, it is crucial to consider the utility loss that is incurred within each group, and to balance that loss across groups. The authors propose two measures to quantify in-group utility, \textsf{IGF-Ratio}\xspace and \textsf{IGF-Aggregated}\xspace, both taking on values from the range (0, 1], with 1 corresponding to perfect utility within a group (no loss), and with high loss corresponding to values close to 0. Both \textsf{IGF-Ratio}\xspace and \textsf{IGF-Aggregated}\xspace can be computed over the top-$k$ as a set, or in a rank-aware manner, by considering every prefix of length $p \in [k]$. In what follows, we will illustrate one of these measures, \textsf{IGF-Ratio}\xspace, taking the set interpretation for simplicity. \textsf{IGF-Ratio}\xspace quantifies the utility within a group (e.g.,\xspace female or Black) as the ratio of the utility score of the lowest-scoring selected candidate from that group to that of the highest-scoring skipped candidate from the same group. Consider again ranking $\boldsymbol{\tau}_2$ in Figure~\ref{fig:crm}. We compute $\textsf{IGF-Ratio}\xspace(\boldsymbol{\tau}_2, \group{M}) = \textsf{IGF-Ratio}\xspace(\boldsymbol{\tau}_2, \group{W}) = 1$, since the highest-scoring male and White candidates were selected. For the female, Black, and Asian groups, we compute $\textsf{IGF-Ratio}\xspace(\boldsymbol{\tau}_2, \group{F}) = \frac{Y_g}{Y_c} = \frac{10}{16}$; $\textsf{IGF-Ratio}\xspace(\boldsymbol{\tau}_2, \group{B}) = \frac{Y_g}{Y_e} = \frac{10}{11}$; and $\textsf{IGF-Ratio}\xspace(\boldsymbol{\tau}_2, \group{A}) = \frac{Y_k}{Y_i} = \frac{6}{7}$.
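The set interpretation of \textsf{IGF-Ratio}\xspace can be sketched as follows (our illustration; returning 1 for groups whose skipped members all score below the selected ones, and assuming every group has at least one selected member, as the lower-bound constraints guarantee):

```python
def igf_ratio(scores, group, selected):
    """Set-based IGF-Ratio for one group: the score of the lowest-scoring
    selected group member divided by the score of the highest-scoring
    skipped group member, capped at 1 (no in-group utility loss).
    Assumes at least one group member is selected."""
    sel = [scores[c] for c in group if c in selected]
    skipped = [scores[c] for c in group if c not in selected]
    if not skipped or min(sel) >= max(skipped):
        return 1.0  # the group's best members made it into the top-k
    return min(sel) / max(skipped)
```

For instance, a female group whose selected member scores 10 while the best skipped member scores 16 yields $\frac{10}{16}$, matching the $\textsf{IGF-Ratio}\xspace(\boldsymbol{\tau}_2, \group{F})$ computation above.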
\textsf{IGF-Aggregated}\xspace is based on similar intuition as \textsf{IGF-Ratio}\xspace, but rather than comparing the utility due to a pair of items for each group --- one selected and one skipped --- it compares the sum of scores of all items from a group up to a particular position with the sum of scores of all selected items from that group (up to the same position). The authors go on to use \textsf{IGF-Ratio}\xspace and \textsf{IGF-Aggregated}\xspace to state that loss in these measures should be balanced across groups. They implement this requirement as an additional set of constraints, and formalize the induced optimization problem that (1) meets diversity constraints for each group, (2) balances utility loss across groups, and (3) maximizes overall utility subject to (1) and (2), as integer linear programs. \spara{Experiments and observations} The authors conduct experiments on two real datasets, CS departments~\cite{CSData} and MEPS~\cite{MEPSData} (see details in Section~\ref{sec:datasets}). They use these datasets to quantify the feasible trade-offs between diversity, overall utility, and utility loss across groups. Further, they show that utility loss can be balanced effectively, and that the overall utility cost of such interventions is low. \spara{Insights} Similarly to papers surveyed earlier in this section, the work of~\citet{DBLP:conf/ijcai/YangGS19} aims to address pre-existing bias by equalizing outcomes, and so relates to the WAE worldview. Further, because of an explicit focus on ensuring that the best-qualified candidates from each group have an opportunity to be selected, or to appear at higher ranks, this work takes a relative view of effort and so is firmly in the luck-egalitarian EOP camp.
The main insight on which this work is based is that membership in multiple sensitive groups can lead to unfair treatment, and that the effects can be particularly pronounced for individuals who are multiply marginalized and who may, for example, be denied opportunity along the dimensions of both race and gender. This insight surfaces an important dimension of intersectional discrimination in algorithmic rankers and is, to the best of our knowledge, the first approach in this area to have observed and proposed ways to counteract intersectional effects. \subsubsection{\citet{DBLP:conf/edbt/StoyanovichYJ18}} \label{subsubsec:Stoyanovichetal} \spara{Fairness definition and problem formalization} The final method we discuss in this section aims to incorporate diversity constraints of the kind that were used by~\citet{celis2018ranking} and~\citet{DBLP:conf/ijcai/YangGS19} into online set selection. This setting models a sequence of job or college admissions interviews: candidates arrive one-by-one and their qualification score $Y$ is revealed at the time of the interview. Candidates are assumed to arrive in random order according to score, and their total number $n$ is known or can be estimated. The decision maker must hire or reject the candidate being considered as soon as their score $Y$ is revealed, before advancing to the next candidate in the sequence. The classic version of this problem, known as the Secretary problem~\cite{Dynkin:1963,Lindley:1961}, aims to select a single candidate with the highest score $Y$. It was shown by~\citet{Lindley:1961} and by~\citet{Dynkin:1963} that the optimal hiring strategy is to interview $s = \lfloor \frac{n}{e} \rfloor$ candidates without making any offers (this is called the ``warm-up period''), and make an offer to the first candidate whose score is better than the best score of the first $s$ candidates (or accept the last candidate if no better candidate is seen).
This strategy yields the highest-scoring candidate with probability $\frac{1}{e}$, and is said to have ``competitive ratio'' $e$. Further, this is the best such strategy for the secretary problem (i.e.,\xspace no strategy achieves a better competitive ratio)~\cite{ferguson1989}. This problem has been extended by~\citet{DBLP:journals/sigecom/BabaioffIKK08} to select $k$ candidates, maximizing the expected sum of their scores. \citet{DBLP:conf/edbt/StoyanovichYJ18} postulate the diverse $k$-choice secretary problem that enriches the $k$-choice secretary problem of~\citet{DBLP:journals/sigecom/BabaioffIKK08} with diversity constraints. The diverse $k$-choice secretary problem is formalized as follows: In addition to a qualification score $Y$, each candidate is associated with one of $i \geq 2$ groups $\groupSet$ based on the value of a single multinary sensitive attribute (e.g.,\xspace gender, race, or disability status). Both the total number of candidates $n$, and the number of candidates in each group $n_1 \ldots n_i$, are known ahead of time or can be estimated. The goal of the decision maker is to select $k$ candidates, maximizing the expected sum of their scores, subject to diversity constraints, stated in the form of per-group lower bounds $L_k^{\group{}}$ and upper bounds $U_k^{\group{}}$. Figure~\ref{fig:sec} gives an example: a set of $n=6$ college applicants, of whom $n_{\val{M}}=3$ are male and $n_{\val{F}}=3$ are female, are being interviewed in the order shown in Figure~\ref{fig:sec}, left-to-right. The admissions officer wishes to select $k=2$ applicants, with one of each gender, specified by the lower-bound constraints $L_k^{\val{M}} = 1$ and $L_k^{\val{F}} = 1$. The key idea in~\citet{DBLP:conf/edbt/StoyanovichYJ18} is that, if score distributions are expected to differ between the groups, then separate warm-up periods should be conducted for each group to better estimate the scores of that group's desirable candidates.
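The per-group warm-up idea can be sketched as follows, assuming for simplicity one hire per group (lower bound 1) and known group sizes; the function names are ours, not the authors':

```python
import math

def secretary_single(stream, warmup_len):
    """Classic threshold rule: observe warmup_len candidates without hiring,
    then hire the first one who beats all of them (or the last candidate)."""
    best_warmup = max(stream[:warmup_len], default=float("-inf"))
    for y in stream[warmup_len:]:
        if y > best_warmup:
            return y
    return stream[-1]

def diverse_secretary(stream, groups):
    """Per-group warm-up: run the classic rule separately on each group's
    sub-stream, hiring one candidate per group.  stream is a list of
    (group, score) pairs in arrival order; group sizes are known up front."""
    hires = {}
    for g in groups:
        sub = [y for (grp, y) in stream if grp == g]
        hires[g] = secretary_single(sub, math.floor(len(sub) / math.e))
    return hires
```

Running the classic rule on the pooled stream uses a single threshold dominated by the higher-scoring group, which is exactly the failure mode illustrated in outcome (a) of Figure~\ref{fig:sec}; the per-group variant calibrates a separate threshold for each group.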
As illustrated in the outcome (a) in Figure~\ref{fig:sec}, measuring the higher-scoring male candidates and the lower-scoring female candidates against the same (higher-scoring) standard will allow high-scoring male candidates to be selected. However, the female candidates selected in this way are those that happen to be at the end of the interview queue: they were chosen ``at the last minute'' to satisfy the diversity constraint. This is problematic for the reasons we outlined when discussing~\citet{DBLP:conf/ijcai/YangGS19} earlier in this section --- it withholds opportunity from the relatively better-qualified candidates of a historically disadvantaged group, and it can build bad precedent if the lesser-qualified candidates from that group are selected but do not perform well on the task. Outcome (b) in Figure~\ref{fig:sec} shows the result of a selection in which warm-up was conducted separately per group, yielding a higher-scoring female candidate. The authors propose additional techniques to handle cases where the sum of the per-group lower bounds is less than $k$, leaving the freedom to select high-scoring candidates from any group. Finally, they consider the case where a constant-size waiting list of candidates is allowed, showing that it can lead to higher-utility outcomes. \spara{Experiments and observations} The experimental evaluation of the proposed algorithms for variants of the diverse $k$-choice secretary problems is conducted using three real datasets: Forbes richest Americans~\cite{ForbesRichesData}, NASA astronauts~\cite{NASAData}, and Pantheon~\cite{PantheonData} (see details in Section~\ref{sec:datasets}). Additional results on synthetic datasets are provided, to simulate differences in score distributions between groups. The evaluation on real datasets shows that the algorithms can select candidates that meet the desired diversity constraints while paying a very small cost in terms of utility loss.
The evaluation on synthetic datasets shows that if a difference in the observed scores is expected between groups, then these groups must be treated separately during processing. Otherwise, a solution may be derived that meets diversity constraints, but that results in lower utility for the disadvantaged groups. \spara{Insights} This work focuses on pre-existing bias that manifests itself through differences in expected scores between groups of candidates. Diversity constraints, and the mechanism used to enact them, aim to equalize outcomes across groups, and so this method clearly links to the WAE worldview. The core idea in this work is that effort, as represented by scores, should be seen as relative: scores are estimated per group, and individuals from a particular group are evaluated against that group's score threshold. This places the method firmly within the luck-egalitarian EOP camp. \subsection{Intervening on the Score Distribution} \label{sec:fair_db:latent} The methods in this subsection work under the assumption that the scores on which candidates are ranked are subject to pre-existing bias, such that members of minority or historically disadvantaged groups have lower scores, and thus are ranked less favorably. The approach these methods take is based on correcting for the bias by adjusting the score distribution before it is given as input to a ranker. \subsubsection{\citet{kleinberg_et_al:LIPIcs:2018:8323} and~\citet{celis2020interventions}} \label{subsubsec:kleinberg} \spara{Problem formalization} The papers discussed in this section study set selection and ranking in presence of \emph{implicit bias}; they investigate under what conditions the utility of the selected set or the top-$k$ would be improved by imposing representation constraints.
\citet{kleinberg_et_al:LIPIcs:2018:8323} consider a score-based set selection task motivated by hiring, in which a set of $n$ candidates $\candidateSet{}$ applies for an open job position, and some $k \ll n$ of them are selected as finalists to interview. The size of the selected set $k$ is assumed to be a small constant, with the case $k=2$ studied closely in the paper. Candidates in $\candidateSet{}$ belong to one of two groups, $\group{1}$ or $\group{2}$, according to a single binary sensitive attribute (e.g.,\xspace binary gender), and with one of these groups, $\group{1}$, corresponding to the protected group (e.g.,\xspace the female gender). It is assumed that $\group{1}$ constitutes a minority of the applicant pool, as quantified by the parameter $\alpha \in (0, 1]$, with $\left| \group{1} \right| = \alpha \cdot \left| \group{2} \right|$. Further, it is assumed that the true qualification scores (called ``potentials'' in~\citet{kleinberg_et_al:LIPIcs:2018:8323}) are drawn from the same score distribution for the candidates in both groups, and that this distribution follows the power law, parameterized by $\delta > 0$, such that $Pr[Y\geq t]=t^{-(1+\delta)}$. Candidates are not hired according to their true qualification scores $Y$, but rather according to their perceived scores $\Tilde{Y}$, which are, in turn, subject to \emph{implicit bias}: hiring committee members ``downgrade'' the true scores of the candidates from $\group{1}$ by dividing them by a factor $\beta > 1$. The question being asked in this paper is: Under what conditions does including \emph{a single candidate} from the protected group $\group{1}$ among the $k$ finalists improve the utility of the selected set according to the true score $Y$? (Utility is quantified as the sum of true scores of the selected candidates.)
This intervention is known as the Rooney Rule~\cite{rooney}, and while its goal is to improve diversity in hiring, \citet{kleinberg_et_al:LIPIcs:2018:8323} study it explicitly from the point of view of utility rather than diversity or fairness. The requirement of including a single protected group candidate among the finalists is a basic coverage-based diversity requirement~\cite{drosou2017diversity}. The authors study the problem under different settings of $\alpha$ (relative proportion of the minority group), $\beta$ (bias factor), and $\delta$ (the parameter of the power law distribution of true scores). They find that, for every $\alpha$, there exists a sufficiently small $\delta > 0$ for which the Rooney Rule will produce a set of $k$ finalists with higher expected utility, compared to when candidates are selected according to their perceived --- and biased --- scores $\Tilde{Y}$. Put another way, with a power law exponent $1+\delta$ that is sufficiently close to 1, it is a better strategy, \emph{in terms of utility}, to commit one of the $k$ offers to a candidate from group $\group{1}$, even when $k$ is as low as 2 and $\group{1}$ forms an extremely small fraction of the population. Figure~\ref{fig:exm_utility_kleinberg} shows an example of the selection process, where the goal is to select $k=2$ finalists from a pool of 6, with 2 male candidates for each female candidate ($\alpha=1/2$), and with female candidates being perceived as half as qualified as their true score would suggest ($\beta=2$). The Rooney Rule would select a top-scoring candidate from each gender group, leading to higher expected utility than if the top-two candidates were selected, both of them male.
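A small Monte Carlo sketch of this model (our illustration, with hypothetical function names): true scores are drawn from the Pareto distribution $Pr[Y \geq t] = t^{-(1+\delta)}$, group-$1$ perceived scores are divided by $\beta$, and the true utility of unconstrained top-$2$ selection is compared against selection under the Rooney Rule.

```python
import random

def pareto(rng, delta):
    """Draw Y with Pr[Y >= t] = t^-(1+delta) for t >= 1 (inverse CDF)."""
    return (1.0 - rng.random()) ** (-1.0 / (1.0 + delta))

def compare_rooney(n2, alpha, beta, delta, trials, seed=0):
    """Average true utility of the k=2 finalists with and without the
    Rooney Rule; group-1 perceived scores are true scores divided by beta."""
    rng = random.Random(seed)
    n1 = max(1, round(alpha * n2))
    util_plain = util_rooney = 0.0
    for _ in range(trials):
        # (perceived, true) score pairs for each group.
        p1 = sorted(((y / beta, y) for y in (pareto(rng, delta) for _ in range(n1))), reverse=True)
        p2 = sorted(((y, y) for y in (pareto(rng, delta) for _ in range(n2))), reverse=True)
        top = sorted(p1 + p2, reverse=True)[:2]   # unconstrained: top-2 by perceived score
        util_plain += top[0][1] + top[1][1]
        # Rooney Rule: best-perceived group-1 candidate, plus the
        # best-perceived candidate among everyone else.
        rest = sorted(p1[1:] + p2, reverse=True)
        util_rooney += p1[0][1] + rest[0][1]
    return util_plain / trials, util_rooney / trials
```

With $\beta = 1$ (no bias) the constraint can only cost utility, while for large $\beta$ and small $\delta$ the reserved slot tends to recover the high true scores that implicit bias would otherwise hide.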
\input{figs/merge-causal-rank-presentation} These results are extended by~\citet{celis2020interventions}, who consider arbitrary utility distributions (beyond the power law) and support a richer group structure, including multiple sensitive attributes handled independently, with multinary domains. They show that, for any (assumed) distribution of utilities and any level of implicit bias, representation constraints can lead to optimal latent utility. \spara{Experiments and insights} \citet{kleinberg_et_al:LIPIcs:2018:8323} give a tight characterization of the conditions on $\alpha$, $\beta$, and $\delta$ under which applying the Rooney Rule, with its most basic representation constraint, produces a positive change in expected utility. The proposed techniques can be used to estimate parameters of a biased decision-making process. The paper focuses on theoretical analysis and does not provide any experimental results. \citet{celis2020interventions} extend these results, and also include an experimental evaluation on the \emph{IIT-JEE} dataset~\cite{IITJEEData} (see Section~\ref{sec:datasets} for details). These results give the flavor of the utility of the proposed intervention, although the experimental evaluation is substantially more limited than the problem set-up warrants, focusing on a single binary protected attribute and leaving empirically unsubstantiated the claim that the proposed approach generalizes to multiple sensitive attributes and handles intersectional discrimination. Both papers consider utility rather than diversity or fairness, and so cannot be classified according to one of our EOP frameworks. That said, the assumption made in the papers --- that candidates' true (unobserved) qualifications are drawn from the same score distribution --- is consistent with the WAE worldview.
\subsubsection{\citet{yang2020causal}} \spara{Fairness definition and problem formalization} The authors define \emph{intersectional fairness for ranking} by modelling the causal effects of sensitive attributes on other variables, and then making rankers fairer by removing these effects. Their method, \textsc{CIF-Rank}\xspace, computes model-based counterfactuals to answer the question: ``What would this person's data look like if they had (or had not) been a Black woman (for example)?'' Counterfactual scores are computed by treating every candidate as though they had belonged to one specific intersectional subgroup. Candidates are then ranked on counterfactual scores (for score-based rankers), or these scores are used to train a fair model (for rankers based on supervised learning). Consider the hiring process of a moving company that has a dataset of applicants including their gender $G$, race $R$, weight-lifting test score $X$, and an overall qualification score $Y$ by which job candidates are ranked. Figure~\ref{fig:cm} presents the structural causal model (SCM) that describes the data generation process. An SCM is a directed acyclic graph, where vertices represent (observed or latent) variables and edges indicate causal relationships from source to target vertices. The arrows pointing from $G$ (gender) and $R$ (race) directly to $Y$ encode the effect of ``direct'' discrimination. Additionally, the SCM can encode indirect discrimination: note that $G$ and $R$ both impact $Y$ through weight-lifting ability $X$, called a ``mediator variable.'' A mediator may be designated as \emph{resolving} with respect to a sensitive variable, which means that we allow that mediator to carry the effect from the sensitive variable to the outcome. For example, we may consider $X$ as resolving on the path from gender $G$ to score $Y$. Alternatively, a mediator may be designated as \emph{non-resolving}, which means that we consider the influence to be due to discrimination.
For example, we may consider $X$ as non-resolving on the path from race $R$ to score $Y$. The SCM, together with the information about which mediators are considered resolving, is given as input; it encodes the fairness objectives of the ranker. \textsc{CIF-Rank}\xspace will use the SCM to produce a ranking that is fair with respect to race, gender, and the intersectional subgroups of these categories. Let $\mathbf A$ denote the vector of sensitive attributes and let $\mathbf a$ denote a possible value. The counterfactual $Y_{\mathbf A \gets \mathbf a'}$ is computed by replacing the observed value of $\mathbf A$ with $\mathbf a'$ and then propagating this change through the DAG: any directed descendant of $\mathbf A$ has its value changed by computing the expectation for the new value $\mathbf a'$, and this operation is iterated until it reaches all the terminal nodes that are descendants of any of the sensitive attributes $\mathbf A$. If a mediator variable is non-resolving, then its value will be set to its counterfactual value in the process. If, however, it is designated as resolving, then we keep its observed value. \textsc{CIF-Rank}\xspace considers a ranking $\hat{\boldsymbol{\tau}}$ counterfactually fair if, for all possible $\mathbf x$ and all pairs of actual and counterfactual sensitive attribute vectors $\mathbf a \neq \mathbf a'$, \begin{equation*} \label{eq:cfranking} \begin{split} \mathbb P (\hat{\boldsymbol{\tau}}(Y_{\mathbf A \gets \mathbf a}(U)) = k \mid \mathbf X = \mathbf x, \mathbf A = \mathbf a) \\ = \mathbb P (\hat{\boldsymbol{\tau}}(Y_{\mathbf A \gets \mathbf a'}(U)) = k \mid \mathbf X = \mathbf x, \mathbf A = \mathbf a) \end{split} \end{equation*} for any rank $k$, and with suitably randomized tie-breaking.
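The abduction--action--prediction steps behind these counterfactuals can be sketched for a linear instantiation of the SCM in Figure~\ref{fig:cm} (the linear form and all coefficients below are our illustrative assumptions, not those of the paper):

```python
def counterfactual_score(obs, cf, coef, resolving):
    """Counterfactual Y under an assumed linear SCM
        X = gx*G + rx*R + U_x,   Y = gy*G + ry*R + xy*X + U_y.
    obs holds observed g, r, x, y; cf holds counterfactual g, r; resolving
    names the sensitive attributes for which the mediator X keeps its
    observed contribution (the 'resolving' designation in the text)."""
    # Abduction: recover the exogenous noise terms from the observed record.
    u_x = obs["x"] - coef["gx"] * obs["g"] - coef["rx"] * obs["r"]
    u_y = obs["y"] - coef["gy"] * obs["g"] - coef["ry"] * obs["r"] - coef["xy"] * obs["x"]
    # Action + prediction: propagate the counterfactual sensitive values.
    g_for_x = obs["g"] if "g" in resolving else cf["g"]
    r_for_x = obs["r"] if "r" in resolving else cf["r"]
    x_cf = coef["gx"] * g_for_x + coef["rx"] * r_for_x + u_x
    return coef["gy"] * cf["g"] + coef["ry"] * cf["r"] + coef["xy"] * x_cf + u_y
```

When the counterfactual values equal the observed ones, the observed score is recovered exactly; designating $X$ as resolving for gender preserves the gender-mediated part of the score while still removing the direct effect.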
The causal model can be used to compute counterfactual scores $Y$ --- the scores that would have been assigned to the individuals if they belonged to one particular subgroup defined by fixed values of $R$ and $G$, while holding the weight-lifting score $X$ fixed in the resolving case --- and then rank the candidates based on these scores. The moving company can then interview or hire the highly ranked candidates, and this process would satisfy a causal and intersectional definition of fairness that corresponds to the hiring manager's explicitly stated goals. \spara{Experiments and observations} The authors evaluated the performance of \textsc{CIF-Rank}\xspace on several real and synthetic datasets, including \emph{CSRankings}, \emph{COMPAS} and \emph{MEPS} (see details in Section~\ref{sec:datasets}). Results on synthetic datasets are provided to simulate different structural assumptions of the underlying causal model. The evaluation is done on two types of ranking tasks: score-based and supervised learning. The evaluation of score-based ranking tasks on real and synthetic datasets shows that \textsc{CIF-Rank}\xspace can be flexibly applied to different scenarios, including ones with mediating variables and numerical sensitive attributes. Counterfactually fair rankings that are produced by \textsc{CIF-Rank}\xspace compare reasonably to intuitive expectations we may have about intersectional fairness for those examples, while paying a small cost in terms of utility loss. The evaluation of rankers based on supervised learning on synthetic datasets shows that \textsc{CIF-Rank}\xspace can be used as a preprocessing fairness intervention to produce counterfactually fair training and test data. \spara{Insights} \textsc{CIF-Rank}\xspace admits \emph{multiple sensitive attributes}, and is specifically designed for intersectional concerns, and so is appropriate when it is important to account for potential discrimination along two or more features.
The method supports \emph{multinary sensitive attributes}, such as non-binary gender and ethnic group membership. The method is concerned with \emph{pre-existing bias} that in turn leads to disparities in outcomes. The method focuses on \emph{equality of outcome}, takes the \emph{WAE} worldview, and falls within the framework of \emph{substantive EOP\xspace}. It gives the decision maker the flexibility to specify which impacts of which sensitive attribute to mitigate, and which to allow to persist. This is done through the mediator mechanism. A mediator $X$ may be considered resolving or not; this decision can be made separately for different sensitive attributes, and the relative strengths of causal influences of sensitive attributes on both $X$ and $Y$ can vary, creating potential for explanatory nuance. If all mediators are designated as resolving, then the method expresses \emph{Rawlsian EOP\xspace}. If, however, some or all of the mediators are designated as non-resolving, then the result will be consistent with \emph{luck-egalitarian EOP\xspace}. \subsection{Intervening on the Ranking Function} \label{sec:fair_db:geo} To motivate the methods discussed in this section, let us return to our running example described in Section~\ref{sec:intro:example}, and shown in Figure~\ref{fig:admissions}, and consider a college admissions officer who is designing a ranking scheme to evaluate a pool of applicants, each with several potentially relevant attributes. For simplicity, let us focus on two of these attributes, high school GPA $X_1$, and verbal SAT $X_2$, and assume that they are appropriately normalized and standardized. Suppose that our fairness criterion is that the admitted class comprise at least 40\% women. 
The admissions officer may believe \emph{a priori} that $X_1$ and $X_2$ should carry approximately equal weight, computing the score of an applicant $a \in \candidateSet{}$ as $f(a) = 0.5 X_1 + 0.5 X_2$, ranking the applicants, and returning the top 500 individuals. Upon inspection, it may be determined that an insufficient number of women is returned among the top-$k$: at least 200 were expected and only 150 were returned, violating the fairness constraint. A possible mitigation is to identify an alternative scoring function $\Tilde{f}$ that, when applied to $\candidateSet{}$, meets the fairness constraint and is close to the original function $f$ in terms of attribute weights, thereby reflecting the admissions officer's notion of quality. To arrive at such a function, the admissions officer would try a new scoring function, check whether the result meets the fairness criterion, and, if necessary, repeat. After a few cycles of such interaction, the admissions officer may choose $\Tilde{f}(a) = 0.45 X_1 + 0.55 X_2$ as the final scoring function. The work of~\citet{asudeh2019designing} automates this process; the authors use a combinatorial geometry approach to efficiently explore the search space and identify a fair scoring function $\Tilde{f}$ in the neighborhood of $f$, if one exists. \spara{Fairness definition and problem formalization} Let us assume that a dataset of candidates $\candidateSet{}$ is given, along with a linear ranking function $f$, specified by a weight vector $\Vec{w}$. The goal is to find a ranking function $\Tilde{f}$ that is both close to $f$ in terms of the angular distance between the weight vectors of $f$ and $\Tilde{f}$, and fair according to a fairness oracle $\mathcal O$. The main technical contribution of the work is in establishing a correspondence between the space of linear ranking functions and the rankings of items from a given dataset $\candidateSet{}$ induced by these functions.
This characterization is based on the notion of an \emph{ordering exchange} that partitions the space of linear functions into disjoint regions. Intuitively, while there is an infinite number of linear ranking functions to explore, only those that change the relative order of some pair of items $a, b \in \candidateSet{}$ need to be considered, because if a ranking is unchanged, then the fairness oracle $\mathcal O$ will not change its answer from \emph{false} to \emph{true}. Based on this observation, the authors develop exact algorithms to determine boundaries that partition the space into regions where the desired fairness constraint is satisfied, called satisfactory regions, and regions where the constraint is not satisfied. They also develop approximation algorithms to efficiently identify and index satisfactory regions, and introduce sampling heuristics for on-the-fly processing in cases where the size of $\candidateSet{}$ or the number of scoring attributes is large. \spara{Insights} The fairness oracle $\mathcal O$ is treated as a black box: given a dataset $\candidateSet{}$ and a ranking function $f$, it returns \emph{true} if the ranking of $\candidateSet{}$ by $f$ meets fairness criteria and so is satisfactory, and returns \emph{false} otherwise. The oracle is deterministic, and no further assumptions are made about the type of fairness criteria it encodes. Because of this black-box treatment of the fairness objective, the method makes no commitment to a worldview (WYSIWYG or WAE) or EOP framework, and it is not restricted in terms of group structure: the number of sensitive attributes, their cardinality, or the method by which multiple sensitive attributes are handled. Despite this flexibility, the authors target their approach specifically at pre-existing bias.
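As a toy stand-in for the exact ordering-exchange machinery, one can sample weight vectors at increasing angular distance from $\Vec{w}$ (here in two dimensions, with non-negative weights) and query the oracle; all names below are hypothetical:

```python
import math

def make_oracle(candidates, k, min_protected):
    """Toy fairness oracle: at least min_protected group-F candidates must
    appear in the top-k of the ranking induced by weight vector w.
    candidates are (group, x1, x2) triples."""
    def oracle(w):
        ranked = sorted(candidates, key=lambda c: -(w[0] * c[1] + w[1] * c[2]))
        return sum(1 for c in ranked[:k] if c[0] == "F") >= min_protected
    return oracle

def nearest_fair_function(w, oracle, steps=360):
    """Scan unit weight vectors at increasing angular distance from w and
    return the first one the oracle accepts (None if the scan fails)."""
    base = math.atan2(w[1], w[0])
    for i in range(steps + 1):
        for sign in (1, -1):
            theta = base + sign * i * (math.pi / 2) / steps
            cand = (math.cos(theta), math.sin(theta))
            if cand[0] >= 0 and cand[1] >= 0 and oracle(cand):
                return cand
    return None
```

This sampling heuristic may miss narrow satisfactory regions; the authors' contribution is precisely to compute the region boundaries exactly, so that the nearest fair function is found whenever one exists.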
\spara{Experiments and observations} While the fairness model is general, the authors focus their experimental evaluation on proportional representation constraints that bound the number of items belonging to a particular group at the top-$k$, for some given value of $k$. The proposed methods are evaluated on the COMPAS~\cite{COMPASData} and DOT~\cite{DOTData} datasets (see Section~\ref{sec:datasets} for details), and with two sets of fairness measures: (1) proportional representation on a single multinary protected attribute and (2) proportional representation on multiple, possibly overlapping, protected attributes. They study both how intuitive the results are --- how close a fair ranking function is to the original --- and how efficiently results can be computed in this computationally challenging setting. \section{Supervised learning} \label{sec:fair_ir} In this section, we present several methods for fairness in learning-to-rank and information retrieval. We will continue to use the notation that we introduced in Section~\ref{sec:intro:learned} wherever appropriate, to illustrate the commonalities of the fields. Recall from Section~\ref{sec:frame:bias} that we speak of three different bias categories: pre-existing, technical, and emergent bias. The following sections present different approaches to mitigating these biases and producing fairer rankings through various strategies and definitions of fairness. Their categorization into pre-, in- and post-processing approaches describes whether they mitigate bias before, during, or after model training (recall Figure~\ref{fig:ir-flowchart} at page~\pageref{fig:ir-flowchart}). Table~\ref{tbl:method-summary_ir} summarizes the classification of the methods we present in the following into the frameworks described in Section~\ref{sec:02-four-frameworks}. \begin{table*}[ht!]
\small \begin{tabular}{p{0.16\textwidth}p{.1\textwidth}p{0.18\textwidth}p{0.12\textwidth}p{0.1\textwidth}p{0.2\textwidth}} \toprule \textbf{Method} & \textbf{Mitigation Point} & \textbf{Group structure} & \textbf{Bias} & \textbf{Worldview} & \textbf{EO Framework} \\ \midrule iFair~\cite{lahoti2019ifair} & pre-proc. & \makecell[l]{multinary;\\ intersectional} & pre-existing & WAE & Rawlsian \\ \rowcolor{Lightgray} DELTR~\cite{zehlike2018reducing} & in-proc. & binary; intersectional & pre-existing & WAE & luck-egalitarian \\ Fair-PG-Rank~\cite{singh2019policy} & in-proc. & binary; intersectional & technical & WYSIWYG & formal \\ \rowcolor{Lightgray} Pairwise Ranking Fairness~\cite{Beutel:2019:FRR:3292500.3330745} & in-proc. & binary; intersectional & ? & ? & ? \\ \textsc{FA*IR}\xspace~\cite{zehlike2017fa} & post-proc. & binary; intersectional & pre-existing & continuous & formal -- luck-egal. \\ \rowcolor{Lightgray} CFA$\theta$~\cite{zehlike2017matching} & post-proc. & \makecell[l]{multinary;\\ intersectional} & pre-existing & continuous & formal -- Rawlsian \\ \makecell[l]{Fairness of\\Exposure~\cite{singh2018fairness}} & post-proc. & binary; intersectional & \makecell[l]{pre-existing \& \\ technical} & WYSIWYG \& WAE & formal \& luck-egal. \\ \rowcolor{Lightgray} \makecell[l]{Equity of\\ Attention~\cite{biega2018equity}} & post-proc. & \makecell[l]{multinary;\\ intersectional} & \makecell[l]{technical \& \\ emergent} & WYSIWYG & formal\\ \bottomrule \end{tabular} \caption{Summary of method classification. ``Pre-, in-'' and ``post-proc'' refer to whether a method can be classified as pre-, in- or post-processing. ``Binary'' vs. ``multinary'' indicates whether the method can handle two or more protected groups per attribute at a time (e.g. young/old is a binary manifestation of the attribute ``age'', while kid/teen/adult/old is a multinary manifestation of said attribute).
``Intersectional'' expresses whether and how a method can handle more than one protected attribute at a time. Intersectional means that groups are formed by a combination of attribute values (e.g. black females). See Section~\ref{sec:02-four-frameworks} for reference.} \label{tbl:method-summary_ir} \end{table*} \subsection{Pre-Processing Methods: Learning Fair Training Data} Pre-processing approaches are usually concerned with biases in the training data, which they try to mitigate. These biases can be of all three types: pre-existing bias enters any data collection procedure in various ways, e.g., through how questions are phrased and through the decision which information to collect and which to omit. Technical bias makes its way into the data through rounding errors, different number and category encodings, or the choice of strategy for handling missing values. Emergent bias arises when data is used in a different way than intended during collection. \noindent General advantages of pre-processing methods are: \begin{itemize} \item Pre-processing methods treat fairness as a first-class concern in the machine learning pipeline. \item Most in- and post-processing methods rely on the availability of group labels during or after training, respectively. Pre-processing approaches instead commonly operate on a distance measure between individuals, which allows them to be agnostic to group membership. It is sufficient to define who should be similar to whom, based on the features that are available. \item Additionally, it is possible to control for certain types of fairness across groups, even if only sparse information about group membership is available~\cite{lahoti2019operationalizing}. \end{itemize} General disadvantages are: \begin{itemize} \item Machine learning methods that rely on a separate feature engineering step are not applicable, because the features identified by domain experts may be rendered meaningless if fair representations are learned from the raw data.
\item Current methods only operationalize individual fairness and treat group fairness as a special case of it. \end{itemize} \subsubsection{iFair \cite{lahoti2019ifair}} \input{figs/example-lahoti} \spara{Fairness Definition. } This work operates on an individual fairness objective to learn fair representations of training data points. It is based on the fairness definition by~\citet{dwork2012fairness}, which states that similar individuals should be treated similarly. The goal is to transform an input record $\featureSet_a$ (the feature vector for candidate $a$) into a fairer data representation $\fairFeatOfCand{a}$ using a mapping $\phi$, such that two individuals $a$ and $b$ who are indistinguishable on their non-sensitive attributes $\featureSet \setminus \sensAttrSet$ (marked by $\featureSet^*$) are also nearly indistinguishable in their fair representations $\phi(\featureSet_{a})$ (where sensitive attributes are included): \begin{equation*} \left|d\left(\phi(\featureSet_{a}{}), \phi(\featureSet_{b}{})\right) - d(\npFeatOfCand{a}, \npFeatOfCand{b})\right| \leq \epsilon \end{equation*} Note that this definition assumes that a similarity measure $d$ is available that can correctly (and free of bias) capture the differences between two individuals. As the paper uses the family of Minkowski $p$-metrics, let us assume we choose the Minkowski metric with $p=1$, i.e.,\xspace the Manhattan distance, as $d$. Reconsidering our college admission example, in Figure~\ref{fig:lahoti_example} we see that candidates $\val{b}$ and $\val{c}$, as well as $\val{d}$ and $\val{e}$, have a distance of 0 in their non-protected features: $ d(\npFeatOfCand{1}, \npFeatOfCand{2}) = d(\npFeatOfCand{3}, \npFeatOfCand{4}) = 0$.
When comparing candidates across groups, we see that the female group shows a Manhattan distance of 1 to the male group: $ d(\npFeatOfCand{1}, \npFeatOfCand{3}) = d(\npFeatOfCand{2}, \npFeatOfCand{3}) = d(\npFeatOfCand{1}, \npFeatOfCand{4}) = d(\npFeatOfCand{2}, \npFeatOfCand{4}) = 1$. The algorithm, named ``iFair'', would create a new feature set $\phi(\featureSet)$ that preserves those distances \emph{and} includes sensitive attributes in a way that they no longer correlate with non-protected attributes. In our example $\sensAttr_1$ correlates with $X_{2}$, which may be picked up by a ranking model. To avoid this, iFair would assign non-correlating values to $\sensAttr_1$, e.g.,\xspace by switching the sex of candidates $\val{b}$ and $\val{d}$. \spara{Insights.} Though not clearly stated, the wording suggests that attributes are measured in observable space {\it OS}\xspace and that the definition seeks to reduce technical bias. Depending on the choice of the distance metric $d$, the method would potentially be capable of learning representations that ignore \emph{all} information about group membership, even if it is partly encoded in the non-protected features. In this case it would assign low weights to those non-protected features that indirectly encode protected ones. However, the choice of Minkowski metrics, where distances are measured in terms of absolute numbers, suggests that the authors assume construct space $CS \sim OS$ and hence a leaning to a WYSIWYG worldview and to formal equality of opportunity. The authors do not address the question of whether (or why not) Minkowski metrics are prone to reproducing biases from training data that may arise from biased observation processes. They tackle this problem in a follow-up work~\cite{lahoti2019operationalizing}, where the distance metric is replaced by a \emph{fairness graph} that captures pairwise similarities of individuals.
In the graph, a node represents an individual $a$ and an edge between two nodes indicates that these individuals are to be considered similar. This approach has two advantages: it allows a comparison of an individual's non-protected attributes across different domains (e.g.,\xspace the h-index of a successful researcher in programming languages is typically lower than that of a successful researcher in machine learning), and it allows for sparse similarity judgments, as individuals can be grouped into clusters based on their in-group relevance scores (e.g.,\xspace the top-10\% in the female group). We do not further present~\citet{lahoti2019operationalizing} here, because the paper focuses on classification tasks in its experimental section. Pre-processing methods claim to be application-agnostic; however, at the time of publication of this thesis, \citet{lahoti2019ifair} is the only such work that has been shown to work for ranking tasks. \spara{Algorithm.} The problem is formalized as a probabilistic clustering problem: given $K$ clusters of similar individuals (with $K < n$), each is represented by a prototype vector $v_k$.
A candidate record $\featureSet_{a}{}$ is assigned to one of the $v_k$ based on a record-specific probability distribution $P_a$ that reflects the distances of the record from each of the prototype vectors and thus forms the fair representation: \begin{equation*} \phi(\featureSet_{a}{}) = \fairFeatOfCand{a} = \sum_{k=1..K} P_{ak} \cdot v_k \end{equation*} This is used to formalize a utility objective that ensures a low reconstruction loss, and a fairness objective that demands that $\phi$ preserve pairwise distances on non-protected attributes between data records: \begin{equation*} L_{\operatorname{util}} (\mathbf X, \widetilde{\mathbf X}) = \sum_{a \in \candidateSet{}} || \featureSet_{a}{}- \fairFeatOfCand{a} ||_2 \end{equation*} \begin{equation*} L_{\operatorname{fair}} (\mathbf X, \widetilde{\mathbf X}) = \sum_{a, b \in \candidateSet{}} \left(d(\fairFeatOfCand{a}, \fairFeatOfCand{b}) - d(\npFeatOfCand{a}, \npFeatOfCand{b}) \right)^2 \end{equation*} The two objectives are combined into an objective function that the algorithm optimizes using gradient descent: \begin{equation*} L = \lambda \cdot L_{\operatorname{util}} (\mathbf X, \widetilde{\mathbf X}) + \mu \cdot L_{\operatorname{fair}} (\mathbf X, \widetilde{\mathbf X}) \end{equation*} The algorithm supports multiple groups, and it is not necessary to specify the protected groups a priori. \spara{Experiments.} Experiments are performed on five real-world data sets and a synthetic one. Of the real-world data sets, two are used for learning-to-rank tasks, namely the XING data set~\cite{XINGData} and the AirBnB data set~\cite{AirBnBData} (the others are used for classification tasks and hence we do not cover them). The synthetic experiments show that representations learned by iFair remain nearly the same for all configurations, irrespective of changes in group membership. This means that changing the value of the protected attribute does not influence the learned representation, i.e.
when training a model on these representations it will not allow any conclusions about the protected attribute. The results show that applying learning algorithms to representations learned by iFair leads to more consistent decisions w.r.t. the distribution of items across a ranking than applying the same algorithm to the original data. This means that two items with similar non-protected features receive similar visibility in the resulting ranking. \subsection{In-Processing Methods: Learning a Fair Model} In-processing fair ranking methods extend the objective function of a learning-to-rank algorithm with a fairness term. Thus the algorithm's optimization problem consists of an accuracy objective \emph{and} a fairness objective, and the method learns to find the best balance between the two. General advantages are: \begin{enumerate}[(i)] \item In-processing methods yield better trade-offs between accuracy and fairness than post-processing ones, because finding this balance is at the heart of their learning objective. \item They are capable of handling different types of underlying biases without knowing which particular type is present (see Section~\ref{subsubsec:DELTR}). \end{enumerate} General disadvantages are: \begin{enumerate}[(i)] \item The impact of the fairness objective on the resulting ranking is less apparent than in the case of post-processing algorithms. \end{enumerate} \subsubsection{\textsc{DELTR}\xspace \cite{zehlike2018reducing}.} \label{subsubsec:DELTR} \spara{Fairness Definition.} This method perceives unfairness as disparities in exposure, i.e. the average visibility a group receives (see Section~\ref{sec:intro:learned} and Eq.~\ref{eq:exp} on page~\pageref{eq:exp}).
The exposure of a document is defined as its probability~$P_{\predScore^{\query}} \left(\featOfCand{a}\right)$ of appearing in the top position of a ranking for query $\query$: \begin{equation} \operatorname{Exposure}\left(\featOfCand{a}|P_{\predScore^{\query}}\right) = P_{\predScore^{\query}}\left(\featOfCand{a}\right) \cdot v_1 \end{equation} where $ v_1 $ is the \emph{position bias} of position 1, indicating its relative importance for users of a ranking system~\cite{jarvelin2002cumulated}. The exposure of group $\group{}$ is hence the average probability of its members to appear in the top position: \begin{equation} \operatorname{Exposure}(\group{}|P_{\predScore^{\query}}) = \frac{1}{|\group{}|} \sum_{\featOfCand{a} \in \group{}} \operatorname{Exposure}\left(\featOfCand{a}|P_{\predScore^{\query}}\right) \end{equation} \spara{Insights. } As the definition optimizes for equality of exposure (exposure is the outcome), rather than equity, the (potentially biased) qualification of a candidate is not taken into account, which suggests that its underlying assumption is a WAE worldview. We note, however, that because exposure is defined through the probability of appearing in the top position, it indirectly contains a measurement of document relevance. We further note that by setting $\gamma$ to 0, the WYSIWYG worldview can also be adopted. The method is concerned with pre-existing biases that lead to a biased observation process and, in turn, to disparate exposure distributions. Given that, the fairness definition belongs to the framework of substantive EO, yet the authors do not explicitly state their beliefs about score distributions in $CS$, and so they do not explicitly map their approach to either the Rawlsian or the luck-egalitarian EO framework.
However, as the method is agnostic to ``true'' score distributions and optimizes for equal exposure in decision space $DS$ irrespective of whether score distributions in $CS$ differ across demographic groups, we argue that this meets the conditions for luck-egalitarian EO and place it into that category. \spara{Algorithm.} The algorithm incorporates its unfairness measure into the objective function of a list-wise learning algorithm, namely ListNet~\cite{cao2007learning}, such that the algorithm optimizes simultaneously for an accuracy metric $L$ and an unfairness metric for two groups $\unfairnessOnePara{\predScore}$: \begin{equation} L_{\operatorname{\textsc{DELTR}\xspace}} \left( \score{}, \predScore \right) = L \left( \score{}, \predScore \right) + \gamma \unfairnessOnePara{\predScore} \end{equation} with \begin{equation*} \unfairnessOnePara{\predScore} = \max \left(0, \operatorname{Exposure}\left(\group{0}|P_{\predScore{\query}}\right) - \operatorname{Exposure}\left(\group{1}|P_{\predScore{\query}}\right)\right)^2 \label{eq:exposure} \end{equation*} where $\group{0}$ denotes the non-protected group and $\group{1}$ the protected one. The squared hinge loss makes the metric asymmetric, i.e. unfairness is only detected if the protected group receives less exposure than the non-protected group, but not vice versa. The optimization problem is solved using gradient descent. \spara{Experiments. } To illustrate how the method works, let us go back to our running example in Figure~\ref{fig:ir_example} on page~\pageref{fig:ir_example}: the algorithm gets as input the sensitive feature that forms a protected group, in this case $A_1$. As the training data $\candidateSet{train}$ shows disparities in exposure for females (they are all ranked below males), a standard learning-to-rank algorithm is likely to pick up $A_1$ as a predominant criterion for its model and assign a high weight to it.
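The squared-hinge penalty defined above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: we assume ListNet-style softmax top-one probabilities, unit position bias $v_1 = 1$, and group labels $0$ (non-protected) and $1$ (protected); all function names are ours.

```python
import math

def top_one_prob(scores):
    """ListNet-style softmax: probability of each item to appear in the top position."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def group_exposure(probs, groups, g, v1=1.0):
    """Average top-one probability of group g's members, scaled by position bias v1."""
    members = [p for p, grp in zip(probs, groups) if grp == g]
    return v1 * sum(members) / len(members)

def unfairness(scores, groups):
    """Squared hinge: penalize only when the protected group (1) is under-exposed."""
    probs = top_one_prob(scores)
    gap = group_exposure(probs, groups, 0) - group_exposure(probs, groups, 1)
    return max(0.0, gap) ** 2
```

For example, `unfairness([3.0, 2.0, 1.0, 0.5], [0, 0, 1, 1])` is positive (the protected items score lower), while swapping the group labels yields exactly zero, reflecting the asymmetry of the hinge.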
\textsc{DELTR}\xspace instead will learn to ignore $A_1$ as a decision criterion, because the unfairness metric penalizes ranking predictions that show high disparities in exposure for the different groups of $A_1$. This way \textsc{DELTR}\xspace can also compensate for systematic errors in the data or in the relevance measures (e.g. if the SAT test design favors male applicants, so that females systematically achieve lower scores). The experiments are performed on three different datasets, each containing a different type of bias. The proposed method handles all types without prior knowledge of which particular bias is present: \begin{itemize} \item \textbf{W3C experts:}~\cite{W3CData} The experimental setup investigates situations in which bias is unrelated to relevance: expertise has been judged correctly, but ties have been broken in favor of the non-protected group.
%
In this case, including the sensitive feature during training yields very bad results in terms of disparate exposure and relevance, because all experts from the protected group are ranked to the bottom of the list.
\item \textbf{Engineering students:}~\cite{EngineeringData} The task is to predict the students' academic performance after the first year based on their admission test results and school grades.
%
Here the same admission test score relates to different levels of academic performance across groups, i.e. a score of 500 from a protected candidate relates to better academic performance than a score of 500 from a non-protected candidate.
%
This experiment investigates situations in which bias comes from different score distributions across groups and shows that trading off accuracy for fairer rankings is not always necessary.
%
In this case, including the sensitive feature during training yields \emph{better} results in terms of exposure and relevance.
\item \textbf{Law School Admission Council:}~\cite{LSACData} These experiments show that disparities in exposure due to differences in academic performance can be reduced using the approach, at a cost in accuracy.
%
However, the trade-off is usually lower for in-processing methods than for post-processing ones, because the best balance between accuracy and fairness is found automatically and not manually.
\end{itemize} \subsubsection{Fair-PG-Rank \cite{singh2019policy}} \label{sec:in-proc:fairpgrank} \spara{Fairness Definition.} The approach by~\citet{singh2019policy} addresses technical bias, i.e.,\xspace bias that is created by the ranking system itself, namely through position bias, which gives candidates beyond the first few positions significantly less visibility than those in the first positions. It also operates on a notion of document exposure; however, this time exposure is not defined as the probability of a document to appear at the top position, but through expected attention, which the authors define as equivalent to the expected position bias (as we defined in Equation~\ref{eq:exp} in Section~\ref{sec:intro:learned} on page~\pageref{eq:exp}).
Similar to~\citet{biega2018equity} (Section~\ref{subsubsec:equityOfAttention}) they operate on a merit-based constraint: each candidate $\boldsymbol{\tau}(i) = a$ in ranking $\boldsymbol{\tau}$ should receive exposure proportional to their utility $\utilityThreePara{}{\boldsymbol{\tau}}{a}$: \begin{equation} \utilityThreePara{}{\boldsymbol{\tau}}{a} \geq \utilityThreePara{}{\boldsymbol{\tau}}{b} \rightarrow \frac{\operatorname{Exposure}(a)}{\utilityThreePara{}{\boldsymbol{\tau}}{a}} \leq \frac{\operatorname{Exposure}(b)}{\utilityThreePara{}{\boldsymbol{\tau}}{b}} \end{equation} The work further proposes a definition of individual fairness per query $\query$ that measures the disparities in visibility $\posBias{.}$ for two candidates $a, b$ in $\boldsymbol{\tau}$: \begin{equation} \unfairnessThreePara{a}{b}{\query} = \frac{1}{|H^{\query}|} \sum_{(a, b) \in H^{\query}} \max \left[0, \frac{\posBias{a}}{\utilityThreePara{}{\boldsymbol{\tau}}{a}} - \frac{\posBias{b}}{\utilityThreePara{}{\boldsymbol{\tau}}{b}}\right] \end{equation} with $H^{\query} = \left\{(a, b) \; \text{s.t.} \; \utilityThreePara{}{\boldsymbol{\tau}}{a} \geq \utilityThreePara{}{\boldsymbol{\tau}}{b}\right\}$. The authors also propose a definition of group fairness for two groups, in which the individual document visibility from the above equation is replaced with a notion of group visibility: \begin{equation*} \unfairnessThreePara{\group{0}}{\group{1}}{\query} = \max \left[0, \frac{\posBias{\group{0}}}{\utilityThreePara{}{\boldsymbol{\tau}}{\group{0}}} - \frac{\posBias{\group{1}}}{\utilityThreePara{}{\boldsymbol{\tau}}{\group{1}}}\right] \end{equation*} with $\posBias{\group{}} = \frac{1}{|\group{}|} \sum_{a \in \group{}} \posBias{a}$ being the average exposure of group $\group{}$. \spara{Algorithm.} Using the fairness definitions the authors extend the ListNet~\cite{cao2007learning} ranking function to incorporate their disparity measures. 
The algorithm Fair-PG-Rank learns an optimal ranking $\boldsymbol{\tau}^*$ via empirical risk minimization using the following learning objective: \begin{equation} \boldsymbol{\tau}^*_\delta = \argmax_{\boldsymbol{\tau}} \frac{1}{N} \sum^{N}_{\query=1} \left[ L(\boldsymbol{\tau}) \right] - \lambda \frac{1}{N} \sum^{N}_{\query=1} \left[ \unfairnessOnePara{.|\query} \right] \end{equation} with $L$ being a loss function that measures the utility of $\boldsymbol{\tau}$ for the user. The optimization is done using gradient descent. \spara{Insights. } As already mentioned, the method is explicitly concerned with the inherent technical bias of a ranking that arises from showing result documents in a one-dimensional list. It assumes a WYSIWYG world in which a document's merit in $OS$ truly reflects its merit in $CS$, because no means are taken to reduce any errors that might have been introduced by the mapping function $g$. It therefore operates on a formal EO framework. Compared to the method described in the previous section (\ref{subsubsec:DELTR}), this approach is concerned with equity of exposure rather than equality. This means that documents shall only receive as much exposure as they ``deserve'' based on their relevance measure. Returning to our college admission example (Figure~\ref{fig:ir_example}, page~\pageref{fig:ir_example}): we see that in the training data (white background lines) all females have worse scores than males and appear below them in the ranking. As stated before, a standard LTR algorithm is therefore likely to treat $A_1$ as an important decision criterion and assign a high weight to it. Because Fair-PG-Rank optimizes for equity of exposure, it ensures that documents receive exposure in proportion to their relevance measure. As such, Fair-PG-Rank ensures that a model does not discriminate based on a sensitive attribute, while assuming that the relevance data is correct and does not contain discriminating patterns.
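Before turning to the experiments, the group disparity measure defined above can be sketched as follows. This is an illustrative sketch only: we assume a logarithmic (DCG-style) position-bias model and per-group averaging of both exposure and utility; the names are ours, not the authors'.

```python
import math

def position_bias(rank):
    """DCG-style position bias (an assumed model; rank starts at 1)."""
    return 1.0 / math.log2(rank + 1)

def group_disparity(ranking, group_of, utility):
    """max[0, v(G0)/U(G0) - v(G1)/U(G1)] for one ranking (best item first).

    ranking:  list of item ids, best first
    group_of: dict mapping item id -> 0 (non-protected) or 1 (protected)
    utility:  dict mapping item id -> relevance/utility
    """
    exposure = {0: [], 1: []}
    merit = {0: [], 1: []}
    for rank, item in enumerate(ranking, start=1):
        g = group_of[item]
        exposure[g].append(position_bias(rank))
        merit[g].append(utility[item])
    avg = lambda xs: sum(xs) / len(xs)
    return max(0.0, avg(exposure[0]) / avg(merit[0])
                  - avg(exposure[1]) / avg(merit[1]))
```

With equal utilities, ranking all non-protected items first yields a positive disparity, while the mirrored ranking yields zero, mirroring the one-sided hinge in the paper's definition.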
\spara{Experiments.} Experiments are done on a synthetic dataset; on the German credit dataset~\cite{GermanCreditData}, using the group fairness definition in the learning objective; and on the Yahoo! LTR dataset~\cite{YahooData}, using the individual fairness definition in the learning objective. The synthetic dataset has two features for each document, where Feature 2 is corrupted for the minority group $G_1$. The results show that with increasing values of $\lambda$, the weight of the corrupted feature decreases. With the real-world datasets the authors show that the method works in a real-world setting, but they do not discuss its implications. The authors compare their method to~\cite{zehlike2018reducing}, but as their measure of disparate exposure is vastly different and the bias they focus on is not the same, it is not clear whether meaningful conclusions can be drawn from such comparisons. Further research is needed to understand how methods with opposing worldviews, conflicting understandings of equality of opportunity and different addressed biases can be compared in a meaningful way. \subsubsection{Pairwise Fairness for Rankings \cite{Beutel:2019:FRR:3292500.3330745}} \label{subsubsec:beutel} \spara{Fairness Definition.} This method is the first to introduce a pairwise fairness metric for ranking predictions. The authors set up their method as a component of a cascading recommender system; however, it only considers the final ranking of items. We therefore present the method here rather than in Section~\ref{sec:fair_recsys}, where we talk about fairness in recommendation systems. The framework is formalized as follows: each query consists of user features $\mathbf{U}_i$ for user $i$ and context features $\mathbf{C}$. Each ranking candidate $a$ is described by a feature vector $\featOfCand{a}$.
Then a ranker $\hat{f}$ is trained to predict user engagement, which relates to clicks $\hat{y}$ \emph{and} interactions after a click $\hat{z}$ (such as purchases, ratings, etc.), which are then mapped to a scalar value to rank items: $\hat{f}(\featureSet) = \predScore$. The focus of the fairness definition is on the risk for groups of ranked candidates to be under-recommended, where the group definition is binary, i.e. $\sensAttr_a \in \{0,1\}$ for candidate $a$. For this, the authors first define pairwise accuracy, which describes the probability that a clicked candidate is ranked above another relevant unclicked candidate for the same query: \[P\left(\hat f (\featOfCand{a}) > \hat f(\featOfCand{b}) \; | \; \score{a} > \score{b}\right)\] Pairwise fairness asks whether the pairwise accuracy is the same across the two groups: \[P\left(\hat f (\featOfCand{a}) > \hat f(\featOfCand{b}) \; | \; \score{a} > \score{b}, \sensAttr_a=0\right) = P\left(\hat f (\featOfCand{a}) > \hat f(\featOfCand{b}) \; | \; \score{a} > \score{b}, \sensAttr_a=1\right)\] To account for user engagement $z$, the definition is extended to compare only those candidates with each other that receive the same amount of engagement $\Tilde{z}$: \begin{align*} &P\left(\hat f (\featOfCand{a}) > \hat f(\featOfCand{b}) \; | \; \score{a} > \score{b}, \sensAttr_a=0, z_a=\Tilde{z}\right) = \\ &P\left(\hat f (\featOfCand{a}) > \hat f(\featOfCand{b}) \; | \; \score{a} > \score{b}, \sensAttr_a=1, z_a=\Tilde{z}\right) \quad \forall \Tilde{z} \end{align*} The authors further extend the definition to also consider group exposure in rankings, because two rankings could have the same pairwise accuracy across groups while systematically putting candidates of one group to lower ranks of the list.
To account for this, they split the definition into \emph{intra-group} pairwise fairness: \begin{align*} &P\left(\hat f (\featOfCand{a}) > \hat f(\featOfCand{b}) \; | \; \score{a} > \score{b}, \sensAttr_a=\sensAttr_b=0, z_a=\Tilde{z}\right) = \\ &P\left(\hat f (\featOfCand{a}) > \hat f(\featOfCand{b}) \; | \; \score{a} > \score{b}, \sensAttr_a=\sensAttr_b=1, z_a=\Tilde{z}\right) \quad \forall \Tilde{z} \end{align*} and \emph{inter-group} pairwise fairness: \begin{align*} &P\left(\hat f (\featOfCand{a}) > \hat f(\featOfCand{b}) \; | \; \score{a} > \score{b}, \sensAttr_a=0, \sensAttr_b=1, z_a=\Tilde{z}\right) = \\ &P\left(\hat f (\featOfCand{a}) > \hat f(\featOfCand{b}) \; | \; \score{a} > \score{b}, \sensAttr_a=1, \sensAttr_b=0, z_a=\Tilde{z}\right) \quad \forall \Tilde{z} \end{align*} Intra-group fairness indicates whether, across candidates from the same group, those that are more likely to be clicked are ranked above those less likely, while inter-group fairness describes whether mistakes of the ranker come at the cost of one particular group. \spara{Insights. } As it stands, the framework is not clearly classifiable into WAE or WYSIWYG, because the authors talk about click probability and user engagement without further specifying what these two are composed of. In particular, they do not say whether a measure of candidate merit is part of the click probability or not. Click-through rate (CTR) is usually defined to contain some measure of relevance~\cite{richardson2007predicting}; in that case $\score{}$ would contain a concept of merit (measured in $OS$) and hence the framework would correspond to WYSIWYG. The authors briefly mention that they assume the final ranking model $\hat f$ only operates on relevant documents, which supports the assumption that the underlying worldview of this framework is WYSIWYG. We face the same uncertainty when thinking about which EO framework this work corresponds to.
Without knowledge of the actual underlying estimation of click probability and user engagement, and without a statement on an individual's effort, it is not clear to which EO framework or worldview a mathematical fairness definition belongs. In fact, even though the authors state that they adopted a definition of ``equal opportunity'', depending on the exact definition of click probability and user engagement, their pairwise fairness definition could be mapped to any of the four equal opportunity definitions from Section~\ref{sec:02-four-frameworks}. An identification of the addressed bias is also not possible without the CTR definition. \spara{Experiments.} The experiments study the performance of the ranker with respect to a sensitive subgroup of candidates in a synthetic dataset, comparing the performance of this subgroup to the rest of the data, denoted by ``not subgroup''. The subgroup represents approximately 0.2\% of all items. The authors compare two versions of their approach: a model without any pairwise fairness constraint and one with an inter-group fairness constraint. Performance measures (pairwise accuracy) are aggregated across user engagement levels and averaged. Then the metrics for the subgroup and not-subgroup are divided, such that a ratio of 1 means perfect fairness, a value below 1 means that the average pairwise accuracy for the subgroup is lower than for the rest, and a value above 1 means that it is higher. Engagement $z$ is grouped into four levels. The overall pairwise fairness evaluation shows that the system under-ranks candidates from the subgroup when the subsequent level of engagement is low, but, interestingly, slightly over-ranks candidates from the subgroup when the subsequent level of engagement is high.
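The subgroup vs. not-subgroup accuracy ratio just described can be sketched as follows. The data layout (one record per clicked/unclicked comparison, tagged with the group of the clicked candidate) is a hypothetical simplification of the logged pairs, and the function names are ours.

```python
def pairwise_accuracy(pairs, g):
    """Fraction of pairs with f(clicked) > f(unclicked), among pairs whose
    clicked candidate belongs to group g.

    pairs: tuples (score_clicked, score_unclicked, group_of_clicked)
    """
    relevant = [(sc, su) for sc, su, grp in pairs if grp == g]
    return sum(sc > su for sc, su in relevant) / len(relevant)

def fairness_ratio(pairs, subgroup=1, rest=0):
    """Ratio of pairwise accuracies: 1 means parity, < 1 means the subgroup
    is under-ranked relative to the rest."""
    return pairwise_accuracy(pairs, subgroup) / pairwise_accuracy(pairs, rest)
```

For instance, if the ranker orders every non-subgroup pair correctly but only half of the subgroup pairs, the ratio is 0.5, signaling that the subgroup is under-ranked.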
The intra-group pairwise fairness evaluation shows that across all levels of engagement the model has more difficulty selecting the clicked candidate when comparing subgroup candidates than when comparing non-subgroup candidates. This is partly because the subgroup is small and less diverse. The inter-group fairness evaluation shows that across all levels of engagement the subgroup candidates are significantly under-ranked relative to the non-subgroup candidates. Further, the results show that the pairwise accuracy for non-subgroup candidates in inter-group pairs is notably higher than in intra-group pairs, suggesting that subgroup candidates, even when of interest to the user, are ranked below non-subgroup candidates. When optimizing for inter-group fairness, these disparities are mitigated and candidates from the subgroup receive more exposure (measured as the probability that candidate $a$ is ranked above item $b$). \subsection{Post-Processing Methods: Re-Ordering Ranked Items} Post-processing algorithms assume that a ranking model has already been trained. A predicted ranking is handed to the algorithm, which re-orders items to improve fairness. Most algorithms operate on a notion of group membership, meaning that certain groups are designated as protected while one group is non-protected. As such, post-processing methods often resemble the score-based fair ranking methods in their fairness concepts. We will use our running example (reusing Figure~\ref{fig:ad_example}) to illustrate the differences and similarities between the methods presented in this section, but we omit the non-sensitive features $X_1, X_2$ and $X_3$, which are irrelevant at this point.
\input{figs/merge-post-process-table} \noindent General advantages of post-processing methods are: \begin{enumerate}[(i)] \item The effect the algorithms have on a ranked output is easy to visualize and understand, because the original ranking before the application of a fairness method can be compared to its result in terms of how items are re-ordered and (in some cases) in terms of a loss in a ranking metric such as NDCG. \item Many of them provide a guaranteed share of visibility for the protected group in the ranking. \end{enumerate} General disadvantages are: \begin{enumerate}[(i)] \item The post-processing idea inherently suggests that fairness comes at the expense of accuracy, because the scores of a previously trained ranking model are taken as ``the true anchor point''. \item Algorithms with a fixed fairness constraint \cite{zehlike2017fa,zehlike2017matching,singh2018fairness} may cause a large loss in performance w.r.t. the unaware ranking. This may be the case if the score distribution of the protected group is much lower than that of the non-protected group. \end{enumerate} \subsubsection{\textsc{FA*IR}\xspace \cite{zehlike2017fa}} \label{subsubsec:FAIR} \spara{Fairness Definition.} The fairness definition is based on the assumption that rankings are fair when the decisions on candidate placements are drawn from a Bernoulli distribution (coin tosses). The algorithm ensures that the number of protected candidates does not fall far below a required minimum percentage $p$ at any point in the ranking, by formulating fairness as a statistical significance test of whether a ranking was likely to have been produced by a Bernoulli process. \textsc{FA*IR}\xspace operates in a binary group setting and is concerned with disparate impact, as it does not take any notion of merit into account for its re-ranking strategy.
It accepts a ranking prefix of length $k$ as fairly representing the protected group if the number $\boldsymbol{\tau}_{\group{}}$ of protected items is not significantly lower (at significance level $\alpha$) than what the minimum target proportion $p$ would require: $F\left(\boldsymbol{\tau}_{\group{}}; k, p\right)>\alpha$, with $F$ being the binomial cumulative distribution function. If this condition holds for each ranking prefix $k=1, \ldots, n$, the entire ranking is accepted. \spara{Insights.} The existence of a minimum proportion for a protected group suggests a WAE worldview and a main concern with pre-existing bias, which has to be accounted for using said minimum proportion. However, the WYSIWYG worldview can also be adopted if $p$ is set to a low value, which permits a gradual transition from WYSIWYG to WAE. This, however, is a mere side effect of the method and not to be seen as a desired goal of the algorithm (as opposed to~\cite{zehlike2017matching}). Critically, if $p$ is chosen too high, the framework will rank a lot more protected items to the highest positions than non-protected ones. This would be justified only under libertarian EO, or, more importantly, if one assumes that the protected group is actually a lot more relevant (they score very high in construct space), but the measurements in observable space are extremely biased or the scoring model produces inverted predictions for the protected group in decision space (this translates into low values in either $OS$ or $DS$). While such cases do exist, it is questionable whether one should actually attempt to correct such a flawed ranking, rather than disregard it entirely. The framework assumes that differences between groups in the relevance distribution are an artifact of different circumstances for each group and therefore relates to substantive EO.
As we stated in Section~\ref{sec:02-four-frameworks}, WAE can relate to both of the described substantive EO frameworks, and it is mostly the algorithmic definition itself that determines which of the two it belongs to. For \textsc{FA*IR}\xspace, however, both mappings are possible, because the algorithm is agnostic to the source of relevance differences and merely implements an affirmative action policy. We therefore conclude that the preliminaries for luck-egalitarian EO are met. As stated above, the method ensures a given minimum proportion of a protected group in every prefix of a post-processed ranking. As an example consider Figure~\ref{fig:postProc_example}, where a model has predicted relevance scores $\predScore{}$ based on which ranking $\hat \boldsymbol{\tau}_1$ in Figure~\ref{fig:postProc_example} is produced. This ranking is fair according to the \textsc{FA*IR}\xspace algorithm if the input is $p<0.5, \alpha=0.1$. If $p \geq 0.5$ the candidates have to be reordered. Ranking $\hat \boldsymbol{\tau}_2$ in Figure~\ref{fig:postProc_example} is produced by \textsc{FA*IR}\xspace with $p=0.7, \alpha=0.1$. \spara{Algorithm.} For performance reasons the algorithm precomputes a table that contains the minimum number of protected candidates at each position in the ranking given a minimum proportion $p$ (see Table~\ref{tbl:ranked_group_fairness_table}). This is done by computing $F^{-1}\left(\alpha; k, p\right)$, the percent point function (inverse) of the binomial CDF. Then the two groups are ranked separately by decreasing scores and merged into one ranking according to the table. Whenever the ordering by score would violate the minimum number requirement, a protected item is put at the respective position. \spara{Experiments.} The experimental evaluation is done on three real datasets: COMPAS~\cite{COMPASData}, German credit~\cite{GermanCreditData}, and SAT~\cite{SATData}.
The experimental evaluation shows the effects of the methods on performance in terms of NDCG and maximum rank loss. All scenarios under consideration relate to questions of distributive fairness, where certain benefits (scholarship, visibility, pardon) are to be distributed fairly across the two groups. \subsubsection{Continuous Fairness with Optimal Transport \cite{zehlike2017matching}} \spara{Fairness Definition and Algorithm.} This work defines a mathematical framework, CFA$\theta$, to continuously interpolate between the worldviews WYSIWYG and WAE. The authors argue that legally WYSIWYG maps with individual fairness, while WAE maps with current anti-discrimination law, which defines group fairness in terms of \emph{statistical parity of outcome}. The question of what a fair distribution of outcome is depends on the estimated extent of indirect discrimination and pre-existing bias in the scoring model. Interestingly, this means that any fairness definitions departing from group fairness as statistical parity and individual fairness as meritocratic scores do not yet have any legal meaning. Furthermore, the authors state that current rulings in anti-discrimination cases only involve actual unfair decisions and not ``softer'' disadvantages such as reduced visibility in a ranking. The WYSIWYG worldview corresponds to the meritocratic ideal and hence to the formal EO framework, while WAE corresponds to Rawlsian EO. This is because an individual's measurable effort (here, their raw score) is seen to be drawn from different distributions $\mu_k$ per group in $OS$, while in $CS$ there exists only one $\nu$, meaning that all groups have essentially the same distribution of true effort.
The framework can handle multiple groups that are defined as a partition over all individuals: $\featureSet = \bigcup_{k\in\{0,1\}^N} \groupFunc^{-1}(k)$, where $\groupFunc: \featureSet \rightarrow \{0,1\}^N$ is a mapping whose $i$-th component is 1 if an individual carries the $i$-th trait from the set of $N$ features. The $k$-th group is therefore $\group{k} := \groupFunc^{-1} (k)$. The authors explicitly include all features in the group definition instead of only taking certain attributes that are legally protected into account. This broad definition has the advantage that it can handle non-protected features that serve as proxies for protected ones. The framework assumes that a potentially biased scoring function $S$ is given, which maps from the space of individual traits $\featureSet$ to an $n$-dimensional vector, $S: \featureSet \rightarrow \mathds{R}^n$, and that each group's score forms a probability distribution $\mu_k$. The combined score distribution is called $\mu$ and corresponds to a metric from $OS$, which as usual is prone to pre-existing bias and other systematic errors with different group skews. The authors then define a score distribution $\nu_k = \mu_k \circ T^{-1}_k$ to be the fair score representation of group $k$, obtained by an optimal transport map $T_k: \mathds{R}^n \rightarrow \mathds{R}^n$. Depending on the worldview, $\nu$ is defined differently: in WYSIWYG $\mu = \nu_k$ for all $k$ groups, while in WAE any differences between the $\mu_k$ are solely a product of bias and there exists a single $\nu$ that is equal for all groups. In the former case the optimal transport map is the identity. In the latter, the WAE fair representation distribution $\nu$ that satisfies statistical parity is defined as the barycenter in Wasserstein space of the $\mu_k$, and a $T_k$ has to be found for each group to transform $\mu_k$ into $\nu$ while minimizing violations against decision maker utility and individual fairness.
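In one dimension these objects have a closed form: the optimal transport map between empirical score distributions is the monotone (sorted) matching, and the Wasserstein-2 barycenter is obtained by averaging quantile functions. A hedged sketch, assuming equal group sizes so that sorted scores align index-wise (the $\theta$ parameter anticipates the displacement interpolation the framework defines next; $\theta=0$ keeps the raw scores, $\theta=1$ maps every group onto the barycenter):

```python
import numpy as np

def fair_scores_1d(scores_by_group, theta):
    """Map each group's 1-D score distribution toward the Wasserstein-2
    barycenter of all groups. Sorting realizes the monotone optimal
    transport map; averaging sorted scores yields the barycenter's
    quantiles; theta linearly interpolates along the transport."""
    sorted_scores = {k: np.sort(np.asarray(v, dtype=float))
                     for k, v in scores_by_group.items()}
    barycenter = np.mean(list(sorted_scores.values()), axis=0)
    return {k: (1 - theta) * v + theta * barycenter
            for k, v in sorted_scores.items()}
```

This is only a sketch of the idea under strong assumptions (1-D scores, equal group sizes); the paper's framework works with general distributions in $\mathds{R}^n$.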
The framework further defines a displacement interpolation with a fairness parameter $\theta \in [0, 1]$, which allows one to transform $\mu_k$ into any distribution $\mu_k^\theta = \mu_k \circ (T^\theta)^{-1}$ between the WYSIWYG (or individual fairness) policy $\mu$ and the WAE (or group fairness) policy $\nu$. This means that a high $\theta$ puts more emphasis on group fairness, while a low one puts more emphasis on individual fairness, with $\mu^0 = \mu$ and $\mu^1 = \nu$. A notable advantage of this approach is that it does not rely on the existence of a distance metric between individuals, in contrast to many other methods~\cite{dwork2012fairness, lahoti2019ifair}. This is important because it is not clear how such a distance metric, if actually available in $OS$, would be less prone to biases and errors than a normal optimization metric. \spara{Insights.} As CFA$\theta$ moves the group distributions of predicted scores closer to each other, setting $\theta=1$ achieves statistical parity for each group throughout the ranking \emph{but not more}, in contrast to \textsc{FA*IR}\xspace, which can push candidates from the protected group even higher. Such a setting would result in the same ranking $\hat \boldsymbol{\tau}_2$ as shown in Figure~\ref{fig:postProc_example}, given that there are 50\% males and females in the dataset. However, if there were only 25\% females in the dataset (e.g., candidates $f$ and $k$ are male), the algorithm will produce ranking $\hat \boldsymbol{\tau}_3$ as shown in Figure~\ref{fig:postProc_example} when given $\theta=1$ as input. Note that with such a dataset the method cannot return a ranking in which the now only two female candidates are ranked among the admitted (i.e. into the top-4). \spara{Experiments.} The experimental evaluation is done on a synthetic dataset with 100,000 data points, a score feature and two sensitive features.
Group membership for this experiment is defined by all combinations of the values of the sensitive features. Rankings are produced based on the score column, and the performance of a fair ranking with different $\theta$ is measured in terms of NDCG. The fairness of a ranking is measured as the share of each group within the top $n$ positions, with $n$ ranging from 10 to 1000. A second experiment is conducted on the LSAC dataset~\cite{LSACData}. The experiments confirm the general properties of post-processing methods: it is clearly visible how groups are distributed more evenly across all positions with increasing values of $\theta$. However, depending on the differences between the $\mu_k$, a processed ranking based on the fair representation $\nu$ can show significant declines in performance measures w.r.t. the raw score ranking. \subsubsection{Fairness of Exposure \cite{singh2018fairness}} \label{subsubsec:FairnessOfExposure} \spara{Fairness Definition. } The fairness objective of this work is set as a linear combination $\mathbf{a}^T\mathbf{P}^{\boldsymbol{\tau}}_{a,i}\mathbf{v}=h$, with $\mathbf{a}$ being a vector encoding group membership, $\mathbf{P}^{\boldsymbol{\tau}}_{a,i}$ the probability that $\hat f$ places candidate $a$ at rank $i$ in $\boldsymbol{\tau}$, and $\mathbf{v}$ reflecting the importance of a position in a ranking. This equation is solved under three different group fairness constraints based on a definition of exposure that a candidate $a$ receives under $\mathbf P^{\boldsymbol{\tau}}$: \begin{equation*} \operatorname{Exposure}(\featOfCand{a}|\mathbf{P}^{\boldsymbol{\tau}}) = \sum^{k}_{i=1} \mathbf{P}^{\boldsymbol{\tau}}_{a,i}\posBias{i} \end{equation*} with $\posBias{i}$ being the position bias of position $i$ in ranking $\boldsymbol{\tau}$.
The average exposure of a group $\group{}$ is defined as follows: \begin{equation*} \operatorname{Exposure}(\group{}|\mathbf{P}^{\boldsymbol{\tau}}) = \frac{1}{|\group{}|}\sum_{\featOfCand{a} \in \group{}}\operatorname{Exposure}(\featOfCand{a}|\mathbf{P}^{\boldsymbol{\tau}}) \end{equation*} The goal is to distribute exposure fairly between two groups $\group{0}$ and $\group{1}$ by use of the following three definitions: \begin{enumerate} \item \textbf{Demographic Parity:} states that the average exposure of the groups shall be equal: $\operatorname{Exposure}(\group{0}|\mathbf{P}^{\boldsymbol{\tau}}) = \operatorname{Exposure}(\group{1}|\mathbf{P}^{\boldsymbol{\tau}})$. \item \textbf{Disparate Treatment:} requires equity of exposure across groups, i.e. the average exposure in relation to the average utility should be equal across groups: \begin{equation} \frac{\operatorname{Exposure}(\group{0}|\mathbf{P}^{\boldsymbol{\tau}})}{\utilityThreePara{}{\boldsymbol{\tau}}{\group{0}}} = \frac{\operatorname{Exposure}(\group{1}|\mathbf{P}^{\boldsymbol{\tau}})}{\utilityThreePara{}{\boldsymbol{\tau}}{\group{1}}} \end{equation} \item \textbf{Disparate Impact:} is measured in terms of the disparate click-through rate~\cite{richardson2007predicting} per group, which shall be equal across groups given the groups' average utility: \begin{equation*} \operatorname{CTR}(\group{}|\mathbf{P}^{\boldsymbol{\tau}}) = \frac{1}{|\group{}|}\sum_{a \in \group{}} \sum^{k}_{i=1}\mathbf{P}_{a,i}^{\boldsymbol{\tau}} \score{a} \posBias{i} \end{equation*} with $\score{a}$ being the relevance of candidate $a$.
\begin{equation} \frac{\operatorname{CTR}(\group{0}|\mathbf{P}^{\boldsymbol{\tau}})}{\utilityThreePara{}{\boldsymbol{\tau}}{\group{0}}} = \frac{\operatorname{CTR}(\group{1}|\mathbf{P}^{\boldsymbol{\tau}})}{\utilityThreePara{}{\boldsymbol{\tau}}{\group{1}}} \end{equation} \end{enumerate} The definition of demographic parity addresses the problem of pre-existing bias and corresponds to the WAE framework, as it tries to balance visibility across groups independently of their performance in $OS$ (note however that a group's exposure depends on $\mathbf{P}^{\boldsymbol{\tau}}$ and may hence indirectly depend on a utility measure, if $\mathbf{P}^{\boldsymbol{\tau}}$ is calculated based on document utility). The definition assumes that $CS \nsim OS$ and therefore that an individual's true effort is different from the measured effort, which the demographic parity definition accounts for. As the authors do not specify their belief about $CS$, it is not clear whether this demographic parity definition corresponds to Rawlsian or luck-egalitarian EO. However, as visibility is equalized without any assumptions on an individual's true effort, we would argue that the conditions for a luck-egalitarian framework are met. The definition of disparate treatment explicitly addresses the technical bias of a ranking, also known as position bias, by ensuring that all documents of the same utility receive equal visibility. This corresponds to the formal EO framework. The method also corresponds to the WYSIWYG worldview because document utility is measured through features from observable space without taking into account that a biased observation process may exist, hence $CS \sim OS$. Note that it does not necessarily correspond to individual fairness, because utility and exposure are averaged across individuals of a group. This can lead to a downgrading of high-scoring individuals in otherwise badly performing groups.
Note that if this artefact alone were considered for the mapping to an equal opportunity framework, the best choice would actually be libertarian EO, because items are no longer assigned to positions based on their own utility alone, but at least partly based on their group membership. \spara{Insights.} The definition of disparate impact is misleading in two ways, because it does not comply with the \emph{legal} definition of disparate impact, which is described solely in terms of the deviation from statistical parity. First, the click-through rate contains a notion of document relevance; second, it is set relative to the average group utility, which actually includes aspects of libertarian EO rather than substantive EO, as mentioned before. Statistical parity, in contrast, does not consider any relevance measure whatsoever, precisely because it assumes that these very measurements are subject to pre-existing biases and a biased mapping from $CS$ to $OS$. As the given definition mostly corresponds to formal EO and a WYSIWYG worldview, it would be more appropriate to label it as a different version of disparate treatment that is concerned with click-through rate instead of exposure. \spara{Algorithm.} The algorithmic framework is implemented as an ILP that maximizes ranking utility given one of the above constraints translated into a scalar $h$ (contrary to~\citet{biega2018equity}, who constrain quality and optimize for fairness): \begin{argmaxi}|l| {\mathbf{P}}{\score{}^T\mathbf{Pv}}{}{} \addConstraint{\mathds{1}^T\mathbf{P}}{=\mathds{1}^T}{} \addConstraint{\mathbf{P}\mathds{1}}{=\mathds{1}}{} \addConstraint{0 \leq \mathbf{P}_{a,i}}{\leq 1}{} \addConstraint{\mathbf{a}^T\mathbf{Pv}}{=h}{} \end{argmaxi} Depending on the respective definition of fairness, the outcome rankings will look quite different.
Let us assume that the model is absolutely sure about where to put each candidate, such that $\mathbf{P}$ becomes the identity matrix and the exposure of a group is calculated as the sum of each group member's position bias in the ranking. This gives us a group exposure of 2.13 for the male group and 1.15 for the female group for the ranking $\hat \boldsymbol{\tau}_1$ in Figure~\ref{fig:postProc_example}. Let us consider the demographic parity objective: here the exposure should be the same for both groups. As the position bias $\mathbf v$ is a constant value, the algorithm will modify the scores $\predScore{}$ until the parity objective is met. A possible solution is the ranking $\hat \boldsymbol{\tau}_4$ as shown in Figure~\ref{fig:postProc_example}, with a group exposure of 1.66 for the male group and 1.62 for the female group. (We note at this point that in college admissions position bias does not play much of a role and that a more appropriate example would be, e.g., a search scenario. We applied the running example to make the outcome differences and similarities across methods visible.) The other two objectives work in a similar manner, except that their objective functions take a measure of ranking utility into account. \spara{Experiments.} The experiments are framed within three different scenarios of unfairness: biased allocation of opportunity (in job candidate rankings), misrepresentation of real-world distributions (biased Google image search for CEO), and fairness as freedom of speech (equality of voice within new media channels like YouTube or Twitter). For these, the authors create a synthetic dataset with 100,000 entries and a binary protected attribute. Furthermore, they use the real-world YOW news recommendation dataset~\cite{YowData}.
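The exposure bookkeeping behind such deterministic examples can be sketched as follows (the logarithmic discount is our stand-in for the generic position bias $\posBias{i}$, so concrete numbers will differ from those in the example above):

```python
import math

def position_bias(i):
    # assumed log discount; the paper leaves the position bias v_i generic
    return 1.0 / math.log2(i + 1)

def group_exposure(ranking, group):
    """Average exposure of `group` in a deterministic ranking (P is a
    permutation matrix): each item's exposure is simply the bias of the
    position it occupies, averaged over the group's members."""
    positions = [pos for pos, item in enumerate(ranking, start=1)
                 if item in group]
    return sum(position_bias(pos) for pos in positions) / len(positions)
```

A demographic-parity check then reduces to comparing `group_exposure` for the two groups.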
\subsubsection{Equity of Attention \cite{biega2018equity}} \label{subsubsec:equityOfAttention} \spara{Fairness Definition.} This work is one of the few from the FAccT community to address fairness as an issue that arises from the inherent technical bias of a ranking itself. As stated before, each ranking position carries an inherent position bias, with attention decreasing sharply as the position number increases, meaning that even if all items had the same relevance, those at the top would receive a lot more attention than others at lower ranks. The authors frame this discrepancy as a problem of distributive individual fairness, where they want to achieve equity of attention on the level of individuals: the attention an item gets from users should be proportional to its relevance to the query. Assuming that relevance decreases linearly and attention decreases geometrically, there is a necessary discrepancy between the attention loss that subjects receive and their relevance decrease. This work proposes a post-processing algorithm that optimizes equity of user attention with a constrained overall relevance loss (in contrast to~\ref{subsubsec:FairnessOfExposure}, which constrains fairness and optimizes for relevance). As a single ranking cannot be fair according to their definition due to position bias, the authors suggest an approach in which unfairness is additive over time.
Their fairness definition ensures that \emph{over time} the attention that equally well-suited candidates $a$ and $b$ receive across $m$ rankings $\boldsymbol{\tau}_{1 \ldots m}$ is proportional to their relevance: \begin{equation} \frac{\sum^{m}_{i=1} \attention{\boldsymbol{\tau}_i}{a}}{\sum^{m}_{i=1} \utilityThreePara{}{\boldsymbol{\tau}_i}{a}} = \frac{\sum^{m}_{i=1} \attention{\boldsymbol{\tau}_i}{b}}{\sum^{m}_{i=1} \utilityThreePara{}{\boldsymbol{\tau}_i}{b}} \end{equation} Hence unfairness is measured as the accumulated difference between the attention of items and their relevance: \begin{equation} \operatorname{unfairness}(\boldsymbol{\tau}_1, \ldots, \boldsymbol{\tau}_m) = \sum^{n}_{a=1} \left| \sum^{m}_{i=1} \attention{\boldsymbol{\tau}_i}{a} - \sum^{m}_{i=1} \utilityThreePara{}{\boldsymbol{\tau}_i}{a} \right| \end{equation} \spara{Insights. } The authors relate their work to the notion of individual fairness by~\cite{dwork2012fairness} and treat group fairness (which they refer to as equality of attention) as a special case, in which the utility distributions are uniform across all rankings: $ \utilityThreePara{}{\boldsymbol{\tau}}{a} = \utilityThreePara{}{\boldsymbol{\tau}}{b}, \forall a, b$. It therefore relates to the formal EO framework and a WYSIWYG worldview. Note that this understanding of group fairness cannot account for biases and errors in the data that manifest differently across groups, meaning that the error for the protected group might be high while the data for the non-protected group is correct. Their definition of group fairness therefore corresponds explicitly to a WYSIWYG worldview and contrasts with most of the other definitions in the literature. Additionally, the method is capable of addressing emergent bias, which may result from online learning algorithms that learn through user feedback and click data.
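The accumulated unfairness measure can be sketched directly from the definition (the item-keyed dictionaries per ranking are our representation choice, not the paper's):

```python
def unfairness(attention_per_ranking, relevance_per_ranking):
    """Equity-of-attention unfairness over m rankings: per item, the
    absolute gap between accumulated attention and accumulated relevance,
    summed over all items. Each argument is a list of dicts mapping
    item -> value, one dict per ranking."""
    items = attention_per_ranking[0].keys()
    return sum(
        abs(sum(att[a] for att in attention_per_ranking)
            - sum(rel[a] for rel in relevance_per_ranking))
        for a in items
    )
```

By construction the measure is zero whenever accumulated attention matches accumulated relevance item by item.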
\spara{Algorithm.} The fairness definition is implemented as a constrained optimization problem (which is then translated into an Integer Linear Program) in which unfairness is minimized subject to constraints on the maximum NDCG-loss in an online manner. This means that the algorithm allows unfairness minimization over time, as subjects can enter and leave the system at any point. The algorithm reorders a given ranking such that unfairness is minimized given the cumulative attention and relevance distributions seen so far: \begin{mini}|l| {}{\sum_a \left| \sum^{l-1}_{i=1} \attention{\boldsymbol{\tau}_i}{a} + \attention{\boldsymbol{\tau}_l}{a} - \left(\sum^{l-1}_{i=1} \utilityThreePara{}{\boldsymbol{\tau}_i}{a} + \utilityThreePara{}{\boldsymbol{\tau}_l}{a}\right) \right| }{}{} \addConstraint{\operatorname{NDCG@k}(\boldsymbol{\tau}_l, \boldsymbol{\tau}_{l^*}) \geq \theta}{}{} \end{mini} with $\boldsymbol{\tau}_l$ being the current and $\boldsymbol{\tau}_{l^*}$ the reordered ranking, and $\theta$ being the quality constraint, which can be lowered to allow more fairness in the rankings. As this method measures the attention each item received over time given its relevance, we have to consider several rankings to illustrate its effect. As attention is defined through position bias too, just as in~\cite{singh2018fairness}, let us come back to Figure~\ref{fig:postProc_example}. It is likely that the first few rankings will look like ranking $\hat \boldsymbol{\tau}_1$ in Figure~\ref{fig:postProc_example}. However, at some point the received relative attention of, e.g., candidate $f$ w.r.t. candidate $e$ will be too low, given that the scores decrease linearly while the position bias decreases geometrically. If this happens, candidates $e$ and $f$ will be swapped in the next ranking produced by the model.
\spara{Experiments.} The experiments are run on two real datasets, AirBnB~\cite{AirBnBData} and StackExchange~\cite{StackData}, and a synthetic dataset, each with two different models for attention gains by position (geometric attention decrease, and all attention going to the first position). \begin{itemize} \item \textbf{Synthetic Data:} Experiments are set up with three different relevance score distributions (uniform, linear, exponential) and the aforementioned attention models.
%
In all cases the algorithm shows periodic behavior under lab conditions, meaning that every $x$ rounds it brings unfairness to 0. Furthermore, unfairness is not a steady state but will grow with each new ranking for each individual item. \item \textbf{AirBnB:} For these experiments rankings were created from AirBnB apartment listings in Hong Kong (4529 items), Boston (3944 items) and Geneva (1728 items) under two scenarios -- first always using the same query, second using varying queries.
%
The results verify that the difference between the distributions for attention and relevance is generally huge in real-world datasets, and the question of whether unfairness can be brought to zero is highly dependent on the dataset at hand.
%
In all cases, only if $\theta = 0$ (i.e. no quality-ensuring constraint) does unfairness not grow over time. \item \textbf{StackExchange:} The experiments show that individual subjects appear in relatively few result rankings, causing an extended fairness amortization time. \end{itemize} \section{Fairness in Other Domains: Recommendation and Matching} \label{sec:fair_recsys} Today many web applications with search functionality also implement a recommendation system that provides personalized search results to the user, with items dedicated precisely to their interests.
The main objective of recommender systems is to facilitate transactions between multiple stakeholders, in which personalization plays a key role, and therefore fairness issues for more than one group of participants have to be considered. RecSys tasks can be grouped into three different types: finding good items that meet a user's interest, optimizing the utility of users, and predicting ratings of items for a user~\cite{kamishima2018recommendation}. They often consist of multiple models, must balance multiple goals, and are difficult to evaluate due to extreme and skewed sparsity~\cite{Beutel:2019:FRR:3292500.3330745}. Examples of such systems are user recommendations in web shops or on streaming platforms. \begin{figure}[t] \centering \includegraphics[width=0.8\textwidth]{figs/recsys.png} \caption{Principal components of a recommender system. A profile learner determines user profiles from training examples (upper right), which are stored in a profile database. When the user issues a query, the filtering component processes the general search result and returns a list of user-specific recommendations. The interactions of the user are collected as feedback and stored in a feedback database. This data is used in turn to update the model of the profile learner. Image taken from~\cite{de2015semantics}.} \label{fig:recsys:functional} \end{figure} Figure~\ref{fig:recsys:functional} shows the functional principle of a recommender system. A profile learner determines user profiles from training examples (upper right), which are stored in a profile database. When the user issues a query, the filtering component processes the general search result (and additional items to be displayed) and returns a list of user-specific recommendations. The interactions of the user with the result are collected as feedback and stored in a feedback database. This data is used in turn to update the model of the profile learner.
\citet{burke2017multisided} identifies three different stakeholders in recommender systems, namely the consumers, the producers and the platform itself. They further identify three different types of recommender systems, distinguished by the respective stakeholders that are considered for fairness. When considering fairness for the recommended items, they speak of producer-fairness, while fairness for the users of the system is considered under the term consumer-fairness. In a system that satisfies consumer fairness, the disparate impact of recommendation on protected classes of consumers has to be taken into account, while the fairness of outcomes is not considered for producers. Producer fairness regards the producer side of the system, but not the consumer side, e.g. to ensure market diversity and avoid monopoly domination. At the same time the producers are passive system participants that do not seek out recommendation opportunities, but instead wait for users to request recommendations from the platform. The third fairness constraint simultaneously takes producers and consumers into account and has to be applied when protected groups exist among both stakeholders. In our paper's language, consumers would be called users $\userSet{}$ and producers would be called candidates $\candidateSet{}$. Recommender systems often use rankings to present the most suitable candidates to their users, and the question of fairness for all stakeholders has been raised from different perspectives. We therefore give a short overview of the topic of fairness for recommender systems, and point the curious reader to a recent extensive tutorial by~\citet{ekstrand2019fairness} for additional information.
In this section we will use the terms users and consumers, as well as candidates and producers, interchangeably to demonstrate that these concepts are analogous to each other, and to illustrate the similarities between the previously described methods and those that follow in this section. All methods presented in this section can be considered in-processing approaches. \subsection{Recommendation Independence} \spara{Fairness Definition.} The work in~\cite{kamishima2018recommendation} introduces a concept of fairness as \emph{recommendation independence}: an unconditional statistical independence between a recommendation outcome and a specified piece of information, i.e. predictions of ratings should not be based on some previously specified feature. It is formalized as a regularization-based approach that can deal with the encoding of sensitive information in the first and second moments of distributions, which means that independence shall hold in terms of mean \emph{and} standard deviation of the distribution of predicted user ratings. A binary sensitive feature is specified by the user or the system manager. Users are represented by a random variable $\featOfUser{}$, $\featureSet$ represents items, $\sensFeatSet$ sensitive features and $\score{}$ ground truth ratings. The $i$-th instance of the training dataset $\recsysTrainSet{train}$ is a 4-tuple $(\featOfUser{i}, \featOfCand{i}, \sensFeatSet_i, \score{i})$.
The rating predictions $\predScore{}$ are calculated using a modified loss function in which an independence term is maximized, meaning that the higher $\operatorname{ind}(\predScore{}, \sensFeatSet)$, the less statistically dependent are $\predScore{}$ and $\sensFeatSet$: \[\sum_{(\featOfUser{i}, \featOfCand{i}, \sensFeatSet_i, \score{i}) \in \recsysTrainSet{train}} \operatorname{loss}(\score{}, \predScore(\featOfUser{i}, \featOfCand{i}, \sensFeatSet_i)) - \eta \operatorname{ind}(\predScore{}, \sensFeatSet) + \operatorname{reg}(\Theta)\] The method can therefore be seen as an in-processing approach. The paper gives three ways in which independence can be measured, all aiming to produce a rating model in which the distributions of predicted ratings are indistinguishable across the values of the sensitive feature: \begin{enumerate} \item \textbf{Mean Matching:} the means of two normal distributions shall match \[-\left(\frac{\mathbb{S}^{(0)}}{N^{(0)}} - \frac{\mathbb{S}^{(1)}}{N^{(1)}}\right)^2\] where $\mathbb{S}^{(\group{})}$ is the sum of predicted ratings per group and $N^{(\group{})}$ is the number of training items per group. \item \textbf{Distribution Matching:} The similarity between the two distributions $Pr[\predScore{}|\sensAttr=0]$ and $Pr[\predScore{}|\sensAttr=1]$ is measured in terms of the negative Bhattacharyya distance: \[\frac{1}{2}\ln{\left(\frac{2\sqrt{\mathbb{V}^{(0)}\mathbb{V}^{(1)}}}{\mathbb{V}^{(0)} + \mathbb{V}^{(1)}}\right)} - \frac{\left(\frac{\mathbb{S}^{(0)}}{N^{(0)}} - \frac{\mathbb{S}^{(1)}}{N^{(1)}}\right)^2} {4(\mathbb{V}^{(0)} + \mathbb{V}^{(1)})}\] where $\mathbb{V}^{(\group{})}$ is the variance of the training items per group.
\item \textbf{Mutual Information:} is defined as a degree of statistical independence through a differential entropy function for normal distributions: \[-I(\predScore{};\sensFeatSet) = -(H(\predScore{}) - H(\predScore{}|\sensFeatSet))\] where $H(\predScore{}) = \frac{1}{2}\ln 2 \pi e \mathbb{V}$ and $H(\predScore{}|\sensFeatSet) = \frac{1}{2}\ln 2 \pi e \mathbb{V}^{(s)}$. \end{enumerate} \spara{Insights. } It is not explicitly stated which worldview the authors adopt; however, the goal of equalizing rating distributions can be associated with the WAE worldview, as the fairness definitions do not contain a measure of merit. Furthermore, the fairness definition is agnostic to the underlying ground truth distributions and would work under the assumption that true effort can be shaped by circumstances. It therefore includes the luck-egalitarian idea of EO. Note however that this analysis only applies if the parameter $\eta$ is set high enough. As in Section~\ref{subsubsec:DELTR}, the parameter shifts the framework from WAE and luck-egalitarian EO towards WYSIWYG and formal EO when it takes lower values. The definition implicitly addresses the problem of pre-existing biases. Depending on the choice of the protected feature (e.g. popularity), it may also be capable of addressing emergent bias. \spara{Algorithm.} The algorithm is implemented using probabilistic matrix factorization to predict ratings: \[\predScore(\featOfUser{}, \featOfCand{}, \sensFeatSet) = \mu^{(\group{})} + b^{(\group{})}_{\featOfUser{}} + c^{(\group{})}_{\featOfCand{}} + \mathbf{p}^{(\group{})\top}_{\featOfUser{}}\mathbf{q}^{(\group{})}_{\featOfCand{}}\] where $\mu, b_{\featOfUser{}}$ and $c_{\featOfCand{}}$ are global, per-user and per-item bias parameters respectively, and $\mathbf{p}_{\featOfUser{}}$ and $\mathbf{q}_{\featOfCand{}}$ are $K$-dimensional parameter vectors, which represent the cross effects between users and items.
These parameters are learned by minimizing the loss function, where the loss is expressed as a squared $L_2$-norm: \[\sum_{(\featOfUser{i}, \featOfCand{i}, \sensFeatSet_i, \score{i}) \in \recsysTrainSet{train}} (\score{i} - \predScore(\featOfUser{i}, \featOfCand{i}, \sensFeatSet_i))^2 - \eta \operatorname{ind}(\predScore{}, \sensFeatSet) + \operatorname{reg}(\Theta)\] All independence measures are differentiable and can hence be optimized efficiently using conjugate gradient methods. \spara{Experiments. } The experiments are done on three datasets with different sensitive features: \begin{enumerate} \item The ML1M \textbf{Movielens dataset}~\cite{harper2015movielens} contains 1M movie ratings. Two sensitive features were chosen for two separate settings: first, whether the movie's release year was before 1990, and second, the user's gender.
% The first setting investigates fairness on the side of the producer, whereas the second relates to a fairness concern w.r.t. the users of the system.
\item The \textbf{Flixster dataset}~\cite{jamali2010matrix} is also a movie recommendation dataset and contains almost 10M entries. The popularity of an item was chosen as the protected feature: movies that received the most user ratings (top 1\%) belong to one group, while all movies that received fewer ratings form the other group. \item The \textbf{Sushi dataset}~\cite{kamishima2003nantonac} contains around 50,000 data points of users and their preferences when ordering sushi from 25 different restaurants.
% Three different choices of sensitive features were investigated: age (whether a person was a teen or not), gender (whether a user was male or female) and seafood (whether or not a type of sushi was seafood).
\end{enumerate} The performance evaluation uses the mean absolute error, while the independence (i.e. fairness) evaluation uses the Kolmogorov--Smirnov test. The latter evaluates the area between two empirical cumulative distributions and should be close to zero for high fairness.
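To make the three independence terms concrete, the following is a minimal NumPy sketch (our own illustration, not the authors' implementation), assuming a vector \texttt{pred} of predicted ratings and a binary group vector \texttt{s}:

```python
import numpy as np

def mean_matching(pred, s):
    """Negative squared difference of per-group mean ratings."""
    return -(pred[s == 0].mean() - pred[s == 1].mean()) ** 2

def distribution_matching(pred, s):
    """Negative Bhattacharyya distance between per-group normal fits."""
    p0, p1 = pred[s == 0], pred[s == 1]
    v0, v1 = p0.var(), p1.var()
    return (0.5 * np.log(2 * np.sqrt(v0 * v1) / (v0 + v1))
            - (p0.mean() - p1.mean()) ** 2 / (4 * (v0 + v1)))

def mutual_information(pred, s):
    """Negative mutual information via differential entropies of normal fits;
    here we weight the conditional entropy by the group frequencies."""
    h = 0.5 * np.log(2 * np.pi * np.e * pred.var())
    h_cond = sum((s == g).mean() * 0.5 * np.log(2 * np.pi * np.e * pred[s == g].var())
                 for g in (0, 1))
    return -(h - h_cond)
```

All three terms attain their maximum of 0 when the per-group rating distributions coincide and become more negative as the distributions diverge, so subtracting $\eta \operatorname{ind}(\predScore{}, \sensFeatSet)$ in the loss pushes the model towards independence.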
The experiments compare all three independence measures to each other. In all cases independence can be achieved, though at a small cost in accuracy. \subsection{Familiarity Bias and Superstar Economics} \spara{Fairness Definition.} The work in~\cite{mehrotra2018towards} addresses the problem of disparities in the exposure of items to users due to the combination of two factors: the pre-existing familiarity of the user with certain items (someone who likes action movies will likely know Tom Cruise) and the current recommendation strategies of two-sided markets (``superstar economics''). As recommendation systems optimize for relevance, this leads to a lock-in of popular and relevant suppliers and thus causes suppliers at the tail end of the exposure distribution to struggle to attract consumers. This may lead to dissatisfaction with the marketplace. The paper aims to understand the interplay of relevance, fairness and satisfaction, and investigates consumer relevance and supplier fairness on a music streaming platform. It introduces a notion of multinomial group fairness, which requires that the content shown to users be spread well across the wide long-tailed popularity spectrum, rather than focusing on a small subset of popular artists. From the popularity distribution of all artists, $K$ bins of equal size are created and artists are grouped into these bins depending on their popularity: \[\Psi(s) = \sum_{i=1}^{K} \sqrt{\left| P_i \cap T(s) \right|}\] where $P_i$ is the set of artists that belong to popularity bin $i$, $s$ is the recommended set, and $T(s)$ is the collection of artists of the tracks $t_j$ in $s$. This definition rewards sets that are \emph{diverse} in terms of the different artist popularity bins represented and, as per the current definition, \emph{fair} to different popularity bins of suppliers. There is more benefit to selecting an artist from a popularity bin from which no artist has been chosen yet.
As soon as an artist is selected from a bin, other artists from the same bin have diminishing gain owing to the square root function. \spara{Insights. } The work does not explicitly express the authors' beliefs about the mapping $g$ from construct space to observable space. However, the fairness definition aims to equalize the exposure of different artists independently of their popularity and can therefore be associated with a WAE worldview. It can also be associated with substantive EO, most likely luck-egalitarian EO, which would acknowledge different distributions of popularity in construct space. However, it is not clear how an artist's popularity relates to their true underlying effort at all, as it is mainly driven by the \textit{users} of the system, rather than by the artists themselves. The method addresses a combination of pre-existing and technical bias, where the popularity bias can be seen as pre-existing and its reinforcement by the recommender system as technical bias. It may also address emergent bias w.r.t. popularity shifts in the future. The paper then presents three different policies, with $\phi(\cdot)$ being a relevance and $\psi(\cdot)$ a fairness measure: \begin{enumerate} \item A weighted combination of relevance and fairness: \[s^*_u = \argmax_{s \in S_u} (1 - \beta) \phi (u, s) + \beta \psi(s)\] where $S_u$ is the collection of all sets pertinent to the user $u$. \item A probabilistic combination of relevance and fairness, where the weighting factor $\beta \in [0, 1]$ decides whether to recommend a set based on relevance or on fairness: \begin{equation*} s^*_u = \begin{cases} \argmax_{s \in S_u} \psi(s), & \text{if}\ p < \beta \\ \argmax_{s \in S_u} \phi(u, s), & \text{otherwise} \end{cases} \end{equation*} where $p \in [0, 1]$ is a randomly generated number.
\item A guaranteed relevance term to ensure that the minimum relevance is $\beta$: \[s^*_u = \argmax_{s \in S_u} \psi(s) \;\; s.t.\;\; \phi(u,s) \geq \beta\] \end{enumerate} The authors further investigate users' different affinities to fairness: some users only want to listen to one particular artist, while others are more flexible. This affinity is measured as the difference between the satisfaction of a user when recommended relevant content vs. when recommended fair content. The paper states that a fair recommendation policy should be adaptive to this affinity $\xi_u$ and therefore redefines the second fairness policy: \begin{equation*} s^*_u = \begin{cases} \argmax_{s \in S_u} \psi(s), & \text{if}\ \xi_u \geq 0 \\ \argmax_{s \in S_u} \phi(u, s), & \text{otherwise} \end{cases} \end{equation*} Another version redefines the first policy to use the z-scored affinity, denoted as $\hat{\xi}_u$: \[s^*_u = \argmax_{s \in S_u} (1 - \hat{\xi}_u) \phi (u, s) + \hat{\xi}_u \psi(s)\] As all definitions modify the objective function of the learning algorithm, the method can be classified as an in-processing approach. \spara{Algorithm.} The algorithm is implemented as a combinatorial contextual bandit problem with the following consecutive interactions between the customers and the recommendation system: \begin{enumerate} \item The system observes context $\featureSet$ drawn from a distribution of contexts $\mathbf P(\featureSet)$. \item Based on $\featureSet$ the system chooses the sets $s$ to recommend to the user. \item A reward $\score{} \in [0,1]$ is drawn from distribution $\mathbf P(\score{}|\featureSet,s)$ that expresses user satisfaction. \item The system maximizes $\score{}$ under the different fairness policies.
\end{enumerate} As estimating user satisfaction relies on user feedback, which is hard to obtain offline, the recommender system is modeled as a stochastic policy $\pi$ that specifies a conditional distribution $\pi(a|\featureSet)$. The value $V(\pi)$ of a policy is the expected reward, i.e. the user's satisfaction, if an action $a$ is chosen under that policy. This value is to be estimated for a new policy $\pi^*$ given logged training data, using the inverse propensity score (IPS) estimator, which is provably unbiased: \[V_{\operatorname{offline}} = \sum_{(\featureSet, a, \score{a}, p_a)} \frac{\score{a} \mathds{1}(\pi^*(\featureSet) = a)}{p_a}\] with $p_a$ being the propensity score of an action $a$ that was randomly chosen from the space of all possible actions under context $\featureSet$, and $\mathds{1}(\pi^*(\featureSet) = a)$ being the indicator function that evaluates to 1 if the action selected by the target policy matches the logged training data. With this, a metric of user satisfaction can be computed as: \[\score{\pi^*(\featureSet)} = \mathds{E}_a \left[ \frac{\score{a} \mathds{1}(\pi^*(\featureSet) = a)}{p_a} \right]\] \spara{Experiments.} The experimental part contains a trade-off analysis between user relevance of sets and fairness of sets, which shows that only very few highly relevant sets also achieve high fairness scores. This means that a recommendation system that solely optimizes for user relevance will not automatically lead to fair and diverse sets for the suppliers, and in turn that two-sided marketplaces have to optimize for users and suppliers to satisfy both. The evaluation is done on a proprietary dataset from Spotify with 400K users, 49K artists and 5K sets (i.e. playlists). When maximizing relevance $\phi(u,s)$ only, the results show the highest user satisfaction, whereas when optimizing for fairness only, user satisfaction drops by 35\%.
When optimizing for both, user satisfaction increases as more weight is given to the relevance objective. However, satisfaction does not drop significantly up to a fairness weight of 20\%, so policies can increase fairness up to that point without trading much user satisfaction. The guaranteed relevance policy yields the best satisfaction results in absolute numbers and shows a linear trend of user satisfaction with increasing weights for relevance. However, for the same levels of $\beta$ this policy achieves the lowest average fairness scores compared to the other two policies. This means that the interpolating and the probabilistic fairness policies are preferable in situations where less fairness shall be traded for higher relevance values. When evaluating the adaptive policies, the results highlight that personalizing the recommendation policy and adapting it based on user-level affinity is better than globally balancing relevance and fairness. Interestingly, the adaptive policies lead to a relatively high mean fairness compared to the global policies, while at the same time increasing overall user satisfaction. This is another example that disproves the general assumption that a trade-off between fairness and relevance is a necessary evil. \subsection{Fairness in Two-sided Matchings} \spara{Fairness Definition.} The work in~\cite{suhr2019two} gives a case study of a two-sided matching platform, namely a ride-hailing platform such as Uber. It discusses two-sided fairness of providers and consumers in a platform performing repeated matchings of providers and consumers over time. Here, the riders are the consumers and the drivers are the providers. Fairness is seen in terms of a fair distribution of provider income given a driver's active time in the system.
It is explicitly considered over time, because a single matching does not have a significant long-term effect on the lives of the people that are matched, and income equity can be amortized over a longer period, such as a week or a month. Note that the paper speaks explicitly about customers and drivers, but in a broader sense those can be seen as users $\userSet{}$ of the system and candidates $\candidateSet{}$ to be matched to them. We therefore adapt the earlier notation to this framework. Customer utility $U_{\userSet{}}$ is described via a customer $\userSet{b}$'s waiting time, which is approximated using the negative distance $d$ of a driver $\candidateSet{a}$ to them: \[U_{\userSet{}}(b, a) = -d(\userSet{b}, \candidateSet{a})\] Driver utility is described as the income a driver receives from transporting a customer, which is approximated using the distance from the customer's pick-up location to their destination, reduced by the distance the driver has to travel in order to arrive at the respective pick-up location: \[U_{\candidateSet{}}(a, b) = d(\userSet{b}, \operatorname{dest}(\userSet{b}^t)) - d(\userSet{b}, \candidateSet{a})\] The paper then defines two fairness concepts. First, the authors introduce \textbf{parity fairness}: over time, the sum of received utility shall be (almost) equal for all drivers in $\candidateSet{}$: \[\sum_a \sum_b | U^T_{\candidateSet{a}} - U^T_{\candidateSet{b}}| < \epsilon \] with $U^T_{\candidateSet{a}}$ being the total utility driver $a$ has received until matching round $T$. Second, the authors define \textbf{proportional fairness}: over time, the sum of received utility normalized by active driving time shall be equal for all drivers: \[\sum_a \sum_b \left| \frac{U^T_{\candidateSet{a}}}{\Lambda^T_{\candidateSet{a}}} - \frac{U^T_{\candidateSet{b}}}{\Lambda^T_{\candidateSet{b}}}\right| < \epsilon\] where $\Lambda^T_{\candidateSet{a}}$ is the total amount of time driver $a$ has been active on the platform until round $T$. \spara{Insights.
} The work does not consider any protected groups and instead tries to equalize the (hourly) wage of all drivers in the system. Moreover, the drivers' efforts are not considered other than through their active time, and as such it does not make sense to assign the method to a particular equality-of-opportunity framework. The assumption behind this work is that driving skills are essentially the same and therefore all drivers should be paid equally. It is an extreme case of the WAE worldview, in which not only do all groups have the same qualification distribution, but the absolute qualification values are the same for each individual. The biases addressed are of a technical and emergent nature. Emergent bias would otherwise arise in the sense that the drivers' locations are shaped by the platform and may be concentrated on certain hot spots, while other locations remain less frequented. \spara{Algorithm.} The method defines a two-sided optimization objective to minimize the difference of the utilities of drivers (customers) as compared to the maximum utility gained by any driver (customer) up until the previous matching round: \begin{equation*} \begin{aligned} \sum_{a} \sum_{b} & \text{ } \lambda \cdot \left| \max_{a'}U^{T-1}_{\candidateSet{a'}} - \left( U^{T-1}_{\candidateSet{a}} + M^T_{a,b} \cdot U_{\candidateSet{}}^T(a,b) \right) \right| \\ & + (1 - \lambda)\cdot \left| \max_{b'}U^{T-1}_{\userSet{b'}} - \left( U^{T-1}_{\userSet{b}} + M^T_{a,b} \cdot U^{T}_{\userSet{}}(b,a) \right) \right| \end{aligned} \label{eq:objective2} \end{equation*} where $M^T_{a,b}$ is 1 if driver $\candidateSet{a}$ is matched to customer $\userSet{b}$ in round $T$ and 0 otherwise, and $\lambda$ is a hyper-parameter to continuously interpolate between producer and consumer fairness. This objective is translated into an integer linear program. \spara{Experiments.} For the experiments, income inequality is measured using the generalized entropy index.
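For reference, the generalized entropy index over a vector of driver incomes can be sketched as follows (a standard formulation of the index; the choice of $\alpha=2$ as a default is ours, the paper's exact parameterization may differ):

```python
import numpy as np

def generalized_entropy_index(benefits, alpha=2.0):
    """Generalized entropy index of a benefit (income) vector:
    0 means perfect equality, larger values mean more inequality."""
    b = np.asarray(benefits, dtype=float)
    mu = b.mean()
    if alpha == 0:    # limit case: mean log deviation
        return float(np.mean(np.log(mu / b)))
    if alpha == 1:    # limit case: Theil index
        return float(np.mean((b / mu) * np.log(b / mu)))
    return float(np.mean((b / mu) ** alpha - 1) / (alpha * (alpha - 1)))
```

For example, `generalized_entropy_index([5, 5, 5, 5])` is 0, while more unequal income vectors yield strictly larger values.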
They are performed on a dataset from a ride-hailing platform in an Asian city consisting of 1462 registered drivers. As passengers are not registered, every request is handled using a unique job id, with 231,268 jobs in total. The analysis of the dataset shows that supply exceeds demand by an order of magnitude, and driver income is hence a scarce resource. Different matching strategies are then compared to each other w.r.t. their effects on driver and customer utility: \begin{itemize} \item \textbf{Nearest driver first} is a simple objective for low customer waiting times.
% It yields the highest \emph{average} driver utility, but has the largest discrepancies in terms of driver income (i.e. a high GEI).
\item \textbf{Worst-off driver first} yields almost equal driver incomes, but the lowest customer utility.
% It also shows undesired effects, such as lowering hourly wages for drivers that have long active periods, because whenever a new driver joins the system, they have the lowest possible income so far, namely zero.
\item The \textbf{two-sided optimization} achieves better results not only in terms of income equality for drivers but also in terms of customer waiting time, because drivers happen to be placed in better positions for subsequent requests.
% It is not clear if this is a data artifact or a general property of the algorithm though.
% This confirms again, as for rankings in~\cite{zehlike2018reducing} and for two-sided platforms in~\cite{mehrotra2018towards}, that a trade-off between fairness considerations and customer utility does not necessarily have to be made, but that optimization for fairness can, in some cases, increase system utility.
\end{itemize} \section{Frameworks and Benchmarks} \label{sec:eval} In this section, we survey several frameworks and benchmarks that focus on fairness in rankings.
For each framework, we highlight the following aspects: type of rankers, datasets, system structure (i.e.,\xspace how the framework works), integration of fairness, and accessibility (i.e.,\xspace how the framework can be used). \subsection{Fair Search} FairSearch~\cite{zehlike2020fairsearch} is an open-source API that provides fair search results, designed as a set of stand-alone libraries (in Python and Java) and as plugins for Elasticsearch (in Java). Users can run FairSearch together with their own datasets once these have been formatted as required. FairSearch implements two algorithms to guarantee fair rankings as search results: an in-processing technique named DELTR (see details in Sec.~\ref{subsubsec:DELTR}) and a post-processing one called FA*IR (see details in Sec.~\ref{subsubsec:FAIR}). To support DELTR, FairSearch provides an off-line wrapper to train a fair ranking model using DELTR, which is later uploaded and stored in Elasticsearch as a plugin. For FA*IR, FairSearch applies the FA*IR algorithm to re-rank the search results provided by Elasticsearch and presents them as the final search results to users. The implementation of FairSearch can be found at \url{https://github.com/fair-search}. \subsection{Fair TREC} FairTrec~\cite{bonartfair} is a project to coordinate research around the idea of fairness in search results, launched by the National Institute of Standards and Technology (NIST) and organized by researchers from Microsoft and Boise State University in 2019. It is part of the Text REtrieval Conference (TREC), which focuses on research in information retrieval and related applications by providing large test collections, uniform scoring procedures, and a forum for organizations interested in comparing their results. FairTrec2019 uses a set of open data from a specific search tool, the Semantic Scholar search engine, which provides 471 GB of data files that can be downloaded from \url{http://api.semanticscholar.org/corpus}.
Its goal is to provide fair exposure to different groups of authors defined by various factors such as demographics and stature. Their fairness definition belongs to the group fairness category (see details in Sec.~\ref{sec:frame:group}). The participants are asked to perform a re-ranking task on the provided data, and their results are evaluated by the fair exposure of groups of authors and the relevance of documents, both computed using the provided measures. Note that the definition of groups is not provided by the organizers and is itself part of the evaluation, testing the robustness of the results for various group definitions. \subsection{Ranking Facts} Ranking Facts~\cite{yang2018nutritional} is a Web-based application that generates a ``nutritional label'' for rankings. The nutritional label is made up of a collection of visual widgets that implement research results on fairness, stability, and transparency for rankings, and that communicate details of the ranking methodology, or of the output, to the end user. Ranking Facts supports rule-based rankers that are specified by users as input and can be used for any tabular dataset with numerical and categorical features. To use Ranking Facts, users are asked to upload a tabular dataset in CSV format, then specify a rule-based ranking function and some sensitive features concerning issues of fairness and diversity. For example, for a dataset of financial information, gender can be a feature to consider the fair exposure of women and men in a generated ranking, and race can be a feature to consider the equal representation of different racial groups. Ranking Facts generates the results according to the provided ranking function and the sensitive features. Ranking Facts implements three algorithms to decide whether a generated ranking is fair to a disadvantaged group according to the user-specified sensitive feature: proportional representation, pairwise comparison, and FA*IR.
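To illustrate the first of these checks, a proportional-representation test on the top-$k$ of a ranking can be sketched as follows (our own minimal version with a hypothetical minimum-share threshold, not the tool's actual implementation):

```python
def proportion_at_k(ranking, protected, k):
    """Fraction of candidates in the top-k that belong to the protected group."""
    return sum(1 for cand in ranking[:k] if cand in protected) / k

def is_proportionally_represented(ranking, protected, k, min_share):
    """True if the protected group reaches at least `min_share` of the top-k."""
    return proportion_at_k(ranking, protected, k) >= min_share
```

For instance, with `ranking = ["a", "b", "c", "d"]` and `protected = {"b", "d"}`, the protected share of the top-4 is 0.5.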
The tool can be accessed at \url{http://demo.dataresponsibly.com/rankingfacts}. The code is available at \url{https://github.com/DataResponsibly/RankingFacts}. \subsection{Mirror Data Generator} \label{sec:mirror} \textit{Mirror Data Generator} is an open-source tool that generates synthetic data, mirroring different issues that arise in real data collection. It has been used to produce rankings with intersectional concerns~\cite{yang2020causal}. \textit{Mirror Data Generator} takes a dependency graph as input: a weighted DAG whose nodes represent the columns of the generated data and whose weighted edges represent the dependencies among the columns. The weight of an edge represents how much the child node depends on the parent node. \textit{Mirror Data Generator} can also generate ``dirty'' synthetic data that includes columns with missing and erroneous values. The generation of these columns is performed as an additional step that removes or modifies the values accordingly. A ``dirty'' column can be embedded in a dependency graph so that it depends on other columns, and even on itself, which captures special patterns of missing values (e.g.,\xspace values that are not missing at random). The code is available at \url{https://github.com/DataResponsibly/MirrorDataGenerator}. \section{Discussion and Recommendations} \label{sec:discuss} In the previous sections, we discussed the principal functioning of fair ranking methods, and made explicit the technical choices they make and the value frameworks they encode. In this section we discuss our insights and draw a set of recommendations for the evaluation of fair ranking methods. Our recommendations are aimed at data science researchers and practitioners.
With these recommendations we aim to establish best practices for the development, evaluation, and deployment of fair ranking algorithms, and to avoid the potentially harmful uninformed transfer of methods from one application domain to another. \paragraph{Recommendation 1: Make Values and Context of Use Explicit} Different application scenarios require different value frameworks. The classification frameworks we presented in this paper are meaningful if the application scenario is concerned with aspects of \emph{distributive justice}. However, even if a situation requires distributive justice to be taken into account, the goods or benefits to be distributed play a key role in determining which framework should be applied. For example, college admissions (educational opportunity) may require a different interpretation of fairness than hiring or user rating prediction in online shops (economic opportunity). To avoid algorithmic solutionism, users of fair rankers must first clearly articulate their own moral goals for a ranking task, and choose a fairness-enhancing method that is consistent with those goals. To facilitate this choice, the fairness concepts behind a fair ranker must be made explicit by its creators. Values are rarely made explicit in fair ranking papers. A reader looking to adopt a method for their application context will often use the experiments section of a paper to decide whether the method will ``work'' for them. We often see experimental sections in which all available datasets, corresponding to vastly different contexts of use---from recidivism risk prediction, to credit scoring, to college admissions, to matching platforms like AirBnB---are used to show the performance of a method, but without an explanation as to why a dataset was selected, other than that it was available and its items have scores on which to rank.
For example, the COMPAS dataset~\cite{angwin2016machine} is often used in experiments for papers that propose methods for distributive justice~\cite{zehlike2017fa, yang2017measuring}, yet the dataset was collected for a legal decision-making task, and so this use is out of scope. We caution against this practice and recommend that, when designing their experiments, authors carefully substantiate the appropriateness of using a proposed method on a particular dataset in the context of a specific ranking task. This substantiation should be made by mapping the method and the task to a value framework. \paragraph{Recommendation 2: Surface Normative Consequences of Technical Choices} Algorithmic rankers are complex technical objects, and many implicit and explicit choices go into their design. In this paper we discussed an important technical dimension of ranker design, namely, the representation of group structure: how many sensitive attributes a ranker handles, and whether these are binary or multinary. This technical choice in turn impacts what type of discrimination a fair ranker can help address (e.g.,\xspace discrimination on one or on several sensitive attributes), and whether it can address intersectional concerns and, if so, which specific concerns are in scope (e.g.,\xspace representation constraints on intersectional groups, differences in score distributions, or imbalanced loss in utility). We deliberately discussed intersectional discrimination under the heading of mitigation objectives, rather than presenting it as a purely technical choice, and we recommend that designers of fair rankers explicitly discuss both their technical choices and the consequences these have for applicability. Another important technical dimension is where in the processing pipeline bias mitigation is applied (recall Figure~\ref{fig:ir-flowchart} on page~\pageref{fig:ir-flowchart}). Pre-processing methods have the advantage of intervening early on pre-existing bias.
The advantage of in-processing methods is that they do not allow a biased model to be trained in the first place. The advantage of post-processing methods is that they provide a guaranteed share of visibility for protected groups. However, post-processing methods are subject to legal debates in the US because of due process concerns that may make it illegal to intervene at the decision stage (e.g.,\xspace Ricci v. DeStefano~\cite{ricci}). In the EU, post-processing methods can be used if other methods fail to comply with EU anti-discrimination law. We recommend that the designers of fair rankers substantiate the appropriateness of their technical choice based on the context of use for which their method was designed, as well as on the region of use, to avoid legal pitfalls. \paragraph{Recommendation 3: Draw Meaningful Comparisons} Additional research is needed to understand how methods with opposing worldviews, conflicting understandings of equality of opportunity, and different addressed biases can be compared in a meaningful way. For example, in their experiments the authors of \textsc{Fair-PG-Rank}\xspace~\cite{singh2019policy} compare their results to those of \textsc{DELTR}\xspace~\cite{zehlike2018reducing}; but as their measures of disparate exposure are vastly different and the biases they focus on are not the same, it is not clear what conclusions to draw from such comparisons. Making the values and the context of use explicit will go a long way towards helping design meaningful experimental comparisons between methods, rather than mechanically comparing apples to oranges. \section{Conclusion} \label{sec:conc} In this paper we gave an extensive overview of the state-of-the-art literature on fair ranking in the domains of score-based and supervised learning-based ranking. We introduced important dimensions along which to classify fair ranking methods, mapping their assumptions and design choices to the normative values they incorporate.
We outlined the technical details of all presented methods, described their differences and commonalities, and categorized each technical choice by its implicit values within the four normative dimensions. We discussed the implications of normative choices and gave recommendations for researchers on how to make such choices explicit in their work. Most fair ranking methods are concerned with the concept of \emph{distributive justice}, as they aim to fairly distribute the visibility in a ranking among the candidates. Our focus on distributive justice allowed for the mapping between the worldviews and the equality of opportunity concepts in the framework we proposed. However, this mapping is only meaningful in a distributive context and most likely cannot be transferred to a different setting. In the future we hope to see work that relates to other concepts of justice, such as \emph{procedural justice}, which is concerned with the fairness and transparency of a decision-making \emph{process}, and is therefore particularly important in legal decision making. It will be interesting to study whether fairness-enhancing methods designed for concerns of distributive justice can be transferred to the context of procedural justice in a meaningful way. Another interesting direction for classifying fair ranking methods within distributive-justice contexts is to understand the properties of ranking scores with respect to different indexes of advantage. Commonly, the score models a candidate's \textit{potential utility} to the user of the ranking, which stems from welfarism/utilitarianism and, as such, incorporates an idea of satisfaction and preference~\cite{sen1980equality}. In contrast, Rawls judges the goodness of a distribution in terms of so-called \textit{primary goods}. It is important to explore the implications of these different conceptions of advantage, and, crucially, to understand whether they can be combined in a common fairness objective.
\section{Preliminaries and notation} \label{sec:prelim} In this section we will build on our running example to discuss score-based and supervised learning-based rankers more formally, and fix the necessary notation. We summarize notation in Table~\ref{tbl:notation} and illustrate it throughout this section. \subsection{Score-based ranking} \label{sec:intro:score-based} Formally, we are given a set $\candidateSet{}$ of candidates; each candidate is described by a set of features $\featureSet$ and a score attribute $\score{}$. Additionally we are given a set of sensitive attributes $\sensFeatSet \subseteq \featureSet$, which are categorical, denoting membership of a candidate in demographic groups.\footnote{While some sensitive attributes like age or degree of disability may be drawn from a continuous domain, we are not aware of any approaches that represent them as such, and so will assume that sensitive attributes are categorical in the remainder of this survey.} A sensitive attribute $\sensAttr \in \sensFeatSet$ may be binary, with one of the values (e.g.,\xspace $\sensAttr=1$ or $\sensAttr=\val{female}$, as in Figure~\ref{fig:admissions}) denoting membership in a minority or historically disadvantaged group (often called ``protected group'') and with the other value (e.g.,\xspace $\sensAttr=0$ or $\sensAttr=\val{male}$) denoting membership in a majority or privileged group. Alternatively, a sensitive attribute may take on three or more values, for example, to represent ethnicity or (non-binary) gender identity of candidates. A \e{ranking} $\boldsymbol{\tau}$ is a permutation over the candidates in $\candidateSet{}$. Letting $n = \left| \candidateSet{} \right|$, we denote by $\boldsymbol{\tau} = \ranking{\lst{\tau}}$ a ranking that places candidate $\tau_i$ at rank $i$. We denote by $\boldsymbol{\tau}(i)$ the candidate at rank $i$, and by $\boldsymbol{\tau}^{-1}(a)$ the rank of candidate $a$ in $\boldsymbol{\tau}$. 
We are often interested in a sub-ranking of $\boldsymbol{\tau}$ containing its best-ranked $k$ elements, for some integer $k \leq n$; this sub-ranking is called the top-$k$ and is denoted $\boldsymbol{\tau}_{1\ldots k}$. For example, given a ranking $\boldsymbol{\tau}=\angs{b,c,d,e,f,k,l,o}$, $\boldsymbol{\tau}(3)=d$, $\boldsymbol{\tau}^{-1}(l)=7$, and the top-$4$ is $\boldsymbol{\tau}_{1\ldots 4}=\angs{b,c,d,e}$. \input{table-notation} \paragraph{Utility} Because score $Y$ is assumed to encode a candidate's appropriateness, quality, or \emph{utility}, a score-based ranking usually satisfies: \begin{equation} \score{\boldsymbol{\tau}(1)} \geq \score{\boldsymbol{\tau}(2)} \geq \ldots \geq \score{\boldsymbol{\tau}(n)} \label{eq:sort} \end{equation} We will find it convenient to denote by $\utilityTwoPara{k}{\boldsymbol{\tau}}$ the utility of $\boldsymbol{\tau}_{1\ldots k}$. Different methods surveyed in this paper adopt different notions of utility, and we will make their formulations precise as appropriate. The simplest method is to treat $\boldsymbol{\tau}_{1\ldots k}$ as a set (disregarding candidate positions), and to compute the utility of the set as the sum of scores of its elements: \begin{equation} \utilityTwoPara{k}{\boldsymbol{\tau}} = \sum_{i=1}^{k} \score{\boldsymbol{\tau}(i)} \label{eq:agg_utility} \end{equation} Another common method incorporates position-based discounts, following the observation that it is more important to present high-quality items at the top of the ranked list, since these items are more likely to attract the attention of the consumer of the ranking. 
For example, we may compute position-discounted utility of a ranking as: \begin{equation} \utilityTwoPara{k}{\boldsymbol{\tau}} = \sum_{i=1}^{k} \frac{\score{\boldsymbol{\tau}(i)}}{\log_{2} (i+1)} \label{eq:disc_agg_utility} \end{equation} For example, the utility at top-$4$ of $\boldsymbol{\tau}_1$ in Figure~\ref{fig:ad_example} is $47$ based on Equation~\ref{eq:agg_utility} and $31.4$ based on Equation~\ref{eq:disc_agg_utility}. Note that the base of the logarithm in the denominator of Equation~\ref{eq:disc_agg_utility} is empirically determined, and it can be set to some value $b>1$ other than 2. For these variants of utility and for others, it is often useful to quantify the utility realized by candidates belonging to a particular demographic group $\group{} \subseteq \candidateSet{}$, defined by an assignment of values to one or several sensitive attributes. For example, $\group{}$ may contain female candidates, or Asian female candidates. We can then compute the utility of $\boldsymbol{\tau}_{1\ldots k}$ (per Equation~\ref{eq:agg_utility}) for group $\group{}$ as: \begin{equation} \utilityThreePara{k}{\boldsymbol{\tau}}{\group{}} = \sum_{i=1}^{k} \score{\boldsymbol{\tau}(i)} \times \mathbbm{1}{[\boldsymbol{\tau}(i) \in \group{}]} \label{eq:agg_utility_g} \end{equation} Here, $\mathbbm{1}$ is an indicator function that returns 1 when $\boldsymbol{\tau}(i) \in \group{}$ and 0 otherwise. Position-discounted utility (per Equation~\ref{eq:disc_agg_utility}) for group $\group{}$ can be defined analogously. For the ranking $\boldsymbol{\tau}_1$ in Figure~\ref{fig:ad_example}, $\utilityThreePara{4}{\boldsymbol{\tau}_1}{sex=\val{male}} = 36$, $\utilityThreePara{4}{\boldsymbol{\tau}_1}{sex=\val{male} \wedge race=\val{White}} = 24$, and $\utilityThreePara{4}{\boldsymbol{\tau}_1}{sex=\val{male} \wedge race=\val{Black}} = 0$. 
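These utility variants (Equations~\ref{eq:agg_utility}, \ref{eq:disc_agg_utility}, and~\ref{eq:agg_utility_g}) can be sketched directly in code; the scores, candidates, and group used below are illustrative stand-ins, not the values from Figure~\ref{fig:ad_example}:

```python
import math

def utility(scores, tau, k):
    """Set-based utility of the top-k: sum of the candidates' scores."""
    return sum(scores[a] for a in tau[:k])

def discounted_utility(scores, tau, k, base=2):
    """Position-discounted utility; the log base is empirically determined."""
    return sum(scores[a] / math.log(i + 1, base)
               for i, a in enumerate(tau[:k], start=1))

def group_utility(scores, tau, k, group):
    """Utility realized by members of `group` in the top-k (indicator sum)."""
    return sum(scores[a] for a in tau[:k] if a in group)
```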
\paragraph{Fairness} To satisfy objectives other than utility, such as \emph{fairness}, we may output a ranking $\hat \boldsymbol{\tau}$ that is not simply sorted based on the observed values of $\score{}$ as in Equation~\ref{eq:sort}. As is the case for classification and prediction, numerous fairness measures have been defined for rankings. These measures can be used both to assess the fairness of a ranking and to intervene on unfairness, for example, by serving as the basis for constraints. A prominent class of fairness measures corresponds to \emph{proportional representation} in the top-$k$ treated as a set, or in every prefix of the top-$k$. These measures are motivated by the need to mitigate different types of bias, based on assumptions about their origins and with a view toward specific objectives (to be discussed in Section~\ref{sec:02-four-frameworks}). For example, ranking $\boldsymbol{\tau}_2$ in Figure~\ref{fig:ad_example} re-ranks candidates to satisfy proportional representation by gender at the top-$4$ (treating it as a set), swapping candidates $e$ and $f$. The ranking $\boldsymbol{\tau}_3$ in Figure~\ref{fig:ad_example} additionally reorders candidates $c$ and $d$ to achieve proportional representation by gender in every prefix of the top-$4$. In addition to fairness measures, \emph{diversity} measures have also been proposed in the literature~\cite{drosou2017diversity}. In this survey we will focus on coverage-based diversity, which is most closely related to fairness and requires that members of multiple, possibly overlapping, groups be sufficiently well-represented among the top-$k$, treated either as a set or as a ranked list. Diversity constraints may, for example, be stated to require that members of each ethnic group, each gender group, and of selected intersectional groups on ethnicity and gender, all be represented at the top-$k$ in proportion to their prevalence in the input. 
When candidates are re-ranked to meet objectives other than score-based utility, we may be interested in computing the \emph{$\score{}$-utility loss}, denoted $L_{\score{}}(\boldsymbol{\tau}, \hat \boldsymbol{\tau})$. We can use a variety of metrics that quantify the distance between ranked lists for this purpose, including, for example, the Kendall distance, which counts the number of pairs that appear in the opposite relative order in $\boldsymbol{\tau}$ and $\hat{\boldsymbol{\tau}}$, or one of a family of generalized distances between rankings~\cite{DBLP:conf/www/KumarV10}. However, loss functions that compare rankings $\boldsymbol{\tau}$ and $\hat \boldsymbol{\tau}$ in their entirety are uncommon. Rather, $\score{}$-utility loss is usually specified over the top-$k$. The simplest formulation is: \begin{equation} \yUtilLoss{k}{\boldsymbol{\tau}}{\hat{\boldsymbol{\tau}}} = \utilityTwoPara{k}{\boldsymbol{\tau}} - \utilityTwoPara{k}{\hat{\boldsymbol{\tau}}} \end{equation} Alternatively, we may normalize this quantity: \begin{equation} \yUtilLoss{k}{\boldsymbol{\tau}}{\hat{\boldsymbol{\tau}}} = 1 - \frac{\utilityTwoPara{k}{\hat{\boldsymbol{\tau}}}}{\utilityTwoPara{k}{\boldsymbol{\tau}}} \end{equation} Further, we may be interested in quantifying the utility loss for a particular demographic group $\group{}$. In that case, we define the utility of $\boldsymbol{\tau}$ and $\hat \boldsymbol{\tau}$ for group $\group{}$ as was done in Equation~\ref{eq:agg_utility_g}, or analogously for other utility formulations. Interestingly, underrepresented groups may see a gain, rather than a loss, in $\score{}$-utility, because they may receive better representation at the top-$k$ when a fairness objective is applied. Our discussion so far has focused on score-based rankers, with an example in Figure~\ref{fig:ad_example}. We now discuss ranking methods that use supervised learning and present concepts that are specific to such rankers. 
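Before moving on, the loss bookkeeping above can be sketched in code; both the Kendall distance and the normalized $\score{}$-utility loss are written out directly (an illustrative sketch, not the authors' implementation):

```python
from itertools import combinations

def kendall_distance(tau, tau_hat):
    """Number of candidate pairs appearing in opposite relative order."""
    pos = {a: i for i, a in enumerate(tau)}
    pos_hat = {a: i for i, a in enumerate(tau_hat)}
    return sum(
        1
        for a, b in combinations(tau, 2)
        if (pos[a] - pos[b]) * (pos_hat[a] - pos_hat[b]) < 0
    )

def normalized_utility_loss(scores, tau, tau_hat, k):
    """1 - U^k(tau_hat) / U^k(tau), using set-based utility at the top-k."""
    top_utility = lambda t: sum(scores[a] for a in t[:k])
    return 1 - top_utility(tau_hat) / top_utility(tau)
```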
\subsection{Supervised learning to rank} \label{sec:intro:learned} In supervised learning to rank (LtR\xspace), we are given a set $\candidateSet{}$ of candidates; each candidate is described by a set $\featureSet$ of features, including the sensitive features $\sensFeatSet \subseteq \featureSet$. Each candidate $a \in \candidateSet{}$ has an associated score attribute $\score{}$, which describes their quality with respect to a given task (e.g.,\xspace college admissions). Every such association forms an instance of either the training dataset $\candidateSet{train}$ or the test dataset $\candidateSet{test}$. Like score-based rankers, LtR\xspace rankers compute candidate scores and return a ranking $\boldsymbol{\tau}$ with the highest-scoring candidates appearing closer to the top (per Eq.~\ref{eq:sort}). The difference between score-based and LtR\xspace rankers is in how the score is obtained: in score-based ranking, a function is given to calculate the scores $Y$, while in supervised learning, the ranking function $\hat{f}$ is learned from a set of training examples and the score $\predScore$ is estimated.\footnote{Note that the literature distinguishes point-wise, pair-wise and list-wise LtR\xspace methods and that $\score{}$ has a slightly different meaning for each of them~\cite{DBLP:series/synthesis/2014Li}. However, because the overall procedure remains the same, we will focus on point-wise LtR in the remainder of this section, and give technical details for pair-wise and list-wise methods in later sections, as appropriate.} Figure~\ref{fig:supervised_ranker} describes the LtR\xspace process. We are given two datasets $\candidateSet{train}$ and $\candidateSet{test}$.\footnote{To follow machine learning best practices, we may also produce a separate validation dataset, used to tune model hyperparameters. 
We leave this out of our discussion for brevity.} We use $\candidateSet{train}$ to train an LtR\xspace model, learning a ranking function $\hat{f}(\featureSet)$ that minimizes the prediction errors on $\score{\text{train}}$. This is usually done by minimizing the sum of the individual errors between the ground truth $\score{}$ and the prediction $\predScore$ of $\hat{f}$ over $\candidateSet{train}$. To evaluate the performance of the model $\hat{f}$, we apply it to $\candidateSet{test}$, and then compare ground truth scores and predictions. If model testing succeeds, meaning that the ranker's predictions are deemed sufficiently accurate, then $\hat{f}$ is deployed: a new set of candidates is ranked by predicting their scores $\predScore = \hat{f}(\featureSet)$, and ranking the candidates according to these predictions. \input{figs/example-IR-college-admissions} \input{figs/supervised-ranker} As an example, consider Figure~\ref{fig:ir_example}, which revisits our college admissions example from Figure~\ref{fig:admissions} in a supervised learning setting. We are given six features, as previously described, and two ground truth scores $\score{1}$ and $\score{2}$ for each candidate. First, training data is prepared as input: we divide the data into a training set $\candidateSet{train} = \{b, c, e, f, k, o\}$ and a test set $\candidateSet{test} = \{d, l\}$ (blue lines). Then a model is trained and tested using any available LtR\xspace method, such as RankNet~\cite{burges2010ranknet} or ListNet~\cite{cao2007learning}. Ranking $\boldsymbol{\tau}$ in Figure~\ref{fig:ir-example_ranking} depicts the ground truth ranking of $\candidateSet{test}$ based on score~$\score{1}$. \paragraph{Prediction accuracy.} In traditional supervised learning, the term \emph{utility} is often used to refer to the prediction accuracy of $\hat{f}$. 
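The train-test-deploy loop of Figure~\ref{fig:supervised_ranker} can be sketched as follows; a linear least-squares model stands in for the neural LtR\xspace methods cited above, the accuracy check mirrors Equation~\ref{eq:ndcg}, and all data and names here are illustrative assumptions:

```python
import math
import numpy as np

def train_pointwise(X_train, y_train):
    """Fit f_hat by least squares, minimizing prediction error on the training scores."""
    w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
    return w

def rank_by_predicted_score(w, X, ids):
    """Predict Y_hat = f_hat(X) and sort candidates by descending predicted score."""
    y_hat = X @ w
    order = np.argsort(-y_hat)
    return [ids[i] for i in order], y_hat

def ndcg_at_k(y_hat_in_rank_order, y_true, k):
    """Discounted gain of the predicted ranking, normalized by the ideal IDCG^k."""
    dcg = sum(s / math.log2(i + 2) for i, s in enumerate(y_hat_in_rank_order[:k]))
    idcg = sum(s / math.log2(i + 2)
               for i, s in enumerate(sorted(y_true, reverse=True)[:k]))
    return dcg / idcg
```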
A common measure of prediction accuracy in LtR\xspace is the Normalized Discounted Cumulative Gain (\texttt{NDCG})~\cite{jarvelin2002cumulated}, which compares a ranking generated by model $\hat{f}$ to a ground-truth ranking (sometimes called the ``ideal'' ranking). \texttt{NDCG} measures the usefulness, or \emph{gain}, of the candidates based both on their scores and on their positions in the ranking. \texttt{NDCG} incorporates position-based discounts, capturing the intuition that it is more important to retrieve high-quality (according to score) candidates at higher ranks, and is hence closely related to Equation~\ref{eq:disc_agg_utility}. \texttt{NDCG} of a predicted ranking is computed relative to the gain of the ground-truth ranking, \texttt{IDCG}, and thus \texttt{NDCG} measures the extent to which the model is able to reproduce the ground-truth ranking from $\candidateSet{train}$ in its predictions $\predScore$. We are usually interested in \texttt{NDCG} at the top-$k$ (denoted $\texttt{NDCG}^k$), and so normalize the position-discounted gain of the top-$k$ in the predicted ranking by the position-discounted gain of the top-$k$ in the ideal ranking (denoted $\texttt{IDCG}^k$), per Equation~\ref{eq:ndcg}. \begin{equation} \texttt{NDCG}^k = \frac{1}{\texttt{IDCG}^k} \cdot \sum_{i=1}^{k} \frac{\predScore{\boldsymbol{\tau}(i)}}{\log_{2} (i+1)} \label{eq:ndcg} \end{equation} An important application of LtR\xspace is information retrieval systems, where users issue search queries and expect the system to find relevant information and rank the results by decreasing relevance to their queries. Consider again our example in Figure~\ref{fig:ir_example}, and suppose that we are additionally given a set $\setOfQueries$ of queries, each associated with $\candidateSet{}$ via a score. 
In our example, two queries are given, $\query_1 = $ ``What are the most promising candidates to admit to a STEM major?'' associated with score $\score{1}$, and $\query_2 = $ ``What are the most promising candidates to admit to a humanities or arts major?'' associated with $\score{2}$. The training and test sets are formed by assigning the candidate features $\featureSet$ and their respective scores $\score{}$ per query: $\candidateSet{train} = \left\{ (\featureSet_{\textit{train}}, \score{\query}) \right\}_{\query \in \setOfQueries}$ and $\candidateSet{test} = \left\{ (\featureSet_{\textit{test}}, \score{\query}) \right\}_{\query \in \setOfQueries}$. With these sets as input, we can use the LtR\xspace procedure shown in Figure~\ref{fig:supervised_ranker} to train a single model. To evaluate model performance, its accuracy measures need to be extended to handle multiple queries. Commonly used measures are \texttt{NDCG} (Eq.~\ref{eq:ndcg}) averaged over all queries and Mean Average Precision (\texttt{MAP}). \texttt{MAP}~\cite{manning2008evaluation} consists of several parts: first, precision at position~$k$ ($P@k$) is calculated as the proportion of query-relevant candidates in the top-$k$ positions of the predicted ranking $\hat{\boldsymbol{\tau}}$. This proportion is computed at every position of $\hat{\boldsymbol{\tau}}$ that holds a relevant candidate, and these values are averaged over the number of relevant candidates for the query to compute average precision (\texttt{AP}). Finally, \texttt{MAP} is calculated as the mean of \texttt{AP} values across all queries. \texttt{MAP} enables a performance comparison between models irrespective of the number of queries that were given at training time. \paragraph{Fairness.} As is the case in score-based ranking, discussed in Section~\ref{sec:intro:score-based}, LtR\xspace methods may incorporate fairness objectives in addition to utility. 
Fairness interventions in LtR\xspace are warranted because the procedure described in Figure~\ref{fig:supervised_ranker} is prone to pick up and amplify different types of bias (see Section~\ref{sec:frame:bias} for details). For example, let us return to Figure~\ref{fig:ir_example} and note that, by randomly selecting $\candidateSet{train} = \{b, c, e, f, k, o\}$, in which all men are ranked above all women, we may have accidentally injected a strong gender bias into the learned model. This, in turn, may result in an estimated ranking $\hat{\boldsymbol{\tau}}$ in Figure~\ref{fig:ir-example_rank_prop} that places the male candidate $l$ above the female candidate $d$, although their ground truth scores would place them in the opposite relative order. This may lead to biased future predictions for rankings that systematically disadvantage women~($\hat{\boldsymbol{\tau}}$ in Fig.~\ref{fig:ir-example_rank_prop}).\footnote{We note that this is a highly simplified example of technical bias, used for illustrative purposes.} As a remedy, two main lines of work on measuring fairness in rankings, and enacting fairness-enhancing interventions, have emerged over the past several years: probability-based and exposure-based. Both interpret fairness as a requirement to provide a predefined share of visibility to one or more protected groups throughout a ranking. \emph{Probability-based fairness} is defined by means of statistical significance tests that ask how likely it is that a given ranking was created by a fair process, such as by tossing a coin to decide whether to put a protected-group or a privileged-group candidate at position $i$~\cite{yang2017measuring, zehlike2017fa}. \emph{Exposure-based fairness} is defined by quantifying the expected attention received by a candidate, or a group of candidates, typically by comparing their average \e{position bias}~\cite{joachims2017accurately} to that of other candidates or groups. 
\begin{equation} \exposure{\boldsymbol{\tau}(i)} = \mathbb{E}_{\boldsymbol{\tau} \sim \pi}\left[\posBias{\boldsymbol{\tau}(i)}\right] \label{eq:exp} \end{equation} \noindent Here, $\pi: rnk(\candidateSet{}) \rightarrow [0,1]$ is the probability mass function over the ranking space, and position bias $\posBias{\boldsymbol{\tau}(i)}$ refers to the observation that users of a ranking system tend to prefer candidates at higher positions, and that their attention decreases either geometrically or logarithmically with increasing rank~\cite{joachims2017accurately,DBLP:journals/cacm/Baeza-Yates18}. Logarithmic position-based discounting when computing exposure is in line with position-based discounting of utility for score-based rankers (Eq.~\ref{eq:disc_agg_utility}) and with the \texttt{NDCG} measure for supervised LtR\xspace (Eq.~\ref{eq:ndcg}). The algorithmic fairness community is familiar with the distinction between individual fairness, a requirement that individuals who are similar with respect to a task are treated similarly by the algorithmic process, and group fairness, a requirement that outcomes of the algorithmic process be in some sense equalized across groups. Probability-based fairness definitions are designed to express group fairness goals. In contrast, and depending on the specific formalization, exposure-based fairness can serve the goals of either individual fairness or group fairness. 
Individual unfairness in exposure can be expressed as the discrepancy~$\unfairnessOnePara{.}$ in exposure between two candidates $a$ and $b$: \begin{equation} \unfairnessTwoPara{a}{b} = \left| \exposure{a} - \exposure{b} \right| \end{equation} Group unfairness can be expressed as the discrepancy~$\unfairnessOnePara{.}$ in the average exposure between two groups $\group{1}$ and $\group{2}$: \begin{equation} \unfairnessTwoPara{\group{1}}{\group{2}} = \left| \frac{1}{\left|\group{1}\right|} \sum_{a \in \group{1}} \exposure{a} - \frac{1}{\left|\group{2}\right|} \sum_{b \in \group{2}} \exposure{b} \right| \end{equation} Note that consensus on a definition of exposure has not yet been found and, while many measures feature position bias in some way, they disagree on its importance. An additional distinctive characteristic of fairness definitions is that some of them consider a notion of a candidate's merit when measuring disparities in exposure, while others explicitly leave it out. Most of the former understand merit as the utility score $\score{}$ at face value. However, as we will discuss in Section~\ref{sec:frame:mit_goal}, the understanding of merit depends on worldviews and on one's conception of equal opportunity. In Sections~\ref{sec:fair_ir} and~\ref{sec:fair_recsys} we will present different interpretations of merit and exposure that have been used by LtR\xspace methods in information retrieval and recommender systems.
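These exposure-based quantities can be sketched for a single deterministic ranking, where the expectation over $\pi$ in Equation~\ref{eq:exp} collapses to the position bias itself; the logarithmic position bias assumed below is one common choice, not a fixed convention:

```python
import math

def exposure(rank):
    """Logarithmic position bias at a 1-based rank (one common modeling choice)."""
    return 1.0 / math.log2(rank + 1)

def individual_unfairness(ranking, a, b):
    """|Exp(a) - Exp(b)| for two candidates in the same ranking."""
    r = {c: i + 1 for i, c in enumerate(ranking)}
    return abs(exposure(r[a]) - exposure(r[b]))

def group_unfairness(ranking, g1, g2):
    """|average exposure of g1 - average exposure of g2|."""
    r = {c: i + 1 for i, c in enumerate(ranking)}
    avg = lambda g: sum(exposure(r[c]) for c in g) / len(g)
    return abs(avg(g1) - avg(g2))
```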
\section{Summary} The Atacama Large Millimeter Array (ALMA) will consist of up to 64 state-of-the-art sub-mm telescopes, subject to stringent performance specifications which push the boundaries of the technology and make testing of antenna performance a similarly challenging task. Two antenna prototypes were evaluated at the ALMA Test Facility at the Very Large Array site in New Mexico, USA. The dynamic behaviour of the antennas under operational conditions was investigated with the help of an accelerometer system capable of measuring rigid body motion of the elevation structure of the antenna, as well as a few low-order deformation modes, resulting in dynamic performance numbers for pointing stability, reflector surface stability, path length stability, and structure flexure. Special emphasis was given to wind effects, one of the major factors affecting performance on timescales of seconds to tens of minutes. Though the accelerometers could not directly measure antenna performance on timescales longer than 10 seconds, it was possible to use them to derive antenna properties which allowed extrapolation of the wind-affected performance to timescales of 15 minutes and longer. This paper describes the accelerometer system, its capabilities and limitations, and presents the dynamic performance results of the two prototype antennas investigated. In addition to verification of the performance requirements, we investigated the vibration environment on the antennas, relevant for vibration-sensitive equipment for use on the ALMA antennas, the lowest eigenfrequencies of the antennas, and the sensitivity to vibration generated by similar antennas operating nearby. This work shows that seismic accelerometers can successfully be used for characterisation of antenna dynamics, in particular when complemented with simultaneous wind measurements and measurements for static performance. 
The versatility of the accelerometers also makes them a valuable tool for troubleshooting of unforeseen antenna features. \end{abstract} \begin{keywords} ALMA, Antenna measurements, Acceleration measurement, Dynamic response, Dynamics, Millimeter wave antennas, Radio telescope, Wind \end{keywords} \section{Introduction} \label{intro} The enormous growth of radio astronomy in the millimeter and submillimeter wavelength regime (frequencies from 100 - 1000 GHz) over the last 25 years has been made possible both by the emergence of sensitive detectors, like SIS diodes and HEB devices, and by the construction of ever larger and more accurate radio telescopes. Reflector antennas for this wavelength region range from relatively small (up to 15 m diameter), extremely accurate (reflector accuracy \mbox{15 - 25 $\mu$m}) submillimeter telescopes to larger (20 - 45 m) millimeter antennas with surface accuracies of \mbox{75 - 150 $\mu$m}. Current projects in this area are the 50 m diameter Large Millimeter Telescope (LMT) under construction in Mexico \cite{Kaercher2000}, which is aiming to reach \mbox{75 $\mu$m} rms surface accuracy and 1 arcsec pointing accuracy, and the Atacama Large Millimeter Array (ALMA). ALMA is a global collaboration of North America (USA and Canada) and Europe (European Southern Observatory), with contributions from Japan, to build a powerful millimeter wavelength aperture synthesis array on a 5000 m high plateau in Northern Chile. The instrument will consist of up to 64 high accuracy Cassegrain reflector antennas of 12 m diameter with a surface rms accuracy of \mbox{20 $\mu$m} and a pointing and tracking accuracy and stability of 0.6 arcseconds, all under the severe operational conditions of the high site. This telescope will operate over the entire frequency range from 30 to 950 GHz. These specifications are among the most severe ever realised in radio telescopes and force the designer and manufacturer to push the boundaries of the technology. 
At the same time, it is becoming increasingly difficult for the contractor and the customer to quantitatively and reliably evaluate the performance characteristics of these instruments. At the longer wavelengths radio astronomers have developed and used methods of antenna evaluation which are based on the use of strong cosmic radio sources and astronomical observing techniques \cite{Baars1973}. These measurements are only of limited use at the short millimeter wavelengths at frequencies above 100 GHz. The number of suitable cosmic test sources is severely limited because of their intrinsic weakness and the sensitivity limitations of the relatively small antennas. \pubidadjcol For the ALMA Project the partners decided early to obtain two prototype antennas from different design and fabrication groups to increase the chance of achieving the desired performance. The prototype antennas tested here, one designed and constructed by VertexRSI, the other by a consortium of Alcatel and European Industrial Engineering, hereafter referred to as AEC, are similar in overall design and built to meet identical requirements, though significant differences exist in the approaches taken to meet these requirements. The antennas, together with a third one from Japan, are located at the ALMA Test Facility (ATF), on the site of the Very Large Array (VLA) in New Mexico, USA (Figures~\ref{fig:vertexrsi} and \ref{fig:atf}). For more information on the antennas and the evaluation program, see \cite{Mangum2006}. 
\begin{figure} \resizebox{\hsize}{!}{ \includegraphics[scale=0.48,angle=0]{Vertex-astroph.jpg}} \caption{The VertexRSI ALMA prototype antenna.} \label{fig:vertexrsi} \end{figure} \begin{figure} \resizebox{\hsize}{!}{ \includegraphics[scale=0.65,angle=0]{DSC01896-astroph.jpg}} \caption{The ALMA Test Facility, with the AEC antenna (left), VertexRSI antenna (middle) and Mitsubishi antenna (right).} \label{fig:atf} \end{figure} An international team of radio astronomers was formed to subject the antennas to an extensive evaluation program. In preparing their tasks this group determined that additional measurements and test methods and instruments beyond the usual astronomical testing would be needed to check the very stringent specification of the antennas. A particularly important, but difficult to measure quantity is the accuracy with which the antenna can be pointed at arbitrary positions on the sky and the stability with which such a pointing can be maintained under the influence of variations in temperature and wind forces. Given our need to check these parameters independently of the availability of celestial radio sources, we looked into the possibility of using accelerometers on the antenna structure to establish its dynamical behaviour. The use of seismic accelerometers for performance characterization has been successfully demonstrated on optical telescopes \cite{Smith2004} and mm-antennas \cite{Ukita2002}, \cite{Ukita2004}. Using a set of 10 seismic accelerometers, installed on the antenna back-up structure (BUS), subreflector support structure (apex), and receiver cabin, we have measured accelerations allowing determination of rigid body motion of the elevation structure, and a few low-order distortions of the BUS. The nature of the accelerometers used in this work limits accurate displacement measurement to time scales of at most 10 seconds or frequencies of at least 0.1 Hz. 
Since 0.1 Hz is well below the lowest eigenfrequencies of the antennas, this range is sufficient to determine the dynamic antenna behaviour. For the frequency range covered accurately by the accelerometers, approximately 0.1 to 30 Hz, it is possible to check the following antenna specifications: \begin{enumerate} \item Variations in surface shape, restricted to large scale effects like focal length and astigmatism \item Variations in pointing in elevation and cross-elevation direction \item Translation of apex structure with respect to the BUS \item Variations in path length along the boresight direction \end{enumerate} Using detailed long term wind studies, and wind measurements simultaneous with accelerometer measurements, antenna performance can be extrapolated to longer time scales under the assumption that wind effects dominate antenna performance on these time scales. The antenna performance specifications should be met for all modes in which the antenna will be used to perform astronomical observations. For this paper, we have considered \begin{enumerate} \item pointing, where the antenna is commanded to remain fixed at an azimuth and elevation position, \item sidereal tracking, where azimuth and elevation are updated continuously, \item fast switching mode, where the antenna is switched quickly between two neighbouring points, \item interferometric mosaicking, in which areas of sky are mapped at slow speed (0.05 deg/s), \item on-the-fly mapping (OTF), in which large areas of sky are mapped at high scan speed (0.5 deg/s). 
\end{enumerate} \section{Accelerometer setup} \label{setup} We placed 10 accelerometers on the antenna in the following configuration (Figure~\ref{fig:accelconfig}): \begin{itemize} \item 3 accelerometers as a 3-axis sensor on the subreflector structure (A1,2,3) \item 4 accelerometers along the rim of the BUS in boresight direction (A4-A7) \item 3 accelerometers as a 3-axis sensor on the receiver flange invar ring (A8,9,10) \end{itemize} \begin{figure} \resizebox{\hsize}{!}{ \includegraphics[scale=0.3]{accelerometer_locations-astroph.jpg}} \caption{Placement of the accelerometers on the antenna.} \label{fig:accelconfig} \end{figure} The accelerometers used for the tests are Endevco model 86 seismic accelerometers. Together with a multichannel 24 bit A/D converter, the noise properties are as shown in Figure~\ref{fig:accelreadout}, red curve. The noise spectrum in acceleration is roughly constant with frequency in the range 0.1 - 10 Hz, but since the accelerometers will be used to measure displacement, the acceleration needs to be integrated twice. This turns the originally white noise into ``red'' noise, with high power at low frequencies. In Figure~\ref{fig:accelreadout}, the green curve shows the RMS displacement noise as a function of frequency, where the integration has been applied. For frequencies above a few Hz, the measurement accuracy is better than 10 nm, while at 0.1 Hz the accuracy has deteriorated to just below half a $\mu$m. The accelerometers were read out at a frequency of 100 Hz. Accelerometer resonance occurs around 200 - 300 Hz, which required a hardware anti-alias filter that cuts in above 10 Hz. Effectively this allows antenna vibrations up to about 30 Hz to be measured. On the low frequency side, both the accelerometers and the read-out electronics cut off frequencies below approximately 0.007 Hz. Note that the DC or static component cannot be measured, which implies that only position or offset pointing changes can be detected. 
For most practical purposes encountered during antenna testing, antenna vibrations below 0.1 to 0.3 Hz are affected by noise. Within the frequency range 0.1 Hz to 30 Hz the sensitivity for displacements is better than a few micrometers, or sub-arcsecond for pointing. \begin{figure} \resizebox{\hsize}{!}{ \includegraphics[scale=0.40]{equipment_noise-astroph.pdf}} \caption{Accelerometer and read-out equipment noise. The red curve is the spectral noise in m/s$^2$/$\sqrt{\mathrm{Hz}}$, the green curve is the RMS noise in meters. The value at frequency $\nu$ Hz shows the typical RMS noise over timescales of $\frac{1}{\nu}$ seconds.} \label{fig:accelreadout} \end{figure} The measurements are time series of accelerations or properties derived from them. In most cases, this is not particularly informative, partly because of the limited frequency bandwidth of the signal. Therefore the choice was made to present most results in the form of power spectra. Integration of the signal corresponds to division of the power spectrum by frequency, and the RMS deviation from the mean of the time series is simply an integral over the power spectrum. In order to reduce noise in the power spectrum, we can sacrifice low frequency range and frequency resolution by averaging the power spectra of a number of shorter time series instead of one long series. Using this noise reduction, reproducible features show up more clearly in the power spectra. The normalisation for the power spectrum used in this work is with the square of the length $n$ of the time series, and values at negative frequencies are added to those at positive ones: \begin{equation} \hat{A}(\nu) = \frac{|FFT(A(t))|^2}{n^2} \label{eq:aoft} \end{equation} \noindent{with} $A(t)$ a time series with $n$ samples as a function of time, $FFT$ the fast Fourier transform, and $\nu$ the frequency. Note that in the plots in this paper $\sqrt{\hat{A}(\nu)}$ is plotted. The units along the ordinate reflect this. 
RMS stability of a measured parameter is presented as a function of frequency: \begin{equation} RMS(\nu) = \sqrt{\sum_{\nu'=\nu}^{\infty}{\hat{A}(\nu')}} \label{eq:rms} \end{equation} This type of presentation has the property that at zero frequency, the RMS is that of $A(t)$, while at higher frequencies the RMS decreases to include only those contributions from frequencies above and including $\nu$. An oscillation at a given frequency shows up as a jump in the RMS curve at this frequency. In addition to accelerometer and amplifier noise, thermal effects on the accelerometers and small tilt variations of the accelerometers in the gravity field add low-frequency noise to the measurements. In order to obtain pointing and displacement data from the accelerations, the location and orientation of the accelerometers need to be known. This was determined from known and large antenna motions, imposed through the drive system. Through combination of selected accelerometer signals, the following antenna motions could be isolated: \begin{itemize} \item Elevation pointing (top and bottom accelerometers on the rim of the BUS) \item Cross-elevation pointing (left and right accelerometers on the rim of the BUS) \item Boresight motion of BUS (4 accelerometers on the rim of the BUS, one on receiver flange) \item Path length stability (boresight motion of BUS plus accelerometer on apex structure) \item ``Focal length'' stability or ``defocus'' or ``apex axial motion'' of BUS (4 accelerometers on the rim of the BUS, one on receiver flange) \item ``+'' astigmatism of BUS (4 accelerometers on the rim of the BUS) \end{itemize} The accelerometers on the BUS are configured to measure deformations which can be described with low order Zernike polynomials \cite{Zernike1934}.
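Given such a folded power spectrum, the RMS presentation of Eq.~(\ref{eq:rms}) is a reverse cumulative sum (again an illustrative sketch assuming NumPy):

```python
import numpy as np

def rms_vs_frequency(psd):
    """Eq. rms: the RMS at a given bin includes all spectral power at
    that frequency and above; the first bin gives the total RMS."""
    return np.sqrt(np.cumsum(psd[::-1])[::-1])
```

An oscillation concentrated in one bin appears as a jump of the curve at that frequency, as described in the text.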
Path length changes, caused by motion of the entire BUS as a rigid body along the boresight direction, can be described as the (n=0, m=0) $Z_0^0$ ``piston'' term; changes in elevation pointing as the (n=1, m=1) $Z_1^1$ ``tilt'' term; changes in cross-elevation pointing as the (n=1, m=-1) $Z_1^{-1}$ ``tip'' term; boresight motion of the rim of the BUS with respect to the receiver flange, considered to be representative for the vertex of the antenna, as the (n=2, m=0) $Z_2^0$ ``defocus'' term; boresight deformation of the left and right side of the rim of the BUS in the opposite direction to the top and bottom sides of the BUS, as the (n=2, m=2) $Z_2^2$ ``+'' astigmatism term. Due to the limited number of accelerometers on the rim of the BUS, the (n=2, m=-2) $Z_2^{-2}$ ``$\times$'' astigmatism term cannot be measured. \section{Test Conditions} Environmental conditions have a significant influence on the performance of the antenna. On the time scales relevant for the accelerometer measurements, the major environmental effects are caused by the wind. To draw meaningful conclusions from the measurements, it was thus necessary to carefully characterise the local wind through wind measurements both independently of and simultaneously with the accelerometer measurements. In addition to variable wind conditions, the thermal environment affects antenna performance. Since thermal effects of the antenna are on timescales longer than those accurately measurable with the accelerometers, they will not be considered for the measurements presented here \cite{Greve2006}. \subsection{Wind Conditions} Wind data at the ATF were recorded over a period of more than a year, sampled continuously at 10 Hz with a sonic anemometer, placed some 30 m north of the antennas. The power spectrum of the wind speed over intervals of approximately 1000 seconds was determined as a function of average wind speed and direction. The local terrain and building geometry affect the wind power spectrum.
Overall, the terrain is reasonably flat out to more than 10 km in any direction. Seen from the anemometer, the VLA control building is located a few hundred meters towards the west, the Mitsubishi prototype antenna about 30 m to the south-west, the VertexRSI prototype antenna about 30 m to the south-south-east, and the AEC prototype antenna about 50 m to the south-east. The wind power spectrum has a typical slope of $\nu^{-1.5}$, where $\nu$ is the frequency. From theory, an exponent of $-\frac{5}{3}$ is expected for the microscale range, while an exponent of -1 is expected for the mesoscale turbulence range. The measured exponent is consistent with theory, showing a transition between micro- and mesoscale turbulence. The effect of obstructions on the power spectrum seems to increase the high frequency power in the spectrum, while keeping the same exponent. In addition, the low frequency part is lowered and the exponent is decreased towards lower frequencies. The observed transition between low and high frequency is around 0.1 Hz. Figure~\ref{fig:windpowerspectra} shows the wind power spectra, normalised with the typical undisturbed wind spectrum, for 16 equally spaced wind directions, starting at the north and stepping through the east. Only wind speeds over 2.5 m/s were used. \begin{figure} \resizebox{\hsize}{!}{ \includegraphics[scale=0.4]{wind_psd_16dirs.pdf}} \caption{Wind power spectra, normalised with the typical undisturbed wind spectrum. The 16 curves show equally spaced wind directions, centered on the sonic anemometer, starting at north (top) and increasing clockwise. Each subsequent spectrum is shifted by a factor 0.5 for clarity. Only wind speeds over 2.5 m/s were used.} \label{fig:windpowerspectra} \end{figure} Figure~\ref{fig:windpowernorm} shows the normalised power spectra of Figure~\ref{fig:windpowerspectra} averaged over the frequency range between 0.3 and 2.0 Hz, plotted as a function of azimuth. 
The shape of the curve can easily be explained by the positions of the antennas and the VLA control building. \begin{figure} \resizebox{\hsize}{!}{ \includegraphics[scale=0.4]{wind_directional_dependence64.pdf}} \caption{Normalised wind power spectra averaged over 0.3 - 2.0 Hz, as a function of wind direction, as seen from the sonic anemometer. The peak between 90 and 180 degrees is the combined effect of the AEC and VertexRSI antenna, the peak between 180 and 240 is due to the Mitsubishi antenna, and the peak around 250 degrees is due to the VLA control building. The red curve shows the measured turbulence component, while the green curve shows an analytical approximation composed of 4 gaussian components matched to the observed peaks. The 4 individual components are shown as blue lines.} \label{fig:windpowernorm} \end{figure} The wind power spectra show distinct features depending on the wind direction. The sonic anemometer can clearly distinguish the VLA control building, Mitsubishi antenna, the clearance between the Mitsubishi and VertexRSI antennas, and the combined effect of the VertexRSI and AEC antennas. This directional dependence complicates interpretation of the wind results. Because any obstruction will be at different azimuths for the antenna and the sonic anemometer, they will be subjected to a different wind spectrum. Three of the four peaks in Figure~\ref{fig:windpowernorm} show the wake turbulence of an antenna. This information is used to correct the wind spectra as experienced by the antennas in order to predict performance for undisturbed wind conditions, but can also be used to predict performance for an antenna in the compact configuration of the ALMA array, where the antennas are only a few tens of meters apart. The wind power spectra are needed to scale the antenna vibration power spectra to that for a reference wind spectrum. 
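Such a scaling can be illustrated by characterising each measured wind spectrum with a log-log power-law fit; a minimal sketch in Python (the helper name and default band are ours; the 0.2 - 1.0 Hz interval is the band, discussed below, where the slope is well-defined and insensitive to upwind obstructions):

```python
import numpy as np

def band_level_and_slope(freqs, psd, lo=0.2, hi=1.0):
    """Fit psd ~ c * nu**s over the normalisation band; the level c can
    then be used to scale a measured spectrum to a reference spectrum."""
    m = (freqs >= lo) & (freqs <= hi)
    s, logc = np.polyfit(np.log10(freqs[m]), np.log10(psd[m]), 1)
    return 10.0 ** logc, s
```

For an undisturbed spectrum the fitted slope should come out close to the $\nu^{-1.5}$ behaviour quoted above.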
Since for short (typically 15 minutes) measurements the error bars on a single power spectrum can be large, and the natural variations in the wind can throw off an observed power spectrum far away from the average, the following wind spectral features were used to treat wind effects. \begin{enumerate} \item Effects of upwind obstructions change the shape of the power spectrum, but above frequencies of approximately 0.2 Hz the result is effectively a change in the level of the power spectrum, while the slope remains unchanged and close to the theoretically expected value (Figure~\ref{fig:windpowerspectra}). \item For frequencies above approximately 1 Hz, effects of aliasing become visible, introducing errors in the measured slope. \end{enumerate} These effects combined leave a frequency interval between 0.2 and 1.0 Hz where the slope of the power spectrum is reasonably well-defined, and independent of upwind obstructions. This interval is used for normalisation of the power spectra, instead of the average wind speed, which may be affected by obstructions and stochastic effects. Prediction of low frequency wind effects was done using the average power spectra calculated from all available wind data, and extrapolation from the 0.2 - 1.0 Hz range. Known obstructions (Figure~\ref{fig:windpowernorm}, and a modification of it to reflect the obstructions as seen by the antennas) and variation of the power spectra with wind speed were taken into account. \subsection{Wind Scaling} \label{windscaling} The dynamic wind pressure on the structure is a function of frequency. For a single point, the pressure can be directly coupled to the power spectrum of the square of the wind speed. For extended structures spatial averaging will occur. 
Thus, for turbulence of a geometric scale smaller than the typical dimension of the structure, the effective pressure is less than for turbulence of larger scale, given the same power spectral density: the antenna serves as a mechanical low-pass filter. The averaging effect of the dynamic wind pressure can be described with the aerodynamic admittance function (AAF) of the form \begin{equation} AAF(\nu) = \frac{1}{1+\left(2\nu\frac{L}{U}\right)^{\frac{4}{3}}} \label{eq:aaf} \end{equation} \noindent{where} $L$ is the typical scale of the structure, and $U$ is the average wind speed \cite{Davenport1961}. Since the average wind speed varies from measurement to measurement, the AAF also varies. The (static) wind force on a structure is given by \begin{equation} F_{wind}= \frac{1}{2}\cdot C_d \cdot L^2 \cdot \rho \cdot v^2 \label{eq:stat_wind} \end{equation} \noindent{where} $C_d$ is the aerodynamic drag coefficient, $\rho$ the air density, and $v$ the wind speed. Since the antennas are fixed to the ground, the wind force will bend the structure, where the stiffness $k$ of the structure determines the amount of flexure $x$: \begin{equation} F_{wind}= - k \cdot x \label{eq:stiffness} \end{equation} Combination of equations \ref{eq:stat_wind} and \ref{eq:stiffness} yields: \begin{equation} x = - \frac{C_d \cdot L^2 \cdot \rho \cdot v^2 }{2 \cdot k} \end{equation} This is assumed to be valid for any antenna deformation or motion, and to hold for any frequency, and thus for the entire power spectral density (PSD) curve: \begin{equation} PSD_{antenna}(\nu) = PSD_{wind\,speed}(\nu) \cdot H(\nu) \cdot AAF(\nu) \cdot \rho \label{eq:psd} \end{equation} \noindent{which} defines $H(\nu)$ as the transfer function between wind power spectrum and antenna motion power spectrum. The drag coefficient $C_d$ as well as the effective area $L^2$ of the antenna, and thus the transfer function $H(\nu)$, depend on the orientation of the antenna in azimuth and elevation relative to the wind.
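Equations (\ref{eq:aaf}) and (\ref{eq:psd}) translate directly into code (an illustrative sketch; in practice $H(\nu)$ comes from the measured transfer functions, and the example values $L=12$ m, $U=9$ m/s below are only for demonstration):

```python
import numpy as np

def aaf(nu, L, U):
    """Aerodynamic admittance function, Eq. aaf: spatial averaging of the
    dynamic wind pressure over a structure of scale L in mean wind U."""
    return 1.0 / (1.0 + (2.0 * nu * L / U) ** (4.0 / 3.0))

def antenna_psd(wind_psd, nu, H, L, U, rho):
    """Eq. psd: antenna-motion PSD from the wind-speed PSD via the
    transfer function H and the admittance function."""
    return wind_psd * H * aaf(nu, L, U) * rho
```

The admittance equals 1 at zero frequency and falls off once the turbulence scale $U/(2\nu)$ drops below the structure scale $L$, implementing the mechanical low-pass behaviour described above.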
\section{Analysis Methods} \label{methods} The accelerometer sensitivity was calibrated against gravity around 0.1 Hz, using the antenna to tilt the accelerometers in a controlled way. The sensitivity and frequency response of the read-out electronics were calibrated using a switchable resistor, with which a step-function was fed to the electronics. All relevant measurement data were corrected for accelerometer and electronic effects. Signals were combined in the time domain, after which the power spectra were calculated. Frequency response corrections were done in the frequency domain. Double integration of the accelerometer signals, needed to obtain displacement and pointing information, was achieved through division in the frequency domain by $(2 \pi \nu ) ^2$. See the Appendix for details. \subsection{Extrapolation to Lower Frequencies} The accelerometers are limited in the lowest frequency at which displacements can be measured accurately. Depending on the size of the displacements measured by the accelerometers, and canceling of (tilt-)noise terms in the analysis, the lowest frequency varies between 0.08 and 1 Hz. This frequency limit is well below the lowest eigenfrequency of the antennas, which lies at 6-7 Hz. From dynamic structure behaviour, a constant stiffness is expected at frequencies well below the lowest eigenfrequency. This means that the dynamic stiffness of the antenna should reach a constant value in the frequency range between the accelerometer low noise limit and the lowest eigenfrequency. This would show in the plots of the transfer functions $H$ as a flattening of the curve towards lower frequencies. This flattening is seen in all measurements of wind-excited motion. Thus it is possible to determine the (constant) value of the low frequency stiffness from the flat part in the transfer functions, and extrapolate this constant value to lower frequencies, replacing the values in the transfer function affected by accelerometer noise.
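The frequency-domain double integration can be sketched as follows (illustrative; the `f_min` cut-off stands in for the noise-limited band discussed above, and the DC bin is dropped since the static component cannot be measured):

```python
import numpy as np

def displacement_from_acceleration(accel, dt, f_min=0.05):
    """Double integration by dividing the Fourier transform by
    -(2*pi*nu)**2; bins below f_min (including DC) are zeroed."""
    n = len(accel)
    A = np.fft.fft(accel)
    nu = np.fft.fftfreq(n, d=dt)
    X = np.zeros_like(A)
    keep = np.abs(nu) >= f_min
    X[keep] = -A[keep] / (2.0 * np.pi * nu[keep]) ** 2
    return np.real(np.fft.ifft(X))
```

The minus sign follows from $\ddot{x}(t) \leftrightarrow -(2\pi\nu)^2 X(\nu)$, so a sinusoidal displacement is recovered exactly from its second derivative.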
The scatter in the flat section of the transfer function is used to determine the extrapolation uncertainty, which typically is 6-9\%. Subsequently, the extrapolated transfer function is used to calculate antenna performance for the assumed or measured wind spectrum, and the aerodynamic admittance function for 9 m/s wind, as required by the specifications. \subsection{Pointing Accuracy} The simplest operational mode for the antenna is pointing to a fixed azimuth and elevation, whereby the drives are powered and the brakes are released. In this mode wind effects are best investigated, since any effects of antenna shake due to position updates from the control system are minimal. During sidereal tracking the antenna drives are constantly updating the azimuth and elevation positions, including the azimuth and elevation speed, in order to achieve a smooth tracking motion. Updates are performed at 48 ms intervals, corresponding to a frequency of 20.83 Hz. Antenna motion at frequencies below half this value can be controlled by the drive system, while higher frequencies can only be excited by the drives but not actively corrected. Wind-induced pointing jitter was investigated for 11 wind-oriented azimuth/elevation combinations, and sidereal tracking was checked for 30 azimuth/elevation combinations spread evenly over the sky. Calm wind conditions were chosen for the sidereal tracking measurements, in order to clearly separate effects due to the drive system from those due to wind. The total pointing accuracy for the antenna is the quadratic sum of the pointing for high wind conditions without tracking, and the pointing for sidereal tracking during low wind conditions. The 4 accelerometers on the rim of the BUS were used for deriving the BUS pointing accuracy. The positions for the 4 accelerometers are at the top, right, bottom and left of the BUS, when looking into the antenna at low elevation.
Differential signals between two accelerometers were used to eliminate the large gravity signal common to all accelerometers, which is a function of elevation. The cross-elevation and elevation pointing $\Delta_{XEL}$ and $\Delta_{EL}$ are calculated as follows: \begin{eqnarray} \alpha_{XEL} &=& \frac{a_l - a_r}{x_l - x_r} \\ \alpha_{EL} &=& \frac{a_t - a_b}{y_t - y_b} \\ \Delta_{XEL} &=& \int\int \alpha_{XEL} d^2t \\ \Delta_{EL} &=& \int\int \alpha_{EL} d^2t \label{eq:pointing1} \end{eqnarray} \noindent{where} $\alpha_{XEL}$ is the cross-elevation angular acceleration, $\alpha_{EL}$ is the elevation angular acceleration, and $a_t$, $a_r$, $a_l$, and $a_b$ are the top, right, left, and bottom accelerometer signals, respectively. The variables $x$ and $y$ are the positions of the accelerometers in coordinates along the elevation axis and perpendicular to the elevation axis, respectively, with the assumed intersection of elevation axis and azimuth axis as the origin. The integration is over the time coordinate $t$. The total pointing error is given by: \begin{equation} \Delta_{TOT} = \sqrt{\Delta^2_{XEL} + \Delta^2_{EL}} \label{eq:pointing2} \end{equation} \noindent{which} assumes uncorrelated pointing jitter in elevation and cross-elevation. The accelerometer signals are not significantly affected by noise and other errors down to a frequency of about 0.1 - 0.2 Hz. Below this frequency, the apparent pointing error increases faster than expected from the wind power spectrum and conservative assumptions on the dynamic behaviour of the antenna structure. In combination with optical pointing telescope (OPT) or radiometer measurements, it is possible to measure pointing behaviour for frequencies lower than 0.1 Hz (see Figure~\ref{fig:OPT_wind_stiffness}). The transfer functions show a flat section below the lowest eigenfrequency.
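Equations (\ref{eq:pointing1}) and (\ref{eq:pointing2}) can be sketched as follows (illustrative NumPy; a crude time-domain cumulative sum stands in for the frequency-domain double integration used in the analysis):

```python
import numpy as np

def total_pointing_error(a_t, a_r, a_b, a_l, x_l, x_r, y_t, y_b, dt):
    """Cross-elevation and elevation angular accelerations from
    differential rim accelerometers, doubly integrated and combined
    in quadrature (Eq. pointing2)."""
    alpha_xel = (a_l - a_r) / (x_l - x_r)
    alpha_el = (a_t - a_b) / (y_t - y_b)
    ii = lambda s: np.cumsum(np.cumsum(s) * dt) * dt  # crude double integral
    return np.sqrt(ii(alpha_xel) ** 2 + ii(alpha_el) ** 2)
```

Differencing opposing sensors cancels the common gravity term, as noted above.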
As explained above, this is expected from dynamic antenna behaviour and allows extrapolation of the antenna properties to lower frequencies, as well as the use of the OPT which follows the bulk motion of the reflector in this frequency range. This has been done in the corresponding plots, where the specified wind spectrum has been used to predict antenna properties for time periods up to 15 minutes (0.001 Hz). Encoder data recorded simultaneously with the wind and accelerometer data were used to calculate the encoder errors, defined as the measured encoder read-out minus the commanded antenna position. Azimuth encoder errors were converted to cross-elevation errors through multiplication with the cosine of the elevation. \subsection{Primary Reflector Deformation} The combined accelerometer signals measured at the rim of the BUS and at the receiver flange give information about some low-order deformations of the BUS, and thus about the accuracy of the primary reflector surface. Numbers for the deformations are expressed as boresight components of the deformation at the location of the rim of the BUS. No averaging over the entire reflector surface is attempted. BUS ``+'' astigmatism is defined as follows: \begin{equation} Astig \equiv \int\int \left(a_t + a_b - a_r - a_l\right) d^2t \label{eq:astig} \end{equation} \noindent{and} apex axial motion ($AAM$) is given by: \begin{equation} AAM \equiv \int\int \left[\frac{a_t + a_r + a_b + a_l}{4} - a_{rf}\right] d^2t \label{eq:focus} \end{equation} \noindent{where} $a_{rf}$ is the accelerometer signal measured on the receiver flange and in boresight direction.
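Equations (\ref{eq:astig}) and (\ref{eq:focus}) follow the same differencing-and-integrating pattern as the pointing combinations; a minimal sketch (again with a crude time-domain double integration in place of the frequency-domain one):

```python
import numpy as np

def bus_deformations(a_t, a_r, a_b, a_l, a_rf, dt):
    """BUS ``+'' astigmatism (Eq. astig) and apex axial motion
    (Eq. focus) from rim and receiver-flange accelerometer signals."""
    ii = lambda s: np.cumsum(np.cumsum(s) * dt) * dt
    astig = ii(a_t + a_b - a_r - a_l)
    aam = ii((a_t + a_r + a_b + a_l) / 4.0 - a_rf)
    return astig, aam
```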
\subsection{Path length variations} Path length changes measured with the accelerometers reflect the boresight motion of the BUS in the inertial coordinate system, plus the effects of distance changes between the apex and receiver flange: \begin{eqnarray} \omega_{xel} &=& \int \alpha_{xel} dt \\ \omega_{el} &=& \int \alpha_{el} dt \\ \alpha_{cf} &=& -\left(\omega^2_{xel} + \omega^2_{el}\right) \cdot Z_{apex} \\ a_{BUS} &=& \frac{a_t + a_r + a_b + a_l + 4a_{rf}}{8} \\ pl &=& \int\int \left[2\left(a_{apex} - \alpha_{cf}\right) - a_{BUS}\right] d^2t \label{eq:pathlength} \end{eqnarray} where $Z_{apex}$ is the distance between the apex boresight accelerometer and the azimuth and elevation axes, $a_{apex}$ is the boresight acceleration component at the apex, and $\alpha_{cf}$ is the centrifugal acceleration at the apex due to elevation and cross-elevation motion. The path length calculated this way represents the total optical path length assuming rigid body motion of the BUS and receiver flange, and allowing for boresight displacements of the apex structure. Since the accelerometers measure in the inertial system, and are aligned with the antenna boresight, there is no need to refer to the ground coordinate system or measure mount and yoke path length variations. \subsection{Structural flexure} \label{flexure} Combination of encoder and accelerometer pointing information allows for calculation of antenna structure deformation. The total deformation of the structure between the encoders and the accelerometers on the rim of the BUS was determined by integrating the angular accelerations measured at the BUS to obtain a time series of the angle. Small corrections for timing differences between the encoder and accelerometer equipment were applied.
Encoder read-out (AZ$_{enc}$ and EL$_{enc}$, with the sine and cosine of EL$_{enc}$, $\sin_{el}$ and $\cos_{el}$) and the local acceleration $g$ due to gravity are used to calculate the expected accelerations at any point of the antenna with coordinates $x,y,z$. The origin is the intersection of the azimuth and elevation axes; the coordinate system is fixed to the reflector, with $x$ along the elevation axis, $y$ upward, and $z$ along the optical axis for elevation $=0$. The axis of measurement of an accelerometer points in the direction $\overrightarrow{dir}$: \begin{eqnarray} \omega_{enc,az} &=& \frac{d AZ_{enc}}{dt} \\ \omega_{enc,el} &=& \frac{d EL_{enc}}{dt} \\ \alpha_{enc,az} &=& \frac{d \omega_{enc,az}}{dt} \\ \alpha_{enc,el} &=& \frac{d \omega_{enc,el}}{dt} \end{eqnarray} \begin{eqnarray} \overrightarrow{g} &=& g \cdot \begin{pmatrix} 0 \\ -\cos_{el} \\ -\sin_{el} \end{pmatrix} \end{eqnarray} \begin{eqnarray} \overrightarrow{cf_{el}} &=& \omega^2_{enc,el} \cdot \begin{pmatrix} 0 \\ y \\ z \end{pmatrix} \end{eqnarray} \begin{eqnarray} \overrightarrow{cf_{az}} &=& \omega^2_{enc,az} \cdot \begin{pmatrix} x \\ y \cdot \sin^2_{el} - z \cdot \sin_{el} \cdot \cos_{el} \\ -y \cdot \sin_{el}\cos_{el} + z \cdot \cos^2_{el} \end{pmatrix} \end{eqnarray} \begin{eqnarray} \overrightarrow{ang_{el}} &=& \alpha_{enc,el} \cdot \begin{pmatrix} 0 \\ z \\ y \end{pmatrix} \end{eqnarray} \begin{eqnarray} \overrightarrow{ang_{az}} &=& \alpha_{enc,az} \cdot \begin{pmatrix} y \cdot \sin_{el} - z \cdot \cos_{el} \\ x \cdot \sin_{el} \\ x \cdot \cos_{el} \end{pmatrix} \end{eqnarray} \begin{eqnarray} \overrightarrow{accel} &=& \overrightarrow{g} + \overrightarrow{cf_{el}} + \overrightarrow{cf_{az}} + \overrightarrow{ang_{el}} + \overrightarrow{ang_{az}} \end{eqnarray} \begin{eqnarray} sig_{accel} &=& \overrightarrow{accel} \cdot \begin{pmatrix} dir_x \\ dir_y \\ dir_z \end{pmatrix} \label{eq:flexure} \end{eqnarray} The centrifugal accelerations due to elevation and azimuth slews are given by
$\overrightarrow{cf_{el}}$ and $\overrightarrow{cf_{az}}$ respectively, and the accelerations due to angular accelerations in elevation and azimuth are given by $\overrightarrow{ang_{el}}$ and $\overrightarrow{ang_{az}}$ respectively. The expected acceleration thus calculated is then convolved with the time response $TR$ of the read-out equipment, to obtain a time series of expected accelerations which can be directly compared to the measured accelerations: \begin{equation} \alpha_{expected} = sig_{accel} \ast TR \label{eq:accelexp} \end{equation} \noindent{where} $\ast$ denotes convolution, and $\alpha_{expected}$ is the expected bandwidth filtered accelerometer signal. Both measured and expected accelerometer signals are treated in the same way to calculate antenna pointing. The data are resampled to a common time grid, small corrections for accelerometer sensitivity and integration constants are applied to make the expected and measured antenna pointing match, and the difference between the measured and expected pointing is plotted together with a scaled curve of the angular acceleration during the fast motion of the antenna. The scaling factor required to make the acceleration curve match to the structure flexure curve is the measure of flexure of the antenna structure. Since the flexure due to an inertial acceleration scales with the acceleration and the mass of the antenna, and gravitational flexure of the antenna also scales with the mass of the antenna, the numbers for the structure flexure give an indication of generic structure stiffness against gravitational effects as well. \section{Results} The measurement results obtained at the ATF site were used to derive environmentally independent antenna properties where possible. 
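The expected-signal computation culminating in Eq.~(\ref{eq:flexure}) can be sketched as below (an illustrative reconstruction: finite differences replace the actual derivative estimates, and the convolution with the equipment response $TR$ of Eq.~(\ref{eq:accelexp}) is omitted):

```python
import numpy as np

def expected_accel(az_enc, el_enc, dt, pos, direction, g=9.81):
    """Expected accelerometer signal for a sensor at pos=(x, y, z),
    measuring along `direction`, from encoder time series; gravity,
    centrifugal and angular-acceleration terms follow the equations."""
    x, y, z = pos
    d = lambda sig: np.gradient(sig, dt)          # finite-difference derivative
    w_az, w_el = d(az_enc), d(el_enc)             # angular rates
    a_az, a_el = d(w_az), d(w_el)                 # angular accelerations
    s, c = np.sin(el_enc), np.cos(el_enc)
    acc = np.empty((len(az_enc), 3))
    acc[:, 0] = w_az**2 * x + a_az * (y * s - z * c)
    acc[:, 1] = (-g * c + w_el**2 * y
                 + w_az**2 * (y * s**2 - z * s * c)
                 + a_el * z + a_az * x * s)
    acc[:, 2] = (-g * s + w_el**2 * z
                 + w_az**2 * (-y * s * c + z * c**2)
                 + a_el * y + a_az * x * c)
    return acc @ np.asarray(direction)
```

For a stationary antenna at elevation 0, only the gravity term survives, so a sensor measuring along $y$ simply reads $-g$.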
The Statement of Work and Specifications for the ALMA prototype antennas (SoW) specifies antenna performance for the environmental conditions at the ALMA site, where in particular the wind power spectrum and air density are of relevance for this investigation. Using the transfer functions derived at the ATF site, antenna performance was calculated for conditions representative of the ALMA site, as defined in the SoW. The main differences between the ATF and SoW conditions are in air pressure (SoW: 550 mBar), wind speed (ATF: variable between 0.05 m/s and exceeding 20 m/s, SoW: 9 m/s) and wind power spectrum (SoW: more power at lower frequencies than observed at ATF). \subsection{Pointing} Table \ref{tab:accelpoint} summarises the dynamic pointing performance of the AEC and VertexRSI antennas. Measurement noise is typically sub-$\mu$m scale at 0.1 Hz, and several nm at 1 Hz. Uncertainties in the results are generated by the variation of the measurement results over different elevation and azimuth positions, uncertainties in the determination of the wind power spectrum as experienced by the antennas at the time of measurement, and extrapolation to lower frequencies. Where applicable, these three individual contributions to the measurement results are shown in Table~\ref{tab:accelpoint}.
\begin{table*} \centering \caption{Pointing} \begin{tabular}{|l|c|c|} \hline Pointing type & VertexRSI & AEC \\ \hline stationary, windy conditions & $0.81~\pm0.24^a~\pm0.05^b~\pm0.20^c$ arcsec& $0.45~\pm0.10^a~\pm0.02^b~\pm0.11^c$ arcsec \\ sidereal tracking, no wind & $0.47~\pm~0.11^a$ arcsec & $0.22~\pm~0.08^a$ arcsec \\ tracking, windy conditions, 0.1 Hz & $0.58~\pm~0.15^a~\pm~0.08^c$ arcsec & $0.29~\pm~0.09^a~\pm~0.05^c$ arcsec \\ tracking, windy conditions, 0.001 Hz & $0.94~\pm~0.26^a~\pm~0.05^b~\pm~0.20^c$ arcsec & $0.50~\pm~0.13^a~\pm~0.02^b~\pm~0.11^c$ arcsec \\ on-the-fly, 0.5 deg/s, 1 Hz & $1.7~\pm~0.7$ arcsec & $0.5~\pm~0.3$ arcsec \\ on-the-fly$^d$, 0.5 deg/s, 1 Hz & & $0.372~\pm~0.015$ arcsec \\ on-the-fly, 0.05 deg/s, 1 Hz & $0.8~\pm~0.5$ arcsec & $0.231~\pm~0.007$ arcsec \\ \hline \multicolumn{3}{l}{$^a$~Spread over different azimuth/elevation combinations.} \\ \multicolumn{3}{l}{$^b$~Extrapolation error for the transfer function.} \\ \multicolumn{3}{l}{$^c$~Uncertainty in wind power spectrum determination.} \\ \multicolumn{3}{l}{$^d$~Ignoring AEC apex rotation.} \\ \end{tabular} \label{tab:accelpoint} \end{table*} \subsubsection{Stationary pointing} Figure~\ref{fig:vertex-pttfunc} shows the transfer functions H between wind PSD and BUS pointing PSD for the VertexRSI antenna, as defined in \S\ref{windscaling}. The red and green curves show elevation and cross-elevation pointing respectively. The low-frequency values of the red and green curves, here shown for frequencies above 1 Hz, show the expected behaviour below the lowest eigenfrequency and can be extrapolated to 0 Hz. Similar curves and behaviour are observed for the AEC antenna, though not shown here. \begin{figure} \resizebox{\hsize}{!}{ \includegraphics[scale=0.35]{point_PSD_Vertex.pdf}} \caption{VertexRSI antenna elevation (red) and cross-elevation (green) pointing transfer functions for wind excitation. 
The blue and magenta curves show the encoder error transfer functions for elevation and cross-elevation respectively.} \label{fig:vertex-pttfunc} \end{figure} Wind-induced pointing jitter is dominated for both antennas by elevation motion. Encoder errors do not exceed 0.14 arcsec for the VertexRSI antenna, and remain below 0.07 arcsec for the AEC antenna. Both antennas show azimuth-dependent pointing performance, where pointing jitter is larger when the antenna is looking into the wind, and smallest for wind coming from sideways-behind. These results most likely reflect the smaller projected area of the antenna when viewed from the side, and the higher drag coefficient of the ``cup'' formed by the primary mirror and BUS when viewed from the front. \subsubsection{Sidereal Tracking} Tracking jitter of the VertexRSI antenna is largely due to elevation motion, with a large contribution in the 3-6 Hz range. Total tracking jitter over timescales of 10 seconds amounts to $0.47~\pm~0.11$ (spread) arcsec on average. The largest jitter is observed for low elevation in the south-east and south-west, and minimum tracking jitter is seen while crossing the meridian. Total tracking jitter for the AEC antenna, over timescales of 10 seconds, amounts to $0.22~\pm~0.08$ (spread) arcsec on average. As for the VertexRSI antenna, the sidereal tracking jitter depends on the antenna pointing. \begin{figure} \resizebox{\hsize}{!}{ \includegraphics[scale=0.35]{point_track_PSD_Vertex_AEC_new.pdf}} \caption{RMS tracking jitter for VertexRSI and AEC antennas. The extrapolated wind-induced jitter for conditions specified in the SoW is shown as red (VertexRSI) and green (AEC) curves. Pointing stability during sidereal tracking is given by the blue (VertexRSI) and magenta (AEC) curves.
For sidereal tracking under windy conditions, both curves must be combined.} \label{fig:trackPSD} \end{figure} \subsubsection{Combined Wind and Tracking} The pointing jitter due to wind and due to sidereal tracking are assumed to be uncorrelated, in which case the cumulative pointing error curves can be added quadratically. A check against an independent measurement with high wind and sidereal tracking showed that this assumption is valid. Since no tracking jitter at 0.001 Hz could be measured, a conservative value equal to the observed tracking jitter at 0.1 Hz is used. Figure~\ref{fig:trackPSD} shows the wind-induced and sidereal tracking jitter for both antennas. At 0.1 Hz, the combined wind and tracking jitter was calculated from direct measurements, while at 0.001 Hz the pointing jitter was derived from the extrapolated transfer function. The red and green curves show the wind-induced pointing jitter for the VertexRSI and AEC antennas respectively, and the blue and magenta curves show the sidereal tracking stability. \subsubsection{Pointing During OTF} During OTF scanning the time between switches in scan direction was typically 10 seconds. With a few seconds settling time for the antenna after each switch, only a few seconds remained during which representative measurements could be taken. This limited the effective lowest frequency at which antenna performance could be calculated to 1 Hz. For the VertexRSI antenna, the azimuth and elevation at which the scan was performed had a large impact on the pointing stability during the scan. On the other hand, the AEC antenna measurements were affected for some parts of the scan by apex rotation feeding back to the BUS motion, an effect which died out about a minute into the scan. Both effects resulted in a large spread of the measurement results, represented in the standard deviation stated in Table~\ref{tab:accelpoint}.
\subsection{BUS deformations} \begin{table*} \centering \caption{BUS Deformation} \begin{tabular}{|l|c|c|} \hline Type of deformation & VertexRSI & AEC \\ \hline astigmatism, wind-induced, 0.001 Hz & 5.3~$\mu$m & 6~$\mu$m \\ astigmatism, 1.5 - 10 s after fast-switch & 2~$\mu$m peak-to-peak & 15~$\mu$m peak-to-peak \\ astigmatism, OTF 0.5 deg/s, 1 Hz & $0.9~\pm~0.3$~$\mu$m & $4~\pm~2$~$\mu$m \\ astigmatism, OTF$^a$ 0.5 deg/s, 1 Hz & & $3.19~\pm~0.09$~$\mu$m \\ astigmatism, OTF 0.05 deg/s, 1 Hz & $0.27~\pm~0.09$~$\mu$m & $2.2~\pm~0.9$~$\mu$m \\ AAM, wind-induced, 0.001 Hz & 2.2~$\mu$m & 5~$\mu$m \\ AAM, OTF, 0.5 deg/s, 1 Hz & $1.9~\pm~0.6$~$\mu$m & $2 - 20$~$\mu$m \\ AAM, OTF$^a$, 0.5 deg/s, 1 Hz & & $1.94~\pm~0.08$~$\mu$m \\ AAM, OTF, 0.05 deg/s, 1 Hz & $1.0~\pm~0.3$~$\mu$m & $1.4~\pm~0.2$~$\mu$m \\ AAM and astigmatism, tracking induced, 0.1 Hz & $< 1$~$\mu$m & $< 1$~$\mu$m \\ \hline \multicolumn{3}{l}{$^a$~Ignoring AEC apex rotation.} \\ \end{tabular} \label{tab:busdeform} \end{table*} Table~\ref{tab:busdeform} summarises the results for deformation of the BUS. For both antennas, surface stability is dominated by the stiffness of the BUS for wind excitation, and apex axial motion and astigmatism are each below 1 $\mu$m RMS for sidereal tracking. During a fast switch of antenna pointing, the accelerometers show a deformation of the BUS rim of nearly 1.4 mm astigmatism peak-to-peak for the VertexRSI antenna, and nearly 4 mm for the AEC antenna. During a fast switch, no astronomical data is recorded, so deformation of the BUS is no concern, as long as recovery of the shape is fast enough after the antenna pointing has stabilised. Astigmatism recorded 1.5 seconds after the fast switch started, up to the next switch 8.5 seconds later, remains well below 2 $\mu$m peak-to-peak for the VertexRSI antenna, and below 15 $\mu$m peak-to-peak for the AEC antenna, after removal of a low order polynomial. 
The polynomial removes the large noise component in the accelerometer signals at the lowest frequencies. In this case, the crude removal of the noise component is justified, but the resulting number for the remaining astigmatism should only be used as an order of magnitude indication. Five seconds after a fast switch for the AEC antenna, the astigmatism had died down to typically 1 $\mu$m peak-to-peak. The reason for the large peak-to-peak variation is the 5 Hz resonance of the apex structure which takes some time to die out. During apex rotation, the BUS is deformed by the bending of the feed legs that drives the apex rotation. BUS astigmatism during the fast OTF scan, with 0.5 deg/s scan rate, averages to $0.9~\pm~0.3$ $\mu$m RMS over timescales of 1 second for the VertexRSI antenna, and $4~\pm~2$ $\mu$m RMS for the AEC antenna. Surface stability is affected for some parts of the scan by apex rotation feeding back to the BUS motion. The spread of 2 $\mu$m ($1~\sigma$) reflects this variable surface stability. Ignoring the parts of the scan affected by apex rotation, the numbers reduce to $3.19~\pm~0.14$ $\mu$m. When the scan rate is reduced to 0.05 deg/s, for interferometric mosaicking, BUS astigmatism averages to $0.27~\pm~0.09$ $\mu$m RMS over timescales of 1 second for the VertexRSI antenna, and $2.2~\pm~0.2$ $\mu$m RMS for the AEC antenna. BUS apex axial motion stability during the fast OTF scan, with 0.5 deg/s scan rate, averages to $1.9~\pm~0.6$ $\mu$m RMS over timescales of 1 second for the VertexRSI antenna, and 2 to 20 $\mu$m RMS for the AEC antenna. Here too, the apex rotation significantly affected the surface stability of the BUS. Ignoring the parts of the scan affected by apex rotation, the numbers reduce to $1.94~\pm~0.08$ $\mu$m.
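The removal of a low-order polynomial described above can be sketched as follows; the specific polynomial order (second order) is an assumption of this sketch, since the text only states that a low order was used.

```python
import numpy as np

def remove_low_order_polynomial(t, signal, order=2):
    """Subtract a least-squares polynomial fit from a time series.

    This suppresses the large low-frequency noise component of the
    accelerometer-derived deformation signal before a peak-to-peak
    value is quoted. The default order of 2 is an assumption.
    """
    coeffs = np.polyfit(t, signal, order)
    return signal - np.polyval(coeffs, t)

def peak_to_peak(signal):
    """Peak-to-peak amplitude of a (detrended) signal."""
    return float(np.max(signal) - np.min(signal))
```

As the text cautions, this crude noise removal makes the resulting peak-to-peak number an order-of-magnitude indication only.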
When the scan rate is reduced to 0.05 deg/s, for interferometric mosaicking, VertexRSI BUS AAM stability averages to $1.0~\pm~0.3$ $\mu$m RMS over timescales of 1 second, and AEC BUS AAM stability averages to $1.4~\pm~0.2$ $\mu$m RMS. Overall, dynamic BUS deformations due to motion of the antennas are small if the apex rotation effect can be ignored. \subsection{Path Length} \begin{table*} \centering \caption{Path Length} \begin{tabular}{|l|c|c|} \hline Source of path length variations & VertexRSI & AEC \\ \hline wind-induced, 0.001 Hz & 6~$\mu$m & 6~$\mu$m \\ sidereal tracking, 1 Hz & 2~$\mu$m & 0.5~$\mu$m \\ OTF, 0.5 deg/s, 1 Hz & $12~\pm~7$~$\mu$m & $3.3~\pm~2.9$~$\mu$m \\ OTF$^a$, 0.5 deg/s, 1 Hz& & $2.2~\pm~0.5$~$\mu$m \\ OTF, 0.05 deg/s, 1 Hz & $3.1~\pm~1.0$~$\mu$m & $0.7~\pm~0.02^b$~$\mu$m \\ \hline \multicolumn{3}{l}{$^a$~Ignoring AEC apex rotation.} \\ \multicolumn{3}{l}{$^b$~BUS only.} \\ \end{tabular} \label{tab:pathlength} \end{table*} Wind-induced path length variations for both antennas amount to 6 $\mu$m RMS over timescales of 15 minutes; see Table \ref{tab:pathlength} for an overview. For sidereal tracking, path length variations over timescales of 1 second remain below 2 $\mu$m for the VertexRSI antenna, and below 0.5 $\mu$m for the AEC antenna. Total path length stability during the fast OTF scan, with 0.5 deg/s scan rate, averages for the VertexRSI antenna to $12~\pm~7$ $\mu$m RMS over timescales of 1 second. Depending on the azimuth, path length stability may be as bad as 20 $\mu$m RMS. The AEC antenna path length jitter averages to $3.3~\pm~2.9$ $\mu$m RMS. The path length stability is affected for some parts of the scan by apex rotation feeding back to the BUS motion, as well as changing the distance between BUS and apex. The spread of 2.9 $\mu$m ($1\sigma$) reflects this variable path length stability. Ignoring the parts of the scan which are affected by apex rotation, the numbers reduce to $2.2~\pm~0.5$ $\mu$m.
When the scan rate is reduced to 0.05 deg/s, for VertexRSI antenna interferometric mosaicking, total path length stability averages to $3.1~\pm~1.0$ $\mu$m RMS over timescales of 1 second, and $0.70~\pm~0.02$ $\mu$m RMS for the AEC antenna. Note that for the AEC antenna, this number covers the BUS boresight path length stability only, since it was not possible to measure the full apex motion accurately. \subsection{Structural Flexure} \begin{table} \centering \caption{Structural Flexure} \begin{tabular}{|l|c|c|} \hline direction & VertexRSI & AEC \\ \hline cross-elevation & 2.1 arcsec/(deg/s$^2$)& 1.6 arcsec/(deg/s$^2$) \\ elevation & 2.8 arcsec/(deg/s$^2$)& 2.1 arcsec/(deg/s$^2$) \\ \hline \end{tabular} \label{tab:flexure} \end{table} The pointing difference between encoders and accelerometers scales well with the angular acceleration. The scaling factor is the antenna stiffness for this type of load, and is summarised in Table \ref{tab:flexure}. Structural flexure affects pointing accuracy during OTF scanning as a result of large angular accelerations during scan reversal, which affect pointing by 0.8 arcsec in elevation, and 0.6 arcsec in cross-elevation for the VertexRSI antenna, and 0.6 arcsec and 0.5 arcsec, respectively, for the AEC antenna. Since the driving force for the deformation is an acceleration, the numbers are also indicative of the stiffness of the structure for gravity deformation, though the exact numbers there will be different and could not be derived from the measurements performed during this investigation. \subsection{Vibration Environment} \subsubsection{Eigenfrequencies} Figure~\ref{fig:vertexeigen} illustrates some of the VertexRSI antenna locked rotor eigenfrequencies, and Figure~\ref{fig:aeceigen} some of those of the AEC antenna. The antennas were in shutdown mode at approximately 45 degrees elevation. The figures show the elevation, cross-elevation, and boresight motion of the BUS.
The lowest significant eigenfrequency is visible in elevation and boresight at 5.57 Hz (VertexRSI antenna) and 6.8 Hz (AEC antenna). There is a small peak visible at a frequency of 4.68 Hz for the VertexRSI antenna, which is the print-through of the apex structure rotation mode, along an axis through boresight. \begin{figure} \resizebox{\hsize}{!}{ \includegraphics[scale=0.4]{VertexRSI_el_xel_bs_lin.pdf}} \caption{VertexRSI elevation (red), cross-elevation (green) and boresight (blue) motion power spectral density. The antenna was in shutdown mode at 45 degrees elevation during a typical day at the ATF. The curves for the elevation and cross-elevation motion are in units rad/s$^2$/$\sqrt{\mathrm{Hz}}$. The spikes in the boresight motion curve are the result of vibrations of the receiver flange introduced by the cryogenic pump for the receivers.} \label{fig:vertexeigen} \end{figure} \begin{figure} \resizebox{\hsize}{!}{ \includegraphics[scale=0.4]{AEC_el_xel_bs_lin.pdf}} \caption{AEC elevation (red), cross-elevation (green) and boresight (blue) motion power spectral density. The antenna was in shutdown mode at 45 degrees elevation during a typical day at the ATF. The curves for the elevation and cross-elevation motion are in units rad/s$^2$/$\sqrt{\mathrm{Hz}}$.} \label{fig:aeceigen} \end{figure} Some equipment installed on submillimeter radio telescopes, in particular bolometers, may be sensitive to vibration. The accelerometers provide a very accurate measure of the vibration environment provided by the antenna. Two locations on the antenna were investigated in some detail: the receiver flange, and the apex structure. For both locations, 3-axis accelerometer measurements are available for a variety of conditions. Under windy conditions, without sidereal tracking, the RMS acceleration on the VertexRSI antenna receiver flange, combining all 3 axes, amounts to 0.48 mm/s$^2$. For the apex structure, the corresponding number is 4.4 mm/s$^2$.
For similar conditions, the AEC antenna has vibration levels of 0.40 mm/s$^2$ at the receiver flange, and 3.6 mm/s$^2$ at the apex structure. For the VertexRSI antenna, Figures~\ref{fig:vertexflange} and \ref{fig:vertexapex} show the X (red, along the elevation axis), Y (green, perpendicular to the elevation axis), and Z (blue, boresight) PSDs of the acceleration, for the receiver flange and apex, respectively. The apex structure rotates slightly around its own axis, with a frequency of approximately 4.7 Hz. The amplitude of the rotation is small, but affects the vibration environment depending on the location of the accelerometer. For this specific measurement, the accelerometers were placed near the edge of the apex cylinder, which makes the Y measurement sensitive to rotation as well as displacement. For the AEC antenna, Figures~\ref{fig:aecflange} and \ref{fig:aecapex} show the X (red), Y (green), and Z (blue) PSDs of the acceleration, for the receiver flange and apex, respectively. The AEC antenna apex structure also rotates somewhat around its own axis, with a frequency of 5.0 Hz. For this specific measurement, the accelerometers were placed on the outer part of the apex cylinder, which makes the X measurement sensitive to rotation as well as displacement. \begin{figure} \resizebox{\hsize}{!}{ \includegraphics[scale=0.4]{accel_env_rec_cabin_Vertex.pdf}} \caption{VertexRSI receiver flange acceleration PSD. Red, green and blue curves show the X, Y and Z components of the acceleration. Wind was approximately 9 m/s, and elevation was 45 degrees.} \label{fig:vertexflange} \resizebox{\hsize}{!}{ \includegraphics[scale=0.4]{accel_env_apex_Vertex.pdf}} \caption{VertexRSI apex acceleration PSD for the same conditions as those shown in Figure~\ref{fig:vertexflange}.} \label{fig:vertexapex} \end{figure} \begin{figure} \resizebox{\hsize}{!}{ \includegraphics[scale=0.4]{accel_env_rec_cabin_AEC.pdf}} \caption{AEC receiver flange acceleration PSD.
Red, green and blue curves show the X, Y and Z components of the acceleration. Wind was approximately 9 m/s, and elevation was 45 degrees.} \label{fig:aecflange} \resizebox{\hsize}{!}{ \includegraphics[scale=0.4]{accel_env_apex_AEC.pdf}} \caption{AEC apex acceleration PSD for the same conditions as those shown in Figure \ref{fig:aecflange}.} \label{fig:aecapex} \end{figure} \subsubsection{External Vibration Pick-Up} During performance testing it became clear that the antennas are sensitive to vibrations caused by motion of the other antennas, which are transmitted through the ground. In order to test this sensitivity in a more controlled way, the Mitsubishi antenna, located 35 m to the west of the VertexRSI antenna, was instructed to make a 2 degree azimuth slew, while accelerometers monitored the motion of the VertexRSI antenna. The move caused peak-to-peak pointing errors of 0.30 arcsec in elevation, 0.40 arcsec in cross-elevation, and 2.9 $\mu$m in boresight motion. The elevation for the VertexRSI antenna was 30 degrees. Vibration transfer from the VertexRSI antenna to the AEC antenna, also placed 35 m apart, was investigated by making the VertexRSI antenna perform fast switches with 1 degree offset in both azimuth and elevation, and by making it perform an interferometric OTF scan at 0.05 deg/s. The elevation of the AEC antenna was 30 and 10 degrees respectively, during the tests. The accelerometers mounted on the AEC antenna measured RMS motion during the fast switching of 0.043 arcsec in elevation, 0.012 arcsec in cross elevation, and 0.7 $\mu$m RMS in boresight motion. However, since the motion is peaked during the acceleration of the VertexRSI antenna, and undetectable a few seconds after the move, the peak-to-peak values of the pointing errors are of interest too. Peak-to-peak motion was 0.29 arcsec in elevation, 0.13 arcsec in cross-elevation, and 5.3 $\mu$m in boresight motion. 
For the interferometric OTF scan, the numbers are 0.016 arcsec RMS or 0.23 arcsec peak-to-peak in elevation, 0.011 arcsec RMS or 0.31 arcsec peak-to-peak in cross-elevation, and 0.29 $\mu$m RMS or 4.4 $\mu$m peak-to-peak boresight motion. These numbers are presented to give an impression of the impact of the motion of nearby antennas. The tests were far too limited to draw any further conclusions, which would in any case depend critically on the soil conditions at the ALMA and ATF sites. \section{Discussion} All wind-related antenna data obtained at the ATF have been analysed in such a way as to represent antenna properties, independent of the shape of the wind spectrum, or wind speed. With these antenna properties, it is possible to predict wind-driven antenna performance for any wind speed, wind spectrum, and air density, even for conditions not encountered during antenna testing. One of the expected and observed properties of the transfer functions as defined in Eqn. \ref{eq:psd} allows a simple extrapolation of antenna properties measured between 0.1 Hz and a few Hz to values below 0.1 Hz, and in principle to 0 Hz. In the frequency domain this is a simple exercise, but in the time domain it corresponds to extrapolation of antenna properties measured at timescales of seconds to timescales of tens of minutes or longer. Since the wind contains the majority of its power at low frequencies, this extrapolation is extremely useful for the prediction of the overall wind performance of the antenna. Besides extrapolation to lower frequencies, the transfer functions can be used to scale antenna performance measured under uncontrolled wind conditions to any known and well-defined wind spectrum. The antennas have been designed to meet the specifications for a given reference spectrum, which was given in the SoW.
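Schematically, the scaling described above amounts to multiplying a wind-independent antenna transfer function by a chosen wind PSD and integrating over the band of interest to obtain an RMS value. The sketch below assumes the transfer function and wind PSD are sampled on a common frequency grid; it does not reproduce the paper's exact definition of the transfer function (Eqn. eq:psd), which appears earlier in the text.

```python
import numpy as np

def band_rms(freq, transfer, wind_psd, f_lo, f_hi):
    """RMS of the predicted pointing jitter in a frequency band.

    Assumes the pointing PSD is the product of a wind-independent
    antenna transfer function and a wind PSD, sampled on a common
    frequency grid, and integrates with the trapezoidal rule. This
    is a schematic version of the scaling described in the text.
    """
    mask = (freq >= f_lo) & (freq <= f_hi)
    f = freq[mask]
    p = transfer[mask] * wind_psd[mask]
    # Trapezoidal integration of the band-limited pointing PSD.
    integral = np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(f))
    return float(np.sqrt(integral))
```

Replacing `wind_psd` by a reference spectrum (such as the SoW spectrum) is how measured performance under uncontrolled wind can be rescaled to a well-defined condition.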
In order to test compliance with the specifications, the measured antenna performance must be scaled to this SoW reference spectrum, a straightforward task, which does not leave much room for speculation but provides hard numbers instead. One must realise, however, that the numbers for antenna performance obtained using the wind transfer functions are valid only under the assumptions made here, i.e. that wind effects dominate the performance. For timescales of seconds to tens of seconds, this may very well be the case, but when extrapolated to timescales of tens of minutes to hours, other effects such as thermal effects may contribute significantly to the total antenna performance at these timescales. \subsection{Apex Rotation: Experiences with Accelerometer Placement and as Tool for Troubleshooting} The results presented in this paper are for pointing stability as derived from motion of the BUS only. The position of the subreflector determines how the primary focus image is projected at the secondary focus, and shake of the subreflector with respect to the BUS may introduce additional pointing variations. During the measurements, it became clear that the apex structures of both antennas rotate about the boresight axis. With only 3 accelerometers at the apex structure, it was no longer possible to discern between rotations and translations of the subreflector. At the prime focus the plate scale is 34 arcsec/mm, which requires stability of the apex structure to be on the order of a few tens of $\mu$m. Using reconfigured accelerometers on one of the antennas, it was possible to distinguish between rotation and translation in one dimension. This revealed that for some resonance modes, the pointing errors introduced by the BUS were compensated by those introduced due to subreflector translation. Thus, the total pointing error projected at the receiver flange may be larger or smaller than the pointing stability derived from the BUS. 
Excessive rotation of the AEC antenna apex structure highlighted both strengths and weaknesses of the accelerometer concept used in these investigations. The main weakness was the inability of the 3-accelerometer set-up on the apex structure to discern between translation and rotation of the structure. Since the apex structure rotation was not foreseen at the time the accelerometer system was designed, no provisions were made to distinguish between rotation and translation. As it turned out, the rotation was so large that careful on-axis placement of all three accelerometers would have been necessary, not a trivial task given the dimensions of the accelerometers. The strength of the accelerometer system was demonstrated when one of the accelerometers at the apex was reconfigured to allow distinction between rotation and translation (at the cost of the ability to measure one displacement dimension). With the new configuration, it could be determined that the detected accelerations were indeed caused by rotation of the apex structure, that the rotation was off-axis, and that the rotation axis shifted with elevation. This off-axis rotation of the apex structure translates into a pointing error of up to 1 arcsec peak-to-peak in cross-elevation, provided that the rotation axis is parallel to the boresight axis (which could not be confirmed). In summary, this sequence of tests illustrated the versatility of the accelerometer system as a diagnostic tool for troubleshooting. \subsection{Antenna Wake Turbulence: Spin-Off and Implications for Compact Array Configurations} The SoW has a primary operating condition for antenna wake conditions, with an average wind speed of 7 m/s instead of 9 m/s for non-wake conditions, and a variable component of 4 times the SoW reference wind spectrum. 
Analysis of wind data measured in the wake of the 3 antennas placed at the ATF, and OPT pointing stability with the AEC antenna in the wake of both the Mitsubishi and VertexRSI antennas, has revealed detailed properties of the antenna wake turbulence. \begin{figure} \resizebox{\hsize}{!}{ \includegraphics[scale=0.4]{antenna_wake_NAOJ_8192.pdf}} \caption{Antenna wake turbulence as measured with the sonic anemometer, for wind passing the Mitsubishi prototype antenna. The PSD was scaled with the wind speed, and divided by the average scaled PSD for wind coming from unobstructed directions.} \label{fig:wake_turbulence} \end{figure} \begin{figure} \resizebox{\hsize}{!}{ \includegraphics[scale=0.35, angle=90]{opt_accel_transfer_function_AEC_new}} \caption{Antenna wake turbulence and AEC antenna resonance as measured with the OPT and accelerometers, for wind passing the Mitsubishi and VertexRSI prototype antennae. The PSD for elevation pointing was scaled with the wind speed, and divided by the PSD of the wind during the measurement. The green curve is measured with the accelerometers, the red curve with the optical pointing telescope. Wind was approximately 12 m/s average.} \label{fig:OPT_wind_stiffness} \end{figure} Figure~\ref{fig:wake_turbulence} shows the effect antenna wake turbulence has on the undisturbed wind, as measured with the sonic anemometer, for wind passing the Mitsubishi prototype antenna. The measured PSD downwind of the antenna was scaled with the average wind speed, and divided by the average scaled PSD for wind coming from unobstructed directions. The curve shows how the unobstructed wind is affected by the antenna, at a distance of approximately 30 m. The scaling factor for the high frequencies is approximately flat with frequency, at a value of 2.5, but the low frequency part of the spectrum is not affected by the same amount. Turn-over occurs in the range around 0.1 Hz. 
This means that the turbulence introduced in the antenna wake has frequencies of 0.1 Hz and higher, and will at these frequencies shake any antenna placed in this wake a factor of 2.5 more than in the absence of the extra turbulence. Figure~\ref{fig:OPT_wind_stiffness} shows the antenna wake turbulence as measured on the AEC antenna with the accelerometers and the OPT simultaneously. The wind was predominantly from the west, passing over the Mitsubishi and VertexRSI antennas. The high frequency part of the plot is dominated by antenna resonances and vortex shedding off the other antennas, measured with the accelerometers. Below the lowest eigenfrequency around 7 Hz, the curve is expected to flatten, which is also seen down to frequencies of about 0.2 Hz. Below this frequency, the accelerometers are affected by noise. The simultaneous measurements with the OPT, tracking on Polaris, show an overlap up to 0.7 Hz, and are not affected by low frequency noise, and are thus valid down to the lowest frequencies. Note the good overlap of the curves in the frequency range between 0.2 and 0.7 Hz, as expected under the assumption that both the OPT and accelerometers see the same BUS motion. The effects of combined Mitsubishi and VertexRSI turbulence are also clearly visible here, with a turn-over frequency around 0.1 Hz. The magnitude of the high frequency turbulence gain is approximately a factor 4 to 5. Thus the SoW requirement for 4 times the wind spectrum (which translates to 2 times the wind spectrum in the units used in the figures here, m/s/$\sqrt{Hz}$) in the wake of an antenna is valid, but only for frequencies above 0.1 Hz. It is also clear from the ATF antenna wake turbulence, that the level of the turbulence depends on the distance to the antenna, and that the effects of multiple antennae appear to be multiplicative.
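The wake-gain curves discussed above can be approximated from wind-speed time series as the ratio of wind-speed-scaled PSDs. The exact normalisation ("scaled with the wind speed") and the simple periodogram estimator used below are assumptions of this sketch, not the paper's exact procedure.

```python
import numpy as np

def scaled_psd(wind, fs):
    """One-sided periodogram of a wind time series, normalised by
    the squared mean wind speed. This normalisation is an assumed
    reading of 'scaled with the wind speed' in the text."""
    x = wind - np.mean(wind)
    n = len(x)
    spec = np.abs(np.fft.rfft(x)) ** 2 / (fs * n)
    freq = np.fft.rfftfreq(n, d=1.0 / fs)
    return freq, spec / np.mean(wind) ** 2

def wake_gain(wind_wake, wind_free, fs):
    """Frequency-resolved wake turbulence gain: scaled wake PSD
    divided by scaled unobstructed PSD (DC bin dropped)."""
    f, p_wake = scaled_psd(wind_wake, fs)
    _, p_free = scaled_psd(wind_free, fs)
    return f[1:], p_wake[1:] / p_free[1:]
```

In practice the PSDs would be averaged over many measurement intervals (as in the figures) before taking the ratio, to suppress estimator noise.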
Since the low frequency (0.001 Hz) antenna wind performance is dominated by the low frequency stiffness of the antenna for wind excitation, the resonances, which are all well above 1 Hz, and the wake turbulence generated by nearby antennas, at frequencies above approximately 0.1 Hz, do not significantly affect wind-related antenna performance for timescales of 15 minutes (0.001 Hz). The cumulative effect of several antennas upwind may, however, not be negligible as seen with the combined Mitsubishi and VertexRSI wake turbulence. Nor will the wake turbulence be negligible any longer when a properly functioning metrology system suppresses the low frequency (below 0.1 Hz) wind buffeting of the antennas, or when antenna performance (beyond the requirements) at frequencies above 0.1 Hz is an issue. \section{Conclusions} \subsection{Accelerometers} Seismic accelerometers mounted on the back-up structures of large reflector antennas are capable of characterising all relevant BUS rigid body motions, and a few of its low-order deformation modes. Independently of any external sources to the antenna, and even for antennas without receivers, it is possible to derive the dynamical behaviour of the performance parameters of an antenna, such as pointing accuracy, primary reflector surface stability, and path length stability. The accuracy at which performance parameters can be measured is a function of frequency, and typically at the sub-$\mu$m and sub-arcsecond level for frequencies above 0.1 Hz. In combination with wind measurements, antenna performance for windy conditions can safely be extrapolated to frequencies well below the limit of 0.1 Hz imposed by noise in the accelerometer data. The validity of this extrapolation has been demonstrated with the use of an optical pointing telescope, not limited by the low-frequency noise. 
In addition to performance testing, the robust accelerometer system has proven to be well suited for troubleshooting of unexpected antenna behaviour, such as large and off-axis apex structure rotation for the AEC antenna, and servo-tuning issues for the VertexRSI antenna. \subsection{Wind-Driven Performance} Performance of the antennas for windy conditions was a major design driver. In spite of the very different wind conditions at the antenna test site and the ALMA site, it was possible to accurately predict performance for the ALMA site based on measurements performed at the ATF. The key to this extrapolation is careful characterisation of the wind at the sites, in particular at the ATF site. Wind-driven performance as measured on the antennas was combined with the wind characteristics during the time of measurement, which allowed calculation of wind-independent antenna properties and, from these, of antenna performance for any wind condition. As a spin-off of the investigations, antenna wake turbulence at a typical distance of 30 - 50 m was determined, which is useful information for the calculation of antenna performance in the compact configuration. \subsection{Antenna Performance} Antenna performance as measured with the accelerometers complements other performance measurements, such as optical and radio pointing, and holography. The full performance of the antennas could not be determined with any of the individual methods, but a combination of them gives confidence in the completeness of the test results. During design, antenna performance was calculated from the sum of individual contributions to the total error budget. The performance numbers presented in this paper are best interpreted in the context of the corresponding error budget contributions used for antenna design.
Table~\ref{tab:pointing} gives the measured performance for each antenna, and the calculated error budget entry as taken from the design documentation where available \cite{Mangum2006}. \begin{table*} \centering \caption{Wind Pointing} \begin{tabular}{lll} & VertexRSI & AEC \\ \\ Pointing accuracy (wind only) & 0.81 arcsec& 0.45 arcsec\\ Pointing accuracy (wind only) error budget & 0.035 arcsec& 0.35 arcsec\\ Pointing accuracy (wind + tracking) & 0.94 arcsec& 0.50 arcsec\\ Pointing accuracy offset pointing requirement& 0.6 arcsec& 0.6 arcsec\\ \\ Primary reflector surface stability, wind effects (astigmatism + AAM, at edge of BUS) & 5.3 + 2.2 $\mu$m& 6 + 5 $\mu$m\\ Primary reflector surface stability, wind effects error budget& 8.4 $\mu$m& 2.1 $\mu$m\\ Primary reflector surface stability, overall requirement& 25 $\mu$m& 25 $\mu$m\\ \\ Path length stability, wind effects & 6 $\mu$m& 6 $\mu$m\\ Path length stability, wind effects error budget& 7.6 $\mu$m& 3.5 $\mu$m\\ Path length stability, requirement total non-repeatable residual delay & 15 $\mu$m& 15 $\mu$m\\ \\ Structure flexure cross-elevation & 2.1 arcsec/(deg/s$^2$)& 1.6 arcsec/(deg/s$^2$)\\ Structure flexure elevation & 2.8 arcsec/(deg/s$^2$)& 2.1 arcsec/(deg/s$^2$)\\ \\ Lowest eigenfrequencies & 5.57 Hz & 6.8 Hz \\ \end{tabular} \label{tab:pointing} \end{table*} \section*{Acknowledgment} The authors would like to thank Nobuharu Ukita (National Astronomical Observatory of Japan) and David R. Smith (Merlab) for valuable discussions on the design of the accelerometer system and analysis of the measurements, Angel Otarola and Juan Pablo Perez Beaupuits (ESO) for providing wind data for the ALMA site, and Fritz Stauffer and Nicholas Emerson (NRAO) for valuable support at the ATF. \bibliographystyle{IEEEtran}
\section{Introduction} Let $\P^n$ be the $n$-dimensional projective space over the field of complex numbers $\C$. A famous theorem by Horrocks states that a vector bundle $\mathcal E$ on $\P^n$ splits as the direct sum of line bundles if and only if $\mathcal E$ has no intermediate cohomology, \emph{i.e.}, $h^i(\mathcal E(j))= 0$ for all $0<i<n$ and $j \in \Z$. It is natural to ask for the algebro-geometric meaning of these vanishing conditions on other varieties. Let $X \subset \P^N$ be an $n$-dimensional smooth projective variety with a fixed polarization $\mathcal O_X(1) = \mathcal O_{\P^N}(1)|_{X}$. We say that a vector bundle $\mathcal E$ on $X$ is \emph{ACM (arithmetically Cohen-Macaulay)} if $\mathcal E$ has no intermediate cohomology with respect to the given polarization $\mathcal O_X (1)$. Roughly speaking, the presence of nontrivial ACM bundles measures how far $X$ is from the projective space $\P^n$. Due to their interesting properties, ACM bundles have played a significant role in the study of vector bundles. In commutative algebra, ACM bundles correspond to \emph{MCM (maximal Cohen-Macaulay)} modules, which are Cohen-Macaulay modules achieving the maximal dimension. A particularly interesting case occurs when the minimal free resolution of an MCM module becomes completely linear. Such an MCM module has the maximal possible number of minimal generators, which are concentrated in a single degree \cite{Ulrich1984}. Eisenbud and Schreyer made a comprehensive study of the geometric analogue of these linear MCM modules, and named them Ulrich sheaves \cite{EisenbudSchreyer2003}. Thanks to foundational works by Beauville \cite{Beauville2000} and Eisenbud-Schreyer \cite{EisenbudSchreyer2003}, Ulrich sheaves provide a number of fruitful applications; for example, linear determinantal representations of hypersurfaces, matrix factorizations by linear matrices, the cone of cohomology tables, and Cayley-Chow forms.
Eisenbud and Schreyer conjectured that every projective variety carries an Ulrich sheaf \cite{EisenbudSchreyer2003}, and verified it for a few simple cases. The conjecture is still wide open even for smooth surfaces. In recent years, there has been progress on the conjecture for surfaces; for instance, general K3 surfaces \cite{AproduFarkasOrtega}, abelian surfaces \cite{Beauville2015:Abelian}, and nonspecial surfaces of $p_g = q = 0$ \cite{Casnati}. Much less is known for ACM and Ulrich bundles on threefolds. On a smooth quadric $Q^3 \subset \P^4$, there is only one nontrivial indecomposable ACM bundle, namely, the spinor bundle \cite{BuchweitzEisenbudHerzog}. Arrondo and Costa studied ACM bundles on Fano 3-folds of index 2 of degree $d=3, 4, 5$ \cite{ArrondoCosta2000}. Madonna studied splitting criteria for rank 2 vector bundles on hypersurfaces in $\P^4$ \cite{Madonna1998:Splitting}. He also classified all the possible Chern classes of rank 2 ACM bundles on prime Fano 3-folds and complete intersection Calabi-Yau 3-folds \cite{Madonna2002:ACMprimeFano}. Their results suggested the existence of Ulrich bundles on 3-folds of small degree; however, constructions were completed only in a very few cases. On the other hand, Beauville showed that a general hypersurface of degree $\le 5$ in $\P^4$ is linearly Pfaffian. In other words, such a hypersurface carries a rank 2 Ulrich bundle \cite{Beauville2000}. He also checked that every Fano 3-fold of index 2 carries a rank 2 Ulrich bundle \cite{Beauville2016:IntroductionUlrich}. In particular, a general smooth cubic 3-fold carries Ulrich bundles of rank $r$ for every $r \ge 2$, proved first by Casanellas, Hartshorne, Geiss, and Schreyer \cite{CasanellasHartshorne:Ulrich}. Recently, Lahoz, Macr{\`i}, and Stellari extended this result to every smooth cubic 3-fold using the derived category of coherent sheaves and also described the moduli space of stable Ulrich bundles \cite{LahozMacriStellari}.
It is quite natural to ask for the next case, a del Pezzo threefold $X=Q_0^4 \cap Q_{\infty}^4$ of degree four which is the complete intersection of two quadric 4-folds. Indeed, $X$ is very attractive since there are several ways to understand vector bundles on $X$. Since $X$ is a 3-fold, we may construct vector bundles on $X$ by observing curves lying on $X$ via Serre correspondence. On the other hand, it is also well-known that the geometry of the intersection of $2$ even-dimensional quadrics is closely related to a hyperelliptic curve. Bondal and Orlov showed that the derived category of coherent sheaves on the intersection of $2$ even-dimensional quadrics has a semiorthogonal decomposition whose components are the derived category of the hyperelliptic curve associated to the $2$ given quadrics and an exceptional collection \cite{BondalOrlov:SODforAlgVar}. Recently, there were several attempts to understand vector bundles on a variety using the semiorthogonal decomposition of its derived category. For instance, Kuznetsov studied instanton bundles on some index 2 Fano 3-folds via semiorthogonal decompositions \cite{Kuznetsov:Instanton}. Lahoz, Macr{\`i}, and Stellari studied ACM bundles on cubic 3-folds and 4-folds via semiorthogonal decomposition \cite{LahozMacriStellari, LahozMacristellari:4folds}. Therefore, it is reasonable to apply the semiorthogonal decomposition to understand vector bundles on the intersection of two even-dimensional quadrics. Motivated by the earlier works mentioned above, we investigate the existence and the moduli space of Ulrich bundles on the intersection of two 4-dimensional quadrics by two completely different methods: classical Serre correspondence and the Bondal-Orlov theorem.
The main result is the following theorem: \begin{thm}[see Theorem~\ref{thm:StableUlrichBundlesClassical} and \ref{thm: Main thm}] The moduli space of stable Ulrich bundles of rank $r\geq 2$ on $X = Q_0^4 \cap Q_\infty^4$ is isomorphic to a nonempty open subscheme of $\mathcal U_C^{\sf s}(r,2r)$, where $\mathcal U_C^{\sf s}(r,2r)$ is the moduli space of stable vector bundles of rank $r$ and degree $2r$ on a curve $C$ of genus $2$. \end{thm} Our approach using Serre correspondence closely follows the works of Arrondo and Costa \cite{ArrondoCosta2000} and of Casanellas, Hartshorne, Geiss, and Schreyer \cite{CasanellasHartshorne:Ulrich}, and our approach using derived categories is strongly influenced by the works of Kuznetsov \cite{Kuznetsov:Instanton} and of Lahoz, Macr{\`i}, and Stellari \cite{LahozMacriStellari, LahozMacristellari:4folds}. The structure of this paper is as follows. In Section \ref{section:Preliminary}, we recall a few useful facts related to ACM and Ulrich bundles. In Section \ref{section:Serre}, we construct Ulrich bundles of any rank $r \ge 2$ on a general intersection of two quadric 4-folds $X=Q_0^4 \cap Q_{\infty}^4$ using Serre correspondence and \emph{Macaulay2}. In Section \ref{section:derived}, we prove the existence of Ulrich bundles of any rank $r \ge 2$ on a smooth complete intersection of two quadric 4-folds $X=Q_0^4 \cap Q_{\infty}^4$ using the Bondal-Orlov theorem. We also analyze the moduli of stable Ulrich bundles of rank $r$ on $X$ and provide a description in terms of vector bundles on the associated curve $C$. \section{Preliminaries on ACM and Ulrich bundles}\label{section:Preliminary}% In this section, we briefly review the definition of ACM and Ulrich bundles and their basic properties. \begin{defi}\label{defi:ACMUlrich} Let $X \subset \P^N$ be an $n$-dimensional smooth projective variety embedded by a very ample line bundle $\mathcal O_X(1)$.
\begin{enumerate} \item A coherent sheaf $\mathcal E$ on $X$ is \emph{ACM} if $H^i (\mathcal E(j))=0$ for all $0<i<n$ and $j \in \Z$. \item An ACM sheaf $\mathcal E$ on $X$ is \emph{Ulrich} if $H^0(\mathcal E(-1))=0$ and $h^0(\mathcal E)=\deg (X) \rank(\mathcal E)$. \end{enumerate} \end{defi} \begin{rmk} Since the underlying space $X$ is smooth, an ACM sheaf $\mathcal E$ is automatically locally free. Hence it is natural to refer to the objects in the above definition as ACM (Ulrich) bundles. \end{rmk} We recall the following proposition by Eisenbud and Schreyer. We refer to \cite{Beauville2016:IntroductionUlrich, EisenbudSchreyer2003} for more details. \begin{prop}[{\cite[Theorem~1]{Beauville2016:IntroductionUlrich},\ \cite[Proposition 2.1]{EisenbudSchreyer2003}}]\label{prop: Ulrich Equiv conditions} Let $X \subset \P^N$ and $\mathcal E$ be as above. The following are equivalent: \begin{enumerate} \item $\mathcal E$ is Ulrich; \item $H^i (\mathcal E(-i))=0$ for all $i>0$ and $H^j (\mathcal E(-j-1))=0$ for all $j<n$; \item $H^i (\mathcal E(-j))=0$ for all $i$ and $1 \le j \le n$; \label{item: Ulrich as Acyclic condition} \item For some (equivalently, all) finite linear projections $\pi : X \to \P^n$, the sheaf $\pi_{*} \mathcal E$ is isomorphic to the trivial sheaf $\mathcal O_{\P^n}^{\oplus t}$ for some $t$; \item The section module $M:=\oplus_j H^0 (\mathcal E(j))$ is a linear MCM module, that is, the minimal $S=\C[x_0, \ldots, x_N]$-free resolution of $M$ \[ \mathbf{F} : 0 \to F_{N-n} \to \cdots \to F_1 \to F_0 \to M \to 0 \] is linear in the sense that $F_i$ is generated in degree $i$ for every $i$. \end{enumerate} \end{prop} In particular, by Serre duality, we immediately have the following proposition as a consequence: \begin{prop}\label{prop:DualOfACMUlrich} Let $X^n \subset \P^N$ be as above, and let $H:= \mathcal O_X(1)$ be the very ample line bundle.
\begin{enumerate} \item If $\mathcal E$ is an ACM bundle on $X$, then $\mathcal E^* (K_X)$ is also an ACM bundle. \item When $X$ is subcanonical, that is, $K_X = \mathcal O_X (k)$ for some $k \in \Z$, $\mathcal E$ is ACM if and only if $\mathcal E^*$ is ACM. \item If $\mathcal E$ is an Ulrich bundle on $X$, then $\mathcal E^* (K_X + (n+1)H)$ is an Ulrich bundle. \end{enumerate} \end{prop} The following proposition about stability is very useful in later sections. \begin{prop}[{\cite[Theorem 2.9]{CasanellasHartshorne:Ulrich}}]\label{prop: semistability of Ulrich} Let $X$ be a smooth projective variety, and let $\mathcal E$ be an Ulrich bundle on $X$. Then \begin{enumerate} \item $\mathcal E$ is semistable and $\mu$-semistable. \item If $0 \to \mathcal E' \to \mathcal E \to \mathcal E'' \to 0$ is an exact sequence of coherent sheaves with $\mathcal E''$ torsion-free, and $\mu(\mathcal E') = \mu(\mathcal E)$, then both $\mathcal E'$ and $\mathcal E''$ are Ulrich. \item If $\mathcal E$ is stable, then it is also $\mu$-stable. \end{enumerate} \end{prop} \section{Geometric approach via Serre correspondence}\label{section:Serre} In this section, we show the existence of Ulrich bundles using Serre correspondence. \subsection{Serre correspondence} We briefly recall Serre correspondence, which enables us to construct a vector bundle as an extension from a codimension 2 subscheme. To obtain a vector bundle, such a subscheme has to satisfy certain generating conditions. For instance, it is well-known that a 0-dimensional subscheme on a smooth surface must satisfy the Cayley-Bacharach condition to provide a locally free extension. For higher dimensional cases, the situation gets much more complicated. For example, a curve in $\P^3$ occurs as the zero locus of a section of a rank 2 vector bundle on $\P^3$ if and only if it is a local complete intersection and subcanonical \cite{Hartshorne1978}. It is clear that not all curves come from vector bundles.
For such curves, we cannot construct a vector bundle as an extension. Nevertheless, in many cases Serre correspondence remains a powerful tool for constructing vector bundles. We refer to \cite{Arrondo2007:SerreCorrespondence} for the proof and more details. \begin{thm}[Serre correspondence] \label{thm:SerreCorrespondence} Let $X$ be a smooth variety and let $Y \subset X$ be a local complete intersection subscheme of codimension $2$ in $X$. Let $\mathcal N$ be the normal bundle of $Y$ in $X$ and let $\mathcal L$ be a line bundle on $X$ such that $H^2(\mathcal L^*) = 0$. Assume that $(\wedge^2 \mathcal N \otimes \mathcal L^* )|_Y$ is generated by $(r-1)$ global sections $s_1, \ldots, s_{r-1}$. Then there is a rank $r$ vector bundle $\mathcal E$ fitting in an extension \[ 0 \to \mathcal O_X^{r-1} \xrightarrow{(\alpha_1, \ldots, \alpha_{r-1})} \mathcal E \longrightarrow \mathcal I_{Y/X} (\mathcal L) \to 0 \] such that the dependency locus of the $(r-1)$ global sections $\alpha_1, \ldots, \alpha_{r-1}$ of $\mathcal E$ is $Y$ with $\sum_{i=1}^{r-1} s_i \alpha_{i}|_Y = 0$. Moreover, if $H^1(\mathcal L^*) = 0$, such an $\mathcal E$ is unique up to isomorphism. \end{thm} \subsection{ACM bundles of rank 2 via Serre correspondence}\label{sec: Serre construction} From now on, let $Q_0, Q_\infty$ be two smooth quadric hypersurfaces in $\P^5$ meeting transversally, and let $X = Q_0 \cap Q_\infty \subset \P^5$ be a smooth Fano 3-fold of degree 4 and index 2, \emph{i.e.}, $\omega_X = \mathcal O_X(-2)$. Let $[H_X]$, $[L_X]$, $[P_X]$ be the class of a hyperplane section, a line, and a point in $X$, respectively. Then, \begin{equation}\label{eq: Cohomologies of X} H^2(X,\Z) \simeq \Z\cdot [H_X],\quad H^4(X,\Z) \simeq \Z \cdot [L_X],\quad \text{and}\quad H^6(X,\Z) \simeq \Z \cdot [P_X]. \end{equation} The ring structure is given as follows: $H_X^2 = 4L_X$, $H_X \cdot L_X = P_X$.
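The following standard computation will be used repeatedly in this section. Since $X \subset \P^5$ is a complete intersection of type $(2,2)$, the Koszul resolution of $\mathcal I_{X/\P^5}$ gives
\[
\chi(\mathcal O_X(t)) = \binom{t+5}{5} - 2\binom{t+3}{5} + \binom{t+1}{5},
\]
so that $h^0(\mathcal O_X(1)) = 6$, $h^0(\mathcal O_X(2)) = 19$, and $h^0(\mathcal O_X(3)) = 44$; here $\chi = h^0$ since $X \subset \P^5$ is ACM.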
For a vector bundle $\mathcal F$ on $X$, we define its slope $\mu$ with respect to $H = H_X$ by \[ \mu_H(\mathcal F) := \frac{\deg_H \mathcal F}{\op{rank} \mathcal F}. \] By virtue of (\ref{eq: Cohomologies of X}), we fix our convention as follows. \begin{nota} Via the isomorphisms $\Z \cdot [H_X] \simeq \Z$, $\Z \cdot [L_X] \simeq \Z$, and $\Z \cdot [P_X] \simeq \Z$, we may regard $c_{i}(\mathcal F)$ as an integer, by omitting the cyclic generators of $H^{2i}(X,\Z)$. Under this convention, one can easily see that \[ \mu_H(\mathcal F) = \frac{c_1(\mathcal F) \deg X}{\op{rank} \mathcal F} = 4 \cdot \frac{c_1(\mathcal F)}{\op{rank} \mathcal F}. \] We also omit the redundant coefficient $4$ in the formula and redefine the slope of $\mathcal F$ as follows: \[ \mu(\mathcal F):= \frac{c_1(\mathcal F)}{\op{rank}\mathcal F}. \] \end{nota} The following proposition is useful in later sections. \begin{prop}[{\cite[Proposition~1.2.7]{HuybrechtsLehn:ModuliofSheaves}}]\label{prop: Stability vanishing} Let $\mathcal E$ and $\mathcal E'$ be $\mu$-stable bundles with $\mu(\mathcal E) > \mu(\mathcal E')$. Then $\Hom(\mathcal E,\mathcal E') = 0$. \end{prop} Applying Proposition~\ref{prop:DualOfACMUlrich} to $X = Q_0 \cap Q_\infty$, we get the following: \begin{prop} Let $\mathcal E$ be an Ulrich bundle of rank $r$ on $X = Q_0 \cap Q_\infty$. Then, \begin{enumerate} \item $\mu(\mathcal E)=1$, and \item $\mathcal E^*(2)$ is an Ulrich bundle. \end{enumerate} \end{prop} Indeed, by Proposition \ref{prop: Ulrich Equiv conditions}, the Hilbert polynomial of an Ulrich bundle $\mathcal E$ of rank $r$ is $\chi(\mathcal E(t)) = 4r\binom{t+3}{3}$; comparing the coefficients of $t^2$ with Riemann-Roch on $X$ (using $-K_X = 2H$) yields $4r = 2c_1(\mathcal E) + 2r$, so $c_1(\mathcal E) = r$ and $\mu(\mathcal E) = 1$. Then (2) follows since $\mathcal E^*(2) = \mathcal E^*(K_X + 4H)$. In \cite{ArrondoCosta2000}, Arrondo and Costa made a comprehensive study of ACM bundles on $X$, extending \cite{SzurekWisniewski1993}. They classified the possible Chern classes for ACM bundles under a mild assumption. In particular, they classified all the rank 2 ACM bundles on $X$.
\begin{thm}[{\cite[Theorem 3.4]{ArrondoCosta2000}}]\label{thm: Arrondo-Costa ACM rk 2} An indecomposable rank $2$ ACM vector bundle on $X$ is a twist of one of the following: \begin{enumerate} \item A line type: a semistable vector bundle $\mathcal E_{l}$ fitting in an exact sequence \[ 0 \to \mathcal O_X \to \mathcal E_{l} \to \mathcal I_{l} \to 0 \] where ${l} \subset X$ is a line; \item A conic type: a stable vector bundle $\mathcal E_\lambda$ fitting in an exact sequence \[ 0 \to \mathcal O_X \to \mathcal E_\lambda \to \mathcal I_\lambda (1) \to 0 \] where $\lambda \subset X$ is a conic; \item An elliptic curve type: a stable vector bundle $\mathcal E_e$ fitting in an exact sequence \[ 0 \to \mathcal O_X \to \mathcal E_e \to \mathcal I_e (2) \to 0 \] where $e \subset X$ is an elliptic curve of degree $6$. \end{enumerate} \end{thm} It is classically well-known that the Fano scheme $F(X)$ of lines $l \subset X$ is isomorphic to the Jacobian $J(C)$ of the hyperelliptic curve $C$ of genus 2 associated to $X$\,(see \cite[Theorem~5]{NarasimhanRamanan:ModuliofVectBdl}, \cite[Theorem~2]{Newstead:StableBundlesofRank2OddDeg} or \cite{Reid:PhD}). Since $\mathcal E_l$ has a unique global section up to constants, this space also coincides with the space of line type ACM bundles. Conic type ACM bundles are also well understood, as follows. Given a conic $\lambda \subset X$, note that there is exactly one quadric $Q \in \mathfrak d := \lvert Q_0 + t Q_\infty \rvert_{t \in \P^1}$ in the pencil containing the plane $\Lambda = \left< \lambda \right>$ spanned by $\lambda$. It is clear that $\Lambda \cap X = \lambda$. Since $Q$ is a 4-dimensional quadric, there is a spinor bundle on $Q$ whose global sections sweep out a family of planes in $Q$ containing $\Lambda$. The bundle $\mathcal E_\lambda$ is the restriction of this spinor bundle.
Hence, the moduli of conic type ACM bundles can be naturally identified with the space of spinor bundles associated to the pencil $\mathfrak d$. The last case is particularly interesting. When $e \subset X \subset \P^5$ is an elliptic normal curve of degree 6, we have $h^0(\mathcal I_e(1))=0$ and $h^0(\mathcal I_e(2)) = h^0(\mathcal I_{e/\P^5}(2)) - h^0(\mathcal I_{X/\P^5}(2)) = 9 - 2 = 7$. Hence $\mathcal E_e$ is an initialized ACM bundle with $h^0 (\mathcal E_e ) = 8 = (\deg X) \cdot (\rank \mathcal E_e)$; in other words, it is an Ulrich bundle of rank 2. We refer to \cite{Beauville2016:IntroductionUlrich} for an explicit construction of such curves. \begin{prop}[{\cite[Proposition 8]{Beauville2016:IntroductionUlrich}}]\label{Prop:UlrichRank2} There exists an Ulrich bundle of rank $2$ on $X$. \end{prop} \subsection{Ulrich bundles of higher ranks via Serre correspondence and \emph{Macaulay2}} As in the case of cubic 3-folds in $\P^4$, the existence of rank 3 Ulrich bundles on $X$ was expected earlier in \cite[Example 4.4]{ArrondoCosta2000}. However, as Casanellas and Hartshorne pointed out \cite[Remark 5.5]{CasanellasHartshorne:Ulrich}, the construction was incorrect not only for cubic 3-folds but also for our $X$. Arrondo and Costa constructed an arithmetically Cohen-Macaulay curve $D$ of degree 15 and genus 12 using a Gorenstein liaison; however, two sections in $H^0 (\omega_D(-1))$ do not generate the graded module $H_{*}^0 (\omega_D)$. Indeed, in loc. cit., the authors started with a twisted cubic curve $D'$, and then found an arithmetically Gorenstein curve $B'$ of degree 18 containing $D'$ whose residual curve is $D$. Hence we have a short exact sequence \[ 0 \to \mathcal I_{B'} \to \mathcal I_{D'} \to \omega_D (-2) \to 0.
\] Since $B'$ is arithmetically Gorenstein, we have a short exact sequence of graded $S=H_{*}^0 (\mathcal O_{\P^5})$-modules \begin{equation}\label{Seq:aGshortexact} 0 \to H^0_* (\mathcal I_{B'}) \to H^0_* (\mathcal I_{D'}) \to H^0_* (\omega_D(-2)) \to 0. \end{equation} Note that $B'$ is the zero locus of a section of $\mathcal E_e(1)$, so $\mathcal I_{B'}$ fits into the short exact sequence $ 0 \to \mathcal O_X \to \mathcal E_e (1) \to \mathcal I_{B'} (4) \to 0 $. Hence, the first 2 nonzero terms in the sequence (\ref{Seq:aGshortexact}) are \[ H^0 (\mathcal I_{D'}(1)) \simeq H^0 (\omega_D(-1)) \] and \[ H^0 (\mathcal I_{D'}(2)) \simeq H^0(\omega_D). \] Via the exact sequence $0 \to \mathcal I_{X/\P^5} \to \mathcal I_{D' / \P^5} \to \mathcal I_{D'} \to 0$, we may lift the sections in $H^0(\mathcal I_{D'}(j))$ to homogeneous forms of degree $j$ in $S$. It is clear that the ideal of a twisted cubic curve $D' \subset \mathbb P^5$ is generated by 2 linear forms and 3 quadratic forms in $S$, and hence $H^0 (\omega_D(-1))$ is spanned by these 2 linear forms, say $l_1$ and $l_2$. However, sections in the image of $H^0(\omega_D(-1)) \otimes H^0 (\mathcal O_{\P^5} (1)) \to H^0 (\omega_D)$ can span at most an $11$-dimensional subspace of the $12$-dimensional space $H^0(\omega_D)$, since the two sections of $\omega_D(-1)$ admit the linear Koszul relation $l_1 \cdot l_2 - l_2 \cdot l_1 = 0$ in $S$. Hence we conclude that $H^0 (\omega_D(-1)) \otimes H^0 (\mathcal O_{\P^5}(1)) \to H^0 (\omega_D)$ cannot be surjective. To construct a rank 3 Ulrich bundle $\mathcal E$ on $X$, we therefore need to construct a curve satisfying the generating condition. If such a bundle exists, then two general global sections of $\mathcal E$ degenerate along a curve $D$ of degree $15$, since $\mathcal E$ is globally generated. It is easy to see that the numerical conditions suggested in \cite[Example 4.4]{ArrondoCosta2000} are valid.
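For the reader's convenience, we record the bookkeeping behind these numerical conditions, assuming such a bundle exists. Since $c_1(\mathcal E) = 3$ for a rank $3$ Ulrich bundle, the desired extension has the form $0 \to \mathcal O_X^2 \to \mathcal E \to \mathcal I_D(3) \to 0$, and in the notation of Theorem \ref{thm:SerreCorrespondence} with $\mathcal L = \mathcal O_X(3)$, adjunction gives
\[
\bigl(\wedge^2 \mathcal N_{D/X} \otimes \mathcal L^*\bigr)\big|_D \;\simeq\; \omega_D \otimes \omega_X^{-1} \otimes \mathcal O_X(-3)\big|_D \;\simeq\; \omega_D(-1).
\]
Hence the generating hypothesis of Theorem \ref{thm:SerreCorrespondence} is precisely that $\omega_D(-1)$ admits two sections generating the graded module $H^0_*(\omega_D)$.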
Hence, we need to construct an ACM curve $D \subset X$ with the given invariants such that $H^0(\omega_D(-1))$ has two generating sections, that is, the multiplication map \[ H^0 (\omega_D(-1)) \otimes H^0 (\mathcal O_{\P^5}(j)) \to H^0 (\omega_D(j-1)) \] is surjective for each $j \ge 1$. Since $\omega_D^* (1+j)$ is nonspecial for $j \ge 2$ (indeed, its degree is $15(j+1) - 22 = 15j - 7 > 2g(D) - 2$), the Castelnuovo base-point-free pencil trick implies that the map is automatically surjective for $j \ge 2$. Hence it is sufficient to check only the case $j=1$. The construction follows from \emph{Macaulay2} \cite{Macaulay2} computations, analogous to \cite[Appendix]{CasanellasHartshorne:Ulrich} or \cite{Geiss:PhD}. Although the proof follows the same strategy, and in particular the \emph{Macaulay2} scripts are almost the same, it is worthwhile to write them down, since the differences from the cubic 3-fold case are not entirely straightforward. \begin{prop}[{See also \cite[Theorem A.3]{CasanellasHartshorne:Ulrich}}]\label{Thm:ExistenceOfACMg12d15} The space of pairs $D \subset X \subset \P^5$ of smooth ACM curves of degree $15$ and genus $12$ on a complete intersection of $2$ quadrics $X$ has a component which dominates the Hilbert scheme of intersections of $2$ quadrics in $\P^5$. Moreover, the module $H^0_{*} (\omega_D)$ is generated by its $2$ sections in degree $-1$ as an $S_{\P^5} = H^0_{*} (\mathcal O_{\P^5})$-module for a general pair $D \subset X$. In particular, a general intersection of two quadrics in $\P^5$ carries the desired curve discussed above. \end{prop} \begin{proof} We argue by constructing a family of such curves according to the following strategy. First, we take a family of smooth curves of genus $12$ in $\P^1 \times \P^2$. Next, we observe that a general (precisely, a randomly chosen) curve in this family admits an embedding into $\P^5$ in a natural way. Finally, we check that such a curve in $\P^5$ satisfies the desired properties. Then the whole statement follows by deformation theory and semicontinuity.
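The numerical choices in the construction below can be motivated by the following bookkeeping. For the desired curve $D$ we have
\[
\deg \omega_D(-1) = (2g(D) - 2) - \deg D = 22 - 15 = 7,
\]
so the two sections of $\omega_D(-1)$ define a $\mathfrak g^1_7$ on $D$; combining this pencil with a plane model given by a $\mathfrak g^2_{10}$ naturally leads to curves of bidegree $(7,10)$ in $\P^1 \times \P^2$.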
Let $D$ be a smooth projective curve of genus $12$ together with line bundles $L_1$ and $L_2$ with $|L_1|$ a $\mathfrak g^1_7$ and $|L_2|$ a $\mathfrak g^2_{10}$. Let $D'$ be the image of the map \[ D \xrightarrow{|L_1|, |L_2|} \P^1 \times \P^2. \] Suppose that the maps $H^0(\P^1 \times \P^2 , \mathcal O (n, m)) \to H^0 (D, L_1 ^n \otimes L_2^ m)$ are of maximal rank for all $n, m \ge 1$. Under this assumption, $D$ is isomorphic to its image $D'$ and we may compute the Hilbert series of the truncated ideal \[ I_{trunc} = \bigoplus_{n, m \ge 3} H^0 (\mathcal I_{D'} (n,m)) \] in the Cox ring $S_{\P^1 \times \P^2} = k[x_0, x_1; y_0, y_1, y_2]$ of $\P^1 \times \P^2$, namely, \[ H_{I_{trunc}} (s, t) = \frac{5s^4 t^5 - 11s^4 t^4 - 6s^3 t^5 + 3s^4 t^3 + 10s^3 t^4}{(1-s)^2 (1-t)^3}. \] Hence, by reading off the Hilbert series, we may expect that $I_{trunc}$ admits a bigraded free resolution of type \[ 0 \to F_2 \to F_1 \to F_0 \to I_{trunc} \to 0 \] with modules $F_0 = S_{\P^1 \times \P^2} (-3, -4)^{10} \oplus S_{\P^1 \times \P^2}(-4, -3)^{3}$, $F_1 = S_{\P^1 \times \P^2}(-3, -5)^6 \oplus S_{\P^1 \times \P^2}(-4, -4)^{11}$, and $F_2 = S_{\P^1 \times \P^2}(-4, -5)^5$. We will construct a curve $D' \subset \P^1 \times \P^2$ in the converse direction. First, we take a free resolution of the above form, and then we observe that the module represented by such a resolution is indeed the ideal of a curve $D'$. Let $M: F_2 \to F_1$ be a general map chosen randomly, and let $K$ be the cokernel of the dual map $M^* : F_1^* \to F_2^*$. The first terms of a minimal free resolution of $K$ are: \[ \cdots \to G \stackrel{N'} \to F_1^* \stackrel{M^*} \to F_2^* \to K \to 0 \] where $G$ is the module generated by the syzygies of $M^*$. Composing $N'$ with a general map $F_0^* \to G$ and dualizing again, we get a map $N : F_1 \to F_0$. The following script shows that the kernel of $N^*$ is $S_{\P^1 \times \P^2}$, so that the entries of the matrix $S_{\P^1 \times \P^2} \to F_0^{*}$ generate an ideal.
\begin{verbatim}
i1 : setRandomSeed "RandomCurves"; p=997; Fp=ZZ/p;
     S=Fp[x_0,x_1,y_0..y_2, Degrees=>{2:{1,0},3:{0,1}}]; -- Cox ring
     m=ideal basis({1,1},S); -- irrelevant ideal
\end{verbatim}
\begin{verbatim}
i2 : randomCurveGenus12Withg17=(S)->(
     M:=random(S^{6:{-3,-5},11:{-4,-4}},S^{5:{-4,-5}}); -- random map M
     N':=syz transpose M; -- syzygy matrix of the dual of M
     N:=transpose(N'*random(source N',S^{3:{4,3},10:{3,4}}));
     ideal syz transpose N) -- the vanishing ideal of the curve
\end{verbatim}
\begin{verbatim}
i3 : ID'=saturate(randomCurveGenus12Withg17(S),m); -- ideal of D'
\end{verbatim}
Since the maximal rank assumption is an open condition, the above example shows that there is a component $\mathcal H \subset Hilb_{(7,10),12} (\P^1 \times \P^2)$ in the Hilbert scheme of curves of bidegree $(7,10)$ and genus $12$ defined by free resolutions of the above form. Also note that $D' \in \mathcal H$ admits both a $\mathfrak g^1_{7}$ and a $\mathfrak g^2_{10}$ induced by the natural projections. We want to verify that a general $D \in \mathcal H$, equipped with the line bundles induced by the two natural projections, behaves like a general curve $D \in \mathcal M_{12}$ together with general line bundles $L_1$ and $L_2$, in order to show that $\mathcal H$ dominates $\mathcal M_{12}$. Recall from Brill-Noether theory that for a general curve $D$ of genus $g$, the Brill-Noether locus \[ W^r_d(D) = \{ L \in \Pic (D) \ | \ \deg(L)=d, h^0(L) \ge r+1 \} \] is nonempty and smooth away from $W^{r+1}_d (D)$ of dimension $\rho$ if and only if \[ \rho = \rho(g,r,d) = g-(r+1)(g-d+r) \ge 0. \] Also note that the tangent space at $L \in W^r_d (D) \setminus W^{r+1}_d (D)$ is the dual of the cokernel of the Petri map \[ H^0 (D, L) \otimes H^0(D, \omega_D \otimes L^{-1}) \to H^0(D, \omega_D). \] We expect that both $L_1$ and $L_2$ correspond to smooth isolated points (of dimension $\rho_1 = \rho_2 = 0$); equivalently, both Petri maps are injective. We refer to \cite[Chapter IV]{ACGH} for details on Brill-Noether theory.
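In our situation both Brill-Noether numbers indeed vanish:
\[
\rho(12,1,7) = 12 - 2\,(12-7+1) = 0, \qquad \rho(12,2,10) = 12 - 3\,(12-10+2) = 0,
\]
so for a Petri general curve the loci $W^1_7(D)$ and $W^2_{10}(D)$ are finite sets of reduced points; this is what we verify below for $L_1$ and $L_2$.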
Now let $\eta : D \to D'$ be a normalization of a given point $D' \in \mathcal H$, since we do not yet know that $D'$ is smooth. We check that the $L_i$ are smooth points in the associated Brill-Noether loci as follows, where $L_i$ is the line bundle on $D$ obtained by pulling back the natural $\mathfrak g^1_7$ and $\mathfrak g^2_{10}$ on $D'$ for $i=1, 2$, respectively. We first check $L_2$; we take the plane model $\Gamma \subset \P^2$ of $D'$.
\begin{verbatim}
i4 : Sel=Fp[x_0,x_1,y_0..y_2,MonomialOrder=>Eliminate 2];
     R=Fp[y_0..y_2]; -- coordinate ring
     IGammaD=sub(ideal selectInSubring(1,gens gb sub(ID',Sel)),R);
     -- ideal of the plane model
\end{verbatim}
We observe that $\Gamma$ is a curve of the desired degree and genus, and that its singular locus $\Delta$ consists only of ordinary double points, as follows.
\begin{verbatim}
i5 : distinctPoints=(J)->(
     singJ:=minors(2,jacobian J)+J;
     codim singJ==3)
\end{verbatim}
\begin{verbatim}
i6 : IDelta=ideal jacobian IGammaD + IGammaD; -- singular locus
     distinctPoints(IDelta)

o6 = true
\end{verbatim}
\begin{verbatim}
i7 : delta=degree IDelta; d=degree IGammaD;
     g=binomial(d-1,2)-delta;
     (d,g,delta)==(10,12,24)

o7 = true
\end{verbatim}
We can also compute the minimal free resolution of $I_{\Delta}$:
\begin{verbatim}
i8 : IDelta=saturate IDelta; betti res IDelta

            0 1 2
o8 = total: 1 4 3
         0: 1 . .
         1: . . .
         2: . . .
         3: . . .
         4: . . .
         5: . 4 .
         6: . . 3
\end{verbatim}
Thanks to the above Betti table, we immediately check that $\Gamma$ is irreducible since $\Delta$ is not a complete intersection of type $(4,6)$. Indeed, there is no way to write a degree 10 curve $\Gamma \subset \P^2$ with $24$ nodes as a union of 2 curves. In particular, the normalization of $\Gamma$ is isomorphic to a smooth irreducible curve of genus $g=12$, and thus $D'$ is smooth since $12=g \le p_a(D') \le 12$. Hence, from now on, we do not distinguish $D$ from $D'$, since they coincide.
By Riemann-Roch, we have $h^0(D, L_2)=3$, since $h^1(D, L_2) = h^0(D, \omega_D \otimes L_2^{-1}) = h^0(\P^2, \mathcal I_{\Delta}(6)) = 4$ by the adjunction formula applied to $D \subset \op{Bl}_\Delta \P^2$. Hence $|L_2|$ is complete and the Petri map for $L_2$ is identified with the multiplication \[ H^0 (\P^2, \mathcal O_{\P^2}(1)) \otimes H^0 (\P^2, \mathcal I_{\Delta}(6)) \to H^0 ( \P^2, \mathcal I_{\Delta}(7)). \] Note that the map is injective since there is no linear relation among the 4 sextic generators of $I_{\Delta}$. In fact, the Petri map is an isomorphism, and $L_2 \in W^2_{10}(D)$ is a smooth isolated point of dimension $\rho_2 = 0$. To check that $L_1$ is Petri generic, we first compute the embedding $D \to \P H^0 (\omega_D \otimes L_1^{-1}) = \P^5$ and its minimal free resolution by choosing sections of $H^0 (\omega_D) \simeq H^0 (\P^2, \mathcal I_{\Delta}(7))$ which vanish on a fiber of $D \to \P^1$ induced by $|L_1|$:
\begin{verbatim}
i9 : LK=(mingens IDelta)*random(source mingens IDelta, R^{12:{-7}});
     -- compute a basis
     Pt=random(Fp^1,Fp^2); -- a random point in a line
     L1=substitute(ID',Pt|vars R); -- fiber over the point
     KD=LK*(syz(LK%L1,DegreeLimit=>7));
     -- compute a basis for elements in LK vanishing on L1
     T=Fp[z_0..z_5]; -- coordinate ring
     phiKD=map(R,T,KD); -- embedding
     ID=preimage_phiKD(IGammaD);
     degree ID==15 and genus ID==12

o9 = true
\end{verbatim}
\begin{verbatim}
i10 : betti(FD=res ID)

             0  1  2  3  4
o10 = total: 1 12 25 16  2
          0: 1  .  .  .  .
          1: .  2  .  .  .
          2: . 10 25 16  .
          3: .  .  .  .  2
\end{verbatim}
We observe that the curve $D \subset \P^5$ satisfies the desired properties. Since the length of the minimal free resolution of $I_D$ equals the codimension of $D$, the curve $D \subset \P^5$ is ACM. Note that the dual complex $\Hom_{S_{\P^5}}^{\bullet} (F_D , S_{\P^5}(-6))$ gives a resolution of $\oplus_{n \in \Z} H^0 (\omega_D(n))$, where $F_D$ is the minimal free resolution of $I_D$.
The Betti table also tells us that this module is generated by its 2 global sections in degree $-1$ and that $h^0(L_1) = h^0(\omega_D(-1)) = 2$. Hence, $|L_1|$ is also complete and the Petri map for $L_1$ is identified with \[ H^0(D, \omega_D(-1)) \otimes H^0(\P^5, \mathcal O_{\P^5}(1)) \to H^0(D, \omega_D). \] This map is also injective since there is no linear relation between the 2 generators in $H^0(\omega_D(-1))$. In fact, the Petri map is an isomorphism, and $L_1 \in W^1_{7}(D)$ is a smooth isolated point of dimension $\rho_1 = 0$. As a consequence, $\mathcal H$ dominates $Z = \mathcal W^1_{7} \times_{\mathcal M_{12}} \mathcal W^2_{10}$ and $\mathcal M_{12}$, thanks to Brill-Noether theory. It remains to check the existence of a dominating family of desired curves in $\P^5$ over the space of intersections of two quadrics in $\P^5$. Since a random curve $D \in \mathcal H$ provides an embedding $D \subset \P^5$ given by a Petri generic line bundle $\mathcal O_D(1) := \omega_D \otimes L_1^{-1}$, the above construction provides a nonempty component $\mathcal H' \subset Hilb_{15t+1-12}(\P^5)$ together with a dominant rational map $\mathcal H' // Aut(\P^5) \to \mathcal M_{12}$. Note that choosing an intersection of 2 quadrics $X \subset \P^5$ containing $D$ is equivalent to choosing a 2-dimensional subspace of $H^0(\P^5, \mathcal I_{D/\P^5} (2))$. Consider the incidence variety \[ V = \{ (D, X) \ | \ D \in \mathcal H' \text{ ACM and } X \in \op{Gr}(2, H^0(\P^5, \mathcal I_{D/\P^5}(2))) \text{ smooth} \}. \] Since the graded Betti numbers are upper semicontinuous in a flat family with fixed Hilbert function, and $H^0 (\P^5, \mathcal I_{D/\P^5}(2))$ is spanned by exactly 2 quadrics for a randomly chosen $D$, we observe that $V$ is birational to $\mathcal H'$.
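Let us also record the Riemann-Roch count behind the last claim. Since $D \subset \P^5$ is ACM, the restriction map $H^0(\mathcal O_{\P^5}(2)) \to H^0(\mathcal O_D(2))$ is surjective, and $\mathcal O_D(2)$ is nonspecial of degree $30$, so
\[
h^0(\P^5, \mathcal I_{D/\P^5}(2)) = h^0(\mathcal O_{\P^5}(2)) - h^0(\mathcal O_D(2)) = 21 - (30 + 1 - 12) = 2.
\]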
We compute the normal sheaf $\mathcal N_{D/X}$ for a random pair $(D, X) \in V$ as follows:
\begin{verbatim}
i11 : IX=ideal((mingens ID)*random(source mingens ID,T^{2:-2}));
      ID2=saturate(ID^2+IX);
      cNDX=image gens ID / image gens ID2; -- conormal sheaf
      NDX=sheaf Hom(cNDX,T^1/ID); -- normal sheaf
      HH^0 NDX(-1)==0 and HH^1 NDX(-1)==0

o11 = true
\end{verbatim}
\begin{verbatim}
i12 : HH^0 NDX==Fp^30 and HH^1 NDX==0

o12 = true
\end{verbatim}
In particular, the Hilbert scheme of $X$ is smooth of dimension $30$ at $[D \subset X]$, and $h^i (\mathcal N_{D/X}(-1))=0$ for $i=0, 1$. We do a similar computation for $\mathcal N_{D/{\P^5}}$:
\begin{verbatim}
i13 : cNDP=prune(image (gens ID)/ image gens saturate(ID^2));
      NDP=sheaf Hom(cNDP,T^1/ID);
      HH^0 NDP==Fp^68 and HH^1 NDP==0

o13 = true
\end{verbatim}
Hence $\mathcal H' \subset Hilb_{15t+1-12}(\P^5)$ is smooth of the expected dimension 68 at the point $[D \subset \P^5] \in \mathcal H'$. Consider the natural projections \[ \xymatrix{ & V \ar[dl]_{\pi_1} \ar[dr]^{\pi_2} & \\ \mathcal H' & & Gr(2, H^0(\P^5, \mathcal O_{\P^5}(2))). } \] We observe that $V$ is irreducible of dimension 68, since the fiber of $\pi_1$ over $D$ is exactly a single point. Also note that the map $\pi_2$ is smooth of relative dimension $h^0(D, \mathcal N_{D/X}) = 30$ at $(D,X)$. Since $\dim \op{Gr}(2, H^0(\P^5, \mathcal O_{\P^5}(2))) = 38$, we conclude that $\pi_2$ is dominant. In particular, a general $X \in Gr(2, H^0(\mathcal O_{\P^5}(2)))$ contains a curve $D \in \mathcal H'$. By semicontinuity, we conclude that a general $(D,X)$ also satisfies the desired properties. \end{proof} The existence of such a curve $D$ on $X$ provides a construction of a rank 3 Ulrich bundle on $X$ via Serre correspondence.
The idea of Casanellas and Hartshorne also works in our case, and consequently we have the following theorem: \begin{thm}[{See also \cite[Proposition 5.4 and Theorem 5.7]{CasanellasHartshorne:Ulrich}}]\label{thm:StableUlrichBundlesClassical} Let $X \subset \P^5$ be the intersection of $2$ general quadrics in $\P^5$. Then $X$ carries an $(r^2+1)$-dimensional family of stable Ulrich bundles of rank $r$ for every $r \ge 2$. \end{thm} \begin{proof} Since the strategy is almost the same as in \cite{CasanellasHartshorne:Ulrich}, we only provide a shorter proof here. Note first that there is a rank 2 Ulrich bundle on any smooth complete intersection $X$ \cite{ArrondoCosta2000, Beauville2016:IntroductionUlrich}, namely, an elliptic curve type ACM bundle. Since there is no Ulrich line bundle, any rank 2 Ulrich bundle must be stable by Proposition \ref{prop: semistability of Ulrich}. For the same reason, if there is a rank 3 Ulrich bundle, then it is also stable. Proposition \ref{Thm:ExistenceOfACMg12d15} implies that a general $X$ contains a smooth ACM curve $D$ of degree 15 and genus 12 such that $\omega_D(-1)$ has two sections which generate the graded module $H^0_{*}(\omega_D)$ as an $S_{\P^5}$-module. By Serre correspondence, those two generators define a rank 3 vector bundle $\mathcal E$ as an extension \[ 0 \to \mathcal O_{X}^{2} \to \mathcal E \to \mathcal I_{D} (3) \to 0. \] Since $D$ is ACM, we immediately check that $H^1 (\mathcal E(j))=0$ for every $j \in \mathbb Z$. Furthermore, we also have $H^1 (\mathcal E^{*}(j)) = H^2 (\mathcal E(-j-2)) = 0$ for every $j \in \mathbb Z$ from the dual sequence \[ 0 \to \mathcal O_X(-3) \to \mathcal E^{*} \to \mathcal O_X^{2} \to \omega_D(-1) \to 0. \] Hence $\mathcal E$ is an ACM bundle. Applying Riemann-Roch on $D$, we have $h^0 (\mathcal O_D(2)) = 19 = h^0 (\mathcal O_X(2))$ and thus $h^0(\mathcal E(-1)) = h^0(\mathcal I_{D}(2)) = 0$.
Similarly, we have $h^0(\mathcal I_{D}(3)) = h^0(\mathcal O_X(3)) - h^0 (\mathcal O_D(3)) = 10$, and thus $h^0(\mathcal E) = 12 = (\deg X) \cdot (\rank \mathcal E)$. Indeed, $\mathcal E$ is a rank 3 Ulrich bundle. As a consequence, we obtain the existence of Ulrich bundles on $X$ of every rank $r \ge 2$ by taking direct sums of Ulrich bundles of ranks 2 and 3. Suppose now that $\mathcal E$ is a stable Ulrich bundle of rank $r \ge 2$. By Riemann-Roch, we have $\chi(\mathcal E \otimes \mathcal E^{*}) = -r^2$. Since the computations in \cite[Proposition 5.6]{CasanellasHartshorne:Ulrich} also hold for our $X$, we have $h^2( \mathcal E \otimes \mathcal E^{*}) = h^3 ( \mathcal E \otimes \mathcal E^{*}) = 0$. Since $\mathcal E$ is simple, we conclude that $h^0 ( \mathcal E \otimes \mathcal E^{*}) = 1$ and $h^1( \mathcal{E} \otimes \mathcal E^{*}) = r^2 + 1$, as desired. Hence the moduli space of stable Ulrich bundles is smooth of the expected dimension $r^2+1$ if it is nonempty. It only remains to show the existence of stable Ulrich bundles of rank greater than $3$. Let $r \ge 4$, and let $\mathcal E'$ and $\mathcal E'' \not \simeq \mathcal E'$ be stable Ulrich bundles of ranks $2$ and $r-2$, respectively. By Riemann-Roch and \cite[Proposition 5.6]{CasanellasHartshorne:Ulrich}, we have $h^1( \mathcal E' \otimes \mathcal {E''} ^*) = -\chi( \mathcal E' \otimes \mathcal {E''} ^*) = 2r - 4 > 0$. Hence the space $\P \Ext_X ^1 (\mathcal E'' , \mathcal E')$ is nonempty and each element gives a nonsplit extension \[ 0 \to \mathcal E' \to \mathcal E \to \mathcal E'' \to 0 \] where $\mathcal E$ is a simple and strictly semistable Ulrich bundle of rank $r$. Such extensions form a family of dimension \[ \dim \{ \mathcal E' \} + \dim \{ \mathcal E'' \} + \dim \P \Ext_X^1 (\mathcal E'' , \mathcal E') = r^2-2r+5 < r^2+1. \] Since the extensions coming from the other rank decompositions also form smaller families, we conclude that a general Ulrich bundle of rank $r$ is stable. This completes the proof.
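The elementary numerical identities used in this proof, together with the values $h^0(\mathcal O_X(2)) = 19$ and $h^0(\mathcal I_D(3)) = 10$ above, can be double-checked mechanically. The following short script is a supplementary sanity check, not part of the argument; it recomputes the counts from the Koszul resolution of $X \subset \P^5$ and Riemann-Roch on $D$:

```python
from math import comb

g, degD = 12, 15  # genus and degree of the ACM curve D in P^5

def chi_OX(t):
    # Hilbert polynomial of the (2,2) complete intersection X in P^5,
    # computed from the Koszul resolution; equals h^0 since X is ACM
    return comb(t + 5, 5) - 2 * comb(t + 3, 5) + comb(t + 1, 5)

def h0_OD(t):
    # Riemann-Roch on D: O_D(t) has degree 15t, nonspecial for t >= 2
    d = degD * t
    assert d > 2 * g - 2
    return d + 1 - g

h0_ID2 = chi_OX(2) - h0_OD(2)   # sections of I_D(2)
h0_ID3 = chi_OX(3) - h0_OD(3)   # sections of I_D(3)
h0_E = 2 + h0_ID3               # from 0 -> O_X^2 -> E -> I_D(3) -> 0

print(chi_OX(2), h0_ID2, h0_ID3, h0_E)  # 19 0 10 12
```

The final value $h^0(\mathcal E) = 12 = (\deg X)\cdot(\rank \mathcal E)$ confirms the Ulrich condition.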
\end{proof} \begin{rmk} We finish this section with a few remarks. \begin{enumerate} \item In fact, the proof of Proposition \ref{Thm:ExistenceOfACMg12d15} implies much stronger results. For instance, one can check that $\mathcal H$ is a unirational family which dominates the moduli space $\mathcal M_{12}$ of smooth curves of genus $12$, as in \cite[Appendix]{CasanellasHartshorne:Ulrich}. \item Because our computation is computer-based and carried out over a finite field, we cannot remove the assumption that $X$ is general. It is also mysterious ``how general'' $X$ should be. \item As we mentioned, the above approach closely follows \cite{CasanellasHartshorne:Ulrich}. In loc. cit., the authors also checked that any smooth cubic 3-fold contains an elliptic normal curve of degree 5. Similarly, any smooth complete intersection of two quadrics in $\P^5$ contains an elliptic normal curve of degree 6, as in \cite[Proposition 8]{Beauville2016:IntroductionUlrich}. It is an interesting task to construct smooth ACM curves of degree 15 and genus 12 on any smooth complete intersection of two 4-dimensional quadrics. \end{enumerate} \end{rmk} \section{Derived categorical approaches}\label{section:derived} The notion of semiorthogonal decomposition enables us to reduce problems about Ulrich bundles on $X$ to problems about vector bundles on the associated curve $C$. Let us recall some necessary facts about the moduli space of vector bundles on curves and the derived category of coherent sheaves on $X$. \subsection{Stable vector bundles on curves} Let $C$ be a smooth projective curve of genus $g$, let $\mathcal U_C(r,d)$ be the moduli space of S-equivalence classes of rank $r$ semistable vector bundles of degree $d$ on $C$, and let $\mathcal{SU}_C(r, L)$ be the moduli space of S-equivalence classes of rank $r$ semistable vector bundles with determinant $L$ on $C$. We use the superscript $(\text{--})^{\sf s}$ to denote the sub-moduli space parametrizing stable objects.
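We will use the following standard dimension count: for a smooth projective curve $C$ of genus $g \ge 2$, the stable locus $\mathcal U_C^{\sf s}(r,d)$ is smooth of dimension $r^2(g-1)+1$. In our situation $g = 2$ and $d = 2r$, so
\[
\dim \mathcal U_C^{\sf s}(r,2r) = r^2 + 1,
\]
which matches the dimension of the family of stable Ulrich bundles obtained in Theorem \ref{thm:StableUlrichBundlesClassical}.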
It is well-known that $\mathcal U_C(r,d)$ and $\mathcal{SU}_C(r, L)$ are normal projective varieties (see \cite{NarasimhanRamanan:ModuliofVectBdl,Seshadri:SpaceofUnitaryVectBdls}). The lemma below is one of the well-known results on (semi)stable bundles on curves. \begin{lem} Let $ F$ be a stable vector bundle of rank${}\geq 2$ on $C$. Then, \begin{enumerate} \item $\mu( F) \geq 2g-2$ implies $h^1( F)=0$, and \item $\mu( F) \geq 2g-1$ implies that $ F$ is globally generated. \end{enumerate} If the inequalities on $\mu$ are strict, then the same results are valid for semistable $F$. \end{lem} \begin{proof} Assume that $h^1( F) \neq 0$. Then $h^0( F^* \otimes \omega_C)\neq 0$, which is impossible unless $ F = \omega_C$ since $ F$ is stable and $\deg( F^*\otimes \omega_C) \le 0$. This proves (1). If $\mu( F) \geq 2g-1$, then $H^1( F(-P))=0$ for any $P \in C$, hence $H^0( F) \to F \otimes \kappa(P)$ is surjective. Using Nakayama's lemma, we conclude that $H^0( F) \otimes \mathcal O_C \to F$ is surjective. \end{proof} \begin{lem}[{\cite[Exercise~2.8]{Popa:GeneralizedTheta}}]\label{lem: semistability in Popa note} Let $ F,\, G$ be vector bundles on $C$ such that $H^p( F\otimes G)=0$ for $p=0,1$. Then both $ F$ and $ G$ are semistable. \end{lem} \begin{proof} By Riemann-Roch, $\mu( F \otimes G)= g-1$. Assume that there exists $0 \neq F' \subset F$ such that $\op{rank} F' < \op{rank} F$ and $\mu( F') > \mu( F)$. Then, $\mu( F' \otimes G) > \mu( F \otimes G) = g-1$. This shows that $\chi( F' \otimes G) > 0$; in particular, $h^0( F' \otimes G) > 0$. This contradicts $ F' \otimes G \subset F \otimes G$ and $h^0( F \otimes G)=0$. It follows that $F$ is semistable, and the same argument applies to $ G$.
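For the reader's convenience, we spell out the Riemann--Roch computation used at the beginning of the proof above: since $h^0( F \otimes G) = h^1( F \otimes G) = 0$, we have \[ 0 = \chi( F \otimes G) = \deg( F \otimes G) + \op{rank}( F \otimes G)\,(1-g) = \op{rank}( F \otimes G)\bigl( \mu( F \otimes G) - (g-1) \bigr), \] hence $\mu( F \otimes G) = \mu( F) + \mu( G) = g-1$.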
\end{proof} As in the case of line bundles, we may define the Brill-Noether locus as follows: \[ W^{k-1}_{r,d}(C) := \{ [F] \in \mathcal U_C^{\sf s} (r,d) \ | \ h^0(C, F) \ge k \} \] which is a subscheme of $\mathcal U_C^{\sf s} (r,d)$ of expected dimension $\rho^{k-1}_{r,d} = r^2(g-1) + 1 - k(k-d+r(g-1))$. The following theorem will be useful later: \begin{thm}[{\cite[Theorem B]{BGN1997}}] \label{Theorem:BGN} The locus $ W^{k-1}_{r,d} (C)$ is nonempty if and only if \[ d>0, r \le d+(r-k)g \text{ and } (r,d,k) \neq (r,r,r). \] \end{thm} \subsection{Derived categories of $X$} Let $Q^n \subset \P^{n+1}$ be a smooth quadric hypersurface. We unify all notation involving spinor bundles in accordance with \cite{BondalOrlov:SODforAlgVar}. Hence, the spinor bundles on the quadric $Q^n$ give the semiorthogonal decomposition\,\cite{Kapranov:DerivedCat_Homogeneous} \[ \D(Q^n) = \left\{ \begin{array}{ll} \bigl\langle\, \mathcal O(-n+1),\,\ldots ,\,\mathcal O,\,S \,\bigr\rangle & \text{if $n$ is odd} \\ \bigl\langle\, \mathcal O(-n+1),\,\ldots ,\,\mathcal O,\,S^+,\,S^- \,\bigr\rangle & \text{if $n$ is even} \end{array} \right. \] In particular, in the case $n=4$, $S^\pm$ correspond to the universal quotient bundle and the dual of the universal subbundle under the isomorphism $Q^4 \simeq \op{Gr}(2, \C^4)$. \par Let $Q_0,\, Q_\infty \subset \P^5$ be two nonsingular $4$-dimensional quadrics whose intersection defines $X$. Without loss of generality, we may assume \[ Q_0 = ( x_0^2 + \ldots + x_5^2 = 0 )\quad\text{and}\quad Q_\infty = ( \lambda_0 x_0^2 + \ldots + \lambda_5 x_5^2 = 0 ) \] for some $\lambda_0,\, \ldots, \lambda_5 \in \C$. We define $X := Q_0 \cap Q_\infty$, a smooth threefold of degree 4. One well-known approach to $X$ is to consider the associated quadric pencil $\mathfrak d := \lvert Q_0 + t Q_\infty \rvert_{t \in \P^1}$ on $\P^5$.
Let us assume that the pencil $\mathfrak d$ is nonsingular in the sense of \cite{Reid:PhD}, namely, each singular quadric $Q_{\lambda_i}$\,($i=0,\ldots,5$) is isomorphic to the cone over a smooth quadric $Q^3 \subset \P^4$ with vertex a point. Note that this condition is equivalent to saying that $\lambda_0,\ldots,\lambda_{5}$ are pairwise distinct. Also note that none of $\lambda_0,\ldots,\lambda_{5}$ is zero since $Q_{\infty}$ is smooth. The resolution of indeterminacy of $\varphi_\mathfrak d \colon \P^5 \dashrightarrow \P^1$ gives the relative quadric $\mathcal Q \to \P^1$. Let $\sigma \colon C \to \P^1$ be the double cover ramified over $[1:\lambda_0],\,\ldots,\,[1:\lambda_5] \in \P^1$; by Riemann--Hurwitz, $C$ is a smooth curve of genus $2$. Let $\mathcal Q_C := \mathcal Q \times_{\P^1} C$ be the fiber product. Bondal and Orlov\,\cite{BondalOrlov:SODforAlgVar} showed that $C$ is the fine moduli space of spinor bundles on the quadrics in $\mathfrak d$, {\it i.e.} there exists a vector bundle $\mathcal S_{\mathcal Q_C}$ on $\mathcal Q_C$ such that for each $c \in C$, the restriction $\mathcal S_{\mathcal Q_C}\big\vert_{\mathcal Q \times\{c\}}$ is one of the spinor bundles on the quadric $Q_{\sigma(c)}$. When $Q_{\sigma(c)}$ is a singular quadric, it is a cone $\mathcal C( Q^3 )$ over a $3$-dimensional quadric with vertex a point $v \in \mathbb P^5$; in this case, the corresponding spinor bundle is the pullback of the unique spinor bundle on $Q^3$ along $\mathcal C( Q^3) \setminus \{v\} \to Q^3$. We define the vector bundle $\mathcal S := \mathcal S_{\mathcal Q_C}\big \vert_{X \times C}$. \begin{thm}[Bondal--Orlov\,\cite{BondalOrlov:SODforAlgVar}]\label{thm: Bondal-Orlov} The Fourier--Mukai transform \[ \Phi_{\mathcal S} \colon \D(C) \to \D(X),\ F^\bullet \mapsto Rp_{X*}( Lp_C^* F^\bullet \Dotimes \mathcal S) \] is fully faithful, and induces a semiorthogonal decomposition \[ \D(X) = \bigl\langle\, \mathcal O_X(-1),\,\mathcal O_X,\,\Phi_{\mathcal S}\bigl(\D(C)\bigr)\,\bigr\rangle.
\] \end{thm} Furthermore, $X$ can be regarded as the fine moduli space of stable rank-$2$ vector bundles on $C$ with a fixed determinant of odd degree\,\cite{Newstead:StableBundlesofRank2OddDeg}, and $\mathcal S$ is the universal bundle of this moduli problem. There arises an ambiguity in the choice of this fixed determinant\,(the theorem of Bondal and Orlov is unaffected by the replacement $\mathcal S \mapsto \mathcal S \otimes p_C^* L$ for any line bundle $ L \in \Pic C$). \begin{defi}\label{def: univ Spinor} We choose a line bundle $\xi$ of degree $1$, and assume that $\mathcal S$ is the universal family of the fine moduli space $\mathcal{SU}_C(2,\xi^*) \simeq X$ which parametrizes the stable vector bundles of rank $2$ and determinant $\xi^*$. Equivalently, $\mathcal S$ is determined by imposing the condition $\det \mathcal S = \mathcal O_X(1) \boxtimes \xi^*$. \end{defi} This choice of $\mathcal S$ is precisely dual to the same symbol in Section 5 of \cite{Kuznetsov:Instanton}. We remark that some parts of the next subsection follow the arguments in \cite{Kuznetsov:Instanton}. This may cause confusion, so we restate the details necessary for the rest of the paper. \subsection{Ulrich bundles via derived categories} Let $\op{Coh}(X)$ be the category of coherent sheaves on $X$. There is a natural functor $\op{Coh}(X) \to \D(X)$ which maps a coherent sheaf $\mathcal E$ to the complex concentrated in degree zero: \[ \ldots \to 0 \to \mathcal E \to 0 \to \ldots. \] This identifies $\op{Coh}(X)$ with a full (but not triangulated) subcategory of $\D(X)$, hence we may regard a coherent sheaf on $X$ as an object in $\D(X)$. Conversely, we call an object $\mathcal E^\bullet \in \D(X)$ a coherent sheaf\,(resp. a vector bundle) if $\mathcal E^\bullet$ is isomorphic to an object\,(resp. a locally free sheaf) in $\D(X) \cap \op{Coh}(X)$. We use derived categories to classify Ulrich bundles on $X$.
We first assume that there exists an Ulrich bundle $\mathcal E$ of rank $r \geq 2$ on $X$\,(the existence will be proved later). By Proposition~\ref{prop: Ulrich Equiv conditions}, $H^p(\mathcal E(-i)) = \Hom_{\D(X)}( \mathcal E^*(1), \mathcal O(-i+1)[p])=0$ for all $p$ and $i=1,2,3$. Using the semiorthogonal decomposition in Theorem~\ref{thm: Bondal-Orlov}, one immediately sees that $\mathcal E^*(1) \in \Phi_{\mathcal S} \D(C)$. Since $\D(C) \to \Phi_\mathcal S(\D(C))$ is an equivalence of categories, the study of Ulrich bundles on $X$ boils down to the study of certain objects in $\D(C)$. Such objects are obtained by mapping $\mathcal E^*(1)$ along the projection functor $\Phi_\mathcal S^! \colon \D(X) \to \D(C)$. Before proceeding, let us note that the projection $\Phi_\mathcal S^!$ is right adjoint to $\Phi_\mathcal S$. Since the functor $\Phi_\mathcal S$ is given by $ F \mapsto Rp_{X*}(Lp_C^* F \Dotimes \mathcal S)$ where $p_X \colon X \times C \to X$ and $p_C \colon X \times C \to C$ are the natural projections, its right adjoint has the following form\,(\textit{cf.} \cite[Proposition~5.9]{Huybrechts:FourierMukai}): \[ \Phi^!_\mathcal S \colon \D(X) \to \D(C),\qquad \mathcal E \mapsto Rp_{C*}\bigl( Lp_X^* \mathcal E \Dotimes \mathcal S^*\bigr) \otimes \omega_C [1]. \] Meanwhile, the Ulrich conditions in Proposition~\ref{prop: Ulrich Equiv conditions}-\ref{item: Ulrich as Acyclic condition} impose an extra condition on $\mathcal E^*(1)$ beyond $\mathcal E^*(1) \in \Phi_\mathcal S \D(C)$. Indeed, the condition $H^\bullet( \mathcal E(-3))=0$ does not follow from $\mathcal E^*(1) \in \Phi_\mathcal S \D(C)$. It can be expressed as follows: \begin{align} & \Hom_{\D(X)}(\mathcal E^*(1),\, \mathcal O_X(-2)[p]) = 0 \nonumber \\ \Leftrightarrow& \Hom_{\D(X)}(\Phi_\mathcal S \Phi_\mathcal S^!( \mathcal E^*(1)),\, \mathcal O_X(-2)[p])=0 \nonumber \\ \Leftrightarrow& \Hom_{\D(C)}(\Phi_\mathcal S^!(\mathcal E^*(1)),\, \Phi_\mathcal S^! (\mathcal O_X(-2))[p])=0.
\label{eq: Ulrich orthogonal condition} \end{align} \begin{lem}\label{lem: Projection image and Raynaud bundle} We have $\Phi_{\mathcal S}^! (\mathcal O_X(-2))[2] \simeq \mathcal R^* \otimes \omega_C^{\otimes 2}$, where $\mathcal R$ is the second Raynaud bundle which appears in \cite[Section 5.4]{Kuznetsov:Instanton}. \end{lem} \begin{proof} By \cite{Kuznetsov:Instanton}, $\mathcal R \simeq \Phi_{\mathcal S^*}^! \mathcal O_X[-1] = p_{C*} \bigl( \mathcal S \otimes p_X^* \mathcal O_X\bigr) \otimes \omega_C$. Thus, \begin{align*} \mathcal R^* &\simeq \varHom_{\D(C)}( p_{C*} \mathcal S \otimes \omega_C,\ \mathcal O_C ) \\ &\simeq p_{C*} \varHom_{\D(X \times C)} ( \mathcal S \otimes p_C^* \omega_C,\ p_X^*\omega_X [3] )\\ &\simeq p_{C*} \bigl( \mathcal S^* \otimes p_X^*\mathcal O_X(-2) \bigr) \otimes \omega_C^* [3] \\ &\simeq \Phi_{\mathcal S}^!( \mathcal O_X(-2)) \otimes \omega_C^{\otimes(-2)}[2], \end{align*} where the second isomorphism is given by Grothendieck-Verdier duality. \end{proof} Together with the orthogonality condition (\ref{eq: Ulrich orthogonal condition}), we need to understand what the object $\Phi_\mathcal S^!( \mathcal E^*(1))$ looks like. One standard way is to analyze its restriction $\Phi_\mathcal S^! \mathcal E^*(1) \otimes \kappa(c) \in \D(\{c\})$ to a point. We fix notation as follows to avoid confusion. \begin{nota} For $x \in X$, we denote by $\mathcal S_x$ the vector bundle over $C$ determined by the restriction of $\mathcal S$ to $\{x\} \times C \simeq C$. Similarly, the vector bundle $\mathcal S_c$\,($c \in C$) over $X$ is defined to be the restriction of $\mathcal S$ to $X \times\{c\} \simeq X$. \end{nota} The proof of the following proposition is essentially due to \cite[Theorem~5.10]{Kuznetsov:Instanton}, but we write down the proof to prevent confusion arising from the choice of conventions. \begin{prop}\label{prop:UlrichViaDerivedCategory} Suppose there exists an Ulrich bundle $\mathcal E$ of rank $r$ on $X$.
Then, $ F := \Phi_\mathcal S^!(\mathcal E^*(1)) \in \D(C)$ is a semistable vector bundle over $C$ of rank $r$ and degree $2r$. Furthermore, $ F$ satisfies \begin{enumerate} \item $\Ext^p_C( \mathcal R,\, F^* \otimes \omega_C^{\otimes 2} ) = 0$ for $p=0,1$ and \item $H^1( F \otimes \mathcal S_x)=0$ for each $x \in X$. \end{enumerate} Conversely, if $ F$ is a semistable vector bundle over $C$ of rank $r$ and degree $2r$ satisfying the conditions (1) and (2) above, then $\Phi_\mathcal S F = \mathcal E^*(1)$ for some Ulrich bundle $\mathcal E$ over $X$. \end{prop} \begin{proof} Let $c \in C$ be a point. Then $ F \otimes \kappa(c) \in \D(\{c\})$ is a complex of $\C$-vector spaces whose cohomology groups are controlled by \begin{equation}\label{eq: FM projection of E^*(1) at a point} H^{p + 1} ( X,\ \mathcal E^*(1) \otimes \mathcal S_c^*) \simeq \Ext^{p+1}_X(\mathcal E(-1),\, \mathcal S_c^*). \end{equation} By \cite[p.~310]{Ottaviani:Spinor}, $\mu(\mathcal S_c^*) = -1/2$, regardless of whether $c$ is a ramification point or not. Hence $\mu(\mathcal E(-1))=0$, Proposition~\ref{prop: Stability vanishing}, and Serre duality imply that \begin{align*} \Ext_X^0(\mathcal E(-1),\,\mathcal S_c^*) &\simeq \Hom_X(\mathcal E(-1),\, \mathcal S_c^*) = 0. \end{align*} Consider the following short exact sequence (\textit{cf.} \cite[Theorem~2.8]{Ottaviani:Spinor}) \begin{equation}\label{eq: Ottaviani Seq on X} 0 \to \mathcal S_{\tau c}^* \to \mathcal O_X^{\oplus4} \to \mathcal S_{c}^*(1) \to 0 \end{equation} where $\tau \colon C \to C$ is the hyperelliptic involution arising from the double cover $C \to \P^1$. Note that even for the ramification points $c \in C$, one can construct the sequence (\ref{eq: Ottaviani Seq on X}) in a natural way.
Tensoring (\ref{eq: Ottaviani Seq on X}) with $\mathcal E^*(j)$ for $j=-1,0,1$, we have $H^{p+1}(\mathcal E^*(1) \otimes \mathcal S_c^*) \simeq H^{p+2}( \mathcal E^* \otimes \mathcal S_{\tau c}^* ) \simeq H^{p+3}(\mathcal E^*(-1) \otimes \mathcal S_c^*)$, and the latter vanishes for $p \geq 1$. This proves that (\ref{eq: FM projection of E^*(1) at a point}) is zero unless $p = 0$; in other words, $ F$ is a coherent sheaf concentrated in degree $0$. Furthermore, since $p_X^*(\mathcal E^*(1)) \otimes \mathcal S$ is flat over $C$, $c \mapsto \chi( \mathcal E^*(1) \otimes \mathcal S^*_c)$ is a constant function, and thus $F$ is a vector bundle on $C$. \par To compute $\op{rank} F$ and $\deg F$, we use Grothendieck-Riemann-Roch, which reads \begin{align} \op{ch} (\Phi_\mathcal S F) &= \op{ch} ( Rp_{X*}( p_C^* F \otimes \mathcal S) ) = p_{X*}\bigl( \op{ch}( p_C^* F) \op{ch}(\mathcal S) \op{td}(\mathcal T_{p_X})\bigr) \nonumber\\ &= (2d - 3s) + \frac13 (2s-d) P_X - sL_X + (d-2s)H_X, \label{eq: G-R-R for FM Transform} \end{align} where $d = \deg F$ and $s = \op{rank} F$. The computation method is identical to the one introduced in \cite[Lemma~5.2]{Kuznetsov:Instanton}, except that the Fourier-Mukai kernels are dual to each other. Since $\Phi_\mathcal S F = \mathcal E^*(1)$ is of rank $r$ and of degree zero, we find $2d-3s = r$ and $d-2s = 0$. It follows that $s = r$ and $d = 2r$. By (\ref{eq: Ulrich orthogonal condition}) and Lemma~\ref{lem: Projection image and Raynaud bundle}, \begin{align*} \Hom_{\D(C)}(\Phi_\mathcal S^!(\mathcal E^*(1)),\, \Phi_\mathcal S^! (\mathcal O_X(-2))[p]) &\simeq \Hom_{\D(C)}( F ,\, \mathcal R^* \otimes \omega_C^{\otimes 2}[p-2]) \\ &\simeq \Ext^{p-2}_C( \mathcal R ,\, F^* \otimes \omega_C^{\otimes 2}). \end{align*} Since both $\mathcal R$ and $ F$ are vector bundles, it suffices to require $\Ext^p_C(\mathcal R,\, F^* \otimes \omega_C^{\otimes 2})=0$ for $p=0,1$ to fulfill (\ref{eq: Ulrich orthogonal condition}).
The semistability of $ F$ follows from Lemma~\ref{lem: semistability in Popa note}. Finally, $H^1( F \otimes \mathcal S_x)=0$ follows from the fact that $\Phi_\mathcal S F = \mathcal E^*(1)$ is a vector bundle on $X$; indeed, $\Phi_\mathcal S F = Rp_{X*}( p_C^* F \otimes \mathcal S)$ is a complex concentrated in degree zero, hence $R^1p_{X*}( p_C^* F \otimes \mathcal S)=0$. By cohomology and base change, $H^1( F \otimes \mathcal S_x)=0$ for each $x \in X$. Conversely, assume that $ F$ is a semistable vector bundle on $C$ satisfying all the prescribed conditions. The condition (2) implies that $\Phi_\mathcal S F \in \D(X)$ is a vector bundle on $X$. Then $\Phi_\mathcal S F \in \Phi_\mathcal S \D(C)$ together with (1) can be interpreted as $\Ext_X^p( \Phi_\mathcal S F,\, \mathcal O_X(-j))=0$ for $j=0,1,2$, showing that $\mathcal E:= (\Phi_\mathcal S F)^* \otimes \mathcal O_X(1)$ is an Ulrich bundle over $X$. \end{proof} Using (\ref{eq: G-R-R for FM Transform}) and $\Phi_\mathcal S F = \mathcal E^*(1)$, we can immediately check that \[ (\,c_i(\mathcal E)\,)_i = (\ 1,\, r,\, 2r^2-r,\, \tfrac{1}{3}r(r-2)(2r+1)\ ). \] Proposition~\ref{prop:UlrichViaDerivedCategory} gives a bijection between the set of Ulrich bundles on $X$ and the set of certain semistable vector bundles on $C$. From now on, we focus on the semistable vector bundles on $C$ satisfying the conditions described in Proposition~\ref{prop:UlrichViaDerivedCategory}. First of all, we prove that a general stable bundle in $\mathcal U_C(r,2r)\,(r \geq 2)$ satisfies the condition (2) of Proposition~\ref{prop:UlrichViaDerivedCategory}. \begin{prop}\label{prop: generic FM image is locally free} For $r \geq 2$, let $\mathcal U_C^{\sf s}(r,2r)$ be the moduli space of stable vector bundles on $C$ of rank $r$ and degree $2r$. The subset \[ \bigl\{ [ F] \in \mathcal U_C^{\sf s}(r,2r) : h^1( F \otimes \mathcal S_x)=0\ \text{for every }x \in X\bigr\} \] is open and nonempty.
\end{prop} \begin{proof} First of all, we claim that the set $\{[ F] \in \mathcal U_C^{\sf s}(r,2r) : h^1(F \otimes \mathcal S_x)=0\ \text{for every }x \in X\}$ is open in $\mathcal U_C^{\sf s}(r,2r)$. Consider the closed subset $Z \subset X \times \mathcal U_C^{\sf s}(r,2r)$ defined by \[ \{ (x,[ F]) : h^1( F \otimes \mathcal S_x) \geq 1 \}. \] Since the projection morphism $\op{pr}_2 \colon X \times \mathcal U_C^{\sf s}(r,2r) \to \mathcal U_C^{\sf s}(r,2r)$ is proper, $V := \mathcal U_C^{\sf s}(r,2r) \setminus \op{pr}_2(Z)$ is open in $\mathcal U_C^{\sf s}(r,2r)$. Writing down the locus $V$ set-theoretically, we easily find that \[ V = \{ [ F] \in \mathcal U_C^{\sf s}(r,2r) : h^1( F \otimes \mathcal S_x)=0\ \text{for every }x \in X\}. \] For $r=2$, we know that any smooth $X$ carries an Ulrich bundle $\mathcal E$ of rank 2 as in Proposition~\ref{Prop:UlrichRank2}. Note that its projection image $ F := \Phi_{\mathcal S}^{!} (\mathcal E^* (1))$ is a rank 2 vector bundle of degree $4$ on $C$ satisfying the desired property. Assume that $r \ge 3$. Let $ F$ be a stable vector bundle of rank $r$ and degree $2r$, and let $x \in X$. Suppose that $H^1( F \otimes \mathcal S_x) \simeq \Hom_C ( F, \mathcal S_x ^* \otimes \omega_C)^*$ is nonzero. By the stability condition, any nonzero morphism $ F \to \mathcal S_x ^* \otimes \omega_C$ must be surjective, so we have a short exact sequence \[ 0 \to F' \to F \to \mathcal S_x^* \otimes \omega_C \to 0 \] where $ F'$ is a semistable vector bundle of rank $(r-2)$ and degree $(2r-5)$. By Riemann-Roch, we have $\dim \Ext^1_C (\mathcal S_x^* \otimes \omega_C, F') = 3r-4$. Hence, for each $x \in X$, the locus of vector bundles $ F $ fitting into the above exact sequence has dimension at most $(r-2)^2 + 1 + (3r-5) = r^2 - r$. Varying $x \in X$, the bad locus sweeps out a set of dimension at most $r^2 - r + 3 < r^2 + 1$.
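Let us indicate the Riemann--Roch computation behind the value $3r-4$ used above. Since $\det \mathcal S_x = \xi^*$ has degree $-1$, the stable bundle $\mathcal S_x^* \otimes \omega_C$ has rank $2$ and degree $5$, and $\Hom_C(\mathcal S_x^* \otimes \omega_C,\, F')=0$ by semistability since $\mu(\mathcal S_x^* \otimes \omega_C) = 5/2 > \mu( F')$. Hence, with $g=2$, \[ \dim \Ext^1_C(\mathcal S_x^* \otimes \omega_C,\, F') = -\chi(\mathcal S_x^* \otimes \omega_C,\, F') = -\bigl( 2(2r-5) - 5(r-2) - 2(r-2) \bigr) = 3r-4. \] This justifies the dimension estimate above.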
Hence we conclude that a general $ F \in \mathcal U_C^{\sf s}(r, 2r)$ does not admit a surjection onto $\mathcal S_x^* \otimes \omega_C$ for any $x \in X$. \end{proof} \begin{rmk} The formula (\ref{eq: G-R-R for FM Transform}) tells us that there is no line bundle $ F$ of degree 2 such that $\Phi_{\mathcal S} F$ is locally free. Indeed, there is no line bundle $\mathcal E$ on $X$ such that $\op{ch} (\mathcal E) = 1 - L_X$. In particular, there is no Ulrich line bundle on $X$. \end{rmk} Our aim is to find a semistable vector bundle $ F$ of rank $r$ and degree $2r$ such that $\Ext^p_C( \mathcal R,\, F^* \otimes \omega_C^{\otimes 2})=0$ for $p=0,1$. Since $ G := F^* \otimes \omega_C^{\otimes 2}$ is also a semistable vector bundle of rank $r$ and degree $2r$, the following proposition guarantees the existence of Ulrich bundles at least when $r=3$: \begin{prop}\label{prop: Generic orthogonality r=3} $\Hom_C(\mathcal R, G) = 0$ for a generic stable vector bundle $G$ of rank $3$ and degree $6$. \end{prop} \begin{proof} Suppose that there is a nontrivial morphism $\mathcal R \to G$. Note that $\mathcal R$ is a stable vector bundle\,\cite[Corollary~6.2]{Hein:Raynaud}. By the stability condition, we observe that the image of $\mathcal R \to G$ is either a rank 2 vector bundle of degree 3, or a rank 3 vector bundle of degree 4, 5, or 6. We show case by case that these conditions are not generic. \begin{enumerate} \item Suppose that the image of $\mathcal R \to G$ is a rank 2 vector bundle of degree 3. There are two short exact sequences \[0 \to G'' \to \mathcal R \to G' \to 0 \] and \[ 0 \to G' \to G \to L \to 0 \] where $ G'$ is the image of $\mathcal R$, and $ G''$ is a rank 2 vector bundle of degree 1. Note that both $ G'$ and $ G''$ are stable. Also, $ L$ is locally free: indeed, if $\bar{ G}'$ is the kernel of the morphism $ G \to L / \op{Tors} L$, then the stability argument forces $ G' = \bar{ G}'$, hence $ L = L / \op{Tors} L$, showing that $ L$ is locally free.
Since $h^0(C, G')>0$, a nonzero section $s \in H^0(C, G')$ defines the following exact sequence \[ 0 \to \mathcal O_C(D) \stackrel{s} \to G' \to M \to 0 \] where $D$ is the zero locus $V(s)$ of $s$ and $ M = \det G' \otimes \mathcal O_C(-D)$ is a line bundle. By the stability, we have either $\deg D = 0$ or $1$. Tensoring by $ G''^*$, we have \[ 0 \to G''^* (D) \to G' \otimes G''^* \to G''^* \otimes M \to 0. \] When $\deg D=0$, that is, $D=0$, the stability of $ G''$ ensures that \begin{align*} \dim \Hom_C ( G'', G') & \le h^0(C, G''^* ) + h^0 (C,\, G''^* \otimes M) \\ & = 0 + 3 = 3. \end{align*} When $\deg D=1$, \begin{align*} \dim \Hom_C ( G'', G') & \le h^0(C, G''^* (D) ) + h^0 (C,\, G''^* \otimes M) \\ & = 1 + 2 = 3 \end{align*} since both the Brill-Noether loci $W^{1}_{2,1} (C)$ and $W^{2}_{2,3}(C)$ are empty by Theorem \ref{Theorem:BGN}. In either case, we observe that the Quot scheme $\op{\it Quot}_{2,3} (\mathcal R)$ has local dimension at most 3 at $[\mathcal R \to G']$ for any stable quotient $ G' \in \mathcal U_C^{\sf s}(2,3)$. The locus of vector bundles $ G \in \mathcal U_C^{\sf s}(3, 6)$ which are extensions of $ L$ by $ G'$ has dimension at most \begin{align*} \dim \{ G \} & \le \dim \op{\it Quot}_{2,3} (\mathcal R) + \dim \Pic^3 (C) + \dim \P\Ext^1_C ( L, G') \\ & \le 3 + 2 + 4 = 9\\ & < 10 = \dim \mathcal U_C^{\sf s} (3,6). \end{align*} \item Suppose that the image of $\mathcal R$ is a rank 3 vector bundle of degree 4. We have two short exact sequences $$ 0 \to L \to \mathcal R \to G' \to 0 $$ and $$ 0 \to G' \to G \to T \to 0 $$ where $ L$ is a line bundle of degree 0, $ G'$ is the image of $\mathcal R$, and $ T$ is a torsion sheaf of length 2. Since $\dim \Hom_C( L,\mathcal R) = 1$\,({\textit{cf.}} the proof of \cite[Lemma~5.9]{Kuznetsov:Instanton}), the dimension of the family of stable vector bundles $ G' \in \mathcal U_C(3,4)$ which fit into the first short exact sequence is at most $\dim \Pic^0 (C) = 2$.
Hence the dimension of the family of stable vector bundles $ G$ which fit into the second short exact sequence is at most $\dim \{ T \} + \dim \{ G' \} + \dim \P \Ext^1_C ( T, G') = 9$. \item Suppose that the image of $\mathcal R$ is a rank 3 vector bundle of degree 5. We have two short exact sequences $$ 0 \to L \to \mathcal R \to G' \to 0 $$ and $$ 0 \to G' \to G \to T \to 0 $$ where $ L$ is a line bundle of degree $-1$, $ G'$ is the image of $\mathcal R$, and $ T$ is a torsion sheaf of length 1. Since $\mathcal R$ is stable, we have $\dim \Ext^1_C ( L,\mathcal R)= \dim \Hom_C(\mathcal R,\, L \otimes \omega_C)=0$. By Riemann-Roch, we have $\dim \Hom_C( L, \mathcal R)=4$, and thus the dimension of the family of stable vector bundles $ G' \in \mathcal U_C(3,5)$ which fit into the first exact sequence is at most $\dim \Pic^{-1}(C) + \dim \P \Hom_C ( L, \mathcal R) = 5$. Therefore, the dimension of the family of stable vector bundles $ G$ which fit into the second exact sequence is at most $\dim \{ T \} + \dim \{ G' \} + \dim \P \Ext^1_C( T, G') = 8$. \item Suppose that the image of $\mathcal R$ is a rank 3 vector bundle of degree 6; in other words, it coincides with $ G$. We have the following short exact sequence $$ 0 \to L \to \mathcal R \to G \to 0 $$ where $ L$ is a line bundle of degree $-2$. By the stability and the Riemann-Roch formula, we have $\dim \Hom_C( L, \mathcal R)=\chi( L,\mathcal R)=8$. Hence the dimension of the family of stable vector bundles $G$ which fit into the above exact sequence is at most $\dim \Pic^{-2}(C) + \dim \P \Hom_C( L, \mathcal R) = 9$. \end{enumerate} To sum up, we conclude that a generic stable vector bundle $ G \in \mathcal U_C(3,6)$ satisfies $\Hom_C(\mathcal R, G) = 0$.\qedhere \end{proof} \begin{cor}\label{cor:Orthogonality} For each $r \geq 2$, a generic stable vector bundle $ G \in \mathcal U_C(r, 2r)$ satisfies \[ \Ext^p_C( \mathcal R, G) = 0,\ p=0,1.
\] \end{cor} \begin{proof} Assume that $ G_i \in \mathcal U_C(r_i,2r_i)$\,($i=1,2$) are stable vector bundles satisfying $\Ext^p_C(\mathcal R, G_i) = 0$. Then $ G_3:= G_1 \oplus G_2$ is a semistable vector bundle satisfying $\Ext^p_C(\mathcal R, G_3)=0$. By semicontinuity, we see that $\Ext^p_C(\mathcal R, G)=0$ for a general $ G \in \mathcal U_C(r_1+r_2,\,2(r_1+r_2))$. By \cite[Proposition~9]{Beauville2016:IntroductionUlrich}, Proposition~\ref{prop:UlrichViaDerivedCategory}, and Proposition~\ref{prop: Generic orthogonality r=3}, there are vector bundles $ G_1 \in \mathcal U_C(2,4)$ and $ G_2 \in \mathcal U_C(3,6)$ such that $\Ext^p_C(\mathcal R, G_i)=0$. Since direct sums of copies of $ G_1$ and $ G_2$ realize every rank${}\geq 4$, we get the desired result. \end{proof} Recall that the projection image $ F = \Phi_{ \mathcal S}^{!} (\mathcal E^* (1))$ is always a semistable vector bundle. It is easy to see that both stability and strict semistability are preserved by this Fourier-Mukai projection. \begin{prop}\label{prop: Stability comparison} Let $\mathcal E$ be an Ulrich vector bundle of rank $r \geq 2$, and let $ F := \Phi_\mathcal S^! \mathcal E^*(1)$ be a semistable vector bundle on $C$. If $\mathcal E$ is stable (resp. strictly semistable), then so is $ F$. \end{prop} \begin{proof} First assume that $\mathcal E$ is strictly semistable. There is a destabilizing sequence \[ 0 \to \mathcal E' \to \mathcal E \to \mathcal E'' \to 0 \] where $\mathcal E'$ and $\mathcal E''$ are Ulrich bundles of smaller ranks by Proposition~\ref{prop: semistability of Ulrich}. This gives the following short exact sequence \[ 0 \to F'' := \Phi_\mathcal S^! \mathcal E''^*(1) \to F = \Phi_\mathcal S^! \mathcal E^*(1) \to F' := \Phi_\mathcal S^! \mathcal E'^*(1) \to 0. \] Since $\mathcal E''$ is Ulrich, we see that $ F'' \subset F$ is a vector bundle of slope 2 on $C$, so $ F$ cannot be stable. Now assume that $\mathcal E$ is stable, but $F$ is strictly semistable.
Consider the destabilizing sequence \[ 0 \to F'' \to F \to F' \to 0. \] Since $ F$ comes from an Ulrich bundle, the conditions in Proposition~\ref{prop:UlrichViaDerivedCategory} ensure that $h^1( F' \otimes \mathcal S_x)=0$ and $\Ext^p_C(\mathcal R,\, F'^* \otimes \omega_C^{\otimes 2}) =0$. It follows that $\mathcal E'$ is Ulrich, where $\mathcal E'^*(1) := \Phi_\mathcal S( F')$. The existence of the nonzero map $\mathcal E^*(1) \to \mathcal E'^*(1)$ leads to a contradiction; indeed, $\mathcal E^*(1)$ is stable of $\mu=0$ and $\mathcal E'^*(1)$ is semistable of $\mu=0$ and smaller rank, so there is no nonzero map from $\mathcal E^*(1)$ to $\mathcal E'^*(1)$. \end{proof} To sum up the above discussions, we have the following theorem. \begin{thm}\label{thm: Main thm} Let $\mathcal M(r)$\,($r \geq 2$) be the moduli space of S-equivalence classes of Ulrich bundles of rank $r$ over $X$. The projection functor $\Phi_\mathcal S^! \colon \D(X) \to \D(C)$ induces the morphism \[ \varphi \colon \mathcal M(r) \to \mathcal U_C(r,2r),\ [\mathcal E] \mapsto \varphi(\mathcal E):=[\Phi_\mathcal S^!( \mathcal E^*(1))] \] of moduli spaces. Moreover, $\varphi$ satisfies the following properties: \begin{enumerate} \item set-theoretically, $\varphi$ is injective; \item $\varphi$ maps stable (resp. semistable) objects to stable (resp. semistable) objects; \item let $\mathcal M^{\sf s}(r)$ be the stable locus. Then $\varphi$ induces an isomorphism of $\mathcal M^{\sf s}(r)$ onto \[ \varphi( \mathcal M^{\sf s}(r)) = \left\{ [ F] \in \mathcal U_C^{\sf s}(r,2r) : \begin{array}{ll} \Ext^p_C(\mathcal R,\, F^* \otimes \omega_C^{\otimes 2})=0,\ p=0,1,\\ h^1( F \otimes \mathcal S_x)=0\ \text{for each }x \in X. \end{array} \right\}, \] which is a nonempty open subscheme of $\mathcal U_C(r,2r)$. \end{enumerate} \end{thm} \begin{proof} First of all, for $\varphi$ to be well defined, it has to preserve S-equivalence classes.
Assume that $\mathcal E_1$ and $\mathcal E_2$ are Ulrich bundles which are S-equivalent, \textit{i.e.} there are Jordan-H\"older filtrations \[ 0 = \mathcal E_i^{(0)} \subset \mathcal E_i^{(1)} \subset \ldots \subset \mathcal E_i^{(m)} = \mathcal E_i^*(1) \] such that $ \mathcal E_1^{(j)} / \mathcal E_1^{(j-1)} =: \op{gr}_j(\mathcal E_1^*(1)) \simeq \op{gr}_j (\mathcal E_2^*(1)) := \mathcal E_2^{(j)} / \mathcal E_2^{(j-1)}$. For each $j$, \[ 0 \to \mathcal E_i^{(j-1)} \to \mathcal E_i^{(j)} \to \op{gr}_j (\mathcal E_i^*(1)) \to 0 \] is a short exact sequence of Ulrich bundles by Proposition~\ref{prop: semistability of Ulrich}. The map $\varphi$ preserves both stability and strict semistability by Proposition~\ref{prop: Stability comparison}, so it immediately follows that \[ 0 = \Phi_\mathcal S^!( \mathcal E_i^{(0)}) \subset \Phi_\mathcal S^!( \mathcal E_i^{(1)}) \subset \ldots \subset \Phi_\mathcal S^!( \mathcal E_i^{(m)}) = \varphi(\mathcal E_i) \] is a Jordan-H\"older filtration with $\op{gr}_j( \varphi(\mathcal E_i)) \simeq \Phi_\mathcal S^!( \op{gr}_j( \mathcal E_i^*(1)))$. This shows that $\varphi(\mathcal E_1)$ and $\varphi(\mathcal E_2)$ are S-equivalent. The statement (1) follows from the fact that $\Phi_\mathcal S \colon \D(C) \to \Phi_\mathcal S(\D(C))$ is an equivalence of categories and $\mathcal E^*(1) \in \Phi_\mathcal S(\D(C))$ for each Ulrich bundle $\mathcal E$ over $X$. The statement (2) is already proved in Proposition~\ref{prop: Stability comparison}, so it only remains to prove (3). For any stable Ulrich bundle $[\mathcal E] \in \mathcal M^{\sf s}(r)$, the functor $\Phi_\mathcal S^!$ induces \begin{align*} T_{[\mathcal E]}\mathcal M^{\sf s}(r) &\simeq \Ext^1_X(\mathcal E,\mathcal E) \\ &\simeq \Ext^1_C(\varphi(\mathcal E),\varphi(\mathcal E)) \\ &\simeq T_{[\varphi(\mathcal E)]} \mathcal U_C^{\sf s}(r,2r). \end{align*} Hence, together with (1), $\varphi$ is an isomorphism near $[\mathcal E]$.
Finally, by Proposition~\ref{prop: generic FM image is locally free} and Corollary~\ref{cor:Orthogonality}, $\varphi(\mathcal M^{\sf s}(r))$ is open and nonempty. \end{proof} \begin{rmk} It is not true in general that $\varphi(\mathcal M^{\sf s}(r)) = \mathcal U_C^{\sf s}(r,2r)$. For example, choose a point $P \in C$ and consider the stable bundle $ F := \mathcal R^* \otimes \mathcal O_C(-P) \otimes \omega_C^{\otimes 2}$ of rank $4$ and degree $8$. Then $ F^* \otimes \omega_C^{\otimes 2} = \mathcal R \otimes \mathcal O_C(P)$, hence we see that \[ \Hom_C(\mathcal R,\, F^* \otimes \omega_C^{\otimes 2}) \neq 0. \] This shows that $\varphi( \mathcal M^{\sf s}(4))$ is a proper subset of $\mathcal U_C^{\sf s}(4,8)$. \end{rmk} The relation between jumping lines and instanton bundles has been studied in \cite{Kuznetsov:Instanton}. For stable Ulrich bundles, we show that a generic line is not jumping. Recall that $\ell \subset X$ is a \emph{jumping line} for $\mathcal E$ if the direct sum decomposition of $\mathcal E\big\vert_\ell$ contains at least two non-isomorphic direct summands. \begin{prop} Let $\mathcal E$ be a stable Ulrich bundle of rank $r$ over $X$. For a generic line $\ell \subset X$, $\mathcal E\big\vert_\ell \simeq \mathcal O_\ell(1)^{\oplus r}$. \end{prop} \begin{proof} We may assume that $\xi = \mathcal O_C(P)$ for a point $P \in C$. Indeed, if we choose a suitable $ L \in \Pic^0 (C)$ and make the replacement $\mathcal S' := \mathcal S \otimes p_C^* L$, then all the arguments in this section remain valid for the new Fourier-Mukai transform $\Phi_{\mathcal S'} \colon \D(C) \to \D(X)$ and its right adjoint $\Phi_{\mathcal S'}^!$. In particular, the Raynaud bundle $\mathcal R'$ obtained from $\Phi_{\mathcal S'}^! \mathcal O_X(-2)$ as in Lemma~\ref{lem: Projection image and Raynaud bundle} satisfies $\mathcal R' = \mathcal R \otimes L$. Let $ F:= \Phi_\mathcal S^!( \mathcal E^*(1))$ and $ G := F^* \otimes \omega_C^{\otimes 2}$.
We have $ G \otimes \xi^* \neq \mathcal R$; otherwise \[ 0 = \Hom_C(\mathcal R,\, G) = \Hom_C(\mathcal R,\, \mathcal R \otimes \mathcal O_C(P)) \neq 0 \] gives a contradiction. Since $ G$ is stable and $ G \otimes \xi^* \neq \mathcal R$, we have $\Hom_C(\mathcal R,\, G \otimes \xi^*)=0$. By \cite[Lemma~2.4 and Theorem~2.5]{Hein:Raynaud}, $H^0(C,\, L\otimes G \otimes \xi^* ) =0$ for a general $ L \in \Pic^0 (C)$. On the other hand, \begin{align*} H^p(C,\, L \otimes G \otimes \xi^*) &= \Ext^{1-p}_C( G,\, L^* \otimes \xi \otimes \omega_C)^* \\ &= \Ext^{1-p}_C( L \otimes \xi^* \otimes \omega_C , \, F )^* \\ &= \Ext^{1-p}_X( \Phi_\mathcal S( L \otimes \xi^* \otimes \omega_C) ,\, \mathcal E^*(1))^*. \end{align*} We have $\Phi_\mathcal S( L \otimes \xi^* \otimes \omega_C) = \mathcal I_\ell(1)[-1]$ for a line $\ell \subset X$ and its ideal sheaf $\mathcal I_\ell$\,(\textit{cf.} \cite[Lemma~5.5]{Kuznetsov:Instanton}). This establishes a bijection between $\Pic^1 (C)$ and the Fano variety $F(X)$ of lines in $X$. Thus, \[ H^p(C,\, L \otimes G \otimes \xi^*) \simeq \Ext_X^{2-p}( \mathcal I_\ell,\, \mathcal E^*)^* \simeq H^{p+1}( \mathcal E \otimes \mathcal I_\ell (-2) ). \] In the short exact sequence $0 \to \mathcal E \otimes \mathcal I_\ell(-2) \to \mathcal E(-2) \to \mathcal E(-2) \otimes \mathcal O_\ell \to 0$, we easily find that $H^{p+1}( \mathcal E \otimes \mathcal I_\ell(-2)) \simeq H^p( \mathcal E(-2) \otimes \mathcal O_\ell)$. In particular, $h^p(\mathcal E(-2) \otimes \mathcal O_\ell)=0$, which implies $\mathcal E \big\vert_\ell \simeq \mathcal O_X(1)^{\oplus r}$ for a general $\ell \in F(X)$. \end{proof} We finish this paper with some important remarks. \begin{rmk}[Arrondo--Costa revisited]\ \begin{enumerate} \item Arrondo--Costa's classification\,(Theorem~\ref{thm: Arrondo-Costa ACM rk 2}) can also be interpreted via derived categories of coherent sheaves on $X$. 
The moduli space of rank $2$ ACM bundles of line type is isomorphic to the abelian surface $J(C)$, and the interpretation in terms of categorical language is explained in \cite[Lemma~5.5]{Kuznetsov:Instanton}. The moduli space of rank $2$ ACM bundles of conic type is isomorphic to $C$, and this can be explained by the result of \cite{BondalOrlov:SODforAlgVar} because the image of a conic type ACM bundle along the projection functor is a skyscraper sheaf. Finally, rank $2$ ACM bundles of elliptic curve type are Ulrich, hence $\mathcal E \mapsto \Phi_\mathcal S^!( \mathcal E^*(1))$ shows that the moduli space of ACM bundles of elliptic curve type is isomorphic to an open subset of $\mathcal U_C(2,4)$. \item We observed above that the rank 3 vector bundle $\mathcal E$ constructed in \cite[Example 4.4]{ArrondoCosta2000} is not Ulrich. Indeed, two global sections of $\omega_D(-1)$ have a nontrivial linear relation, that is, \[ H^1(\mathcal E^* (1)) \simeq \ker [ H^0(\omega_D(-1)) \otimes H^0(\mathbb P^5, \mathcal O_{\mathbb P^5}(1)) \to H^0(\omega_D)] \simeq \C^1. \] Hence $h^2(\mathcal E(-3)) = h^3 (\mathcal E(-3)) = 1$. Nevertheless, it is still a very interesting vector bundle in the following sense. Since $\mathcal E(-1)$ and $\mathcal E(-2)$ have no cohomology, we see that $\mathcal E^* (1)$ is a semistable vector bundle of rank 3 contained in $\Phi_\mathcal S \D(C)$. Indeed, the nonzero section of $H^0 (\mathcal E^*(1)) \simeq H^3 (\mathcal E(-3))^*$ induces a short exact sequence \[ 0 \to \bar{ \mathcal E} \to \mathcal E(-1) \to \mathcal O_X \to 0, \] where $\bar{\mathcal E}$ is a rank 2 vector bundle, a so-called ``instanton bundle'' of charge 3 (see \cite{Faenzi:Instanton} and \cite[Definition 1.1 and Theorem 3.10]{Kuznetsov:Instanton}). Note that rank 2 Ulrich bundles are instanton bundles of charge 2, which are minimal. The Arrondo--Costa construction shows the existence of a non-minimal instanton bundle. 
\end{enumerate} \end{rmk} \begin{rmk} The second Raynaud bundle $\mathcal R$ has an interesting property. Note that a (semi-)stable vector bundle $ F$ of rank $r$ and slope $g-1(= 1)$ on $C$ defines the theta locus \[ \Theta_{ F} := \{ L \in \Pic^0 (C) \ | \ h^0 (C, F \otimes L) \neq 0 \}, \] which is a natural generalization of the theta divisor. The locus is either a divisor linearly equivalent to $r \Theta$, where $\Theta \subset \Pic^0(C)$ is the usual theta divisor, or the whole Picard group $ \Pic^0(C)$. Indeed, the theta map \[ \theta : \mathcal {SU}_C (r, \det F) \dashrightarrow \lvert r \Theta \rvert \] gives a rational map, which is a morphism when $r \le 3$. However, when $r=4$, $\mathcal R$ does not have a theta divisor since $h^0(\mathcal R \otimes L ) = 1$ for every $L \in \Pic^0(C)$ as treated above (see also \cite[Lemma 5.9]{Kuznetsov:Instanton}). We refer interested readers to \cite{Hein:Raynaud,Pauly:Raynaud, Popa:GeneralizedTheta} for more details on generalized theta divisors and $\mathcal R$. The strange duality provides the following geometric interpretation in terms of generalized theta divisors. Denoting by $\mathcal L$ the ample generator of $\Pic \mathcal {SU}_C (4, \det \mathcal R)$, we see that $\mathcal R$ is a base point of $|\mathcal L^k|$ if and only if \[ H^0(C, \mathcal R \otimes G) \neq 0, \text{ for all } G \in \mathcal U_C(k,0). \] By Serre duality, the above condition is equivalent to \[ \Hom_C(\mathcal R, G^* \otimes \omega_C) = \Ext^1_C (\mathcal R, G^* \otimes \omega_C) \neq 0. \] Note that $ G^* \otimes \omega_C$ is a vector bundle of rank $k$ and degree $2k$. Corollary \ref{cor:Orthogonality} actually implies that $\mathcal R$ is not a base point of $|\mathcal L^k|$ for $k \ge 2$. 
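For the reader's convenience we record a quick Riemann--Roch check (our addition; it uses only $g=2$, $\op{rk} \mathcal R = 4$, $\deg \mathcal R = 4$) explaining why the two spaces in the Serre-duality reformulation above automatically have the same dimension:

```latex
\chi(C,\, \mathcal R \otimes G)
  = \deg(\mathcal R \otimes G) + \op{rk}(\mathcal R \otimes G)\,(1-g)
  = (k \cdot 4 + 4 \cdot 0) + 4k\,(1-2)
  = 4k - 4k = 0,
```

so $h^0(C, \mathcal R \otimes G) = h^1(C, \mathcal R \otimes G)$ for every $G \in \mathcal U_C(k,0)$; by Serre duality these equal $\dim \Ext^1_C(\mathcal R, G^* \otimes \omega_C)$ and $\dim \Hom_C(\mathcal R, G^* \otimes \omega_C)$, respectively.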
Since Proposition \ref{prop: Generic orthogonality r=3} holds not only for $\mathcal R$ but for any stable rank 4 vector bundle of degree 4, we conclude that \begin{enumerate} \item $\mathcal R \not \in Bs |\mathcal L^2|$, {\textit{i.e.}}, $Bs | \mathcal L^2|$ is a proper subset of $Bs |\mathcal L| = \{ \text{the set of 16 Raynaud type bundles on } C \}$, which correspond to the 16 theta characteristics of $C$; \item The linear system $|\mathcal L^k|$ is base-point-free for $k = 3$. \end{enumerate} Since $|\mathcal L^k|$ is base-point-free for $k \ge 4$ \cite[Theorem 8.1]{PopaRoth}, the above statement answers the question by Popa and Roth for $g=2$ and $r=4$ (cf. \cite[Section 8]{PopaRoth}). Although our argument does not ensure that a generic vector bundle $ F \in \mathcal U_C(2,4)$ is orthogonal to all of the 16 Raynaud type bundles, it is very promising that $|\mathcal L^2|$ is also base-point-free. \end{rmk} \begin{rmk} The strategy in Proposition \ref{prop:UlrichViaDerivedCategory} is also useful to classify Ulrich bundles for smooth complete intersection varieties of two even dimensional quadrics of higher dimensions. In higher dimensional cases, we also observe that every Ulrich bundle is an image under the Fourier-Mukai transform of a semistable vector bundle on the associated hyperelliptic curve from Bondal-Orlov's semiorthogonal decomposition. Moreover, the moduli space of stable Ulrich bundles is a smooth Zariski open subset of the moduli space of stable vector bundles on the associated hyperelliptic curve. However, showing the existence becomes more complicated for higher dimensional cases. For instance, there is no Ulrich bundle of rank 2 on such an $n$-dimensional del Pezzo variety of degree 4 when $n \ge 5$ \cite[Theorem 6.3]{Casnati:rank2ACM}. Nevertheless, the existence of Ulrich bundles of certain ranks in these cases is known from \cite{BuchweitzEisenbudHerzog}. 
Therefore, it is interesting to compute all the possible ranks of Ulrich bundles on the higher dimensional smooth complete intersection varieties of two even dimensional quadrics. For example, \cite[Theorem 8.1]{PopaRoth} leads us to the speculative expectation that an Ulrich bundle of rank $2^{2g-2}$ might exist. \end{rmk} \begin{acknowledgement} The authors thank Fabrizio Catanese and Universit{\"a}t Bayreuth for kind hospitality during their visit. Yonghwa Cho would like to express his gratitude to JongHae Keum and Korea Institute for Advanced Study for hospitality when he was visiting there. Yeongrak Kim thanks George Harry Hitching, Mihnea Popa, and Frank-Olaf Schreyer for helpful discussion and suggestions. Kyoung-Seog Lee is grateful to Mudumbai Seshachalu Narasimhan for many invaluable teachings, encouragements and kind hospitality. He thanks Alexander Kuznetsov, Carlo Madonna, and Paolo Stellari for kind explanations and motivating discussions. Part of this work was done while he was a research fellow of Korea Institute for Advanced Study and was visiting Indian Institute of Science. He thanks Korea Institute for Advanced Study and Indian Institute of Science for wonderful working conditions and kind hospitality. He thanks Gadadhar Misra for kind hospitality during his stay in Indian Institute of Science. Yonghwa Cho was partially supported by Basic Science Research Program through the NRF of Korea funded by the Ministry of Education (2016930170). Yeongrak Kim was supported by Basic Science Research Program through the National Research Foundation of Korea funded by the Ministry of Education (NRF-2016R1A6A3A03008745). Kyoung-Seog Lee was supported by IBS-R003-Y1. \end{acknowledgement}
\section{Introduction} \label{sec:intro} It is widely accepted that the broad range in radio powers (up to 4 decades), sizes (up to Mpc scales), and radio morphologies of jetted active galactic nuclei (AGNs) stems from the complex interplay between the jet production efficiency (linked to the properties of accretion flow around the supermassive black hole (SMBH) located at the centers of galaxies), interaction of jets with the host galaxy, and the large scale environment they reside in \citep[see, for a recent review,][]{Padovani17, Blandford19}. While accretion onto the SMBH has been unequivocally accepted as the main source of AGN activity, it is not clear what fraction of it is channeled as kinetic energy of a jet as the majority of the optically selected galaxies and their high luminosity counterparts, quasars, do not show significant radio activity \citep[see][and references therein]{Balokovic12,Koziel-Wierzbowska17}. Moreover, the energy exchange between the evolving radio source and the interstellar/intergalactic medium on small as well as large scales forms an important part of the feedback that governs the co-evolution of galaxies, central SMBH, and galaxy groups/clusters over cosmological timescales \citep[see, for a review,][]{McNamara07, Fabian12}. The wide contrast between the two main morphological classes of radio galaxies, i.e., Fanaroff-Riley type I and II (FR~I, FR~II; \citealt{Fanaroff74}), and the extremely large number of `young' radio galaxies as compared to the `evolved' ones is already driving the need to obtain large samples of radio galaxies in order to derive statistical inferences on the physics of radio AGN \citep[e.g.,][]{Best12, Tadhunter16}. 
In this regard, a reliable association of radio sources with their host galaxies is crucial for identifying the conditions responsible for the radio AGN trigger from the nuclear regions (i.e., different accretion flows; \citealt{Ineson15}), host galaxies (i.e., positive/negative feedback; \citealt{Kalfountzou17}), merger history \citep{Singh15b}, and the cluster environment (i.e., rich vs. poor environment; \citealt{Mingo17}). \begin{table*}[!ht] \caption{Summary of the previous/current results on samples of radio sources} \label{AGNsamples} \scriptsize \begin{center} \begin{tabular}{cp{4cm}p{3cm}ccccc}\hline Sample & \centering{Selection procedure} & \centering{$z$ and flux density cuts} & Sky coverage & Number & Radio & Optical\\ (1) & \centering{(2)} & \centering{(3)} & (4) & (5) & (6) & (7) \\ \hline \citet{Machalski99} & Cross-matching of LCRS galaxy sample with NVSS sample using a matching radius $\Delta\mathrm{r}=2\farcs5$ & $z<0.2$; optical magnitude $<18$ and NVSS flux density $>2.5$ mJy & $\sim$0.2 sr & 1,157 & No & No \\ \citet{McMahon02} & Cross-matching of POSS~I sources with FIRST sample using $\Delta\mathrm{r}=20\hbox{$^{\prime\prime}$}$ & optical magnitude $<20$ and FIRST flux density $>1.0$ mJy & $\sim$4,150 deg$^2$ & 70,000 & No & No\\ \citet{Sadler02} & Cross-matching of 2dF Galaxy redshift Survey with NVSS sample using $\Delta\mathrm{r}=15\hbox{$^{\prime\prime}$}$ & $z<0.438$; optical magnitude $<19.5$ and NVSS flux density $>2.5$ mJy & $\sim$325 deg$^2$ & 757 & No & No \\ \citet{Best05} & Cross-matching of SDSS DR\,2 spectroscopic sample with NVSS using $\Delta\mathrm{r}=3\hbox{$^\prime$}$ and then matching with FIRST & $0.01<z<0.56$; optical magnitude $<17.77$ and NVSS flux density $>5.0$ mJy & $\sim$3,324 deg$^2$ & 2,712 & No & No\\ \citet{Gendre08} & Cross-identification of FIRST and NVSS samples and then search of optical counterparts in SuperCosmos Sky Survey \citep{Hambly01}& $0.003<z<3.5$; NVSS flux density $>1.3$ Jy & $\sim$4,924 
deg$^2$ & 274 & Yes & No\\ \citet{Kimball08}$^\dag$ & Cross-identification of FIRST, NVSS, WENSS samples with SDSS sample within $\Delta\mathrm{r}=2\hbox{$^{\prime\prime}$}$ & optical magnitude $<17.77$ & $\sim$3,000 deg$^2$ & 2,885 & No & No\\ \citet{Gendre10} & Cross-identification of FIRST and NVSS samples and then search of optical counterparts in SDSS and 2\,MASS \citep{Skrutskie06} & $0.003<z<3.5$; NVSS flux density $>50$ mJy & $\sim$4,924 deg$^2$ & 859 & Yes & No\\ \citet{Lin10}$^*$ & Cross-matching of SDSS spectroscopic sample with NVSS using $\Delta\mathrm{r}=3\hbox{$^\prime$}$ and then matching with FIRST & $0.02<z<0.3$; absolute optical magnitude $<21.27$ and NVSS flux density $>3.0$ mJy & $\sim$6,008 deg$^2$ & 1,040 & No & No\\ \citet{Best12} & Cross-matching of SDSS DR\,7 spectroscopic sample with NVSS using $\Delta\mathrm{r}=3\hbox{$^\prime$}$ and identifying with FIRST & $0.01<z<0.7$; optical magnitude $<17.77$ and NVSS flux density $>5.0$ mJy & $\sim$11,664 deg$^2$ & 18,286 & No & No\\ \citet{Banfield15}$^\ddag$ & Cross-identification of FIRST and ATLAS samples with WISE and SWIRE samples & FIRST flux density $>1$ mJy and ATLAS $>15$ $\mu$Jy & & $\sim$30,000 & No & No\\ \citet{Williams19}$^\ddag$ & Cross-identification of LoTSS DR\,1 sample with Pan-STARRS and WISE samples & g-band optical magnitude $<23.3$ and {\it W1} IR-magnitude $<19.0$ and 150 MHz flux density $>0.639$ mJy & $\sim$424 deg$^2$ & 231,716 & No & No\\ This work & Cross-matching of SDSS DR\,7 spectroscopic sample with FIRST using $\Delta\mathrm{r}=3\hbox{$^{\prime\prime}$}$ and then cross-identifying with NVSS sample & $z<0.7$; optical magnitude $<17.77$ and FIRST flux density $>0.6$ mJy & $\sim$11,664 deg$^2$ & 32,616 & Yes & Yes\\ \hline \end{tabular} \end{center} \begin{minipage}{1.0\textwidth} NOTE--Columns: (1) reference for the sample; (2) key points on the selection procedure; (3) $z$ and flux density limits imposed on the data; (4) total sky coverage; (5) total number of 
derived radio sources; (6) visual classification of radio morphology; (7) visual classification of optical morphology. \\ $^\dag$ See Appendix B therein for other AGN samples.\\ $^*$ Visual identification of extended sources.\\ $^\ddag$ On-going surveys/projects.\\ \end{minipage} \end{table*} Several attempts have been made to obtain such samples by utilizing multiwavelength datasets from surveys, which normally cover large portions of the sky. In particular, the Sloan Digital Sky Survey \citep[SDSS;][]{York00}, Palomar Observatory Sky Survey \citep[POSS~\textrm{I};][]{McMahon92}, Las Campanas Redshift Survey \citep[LCRS;][]{Shectman96} at optical frequencies, the NRAO VLA Sky Survey\footnote{https://www.cv.nrao.edu/nvss/} \citep[NVSS;][]{Condon98}, Faint Images of the Radio Sky at Twenty-centimeters survey\footnote{http://sundog.stsci.edu/} \citep[FIRST;][]{Becker95, White97}, and Sydney University Molonglo Sky Survey \citep[SUMSS;][]{Mauch03} at radio frequencies, infrared from the Wide-field Infrared Survey Explorer \citep[WISE;][]{Wright10} and the Infrared Astronomical Satellite \citep[IRAS;][]{Moshir92}, and X-rays from the X-ray Multi Mirror (XMM)--{\it Newton} \citep{Rosen16}, {\it Chandra} \citep{Evans10}, and {\it Swift}-Burst Alert Telescope \citep[BAT;][]{Baumgartner13} databases have been widely used to explore the AGN phenomena \citep[e.g.,][]{Machalski99, McMahon02, Best05, Gendre08, Best12, Mingo14, Mingo16, Gupta18, Sabater19}. Due to the large number of sources detected by all-sky surveys, most of these studies rely on deriving AGN samples using ``automated--(not by eye)'' selection methods. For example, at radio frequencies, using the data from the FIRST survey, \citet{Proctor11} derived a sample of radio galaxies (including quasars) by counting the number of radio components within a $\sim$0\farcm96\, radius of the source. 
A different approach was applied by \citet{vanVelzen15}, who selected double-lobed radio sources by counting all the radio components separated by an angular distance of up to 1\hbox{$^\prime$}\, and with a flux density above 12 mJy. In this manner, \citeauthor{vanVelzen15} could select only FR~II sources after applying several cuts: a minimum angular separation (18\hbox{$^{\prime\prime}$}), the ratio of the integrated fluxes of the lobes ($f_{l/l}<10$), and the integrated--to--core flux ratio ($F_i/F_p<5$). Radio AGN samples derived from cross-matching of sources in different wavebands are subject to contamination due to source confusion (depending on the angular resolution) and large uncertainty in reliable association due to the complex nature of spatially extended radio sources. The only exceptions to this rule are the on-going Low Frequency Array (LOFAR) Two-metre Sky Survey \citep[LoTSS;][]{Shimwell17} and the Radio Galaxy Zoo \citep[][]{Banfield15} projects. LoTSS uses a combination of automated algorithms as well as visual identifications of the host galaxies of radio sources. The current LoTSS DR\,1 release covers 2\% of the sky and provides optical and/or IR identifications from the Panoramic-Survey Telescope and Rapid Response System (Pan-STARRS; \citealt{Chambers16}) and WISE surveys \citep{Shimwell19, Williams19}. The Radio Galaxy Zoo selects radio sources from FIRST and from the Australia Telescope Large Area Survey DR~3 (ATLAS; \citealt{Franzen15}) and provides IR identifications from the WISE and Spitzer Wide-Area Infrared Extragalactic Survey (SWIRE; \citealt{Lonsdale03}) samples. However, as can be seen from Table~\ref{AGNsamples}, which summarizes the main features of the previous and current efforts to obtain samples of radio sources, most of the catalogs do not provide detailed radio morphological classification, and practically none of them give the morphological classification of the host galaxy. 
However, we note that the main focus of the majority of these studies is to provide samples of radio AGNs with optical counterparts and not necessarily the morphological classification, which is the main focus of the present study. Very recently, the radio AGN sample from \citet{Best12} has been used as a parent sample to extract lists of AGNs with specific radio morphological classifications; see, for example, \citet[][]{Capetti17a} for FR~I sources, \citet[][]{Capetti17b} for FR~II sources, \citet[][]{Baldi18} for FR sources with linear sizes $<$5 kpc, \citet[][]{Missaglia19} for wide--angle tail (WAT) sources, and \citet[][]{Jimenez-Gallardo19} for compact sources with linear sizes $<$60 kpc. The present study provides a catalog of radio sources associated with optical galaxies, having their central radio component within 3\hbox{$^{\prime\prime}$}\ from the position of the optical galaxy, and exhibiting {\it unresolved} or {\it extended} radio morphology. The catalog contains sources with: (1) spectroscopic redshift ($z$); (2) a good-quality optical spectrum from SDSS to study host galaxy and emission line properties; (3) measured radio flux densities of the radio structures; (4) a low flux density limit corresponding to the flux density limit of the FIRST radio survey; and (5) a morphological classification of the radio structure and of the host galaxy for each source. The present catalog is \textit{handmade}, and the radio and the optical morphological classifications are performed \textit{visually}. It provides the {\it largest} sample of spectroscopically selected radio galaxies to date, covering $\sim$30\% of the entire sky \citep[see, in this context,][]{Ching17}. We emphasize that in contrast to previous catalogs based on galaxies from the SDSS DR\,7 release \citep{Abazajian09}, we do not impose any additional radio flux density detection limit (see Table~\ref{AGNsamples}). 
Therefore, our catalog of the Radio sources associated with Optical Galaxies and having Unresolved or Extended morphologies I (ROGUE~I) contains sources with flux densities reaching down to $\sim$sub--mJy levels, corresponding to the 3$\sigma$ radio source detection provided by the FIRST survey. As a consequence, the ROGUE I catalog has the same limitations as the radio and optical catalogs used for selection. However, statistically complete samples can be selected from the ROGUE I catalog, allowing detailed investigation of, e.g., the AGN phenomena as a function of BH mass, host galaxy mass, stellar population, and morphological type (both radio and optical), which will be the subject of forthcoming papers. In Section~\ref{sec:sample} we describe in more detail all the data assembled while cross-matching. The identification procedures are described in Section~\ref{sec:methodology}, where we also describe our schemes for morphological classification and our flux density estimation methods. The results are outlined in Section~\ref{sec:result}. Comments, reclassifications, and new discoveries are presented in Section \ref{sec:resultComments}. Section~\ref{sec:discussion} gives the summary. \section{Sample selection} \label{sec:sample} The first step in the sample selection process was the identification of a parent sample of galaxies with an optical spectrum which will allow the study of stellar population and emission line properties. Our sample, consisting of 673,807 galaxies, is drawn from the SDSS Main Galaxy Sample \citep{Strauss02} and the Red Galaxy Sample \citep{Eisenstein01} based on the spectrum quality \citep[signal-to-noise ratio in the continuum at 4020 \AA\ $\geq$ 10;][]{Koziel-Wierzbowska17}. We note that some parts covered by SDSS DR\,7 have been observed repeatedly; therefore, in order to avoid duplication in the parent sample, we matched the galaxies in right ascension ($<$0\farcs5), declination ($<$0\farcs5), and $z$ ($\pm 0.005$). 
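The duplicate-removal step described above can be sketched as follows. This is a naive, illustrative reimplementation (not the authors' code); only the thresholds quoted in the text (0\farcs5 in each coordinate, $\pm 0.005$ in redshift) are taken from the paper:

```python
def deduplicate(galaxies, d_pos=0.5 / 3600.0, dz=0.005):
    """Keep one record per physical galaxy.

    Two records are treated as duplicates when they agree in right
    ascension (< 0.5"), declination (< 0.5") and redshift (within 0.005).
    `galaxies` is a list of (ra_deg, dec_deg, z) tuples; a simple O(n^2)
    sketch of the matching described in the text.
    """
    unique = []
    for ra, dec, z in galaxies:
        is_dup = any(abs(ra - ra0) < d_pos
                     and abs(dec - dec0) < d_pos
                     and abs(z - z0) <= dz
                     for ra0, dec0, z0 in unique)
        if not is_dup:
            unique.append((ra, dec, z))
    return unique
```

Applying such a filter to repeated SDSS observations is what reduces the 673,807 spectra to a list of unique galaxy candidates.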
This gave us a total of 662,531 unique SDSS galaxy candidates in which we searched for the presence of a radio AGN counterpart. In order to identify the radio sources associated with the selected optical galaxies, we used the FIRST and NVSS radio surveys conducted using the Very Large Array (VLA). These two surveys, although conducted at the same frequency (1.4 GHz), differ widely in terms of the angular resolution of the radio images and of their sensitivity to point-like and extended/diffuse emission. The FIRST survey provides 5\farcs4 synthesized beam size images and is complete down to a flux density limit of 1 mJy for point-like sources. On the other hand, the NVSS survey provides 45\hbox{$^{\prime\prime}$}\, synthesized beam-size images and is complete down to a 2.5 mJy flux density limit. The FIRST survey is $\sim$2 times more sensitive in detecting compact sources with angular sizes $\lesssim$6\hbox{$^{\prime\prime}$}\, as compared to the NVSS survey, while the NVSS survey is far more sensitive than the FIRST survey for detecting extended/diffuse emission due to its compact array configuration. The sky coverages of the FIRST and NVSS surveys overlap (declination $\geq -10$\hbox{$^\circ$}\, and $\geq -40$\hbox{$^\circ$}, respectively), which makes them complementary to each other in order to perform a blind search for radio galaxies which might contain a point-like radio core/hot spots and extended/diffuse emission. In addition, the part of the sky covered by the FIRST survey is almost identical to the portion of the sky covered by SDSS DR\,7 spectroscopic observations. Therefore, by combining the SDSS optical survey and the FIRST and the NVSS radio surveys we are able to identify the radio counterparts of the selected SDSS DR\,7 galaxies. 
\section{Methodology} \label{sec:methodology} We have performed the search in two steps: (1) the optical position of a galaxy from the SDSS catalog was cross-matched with the radio position from the FIRST catalog allowing for an error of 3\hbox{$^{\prime\prime}$}; (2) once the match was found in the FIRST catalog, we made optical/radio overlay maps with angular sizes corresponding to 1 Mpc linear size at the source distance, centered at the host galaxy position, to visualise the morphologies of the radio sources. For this, we used the optical images from the Digitized Sky Survey (DSS)\footnote{http://archive.eso.org/dss/dss} and the radio images from the FIRST and NVSS surveys. We note that optical galaxies for which we did not find a radio match in step (1) are not included in the present catalog. These 629,815 remaining galaxies from our parent SDSS sample might still host extended radio emission, but without a FIRST detection at the position of the optical host galaxy, which will be searched for in future work and lead to the publication of our second catalog, ROGUE~II. Below we describe our selection procedure in detail. \subsection{Optical galaxies with radio cores: cross-matching of source positions} \label{sec:crossmatch} We searched for a radio counterpart by cross-matching the optical positions of the SDSS galaxies with the positions of the radio sources listed in the FIRST catalog. Since the radio and optical surveys have different resolutions, we chose an error circle within which a radio source can be assumed to be coincident with the optical galaxy. 
The search radius, $\Delta$r, is defined as \begin{eqnarray} \Delta \mathrm{r} &=& [((\alpha_\mathrm{SDSS} - \alpha_\mathrm{FIRST}) \times \cos\delta_\mathrm{SDSS})^2 \nonumber \\ && + (\delta_\mathrm{SDSS} - \delta_\mathrm{FIRST})^2]^{1/2}, \label{match_coor} \end{eqnarray} where $\alpha_\mathrm{SDSS}$, $\delta_\mathrm{SDSS}$ are the right ascension and declination of the optical positions of the sources from the SDSS DR\,7 list and $\alpha_\mathrm{FIRST}$, $\delta_\mathrm{FIRST}$ correspond to the right ascension and declination from the FIRST list, respectively. If the computed $\Delta$r was found to be $\leq$3\hbox{$^{\prime\prime}$}\, \citep[adopted after][see the discussion therein]{Singh15a}, the radio counterpart was considered to be coincident with the optical source. In this manner, we identified 32,616 optical galaxies with a FIRST counterpart initially treated as a radio core. \subsection{Source morphology: visual identification and classification} Once a radio source was found to coincide with the optical galaxy position (Eq.~\ref{match_coor}), we derived the projected angular size corresponding to 1 Mpc linear size, according to the redshift of the optical galaxy. The luminosity distance to the source, ${\mathrm{d_L}}$, has been computed using the concordance cosmology with the Hubble constant $ H_0= $ 69.6 km/s/Mpc, $\Omega_{M} = 0.286$, and $\Omega_{\Lambda} = 0.714$ \citep{Spergel07}. The luminosity distance has been calculated following \citet{Hogg99}: {\small \begin{eqnarray} &{\mathrm{d_L}}(z; H_0,\Omega_{M},\Omega_\Lambda) = \frac{c(1+z)}{H_0} \times \nonumber \\ &\int_{0}^z [(1+z')^2 (1+\Omega_{M} z')-z'(2+z')\Omega_\Lambda]^{-1/2} dz' \, \mathrm{[Mpc]}, \label{dl} \end{eqnarray} } where $c$ is the speed of light. The luminosity distance has then been converted to the angular diameter distance, $\mathrm{d_m}$, \begin{equation} {\mathrm{d_m}} = \frac{\mathrm{d_L}}{(1+z)^2} \mathrm{[\,Mpc]} . 
\label{dm} \end{equation} Finally, we applied the small angle formula to convert a linear size into an angular size: \begin{equation} \label{smf} \mathrm{angular \, \, size} = 206,265\, \frac{\mathrm{linear \, size}}{\mathrm{d_m}} \, [\hbox{$^{\prime\prime}$}]. \end{equation} Subsequently, we searched for associated radio sources within 1 Mpc linear size by making radio (1.4 GHz contours from FIRST and NVSS) -- optical (grayscale from DSS) overlays in the Astronomical Image Processing System (\textsc{AIPS}), using the \textsc{kntr} task. This image size was chosen to ensure that we do not miss out on a large number of galaxies with extended radio emission, as the fraction of large (linear size $>$1 Mpc) double-lobed quasars is negligible \citep[see][]{deVries06}. For the optical/radio overlay, we selected contour levels starting from $\sim$3 times the typical rms provided by the surveys (i.e., 0.6 mJy beam$^{-1}$ and 1.35 mJy beam$^{-1}$ for the FIRST and the NVSS maps, respectively). \subsubsection{Radio morphological classification} \label{galradmorph} The FIRST and the NVSS radio maps of our entire sample of 32,616 sources were visually inspected. All three authors participated in the classification. At first, to train ourselves and to adjust our classification scheme, we jointly classified a set of 1,000 sources. Then, we assigned a set of $\sim$11,000 sources to each of us and we carried on the classification separately. After this round of morphological assignments, two authors together verified all the sources in terms of compatibility with the previously adopted classification scheme. The radio morphological classification was made separately for the FIRST and the NVSS maps, and the \textit{individual} FIRST and NVSS classifications were assigned to the source using the codes described in Table~\ref{tab:RadioMorph}. Figure~\ref{fig:radioMorph} gives example maps for each of our radio morphological classes. 
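The positional match (Eq.~\ref{match_coor}) and the 1 Mpc map scaling (Eqs.~\ref{dl}--\ref{smf}) can be sketched in a few lines of Python. This is our illustrative reimplementation, not the authors' code; only the cosmological parameters and the 3\hbox{$^{\prime\prime}$}\, threshold quoted above are taken from the text:

```python
import math

H0 = 69.6            # Hubble constant [km/s/Mpc], as quoted in the text
OMEGA_M, OMEGA_L = 0.286, 0.714
C_KMS = 299792.458   # speed of light [km/s]

def search_radius_arcsec(ra_sdss, dec_sdss, ra_first, dec_first):
    """Eq. (1): flat-sky separation; inputs in degrees, result in arcsec."""
    dra = (ra_sdss - ra_first) * math.cos(math.radians(dec_sdss))
    ddec = dec_sdss - dec_first
    return math.hypot(dra, ddec) * 3600.0

def luminosity_distance_mpc(z, steps=10000):
    """Eq. (2): trapezoidal integration of the Hogg (1999) integrand."""
    def f(zp):
        return ((1.0 + zp) ** 2 * (1.0 + OMEGA_M * zp)
                - zp * (2.0 + zp) * OMEGA_L) ** -0.5
    h = z / steps
    area = 0.5 * (f(0.0) + f(z)) + sum(f(i * h) for i in range(1, steps))
    return C_KMS * (1.0 + z) / H0 * area * h

def map_size_arcsec(z, linear_size_mpc=1.0):
    """Eqs. (3)-(4): angular size of `linear_size_mpc` at redshift z."""
    d_m = luminosity_distance_mpc(z) / (1.0 + z) ** 2  # angular diameter distance
    return 206265.0 * linear_size_mpc / d_m

# A FIRST component within 3" of the optical position is treated as a core.
accepted = search_radius_arcsec(180.0, 10.0, 180.0003, 10.0003) <= 3.0
```

In a production pipeline one would instead use a library cosmology calculator and a spherical (rather than flat-sky) separation, but for separations of a few arcseconds the difference is negligible.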
We classified single-component sources as compact (unresolved) or elongated according to their sizes given by FIRST (or NVSS): sources with both axes of the deconvolved Gaussian equal to zero were classified as compact, while sources with at least one deconvolved dimension larger than zero were classified as elongated. Single-component NVSS sources were classified as compact when only upper limits of both Gaussian axes were available in the NVSS catalog. The classification of sources with at least two components, which we refer to as {\it extended}, in the FIRST or the NVSS catalogs used the classical \cite{Fanaroff74} classification scheme, where sources with collinear lobes brighter at the edges are classified as FR~II radio galaxies, while sources in which the brightest part of a lobe is close to the center are classified as FR~I radio galaxies. We also separated hybrid sources \citep{Gopal-Krishna00, Kapinska17}, with one lobe of FR~I and another of FR~II type morphology, and one-sided FR~I or FR~II sources. Sources classified as one-sided have symmetric NVSS radio emission, suggesting that the second lobe is simply not detected in FIRST. In our classification scheme we distinguished sources with more complex morphology, such as: Z--shaped sources (lobes forming a Z or S shaped structure), X--shaped (with two inclined pairs of lobes), double--double radio sources (two collinear pairs of lobes), wide-angle tail (WAT; lobes forming an obtuse angle) and narrow-angle tail (NAT; lobes forming an acute angle) sources, head-tail (HT; bright radio core connected to one-sided tail-like emission) sources, or halo (diffuse radio emission around a core) sources. Our classification of bent sources is based only on the morphological features. We note that from the data used in our search we are not able to verify if the source is a cluster member, which is often used as a defining feature of WAT/NAT sources \citep[e.g.,][but, see also, \citealt{Missaglia19}; \citealt{Mingo19}]{Leahy93}. 
In some galaxies the location and morphology of the radio emission allowed us to classify it as coming from star-forming regions (SFR). Sources with radio emission too complex to be classified into one of the above classes were marked as sources with not clear morphology. In the cases where several sources of radio emission were measured jointly in the FIRST or NVSS catalog as one detection, we classified this detection as blended. Furthermore, we found several cases in which the radio emission cannot be physically linked to the optical galaxy under consideration; these are: (i) the radio emission is below the NVSS detection threshold or (ii) the radio emission is produced by a nearby galaxy or by just a part of the same galaxy, and we classified these galaxies as not detected in radio. The {\it final} radio morphological classification is based on the combination of the FIRST and the NVSS classifications, because of the sheer differences in resolution and in sensitivity to extended emission between the two surveys. Due to the higher angular resolution of the FIRST survey, compact and elongated types are assigned based solely on the deconvolved angular sizes of the radio components provided by the FIRST survey. In the case of sources showing extended structures, more weight is given to the map with detailed morphological features seen in the source. For example, a small angular size source could be classified as a double-double radio galaxy (DDRG; two separate lobes on either side of the core) in FIRST, while it could be classified only as FR~II in NVSS due to its larger beam size. In such a case, the final classification is DDRG for the source. Similarly, when only a bright core could be seen in FIRST, while the lobe and the counter lobe could be detected in NVSS due to its higher sensitivity to the extended emission, then the assigned classification is FR~II. 
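The way the final label combines the two individual classifications can be illustrated with a toy rule: rank the classes by how much morphological detail they convey and keep the more detailed of the two labels. The ranking below is purely illustrative (our assumption, not the authors' exact procedure), but it reproduces the two worked examples above:

```python
# Hypothetical detail ranking (higher = more detailed morphology).
DETAIL_RANK = {
    "not detected": 0,
    "compact": 1,
    "elongated": 2,
    "one-sided FR II": 3,
    "FR II": 4,
    "DDRG": 5,   # double-double radio galaxy
}

def final_classification(first_class, nvss_class):
    """Keep whichever of the FIRST/NVSS labels carries more detail."""
    return max(first_class, nvss_class, key=lambda c: DETAIL_RANK.get(c, 0))
```

With this rule, a source seen as a DDRG in FIRST but only as FR~II in NVSS ends up as DDRG, while a bare FIRST core whose lobes appear only in NVSS ends up as FR~II.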
In the case of sources for which we were not able to assign a reliable classification, we use the prefix ``p'', meaning possible. \subsubsection{Optical morphological classification} \label{galoptmorph} The optical morphological classification of the galaxies associated with the radio emission was done using the 120\hbox{$^{\prime\prime}$}\, image snapshots from SDSS. The assigned morphologies were chosen from the classes given in Table~\ref{OpticalMorph}. These follow the standard Hubble classification scheme with some additional classes, such as distorted galaxies, ring galaxies, and galaxy mergers. We also distinguished objects where the SDSS spectrum concerns a star-forming region or a part of a galaxy outside its nucleus. Figure~\ref{fig:optMorph} gives examples of our optical morphological classification scheme. In some cases we used additional codes to describe details of the morphological type, such as the presence of a bar or signs of interactions, and used the prefix p (for possible) when we considered the classification uncertain. \begin{figure*}[!htb] \scriptsize \hspace{0cm}{\includegraphics[width=0.3\textwidth]{com_8C_c-crop.pdf}} \hspace{0.2cm}{\includegraphics[width=0.3\textwidth]{com_5897E_c-crop.pdf}} \hspace{0.2cm}{\includegraphics[width=0.3\textwidth]{com_15129I_c.pdf}} \hspace{0.2cm}{\includegraphics[width=0.3\textwidth]{com_18II_c-crop.pdf}} \hspace{0.2cm}{\includegraphics[width=0.3\textwidth]{com_25220Hib_cBS.pdf}} \hspace{0.2cm}{\includegraphics[width=0.3\textwidth]{com_Z12544_cBS.pdf}} \hspace{0.2cm}{\includegraphics[width=0.3\textwidth]{com_28555X_c-crop.pdf}} \hspace{0.8cm}{\includegraphics[width=0.3\textwidth]{com_26146DD_c-crop.pdf}} \hspace{0.8cm}{\includegraphics[width=0.3\textwidth]{com_20337WAT_cBS.pdf}} \caption{Examples of radio morphological classification assigned in the ROGUE~I catalog (Table~\ref{tab:RadioMorph}). 
The 1.4 GHz radio contours from the FIRST (red) and the NVSS (black) maps are overlaid on the optical DSS (gray scale) image, centered at the host galaxy position marked by a plus sign. Background/foreground sources are marked with an ``X'' sign. The FIRST beam is placed inside the square box at the bottom left corner; the NVSS beam ($\sim$9 times the size of the FIRST beam) is not shown for clarity. The contour levels for the FIRST and the NVSS maps start at 0.6 mJy beam$^{-1}$ and 1.35 mJy beam$^{-1}$ ($\sim3\sigma$), respectively, and increase by factors of $(\sqrt{2})^{n}$, where $n = 0, 1, 2, \ldots, 20$. The contours at $-3\sigma$ are shown by dashed lines. In the title of each image we give the catalog number and the codes adopted for radio morphology based on the FIRST, NVSS, and final classifications, respectively.} \label{fig:radioMorph} \end{figure*} \renewcommand{\thefigure}{\arabic{figure} (Cont.)} \begin{figure*}[!htb] \scriptsize \ContinuedFloat \hspace{0.2cm}{\includegraphics[width=0.3\textwidth]{com_2506NAT_c.pdf}} \hspace{0.2cm}{\includegraphics[width=0.3\textwidth]{com_8883HT_c.pdf}} \hspace{0.2cm}{\includegraphics[width=0.3\textwidth]{com_4622Halo_c-crop.pdf}} \hspace{0.2cm}{\includegraphics[width=0.3\textwidth]{com_2859SFR_c-crop.pdf}} \hspace{0.8cm}{\includegraphics[width=0.3\textwidth]{com_19148NC_c.pdf}} \hspace{0.8cm}{\includegraphics[width=0.3\textwidth]{com_2ND_c-crop.pdf}} \caption{Examples of radio morphological classification assigned in the ROGUE~I catalog.} \end{figure*} \renewcommand{\thefigure}{\arabic{figure}} \begin{figure*}[!ht] {\includegraphics[width=0.3\textwidth]{SDSS_29176_S.jpg}}\vspace{0.5cm} {\includegraphics[width=0.3\textwidth]{SDSS_173_E.jpg}} {\includegraphics[width=0.3\textwidth]{SDSS_13314_L.jpg}} {\includegraphics[width=0.3\textwidth]{SDSS_10663_D.jpg}}\vspace{0.5cm} {\includegraphics[width=0.3\textwidth]{SDSS_24874_R.jpg}} {\includegraphics[width=0.3\textwidth]{SDSS_3873_M.jpg}} 
{\includegraphics[width=0.3\textwidth]{SDSS_28750_SFR.jpg}}\vspace{0.5cm} \hspace{0.8cm}{\includegraphics[width=0.3\textwidth]{SDSS_15476_iS.jpg}} \hspace{0.8cm}{\includegraphics[width=0.3\textwidth]{SDSS_3674_bS.jpg}} \caption{Examples of optical morphologies assigned in the ROGUE~I catalog (Table~\ref{OpticalMorph}) through {\it visual} inspection of the 120\hbox{$^{\prime\prime}$}\, SDSS image snapshots. Crosses indicate the position of the SDSS aperture used to measure the spectra. In the header of each image we give the catalog number and the code adopted for optical morphology based on the SDSS images.} \label{fig:optMorph} \end{figure*} \renewcommand{\thefigure}{\arabic{figure} (Cont.)} \begin{figure*}[htb!] \ContinuedFloat \centering{\includegraphics[width=0.3\textwidth]{SDSS_2024_O.jpg}} \caption{Examples of optical morphologies assigned in the ROGUE~I catalog.} \end{figure*} \renewcommand{\thefigure}{\arabic{figure}} \begin{deluxetable*}{llp{13cm}} \tabletypesize{\footnotesize} \tablecolumns{3} \tablewidth{0pt} \tablecaption{Radio morphologies of the sources listed in the ROGUE~I catalog with adopted codes and descriptions. 
\label{tab:RadioMorph}} \tablehead{ \multicolumn{1}{c}{Radio morphology} & Code & Description \\ \multicolumn{1}{c}{(1)} & (2) & (3) } \startdata Compact & C & point-like single-component source \\ Elongated & E & single-component source with an elliptical profile \\ FR~I & I & linear structure brighter near the core \citep{Fanaroff74}\\ FR~II & II & linear structure brighter near the edges \citep{Fanaroff74}\\ Hybrid & I/II & hybrid morphology with one lobe of FR~I and the other of FR~II type \citep{Gopal-Krishna00}\\ One-sided FR~I & O I & one-sided source with an FR~I lobe \\ One-sided FR~II & O II & one-sided source with an FR~II lobe\\ Z--shaped & Z & Z-- or S--shaped radio morphology \\ X--shaped & X & X--shaped radio morphology \citep{Cheung07}\\ Double-double RG & DD & two pairs of collinear lobes \citep{Lara99}\\ Wide-angle tail & WAT & bent source with an angle between the lobes $>90^{\circ}$\\ Narrow-angle tail& NAT & bent source with an angle between the lobes $<90^{\circ}$\\ Head-tail & HT & bright core (head) and a tail \citep{Owen79}\\ Halo & Halo & diffuse radio emission around the core\\ Star-forming region & SFR & emission from the host galaxy \\ Not clear & NC & radio source with unclear morphology \\ Blended & B & radio emission blended with another source\\ Not detected & ND & the optical galaxy is not the host of the radio emission\\ \hline Possible & p & uncertain attribution of the above types\\\hline \enddata \end{deluxetable*} \begin{deluxetable*}{llp{12cm}} \tabletypesize{\footnotesize} \tablecolumns{3} \tablewidth{0pt} \tablecaption{Optical morphologies of the host galaxies of sources in the ROGUE~I catalog with adopted codes and descriptions. 
\label{OpticalMorph}} \tablehead{ \multicolumn{1}{c}{Optical morphology} & Code & Description \\ \multicolumn{1}{c}{(1)} & (2) & (3) } \startdata Spiral galaxy & S & disc galaxy with visible spiral arms, face-on or edge-on\\ Elliptical galaxy & E & elliptical galaxy\\ Lenticular galaxy & L & disc galaxy without spiral arms \\ Distorted & D & galaxy with distorted, perturbed morphology\\ Ring galaxy & R & galaxy with ring-like shape\\ Galaxy merger & M & merging galaxies, mainly major mergers \\ Star-forming region & SFR & SDSS spectrum of a star-forming region, not of the galaxy center \\ Off-center & O & off-center spectrum, not corresponding to a star-forming region\\ \hline Interacting galaxy & i & galaxies with visible signs of interaction (iS, iL, iE)\\ Barred galaxy & b & spiral or lenticular galaxies with prominent bars (bS, bL) \\\hline Possible & p & uncertain attribution of the above types \enddata \end{deluxetable*} We point out that in this paper we do not study the origin of the radio emission of the sources in the ROGUE~I catalog; therefore, the catalog contains star-forming (SF) galaxies as well as radio AGNs. The separation into AGN and SF sources will be discussed elsewhere (Koziel-Wierzbowska, in prep.). Details of the radio and optical morphological classifications of the galaxies presented in the ROGUE~I catalog are given in Section~\ref{sec:result}. \subsection{Estimation of radio flux density, radio luminosity, and absolute optical magnitude} In the ROGUE~I catalog we also present the \textit{core} and \textit{total} radio flux densities of all sources for which the radio emission can be safely separated from the emission of other nearby sources. \textit{Core} flux densities are the flux densities of the compact central components taken from the FIRST catalog (i.e., the radio sources resulting from Section~\ref{sec:crossmatch}). 
In order to estimate the \textit{total} radio flux densities of the ROGUE~I sources, we employed a number of procedures, depending on the radio morphology and on the proximity of neighbouring sources. Below we outline our methodology in detail. 1) In the case of compact or elongated radio emitters, i.e. the majority of the sources in the ROGUE~I catalog, the total flux densities were taken directly from the NVSS catalog. 2) In the case of sources with extended radio morphology, for which the radio emission consists of many components, the total flux density was estimated as the sum of the flux densities of the separate components listed in the NVSS catalog. 3) For sources blended with foreground or background point-like sources, the total flux densities were estimated as the difference between the NVSS flux density of the source components and the FIRST flux density of the blended source. 4) The total flux densities of a few sources blended with elongated sources, for which we were able to separate the individual components, were measured manually from the NVSS intensity maps. The manual flux measurements were done with the \textsc{JMFIT} task of the \textsc{AIPS} software, which fits Gaussian components to a defined part of an image; this method was used in order to be consistent with the flux measurement method of the NVSS catalog \citep{Condon98}. 5) Moreover, during our analysis we noticed that for a few sources the FIRST flux density is higher than that given in the NVSS catalog. This can happen in the case of variable sources; for such sources we provide the values from the FIRST catalog as the total flux density. 6) For sources not detected in the NVSS survey, the FIRST flux densities are given as the total flux density. 
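The six cases above amount to a small decision procedure. The following is a schematic sketch with hypothetical argument names, not the actual pipeline code:

```python
def total_flux_density(nvss_fluxes, first_core, blend_first=None,
                       manual=None, nvss_detected=True):
    """Return (total flux density in mJy, provenance flag) following cases 1-6.

    nvss_fluxes : list of NVSS component flux densities (mJy)
    first_core  : FIRST flux density of the source/core (mJy)
    blend_first : FIRST flux of a blended point-like source to subtract, if any
    manual      : manually measured flux (AIPS/JMFIT), if available
    """
    if manual is not None:                # case 4: manual measurement
        return manual, "M"
    if not nvss_detected:                 # case 6: not detected in NVSS
        return first_core, "F"
    nvss_total = sum(nvss_fluxes)         # cases 1-2: sum of NVSS components
    if blend_first is not None:           # case 3: subtract the blended source
        return nvss_total - blend_first, "S"
    if first_core > nvss_total:           # case 5: possible variable source
        return first_core, "F"
    return nvss_total, "N"
```

The returned flags mirror the provenance codes used in column 19 of the catalog (N, F, S, M).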
The observed monochromatic radio luminosity was computed from the flux density, S, as follows: \begin{equation} \mathrm{L_{obs} = 4\,\pi\,d_L^2\,S\, [W\,Hz^{-1}]}\,. \label{Lobs} \end{equation} For the extended sources we also computed the rest-frame monochromatic total luminosity, assuming a spectral index $\alpha=-0.75$ \citep{Yuan18}: \begin{equation} \mathrm{L_{rest} = \frac{4\,\pi\,d_L^2\,}{(1+z)^{\alpha+1}} S\, [{W\, Hz^{-1}]}}\,. \label{Lrest} \end{equation} The optical absolute magnitude, M$_r$, was calculated from the SDSS apparent magnitude, m$_r$, using Eq.~\ref{absmag}, with the luminosity distance, d$_L$, expressed in parsecs: \begin{equation} \mathrm{M_r = m_r - A_r + 5 - 5 \log(d_L)} \label{absmag} \end{equation} We applied a correction for Galactic reddening using the values of Galactic extinction, A$_r$ \citep{Schlegel98}; however, we did not apply any K-correction. \section{Catalog} \label{sec:result} Table~\ref{Catalog} presents the radio and optical morphological classifications of the first 20 optical galaxies of the ROGUE~I catalog. The catalog of our entire sample of 32,616 sources is published in machine-readable format. The catalog and the analysed radio-optical overlays are also available at \url{http://rogue.oa.uj.edu.pl/}. The columns are as follows: \\ Column 1: catalog number of the source. \\ Column 2: plate number in the SDSS database. \\ Column 3: MJD in the SDSS database. \\ Column 4: fiber number in the SDSS database. \\ Column 5: right ascension from the SDSS database. \\ Column 6: declination from the SDSS database. \\ Column 7: redshift, $z$, from the SDSS database. \\ Column 8: right ascension from the FIRST database. \\ Column 9: declination from the FIRST database. \\ Column 10: optical morphological classification from inspection of SDSS images.\\ Column 11: radio morphological classification from inspection of FIRST contour images. \\ Column 12: radio morphological classification from inspection of NVSS contour images. \\ Column 13: final radio morphological classification. 
\\ Column 14: computed luminosity distance. \\ Column 15: observed flux density of the core at 1.4 GHz. \\ Column 16: error on the flux density of the core at 1.4 GHz. \\ Column 17: observed flux density of the total emission at 1.4 GHz. \\ Column 18: error on the flux density of the total emission at 1.4 GHz. \\ Column 19: reference for the estimation of total flux density, i.e., if directly obtained or, in some rare cases, measured manually. \\ Column 20: apparent optical magnitude of galaxy from the SDSS database. \\ \clearpage \begin{minipage}[t][25cm][t]{1.2\textwidth} \begin{rotatetable*} \vspace{-5cm} \begin{deluxetable*}{lccccccccccccccccccc} \small \tabletypesize{\tiny} \tablecolumns{20} \tablewidth{0pt} \tablecaption{First 20 galaxies from the ROGUE~I catalog.\label{Catalog}} \tablehead{ \multirow{2}{*}{No.} & \multicolumn{6}{c}{SDSS} & \multicolumn{2}{c}{FIRST} & \multicolumn{4}{c}{Classification} & \colhead{$d_{L}$} & \colhead{S$_{\mathrm{core}}$} & \colhead{ eS$_{\mathrm{core}}$} & \colhead{S$_{\mathrm{total}}$}& \colhead{eS$_{\mathrm{total}}$}& \multirow{2}{*}{Flag} & \colhead{m$_r$} \\ & Plate & MJD & Fiber & $\alpha_{\mathrm{opt}}$ & $\delta_{\mathrm{opt}}$ & z & $\alpha_{\mathrm{rad}}$ & $\delta_{\mathrm{rad}}$ & SDSS & FIRST & NVSS & Final & (Mpc) & (mJy) & (mJy)& (mJy) & (mJy) & & (mag) \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) & (12) & (13) & (14) & (15) & (16) & (17) & (18) & (19) & (20) } \startdata 1 & 266 & 51630 & 25 & 146.95607 & -0.342297 & 0.134663 & 09 47 49.453 & -00 20 33.55 & E & E & C & E & 639.1 & 100.2 & 0.14 & 100.2 & 0.14 & F & 17.54\\ 2 & 266 & 51630 & 42 & 146.565613 & -1.084756 & 0.09758 & 09 46 15.738 & -01 05 04.99 & L & E & ND & E & 451.8 & 1.3 & 0.14 & 1.3 & 0.14 & F & 17.15\\ 3 & 266 & 51630 & 77 & 146.809128 & 0.02636 & 0.126075 & 09 47 14.183 & +00 01 35.25 & E & C & C & C & 595 & 2.75 & 0.19 & 2.8 & 0.4 & N & 16.65\\ 4 & 266 & 51630 & 90 & 146.14357 & -0.741639 & 0.203829 & 09 44 34.458 & 
-00 44 29.44 & D & E & B & E & 1009.6 & 2.57 & 0.15 & 2.57 & 0.15 & F & 16.6\\ 5 & 266 & 51630 & 100 & 146.007797 & -0.642273 & 0.005024 & 09 44 01.896 & -00 38 32.19 & D & E & C & E & 21.7 & 1.3 & 0.14 & 3.3 & 0.6 & N & 16.16\\ 6 & 266 & 51630 & 119 & 146.737137 & -0.252201 & 0.13054 & 09 46 56.879 & -00 15 08.11 & E & E & C & E & 617.8 & 4.98 & 0.14 & 7.2 & 0.5 & N & 16.76\\ 7 & 266 & 51630 & 141 & 146.373795 & -0.36845 & 0.053307 & 09 45 29.731 & -00 22 04.32 & iS & E & B & E & 239.2 & 3.8 & 0.14 & 3.8 & 0.14 & F & 16.03\\ 8 & 266 & 51630 & 223 & 145.601166 & -0.001393 & 0.14577 & 09 42 24.263 & -00 00 05.23 & D & C & C & C & 696.8 & 4.87 & 0.14 & 5.5 & 0.4 & N & 17.16\\ 9 & 266 & 51630 & 255 & 145.52623 & -0.747411 & 0.218403 & 09 42 06.297 & -00 44 51.13 & E & E & C & E & 1091.1 & 3.58 & 0.15 & 4.1 & 0.4 & N & 17.83\\ 10 & 266 & 51630 & 506 & 146.462982 & 0.63869 & 0.030345 & 09 45 51.057 & +00 38 21.23 & D & E & B & E & 133.9 & 2.81 & 0.15 & 2.81 & 0.15 & F & 15.71\\ 11 & 266 & 51630 & 543 & 146.806839 & 0.665554 & 0.02008 & 09 47 13.587 & +00 39 55.85 & S & E & C & E & 87.9 & 13.09 & 0.15 & 17.7 & 0.7 & N & 17.16\\ 12 & 266 & 51630 & 545 & 146.799088 & 0.702682 & 0.030555 & 09 47 11.672 & +00 42 07.69 & D & E & C & E & 134.8 & 4.53 & 0.15 & 5.4 & 0.5 & N & 15.45\\ 13 & 266 & 51630 & 572 & 146.781509 & 0.737954 & 0.261903 & 09 47 07.515 & +00 44 17.15 & E & WAT & C & WAT & 1340.9 & 9.37 & 0.15 & 49.3 & 1.9 & N & 17.35\\ 14 & 266 & 51630 & 613 & 147.080475 & 0.788018 & 0.211183 & 09 48 19.281 & +00 47 16.60 & E & C & C & C & 1050.6 & 7.84 & 0.14 & 8 & 0.5 & N & 17.29\\ 15 & 267 & 51608 & 9 & 148.829819 & -0.740928 & 0.292166 & 09 55 19.154 & -00 44 27.02 & E & E & C & E & 1520.4 & 2.6 & 0.14 & 3.9 & 0.5 & N & 18.36\\ 16 & 267 & 51608 & 19 & 148.606583 & -0.92869 & 0.358335 & 09 54 25.603 & -00 55 43.81 & E & E & C & E & 1927.9 & 185.64 & 0.13 & 185.64 & 0.13 & F & 17.9\\ 17 & 267 & 51608 & 27 & 149.112656 & -0.47563 & 0.086629 & 09 56 27.061 & -00 28 32.13 & L 
& C & ND & C & 398 & 1.27 & 0.15 & 1.27 & 0.15 & F & 16.33\\ 18 & 267 & 51608 & 34 & 149.169876 & -0.023346 & 0.139254 & 09 56 40.762 & -00 01 24.26 & E & II & II & II & 662.9 & 2.15 & 0.14 & 199.8 & 5.23 & N & 16.26\\ 19 & 267 & 51608 & 47 & 148.43251 & -1.026422 & 0.110105 & 09 53 43.793 & -01 01 35.04 & pM & E & E & E & 514.1 & 11.11 & 0.14 & 11.11 & 0.14 & F & 16.32\\ 20 & 267 & 51608 & 97 & 148.237686 & -0.791982 & 0.089783 & 09 52 57.012 & -00 47 31.18 & E & E & E & E & 413.5 & 17.26 & 0.15 & 23.6 & 1.5 & N & 15.69\\ \enddata \vspace{0.4cm} \tablecomments{Table is published in its entirety in the machine-readable format. A portion is shown here for guidance regarding its form and content. The columns are: (1) catalog number; (2) plate number from SDSS; (3) MJD from SDSS; (4) fiber number from SDSS; (5) Right ascension from SDSS; (6) Declination from SDSS; (7) redshift; (8) Right ascension from FIRST; (9) Declination from FIRST; (10) result of optical morphological classification (Table\ref{OpticalMorph}); (11) - (12) results of individual radio morphological classification (Table~\ref{tab:RadioMorph}); (13) final radio morphological classification; (14) computed luminosity distance of the source (Eq.~\ref{dl}); (15) flux density of radio core from FIRST; (16) uncertainty of the flux density of radio core from FIRST; (17) total radio flux density; (18) uncertainty of the total radio flux density; (19) reference for the total radio flux density of the source (N--NVSS catalog; F--FIRST catalog; S--NVSS corrected for background source; M--manually obtained); (20) apparent optical magnitude from SDSS.} \end{deluxetable*} \end{rotatetable*} \end{minipage} \clearpage \section{Comments on the catalog} \label{sec:resultComments} \subsection{Number of sources with given radio and optical morphologies} The vast majority of sources in the ROGUE~I catalog possess single-component compact or elongated radio morphologies, forming together a sample of 29,237 ($\sim$90\%) 
radio sources; 876 sources are classified as SFR, blended, or not detected, while the remaining 2,503 galaxies ($\sim$8\%) are extended radio sources with complex radio structures. In the group of extended radio sources (including the I, II, Hybrid, OI, OII, DD, X, Z, WAT, NAT, HT, Halo, and NC classes), 1,519 ($\sim$61\% of the extended sources) are securely classified as Fanaroff--Riley type I, type II, hybrid, or one-sided FR~I and FR~II, while 436 ($\sim$17\% of the extended sources) have possible classifications of the above types. Bent sources securely classified as wide--angle tail, narrow--angle tail, or head--tail radio sources form a large group of 390 ($\sim$16\%) objects, with a further 73 ($\sim$3\%) bent sources having possible classifications. Double--double, Z--shaped, X--shaped, and halo radio sources (secure and possible) form a small group of 67 objects ($\sim$3\%). Table~\ref{RadioMorphNumbers} gives a summary of the radio morphologies in the ROGUE~I catalog. \begin{deluxetable}{lll} \tablecolumns{3} \tablewidth{1.0\textwidth} \tablecaption{Summary of the radio morphologies of galaxies in the ROGUE~I catalog. 
\label{RadioMorphNumbers}} \tablehead{ \multicolumn{1}{c}{Radio morphology} & \colhead{Code} & Number } \startdata Compact & C & 4,785 \\ Elongated & E & 24,452\\ Fanaroff-Riley I & I (pI) & 269 (147)\\ Fanaroff-Riley II & II (pII) & 730 (141)\\ Hybrid & I/II (pI/II) & 115 (101)\\ One-sided FR~I & OI (pOI) & 191 (33)\\ One-sided FR~II & OII (pOII) & 214 (14)\\ Z--shaped & Z (pZ) & 18 (7)\\ X--shaped & X (pX) & 7 (7)\\ Double-double & DD (pDD) & 8 (12)\\ Wide-angle tail & WAT (pWAT) & 273 (36)\\ Narrow-angle tail & NAT (pNAT) & 101 (25)\\ Head-tail & HT (pHT) & 16 (12)\\ Halo & Halo (pHalo) & 3 (5)\\ Star-forming region & SFR & 423 \\ Not clear & NC & 18 \\ Blended & B & 414\\ Not detected & ND & 39\\\hline \enddata \tablecomments{Values in brackets correspond to the numbers of objects with possible classification.} \end{deluxetable} Out of the 32,616 galaxies listed in the ROGUE~I catalog, we classified 19,535 objects as elliptical and possible elliptical galaxies, which together form the most numerous group ($\sim$60\%). Other large groups of galaxies consist of: spiral and possible spiral --- 5,174 ($\sim$16\%), distorted --- 3,946 ($\sim$12\%), and lenticular and possible lenticular --- 2,367 ($\sim$7\%) galaxies. Secure and possible merger, ring, interacting, and barred galaxies, as well as star-forming regions, constitute a group of 1,570 objects ($\sim$5\%). The numbers of galaxies with different optical morphologies are listed in Table~\ref{OpticalMorphNumbers}. 
\begin{deluxetable}{lll} \tablecolumns{3} \tablewidth{1pt} \tablecaption{Summary of the optical morphologies of galaxies in the ROGUE~I catalog.\label{OpticalMorphNumbers}} \tablehead{ \multicolumn{1}{c}{Optical morphology} & \colhead{Code} & Number } \startdata Elliptical & E (pE) & 18,416 (1,119)\\ Interacting elliptical & iE (piE) & 795 (6)\\ Distorted & D & 3,946\\ Spiral & S (pS) & 2,927 (2,247)\\ Interacting spiral & iS (piS) & 115 (14) \\ Barred spiral & bS (pbS) & 142 (3)\\ Lenticular & L (pL) & 1,580 (787)\\ Interacting lenticular & iL & 16\\ Barred lenticular & bL & 39\\ Merger galaxy & M (pM) & 235 (93)\\ Ring galaxy & R (pR) & 88 (12)\\ Star-forming region & SFR & 12\\ Off-center & O & 24\\\hline \enddata \tablecomments{Values in brackets correspond to the numbers of objects with possible classification. } \end{deluxetable} Table~\ref{ExtendedRadioOpticalNumbers} shows that most of the sources with extended radio morphology are hosted by elliptical galaxies: 2,445 ($\sim$98\%). The rest of them ($\sim$2\%) are distorted, spiral, or lenticular (including barred) galaxies and galaxy mergers. Table~\ref{CompactRadioOpticalNumbers} presents the number of unresolved and elongated radio sources corresponding to the different optical morphological classes. Here, again, the majority of the host galaxies are elliptical; this is a selection effect arising from sampling radio flux densities down to the FIRST detection threshold ($\sim$0.6 mJy beam$^{-1}$, corresponding to L$_\mathrm{obs,total} \sim$10$^{22}$ W Hz$^{-1}$ at z$\sim$0.1), at which the radio-active galaxy population still dominates over the star-forming galaxy population. However, we note that among the unresolved and elongated sources the variety of optical morphological types is much larger, suggesting that these classes are a mixture of objects in which the radio emission is connected to different phenomena (AGN vs. SF). 
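The quoted threshold luminosity can be checked directly from the equation for L$_\mathrm{obs}$. In the sketch below, the luminosity distance of $\sim$460 Mpc at $z\sim0.1$ is an assumed value for a standard flat cosmology, not taken from the paper's Eq. for $d_L$:

```python
import math

MPC_M = 3.0857e22    # metres per megaparsec
MJY_SI = 1e-29       # W m^-2 Hz^-1 per mJy

def l_obs(flux_mjy, d_l_mpc):
    """Observed monochromatic luminosity, L_obs = 4 pi d_L^2 S, in W/Hz."""
    d_l = d_l_mpc * MPC_M
    return 4.0 * math.pi * d_l**2 * flux_mjy * MJY_SI

# FIRST detection threshold (0.6 mJy/beam) at z ~ 0.1
# (d_L ~ 460 Mpc, assumed flat LCDM with H0 = 70, Om = 0.3)
threshold = l_obs(0.6, 460.0)   # ~1.5e22 W/Hz, matching the quoted ~10^22
```

This confirms that the FIRST flux limit translates to L$_\mathrm{obs,total} \sim 10^{22}$ W Hz$^{-1}$ at $z\sim0.1$.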
We also examined the morphologies of the galaxies where the radio emission is considered to originate from a star-forming region. None of the radio emission associated with these objects is identified with elliptical galaxies. \begin{deluxetable}{lccccc} \tablecolumns{6} \tablewidth{0pt} \tablecaption{Number of extended radio sources corresponding to different optical morphological classes. \label{ExtendedRadioOpticalNumbers}} \tablehead{ \multirow{2}{*}{\diagbox[innerwidth=1.3cm, height=8.5ex]{Radio}{Optical}} & \colhead{E + iE} & D & S + bS + iS & L+bL + iL & M \\ \multicolumn{1}{c}{} & (pE+piE) & & (pS + pbS + piS) & (pL+pbL) & (pM)} \startdata I & 265 (1) & 3 & - & - & -\\ pI & 144 & 1 & 1 & (1) & - \\ II & 710 (4) & 10 & 2 (2) & 1 & (1) \\ pII & 134 (3) & 2 & - & 1 (1) & - \\ I/II & 112 (1) & 2 & - & - & - \\ pI/II & 98 (1) & 1 & - & (1) & - \\ OI & 183 & 7 & - & (1) & - \\ pOI & 32 & 1 & - & - & - \\ OII & 208 (1) & 4 & - & 1 & - \\ pOII & 12 (1) & 1 & - & - & - \\ DD & 7 & 1 & - & - & -\\ pDD & 11 (1) & - & - & - & - \\ WAT & 264 (2) & 6 & - & - & (1)\\ pWAT & 36 & - & - & - & -\\ NAT & 101 & - & - & - & -\\ pNAT & 25 & - & - & - & -\\ HT & 16 & - & - & - & - \\ pHT & 11 & 1 & - & - & - \\ X & 6 & 1 & - & - & - \\ pX & 6 & 1 & - & - & - \\ Z & 18 & - & - & - & - \\ pZ & 7 & - & - & - & -\\ Halo & 3 & - & - & - & - \\ pHalo & 4 & 1 & - & - & - \\ NC & 17 & - & - & (1) & -\\ \hline \enddata \tablecomments{Values in brackets correspond to sources with possible classification.} \end{deluxetable} \begin{deluxetable*}{cccccccccccccc} \tablecolumns{14} \tablewidth{0pt} \tablecaption{Number of unresolved and elongated radio sources corresponding to different optical morphological classes. 
\label{CompactRadioOpticalNumbers}} \tablehead{ \multirow{2}{*}{\diagbox[innerwidth=1.3cm, height=8.5ex]{Radio}{Optical}} & \colhead{E} & iE & D & S & iS & bS & L & iL & bL & R & M & SFR & O\\ \multicolumn{1}{c}{} & (pE) & (piE) & & (pS) & (piS) & (pbS) & (pL) & & & (pR) & (pM) & & } \startdata \multicolumn{1}{l}{C} & 2,609 (249) & 88 (1) & 662 & 281 (371) & 9 (2) & 15 & 284 (169) & 0 & 7 & 16 (2) & 15 (4) & 1 & 0 \\ \multicolumn{1}{l}{E} & 13,326 (848) & 512 (5) & 3,141 & 2,377 (1,828) & 91 (7) & 117 (3) & 1,264 (607) & 14 & 32 & 71 (9) & 152 (40) & 5 & 3\\ \enddata \tablecomments{Values in brackets correspond to sources with possible classification.} \end{deluxetable*} \subsection{Redshift and luminosity distributions of the ROGUE~I sources} The sources of the ROGUE~I catalog cover a wide range of redshifts, $z = 0.0021 - 0.636$, and a wide range of total radio luminosities at 1.4 GHz, L$_\mathrm{obs,total} = 10^{18.86-26.59}$ W Hz$^{-1}$, as well as core luminosities, L$_\mathrm{obs,core} = 10^{18.86-26.26}$ W Hz$^{-1}$. Figure~\ref{redshiftDist} shows the distributions of the radio sources as a function of $z$ and L$_\mathrm{obs,total}$. The peak of the redshift distribution is at about 0.1, with a long tail towards higher redshifts. The redshift range of the extended sources is between 0.0162 and 0.5443, with L$_\mathrm{obs,total} = 10^{22.25-26.50}$ W Hz$^{-1}$. As can be inferred from the local radio luminosity function \citep[e.g.][]{Best12}, SF galaxies dominate at low luminosities in the distribution of L$_\mathrm{obs,total}$. At higher luminosities, where the extended sources are also found, the majority of the sources are probably AGNs. The evolution of the total and core luminosities with $z$ is shown in the top and bottom panels of Figure~\ref{Mr_Lrad_All}, respectively. The distribution of L$_\mathrm{obs,total}$ shows the detection threshold of the FIRST survey. 
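The k-correction applied to the extended sources (the rest-frame luminosity equation with $\alpha=-0.75$) is mild over this redshift range; a minimal sketch:

```python
def l_rest(l_obs, z, alpha=-0.75):
    """Rest-frame luminosity, L_rest = L_obs / (1+z)^(alpha+1),
    following the paper's convention with alpha = -0.75."""
    return l_obs / (1.0 + z) ** (alpha + 1.0)

# Even at the highest extended-source redshift (z = 0.5443) the
# correction factor is only about 10%.
factor = l_rest(1.0, 0.5443)
```

With $\alpha+1 = 0.25$, the correction never exceeds $\sim$10\% for the extended sources of the catalog.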
Extended radio sources are evidently shifted with respect to the whole population of the ROGUE~I sources. This is a result of the selection method: sources are classified as extended only when more than one component in the FIRST or the NVSS maps can be identified as belonging to one source. A source therefore has to have at least two components to be identified as extended, which raises the effective total radio flux density threshold compared to that of the unresolved or elongated radio sources. We also notice an offset of the extended sources in relation to all sources in the core radio luminosity distribution. This is due to the fact that at low luminosities the lobes of extended radio structures cannot be detected by the FIRST survey, as the low-luminosity (if any) extended emission is resolved out and only the core is detected. In such cases, the radio source is classified as compact or elongated (see Section~\ref{sec:methodology}). However, a hint that some of these objects can have extended emission comes from a comparison of the flux densities in the FIRST and the NVSS catalogs: in the presence of extended structure, NVSS should show an excess of radio flux density compared to FIRST. We find that $\sim$27\% of the compact and elongated sources have NVSS flux densities larger than the FIRST ones by 20\% or more \citep[see also the discussion in][]{vanVelzen15}. Therefore, these could be potential candidates for low-luminosity (mostly FR~I) extended radio sources. Figure~\ref{morphDist} shows the distribution of secure and possible FR~I and FR~II galaxies as a function of $z$ (top panel) and L$_\mathrm{rest,total}$ (bottom panel). The redshift ranges of FR~Is and FR~IIs are similar. This is unexpected, since the lower luminosity lobes of FR~I sources should be much harder to detect at larger redshifts. 
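The 20\% NVSS-over-FIRST excess criterion amounts to a one-line selection; a sketch with assumed variable names, using (total NVSS, FIRST core) flux pairs of the kind listed in the catalog table:

```python
def extended_candidate(s_nvss_mjy, s_first_mjy, min_excess=0.20):
    """Flag a compact/elongated source whose NVSS flux density exceeds the
    FIRST one by at least `min_excess` (20%), hinting at resolved-out
    extended emission."""
    return s_nvss_mjy >= (1.0 + min_excess) * s_first_mjy

# Example (NVSS, FIRST) flux pairs, in mJy, taken from Table 5:
pairs = [(7.2, 4.98), (1.3, 1.3), (3.3, 1.3)]
candidates = [p for p in pairs if extended_candidate(*p)]
```

Applied to the whole compact/elongated sample, this criterion selects the $\sim$27\% of sources quoted above.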
Both the position of the maximum and the width of the total radio luminosity distributions of the FR~Is and FR~IIs are very similar. This is surprising, since FR~IIs are considered to have larger radio luminosities. In the next section we discuss in detail the distribution of our sources in the radio--optical luminosity plane, the so-called Ledlow--Owen diagram \citep{Ledlow96}, and compare it to previous studies. \subsection{The radio-optical luminosity plane for FR~I and FR~II sources} Studies based on the most luminous radio sources (with a radio flux density larger than 8 Jy) detected in the 3C catalogs \citep{Edge59,Bennett62, Laing83} found that FR~I and FR~II radio sources can be separated by their radio luminosity: FR~II radio sources are more luminous, and the separating luminosity is about $2\times10^{25}$ W Hz$^{-1}$ at 0.178 GHz \citep{Fanaroff74}. Later studies \citep[e.g.][]{Owen94, Ledlow96} found that the luminosity at which these two types of radio sources are separated depends on the luminosity of the optical host galaxy: it is larger for more luminous hosts. This dependence of the FR~I/FR~II separation on the host luminosity suggested that, besides the jet power, the environment also plays a crucial role in shaping radio sources. However, surveys with lower flux density limits \citep[e.g.,][]{Gendre13, Miraghaei17, Capetti17b} show that the separation between FR~Is and FR~IIs is no longer discernible when extended down to lower radio luminosities. The area in the Ledlow--Owen diagram previously reserved for FR~Is is now populated also by radio-faint FR~II type sources. Recently, \citet{Capetti17a, Capetti17b} have published lists of FR~I (FRICAT) and FR~II (FRIICAT) radio sources obtained by {\it visual} inspection of the FIRST and the NVSS maps for optical galaxies up to $z < 0.15$ from the SDSS DR\,7 release and the radio AGN sample of \citet{Best12}. 
Since they also used {\it visual} identification for the radio morphological classification, it is useful to compare our results with theirs. In Figure~\ref{LOdiag} we show the Ledlow--Owen diagram for our FR~I and FR~II sources with secure classifications only (Table~\ref{Catalog}). Consistent with \citet{Capetti17b}, we find that most FR~IIs lie below the original division line in the Ledlow--Owen diagram. We note that the total number of FR~II sources in ROGUE~I is larger than given by FRIICAT for the same redshift limit (202 from ROGUE~I vs. 122 from FRIICAT). This is a result of the lower flux density limit adopted in ROGUE~I (0.6 mJy vs. 5 mJy in \citealt{Capetti17b}). The number of FR~II sources above the \citet{Ledlow96} division line is lower for the same $z$ limit (13 from ROGUE~I vs. 33 from FRIICAT). This is mainly due to the fact that \citet{Capetti17b} also include radio sources without a radio core, which are absent from the ROGUE~I catalog. We also note that the number of FR~I sources detected by us is lower for the same $z$ limit (99 from ROGUE~I vs. 209 from FRICAT). This results from their classifying as FR~Is also radio sources with only one detection in the FIRST catalog; in the ROGUE~I catalog such sources are classified as elongated. We note that \citet{Mingo19} also find about a factor of three more FR~I sources than FR~II sources in the LoTSS data release, owing to the significantly lower surface-brightness limit of the LoTSS survey as compared to the FIRST survey. \begin{figure}[!htb] \begin{center} \includegraphics[angle=0,scale=0.35]{Hist_z_AE_sq.pdf} \includegraphics[angle=0,scale=0.35]{Hist_Lrad_AE_sq.pdf} \caption{Histograms of $z$ (top panel) and L$_\mathrm{obs,total}$ (bottom panel) at 1.4 GHz of all (black solid lines) and secure as well as possible extended (violet dashed lines) radio sources listed in the ROGUE~I catalog. 
\label{redshiftDist}} \end{center} \end{figure} \begin{figure}[!htb] \begin{center} \includegraphics[angle=270,scale=0.28]{Lrad_z_total_sq.pdf} \includegraphics[angle=270,scale=0.28]{Lrad_z_core_sq.pdf} \caption{Distributions of $L_\mathrm{obs,total}$ vs. $z$ (top panel) and $L_\mathrm{obs,core}$ vs. $z$ (bottom panel) of all (black open circles) and extended (violet filled circles) sources listed in the ROGUE~I catalog. \label{Mr_Lrad_All}} \end{center} \end{figure} \begin{figure}[!htb] \begin{center} \includegraphics[angle=0,scale=0.35]{Hist_z_FRI_II.pdf} \includegraphics[angle=0,scale=0.35]{Hist_Lrad_FRI_II.pdf} \caption{Histograms of $z$ (top panel) and $L_\mathrm{rest,total}$ (bottom panel) of secure and possible FR~I (416 objects; red dashed lines) and FR~II (871 objects; blue solid lines) radio sources listed in the ROGUE~I catalog.} \label{morphDist} \end{center} \end{figure} \begin{figure}[!htb] \begin{center} \includegraphics[angle=0,scale=0.35]{Mr_Lrad_FR_I_II_ClineKolor.pdf} \caption{Distribution of $L_\mathrm{rest,total}$ at 1.4 GHz vs. absolute magnitude $M_{r}$ of the FR~I (269 sources; red triangles) and FR~II (730 sources; blue crosses) radio sources, using only secure classifications in the ROGUE~I catalog. The dotted line marks the division between the FR~I and FR~II types of radio sources \citep{Ledlow96}. Here we reproduce the division line with a correction factor of 0.34 mag resulting from the conversion of color from the Cousins to the SDSS filter system \citep[see also][]{Capetti17b}.} \label{LOdiag} \end{center} \end{figure} \subsection{Sources with uncommon radio structures}\label{uncommonSources} Comparing maps at the same frequencies but with different angular-scale sensitivities, we were able to classify some of the radio sources more precisely; hence, our morphological classifications may differ from those in the literature.
The following tables list all the objects for which we propose new morphological designations: giant radio sources (GRSs; Table~\ref{GNewObjects}), possible GRSs (Table~\ref{PGNewObjects}), double-double radio sources (Table~\ref{DDNewObjects}), X--shaped radio sources (Table~\ref{XNewObjects}), and Z--shaped radio sources (Table~\ref{ZshapedObjects}). We classify GRSs following the definition given by \cite{Kuzmicz.etal.2018a}, i.e., sources with a projected linear size $>$700 kpc. We measured the sizes of all the extended radio sources whose radio structure is larger than 2/3 of the analyzed map size (i.e., $\sim$660 kpc). In the case of sources possessing prominent hot spots, i.e., FR~IIs, the size was estimated as the sum of the lengths of both lobes, i.e., the distances from the core to the most distant FIRST/NVSS components. For structures whose brightness fades gradually, i.e., FR~Is, we measured the size manually, taking into account deconvolution with the synthesized beam. In a few cases, for example when (1) the lobes are faint and not listed in the FIRST/NVSS catalogs (i.e., in practice only one contour is present in the map), (2) the sizes are slightly smaller than 700 kpc ($\ga$690 kpc), or (3) the assigned final morphology is only ``possible'', the sources are considered GRS candidates. We also cross-matched our lists of double-double and X--shaped radio sources with those of \cite{Lal07, Cheung07, Kuzmicz17}. We found that 16 (including 12 possible) X--shaped radio sources from the ROGUE~I catalog do not appear in their lists and are therefore newly discovered sources. \begin{deluxetable*}{lccccccclcc} \small \tabletypesize{\scriptsize} \tablecolumns{11} \tablewidth{0pt} \tablecaption{New giant radio sources listed in ROGUE~I. \label{GNewObjects}} \tablehead{\colhead{} & \multicolumn{3}{c}{SDSS} & & \multicolumn{2}{c}{FIRST} & \multicolumn{2}{c}{Classification} & AS & LS \\ No.
& Plate & MJD & Fiber & z & $\alpha$ & $\delta$ & Optical & Radio & ($^{\prime\prime}$) & (kpc) \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) } \startdata 402$^{\dagger}$ & 290 & 51941 & 44 & 0.1353 & 12 37 45.9 & $-$01 14 16.2 & E & pDD & 693 & 1673 \\ 2,635$^{\star\dagger}$ & 447 & 51877 & 421 & 0.4029 & 08 45 25.5 & $+$52 29 15.8 & E & DD & 212 & 1155 \\ 3,684 & 528 & 52022 & 454 & 0.3212 & 13 36 03.5 & $+$03 07 45.4 & E & II & 159 & 749 \\ 10,340 & 1007 & 52706 & 415 & 0.2413 & 10 10 38.0 & $+$51 11 19.9 & E & II & 187 & 718 \\ 10,864 & 1048 & 52736 & 495 & 0.4340 & 14 53 31.1 & $+$48 26 35.4 & E & II & 164 & 934 \\ 11,817 & 1205 & 52670 & 248 & 0.1842 & 08 00 46.3 & $+$24 43 17.1 & E & II & 312 & 1003 \\ 11,827 & 1205 & 52670 & 491 & 0.1370 & 08 05 31.3 & $+$25 48 11.4 & E & II & 332 & 810 \\ 12,749 & 1281 & 52753 & 178 & 0.3645 & 13 12 16.3 & $+$48 47 45.4 & E & pWAT & 150 & 768 \\ 13,085 & 1301 & 52976 & 265 & 0.2426 & 09 11 54.7 & $+$08 12 31.0 & E & II & 208 & 802 \\ 14,373 & 1373 & 53063 & 554 & 0.0777 & 12 53 03.2 & $+$45 00 44.8 & E & I & 486 & 719 \\ 15,090 & 1415 & 52885 & 307 & 0.2821 & 16 50 25.3 & $+$21 44 57.8 & E & II & 168 & 723 \\ 15,353 & 1430 & 53002 & 5 & 0.1449 & 10 36 36.3 & $+$38 35 07.5 & E & I/II & 301 & 770 \\ 16,242 & 1573 & 53226 & 357 & 0.1481 & 16 22 06.0 & $+$24 49 16.6 & E & II & 463 & 1207 \\ 16,309 & 1576 & 53496 & 575 & 0.2659 & 16 20 31.1 & $+$27 17 37.5 & E & II & 209 & 862 \\ 17,467 & 1648 & 53171 & 370 & 0.2821 & 15 02 08.9 & $+$33 31 14.3 & E & II & 192 & 826 \\ 18,519$^\ddag$ & 1709 & 53533 & 491 & 0.1665 & 14 31 51.1 & $+$10 29 59.4 & E & pX & 271 & 778 \\ 19,682 & 1767 & 53436 & 67 & 0.3876 & 12 27 53.4 & $+$14 16 45.6 & E & II & 166 & 883 \\ 21,094 & 1841 & 53491 & 418 & 0.2384 & 14 31 03.4 & $+$33 45 41.6 & E & II & 192 & 731\\ 21,583 & 1920 & 53314 & 285 & 0.1883 & 07 46 33.7 & $+$17 08 09.6 & E & II & 438 & 1389 \\ 21,944 & 1939 & 53389 & 320 & 0.3597 & 09 23 16.2 & $+$28 54 57.9 & E & II & 
162 & 822 \\ 23,741 & 2087 & 53415 & 557 & 0.2150 & 09 19 42.2 & $+$26 09 24.1 & E & II & 213 & 749 \\ 24,327 & 2116 & 53854 & 319 & 0.2404 & 13 50 00.7 & $+$29 47 21.4 & E & II & 183 & 701 \\ 25,187 & 2154 & 54539 & 376 & 0.3584 & 15 08 58.5 & $+$28 26 28.2 & E & II & 182 & 921 \\ 25,302 & 2159 & 54328 & 102 & 0.3358 & 15 24 44.6 & $+$19 59 57.1 & E & II & 248 & 1203 \\ 25,511 & 2169 & 53556 & 29 & 0.1154 & 15 52 06.7 & $+$22 47 39.2 & D & I/II & 668 & 1407 \\ 25,565$^{\S}$ & 2171 & 53557 & 389 & 0.0683 & 15 52 22.4 & $+$22 33 11.8 & E & II & 578 & 760\\ 26,948 & 2284 & 53708 & 269 & 0.4100 & 09 01 36.7 & $+$21 46 33.8 & E & I/II & 130 & 716 \\ 27,059 & 2291 & 53714 & 114 & 0.0345 & 09 23 31.5 & $+$24 26 46.7 & D & I/II & 1080 & 746 \\ 28,725 & 2494 & 54174 & 488 & 0.1781 & 11 21 45.0 & $+$17 24 25.3 & E & I/II & 255 & 773 \\ 28,749 & 2495 & 54175 & 564 & 0.1665 & 11 23 32.3 & $+$20 04 17.6 & E & II & 263 & 755 \\ 29,643 & 2577 & 54086 & 54 & 0.1602 & 09 21 01.5 & $+$11 29 44.6 & E & II & 259 & 720\\ 29,804 & 2585 & 54097 & 327 & 0.1235 & 09 59 40.4 & $+$17 25 28.2 & E & II & 530 & 1183 \\ 29,989 & 2593 & 54175 & 397 & 0.1368 & 10 34 03.9 & $+$18 40 49.0 & E & II & 540 & 1316 \\ \hline \enddata \tablecomments{Columns: (1) catalog number; (2) plate number from SDSS; (3) MJD from SDSS; (4) Fiber number from SDSS; (5) redshift; (6) Right ascension from FIRST; (7) Declination from FIRST; (8) Optical morphological classification (Table~\ref{OpticalMorph}); (9) final radio morphological classification; (10) angular size; (11) projected linear size. \\ $^{\dagger}$classified also as a new double-double or possible double-double radio source.\\ $^\ddag$classified also as a new possible X--shaped radio source. 
\\ $^\S$formerly classified as FR~I radio galaxy \citep{Capetti17a}.\\$^\star$Object was included in the FR~II radio galaxy sample of \cite{KozielWierzbowska11}.} \end{deluxetable*} \begin{deluxetable*}{lccccccclcc} \small \tabletypesize{\scriptsize} \tablecolumns{11} \tablewidth{0pt} \tablecaption{Candidates for new giant radio sources listed in ROGUE~I.\label{PGNewObjects}} \tablehead{\colhead{} & \multicolumn{3}{c}{SDSS} & & \multicolumn{2}{c}{FIRST} & \multicolumn{2}{c}{Classification} & AS & LS \\ No. & Plate & MJD & Fiber & z & $\alpha$ & $\delta$ & Optical & Radio & ($^{\prime\prime}$) & (kpc) \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) } \startdata 370 & 287 & 52023 & 573 & 0.2510 & 12 14 34.6 & $+$00 47 28.3 & E & II & 204 & 807\\ 1,415 & 375 & 52140 & 399 & 0.2102 & 22 20 55.9 & $+$00 18 20.0 & E & II & 202 & 698\\ 2,500 & 442 & 51882 & 259 & 0.1603 & 08 17 36.1 & $+$49 59 31.6 & E & I/II & 251 & 699 \\ 7,161 & 817 & 52381 & 623 & 0.2707 & 16 38 09.9 & $+$40 58 39.9 & E & II & 167 & 698 \\ 7,346 & 832 & 52312 & 555 & 0.3242 & 09 17 17.9 & $+$44 34 26.1 & E & II & 146 & 692 \\ 11,034 & 1058 & 52520 & 395 & 0.4046 & 16 25 13.3 & $+$33 41 51.4 & E & I & 127 & 694 \\ 11,417 & 1175 & 52791 & 74 & 0.1994 & 16 52 47.4 & $+$32 34 59.4 & E & II & 244 & 810 \\ 12,374 & 1237 & 52762 & 105 & 0.1766 & 10 16 14.4 & $+$08 15 13.8 & E & pI/II & 278 & 837\\ 12,766 & 1282 & 52759 & 26 & 0.3318 & 13 30 41.8 & $+$48 27 54.8 & E & II & 255 & 1227 \\ 13,360 & 1318 & 52781 & 221 & 0.1071 & 12 57 17.6 & $+$56 39 12.1 & E & pI/II & 582 & 1148 \\ 14,074$^{\ddagger}$ & 1360 & 53033 & 175 & 0.0921 & 10 30 53.6 & $+$41 13 15.8 & E & pX & 530 & 915 \\ 14,686 & 1388 & 53119 & 40 & 0.1991 & 15 36 59.2 & $+$31 05 38.8 & E & I & 270 & 895 \\ 14,841 & 1396 & 53112 & 120 & 0.3350 & 14 41 35.0 & $+$41 56 32.7 & E & pI/II & 295 & 1429 \\ 16,484 & 1587 & 52964 & 238 & 0.0878 & 08 36 07.8 & $+$26 48 43.7 & E & pII & 552 & 912 \\ 19,048 & 1737 & 53055 & 197 & 0.1851 & 07 
48 18.8 & $+$45 44 46.3 & E & pI/II & 228 & 713 \\ 20,418$^{\dagger}$ & 1805 & 53875 & 413 & 0.1501 & 13 51 10.8 & $+$07 28 46.2 & pE & pDD & 338 & 891 \\ 25,432 & 2165 & 53917 & 363 & 0.2052 & 15 37 21.2 & $+$24 55 58.7 & E & pII & 217 & 736 \\ 25,587 & 2172 & 54230 & 332 & 0.0897 & 15 52 09.1 & $+$20 05 48.3 & E & pII & 1186 & 1998\\ 26,971 & 2285 & 53700 & 401 & 0.3289 & 09 02 33.7 & $+$20 23 43.9 & E & II & 145 & 694 \\ 29,884 & 2588 & 54174 & 389 & 0.2593 & 10 12 07.2 & $+$16 19 26.2 & E & pII & 206 & 834 \\ 29,900 & 2589 & 54174 & 382 & 0.4518 & 10 18 06.1 & $+$17 48 09.1 & E & I & 131 & 764 \\ 29,948 & 2591 & 54140 & 268 & 0.2982 & 10 24 24.4 & $+$17 09 17.2 & E & pII & 188 & 841 \\ \hline \enddata \tablecomments{Columns: (1) catalog number; (2) plate number from SDSS; (3) MJD from SDSS; (4) Fiber number from SDSS; (5) redshift; (6) Right ascension from FIRST; (7) Declination from FIRST; (8) Optical morphological classification (Table~\ref{OpticalMorph}); (9) final radio morphological classification; (10) angular size; (11) projected linear size.\\ $^{\dagger}$classified also as a new double-double or possible double-double radio source.\\ $^\ddag$classified also as a new possible X--shaped radio source. \\} \end{deluxetable*} \begin{deluxetable*}{lcccccccl} \small \tabletypesize{\scriptsize} \tablecolumns{9} \tablewidth{0pt} \tablecaption{New double-double radio sources listed in ROGUE~I.\label{DDNewObjects}} \tablehead{\colhead{} & \multicolumn{3}{c}{SDSS} & & \multicolumn{2}{c}{FIRST} & \multicolumn{2}{c}{Classification}\\ No. 
& Plate & MJD & Fiber & z & $\alpha$ & $\delta$ & Optical & Radio \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) } \startdata 2,635$^{\star\dagger}$ & 447 & 51877 & 421 & 0.4029 & 08 45 25.5 & $+$52 29 15.8 & E & DD \\ 8,602$^\star$ & 906 & 52368 & 169 & 0.1446 & 10 46 32.2 & $+$54 35 59.7 & E & DD \\ 9,180$^\star$ & 941 & 52709 & 201 & 0.0721 & 09 47 08.8 & $+$42 11 25.6 & E & DD\\ 19,429 & 1753 & 53383 & 486 & 0.1720 & 11 24 22.8 & $+$15 09 58.0 & E & DD \\\hline 402$^{\star\dagger}$ & 290 & 51941 & 44 & 0.1353 & 12 37 45.9 & $-$01 14 16.2 & E & pDD \\ 1,232$^\star$ & 358 & 51818 & 161 & 0.333 & 17 32 50.2 & $+$56 34 27.0 & E & pDD \\ 4,035 & 548 & 51986 & 18 & 0.1800 & 08 33 46.8 & $+$45 15 18.6 & E & pDD \\ 4,581 & 580 & 52368 & 461 & 0.0354 & 10 59 14.6 & $+$05 17 31.3 & E & pDD \\ 4,861 & 599 & 52317 & 129 & 0.1045 & 12 13 26.0 & $+$63 59 09.1 & E & pDD \\ 6,099 & 725 & 52258 & 79 & 0.1593 & 23 06 32.1 & -09 30 18.0 & E & pDD \\ 15,170 & 1419 & 53144 & 481 & 0.3038 & 16 08 10.8 & $+$32 54 18.9 & E & pDD \\ 16,107$^\star$ & 1465 & 53082 & 522 & 0.2249 & 13 43 00.4 & $+$46 27 19.9 & E & pDD \\ 20,418$^{\dagger}$ & 1805 & 53875 & 413 & 0.1501 & 13 51 10.8 & $+$07 28 46.2 & pE & pDD \\ 24,209$^\S$ & 2110 & 53467 & 344 & 0.0162 & 13 23 45.0 & $+$31 33 56.7 & E & pDD \\ 25,983 & 2218 & 53816 & 458 & 0.0732 & 11 29 12.2 & $+$27 33 14.1 & E & pDD \\ 30,866$^\S$ & 2656 & 54484 & 499 & 0.0226 & 12 08 05.6 & $+$25 14 14.1 & E & pDD \\\hline \enddata \tablecomments{Columns: (1) catalog number; (2) plate number from SDSS; (3) MJD from SDSS; (4) Fiber number from SDSS; (5) redshift; (6) Right ascension from FIRST; (7) Declination from FIRST; (8) Optical morphological classification (Table~\ref{OpticalMorph}); (9) final radio morphological classification. 
\\ $^\S$formerly classified as FR~I radio galaxy \citep{Kharb12}.\\ $^\star$ included in the FR~II radio galaxy sample of \cite{KozielWierzbowska11}.\\ $^{\dagger}$ classified also as a new giant or a candidate for a new giant radio source.\\} \end{deluxetable*} \begin{deluxetable*}{lcccccccl} \small \tabletypesize{\scriptsize} \tablecolumns{9} \tablewidth{0pt} \tablecaption{New X--shaped radio sources listed in ROGUE~I.\label{XNewObjects}} \tablehead{\colhead{} & \multicolumn{3}{c}{SDSS} & & \multicolumn{2}{c}{FIRST} & \multicolumn{2}{c}{Classification}\\ No. & Plate & MJD & Fiber & z & $\alpha$ & $\delta$ & Optical & Radio \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) } \startdata 18,205 & 1695 & 53473 & 492 & 0.2077 & 12 57 21.7 & $+$12 28 19.3 & E & X \\ 25,798$^\star$ & 2209 & 53907 & 286 & 0.2791 & 16 30 16.6 & $+$14 35 11.4 & E & X \\ 27,729 & 2368 & 53758 & 58 & 0.191 & 09 32 38.3 & $+$16 11 58.0 & E & X \\ \hline 3,155 & 484 & 51907 & 497 & 0.2698 & 09 09 51.0 & $+$58 47 07.0 & E & pX \\ 4,287 & 561 & 52295 & 303 & 0.3626 & 10 40 21.9 & $+$59 58 41.3 & E & pX \\ 6,713 & 776 & 52319 & 99 & 0.1112 & 11 37 21.4 & $+$61 20 00.9 & E & pX \\ 14,074$^{\dagger}$ & 1360 & 53033 & 175 & 0.0921 & 10 30 53.6 & $+$41 13 15.8 & E & pX \\ 18,519$^\dagger$ & 1709 & 53533 & 491 & 0.1665 & 14 31 51.1 & $+$10 29 59.4 & E & pX \\ 23,094 & 2012 & 53493 & 629 & 0.1148 & 11 44 27.2 & $+$37 08 32.4 & E & pX \\ \hline \enddata \tablecomments{Columns: (1) catalog number; (2) plate number from SDSS; (3) MJD from SDSS; (4) Fiber number from SDSS; (5) redshift; (6) Right ascension from FIRST; (7) Declination from FIRST; (8) Optical morphological classification (Table~\ref{OpticalMorph}); (9) final radio morphological classification. 
\\ $^\star$ included in the FR~II radio galaxy sample of \cite{KozielWierzbowska11}.\\ $^{\dagger}$ classified also as a new giant or a candidate for a new giant radio source.\\} \end{deluxetable*} Although sources with Z--shaped symmetry are traditionally included in the group of X--shaped sources \citep{Gopal-Krishna03, Cheung07}, the radio morphologies of these two classes are significantly different. The former class has a single pair of jets gradually (S--shaped) or abruptly (Z--shaped) changing their direction of propagation (Figure~\ref{fig:radioMorph}; Z--shaped source), while the latter possesses two pairs of lobes, the axis of the second pair also crossing the core, at an angle to the first pair (Figure~\ref{fig:radioMorph}; X--shaped source). This is in agreement with \citet{Roberts15,Roberts18}, who separate the two classes, identifying the former as a distinct category and terming the latter \textit{true} X--shaped sources. In Appendix~\ref{appendix}, we present the radio-optical overlays for the newly discovered sources and give notes on selected sources. In particular, see Section~\ref{Giants} and Figure~\ref{fig:NewGiantsMaps} for GRSs, Section~\ref{PGiants} and Figure~\ref{NewPGiantsMaps} for possible GRSs, Section~\ref{DD} and Figure~\ref{NewDDMaps} for DDs, Section~\ref{X} and Figure~\ref{NewXMaps} for X--shaped sources, and Section~\ref{Z} and Figure~\ref{Z} for Z--shaped sources. \begin{deluxetable*}{lcccccccc} \small \tabletypesize{\scriptsize} \tablecolumns{9} \tablewidth{0pt} \tablecaption{Z--shaped radio sources listed in ROGUE~I.\label{ZshapedObjects}} \tablehead{ \colhead{} & \multicolumn{3}{c}{SDSS} & & \multicolumn{2}{c}{FIRST} & \multicolumn{2}{c}{Classification}\\ No.
& Plate & MJD & Fiber & z & $\alpha$ & $\delta$ & Optical & Radio \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) } \startdata \hline 1,606 & 385 & 51877 & 375 & 0.1835 & 23 39 00.3 & $+$00 42 58.2 & E & Z \\ 2,798 & 456 & 51910 & 365 & 0.3485 & 02 43 20.6 & $-$07 14 45.9 & E & Z \\ 2,809 & 457 & 51901 & 193 & 0.0782 & 02 52 27.6 & $-$07 56 04.8 & E & Z \\ 4,142 & 552 & 51992 & 471 & 0.0992 & 09 02 36.8 & $+$52 03 48.0 & E & Z\\ 7,225 & 826 & 52295 & 491 & 0.0618 & 08 21 10.4 & $+$35 47 35.9 & E & Z \\ 7,502 & 842 & 52376 & 209 & 0.1489 & 12 04 25.3 & $+$03 45 09.3 & E & Z \\ 7,977$^{\dag}$ & 875 & 52354 & 521 & 0.1539 & 10 40 22.5 & $+$50 56 23.0 & E & Z \\ 11,229 & 1164 & 52674 & 103 & 0.1331 & 15 02 29.0 & $+$52 44 02.2 & E & Z \\ 12,408 & 1238 & 52761 & 550 & 0.0626 & 10 23 22.6 & $+$08 52 01.6 & E & Z\\ 12,544$^{\ddag}$ & 1269 & 52937 & 243 & 0.0788 & 08 39 15.8 & $+$28 50 39.1 & E & Z \\ 13,184 & 1307 & 52999 & 67 & 0.1615 & 10 04 56.7 & $+$09 47 04.7 & E & Z \\ 18,459 & 1707 & 53885 & 313 & 0.3442 & 14 18 13.3 & $+$09 52 37.0 & E & Z \\ 21,690 & 1926 & 53317 & 156 & 0.0959 & 08 18 54.1 & $+$22 47 46.1 & E & Z\\ 22,328 & 1971 & 53472 & 124 & 0.3522 & 12 32 11.6 & $+$31 30 56.6 & E & Z \\ 22,897$^\star$ & 2002 & 53471 & 462 & 0.0728 & 13 19 04.2 & $+$29 38 35.4 & E & Z \\ 25,124$^\S$ & 2151 & 54523 & 113 & 0.0541 & 15 04 57.1 & $+$26 00 58.3 & E & Z \\ 28,729 & 2494 & 54174 & 626 & 0.1422 & 11 24 57.4 & $+$17 17 43.2 & E & Z\\ 32,462 & 2954 & 54561 & 299 & 0.1161 & 15 26 42.0 & $+$00 53 30.1 & E & Z\\ \hline 6,195 & 733 & 52207 & 202 & 0.2641 & 21 49 39.7 & $+$10 57 27.4 & E & pZ \\ 10,652 & 1038 & 52673 & 475 & 0.0851 & 12 46 47.5 & $+$54 53 15.2 & E & pZ\\ 10,919 & 1051 & 52468 & 483 & 0.2531 & 15 23 33.5 & $+$45 03 36.6 & E & pZ\\ 13,950 & 1352 & 52819 & 491 & 0.3860& 15 05 57.1 & $+$37 02 07.2 & E & pZ \\ 18,391 & 1704 & 53178 & 2 & 0.0914 & 14 8 33.0 & $+$12 24 25.5 & E & pZ\\ 24,512 & 2123 & 53793 & 634 & 0.2939 & 14 12 24.4 & $+$27 17 59.7 & E & 
pZ\\ 28,012 & 2424 & 54448 & 561 & 0.1647 & 08 30 59.5 & $+$12 52 53.2 & E & pZ \\ \hline \enddata \tablecomments{Columns: (1) Catalog number; (2) Plate number from SDSS; (3) MJD from SDSS; (4) Fiber number from SDSS; (5) Redshift; (6) Right ascension from FIRST; (7) Declination from FIRST; (8) Optical morphology classification (Table~\ref{OpticalMorph}); (9) Final radio morphology classification. \\Objects earlier classified as: $^\dag$X-shaped \citep{Cheung07}, $^\ddag$WAT \citep{Donoghue93}, $^\star$FR~II \citep{KozielWierzbowska11}, and $^\S$FR~I with lobes \citep{Croston18}.} \end{deluxetable*} \subsection{Comparison with other catalogs} The catalog of \citet[][BH12 hereafter]{Best12} is a recent and widely explored catalog of radio sources with optical counterparts from SDSS\,DR\,7 \citep[see][]{Capetti17a, Capetti17b, Baldi18}. As stated in Table~\ref{AGNsamples}, the BH12 catalog was constructed by cross-matching SDSS\,DR\,7 with NVSS and FIRST using the automatic methods described in \citet{Best05}, with the modifications introduced in \citet{Donoso2009}. The BH12 catalog contains 18,286 radio objects selected with a flux density limit of 5 mJy. All the objects in this catalog were classified as AGNs (about 15,000) or SF galaxies (about 3,000) using three different classification schemes \citep[see Appendix A in][]{Best12}. Compared to BH12, the ROGUE~I catalog was selected using the same optical and radio surveys, but without applying any additional radio flux density limit. As we already mentioned, the SDSS catalogs contain galaxies with repeated observations; therefore, we performed our comparison based on the SDSS coordinates rather than on the plate, MJD, and fiber numbers. Out of the 14,383 BH12 sources that fulfill our spectrum and photometry quality criteria (see Section~\ref{sec:sample}), 11,882 sources are also in ROGUE~I. However, two of them are sources with off-centered spectra; therefore, the ROGUE~I and BH12 catalogs have 11,880 unique sources in common.
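The coordinate-based matching described above can be sketched as follows; the 3\hbox{$^{\prime\prime}$}\, radius is the one quoted in the text, while the coordinate lists and the brute-force nearest-neighbor search are illustrative assumptions (a production pipeline would use a spatial index or a dedicated cross-matching tool).

```python
import math

def angular_sep_arcsec(ra1, dec1, ra2, dec2):
    """Great-circle separation (haversine formula); inputs in degrees,
    output in arcseconds."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    a = (math.sin((d2 - d1) / 2.0) ** 2
         + math.cos(d1) * math.cos(d2) * math.sin((r2 - r1) / 2.0) ** 2)
    return math.degrees(2.0 * math.asin(math.sqrt(a))) * 3600.0

def match_within(optical, radio, radius_arcsec=3.0):
    """For each optical position, return the index of the nearest radio
    source within the search radius, or None. Brute force: fine for
    illustration, too slow for full catalogs."""
    matches = []
    for ra_o, dec_o in optical:
        best, best_sep = None, radius_arcsec
        for i, (ra_r, dec_r) in enumerate(radio):
            sep = angular_sep_arcsec(ra_o, dec_o, ra_r, dec_r)
            if sep <= best_sep:
                best, best_sep = i, sep
        matches.append(best)
    return matches

# Hypothetical positions (degrees): the second optical galaxy has no
# radio counterpart within 3".
optical = [(189.44, -1.238), (150.00, 20.00)]
radio = [(189.4401, -1.2381), (150.01, 20.00)]
```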
BH12 contains 2,500 sources that are not listed in ROGUE~I; however, none of these sources has a FIRST radio detection within 3\hbox{$^{\prime\prime}$}\, of the optical galaxy. On the other hand, the ROGUE~I catalog contains 20,728 sources not included in BH12, 846 of which have total radio flux densities higher than 5 mJy, including 169 \textit{extended}, 53 compact, and 578 elongated radio sources, as well as 46 SFRs. This comparison shows that even bright sources with radio cores can be missed in an automatic search; therefore, the selection criteria in such projects should be chosen with caution. Since ROGUE~I contains extended sources with assigned morphological types, our comparison should also be made with catalogs of extended sources. In the \citet{Lin10} catalog, SDSS DR\,6 was cross-matched with NVSS and FIRST. The \citeauthor{Lin10} sample is limited in redshift ($0.02 < z < 0.3$) and contains galaxies that are more luminous than the characteristic magnitude in the galaxy luminosity function \citep{Blanton03}. They applied a radio flux density limit of 3 mJy and a search radius of 3\hbox{$^\prime$}\, between the optical galaxy and the NVSS radio source. The \citet{Lin10} catalog contains about 10,500 objects, of which 1,040 have extended morphologies. \citeauthor{Lin10} proposed a classification scheme similar to the standard \citet{Fanaroff74} one, based on the ratio, r$_{S}$, of the separation between the brightest regions on either side of the host galaxy to the total size of the radio source. The comparison of ROGUE~I with the extended sources from \citet{Lin10} gives 505 common objects, 154 sources that are not in our optical galaxy sample, and 381 sources that have no FIRST detection within 3\hbox{$^{\prime\prime}$}. Among the common objects, the majority of sources with the highest values of r$_{S}$ are ROGUE~I FR~II radio sources.
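As an illustration of the \citeauthor{Lin10} scheme, the ratio r$_{S}$ can be computed as below. The 0.5 dividing threshold and the example numbers are our own illustrative assumptions, not values taken from either catalog.

```python
def fanaroff_riley_ratio(peak_offsets, total_size):
    """r_S: separation between the brightest regions on either side of the
    host galaxy, divided by the total source size. peak_offsets are the
    signed distances (same units as total_size) of the brightest component
    on each side of the optical galaxy."""
    near, far = peak_offsets
    return (abs(near) + abs(far)) / total_size

def edge_brightened(r_s, threshold=0.5):
    """FR II-like (edge-brightened) if r_S is large; the 0.5 threshold is
    an illustrative assumption, not the published dividing value."""
    return r_s >= threshold

# Hypothetical FR II-like source: hot spots near the lobe ends (r_S = 0.95).
r2 = fanaroff_riley_ratio((-45.0, 50.0), 100.0)
# Hypothetical FR I-like source: brightness peaks close to the core (r_S = 0.22).
r1 = fanaroff_riley_ratio((-10.0, 12.0), 100.0)
```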
At lower values of r$_{S}$, more sources with complex structures are found, in agreement with both our and the \citeauthor{Lin10} classification schemes. However, we note that for some sources the values of this ratio do not match their ROGUE~I morphology. This results from differing identifications of the radio components belonging to a given radio source. It shows that the proper identification of all parts of a radio source will be a challenging problem in future automatic searches. It also shows that size measurements and classifications similar to those proposed by \citeauthor{Lin10} can be inaccurate in the case of bent sources or sources with more than one pair of lobes \citep[see also comments in][]{Mingo19}. \section{Summary} \label{sec:discussion} We have presented ROGUE~I, the \emph{largest handmade} catalog of radio sources associated with optical galaxies. It has been constructed using the SDSS DR\,7 spectroscopic catalog of galaxies and the FIRST and the NVSS radio catalogs. All ROGUE~I objects have spectroscopic redshifts and good quality optical spectra that can be used to derive basic host galaxy properties, as well as stellar velocity dispersions from which BH masses can be estimated. ROGUE~I consists only of sources with a central FIRST component which, in the case of AGNs, can be interpreted as a radio core. A second catalog, ROGUE~II, which will deal with radio galaxies with SDSS counterparts but \textit{without} a FIRST core, is in preparation. ROGUE~I provides the morphological classification of the host galaxies as well as of the associated radio sources, and a careful estimation of the total radio flux densities. The main results of our visual classifications are as follows: \begin{enumerate} \item Unresolved (compact) and elongated radio sources dominate the ROGUE~I catalog, constituting $\sim$90\% of the total number. About 8\% of the sources in the sample exhibit extended morphology.
\item Radio sources (secure and possible classifications) of \citeauthor{Fanaroff74} type I, type II, hybrid, and one-sided morphologies constitute $\sim$78\% of the extended sources, bent (wide-angle, narrow-angle, head-tail) sources $\sim$18\%, and sources with intermittent or reoriented jet activity (double-double, X--shaped, and Z--shaped sources) $\sim$3\%. \item Although the FIRST and the NVSS catalogs, together with SDSS DR\,7, have been extensively explored in the past, our selection procedure allowed us to discover or reclassify a number of objects as giant, double--double, X--shaped, and Z--shaped radio sources. Moreover, we have classified much larger samples of \citeauthor{Fanaroff74} type I and II sources (416 and 871, respectively, including both secure and possible assignments) than those presented in \citet{Capetti17a,Capetti17b}, owing to the larger $z$ range and lower radio detection threshold of our study. We note that the ROGUE~II catalog, which will comprise radio sources without cores, will further increase the numbers of extended radio sources. \item We identify a total of 81 GRSs (55 new and 26 from the sample of \citealt{Kuzmicz.etal.2018a}) among the group of 2,503 extended radio sources in ROGUE~I. This corresponds to $\sim$3\% of the extended radio source population, in agreement with the fraction of GRSs in the local Universe \citep[i.e., $z<$1;][]{Saripalli12}. \item The optical morphological classification of the host galaxies revealed that $\sim$62\% of the radio sources detected at 1.4\,GHz have elliptical, $\sim$17\% spiral, and $\sim$7\% lenticular hosts. A significant number of sources ($\sim$12\%) have host galaxies with distorted morphologies. \item In accord with earlier studies, most of the FR~II radio sources in ROGUE~I have low radio luminosities, comparable to the luminosities of the FR~I sources.
\end{enumerate} Comparisons with automatically selected catalogs show that visual analysis, although more time-consuming, still gives better results, and that the selection and classification schemes used in such a procedure can be more complex than in automatic searches. Although our method would be very difficult to apply to catalogs based on large radio surveys such as LOFAR or EMU \citep{Norris11}, our sample can serve as a database for training automatic methods of radio source identification and classification \citep[e.g.,][]{Alger18, Ma18}. \acknowledgements The authors thank the anonymous referee for the useful comments, Gra\.zyna Stasi{\'n}ska, Natalia Vale Asari, Marek Sikora, and Marek Jamrozy for discussions, and Marian Soida for his help in setting up the computing facility. We also thank Richard L. White for queries related to the FIRST database, and Benjamin Alan Weaver and Aniruddha R. Thakar for help with the SDSS data. DKW acknowledges the support of the Polish National Science Centre (NCN) via grant 2016/21/B/ST9/01620. AG acknowledges the full support of the NCN via grant 2018/29/B/ST9/02298. N\.Z's work is supported by the NCN through the grant DEC-2014/15/N/ST9/05171. The Digitized Sky Survey was produced at the Space Telescope Science Institute under US government grant NAGW-2166. Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions.
The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington. We thank the staff of the GMRT that made these observations possible. GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research. \clearpage \newpage
\section{INTRODUCTION} Superconductivity in organic conductors was first discovered in the ion radical salt (TMTSF)$_2$PF$_6$.\cite{Jerome80} Later on, it was found in most Bechgaard [(TMTSF)$_2$X] and Fabre [(TMTTF)$_2$X] salts. These salts are based on the organic molecules tetramethyltetraselenafulvalene (TMTSF) and tetramethyltetrathiafulvalene (TMTTF). The monovalent anion X can be either a centrosymmetric (PF$_6$, AsF$_6$, etc.) or a non-centrosymmetric (ClO$_4$, ReO$_4$, NO$_3$, FSO$_3$, SCN, etc.) inorganic molecule. (See Refs.~\onlinecite{Jerome82,Ishiguro90} for previous reviews on these compounds.) Although they are definitely not ``high-$T_c$'' superconductors -- the transition temperature is of the order of 1 K --, these quasi-one-dimensional (quasi-1D) conductors share several properties of high-$T_c$ superconductors and other strongly-correlated electron systems such as layered organic superconductors\cite{McKenzie98,Lefebvre00} or heavy-fermion materials.\cite{Flouquet05} The metallic phase of all these conductors exhibits unusual properties which cannot be explained within the framework of Landau's Fermi liquid theory and remain to a large extent to be understood. The superconducting phase is unconventional (not $s$-wave). Magnetism is ubiquitous in these correlated systems and might provide the key to the understanding of their behavior. The quest for superconductivity in organic conductors was originally motivated by Little's proposal that highly polarizable molecules could lead -- {\it via} an excitonic pairing mechanism -- to tremendously large transition temperatures. 
Early efforts towards the chemical synthesis of such compounds were not successful -- as far as superconductivity is concerned -- but led to the realization of a 1D charge transfer salt (TTF-TCNQ) undergoing a Peierls instability at low temperatures.\cite{Jerome04} Attempts to suppress the Peierls state and stabilize a conducting (and possibly superconducting) state by increasing the 3D character of this 1D conductor proved to be unsuccessful. Organic superconductivity was eventually discovered in the Bechgaard salt (TMTSF)$_2$PF$_6$ under 9~kbar of pressure.\cite{Jerome80} It was subsequently found in other members of the (TMTSF)$_2$X series. Most of the Bechgaard salts are insulating at ambient pressure and low temperatures,\cite{Bechgaard80} and it came as a surprise that the insulating state of these materials is a spin-density wave (SDW) rather than an ordinary Peierls state.\cite{Jerome82} The important part played by magnetism in these compounds was further revealed when it was found that their phase diagram shows only a part of a larger sequence of ordered states, which includes the N\'eel and the spin-Peierls phases of their sulfur analogs, the Fabre salts of the (TMTTF)$_2$X series.\cite{Bourbonnais99} The charge transfer from the organic molecules to the anions leads to a commensurate band filling of 3/4, which follows from the 2:1 stoichiometry.
The metallic character of these compounds at high enough temperature is due to the delocalization of carriers via the overlap of $\pi$-orbitals between neighboring molecules along the stacking direction ($a$ axis) (Fig.~\ref{fig:structure}).\cite{Jerome82} The electronic dispersion relation obtained from quantum chemistry calculations (extended H\"uckel method) is well approximated by the following tight-binding form\cite{Grant83,Yamaji82,Ducasse86,Balicas94} \begin{eqnarray} \epsilon({\bf k}) &=& -2t_a \cos(k_aa/2)-2t_{\perp b} \cos(k_b b) -2t_{\perp c} \cos(k_c c) \nonumber \\ &\simeq& v_F(|k_a|-k_F) -2t_{\perp b} \cos(k_b b) - 2t'_{\perp b} \cos(2k_b b) \nonumber \\ && -2t_{\perp c} \cos(k_c c) + \mu , \label{dispersion} \end{eqnarray} where it is assumed that the underlying lattice is orthorhombic. This expression is a simplification of the dispersion relation -- the actual crystal lattice symmetry is triclinic -- but it retains the essential features. The conduction band along the chain direction has an overall width $4t_a$ ranging between 0.4 and 1.2 eV, depending on the organic molecule (TMTSF or TMTTF) and the anion. As the electronic overlaps in the transverse $b$ and $c$ directions are much weaker than along the organic stacks, the dispersion law is strongly anisotropic, $t_{\perp b}/t_a\simeq 0.1$ and $t_{\perp c}/t_{\perp b}\simeq 0.03$, and the Fermi surface consists of two open warped sheets (Fig.~\ref{fig:structure}). In the second line of Eq.~(\ref{dispersion}), the electronic dispersion is linearized around the two 1D Fermi points $\pm k_F$, with $v_F$ the Fermi velocity along the chains ($\mu$ is the chemical potential). The next-nearest-chain hopping $t'_{\perp b} \sim t_{\perp b}^2/t_a$ is introduced in order to keep the shape of the Fermi surface unchanged despite the linearization. The anions located in centrosymmetric cavities lie slightly above or below the molecular planes.
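As a quick numerical check of Eq.~(\ref{dispersion}), the sketch below evaluates the linearized dispersion and the resulting warping of the Fermi sheet, $|k_a|-k_F$ as a function of $k_b b$ (at $k_c=0$, energies measured from the chemical potential). The absolute scale ($4t_a = 1$ eV) and the choice $t'_{\perp b} = t_{\perp b}^2/t_a$ are illustrative; the anisotropy ratios are those quoted in the text.

```python
import math

# Illustrative parameters: 4 t_a = 1 eV (the text quotes a 0.4-1.2 eV
# bandwidth), with the anisotropy ratios given in the text.
T_A = 0.25                     # eV
T_PERP_B = 0.1 * T_A           # t_perp_b / t_a ~ 0.1
T_PERP_B2 = T_PERP_B**2 / T_A  # t'_perp_b ~ t_perp_b^2 / t_a (leading term)
T_PERP_C = 0.03 * T_PERP_B     # t_perp_c / t_perp_b ~ 0.03

def epsilon(dka, kb_b, kc_c=0.0, vf=1.0):
    """Linearized dispersion measured from the chemical potential:
    dka = |k_a| - k_F, with vf setting the (arbitrary) momentum units."""
    return (vf * dka
            - 2.0 * T_PERP_B * math.cos(kb_b)
            - 2.0 * T_PERP_B2 * math.cos(2.0 * kb_b)
            - 2.0 * T_PERP_C * math.cos(kc_c))

def fermi_surface_ka(kb_b, kc_c=0.0, vf=1.0):
    """Warping of the right Fermi sheet: solve epsilon = 0 for |k_a| - k_F."""
    return (2.0 * T_PERP_B * math.cos(kb_b)
            + 2.0 * T_PERP_B2 * math.cos(2.0 * kb_b)
            + 2.0 * T_PERP_C * math.cos(kc_c)) / vf

# Trace the warping across the transverse Brillouin zone.
warp = [fermi_surface_ka(i * math.pi / 50.0) for i in range(51)]
```

With these conventions the peak-to-peak warping along $k_b$ equals $4t_{\perp b}$ exactly, since the $t'_{\perp b}$ and $t_{\perp c}$ terms take the same value at $k_b b = 0$ and $\pi$.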
This structure leads to a dimerization of the organic stacks and a (weak) gap $\Delta_D$, thus making the hole-like band effectively half-filled at sufficiently low energy or temperature.\cite{Emery82,Barisic81} (See Refs.~\onlinecite{Jerome82,Jerome04,Bourbonnais99} for a detailed discussion of the structural properties of quasi-1D organic conductors.) In the presence of interactions, the commensurate band filling introduces Umklapp scattering, which affects the nature of the possible phases in these materials. \begin{figure} \centerline{\includegraphics[bb=95 570 400 695,width=12cm]{structure-FS.ps}} \caption{(Left) A side view of the Bechgaard/Fabre salt crystal structure with the electron orbitals of the organic stacks (courtesy of J. Ch. Ricquier). (Right) Electronic dispersion relation and projected 2D Fermi surface of (TMTTF)$_2$Br (reprinted with permission from Ref.~\onlinecite{Balicas94}. Copyright 1994 by EDP Sciences).} \label{fig:structure} \end{figure} What is remarkable about these electronic systems is the variety of ground states that can be achieved either by chemical means -- substituting sulfur for selenium in the organic molecule or changing the nature of the anion (its size or symmetry) -- or by applying pressure (Fig.~\ref{fig:dia_gene}). At low pressure, members of the sulfur series are Mott insulators (MI) from which either a lattice-distorted spin-Peierls (SP) state -- often preceded by a charge-ordered (CO) state -- or a commensurate-localized antiferromagnetic state (AF) can develop. On the other hand, itinerant antiferromagnetism (spin-density wave (SDW)) or superconductivity is found in the selenide series. Under pressure, the properties of the Fabre salts evolve towards those of the Bechgaard salts.
The compound (TMTTF)$_2$PF$_6$ spans the entire phase diagram as pressure increases up to 50 kbar or so (Fig.~\ref{fig:dia_pf6}),\cite{Jaccard01,Wilhelm01,Adachi00} thus showing the universality of the phase diagram in Fig.~\ref{fig:dia_gene}.\cite{Bourbonnais98} \begin{figure} \centerline{\includegraphics[bb=64 196 569 668,width=8cm]{dia_gen.eps}} \caption{The generic phase diagram of the Bechgaard/Fabre salts as a function of pressure or anion X substitution. LL: Luttinger liquid, MI: Mott insulator, CO: Charge order, SP: Spin-Peierls, AF: antiferromagnetism, SC: superconductivity, FL: Fermi liquid.} \label{fig:dia_gene} \end{figure} A large number of both theoretical and experimental works have been devoted to the understanding of the normal phase and the mechanisms leading to long-range order at low temperature. The presence of antiferromagnetism over a large pressure range does indicate that repulsive interactions among carriers are important. The low-dimensionality of the system is also expected to play a crucial role. On the one hand, in the presence of repulsive interactions a strongly anisotropic Fermi surface with good nesting properties is predominantly unstable against the formation of an SDW state which is reinforced at low temperature by commensurate band filling. On the other hand, when the temperature exceeds the transverse dispersion $\sim t_{\perp b}$, 3D (or 2D) coherent electronic motion is suppressed and the conductor behaves as if it were 1D; the Fermi liquid picture breaks down and the system becomes a Luttinger liquid.\cite{Haldane81,Giamarchi_book} The relevance of 1D physics for the low-temperature properties ($T\lesssim t_{\perp b}$), as well as a detailed description of the crossover from the Luttinger liquid to the Fermi liquid, is one of the most important issues in the debate surrounding the theoretical description of the normal state of these materials. 
As far as low-temperature phases are concerned, a chief objective is to reach a good description of the superconducting phase -- the symmetry of the order parameter is still under debate -- and the mechanisms leading to superconductivity. Owing to the close proximity of superconductivity and magnetism in the phase diagram of Fig.~\ref{fig:dia_gene}, it is essential to first discuss the origin of antiferromagnetism in both series of compounds. \section{N\'EEL ANTIFERROMAGNETISM AND SPIN-DENSITY WAVE} \subsection{Fabre salts at ambient pressure: Mott-insulator regime} The Fabre salts (TMTTF)$_2$X at ambient pressure are located on the left of the phase diagram in Fig.~\ref{fig:dia_gene}. Both the nature of correlations and the mechanism of long-range order at low temperature are now rather well understood. Below the temperature $T_\rho\sim 100$ K (see Fig.~\ref{fig:dia_gene}), the resistivity develops a thermally activated behavior\cite{Coulon82a} and the system enters a Mott-insulator regime. The corresponding charge gap $\Delta_\rho\sim \pi T_\rho$ can be deduced from $T_\rho$ and turns out to be larger than the (bare) transverse bandwidth $t_{\perp b}$, which in turn suppresses any possibility of transverse single-particle band motion and makes the system essentially one-dimensional. The charge gap $2\Delta_\rho$ is also directly observed in the optical conductivity.\cite{Schwartz98} The members of the (TMTTF)$_2$X series thus behave as typical 1D Mott insulators below $T_\rho$ with the carriers confined along the organic stacks -- as a result of the Umklapp scattering due to the commensurability of the electronic density with the underlying lattice.\cite{Emery82,Barisic81} This interpretation agrees with the absence of any anomaly in the spin susceptibility at $T_\rho$,\cite{Wzietek93} in accordance with the spin-charge separation characteristic of 1D systems.\cite{Giamarchi_book} It is further confirmed by measurements of the spin-lattice relaxation rate $1/T_1$.
The Luttinger liquid theory predicts\cite{Bourbonnais89,Bourbonnais93} \begin{equation} \frac{1}{T_1} = C_0 T \chi_s^2(T) + C_1 T^{K_\rho} , \label{T1} \end{equation} where $C_0$ and $C_1$ are temperature independent constants. $\chi_s(T)$ is the uniform susceptibility and $K_\rho$ the Luttinger liquid charge stiffness parameter. The two contributions in (\ref{T1}) correspond to paramagnons or spinons ($q\simeq 0$) and AF spin fluctuations ($q\simeq 2k_F$). Both the temperature dependence of $\chi_s(T)$ and the presence of AF fluctuations lead to an enhancement of $1/T_1$ with respect to the Korringa law $(T_1T)^{-1}={\rm const}$ which holds in higher-dimensional metals. In a 1D Mott insulator $K_\rho=0$, which leads to $T_1^{-1}=C_0 T \chi_s^2(T)+C_1$ in good agreement with experimental measurements of $T_1$ and $\chi_s$.\cite{Wzietek93} The low-energy excitations in the Mott-insulator regime are 1D spin fluctuations. By lowering the temperature, these fluctuations can propagate in the transverse direction and eventually drive an AF transition. This transition is not connected to Fermi surface effects. The condition $\Delta_\rho>t_{\perp b}$ precludes a single-particle coherent motion in the transverse direction, and the concept of Fermi surface remains ill defined in the Fabre salts at ambient pressure. AF long-range order comes from interchain transfer of bound electron-hole pairs leading to a kinetic exchange interaction $J_\perp$ between spin densities on neighboring chains -- much in analogy with the exchange interaction between localized spins in the Heisenberg limit. 
An effective Hamiltonian can be derived from a renormalization group (RG) calculation,\cite{Bourbonnais88,Bourbonnais91} \begin{equation} H_\perp = J_\perp \int dx \sum_{\mean{i,j}} {\bf S}_i(x) \cdot {\bf S}_j(x) , \;\;\;\;\; J_\perp \simeq \frac{\xi_\rho}{a} \frac{t_{\perp b}^{*2}}{\Delta_\rho} , \label{Jperp} \end{equation} where $t_{\perp b}^*$ is the effective interchain hopping at the energy scale $\Delta_\rho$ and $a$ the lattice spacing along the chain. The sum in Eq.~(\ref{Jperp}) is over nearest-neighbor chains. The naive value $t_{\perp b}^{*2}/\Delta_\rho$ of the exchange interaction $J_\perp$ is enhanced by the factor $\xi_\rho/a$, where $\xi_\rho=v_F/\Delta_\rho$ is the intrachain coherence length induced by the Mott gap, over which virtual interchain hoppings can take place. Within a mean-field treatment of $H_\perp$, the condition for the onset of long-range order is given by $J_\perp \chi_{\rm 1D}(2k_F,T)=1$, where $\chi_{\rm 1D}(2k_F,T)\sim (T/\Delta_\rho)^{-1}$ is the exact power-law form of the 1D AF spin susceptibility. This yields a N\'eel temperature \begin{equation} T_N \sim \frac{t_{\perp b}^{*2}}{\Delta_\rho} . \label{TN} \end{equation} Since $T_\rho$ and $\Delta_\rho$ decrease under pressure (Fig.~\ref{fig:dia_gene}), Eq.~(\ref{TN}) predicts an increase of $T_N$ with pressure -- assuming a weak pressure dependence of $t^*_{\perp b}$ -- as observed experimentally (see Fig.~\ref{fig:dia_gene}). The relation $T_NT_\rho \sim t_{\perp b}^{*2} \sim {\rm const}$ has been observed in (TMTTF)$_2$Br.\cite{Brown97} \subsection{Bechgaard salts: itinerant magnetism} \label{subsec:itinerant} With increasing pressure, $T_\rho$ drops and finally merges with the AF transition line at $P_m$, beyond which there is no sign of a Mott gap in the normal phase. The Fabre salts then tend to behave similarly to the Bechgaard salts (Fig.~\ref{fig:dia_gene}). The change of behavior at $P_m$ is usually attributed to a deconfinement of carriers, i.e.
a crossover from a Mott insulator to a -- metallic -- Luttinger liquid. At lower temperature, single-particle transverse hopping is expected to become relevant and induce a dimensional crossover at a temperature $T_x$ from the Luttinger liquid to a 2D or 3D metallic state. With increasing pressure, the AF transition becomes predominantly driven by the instability of the whole warped Fermi surface due to the nesting mechanism. Although there is a general agreement on this scenario, there is considerable debate on how the dimensional crossover takes place and the nature of the low-temperature metallic state. On the theoretical side, simple RG arguments indicate that the crossover from the Luttinger liquid to the 2D regime takes place at the temperature\cite{Bourbonnais84} \begin{equation} T_x \sim \frac{t_{\perp b}}{\pi} \left( \frac{t_{\perp b}}{t_a}\right)^{\frac{1-K_\rho}{K_\rho}} , \label{Tx} \end{equation} where $K_\rho$ is the Luttinger liquid parameter. For non-interacting electrons ($K_\rho=1$), Eq.~(\ref{Tx}) would give $T_x\sim t_{\perp b}$: the 2D Fermi surface is irrelevant when temperature is larger than the dispersion in the $b$ direction. For interacting electrons ($K_\rho\neq 1$), the interchain hopping amplitude $t_{\perp b}$ is reduced to an effective value $t_{\perp b}^*$ and the dimensional crossover occurs at a lower temperature $T_x\sim t_{\perp b}^*<t_{\perp b}$. A detailed theoretical picture of the dimensional crossover is still lacking. In particular, whether it is a sharp crossover or rather extends over a wide temperature range -- as shown by the shaded area in Fig.~\ref{fig:dia_gene} -- is still an open issue. \subsubsection{The strong-correlation picture} Some experiments seem to indicate that correlations still play an important role even in the low-temperature phase of the Bechgaard salts. 
For instance, a significant enhancement of $1/T_1T$ with respect to the Korringa law -- although weaker than in the Fabre salts at ambient pressure -- is still present.\cite{Wzietek93} This behavior has been explained in terms of 1D spin fluctuations persisting down to the dimensional crossover temperature $T_x\sim 10$ K, below which the Korringa law is recovered.\cite{Wzietek93,Bourbonnais93} The restoration of a plasma edge in the transverse $b'$ direction at low temperature in (TMTSF)$_2$PF$_6$ -- absent in the Fabre salts -- suggests the gradual emergence of a coherent motion in the $(ab)$ planes below $T_x \sim 100$ K.\cite{Jacobsen81,Jacobsen83} (${\bf b}'$ is normal to ${\bf a}$ and ${\bf c}$ in the $(ab)$ plane. It differs from ${\bf b}$ due to the triclinic structure.) However, the frequency dependence of the optical conductivity is inconsistent with a Drude-like metallic state.\cite{Cao96,Dressel96,Schwartz98} The low-energy peak carries only 1\% of the total spectral weight and is too narrow to be interpreted as a Drude peak with a frequency-independent scattering time. It has been proposed that this peak is due to a collective mode that bears some similarities with the sliding of a charge-density wave -- an interpretation supported by the new phonon features that emerge at low temperature.\cite{Cao96} Furthermore, 99\% of the total spectral weight is found in a finite energy peak around 200 cm$^{-1}$. 
It has been suggested that this peak is a remnant of a quarter-filled Mott gap $\Delta_\rho$, observed in the less metallic Fabre salts at ambient pressure.\cite{Giamarchi97,Schwartz98} In this picture, (TMTSF)$_2$PF$_6$ is close to the border between a Mott insulator and a Luttinger liquid, and the low-temperature metallic behavior is made possible by the interchain coupling.\cite{Schwartz98,Vescoli98,Giamarchi04} A different interpretation of the far-infrared optical conductivity spectrum has been proposed, based on the weak half-filling character of the band for interactions in the Hubbard limit.\cite{Favand96} The longitudinal resistivity in (TMTSF)$_2$PF$_6$ is found to be metallic, with a $T^2$ law between the SDW transition and 150 K, crossing over to a sublinear temperature dependence above 150 K with an exponent in the range $0.5-1$.\cite{Moser00,Auban99} While this observation would be consistent with a dimensional crossover to a low-temperature Fermi liquid regime taking place at $T_x\sim 150$ K, the transverse resistance $\rho_b$ along the $b$ axis apparently fails to show the expected $T^2$ behavior. Given the difficulties of a direct dc measurement, owing to non-uniform current distributions between contacts, conflicting results have been published in the literature.\cite{Mihaly00,Moser00} Nevertheless, below $T\sim 80$ K $\rho_b$ can be deduced from $\rho_a\sim T^2$ and $\rho_c\sim T^{1.5}$ using a tunneling argument, which yields $\rho_c=(\rho_a\rho_b)^{1/2}$ and therefore $\rho_b\sim T$.
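The tunneling argument can be made explicit: squaring $\rho_c=(\rho_a\rho_b)^{1/2}$ and inserting the measured power laws $\rho_a\propto T^2$ and $\rho_c\propto T^{3/2}$ gives
\[
\rho_b \;=\; \frac{\rho_c^{2}}{\rho_a} \;\propto\; \frac{T^{3}}{T^{2}} \;=\; T .
\]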
Moreover, contactless -- microwave -- transverse conductivity measurements in the (TMTSF)$_2$PF$_6$ salt fail to reveal the emergence of a Fermi liquid $T^2$ temperature dependence of the resistivity in the $b$ direction in this temperature range.\cite{Fertey99} As far as $\rho_c$ is concerned, a maximum around $T_{\rm max}\sim 80$ K has been observed, with a metallic -- though incoherent -- behavior $\rho_c\sim T^{1.5}$ at lower temperature.\cite{Moser98} $T_{\rm max}$ is highly sensitive to pressure, whereas the interchain hopping $t_{\perp b}$ is not. Therefore, $T_{\rm max}$ cannot be directly identified with $t_{\perp b}$, but could be related to a -- weakly -- renormalized value $t_{\perp b}^*\sim T_x$ in agreement with predictions of the Luttinger liquid theory [see Eq.~(\ref{Tx})]. The transport measurements seem to be indicative of a gradual crossover between a Luttinger liquid and a Fermi liquid occurring in the temperature range $10-80$ K. The onset of 3D coherence and Fermi liquid behavior would then be related to the interplane coupling $t_{\perp c}$ between $(a,b)$ planes.\cite{Moser98} The absence of Fermi liquid behavior down to very low temperatures in the Bechgaard salts seems to be further supported by photoemission experiments. ARPES fails to detect quasi-particle features or the trace of a Fermi surface at 150 K.\cite{Zwick97} Similar conclusions were deduced from integrated photoemission at 50 K.\cite{Dardel93} However, photoemission results -- e.g. the absence of dispersing structure and a power-law frequency dependence which is spread over a large energy scale of the order of 1 eV -- do not conform with the predictions of the Luttinger theory and might be strongly influenced by surface effects. The existence of strong correlations suggests that the kinetic interchain exchange $J_\perp$, which drives the AF transition in the sulfur series, still plays an important role in the Bechgaard salts. 
In this picture, the decrease of $T_N$ with increasing pressure is due both to the decrease of $J_\perp$ and to the deterioration of the Fermi surface nesting. This scenario is supported by RG calculations.\cite{Bourbonnais91} All the experiments mentioned so far favor different -- and sometimes incompatible -- scenarios for the dimensional crossover. However, the high-temperature phase of the Bechgaard salts is always analyzed on the basis of the Luttinger liquid theory. A consistent interpretation of the experimental results therefore requires finding a common $K_\rho$ parameter and determining the value of the remnant of the Mott gap $\Delta_\rho$. NMR,\cite{Wzietek93} dc transport,\cite{Moser98,Georges00} and optical measurements\cite{Schwartz98,Vescoli98} have been interpreted in terms of the Luttinger theory with $K_\rho\simeq 0.23$ and quarter-filled Umklapp scattering.\cite{Jerome04,Giamarchi04} This interpretation, as well as the mere existence of strong correlations, nevertheless raises a number of unanswered questions (see the next section). For instance, $K_\rho\simeq 0.23$ would, according to Eq.~(\ref{Tx}), lead to $T_x\sim 10^{-3}t_{\perp b}$, a value much below the experimental observations. \subsubsection{The weak-correlation picture} On the other hand, there are experiments pointing to the absence of strong correlations in the Bechgaard salts. One of the most convincing arguments comes from the so-called Danner-Chaikin oscillations.\cite{Danner94} Resistance measurements of (TMTSF)$_2$ClO$_4$ in the $c$ direction show pronounced resonances when an applied magnetic field is rotated in the $(ac)$ plane at low temperature. The complete angular dependence of the magneto-resistance can be reproduced within a semiclassical approach. The position of the resonance peaks is given by the zeros of the Bessel function $J_0(\gamma)$ evaluated at $\gamma=2t_{\perp b}cB_x/v_FB_z$ ($c$ is the interchain spacing in the $c$ direction).
This enables a direct measure of the interchain hopping amplitude in the $b$ direction, yielding $t_{\perp b}\simeq 280$ K above the anion ordering transition taking place at 24 K, in very good agreement with values derived from band calculations.\cite{Grant83,Yamaji82,Ducasse86} These results can hardly be reconciled with the existence of strong correlations. Sizeable 1D fluctuations should lead to a strong ($k_\parallel,\omega$) dependence of the self-energy, and in turn to a significant renormalization of $k_\perp$-dependent quantities like the interchain hopping amplitudes.\cite{Bourbonnais91} This lends support to the idea that the low-temperature phase of the Bechgaard salts can be described as a weakly interacting Fermi liquid subject to spin fluctuations induced by the nesting of the Fermi surface.\cite{Gorkov96,Zheleznyak99} The weak-coupling approach has been particularly successful in the framework of the Quantized Nesting Model.\cite{Chaikin96,Lederer96,Yakovenko96} The latter explains the cascade of SDW phases induced by a magnetic field in (TMTSF)$_2$PF$_6$ and (TMTSF)$_2$ClO$_4$, and provides a natural explanation for the quantization of the Hall effect -- $\sigma_{xy}=2Ne^2/h$ ($N$ integer) per $(ab)$ plane -- observed in these phases. Furthermore, it reproduces the experimental phase diagram only for interchain hopping amplitudes $t_{\perp b},t_{\perp c}$ close to their unrenormalized values. Despite the apparent success of the weak-coupling approach, it has nevertheless become clear that the SDW phase of the Bechgaard salts is not conventional. Recent experiments have shown that the $2k_F$ SDW coexists with a $2k_F$ and a -- weaker -- $4k_F$ charge-density wave (CDW) in (TMTSF)$_2$PF$_6$.\cite{Pouget96,Kagoshima99} Since there is no $2k_F$ phonon softening associated to this transition, the emergence of this CDW state differs from what is usually seen for an ordinary Peierls state. 
This unusual ground state can be explained on the basis of a quarter-filled 1D model with dimerization and onsite, nearest-neighbor, and next-nearest-neighbor Coulomb interactions,\cite{Seo97,Kobayashi98,Mazumdar99,Tomio00,Tomio01} but this explanation remains to be confirmed. \subsubsection{The normal phase above the superconducting phase} \label{subsubsec:np} It is remarkable that the superconducting phase lies next to the SDW phase -- which is actually a mixture of SDW and CDW -- and reaches its maximum transition temperature $T_c\sim 1$ K at the pressure $P_c$ where $T_{\rm SDW}$ and $T_c$ join (see Figs.~\ref{fig:dia_gene} and \ref{fig:dia_pf6}). In the normal phase above the SDW phase, the resistivity along the $a$ axis decreases with temperature, reaches a minimum at $T_{\rm min}$, and then shows an upturn and a strong enhancement related to the proximity of the SDW phase transition that occurs at $T_{\rm SDW}<T_{\rm min}$. The region of the normal phase where strong AF fluctuations are present ($T_{\rm SDW}<T<T_{\rm min}$) extends over the pressure range where the ground state is superconducting (Fig.~\ref{fig:dia_pf6}). Its width in temperature decreases with increasing pressure, so that the superconducting transition temperature appears to be closely linked to $T_{\rm min}$. These observations strongly suggest an intimate relationship between spin fluctuations and superconductivity in the Bechgaard/Fabre salts.\cite{Jaccard01,Wilhelm01} The importance of spin fluctuations above the superconducting phase is further confirmed by the persistence of the enhancement of the spin-lattice relaxation rate $1/T_1$ for $P>P_c$.\cite{Wzietek93} Besides the presence of spin fluctuations at low temperature, charge fluctuations have also been observed in the normal phase {\it via} optical conductivity measurements.\cite{Cao96} \begin{figure} \centerline{\includegraphics[bb=0 10 450 410,width=7cm]{dia_pf6.ps}} \caption{(color online) $(P,T)$ phase diagram of (TMTTF)$_2$PF$_6$.
The (green) shaded area above the SDW and SC phase indicates the region of the normal phase where spin fluctuations are significant. (Reprinted with permission from Ref.~\onlinecite{Wilhelm01}. Copyright 2001 by EDP Sciences.) } \label{fig:dia_pf6} \end{figure} \section{SUPERCONDUCTIVITY} Some of the early experiments in the Bechgaard salts did not contradict a conventional BCS superconducting state. For instance, the specific heat in (TMTSF)$_2$ClO$_4$ obeys the standard temperature dependence $C/T=\gamma + \beta T^2$ above the superconducting transition, and the jump at the transition $\Delta C/\gamma T_c\simeq 1.67$ is close to the BCS value 1.43. The ratio $2\Delta(T=0)/T_c\simeq 3.33$, obtained from the gap deduced from the thermodynamic critical field, is also in reasonable agreement with the prediction of the BCS theory ($2\Delta/T_c\simeq 3.52$).\cite{Garoche82,Garoche82a} Early measurements of $H_{c2}(T)$, performed in the vicinity of the zero-field transition temperature, were also interpreted on the basis of the BCS theory.\cite{Gubser81,Murata82,Green82,Brusetti82} Nevertheless, soon after the discovery of organic superconductivity, the high sensitivity of the superconducting state to ir\-ra\-dia\-tion\cite{Choi82,Bouffard82} led Abri\-ko\-sov\cite{Abrikosov83} to suggest the possibility of an unconventional -- triplet -- pairing, although the non-magnetic nature of the induced defects is questionable.\cite{Jerome04} The sensitivity to non-magnetic impurities, and thus the existence of unconventional pairing, was later clearly established by the suppression of the superconducting transition upon alloying (TMTSF)$_2$ClO$_4$ with a very small concentration of ReO$_4$ anions.\cite{Coulon82,Tomic83} A recent study\cite{Joo04} of the alloy \\ (TMTSF)$_2$(ClO$_4$)$_x$(ReO$_4$)$_{1-x}$ -- with different cooling rates and different values of $x$ -- has confirmed this in a remarkable way by showing that the transition temperature $T_c$ is related
to the scattering rate $1/\tau$ by \begin{equation} \ln \left(\frac{T_{c0}}{T_c}\right) = \Psi\left(\frac{1}{2} + \frac{1}{4\pi\tau T_c} \right) - \Psi\left(\frac{1}{2}\right) \end{equation} ($T_{c0}$ is the transition temperature of the pure system and $\Psi$ the digamma function), as expected for an unconventional superconductor in the presence of non-magnetic impurities.\cite{Yuan03} Another indication of a possible unconventional pairing came from the observation of Gor'kov and J\'erome\cite{Gorkov85} that the upper critical field $H_{c2}(T)$, extrapolated down to $T=0$, would exceed the Pauli limited field\cite{Clogston62,Chandrasekhar62} $H_P=1.84 T_{c0} /\mu_B\sim 2$ T by a factor of 2. (The value of $H_P$ quoted here corresponds to $s$-wave pairing.) As spin-orbit interaction is weak in these systems and cannot provide an explanation for such a large $H_{c2}$, it is tempting to again invoke triplet pairing. This issue has been revived by recent measurements of the upper critical field in (TMTSF)$_2$PF$_6$ with substantially improved accuracy in angular alignment and lower temperatures. Lee {\it et al.}\cite{Lee97,Lee00} observed a pronounced upward curvature of $H_{c2}(T)$ without saturation -- down to $T\sim T_c/60$ -- for a field parallel to the $a$ or $b'$ axis, with $H^{b'}_{c2}(T)$ and $H^a_{c2}(T)$ exceeding the Pauli limited field $H_P$ by a factor of 4. Moreover, $H^{b'}_{c2}(T)$ becomes larger than $H^a_{c2}(T)$ at low temperatures. Similar results were obtained from simultaneous resistivity and torque magnetization experiments in (TMTSF)$_2$ClO$_4$.\cite{Oh04} The extrapolated value to zero temperature, $H_{c2}(0)\sim 5$ T, is at least twice the Pauli limited field. \begin{figure} \centerline{\includegraphics[bb=20 18 290 215,width=8cm]{Hc2_torque.eps}} \caption{ Resistivity (left scale) and torque magnetization (right) in (TMTSF)$_2$ClO$_4$ at 25 mK for $H\parallel b^\prime$. 
The dotted line and + symbols on the torque curve represent a temperature-independent normal state contribution. The onsets of diamagnetism and decreasing resistivity, upon decreasing field, are indicated by the arrow near $H_{c2}\sim 5$~T. Arrows in the low-field vortex state indicate field sweep directions. (Reprinted with permission from Ref.~\onlinecite{Oh04}. Copyright 2004 by the American Physical Society.) } \end{figure} There are different mechanisms that can greatly increase the orbital critical field $H_{c2}^{\rm orb}(T)$ in organic conductors. Superconductivity in a weakly-coupled plane system can survive in a strong parallel magnetic field if the interplane (zero-field) coherence length $\xi_\perp(T)$ becomes smaller than the interplane spacing $d$ at low temperature. Vortex cores, with size $\xi_\perp(T)\lesssim d$, can then fit between planes without destroying the superconducting order in the planes, and lead to a Josephson vortex lattice. In the Bechgaard salts, even for a field parallel to the $b'$ axis, the Josephson limit $\xi_\perp(T)\lesssim d$ is however unlikely to be reached, since the interchain hopping amplitude $t_{\perp c}\sim 5-10$ K is larger than the transition temperature $T_c\sim 1.1$ K. Nevertheless, the orbital critical field can be enhanced by a field-induced dimensional crossover.\cite{Lebed86,Burlachkov87,Dupuis93,Dupuis94,Dupuis95} A magnetic field parallel to the $b'$ axis tends to localize the wavefunctions in the $(ac)$ planes, which in turn weakens the orbital destruction of the superconducting order. When $\omega_c=ev_FcH\gtrsim t_{\perp c}$ (which corresponds to a field of a few Tesla in the Bechgaard salts), the wave functions are essentially confined in the $(ac)$ planes and the orbital effect of the field is completely suppressed.
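Coming back to the impurity pair-breaking formula quoted above for the alloyed (TMTSF)$_2$(ClO$_4$)$_x$(ReO$_4$)$_{1-x}$ system, its $T_c\to 0$ limit fixes the critical scattering rate at which superconductivity disappears. Using $\Psi(1/2+x)\simeq \ln x$ for $x\gg 1$ and $\Psi(1/2)=-\gamma_E-2\ln 2$, one finds
\[
\frac{1}{\tau_{\rm cr}} \;=\; 4\pi e^{\Psi(1/2)}\, T_{c0} \;=\; \pi e^{-\gamma_E}\, T_{c0} \;\simeq\; 1.76\, T_{c0}
\]
(in units $\hbar=k_B=1$), so that a mean free path comparable to $v_F/T_{c0}$ already suffices to destroy the superconducting state, in line with the observed extreme sensitivity to non-magnetic disorder.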
The coexistence between SDW and superconductivity, as observed in a narrow pressure domain of the order of 0.8 kbar below the critical pressure $P_c$ (Fig.~\ref{fig:dia_gene}), can also lead to a large increase of the orbital upper critical field.\cite{Green80,Brusetti82a,Vuletic02,Lee02a,Lee05} Regardless of the origin of the large orbital critical field, another mechanism is required to exceed the Pauli limited field $H_P$ in the Bechgaard salts. For singlet spin pairing, the Pauli limit may be overcome by a non-uniform Larkin-Ovchinnikov-Fulde-Ferrell (LOFF) state, where Cooper pairs form with a nonzero total momentum.\cite{Larkin65,Fulde64} This mechanism is particularly efficient in a 1D system,\cite{Buzdin83,Lebed86,Dupuis93,Dupuis95} due to the large phase space available for pairing at nonzero total momentum. For a linearized dispersion law, the mean-field upper critical field $H_c^{\rm LOFF}$ diverges as $1/T$ in a pure superconductor. Lebed\cite{Lebed99} has argued that the quasi-1D anisotropy reduces $H_c^{\rm LOFF}$ below the experimental observations. The only possible explanation for a large upper critical field would then be an equal-spin triplet pairing. A $p_x$-wave triplet state with a ${\bf d}$ vector perpendicular to the $b'$ axis was proposed\cite{Lebed00} as a possible explanation of the experimental observations reported in Refs.~\onlinecite{Lee97,Lee00}. The triplet scenario in the Bechgaard salts is supported by recent NMR Knight shift experiments.\cite{Lee02,Lee03} Early NMR experiments by Takigawa {\it et al.} already pointed to the unconventional nature of the superconducting state in (TMTSF)$_2$ClO$_4$.\cite{Takigawa87} The proton spin lattice relaxation rate $1/T_1$ does not exhibit a Hebel-Slichter peak. It decreases rapidly just below $T_c$ in contrast to the typical BCS superconductor where it increases below $T_c$, reaching a maximum at $T\sim 0.9 T_c$. 
Furthermore, $1/T_1\sim T^3$ for $T_c/2\lesssim T \leq T_c$ -- as is the case for most unconventional superconductors -- suggesting zeros or lines of zeros in the excitation spectrum. Recent experiments by Lee {\it et al.} in (TMTSF)$_2$PF$_6$ show that the Knight shift, and therefore the spin susceptibility, remains unchanged at the superconducting transition.\cite{Lee02,Lee03} This indicates triplet spin pairing, since a singlet pairing would inevitably lead to a strong reduction of the spin susceptibility ($\chi(T\to 0)\to 0$). It should be noted, however, that the interpretation of the Knight shift results -- due to a possible lack of sample thermalization during the time of the experiment -- has been questioned.\cite{Jerome04,Jerome05} \begin{figure} \centerline{\includegraphics[width=8cm]{Knight_shift.eps}} \caption{$^{77}$Se NMR spectra collected above and below $T_c$ (0.81 K at 1.43 T). Each trace is normalized and offset for clarity. The temperatures shown in parentheses are the measured equilibrium temperatures before the pulse. In the inset, the spin susceptibility normalized by the normal state $\chi/\chi_n$, obtained from the measured first moments, is compared with theoretical calculations\cite{Fulde65} for $H/H_{c2}(0)\sim 0$ (curve $a$) and 0.63 (curve $b$). Curves $c$ and $d$ are obtained from the ratio of the applied field (1.43 T) to the measured upper critical field $H_{c2}(T)$, for which the superconducting criteria ``onset'' and ``50\% transition'', respectively, have been used to determine $H_{c2}(T)$. (Reprinted with permission from Ref.~\onlinecite{Lee02}. Copyright 2002 by the American Physical Society.)} \end{figure} In principle, the symmetry of the order parameter can be determined from tunneling spectroscopy. Sign changes of the pairing potential around the Fermi surface lead to zero-energy bound states in the superconducting gap.
These states manifest themselves as a zero-bias peak in the tunneling conductance into the corresponding edge.\cite{Sengupta01} More generally, different pairing symmetries can be unambiguously distinguished by tunneling spectroscopy in a magnetic field.\cite{Tanuma01,Tanuma02,Tanuma03} In practice however, the realization of tunnel junctions with the TMTSF salts appears to be very difficult. A large zero-bias conductance peak -- suggesting $p$-wave symmetry -- across the junction between two organic superconductors was observed.\cite{Ha03} But the absence of temperature broadening could indicate that this peak is due to disorder rather than to a midgap state.\cite{Naughton05} Information about the symmetry of the order parameter can also be obtained from thermal conductivity measurements. The latter indicate the absence of nodes in the excitation spectrum of the superconducting state in (TMTSF)$_2$ClO$_4$,\cite{Belin97} thus suggesting a $p_x$-wave symmetry. However, because of the doubling of the Fermi surface in the presence of anion ordering, a singlet $d$- or triplet $f$-wave order phase would also be nodeless in (TMTSF)$_2$ClO$_4$ (see Fig.~\ref{fig:gap} for the different gap symmetries in a quasi-1D superconductor).\cite{Bourbonnais99,Shimahara00} \begin{figure} \centerline{\includegraphics[bb=90 510 430 770,width=8cm]{op.ps}} \caption{(color online) Gap symmetries $\Delta_r(\kperp)$ in a quasi-1D superconductor (after Ref.~\onlinecite{Fuseya05}, courtesy of Y. Suzumura). $r=+/-$ denotes the right/left sheet of the Fermi surface. (Singlet pairing) $s$: const, $d_{x^2-y^2}$: $\cos\kperp$, $d_{xy}$: $r\sin\kperp$. (Triplet pairing) $p_x$: $r$, $f$: $r\cos\kperp$, $p_y$: $\sin\kperp$. Next-nearest-neighbor and longer-range pairings are not considered. 
} \label{fig:gap} \end{figure} \section{MICROSCOPIC THEORIES OF THE SUPERCONDUCTING STATE} The phase diagram of the 1D electron gas within the g-ology frame\-work\cite{Solyom79} is shown in Fig.~\ref{fig:dia_gology}. $g_1$ and $g_2$ denote the backward and forward scattering amplitudes, respectively, and $g_3$ the strength of the (half-filling) Umklapp processes. Given the importance of spin fluctuations and the existence of AF ground states, the Bechgaard/Fabre salts should lie in the upper right corner of the 1D phase diagram ($g_1,g_2>0$ and $g_1-2g_2<|g_3|$), where the Umklapp processes are relevant and the dominant fluctuations are antiferromagnetic. In the Fabre salts, the non-magnetic insulating phase observed below $T_\rho \sim 100$ K indicates the importance of Umklapp scattering and suggests sizable values of $g_3$ for this series. Since the long-range Coulomb interaction favors $g_1<g_2$, the Fabre salts are expected to lie in the right part of the phase diagram, i.e. far away from the boundary $g_1-2g_2=|g_3|$. Since the triplet superconducting phase lies next to the SDW phase (Fig.~\ref{fig:dia_gology}), it is tempting to invoke a change of the couplings $g_i$ under pressure to argue in favor of a $p_x$-wave triplet superconducting state.\cite{Abrikosov83,Emery83} Such a drastic change of the couplings, which would explain why (TMTTF)$_2$PF$_6$ becomes superconducting above 4.35 GPa,\cite{Jaccard01,Wilhelm01,Adachi00} is however somewhat unrealistic and has not received any theoretical backing so far. The Umklapp scattering being much weaker in the Bechgaard salts, one cannot exclude that these compounds lie closer to the boundary between the SDW and the triplet superconducting phase. A moderate change of the couplings under pressure would then be sufficient to explain the superconducting phase of (TMTSF)$_2$PF$_6$ observed above 6 kbar or so. 
However, the destruction of the superconducting phase by a weak magnetic field and the observation of a cascade of SDW phases for slightly higher fields\cite{Chaikin96,Lederer96,Yakovenko96} would imply that the interaction is strongly magnetic-field dependent -- again a very unlikely scenario. \begin{figure} \centerline{\includegraphics[width=6cm]{dia_gology.eps}} \caption{Phase diagram (leading fluctuations) of the 1D electron gas in the presence of Umklapp scattering. SS (TS): singlet (triplet) superconductivity. A gap develops in the charge sector (Mott insulating behavior) for $g_1-2g_2<|g_3|$.} \label{fig:dia_gology} \end{figure} In all probability, the very origin of the superconducting instability lies in the 3D behavior of these quasi-1D conductors. Thus the attractive interaction is a consequence of a low-energy mechanism that becomes more effective below the dimensional crossover temperature $T_x$. Transverse hopping makes retarded electron-phonon interactions more effective, since it is easier for the electrons to avoid the Coulomb repulsion.\cite{Emery83} By comparing the sulfur and selenium series, it can however be argued that, in the pressure range where superconductivity is observed, the electron-phonon interaction is too weak to explain the origin of the attraction. For narrow tight-binding bands in the organics, the attraction is strongest for backscattering processes in which $2k_F$ phonons are exchanged.\cite{Barisic70,Su79} According to X-ray experiments performed on (TMTSF)$_2$X, however, the electron-phonon vertex at this wave vector does not undergo any significant increase in the normal state (Fig.~\ref{fig:Xray}). The amplitude of the $2k_F$ lattice susceptibility in (TMTSF)$_2$PF$_6$ -- which directly controls the strength of the phonon exchange -- is weak. 
It is instructive to compare with the sulfur analog compound (TMTTF)$_2$PF$_6$, for which the electron-phonon vertex at $2k_F$ becomes singular, signaling a lattice instability towards a spin-Peierls distortion (Fig.~\ref{fig:Xray}). This instability produces a spin gap that is clearly visible in the temperature dependence of the magnetic susceptibility and the nuclear relaxation rate.\cite{Creuzet87,Bourbonnais96} These effects are not seen in (TMTSF)$_2$PF$_6$ close to $P_c$. The persistent enhancement of these quantities indicates that interactions are dominantly repulsive (Sec.~\ref{subsubsec:np}), making the traditional phonon-mediated source of pairing inoperative. \begin{figure} \centerline{\includegraphics[width=7cm]{Xray.eps}} \caption{Temperature dependence of the $2k_F$ lattice susceptibility $(I/T)$ in the normal phase of (TMTSF)$_2$PF$_6$ (top) and (TMTTF)$_2$PF$_6$ (bottom). (Reprinted with permission from Ref.~\onlinecite{Pouget96}. Copyright 1996 by EDP Sciences.) } \label{fig:Xray} \end{figure} Emery\cite{Emery86} pointed out that near an SDW instability, short-range AF spin fluctuations can give rise to anisotropic pairing and thus provide a possible explanation of the origin of the superconducting phase in the Bechgaard salts. Such fluctuations give rise to an oscillating potential that couples to the electrons. Carriers can avoid the local Coulomb repulsion and take advantage of the attractive part of this potential by moving on different chains. 
This mechanism, which can lead to superconductivity at low temperatures, is the spin-analog of the so-called Kohn-Luttinger mechanism which assumes the pairing to originate in the exchange of charge-density excitations produced by Friedel oscillations.\cite{Kohn65} While most theoretical results on the spin-fluctuation-induced superconductivity are based on RPA-like calculations,\cite{Beal86,Caron86,Bourbonnais88,Scalapino86,Scalapino87,Miyake86,Shimahara88,Kino99,Kuroki99,Scalapino95} the existence of such an electronic pairing mechanism in a quasi-1D conductor has been recently confirmed by an RG approach.\cite{Duprat01} Moreover, it has been recently realized that CDW fluctuations can play an important role in stabilizing a triplet phase.\cite{Kuroki01,Onari04,Tanaka04,Fuseya05,Bourbonnais04,Nickel05a,Nickel05b} Below we discuss in simple terms the link between spin/charge fluctuations and unconventional pairing,\cite{Scalapino95} and present recent results obtained from an RG approach.\cite{Bourbonnais04,Nickel05a,Nickel05b} \subsection{Superconductivity from spin and charge fluctuations} Considering for the time being only intrachain interactions, the interacting part of the Hamiltonian within the g-ology framework\cite{Solyom79} reads \begin{equation} H_{\rm int} = \sum_{{\bf q}} [ g_{\rm ch} \rho(-{\bf q})\rho({\bf q}) + g_{\rm sp} {\bf S}(-{\bf q})\cdot {\bf S}({\bf q}) ] \label{Hint} \end{equation} (from now on we neglect the $c$ axis and consider a 2D model), where $\rho_{\bf q}$ and ${\bf S}_{\bf q}$ are the charge- and spin-density operators in the Peierls channel ($q_x\sim 2k_F$), $g_{\rm ch}=g_1-g_2/2$ and $g_{\rm sp}=-g_2/2$. Starting from a half-filled extended Hubbard model, we obtain $g_1=U-2V$ and $g_2=U+2V$, where $U$ is the onsite and $V$ the nearest-neighbor lattice site (dimer) interaction. For simplicity, we do not consider Umklapp scattering ($g_3$), since it does not play an important role in the present qualitative discussion. 
For repulsive interactions $g_1\sim g_2>0$, short-range spin fluctuations develop at low temperatures due to the nesting of the Fermi surface. They can be described by an effective Hamiltonian $H^{\rm eff}_{\rm int}$ obtained from (\ref{Hint}) by replacing the bare coupling constants by their (static) RPA values \begin{eqnarray} g_{\rm ch}^{\rm RPA}({\bf q}) &=& \frac{g_{\rm ch}}{1+g_{\rm ch}\chi_0({\bf q})} = g_{\rm ch} - g_{\rm ch}^2 \chi_{\rm ch}^{\rm RPA}({\bf q}) , \nonumber \\ g_{\rm sp}^{\rm RPA}({\bf q}) &=& \frac{g_{\rm sp}}{1+g_{\rm sp}\chi_0({\bf q})} = g_{\rm sp} - g_{\rm sp}^2 \chi_{\rm sp}^{\rm RPA}({\bf q}) , \eleq where $\chi^{\rm RPA}$ is the static ($\omega=0$) RPA susceptibility. The bare particle-hole susceptibility diverges at low temperatures, i.e. $\chi_0(\Q) \sim\ln(E_0/{\rm max}(T,t_{\perp b}'))$, due to the $\Q=(2k_F,\pi)$ nesting of the quasi-1D Fermi surface ($\epsilon_{\bf k}-\mu \simeq -\epsilon_{{\bf k}+\Q}+\mu$). ($E_0$ is a high-energy cutoff of the order of the bandwidth.) The divergence is cut off by deviations from perfect nesting, characterized by the energy scale $t_{\perp b}'$ [Eq.~(\ref{dispersion})]. In the Bechgaard salts $t_{\perp b}'\sim 10$ K and varies with pressure. When the nesting of the Fermi surface is good (small $t_{\perp b}'$), the spin susceptibility $\chi_{\rm sp}^{\rm RPA}(\Q)$ diverges at low temperatures, thus signaling the formation of an SDW. A larger value of $t_{\perp b}'$ frustrates antiferromagnetism and, when exceeding a threshold value, eliminates the transition to the SDW phase.\cite{Hasegawa86,Montambaux88} In that case, the (remaining) short-range spin fluctuations can lead to pairing between fermions. 
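To make the role of the cutoff $t_{\perp b}'$ concrete, the following toy calculation (our own illustration; the coupling $g_{\rm sp}=-0.2$, the cutoff $E_0$ and the $t_{\perp b}'$ values are invented numbers, not fits to the Bechgaard salts) checks when the RPA denominator $1+g_{\rm sp}\chi_0$ can reach zero:

```python
import math

# Toy illustration (not from the original work): the RPA-dressed spin
# coupling g_sp^RPA = g_sp / (1 + g_sp*chi0), with the particle-hole
# bubble chi0(Q) ~ ln(E0 / max(T, tp)) cut off by the nesting-deviation
# scale tp (t'_{perp b}).  All parameter values are illustrative.

def chi0(T, tp, E0=3000.0):          # temperatures in K; E0 ~ bandwidth
    return math.log(E0 / max(T, tp))

def g_sp_rpa(T, tp, g_sp=-0.2):      # g_sp = -g2/2 < 0 for g2 > 0
    return g_sp / (1.0 + g_sp * chi0(T, tp))

def sdw_unstable(tp, g_sp=-0.2, E0=3000.0):
    """SDW instability iff 1 + g_sp*chi0 can reach zero, i.e.
    ln(E0/tp) > 1/|g_sp|  <=>  tp < E0*exp(-1/|g_sp|)."""
    return tp < E0 * math.exp(-1.0 / abs(g_sp))

# Good nesting (small tp): chi0 saturates at a large value -> SDW.
# Strong frustration (large tp): the divergence is cut off -> metal.
print(sdw_unstable(tp=10.0))   # small t' : True
print(sdw_unstable(tp=50.0))   # large t' : False
```

Since $\chi_0$ saturates at $\ln(E_0/t_{\perp b}')$, the SDW instability survives only for $t_{\perp b}' < E_0\,e^{-1/|g_{\rm sp}|}$; larger nesting deviations leave a metallic state in which the remaining short-range spin fluctuations can mediate pairing.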
To see this, we rewrite the effective Hamiltonian $H^{\rm eff}_{\rm int}$ in the particle-particle (Cooper) channel \begin{equation} H_{\rm int}^{\rm eff} = \sum_{{\bf k},{\bf k}'} [ g_s({\bf k},{\bf k}') O^*_s({\bf k}) O_s({\bf k}') + g_t({\bf k},{\bf k}') \O^*_t({\bf k}) \cdot \O_t({\bf k}') ] \label{Heff2} \end{equation} (we consider only Cooper pairs with zero total momentum), where \begin{eqnarray} g_s({\bf k},{\bf k}') &=& - 3 g_{\rm sp}^{\rm RPA}({\bf k}+{\bf k}') + g_{\rm ch}^{\rm RPA}({\bf k}+{\bf k}') , \nonumber \\ g_t({\bf k},{\bf k}') &=& - g_{\rm sp}^{\rm RPA}({\bf k}+{\bf k}') - g_{\rm ch}^{\rm RPA}({\bf k}+{\bf k}') \label{gst} \eleq are the effective interactions in the singlet and triplet spin pairing channels (Fig.~\ref{fig:geff}). $O_s({\bf k})$ ($\O_t({\bf k})$) is the annihilation operator of a pair $({\bf k},-{\bf k})$ in a singlet (triplet) spin state, and $\O_t=(O_t^1,O_t^0,O_t^{-1})$ denotes the three components $S^z=1,0,-1$ of the triplet state (total spin $S=1$). \begin{figure} \centerline{\includegraphics[width=6.5cm]{geff.eps}} \caption{Diagrammatic representation of the effective interaction $g_{s,t}$ in the Cooper channel within the RPA. } \label{fig:geff} \end{figure} On the basis of the effective Hamiltonian (\ref{Heff2}) the BCS theory predicts a superconducting transition whenever the effective interaction $g_{s,t}$ turns out to be attractive in (at least) one pairing channel. A simple argument shows that this is indeed always the case in the presence of short-range spin fluctuations. The spin susceptibility $\chi^{\rm RPA}_{\rm sp}({\bf k}+{\bf k}')$ exhibits a pronounced peak around ${\bf k}+{\bf k}'=\Q$. 
Neglecting the unimportant $k_\parallel$ dependence, its Fourier series expansion reads \begin{multline} \chi^{\rm RPA}_{\rm sp}(2k_F,\kperp+\kperpp) = \sum_{n=0}^\infty a_n (-1)^n \cos[n(\kperp+\kperpp)] \\ = \sum_{n=0}^\infty a_n (-1)^n [ \cos n\kperp \cos n\kperpp - \sin n\kperp \sin n\kperpp ] , \label{chi} \end{multline} where $a_n\geq 0$. Choosing $a_n=a_0$ for all $n$, one obtains a diverging spin susceptibility $\chi^{\rm RPA}_{\rm sp}(2k_F,\kperp+\kperpp) \propto \delta(\kperp+\kperpp-\pi)$. The condition $a_0>a_1>\cdots \geq 0$ gives a broadened peak around $\kperp+\kperpp=\pi$. Eqs.~(\ref{gst}) and (\ref{chi}) show that the effective interaction in the singlet channel contains attractive contributions for any value of $n$. In real space, $n$ corresponds to the range of the pairing interaction in the $b$ direction. The dominant attractive interaction corresponds to nearest-neighbor-chain pairing ($n=1$) and a $d_{x^2-y^2}$-wave order parameter $\Delta_r(\kperp)\sim \cos\kperp$ ($r={\rm sgn}(k_x)$). The interaction is also attractive in the triplet $f$-wave channel ($\Delta_r(\kperp)\sim r\cos\kperp$). However, all three components of a (spin-one boson) SDW fluctuation contribute to the superconducting coupling in the singlet channel -- hence the factor of 3 in the first of Eqs.~(\ref{gst}). The singlet channel therefore always dominates over the triplet one when charge fluctuations are not important. Note that the interaction is repulsive in the singlet $d_{xy}$-wave ($\Delta_r(\kperp)\sim r\sin\kperp$) and the triplet $p_y$-wave ($\sin\kperp$) channels. Eqs.~(\ref{gst}) also show that CDW fluctuations tend to suppress the singlet pairing, but reinforce the triplet one. 
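The sign structure described above can be checked numerically. The sketch below (our own illustration, not from the cited works: the peak shape, width and coupling value are invented, charge fluctuations are set to zero, and the sheet index $r$ and the $k_\parallel$ dependence are ignored) projects the spin-fluctuation part of Eqs.~(\ref{gst}) onto the $\cos k_\perp$ and $\sin k_\perp$ harmonics:

```python
import numpy as np

# Toy projection of the effective Cooper couplings of Eq. (gst) onto
# the transverse gap harmonics of Fig. 2.  chi_sp(kp + kp') is modeled
# as a broadened peak at kp + kp' = pi; chi_ch is switched off.

nk = 400
k = np.linspace(-np.pi, np.pi, nk, endpoint=False)
K, Kp = np.meshgrid(k, k, indexing="ij")

def peak(x, width=0.3):
    # Lorentzian-like peak at x = pi (mod 2*pi); width is illustrative
    return 1.0 / (1.0 + (np.angle(np.exp(1j * (x - np.pi))) / width)**2)

g_sp = -0.2                          # g_sp = -g2/2 < 0 (assumed value)
chi_sp = peak(K + Kp)

# fluctuation parts of Eq. (gst): g_s ~ -3*(-g_sp^2 chi_sp), g_t ~ -(-g_sp^2 chi_sp)
g_s = 3.0 * g_sp**2 * chi_sp
g_t = 1.0 * g_sp**2 * chi_sp

def project(g, f):
    """<f|g|f>; attraction <=> negative value."""
    fk = f(k)
    return (fk @ g @ fk) / (nk * (fk @ fk))

d_wave = project(g_s, np.cos)        # singlet d_{x2-y2} (cos k_perp)
f_wave = project(g_t, np.cos)        # triplet f (same k_perp form factor)
d_xy   = project(g_s, np.sin)        # singlet d_xy (sin k_perp)

print(d_wave < 0, f_wave < 0, d_xy > 0)   # attraction / repulsion signs
print(abs(d_wave) > abs(f_wave))          # factor-of-3 singlet dominance
```

The $\cos k_\perp$ projection comes out attractive in both channels, three times larger in the singlet than in the triplet one, while the $\sin k_\perp$ projection is repulsive, reproducing the sign pattern stated in the text.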
In the Bechgaard salts, the physical relevance of CDW fluctuations has been borne out by the puzzling observation of a CDW that coexists with the SDW (Sec.~\ref{subsec:itinerant}).\cite{Pouget96,Kagoshima99,Cao96} Within the framework of an extended anisotropic Hubbard model, recent RPA calculations have shown that the triplet $f$-wave pairing can overcome the singlet $d_{x^2-y^2}$-wave pairing when the intrachain interactions are chosen such as to boost the CDW fluctuations with respect to the SDW ones.\cite{Kuroki01,Onari04,Tanaka04} In a half-filled model, this however requires the nearest-neighbor (intrachain) interaction $V$ to exceed $U/2$. In a quarter-filled model -- appropriate if one ignores the weak dimerization along the chains -- the condition for $f$-wave superconductivity becomes $V_2\geq U/2$ ($V_2$ is the next-nearest-neighbor (intrachain) Coulomb interaction) and appears even more unrealistic. Similar conclusions were reached within an RG approach.\cite{Fuseya05} Given that electrons interact through the Coulomb interaction, not only intrachain but also {\it inter}chain interactions are present in practice. At large momentum transfer, the interchain interaction is well known to favor a CDW ordered state.\cite{Gorkov74,Mihaly76,Lee77,Menyhard77} This mechanism is mostly responsible for CDW long-range order observed in several organic and inorganic low-dimensional solids (e.g. TTF-TCNQ).\cite{Barisic85,Pouget89} In the Bechgaard salts, both the interchain Coulomb interaction and the kinetic interchain coupling ($t_{\perp b}$) are likely to be important in the temperature range where superconductivity and SDW instability occur, and should be considered on equal footing. 
An RG approach has recently been used to determine the phase diagram of an extended quasi-1D electron gas model that includes interchain hopping, nesting deviations and both intrachain and interchain interactions.\cite{Bourbonnais04,Nickel05a,Nickel05b} The interchain interactions turn out to have a sizeable impact on the structure of the phase diagram. Unexpectedly, for reasonably small values of the interchain interactions, the singlet $d_{x^2-y^2}$-wave superconducting phase is destabilized to the benefit of the triplet $f$-wave phase with a similar range of $T_c$. The SDW phase is also found to be close in stability to a CDW phase. Before presenting these results in more detail (Sec.~\ref{subsubsec:rg}), let us discuss in simple terms the role of interchain interactions. The interchain backward scattering amplitude $g_1^\perp$ ($>0$) contributes to the effective interaction in the Cooper channel, \begin{eqnarray} g_s(\kperp,\kperpp) &\to& g_s(\kperp,\kperpp) + 2 g_1^\perp [\cos\kperp\cos\kperpp - \sin\kperp\sin\kperpp ] , \nonumber \\ g_t(\kperp,\kperpp) &\to& g_t(\kperp,\kperpp) + 2 g_1^\perp [-\cos\kperp\cos\kperpp + \sin\kperp\sin\kperpp ] . \eleq It thus tends to suppress singlet $d_{x^2-y^2}$ pairing, but favors triplet $f$-wave pairing. In addition to this ``direct'' contribution, $g_1^\perp$ reinforces CDW fluctuations, \begin{equation} g_{\rm ch}(\qperp) \to g_{\rm ch}(\qperp) + 2g_1^\perp \cos\qperp , \end{equation} and therefore enhances the $f$-wave pairing over the $d_{x^2-y^2}$-wave pairing {\it via} the mechanism of fluctuation exchange [see Eq.~(\ref{gst})]. As for the interchain forward scattering $g_2^\perp$, its direct contribution to the DW channel is negligible, but it has a detrimental effect on both singlet and triplet nearest-neighbor-chain pairings. 
This latter effect, which is neutralized by the Umklapp scattering processes, can lead to next-nearest-neighbor-chain pairings when Umklapp processes are very weak.\cite{Nickel05b} \subsection{RG calculation of the phase diagram of quasi-1D conductors} \label{subsubsec:rg} As a systematic and unbiased method with no {\it a priori} assumptions, the RG method is perfectly suited to the study of competing instabilities. The zero-temperature phase diagram obtained with this technique is shown in Fig.~\ref{fig:diarg1}.\cite{Nickel05a,Nickel05b} In the absence of interchain interactions ($g_1^\perp=g_2^\perp=0$), it confirms the validity of the qualitative arguments given above. When the nesting of the Fermi surface is nearly perfect (small $t_\perp'$), the ground state is an SDW. Above a threshold value of $t_\perp'$, the low-temperature SDW instability is suppressed and the ground state becomes a $d_{x^2-y^2}$-wave superconducting (SC$d$) state with an order parameter $\Delta_r(k_\perp)\propto \cos k_\perp$.\cite{Duprat01} In the presence of interchain interactions ($g_1^\perp>0$), the region of stability of the SC$d$ phase shrinks, and a triplet superconducting $f$-wave (SC$f$) phase appears next to the $d$-wave phase for $\tilde g_1^\perp=g_1^\perp/\pi v_F \simeq 0.1$ -- obtained here for typical values of intrachain couplings and band parameters.\cite{Nickel05a,Nickel05b} For larger values of the interchain interactions, the SC$d$ phase disappears and the region of stability of the $f$-wave superconducting phase widens. In addition, a CDW phase appears, thus giving the sequence of phase transitions SDW$\to$CDW$\to$SC$f$ as a function of $t'_\perp$. For $\tilde g^\perp_1 \gtrsim 0.12$, the SDW phase disappears. Note that for $\tilde g^\perp_1\simeq 0.11$, the region of stability of the CDW phase is very narrow, and there is essentially a direct transition between the SDW and SC$f$ phases. 
\begin{figure} \centerline{\includegraphics[bb=135 545 353 678,width=7.5cm]{dia_zeroT.ps}} \caption{$T=0$ phase diagram as a function of $t'_\perp/t_\perp$ and $\tilde g^\perp_1$. Circles: SDW, squares: CDW, triangles: SC$d$ ($\Delta_r(k_\perp)\propto\cos k_\perp$), crosses: SC$f$ ($\Delta_r(k_\perp) \propto r \cos k_\perp$). The dashed lines indicate two (among many) possible pressure axes, corresponding to transitions SDW$\to$SC$d$ and SDW$\to$SC$f$.\cite{Nickel05a,Nickel05b} } \label{fig:diarg1} \end{figure} The RG calculations yield $T_c\sim 30$ K for the SDW phase in the case of perfect nesting and $T_c\sim 0.6-1.2$ K for the superconducting phase, in reasonable agreement with the experimental observations in the Bechgaard salts. Fig.~\ref{fig:diarg2} shows the transition temperature $T_c$ as a function of $t'_\perp$ for three different values of the interchain interactions, $\tilde g^\perp_1=0$, 0.11 and 0.14, corresponding to the three different sequences of phase transitions as a function of $t_\perp'$: SDW$\to$SC$d$, SDW$\to$(CDW)$\to$SC$f$ and CDW$\to$SC$f$. The phase diagram is unchanged when both $g_2^\perp$ and a weak Umklapp scattering amplitude $g_3$ are included.\cite{Nickel05a,Nickel05b} \begin{figure} \centerline{\includegraphics[bb=50 65 410 300,width=7cm]{Tcs.eps}} \caption{Transition temperature as a function of $t'_\perp/t_\perp$ for $\tilde g^\perp_1=0$, 0.11 and 0.14, corresponding to solid, dotted, and dashed lines, respectively.\cite{Nickel05a,Nickel05b} } \label{fig:diarg2} \end{figure} The RG approach also provides important information about the fluctuations in the normal phase. The dominant fluctuations above the SC$d$ phase are SDW fluctuations as observed experimentally (Sec.~\ref{subsec:itinerant}). Although they saturate below $T\sim t'_\perp$ where the SC$d$ fluctuations become more and more important, the latter dominate only in a very narrow temperature range above the superconducting transition (Fig.~\ref{fig3}). 
Above the SC$f$ and CDW phases, one expects strong CDW fluctuations driven by $g^\perp_1$. Fig.~\ref{fig4} shows that for $\tilde g^\perp_1 \sim 0.11-0.12$, strong SDW and CDW fluctuations coexist above the SC$f$ phase. Remarkably, there are regions of the phase diagram where the SDW fluctuations remain the dominant ones in the normal phase above the SC$f$ or CDW phase (right panel in Fig.~\ref{fig4}). \begin{figure} \centerline{\includegraphics[bb=50 65 410 300,width=6.5cm]{chiTscd.eps}} \caption{Temperature dependence of the susceptibilities in the normal phase above the SC$d$ phase ($t'_\perp=0.152 t_\perp$ and $\tilde g_1^\perp=0.08$). The continuous, dotted, dashed, and dashed-dotted lines correspond to SDW, CDW, SC$d$ and SC$f$ correlations, respectively.\cite{Nickel05a,Nickel05b} } \label{fig3} \end{figure} \begin{figure} \centerline{\includegraphics[bb=94 588 420 695,width=12.4cm]{chi_spin_charge.ps}} \caption{Temperature dependence of the susceptibilities in the normal phase above the SC$f$ phase for $\tilde g^\perp_1=0.12$, $t_\perp'=0.152 t_\perp$ (left) and $t'_\perp=0.176t_\perp$ (right).\cite{Nickel05a,Nickel05b} } \label{fig4} \end{figure} A central result of the RG calculation is the close proximity of SDW, CDW and SC$f$ phases in the phase diagram of a quasi-1D conductor within a {\it realistic} range of values for the repulsive intrachain and interchain interactions. Although this proximity is found only in a small range of interchain interactions, there are several features that suggest that this part of the phase diagram is the relevant one for the Bechgaard salts. i) SDW fluctuations remain important in the normal phase throughout the whole phase diagram. 
They are the dominant fluctuations above the SC$d$ phase, and remain strong -- being sometimes even dominant -- above the SC$f$ phase, where they coexist with strong CDW fluctuations, in accordance with observations.\cite{Wzietek93,Cao96} ii) The SC$f$ and CDW phases lie close to each other in the theoretical phase diagram, the CDW phase always closely following the SC$f$ phase when the interchain interactions increase. This agrees with the experimental finding that SDW and CDW coexist in the DW phase of the Bechgaard salts\cite{Pouget96,Kagoshima99} and with the existence, besides SDW correlations, of CDW fluctuations in the normal state above the superconducting phase.\cite{Cao96} iii) Depending on how one moves in practice in the phase diagram as a function of pressure, these results are compatible with either a singlet $d_{x^2-y^2}$-wave or a triplet $f$-wave superconducting phase in the Bechgaard salts (see the two pressure axes in Fig.~\ref{fig:diarg1}). Moreover, one cannot exclude that both SC$d$ and SC$f$ phases exist in these materials, with the sequence of phase transitions SDW$\to$SC$d\to$SC$f$ as a function of pressure. It is also possible that the SC$f$ phase is stabilized by a magnetic field,\cite{Shimahara00b} since an equal-spin-pairing triplet phase is not sensitive to the Pauli pair-breaking effect, contrary to the SC$d$ phase. This would allow for large upper critical fields exceeding the Pauli limit,\cite{Lee97,Oh04} and would also provide an explanation for the temperature independence of the NMR Knight shift in the superconducting phase.\cite{Lee02} \section{CONCLUSION} Notwithstanding recent experimental progress, many of the basic questions related to superconductivity in the Bechgaard and Fabre salts remain largely open. 
The very nature of the superconducting state -- the orbital symmetry of the order parameter and the singlet/triplet character of the pairing -- is still not known unambiguously, even though recent upper critical field measurements\cite{Lee97,Lee00,Oh04} and NMR experiments\cite{Lee02,Lee03} support triplet pairing. We argued that the conventional electron-phonon mechanism is unable to explain the origin of the superconducting phase. On the other hand, the proximity of the SDW phase, as well as the observation of strong spin fluctuations in the normal state precursor to the superconducting phase,\cite{Jaccard01,Wilhelm01,Wzietek93} strongly suggests an intimate relationship between antiferromagnetism and superconductivity in the Bechgaard/Fabre salts. The scenario originally proposed by Emery,\cite{Emery86} whereby short-range AF spin fluctuations can give rise to anisotropic pairing and thus stabilize a superconducting phase, is so far the only one that is consistent with the experimental observations and the repulsive nature of the electron-electron interactions. Within the framework of the RG approach, it has recently been shown that when spin and charge fluctuations are taken into account on an equal footing, both singlet $d_{x^2-y^2}$- and triplet $f$-wave superconducting phases can emerge at low temperatures whenever the nesting properties of the Fermi surface deteriorate under pressure.\cite{Kuroki01,Onari04,Tanaka04,Fuseya05,Bourbonnais04,Nickel05a,Nickel05b} CDW fluctuations are enhanced by the long-range part of the Coulomb interaction. 
Remarkably, for a reasonably small value of the interchain interactions, the singlet $d_{x^2-y^2}$-wave phase is destabilized to the benefit of a triplet $f$-wave phase with a similar range of $T_c$.\cite{Nickel05a,Nickel05b} The physical relevance of CDW fluctuations in the Bechgaard salts has been borne out by the observation of a CDW that actually coexists with the SDW.\cite{Pouget96,Kagoshima99} CDW fluctuations were also observed in the normal state precursor to the superconducting state.\cite{Cao96} As a systematic and unbiased method with no {\it a priori} assumptions, the RG has proven to be a method of choice for studying the physical properties of quasi-1D organic conductors. An important theoretical issue is now to go beyond the instabilities of the normal state. On the one hand, the RG analysis should be extended to the low-temperature broken-symmetry phases in order to study the possible coexistence of superconductivity and antiferromagnetism, as well as of CDW and SDW, as observed in the Bechgaard salts.\cite{Vuletic02,Lee05,Pouget96,Kagoshima99} On the other hand, the RG technique might also make it possible to tackle the unusual properties of the metallic phase. A recent RG analysis\cite{Fuseya05} of the AF spin susceptibility in the normal phase has shown that below the dimensional crossover temperature, it differs significantly from the prediction of single-channel (RPA) theories. The interplay between the superconducting and Peierls channels, which is at the origin of spin-fluctuation-induced superconductivity, might also be responsible for the unusual properties of the metallic state below the dimensional crossover temperature.
\section{Introduction} Viscoelastic properties of polymers in melts or concentrated solutions depend strongly on the molecular weight of the polymer chains. The main effect of increasing molecular weight is the appearance of topological constraints between the chains, called entanglements. These constraints are a universal aspect of polymer physics and arise in any flexible polymer if the chains are sufficiently long and the concentration is high enough. Under these conditions, the effect of entanglements becomes so relevant that the system dramatically changes its physical properties, such as viscosity, diffusion, and rheological and mechanical behavior. Nowadays, the most widespread and successful theory of entangled polymer dynamics is the \textit{Tube model} presented by Doi and Edwards \cite{Doi_Edwars_1988} and extended as the \textit{Reptation model} by de Gennes \cite{DeGennes1971,DeGennes1976}, which provided a framework for understanding many aspects of the underlying polymer physics in both regimes, melt and solution. The theory averages the collective effect of all surrounding chains on a given strand into a tubelike region of confinement whose central axis is a segment of the so-called \textit{primitive path} ($PP$). In this approach, the primitive path is an essential theoretical concept introduced by Edwards \cite{Edwards1965,Edwards1967} and defined as the shortest path connecting the two ends of the chain that preserves its topology. As a result of this confinement in a virtual tube, the strand moves back and forth, performing a \textit{slithering motion} inside the tube (it reptates). Despite its conceptual simplicity, this theory has proved to be a powerful tool for understanding polymer dynamics, and its quantitative predictions agree quite well with experimental results. Polymer thin films have numerous applications (e.g., coatings, dielectrics, adhesives, lubricants\cite{Rayss1993,Zhang1999,Marencic2010}), but are also of fundamental interest. 
Thin films below a certain thickness induce geometrical confinement, so that the polymeric material exhibits unusual physical properties compared to its bulk behavior\cite{Tsui2001,Rathfon2011}. Viscoelastic properties are no exception and are affected beyond a certain confinement strength. This is clearly reported in experiments\cite{Campise2017,Aoki2008}; however, the precise link between these modifications and the changes in the topological structure, or entanglement network, is not fully understood yet. Indeed, the manner in which the entanglement network is modified under confinement is a subject of current interest. As entanglements are not directly observable in experiments, numerical simulations are an essential tool to study their nature. Since entangled chains have very long relaxation times, classical molecular dynamics is quite limited for such studies. Recently, new coarse-graining techniques were introduced to simulate entanglements, such as \textit{slip-springs} or \textit{slip-links}, which introduce a temporary attractive force between nearby beads, imitating entanglement effects\cite{Chappa2012,Delbiondo2013,Masnada2013,Ramirez2013,Ramirez2015, Ramirez2017}. However, in such studies, the effect of heterogeneity or confinement on the slip-link (entanglement) density has to be specified somewhat arbitrarily, so that it becomes essential to inform such techniques using a more microscopic approach, numerical or theoretical. Recently, a step in this direction was taken by extending Silberberg's principle of conformational transfer to predict the entanglement reduction in thin films or cylinders as a function of the aspect ratio between the film thickness (or cylinder radius) and the end-to-end chain distance\cite{Sussman2014,Sussman2016}. The predictions of the theory were tested using molecular dynamics simulations, albeit in weakly entangled systems. 
In this paper, we extend this analysis to more strongly entangled systems, using a technique based on ultrasoft potentials to speed up the simulation \cite{Korolkovas2016}. Our primary aim is to unveil how, in a thin film built with linear polymers, the geometrical confinement acts as an external field that modifies the entanglement state of the system, and what its most relevant consequences are. {The article is organized as follows: In the next section, we describe the simulation model and the methods and protocols used in our study. The section \textit{``Results and discussion''} then presents the results, with five subsections addressing different aspects of the confinement effect. The main conclusions are summarized in the last section.} \section{Model and methods} \label{sec:1} \hspace{\parindent} The model is based on a new, original approach to simulating entangled polymers in melt or concentrated-solution conditions, reported in an earlier work \cite{Korolkovas2016}, which was recently used successfully to study polymer brushes under shear flow\cite{Korolkovas2017}. The main idea is to use a pseudo-continuous model of a polymer solution, consisting of long chains interacting through a soft potential field. The motion is then resolved using Brownian dynamics with large time steps. We solve numerically for the motion of $C$ chains in dense conditions, each described by a continuous curve $\mathbf{R}_{c}(t, s)$, with $t$ the time and $s \in (0,1)$ the monomer index. The continuous backbone is represented by a finite number of discrete points $j=1,2,...,J$ which, in general, oversample the chain. Choosing $J=N$, the chain reduces to the standard bead-spring model, which for this soft potential leaves gaps that may allow chains to cross each other. A novel aspect of this coarse-graining is that such crossings are avoided by oversampling the chains enough to effectively suppress the gaps along the backbone. 
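The effect of oversampling can be illustrated with a minimal sketch (our own example; the number of modes, the mode-amplitude scaling and the random seed are arbitrary choices, not the parameters of the actual simulations): a chain is synthesized from $N$ Rouse modes and the resulting smooth curve is evaluated at $J\geq N$ backbone points, so the largest gap between consecutive points shrinks roughly as $1/J$.

```python
import numpy as np

# Sketch: a chain described by N Rouse modes is evaluated
# ("oversampled") at J >= N backbone points s_j.  The maximum gap
# between consecutive points shrinks as J grows, which is what
# suppresses chain crossings in the soft-potential model.

rng = np.random.default_rng(0)
N, b = 32, 1.0
p = np.arange(1, N)
# Gaussian-chain-like mode amplitudes decaying as 1/p (prefactor illustrative)
X = rng.standard_normal((N - 1, 3)) * (np.sqrt(N) * b / (np.pi * p))[:, None]

def backbone(J):
    """Evaluate the continuous curve R(s) at J midpoint samples."""
    s = (np.arange(J) + 0.5) / J
    cosps = np.cos(np.pi * np.outer(p, s))        # shape (N-1, J)
    return np.sqrt(2.0) * cosps.T @ X             # shape (J, 3)

def max_gap(J):
    r = backbone(J)
    return np.max(np.linalg.norm(np.diff(r, axis=0), axis=1))

for J in (N, 2 * N, 4 * N):
    print(J, round(max_gap(J), 3))
```

Refining $J$ only resamples the same conformation; the $N$ Rouse modes remain the physical degrees of freedom.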
In this work, we found that $J=4N$ is sufficient to describe the chains well in all our simulations. Every chain has $N$ degrees of freedom that correspond to the usual Rouse modes (or, alternatively, to $N$ beads through the usual Rouse transformation, see Ref. \citenum{Korolkovas2016}), and follows the stochastic first-order equation of motion:
\begin{eqnarray}
\zeta \frac{\partial \mathbf{R}_{c}(t, s)}{\partial t} = F_{s} - N \nabla V_{c} + \sqrt{2k_B T\zeta}\mathbf{W}_{c}(t, s)
\label{Eq:Motion}
\end{eqnarray}
where $\zeta= N\zeta_0$ is the friction coefficient of the chain center of mass. The strength of the thermal noise is modeled by a Wiener process $\langle\mathbf{W}_{c}(t, s) \mathbf{W}_{c'}(t', s')\rangle=\delta_{cc'}\delta(t-t')\delta(s-s')$. In Eq. \ref{Eq:Motion}, $F_s$ models the bonded (bead-spring) interaction:
\begin{eqnarray}
F_{s} = \left(\frac{3 k_B T}{Nb^2}\right) \frac{\partial^2 \mathbf{R}_{c}(t, s)}{\partial s^2},
\end{eqnarray}
where $Nb^2$ is the mean-square end-to-end distance of a free chain, and can be combined with other parameters to define the microscopic unit of time, $\tau= \zeta_0 b^2/k_BT$.
\noindent $V_{c}$ describes the nonbonded interactions between chains:
\begin{eqnarray}
V_{c} = \sum_{c'=1}^{C} \int_0^1 \Phi[\mathbf{R}_{c}(t, s)-\mathbf{R}_{c'}(t, s')] ds'
\end{eqnarray}
Here, we propose for $\Phi(r)$ a soft potential built from a combination of Gaussian functions, which takes into account both relevant interactions: excluded volume and attractive forces:
\begin{eqnarray}
\Phi(r) = \left(\frac{N}{J}\right) k_B T \left[(w+1)e^{-r^2/2\lambda^2} - we^{-r^2/4\lambda^2} \right]
\label{Eq:SoftPotential}
\end{eqnarray}
\noindent where $w\geq0$ is a parameter that controls the relative weight of the attractive part. At this point, it is important to remark that for potentials such as the one proposed in Eq. \ref{Eq:SoftPotential} a problem of thermodynamic stability may arise, so the selection of the value of $w$ is not a trivial question.
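As an illustration, the pair potential of Eq. \ref{Eq:SoftPotential} can be sketched in a few lines of Python (a hedged sketch; the function name, the default $\lambda=1$, and the oversampling factor $N/J=1/4$, matching $J=4N$, are our own choices):

```python
import numpy as np

# Sketch of the soft pair potential Phi(r) of the model, in units of k_B T.
# lam (the Gaussian width lambda) and n_over_j = N/J are illustrative
# defaults; n_over_j = 0.25 corresponds to the oversampling J = 4N.
def soft_potential(r, w=0.5, lam=1.0, n_over_j=0.25):
    repulsion = (w + 1.0) * np.exp(-r**2 / (2.0 * lam**2))
    attraction = w * np.exp(-r**2 / (4.0 * lam**2))
    return n_over_j * (repulsion - attraction)

# Finite at contact (soft core), attractive at intermediate range:
phi_contact = soft_potential(0.0)   # (N/J) * ((w+1) - w) = N/J
phi_tail = soft_potential(3.0)      # wider attractive Gaussian dominates
```

The key feature is that the force remains finite at zero separation (soft core), while the wider, weaker Gaussian provides the attraction responsible for film self-assembly.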
In fact, it is well known\cite{Louis2000,Heyes2008,Heyes2010} that when bodies without an infinitely repulsive core (i.e., with a finite force at zero separation) also interact through attractive forces, too strong an attraction may overwhelm the weak short-range repulsion and fail to prevent a ``collapse'' of the system, allowing all particles eventually to overlap in a finite region of space: the thermodynamic catastrophe occurs. Well-defined criteria to ensure thermodynamic stability were derived by Fisher and Ruelle\cite{FisherRuelle,Ruelle1999}. In this work, we determined a thermodynamically safe interval of values for $w$ by applying these criteria, following a straightforward approach proposed in Ref. \citenum{Heyes2007}. We found that the stability condition for this parameter is $w \leq (2^{3/2}-1)^{-1}$, and we use the value $w=0.5$ for all the simulations reported here. More details are given in Appendix I. A central point of this coarse-grained description is the use of an approximate but very fast method of evaluating the interparticle forces. The technique splits the force into two terms and evaluates them on a staggered grid. The first term accounts for the short-range interactions and the second one for the long-range contributions. The short-range part is calculated through a linearization of the Gaussian force, and the long-range part through a convolution between the density field and the potential in Fourier space, where the periodic boundary conditions are incorporated naturally. The matrix-oriented nature of this problem allows a transparent parallelization within the GPU paradigm, which enabled us to take advantage of these high-performance devices. Hence, the simulation code was programmed in CUDA with an optimized use of the available memories, and all simulations reported in this manuscript were run on two Nvidia Quadro P1000 GPU cards housed in a conventional desktop computer.
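The stability bound quoted above can be checked numerically: a sufficient condition of the Fisher--Ruelle type is a non-negative Fourier transform of the pair potential, and for the Gaussian combination of Eq. \ref{Eq:SoftPotential} the binding constraint sits at $k=0$. A minimal sketch in reduced units ($\lambda = 1$; the function names are ours):

```python
import numpy as np

# Fourier transform of Phi(r) up to a positive prefactor: a 3D Gaussian
# exp(-r^2 / 2 s^2) transforms to s^3 * exp(-k^2 s^2 / 2), so the wider
# attractive Gaussian (s = sqrt(2)) picks up a factor 2^(3/2).
def phi_hat(k, w):
    return (w + 1.0) * np.exp(-k**2 / 2.0) - w * 2.0**1.5 * np.exp(-k**2)

# The transform is smallest at k = 0, giving w <= 1/(2^(3/2) - 1) ~ 0.547.
w_max = 1.0 / (2.0**1.5 - 1.0)

k = np.linspace(0.0, 10.0, 2001)
stable_at_half = bool(np.all(phi_hat(k, 0.5) >= 0.0))   # w = 0.5 of the paper
unstable_above = bool(phi_hat(0.0, 1.1 * w_max) < 0.0)  # too much attraction
```

This confirms that the working value $w=0.5$ lies inside the stable interval, while values above the bound make the $k=0$ mode unstable.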
In order to reach the equilibrium state quickly, we use the method proposed in Ref. \citenum{Subramanian2010}. The method starts by randomly placing $C$ monomers in a box with the proper dimensions to set the target density. Initially, there are thus $C$ \textit{chains} of one monomer each. Then, the method systematically adds monomers along the chain backbones, rescaling the box to conserve the density. This process is repeated until the desired chain length is reached. We then run a simulation in which the mean square displacement (MSD) of the central monomer is monitored, ensuring that the chains diffuse far enough to sample the film thickness adequately and that the equilibration time has passed before we start to compute observables.
\begin{figure}[h]
\centering
\includegraphics[width=16 cm]{Fig01.png}
\caption{(a) A free-standing film of thickness $h_{eff} \sim 28$ containing $C=64$ chains of length $N=512$ built with this model. The 3D box shown is not the simulation box and is drawn only to improve visualization. The chains are shown in their entirety, without taking into account the periodic boundary conditions. (b) Primitive path reduction of the chains obtained by the $Z1$ algorithm. Colors are preserved between the original chains and the corresponding primitive paths.}
\label{fig:fig1}
\end{figure}
The dimensions of the cubic simulation box were chosen to create a system with an initial monomer density of $\rho_i = 0.12$, so $L_{box}^3 = NC/\rho_i$. As a result of the attractive interaction, the system spontaneously forms a thin film in the central region of the box, reaching an equilibrium density of $\rho_f=0.277$ inside the film. This value is determined from the density profile shown in Figure \ref{fig:fig2}. This final density depends on the parameter $w$ which, as mentioned before, is fixed to $w=0.5$ in our study.
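The box sizing of this protocol is a one-line computation; as a sketch for the $C=64$, $N=512$ film (variable names are ours, numbers from the text, and the thickness prediction is simple mass conservation at fixed lateral area):

```python
# Cubic box set from the initial monomer density: L_box^3 = N*C/rho_i.
# After the film collapses to its equilibrium density rho_f, mass
# conservation predicts a thickness h_eff = (rho_i/rho_f) * L_box.
N, C = 512, 64
rho_i, rho_f = 0.12, 0.277

L_box = (N * C / rho_i) ** (1.0 / 3.0)
h_eff_pred = (rho_i / rho_f) * L_box
```

For these parameters the predicted thickness is close to the $h_{eff} \sim 28$ film of Figure \ref{fig:fig1}a.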
We also confirmed that the largest radius of gyration is roughly three times smaller than $L_{box}$, which should be enough to ensure that the chains do not interact with their periodic images. Figure \ref{fig:fig1}a shows an instantaneous configuration of a free-standing film obtained with this preparation protocol. We have chosen to work with monodisperse linear chains of $N = 512$, $1024$ and $2048$ monomers forming self-confined films containing $C = 8$, $16$, $32$, $64$, $128$, $256$, and $512$ chains. Also, in order to have reference systems for the different chain lengths, we performed simulations of $C=64$ chains with $N=512$, $1024$ and $2048$ monomers with the same parameters as before but in bulk conditions, i.e., setting the box dimensions to obtain a constant density of $\rho_{bulk}=0.277$.
\begin{figure}[h]
\centering
\includegraphics[width=7.8 cm]{Fig02.png}
\caption{Monomer density profiles for a chain length $N=1024$ and films built with $C=8, 16, 32, 64$ and $128$ chains.}
\label{fig:fig2}
\end{figure}
Figure \ref{fig:fig2} shows the monomer density profiles for some films built with chains of length $N=1024$. The film thickness is obtained by fitting these profiles with the following hyperbolic tangent function of the $z$ coordinate:
\begin{eqnarray}
\rho(z) = \frac{\rho_0}{2} \left( 1 - \tanh\left(\frac{\arrowvert z - z_0 \arrowvert - \xi}{d}\right) \right)
\label{Eq:hiperbolic}
\end{eqnarray}
where $\rho_0$ is the density in the interior of the film, $\xi$ is the half-width of the interior region, $z_0$ is the position of the middle of the film, and $d$ is a measure of the width of the interface, which is a consequence of the density fluctuations near the surface. Finally, the effective film thickness is defined as $h_{eff} = 2\xi$.
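The fit of Eq. \ref{Eq:hiperbolic} is a standard nonlinear least-squares problem; a hedged sketch using synthetic data in place of a measured profile (all names, parameter values, and the noise level below are our own illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def rho_profile(z, rho0, z0, xi, d):
    # Hyperbolic-tangent profile: ~rho0 inside |z - z0| < xi, ~0 outside.
    return 0.5 * rho0 * (1.0 - np.tanh((np.abs(z - z0) - xi) / d))

# Synthetic "measured" profile standing in for simulation data.
z = np.linspace(-40.0, 40.0, 400)
rng = np.random.default_rng(0)
data = rho_profile(z, 0.277, 0.0, 14.0, 2.0) + 0.002 * rng.normal(size=z.size)

popt, _ = curve_fit(rho_profile, z, data, p0=[0.3, 0.0, 10.0, 1.0])
rho0_fit, z0_fit, xi_fit, d_fit = popt
h_eff = 2.0 * xi_fit   # effective film thickness
```

The fit recovers the plateau density and the half-width $\xi$, from which $h_{eff}=2\xi$ follows directly.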
By this measure, the thicknesses of the profiles in Figure \ref{fig:fig2} were $\sim 17.6$, $22.2$, $28.0$, $35.3$, and $44.4$ in units of the monomer diameter $\lambda$, and the inner density for all of them is $\rho=0.277$. As expected, these values are in good agreement with the direct estimate of the thickness from $h_{eff}=(\rho_i/\rho_f)L_{box}$. To characterize the behavior of some important vectors (segments of the $PP$, $\mathbf{R_{ee}}$, etc.), we will use the second Legendre polynomial:
\begin{eqnarray}
P_2 = \frac{3}{2}\langle\cos^2(\theta)\rangle-\frac{1}{2}
\label{Eq:P2_param}
\end{eqnarray}
where $\theta$ is the angle between the vector under study and a given \textit{fixed} direction of interest defined by a unit vector called the director, which will be indicated explicitly in each case. $P_2$ is widely used to study nematic order in diverse systems (liquid crystals, etc.), and is also helpful to quantify the behavior of a vector (or a vector field) around a given direction of interest. This order parameter lies within the interval $-0.5 \leq P_2 \leq 1$: a value of $P_2 = 1$ indicates that the vectors under analysis align perfectly with the reference direction, while $P_2 = 0$ corresponds to an isotropic distribution around the reference direction. The negative lower bound, $-0.5$, corresponds to vectors all lying in the plane perpendicular to the director. The topological analyses presented in the following are all performed using the \textit{$Z1$ algorithm}\cite{Kroger2005,Shanbhag2007,Karayiannis2009,Hoy2009}, a method based on the MD trajectories which finds entanglements by geometrical minimization. In this code, all chain ends are kept fixed in space, excluded volume interactions are disabled, but the chain uncrossability condition is preserved. Then, a set of geometric operations is applied to all these \textit{pseudo}-chains, monotonically reducing their contour lengths.
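The order parameter of Eq. \ref{Eq:P2_param} is straightforward to evaluate for a set of vectors; a minimal sketch (the function name is ours), together with the two limiting cases quoted in the text:

```python
import numpy as np

def p2_order(vectors, director):
    """3/2 <cos^2(theta)> - 1/2 for the angle between each vector and the director."""
    v = np.asarray(vectors, dtype=float)
    n = np.asarray(director, dtype=float)
    n = n / np.linalg.norm(n)
    cos_theta = (v @ n) / np.linalg.norm(v, axis=1)
    return 1.5 * np.mean(cos_theta**2) - 0.5

z_dir = np.array([0.0, 0.0, 1.0])
aligned = np.tile(z_dir, (100, 1))          # all along the director: P2 = 1
t = np.linspace(0.0, 2.0 * np.pi, 100)
in_plane = np.column_stack([np.cos(t), np.sin(t), np.zeros_like(t)])  # P2 = -1/2
```

Applying `p2_order` to the aligned set returns $1$, and to the in-plane set returns $-1/2$, reproducing the bounds of the interval $-0.5 \leq P_2 \leq 1$.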
Eventually, the method builds a $PP$ for each chain, thereby reducing the linear polymer system to an entanglement network of $PPs$. This iterative geometrical minimization procedure terminates as soon as the mean length of all $PPs$ has converged. Figure \ref{fig:fig1}b shows the $PP$ network obtained by applying the $Z1$ algorithm to the film plotted in Figure \ref{fig:fig1}a; it is important to remark here that the chains are drawn in their entirety, i.e., without cutting at the periodic boundaries of the simulation box. As a result, some chains (or chain ends) that appear isolated do have entanglements, as they cross the box boundary and are effectively surrounded by periodic images of the chains represented. Additionally, the $Z1$ code provides not only the statistical properties of the underlying topological network but also the positions of the interior \textit{kinks}\cite{Shanbhag2007,Karayiannis2009} along the three-dimensional $PP$ of each chain. For long chains, the number of kinks is proportional to the number of entanglements, and in this context both terms can be considered \textit{equivalent}. In this approach, self-entanglements (intramolecular knots) are neglected, as they represent a small fraction and are irrelevant for most polymeric systems.

\section{Results and discussion}
\subsection{Statistics of entanglements in bulk}
\hspace{\parindent} As a reference, we start by using the $Z1$ algorithm\cite{Kroger2005,Shanbhag2007,Hoy2009,Karayiannis2009} to perform a topological analysis of the bulk system configurations.
\begin{figure}[h]
\centering
\includegraphics[width=8.6 cm]{Fig03.png}
\caption{ (a) Temporal evolution of the average number of entanglements per chain ($Z$) measured by the $Z1$ algorithm after equilibration for three different melts built with $C=64$ chains of lengths $N=512$, $1024$ and $2048$ in bulk conditions at $\rho_{bulk}=0.277$.
(b) Time-averaged number of entanglements per chain $\langle Z \rangle$ as a function of the chain length $N$. }
\label{fig:fig3}
\end{figure}
Figure \ref{fig:fig3}a shows the temporal evolution of the total number of entanglements per chain ($Z$) after equilibration. This number fluctuates slightly around its average value, which indicates that the systems were well equilibrated. Moreover, as expected in bulk at fixed density, the average number of entanglements per chain $\langle Z \rangle$ increases linearly with the chain length $N$, with a slope of $\alpha = 0.03125$ (Figure \ref{fig:fig3}b). This slope is just the reciprocal of the entanglement length $N_e = 1/\alpha = 32$ (in number of monomers), which can be used to characterize the crossover between the Rouse and reptation regimes. In the reptation model, $N_e$ is defined as the arc length of a chain whose root-mean-square end-to-end distance equals the tube diameter $a$ ($N_e=(a/b)^2$, with $b$ the statistical segment length). The statistics of entanglements in bulk systems are well explained by the \textit{chain packing model}\cite{Lin1987,Kavassalis1988,Fetters1994}. Essentially, the idea is that the larger the dimensions of a chain, the greater the volume pervaded by that chain, and hence the greater the number of neighboring chains it will encounter and with which it might entangle. In this model, $N_e$ is defined by the ratio of the pervaded volume $V_p$ to the volume actually occupied by the chain, $V_c$. Although $V_p$ is not easy to calculate, a well-accepted estimate is the volume spanned by one of the characteristic lengths of the chain, $R_{ee}$ or $R_g$. Thus, the pervaded volume can be estimated as $V_p \propto R_{ee}^3$, while the effective volume occupied by the chain is $V_c \propto N\lambda^3$.
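The extraction of $N_e$ from the bulk data amounts to a linear fit of $\langle Z \rangle$ versus $N$; a sketch with illustrative $(N, \langle Z \rangle)$ pairs placed exactly on the reported slope $\alpha = 0.03125$ (the $\langle Z \rangle$ values below are constructed for illustration, not measured):

```python
import numpy as np

# <Z> grows linearly with N in bulk; the slope is the reciprocal of the
# entanglement length, N_e = 1/alpha.
N = np.array([512.0, 1024.0, 2048.0])
Z = 0.03125 * N          # illustrative data lying on the reported line

alpha, intercept = np.polyfit(N, Z, 1)   # linear least-squares fit
N_e = 1.0 / alpha                        # entanglement length in monomers
```
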
This model assumes that an entanglement arises when the molecular weight and the concentration are such that at least two chains share the same pervaded volume, i.e., $V_p/V_c \sim 2$. These volumes scale differently with molecular weight: $V_p \propto N^{3/2}$ while $V_c \propto N$. As a result, increasing the chain length increases the number of chains allowed to share the same pervaded volume, favoring the interchain contacts that lead to entanglements. In the following, we will, however, see that this model is not sufficient to explain the statistics of entanglements in confined systems.

\subsection{Global impact of the confinement on the entanglements}
\hspace{\parindent} Figure \ref{fig:fig4} shows the total number of entanglements per chain relative to its bulk value, $\langle Z \rangle / \langle Z \rangle_{bulk}$, for thin films of various thicknesses. The film thickness $h_{eff}$ is normalized here using the average end-to-end distance in bulk conditions for the same chain length. This normalization is guided by the proposal of Ref. \citenum{Sussman2014}, which suggests that the corresponding curve should be universal in the limit of large molecular weight.
\begin{figure}[h]
\centering
\includegraphics[width=8.0 cm]{Fig04.png}
\caption{ Normalized reduction of entanglements per chain as a function of confinement for all free-standing film thicknesses studied here, for chain lengths $N=512$, $1024$ and $2048$. Data for a similar system reported in Ref. \citenum{Sussman2016} and the model proposed in Ref. \citenum{Sussman2014} are also included. }
\label{fig:fig4}
\end{figure}
Globally, we find that confinement leads to a decrease in the average number of entanglements per chain. Qualitatively, this result is in good agreement with observations in experiments\cite{SiLun2005,Rathfon2011,LiuYujie2015} and simulations\cite{Cavallo2005,Vladkov2007,Ramirez2015}, and with the theoretical model proposed in Ref. \citenum{Sussman2014}.
However, Figure \ref{fig:fig4} shows that, quantitatively, there is a notable difference between our results and the model proposed by Sussman and coworkers\cite{Sussman2014}. The model accounts reasonably well for the data obtained for the shorter chains, but strong deviations are observable for thin films made of long chains. In those films, a decrease of the entanglements is observed only when the film is significantly thinner than the size of the unperturbed chain. Data for $N=2000$ extracted from a recent manuscript\cite{Sussman2016} display a similar trend. In the next section, we discuss possible origins of the discrepancy between theory and simulation. The $Z1$ algorithm also provides the primitive path ($PP$) conformation of each chain, from which it is possible to determine the positions of the entanglements within the film. A spatially resolved profile of the entanglement density across the film is shown in Figure \ref{fig:fig5} and compared with the monomer density profile for films of different thicknesses built with chains of $N=1024$ monomers.
\begin{figure}[h]
\centering
\includegraphics[width=7.8 cm]{Fig05.png}
\caption{ Monomer and entanglement density profiles, for $N=1024$ and three films of $C=16$, $32$ and $64$ chains, respectively. Two different scales (left blue and right red axes) are used to plot these quantities in the same figure for comparison. }
\label{fig:fig5}
\end{figure}
It is interesting to note in Figure \ref{fig:fig5} that the entanglements sample the space uniformly within the film, exhibiting a notable decrease only near the surface.

\subsection{Understanding the discrepancy between simulations and theory}
\hspace{\parindent} The theoretical model presented in Ref.
\citenum{Sussman2014} is based on three fundamental hypotheses: (I) the validity of the principle of conformational transfer proposed by Silberberg\cite{Silberberg1982}, generalized to a thin-film geometry; (II) the distribution of orientations of the end-to-end vector is made anisotropic by the geometric confinement, and this orientation distribution is directly communicated to the primitive path network; (III) the distribution of orientations at the $PP$ scale is used to predict the changes in the entanglement network. In the following, we analyze the validity of these assumptions for our simulations, in order to understand the origin of the observed discrepancy. Hypothesis (I) involves a modification of the Silberberg model\cite{Silberberg1982}, which treats the chain as a random walk and uses reflecting boundary conditions to compute the changes in the chain conformation in the presence of a wall. The original model formulated by Silberberg considers the perturbation of chains near a single wall and makes quantitative predictions for the statistics of the chain conformations in the direction normal to the surface. Following the philosophy of Silberberg, the authors of Ref. \citenum{Sussman2014} proposed an extension of this idea to thin films. Two walls with reflecting boundary conditions delimit the film, and the contributions of both surfaces are added to obtain the chain conformation inside the film. To first order, only the first two reflections are taken into account. Formally, in analogy with the method of images in electrostatics, second- and higher-order reflections should also be included, so that the final result involves summing an infinite series. Fortunately, due to the fast decay of the higher-order contributions, the series converges quickly, and only a few terms are needed to reach good accuracy.
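The reflected-walk construction behind hypothesis (I) can be illustrated with a small Monte-Carlo sketch: folding a free one-dimensional walk back into the slab is equivalent to imposing reflecting boundary conditions. Everything below (function names, walk counts, starting position at mid-film) is our own illustration, not the calculation of Ref. \citenum{Sussman2014}:

```python
import numpy as np

rng = np.random.default_rng(1)

def ree2_z(n_steps, h, n_walks=4000):
    """<R_ee,z^2> of a unit-step walk between reflecting walls at z=0 and z=h."""
    steps = rng.choice([-1.0, 1.0], size=(n_walks, n_steps))
    z_free = h / 2.0 + np.cumsum(steps, axis=1)         # free walk from mid-film
    z_fold = z_free % (2.0 * h)                         # fold onto [0, 2h) ...
    z = np.where(z_fold > h, 2.0 * h - z_fold, z_fold)  # ... then reflect
    return np.mean((z[:, -1] - h / 2.0) ** 2)

free = ree2_z(400, 1e6)      # walls far away: <R_z^2> ~ n_steps
confined = ree2_z(400, 4.0)  # strong confinement compresses R_z
```

With the walls far apart the walk recovers the free-chain value $\langle R_{ee,z}^2\rangle \approx n_{\mathrm{steps}}$, while a slab much thinner than the free chain size strongly suppresses the normal component, which is the qualitative content of the extended Silberberg picture.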
This extended model allows one to predict the change in the normal component of the mean end-to-end vector as a function of the distance from the final random-walk step to the surface, $R_{ee,z}^2(z)$. Integrating this function through the film thickness, it is possible to compute the global change in the normal component as a function of the film thickness. For further information on this approach see Ref. \citenum{Sussman2014}. In Figure \ref{fig:fig6}, the prediction of this extended Silberberg model for the perpendicular component of the bulk-normalized mean end-to-end vector, as a function of the normalized film thickness, is compared with our simulation results for different chain lengths; the excellent agreement supports the validity of this first hypothesis.
\begin{figure}[h]
\centering
\includegraphics[width=8.0 cm]{Fig06.png}
\caption{Root-mean-square components of the end-to-end vector as a function of the normalized film thickness for all thin films and chain lengths studied here. Filled symbols are the component normal to the surface of confinement, $R_{ee}^z$, for different chain lengths, and the semi-filled symbols are the components parallel to the surface, $R_{ee}^{x,y}$. The continuous line is the prediction of the extended Silberberg model and the dashed black line is the bulk value $R_{ee,bulk}/\sqrt{3}$. }
\label{fig:fig6}
\end{figure}
Figure \ref{fig:fig6} also displays the normalized parallel components ($R_{ee}^x$, $R_{ee}^y$) averaged over the film thickness. As expected, these components are only slightly affected by confinement and exhibit a bulk-like behavior. In order to check the second hypothesis (II), we analyzed the orientational probability distributions of the $PP$ segments and of the $R_{ee}$ vectors with respect to the $z$-direction for different degrees of confinement, here expressed as $\delta = h_{eff}/R_{ee}$.
These results are shown in Figures \ref{fig:fig7}a, b and c, where it is immediately evident that, at least globally, the orientations present at the $R_{ee}$ scale are not communicated to the $PP$ length scale, shedding doubt on the validity of this second hypothesis.
\begin{figure}[h]
\centering
\includegraphics[width=8.6 cm]{Fig07.png}
\caption{ (a), (b), (c) Average distribution of the angle with respect to the $z$-axis of the $PP$ segments (solid lines) and the $R_{ee}$ vectors (dashed lines) for different chain lengths and confinement strengths $\delta$. (d) Profile of the $P_2$ parameter evaluated using the angle between the $PP$ segments and the $R_{ee}$ vector. }
\label{fig:fig7}
\end{figure}
The chains are much more strongly oriented at the scale of the end-to-end vector than at the scale of the primitive path segments. The data in Figure \ref{fig:fig7}a show that as the confinement increases ($\delta$ decreases) the two distributions become peaked around zero, i.e., the chains tend to lie parallel to the interface. However, the magnitude of the effect is much more pronounced for the end-to-end vector than for the primitive path segments. Furthermore, this difference becomes more notable for longer chains. The distributions for $R_{ee}$ become more peaked for $N=1024$ (Figure \ref{fig:fig7}b) and even more so for $N=2048$ (Figure \ref{fig:fig7}c) at a similar degree of confinement, while the orientation of the primitive path seems insensitive to the chain length and only slightly dependent on $\delta$. In addition to this global analysis, we have studied the local behavior of both vectors to assess the possible existence of a local correlation. We have evaluated a profile of the $P_2$ order parameter for the angle between the end-to-end vector of a chain and the $PP$ vectors (segments) belonging to this chain.
The sketch in Figure \ref{fig:fig8} illustrates this idea, and Figure \ref{fig:fig7}d reports this observable.
\begin{figure}[h]
\centering
\includegraphics[width=15.0 cm]{Fig08.png}
\caption{ The method implemented to study the local orientational correlation between the $PP$ segments and the end-to-end vector through $P_2$. All panels show the $PP$ (in blue) and the end-to-end vector (in green). (a) Highlighted in red, a chain embedded in the film; all other chains are thinned and transparent. (b) All other chains are removed to improve visualization. (c) The segments $S_1,S_2,..., S_m$ forming the $PP$ are used to evaluate $P_2$ relative to the end-to-end vector. }
\label{fig:fig8}
\end{figure}
The local correlation becomes more important near the surface; however, all values remain below $P_2=0.3$, indicating a very poor orientational correlation between these vectors. Moreover, the range over which the correlation is felt appears to be independent of the chain length, and for the longer chains the correlation is completely lost in the middle of the film. Clearly, these observations indicate that, while the extension of the Silberberg model gives an accurate picture of the global chain conformation, the primitive path is much less affected by confinement than expected in the theory. Indeed, the thickness over which the orientation of the primitive path is affected does not appear to scale with molecular weight but is restricted to a finite layer at the surface of the film. This leads to the deviation of $\langle Z \rangle/\langle Z \rangle_{\mathrm{bulk}}$ from the $h_{\mathrm{eff}}/R_{ee}$ scaling reported in Figure \ref{fig:fig4}.
\subsection{Chain conformations and the importance of surface effects}
\begin{figure}[h]
\centering
\includegraphics[width=8.6 cm]{Fig09.png}
\caption{ Profiles, evaluated at the location of the center of mass of the chains within the film, of (a) the bulk-normalized and time-averaged components of the gyration tensor, $G_{xx,yy}$ and $G_{zz}$; (b) the average absolute value of the direction cosine, with respect to the surface of confinement, of the eigenvector associated with the minimum eigenvalue of the gyration tensor. These observables were evaluated for two film thicknesses, $h_{eff}=22.16$ (in blue) and $h_{eff}=44.32$ (in red). }
\label{fig:fig9}
\end{figure}
\hspace{\parindent} So far, we have studied how the confinement affects the statistics of entanglements and the distributions of end-to-end vectors and primitive path segments in self-confined free-standing films. In this section, we investigate how the confinement alters the global shape of the chains. The first set of descriptors are the three diagonal components of the gyration tensor, $G_{xx}$, $G_{yy}$ and $G_{zz}$. The squared gyration radius can be expressed as $R^2_g=G_{xx}+G_{yy}+G_{zz}$. Figure \ref{fig:fig9} shows the averaged value of these components, normalized by the corresponding bulk value, as a function of the position of the center of mass of the chain. To avoid redundancy, we report these observables for only two thicknesses ($h_{eff}=22.16$, blue lines; $h_{eff}=44.32$, red lines) built with chains of $N=512$ monomers, but the data obtained with other chain lengths are very similar. Figure \ref{fig:fig9}a shows that for both thicknesses (continuous lines) the component $G_{zz}$, associated with the direction of confinement, exhibits a noticeable shrinking of the chains in that direction, i.e., the chains break their spatial isotropy and tend to adopt a flattened shape near the surface.
In contrast, the components $G_{xx,yy}$ change only slightly, increasing by as much as $10\%$ compared to their bulk value (dashed lines in Figure \ref{fig:fig9}a), which is in good agreement with previous work\cite{Muller2002}. Until now, we have used the tensor as is, expressed in the canonical reference system $\{x,y,z\}$, i.e., without diagonalizing it into its principal axes. By diagonalizing the $G$ tensor and studying the direction cosine of the eigenvector associated with the minimum eigenvalue (which indicates the direction in which the chain is most compressed), it is possible to determine the main direction of this flattening. Figure \ref{fig:fig9}b reports how the average orientation of this vector is dictated by the position of the chain within the film. Independently of the molecular weight, all chains are flat at the edges of the film, and this effect decreases monotonically toward the interior of the film. This observation is also in good agreement with the compression of the end-to-end vector discussed in the previous section. In Figures \ref{fig:fig9}a and \ref{fig:fig9}b it is seen that the flattening alters the chain shape all across the film thickness for this chain length ($N=512$, $R_g \sim 19.1$). In Figure \ref{fig:fig9}a, at the edge of the thicker film ($h_{eff}=44.32$, red lines) the chain compression reaches a maximum of around $60\%$ relative to the bulk value, as evidenced by the component of the gyration tensor perpendicular to the plane of confinement ($G_{zz}$). The compression then decreases monotonically across the film, reaching a bulk-like state in the center. Considering that the film thickness is here about twice $R_g$ (i.e., the chain size is comparable to the film thickness), it is reasonable that chains whose centers of mass are located roughly in the middle of the film adopt bulk-like conformations.
In the case of the thinner film ($h_{eff}=22.16$, blue lines) the chains experience a stronger shrinkage at the boundary, and although the effect decreases inside the film, the bulk state is not reached in the center. This is also reasonable, since this thickness is only about half the chain size ($2R_g$), so the chain conformation is strongly confined. Even in the center of the film the chain reaches only around half of its bulk size in the $z$-direction. This simple analysis provides an accurate picture of the chain conformations across the film.

\subsection{Primitive path network under confinement}
\begin{figure*}[h]
\centering
\includegraphics[width=17 cm]{Fig10.png}
\caption{Primitive path characterization. Top: average $PP$ segment length ($L_{pp}$) normalized by the bulk value. Bottom: $P_2$ order parameter evaluated for the angle between the $PP$ segments and the $\hat{z}=(0,0,1)$ direction. The position in the profile is computed using the geometrical center of the $PP$ segments. The zone delimited by the centers of mass of the chains is indicated.}
\label{fig:fig10}
\end{figure*}
\hspace{\parindent} In this section, we study how the confinement impacts the primitive path network. To quantify this effect, we have calculated the profiles across the film of two characteristic quantities of the $PP$ segments: the $P_2$ order parameter for the angle between the $PP$ segments and the normal to the plane of confinement ($\hat{z}=(0,0,1)$), and the length of the $PP$ segments. The location within the film was computed using the geometrical center of the $PP$ segments. It is relevant to mention that we only consider these observables inside the so-called \textit{``center of mass zone''} (marked as the CM zone in Figure \ref{fig:fig10}), i.e., the zone within the film reachable by the centers of mass of the chains.
Due to the nature of the $PP$ segments, they can extend beyond the space reachable by the centers of mass of the chains, which is why $L_{pp}$ (respectively the $P_2$ of $PP_z$), after reaching a maximum (minimum), goes to zero. We note that the location of these extrema coincides with the boundary of the center of mass zone. Beyond this limit, the data are mostly related to the tails of the chains. Interestingly, we found that the primitive path segments are quite insensitive to the confinement and the chain length, exhibiting two characteristic \textit{weak} responses depending on their location within the film. There is a relatively wide central region of the film where the length of the segments is constant; at some point near the edge, their length increases monotonically, stretching by up to $10\%$--$18\%$. We note that these deviations start at a distance of around one segment length $L_{pp,bulk}$ from the edge. This seems reasonable if we consider that the segments contributing statistically to this part of the data lie in the zone of maximum chain compression, which, as explained before, locally decreases the number of entanglements while increasing the $PP$ segment length. Furthermore, this compression forces the $PP$ to align parallel to the surface of confinement, as seen in the bottom graphs of Figure \ref{fig:fig10} showing $P_2$, where the negative values indicate orientation perpendicular to the $z$-direction. This behavior seems universal for the $PP$ network and only weakly dependent on the confinement strength and the molecular weight. Only for strong confinement ($h_{eff} = 22.16$, $h_{eff} = 27.92$) do the curves obtained for different molecular weights depart slightly from each other by a small vertical shift.

\section{Summary and conclusions}
\hspace{\parindent} In summary, we have performed an analysis of the entanglement statistics in a coarse-grained model of free-standing thin films made of long linear polymers.
We found that the geometric confinement breaks the isotropic conformation of the chains, compressing and flattening their shape in the direction perpendicular to the plane of confinement, as shown in Figures \ref{fig:fig9}a and \ref{fig:fig9}b. This anisotropic contraction does not seem to be completely compensated in the lateral directions, as the lateral extension increases by just around $10\%$ ($G_{xx,yy}$ in Figure \ref{fig:fig9}a), which results in an effective decrease of the volume pervaded by the chain. This decrease in the pervaded volume reduces the number of neighboring chains inside the shared volume, lowering the potential contacts between them, with the chief consequence being an effective reduction of entanglements while the monomer density remains constant. However, the flattening effect is poorly captured by the $PP$ network, as reported in Figure \ref{fig:fig10}. First, for all chain lengths under \textit{weak} confinement ($h_{eff}=44.32$), the $PP$ segments seem to be unaffected and behave in a bulk-like manner over a wide range in the center of the film. It is only near the surface that the flattening becomes more important, inducing the segments to become parallel to the surface of confinement and increasing their length slightly (by around $15\%$). This pronounced change in both observables takes place when the $PP$ is within a distance comparable to $L_{pp,bulk}$ from the surface. In the first two graphs of $L_{pp}/L_{pp,bulk}$ in Figure \ref{fig:fig10} it is notable how, under strong confinement ($h_{eff}=22.16-27.92$), only the $PP$ segments associated with longer chains shift their length slightly (just around $10\%$ with respect to the bulk value) inside the film, while conserving the characteristic increase near the film edge. We also performed a comparison of our data with the theory proposed in Ref.
\citenum{Sussman2014}, which models the entanglement reduction in confined systems as a function of the strength of confinement. After detailed tests of its central hypotheses, we found evidence that the second hypothesis, \textit{``the orientational correlations at the end-to-end vector scale created by geometric confinement are directly communicated to the primitive path network''}, does not seem to hold. In fact, we found substantial evidence that these vectors have uncorrelated orientations, except for a thin layer close to the surface, and this effect is even more notable for longer chains. However, the extension of the Silberberg model proposed by the same authors fits the results for the chain conformation quite well over the whole simulated range. A better understanding of the response of the primitive path to global chain deformation would be desirable to generalize the ideas of Ref. \citenum{Sussman2014}. \section{Appendix I} As mentioned in Section II, since the interparticle potential used in this study is built from a soft-core repulsion and an attractive tail, a problem of thermodynamic stability may arise depending on the relative weight of the two interactions. Thus, to ensure the stability of our system it was necessary to determine a safe range of values for the \textit{independent parameter} $w$ (see Eq. \ref{Eq:SoftPotential}). Originally, the theoretical framework for predicting the stability of these kinds of systems was provided by Fisher and Ruelle\cite{FisherRuelle,Ruelle1999}. According to Proposition 3.2.2 in Ref.
\citenum{Ruelle1999}, the stability is ensured if the total potential energy $U$ of a given system with $N_t$ particles interacting through a pair potential $\Phi(r)$ satisfies the inequality: \begin{eqnarray} U(\mathbf{r}_1,\mathbf{r}_2,...,\mathbf{r}_{N_t}) = \sum^{{N_t}-1}_{i=1}\sum^{N_t}_{j>i} \Phi(\lvert \mathbf{r}_i - \mathbf{r}_j \rvert) \geq -N_t\varepsilon \label{eq:EnergyConvergency} \end{eqnarray} where $\mathbf{r}_i$ is the position vector of particle $i$, and $\varepsilon \geq 0$ is a finite constant independent of $N_t$. This inequality ensures the convergence of the grand partition function. Beyond this formal criterion, Fisher and Ruelle also provided two more straightforward rules which help one decide whether a potential will lead to a stable thermodynamic state. The first is a weaker condition: \begin{eqnarray} \int d\mathbf{r} \, \Phi(r)>0 \end{eqnarray} which is necessary but not sufficient, i.e., if $\int d\mathbf{r} \, \Phi(r)<0$ the system is unstable. A sufficient condition for stability is the one given in Ref. \citenum{FisherRuelle}: \begin{eqnarray} \widetilde{\Phi}(k) = \frac{1}{(2\pi)^3}\int d\mathbf{r} \, \Phi(r) e^{-i\mathbf{k}\cdot\mathbf{r}} \geq 0 \end{eqnarray} with the following equivalent form\cite{Heyes2007}: \begin{eqnarray} \widetilde{\Phi}(k) = \frac{1}{2\pi^2k}\int_0^{\infty} r \, \Phi(r) \sin(kr)dr \geq 0 \label{Eq:finalCriteria} \end{eqnarray} which must be verified for all $k$. Then, applying this criterion to our potential (replacing Eq. \ref{Eq:SoftPotential} in Eq.
\ref{Eq:finalCriteria}) and integrating, an inequality in terms of $w$ is obtained: \begin{eqnarray} \widetilde{\Phi}(k) = \frac{\lambda^3 e^{-k^2 \lambda^2} \left(\sqrt{2} (1+w) e^{k^2 \lambda^2/2} - 4 w \right) }{4 \pi ^{3/2}} \geq 0 \label{Eq:FinalConditionPotential} \end{eqnarray} where it is easy to see that the sign of this expression is determined by the term inside the parentheses: \begin{eqnarray} \sqrt{2} (1+w) e^{k^2 \lambda^2/2} - 4 w \geq 0 \label{eq:Inq1} \end{eqnarray} as all other factors are positive. Moreover, in Eq. \ref{eq:Inq1} it is clear that the term $e^{k^2 \lambda^2/2}$ is greater than or equal to 1 for all $k$ values, and in particular the inequality is satisfied independently of $w$ as $k \rightarrow \infty$. Since the left-hand side of Eq. \ref{eq:Inq1} is smallest at $k=0$ and the inequality must hold for all $k$ values, it suffices to check $k=0$, so the final condition reduces to: \begin{eqnarray} \sqrt{2} (1+w) - 4 w \geq 0 \label{eq:Inq2} \end{eqnarray} which holds if and only if: \begin{eqnarray} w \leq (2^{3/2}-1)^{-1} \label{eq:Inq3} \end{eqnarray} Finally, taking $w$ in the interval $0 \leq w \leq (2^{3/2}-1)^{-1}$ ensures the thermodynamic stability of our system. \begin{acknowledgement} {We are grateful to Prof. Martin Kr\"{o}ger (ETH Z\"{u}rich) for his help with the $Z1$ algorithm.} \end{acknowledgement}
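As an addendum to Appendix I, the bound of Eq. \ref{eq:Inq3} can be verified numerically (a sketch with $\lambda$ set to unity; not part of the original derivation): the sign-determining factor of Eq. \ref{Eq:FinalConditionPotential} stays non-negative on a grid of $k$ values for $w$ at the bound and turns negative just above it.

```python
import numpy as np

w_max = 1.0 / (2.0 ** 1.5 - 1.0)   # stability bound w <= (2^{3/2} - 1)^{-1} ~ 0.547

def bracket(w, k, lam=1.0):
    """Sign-determining factor of the Fourier-transformed potential:
    sqrt(2) (1 + w) exp(k^2 lam^2 / 2) - 4 w  (all other factors are positive)."""
    return np.sqrt(2.0) * (1.0 + w) * np.exp(0.5 * (k * lam) ** 2) - 4.0 * w

k = np.linspace(0.0, 10.0, 2001)
assert bracket(w_max, k).min() >= -1e-9        # non-negative at the bound (minimum at k = 0)
assert bracket(w_max + 0.05, k).min() < 0.0    # violated just above the bound
```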
\section{Introduction} A stacked triangular Ising antiferromagnet (STIA) is a geometrically frustrated spin system that has attracted considerable attention over the past several decades~\cite{Berker,Blankschtein,Coppersmith,Heinonen,Kim,Netz1,Netz2,Plumer-1,Plumer0,Plumer1,Plumer2,Plumer3,Bunker,Nagai,Kurata,Koseki,Todoroki,Meloche,Zukovic1,Zukovic2,Liu} due to its intriguing and controversial frustration-induced behavior, as well as the fact that it reasonably describes some real magnetic materials, such as the spin-chain compounds $\rm{CsCoX}_3$ (X is Cl or Br) and $\rm{Ca}_3\rm{Co}_2\rm{O}_6$. The model consists of layers of triangular lattices stacked on top of each other, thus forming linear chains of spins in the perpendicular direction. The interaction between spins within the chains (or between layers) can be considered to be either ferromagnetic (FSTIA model) or antiferromagnetic (ASTIA model). In the absence of an external magnetic field the physics of both systems is the same and, therefore, most of the previous studies chose the FSTIA model for their investigations~\cite{Berker,Blankschtein,Coppersmith,Heinonen,Kim,Netz1,Netz2,Plumer1,Plumer2,Plumer3,Bunker,Nagai,Kurata,Meloche,Zukovic1,Zukovic2,Liu}. In zero field, the system has been found to undergo a second-order phase transition from the paramagnetic (P) to a partially disordered (PD) phase $(M,-M,0)$, with two sublattices ordered antiferromagnetically and the third one disordered. There is a wide consensus that the transition belongs to the 3D XY universality class~\cite{Berker,Blankschtein,Plumer1,Bunker,Meloche}, although tricritical behavior has also been suggested~\cite{Heinonen}.
Another phase transition at lower temperatures, to a ferrimagnetic (FR) phase $(M,-M/2,-M/2)$ with one sublattice fully ordered and two partially disordered, has been proposed~\cite{Blankschtein,Netz1,Todoroki} but questioned by several other studies~\cite{Coppersmith,Heinonen,Zukovic2,Borovsky}, which argued that the low-temperature phase is a 3D analog of the 2D Wannier phase. In the presence of the magnetic field, most theoretical studies focused on elucidating the peculiar phenomena in the magnetization processes observed in the experimental realizations $\rm{CsCoX}_3$ and $\rm{Ca}_3\rm{Co}_2\rm{O}_6$~\cite{Zukovic1,Kudasov1,Kudasov2,Kudasov3,Yao1,Yao2,Yao3,Qin,Soto,Kudasov}. The critical properties of the FSTIA model have also attracted a lot of interest, due to phase transitions belonging to a variety of universality classes and multicritical behavior. In particular, the Monte-Carlo Mean-Field theory predicted the phase diagram in the temperature-field plane, with a small region of the PD phase stabilized at higher temperatures and small fields and the remaining part occupied by the FR phase~\cite{Netz1}. The P-PD transition line was concluded to be of second order, belonging to the XY universality class; at higher fields, however, the P-FR transition line was identified as first order, in line with its three-state Potts symmetry. The FR-PD transition was argued to belong to the Ising universality class, with a possible crossover to first-order behavior at low temperatures and very small fields. Later Monte Carlo simulations confirmed the first-order nature of the P-FR transition but suggested that the PD phase is probably destabilized by any finite field, and the phase transitions at smaller fields were determined to belong to the tricritical universality class~\cite{Plumer3}.
There have been attempts to also determine the phase diagram of the ASTIA model, which in the presence of the field is expected to differ from the FSTIA model, by the Monte-Carlo Mean-Field~\cite{Netz2} and the Landau~\cite{Plumer-1,Plumer0} theories. Besides the high-temperature P-PD line of second-order transitions, both approaches predicted one~\cite{Netz2} or up to two~\cite{Plumer-1,Plumer0} phase transitions to ferrimagnetic states at lower temperatures, which can be first or second order in the Ising universality class. The goal of the present study is to confront these early results obtained by the above approximate approaches with Monte Carlo (MC) simulations and a finite-size scaling analysis. \section{Model and methods} \label{model} \subsection{Model} We consider the ASTIA model described by the Hamiltonian \begin{equation} H = - J_1 \sum_{\left\langle i, j \right\rangle} \sigma_i \sigma_j - J_2 \sum_{\left\langle i, k \right\rangle} \sigma_i \sigma_k - h \sum_i \sigma_i, \end{equation} where $\sigma_i=\pm1$ is an Ising spin variable, $J_1<0$ and $J_2<0$ are respectively the antiferromagnetic intralayer and interlayer exchange interactions, $h$ is an external magnetic field, and the first and second summations run over the nearest-neighbor pairs within and between the layers, respectively. Due to the antiferromagnetic nature of both interactions $J_1$ and $J_2$, it is desirable to decompose the entire lattice into six interpenetrating sublattices, as shown in Fig.~\ref{fig:ASTIA}. The total coordination number is $z=8$ and each spin is coupled to six neighbors from two sublattices ($3+3$) in the same layer and two neighbors from another sublattice in the adjacent layers. \begin{figure}[t!] \centering \vspace*{-15mm} \includegraphics[width=0.5\textwidth]{astia.eps} \caption{ASTIA lattice partitioned into six sublattices marked by different symbols.
The solid (dashed) lines represent the intralayer (interlayer) interaction $J_1$ ($J_2$).} \label{fig:ASTIA} \end{figure} \subsection{Monte Carlo simulations} In our Monte Carlo (MC) simulations we consider an ASTIA system of size $V=L_x \times L_y \times L_z=L \times L \times 4L/3$, i.e., $L_z=4L/3$ layers of size $L \times L$ stacked along the $z$-axis, comprising in total $V=4L^3/3$ spins. For obtaining temperature dependencies of various thermodynamic functions the linear lattice size is fixed to $L=24$, and for the finite-size scaling (FSS) analysis it takes the values $L=24,36$, and $48$. In all simulations periodic boundary conditions are imposed. Initial spin states are randomly assigned and the updating follows the Metropolis dynamics. The lattice structure and the short-range nature of the interactions enable vectorization of the algorithm. Since the spins on one sublattice interact only with spins on the other sublattices, each sublattice can be updated simultaneously. Thus one sweep through the entire lattice involves just six sublattice updating steps. For thermal averaging, we typically consider $N = 10^5$ MC sweeps in the standard and up to $N = 10^7$ MC sweeps in the histogram MC simulations~\cite{Ferrenberg1,Ferrenberg2}, after discarding another $20$\% of these numbers for thermalization. To assess the uncertainty of the calculated quantities, we perform $10$ runs, using different random initial configurations, and the error bars are taken as twice the standard deviation.
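For illustration, the Metropolis update for the Hamiltonian above can be sketched as follows. This is a simplified, non-vectorized single-spin-flip version (our production runs use the six-sublattice vectorized updating described above); the triangular in-plane geometry is encoded on a square grid with two extra diagonal bonds, a standard representation.

```python
import numpy as np

def metropolis_sweep(spins, J1, J2, h, beta, rng):
    """One Metropolis sweep (V random single-spin-flip attempts) for
    H = -J1 sum_<ij> s_i s_j - J2 sum_<ik> s_i s_k - h sum_i s_i
    on an L x L x Lz stacked triangular lattice with periodic boundaries."""
    L, _, Lz = spins.shape
    for _ in range(spins.size):
        x = int(rng.integers(L)); y = int(rng.integers(L)); z = int(rng.integers(Lz))
        s = spins[x, y, z]
        # six in-plane triangular neighbors (square grid + two diagonal bonds)
        s1 = (spins[(x + 1) % L, y, z] + spins[(x - 1) % L, y, z]
              + spins[x, (y + 1) % L, z] + spins[x, (y - 1) % L, z]
              + spins[(x + 1) % L, (y - 1) % L, z] + spins[(x - 1) % L, (y + 1) % L, z])
        # two interlayer neighbors along the stacking direction
        s2 = spins[x, y, (z + 1) % Lz] + spins[x, y, (z - 1) % Lz]
        dE = 2.0 * s * (J1 * s1 + J2 * s2 + h)   # energy cost of flipping s -> -s
        if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
            spins[x, y, z] = -s
```

For instance, at a field above the saturation value ($h/|J_1| > 8$ for $J_2/|J_1|=-1$) and low temperature, a few sweeps drive a random configuration to the fully polarized state $m=1$.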
We calculate the enthalpy per spin $e=E/V|J_1|=\langle H \rangle/V|J_1|$, where $\langle \cdots \rangle$ denotes the thermodynamic mean value, the sublattice magnetizations per spin \begin{equation} \label{Magn_i} m_\alpha = 6 \langle M_\alpha \rangle/V = 6 \Big\langle \sum_{j \in \alpha}\sigma_{j} \Big\rangle/V,\ \alpha=1,2,\hdots,6, \end{equation} and the total magnetization per spin \begin{equation} \label{Magn_tot} m = \langle M \rangle/V = \Big\langle \sum_{i=1}^{V}\sigma_{i} \Big\rangle/V. \end{equation} The magnetic susceptibility is defined as \begin{equation} \chi_m = \beta (\left\langle M^2 \right\rangle - \left\langle M \right\rangle^2)/V, \label{eq:SuscM} \end{equation} and the specific heat as \begin{equation} C = \beta^2 (\left\langle E^2 \right\rangle - \left\langle E \right\rangle^2)/V, \label{eq:HeatCapZ} \end{equation} where $\beta=1/k_BT$. To measure the degree of ferrimagnetic ordering within the planes and of antiferromagnetic ordering in the stacking direction, we introduce the order parameters $o_{xy}$ and $o_z$, defined as \begin{equation} o_{xy} = \langle O_{xy} \rangle_{z}/L^2 = \langle M_{max}-M_{min}+|M_{med}| \rangle_{z}/L^2, \label{eq:order_par_oxy} \end{equation} and \begin{equation} o_z = \langle O_z \rangle/L_z =\Big\langle \sum_{k=1}^{L_z} (-1)^k\sigma_k \Big\rangle_{xy}/L_z, \label{eq:order_par_oz} \end{equation} where $M_{max}$, $M_{min}$, and $M_{med}$ are the sublattice magnetizations in each plane with the maximum, minimum, and medium (remaining) values, respectively, and the symbols $\langle \cdots \rangle_{z}$ and $\langle \cdots \rangle_{xy}$ denote the mean values taken over the planes and over the chains, respectively.
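The fluctuation estimators of Eqs. \ref{eq:SuscM} and \ref{eq:HeatCapZ} can be evaluated directly from the measurement time series; a minimal sketch (with synthetic samples, not our analysis code):

```python
import numpy as np

def susceptibility(M_samples, beta, V):
    """chi_m = beta (<M^2> - <M>^2) / V from a series of total magnetizations M."""
    M = np.asarray(M_samples, dtype=float)
    return beta * (np.mean(M ** 2) - np.mean(M) ** 2) / V

def specific_heat(E_samples, beta, V):
    """C = beta^2 (<E^2> - <E>^2) / V from a series of total energies E."""
    E = np.asarray(E_samples, dtype=float)
    return beta ** 2 * (np.mean(E ** 2) - np.mean(E) ** 2) / V
```

Both estimators vanish for a constant series and grow with the fluctuations, which is why sharp peaks in these quantities flag the transition region.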
To study phase transitions in the present six-sublattice system, we define the order parameter in accordance with Ref.~\cite{Landau} as \begin{equation} o = \langle O \rangle/V =\Bigg\langle \frac{\sqrt{3}}{3}\left(\sum_{\alpha=1}^{6} O_\alpha^2 \right)^{1/2}\Bigg\rangle\Bigg/V, \label{eq:order_par} \end{equation} where $O_1 = (M_1 - (M_2+M_3)/2)/2$, $O_2 = (M_2 - (M_1+M_3)/2)/2$, $O_3 = (M_3 - (M_1+M_2)/2)/2$, $O_4 = (M_4 - (M_5+M_6)/2)/2$, $O_5 = (M_5 - (M_4+M_6)/2)/2$, $O_6 = (M_6 - (M_4+M_5)/2)/2$, and the corresponding susceptibility \begin{equation} \chi_o = \beta (\left\langle O^2 \right\rangle - \left\langle O \right\rangle^2)/V. \label{eq:SuscO} \end{equation} In order to calculate the critical exponents, and thus determine the order of the transition and also the universality class if the transition is second order, we employ a FSS analysis with the following scaling relations: \begin{eqnarray} C(L) \propto L^{\alpha/\nu}, \\ O(L) \propto L^{-\beta/\nu}, \\ \chi(L) \propto L^{\gamma/\nu}, \label{eq:CrtiExpL} \end{eqnarray} \begin{equation} \frac{d \left\langle O \right\rangle}{d \beta} = \left\langle O \right\rangle \left\langle E \right\rangle - \left\langle O E \right\rangle \propto L^{(1-\beta)/\nu}, \label{eq:dO} \end{equation} \begin{equation} \frac{d \ln \left\langle O^2 \right\rangle}{d\beta} = \left\langle E \right\rangle - \frac{\left\langle O^2 E \right\rangle}{\left\langle O^2 \right\rangle} \propto L^{1/\nu}, \label{eq:dlnO} \end{equation} where $\alpha,\beta,\gamma$, and $\nu$ are the critical exponents corresponding to the specific heat, the order parameter, its susceptibility, and the correlation length, respectively.
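In practice, each of these scaling relations reduces to a straight-line fit on a log-log scale; a minimal sketch (Python, with synthetic peak heights standing in for the measured maxima):

```python
import numpy as np

def fss_exponent(L_values, peak_values):
    """Least-squares slope of log(peak) vs log(L), e.g. gamma/nu from
    chi_max(L) ~ L^{gamma/nu}."""
    slope, _ = np.polyfit(np.log(np.asarray(L_values, dtype=float)),
                          np.log(np.asarray(peak_values, dtype=float)), 1)
    return slope

# Synthetic data obeying the 3D XY value gamma/nu = 1.316/0.671:
L = np.array([24.0, 36.0, 48.0])
chi_max = 0.7 * L ** (1.316 / 0.671)
```

Applying the same fit to the maxima of $d\ln\langle O^2\rangle/d\beta$ yields $1/\nu$, and to those of $d\langle O\rangle/d\beta$ yields $(1-\beta)/\nu$.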
Having estimated the exponent $\nu$, the inverse critical (N{\'e}el) temperature $\beta_N=1/k_BT_N$ can be obtained from the relation \begin{equation} \beta_{\max}(L) = \beta_N + a_i L^{-1/\nu}, \label{eq:BetaScaling} \end{equation} where $\beta_{max}$ is the inverse temperature in the vicinity of the transition point at which various quantities display maxima. \section{Results and discussion} \label{results} \subsection{Ground states} At zero temperature, the minimum-energy states for different fields can be determined directly from the Hamiltonian. In Table~\ref{tab:GS} we present the identified states, showing the sublattice and total magnetizations, as well as the reduced enthalpy. There are three phases, corresponding to the three field intervals $0 \leq h/|J_1|<-2J_2/|J_1|$ (phase I), $-2J_2/|J_1|<h/|J_1|<6-2J_2/|J_1|$ (phase II) and $6-2J_2/|J_1|<h/|J_1|<\infty$ (fully polarized phase P). \begin{table}[t!] \centering \begin{tabular}{c||c|c|c} $\frac{h}{|J_1|}$ & $( 0, -\frac{2 J_2}{|J_1|} )$ & $( -\frac{2 J_2}{|J_1|}, 6 - \frac{2 J_2}{|J_1|} )$ & $( 6-\frac{2 J_2}{|J_1|}, \infty )$ \\ \hline $\left( \frac{m_1, m_2, m_3}{m_4, m_5, m_6} \right)$ & $\left( \frac{+1,+1,-1}{-1,-1,+1} \right)$ & $\left( \frac{+1,+1,-1}{+1,-1,+1} \right)$ & $\left( \frac{+1,+1,+1}{+1,+1,+1} \right)$ \\ \hline $m$ & 0 & 1/3 & 1 \\ \hline $e$ & $-1 + \frac{J_2}{|J_1|}$ & $-1 + \frac{J_2}{3|J_1|} - \frac{h}{3|J_1|}$ & $3 - \frac{J_2}{|J_1|} - \frac{h}{|J_1|}$ \end{tabular} \caption{Ground states of the ASTIA model in an external field.
The respective phases are characterized by the schematic arrangement of the sublattice magnetizations, $m_i$ ($i=1,2,\hdots,6$), the total magnetization, $m$, and the enthalpy, $e$.} \label{tab:GS} \end{table} In the phase I, there is an antiferromagnetic (AF) order within all chains in the $z$-axis direction and the spins in each triangular plaquette (belonging to the neighboring chains) are arranged ferrimagnetically (two parallel and one antiparallel). As one can see, at small fields the enthalpy does not depend on the field values and thus the ground state is expected to be the same as in zero field. In the phase II, the sublattice magnetization arrangement indicates that spins on one sublattice flip into the field direction; thus two thirds of the chains retain the AF order but the remaining one third becomes ferromagnetic (FM). The total magnetization of such a state corresponds to $m=1/3$ and the enthalpy becomes field-dependent. Finally, in the fully polarized phase, all the spins that are not yet aligned with the field flip into its direction and the total magnetization becomes fully saturated with $m=1$. In the following we set $J_2/|J_1|=-1$ and study the behavior in the field intervals $(0,2),(2,8)$, and $(8,\infty)$. \subsection{Finite temperatures} \begin{figure}[t!] \centering \vspace*{-15mm} \subfigure{\includegraphics[scale=0.41,clip]{MC_Tmi_L24_H0c.eps}\label{fig:MC_Tmi_L24_H0}}\hspace*{-5mm} \subfigure{\includegraphics[scale=0.41,clip]{MC_Tmi_L24_H1c.eps}\label{fig:MC_Tmi_L24_H1}} \subfigure{\includegraphics[scale=0.41,clip]{MC_Tmi_L24_H3c.eps}\label{fig:MC_Tmi_L24_H3}} \caption{Temperature dependencies of the sublattice magnetizations per spin, $m_i$, $i=1,\hdots,6$, for (a) $h/|J_1|=0$, (b) $h/|J_1|=1$, and (c) $h/|J_1|=3$.} \label{fig:MC-T-mi} \end{figure} In Fig.~\ref{fig:MC-T-mi} we plot the sublattice magnetizations as functions of the temperature, for $h/|J_1|=0,1$ and $3$.
In zero field the behavior resembles that of the FSTIA model~\cite{Netz1}, except that there are three more sublattices $m_4,m_5$, and $m_6$ antiferromagnetically coupled to $m_1,m_2$, and $m_3$, respectively, and thus $m_4=-m_1$, $m_5=-m_2$, and $m_6=-m_3$. At the intermediate temperatures one can observe the PD phase with two sublattices in each plane AF ordered and one disordered. In the low-temperature region, all the sublattice magnetizations ``freeze'' without reaching saturation values at zero temperature. The lack of saturation is related to the inherent degeneracy of the phase I and will be discussed in more detail below. Another source of the saturation failure is the kinetic freezing phenomenon, as previously reported in the zero-field FSTIA model~\cite{Netz1,Borovsky}, when a standard single-spin-flip MC simulation is employed. Nevertheless, as we show in the inset of Fig.~\ref{fig:MC_TE}, the lowest-temperature energies reached in our simulations coincide rather well with the true ground-state values. The exceptions are the cases of $h/|J_1|=0$ and $1.5$, where we recorded small deviations of about $(E_{GS,EX}-E_{GS,MC})/N|J_1|=2\times 10^{-4}$, due to the kinetic freezing. \begin{figure}[t!] \centering \vspace*{-15mm} \subfigure{\includegraphics[scale=0.41]{MC_TM_L24.eps}\label{fig:MC_TM}}\hspace*{-5mm} \subfigure{\includegraphics[scale=0.41]{MC_TX_L24.eps}\label{fig:MC_TX}}\vspace*{-5mm}\\ \subfigure{\includegraphics[scale=0.41]{MC_TE_L24.eps}\label{fig:MC_TE}} \hspace*{-5mm} \subfigure{\includegraphics[scale=0.41]{MC_TC_L24.eps}\label{fig:MC_TC}}\vspace*{-5mm}\\ \subfigure{\includegraphics[scale=0.41]{MC_TO_L24.eps}\label{fig:MC_TO}} \caption{Temperature dependencies of (a) the total magnetization per spin, (b) the magnetic susceptibility, (c) the enthalpy per spin, (d) the specific heat, and (e) the order parameter $o$, for different fields $h/|J_1|=0,0.5,\hdots,7.8$.
The inset in (c) shows the difference between the enthalpy at the lowest simulated temperature and the exact GS value. The arrows in (a) and (c) show respectively increasing and decreasing trends in the magnetization and the enthalpy with the increasing field.} \label{fig:MC-T-X} \end{figure} In Fig.~\ref{fig:MC-T-X} we present temperature dependencies of (a) the total magnetization $m$, (b) the magnetic susceptibility $\chi_m$, (c) the enthalpy $e$, (d) the specific heat $C$, and (e) the order parameter $o$, for various values of the field $h/|J_1|$. We can observe sharp high-temperature and broad low-temperature peaks or shoulders in the response functions. Nevertheless, there are no apparent discontinuities in the magnetization and the energy, and the character of the high-temperature peaks of the response functions does not signal any change of the phase transition with increasing field. The low-temperature anomalies are reflected in the behavior of the order parameter $o$, which due to the degeneracies of the phases I and II fails to reach its saturation value. \begin{figure}[t!] \centering \vspace*{-15mm} \subfigure{\includegraphics[scale=0.45,keepaspectratio,clip=true]{GS_snapshot_oz_h1a.eps}\label{fig:snapshot_h1}} \subfigure{\includegraphics[scale=0.45,keepaspectratio,clip=true]{GS_snapshot_oz_h4a.eps}\label{fig:snapshot_h4}} \subfigure{\includegraphics[scale=0.45,keepaspectratio,clip=true]{GS_snapshot_oz_h7a.eps}\label{fig:snapshot_h7}} \subfigure{\includegraphics[scale=0.45,keepaspectratio,clip=true]{GS_snapshot_oz_h75a.eps}\label{fig:snapshot_h75}} \caption{Intraplane and intrachain order parameters $o_{xy}$ and $o_z$, observed close to the ground state ($k_BT/|J_1| = 0.01$) for selected field values (a) $h/|J_1|=1$, (b) $h/|J_1|=4$, (c) $h/|J_1|=7$, and (d) $h/|J_1|=7.5$. Individual chains are shown projected onto the $x-y$ plane by circles of different colors representing values of the parameter $o_z$.
$O_z^{(A)}$, $O_z^{(B)}$ and $O_z^{(C)}$ give unnormalized values of $o_z$ in the respective sublattices.} \label{fig:snapshot} \end{figure} Let us study the character of the low-temperature phases I and II in more detail by inspection of MC snapshots taken close to the ground state, at $k_BT/|J_1|=0.01$, where thermal effects are negligible. In Fig.~\ref{fig:snapshot}, we present the snapshots taken at the fields (a) $h/|J_1|=1$, (b) $h/|J_1|=4$, (c) $h/|J_1|=7$, and (d) $h/|J_1|=7.5$, which visualize the ordering within the chains in the stacking direction (the chain order parameter $o_z$) as well as the inplane ordering (the inplane order parameter $o_{xy}$). Individual chains are shown projected onto the $x-y$ plane and the degree of their AF ordering is represented by circles of different colors and intensities: dark (pale) red - full (partial) AF arrangement $(\uparrow\downarrow)$, white - full FM arrangement, and dark (pale) blue - full (partial) AF arrangement $(\downarrow\uparrow)$. The parameters $O_z^{(A)}$, $O_z^{(B)}$, and $O_z^{(C)}$ show the (unnormalized) values of the parameter $o_z$ in the three sublattices of the triangular lattice: A (includes sublattices $1$ and $4$), B (includes sublattices $2$ and $5$), and C (includes sublattices $3$ and $6$). As one can see, for $h/|J_1|=1$ (phase I) all the chains are perfectly AF ordered ($o_z=1$) but there is no long-range ordering among them ($o_{xy} \approx 0$). The minimum energy condition is satisfied when on each elementary triangular plaquette two chains are parallel and one antiparallel. This state corresponds to the zero-field state and can be considered as a three-dimensional equivalent of the Wannier state, if the fully AF ordered chains are viewed as giant spins. On the other hand, the figures in (b), (c) and (d) show examples of rather different spin arrangements in the phase II, all with the intraplane FR LRO.
The snapshot in (b) represents the case when all the chains are fully ordered - two thirds of them show AF and one third FM ordering, while the snapshots in (c) and (d) are examples of the FR intraplane LRO without full intrachain ordering. The latter cases apparently result in the unsaturated values of the sublattice magnetizations $m_i$ ($i=1,2,\hdots,6$), as well as the order parameter $o$. \begin{figure}[t!] \centering \vspace*{-15mm} \includegraphics[scale=0.35,clip]{ASTIA_degen.eps} \caption{Schematic demonstration of the ground-state degeneracy within the phase II. The arrows represent spin orientations in three neighboring chains, with the boxes showing chunks of ferromagnetically arranged chains in the stacking direction.} \label{fig:ASTIA_degen} \end{figure} The mechanism leading to such a behavior is illustrated in Fig.~\ref{fig:ASTIA_degen}. The figure schematically shows spin ordering in the stacking direction in three neighboring chains belonging to the sublattices A, B and C in four degenerate states. The state $1$ corresponds to the snapshot in Fig.~\ref{fig:snapshot_h4}, with the FM chain in the sublattice A marked by the vertical lines. It is easy to verify that if we, for example, swap the lower half of the FM chain with its AF neighbor in the sublattice B (state $2$), the energy remains the same. The FM chain can break into smaller pieces and those can ``migrate'' between different sublattices (states $3$ and $4$) without any energy change. The result is a highly degenerate phase with a lack of saturation of the introduced order parameters. \begin{figure}[t!]
\centering \vspace*{-15mm} \subfigure{\includegraphics[scale=0.41,clip]{MC_hM_L24.eps}\label{fig:MC_hM}}\hspace*{-5mm} \subfigure{\includegraphics[scale=0.41,clip]{MC_hX_L24.eps}\label{fig:MC_hX}}\vspace*{-5mm}\\ \subfigure{\includegraphics[scale=0.41,clip]{MC_hE_L24.eps}\label{fig:MC_hE}}\hspace*{-5mm} \subfigure{\includegraphics[scale=0.41,clip]{MC_hC_L24.eps}\label{fig:MC_hC}} \caption{Field dependencies of (a) the total magnetization per spin, (b) the magnetic susceptibility, (c) the enthalpy, and (d) the specific heat, for different temperatures. The zero-temperature values in (a) and (c) are exact. The insets in (b) and (d) show more detailed views of the quantities for different system sizes.} \label{fig:MC_field} \end{figure} Fig.~\ref{fig:MC_field} shows the evaluated quantities as functions of the applied field, for different temperatures. In Figs.~\ref{fig:MC_hM} and~\ref{fig:MC_hE} we also include the exact values corresponding to the ground state (bold black lines). Non-differentiability of the enthalpy and discontinuity of the magnetization indicate first-order transitions at $h/|J_1|=2$ and $8$. Nevertheless, at finite temperatures the curves become rounded and the corresponding response functions show a behavior typical of a standard phase transition (sharp peak) only close to $h/|J_1|=8$ but not in the vicinity of $h/|J_1|=2$. Namely, at $h/|J_1|=2$ the low-temperature specific heat becomes suppressed to practically zero and only round-peak anomalies appear from both sides (Fig.~\ref{fig:MC_hC}). The magnetic susceptibility features one round peak close to $h/|J_1|=2$, whose height and width do not seem to be sensitive to the lattice size (see the insets of Figs.~\ref{fig:MC_hX} and \ref{fig:MC_hC}). This behavior makes us believe that the origin of the low-temperature anomalies is not a conventional phase transition but rather linear chain-like excitations.
\begin{table}[] \centering \vspace*{-15mm} \begin{tabular}{||l||c|c|c|c|c||} \hline \hline $h/|J_1|$ & $k_B T_N / |J_1|$ & $\alpha$ & $\beta$ & $\gamma$ & $\nu$ \\ \hline \hline 0 & 2.927 & -0.01(8) & 0.34(2) & 1.34(3) & 0.675(8) \\ \hline 2 & 2.799 & 0.03(9) & 0.31(3) & 1.33(3) & 0.668(9) \\ \hline 4 & 2.558 & 0.02(7) & 0.33(2) & 1.33(3) & 0.664(8) \\ \hline 6 & 1.943 & -0.02(8) & 0.35(2) & 1.32(3) & 0.666(8) \\ \hline 7 & 1.329 & 0.08(12) & 0.32(3) & 1.29(5) & 0.662(12) \\ \hline 7.5 & 0.779 & 0.02(21) & 0.28(7) & 1.41(7) & 0.688(18) \\ \hline \hline 3D XY & - & -0.006 & 0.345 & 1.316 & 0.671 \\ \hline \hline \end{tabular} \caption{Critical exponents and N{\'e}el temperatures for various values of the external field. The last row gives the critical exponents of the three-dimensional XY model universality class.} \label{tab:FSS} \end{table} On the other hand, the high-temperature anomalies in the form of sharp peaks in the response functions appear to indicate standard second-order phase transitions. In order to confirm this presumption based on the standard MC simulation results, we additionally perform the FSS analysis employing the scaling relations~(\ref{eq:CrtiExpL}),~(\ref{eq:dO}) and~(\ref{eq:dlnO}). The estimated N{\'e}el temperatures, obtained from the scaling relation~(\ref{eq:BetaScaling}), and the critical exponents for various fields in the range $0 \leq h/|J_1| \leq 7.5$ are summarized in Table~\ref{tab:FSS}. Due to the well-known problem with reliable estimation of the critical exponents $\alpha \approx 0$, their values were obtained indirectly from the Rushbrooke relation $\alpha + 2\beta + \gamma =2$. All the obtained critical exponents are in good correspondence with the three-dimensional XY universality class (see the last row in the table) and, thus, exclude the possibility of the crossover to the first-order transition.
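The indirect estimate of $\alpha$ via the Rushbrooke equality is a one-line computation; applying it to the fitted central values of $(\beta,\gamma)$ from Table~\ref{tab:FSS} reproduces the quoted $\alpha$ values within rounding of the inputs (a sketch):

```python
# alpha from the Rushbrooke equality alpha + 2*beta + gamma = 2,
# using the fitted (beta, gamma) central values of Table 2
fitted = {0.0: (0.34, 1.34), 2.0: (0.31, 1.33), 4.0: (0.33, 1.33),
          6.0: (0.35, 1.32), 7.0: (0.32, 1.29), 7.5: (0.28, 1.41)}

alpha = {h: 2.0 - 2.0 * b - g for h, (b, g) in fitted.items()}
# e.g. alpha[6.0] == -0.02 (up to float rounding), consistent with
# the 3D XY value alpha ~ -0.006 within the quoted error bars
```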
This agreement holds at least for $h/|J_1| \leq 7.5$; moreover, the values obtained for $h/|J_1| = 7.5$ are in good accordance with those for $h/|J_1| = 0$ (see also Fig.~\ref{fig:FSS}). \begin{figure}[t!] \centering \vspace*{-15mm} \includegraphics[scale=0.6,clip]{FSS.eps} \caption{FSS of extrema of the quantities $\chi_o$ (blue circles), $d \ln \left\langle O^2 \right\rangle /d\beta$ (red squares) and $d \left\langle O \right\rangle /d\beta$ (green diamonds) at the transition to the paramagnetic state, for $h/|J_1|=0$ (empty symbols) and $7.5$ (filled symbols).} \label{fig:FSS} \end{figure} Finally, in Fig.~\ref{fig:PD_MC} we present the phase diagram. The order-disorder phase boundary (circles) represents second-order phase transitions belonging to the 3D XY universality class. The low-temperature boundaries (downward triangles) are determined from anomalies in the specific heat and represent crossovers to the highly degenerate phases I and II. \begin{figure}[t!] \centering \vspace*{-15mm} \includegraphics[scale=0.6,clip]{PD_MC.eps} \caption{Phase diagram in the $h-T$ parameter plane featuring the paramagnetic (P), the partially disordered (PD) and the highly degenerate I and II phases. The entire P-PD boundary represents second-order phase transitions. The downward triangles mark the low-temperature anomalies observed in the specific heat at the PD-I and PD-II crossovers. The red solid squares show the ground-state transition points between the phases I and II at $h/|J_1|=2$ and from II to the fully polarized P phase at $h/|J_1|=8$.} \label{fig:PD_MC} \end{figure} \section{Summary and conclusions} \label{summary} We studied the stacked triangular Ising antiferromagnet (ASTIA model) with antiferromagnetic (AF) interactions both within the triangular planes ($J_1<0$) and in the stacking direction ($J_2<0$) by Monte Carlo (MC) simulations.
At zero temperature we identified three ground-state phases, corresponding to the three field intervals $0 \leq h/|J_1|<-2J_2/|J_1|$ (phase I), $-2J_2/|J_1|<h/|J_1|<6-2J_2/|J_1|$ (phase II) and $6-2J_2/|J_1|<h/|J_1|<\infty$ (phase P). The phase I is characterized by a full AF spin arrangement in the stacking direction but no long-range ordering (LRO) within the planes (Wannier-like phase). On the other hand, in the phase II the system shows a ferrimagnetic $(\uparrow\uparrow\downarrow)$ LRO within the planes but no LRO in the stacking direction. Thus, both the phases I and II are highly degenerate, but in different directions. The phase P represents a fully polarized phase with all spins pointing in the field direction. At finite temperatures we limited our considerations to the case of $J_2/|J_1|=-1$. The results indicated the presence of only one phase transition within the entire interval $0 \leq h/|J_1|< 8$, which is of second order, belonging to the 3D XY universality class. We note that this behavior is quite different from the FSTIA model, which shows a crossover to the first-order regime at finite fields. There are also anomalies in the response functions at lower temperatures, but their character (broad bumps and shoulders insensitive to finite-size effects) does not indicate the occurrence of any conventional phase transition, or even the appearance of a new intermediate phase as suggested by some previous approximate approaches~\cite{Plumer0,Netz2}, but rather points to linear-chain-like excitations. In the current study we focused on the isotropic interaction case $J_1=J_2$. In order to better understand the unusual properties of the quasi-one-dimensional Ising-like antiferromagnets $\rm{CsCoCl}_3$ and $\rm{CsCoBr}_3$, in future work it would be interesting to extend the present investigation to the dimensional crossover region $|J_2|\gg|J_1|$. The dimensional crossover phenomena in such a frustrated spin system would also be of theoretical interest.
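The Rushbrooke step used for Table~\ref{tab:FSS} is simple arithmetic and can be re-checked directly; the sketch below (values of $\beta$, $\gamma$ and the quoted $\alpha$ with its uncertainty transcribed from the table) verifies that $\alpha = 2 - 2\beta - \gamma$ reproduces the tabulated values within the quoted error bars.

```python
# Re-check the Rushbrooke relation alpha + 2*beta + gamma = 2 against the
# (beta, gamma) pairs quoted in Table 1; values transcribed from the table.
rows = [
    # (h/|J1|, quoted alpha, its error bar, beta, gamma)
    (0.0, -0.01, 0.08, 0.34, 1.34),
    (2.0,  0.03, 0.09, 0.31, 1.33),
    (4.0,  0.02, 0.07, 0.33, 1.33),
    (6.0, -0.02, 0.08, 0.35, 1.32),
    (7.0,  0.08, 0.12, 0.32, 1.29),
    (7.5,  0.02, 0.21, 0.28, 1.41),
]

for h, alpha, d_alpha, beta, gamma in rows:
    alpha_rb = 2 - 2 * beta - gamma  # Rushbrooke relation solved for alpha
    # the quoted alpha must agree with the recomputed one within its error bar
    assert abs(alpha_rb - alpha) <= d_alpha, (h, alpha_rb, alpha)
    print(f"h/|J1| = {h}: alpha = {alpha_rb:+.2f} (quoted {alpha:+.2f})")
```

The same check on the 3D XY row ($\beta=0.345$, $\gamma=1.316$) returns $\alpha=-0.006$ exactly, as listed.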
\section*{Acknowledgments} This work was supported by the Scientific Grant Agency of the Ministry of Education of the Slovak Republic (Grant No.~1/0331/15) and by the Slovak Research and Development Agency under contracts No.~APVV-16-0186 and No.~APVV-14-0073. \section*{References}
\section{Introduction} A new technique called `polarization' has recently been introduced in \cite{ari} to develop efficient channel coding schemes. The codes resulting from this technique, called polar codes, have several nice attributes: (1) they are linear codes generated by a low-complexity deterministic matrix; (2) they can be analyzed mathematically, and bounds on the error probability (exponential in the square root of the block length) can be {\it proved}; (3) they have a low encoding and decoding complexity; (4) they allow one to reach the Shannon capacity on any discrete memoryless channel (DMC). These codes are indeed the first codes with low decoding complexity that are provably capacity achieving on any DMC. The key result in the development of polar codes is the so-called `polarization phenomenon', initially shown in the channel setting in \cite{ari}. The same phenomenon admits a source setting formulation, as follows. \begin{thm}[\cite{ari,ari3}]\label{thmari} Let $X=[X_1,\dots,X_n]$ be i.i.d.\ Bernoulli($p$), $n$ be a power of 2, and $Y=X G_n$, where $G_n= \bigl[\begin{smallmatrix} 1 & 0 \\ 1 & 1 \\ \end{smallmatrix}\bigr]^{\otimes \log_2(n)}$. Then, for any $\varepsilon \in (0,1)$, \begin{align} &\frac{1}{n} |\{j \in [n]: H(Y_j | Y^{j-1}) \geq 1-\varepsilon \}| \stackrel{n \to \infty}{\longrightarrow} H(p), \label{polar} \end{align} where $H(p)$ is the entropy of a Bernoulli($p$) distribution. \end{thm} Note that \eqref{polar} implies that the proportion of components $j$ for which $H(Y_j | Y^{j-1}) \in (\varepsilon,1-\varepsilon)$ tends to 0. Hence most of the randomness has been extracted in about $nH(p)$ components having conditional entropy close to 1 and indexed by \begin{align} R_\varepsilon(p)=\{ j \in [n] : H(Y_j |Y^{j-1}) \geq 1 -\varepsilon \} \label{defr} \end{align} and, besides $o(n)$ fluctuating components, the remaining roughly $n(1-H(p))$ components have conditional entropy below $\varepsilon$.
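As an aside (not part of the original argument), the polarization in \eqref{polar} can be observed numerically: for small $n$ one can build $G_n$ by Kronecker powers over $\F_2$ and compute the conditional entropies $H(Y_j|Y^{j-1})$ exactly by enumerating all $2^n$ source words. A Python sketch, with helper names of our choosing:

```python
import itertools
import numpy as np

def polar_transform(n):
    """G_n = [[1,0],[1,1]] Kronecker-powered log2(n) times, over F_2."""
    G = np.array([[1]], dtype=int)
    while G.shape[0] < n:
        G = np.kron(np.array([[1, 0], [1, 1]]), G) % 2
    return G

def conditional_entropies(n, p):
    """Exact H(Y_j | Y^{j-1}) for Y = X G_n, X i.i.d. Bernoulli(p)."""
    G = polar_transform(n)
    pmf = {}
    for x in itertools.product([0, 1], repeat=n):
        y = tuple(np.array(x).dot(G) % 2)
        pr = np.prod([p if b else 1.0 - p for b in x])
        pmf[y] = pmf.get(y, 0.0) + pr

    def ent_prefix(k):  # H(Y^k), entropy of the first k components of Y
        marg = {}
        for y, pr in pmf.items():
            marg[y[:k]] = marg.get(y[:k], 0.0) + pr
        return -sum(q * np.log2(q) for q in marg.values() if q > 0)

    # chain rule: H(Y_j | Y^{j-1}) = H(Y^j) - H(Y^{j-1})
    return [ent_prefix(j + 1) - ent_prefix(j) for j in range(n)]

H = conditional_entropies(8, 0.11)
print([round(h, 3) for h in H])  # values start drifting toward 0 or 1
```

Already at $n=8$ the first and last components are close to 1 and 0 respectively, and by the chain rule the entropies sum to $nH(p)$, the conservation property behind \eqref{polar}.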
This theorem is extended in \cite{ari3} to $X=[X_1,\dots,X_n]$ being i.i.d.\ from an arbitrary distribution $\mu$ on $\F_q$, where $q$ is a prime, replacing $H(p)$ by $H(\mu)$ (and using the logarithm in base $q$). It is however mentioned that the theorem may fail when $q$ is not a prime but a power of a prime, with a counter-example provided for $q=4$. In Section \ref{galois} of this paper, we show a generalized version of the polarization phenomenon, i.e., of Theorem \ref{thmari}, for powers of primes (we show it explicitly for powers of 2, but the same holds for powers of arbitrary primes). Also, the formulation of Theorem \ref{thmari} is slightly more general in \cite{ari3}: it includes an auxiliary random variable (side information), correlated with $X$ but not intended to be compressed, which is introduced in the conditioning of each entropy term. Although this formulation is mathematically close to Theorem \ref{thmari}, it is more suitable for an application to the Slepian-Wolf coding problem (distributed data compression), by reducing the problem to single-user source coding problems. A direct approach to this problem using polar codes is left open for future work in \cite{ari3}; we investigate it here in Section \ref{sw}. Finally, we also generalize Theorem \ref{thmari} to a setting allowing dependencies within the source (non-i.i.d.\ setting). This paper provides a unified treatment of the three problems mentioned above, namely, the compression of multiple correlated sources, non-i.i.d.\ sources and non-binary sources. The main result of this paper is Theorem \ref{main}, where a ``matrix polarization'' shows how not only randomness but also dependencies can be extracted using $G_n$. Some results presented in this paper can be viewed as counterparts of the results in \cite{mmac} for a source rather than channel setting.
Conversely, some results presented here in the source setting can be extended to a channel setting (such as channels with memory, or non-prime input alphabets). Finally, connections with extractors in computer science and the matrix completion problem in machine learning are discussed in Sections \ref{pexts} and \ref{discussion}. \subsection*{Some notations} \begin{itemize} \item $[n]=\{1,2,\dots,n\}$ \item For $x \in \F_2^k$ and $S \subseteq [k]$, $x[S]=[x_i : i \in S]$ \item For $x \in \F_2^k$, $x^{i}=[x_1,\dots,x_{i}]$ \item $\{0,1,\dots,m\} \pm \varepsilon = [-\varepsilon,\varepsilon] \cup [1-\varepsilon,1+\varepsilon] \cup \dots \cup [m-\varepsilon,m+\varepsilon]$ \item $H(X|Y)= \sum_y (\sum_x p_{X|Y}(x|y) \log 1/p_{X|Y}(x|y)) p_Y(y)$ \item For a matrix $A$, the matrix $A^{\otimes k}$ is obtained by taking $k$ Kronecker products of $A$ with itself. \end{itemize} \section{Results}\label{res} \begin{definition} A random variable $Z$ over $\F_2^k$ is $\varepsilon$-uniform if $H(Z) \geq k(1- \varepsilon)$, and it is $\varepsilon$-deterministic if $H(Z) \leq \varepsilon k$. We also say that $Z$ is $\varepsilon$-deterministic given $W$ if $H(Z|W) \leq \varepsilon k$. \end{definition} \begin{thm}\label{main} (1) Let $n$ be a power of 2 and $X$ be an $m \times n$ random matrix with i.i.d.\ columns of arbitrary distribution $\mu$ on $\F_2^m$. Let $Y=X G_n$ where $G_n=\bigl[\begin{smallmatrix} 1 & 0 \\ 1 & 1 \\ \end{smallmatrix}\bigr]^{\otimes \log_2(n)}$. Then, for any $\varepsilon>0$, there exist two disjoint subsets of indices $R_\varepsilon, D_\varepsilon \subseteq [m] \times [n]$ with $| [m] \times [n] \setminus (R_\varepsilon \cup D_\varepsilon)|=o(n)$ such that the subset of entries $Y[R_{\varepsilon}]$ is $\varepsilon$-uniform and $Y[D_{\varepsilon}]$ is $\varepsilon$-deterministic given $Y[D_{\varepsilon}^c]$. (Hence $|R_\varepsilon| \stackrel{\cdot}{=} nH(\mu)$, $|D_\varepsilon| \stackrel{\cdot}{=} n(m-H(\mu))$.)
(2) Moreover, the computation of $Y$ as well as the reconstruction of $X$ from the non-deterministic entries of $Y$ can be done in $O(n \log n)$ operations, with an error probability of $O(2^{-n^\beta})$, $\beta < 1/2$, using the algorithm \texttt{polar-matrix-dec}. \end{thm} \noindent {\bf Remarks.} \begin{itemize} \item The multiplication $X G_n$ is over $\F_2$. \item The sets $R_\varepsilon, D_\varepsilon$ depend on the distribution $\mu$ (and on the dimensions $m$ and $n$), but not on the realization of $Y$. These sets can be accurately computed in linear time (cf.\ Section \ref{discussion}). \item To achieve an error probability of $O(2^{-n^\beta})$, one picks $\varepsilon=\varepsilon_n = 2^{-n^\alpha}$, for $\alpha < 1/2$. \end{itemize} The following lemma provides a characterization of the dependencies in the columns of $Y$; it is proved in Section \ref{proofs1}. Recall that $Y_j$ denotes the $j$-th column of $Y$, $Y_j(i)$ the $(i,j)$-entry of $Y$, $Y_j[S]=[Y_j(i): i \in S]$ and $Y^{j}=[Y_1,\dots,Y_j]$. \begin{lemma}\label{mainlemma} For any $\varepsilon>0$, we have \begin{align*} \frac{1}{n} |\{j \in [n]: H(Y_j[S]|Y^{j-1}) \in \{0,1,\dots,|S|\} \pm \varepsilon, \ \forall S \subseteq [m] \}| \stackrel{n \to \infty}{\longrightarrow} 1. \end{align*} \end{lemma} This lemma implies the first part of Theorem \ref{main}, as shown in the next section. The second part of the theorem is proved in Section \ref{proofs2}, together with the following result, which further characterizes the dependency structure of $Y$. \begin{lemma}\label{null} For any $\varepsilon>0$ and $j \in [n]$, let $A_j$ denote the binary matrix of maximal rank such that \begin{align*} H(A_j Y_j |Y^{j-1}) \leq \varepsilon. \end{align*} Note that $A_j$ can have zero rank, i.e., $A_j$ can be a matrix filled with zeros. We then have \begin{align*} \frac{1}{n} \sum_{j=1}^n \mathrm{nullity}(A_j) \to H(\mu). \end{align*} Moreover, the result still holds when $\varepsilon=\varepsilon_n = 2^{-n^{\alpha}}$, for $\alpha < 1/2$.
\end{lemma} Note that, if $H(A_j Y_j |Y^{j-1}) \leq \varepsilon$, $A_j Y_j$ is $\varepsilon$-deterministic given $Y^{j-1}$, and if $A_j$ has rank $r_j$, by freezing $k_j=m-r_j$ components of $Y_j$ appropriately, say on $B_j$, we have that $A_j Y_j$ can be reduced to a full-rank matrix multiplication $\tilde{A}_j Y_j[B_j^c]$, and hence $Y_j[B_j^c]$ is $\varepsilon$-deterministic given $Y^{j-1}$ and $Y_j[B_j]$. Hence the number of bits to freeze is exactly $\sum_j k_j$, and as stated in the lemma, this corresponds to the total entropy of $Y$ (up to $o(n)$). \subsection{Proof of Theorem \ref{main} (part 1) and how to set $R_\varepsilon$ and $D_\varepsilon$}\label{proof1} Let $\varepsilon>0$ and let $E_n=E_n(\varepsilon)$ be the set of indices $j \in [n]$ for which $H(Y_j[S]|Y^{j-1}) \in \{0,1,\dots,|S|\} \pm \varepsilon$, for any $S \subseteq [m].$ From Lemma \ref{mainlemma}, $n-|E_n|=o(n)$. Note that for $j \in E_n$, there exists a minimal set (not necessarily unique) $T_j$ such that \begin{align} & H(Y_j[T_j]|Y^{j-1}) \geq H(Y_j |Y^{j-1}) - \varepsilon \label{max} \end{align} which also implies \begin{align} & H(Y_j[T_j]|Y^{j-1}) \geq |T_j| - \varepsilon \label{max2}, \end{align} and, by the chain rule and defining $S_j:=T_j^c$, \begin{align} H(Y_j[S_j]|Y^{j-1} Y_j[S_j^c]) \leq \varepsilon. \label{corr} \end{align} (Note that if $H(Y_j |Y^{j-1}) \leq \varepsilon$, we define $T_j=\emptyset$ so that $S_j=[m]$.) We then have \begin{align*} &H(\cup_{j \in E_n}Y_j[S_j]| (\cup_{j \in E_n}Y_j[S_j])^c) \\ & \leq \sum_{j \in E_n} H(Y_j[S_j]| Y^{j-1}Y_j[S_j^c] ) \leq \varepsilon n \end{align*} and $\cup_{j \in E_n}Y_j[S_j]$ is $\varepsilon$-deterministic given $(\cup_{j \in E_n}Y_j[S_j])^c$, so that $D_{\varepsilon}=\cup_{j \in E_n} S_j$.
Moreover, we have \begin{align} H(Y)&\geq H(\cup_{j \in E_n}Y_j[T_j]) \geq \sum_{j \in E_n} H(Y_j[T_j]|Y^{j-1}) \notag \\ & \geq \sum_{j \in E_n} H(Y_j |Y^{j-1}) -\varepsilon n \notag \\ & \geq \sum_{j=1}^n H(Y_j |Y^{j-1}) -\varepsilon n - o(n)\notag \\ & = H(Y) -\varepsilon n - o(n), \label{same} \end{align} where the third inequality uses \eqref{max}, and from \eqref{max2}, \begin{align*} \sum_{j \in E_n} |T_j| \geq H(\cup_{j \in E_n}Y_j[T_j])& \geq \sum_{j \in E_n} |T_j| -\varepsilon n. \end{align*} Since $H(Y) = H(X) = n H(\mu)$, we have $$n H(\mu) +\varepsilon n \geq \sum_{j \in E_n} |T_j| \geq n H(\mu) -\varepsilon n -o(n)$$ and $\cup_{j \in E_n}Y_j[T_j]$ is $\frac{\varepsilon }{H(\mu)-2 \varepsilon}$-uniform, so that $R_{\varepsilon/(H(\mu)-2\varepsilon)}=\cup_{j \in E_n} T_j$. \subsection{Decoding algorithm} \begin{definition}\texttt{polar-matrix-dec}\\ Inputs: $D^c \subseteq [m] \times [n]$, $y[D^c] \in \F_2^{|D^c|}$.\\ Output: $y \in \F_2^{m n}$.\\ Algorithm:\\ 0. Let $M=D$;\\ 1. Find the smallest $j$ such that $S_j=\{i : (i,j) \in M\}$ is not empty; compute $$\hat{y}_j[S_j]=\arg\max_{u \in \F_2^{|S_j|}} \mathbb{P}\{ Y_j[S_j]= u \mid Y^{j-1}=y^{j-1}, Y_j[S_j^c]=y_j[S_j^c] \};$$ 2. Set $y_j[S_j]=\hat{y}_j[S_j]$ and update $M=M \setminus \{(i,j): i \in S_j\}$; \\ 3. If $M$ is empty, output $y$; otherwise go back to 1. \end{definition} Note that, using \eqref{max} for the definition of $S_j$ (and the corresponding $D_\varepsilon$), the realizations of $Y^{j-1}$ and $Y_j[S_j^c]$ are known, and with high probability one guesses $Y_j[S_j]$ correctly in step 1, because of \eqref{corr}. Moreover, due to the Kronecker structure of $G_n$, and similarly to \cite{ari}, step 1 and the entire algorithm require only $O(n \log n)$ computations. Finally, from the proof of Theorem \ref{main} part (2), it follows that step 1.
can also be performed slightly differently, by sequentially finding the inputs $Y_j[i]$ for $i \in S_j$, reducing an optimization over all possible $u \in \F_2^{|S_j|}$, where $|S_j|$ can be as large as $m$, to only $m$ optimizations over $\F_2$ (which may be useful for large $m$). \section{Three Applications} We present now three direct applications of Theorem \ref{main}: \begin{itemize} \item Distributed data compression, i.e., Slepian-Wolf coding \item Compression of sources on arbitrary finite fields \item Compression of non-i.i.d.\ sources \end{itemize} \subsection{Source polarization for correlated sources: Slepian-Wolf coding}\label{sw} In \cite{ari3}, the two-user Slepian-Wolf coding problem is approached via polar codes by reducing the problem to single-user source coding problems. A direct approach is left open for future work; we investigate it here, for arbitrarily many users. Consider $m$ binary sources which are correlated with an arbitrary distribution $\mu$. We are interested in compressing an i.i.d.\ output of these sources. That is, let $X_1,\dots,X_n$ be i.i.d.\ under $\mu$ on $\F_2^m$, i.e., $X_i$ is an $m$-dimensional binary random vector and, for example, $X_1[i],\dots,X_n[i]$ is the output of source $i$. If we are encoding these sources together, a rate $H(\mu)$ is sufficient (and it is the lowest achievable rate). In \cite{slepian}, Slepian and Wolf showed that, even if the encoders are not able to cooperate, lossless compression can still be achieved at rate $H(\mu)$. We now present how to use Theorem \ref{main} to achieve this rate with a polar coding scheme. {\it Polar codes for distributed data compression:}\\ 1. For a given $n$ and $\varepsilon$ (which sets the error probability), since each user knows the joint distribution $\mu$, each user can compute the ``chart'' of the deterministic indices, i.e., the set $D_\varepsilon \subset [m] \times [n]$, and identify its own chart $D_\varepsilon(i,\cdot)$. \\ 2.
Each user computes $Y(i,\cdot)=X(i,\cdot) G_n$ and stores $Y(i,\cdot)[D_\varepsilon(i,\cdot)^c]$, so that the joint decoder is in possession of $Y[D_\varepsilon^c]$, and can run \texttt{polar-matrix-dec} with $Y[D_\varepsilon^c]$ as input to get $Y$, with an error probability of at most $\varepsilon n$. Since $G_n$ is invertible, indeed $G_n^{-1}=G_n$, one can then find $X=Y G_n$. From Theorem \ref{main}, we have the following result. \begin{corol}[Distributed polar compression]\label{swc} For $m$ correlated sources of joint distribution $\mu$, the previously described scheme allows one to perform lossless and distributed compression of the sources at sum-rate $H(\mu)$, with an error probability of $O(2^{-n^{\beta}})$, $\beta < 1/2$, and an encoding and decoding complexity of $O(n \log n)$. \end{corol} Note that this result allows one to achieve the sum-rate of the Slepian-Wolf region, i.e., a rate belonging to the dominant face of the Slepian-Wolf achievable rate region; it does not say that any rate in that region can be reached with the proposed scheme. \subsection{Polarization for arbitrary finite fields}\label{galois} In \cite{ari3}, the source polarization result is stated for sources that are i.i.d.\ and $q$-ary, where $q$ is prime. It is also mentioned that if $q$ is not prime, the theorem may fail. In particular, an example for $q=4$ is provided where the conclusion of Theorem \ref{thmari} does not hold. It is also mentioned that if additional randomness is introduced in the construction of the polar transformation (leading no longer to a deterministic matrix $G_n$), the result holds for arbitrary powers of primes. We show here that a generalized polarization phenomenon still holds for arbitrary powers of primes (we formally show it for powers of 2 only, but any prime would work) even for the deterministic polar transform $G_n$.
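Two elementary facts are used in step 2 of the scheme of Section \ref{sw}: row-wise encoding coincides with the joint matrix encoding $XG_n$, and $G_n$ is its own inverse over $\F_2$, so that the decoder recovers $X=YG_n$. Both can be checked directly (an illustrative Python sketch; the helper \texttt{polar\_transform} is our naming, not from the text):

```python
import numpy as np

def polar_transform(n):
    # G_n = [[1,0],[1,1]] Kronecker-powered log2(n) times, over F_2
    G = np.array([[1]], dtype=int)
    while G.shape[0] < n:
        G = np.kron(np.array([[1, 0], [1, 1]]), G) % 2
    return G

n, m = 16, 3
G = polar_transform(n)

# G_n is an involution over F_2: G_n G_n = I
assert np.array_equal(G.dot(G) % 2, np.eye(n, dtype=int))

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(m, n))

# each user encodes its own row: Y(i,.) = X(i,.) G_n ...
Y_rows = np.array([X[i].dot(G) % 2 for i in range(m)])
# ... which coincides with the joint encoding Y = X G_n
assert np.array_equal(Y_rows, X.dot(G) % 2)

# once Y is reconstructed, the joint decoder recovers X = Y G_n
assert np.array_equal(Y_rows.dot(G) % 2, X)
print("distributed encoding consistent; G_n is self-inverse")
```

The involution property follows from $\bigl[\begin{smallmatrix} 1 & 0 \\ 1 & 1 \end{smallmatrix}\bigr]^2 = I$ over $\F_2$ and the multiplicativity of the Kronecker product.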
\begin{corol}[Polarization for finite fields]\label{galoisc} Let $X=[X_1,\dots,X_n]$ be i.i.d.\ under $\mu$ on $\F_q$ where $q=2^m$, and let $Y=X G_n$ (computed over $\F_q$). Then, although $Y$ may not polarize over $\F_{2^m}$, it polarizes over $\F_2^m$ in the sense of Theorem \ref{main}; more precisely: Denote by $V$ an $\F_2^m$ representation of $\F_{2^m}$, by $\widetilde{\mu}$ the distribution on $\F_2^m$ induced by $\mu$ on $\F_{2^m}$, and set $\widetilde{Y}:=V(Y)$ (organized as an $m \times n$ matrix). Then the conclusions of Theorem \ref{main} hold for $\widetilde{Y}$. \end{corol} Note: this theorem still holds when $q$ is a power of any prime, by combining it with the result in \cite{ari3} for prime alphabets. The case where $q=2^m$ is particularly interesting for complexity considerations (cf.\ Section \ref{discussion}). {\it Interpretation of Corollary \ref{galoisc}:} When $q$ is a prime, $H(Y_j | Y^{j-1}) \in \{0,\log q\} \pm \varepsilon$, which means that $Y_j$ is either roughly uniform and independent of the past or roughly a deterministic function of the past. However, for $q$ being a power of 2 (or a power of a prime), we only get that $H(Y_j | Y^{j-1}) \in \{0,1,\dots,\log q\} \pm \varepsilon$, and the previous conclusion cannot be drawn, indeed revealing a different polarization phenomenon. However, Corollary \ref{galoisc} says that if we work with the vector representation of the elements in $\F_{q}$, we still have a `polarization phenomenon' in the sense of Theorem \ref{main}, i.e., for almost all $j \in [n]$, a subset of the components of $\widetilde{Y}_j$ is either roughly uniform and independent or a deterministic function of the past and the complementary components. {\it Compression of $2^m$-ary i.i.d.\ sources:} For a given $X=[X_1,\dots,X_n]$, compute $Y=X G_n$ and transform $Y$ into $\widetilde{Y}$ based on the representation of $\F_{2^m}$ by $\F_2^m$. Organize $\widetilde{Y}$ as an $m \times n$ matrix.
Note that one can equivalently map $X$ into $\widetilde{X}$ and then apply $G_n$ to get $\widetilde{Y}$. This is due to the fact that the $\F_{2^m}$ addition corresponds to the pointwise addition in $\F_2^m$. Finally, store $\widetilde{Y}$ on $D_\varepsilon(\widetilde{\mu})^c$, and run \texttt{polar-matrix-dec} to recover $\widetilde{Y}$, hence $Y$ and $X$. \subsection{Source polarization for non-i.i.d.\ sources}\label{memory} Let a binary source consist of i.i.d.\ blocks of length $m$, each block having an arbitrary distribution $\mu$. We can then compress the source as follows. From $n$ blocks $X_1,\dots,X_n$ each of length $m$, i.e., $m n$ outputs of the source, create the matrix $X=[X_1^t | \dots | X_n^t ]$ and apply the polar transform to get $Y=X G_n$. Then store the components of $Y$ which belong to $D_{\varepsilon}(\mu)^c$. To reconstruct $X$, reconstruct $Y$ from $Y[D_{\varepsilon}(\mu)^c]$ using \texttt{polar-matrix-dec} and find $X=Y G_n$. If the source is not exactly block i.i.d.\ but is mixing, i.e., if $\lim_{n \to \infty} \mathbb{P} \{X_n = x | X_0 =x_0\} = \mathbb{P} \{X_n = x\}$, for any $x_0$, we can open windows of length $o(n^2)$ between the blocks and store without compression these $o(n^2)$ inter-block bits, which does not increase the compression rate. We are then left with a source formed by blocks which are `almost' i.i.d.\ and a similar procedure can be used. From Theorem \ref{main}, we have the following. \begin{corol}\label{memoryc} For a binary source consisting of i.i.d.\ blocks of length $m$, each block having distribution $\mu$, the polar coding scheme described previously allows one to losslessly compress the source at rate $H(\mu)$, with an error probability of $O(2^{-n^\beta})$, $\beta < 1/2$, and an encoding and decoding complexity of $O(n \log n)$. \end{corol} As discussed previously, a similar result holds for sources that are mixing.
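Returning to Section \ref{galois}: since $G_n$ has $\{0,1\}$ entries, the product $XG_n$ over $\F_{2^m}$ only uses field addition, which in the coefficient-vector representation is componentwise XOR; hence mapping to the vector representation before or after applying $G_n$ gives the same $\widetilde{Y}$. A sketch for $q=4$ (the 2-bit integer encoding of field elements is an assumption of the sketch):

```python
import numpy as np

def polar_transform(n):
    # G_n over F_2, built from Kronecker powers of [[1,0],[1,1]]
    G = np.array([[1]], dtype=int)
    while G.shape[0] < n:
        G = np.kron(np.array([[1, 0], [1, 1]]), G) % 2
    return G

n, m = 8, 2
G = polar_transform(n)
rng = np.random.default_rng(1)

# X over F_4: elements 0..3 encoded by their two coefficient bits
X = rng.integers(0, 4, size=n)

# Y = X G_n over F_4: since G has 0/1 entries, each Y_j is a field *sum*
# of selected X_i's, and addition in F_4 is XOR of the 2-bit encodings.
Y = np.zeros(n, dtype=int)
for j in range(n):
    acc = 0
    for i in range(n):
        if G[i, j]:
            acc ^= int(X[i])     # addition in F_4 == bitwise XOR
    Y[j] = acc

# vector representation: symbol -> column of bits in F_2^2
to_bits = lambda v: np.array([[(int(x) >> k) & 1 for x in v] for k in range(m)])
X_tilde = to_bits(X)                 # m x n binary matrix
Y_tilde = X_tilde.dot(G) % 2         # apply G_n componentwise over F_2

# mapping to the vector representation commutes with G_n
assert np.array_equal(Y_tilde, to_bits(Y))
print("F_4 encoding and F_2^2 encoding agree")
```

Multiplication by non-$\{0,1\}$ field elements would not commute this way; the check only exercises the additions actually used by $G_n$.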
\section{Extractors in computer science}\label{pexts} We have discussed in this paper a procedure to extract randomness, i.e., uniform bits, from non-uniform bits. The applications we considered are in compression and coding, but there are also numerous applications of randomness extraction problems in computer science. In particular, there is a notion of ``extractor'' in theoretical computer science, which aims at extracting uniform bits from sources satisfying much more general assumptions than the ones considered here. Phrased in our terminology, an extractor is roughly a map that extracts $m$ bits that are $\varepsilon$-uniform from $n$ bits that have a total entropy of at least $k$, with the help of a seed of $d$ uniform bits. For more details and a survey on extractors see for example \cite{trevisan,shalt}. The notion of $\varepsilon$-uniform, or $\varepsilon$-close to uniform, used in computer science is usually defined via the $l_1$-norm, rather than the entropy as used in this paper. Nevertheless, these two notions can be related and this is a minor distinction. Also, the entropy used in the computer science literature is the min-entropy rather than the Shannon entropy, which is a stronger assumption, since the Shannon entropy is an upper bound on the min-entropy. On the other hand, the source for the extractor definition is only assumed to have min-entropy $k$, and no further assumptions are made on the distribution of $X_1,\dots,X_n$, whereas in our setting, we consider sources that are at least ergodic and with a known distribution. One should also stress that we did not make use of any seed in our problems\footnote{Note that, as opposed to the compression problem, when only concerned with randomness extraction, the treatment of the deterministic bits and the reconstruction algorithm may not matter.}.
In order to establish a more concrete connection between polar coding and formal extractors, we present here a result which takes into account one of the two caveats just mentioned: we only assume that the source has entropy at least $k$, without requiring the exact knowledge of the distribution, but we keep an i.i.d.\ setting. Using Section \ref{memory}, one can generalize this result to a setting where the source is mixing, but even then we do not make use of any seed. In particular, if one could use a seed, ideally of small size, e.g., $O(\log n)$, to turn an arbitrary source of lower-bounded entropy into a mixing source of comparable entropy, one could use the following result to construct real extractors (work in progress). \begin{definition} Let $(k,\varepsilon)$-$\mathrm{Pext}: \F_2^n \to \F_2^m$ be the matrix obtained by deleting the columns of $G_n$ that are not in $R_{\varepsilon^2/2n}(p(k))$, where $p(k)$ is one of the two binary distributions having entropy $H(p(k)) = k/n$ (and $R_\varepsilon(\cdot)$ as defined in \eqref{defr}). \end{definition} Note that $\mathrm{Pext}$ benefits from the low encoding complexity of $G_n$, namely $O(n \log n)$. \begin{lemma}\label{pext} Let $n$ be a power of two and $X=[X_1,\dots,X_n]$ be i.i.d.\ Bernoulli such that $H(X_1^n) \geq k$ (where $H$ denotes the Shannon or min-entropy). For any $\varepsilon \in (0,1)$, $\mathrm{Pext}(X)$ is $\varepsilon$-uniform (in the $l_1$ or entropy sense) and $$m = k + o(n).$$ \end{lemma} This result is proved in Section \ref{proofext}, and using Section \ref{memory} it can be extended to a setting where the source is mixing. Note that even in a mixing setting, the source entropy is $\Omega(n)$, which is indeed a regime where good extractors are known \cite{zuck}.
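A minimal rendering of $\mathrm{Pext}$ for small $n$ (illustrative only: the high-entropy set is found by brute-force enumeration, and a plain threshold $1-\varepsilon$ stands in for the exact set $R_{\varepsilon^2/2n}$ of the definition):

```python
import itertools
import numpy as np

def polar_transform(n):
    G = np.array([[1]], dtype=int)
    while G.shape[0] < n:
        G = np.kron(np.array([[1, 0], [1, 1]]), G) % 2
    return G

def conditional_entropies(n, p):
    # exact H(Y_j | Y^{j-1}) by enumerating all 2^n source words (small n only)
    G = polar_transform(n)
    pmf = {}
    for x in itertools.product([0, 1], repeat=n):
        y = tuple(np.array(x).dot(G) % 2)
        pr = np.prod([p if b else 1.0 - p for b in x])
        pmf[y] = pmf.get(y, 0.0) + pr
    def ent(k):
        marg = {}
        for y, pr in pmf.items():
            marg[y[:k]] = marg.get(y[:k], 0.0) + pr
        return -sum(q * np.log2(q) for q in marg.values() if q > 0)
    return [ent(j + 1) - ent(j) for j in range(n)]

def pext(x, p, eps):
    """Keep only the components of x G_n whose index is high-entropy."""
    n = len(x)
    H = conditional_entropies(n, p)
    R = [j for j in range(n) if H[j] >= 1 - eps]   # near-uniform indices
    y = np.array(x).dot(polar_transform(n)) % 2
    return y[R]

out = pext([1, 0, 0, 1, 1, 0, 1, 1], p=0.11, eps=0.05)
print(len(out), "near-uniform bits extracted")
```

As in the definition, the selected index set depends only on $(n, p, \varepsilon)$, not on the realization, so the column selection can be precomputed once.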
\section{Discussion}\label{discussion} We have treated in this paper three problems, namely, compression of correlated sources, sources with memory and sources on finite fields, with a unified approach using a matrix polarization (Theorem \ref{main}), and we provided polar coding schemes for each of these problems. The advantage of using polar coding schemes is that these schemes have low encoding and decoding complexity, and achieve the optimal performance (Shannon limit) while affording mathematical guarantees on the performance, as described in Corollaries \ref{swc}, \ref{galoisc} and \ref{memoryc}. One can now also combine these different problems. Namely, for multiple sources that are defined on some finite fields, with well-behaved correlations between and within them, one can, using the interleaving trick and the vector representation described respectively in Sections \ref{memory} and \ref{galois}, organize the sources' outputs in a matrix form so as to meet the hypotheses of Theorem \ref{main}, and hence obtain a polar compression scheme requiring the minimal compression rate. One can also translate the results in this paper to a channel setting, such as $m$-user multiple access channels (already treated in \cite{mmac}), channels with memory or channels with non-binary field inputs, by using duality arguments. Although the results in this paper are expected to hold when $m=o(n)$, one has to be careful with the complexity scaling when $m$ gets large. In that regard, an advantage of using finite fields of cardinality $q=2^m$ rather than modular fields of prime cardinality is that some operations required in the polar decoding algorithm are convolution-like operations over the underlying field, and as the FFT algorithm allows one to reduce the computational cost of a convolution from $O(q^2)$ to $O(q \log_2 q)$ when $q$ is a power of 2, one can benefit from this fact.
We have assumed in this paper that the sets $D_\varepsilon(\mu)$ and $R_\varepsilon(\mu)$ can be computed, without discussing how. The first reason why we do not stress this aspect here, as in other papers on polar coding, is that these sets do not depend on the realization of the source(s). Namely, if one is able to compute these sets once for several values of interest of $\varepsilon$ and of the dimensions, one can then use the same sets for any output realization. This is fundamentally different from the decoding algorithm, which takes the source realization as an input. Yet, it is still crucial to be able to compute these sets once, for the parameters of interest. In order to do so, there are at least two possible approaches. The first one is via simulations, and is discussed in \cite{ari}: using the Kronecker structure of $G_n$, it is possible to run simulations and get accurate estimates of the conditional entropies $H(Y_j|Y^{j-1})$, in particular (from Section \ref{proof1}) of the sets $D_\varepsilon(\mu)$ and $R_\varepsilon(\mu)$. Another option is to use algorithms to approach the exact values of $H(Y_j|Y^{j-1})$ within a given precision, in linear time; this has been proposed in particular in \cite{vardy}. It would also be interesting to have mathematical characterizations of these sets. At the moment, this is an open problem, even for the simplest settings (single binary i.i.d.\ source, or, in the channel setting, the binary erasure channel). Finally, this work could also apply to the matrix completion setting.
For example, if $X$ is an $m \times n$ matrix where column $X_j$ contains the ratings of $m$ movies by user $j$, we can use Theorem \ref{main} to show that by applying the matrix\footnote{the matrix obtained by deleting the columns of $G_n$ that are in $D_\varepsilon$} $G_n \times I_{(D_\varepsilon)^c}$ to $X$, we are left with fewer entries (the more correlations between the movie ratings, the fewer entries) that still allow one to recover the initial matrix. Hence, if we are given only a smaller set of appropriate entries (such sets can be characterized using Section \ref{proof1}), we can reconstruct the initial matrix using \texttt{polar-matrix-dec}. \section{Proofs}\label{proofs} \subsection{Proof of Lemma \ref{mainlemma}}\label{proofs1} In order to prove Lemma \ref{mainlemma}, we need the following definition and lemmas. \begin{definition} For a random vector $V$ distributed over $\F_2^m$, define $V^{-}=V+V'$ and $V^{+}=V'$, where $V'$ is an i.i.d.\ copy of $V$. Let $\{b_i\}_{i \geq 1}$ be i.i.d.\ binary random variables in $\{-,+\}$ with uniform probability distribution, and let \begin{align*} & \eta_k[S]=H(V^{b_1 \dots b_k}[S]| V^{c_1 \dots c_k}, \forall (c_1 \dots c_k) < (b_1 \dots b_k)) \end{align*} for $S \subseteq [m]$, where the order between $(-,+)$-sequences is the lexicographic order (with $- < +$). \end{definition} Note that $$\{ V^{b_1 \dots b_k} : (b_1 \dots b_k) \in \{-,+\}^k \} \stackrel{(d)}{=} X G_{2^k}$$ where $X$ is the matrix whose columns are i.i.d.\ copies of $V$. The following lemma justifies the definition of the previous random processes. \begin{lemma}\label{corresp} Using $V \sim \mu$ in the definition of $\eta_k[S]$, we have, for any $n$ and any set $D \subseteq [0,|S|]$, $$\frac{1}{n} |\{j \in [n] : H(Y_j[S]| Y^{j-1} ) \in D \}| = \mathbb{P}\{ \eta_{\log_2 (n)}[S] \in D \}.$$ \end{lemma} The proof is a direct consequence of the fact that the $b_k$'s are i.i.d.\ uniform.
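The one-step behavior of these processes can be checked exactly on a toy distribution: with $V,V'$ i.i.d.\ under $\mu$ on $\F_2^2$, one has $\tfrac12\big(H(V^-[S])+H(V^+[S]\,|\,V^-)\big)\le H(V[S])$ for every $S$, with equality for $S=[m]$. A brute-force Python check, with an arbitrarily chosen $\mu$:

```python
import itertools
import numpy as np

def H(pmf):
    """Shannon entropy (bits) of a pmf given as {outcome: prob}."""
    return -sum(p * np.log2(p) for p in pmf.values() if p > 0)

def marg(pmf, f):
    """Push a pmf forward through the map f (marginalization)."""
    out = {}
    for w, p in pmf.items():
        out[f(w)] = out.get(f(w), 0.0) + p
    return out

m = 2
# an arbitrary (assumed) distribution mu on F_2^2
mu = {(0, 0): 0.5, (0, 1): 0.2, (1, 0): 0.2, (1, 1): 0.1}

# joint law of (V^-, V^+) with V, V' i.i.d. mu, V^- = V + V', V^+ = V'
jnt = {}
for v, pv in mu.items():
    for w, pw in mu.items():
        key = (tuple(a ^ b for a, b in zip(v, w)), w)   # (V^-, V^+)
        jnt[key] = jnt.get(key, 0.0) + pv * pw

for r in range(m + 1):
    for S in itertools.combinations(range(m), r):
        h_parent = H(marg(mu, lambda v: tuple(v[i] for i in S)))
        h_minus  = H(marg(jnt, lambda vw: tuple(vw[0][i] for i in S)))
        # H(V^+[S] | V^-) = H(V^+[S], V^-) - H(V^-)
        h_plus = (H(marg(jnt, lambda vw: (tuple(vw[1][i] for i in S), vw[0])))
                  - H(marg(jnt, lambda vw: vw[0])))
        avg = 0.5 * (h_minus + h_plus)
        assert avg <= h_parent + 1e-12            # super-martingale step
        if S == tuple(range(m)):
            assert abs(avg - h_parent) < 1e-12    # martingale at S = [m]
print("one-step super-martingale property verified")
```

The equality at $S=[m]$ reflects the invertibility of the one-step transform: $(V^-,V^+)$ is a bijective function of $(V,V')$, so the total entropy $2H(\mu)$ is conserved.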
Using the invertibility of $\bigl[\begin{smallmatrix} 1 & 0 \\ 1 & 1 \\ \end{smallmatrix}\bigr]$ and properties of the conditional entropy, we have the following. \begin{lemma} $\eta_k[S]$ is a super-martingale with respect to $b_k$ for any $S \subseteq [m]$ and a martingale for $S=[m]$. \end{lemma} \begin{proof} For $n=2$, we have \begin{align} 2 H(X_1 [S] ) &= H(X_1 [S] X_2[S]) \notag \\ &= H(Y_1 [S] Y_2[S] ) \notag \\ &= H(Y_1 [S] ) + H(Y_2 [S] | Y_1[S] ) \notag \\ & \geq H(Y_1 [S] ) + H(Y_2 [S] | Y_1 ) \label{last} \end{align} with equality in \eqref{last} if $S=[m]$. For $n\geq 2$, the same expansion holds, including the appropriate ``past'' random variables in the conditioning. \end{proof} Note that because $\eta_k[S]$ is a martingale for $S=[m]$, the sum-rate $H(\mu)$ is conserved through the polarization process. Now, using the previous lemma and the fact that $\eta_k[S] \in [0,|S|]$ for any $S$, the martingale convergence theorem implies the following. \begin{corol}\label{conv} For any $S \subseteq [m]$, $\eta_k[S]$ converges almost surely. \end{corol} The following lemma allows one to characterize the possible values of the process $\eta_k[S]$ when it converges. \begin{lemma}\label{invar} For any $\varepsilon>0$, $X$ valued in $\F_2^{m}$, $Z$ arbitrary, $(X',Z')$ an i.i.d.\ copy of $(X,Z)$, and $S \subseteq [m]$, there exists $\delta=\delta(\varepsilon)$ such that $H( X'[S] | Z') - H(X' [S]| Z, Z',X[S]+X'[S]) \leq \delta$ implies $H(X'[S] | Z')-H(X'[S \setminus i] | Z') \in \{0,1 \} \pm \varepsilon$ for any $i \in S$. \end{lemma} \begin{proof} We have \begin{align} &H( X'[S] | Z') - H(X' [S]| Z, Z',X[S]+X'[S]) \notag \\ &=I(X'[S]; X[S]+X'[S]| Z ,Z') \notag \\ & \geq I(X'[S]; X[i]+X'[i]| Z, Z') \notag \\ & \geq I(X'[i]; X[i]+X'[i]| Z ,Z', X'[S \setminus i]) \notag \\ & = H(X'[i]| Z', X'[S \setminus i]) - H(X[i]+X'[i]| Z ,Z', X'[S \setminus i]).
\label{squiz} \end{align} It is shown in \cite{ari} that if $A_1,A_2$ are binary random variables and $B_1,B_2$ are arbitrary such that $\mathbb{P}_{A_1 A_2 B_1 B_2} (a_1,a_2,b_1,b_2) = \frac{1}{4} Q(b_1|a_1+ a_2) Q(b_2|a_2)$, for some conditional probability $Q$, then, for any $a>0$, there exists $b >0$ such that $H(A_2|B_2) - H(A_2|B_1 B_2 A_1) \leq b$ implies $H(A_2|B_2) \in\{0,1\} \pm a$. Using this result, we can pick $\delta$ small enough to lower bound \eqref{squiz} and show that $H(X'[i]| Z', X'[S \setminus i]) \in \{0,1\} \pm \varepsilon$. Since, by the chain rule, $H(X'[S]| Z') - H(X'[S \setminus i]| Z') = H(X'[i]| Z', X'[S \setminus i])$, the claim follows. \end{proof} We then get the following using Corollary \ref{conv} and Lemma \ref{invar}. \begin{corol}\label{integer} With probability one, $\lim_{k \to \infty} \eta_k[S] \in \{0,1,\dots,|S|\}$. \end{corol} Finally, Lemma \ref{corresp} and Corollary \ref{integer} imply Lemma \ref{mainlemma}. \subsection{Proof of Lemma \ref{null} and Theorem \ref{main} part (2)}\label{proofs2} In order to prove Theorem \ref{main} part (2), we basically need to show that part (1) still holds when $\varepsilon$ scales like $\varepsilon_n=2^{-n^{\alpha}}$ for $\alpha <1/2$, as in \cite{ari2}. We did not find a direct way to show that when $\eta_k[S]$ converges to $|S|$, it must do so that fast (the super-martingale characterization is too weak to apply the results of \cite{ari2} directly). This is why we looked into Lemma \ref{null}. By developing a correspondence between the previous results and analogous results dealing with linear forms of the $X[S]$'s, we are able to use the convergence-speed results shown for the single-user setting and conclude. This approach was developed in \cite{mmac} for the multiple access channel; below is the counterpart for our source setting.
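The single-user convergence-speed machinery invoked here can be illustrated on a toy process. For erasure-type distributions the Bhattacharyya recursions hold with equality, $Z^+ = Z^2$ and $Z^- = 2Z - Z^2$, so the whole branching process can be enumerated exactly: the mean is conserved, the extreme branches decay doubly exponentially, and the unpolarized fraction shrinks. This is our own illustrative sketch (toy process and thresholds are our choice, not from the paper):

```python
def z_branches(z0, k):
    """All 2^k values of the erasure-type Bhattacharyya process after k
    steps: z -> z*z ('+' branch) and z -> 2z - z*z ('-' branch)."""
    zs = [z0]
    for _ in range(k):
        zs = [w for z in zs for w in (z * z, 2 * z - z * z)]
    return zs

zs = z_branches(0.5, 10)                      # 1024 synthetic branches
mean = sum(zs) / len(zs)
assert abs(mean - 0.5) < 1e-12                # E[Z_k] conserved: z^2 + (2z - z^2) = 2z
assert min(zs) < 1e-9 and max(zs) > 1 - 1e-9  # extreme branches decay doubly exponentially
mid = sum(1 for z in zs if 1e-2 < z < 1 - 1e-2) / len(zs)
assert mid < 0.5                              # unpolarized fraction is already small
```

The all-$+$ branch satisfies $Z_k = Z_0^{2^k}$, which is the $2^{-2^{\alpha k}}$-type decay exploited below.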
\begin{lemma}\label{tech1} For a random vector $Y$ valued in $\F_2^m$, and an arbitrary random vector $Z$, if $$H(Y [S]| Z) \in \{0,1,\dots,|S|\} \pm \varepsilon$$ for any $S \subseteq [m]$, we have $$ H(\sum_{i \in S} Y[i]|Z) \in \{0,1\} \pm \delta(\varepsilon),$$ with $\delta(\varepsilon) \stackrel{\varepsilon \to 0}{\to}0$. \end{lemma} This lemma is proved in \cite{abbematroid}. Using this result, we have that for $j \in E_n$ there exists a matrix $A_j$ of rank $r_j=|S_j|$ such that $$H(A_j Y_j |Y^{j-1}) \leq m \delta(\varepsilon) .$$ This implies the first part of Lemma \ref{null}, and we now show how this other characterization of the dependencies in $Y$ can be used to obtain a convergence-speed result. We first need the following ``single-user'' result. \begin{lemma}\label{tech2} For any $\beta<1/2$ and $\varepsilon_n=2^{-n^{\beta}}$, we have \begin{align*} \frac{1}{n} | \{j \in [n]: \varepsilon_n < H(\sum_{i \in S} Y_j[i]|Y^{j-1}) <\varepsilon , \forall S \subseteq [m] \} | \to 0 . \end{align*} \end{lemma} \begin{proof} We define the auxiliary family of random processes $\zeta_k[S]$, for $S \subseteq [m]$, by \begin{align*} & \zeta_k[S]=Z(\sum_{i \in S}V^{b_1 \dots b_k}[i]| V^{c_1 \dots c_k}, \forall (c_1 \dots c_k) < (b_1 \dots b_k)) \end{align*} where, for a binary uniform random variable $A$ and an arbitrary random variable $B$, $Z(A|B)=2 \mathbb{E}_B (\mathbb{P}\{ A=0 | B \} \mathbb{P}\{ A=1 | B \})^{1/2}$ is the Bhattacharyya parameter. Note that \begin{align} Z(A|B) \geq H(A|B). \label{bata} \end{align} (This also follows from Proposition 2 in \cite{ari3}.)
We then have, using the chain rule and the source polarization inequalities for the Bhattacharyya parameter, namely Proposition 1 in \cite{ari3}, that \begin{align*} & \zeta_{k+1}[S] \leq \zeta_{k}[S]^2 \text{ if } b_{k+1}={+},\\ & \zeta_{k+1}[S] \leq 2 \zeta_k[S] \text{ if } b_{k+1}={-}, \end{align*} and using Theorem 3 of \cite{ari2}, we conclude that for any $\alpha < 1/2$ $$\liminf_{k \rightarrow \infty} \mathbb{P}(\zeta_k[S] \leq 2^{-2^{\alpha k}}) \geq \mathbb{P}( \zeta_\infty[S]=0).$$ Finally, we conclude using \eqref{bata}. \end{proof} We then use Lemmas \ref{tech1} and \ref{tech2} to conclude that \begin{align} \frac{1}{n} | \{j \in [n]: &H(Y_j [S]| Y^{j-1}) \in \{0,1,\dots,|S|\} \pm \varepsilon, \forall S \subseteq [m], \notag \\ &\exists A_j \text{ with } \mathrm{rank} (A_j)= \mathrm{int}( m-H(Y_j | Y^{j-1})),\notag \\ &H(A_jY_j |Y^{j-1}) <\varepsilon_n \} | \to 1 , \label{set} \end{align} which implies Lemma \ref{null}. To conclude the proof of Theorem \ref{main} part (2), let $\varepsilon_n=2^{-n^{\alpha}}$ and let $E_n=E_n(\varepsilon_n)$ be the set defined through \eqref{set} (which, in view of the previous results, is equivalent to the definition given in Section \ref{proof1}). We then have for $j \in E_n$ that the components $S_j$ to be decoded in $Y_j$ are not correctly decoded with probability $$P_e(j) \leq H(A_jY_j |Y^{j-1}) \leq \varepsilon_n,$$ and the block error probability is bounded as $$P_e \leq \sum_{j \in E_n} P_e(j) \leq n \varepsilon_n,$$ so that, choosing $\alpha \in (\beta, 1/2)$, we can reach a block error probability of $O(2^{-n^{\beta}})$ for any $\beta < 1/2$. \subsection{Proof of Lemma \ref{pext}}\label{proofext} For $j \in R_{\varepsilon^2/2n}(p(k))$, \begin{align*} H(Y_j(p(k)) |Y^{j-1}(p(k))) \geq 1 - \tilde{\varepsilon} \end{align*} where $\tilde{\varepsilon}=\varepsilon^2/2n$ and $Y(p(k))=X(p(k)) G_n$, where $X(p(k))$ is i.i.d.\ under $p(k)$.
Moreover, for any distribution $p$ on $\F_2$ such that $H(p) \geq H(p(k))=k/n$, there exists a distribution $\nu$ on $\F_2$ such that $p(k) \star \nu = p$, where $\star$ denotes the circular convolution. Equivalently, there exists $Z \stackrel{\text{iid}}{\sim} \nu$ independent of $X(p(k))\stackrel{\text{iid}}{\sim} p(k)$, such that $X(p)=X(p(k)) \oplus Z \stackrel{\text{iid}}{\sim} p$. Define $Y(p)=X(p) G_n$, $Y(p(k))=X(p(k)) G_n$ and $W=Z G_n$, hence $Y(p) = Y(p(k)) \oplus W$. We have \begin{align} H(Y(p)_j | Y(p)^{j-1}) &\geq H(Y(p)_j | Y(p)^{j-1}, W) \notag \\ &= H(Y(p(k))_j | Y(p(k))^{j-1}, W) \notag \\ &= H(Y(p(k))_j | Y(p(k))^{j-1}) \label{allo} \end{align} where the last equality follows from the fact that $Y(p(k))$ is independent of $W$, since $X(p(k))$ is independent of $Z$. Therefore, for any $X(p)$ i.i.d.\ such that $H(p) \geq k/n$ and for any $j \in R_{\tilde{\varepsilon}}(p(k))$, we have \begin{align} H(Y(p)_j | Y(p)^{j-1}) &\geq 1- \tilde{\varepsilon} \end{align} and \begin{align} H(Y(p)[R_{\tilde{\varepsilon}}(p(k))]) &\geq \sum_{j \in R_{\tilde{\varepsilon}}(p(k))} H(Y_j(p)|Y^{j-1}(p)) \notag \\ & \geq |R_{\tilde{\varepsilon}}(p(k))| ( 1-\tilde{\varepsilon} ) .\notag \end{align} Hence, denoting by $\mu_R$ the distribution of $Y(p)[R_{\tilde{\varepsilon}}(p(k))]$ and by $U_R$ the uniform distribution on $\F_2^{|R_{\tilde{\varepsilon}}(p(k))|}$, we have \begin{align} D(\mu_R || U_R) &\leq H(U_R) - H(\mu_R) \notag \\ & \leq |R_{\tilde{\varepsilon}}(p(k))| \tilde{\varepsilon} \notag \\ & \leq n \tilde{\varepsilon} \label{Dbound}. \end{align} Using Pinsker's inequality and \eqref{Dbound}, we obtain \begin{align*} \| \mu_R - U_R \|_1 \leq (2 \ln 2 \, D(\mu_R || U_R))^{1/2} &\leq \varepsilon . \end{align*} Finally, we have from Theorem \ref{thmari} \begin{align*} |R_{\tilde{\varepsilon}}(p(k))| = k + o(n). \end{align*}
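The two bounds at the end of this argument are easy to sanity-check numerically: against a uniform reference the first step of \eqref{Dbound} is in fact an identity, $D(\mu\|U) = H(U) - H(\mu)$, and Pinsker's inequality in bits reads $\|\mu-U\|_1 \le \sqrt{2\ln 2\, D(\mu\|U)}$. A quick check with our own toy distribution (not from the paper):

```python
from math import log, log2, sqrt

def entropy_bits(p):
    return -sum(x * log2(x) for x in p if x > 0)

def kl_bits(p, q):
    # Kullback-Leibler divergence in bits
    return sum(x * log2(x / y) for x, y in zip(p, q) if x > 0)

def l1(p, q):
    return sum(abs(x - y) for x, y in zip(p, q))

n = 8
u = [1.0 / n] * n
mu = [0.25, 0.05, 0.2, 0.1, 0.05, 0.15, 0.1, 0.1]

# against the uniform distribution, D(mu||u) = H(u) - H(mu) exactly
assert abs(kl_bits(mu, u) - (log2(n) - entropy_bits(mu))) < 1e-12
# Pinsker's inequality in bits: ||mu - u||_1 <= sqrt(2 ln2 D(mu||u))
assert l1(mu, u) <= sqrt(2 * log(2) * kl_bits(mu, u))
```

With $D(\mu_R\|U_R) \le n\tilde\varepsilon = \varepsilon^2/2$, the bound yields $\|\mu_R - U_R\|_1 \le \varepsilon\sqrt{\ln 2} \le \varepsilon$, as used above.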
\section{Introduction} Probabilistic Cellular Automata (PCA) are discrete-time Markov chains on a product space $S^{\Lambda}$ (configuration space) whose transition probability is a product measure, i.e., given two generic configurations $\tau=(\tau_1,\dots,\tau_N)$ and $\sigma=(\sigma_1,\dots,\sigma_N)$: \begin{equation}\label{eq:general_pca_transition_probability} P\set*{X_n=\tau|X_{n-1}=\sigma} = \prod_{i=1}^N P \set*{(X_n)_i=\tau_i|X_{n-1}=\sigma}, \end{equation} so that, at each time $n$, the components of the configuration are updated independently. From a computational point of view, the evolution of a Markov chain of this type is well suited to be simulated on parallel processors. Recently, a class of PCA has been introduced in order to study nearest neighbor spin systems on lattices and, more generally, spin systems on arbitrary graphs $G=(V,E)$, where the interaction Hamiltonian is given by \begin{equation*} H(\sigma)=-\sum_{e=\{x, y\} \in E} J_{x y} \sigma_{x} \sigma_{y} - 2\sum_{x \in V} \lambda_{x} \sigma_{x} \end{equation*} with both $J_{x y}$ and $\lambda_{x}$ in $\mathbb{R}$, and $\sigma \in\{-1,+1\}^{V}$ a \emph{configuration} on $G$. In this context, the transition probability from a configuration $\sigma$ to a configuration $\tau$ is defined in terms of a \emph{pair Hamiltonian} $H(\sigma, \tau)$ and the transitions are such that, at each time step, the values of all spins are simultaneously updated (see: \cite{dss12, ls, dss15, pss, pssboundary}). In this framework, a new parallel dynamics for Ising-like models on a general finite graph, called \emph{shaken dynamics}, has been introduced in \cite{shaken2d} and has been extensively investigated in the case of the two dimensional square lattice. The distinctive feature of the shaken dynamics is the fact that transitions between states are obtained through a combination of two \emph{half steps}.
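A single PCA step of the form \eqref{eq:general_pca_transition_probability} can be sketched in a few lines: every site draws its new spin independently of the others, with a heat-bath probability determined by a local field computed from the current configuration (this is also the structure of the half steps of the shaken dynamics discussed below). The following is only an illustrative sketch on a one dimensional ring; the graph, parameters and function names are ours, not from the paper:

```python
import math
import random

def pca_step(sigma, neigh, J, q, lam, rng):
    """One parallel step: each spin is refreshed independently, with
    heat-bath probability exp(h)/(2 cosh h) for spin +1, where h is the
    local field computed from the *current* configuration sigma."""
    new = []
    for x, s in enumerate(sigma):
        h = J * sum(sigma[y] for y in neigh[x]) + q * s + lam
        p_plus = math.exp(h) / (2.0 * math.cosh(h))
        new.append(+1 if rng.random() < p_plus else -1)
    return new

# a small ring: each site interacts with its two nearest neighbours
N = 10
neigh = [((x - 1) % N, (x + 1) % N) for x in range(N)]
rng = random.Random(0)
sigma = [rng.choice([-1, +1]) for _ in range(N)]

# a strong external field drives all spins to +1 in one parallel step
assert pca_step(sigma, neigh, J=0.0, q=0.0, lam=50.0, rng=rng) == [+1] * N

# a strong self-interaction q effectively freezes the configuration
assert pca_step(sigma, neigh, J=0.0, q=10.0, lam=0.0, rng=rng) == sigma
```

Since all sites are updated from the same "old" configuration, the loop body is embarrassingly parallel, which is the computational appeal of PCA mentioned above.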
In each of these half steps the value of the spin at site $x$ is updated according to a probability distribution depending, through a self-interaction parameter $q > 0$, on the value of the spin at site $x$ itself and on the values of the spins sitting at a suitable subset of the sites adjacent to $x$, in such a way that all neighbors of $x$ are considered exactly once in the \emph{whole step}. It is worth noting that a shaken dynamics on a given graph structure can be naturally associated with a dynamics on an \emph{induced} bipartite graph where the spins in each partition are alternatively updated. The vertex set of this bipartite graph consists of two copies of the original vertex set, so that this induced graph can be thought of as having two \emph{layers}. The sub-configuration on one of the layers is the ``current'' configuration of the shaken dynamics, whereas the sub-configuration on the other layer is the ``intermediate'' configuration reached through the first half step. In this view, the shaken dynamics can be seen as the evolution taking place on one of the layers of the associated alternate dynamics. The geometry of the induced bipartite graph where the alternate dynamics lives varies continuously with $q$. For instance, in the case of the shaken dynamics on $\numberset{Z}^2$ with $J_{xy} = J$ for all pairs of nearest neighbors $\set*{x,y}$, the bipartite graph where the associated alternate dynamics evolves is a non-homogeneous hexagonal lattice. For $q = J$ this graph becomes the homogeneous hexagonal lattice; in the limit $q \to \infty$ the hexagonal lattice ``collapses'' onto the square lattice, whereas for $q = 0$ the hexagonal lattice becomes a collection of independent one dimensional lattices. In this work we introduce the shaken dynamics on the $3d$ cubic lattice with $J_{xy} = J$ for all pairs of nearest neighbors $\set*{x,y}$.
We determine the stationary measure of the shaken dynamics and show that, if the self-interaction $q$ is sufficiently large, this equilibrium measure tends to the Gibbs measure in total variation distance. Furthermore, in the case of null external magnetic field, we show that the associated alternate dynamics takes place on a suitable tetrahedral lattice that becomes homogeneous if $J = q$, becomes the cubic lattice in the limit $q\to\infty$, and reduces to a collection of independent $2d$ hexagonal lattices if $q=0$. Exploiting the connection with this alternate dynamics, we determine two curves in the $J$--$q$ plane such that above the ``upper curve'' the system is in an ordered phase (low temperature regime), whereas below the ``lower curve'' the system is in a disordered phase (high temperature regime). It is very reasonable to assume the existence of a critical curve $J_c(q)$ between these two curves, for which we provide a simple numerical estimate obtained by simulating the shaken dynamics with different values of $q$. We see that our estimates for $J_c(0)$, $J_c(J)$ and $J_c(\infty)$ are, respectively, in good agreement with the critical temperature of the Ising model on the hexagonal lattice and with the numerical estimates available for the critical temperature of the Ising model on the tetrahedral and the cubic lattice. This suggests that the numerically determined critical curve should not be too far from the ``real'' one. Moreover, we study numerically the critical exponents for the magnetic susceptibility as a function of the self-interaction $q$ and provide some evidence that the system retains its three dimensional structure as long as $q > 0$, whereas it becomes two dimensional when $q = 0$. In other words, our model is able to capture the dimensional transition at $q=0$.
In the next section we define the lattice spin model and the shaken dynamics, describe the first properties of the model, and highlight its relation with the alternate dynamics on the tetrahedral lattice. Furthermore, we state our main results. Section 3 is devoted to the proofs. Finally, in Section 4 we present our numerical findings concerning the critical curve and discuss the behavior of the critical exponents. \section{The model} \subsection{Definitions and first properties} Let $\Lambda$ be an $L \times L \times L$ cubic box in $ \numberset{Z}^3 $ and let $B_\Lambda$ be the set of all pairs of nearest neighbors in $\Lambda$, that is, $B_\Lambda=\{(x,y): x,y\in \Lambda, |x-y|=1\}$. Let $\ensuremath{\mathcal{X}}$ be the set of all possible spin configurations in $\Lambda$, i.e. $\ensuremath{\mathcal{X}}=\{-1,+1\}^{\Lambda}$. Let $\sigma, \tau \in \ensuremath{\mathcal{X}}$ be two spin configurations and define a pair Hamiltonian in the following manner: \begin{align} \label{equation1} H_{\lambda}(\sigma,\tau) & = -\sum_{x\in\Lambda} [J\sigma_x(\tau_x^u+\tau_x^r+\tau_x^f)+q\sigma_x\tau_x+\lambda(\sigma_x+\tau_x)]\\ \label{equation2} & =-\sum_{x\in\Lambda} [J\tau_x(\sigma_x^d+\sigma_x^l+\sigma_x^b)+q\tau_x\sigma_x+\lambda(\tau_x+\sigma_x)], \end{align} where \begin{itemize} \item $J>0$ represents the ferromagnetic interaction constant, \item $q>0$ represents the inertial (or self-interaction) term, \item $\lambda>0$ represents the intensity of the external magnetic field, \item $x^u$ is the site above $x$, at lattice distance $1$ from $x$ itself, \item $x^r$ is the site on the right of $x$, \item $x^f$ is the site in front of $x$, \item $x^d$ is the site below (down) $x$, \item $x^l$ is the site on the left of $x$, \item $x^b$ is the site behind $x$, \item $\sigma_x$ (resp. $\tau_x$) is the spin at site $x$ in configuration $\sigma$ (resp.
$\tau$), \item $\sigma_x^d$ is the spin at site $x^d$ in configuration $\sigma$ ($\sigma_x^l$, $\sigma_x^b$, $\tau_x^u$, \ldots are defined likewise). \end{itemize} See Fig.~\ref{cubic}. \\ \begin{figure} \centering \includegraphics[scale=0.5]{cubic.jpg} \caption{The cubic lattice.} \label{cubic} \end{figure} It is straightforward to check that the Hamiltonian \eqref{equation1} is tightly linked to the standard Ising one. In particular, the following proposition holds. \begin{proposition} \label{proposition1} \begin{equation} \label{equation3} H_{\lambda}(\sigma,\sigma)=H_{2\lambda}(\sigma)-q|\Lambda|, \end{equation}\\ with \begin{equation} \label{equation4} H_{2\lambda}(\sigma)=-\sum_{(x,y)\in B_{\Lambda}} J\sigma_x\sigma_y-2\lambda\sum_{x\in\Lambda} \sigma_x, \end{equation} the standard Ising Hamiltonian with external magnetic field twice that of Hamiltonian (\ref{equation1}). \end{proposition} \begin{proof} Follows immediately from (\ref{equation1}) by setting $\sigma=\tau$; see \cite{shaken2d}. \end{proof} In the same spirit as \cite{shaken2d}, we want to define a \emph{shaken dynamics} on $\ensuremath{\mathcal{X}}$. To this end, consider a Markov chain that updates the spin configuration with transition probability $\mathrm{P^{dlb}}$ at odd times and $\mathrm{P^{urf}}$ at even times, where: \begin{equation} \label{equation7} \mathrm{P^{dlb}}(\sigma,\sigma')=\frac{e^{-H_{\lambda}(\sigma,\sigma')}}{\overrightarrow{\rm Z_\sigma}} \ \ \ \text{and} \ \ \ \mathrm{P^{urf}}(\sigma,\sigma')=\frac{e^{-H_{\lambda}(\sigma',\sigma)}}{\overleftarrow{\rm Z_\sigma}}, \end{equation} with normalizing constants $\overrightarrow{\rm Z_\sigma}=\sum_{\sigma'\in\ensuremath{\mathcal{X}}} e^{-H_{\lambda}(\sigma,\sigma')} \ \ \ \text{and} \ \ \ \overleftarrow{\rm Z_\sigma}=\sum_{\sigma'\in\ensuremath{\mathcal{X}}} e^{-H_{\lambda}(\sigma',\sigma)}$.\\ Then, the shaken dynamics is defined through the composition of an ``odd'' and an ``even'' step.
More precisely: \begin{align} \label{equation13} \mathrm{P^{sh}}(\sigma,\tau)=\sum_{\sigma'\in\ensuremath{\mathcal{X}}} \mathrm{P^{dlb}}(\sigma,\sigma')\mathrm{P^{urf}}(\sigma',\tau)=\sum_{\sigma'\in\ensuremath{\mathcal{X}}} \frac{e^{-H_{\lambda}(\sigma,\sigma')}}{\overrightarrow{\rm Z_\sigma}}\frac{e^{-H_{\lambda}(\tau,\sigma')}}{\overleftarrow{\rm Z_{\sigma'}}}. \end{align} Though, strictly speaking, the shaken dynamics \eqref{equation13} is not a PCA in the sense of \eqref{eq:general_pca_transition_probability}, it is the composition of two steps, each having a factorized transition probability. Indeed: \begin{align} \mathrm{P^{dlb}}(\sigma,\sigma')= \prod_{x\in\Lambda}\frac{e^{h_{x}^{\mathrm{dlb}}(\sigma)\sigma_x'}}{2\cosh h_{x}^{\mathrm{dlb}}(\sigma)}, \quad \mathrm{P^{urf}}(\sigma,\sigma')= \prod_{x\in\Lambda}\frac{e^{h_{x}^{\mathrm{urf}}(\sigma)\sigma_x'}}{2\cosh h_{x}^{\mathrm{urf}}(\sigma)} \end{align} where \begin{align} h_{x}^{\mathrm{dlb}}(\sigma)= J(\sigma_x^d+\sigma_x^l+\sigma_x^b)+q\sigma_x+\lambda, \quad h_{x}^{\mathrm{urf}}(\sigma)= J(\sigma_x^u+\sigma_x^r+\sigma_x^f)+q\sigma_x+\lambda \end{align} are the local fields felt at site $x$ at, respectively, the odd and the even ``half steps''. Observe that the Hamiltonian \eqref{equation1} is not symmetric: $ H_{\lambda}(\sigma,\tau)\neq H_{\lambda}(\tau,\sigma). $ This implies that a dynamics evolving solely according to $\mathrm{P^{dlb}}$ or $\mathrm{P^{urf}}$ is not reversible. However, when the shaken dynamics \eqref{equation13} is considered, the following result holds: \begin{proposition}\label{proposition4} The shaken dynamics $\mathrm{P^{sh}}(\sigma,\tau)$ is reversible with respect to the measure $ \pi_\Lambda(\sigma)=\frac{\overrightarrow{\rm Z_\sigma}}{Z} $ which is, therefore, its stationary measure.
\end{proposition} \begin{proof} The detailed balance condition is readily established; indeed: \begin{align*} \overrightarrow{\rm Z_\sigma}\mathrm{P^{sh}}(\sigma,\tau) & = \overrightarrow{\rm Z_\sigma}\sum_{\sigma'\in\ensuremath{\mathcal{X}}} \frac{e^{-H_{\lambda}(\sigma,\sigma')}}{\overrightarrow{\rm Z_\sigma}}\frac{e^{-H_{\lambda}(\tau,\sigma')}}{\overleftarrow{\rm Z_{\sigma'}}} = \sum_{\sigma'\in\ensuremath{\mathcal{X}}} \frac{e^{-[H_{\lambda}(\sigma,\sigma')+H_{\lambda}(\tau,\sigma')]}}{\overleftarrow{\rm Z_{\sigma'}}} \\ & = \overrightarrow{\rm Z_\tau}\sum_{\sigma'\in\ensuremath{\mathcal{X}}} \frac{e^{-H_{\lambda}(\tau,\sigma')}}{\overrightarrow{\rm Z_\tau}}\frac{e^{-H_{\lambda}(\sigma,\sigma')}}{\overleftarrow{\rm Z_{\sigma'}}} = \overrightarrow{\rm Z_\tau}\mathrm{P^{sh}}(\tau,\sigma) \end{align*} \end{proof} \ \begin{figure} \centering \includegraphics[scale=0.6]{reticolo.png} \caption{The graph $\Lambda_\mathcal{D}$. Blue and red dots represent, respectively, the sites in the two three-dimensional lattices $V_1$ and $V_2$. The black segments correspond to the interaction governed by the parameter $J$; the green segments correspond to the self-interaction, governed by the parameter $q$.} \label{reticolo1} \end{figure} \subsection{Alternate dynamics} Let $V_1$ and $V_2$ be two copies of $V$, the vertex set of $\Lambda$, and let $\Lambda_\mathcal{D}$ be a finite graph with vertex set given by $V_1 \cup V_2$. For each site $x$ in $\Lambda$, $x_1, x_2$ are the two copies of $x$ in, respectively, $V_1$ and $V_2$ and are called corresponding sites. Consider a spin configuration $\sigma_1$ on $V_1$ and a spin configuration $\sigma_2$ on $V_2$. Then, the Hamiltonian $H(\sigma_1, \sigma_2)$ defines the set of edges on $\Lambda_\mathcal{D}$. In particular, each pair $x_1, x_2$ of corresponding sites is connected by an edge with weight $q$. Moreover, each $x_1 \in V_1$ has three additional edges, $\{x_1, x_2^u\}$, $\{x_1, x_2^r\}$, $\{x_1, x_2^f\}$, with weight $J$.
Note that the graph $\Lambda_\mathcal{D}$ is bipartite by construction and each edge has one endpoint in $V_1$ and one in $V_2$. A graphical representation of $\Lambda_\mathcal{D}$ is given in Fig.~\ref{reticolo1}. The parameter $q$ determines the geometry of the lattice $\Lambda_\mathcal{D}$. Indeed, thinking of the edge weight as proportional to the inverse of the geometrical distance between the vertices, we have: \begin{itemize} \item the limit $q\to 0$ corresponds to erasing the $q$-edges, obtaining, from the lattice $\Lambda_\mathcal{D}$, ``independent'' copies of the two-dimensional honeycomb lattice; \item when $J=q$, the $q$-edges and the $J$-edges become of the same length, so the lattice $\Lambda_\mathcal{D}$ becomes a tetrahedral lattice, which we can picture as a diamond structure; \item the limit $q\to\infty$ corresponds to identifying the two vertices linked by each $q$-edge; in this case the lattice $\Lambda_\mathcal{D}$ degenerates into a simple cubic lattice. \end{itemize} Consider a dynamics (in the remainder referred to as \emph{alternate dynamics}) that, at each step, updates all the spins in one of the two layers, alternately. Then the shaken dynamics can be seen as the projection of the alternate dynamics onto one of the layers. To make this statement precise, let $\vec{\sigma} = (\sigma_1,\sigma_2), \; \vec{\tau} = (\tau_1, \tau_2) \in \ensuremath{\mathcal{X}_\mathcal{D}}$. Then, the alternate dynamics on $\ensuremath{\mathcal{X}_\mathcal{D}}$ is defined by the transition probabilities \begin{equation} \label{equation18} \mathrm{P^{alt}}(\vec{\sigma},\vec{\tau}) = \mathrm{P^{dlb}}(\sigma_1,\tau_2)\mathrm{P^{urf}}(\tau_2,\tau_1) = \frac{e^{-H_{\lambda}(\sigma_1,\tau_2)}} {\overrightarrow{\rm Z_ {\sigma_1}}} \frac{e^{-H_{\lambda}(\tau_1,\tau_2)}} {\overleftarrow{\rm Z_{\tau_2}}}.
\end{equation} The transition probabilities of the shaken dynamics can then be written in the form \begin{equation} \label{equation17} \mathrm{P^{sh}}(\sigma_1,\tau_1) =\sum_{\tau_2\in\ensuremath{\mathcal{X}}} \mathrm{P^{alt}} ((\sigma_1,\cdot), (\tau_1,\tau_2)). \end{equation} As far as the stationary measure of the alternate dynamics is concerned, we have the following \begin{proposition} The alternate dynamics defined on $\ensuremath{\mathcal{X}_\mathcal{D}}$ with transition probability $\mathrm{P^{alt}}(\vec{\sigma},\vec{\tau})$ defined above has the following stationary measure: $$\pi_2(\sigma,\tau)=\frac{1}{Z} e^{-H_{\lambda}(\sigma,\tau)}.$$ Moreover, in general, this dynamics is irreversible. \end{proposition} \begin{proof} Summing first over $\sigma_2$ and then over $\sigma_1$: \begin{align*} \sum_{\sigma_1,\sigma_2} \pi_2 (\sigma_1,\sigma_2) \mathrm{P^{alt}}(\vec{\sigma},\vec{\tau}) & = \sum_{\sigma_1} \frac{1}{Z} \frac{e^{-H_{\lambda}(\sigma_1,\tau_2)}\, e^{-H_{\lambda}(\tau_1,\tau_2)}} {\overleftarrow{\rm Z_{\tau_2}}} \sum_{\sigma_2} \frac{e^{-H_{\lambda}(\sigma_1,\sigma_2)}} {\overrightarrow{\rm Z_{\sigma_1}}} \\ & = \frac{e^{-H_{\lambda}(\tau_1,\tau_2)}}{Z\, \overleftarrow{\rm Z_{\tau_2}}} \sum_{\sigma_1} e^{-H_{\lambda}(\sigma_1,\tau_2)} = \frac{1}{Z} e^{-H_{\lambda}(\tau_1,\tau_2)} = \pi_2 (\tau_1,\tau_2), \end{align*} where we used $\sum_{\sigma_2} e^{-H_{\lambda}(\sigma_1,\sigma_2)} = \overrightarrow{\rm Z_{\sigma_1}}$ and $\sum_{\sigma_1} e^{-H_{\lambda}(\sigma_1,\tau_2)} = \overleftarrow{\rm Z_{\tau_2}}$. However, in general, \begin{align*} \pi_2 (\sigma_1,\sigma_2) \mathrm{P^{alt}}(\vec{\sigma},\vec{\tau}) \neq \pi_2 (\tau_1,\tau_2) \mathrm{P^{alt}}(\vec{\tau}, \vec{\sigma}) \end{align*} since $ H_{\lambda}(\sigma,\tau)\neq H_{\lambda}(\tau,\sigma)$. \end{proof} \begin{remark} The stationary measure of the shaken dynamics is the marginal of the stationary measure of the alternate dynamics, that is \begin{align*} \pi_\Lambda(\sigma) = \sum_{\tau \in \ensuremath{\mathcal{X}}} \pi_2(\sigma, \tau). \end{align*} \end{remark} \subsection{Results} In this Section we state our main results. In particular, we identify conditions ensuring the convergence of the stationary measure of the shaken dynamics to the Gibbs measure and identify two regions of analyticity of the partition function.
These two regions correspond, respectively, to a low temperature and a high temperature regime for the system. The proofs are given in the next Section. We define, for the model introduced previously, the Gibbs measure: $ \pi_{\Lambda}^G(\sigma)=\frac{e^{-H_{2\lambda}(\sigma)}}{Z^G}, $ \\with $ Z^G=\sum_{\sigma \in \ensuremath{\mathcal{X}}} e^{-H_{2\lambda}(\sigma)}, $ where $H_{2\lambda}(\sigma)$ is given by (\ref{equation4}). The stationary measure of the Markov chain defined above, that is $\pi_{\Lambda}(\sigma)$, is linked to the Gibbs measure $\pi_{\Lambda}^G(\sigma)$ by the following result: \begin{theorem}\label{theorem1} Let $\delta=e^{-2q}$. If $\lim_{\abs{\Lambda} \to \infty} \delta^2 \abs{\Lambda}=0$, then there exist $\bar{J}$ and $\tilde{J}$ such that, for $J> \bar J$ or $J<\tilde{J}$, we have: \begin{equation}\label{thm1} \lim_{\abs{\Lambda} \to \infty} \norma{\pi_{\Lambda}-\pi_{\Lambda}^G}_{TV}=0.\footnotemark \footnotetext{$\norma{\cdot}_{TV}$ denotes the total variation distance: $\norma{\pi_{\Lambda}-\pi_{\Lambda}^G}_{TV}=\frac{1}{2}\sum_{\sigma \in \ensuremath{\mathcal{X}}} \abs{\pi_{\Lambda}(\sigma)-\pi_{\Lambda}^G(\sigma)}$} \end{equation} \end{theorem} To identify the low temperature regime, we look at the magnetization of the origin and determine parameters such that this magnetization is positively correlated with the magnetization at the boundary even in the thermodynamic limit. Let $\pi^{+}_{\Lambda}(\sigma)$ (resp. $\pi^{-}_{\Lambda}(\sigma)$) be the equilibrium measure of the shaken dynamics when $+$ (resp. $-$) boundary conditions are taken into account and let $\angbra{\sigma_{0}}^{+}_{\Lambda}$ (resp. $\angbra{\sigma_{0}}^{-}_{\Lambda}$) be the expected value of $\sigma_{0}$ with respect to this probability measure, that is $\angbra{\sigma_{0}}^{+}_{\Lambda} = \sum_{\sigma} \sigma_{0} \pi^{+}_{\Lambda}(\sigma)$ and $\angbra{\sigma_{0}}^{-}_{\Lambda} = \sum_{\sigma} \sigma_{0} \pi^{-}_{\Lambda}(\sigma)$.
Then \begin{theorem}\label{thm:low_temp_regime} In the thermodynamic limit, the mean magnetization of the origin depends on the boundary conditions, that is, \begin{equation*} \lim_{\abs{\Lambda} \to \infty} \angbra{\sigma_{0}}^{+}_{\Lambda} \neq \lim_{\abs{\Lambda} \to \infty} \angbra{\sigma_{0}}^{-}_{\Lambda}, \end{equation*} if $J$ and $q$ are sufficiently large. The explicit description of the low temperature region is given in \eqref{condtrans}. \end{theorem} Conversely, when the system is at high temperature, it is possible to identify conditions ensuring the analyticity of the partition function by considering a suitable (high temperature) expansion. \begin{theorem}\label{thm:high_temp_regime} The function \begin{equation}\label{sigmahightemp} f_{\Lambda_\mathcal{D}}(J,q) = \frac{1}{\abs{\Lambda_\mathcal{D}}} \ln Z_{\Lambda_\mathcal{D}}(J,q) \end{equation} is analytic if the parameters $J$ and $q$ are sufficiently small. The explicit description of the high temperature analyticity region is given in \eqref{FP13}. \end{theorem} \section{Proofs of the main results} \subsection{Proof of Theorem \ref{theorem1}} We start by proving the first part of the theorem: if\linebreak \mbox{$ \lim_{\abs{\Lambda} \to \infty} \delta^2 \abs{\Lambda}=0 $}, then there exists a $\bar J$ such that, for $J> \bar J$, we have: \linebreak $ \lim_{\abs{\Lambda} \to \infty} \norma{\pi_{\Lambda}-\pi_{\Lambda}^G}_{TV}=0. $ To this end, we need some preliminary lemmas. \begin{lemma} \label{lemma1} \begin{equation} \label{equation23} \overrightarrow{Z_{\sigma}}=e^{q\abs{\Lambda}}e^{-H_{2\lambda}(\sigma)}\prod_{x \in \Lambda} \left(1+\delta e^{-2g_x^{dlb}(\sigma)\sigma_x-2\lambda\sigma_x}\right), \end{equation} where: $$ g_x^{dlb}(\sigma)=J(\sigma_x^d+\sigma_x^l+\sigma_x^b).
$$ \end{lemma} \begin{proof} It follows using the same steps as in \cite[Equation~25]{shaken2d}. \end{proof} In order to compare the stationary measure $\pi_\Lambda$ with the Gibbs measure $\pi_\Lambda^G$, it is convenient to rewrite the previous expression for $\overrightarrow{Z_{\sigma}}$ in terms of the \emph{Gibbs weight} $\omega^G(\sigma)=e^{-H_{2\lambda}(\sigma)}$ of configuration $\sigma$. Write \begin{equation} \label{omegasigma} \omega(\sigma)=e^{-H_{2\lambda}(\sigma)}f(\sigma)=\omega^G(\sigma)f(\sigma), \end{equation} with \begin{equation} \label{equation27} f(\sigma)=\prod_{x \in \Lambda} \sqparens*{ 1+\delta e^{-2g_x^{dlb}(\sigma)\sigma_x-2\lambda \sigma_x} }. \end{equation} Then $\overrightarrow{Z_\sigma}$ can be written as \begin{equation} \overrightarrow{\rm Z_\sigma}=\omega^G(\sigma)e^{q|\Lambda|}f(\sigma). \end{equation} Recalling the definition of the Gibbs measure, \begin{equation} \pi_{\Lambda}^G(\sigma)=\frac{e^{-H_{2\lambda}(\sigma)}}{Z^G}=\frac{\omega^G(\sigma)}{Z^G}, \end{equation} $\pi_\Lambda$ can be written as: \begin{equation} \pi_{\Lambda}(\sigma) = \frac{\overrightarrow{\rm Z_\sigma}}{Z} =\frac{\frac{\omega^G(\sigma)}{Z^G}f(\sigma)} {\sum_{\sigma \in\ensuremath{\mathcal{X}}} \frac{\omega^G(\sigma)}{Z^G}f(\sigma) } =\frac{\pi^G_{\Lambda}(\sigma)f(\sigma)}{\pi^G_{\Lambda}(f)}, \end{equation} where: \begin{equation} \label{pigf} \pi^G_{\Lambda}(f)=\sum_{\sigma\in\ensuremath{\mathcal{X}}} \pi^G_{\Lambda}(\sigma)f(\sigma). \end{equation} With this notation, the following lemma provides a bound for the \emph{distance} between the two measures on $\ensuremath{\mathcal{X}}$. \begin{lemma} \label{lemma2} \begin{equation} \norma{\pi_{\Lambda}-\pi_{\Lambda}^G}_{TV} \leq [\Delta(\delta)]^{\frac{1}{2}}, \end{equation} with: \begin{equation} \Delta(\delta)=\frac{\pi^G_{\Lambda}(f^2)}{(\pi^G_{\Lambda}(f))^2}-1.
\end{equation} \end{lemma} \begin{proof} See \cite[Proof of Theorem~2.5]{shaken2d}. \end{proof} Hence, to prove the first part of Theorem \ref{theorem1}, we need to show that: \begin{equation} \label{equation35} \Delta(\delta)=\mathcal{O}(\delta^2\abs{\Lambda}). \end{equation} Writing: $$ \Delta(\delta)=\frac{\pi^G_{\Lambda}(f^2)}{\pi^G_{\Lambda}(f)^2}-1=e^{\log{[\pi^G_{\Lambda}(f^2)]}-2\log{[\pi^G_{\Lambda}(f)]}}-1, $$ we see that we need to show that there exists $\bar{J}$ such that, for $J > \bar{J}$: \begin{enumerate}[label=\emph{\alph*})] \item the functions \begin{equation}\label{condition1} \frac{\log{[\pi^G_{\Lambda}(f^2)]}}{\abs{\Lambda}} \text{\, and \,} \frac{\log{[\pi^G_{\Lambda}(f)]}}{\abs{\Lambda}} \end{equation} are both analytic for $\abs{\delta}<\delta_J$, for a suitable $\delta_J$ depending on $J$; \item \begin{equation}\label{condition2} \frac{\log{[\pi^G_{\Lambda}(f^2)]}}{\abs{\Lambda}}-2\frac{\log{[\pi^G_{\Lambda}(f)]}}{\abs{\Lambda}}=\mathcal{O}(\delta^2). \end{equation} \end{enumerate} To prove both claims, it is convenient to partition the sites of $\Lambda$ according to the value of their spin and the sum of the spins at the downwards, leftwards and backwards neighboring sites.
To this end, we define the sets \begin{itemize}[leftmargin=5mm] \item $ N_3^{-} = \{ x \in \Lambda : \sigma_x=-1 \land \sigma_x^d+\sigma_x^l+\sigma_x^b=-3 \} = \{ x \in \Lambda : \sigma_x=-1 \land g_x^{dlb}(\sigma)=-3J \}$, \item $ N_2^{-} = \{ x \in \Lambda : \sigma_x=-1 \land \sigma_x^d+\sigma_x^l+\sigma_x^b=-1 \} = \{ x \in \Lambda : \sigma_x=-1 \land g_x^{dlb}(\sigma)=-J \}, $ \item $ N_1^{-} = \{ x \in \Lambda : \sigma_x=-1 \land \sigma_x^d+\sigma_x^l+\sigma_x^b=+1 \} = \{ x \in \Lambda : \sigma_x=-1 \land g_x^{dlb}(\sigma)=+J \}, $ \item $ N_0^{-} = \{ x \in \Lambda : \sigma_x=-1 \land \sigma_x^d+\sigma_x^l+\sigma_x^b=+3 \} = \{ x \in \Lambda : \sigma_x=-1 \land g_x^{dlb}(\sigma)=+3J \}, $ \item $ N_3^{+} = \{ x \in \Lambda : \sigma_x=+1 \land \sigma_x^d+\sigma_x^l+\sigma_x^b=+3 \} = \{ x \in \Lambda : \sigma_x=+1 \land g_x^{dlb}(\sigma)=+3J \}, $ \item $ N_2^{+} = \{ x \in \Lambda : \sigma_x=+1 \land \sigma_x^d+\sigma_x^l+\sigma_x^b=+1 \} = \{ x \in \Lambda : \sigma_x=+1 \land g_x^{dlb}(\sigma)=+J \}, $ \item $ N_1^{+} = \{ x \in \Lambda : \sigma_x=+1 \land \sigma_x^d+\sigma_x^l+\sigma_x^b=-1 \} = \{ x \in \Lambda : \sigma_x=+1 \land g_x^{dlb}(\sigma)=-J \}, $ \item $ N_0^{+} = \{ x \in \Lambda : \sigma_x=+1 \land \sigma_x^d+\sigma_x^l+\sigma_x^b=-3 \} = \{ x \in \Lambda : \sigma_x=+1 \land g_x^{dlb}(\sigma)=-3J \}. $ \end{itemize} Checking that $ \Lambda = N_3^{-} \cup N_2^{-} \cup N_1^{-} \cup N_0^{-} \cup N_3^{+} \cup N_2^{+} \cup N_1^{+} \cup N_0^{+}$ is straightforward.
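The algebraic identities used so far are easy to check numerically on a small periodic box: the two forms \eqref{equation1}--\eqref{equation2} of the pair Hamiltonian agree, and Proposition \ref{proposition1} holds. A sketch (our own code; box size and parameter values are arbitrary choices):

```python
import itertools
import random

L = 3
sites = list(itertools.product(range(L), repeat=3))
J, q, lam = 1.0, 0.7, 0.3

def shift(x, d, delta):
    y = list(x)
    y[d] = (y[d] + delta) % L          # periodic boundary conditions
    return tuple(y)

def H_pair_urf(s, t):
    # eq. (1): J sigma_x (tau^u + tau^r + tau^f) + q sigma_x tau_x + field
    return -sum(J * s[x] * sum(t[shift(x, d, +1)] for d in range(3))
                + q * s[x] * t[x] + lam * (s[x] + t[x]) for x in sites)

def H_pair_dlb(s, t):
    # eq. (2): J tau_x (sigma^d + sigma^l + sigma^b) + q tau_x sigma_x + field
    return -sum(J * t[x] * sum(s[shift(x, d, -1)] for d in range(3))
                + q * t[x] * s[x] + lam * (t[x] + s[x]) for x in sites)

def H_ising(s):
    # eq. (4): standard Ising Hamiltonian with field 2*lam,
    # each nearest-neighbour bond counted once via the + directions
    bonds = sum(s[x] * s[shift(x, d, +1)] for x in sites for d in range(3))
    return -J * bonds - 2 * lam * sum(s.values())

rng = random.Random(0)
sigma = {x: rng.choice([-1, +1]) for x in sites}
tau = {x: rng.choice([-1, +1]) for x in sites}

# the two forms (1) and (2) of the pair Hamiltonian coincide
assert abs(H_pair_urf(sigma, tau) - H_pair_dlb(sigma, tau)) < 1e-9
# Proposition 1: H_lambda(sigma, sigma) = H_{2 lambda}(sigma) - q |Lambda|
assert abs(H_pair_urf(sigma, sigma) - (H_ising(sigma) - q * len(sites))) < 1e-9
```

Both identities hold configuration by configuration, which is exactly what the proof of Proposition \ref{proposition1} exploits.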
\\ Then, arguing as in \cite{shaken2d}, it is possible to rewrite $f(\sigma)$ in this way: \begin{equation} \label{eqf} f(\sigma)=(1+\delta e^{-6J-2\lambda})^{\abs{\Lambda}} \tilde{\xi}(\sigma,\lambda), \end{equation} \\ with: \begin{align} \label{xi} \begin{aligned} \tilde{\xi}(\sigma,\lambda) & = \Bigl( \frac{1+\delta e^{-6J+2\lambda}}{1+\delta e^{-6J-2\lambda}} \Bigr)^{\abs{N_3^-}} \Bigl( \frac{1+\delta e^{-2J+2\lambda}}{1+\delta e^{-6J-2\lambda}} \Bigr)^{\abs{N_2^-}} \Bigl( \frac{1+\delta e^{2J+2\lambda}}{1+\delta e^{-6J-2\lambda}} \Bigr)^{\abs{N_1^-}} \\ & \times \Bigl( \frac{1+\delta e^{6J+2\lambda}}{1+\delta e^{-6J-2\lambda}} \Bigr)^{\abs{N_0^-}} \Bigl( \frac{1+\delta e^{-2J-2\lambda}}{1+\delta e^{-6J-2\lambda}} \Bigr)^{\abs{N_2^+}} \\ & \times \Bigl( \frac{1+\delta e^{2J-2\lambda}}{1+\delta e^{-6J-2\lambda}} \Bigr)^{\abs{N_1^+}} \Bigl( \frac{1+\delta e^{6J-2\lambda}}{1+\delta e^{-6J-2\lambda}} \Bigr)^{\abs{N_0^+}} \end{aligned} \end{align} (the sites in $N_3^+$ contribute only the trivial factor $1$). To bound $\tilde{\xi}$, we rewrite $H_{2\lambda}(\sigma)$ in terms of \emph{$3d$-Peierls contours}, defined in the following way: for each pair of nearest neighboring sites $x$ and $y$ such that $\sigma_x\sigma_y = - 1$, we build a unit square plate orthogonal to the segment joining $x$ and $y$ and passing through its midpoint. In this way, starting from a spin configuration $\sigma$, we can introduce a family of closed polyhedra (or $3d$-Peierls contours configuration) $\Gamma(\sigma)=\{\gamma_1,\dots,\gamma_N\}$ separating the regions with spin $+1$ from those with spin $-1$.\footnote{The correspondence between $\sigma$ and $\Gamma(\sigma)$ is one to one with $\pm 1$ boundary conditions and is one to two with periodic boundary conditions.} Denote by $B^-$ the total number of $-1$ bonds in $B_\Lambda$, that is, the total number of edges whose endpoint spins have opposite sign, and by $B_{TOT}$ the total number of bonds in $B_\Lambda$. 
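The counting identities behind this construction — one plate per opposite-sign bond, $B_{TOT}=3\abs{\Lambda}$ for periodic boundary conditions, and $\sum_{(x,y)\in B_\Lambda}\sigma_x\sigma_y = B^+ - B^- = B_{TOT}-2B^-$ — can be verified numerically. The following sketch (illustrative only; the lattice size is arbitrary) checks them on a small periodic cube:

```python
import itertools, random

# Verify the bond-counting identities used in the contour representation:
# B_TOT = 3|Lambda| with periodic b.c., and sum over bonds of sigma_x*sigma_y
# equals B_TOT - 2*B^-, where B^- counts opposite-sign bonds (= |Gamma(sigma)|,
# one unit plate per opposite-sign bond).
L = 4
sites = list(itertools.product(range(L), repeat=3))
random.seed(1)
sigma = {x: random.choice([-1, 1]) for x in sites}

bonds = []
for (i, j, k) in sites:   # each bond counted once: a site and its u/r/f neighbor
    bonds += [((i, j, k), ((i + 1) % L, j, k)),
              ((i, j, k), (i, (j + 1) % L, k)),
              ((i, j, k), (i, j, (k + 1) % L))]

B_tot = len(bonds)
B_minus = sum(1 for x, y in bonds if sigma[x] * sigma[y] == -1)
energy_sum = sum(sigma[x] * sigma[y] for x, y in bonds)

assert B_tot == 3 * L ** 3
assert energy_sum == B_tot - 2 * B_minus
```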
Denoting by \begin{equation} \abs{\Gamma(\sigma)}=\sum_{\gamma_i \in \Gamma} \abs{\gamma_i}, \end{equation} the total number of plates of the contours configuration, we clearly have \begin{equation} \label{totalcontours} \abs{\Gamma(\sigma)}=B^-. \end{equation} Performing simple algebraic calculations, we can rewrite $H_{2\lambda}(\sigma)$: \begin{align} H_{2\lambda}(\sigma)=-J (-2B^-+B_{TOT})-2\lambda \abs{\Lambda} + 4\lambda \abs{V_-(\sigma)}. \end{align} Now, for periodic b.c., we have $B_{TOT}=3\abs{\Lambda}$. Moreover, using \eqref{totalcontours}: \begin{align}\label{HPeierls} \begin{aligned} H_{2\lambda}(\sigma) & =2J\abs{\Gamma(\sigma)}-3J\abs{\Lambda}-2\lambda \abs{\Lambda}+4\lambda \abs{V_-(\sigma)}\\ & =2J\abs{\Gamma(\sigma)}+4\lambda \abs{V_-(\sigma)}-(3J+2\lambda)\abs{\Lambda}. \end{aligned} \end{align} Hence: \begin{equation} \label{e^HPeierls} e^{-H_{2\lambda}(\sigma)}=e^{(3J+2\lambda)\abs{\Lambda}-2J\abs{\Gamma(\sigma)}-4\lambda\abs{V_-(\sigma)}}. \end{equation} Using this result, we can write: \begin{align}\label{piPeierls} \begin{aligned} \pi_{\Lambda}^G(f^k) & =\frac{e^{(3J+2\lambda)\abs{\Lambda}}}{Z_G} \left( 1+\delta e^{-6J-2\lambda}\right)^{k\abs{\Lambda}} \sum_{\sigma} \left[ e^{-2J\abs{\Gamma(\sigma)}}e^{-4\lambda\abs{V_-(\sigma)}} \tilde{\xi}^k(\sigma,\lambda)\right]\\ & =\frac{e^{(3J+2\lambda)\abs{\Lambda}}}{Z_G} \left( 1+\delta e^{-6J-2\lambda}\right)^{k\abs{\Lambda}} \sum_{\sigma} \left[ e^{-2J\abs{\Gamma(\sigma)}}(e^{-2\lambda\abs{V_-(\sigma)}})^{2-k} \xi^k(\sigma,\lambda)\right], \end{aligned} \end{align} with: \begin{align} \begin{aligned} \xi^k(\sigma, \lambda) & = \parens*{ \frac{e^{-2\lambda}(1+\delta e^{-6J+2\lambda})}{1+\delta e^{-6J-2\lambda}}}^{k\abs{N_3^-}} \parens*{ \frac{e^{-2\lambda}(1+\delta e^{-2J+2\lambda})}{1+\delta e^{-6J-2\lambda}}}^{k\abs{N_2^-}} \\ & \times \parens*{ \frac{e^{-2\lambda}(1+\delta e^{2J+2\lambda})}{1+\delta e^{-6J-2\lambda}} }^{k\abs{N_1^-}} \parens*{ \frac{e^{-2\lambda}(1+\delta 
e^{6J+2\lambda})}{1+\delta e^{-6J-2\lambda}} }^{k\abs{N_0^-}} \\ & \times \parens*{ \frac{1+\delta e^{-2J-2\lambda}}{1+\delta e^{-6J-2\lambda}} }^{k\abs{N_2^+}} \parens*{ \frac{1+\delta e^{2J-2\lambda}}{1+\delta e^{-6J-2\lambda}} }^{k\abs{N_1^+}} \parens*{ \frac{1+\delta e^{6J-2\lambda}}{1+\delta e^{-6J-2\lambda}} }^{k\abs{N_0^+}}. \end{aligned} \end{align} It is now straightforward to prove the next technical lemma: \begin{lemma} \label{lemma3} \begin{equation} \xi^k(\sigma, \lambda) \leq \xi^k(\sigma, 0), \text{ \ for both \ } k=1 \text{ \ and \ } k=2. \end{equation} \end{lemma} \begin{proof} A simple algebraic calculation shows that each factor of $\xi^k(\sigma, \lambda)$ is less than or equal to the corresponding factor of $\xi^k(\sigma, 0)$, for both $k=1$ and $k=2$. \end{proof} As a consequence of the previous lemma: \begin{align*} \sum_{\sigma} \left[ e^{-2J\abs{\Gamma(\sigma)}}(e^{-2\lambda\abs{V_-(\sigma)}})^{2-k} \xi^k(\sigma,\lambda) \right] & \leq \sum_{\sigma} \left[ e^{-2J\abs{\Gamma(\sigma)}} \xi^k(\sigma,0) \right]\\ & = 2\sum_{\Gamma} \left[ e^{-2J\abs{\Gamma}} \xi^k(\Gamma,0) \right]. \end{align*} Hence: \begin{equation} \label{pigamma} \pi_{\Lambda}^G(f^k) \leq \frac{2}{Z_G} e^{(3J+2\lambda)\abs{\Lambda}} \left( 1+\delta e^{-6J-2\lambda}\right)^{k\abs{\Lambda}} \sum_{\Gamma} \left[ e^{-2J\abs{\Gamma}} \xi^k(\Gamma,0)\right], \end{equation} with: \begin{equation} \label{xigamma} \xi^k(\Gamma,0)=\Bigl( \frac{1+\delta e^{-2J}}{1+\delta e^{-6J}} \Bigr)^{k(\abs{N_2^-}+\abs{N_2^+})} \Bigl( \frac{1+\delta e^{2J}}{1+\delta e^{-6J}} \Bigr)^{k(\abs{N_1^-}+\abs{N_1^+})} \Bigl( \frac{1+\delta e^{6J}}{1+\delta e^{-6J}} \Bigr)^{k(\abs{N_0^-}+\abs{N_0^+})}. 
\end{equation} Then: \begin{align} \label{loggamma} \begin{aligned} \frac{\log{\pi_{\Lambda}^G(f^k)}} {{\abs{\Lambda}} } \leq & \frac{\log{2}}{\abs{\Lambda}}-\frac{\log{Z_G}}{\abs{\Lambda}}+3J+2\lambda + k \log \left[ 1+\delta e^{-6J-2\lambda} \right] \\ & +\frac{1}{\abs{\Lambda}} \log{\sum_{\Gamma} \left[ e^{-2J\abs{\Gamma}} \xi^k(\Gamma,0)\right]}. \end{aligned} \end{align} Thus, in order to prove condition \eqref{condition1}, we can proceed as in \cite{pss}. This concludes the proof of the first part of Theorem~\ref{theorem1}.\\ The second part of the theorem says: if $\lim_{\abs{\Lambda} \to \infty} \delta^2 \abs{\Lambda}=0$, then there exists $\tilde{J}$ such that, for $J<\tilde{J}$, we have $ \lim_{\abs{\Lambda} \to \infty} \norma{\pi_{\Lambda}-\pi_{\Lambda}^G}_{TV}=0. $ This follows from \cite{dss12}.\\ Thus, the proof of Theorem~\ref{theorem1} is concluded. \subsection{Proof of Theorem~\ref{thm:low_temp_regime}} We want to show that, in the low temperature regime, the mean value of the spin at the centre of the lattice depends on the boundary conditions in the case of finite volume, and continues to depend on them even in the limit of infinite volume. We interpret this as the statement that the system is in the ordered phase. Of course, if the external magnetic field is different from zero, all spins follow its orientation.\\ So, to study the spontaneous behavior of the system, we go back to the Hamiltonian \eqref{equation1}, \eqref{equation2} and set $\lambda = 0$. Moreover, from now on, we fix the external spins of $\Lambda$ (which we denote by $\partial^{ext}\Lambda$) to the value $+1$; that is, we impose $+1$ boundary conditions. We have \begin{equation} \label{hamiltonian_no_field} H^+(\sigma,\tau)=-\sum_{x\in\Lambda} [J\sigma_x(\tau_x^u+\tau_x^r+\tau_x^f)+q\sigma_x\tau_x]=-\sum_{x\in\Lambda} [J\tau_x(\sigma_x^d+\sigma_x^l+\sigma_x^b)+q\tau_x\sigma_x]. 
\end{equation}\\ From Proposition~\ref{proposition1} it follows: \begin{equation} \label{equation69} H^+(\sigma,\sigma)=H^+(\sigma)-q|\Lambda^+|, \end{equation}\\ with \begin{equation} \label{equation70} H^+(\sigma)=-\sum_{(x,y)\in B^+_{\Lambda}} J\sigma_x\sigma_y, \end{equation}\\ where $B^+_{\Lambda}$ is the set of all nearest-neighbor pairs in $\Lambda \cup \partial^{ext}\Lambda$: \begin{equation} \label{B+} B_\Lambda^+=\{(x,y): x,y\in \Lambda \cup \partial^{ext}\Lambda, |x-y|=1\}. \end{equation}\\ From now on, for convenience, we will omit the superscript $+$ in the Hamiltonian; that is, from now on, $H$ stands for $H^+$.\\ From Lemma~\ref{lemma1}, we have: \begin{equation} \label{Zetaboundary} \overrightarrow{Z_{\sigma}}=e^{q\abs{\Lambda}}e^{-H(\sigma)} \prod_{x \in \Lambda} \left(1+\delta e^{-2Jh_x(\sigma)\sigma_x}\right), \end{equation} where: \begin{equation} \label{Zetaboundary2} \delta=e^{-2q} \text{ \ \ and \ \ } h_x(\sigma)=\sigma_x^d+\sigma_x^l+\sigma_x^b. \end{equation} At this point, we can compute the mean value of $\sigma_0$.\\ We assume that the lattice goes from $-L/2$ to $+L/2$ in all three directions, so that $\sigma_0$, the spin at the centre of the lattice, is the one furthest from the boundary.\\ The mean value of $\sigma_0$ with positive boundary conditions is: \begin{equation} \label{mean} \left\langle \sigma_0 \right\rangle_{\Lambda}^+ = \sum_{\sigma} \sigma_0 \pi_{\Lambda}(\sigma) = \sum_{\sigma} \frac{\sigma_0\overrightarrow{Z_{\sigma}}}{Z} = \frac{1}{Z}\sum_{\sigma, \tau} \sigma_0 e^{-H(\sigma, \tau)} = \frac{\sum_{\sigma, \tau} \sigma_0 e^{-H(\sigma, \tau)}}{\sum_{\sigma, \tau} e^{-H(\sigma, \tau)}}. 
\end{equation} Using \eqref{Zetaboundary}, the last expression can be written as follows: \begin{equation} \label{mean2} \left\langle \sigma_0 \right\rangle_{\Lambda}^+= \frac{\sum_{\sigma}\sigma_0e^{-H(\sigma)}\prod_{x \in \Lambda} \left(1+\delta e^{-2Jh_x(\sigma)\sigma_x}\right)}{\sum_{\sigma}e^{-H(\sigma)}\prod_{x \in \Lambda} \left(1+\delta e^{-2Jh_x(\sigma)\sigma_x}\right)}. \end{equation} Adapting the notation of the previous sections to the present one, we have: \begin{equation} \omega^G(\sigma) = e^{-H(\sigma)}, \end{equation} \begin{equation} f(\sigma) = \prod_{x \in \Lambda} \left( 1+\delta e^{-2Jh_x(\sigma)\sigma_x} \right), \end{equation} \begin{equation} \overrightarrow{Z_{\sigma}}=e^{q\abs{\Lambda}}\omega^G(\sigma)f(\sigma), \end{equation} and, consequently, \begin{equation} Z=\sum_{\sigma}e^{q\abs{\Lambda}}\omega^G(\sigma)f(\sigma). \end{equation} Note that the expressions of $\pi_{\Lambda}^G(\sigma),\pi_{\Lambda}(\sigma), \pi_{\Lambda}^G(f)$ remain unchanged. We now compute $\left\langle \sigma_0 \right\rangle_{\Lambda}^+$ in the low temperature regime $J \gg 1$. Clearly: \begin{equation} \label{sigmaprob} \left\langle \sigma_0 \right\rangle_{\Lambda}^+ = (+1)\mathbb{P}_{\Lambda}^{+}(\sigma_0=+1)+(-1)\mathbb{P}_{\Lambda}^{+}(\sigma_0=-1)=1-2\mathbb{P}_{\Lambda}^{+}(\sigma_0=-1). \end{equation} We can now estimate $\mathbb{P}_{\Lambda}^{+}(\sigma_0=-1)$ using a contour representation, defining the 3d-Peierls contours as in the previous section.\\ Let $\Gamma(\sigma)=\{\gamma_1,\dots,\gamma_N\}$ be the family of 3d-Peierls contours associated with the spin configuration $\sigma$. If $\sigma_0=-1$ in a given configuration $\sigma$, then there exists at least one polyhedron in $\Gamma(\sigma)$ that surrounds $\sigma_0$. Moreover, $\Gamma(\sigma)$ may contain more than one polyhedron surrounding $\sigma_0$. 
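Before proceeding, note that the factorization \eqref{Zetaboundary} used in \eqref{mean2} rests on a per-site identity: summing out the dual spin $\tau_x=\pm1$ at a single site produces $2\cosh(Jh_x+q\sigma_x)$, which equals $e^{q}\,e^{Jh_x\sigma_x}\bigl(1+\delta e^{-2Jh_x\sigma_x}\bigr)$ with $\delta=e^{-2q}$. A quick numerical check (illustration only; the values of $J$ and $q$ are arbitrary):

```python
import math

# Per-site check of the identity behind (Zetaboundary): tracing tau_x = +/-1
# at one site gives 2*cosh(J*h + q*sigma), and this matches
# e^q * e^{J*h*sigma} * (1 + delta * e^{-2*J*h*sigma}) with delta = e^{-2q},
# for all possible values of h = sum of three neighboring spins.
J, q = 0.7, 1.3       # arbitrary sample couplings
delta = math.exp(-2 * q)
for sigma in (-1, 1):
    for h in (-3, -1, 1, 3):   # possible values of h_x(sigma)
        trace_tau = sum(math.exp(tau * (J * h + q * sigma)) for tau in (-1, 1))
        factor = (math.exp(q) * math.exp(J * h * sigma)
                  * (1 + delta * math.exp(-2 * J * h * sigma)))
        assert math.isclose(trace_tau, factor)
```

The identity holds because $\cosh(Jh+q\sigma_x)=\cosh(Jh\sigma_x+q)$ for $\sigma_x=\pm1$, $\cosh$ being even.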
We use the notation $\gamma_0 \odot \{0\}$ to denote a polyhedron $\gamma_0$ that surrounds $\sigma_0$.\\ Moreover, given a particular Peierls contour $\gamma_0$, we denote by $A_{\gamma_0}$ the set of families of contours containing $\gamma_0$: \begin{equation} \label{Agamma} A_{\gamma_0} = \{\Gamma: \gamma_0 \in \Gamma\}. \end{equation} Then: \begin{equation}\label{prob1} \mathbb{P}_{\Lambda}^{+}(\sigma_0=-1) \leq \mathbb{P}_{\Lambda}^{+}\left(\cup_{\gamma_0 \odot \{0\}}A_{\gamma_0} \right) \leq \sum_{\gamma_0 \odot \{0\}}\mathbb{P}_{\Lambda}^{+}(A_{\gamma_0}), \end{equation} and \begin{equation} \label{prob2} \mathbb{P}_{\Lambda}^{+}(A_{\gamma_0})=\frac{\sum_{\Gamma:\gamma_0\in\Gamma} \frac{\overrightarrow{Z_{\sigma}}}{Z}}{\sum_{\Gamma} \frac{\overrightarrow{Z_{\sigma}}}{Z}}=\frac{\sum_{\Gamma:\gamma_0\in\Gamma} \omega^G(\Gamma(\sigma))f(\Gamma(\sigma))}{\sum_{\Gamma} \omega^G(\Gamma(\sigma))f(\Gamma(\sigma))}, \end{equation} where $\omega^G(\Gamma(\sigma))$ and $f(\Gamma(\sigma))$ are $\omega^G(\sigma)$ and $f(\sigma)$ written in terms of Peierls contours; we now see how this can be done. \begin{lemma} \label{omegagamma} \begin{equation} \label{omegagammaeq} \omega^G(\Gamma(\sigma))=e^{3JL^2(L+1)}e^{-2J|\Gamma|}, \end{equation} where $\abs{\Gamma}$ is the total length of all Peierls contours in $\Gamma(\sigma)$: \begin{equation} \label{absgamma} |\Gamma|=\sum_i |\gamma_i|. \end{equation} \end{lemma} \begin{proof} Let $B^+$ be the number of bonds in $B^+_{\Lambda}$ of positive sign (the number of nearest-neighbor pairs with equal spins) and $B^-$ the number of bonds of negative sign (the number of nearest-neighbor pairs with opposite spins). Then $$ \omega^G(\Gamma(\sigma))=e^{-H(\sigma)}=e^{\sum_{(x,y)\in B_{\Lambda}^+} J\sigma_x\sigma_y}=e^{J(B^{+}-B^{-})}. $$ Now, let $ \{ \gamma_1, \gamma_2, \dots, \gamma_N\}$ be a contours configuration associated with the spin configuration $\sigma$. 
Then, by construction: $$ \abs{\Gamma}=\sum_{i=1}^N \abs{\gamma_i} = B^- $$ as above. Moreover, $B^{+}+B^{-}=B_{TOT}$. Then: $$ B^{+}=B_{TOT}-B^{-}=B_{TOT}-|\Gamma|, $$ and so: $$ \omega^G(\Gamma(\sigma))=e^{J(B^{+}-B^{-})}=e^{J(B_{TOT}-2|\Gamma|)}=e^{JB_{TOT}}e^{-2J|\Gamma|}. $$ Finally, it is easy to show that $B_{TOT}=3L^2(L+1).$ Then, we have: $$ \omega^G(\Gamma(\sigma))=e^{3JL^2(L+1)}e^{-2J|\Gamma|}. $$ \end{proof} Lemma~\ref{omegagamma} describes how $\omega^G(\sigma)$ can be written in terms of Peierls contours. Similarly, $f(\sigma)$ can also be written in terms of Peierls contours. To this end, recalling \eqref{eqf}, \eqref{xi} and \eqref{xigamma}, we have: \begin{equation} \label{fgamma} f(\Gamma)=\left(1+\delta e^{-6J}\right)^{|\Lambda|}\left(\frac{1+\delta e^{-2J}}{1+\delta e^{-6J}}\right)^{|N_2|}\left(\frac{1+\delta e^{2J}}{1+\delta e^{-6J}}\right)^{|N_1|}\left(\frac{1+\delta e^{6J}}{1+\delta e^{-6J}}\right)^{|N_0|}, \end{equation} with: \begin{itemize} \item $N_3=\{x\in\Lambda: \sigma_x^d+\sigma_x^l+\sigma_x^b=3\sigma_x\}=\{x\in\Lambda: h_x(\sigma)=3\sigma_x\}.$ \item $N_2=\{x\in\Lambda: \sigma_x^d+\sigma_x^l+\sigma_x^b=\sigma_x\}=\{x\in\Lambda: h_x(\sigma)=\sigma_x\}.$ \item $N_1=\{x\in\Lambda: \sigma_x^d+\sigma_x^l+\sigma_x^b=-\sigma_x\}=\{x\in\Lambda: h_x(\sigma)=-\sigma_x\}.$ \item $N_0=\{x\in\Lambda: \sigma_x^d+\sigma_x^l+\sigma_x^b=-3\sigma_x\}=\{x\in\Lambda: h_x(\sigma)=-3\sigma_x\}.$ \end{itemize} The sites in $N_3$ contribute to \eqref{fgamma} only through the trivial factor $\bigl(\frac{1+\delta e^{-6J}}{1+\delta e^{-6J}}\bigr)^{|N_3|}=1$. We can now compute $\mathbb{P}_{\Lambda}^{+}(\sigma_0=-1)$ using equations (\ref{prob1}), (\ref{prob2}), (\ref{omegagammaeq}) and (\ref{fgamma}), noting that the terms $e^{3JL^2(L+1)}$ and $\left(1+\delta e^{-6J}\right)^{|\Lambda|}$ appear as factors both in the numerator and in the denominator for all $\Gamma$: \begin{equation} \begin{aligned} & \mathbb{P}_{\Lambda}^{+}(\sigma_0=-1) \leq \sum_{\gamma_0 \odot \{0\}}\frac{\sum_{\Gamma:\gamma_0\in\Gamma} \omega^G(\Gamma(\sigma))f(\Gamma(\sigma))}{\sum_{\Gamma} \omega^G(\Gamma(\sigma))f(\Gamma(\sigma))}\\ 
& = \sum_{\gamma_0 \odot \{0\}} \frac{ \sum_{\Gamma:\gamma_0\in\Gamma} e^{-2J\sum_{i=0}^N |\gamma_i|} \left(\frac{1+\delta e^{-2J}}{1+\delta e^{-6J}}\right)^{|N_2|} \left(\frac{1+\delta e^{2J}}{1+\delta e^{-6J}}\right)^{|N_1|} \left(\frac{1+\delta e^{6J}}{1+\delta e^{-6J}}\right)^{|N_0|} } { \sum_{\Gamma} e^{-2J\sum_{i=1}^N |\gamma_i|} \left(\frac{1+\delta e^{-2J}}{1+\delta e^{-6J}}\right)^{|N_2|} \left(\frac{1+\delta e^{2J}}{1+\delta e^{-6J}}\right)^{|N_1|} \left(\frac{1+\delta e^{6J}}{1+\delta e^{-6J}}\right)^{|N_0|} }. \end{aligned} \end{equation}\\ Call \begin{align*} F(\gamma_i)= e^{-2J|\gamma_i|} \tilde{F}(\gamma_i), \end{align*} with \begin{align*} \tilde{F}(\gamma_i) = \left(\frac{1+\delta e^{-2J}}{1+\delta e^{-6J}}\right)^{|N_2(\gamma_i)|} \left(\frac{1+\delta e^{2J}}{1+\delta e^{-6J}}\right)^{|N_1(\gamma_i)|} \left(\frac{1+\delta e^{6J}}{1+\delta e^{-6J}}\right)^{|N_0(\gamma_i)|} \end{align*} and observe that $|N_0|=\sum_{i=1}^N |N_0(\gamma_i)|$, $|N_1|=\sum_{i=1}^N |N_1(\gamma_i)|$ and $|N_2|=\sum_{i=1}^N |N_2(\gamma_i)|$. Then \begin{equation} \begin{aligned} \mathbb{P}_{\Lambda}^{+}(\sigma_0=-1) \leq \sum_{\gamma_0 \odot \{0\}} \frac{ \sum_{\Gamma:\gamma_0\in\Gamma} \prod_{i=0}^N F(\gamma_i) } { \sum_{\Gamma} \prod_{i=1}^N F(\gamma_i) }. \end{aligned} \end{equation} The sum at the numerator is over all and only the families of Peierls contours that contain a particular contour $\gamma_0$ surrounding the origin, whereas the sum at the denominator is over all families of Peierls contours.\\ The idea is to isolate, in the sum at the numerator, the contribution given by $\gamma_0 \odot \{0\}.$\\ Hence the remaining sum at the numerator is over all and only the families of contours that contain $\gamma_0 \odot \{0\}$, from which, however, we remove $\gamma_0$. 
We denote this sum with $\sum_{\Gamma}^*.$\\ Thus, the last inequality becomes: $$ \mathbb{P}_{\Lambda}^{+}(\sigma_0=-1)\leq \sum_{\gamma_0 \odot \{0\}} F(\gamma_0)\frac{\sum_{\Gamma}^* \prod_i F(\gamma_i)}{\sum_{\Gamma} \prod_i F(\gamma_i)}.$$ Since the sum at the numerator of the r.h.s. runs over a subset of the families appearing at the denominator, and all terms are positive, the ratio is at most $1$, so that:\\ \begin{equation*} \begin{aligned} \mathbb{P}_{\Lambda}^{+}(\sigma_0=-1) & \leq \sum_{\gamma_0 \odot \{0\}} F(\gamma_0)\\ & =\sum_{\gamma_0 \odot \{0\}} e^{-2J|\gamma_0|} \left(\frac{1+\delta e^{-2J}}{1+\delta e^{-6J}}\right)^{|N_2(\gamma_0)|} \left(\frac{1+\delta e^{2J}}{1+\delta e^{-6J}}\right)^{|N_1(\gamma_0)|} \left(\frac{1+\delta e^{6J}}{1+\delta e^{-6J}}\right)^{|N_0(\gamma_0)|}. \end{aligned} \end{equation*} Since $1+\delta e^{-6J}\geq 1$, we have $\frac{1+\delta e^{-2J}}{1+\delta e^{-6J}}\leq 1+\delta e^{-2J}$, $\frac{1+\delta e^{2J}}{1+\delta e^{-6J}}\leq 1+\delta e^{2J}$ and $\frac{1+\delta e^{6J}}{1+\delta e^{-6J}}\leq 1+\delta e^{6J}$. Therefore \begin{equation*} \mathbb{P}_{\Lambda}^{+}(\sigma_0=-1)\leq \sum_{\gamma_0 \odot \{0\}} e^{-2J|\gamma_0|} \left(1+\delta e^{-2J}\right)^{|N_2(\gamma_0)|} \left(1+\delta e^{2J}\right)^{|N_1(\gamma_0)|} \left(1+\delta e^{6J}\right)^{|N_0(\gamma_0)|}. \end{equation*} Now observe that \begin{equation} \label{obsgamma} |\gamma_0|=3|N_0(\gamma_0)|+2|N_1(\gamma_0)|+|N_2(\gamma_0)|. \end{equation} Indeed, each site in $N_0(\gamma_0)$ contributes three unit plates to the length of $\gamma_0$, each site in $N_1(\gamma_0)$ contributes two plates, and each site in $N_2(\gamma_0)$ contributes one plate.\\ Then, a simple algebraic computation leads to: \begin{equation}\label{prob5eq} \mathbb{P}_{\Lambda}^{+}(\sigma_0=-1) \leq \sum_{\gamma_0 \odot \{0\}} \left(e^{-2J}+\delta e^{-4J}\right)^{|N_2(\gamma_0)|} \left(e^{-4J}+\delta e^{-2J}\right)^{|N_1(\gamma_0)|} \left(e^{-6J}+\delta\right)^{|N_0(\gamma_0)|}. 
\end{equation} We now transform the sum over the Peierls contours $\gamma_0 \odot \{0\}$ into a sum over their lengths $|\gamma_0|=k.$\\ We denote by $\eta_0(k)$ the number of Peierls contours of length $k$ that surround the site $0$; note that $k\geq 6$ for closed contours.\\ Then, we can write: \begin{equation} \label{prob6eq} \mathbb{P}_{\Lambda}^{+}(\sigma_0=-1) \leq \sum_{k=6}^{\infty} \eta_0(k)D^k, \end{equation} with \begin{equation}\label{D} D=\max\set*{ \left(e^{-2J}+\delta e^{-4J}\right);\, \left(e^{-4J}+\delta e^{-2J}\right)^{1/2};\, \left(e^{-6J}+\delta\right)^{1/3} }; \end{equation} indeed, thanks to (\ref{obsgamma}), each factor of the form $\left(e^{-2J}+\delta e^{-4J}\right)$ accounts for one unit of the length of $\gamma_0$, each factor of the form $\left(e^{-4J}+\delta e^{-2J}\right)$ for two units, and each factor of the form $\left(e^{-6J}+\delta\right)$ for three units.\\ For $J$ large enough (low temperature regime): \begin{align} D & =\max\set*{ \left(e^{-2J}+\delta e^{-4J}\right);\, \left(e^{-4J}+\delta e^{-2J}\right)^{1/2};\, \left(e^{-6J}+\delta\right)^{1/3} }\\ \label{D2} & =\max\set*{ e^{-4J}\left(e^{2J}+\delta\right);\, e^{-J}\left(e^{-2J}+\delta\right)^{1/2};\, \left(e^{-6J}+\delta\right)^{1/3} }\\ & = \left(e^{-6J}+\delta\right)^{1/3}. \end{align} Finally, an estimate of the number of contours of length $k$ surrounding the origin is given by Ruelle's lemma: $\eta_0(k)\leq 3^k$. Then (\ref{prob6eq}) becomes: \begin{equation} \label{prob7eq} \mathbb{P}_{\Lambda}^{+}(\sigma_0=-1) \leq \sum_{k=6}^{\infty} 3^k\left(e^{-6J}+\delta\right)^{k/3}=\sum_{k=6}^{\infty} \left[3\left(e^{-6J}+e^{-2q}\right)^{1/3}\right]^k. \end{equation} The last expression is a geometric series with common ratio $r=3\left(e^{-6J}+e^{-2q}\right)^{1/3}$. The series is convergent if $r<1$, that is, if $\left(e^{-6J}+e^{-2q}\right)^{1/3}<\frac{1}{3}$. 
Under this conditions, the series converges to the value: $\sum_{k=6}^{\infty} r^k=\frac{r^6}{1-r}$. Then (\ref{prob7eq}) becomes: $\mathbb{P}_{\Lambda}^{+}(\sigma_0=-1) \leq \frac{3^6\left(e^{-6J}+e^{-2q}\right)^2}{1-3\left(e^{-6J}+e^{-2q}\right)^{1/3}}$. This expression tends toward zero for $J\gg 1$ and $q\gg 1$ uniformely in $\Lambda$.\\ Therefore, for $J$ and $q$ large enough: \begin{equation} \label{eqP+} \mathbb{P}_{\Lambda}^{+}(\sigma_0=-1) < 1/2. \end{equation} This happens if the following condition holds: \begin{equation} \label{condtrans} \frac{3^6\left(e^{-6J}+e^{-2q}\right)^2}{1-3\left(e^{-6J}+e^{-2q}\right)^{1/3}}<\frac{1}{2}. \end{equation} Finally, if (\ref{condtrans}) holds, recalling (\ref{sigmaprob}) and (\ref{eqP+}), we have: \begin{equation} \label{sigma+} \left\langle \sigma_0 \right\rangle_{\Lambda}^+ >0. \end{equation} Moreover, in the same manner, we can show that, under the condition (\ref{condtrans}) we have: \begin{equation}\label{eqP-} \mathbb{P}_{\Lambda}^{-}(\sigma_0=-1) > 1/2, \end{equation} and then \begin{equation}\label{sigma-} \left\langle \sigma_0 \right\rangle_{\Lambda}^- <0. \end{equation} From (\ref{sigma+}) and (\ref{sigma-}), we have: \begin{equation}\label{sigmadiffer} \left\langle \sigma_0 \right\rangle_{\Lambda}^+ \neq \left\langle \sigma_0 \right\rangle_{\Lambda}^-. \end{equation} And this inequality holds even in the thermodynamic limit $\Lambda \to \infty$: \begin{equation} \label{sigmadifferinfty} \left\langle \sigma_0 \right\rangle^+ \neq \left\langle \sigma_0 \right\rangle^-. \end{equation} So, the 1-point correlation functions are not unique, in the low-temperature regime, with respect to boundary conditions. This suffice to asserts that the equilibrium state, that is the family of all n-points correlation functions in the thermodynamic limit (with $n=1,2,...,\abs{\Lambda}$), is not unique at low temperature, but it depends on boundary conditions. 
\subsection{Proof of Theorem~\ref{thm:high_temp_regime}} \begin{proof} We start by rewriting the Hamiltonian \eqref{hamiltonian_no_field}: \begin{equation} \label{H_low_temp} H(\sigma,\tau)=-\sum_{x\in\Lambda} [J\tau_x(\sigma_x^d+\sigma_x^l+\sigma_x^b)+q\tau_x\sigma_x]=-\sum_{e \in E_{\Lambda_\mathcal{D}}} J_e \sigma_{e_1} \tau_{e_2}, \end{equation} where $E_{\Lambda_\mathcal{D}}=E_J \cup E_q$ is the edge set of the lattice $\Lambda_\mathcal{D}$ and $e_1,e_2$ are the two sites in $\Lambda_\mathcal{D}$ linked by the edge $e$. Moreover: $$ J_e= \begin{cases} J \text{ \ \ if \ \ } e \in E_J\\ q \text{ \ \ if \ \ } e \in E_q. \end{cases} $$ Setting $\vec{\sigma} = (\sigma,\tau) \in \chi^2 \text{ and } b_e=\sigma_{e_1} \tau_{e_2} \in \{-1,1\}$, we can write: \begin{equation} \label{H_good} H(\sigma,\tau)=H(\vec{\sigma})=-\sum_{e \in E_{\Lambda_\mathcal{D}}} J_e \sigma_{e_1} \tau_{e_2}=-\sum_{e \in E_{\Lambda_\mathcal{D}}} J_e b_e. \end{equation} Thus, the partition function is: \begin{equation} \label{partifuncthigh} Z_{\Lambda_\mathcal{D}} = \sum_{\vec{\sigma}} e^{-H(\vec{\sigma})} =\sum_{\vec{\sigma}} e^{\sum_{e \in E_{\Lambda_\mathcal{D}}} J_e b_e} = \sum_{\vec{\sigma}} \prod_{e \in E_{\Lambda_\mathcal{D}}} e^{J_e b_e}. \end{equation} Write $ e^{J_e b_e} = \cosh(J_e) [ 1 + b_e\tanh(J_e)]. $ Then, the partition function can be rewritten as: \begin{equation} \label{partifuncthigh2} Z_{\Lambda_\mathcal{D}} = \prod_{e \in E_{\Lambda_\mathcal{D}}} \cosh(J_e) \sum_{\vec{\sigma}} \prod_{e \in E_{\Lambda_\mathcal{D}}} [ 1 + b_e\tanh(J_e)]. \end{equation} Developing the product $\prod_{e \in E_{\Lambda_\mathcal{D}}} [ 1 + b_e\tanh(J_e)]$, we get terms of the type: $$ \tanh(J_{e_1})\dots\tanh(J_{e_N})\, b_{e_1} b_{e_2} \dots b_{e_N}, $$ which have a clear geometric interpretation: the set of bonds $b_{e_1} \dots b_{e_N}$ forms a graph (connected or not) in $\Lambda_\mathcal{D}$ whose links are nearest neighbors. 
Performing the sum over $\vec{\sigma}$, we get that the only graphs which yield a non-vanishing contribution to $ \sum_{\vec{\sigma}} b_{e_1} \dots b_{e_N} $, and hence to the partition function, are those whose vertices have incidence number two or four; all other graphs give zero once the sum over the configurations $\vec{\sigma}$ has been performed. Graphs of this type are called non-vanishing.\\ If the graph $b_{e_1} \dots b_{e_N}$ is non-vanishing, then: $$ \sum_{\vec{\sigma}} b_{e_1} \dots b_{e_N} = 2^{\abs{\Lambda_\mathcal{D}}}. $$ We can naturally split a non-vanishing graph into non-intersecting connected components, which we will call lattice animals. A lattice animal $\gamma$ is thus nothing but a graph $g$ with edge set $E_{\gamma}= \{ e_1, \dots, e_N \} \subseteq E_{\Lambda_\mathcal{D}}$ formed by nearest-neighbor links. The allowed lattice animals are only those $\gamma$ with incidence number at the vertices equal to two or four. We denote with $\mathbb{L}_{\Lambda_\mathcal{D}}$ the set of all possible lattice animals in $\Lambda_\mathcal{D}$.\\ Two lattice animals $\gamma$ and $\gamma'$ are non-overlapping (i.e. compatible), and we write $\gamma \sim \gamma'$, if and only if $\gamma \cap \gamma' = \emptyset$. We will denote shortly $\abs{\gamma} = \abs{E_{\gamma}}$ (i.e. $\abs{\gamma}$ is the number of nearest-neighbor bonds which constitute $\gamma$; that is, if $\gamma = \{ b_{e_1} \dots b_{e_N} \} $ then $\abs{\gamma} = N$). Note that only such lattice animals (i.e. 
just those with incidence number at the vertices equal to 2 or to 4) survive because we are using free boundary conditions.\\ In conclusion, we can write (from \eqref{partifuncthigh2}): \begin{equation} \label{partufuncthigh3} Z_{\Lambda_\mathcal{D}} = \prod_{e \in E_{\Lambda_\mathcal{D}}} \cosh(J_e) \, 2^{\abs{\Lambda_\mathcal{D}}} \, \Xi_{\Lambda_\mathcal{D}}(J_e), \end{equation} where \begin{equation} \label{partpolim} \Xi_{\Lambda_\mathcal{D}}(J_e) = 1 + \sum_{n \geq 1} \sum_{\substack{ \{ \gamma_1, \dots, \gamma_n \} \subset \mathbb{L}_{\Lambda_\mathcal{D}}, \\ \gamma_i \sim \gamma_j }} \xi(\gamma_1) \dots \xi(\gamma_n) \end{equation} is the partition function of a hard core polymer gas in which the polymers are lattice animals, i.e. elements of $\mathbb{L}_{\Lambda_\mathcal{D}}$, with the incompatibility relation $\gamma \nsim \gamma'$ if and only if $V_{\gamma} \cap V_{\gamma'} \neq \emptyset$. Each polymer $\gamma$ has an activity given by: \begin{equation} \label{activitymk} \xi(\gamma) = [\tanh(J)]^{2k}[\tanh(q)]^{2m}, \end{equation} where $2k$ and $2m$ denote respectively the number of $J$-edges and the number of $q$-edges of the closed polymer $\gamma$, so that $2k + 2m = \abs{\gamma}$. Actually, we can imagine the lattice $\Lambda_\mathcal{D}$ as a collection of layers of hexagonal lattices, the sites of each layer being linked only by $J$-edges, while the layers are linked to each other by edges of type $q$. So, a closed circuit must have an even number of $q$-segments and an even number of $J$-segments, and the number of $q$-type segments must be less than or equal to half the number of $J$-type segments. 
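The structure of \eqref{partufuncthigh3} can be illustrated on the smallest closed graph: a single $4$-cycle with alternating couplings $J$ and $q$ (a toy graph chosen for illustration, not the dumbbell lattice itself). There the only lattice animal is the cycle, with activity $\tanh^2(J)\tanh^2(q)$, so the expansion reduces to the classical Ising-ring formula, which a brute-force sum over configurations confirms:

```python
import itertools, math

# High-temperature expansion on a 4-cycle with alternating couplings J, q:
# Z = [prod_e cosh(J_e)] * 2^{|V|} * (1 + prod_e tanh(J_e)),
# the second factor being the polymer partition function with the single
# lattice animal (the cycle itself) of activity tanh(J)^2 * tanh(q)^2.
J, q = 0.5, 0.8                     # arbitrary sample couplings
couplings = [J, q, J, q]            # edge i joins site i and site (i+1) mod 4

Z = 0.0
for s in itertools.product((-1, 1), repeat=4):
    E = sum(couplings[i] * s[i] * s[(i + 1) % 4] for i in range(4))
    Z += math.exp(E)

prefactor = math.cosh(J) ** 2 * math.cosh(q) ** 2 * 2 ** 4
xi_cycle = math.tanh(J) ** 2 * math.tanh(q) ** 2   # activity of the single animal
assert math.isclose(Z, prefactor * (1 + xi_cycle))
```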
To control the analyticity of $Z_{\Lambda_\mathcal{D}}$, we can apply the Fern\'andez--Procacci convergence criterion (see \cite{procaccifernandez}) to $\Xi_{\Lambda_\mathcal{D}}(J_e)$.\\ Namely, we need to find numbers $\mu(\gamma) \in (0,+\infty)$ such that: \begin{equation} \label{FP1} \xi(\gamma) \leq \frac{\mu(\gamma)}{\Xi_{\mathbb{L}_{\gamma}}(\vec{\mu})}, \end{equation} where $ \mathbb{L}_{\gamma} = \{ \gamma' \in \mathbb{L}_{\Lambda_\mathcal{D}} : \gamma' \nsim \gamma \} $ is the set of all polymers in $\Lambda_\mathcal{D}$ incompatible with $\gamma$ (i.e. the set of all polymers that intersect $\gamma$).\\ Setting $\mu(\gamma) = \xi(\gamma) e^{a\abs{\gamma}}$, condition (\ref{FP1}) becomes \begin{equation} \label{FP2} \Xi_{\mathbb{L}_{\gamma}}(\vec{\mu}) \leq e^{a\abs{\gamma}}, \end{equation} where \begin{align} \label{Xibig} \Xi_{\mathbb{L}_{\gamma}}(\vec{\mu}) & = 1 + \sum_{n=1}^{\abs{\gamma}} \frac{1}{n!} \sum_{\substack{(\gamma_1, \dots, \gamma_n) \in \mathbb{L}_{\gamma}^n, \\ \gamma_i \sim \gamma_j }} \abs{\mu(\gamma_1) \dots \mu(\gamma_n)}\\ & = 1 + \sum_{n=1}^{\abs{\gamma}} \frac{1}{n!} \sum_{\substack{(\gamma_1, \dots, \gamma_n) \in \mathbb{L}_{\gamma}^n, \\ \gamma_i \sim \gamma_j }} \prod_{i=1}^n \abs{\xi(\gamma_i)} e^{a\abs{\gamma_i}}, \end{align} with the sum $\sum_{\substack{(\gamma_1, \dots, \gamma_n) \in \mathbb{L}_{\gamma}^n, \\ \gamma_i \sim \gamma_j }} (\cdot)$ running over all ordered $n$-tuples of polymers.\\ Consider now the factor: $$ \sum_{\substack{(\gamma_1, \dots, \gamma_n) \in \mathbb{L}_{\gamma}^n, \\ \gamma_i \sim \gamma_j }} \prod_{i=1}^n \abs{\xi(\gamma_i)} e^{a\abs{\gamma_i}}. $$ We have thus to choose $n$ lattice animals $(\gamma_1, \dots, \gamma_n)$, all incompatible with a given lattice animal $\gamma$ and all pairwise compatible. We recall that two lattice animals are incompatible if they share a vertex in $\Lambda_\mathcal{D}$. 
Since $\gamma$ has $|V_{\gamma}|$ vertices, we can find at most $|V_{\gamma}|$ animals incompatible with $\gamma$ and pairwise compatible. Thus, the factor above is zero whenever $ n > |V_{\gamma}|$. \\ We want to rearrange the sum over all ordered $n$-tuples of polymers $(\gamma_1, \dots, \gamma_n) \in \mathbb{L}_{\gamma}^n$ into a sum over unordered collections of polymers $\{ \gamma_1, \dots, \gamma_n \} \subset \mathbb{L}_{\Lambda_\mathcal{D}}$. Recalling that the factor above is zero whenever $n > |V_{\gamma}|$, we have: \begin{align} \begin{aligned} & \sum_{\substack{(\gamma_1, \dots, \gamma_n) \in \mathbb{L}_{\gamma}^n, \\ \gamma_i \sim \gamma_j } } \prod_{i=1}^n \abs{\xi(\gamma_i)} e^{a\abs{\gamma_i}}\\ & \leq (|V_{\gamma}|)(|V_{\gamma}|-1)\dots(|V_{\gamma}|-n+1) \sum_{\gamma_1 \in \mathbb{L}_{\Lambda_\mathcal{D}}} \abs{\xi(\gamma_1)} e^{a\abs{\gamma_1}} \dots \sum_{\gamma_n \in \mathbb{L}_{\Lambda_\mathcal{D}}} \abs{\xi(\gamma_n)} e^{a\abs{\gamma_n}} \\ & \leq (|V_{\gamma}|)(|V_{\gamma}|-1)\dots(|V_{\gamma}|-n+1) \left( \sup_{x \in \Lambda_\mathcal{D}} \sum_{\substack{\gamma \in \mathbb{L}_{\Lambda_\mathcal{D}} \\ x \in \gamma}} \abs{\xi(\gamma)} e^{a\abs{\gamma}} \right)^n\\ & = \binom{|V_{\gamma}|}{n} n! \left( \sup_{x \in \Lambda_\mathcal{D}} \sum_{\substack{\gamma \in \mathbb{L}_{\Lambda_\mathcal{D}} \\ x \in \gamma}} \abs{\xi(\gamma)} e^{a\abs{\gamma}} \right)^n. \end{aligned} \end{align} Then, (\ref{Xibig}) becomes: \begin{align} \begin{aligned} \label{Xibig2} \Xi_{\mathbb{L}_{\gamma}}(\vec{\mu}) \leq 1 + \sum_{n=1}^{\abs{V_{\gamma}}} {\frac{1}{n!}} \binom{|V_{\gamma}|}{n} {n!} \left( \sup_{x \in \Lambda_\mathcal{D}} \sum_{\substack{\gamma \in \mathbb{L}_{\Lambda_\mathcal{D}} \\ x \in \gamma}} \abs{\xi(\gamma)} e^{a\abs{\gamma}} \right)^n = \left( 1 + \sup_{x \in \Lambda_\mathcal{D}} \sum_{\substack{\gamma \in \mathbb{L}_{\Lambda_\mathcal{D}} \\ x \in \gamma}} \abs{\xi(\gamma)} e^{a\abs{\gamma}} \right)^{|V_{\gamma}|}. 
\end{aligned} \end{align} The convergence condition (\ref{FP2}) thus becomes: \begin{equation} \label{FP3} \left( 1 + \sup_{x \in \Lambda_\mathcal{D}} \sum_{\substack{\gamma \in \mathbb{L}_{\Lambda_\mathcal{D}} \\ x \in \gamma}} \abs{\xi(\gamma)} e^{a\abs{\gamma}} \right)^{|V_{\gamma}|} \leq e^{a\abs{\gamma}}. \end{equation} Since $|V_{\gamma}| \leq \abs{\gamma}$ for all $\gamma \in \mathbb{L}_{\Lambda_\mathcal{D}}$ (equality holds only if $\gamma$ is a cycle), condition (\ref{FP3}) follows from: \begin{equation} \label{FP4} \sup_{x \in \Lambda_\mathcal{D}} \sum_{\substack{\gamma \in \mathbb{L}_{\Lambda_\mathcal{D}} \\ x \in \gamma}} \abs{\xi(\gamma)} e^{a\abs{\gamma}} \leq e^a - 1. \end{equation} Observe finally that, due to the structure of our lattice, the function $$ f(x) = \sum_{\substack{\gamma \in \mathbb{L}_{\Lambda_\mathcal{D}} \\ x \in \gamma}} \abs{\xi(\gamma)} e^{a\abs{\gamma}} $$ is constant as $x$ varies in $\Lambda_\mathcal{D}$. Therefore (\ref{FP4}) is equivalent to the condition $$ \sum_{\substack{\gamma \in \mathbb{L}_{\Lambda_\mathcal{D}} \\ 0 \in \gamma}} \abs{\xi(\gamma)} e^{a\abs{\gamma}} \leq e^a - 1, $$ where $0$ is the ``origin'' in $\Lambda_\mathcal{D}$. Now, recalling (\ref{activitymk}), the above condition becomes: \begin{equation} \label{FP5} \sum_{\substack{\gamma \in \mathbb{L}_{\Lambda_\mathcal{D}} \\ 0 \in \gamma}} \abs{[\tanh(J)]^{2k}[\tanh(q)]^{2m} e^{a(2k+2m)}} \leq e^a - 1. \end{equation} We want to convert the sum over the $\gamma$'s passing through $0$ into a sum over their lengths $\abs{\gamma}=2k+2m$. To this end, we observe that, in a closed circuit, the number of $q$-segments must be less than or equal to the number of $J$-segments, $2m \leq 2k$, and the minimal number of edges must be $6$: $2k+2m \geq 6$. 
So, condition (\ref{FP5}) becomes: \begin{equation} \label{FP6} \sum_{k \geq 2} \sum_{\substack{ m = 0 \\ k+m \geq 3 }}^{k} \abs{[\tanh(J)]^{2k}[\tanh(q)]^{2m} e^{a(2k+2m)} \sum_{\substack{\gamma \in \mathbb{L}_{\Lambda_\mathcal{D}} \\ 0 \in \gamma \\ \abs{\gamma}=2k+2m}} 1 }\leq e^a - 1. \end{equation} The sum: $$ \sum_{\substack{\gamma \in \mathbb{L}_{\Lambda_\mathcal{D}} \\ 0 \in \gamma \\ \abs{\gamma}=2k+2m}} 1 $$ counts the closed circuits of length $2k+2m$ passing through $0$.\\ We can find this number by imagining that we start from a given point and take $2k$ steps of $J$-type and $2m$ steps of $q$-type until we return to the starting point. As long as we move within a layer we carry out $J$-type steps, while when we change layer we carry out a $q$-type step. Moving within a layer, at each node we have $2$ possible directions, which gives a factor $2^{2k-2m}$ for the $J$-type steps that do not follow a change of layer; occasionally, however, we have to insert a change of layer (a segment of type $q$), for a total of $2m$ segments of this type. So we have to insert the $2m$ steps of type $q$ among the $2k$ steps of type $J$, which can be done in $\binom{2k}{2m}$ ways. Moreover, each time we change layer we have $3$ possible directions for the first step in the new layer, so we must also include a factor $3^{2m}$. The total number of such circuits is therefore: $$ \sum_{\substack{\gamma \in \mathbb{L}_{\Lambda_\mathcal{D}} \\ 0 \in \gamma \\ \abs{\gamma}=2k+2m}} 1 = 2^{2k-2m} 3^{2m} \binom{2k}{2m}. $$ Hence, condition (\ref{FP6}) becomes: \begin{equation} \label{FP7} \sum_{k \geq 2} \sum_{\substack{ m = 0 \\ k+m \geq 3 }}^{k} \abs{[\tanh(J)]^{2k}[\tanh(q)]^{2m} e^{a(2k+2m)} 2^{2k} \left(\frac{3}{2}\right)^{2m} \binom{2k}{2m}} \leq e^a - 1, \end{equation} yielding \begin{equation} \label{FP8} \sum_{k \geq 2} \abs{[2\tanh(J) e^a]^{2k} \sum_{\substack{m = 0 \\ k+m \geq 3}}^{k} \binom{2k}{2m} \left[\left(\frac{3}{2}\right)\tanh(q) e^a\right]^{2m} }\leq e^a - 1.
\end{equation} We observe that the two sums must satisfy the constraint $k+m \geq 3$, which is the closed-circuit condition. We can then extend these sums to all values $k \geq 2$, $m \geq 0$ without any constraint if we subtract by hand the only term forbidden by the constraint; this term corresponds to $k=2$ and $m=0$. So, we finally have: \begin{equation} \label{FP9} \abs*{ \sum_{k \geq 2} [2\tanh(J) e^a]^{2k} \sum_{m =0}^{k} \binom{2k}{2m} \left[\left(\frac{3}{2}\right)\tanh(q) e^a\right]^{2m} - [2\tanh(J) e^a]^4} \leq e^a - 1. \end{equation} We can now perform the sum over $m$. This sum is: \begin{equation} \label{sumpari2} \sum_{m=0}^k \binom{2k}{2m} x^{2m} = \frac{1}{2} \left( (x-1)^{2k} + (x+1)^{2k} \right), \end{equation} with $x = \left(\frac{3}{2}\right)\tanh(q) e^a$. So, condition (\ref{FP9}) becomes:\\ \begin{equation} \label{FP10} \abs*{ \frac{1}{2} \sum_{k \geq 2} [2\tanh(J) e^a]^{2k} \left[ \left( x-1 \right)^{2k} + \left(x+1 \right)^{2k} \right] - [2\tanh(J) e^a]^4} \leq e^a - 1. \end{equation} Setting, for brevity, $y=2\tanh(J) e^a$, we have:\\ \begin{equation} \label{FP11.2} \abs*{\frac{1}{2} \sum_{k \geq 2} [y(x-1)]^{2k} + \frac{1}{2} \sum_{k \geq 2} [y(x+1)]^{2k} - y^4} \leq e^a - 1. \end{equation} We can finally perform the remaining sums over $k$, which converge provided $y^2(x\pm1)^2 < 1$: $$ \sum_{k \geq 2} [y(x\pm1)]^{2k}=\sum_{k \geq 2} [y^2(x\pm1)^2]^k = \frac{[y^2(x\pm1)^2]^2}{1-y^2(x\pm1)^2} = \frac{y^4(x\pm1)^4}{1-y^2(x\pm1)^2}. $$ So, condition (\ref{FP11.2}) becomes:\\ \begin{equation} \label{FP12} \abs*{\frac{1}{2} \left[ \frac{y^4(x-1)^4}{1-y^2(x-1)^2} + \frac{y^4(x+1)^4}{1-y^2(x+1)^2} \right] -y^4} \leq e^a - 1.
\end{equation} Finally, recalling the form of $x$ and $y$, we have: \begin{tiny} \begin{equation}\label{FP13} \abs*{\frac{1}{2} \left[ \frac{(2\tanh(J) e^a)^4(\frac{3}{2}\tanh(q) e^a-1)^4}{1-(2\tanh(J) e^a)^2(\frac{3}{2}\tanh(q) e^a-1)^2} + \frac{(2\tanh(J) e^a)^4(\frac{3}{2}\tanh(q) e^a+1)^4}{1-(2\tanh(J) e^a)^2(\frac{3}{2}\tanh(q) e^a+1)^2} \right] -(2\tanh(J) e^a)^4}\leq e^a - 1. \end{equation} \end{tiny} This expression is the condition that $J$ and $q$ must satisfy to guarantee that $ f_{\Lambda_\mathcal{D}}(J,q)=\frac{1}{\abs{\Lambda_\mathcal{D}}} \ln Z_{\Lambda_\mathcal{D}}(J,q)$ is an analytic function of $J$ and $q$. Numerical evaluations show that a good value of $a$ is $a=0.15$. For this value, the expression (\ref{FP13}) identifies the region below the lower curve in Fig.~\ref{twocurves}. Hence, for values of $J$ and $q$ small enough (i.e.\ in the aforementioned region), $ f_{\Lambda_\mathcal{D}}(J,q)$ is analytic.\\ \end{proof} Thus, we have shown that in the low-temperature regime the system is in the ordered phase, while in the high-temperature regime it is in the disordered one. Our conjecture is that there exists a unique critical line in the $(J,q)$ plane that separates the ordered phase from the disordered one; this curve must lie in the region between the two curves shown in Figure~\ref{twocurves}. This fact is well supported by numerical simulations. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{3dHighLowTemperatureRegionsWithCritical} \caption{The curves delimiting the low-temperature and high-temperature regions identified in equations \eqref{condtrans} and \eqref{FP13}, together with a numerical approximation (in red) of the conjectured critical curve.} \label{twocurves} \end{figure} \section{Numerical simulations} In this section we present our numerical results concerning the critical curve in the $(q,J)$ plane and discuss the behavior of the critical exponents of the magnetic susceptibility as $q$ varies.
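Before turning to the simulations, we note that condition \eqref{FP13} is straightforward to evaluate numerically. The following sketch (our own illustration, not part of the original analysis; function names are ours) tests whether a pair $(J,q)$ satisfies the analyticity condition for $a=0.15$:

```python
import math

def fp13_lhs(J, q, a=0.15):
    """Left-hand side of the analyticity condition (FP13)."""
    x = 1.5 * math.tanh(q) * math.exp(a)
    y = 2.0 * math.tanh(J) * math.exp(a)
    # the geometric series in the derivation converges only if y^2 (x +/- 1)^2 < 1
    if y**2 * (x - 1.0)**2 >= 1.0 or y**2 * (x + 1.0)**2 >= 1.0:
        return math.inf
    t_minus = y**4 * (x - 1.0)**4 / (1.0 - y**2 * (x - 1.0)**2)
    t_plus = y**4 * (x + 1.0)**4 / (1.0 - y**2 * (x + 1.0)**2)
    return abs(0.5 * (t_minus + t_plus) - y**4)

def in_high_temperature_region(J, q, a=0.15):
    """True if (J, q) satisfies the bound fp13_lhs <= e^a - 1."""
    return fp13_lhs(J, q, a) <= math.exp(a) - 1.0
```

For small couplings, e.g.\ $J=q=0.05$, the condition holds, while for large couplings, e.g.\ $J=q=2$, the underlying geometric series diverges and the bound fails.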
To this end we consider the alternate dynamics taking place on a class of ``tetrahedral'' non-homogeneous lattices. It is possible to give a geometric interpretation to the pair interaction by thinking of $J$ and $q$ as proportional to the inverse of the distance between the lattice points. In this way, the three-dimensional lattice can be thought of as a collection of honeycomb layers (where the side of each hexagon is $\frac{1}{J}$) at distance $\frac{1}{q}$ one from another. With this picture in mind, we observe that when $q=J$ the dynamics lives on the diamond lattice, whereas for $q \to \infty$ the lattice becomes the simple cubic one. Moving in the other direction (towards smaller values of $q$), the interaction between the layers becomes weaker and weaker up to the point ($q = 0$) where they become independent, so that the system resembles a collection of well-separated graphene sheets. In this framework, estimating the critical curve in the $(q,J)$ plane amounts to finding the critical ``size'' $J$ of the hexagons for each ``distance'' $q$ between the sheets. Moreover, it is reasonable to think that the transition from the three-dimensional model to a collection of independent two-dimensional ones implies that the critical exponents of the magnetic susceptibility undergo a sharp change at $q=0$. Both the critical values of $J$ and the critical exponents can be estimated by looking at the variance of the magnetization of the system. Indeed, at the critical $J$ this variance diverges. To estimate the variance of the magnetization, we considered its sample variance over a long time for a reasonably large system. We have some confidence in our findings since the critical values of $J$ that we found are consistent with the known critical value of $J$ for the Ising model on the honeycomb lattice in the case $q=0$, and with recent numerical estimates of the critical $J$ for the diamond and simple cubic lattices for $q=J$ and for $q$ large, respectively.
As far as the critical curve is concerned, our results are summarized in Fig.~\ref{twocurves} and~\ref{fig:critical_curve}. In particular, Fig.~\ref{twocurves} shows that the estimated critical curve lies in the region between the curves of equations~\eqref{condtrans} and~\eqref{FP13}, delimiting the low- and the high-temperature regions respectively. Fig.~\ref{fig:critical_temperatures} shows the values of the normalized standard deviation of the magnetization as a function of $J$ for $q=0$, $q=J$ and $q=2$, corresponding, respectively, to the collection of two-dimensional honeycomb lattices, the diamond lattice and, ideally, the simple cubic lattice. Our estimate of the critical $J$ is given by the value at which the variance is maximal. We have $\hat{J}_c(0) = 0.659$; in this case the analytical critical value is $J_c^\mathrm{h.l.} \approx 0.659$ (see \cite{shaken2d}). For $q = J$ we obtained $\hat{J}_c(J) = 0.370$, whereas in \cite{diamond} the numerical estimate is $J_c^\mathrm{d.l.} = 0.370$. Finally, setting $q = 2$ we estimated $\hat{J}_c(2) = 0.226$, while in \cite{3d} $J_c^\mathrm{s.c.} = 0.222$. \begin{figure} \centering \includegraphics[width=0.7\textwidth]{zfig_critical_temperatures.pdf} \caption{The standard deviation of the magnetization as a function of $J$ for $q = 2$ (ideally the limit $q \to \infty$), $q = J$ and $q = 0$.
Our estimates for the critical temperature are $J_c = 0.226$ for the cubic lattice, $J_c = 0.370$ for the tetrahedral diamond lattice and $J_c = 0.669$ for the 2d honeycomb lattice.} \label{fig:critical_temperatures} \end{figure} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{niagara3d.pdf} \quad \includegraphics[width=0.45\textwidth]{variance3d.pdf} \caption{The ``critical behavior'' of the mean magnetization (left) and the standard deviation (right) for the tested values of $J$ and $q$.} \label{fig:critical_curve} \end{figure} Approaching the critical temperature, the magnetic susceptibility $\chi$ (the variance of the magnetization) diverges as a power law with some \emph{critical exponent} $\gamma$ (see, e.g., \cite{Thompson}): \begin{align} \chi \sim \left( \frac{\abs{T - T_c}}{T_c} \right)^{-\gamma}. \end{align} Recalling that in this paper we wrote $J$ in place of the usual $\beta J$, where $\beta = \frac{1}{k_b T}$, we get \begin{align} \log(\chi) = -\gamma \log(\abs{J - J_c}) + C \end{align} for a suitable constant $C$, so that $\gamma$ can be estimated as minus the slope of the regression of $\log(\chi)$ on $\log(\abs{J - J_c})$. Note that the value of $\gamma$ is related to the dimension of the system and is the same for the whole class of Ising-like lattice systems of the same dimension. We estimated $\gamma$ for several values of $q$ ranging from $0$ to $2$, that is, for geometries ranging from a collection of $2d$ honeycomb lattices to the simple cubic lattice. Our estimates are fairly simple and are not meant to determine the values of the critical exponent with high accuracy. Rather, as long as our values are consistent with those available in the literature, we want to use them to support our conjecture that the system retains a three-dimensional structure for all positive $q$. Our results are summarized in Fig.~\ref{fig:slope_critical_exponents} and Fig.~\ref{fig:critical_coefficients}.
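The fitting procedure just described can be sketched as follows (our own illustration with synthetic data and a known exponent, not the pipeline used for the figures):

```python
import numpy as np

def estimate_gamma(J_values, chi_values, J_c):
    """Estimate the critical exponent gamma as minus the slope of the
    least-squares regression of log(chi) on log|J - J_c|."""
    log_dist = np.log(np.abs(np.asarray(J_values) - J_c))
    log_chi = np.log(np.asarray(chi_values))
    slope, _intercept = np.polyfit(log_dist, log_chi, 1)
    return -slope

# synthetic check: chi = |J - J_c|^(-gamma) with the 3d value gamma = 1.237
J_c, gamma_true = 0.37, 1.237
J = np.linspace(0.30, 0.36, 20)
chi = np.abs(J - J_c) ** (-gamma_true)
print(estimate_gamma(J, chi, J_c))
```

On exact power-law data the regression recovers $\gamma$ to machine precision; on simulated data the quality of the fit depends on how close to $J_c$ the susceptibility is sampled.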
There it is possible to see that, for $q > 0$, both the high- and low-temperature critical exponents are quite ``close'' to the value $\gamma = 1.237\ldots$, which is the critical value for three-dimensional Ising systems (see \cite{25orderGamma}). On the other hand, at $q=0$ our estimate jumps to a value that is much closer to the critical value for the two-dimensional Ising system ($\gamma=\frac{7}{4}$, see \cite{Thompson}). These findings show that our model is able to capture, through the variation of the parameter $q$, the phase transition in the geometry of the system. \begin{figure} \centering \includegraphics[width=0.4\textwidth] {zfig_exp_regression_0.0.pdf} \hspace{0.1\textwidth} \includegraphics[width=0.4\textwidth] {zfig_exp_regression_0.01.pdf}\\ \vspace{0.5cm} \includegraphics[width=0.4\textwidth] {zfig_exp_regression_0.1.pdf} \hspace{0.1\textwidth} \includegraphics[width=0.4\textwidth]{zfig_exp_regression_1.0.pdf} \caption{The regression lines for $\gamma$ for some values of $q$. In each chart the red line is obtained looking at the values $J < J_c$ (high temperature) whereas the blue line is obtained looking at the values $J > J_c$ (low temperature).} \label{fig:slope_critical_exponents} \end{figure} \begin{figure} \centering \includegraphics[width=0.4\textwidth] {zfig_cold_coefficients.pdf} \hspace{0.1\textwidth} \includegraphics[width=0.4\textwidth] {zfig_warm_coefficients.pdf} \caption{The low-temperature ($J > J_c$, in blue) and high-temperature ($J < J_c$, in red) coefficients for values of $q$ between $0$ and $2.0$. The solid lines correspond to the ``true'' values of the critical exponents for the two- and three-dimensional systems. The numerically determined values appear to be close to the critical value of the three-dimensional system for $q>0$ and to the critical value of the two-dimensional system for $q=0$.
The behavior of the numerical estimate for the low-temperature critical exponent in the case $q=2$ is likely due to the limited size of the simulated system. } \label{fig:critical_coefficients} \end{figure} \subsection{Numerical details}\label{sec:numerical_details} We simulated the shaken dynamics, without external magnetic field, on a $96 \times 96 \times 96$ grid on which we imposed periodic boundary conditions. We considered a grid of points in the $(q,J)$ plane and, for each point in the grid, we started the simulation from the configuration with all spins set to $-1$ and let the system perform 510000 steps of the shaken dynamics (that is, 1020000 half steps). We considered the first 10000 steps as a ``transient'' and collected statistics over the final 500000 steps. In particular, for each pair of parameters $(q,J)$ we computed the average and variance (over time) of the magnetization. Simulations have been carried out using the language ``julia''. A heuristic insight into why this procedure should be useful can be given as follows. Letting the dynamics start from the configuration with all spins taking value $-1$, it is expected to reach very rapidly a local minimizer of the free energy and start visiting configurations that are close to this minimizer. In the high-temperature regime, the minimizers of the free energy are all expected to have zero mean magnetization, and if the parameters $(q,J)$ are in the high-temperature region, the dynamics is likely to return very quickly to a state where the number of plus and minus spins is essentially the same. As a consequence, it is possible to conjecture that the average (over time) of the magnetization is very close to zero and that its variance is very small (see figure \ref{fig:critical_curve}). In the low-temperature region, the free energy has minimizers whose mean magnetization is closer (and closer as the system freezes) to $\pm 1$.
These minimizers are separated by free energy barriers, and the colder the system, the higher the barriers separating the ``positive magnetization'' minimizers from the ``negative magnetization'' ones. As the chain evolves, the dynamics will overcome a free energy barrier of magnitude $\Delta$ with a probability that is exponentially small in $\Delta$. Therefore, if the system starts from the configuration where all spins are $-1$, it will very likely reach the vicinity of one of these $-1$ minimizers and will stay, with very high probability, in the region where the minimizers of the free energy have negative mean magnetization. Consequently, also in the low-temperature region we can expect a very small variance for the magnetization, whereas its mean is likely to be more and more negative as the system becomes colder (see figure \ref{fig:critical_curve}). Around the critical temperature, the free energy has minimizers with both positive and negative mean magnetization. However, the ``valleys'' of the free energy landscape where these minimizers sit are rather shallow and, therefore, the dynamics is expected to move between minimizers whose mean magnetization has opposite signs. An evolution of this type will produce an average magnetization that is close to zero. Nevertheless, the variance of the magnetization in this case is expected to be rather large. \section*{Acknowledgements} BS acknowledges the support of the Italian MIUR Department of Excellence grant (CUP E83C18000100006). AT has been supported through the H2020 Project Stable and Chaotic Motions in the Planetary Problem (Grant 677793 StableChaoticPlanetM of the European Research Council). \addcontentsline{toc}{chapter}{\bibname}
\section{Introduction} Heart and cardiovascular diseases are the leading global cause of death, with 80\% of cardiovascular disease-related deaths due to heart attacks and strokes. The 12-lead ECG can be considered the foundation of cardiology and electrophysiology. It provides unique information about the structure and electrical activity of the heart, as well as systemic conditions, through changes in the timing and morphology of the recorded waveforms. Consequently, the clinical 12-lead ECG, when correctly interpreted, remains a primary tool for detecting cardiac abnormalities and screening at-risk populations for heart-related issues. Accurate ECG interpretation of acute cardiac conditions is critical for timely, efficient, and cost-effective interventions. Achieving reliable machine-assisted ECG interpretation could therefore significantly impact patient outcomes~\citep{zhu2022physiomtl}. With the development of machine learning and deep learning methods, it may also be possible to identify additional, previously unrecognized signatures of disease. Many methods have been explored for diagnosing physiological signals, e.g., EEG, ECG, and EMG \citep{Liu2019MultimodalER,Shanmugam2019MultipleIL,CtAllard2019DeepLF}. Due to limited data and sensitive modeling frameworks, the diagnostic performance of developed algorithms is not always robust. Also, it has been shown that deep learning models for ECG data can be susceptible to adversarial attacks \citep{han2020deep,hossain2021_ecg_adv_gan,chen2020_aaai_ecg_adv}. To tackle the problem caused by \textit{adversarial data distributions}, researchers have proposed both empirical and certified robust learning approaches, such as adversarial training \citep{madry2017towards_emp_adv} and certified defense approaches \citep{cohen2019certified_cert_rob,li2020sok_SOK,li2021tss_TSS}.
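As a concrete (and deliberately simplified) illustration of the certified-defense idea of randomized smoothing~\citep{cohen2019certified_cert_rob}, a smoothed classifier predicts by majority vote over Gaussian-corrupted copies of the input. The sketch below is ours, not the implementation from that work:

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=1000, seed=0):
    """Majority vote of base_classifier over n_samples Gaussian-noisy
    copies of x (the core prediction rule of randomized smoothing)."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n_samples,) + np.shape(x))
    votes = [base_classifier(x + eps) for eps in noise]
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]

# toy base classifier: sign of the mean of the "signal"
base = lambda x: int(np.mean(x) > 0)
x = np.full(16, 0.5)  # a clearly "positive" input
print(smoothed_predict(base, x))
```

In the certified setting, the vote margin additionally yields a radius within which the smoothed prediction provably cannot change; the sketch keeps only the prediction step.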
It has already been shown that \textit{data augmentation} strategies \citep{Rebuffi2021DataAC,Rebuffi2021FixingDA,Gao2020FuzzTB,volpi2018generalizing_data_aug_dro,Ng2020SSMBASM} or more training data \citep{carmon2019unlabeled_imp_rob} can improve the performance and increase the robustness of deep learning models. Specifically, augmenting data with random Gaussian noise \citep{cohen2019certified_cert_rob} or transformations \citep{li2021tss_TSS} yields certifiably smoothed models. Mixup methods \citep{Zhang2018mixupBE,greenewald2021_kmixup}, which augment data with weighted averages of training points, also promote certifiable robustness \citep{jeong2021smoothmix}. However, different types of data usually carry critical domain-specific properties. While various neural network architectures have been effectively applied to ECG classification problems, there is increasing concern that these networks are susceptible to adversarial examples~\citep{han2020deep}. Several studies~\citep{raghu22a,nonaka2020electrocardiogram_ecg_data_aug} have explored data augmentation techniques for ECG datasets. Nevertheless, unlike other fields such as computer vision~\citep{Zhao2020MaximumEntropyAD} or NLP~\citep{morris2020textattack_data_aug_adv_nlp}, the effect of data augmentation on the robustness of deep learning for ECG is less explored. In this paper, we propose a data augmentation method from a probabilistic and geometric perspective. Following the notion of optimal mass transportation theory~\citep{villani2009optimal}, we perturb the data distributions along the geodesic in a Wasserstein space. Specifically, the ground metric of this Wasserstein space is computed by comparing the geometry of ECG signals, which exploits their cardiovascular properties. In summary, our contribution is threefold: \begin{enumerate} \item Our proposed data augmentation method augments samples by perturbing an empirical distribution towards samples of other classes.
This data augmentation scheme can preserve the local structure of the data manifold. \item To perform the computation of Wasserstein barycenters, we propose a similarity metric for ECG signals that compares their shapes, where we consider each beat of an ECG as a \textit{continuous probability distribution} and compute the corresponding Wasserstein distance. \item We validate our method on the PTB-XL dataset, which covers a variety of conditions, including Myocardial Infarction (MI), ST/T Change (STTC), Conduction Disturbance (CD), and Hypertrophy (HYP), collected from subjects of different ages and genders. We compare our method with a list of baseline methods in terms of both standard prediction performance and robust prediction under adversarial attacks. \end{enumerate} \subsection*{Generalizable Insights about Machine Learning in the Context of Healthcare} ECG signals can be treated as continuous sequential data and directly fed into deep learning models, since some neural network architectures, such as ResNet, can effectively capture information from raw ECG signals. However, our study emphasizes the potential importance of utilizing physiologically informed features that are intrinsically encoded in the structure or shape of the ECG waveforms. This statement is motivated by the following: (1) Specific components of ECG signals are generated from different parts of the cardiac cycle and represent different physiology. (2) The periodicity (or absence) of expected ECG waveforms contains valuable information beyond that contained within the waveforms themselves. (3) Human experts specify ECG categories predominantly based on coarser visual features (shape and morphology) that can reflect a broad variety of structural or conduction abnormalities. (4) The advantage of incorporating prior knowledge into models is supported by the fact that the leading method of the 2020 PhysioNet challenge used only handcrafted features rather than raw signals.
Hence, we suggest exploring and developing algorithms based on electrocardiograms' properties and physiological features. While decomposing ECG signals into individual wave components (P, QRS, T waves) is challenging, our method of comparing ECG beats with respect to their underlying geometry offers a principled approach to discriminating ECG signals. \section{Related Work} \paragraph{ECG Robustness} The robustness of ECG models has recently drawn more attention. \citet{Venton2021RobustnessOC} generated clean and noisy ECG datasets to test the robustness of different models. \citet{Hossain2021ECGATKGANRA} proposed a conditional GAN, which is claimed to be robust against adversarially attacked ECG signals. \citet{Venton2021InvestigatingTR} explored the impact of different physiological noise types and differing signal-to-noise ratios (SNRs) of noise on ECG classification performance. \paragraph{Deep learning in ECG} Deep learning approaches have been rapidly adopted across a wide range of fields due to their accuracy and flexibility, but they require large labeled training sets. With developments in machine learning, many models have been applied to ECG disease detection \citep{Kiranyaz2015ConvolutionalNN,pmlr-v149-nonaka21a,Khurshid2021ElectrocardiogrambasedDL,Raghunath2021DeepNN,Giudicessi2021ArtificialIA,Strodthoff2021DeepLF}. Al-Zaiti et al. predicted acute myocardial ischemia in patients with chest pain using a fusion voting method \citep{AlZaiti2020MachineLP}. Acharya et al. proposed a nine-layer deep convolutional neural network (CNN) to classify heartbeats in the MIT-BIH Arrhythmia database \citep{Acharya2017ADC,Moody2001TheIO}. Shanmugam et al. estimated a patient's risk of cardiovascular death after an acute coronary syndrome with a multiple instance learning framework \citep{Shanmugam2019MultipleIL}. Recently, Smigiel et al.
proposed models based on SincNet \citep{Ravanelli2018SpeakerRF} and used entropy-based features for cardiovascular disease classification \citep{smigiel2021ecg}. The transformer model has also recently been adopted in several ECG applications, e.g., arrhythmia classification, abnormality detection, and stress detection \citep{Yan2019FusingTM,Che2021ConstrainedTN,Natarajan2020AWA,Behinaein2021ATA,Song2021TransformerbasedSF,Weimann2021TransferLF}. \paragraph{Data augmentation for ECG} Data augmentation has also been explored for ECG applications in previous studies. \citet{Martin2021RealtimeFS} used an oversampling method to augment imbalanced data. \citet{ClementVirgeniya2021AND} fed the data into an adaptive synthetic (ADASYN) \citep{He2008ADASYNAS} sampling model, which utilized a weighted distribution for different minority-class samples depending upon their level of learning difficulty, instead of using synthetic models such as the synthetic minority oversampling technique (SMOTE). \citet{Liu2021MultiLabelCO} augmented ECG data by using a band-pass filter, noise addition, time-frequency transformation, and data selection. Data augmentation is also a good way to deal with imbalanced ECG datasets~\citep{Qiu2022OptimalTB}. Recently, a task-dependent learnable data augmentation policy~\citep{raghu22a} has been developed for 12-lead ECG detection. This study showed that data augmentation techniques are not always helpful. \paragraph{Data augmentation \& robustness in ML: } A promising way to enable robust learning is to provide adversarially perturbed samples via data augmentation. It has already been shown that data augmentation \citep{rebuffi2021data_aug_rob_deepmind,volpi2018generalizing_data_aug_dro} or more training data \citep{carmon2019unlabeled_imp_rob,deng2021improving_ood_data_rob_gaussian_mix} can improve the performance and increase the robustness of deep learning models.
\citet{Zhang2018mixupBE} proposed Mixup, an effective model regularizer for data augmentation that encourages linear interpolation in-between training examples and has been applied to sequential data. \citet{Zhang2020SeqMixAA} augmented the queried samples by generating extra labeled sequences. \citet{Guo2020SequencelevelMS} created new synthetic examples by softly combining input/output sequences from the training set. \citet{Guo2020NonlinearMO} embraced a nonlinear interpolation policy for both the input and label pairs, where the mixing policy for the labels is adaptively learned based on the mixed input. One recent work that is conceptually close to our study proposes a k-mixup data augmentation~\citep{greenewald2021_kmixup}, guided by Optimal Transport (OT), to improve the generalization and robustness of neural networks. This method uses optimal couplings to interpolate vicinal data samples in a way that respects local structures. Our method also enjoys this benefit, since the Wasserstein barycenter likewise exploits local distribution structures. However, that study uses the $\ell_2$ cost as the ground metric, which can be ineffective when dealing with high-dimensional data. In contrast, our study utilizes a ground metric that compares ECG signals according to their cardiovascular characteristics. \begin{figure}[t] \centering \includegraphics[width=0.99\textwidth]{figs/framework/first_figure.png} \caption{ Our data augmentation creates perturbed samples toward vicinal other-class samples. The perturbation lies on the geodesic connecting two distributions on a Wasserstein space, whose ground cost metric is computed via another level of Wasserstein distance that compares the geometric shapes of ECG signals.
} \label{fig:framework} \end{figure} \section{Methods}\label{sec_OT} In this work, we focus on the model's performance on adversarial examples, which are generated by adding imperceptible perturbations to clean inputs to mislead ML models' predictions through well-designed attack algorithms~\citep{szegedy2013intriguing, goodfellow2014explaining, eykholt2018robust}. Given such malicious scenarios and ML security considerations, robust deep learning has been studied extensively~\citep{salman2019provably, allen2022feature} to develop effective learning algorithms for building robust models. \subsection{Robust Deep Learning with Data Augmentation} It is imperative to obtain a deep learning model that is operational in the presence of potentially adversarial shifts in data distribution. A common way to describe this procedure is through the framework of distributionally robust optimization \citep{weber2022certifying_cert_ood_gen_dro}. Specifically, denote $P$ as the joint data distribution over features $X \in \mathcal{X}$ and labels $Y \in \mathcal{Y}$, and let $h_\theta : \mathcal{X} \to \mathcal{Y}$ be a family of predictive functions parameterized by $\theta$. Given a loss function $l : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}$, we turn to solve the following optimization problem: \begin{equation} \min_{\theta} \sup_{Q \in \mathcal{U}_P} \mathbb{E}_{(X, Y) \sim Q} [l(h_{\theta}(X), Y)], \end{equation} where $\mathcal{U}_P \subseteq \mathcal{P}(\mathcal{Z})$, with $\mathcal{Z} = \mathcal{X} \times \mathcal{Y}$, is a set of probability distributions. Intuitively, this objective aims at finding the worst-case optimal predictor $h^*_\theta$ when the data distribution $P$ is perturbed towards some distribution in $\mathcal{U}_P$.
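For intuition about the inner supremum, note that in the simplest (pointwise, not distributional) setting of a linear classifier with logistic loss and an $\ell_\infty$ perturbation ball, the worst-case perturbation has a closed form. The sketch below is a standard textbook instance, not the method of this paper; all names are ours:

```python
import numpy as np

def logistic_loss(w, X, y):
    """Mean logistic loss for labels y in {-1, +1}."""
    return np.mean(np.log1p(np.exp(-y * (X @ w))))

def worst_case_loss(w, X, y, eps):
    """sup over ||delta||_inf <= eps of the per-sample loss for a linear
    model: the maximizing perturbation is delta = -eps * y * sign(w)."""
    X_adv = X - eps * y[:, None] * np.sign(w)[None, :]
    return logistic_loss(w, X_adv, y)

rng = np.random.default_rng(0)
w = rng.normal(size=5)
X = rng.normal(size=(100, 5))
y = np.sign(X @ w)
print(logistic_loss(w, X, y), worst_case_loss(w, X, y, eps=0.1))
```

The adversarial loss always dominates the clean loss; adversarial training replaces the clean loss with this worst-case value in the outer minimization over $\theta$.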
In this work, we follow the distribution-perturbing adversary frameworks~\citep{sinha2017certifying_distribution_rob,mehrabi2021fundamental_distributionally_adv}, wherein the adversarial distribution can be viewed as a neighboring distribution of the clean data distribution, characterized by a certain distribution distance metric (e.g., the Wasserstein distance~\citep{villani2009optimal}). It is challenging to access this adversarial distribution explicitly. Thus, inspired by recent studies~\citep{carmon2019unlabeled_imp_rob,zhai2019adversarially_rob_gaussian,dan2020sharp_rob_gaussian}, we make further assumptions on the distribution of the data by writing the joint data distribution $P(X, Y)$ as the product of conditional distributions $P(X, Y) = P(X|Y)P(Y)$. Since we focus on the multi-class classification task, we denote by $P_k(X) = P(X | Y_k)$ the data distribution of class $k$. During data augmentation, we aim to perturb class $i$'s data distribution $P_i(X)$ towards class $j$'s distribution $P_j(X)$, $i \neq j$, since we believe the data samples lying on the geodesic can serve as adversarial samples. We can illustrate our intuitions as follows: \textbf{(1)} Instead of datapoint-specific adversarial perturbations that are aimed at attacking one specific sample, the directed augmented data distribution can be considered a universal perturbation~\citep{moosavi2017robustness_boundary} that causes label changes for a set of samples from the perturbed distribution $\mathcal{U}_P$. \textbf{(2)} Such a perturbation matches the global manifold structure of the dataset~\citep{greenewald2021_kmixup}, therefore promoting a smoother decision boundary. \textbf{(3)} It is shown in~\citet{wei2020theoretical_expansion} that this augmentation strategy improves the expansion of the neighborhood of class-conditional distributions.
More significantly, this formulation allows us to employ results from OT theory~\citep{villani2009optimal} and the Wasserstein barycenter~\citep{agueh2011barycenters}, thus firmly estimating the perturbed distribution $\mathcal{U}_P$. \subsection{Data Augmentation by Perturbation on the Geodesic}\label{sec:DA} Let $\mathcal{X}$ be an arbitrary space. Assume $d(\cdot, \cdot) : \mathcal{X} \times \mathcal{X} \to \mathbb{R}^+$ is the ground metric cost function. The well-known Wasserstein distance originates from the Optimal Transport (OT) problem, which aims at finding an optimal coupling $\pi$ that minimizes the transportation cost. \begin{definition} (Wasserstein Distances). For $p \in [1, \infty)$ and probability measures $\mu$ and $\nu \in \mathcal{M}(\mathcal{X})$, the $p$-Wasserstein distance between them is defined as \begin{equation} W_p(\mu, \nu) := \left( \inf_{\pi \in \Pi} \int_{\mathcal{X} \times \mathcal{X}} d^p(x, y) d \pi(x, y) \right)^{1/p} , \text{ } (x, y) \in \mathcal{X} \times \mathcal{X} \label{eq:w-dist} \end{equation} where $\Pi$ is the set of all probability measures on $\mathcal{X} \times \mathcal{X}$ with marginals $\mu$ and $\nu$. \end{definition} Consider the path of distributions (a geodesic) $p_t$ that interpolates between two distributions $\mu$ and $\nu$; one of the most intriguing properties of this interpolation is that it preserves the basic structure of $\mu$ and $\nu$. In other words, such a perturbation can be viewed as an optimal transport map that pushes $\mu$ forward along the geodesic that connects $\mu$ and $\nu$. \begin{definition} (Geodesics in Wasserstein space). Let $\mu$ and $\nu$ be two distributions. Consider a map $m: [0, 1] \to \mathcal{M}(\mathcal{X})$ taking $[0, 1]$ to the set of distributions, such that $m(0) = \mu$ and $m(1) = \nu$, where $\mathcal{M}(\mathcal{X})$ is the set of Borel measures on $\mathcal{X}$. Thus $(p_{\alpha}: 0 \leq \alpha \leq 1)$ is a path connecting $\mu$ and $\nu$, where $p_{\alpha}=m(\alpha)$.
The length of $m$, denoted by $L(m)$, is the supremum of $\sum_{i=2}^{K} W\left(m\left(\alpha_{i-1}\right), m\left(\alpha_{i}\right)\right)$ over all $K$ and all partitions $0=\alpha_{1}<\cdots<\alpha_{K}=1$. There always exists a path $m$ such that $L(m) = W(\mu, \nu)$; such a path $(p_{\alpha}: 0 \leq \alpha \leq 1)$ is the geodesic connecting $\mu$ and $\nu$. \end{definition} The definition of the geodesic in Wasserstein space provides us with a roadmap to obtain the perturbed distributions, as it boils down to the Wasserstein barycenter problem. \begin{definition} (Wasserstein Barycenter). The Wasserstein barycenter of a set of measures $\{\nu_1, \ldots, \nu_N \}$ in a probability space $\mathbb{P} \subset \mathcal{M}(\mathcal{X})$ is a minimizer of the objective $f_{wb}$ over $\mathbb{P}$, where \begin{equation} f_{wb}(\mu) := \sum_{i=1}^N \alpha_i W(\mu, \nu_i), \end{equation} and the $\alpha_i$ are weights such that $\sum_i \alpha_i = 1$ and $\alpha_i > 0$. \end{definition} With uniform weights over all the distributions, the barycenter is the Fr\'echet mean, or Wasserstein population mean~\citep{bigot2017geodesic_pca}. It is also known that, when there are only two measures, the barycenter corresponds to the interpolation between the two distributions along the geodesic. In this case, given class-conditional data distributions $P_i$ and $P_j$, the perturbed augmentation that interpolates along the geodesic can be obtained via \begin{equation} \Tilde{P}_{ij} = \operatorname*{arg\,min}_{{P}_\alpha} ~ (1 - \alpha)W(P_i, {P}_\alpha) + \alpha W({P}_\alpha,P_j), \quad \alpha \in (0, \epsilon). \end{equation} The augmented samples can then be obtained by sampling $(\Tilde{x}_i, y_i) \sim \Tilde{P}_{ij}$. Later on, we will provide an algorithmic derivation of this augmentation procedure for discrete data samples. 
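For intuition, in the special case of one-dimensional samples with uniform weights, the optimal coupling is the monotone (sorted) matching, and a point on the geodesic is the pointwise convex combination of the matched samples. The following numpy sketch is our own illustration of this special case, not the general algorithm used later:

```python
import numpy as np

def geodesic_interpolation_1d(x, y, alpha):
    """Displacement interpolation between two empirical 1D measures.

    Assumes equal-length samples with uniform weights, so the optimal
    (monotone) coupling simply matches sorted order.
    """
    xs, ys = np.sort(x), np.sort(y)
    return (1.0 - alpha) * xs + alpha * ys

src = np.array([0.0, 1.0, 2.0])   # stand-in for class i samples
tgt = np.array([2.0, 3.0, 4.0])   # stand-in for class j samples
mid = geodesic_interpolation_1d(src, tgt, 0.5)   # midpoint of the geodesic
```

At $\alpha=0$ the interpolation recovers the source samples, at $\alpha=1$ the target samples, and intermediate $\alpha$ values trace out the geodesic between the two empirical measures.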
\begin{figure}[htp] \centering \includegraphics[width=0.99\textwidth]{figs/framework/physio_pipeline_new_v2.png} \caption{The semantic representation of our pipeline. } \label{fig:physio_pipeline} \end{figure} \section{Algorithm \& Computation } \subsection{Computational Optimal Transport} In practice, we only observe discrete training samples that represent the empirical distributions of $P_i$ and $P_j$. Let $\mathbf{X}_i = \{\mathbf{x}^i_l\}_{l=1}^{n_i}$ and $\mathbf{X}_j = \{\mathbf{x}^j_l\}_{l=1}^{n_j}$ be two sets of features from classes $i$ and $j$, respectively. The empirical distributions are written as $\hat{P}_i = \sum_{l=1}^{n_i} p_l^i \delta_{x^i_l}$ and $\hat{P}_j = \sum_{l=1}^{n_j} p_l^j \delta_{x^j_l}$, where $\delta_{x}$ is the Dirac function at location $x \in \Omega$, and $p_l^i$ and $p_l^j$ are the probability masses associated with the samples. Then the Wasserstein distance of equation (\ref{eq:w-dist}) between the empirical measures $\hat{P}_i$ and $\hat{P}_j$ becomes \begin{equation} W(\hat{P}_i, \hat{P}_j) = \inf_{\pi \in \hat{\Pi}_{ij}} \sum_{l=1,k=1}^{n_i, n_j} c(\mathbf{x}^i_l, \mathbf{x}^j_k) \pi_{l,k}, \end{equation} where $\hat{\Pi}_{ij} := \{\pi \in (\mathbb{R}^+)^{n_i \times n_j} \mid \pi \mathbf{1}_{n_j} = \mathbf{1}_{n_i} /n_i, \pi^\top \mathbf{1}_{n_i} = \mathbf{1}_{n_j} /n_j \}$, with $\mathbf{1}_n$ a length-$n$ vector of ones. Here $c(x, y)$ is the ground cost function that specifies the actual cost of transporting the mass, or probability measure, from position $x$ to $y$. Most studies use the $l_2$ norm as the ground metric because of its many desirable properties. However, we emphasize here that the $l_2$ metric is not appropriate for comparing ECG signals. \subsection{A Physiology Inspired Metric for ECG} In practice, the accurate decomposition~\citep{kanjilal1997fetal_ecg_decomp} of ECG is a crucial step in providing medical diagnosis and services. 
For example, the ventricular heart rate~\citep{kundu2000knowledge_ecg_decomp} is the most common piece of information extracted, obtained by measuring the time interval between two successive $R$ peaks. While in most computer vision tasks it is hard to describe features explicitly, in ECG analysis informative characteristics are defined by the wave features of the signal, as illustrated in Fig.~\ref{fig:ecg_decomp_qrs}. A great deal of work~\citep{zhong2020maternal_decomp_ff,rasti2021aecg_decomp_deep} has focused on extracting or decomposing the wave components from ECG signals; however, this remains challenging. In this work, we propose to directly compare the shapes of two ECGs rather than parsing each ECG into the P wave, QRS complex, and T wave, since processing the noisy signals is challenging. Specifically, we first (1) treat them as probability densities, and then (2) compute a Wasserstein distance between these two densities. Formally, consider two individual ECG beat signals as two density functions of time, $\mu_e = \mu_e(t)$ and $\nu_e = \nu_e(t)$. The Wasserstein distance is obtained via \begin{equation} W_e(\mu_e, \nu_e) := \inf_{\pi_e \in \Pi_e} \int_{[0, 1] \times [0, 1]} \| x - y\|_2^2 \, d \pi_e (x, y), \end{equation} where $\Pi_e$ is the set of joint distributions with marginals $\mu_e$ and $\nu_e$. We therefore have a reasonable metric that measures the pairwise similarity between ECG signals, $C_d(\cdot, \cdot) = W_e(\cdot, \cdot)$, which can serve as the ground metric in the computation of the Wasserstein barycenter data augmentation procedure. 
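This beat-to-beat distance can be sketched in a few lines of numpy by treating each beat as a density over the normalized time axis and using the closed-form one-dimensional solution through quantile functions. The normalization convention (shift to nonnegative, divide by the sum) is our assumption for illustration:

```python
import numpy as np

def signal_to_density(sig):
    """Turn a beat into a probability density over time:
    shift to nonnegative, then normalize to sum to one (our convention)."""
    s = sig - sig.min()
    return s / s.sum()

def wasserstein2_1d(mu, nu, n_q=200):
    """Squared 2-Wasserstein distance between two densities on a common
    time grid in [0, 1], via the 1D closed form through quantile functions."""
    t = np.linspace(0.0, 1.0, len(mu))
    q = np.linspace(0.0, 1.0, n_q)
    qm = np.interp(q, np.cumsum(mu), t)  # quantile function of mu
    qn = np.interp(q, np.cumsum(nu), t)  # quantile function of nu
    return np.mean((qm - qn) ** 2)

# Two synthetic "beats": the same bump shifted in time by 0.2.
t = np.linspace(0.0, 1.0, 101)
beat_a = signal_to_density(np.exp(-(t - 0.3) ** 2 / 0.01))
beat_b = signal_to_density(np.exp(-(t - 0.5) ** 2 / 0.01))
d = wasserstein2_1d(beat_a, beat_b)
```

For two identical bumps shifted in time, the distance is close to the squared shift, reflecting exactly the time-warp invariance that the $l_2$ metric lacks.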
\begin{figure}[htp] \centering \begin{minipage}[b]{.45\linewidth} \centering \includegraphics[width=0.60\textwidth]{figs/geometric_distance/qrs_shape.png} \caption{ECG components.} \label{fig:ecg_decomp_qrs} \end{minipage} \begin{minipage}[b]{.54\linewidth} \centering \includegraphics[width=0.98\textwidth]{figs/geometric_distance/w_shape_qrs.png} \caption{Treat ECG as continuous densities.} \label{fig:ecg_shape_w_distance} \end{minipage} \end{figure} \paragraph{Computation concerns: batch OT and entropic OT} Discrete optimal transport involves a linear program with $O(n^3)$ complexity. Our framework requires the computation of optimal transport at two levels: (1) using the Wasserstein distance to obtain the pairwise similarity of ECG signals; (2) using the Wasserstein barycenter, which also computes Wasserstein distances, to interpolate between two sets of ECG signal samples from different conditions. Hence, the potential computational issues cannot be ignored. First of all, we adopt the celebrated entropic optimal transport~\citep{cuturi2013sinkhorn} and use the Sinkhorn algorithm to solve the OT objectives and barycenters~\citep{janati2020debiased_barycenter}; its much lower cost eases the computational burden. In addition, the pairwise Wasserstein distances of ECG signals can be precomputed and stored. Last but not least, we follow the concept of minibatch optimal transport~\citep{fatras2021minibatch_OT}, sampling a batch of ECG samples from each condition during the data augmentation procedure. While minibatch OT can lead to non-optimal couplings, our experimental results demonstrate that our data augmentation is still satisfactory. \subsection{Backbone Model} \citet{pmlr-v149-nonaka21a} used raw ECG signals as input; however, it has been shown that optimal predictive performance can be achieved by transformers trained with hand-crafted features \citep{Natarajan2020AWA}. 
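The core Sinkhorn iteration mentioned in the computational discussion above is compact enough to sketch in full. This is a minimal numpy version for fixed histograms $a$, $b$ and cost matrix $C$ (illustrative only; dedicated OT libraries should be used in practice):

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=500):
    """Entropic OT via Sinkhorn iterations (Cuturi, 2013): returns the
    regularized transport plan between histograms a and b with cost C."""
    K = np.exp(-C / eps)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)         # scale to match column marginals
        u = a / (K @ v)           # scale to match row marginals
    return u[:, None] * K * v[None, :]

# Toy check: the plan's marginals should match a and b.
a = np.array([0.5, 0.5])
b = np.array([0.25, 0.75])
C = np.array([[0.0, 1.0], [1.0, 0.0]])
P = sinkhorn(a, b, C, eps=0.5)
```

Each iteration only requires matrix-vector products with the Gibbs kernel, which is the source of the speedup over the exact linear program.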
For the classification model, we take advantage of the transformer encoder \citep{Vaswani2017AttentionIA} and propose a Multi-Feature Transformer (MF-Transformer) model. The transformer is based on the attention mechanism \citep{Vaswani2017AttentionIA} and outperforms previous models in accuracy and performance on many tasks~\citep{xu2022prompting,qiu2022mhms}. The original transformer model is composed of an encoder and a decoder: the encoder maps an input sequence into a latent representation, and the decoder uses the representation along with other inputs to generate a target sequence. Our model is based mostly on the encoder, since we aim at learning representations of ECG features rather than decoding them into another sequence. As shown in Fig.~\ref{fig:transformer}, the input of the Multi-Feature Transformer is composed of three parts: ECG raw features, time-domain features, and frequency-domain features. First, we feed the input into an embedding layer and then inject positional information into the embeddings. In our model, the attention module contains $N=5$ identical layers, and each layer contains two sub-layers: a multi-head self-attention mechanism and a fully connected feed-forward network. Residual connections and normalization are added in each sub-layer. We use 1D convolutional and softmax layers to compute the final output. More details of the MF-Transformer model are given in Appendix~\ref{MF}. \begin{figure*}[htp] \centering \includegraphics[width=0.99\textwidth]{figs/transformer.png} \caption{The architecture of the Multi-Feature Transformer model.} \label{fig:transformer} \end{figure*} \section{Cohort} We carried out the experiments on the PTB-XL dataset \citep{Wagner2020PTBXLAL}, which contains clinical 12-lead ECG signals of 10-second length. There are five conditions in total: Normal ECG (NORM), Myocardial Infarction (MI), ST/T Change (STTC), Conduction Disturbance (CD), and Hypertrophy (HYP). 
The waveform files are stored in WaveForm DataBase (WFDB) format with 16-bit precision, at a resolution of 1$\mu$V/LSB and a sampling frequency of 100Hz. \subsection{Signal Pre-processing} First, the raw ECG signals are processed with the wfdb library\footnote{https://pypi.org/project/wfdb/} and the Fast Fourier Transform (FFT) to turn the time-series data into a spectrum, as shown in Fig.~\ref{fig:fft}. Then we perform $n$-point window filtering to remove noise within the original ECG signals, and adopt notch processing to filter out power-line interference (noise frequency: 50Hz, quality factor: 30). An example of a filtered ECG signal after $n$-point window filtering and notch processing is shown in Fig.~\ref{fig:notch}. \begin{figure}[htp] \centering \begin{minipage}[b]{.48\linewidth} \centering \includegraphics[width=0.98\textwidth]{figs/fft_v2_MLHC.png} \caption{ECG data in the format of time series and spectrum.} \label{fig:fft} \end{minipage} ~ \begin{minipage}[b]{.48\linewidth} \centering \includegraphics[width=0.98\textwidth]{figs/notch2_v2_MLHC.png} \caption{Filtered ECG data in the format of time series and spectrum.} \label{fig:notch} \end{minipage} \end{figure} Then we perform ECG segmentation by dividing the 10-second ECG signals into individual ECG beats. We first detect the R peaks of each signal with ECG detectors\footnote{https://pypi.org/project/py-ecg-detectors/}, and then slice the signal at a fixed-size interval on both sides of the R peaks to obtain individual beats. Examples of R peak detection results and segmented ECG beats are shown in Fig.~\ref{fig:find_R} and Fig.~\ref{fig:divide_R}, respectively. 
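The notch-filtering step above can be sketched with scipy. The notch frequency (50Hz) and quality factor (30) follow the text, while the sampling rate here is raised to 500Hz purely for illustration, since a 50Hz notch sits exactly at the Nyquist limit of a 100Hz signal:

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 500.0                  # illustrative sampling rate (see note above)
f0, Q = 50.0, 30.0          # power-line frequency and quality factor, as in the text
b, a = iirnotch(f0, Q, fs=fs)

t = np.arange(0, 2.0, 1 / fs)
clean = np.sin(2 * np.pi * 5.0 * t)                # stand-in for an ECG component
noisy = clean + 0.5 * np.sin(2 * np.pi * f0 * t)   # add 50Hz interference
filtered = filtfilt(b, a, noisy)                   # zero-phase notch filtering
```

`filtfilt` applies the notch forward and backward, which removes the 50Hz component without introducing phase distortion in the remaining signal.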
\begin{figure}[htp] \centering \begin{minipage}[b]{.49\linewidth} \centering \includegraphics[width=0.98\textwidth]{figs/findR_MLHC.png} \caption{Detecting R peaks in the ECG signals.} \label{fig:find_R} \end{minipage} \begin{minipage}[b]{.49\linewidth} \centering \includegraphics[width=0.98\textwidth]{figs/divide_R2_MLHC.png} \caption{Extracted ECG beats divided by R peaks.} \label{fig:divide_R} \end{minipage} \end{figure} \subsection{Feature Extraction}\label{sec:features} Instead of directly using the time-series signals, we extract time-domain and frequency-domain features to better represent the ECG signals. The time-domain features include: maximum, minimum, range, mean, median, mode, standard deviation, root mean square, mean square, $k$-th order moment, skewness, kurtosis, kurtosis factor, waveform factor, pulse factor, and margin factor. The frequency-domain features include: FFT mean, FFT variance, FFT entropy, FFT energy, FFT skew, FFT kurt, FFT shape mean, FFT shape std, FFT shape skew, and FFT shape kurt, where the formula for each component is shown in Table~\ref{fre_table}. 
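A few of the time-domain features listed above can be computed as follows. The formulas for the waveform and pulse factors are the standard signal-processing definitions (RMS and peak over mean absolute value) and may differ in detail from our exact implementation:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def time_domain_features(x):
    """A subset of the time-domain features listed above (illustrative naming)."""
    abs_mean = np.mean(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    return {
        "max": x.max(), "min": x.min(), "range": x.max() - x.min(),
        "mean": x.mean(), "median": np.median(x), "std": x.std(),
        "rms": rms, "mean_square": np.mean(x ** 2),
        "skewness": skew(x), "kurtosis": kurtosis(x),
        "waveform_factor": rms / abs_mean,             # RMS over mean |x|
        "pulse_factor": np.max(np.abs(x)) / abs_mean,  # peak over mean |x|
    }

feats = time_domain_features(np.array([1.0, -1.0, 1.0, -1.0]))
```

Each beat (and each lead) yields one such feature dictionary, which is flattened and concatenated into the per-beat feature vector.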
\begin{table}[htp]\small \centering \renewcommand\arraystretch{1.5} \caption{ECG statistical features in frequency domain.} \begin{tabular}{c|c|c|c} \hline Feature Symbol &Formula & Feature Symbol &Formula \\ \hline $Z_1$ & $\frac{1}{N} \sum_{k=1}^{N} F(k)$ & $Z_6$ & $\frac{1}{N} \sum_{k=1}^{N}\left(\frac{F(k)-Z_{1}}{\sqrt{Z_{2}}}\right)^{4}$ \\ $Z_2$ & $\frac{1}{N-1} \sum_{k=1}^{N}\left(F(k)-Z_{1}\right)^{2}$ & $Z_7$ & $\frac{\sum_{k=1}^{N}(f(k)-F(k))}{\sum_{k=1}^{N} F(k)}$ \\ $Z_3$ & $-1 \times \sum_{k=1}^{N}\left(\frac{F(k)}{Z_{1} N} \log _{2} \frac{F(k)}{Z_{1} N}\right)$ & $Z_8$ & $\sqrt{\frac{\sum_{k=1}^{N}\left[\left(f(k)-Z_{6}\right)^{2} F(k)\right]}{\sum_{k=1}^{N} F(k)}}$\\ $Z_4$ & $\frac{1}{N} \sum_{k=1}^{N}(F(k))^{2}$ & $Z_9$ & $\frac{\sum_{k=1}^{N}\left[(f(k)-F(k))^{3} F(k)\right]}{\sum_{k=1}^{N} F(k)}$ \\ $Z_5$ & $\frac{1}{N} \sum_{k=1}^{N}\left(\frac{F(k)-Z_{1}}{\sqrt{Z_{2}}}\right)^{3}$ & $Z_{10}$ & $\frac{\sum_{k=1}^{N}\left[(f(k)-F(k))^{4} F(k)\right]}{\sum_{k=1}^{N} F(k)}$ \\ \hline \end{tabular} \label{fre_table} \end{table} After processing the ECG signals, we analyzed the statistics of the processed ECG data; the result is shown in Table~\ref{imbalance_table}, where there are five categories in total: NORM, MI, STTC, CD, and HYP. \begin{table}[htp]\small \centering \caption{Statistics of the processed ECG data.} \begin{tabular}{c|c|c|c|c} \hline Category & Patients & Percentage & ECG beats &Percentage \\ \hline NORM &9528 &34.2\% & 28419 &36.6\% \\ MI &5486 &19.7\% &10959 &14.1\% \\ STTC &5250 &18.9\% & 8906 &11.5\% \\ CD &4907 &17.6\% & 20955 &27.0\% \\ HYP &2655 &9.5\% & 8342 &10.8\% \\ \hline \end{tabular} \label{imbalance_table} \end{table} \section{Experiments} \subsection{Experimental Setup} We use the MF-Transformer model as our classifier, where the input contains three parts: the ECG signals, the time-domain features, and the frequency-domain features. 
To reduce the dimension of the ECG signals for the convenience of computation, we downsample the processed ECG signals to 50Hz. We computed the ECG features of Section~\ref{sec:features} for each ECG beat for all 12 leads, and concatenated them with the downsampled and de-noised ECG signals. The dimension of the final feature vector of each ECG beat is 864, where the dimensions of the ECG signals, time-domain features, and frequency-domain features are 600, 156, and 108, respectively. Our experiments are carried out on two NVIDIA RTX A6000 GPUs. \begin{figure}[htp] \centering \includegraphics[width=0.48\textwidth]{figs/geometric_distance/ecg_geo_dist_1.png} \includegraphics[width=0.48\textwidth]{figs/geometric_distance/ecg_geo_dist_2.png} \caption{Difference between the Wasserstein distance and the $l_2$ distance. On the left, we overlay two ECG signals from different conditions; on the right, two ECG signals from the same condition. The Wasserstein distance correctly describes the similarity, whereas the $l_2$ norm indicates a large distance between two signals that come from the same condition but have a slight time shift. } \label{fig:geo_dist_comparision} \end{figure} \subsection{Data Augmentation by Wasserstein Geodesic Perturbation} Our data augmentation strategy through Wasserstein geodesic perturbation aims at improving the robustness of heart disease diagnosis. Specifically: (1) We use NORM individual beats as the source and transport samples from NORM into each of the other, minority categories. (2) In the augmentation procedure, we randomly sample a batch of ECG signals from both the source and target categories and then use the formulation in Section~\ref{sec:DA} to obtain the barycentric mapping samples. The label of an augmented sample is set to the target category. (3) We mix the original data and the augmented data together as input data for the MF-Transformer. Examples of augmented data are shown in Fig.~\ref{fig:aug_result}. 
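The barycentric mapping in step (2) can be sketched as follows: given a transport plan $\pi$ between a source batch and a target batch (e.g., computed with an entropic OT solver), each source sample is mapped to the $\pi$-weighted average of its coupled targets. This is a standard construction in OT-based domain adaptation; the function name below is ours:

```python
import numpy as np

def barycentric_mapping(pi, X_t):
    """Map each source sample to the pi-weighted mean of its coupled targets.

    pi  : (n_s, n_t) transport plan between source and target batches
    X_t : (n_t, d) target samples
    """
    row_mass = pi.sum(axis=1, keepdims=True)
    return (pi @ X_t) / row_mass

# With a coupling that matches source sample l to target l only,
# the mapping simply returns the matched targets.
pi = np.eye(3) / 3.0
X_t = np.arange(6.0).reshape(3, 2)
mapped = barycentric_mapping(pi, X_t)
```

A partially transported augmentation in the spirit of the geodesic interpolation would then mix $(1-\alpha)\,\mathbf{x} + \alpha\,\mathrm{mapped}(\mathbf{x})$; this mixing step is our reading of the procedure, not the exact implementation.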
The augmented data is also confirmed to preserve the semi-periodic nature of ECG signals: the augmentation results for each lead fit the ECG pattern well when compared with the original ECG signals. \begin{figure}[htp] \centering \includegraphics[width=0.99\textwidth]{figs/aug_results.png} \caption{Example comparisons of 10-second 12-lead ECG signals within different conditions. Left column: original signals; Right column: augmented signals.} \label{fig:aug_result} \end{figure} \begin{figure}[htp] \centering \includegraphics[width=0.98\textwidth]{figs/geometric_distance/geodesic_vs_l2.png} \caption{The difference between perturbation along the Wasserstein geodesic and in $l_2$ space. The geodesic perturbation keeps the structure of ECG beats, while $l_2$ interpolation leads to physiologically implausible results (two QRS waves in one beat). } \label{fig:geo_inter_vs_l2} \end{figure} \subsection{Evaluation of Heart Disease Detection } We use the MF-Transformer model as the classifier to evaluate our method for detecting heart conditions from ECG data. First, we trained the MF-Transformer model with the original PTB-XL data to obtain the baseline performance for the different categories. Second, we used different data augmentation strategies to augment the ECG signals of the minority categories, then trained the MF-Transformer model from scratch to obtain the performance of each data augmentation method. Third, we augmented the ECG data with our data augmentation method and trained the MF-Transformer model from scratch again to evaluate the performance of our method. The augmented data is used only for training; the testing set remains the same across all experiments and contains only real-world ECG signals, so as to give a fair evaluation of the proposed method. The training and testing splitting strategy is the same as in \citep{Wagner2020PTBXLAL,Strodthoff2021DeepLF}. 
\begin{table}[htp]\small \centering \caption{Comparison results of heart disease diagnosis (HYP, CD, STTC, MI) by different data augmentation methods, where the evaluation metrics are AUROC and F1-score.} \begin{tabular}{l|c|c} \hline Methods & AUROC & F1-score \\ \hline No augmentation &0.843 &0.575 \\ Random Oversampling &0.820 &0.536 \\ SMOTE \citep{Chawla2002SMOTESM} &0.799 & 0.534 \\ ADASYN \citep{He2008ADASYNAS} &0.820 &0.546 \\ TaskAug \citep{raghu22a} &0.842 &-- \\ Ours &\textbf{0.931} &\textbf{0.707} \\ \hline \end{tabular} \label{results_table} \end{table} \begin{figure}[H] \centering \includegraphics[width=0.99\textwidth]{figs/cm_5.png} \caption{Confusion matrix of prediction results.} \label{fig:cm_all} \end{figure} Table~\ref{results_table} shows the results of the different data augmentation approaches, where the standard evaluation metrics AUROC and F1-score are used to compare performance. We find that even when some data augmentation methods are applied, i.e., random oversampling, SMOTE \citep{Chawla2002SMOTESM}, or ADASYN \citep{He2008ADASYNAS}, the classification performance is not significantly improved (and is sometimes even slightly worse) compared with using only the ECG data without any augmentation. We also compare with TaskAug \citep{raghu22a}, a recent ECG data augmentation strategy that takes the raw ECG signal as input. Even with no augmentation strategy, our result is already better than TaskAug, showing that: (1) high-level features are useful for learning the ECG patterns in the diagnosis task; (2) the performance improvement of some data augmentation methods may be due to reduced underfitting rather than learning additional patterns. To compare the classification results for each heart condition more quantitatively across data augmentation methods, we compute the confusion matrix for each method, as shown in Fig.~\ref{fig:cm_all}. 
Our data augmentation method not only improves the classification accuracy of each category, but also improves the average classification result. Each category's performance becomes more balanced, showing that the robustness of the diagnosis result is improved. \paragraph{Robust prediction:} Following the pipeline of previous works~\citep{han2020deep}, we evaluate the robustness of our model, as well as of the baseline methods, on adversarial examples generated by Projected Gradient Descent (PGD)~\citep{kurakin2016adversarial_pgd}. PGD is a white-box attack method that seeks adversarial samples within an $\epsilon$-ball based on the gradient of a trained model. In our experiments, we gradually increase the capability of the adversary by increasing $\epsilon$. \begin{table}[H]\small \centering \caption{AUROC result on clean and adversarial samples, for Myocardial Infarction (MI). } \begin{tabular}{l|c|c|c|c|c} \hline Myocardial Infarction (MI) & Clean AUROC & $\epsilon=0.001$ & $\epsilon=0.002$ & $\epsilon=0.003$ & $\epsilon=0.004$ \\ \hline No augmentation &0.742 &0.563 &0.386 & 0.265 &0.185 \\ Random Oversampling &0.701 &0.459 &0.276 & 0.175 &0.121 \\ SMOTE \citep{Chawla2002SMOTESM} & 0.706 &0.596 & 0.485 &0.403 & 0.317 \\ ADASYN \citep{He2008ADASYNAS} & 0.726 &0.621 & 0.497 &0.405 & 0.323 \\ Ours &\textbf{0.910} &\textbf{0.823} &\textbf{0.749} &\textbf{0.686} &\textbf{0.615} \\ \hline \end{tabular} \end{table} \vspace{-15pt} \begin{table}[H]\small \centering \caption{AUROC result on clean and adversarial samples for ST/T Change (STTC). 
} \begin{tabular}{l|c|c|c|c|c} \hline ST/T Change (STTC) & Clean AUROC & $\epsilon=0.001$ & $\epsilon=0.002$ & $\epsilon=0.003$ & $\epsilon=0.004$ \\ \hline No augmentation & 0.847 &0.769 &0.681 &0.577 &0.481 \\ Random Oversampling & 0.835 &0.583 &0.375 &0.247 &0.170 \\ SMOTE \citep{Chawla2002SMOTESM}& 0.761 & 0.708 & 0.609 &0.524 &0.443 \\ ADASYN \citep{He2008ADASYNAS}& 0.824 & 0.727 & 0.619 &0.525 &0.433 \\ Ours &\textbf{0.935} &\textbf{0.857} &\textbf{0.795 } &\textbf{0.734} &\textbf{0.680} \\ \hline \end{tabular} \end{table} \vspace{-15pt} \begin{table}[H]\small \centering \caption{AUROC result on clean and adversarial samples for Conduction Disturbance (CD). } \begin{tabular}{l|c|c|c|c|c} \hline Conduction Disturbance (CD) & Clean AUROC & $\epsilon=0.001$ & $\epsilon=0.002$ & $\epsilon=0.003$ & $\epsilon=0.004$ \\ \hline No augmentation & 0.883 &0.799 &0.695 & 0.603 &0.533 \\ Random Oversampling & 0.885 &0.748 &0.598 & 0.491 &0.408 \\ SMOTE \citep{Chawla2002SMOTESM}& 0.866 &0.832 & 0.786 &0.734 &0.689 \\ ADASYN \citep{He2008ADASYNAS} & 0.869 &0.780 & 0.690 &0.615 &0.552 \\ Ours &\textbf{0.952} &\textbf{0.915} &\textbf{0.863 } &\textbf{0.811} &\textbf{0.766 } \\ \hline \end{tabular} \end{table} \vspace{-15pt} \begin{table}[H]\small \centering \caption{AUROC result on clean and adversarial samples for Hypertrophy (HYP). 
} \begin{tabular}{l|c|c|c|c|c} \hline Hypertrophy (HYP) & Clean AUROC & $\epsilon=0.001$ & $\epsilon=0.002$ & $\epsilon=0.003$ & $\epsilon=0.004$ \\ \hline No augmentation & 0.842 &0.724 &0.599 &0.501 &0.396 \\ Random Oversampling & 0.799 &0.588 &0.398 &0.271 &0.194 \\ SMOTE \citep{Chawla2002SMOTESM}& 0.787 & 0.705 & 0.616 &0.532 &0.464 \\ ADASYN \citep{He2008ADASYNAS} & 0.806 & 0.647 & 0.514 &0.419 &0.364 \\ Ours &\textbf{0.966} &\textbf{0.919} &\textbf{0.862 } &\textbf{0.804} &\textbf{0.746} \\ \hline \end{tabular} \end{table} \section{Conclusions, Limitation, and Future Work} In this paper, we propose a new method for electrocardiogram data augmentation: we perturb the dataset along the geodesic in a Wasserstein space. We show that, after data augmentation, both the accuracy and the robustness of the classification results improve over the five ECG categories, which demonstrates the effectiveness of our method. Although we focus on ECG prediction in this work, our proposed data augmentation method could be applied to sequential data in other healthcare applications. Computational inefficiency may still be a significant obstacle for large-scale datasets; we would like to explore more advanced Wasserstein barycenter algorithms that improve efficiency. \acks{The work is partly supported by the Allegheny Health Network and the Mario Lemieux Center for Innovation and Research in EP.} \clearpage
\section{Introduction} The problem of coherence for a certain theory (like monoidal, monoidal closed\dots) consists in understanding which diagrams necessarily commute as a consequence of the axioms. One of the most famous results is Mac Lane's theorem on coherence for monoidal categories~\cite{mac_lane_natural_1963}: every diagram built up only from associators and unitors, which are the data that come with the definition of a monoidal category, commutes. One of the consequences of this fact is that every monoidal category is monoidally equivalent to a \emph{strict} monoidal category, where associators and unitors are, in fact, identities. What this tells us is that those operations one would like to regard as unimportant, such as the associators and unitors, really are unimportant. Solving the coherence problem for a theory, therefore, is fundamental to the complete understanding of the theory itself. In this article we aim to set down the foundations for the answer to an open question left by Kelly in his project of studying the coherence problem abstractly, started with~\cite{kelly_abstract_1972,kelly_many-variable_1972}. Kelly argued that coherence problems are concerned with categories carrying an extra structure: a collection of functors and natural transformations subject to various equational axioms. For example, in a monoidal category $\A$ we have $\otimes \colon \A^2\! \to \A$, $I \colon \A^0 \to \A$; if $\A$ is also closed then we would have a functor of mixed variance $(-) \implies (-) \colon \Op\A\! \times \A \to \A$. The natural transformations that are part of the data, like associativity in the monoidal case: \[ \alpha_{A,B,C} \colon (A \otimes B) \otimes C \to A \otimes (B \otimes C), \] connect not the basic functors directly, but rather functors obtained from them by \emph{iterated substitution}. 
By ``substitution'' we mean the process where, given functors \[ K \colon \A \times \Op\B \times \C \to \D, \quad F \colon \E \times \mathbb G \to \A, \quad G \colon \mathbb H \times \Op\L \to \B, \quad H \colon \Op\M \to \C \] we obtain the new functor \[ K(F,\Op G, H) \colon \E \times \mathbb G \times \Op{\mathbb H} \times \L \times \Op\M \to \D\label{substitution functors example} \] sending $(A,B,C,D,E)$ to $K(F(A,B),\Op G (C,D), H(E))$. Hence substitution generalises composition of functors, to which it reduces if we only consider one-variable functors. In the same way, the equational axioms for the structure, like the pentagonal axiom for monoidal categories: \[ \begin{tikzcd}[column sep={3.5em,between origins},row sep=2em] & & & (A \otimes B) \otimes (C \otimes D) \ar[drr,"\alpha_{A,B,C\otimes D}"] \\ \bigl( (A \otimes B) \otimes C \bigr) \otimes D \ar[urrr,"\alpha_{A\otimes B, C, D}"] \ar[dr,"\alpha_{A,B,C} \otimes D"'] & & & & & A \otimes \bigl( B \otimes (C \otimes D) \bigr) \\ & \bigl( A \otimes (B \otimes C) \bigr) \otimes D \ar[rrr,"\alpha_{A,B \otimes C, D}"'] & & & A \otimes \bigl( (B \otimes C) \otimes D \bigr) \ar[ur,"A \otimes \alpha_{B,C,D}"'] \end{tikzcd} \] involve natural transformations obtained from the basic ones by ``substituting functors into them and them into functors'', like $\alpha_{A \otimes B, C, D}$ and $\alpha_{A,B,C} \otimes D$ above. By substitution of functors into transformations and transformations into functors we mean therefore a generalised \emph{whiskering} operation or, more broadly, a generalised \emph{horizontal composition} of transformations. 
For these reasons Kelly argued in~\cite{kelly_many-variable_1972} that an abstract theory of coherence requires ``a tidy calculus of substitution'' for functors of many variables and appropriately general kinds of natural transformations, generalising the usual Godement calculus~\cite[Appendice]{godement_topologie_1958} for ordinary functors in one variable and ordinary natural transformations. (The ``five rules of the functorial calculus'' set down by Godement are in fact equivalent to saying that sequential composition of functors and vertical and horizontal composition of natural transformations are associative, unitary and satisfy the usual interchange law; see~\cite[Introduction]{santamaria_towards_2019} for more details.) One could ask why bother introducing the notion of substitution, given that it is not primitive, as the functor $K(F,\Op G, H)$ above can be easily seen to be the usual composite $K \circ (F \times \Op G \times H)$. Kelly's argument is that there is \emph{no need} to consider functors whose codomain is a product of categories, like $F \times \Op G \times H$, or the twisting functor $T(A,B) = (B,A)$, or the diagonal functor $\Delta \colon \A \to \A \times \A$ given by $\Delta(A) = (A,A)$, if we consider substitution as an operation on its own. However, take a Cartesian closed category $\A$, and consider the diagonal transformation $\delta_A \colon A \to A \times A$, the symmetry $\gamma_{A,B} \colon A \times B \to B \times A$ and the evaluation transformation $\eval A B \colon A \times (A \implies B) \to B$. It is true that we can see $\delta$ and $\gamma$ as transformations $\id \A \to \Delta$ and $\times \to \times \circ T$, but there is no way to involve $\Delta$ into the codomain of $\eval{}{}$, given that the variable $A$ appears covariantly and contravariantly at once. 
Kelly suggested adapting the notion of \emph{graph} for \emph{extranatural} transformations that he had introduced with Eilenberg~\cite{eilenberg_generalization_1966} to handle the case of natural transformations; that is, he proposed to consider natural transformations $\phi \colon F \to G$ between functors of many variables together with a graph $\Gamma(\phi)$ that tells us which arguments of $F$ and $G$ are to be equated when we write down the general component of $\phi$. The information carried by the graph is what allows us to get by without explicit mention of functors like $T$ and $\Delta$ and, moreover, it paves the way to the substitution calculus he sought. With the notion of ``graph of a natural transformation'', Kelly constructed a full Godement calculus for covariant functors only. His starting point was the observation that the usual Godement calculus essentially asserts that $\Cat$ is a 2-category, but this is saying less than saying that $\Cat$ is actually \emph{Cartesian closed}, $- \times \B$ having a right adjoint $[\B,-]$ where $[\B,\C]$ is the functor category. Since every Cartesian closed category is enriched over itself, we have that $\Cat$ is a $\Cat$-category, which is just another way to say 2-category. Now, vertical composition of natural transformations is embodied in $[\B,\C]$, but sequential composition of functors and horizontal composition of natural transformations are embodied in the functor \[ M \colon [\B,\C] \times [\A,\B] \to [\A,\C] \] given by the closed structure (using the adjunction and the evaluation map twice). What Kelly does, therefore, is to create a generalised functor category $\FC \B \C $ over a category of graphs $\Per$ and to show that the functor $\FC - -$ is the internal-hom of $\catover\Per$, which is then monoidal closed (in fact, far from being Cartesian or even symmetric), the left adjoint of $\FC \B -$ being denoted as $\ring - \B$. 
The analogue of the $M$ above, now of the form $\ring {\FC \B \C} {\FC \A \B} \to \FC \A \C,$ is what provides the desired substitution calculus. When trying to deal with the mixed-variance case, however, Kelly ran into problems. He considered the every-variable-twice extranatural transformations of~\cite{eilenberg_generalization_1966} and, although he got ``tantalizingly close'', to use his words, to a sensible calculus, he could not find a way to define a category of graphs that can handle cycles in a proper way. This is the reason for the ``I'' in the title \emph{Many-Variable Functorial Calculus, I} of~\cite{kelly_many-variable_1972}: he hoped to solve these issues in a future paper, which sadly has never seen the light of day. What we do in this article is, in fact, consider transformations between mixed-variance functors whose type is even more general than Eilenberg and Kelly's, corresponding to $\text{\uuline{G}}^*$ in~\cite{kelly_many-variable_1972}, recognising that they are a straightforward generalisation of \emph{dinatural transformations}~\cite{dubuc_dinatural_1970} in many variables. This poses an immediate, major obstacle: dinatural transformations notoriously fail to compose, as already observed by Dubuc and Street when they introduced them in 1970. There are certain conditions, known already to their discoverers, under which two dinatural transformations $\phi$ and $\psi$ compose: if either of them is natural, or if a certain square happens to be a pullback or a pushout, then the composite $\psi\circ\phi$ turns out to be dinatural. However, these are far from being satisfactory solutions for the compositionality problem, for either they are too restrictive (as in the first case), or they speak of properties enjoyed not by $\phi$ and $\psi$ themselves, but rather by other structures, namely one of the functors involved. 
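For the reader's convenience, we recall the classical one-variable definition from~\cite{dubuc_dinatural_1970}: a dinatural transformation $\phi \colon F \to G$ between functors $F, G \colon \Op\C \times \C \to \D$ consists of a family of morphisms $\phi_A \colon F(A,A) \to G(A,A)$, for $A$ an object of $\C$, such that for every $f \colon A \to B$ the hexagon identity
\[
G(A,f) \circ \phi_A \circ F(f,A) \;=\; G(f,B) \circ \phi_B \circ F(B,f) \colon F(B,A) \to G(A,B)
\]
holds. It is precisely this zig-zag between the two variables that obstructs the vertical composition of two such transformations in general.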
Many studies have been conducted about them~\cite{bainbridge_functorial_1990,blute_linear_1993,freyd_dinaturality_1992,girard_normal_1992,lataillade_dinatural_2009,mulry_categorical_1990,pare_dinatural_1998,pistone_dinaturality_2017,plotkin_logic_1993,simpson_characterisation_1993,wadler_theorems_1989}, and many attempts have been made to find a proper calculus for dinatural transformations, but until recently only \emph{ad hoc} solutions have been found and, ultimately, they have remained poorly understood. In 2003, Petri\'c~\cite{petric_g-dinaturality_2003} studied coherence results for bicartesian closed categories, and found himself in need, much like Kelly in his more general case, of understanding the compositionality properties of \emph{g-dinatural transformations}, which are slightly more general than the dinatural transformations of Dubuc and Street~\cite{dubuc_dinatural_1970}, in that their domain and codomain functors are allowed to have different variance and, moreover, they always come with a graph (whence the ``g'' in ``g-dinatural'') which reflects their signature. Petri\'c successfully managed to find a sufficient and essentially necessary condition for two consecutive g-dinatural transformations $\phi$ and $\psi$ to compose: if the composite graph, obtained by appropriately ``glueing'' together the graphs of $\phi$ and $\psi$, is acyclic, then $\psi\circ\phi$ is again g-dinatural. This result, which effectively solves the compositionality problem of dinatural transformations, surprisingly does not appear to be well known: fifteen years after Petri\'c's paper, the authors of the present article, completely oblivious to Petri\'c's contribution, independently re-discovered the same theorem, which was one of the results of~\cite{mccusker_compositionality_2018} and of the second author's PhD thesis~\cite{santamaria_towards_2019}\footnotemark. 
We, too, associated to each dinatural transformation a graph, inspired by Kelly's work in~\cite{kelly_many-variable_1972}, though slightly different from Petri\'c's; we also proved that acyclicity of the composite graph of $\phi$ and $\psi$ is ``essentially enough'' for $\psi\circ\phi$ to be dinatural. Our proof and Petri\'c's follow, deep down, the same argument, but the main difference is in the approach we took to formalise it: Petri\'c's approach was purely syntactic, using re-writing rules to show how the arbitrary morphism of the universal quantification of the dinaturality property for $\psi\circ\phi$ can ``travel through the composite graph'' when the graph is acyclic, whereas we showed this by interpreting the composite graph as a \emph{Petri Net}~\cite{petri_kommunikation_1962} and re-casting the dinaturality property of $\psi\circ\phi$ into a \emph{reachability} problem. We then proceeded to solve it by exploiting the general theory of Petri Nets: in other words, we took a more semantic approach. \footnotetext{We also presented our result as novel on various occasions, including in a plenary talk at the Category Theory conference in Edinburgh in 2019, yet nobody redirected us to Petri\'c's paper, which we found by chance only in September 2019.} Because of this appreciable difference between Petri\'c's proof of the compositionality result for dinatural transformations and ours, we believe our theorem is worth presenting in this paper despite the non-novelty of its statement; moreover, we give here a more direct proof for it than the one in~\cite{mccusker_compositionality_2018}: this is done in Section~\ref{section vertical compositionality}. In Section~\ref{chapter horizontal}, we define a working notion of horizontal composition, which we believe will play the role of substitution of dinaturals into dinaturals, precisely as horizontal composition of natural transformations does, as shown by Kelly in~\cite{kelly_many-variable_1972}. 
Next, we form a generalised functor category $\FC \B \C$ for these transformations (Definition~\ref{def: generalised functor category}). Finally, we prove that $\FC \B -$ has indeed a left adjoint $\ring - \B$, which gives us the definition of a category of formal substitutions $\ring \A \B$ generalising Kelly's. Although the road paved by Kelly towards a substitution calculus for dinatural transformations still stretches a long way, our work takes the first steps in the right direction towards a full understanding of the compositionality properties of dinaturals, which hopefully will be achieved soon. \paragraph{Notations} $\N$ is the set of natural numbers, including 0, and we shall ambiguously write $n$ for both the natural number $n$ and the set $\{1,\dots,n\}$. We denote by $\I$ the category with one object and one morphism. Let $\alpha\in \List{\{+,-\}}$, $\length\alpha=n$, with $\length{-}$ denoting the length function (and also the cardinality of an ordinary finite set). We refer to the $i$-th element of $\alpha$ as $\alpha_i$. Given a category $\C$, if $n\ge 1$, then we define $\C^\alpha=\C^{\alpha_1} \times \dots \times \C^{\alpha_n}$, with $\C^+=\C$ and $\C^-=\Op\C$, otherwise $\C^\alpha=\I$. Composition of morphisms $f \colon A \to B$ and $g \colon B \to C$ will be denoted by $g\circ f$, $gf$ or also $f;g$. The identity morphism of an object $A$ will be denoted by $\id A$, $1_A$ (possibly without subscripts, if there is no risk of confusion), or $A$ itself. Given $A$, $B$ and $C$ objects of a category $\C$ with coproducts, and given $f \colon A \to C$ and $g \colon B \to C$, we denote by $[f,g] \colon A + B \to C$ the unique map granted by the universal property of $+$. We use boldface capital letters $\bfA,\bfB\dots$ for tuples of objects, whose length will be specified in context. Say $\bfA=(A_1,\dots,A_n) \in \C^n$: we can see $\bfA$ as a function from the set $n$ to the objects of $\C$. 
If $\sigma \colon k \to n$ is a function of sets, the composite $\bfA \sigma$ is the tuple $(A_{\sigma 1}, \dots, A_{\sigma k})$. For $\bfB \in \C^n$ and $i \in \{1,\dots,n\}$, we denote by $\subst B X i$ the tuple obtained from $\bfB$ by replacing its $i$-th entry with $X$, and by $\subst B \cdot i$ the tuple obtained from $\bfB$ by removing its $i$-th entry altogether. In particular, the tuple $\subst A X i \sigma$ is equal to $(Y_1,\dots,Y_k)$ where \[ Y_j = \begin{cases} X & \sigma j=i \\ A_{\sigma j} & \sigma j \ne i \end{cases}. \] Let $\alpha \in \List\{+,-\}$, $\bfA = (A_1,\dots,A_n)$, $\sigma \colon \length\alpha \to n$, $i \in \{1,\dots,n\}$. We shall write $\substMV A X Y i \sigma$ for the tuple $(Z_1,\dots,Z_{\length\alpha})$ where \[ Z_j = \begin{cases} X & \sigma j = i, \alpha_j = - \\ Y & \sigma j = i, \alpha_j = + \\ A_{\sigma j} & \sigma j \ne i \end{cases}\label{not:A[X,Y/i]sigma} \] We shall also write $\subst B {\bfA} i$ for the tuple obtained from $\bf B$ by substituting $\bfA$ into its $i$-th entry. For example, if $\bfA = (A_1,\dots,A_n)$ and $\bfB = (B_1,\dots,B_m)$, we have \[ \subst B {\bfA} i = (B_1,\dots, B_{i-1},A_1,\dots A_n, B_{i+1}, \dots B_m). \] If $F \colon \B^{\alpha} \to \C$ is a functor, we define $\funminplus F {A_i} {B_i} i {\length\alpha}$ to be the following object (if $A_i$, $B_i$ are objects) or morphism (if they are morphisms) of $\C$: \[ \funminplus F {A_i} {B_i} i {\length\alpha}= F(X_1,\dots,X_{\length\alpha}) \text{ where } X_i = \begin{cases} A_i & \alpha_i = - \\ B_i & \alpha_i = + \end{cases} \] If $A_i = A$ and $B_i = B$ for all $i \in \length\alpha$, then we will simply write $\funminplusconst F A B$ for the above. We denote by $\Not\alpha$ the list obtained from $\alpha$ by swapping the signs. Also, we call $\Op F \colon \B^{\Not\alpha} \to \Op\C$ the \emph{opposite functor}, which is the obvious functor that acts like $F$ between opposite categories. 
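To make the index-juggling above concrete, here is a minimal Python sketch (ours, purely illustrative; the function names are not from the paper) of the reindexing $\bfA\sigma$ and the substitution operations. Tuples stand for tuples of objects, maps like $\sigma$ are given as tuples of 1-based indices, and variances as strings of \texttt{+}/\texttt{-}.

```python
# Illustrative sketch (ours) of the index notation: a tuple A in C^n is a
# function {1,...,n} -> Ob(C), modelled as a Python tuple; sigma is a
# tuple of 1-based indices, alpha a string of '+'/'-' variances.

def reindex(A, sigma):
    """A.sigma = (A_{sigma(1)}, ..., A_{sigma(k)})."""
    return tuple(A[s - 1] for s in sigma)

def subst(B, X, i):
    """B[X/i]: replace the i-th entry of B with X."""
    return B[:i - 1] + (X,) + B[i:]

def subst_mv(A, X, Y, i, sigma, alpha):
    """A[X,Y/i].sigma: reindex along sigma, putting X in the
    contravariant and Y in the covariant occurrences of slot i."""
    return tuple(
        (X if alpha[j] == '-' else Y) if s == i else A[s - 1]
        for j, s in enumerate(sigma)
    )
```

For instance, `reindex(('A', 'B', 'C'), (2, 1, 2))` yields `('B', 'A', 'B')`, matching the definition of $\bfA\sigma$.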
\section{Vertical compositionality of dinatural transformations}\label{section vertical compositionality} We begin by introducing the notion of \emph{transformation} between two functors of arbitrary variance and arity, which is simply a family of morphisms that does not have to satisfy any naturality condition. (This simple idea is, unsurprisingly, not new: it appears, for example, in~\cite{power_premonoidal_1997}.) A transformation comes equipped with a cospan in $\finset$ that tells us which variables of the functors involved are to be equated to each other in order to write down the general component of the family of morphisms. \begin{definition}\label{def:transformation} Let $\alpha$, $\beta \in \List\{+,-\}$, $F \colon \B^\alpha \to \C$, $G \colon \B^\beta \to \C$ be functors. A \emph{transformation} $\phi \colon F \to G$ \emph{of type} $ \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd} $ (with $n$ a positive integer) is a family of morphisms in $\C$ \[ \bigl( \phi_{\bfA} \colon F(\bfA\sigma) \to G(\bfA\tau) \bigr)_{\bfA \in \B^n} \] (i.e., according to our notations, a family $\phi_{A_1,\dots,A_n} \colon F(A_{\sigma 1}, \dots, A_{\sigma\length\alpha}) \to G(A_{\tau1},\dots,A_{\tau\length\beta})$). Notice that $\sigma$ and $\tau$ need not be injective or surjective, so we may have repeated or unused variables. Given another transformation $\phi' \colon F' \to G'$ of type $ \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma'"] & n & \length\beta \ar[l,"\tau'"'] \end{tikzcd}, $ we say that \[ \phi \sim {\phi'} \text{ if and only if there exists } \pi \colon n \to n \text{ permutation such that } \begin{cases} \sigma' = \pi\sigma \\ \tau' = \pi\tau \\ \phi'_\bfA = \phi_{\bfA\pi} \end{cases}. \] $\sim$ so defined is an equivalence relation and we denote by $\class\phi$ the equivalence class of $\phi$. 
\end{definition} \begin{remark} Two transformations are equivalent precisely when they differ only by a permutation of the indices in the cospan describing their type: they are ``essentially the same''. For this reason, from now on we shall drop an explicit reference to the equivalence class $\class\phi$ and just reason with the representative $\phi$, except when defining new operations on transformations, like the vertical composition below. \end{remark} \begin{definition}\label{def:vertical composition} Let $\phi \colon F \to G$ be a transformation as in Definition~\ref{def:transformation}, let $H \colon \B^\gamma \to \C$ be a functor and $\psi \colon G \to H$ a transformation of type $ \begin{tikzcd}[cramped,sep=small] \length\beta \ar[r,"\eta"] & m & \ar[l,"\theta"'] \length\gamma \end{tikzcd} $. The \emph{vertical composition} $\class\psi \circ \class\phi$ is defined as the equivalence class of the transformation $\psi\circ\phi$ of type $ \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\zeta\sigma"] & l & \ar[l,"\xi\theta"'] \length\gamma \end{tikzcd} $, where $\zeta$ and $\xi$ are given by a choice of a pushout \begin{equation}\label{eqn:pushout composite type} \begin{tikzcd} & & \length\gamma \ar[d,"\theta"] \\ & \length\beta \ar[d,"\tau"'] \ar[r,"\eta"] \ar[dr,phantom,very near end,"\ulcorner"] & m \ar[d,"\xi",dotted] \\ \length\alpha \ar[r,"\sigma"] & n \ar[r,"\zeta",dotted] & l \end{tikzcd} \end{equation} and the general component $(\psi\circ\phi)_{\bfA}$, for $\bfA \in \B^l$, is the composite: \[ \begin{tikzcd} F(\bfA\zeta\sigma) \ar[r,"\phi_{\bfA\zeta}"] & G(\bfA\zeta\tau)=G(\bfA\xi\eta) \ar[r,"\psi_{\bfA\xi}"] & H(\bfA\xi\theta) \end{tikzcd}. 
\] (Notice that by definition $\phi_{\bfA\zeta} = \phi_{(A_{\zeta1},\dots,A_{\zeta n})}$ requires that the $i$-th variable of $F$ be the $\sigma i$-th element of the list $(A_{\zeta1},\dots,A_{\zeta n})=\bfA\zeta$, which is $A_{\zeta\sigma i}$, hence the domain of $\phi_{\bfA\zeta}$ is indeed $F(\bfA\zeta\sigma)$.) \end{definition} Before giving some examples, we introduce the definition of dinaturality of a transformation in one of its variables, as a straightforward generalisation of the classical notion of dinatural transformation in one variable. Recall from p.~\pageref{not:A[X,Y/i]sigma} the meaning of the notation $\substMV \bfA X Y i \sigma$ for $\bfA\in\B^n$, $\sigma \colon \length\alpha \to n$ and $i \in \{1,\dots,n\}$. \begin{definition}\label{def:dinaturality in i-th variable} Let $\phi = (\phi_{A_1,\dots,A_n}) \colon F \to G$ be a transformation as in Definition~\ref{def:transformation}. For $i \in \{1,\dots,n\}$, we say that $\phi$ is \emph{dinatural in $A_i$} (or, more precisely, \emph{dinatural in its $i$-th variable}) if and only if for all $A_1,\dots,A_{i-1}, A_{i+1},\dots,A_n$ objects of $\B$ and for all $f \colon A \to B$ in $\B$ the following hexagon commutes: \[ \begin{tikzcd} & F(\subst \bfA A i \sigma) \ar[r,"\phi_{\subst \bfA A i}"] & G(\subst \bfA A i \tau) \ar[dr,"G(\substMV \bfA A f i \tau)"] \\ F(\substMV \bfA B A i \sigma) \ar[ur,"F(\substMV \bfA f A i \sigma)"] \ar[dr,"F(\substMV \bfA B f \sigma)"'] & & & G(\substMV \bfA A B i \tau) \\ & F(\subst \bfA B i \sigma) \ar[r,"\phi_{\subst \bfA B i}"'] & G(\subst \bfA B i \tau) \ar[ur,"G(\substMV \bfA f B i \tau)"'] \end{tikzcd} \] where $\bfA$ is the $n$-tuple $(A_1,\dots,A_n)$ of the objects above with an additional (unused in this definition) object $A_i$ of $\B$. \end{definition} Definition~\ref{def:dinaturality in i-th variable} reduces to the well-known notion of dinatural transformation when $\alpha=\beta=[-,+]$ and $n=1$. 
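The pushout of finite sets in diagram~(\ref{eqn:pushout composite type}) can be computed very concretely: one quotients the disjoint union $n + m$ by the relation $\tau(j) \sim \eta(j)$ for every $j \in \length\beta$. A small Python sketch of ours (union-find; maps as lists of 1-based indices):

```python
# Sketch (ours) of the pushout of finite sets used in vertical
# composition: given tau: |beta| -> n and eta: |beta| -> m, quotient the
# disjoint union n + m by tau(j) ~ eta(j) for every j.

def pushout(n, m, tau, eta):
    """Return (l, zeta, xi) with zeta: n -> l and xi: m -> l."""
    parent = list(range(n + m))        # 0..n-1 from n, n..n+m-1 from m

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    for t, e in zip(tau, eta):         # glue tau(j) with eta(j)
        parent[find(t - 1)] = find(n + e - 1)

    reps = {}                          # number the classes 1..l
    def cls(x):
        return reps.setdefault(find(x), len(reps) + 1)

    zeta = [cls(i) for i in range(n)]
    xi = [cls(n + i) for i in range(m)]
    return len(reps), zeta, xi
```

For example, `pushout(2, 2, [1, 1, 2], [1, 2, 2])` returns `(1, [1, 1], [1, 1])`: all variables get glued into a single one.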
Our generalisation allows multiple variables at once and the possibility for $F$ and $G$ of having an arbitrary number of copies of $\B$ and $\Op\B$ in their domain, for each variable $i \in \{1,\dots,n\}$. \begin{example}\label{ex:delta} Let $\C$ be a cartesian category. The diagonal transformation $\delta=(\delta_A \colon A \to A \times A)_{A \in \C}$, classically a natural transformation from $\id\C$ to the diagonal functor, can be equivalently seen in our notations as a transformation $\delta \colon \id\C \to \times$ of type $ \begin{tikzcd}[cramped,sep=small] 1 \ar[r] & 1 & \ar[l] 2 \end{tikzcd}. $ Of course $\delta$ is dinatural (in fact, natural) in its only variable. \end{example} \begin{example}\label{ex:eval} Let $\C$ be a cartesian closed category and consider the functor \[ \begin{tikzcd}[row sep=0em] \C \times \Op\C \times \C \ar[r,"T"] & \C \\ (X,Y,Z) \ar[r,|->] & X \times (Y \Rightarrow Z) \end{tikzcd} \] The evaluation $ \eval{}{} = \left(\eval A B \colon A \times (A \implies B) \to B\right)_{A,B \in \C} \colon T \to \id\C $ is a transformation of type \[ \begin{tikzcd}[row sep=0em] 3 \ar[r] & 2 & 1 \ar[l] \\ 1 \ar[r,|->] & 1 & 1 \ar[dl,|->,out=180,in=30] \\[-3pt] 2 \ar[ur,|->,out=0,in=210]& 2 & \\[-3pt] 3 \ar[ur,|->,out=0,in=210] \end{tikzcd} \] which is dinatural in both its variables. \end{example} \begin{example}\label{ex:Church numeral} Let $\C$ be any category, and call $\hom\C \colon \Op \C \times \C \to \Set$ the hom-functor of $\C$. The $n$-th numeral~\cite{dubuc_dinatural_1970}, for $n \in \N$, is the transformation $n \colon \hom\C \to \hom\C$ of type $ \begin{tikzcd}[cramped,sep=small] 2 \ar[r] & 1 & \ar[l] 2 \end{tikzcd} $ whose general component $n_A \colon \C(A,A) \to \C(A,A)$ is given, for $A \in \C$ and $g \colon A \to A$, by \[ n_A (g) = g^n, \] with $0_A (g) = \id A$. 
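In $\Set$, with endomaps encoded as dictionaries, the identity $(f \circ h)^n \circ f = f \circ (h \circ f)^n$ that underlies the behaviour of the numerals can be tested mechanically on small finite sets; a Python sketch of ours, purely as a sanity check:

```python
# Sanity check in Set (ours, illustrative): maps between finite sets as
# dicts, composition (f . g)(x) = f(g(x)), and the n-th numeral sending
# an endomap g to its n-th iterate g^n.

def compose(f, g):
    return {x: f[g[x]] for x in g}

def power(g, n):                        # g^n, with g^0 the identity
    p = {x: x for x in g}
    for _ in range(n):
        p = compose(g, p)
    return p

A, B = [0, 1], [0, 1, 2]
f = {0: 1, 1: 2}                        # f : A -> B
h = {0: 0, 1: 0, 2: 1}                  # h : B -> A
for n in range(5):                      # (f.h)^n . f  =  f . (h.f)^n
    assert compose(power(compose(f, h), n), f) \
        == compose(f, power(compose(h, f), n))
```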
Then $n$ is dinatural because for all $f \colon A \to B$ the following hexagon commutes: \[ \begin{tikzcd} & \C(B,B) \ar[r,"n_B"] & \C(B,B) \ar[dr,"-\circ f"] \\ \C(B,A) \ar[ur,"f\circ -"] \ar[dr,"-\circ f"'] & & & \C(A,B) \\ & \C(A,A) \ar[r,"n_A"'] & \C(A,A) \ar[ur,"f \circ -"'] \end{tikzcd} \] It is indeed true that for $h \colon B \to A$, $(f \circ h)^n \circ f = f \circ (h \circ f)^n$: for $n=0$ it follows from the identity axiom; for $n \ge 1$ it is a consequence of associativity of composition. \end{example} \paragraph{The graph of a transformation} Given a transformation $\phi$, we now define a graph that reflects its signature, which we shall use to prove our version of Petri\'c's theorem on compositionality of dinatural transformations~\cite{petric_g-dinaturality_2003}. This graph is, as a matter of fact, a \emph{string diagram} for the transformation. String diagrams were introduced by Eilenberg and Kelly in~\cite{eilenberg_generalization_1966} (indeed our graphs are inspired by theirs) and have had a great success in the study of coherence problems (\cite{kelly_coherence_1980,mac_lane_natural_1963}) and monoidal categories in general (\cite{joyal_geometry_1991,joyal_traced_1996}, a nice survey can be found in~\cite{selinger_survey_2010}). \begin{definition}\label{def:standard graph} Let $F \colon \B^\alpha \to \C$ and $G \colon \B^\beta \to \C$ be functors, and let $\phi \colon F \to G$ be a transformation of type $ \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \ar[l,"\tau"'] \length\beta \end{tikzcd} $. 
We define its \emph{standard graph} $\graph\phi = (P,T,\inp{(-)},\out{(-)})$ as a directed, bipartite graph as follows: \begin{itemize} \item $P=\length{\length\alpha + \length\beta}$ and $T=n$ are disjoint finite sets of vertices; \item $\inp{(-)},\out{(-)} \colon T \to \parts P$ are the input and output functions for elements in $T$: there is an arc from $p \in P$ to $t \in T$ if and only if $p \in \inp t$, and there is an arc from $t$ to $p$ if and only if $p \in \out t$. Writing $\injP {\length\alpha} \colon \length\alpha \to P$ and $\injP {\length\beta} \colon \length\beta \to P$ for the injections defined as follows: \[ \injP{\length\alpha} (x) = x, \quad \injP{\length\beta} (x) = \length\alpha + x, \] we have: \begin{align*} \inp{t} &= \{ \injP {\length\alpha} (p) \mid \sigma (p) = t,\, \alpha_p = + \} \, \cup \, \{ \injP {\length\beta} (p) \mid \tau (p) = t,\, \beta_p = - \} \\ \out{t} &= \{ \injP {\length\alpha} (p) \mid \sigma(p) = t,\, \alpha_p = - \} \, \cup \, \{ \injP {\length\beta} (p) \mid \tau (p) = t,\, \beta_p = + \} \end{align*} \end{itemize} In other words, elements of $P$ correspond to the arguments of $F$ and $G$, while those of $T$ to the variables of $\phi$. For $t \in T$, its inputs are the covariant arguments of $F$ and the contravariant arguments of $G$ which are mapped by $\sigma$ and $\tau$ to $t$; similarly for its outputs (swapping `covariant' and `contravariant'). \end{definition} Graphically, we draw elements of $P$ as white or grey boxes (if corresponding to a covariant or contravariant argument of a functor, respectively), and elements of $T$ as black squares. The boxes for the domain functor are drawn at the top, while those for the codomain at the bottom; the black squares in the middle. 
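Definition~\ref{def:standard graph} is entirely combinatorial, so it can be transcribed directly; a Python sketch of ours (variances as strings of \texttt{+}/\texttt{-}, $\sigma$ and $\tau$ as tuples of 1-based indices into the $n$ variables):

```python
# Direct transcription (ours) of the standard graph of a transformation:
# places are the arguments of F and G, numbered 1..|alpha|+|beta|;
# transitions are the variables 1..n.

def standard_graph(alpha, beta, sigma, tau, n):
    P = list(range(1, len(alpha) + len(beta) + 1))
    T = list(range(1, n + 1))
    inp = {t: set() for t in T}
    out = {t: set() for t in T}
    for p, (v, t) in enumerate(zip(alpha, sigma), start=1):
        (inp if v == '+' else out)[t].add(p)                 # F-arguments
    for p, (v, t) in enumerate(zip(beta, tau), start=1):
        (inp if v == '-' else out)[t].add(len(alpha) + p)    # G-arguments
    return P, T, inp, out
```

For the evaluation transformation of Example~\ref{ex:eval} ($\alpha = [+,-,+]$, $\beta = [+]$), `standard_graph('+-+', '+', (1, 1, 2), (2,), 2)` gives $\inp 1 = \{1\}$, $\out 1 = \{2\}$, $\inp 2 = \{3\}$, $\out 2 = \{4\}$.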
The graphs of the transformations given in examples \ref{ex:delta}-\ref{ex:Church numeral} are the following: \begin{itemize} \item $\delta=(\delta_A \colon A \to A \times A)_{A \in \C}$ (example \ref{ex:delta}): \[ \begin{tikzpicture} \matrix[row sep=1em,column sep=0.5em]{ & \node (1) [category] {}; \\ & \node (A) [component] {}; \\ \node (2) [category] {}; & & \node (3) [category] {}; \\ }; \graph[use existing nodes]{ 1 -> A -> {2,3}; }; \end{tikzpicture} \] \item $\eval{}{} = \left(\eval A B \colon A \times (A \implies B) \to B\right)_{A,B \in \C}$ (example \ref{ex:eval}): \[ \begin{tikzpicture} \matrix[row sep=1em, column sep=1em]{ \node (1) [category] {}; & & \node (2) [opCategory] {}; & & \node (3) [category] {}; \\ & \node (A) [component] {}; & & & \node (B) [component] {}; \\ & & & & \node (4) [category] {}; \\ }; \graph[use existing nodes]{ 1 -> A -> 2; 3 -> B -> 4; }; \end{tikzpicture} \] \item $n=(n_A \colon \C(A,A) \to \C(A,A))_{A \in \C}$ (example \ref{ex:Church numeral}): \[ \begin{tikzpicture} \matrix[row sep=1em, column sep=1em]{ \node (1) [opCategory] {}; & & \node (2) [category] {};\\ & \node (A) [component] {};\\ \node (3) [opCategory] {}; & & \node (4) [category] {};\\ }; \graph[use existing nodes]{ 2 -> A -> 1; 3 -> A -> 4; }; \end{tikzpicture} \] \end{itemize} \begin{remark} Each connected component of $\graph\phi$ corresponds to one variable of $\phi$: the arguments of the domain and codomain of $\phi$ corresponding to (white, grey) boxes belonging to the same connected component are all computed on the same object, when we write down the general component of $\phi$. \end{remark} \label{discussion:informal-reading-morphisms-in-a-box}This graphical counterpart of a transformation $\phi \colon F \to G$ permits us to represent, in an informal fashion, the dinaturality properties of $\phi$. 
By writing inside a box a morphism $f$ and reading a graph from top to bottom as ``compute $F$ in the morphisms as they are written in its corresponding boxes, compose that with an appropriate component of $\phi$, and compose that with $G$ computed in the morphisms as they are written in its boxes (treating an empty box as an identity)'', we can express the commutativity of a dinaturality diagram as an informal equation of graphs. (We shall make this precise in Proposition~\ref{prop:fired labelled marking is equal to original one}.) For instance, the dinaturality of examples~\ref{ex:delta}-\ref{ex:Church numeral} can be depicted as follows, where the upper legs of the diagrams are the left-hand sides of the equations: \begin{itemize} \item $\delta=(\delta_A \colon A \to A \times A)_{A \in \C}$ (example \ref{ex:delta}): \[ \begin{tikzcd} A \ar[r,"f"] \ar[d,"\delta_A"'] & B \ar[d,"\delta_B"] \\ A \times A \ar[r,"f \times f"] & B \times B \end{tikzcd} \qquad \begin{tikzpicture} \matrix[row sep=1em,column sep=0.5em]{ & \node (1) [category] {$f$}; \\ & \node (A) [component] {}; \\ \node (2) [category] {}; & & \node (3) [category] {}; \\ }; \graph[use existing nodes]{ 1 -> A -> {2,3}; }; \end{tikzpicture} \quad = \quad \begin{tikzpicture} \matrix[row sep=1em,column sep=0.5em]{ & \node (1) [category] {}; \\ & \node (A) [component] {}; \\ \node (2) [category] {$f$}; & & \node (3) [category] {$f$}; \\ }; \graph[use existing nodes]{ 1 -> A -> {2,3}; }; \end{tikzpicture} \] \item $\eval{}{} = \left(\eval A B \colon A \times (A \implies B) \to B\right)_{A,B \in \C}$ (example \ref{ex:eval}): \[\scriptstyle{ \begin{tikzcd} A \times (A' \implies B) \ar[r,"f\times (1 \implies 1)"] \ar[d,"1\times(f\implies 1)"'] & A' \times (A' \implies B) \ar[d,"\eval {A'} B"] \\ A \times (A \implies B) \ar[r,"\eval A B"] & B \end{tikzcd}} \qquad \begin{tikzpicture} \matrix[row sep=1em, column sep=.5em]{ \node (1) [category] {$f$}; & & \node (2) [opCategory] {}; & & \node (3) [category] {}; \\ & 
\node (A) [component] {}; & & & \node (B) [component] {}; \\ & & & & \node (4) [category] {}; \\ }; \graph[use existing nodes]{ 1 -> A -> 2; 3 -> B -> 4; }; \end{tikzpicture} \quad = \quad \begin{tikzpicture} \matrix[row sep=1em, column sep=.5em]{ \node (1) [category] {}; & & \node (2) [opCategory] {$f$}; & & \node (3) [category] {}; \\ & \node (A) [component] {}; & & & \node (B) [component] {}; \\ & & & & \node (4) [category] {}; \\ }; \graph[use existing nodes]{ 1 -> A -> 2; 3 -> B -> 4; }; \end{tikzpicture} \] \[\scriptstyle{ \begin{tikzcd} A \times (A \implies B) \ar[r,"1\times(1\implies g)"] \ar[d,"\eval A B"'] & A \times (A \implies B') \ar[d,"\eval A {B'}"] \\ B \ar[r,"g"] & B' \end{tikzcd}} \qquad \begin{tikzpicture} \matrix[row sep=1em, column sep=.5em]{ \node (1) [category] {}; & & \node (2) [opCategory] {}; & & \node (3) [category] {$g$}; \\ & \node (A) [component] {}; & & & \node (B) [component] {}; \\ & & & & \node (4) [category] {}; \\ }; \graph[use existing nodes]{ 1 -> A -> 2; 3 -> B -> 4; }; \end{tikzpicture} \quad = \quad \begin{tikzpicture} \matrix[row sep=1em, column sep=.5em]{ \node (1) [category] {}; & & \node (2) [opCategory] {}; & & \node (3) [category] {}; \\ & \node (A) [component] {}; & & & \node (B) [component] {}; \\ & & & & \node (4) [category] {$g$}; \\ }; \graph[use existing nodes]{ 1 -> A -> 2; 3 -> B -> 4; }; \end{tikzpicture} \] \item $n=(n_A \colon \C(A,A) \to \C(A,A))_{A \in \C}$ (example \ref{ex:Church numeral}): \[\scriptstyle{ \begin{tikzcd}[column sep={.5cm}] & \C(B,B) \ar[r,"n_B"] & \C(B,B) \ar[dr,"{\C(f,1)}"] \\ \C(B,A) \ar[ur,"{\C(1,f)}"] \ar[dr,"{\C(f,1)}"'] & & & \C(A,B) \\ & \C(A,A) \ar[r,"n_A"] & \C(A,A) \ar[ur,"{\C(1,f)}"'] \end{tikzcd}} \qquad \begin{tikzpicture} \matrix[row sep=1em, column sep=1em]{ \node (1) [opCategory] {}; & & \node (2) [category] {$f$};\\ & \node (A) [component] {};\\ \node (3) [opCategory] {$f$}; & & \node (4) [category] {};\\ }; \graph[use existing nodes]{ 2 -> A -> 1; 3 -> A -> 4; }; 
\end{tikzpicture} \quad = \quad \begin{tikzpicture} \matrix[row sep=1em, column sep=1em]{ \node (1) [opCategory] {$f$}; & & \node (2) [category] {};\\ & \node (A) [component] {};\\ \node (3) [opCategory] {}; & & \node (4) [category] {$f$};\\ }; \graph[use existing nodes]{ 2 -> A -> 1; 3 -> A -> 4; }; \end{tikzpicture} \] \end{itemize} All in all, the dinaturality condition can be stated, in graphical terms, as follows: \emph{$\phi$ is dinatural if and only if having in $\graph\phi$ one $f$ in all white boxes at the top and grey boxes at the bottom is the same as having one $f$ in all grey boxes at the top and white boxes at the bottom}. Not only does $\graph\phi$ give an intuitive representation of the dinaturality properties of $\phi$, but also of the process of composition of transformations. Given two transformations $\phi \colon F \to G$ and $\psi \colon G \to H$ as in Definition~\ref{def:vertical composition}, the act of computing the pushout~(\ref{eqn:pushout composite type}) corresponds to ``glueing together'' $\graph\phi$ and $\graph\psi$ along the boxes corresponding to the functor $G$ (more precisely, one takes the disjoint union of $\graph\phi$ and $\graph\psi$ and then identifies the $G$-boxes), obtaining a composite graph which we will call ${\graph\psi} \circ {\graph\phi}$. The number of its connected components is, indeed, the cardinality of the pushout object $l$. That being done, $\graph{\psi\circ\phi}$ is obtained by collapsing each connected component of $\graph\psi\circ\graph\phi$ into a single black square, while keeping the $F$- and $H$-boxes. The following example shows this process. The graph $\graph\psi\circ\graph\phi$ will play a crucial role in the compositionality problem of $\psi\circ\phi$. 
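Both the number of weakly connected components of the glued graph and its acyclicity (the property that, as mentioned in the introduction, governs dinaturality of the composite) are easy to compute once the graph is represented as a plain set of directed edges; a Python sketch of ours:

```python
from collections import defaultdict

# Sketch (ours): a graph is a set of directed edges (u, v) over arbitrary
# vertex names; glueing two graphs along shared G-box names is then just
# the union of their edge sets.

def components(edges):
    """Number of weakly connected components (union-find)."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(x) for x in parent})

def is_acyclic(edges):
    """Kahn's algorithm: acyclic iff repeatedly peeling off vertices
    with no incoming arcs exhausts all vertices."""
    succ, indeg, nodes = defaultdict(set), defaultdict(int), set()
    for u, v in edges:
        nodes |= {u, v}
        if v not in succ[u]:
            succ[u].add(v)
            indeg[v] += 1
    frontier = [v for v in nodes if indeg[v] == 0]
    seen = 0
    while frontier:
        u = frontier.pop()
        seen += 1
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                frontier.append(v)
    return seen == len(nodes)
```

On the glued graph of the example below, `components` returns 1 (one variable for the composite) and `is_acyclic` returns `True`.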
\begin{example}\label{ex:acyclic-example} Suppose that $\C$ is cartesian closed, fix an object $R$ in $\C$, consider functors \[ \begin{tikzcd}[row sep=0em,column sep=1em] \C \times \Op\C \ar[r,"F"] & \C \\ (A,B) \ar[r,|->] & A \times (B \Rightarrow R) \end{tikzcd} \quad \begin{tikzcd}[row sep=0em,column sep=1em] \C \times \C \times \Op\C \ar[r,"G"] & \C \\ (A,B,C) \ar[r,|->] & A \times B \times (C \Rightarrow R) \end{tikzcd} \quad \begin{tikzcd}[row sep=0em,column sep=1.5em] \C \ar[r,"H"] & \C \\ A \ar[r,|->] & A \times R \end{tikzcd} \] and transformations $\phi = \delta \times \id{(-)\Rightarrow R} \colon F \to G$ and $\psi = \id\C \times \eval {(-)} R \colon G \to H$ of types, respectively, \[ \begin{tikzcd}[row sep=0em] 2 \ar[r,"\sigma"] & 2 & \ar[l,"\tau"'] 3 \\ 1 \ar[r,|->] & 1 & \ar[l,|->] 1 \\[-3pt] 2 \ar[r,|->] & 2 & \ar[ul,|->,out=180,in=-30] 2 \\[-3pt] & & \ar[ul,|->,out=180,in=-20] 3 \end{tikzcd} \quad\text{and}\quad \begin{tikzcd}[row sep=0em] 3 \ar[r,"\eta"] & 2 & \ar[l,"\theta"'] 1 \\ 1 \ar[r,|->] & 1 & \ar[l,|->] 1 \\[-3pt] 2 \ar[r,|->] & 2 \\[-3pt] 3 \ar[ur,|->,out=0,in=210] \end{tikzcd} \] so that \[ \phi_{A,B} = \delta_A \times \id{B\implies R} \colon F(A,B) \to G(A,A,B), \, \psi_{A,B} = \id A \times \eval B R \colon G(A,B,B) \to H(A). 
\] Then $\psi \circ \phi$ has type $\begin{tikzcd}[cramped,sep=small] 2 \ar[r] & 1 & \ar[l] 1 \end{tikzcd}$ and $\graph{\psi}\circ\graph{\phi}$ is: \[ \begin{tikzpicture} \matrix[column sep=2.4mm,row sep=0.4cm]{ & \node (A) [category] {}; & & & \node(F) [opCategory] {};\\ & \node (B) [component] {}; & & & \node(J) [component] {};\\ \node (C) [category] {}; & & \node(D) [category] {}; & & \node(E) [opCategory] {};\\ \node (H) [component] {}; & & & \node(I) [component] {};\\ \node (G) [category] {}; & & & \\ }; \graph[use existing nodes]{ A -> B -> {C, D}; C -> H -> G; D -> I -> E -> J -> F; }; \end{tikzpicture} \] The two upper boxes at the top correspond to the arguments of $F$, the three in the middle to the arguments of $G$, and the bottom one to the only argument of $H$. This is a connected graph (indeed, $\psi\circ\phi$ depends only on one variable) and by collapsing it into a single black box we obtain $\graph{\psi\circ\phi}$ as it is according to Definition~\ref{def:standard graph}: \[ \begin{tikzpicture} \matrix[column sep=.5em,row sep=1em]{ \node (1) [category] {}; & & \node (2) [opCategory] {};\\ & \node (A) [component] {}; \\ & \node (3) [category] {};\\ }; \graph[use existing nodes]{ 1 -> A -> {2,3}; }; \end{tikzpicture} \] We have that $\psi \circ \phi$ is a dinatural transformation. (This is one of the transformations studied by Girard, Scedrov and Scott in~\cite{girard_normal_1992}.) 
The following string-diagrammatic argument proves that: \[ \begin{split} \begin{tikzpicture}[ampersand replacement=\&] \matrix[column sep=2.4mm,row sep=0.4cm]{ \& \node (A) [category] {$f$}; \& \& \& \node(F) [opCategory] {};\\ \& \node (B) [component] {}; \& \& \& \node(J) [component] {};\\ \node (C) [category] {}; \& \& \node(D) [category] {}; \& \& \node(E) [opCategory] {};\\ \node (H) [component] {}; \& \& \& \node(I) [component] {};\\ \node (G) [category] {}; \& \& \& \\ }; \graph[use existing nodes]{ A -> B -> {C, D}; C -> H -> G; D -> I -> E -> J -> F; }; \end{tikzpicture} \quad &= \quad \begin{tikzpicture}[ampersand replacement=\&] \matrix[column sep=2.4mm,row sep=0.4cm]{ \& \node (A) [category] {}; \& \& \& \node(F) [opCategory] {};\\ \& \node (B) [component] {}; \& \& \& \node(J) [component] {};\\ \node (C) [category] {$f$}; \& \& \node(D) [category] {$f$}; \& \& \node(E) [opCategory] {};\\ \node (H) [component] {}; \& \& \& \node(I) [component] {};\\ \node (G) [category] {}; \& \& \& \\ }; \graph[use existing nodes]{ A -> B -> {C, D}; C -> H -> G; D -> I -> E -> J -> F; }; \end{tikzpicture} \quad = \quad \begin{tikzpicture}[ampersand replacement=\&] \matrix[column sep=2.4mm,row sep=0.4cm]{ \& \node (A) [category] {}; \& \& \& \node(F) [opCategory] {};\\ \& \node (B) [component] {}; \& \& \& \node(J) [component] {};\\ \node (C) [category] {}; \& \& \node(D) [category] {$f$}; \& \& \node(E) [opCategory] {};\\ \node (H) [component] {}; \& \& \& \node(I) [component] {};\\ \node (G) [category] {$f$}; \& \& \& \\ }; \graph[use existing nodes]{ A -> B -> {C, D}; C -> H -> G; D -> I -> E -> J -> F; }; \end{tikzpicture} \\ &= \quad \begin{tikzpicture}[ampersand replacement=\&] \matrix[column sep=2.4mm,row sep=0.4cm]{ \& \node (A) [category] {}; \& \& \& \node(F) [opCategory] {};\\ \& \node (B) [component] {}; \& \& \& \node(J) [component] {};\\ \node (C) [category] {}; \& \& \node(D) [category] {}; \& \& \node(E) [opCategory] {$f$};\\ \node (H) [component] {}; \& 
\& \& \node(I) [component] {};\\ \node (G) [category] {$f$}; \& \& \& \\ }; \graph[use existing nodes]{ A -> B -> {C, D}; C -> H -> G; D -> I -> E -> J -> F; }; \end{tikzpicture} \quad = \quad \begin{tikzpicture}[ampersand replacement=\&] \matrix[column sep=2.4mm,row sep=0.4cm]{ \& \node (A) [category] {}; \& \& \& \node(F) [opCategory] {$f$};\\ \& \node (B) [component] {}; \& \& \& \node(J) [component] {};\\ \node (C) [category] {}; \& \& \node(D) [category] {}; \& \& \node(E) [opCategory] {};\\ \node (H) [component] {}; \& \& \& \node(I) [component] {};\\ \node (G) [category] {$f$}; \& \& \& \\ }; \graph[use existing nodes]{ A -> B -> {C, D}; C -> H -> G; D -> I -> E -> J -> F; }; \end{tikzpicture} \end{split} \] The first equation is due to dinaturality of $\phi$ in its first variable; the second to dinaturality of $\psi$ in its first variable; the third to dinaturality of $\psi$ in its second variable; the fourth equation holds by dinaturality of $\phi$ in its second variable. \end{example} The string-diagrammatic argument above is the essence of our proof of Petrić's theorem: we will interpret $\graph\psi \circ \graph\phi$, for arbitrary transformations $\phi$ and $\psi$ as a \emph{Petri Net} whose set of places is $P$ and of transitions is $T$. The dinaturality of $\psi\circ\phi$ will be expressed as a reachability problem and we will prove that, if $\graph\psi \circ \graph\phi$ is acyclic, then $\psi\circ\phi$ is always dinatural because we can always ``move the $f$'s'' from the upper-white boxes and lower-grey boxes all the way to the upper-grey boxes and lower-white boxes, as we did in Example~\ref{ex:acyclic-example}. \paragraph{Petri Nets}\label{section: Petri Nets} Petri Nets were invented by Carl Adam Petri in 1962 in \cite{petri_kommunikation_1962}, and have been used since then to model concurrent systems, resource sensitivity and many dynamic systems. 
A nice survey of their properties was written by Murata in \cite{murata_petri_1989}, to which we refer the reader for more details and examples. Here we shall limit ourselves to the definitions and properties that we will make use of in this paper. \begin{definition}\label{def:Petri Net} A \emph{Petri Net} $N$ is a tuple $(P,T,\inp{(-)},\out{(-)})$ where $P$ and $T$ are disjoint, finite sets, and $\inp{(-)},\out{(-)}\colon T \to \parts{P}$ are functions. Elements of $P$ are called \emph{places}, while elements of $T$ are called \emph{transitions}. For $t$ a transition, $\inp t$ is the set of \emph{inputs} of $t$, and $\out t$ is the set of its \emph{outputs}. A \emph{marking} for $N$ is a function $M \colon P \to \N$. \end{definition} Graphically, the elements of $P$ and $T$ are drawn as light-blue circles and black bars respectively. Notice that the graph of a transformation is, as a matter of fact, a Petri Net. We can represent a marking $M$ by drawing, in each place $p$, $M(p)$ \emph{tokens} (black dots). Note that there is at most one arrow from a node to another. With a little abuse of notation, we extend the input and output notation to places too, where \[ \inp p = \{ t \in T \mid p \in \out{t} \}, \qquad \out p = \{ t \in T \mid p \in \inp t \}. \] A pair of a place $p$ and a transition $t$ where $p$ is both an input and an output of $t$ is called a \emph{self-loop}. For the purposes of this article, we shall only consider Petri Nets that contain no self-loops. \begin{definition} Let $N$ be a Petri Net. A place $p$ of $N$ is said to be a \emph{source} if $\inp p = \emptyset$, whereas it is said to be a \emph{sink} if $\out p = \emptyset$. A source (or sink) place $p$ is said to be \emph{proper} if $\out p \ne \emptyset$ (or $\inp p \ne \emptyset$, respectively). \end{definition} We shall need a notion of (directed) path in a Petri Net, which we introduce now. It coincides with the usual notion of path in a graph.
\begin{definition} Let $N$ be a Petri Net. A \emph{path} from a vertex $v$ to a vertex $w$ is a finite sequence of vertices $\pi=(v_0,\dots,v_l)$ where $l \ge 1$, $v_0=v$, $v_l=w$ and, for all $i \in \{0,\dots,l-1\}$, $v_{i+1} \in v_i \! \LargerCdot \! \cup \! \LargerCdot \! v_i $. Two vertices are said to be \emph{connected} if there is a path from one to the other. If every vertex in $N$ is connected with every other vertex, then $N$ is said to be \emph{weakly connected}. A \emph{directed path} from a vertex $v$ to a vertex $w$ is a finite sequence of vertices $\pi=(v_0,\dots,v_l)$ such that $v=v_0$, $w=v_l$ and, for all $i \in \{0,\dots,l-1\}$, $v_{i+1} \in v_i \! \LargerCdot \!$. In this case we say that the path $\pi$ has length $l$. A directed path from a vertex to itself is called a \emph{cycle}, or \emph{loop}; if $N$ does not have cycles, then it is said to be \emph{acyclic}. Two vertices $v$ and $w$ are said to be \emph{directly connected} if there is a directed path either from $v$ to $w$ or from $w$ to $v$. \end{definition} We can give a dynamic flavour to Petri Nets by allowing the tokens to ``flow'' through the nets, that is, by allowing markings to change according to the following \emph{transition firing rule}. \begin{definition} Let $N=(P,T,\inp{(-)},\out{(-)})$ be a Petri Net, and $M$ a marking for $N$. A transition $t$ is said to be \emph{enabled} if and only if for all $p \in \inp t$ we have $M(p) \ge 1$.
An enabled transition may \emph{fire}; the firing of an enabled transition $t$ removes one token from each $p \in \inp t$ and adds one token to each $p \in \out t$, generating the following new marking $M'$: \[ M'(p) = \begin{cases} M(p) -1 & p \in \inp t \\ M(p)+1 & p \in \out t \\ M(p) & \text{otherwise} \end{cases} \] \end{definition} \begin{example}\label{my-example} Consider the following net: \[ \begin{tikzpicture}[yscale=0.5,xscale=0.70] \foreach \i/\u in {1/1,2/1,3/2} { \foreach \j/\v in {1/0,2/0,3/0,4/1} { \node[place,tokens=\u,label=above:$p_\i$](p\i) at (2*\i,0){}; \node[place,tokens=\v,label=below:$q_\j$](q\j) at (2*\j-2,-4){}; \node[place,label=below:$q_5$](q5) at (8,-4){}; \node[place,tokens=1,label=above:$p_4$](p4) at (8,0){}; \node[place,label=above:$p_5$](p5) at (10,0){}; \node[transition,label=right:{$t$}] at (4,-2) {} edge [pre] (p\i) edge [post] (q\j) edge [post] (q5); \node[transition,label=right:$t'$] at (10,-2) {} edge [pre] (q5) edge [pre] (p4) edge [post] (p5); } } \end{tikzpicture} \] There are two transitions, $t$ and $t'$, but only $t$ is enabled. 
Firing $t$ will change the state of the net as follows: \[ \begin{tikzpicture}[yscale=0.5,xscale=0.70] \foreach \i/\u in {1/0,2/0,3/1} { \foreach \j/\v in {1/1,2/1,3/1,4/2} { \node[place,tokens=\u,label=above:$p_\i$](p\i) at (2*\i,0){}; \node[place,tokens=\v,label=below:$q_\j$](q\j) at (2*\j-2,-4){}; \node[place,tokens=1,label=below:$q_5$](q5) at (8,-4){}; \node[place,tokens=1,label=above:$p_4$](p4) at (8,0){}; \node[place,label=above:$p_5$](p5) at (10,0){}; \node[transition,label=right:{$t$}] at (4,-2) {} edge [pre] (p\i) edge [post] (q\j) edge [post] (q5); \node[transition,label=right:$t'$] at (10,-2) {} edge [pre] (q5) edge [pre] (p4) edge [post] (p5); } } \end{tikzpicture} \] Now $t$ is disabled, but $t'$ is enabled, and by firing it we obtain: \[ \begin{tikzpicture}[yscale=0.5,xscale=0.70] \foreach \i/\u in {1/0,2/0,3/1} { \foreach \j/\v in {1/1,2/1,3/1,4/2} { \node[place,tokens=\u,label=above:$p_\i$](p\i) at (2*\i,0){}; \node[place,tokens=\v,label=below:$q_\j$](q\j) at (2*\j-2,-4){}; \node[place,label=below:$q_5$](q5) at (8,-4){}; \node[place,label=above:$p_4$](p4) at (8,0){}; \node[place,tokens=1,label=above:$p_5$](p5) at (10,0){}; \node[transition,label=right:{$t$}] at (4,-2) {} edge [pre] (p\i) edge [post] (q\j) edge [post] (q5); \node[transition,label=right:$t'$] at (10,-2) {} edge [pre] (q5) edge [pre] (p4) edge [post] (p5); } } \end{tikzpicture} \] \end{example} \paragraph{The reachability problem and dinaturality} Suppose we have a Petri Net $N$ and an initial marking $M_0$. The firing of an enabled transition in $N$ will change the distribution of tokens from $M_0$ to $M_1$, according to the firing transition rule, therefore a sequence of firings of enabled transitions yields a sequence of markings. A \emph{firing sequence} is denoted by $\sigma = (t_0,\dots,t_n)$ where the $t_i$'s are transitions which fire. 
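The firing rule, and the run of Example~\ref{my-example}, can be replayed mechanically. The following is a minimal sketch (the names \texttt{PetriNet}, \texttt{enabled} and \texttt{fire} are ours, not taken from any Petri-net library), encoding the net of the example:

```python
from dataclasses import dataclass

@dataclass
class PetriNet:
    inp: dict  # transition -> set of input places
    out: dict  # transition -> set of output places

def enabled(net, M, t):
    # A transition is enabled iff every input place carries at least one token
    return all(M[p] >= 1 for p in net.inp[t])

def fire(net, M, t):
    # Firing removes one token from each input place
    # and adds one token to each output place
    assert enabled(net, M, t)
    M2 = dict(M)
    for p in net.inp[t]:
        M2[p] -= 1
    for p in net.out[t]:
        M2[p] += 1
    return M2

# The net of the example above: t consumes p1, p2, p3 and produces q1, ..., q5;
# t' consumes q5 and p4 and produces p5.
net = PetriNet(
    inp={"t": {"p1", "p2", "p3"}, "t'": {"q5", "p4"}},
    out={"t": {"q1", "q2", "q3", "q4", "q5"}, "t'": {"p5"}},
)
M0 = {"p1": 1, "p2": 1, "p3": 2, "p4": 1, "p5": 0,
      "q1": 0, "q2": 0, "q3": 0, "q4": 1, "q5": 0}
M1 = fire(net, M0, "t")    # after this, t is disabled and t' is enabled
M2 = fire(net, M1, "t'")
```

Firing is implemented exactly as in the definition above: since we exclude self-loops, no place is both consumed and produced by the same transition.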
\begin{definition} A marking $M$ for a Petri Net $N$ is said to be \emph{reachable} from a marking $M_0$ if there exists a firing sequence $(t_1,\dots,t_n)$ and markings $M_1,\dots,M_n$ where $M_i$ is obtained from $M_{i-1}$ by firing transition $t_i$, for $i \in \{1,\dots,n\}$, and $M_{n}=M$. \end{definition} The reachability problem for Petri Nets consists in checking whether a marking $M$ is or is not reachable from $M_0$. It has been shown that the reachability problem is decidable \cite{kosaraju_decidability_1982,mayr_algorithm_1981}. \begin{remark}\label{rem:preliminary-discussion} The crucial observation that will be at the core of our proof of Petri\'c's theorem is that the firing of an enabled transition in the graph of a dinatural transformation $\phi$ corresponds, under certain circumstances, to the dinaturality condition of $\phi$ in one of its variables. Take, for instance, the $n$-th numeral transformation (see example~\ref{ex:Church numeral}). Call the only transition $t$, and consider the following marking $M_0$: \[ \begin{tikzpicture}[scale=0.7] \node[opCategory] (1) at (-1,1) {}; \node[category,tokens=1] (2) at (1,1) {}; \node[opCategory,tokens=1] (3) at (-1,-1) {}; \node[category] (4) at (1,-1) {}; \node[component,label=left:$t$] {} edge[pre] (2) edge[pre] (3) edge[post] (1) edge[post] (4); \end{tikzpicture} \] Transition $t$ is enabled, and once it fires we obtain the following marking $M_1$: \[ \begin{tikzpicture}[scale=0.7] \node[opCategory] (1) at (-1,1) {}; \node[category,tokens=1] (2) at (1,1) {}; \node[opCategory,tokens=1] (3) at (-1,-1) {}; \node[category] (4) at (1,-1) {}; \node[component,label=left:$t$] {} edge[pre] (2) edge[pre] (3) edge[post] (1) edge[post] (4); \draw[->,snake=snake,segment amplitude=.4mm,segment length=2mm,line after snake=1mm] (1.5,0) -- node[above]{$t$} node[below]{fires} (3.5,0); \begin{scope}[xshift=5cm] \node[opCategory,tokens=1] (1) at (-1,1) {}; \node[category] (2) at (1,1) {}; \node[opCategory] (3) at (-1,-1) 
{}; \node[category,tokens=1] (4) at (1,-1) {}; \node[component,label=left:$t$] {} edge[pre] (2) edge[pre] (3) edge[post] (1) edge[post] (4); \end{scope} \end{tikzpicture} \] The striking resemblance with the graphical version of the dinaturality condition for $n$ is evident: \[ \begin{tikzpicture} \matrix[row sep=1em, column sep=1em]{ \node (1) [opCategory] {}; & & \node (2) [category] {$f$};\\ & \node (A) [component] {};\\ \node (3) [opCategory] {$f$}; & & \node (4) [category] {};\\ }; \graph[use existing nodes]{ 2 -> A -> 1; 3 -> A -> 4; }; \end{tikzpicture} \quad = \quad \begin{tikzpicture} \matrix[row sep=1em, column sep=1em]{ \node (1) [opCategory] {$f$}; & & \node (2) [category] {};\\ & \node (A) [component] {};\\ \node (3) [opCategory] {}; & & \node (4) [category] {$f$};\\ }; \graph[use existing nodes]{ 2 -> A -> 1; 3 -> A -> 4; }; \end{tikzpicture} \] By treating the ``morphism $f$ in a box'' as a ``token in a place'' of $\graph n$, we have seen that the firing of $t$ generates an equation in $\Set$, namely the one that expresses the dinaturality of $n$. \end{remark} Suppose now we have two composable transformations $\phi$ and $\psi$ dinatural in all their variables, in a category $\C$, together with a graph. We shall make precise how certain markings of $\graph\psi\circ\graph\phi$ correspond to morphisms in $\C$, and how the firing of an enabled transition corresponds to applying the dinaturality of $\phi$ or $\psi$ in one of their variables, thus creating an equation of morphisms in $\C$. Therefore, if the firing of a single transition generates an equality in the category, a sequence of firings of enabled transitions yields a chain of equalities. By individuating two markings $M_0$ and $M_d$, each corresponding to a leg of the dinaturality hexagon for $\psi\circ\phi$ we want to prove is commutative, and by showing that $M_d$ is reachable from $M_0$, we shall have proved that $\psi\circ\phi$ is dinatural. 
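For small nets such as those considered here, reachability can be decided by brute-force search over markings. The sketch below (our own naive encoding; the search terminates whenever the set of reachable markings is finite, which is the case in our examples) checks that the marking $M_1$ of Remark~\ref{rem:preliminary-discussion} is reachable from $M_0$:

```python
from collections import deque

def fire(inp, out, M, t):
    # Transition firing rule: one token off each input, one onto each output
    M2 = dict(M)
    for p in inp[t]:
        M2[p] -= 1
    for p in out[t]:
        M2[p] += 1
    return M2

def reachable(inp, out, start, target):
    # Breadth-first search over markings; terminates whenever the set of
    # markings reachable from `start` is finite.
    seen, queue = set(), deque([start])
    while queue:
        M = queue.popleft()
        if M == target:
            return True
        key = frozenset(M.items())
        if key in seen:
            continue
        seen.add(key)
        for t in inp:
            if all(M[p] >= 1 for p in inp[t]):  # t is enabled
                queue.append(fire(inp, out, M, t))
    return False

# The net of the numeral transformation: a single transition t consuming
# places "2" and "3" and producing places "1" and "4".
inp = {"t": {"2", "3"}}
out = {"t": {"1", "4"}}
M0 = {"1": 0, "2": 1, "3": 1, "4": 0}
M1 = {"1": 1, "2": 0, "3": 0, "4": 1}
```

Here `reachable(inp, out, M0, M1)` holds by firing $t$ once, whereas $M_0$ is not reachable from $M_1$: in $M_1$ the transition $t$ is disabled, so $M_1$ is the only marking reachable from itself.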
We are now ready to present and prove the first main result of this article. For the rest of this section, fix transformations $\phi \colon F_1 \to F_2$ and $\psi \colon F_2 \to F_3$ where \begin{itemize} \item $F_i \colon \B^{\alpha^i} \to \C$ is a functor for all $i \in \{1,2,3\}$, \item $\phi$ and $\psi$ have type, respectively, \[ \begin{tikzcd} \length{\alpha^1} \ar[r,"\sigma_1"] & k_1 & \length{\alpha^{2}} \ar[l,"\tau_1"'] \end{tikzcd} \qquad \text{and} \qquad \begin{tikzcd} \length{\alpha^2} \ar[r,"\sigma_2"] & k_2 & \length{\alpha^{3}}. \ar[l,"\tau_2"'] \end{tikzcd} \] \end{itemize} We shall establish a sufficient condition for the dinaturality of $\psi \circ \phi$ in some of its variables. However, since we are interested in analysing the dinaturality of the composition in each of its variables \emph{separately}, we start by assuming that $\psi\circ\phi$ depends on only one variable, i.e. has type $ \begin{tikzcd}[cramped,sep=small] \length{\alpha^1} \ar[r] & 1 & \length{\alpha^{3}} \ar[l], \end{tikzcd} $ and that $\phi$ and $\psi$ are dinatural in all their variables. 
In this case, we have to show that the following hexagon commutes for all $f \colon A \to B$, recalling that $\funminplusconst {F_1} B A$ is the result of applying functor $F_1$ in $B$ in all its contravariant arguments and in $A$ in all its covariant ones: \begin{equation}\label{eqn:compositionality-hexagon} \begin{tikzcd}[column sep=1cm] & \funminplusconst {F_1} A A \ar[r,"\phi_{A\dots A}"] & \funminplusconst {F_2} A A \ar[r,"\psi_{A \dots A}"] & \funminplusconst {F_3} A A \ar[dr,"\funminplusconst {F_3} 1 f"] \\ \funminplusconst {F_1} B A \ar[ur,"\funminplusconst {F_1} f 1"] \ar[dr,"\funminplusconst {F_1} 1 f"'] & & & & \funminplusconst {F_3} A B \\ & \funminplusconst {F_1} B B \ar[r,"\phi_{B\dots B}"'] & \funminplusconst {F_2} B B \ar[r,"\psi_{B \dots B}"'] & \funminplusconst {F_3} B B \ar[ur,"\funminplusconst {F_3} f 1"'] \end{tikzcd} \end{equation} The theorem we want to prove is then the following. \begin{theorem}\label{theorem:acyclic implies dinatural} Let $\phi$ and $\psi$ be transformations which are dinatural in all their variables and such that $\psi\circ\phi$ depends on only one variable. If \,$\graph\psi \circ \graph\phi$ is acyclic, then $\psi\circ\phi$ is a dinatural transformation. \end{theorem} The above is a direct generalisation of Eilenberg and Kelly's result on \emph{extranatural transformations} \cite{eilenberg_generalization_1966}, which are dinatural transformations where either the domain or the codomain functor is constant. For example, $\eval{}{}$ is extranatural in its first variable. They worked with the additional assumption that $\graph\phi$ and $\graph\psi$ do not contain any ramifications, that is, the white and grey boxes are always linked in pairs, and they also proved that if the composite graph is acyclic, then the composite transformation is again extranatural. 
Their condition is also ``essentially necessary'' in the sense that if we do create a cycle upon constructing $\graph\psi \circ \graph\phi$, then that means we are in a situation like this: \[ \begin{tikzpicture} \matrix[column sep=1em,row sep=1em]{ & \node[component] (A) {}; \\ \node[opCategory] (1) {}; & & \node[category] (2) {};\\ & \node[component] (B) {};\\ }; \graph[use existing nodes]{ 1 -> A -> 2 -> B -> 1; }; \end{tikzpicture} \] where we have a transformation between constant functors. Such a family of morphisms is (extra)natural precisely when it is constant (that is, if every component is equal to the same morphism) on each connected component of the domain category. As already said in Remark~\ref{rem:preliminary-discussion}, the key to prove this theorem is to see $\graph\psi \circ \graph\phi$ as a Petri Net, reducing the dinaturality of $\psi\circ\phi$ to the reachability problem for two markings we shall individuate. We begin by unfolding the definition of $\graph\psi \circ \graph\phi$: we have $\graph\psi \circ \graph\phi = (P,T,\inp{(-)},\out{(-)})$ where $P = \length{\alpha^1} + \length{\alpha^2} + \length{\alpha^{3}}$, $T = k_1 + k_2$ and, indicating with $\injP i \colon \length{\alpha^i} \to P$ and $\injT i \colon k_i \to T$ the injections defined similarly to $\injP{\length\alpha}$ and $\injP{\length\beta}$ in Definition~\ref{def:standard graph}, \begin{equation}\label{input-output-transitions} \begin{aligned} \inp{(\injT i (t))} &= \, \{ \injP i (p) \mid \sigma_i(p) = t,\, \alpha^i_p = + \} \, \cup \, \{ \injP {i+1} (p) \mid \tau_i(p) = t,\, \alpha^{i+1}_p = - \}, \\ \out{(\injT i (t))} &= \, \{ \injP i (p) \mid \sigma_i(p) = t,\, \alpha^i_p = - \} \, \cup \, \{ \injP {i+1} (p) \mid \tau_i(p) = t,\, \alpha^{i+1}_p = + \}. \end{aligned} \end{equation} For the rest of this section, we shall reserve the names $P$ and $T$ for the sets of places and transitions of $\graph\psi \circ \graph\phi$. 
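The definition in~(\ref{input-output-transitions}) is entirely mechanical, so the composite net can be computed directly from the raw type data. In the sketch below (our own encoding, with $0$-based indices, and with pairs $(i,p)$ and $(i,t)$ standing for the injections $\injP i$ and $\injT i$), we build $\graph\psi \circ \graph\phi$ for two transformations in one variable each, with $\alpha^1 = (+)$, $\alpha^2 = (-,+)$ and $\alpha^3 = (+)$:

```python
def composite_graph(alphas, sigmas, taus):
    """alphas[i]: list of '+'/'-' variances of the (i+1)-th functor;
    sigmas[i][p], taus[i][p]: the transition attached to the p-th argument.
    (For a sketch we recover the transitions from the images of the maps.)"""
    inp, out = {}, {}
    for i in (0, 1):  # 0-based version of i in {1, 2}
        for t in set(sigmas[i]) | set(taus[i]):
            inp[(i, t)] = (
                {(i, p) for p, v in enumerate(alphas[i])
                 if sigmas[i][p] == t and v == '+'} |
                {(i + 1, p) for p, v in enumerate(alphas[i + 1])
                 if taus[i][p] == t and v == '-'})
            out[(i, t)] = (
                {(i, p) for p, v in enumerate(alphas[i])
                 if sigmas[i][p] == t and v == '-'} |
                {(i + 1, p) for p, v in enumerate(alphas[i + 1])
                 if taus[i][p] == t and v == '+'})
    return inp, out

# phi and psi each dinatural in a single variable:
inp, out = composite_graph([['+'], ['-', '+'], ['+']],
                           [[0], [0, 0]], [[0, 0], [0]])
```

The resulting net is cyclic: the $\phi$-transition $(0,0)$ feeds place $(1,1)$, which is an input of the $\psi$-transition $(1,0)$, which in turn feeds place $(1,0)$, an input of $(0,0)$. This is precisely the kind of situation in which compositionality can fail.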
\begin{remark}\label{rem:graph of a transformation is FBCF} Since $\sigma_i$ and $\tau_i$ are functions, we have that $\length{\inp p}, \length{\out p} \le 1$ and also that $\length{\inp p \cup \out p }\ge 1$ for all $p\in P$. With a little abuse of notation, if $\inp p = \{t\}$ then we shall simply write $\inp p = t$, and similarly for $\out p$. \end{remark} \paragraph{Labelled markings as morphisms} We now show how to formally translate certain markings of $\graph\psi \circ \graph\phi$ into actual morphisms of $\C$. The idea is to treat every token in the net as a fixed, arbitrary morphism $f \colon A \to B$ of $\C$ and then use the idea discussed on p.~\pageref{discussion:informal-reading-morphisms-in-a-box}. However, not all possible markings of $\graph\psi \circ \graph\phi$ have a corresponding morphism in $\C$. For example, if $M$ is a marking and $p$ is a place such that $M(p)>1$, it makes no sense to ``compute a functor $F_i$ in $f$ twice'' in the argument of $F_i$ corresponding to $p$. Hence, only markings $M \colon P \to \{0,1\}$ can be considered. Moreover, we have to be careful about \emph{where} the marking puts tokens: if a token corresponds to a morphism $f \colon A \to B$, we have to make sure that there are no two consecutive tokens (more generally, we have to make sure that there is at most one token in every directed path), otherwise a naive attempt to assign a morphism to that marking might run into type-checking problems.
For instance, consider the diagonal transformation in a Cartesian category $\C$ (example \ref{ex:delta}) and the following marking: \[ \begin{tikzpicture} \matrix[row sep=1em,column sep=0.5em]{ & \node (1) [category,tokens=1] {}; \\ & \node (A) [component] {}; \\ \node (2) [category,tokens=1] {}; & & \node (3) [category,tokens=1] {}; \\ }; \graph[use existing nodes]{ 1 -> A -> {2,3}; }; \end{tikzpicture} \] The token on the top white box should be interpreted as $\id\C(f) \colon A \to B$, hence the black middle box should correspond to the $B$-th component of the family $\delta$, that is $\delta_B \colon B \to B \times B$. However, the bottom two white boxes are read as $f \times f \colon A \times A \to B \times B$, which cannot be composed with $\delta_B$. We therefore introduce the notion of \emph{labelled marking}, which consists of a marking together with a labelling of the transitions, such that a certain coherence condition between the two is satisfied. This constraint will ensure that every labelled marking corresponds to a morphism of $\C$. We will then use only \emph{some} labelled markings to prove our compositionality theorem. \begin{definition}\label{def:labelled marking} Consider $f \colon A \to B$ a morphism in $\C$. A \emph{labelled marking} for $\graph\psi \circ \graph\phi$ is a triple $(M,L,f)$ where functions $M \colon P \to \{0,1\}$ and $L \colon T \to \{A,B\}$ are such that for all $p \in P$ \[ M(p)=1 \implies L(\inp p) = A, \, L(\out p ) = B \] \[ M(p)=0 \implies L(\inp p ) = L(\out p ) \] These conditions need to be satisfied only when they make sense; for example if $M(p) = 1$ and $\inp p = \emptyset$, condition $L(\inp p) = A$ is to be ignored. 
\end{definition} We are now ready to assign a morphism in $\C$ to every labelled marking by reading a token in a place as a morphism $f$ in one of the arguments of a functor, while an empty place corresponds to the identity morphism of the label of the transition of which the place is an input or an output. \begin{definition}\label{def:morphism for labelled marking} Let $(M,L,f\colon A \to B)$ be a labelled marking. We define a morphism $\mor M L f$ in $\C$ as follows: \[ \mor M L f = F_1(x^1_1,\dots,x^1_{\length{\alpha^1}});\phi_{X^1_1\dots X^1_{k_1}} ; F_2(x^2_1,\dots,x^2_{\length{\alpha^2}}); \psi_{X^2_1\dots X^2_{k_2}} ; F_{3}(x^{3}_1,\dots,x^{3}_{\length{\alpha^{3}}}) \] where \[ x^i_j = \begin{cases} f \quad &M(\injP i (j)) = 1 \\ \id{L(t)} \quad & M(\injP i (j)) = 0 \land t \in \inp {\injP i (j)} \cup \out {\injP i (j)} \end{cases} \qquad X^i_j = L(\injT i (j)). \] for all $i \in \{1,2,3\}$ and $j\in\{1,\dots,\length{\alpha^i}\}$. (Recall that $\injP i \colon \length{\alpha^i} \to P$ and $\injT i \colon k_i \to T$ are the injections defined similarly to $\injP{\length\alpha}$ and $\injP{\length\beta}$ in Definition~\ref{def:standard graph}.) \end{definition} It is easy to see that $\mor M L f$ is indeed a morphism in $\C$, by checking that the maps it is made of are actually composable, using the definition of labelled marking and of $\graph\psi \circ \graph\phi$. What are the labelled markings corresponding to the two legs of diagram~(\ref{eqn:compositionality-hexagon})? In the lower leg of the hexagon, $f$ appears in all the covariant arguments of $F_1$ and the contravariant ones of $F_{3}$, which correspond in $\graph\psi \circ \graph\phi$ to those places which have no inputs (in Petri nets terminology, \emph{sources}), and all variables of $\phi$ are equal to $B$; in the upper leg, $f$ appears in those arguments corresponding to places with no outputs (\emph{sinks}), and $\psi$ is computed in $A$ in each variable.
Hence, the lower leg is $\mor {M_0} {L_0} f$ while the upper leg is $\mor {M_d} {L_d} f$, where: \begin{equation}\label{eqn:markings-definitions} \begin{aligned} M_0(p)&=\begin{cases} 1 & \inp p = \emptyset \\ 0 & \text{otherwise} \end{cases} \quad & M_d(p)&=\begin{cases} 1 & \out p = \emptyset \\ 0 & \text{otherwise} \end{cases} \\[.5em] L_0(t) &= B & L_d(t) &= A \end{aligned} \end{equation} for all $p\in P$ and $t \in T$. It is an immediate consequence of the definition that $(M_0,L_0,f)$ and $(M_d,L_d,f)$ so defined are labelled markings. We aim to show that $M_d$ is reachable from $M_0$ by means of a firing sequence that preserves the morphism $\mor {M_0} {L_0} f$. In order to do so, we now prove that firing a $B$-labelled transition in an arbitrary labelled marking $(M,L,f)$ generates a new labelled marking, whose associated morphism in $\C$ is still equal to $\mor M L f$. \begin{proposition}\label{prop:fired labelled marking is equal to original one} Let $(M,L,f)$ be a labelled marking, $t \in T$ an enabled transition such that $L(t) = B$. Consider \begin{equation}\label{markings after firing definition} \begin{tikzcd}[row sep=0em,column sep=1em,ampersand replacement=\&] P \ar[r,"M'"] \& \{0,1\} \& \& \& \& \& T \ar[r,"L'"] \& \{A,B\} \\ p \ar[r,|->] \& \begin{cases} 0 & p \in \inp t \\ 1 & p \in \out t \\ M(p) & \text{otherwise} \end{cases} \& \& \& \& \& s \ar[r,|->] \& \begin{cases} A & s = t \\ L(s) & s \ne t \end{cases} \end{tikzcd} \end{equation} Then $(M',L',f)$ is a labelled marking and $\mor M L f = \mor {M'} {L'} f$. \end{proposition} \begin{proof} By definition of labelled marking, if $\out t \ne \emptyset$ and $L(t) = B$ then $M(p) = 0$ for all $p \in \out t$, because if there were a $p \in \out t$ with $M(p) = 1$, then $L(t) = A$. $M'$ is therefore the marking obtained from $M$ when $t$ fires once. It is easy to see that $(M',L',f)$ is a labelled marking by simply checking the definition. We have now to prove that $\mor M L f = \mor {M'} {L'} f$. 
Since $t \in T$, we have $t = \injT u (i)$ for some $u \in \{1,2\}$ and $i \in \{1,\dots,k_u\}$. The fact that $t$ is enabled in $M$, together with the definition of $\graph\psi \circ \graph\phi$ in~(\ref{input-output-transitions}), ensures that, in the notations of Definition~\ref{def:morphism for labelled marking}, \begin{align*} \sigma_u(j) = i \land \alpha^u_j = + &\implies x^u_j = f \\ \sigma_u(j) = i \land \alpha^u_j = - &\implies x^u_j = \id B \\ \tau_u(j) = i \land \alpha^{u+1}_j = + &\implies x^{u+1}_j = \id B \\ \tau_u(j) = i \land \alpha^{u+1}_j = - &\implies x^{u+1}_j = f \end{align*} hence we can apply the dinaturality of $\phi$ or $\psi$ (if, respectively, $u=1$ or $u=2$) in its $i$-th variable. To conclude, one has to show that the morphism obtained in doing so is the same as $\mor {M'} {L'} f$, which is a routine verification. The details can be found in the second author's thesis~\cite{santamaria_towards_2019}.\qed \end{proof} It immediately follows that a sequence of firings of $B$-labelled transitions gives rise to a labelled marking whose associated morphism is still equal to the original one, as the following corollary states. \begin{corollary}\label{cor:reachability-implies-equality} Let $(M,L,f)$ be a labelled marking, $M'$ a marking reachable from $M$ by firing only $B$-labelled transitions $t_1,\dots,t_m$, and $L' \colon T \to \{A,B\}$ defined as: \[ L'(s) = \begin{cases} A & s = t_i \text{ for some $i \in \{1,\dots,m\}$} \\ L(s) & \text{otherwise} \end{cases} \] Then $(M', L',f)$ is a labelled marking and $\mor M L f = \mor {M'} {L'} f$. \end{corollary} Now all we have to show is that $M_d$ is reachable from $M_0$ (see~(\ref{eqn:markings-definitions})) by only firing $B$-labelled transitions: since every transition is labelled $B$ at the start and is relabelled $A$ as soon as it fires, it is enough to make sure that each transition fires at most once.
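This reachability claim can itself be tested mechanically. The following sketch (our own names; it assumes, as everywhere in this section, a net without self-loops) computes the only-source and only-sink markings of~(\ref{eqn:markings-definitions}) and greedily fires each transition exactly once, always picking one that is currently enabled:

```python
def fire_each_once(places, transitions, t_inp, t_out):
    """Return True if firing each transition exactly once, greedily choosing
    any currently enabled one, turns the only-source marking into the
    only-sink marking. Raises StopIteration if the net gets stuck
    (as it does, for instance, on a cycle)."""
    p_inp = {p: [t for t in transitions if p in t_out[t]] for p in places}
    p_out = {p: [t for t in transitions if p in t_inp[t]] for p in places}
    M  = {p: 1 if not p_inp[p] else 0 for p in places}  # only-source marking
    Md = {p: 1 if not p_out[p] else 0 for p in places}  # only-sink marking
    remaining = set(transitions)
    while remaining:
        # pick any enabled transition that has not fired yet
        t = next(t for t in sorted(remaining)
                 if all(M[p] >= 1 for p in t_inp[t]))
        for p in t_inp[t]:
            M[p] -= 1
        for p in t_out[t]:
            M[p] += 1
        remaining.remove(t)
    return M == Md

# A small acyclic FBCF net: a -> t1 -> b -> t2 -> c
ok = fire_each_once(["a", "b", "c"], ["t1", "t2"],
                    {"t1": {"a"}, "t2": {"b"}},
                    {"t1": {"b"}, "t2": {"c"}})
```

On this chain the greedy choice always exists, and the run ends in the only-sink marking; the theorem below shows that this is no accident for acyclic FBCF nets.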
We shall work with a special class of Petri Nets, to which our $\graph\psi \circ \graph\phi$ belongs (Remark~\ref{rem:graph of a transformation is FBCF}), where all places have at most one input and at most one output. \begin{definition}\label{def:FBCF petri net} A Petri Net is said to be \emph{forward-backward conflict free} (FBCF) if for every place $p$ we have $\length{\inp p} \le 1$ and $\length{\out p} \le 1$. \end{definition} \begin{theorem}\label{thm:acyclic-implies-reachable} Let $N$ be an acyclic FBCF Petri Net and let $M_0$, $M_d$ be the only-source and only-sink markings as in~(\ref{eqn:markings-definitions}). Then $M_d$ is reachable from $M_0$ by firing each transition exactly once. \end{theorem} \begin{proof} We proceed by induction on the number of transitions in $N$. If $N$ has no transitions at all, then every place is both a source and a sink, so $M_0$ and $M_d$ coincide and there is nothing to prove. Now let $n \ge 0$, suppose that the theorem holds for Petri Nets with $n$ transitions, and assume that $N$ has $n+1$ transitions. Given two transitions $t$ and $t'$, define $t \le t'$ if and only if $t = t'$ or there exists a directed path from $t$ to $t'$. The relation $\le$ so defined is reflexive and transitive by construction, and it is antisymmetric because $N$ is acyclic, hence it is a partial order on $T$, the set of transitions of $N$. Now, $T$ is finite and non-empty, hence it has at least one minimal element $t_0$. Since $t_0$ is minimal, every (if any) input of $t_0$ is a source, therefore $t_0$ is enabled in $M_0$. Fire $t_0$ and call $M_1$ the resulting marking. Consider the subnet $N'$ obtained from $N$ by removing $t_0$ and all its inputs. Since $N$ is forward-backward conflict free, each output of $t_0$ had $t_0$ as its only input, so all the outputs of $t_0$ are sources in $N'$.
This means that $N'$ is an acyclic FBCF Petri Net with $n$ transitions: by inductive hypothesis, $M_d$ (restricted to $N'$) is reachable from $M_1$ in $N'$ (note that $M_1$, restricted to $N'$, is precisely the only-source marking of $N'$), and therefore $M_d$ is reachable from $M_0$ in $N$.\qed \end{proof} \begin{remark} Theorem~\ref{thm:acyclic-implies-reachable} is an instance of Hiraishi and Ichikawa's result on reachability for arbitrary markings in arbitrary acyclic Petri Nets~\cite{hiraishi_class_1988}. Our proof is an adapted version of theirs for the special case of FBCF Petri Nets and the particular markings $M_0$ and $M_d$ that put one token precisely in every source and in every sink respectively. \end{remark} We are now ready to give an alternative proof of the first half of Petri\'{c}'s theorem~\cite{petric_g-dinaturality_2003}, which solved the compositionality problem of dinatural transformations. \begin{proofMainTheorem} Let $f \colon A \to B$ be a morphism in $\C$, and define labelled markings $(M_0,L_0,f)$ and $(M_d,L_d,f)$ as in~(\ref{eqn:markings-definitions}). Then $\mor {M_0} {L_0} f$ is the lower leg of~(\ref{eqn:compositionality-hexagon}), while $\mor {M_d} {L_d} f$ is the upper leg. By Theorem~\ref{thm:acyclic-implies-reachable}, marking $M_d$ is reachable from $M_0$ by firing each transition of $\graph\psi \circ \graph\phi$ exactly once, hence by only firing $B$-labelled transitions. By Corollary~\ref{cor:reachability-implies-equality}, the hexagon~(\ref{eqn:compositionality-hexagon}) commutes. \qed \end{proofMainTheorem} Theorem~\ref{theorem:acyclic implies dinatural} can then be straightforwardly generalised to the case in which $\psi\circ\phi$ depends on $n$ variables for an arbitrary $n$.
Suppose then that the type of $\psi\circ\phi$ is given by the following pushout: \begin{equation}\label{eqn:pushout2} \mkern0mu \begin{tikzcd} & & \length{\alpha^3} \ar[d, "\tau_2"] \\ & \length{\alpha^2} \ar[d, "\tau_1"'] \ar[dr, phantom, "\ulcorner" very near start] \ar[r, "\sigma_2"] & k_2 \ar[d, dotted, "\xi"] \\ \length{\alpha^1} \ar[r, "\sigma_1"] & k_1 \ar[r, dotted, "\zeta"] & n \end{tikzcd} \end{equation} $\graph\psi \circ \graph\phi$ now has $n$ connected components, and a sufficient condition for the dinaturality of $\psi\circ\phi$ in its $i$-th variable is that $\phi$ and $\psi$ are dinatural in all those variables of theirs which are ``involved'', as it were, in the $i$-th connected component of $\graph\psi \circ \graph\phi$ \emph{and} such connected component is acyclic. \begin{theorem}\label{theorem:acyclicity implies dinaturality GENERAL} In the notations above, let $i\in \{1,\dots,n\}$. If $\phi$ and $\psi$ are dinatural in all the variables in, respectively, $\zeta^{-1}\{i\}$ and $\xi^{-1}\{i\}$ (with $\zeta$ and $\xi$ given by the pushout~(\ref{eqn:pushout2})), and if the $i$-th connected component of $\graph\psi \circ \graph\phi$ is acyclic, then $\psi\circ\phi$ is dinatural in its $i$-th variable. \end{theorem} We then have a straightforward corollary. \begin{corollary} Let $\phi\colon F \to G$ and $\psi \colon G \to H$ be transformations which are dinatural in all their variables. If $\graph\psi \circ \graph\phi$ is acyclic, then $\psi\circ\phi$ is dinatural in all its variables. 
\end{corollary} One can generalise Theorem~\ref{theorem:acyclicity implies dinaturality GENERAL} even further by considering $k$ consecutive dinatural transformations $\phi_1,\dots,\phi_k$, instead of just two, and reasoning about the acyclicity of the connected components of the composite graph $\graph{\phi_k} \circ \dots \circ \graph{\phi_1}$, obtained by ``glueing together'' the standard graphs of the $\phi_i$'s along the common interfaces (formally this would be a composite performed in the category $\gc$ introduced in Definition~\ref{definition:graph category}). \begin{theorem}\label{theorem:compositionality with complicated graphs} Let $\phi_j \colon F_j \to F_{j+1}$ be transformations of type $ \begin{tikzcd}[cramped,sep=small] \length{\alpha^j} \ar[r,"\sigma_j"] & n_j & \ar[l,"\tau_j"'] \length{\alpha^{j+1}} \end{tikzcd} $ for $j \in \{1,\dots,k\}$. Suppose that the type of $\phi_k \circ \dots \circ \phi_1$ is computed by the following pushout-pasting: \[ \begin{tikzcd} & & & & \length{\alpha^{k+1}} \ar[d,"\tau_k"'] \\ & & & \length{\alpha^k} \ar[r,"\sigma_k"] \ar[d] \ar[dr, phantom, "\ulcorner" very near start] & n_k \ar[d] \\ & & \length{\alpha^3} \ar[r] \ar[ur,sloped,phantom,"\dots"] \ar[d,"\tau_2"'] \ar[dr, phantom, "\ulcorner" very near start] & \dots \ar[r] \ar[d] \ar[dr, phantom, "\ulcorner" very near start] & \dots \ar[d] \\ & \length{\alpha^2} \ar[r,"\sigma_2"] \ar[d,"\tau_1"'] \ar[dr, phantom, "\ulcorner" very near start] & n_2 \ar[r] \ar[d] \ar[dr, phantom, "\ulcorner" very near start] & \dots \ar[r] \ar[d] \ar[dr, phantom, "\ulcorner" very near start] & \dots \ar[d] \\ \length{\alpha^1} \ar[r,"\sigma_1"] & n_1 \ar[r] & \dots \ar[r] & \dots \ar[r] & l \end{tikzcd} \] Let $\xi_j \colon n_j \to l$ be the map given by any path of morphisms from $n_j$ to $l$ in the above diagram.
If the $i$-th connected component of $\graph{\phi_k} \circ \dots \circ \graph{\phi_1}$ (composite calculated in $\gc$) is acyclic and if for all $j \in \{1,\dots,k\}$ and all $x \in \xi_j^{-1} \{i\}$ the transformation $\phi_j$ is dinatural in its $x$-th variable, then $\phi_k \circ \dots \circ \phi_1$ is dinatural in its $i$-th variable. \end{theorem} \begin{proof} The proof is essentially the same as that of Theorem~\ref{theorem:acyclicity implies dinaturality GENERAL}, with $k$ transformations instead of two: one defines labelled markings $(M_0, L_0,f)$ and $(M_d,L_d,f)$ corresponding to the two legs of the dinaturality hexagon of $\phi_k \circ \dots \circ \phi_1$ in its $i$-th variable, and uses Theorem~\ref{thm:acyclic-implies-reachable} to prove that $M_d$ is reachable from $M_0$, thus showing that the hexagon commutes. \qed \end{proof} \begin{remark} In~\cite{girard_normal_1992}, the authors had to prove the dinaturality of families of morphisms obtained by composing several transformations that are dinatural by assumption. They showed that the dinaturality hexagons for such composites commute by filling them with a trellis of commutative diagrams stating functoriality properties and the dinaturality of the building blocks. Theorem~\ref{theorem:compositionality with complicated graphs} provides an alternative way to do that: one can simply draw the composite graph of the involved transformations, notice that the resulting Petri Net is always acyclic, and thus infer the dinaturality of the composite. \end{remark} \paragraph{An ``essentially necessary'' condition for compositionality} The other half of Petri\'c's theorem can also be shown with the help of the theory of Petri Nets.
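Before stating the precise result, here is a minimal concrete instance of the phenomenon: when a cycle sits between the sources and the sinks, the transitions on it can never become enabled, so no token can ever traverse it. A self-contained sketch (the names are ours):

```python
def enabled(M, inputs):
    # A transition is enabled iff all its input places carry a token
    return all(M[p] >= 1 for p in inputs)

# A cycle p1 -> A -> p2 -> B -> p1, with a proper source s feeding A:
inp = {"A": {"s", "p1"}, "B": {"p2"}}
M0 = {"s": 1, "p1": 0, "p2": 0}   # only-source marking: a token on s only
Md = {"s": 0, "p1": 0, "p2": 0}   # only-sink marking: there are no sinks

# Neither transition is enabled in M0, so M0 is the only reachable marking;
# since M0 differs from Md, the only-sink marking is unreachable.
stuck = not any(enabled(M0, inp[t]) for t in inp)
```

Transition \texttt{A} waits for a token on \texttt{p1}, which only \texttt{B} can provide, and \texttt{B} waits for a token on \texttt{p2}, which only \texttt{A} can provide: the net is deadlocked from the start.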
One can prove that if $N$ is a weakly connected FBCF Petri Net with at least one proper source or one proper sink and $M_0$ and $M_d$ are the only-source and only-sink markings as before, then a necessary condition for the reachability of $M_d$ from $M_0$ is that every transition in $N$ fires at least once. The intuition behind this is that there must be at least one transition $t$ which fires, because $M_0$ and $M_d$ are not equal (under the hypothesis that $N$ has at least one proper sink or proper source), and if a transition $t$ fires once, then all the transitions that are connected to it must fire as well: in order for $t$ to fire it must be enabled, hence those transitions which are between the source places and $t$ must fire to move the tokens to the input places of $t$; likewise, if $t$ fires, then also all those transitions ``on the way'' from $t$ to the sink places must fire, otherwise some tokens would get stuck in the middle of the net, in disagreement with $M_d$. As a consequence of this fact, we have a sort of inverse of Theorem~\ref{thm:acyclic-implies-reachable}. \begin{theorem}\label{thm:reachability-implies-acyclicity} Let $N$ be weakly connected with at least one proper source or one proper sink place. If $M_d$ is reachable from $M_0$, then $N$ is acyclic. \end{theorem} \begin{proof} Suppose that $N$ contains a directed, circular path $\pi=(v_0,\dots,v_{2l})$ where $v_0 = v_{2l}$ is a place. Then each $v_{2i}$ is not a source, given that it is the output of $v_{2i-1}$, hence $M_0(v_{2i})=0$ for all $i \in \{1,\dots,l\}$. This means that $v_{2i+1}$ is disabled in $M_0$, therefore it will not fire when transforming $M_0$ into $M_1$. Then also $M_1(v_{2i})=0$. Using the same argument we can see that none of the transitions in the loop $\pi$ can fire, thus $M_d$ cannot be reached from $M_0$.
\qed \end{proof} In other words, if $N$ is weakly connected, has at least one proper source or sink place, and contains a loop, then $M_d$ is \emph{not} reachable from $M_0$. In the case of $N=\graph\psi \circ \graph\phi$, given the correspondence between the dinaturality condition of $\phi$ and $\psi$ in each of their variables and the firing of the corresponding transitions, this intuitively means that $\psi\circ\phi$ cannot be proved to be dinatural as a sole consequence of the dinaturality of $\phi$ and $\psi$ when $\graph\psi \circ \graph\phi$ is cyclic. Therefore, acyclicity is not only a \emph{sufficient} condition for the dinaturality of the composite transformation, but also ``essentially necessary'': if the composite happens to be dinatural despite the cyclicity of the graph, then this is due to some ``third'' property, like the fact that certain squares of morphisms are pullbacks or pushouts. The interested reader can find a detailed formalisation of this intuition in the second author's thesis~\cite{santamaria_towards_2019}, which considers a syntactic category generated by the equations determined by the dinaturality conditions of $\phi$ and $\psi$, and shows that in that category $\psi \circ \phi$ is \emph{not} dinatural, in a way similar to Petri\'c's approach in~\cite{petric_g-dinaturality_2003}. \section{Horizontal compositionality of dinatural transformations}\label{chapter horizontal} Horizontal composition of natural transformations is a co-protagonist, together with vertical composition, in the classical Godement calculus. In this section we define a new operation of horizontal composition for dinatural transformations, generalising the well-known version for natural transformations. We also study its algebraic properties, proving it is associative and unitary.
Remarkably, horizontal composition behaves better than vertical composition, as it is \emph{always} defined between dinatural transformations of matching type. \subsection{From the Natural to the Dinatural} Horizontal composition of natural transformations \cite{mac_lane_categories_1978} is a well-known operation which is rich in interesting properties: it is associative, unitary and compatible with vertical composition. As such, it makes $\mathbb{C}\mathrm{at}$ a strict 2-category. Also, it plays a crucial role in the calculus of substitution of functors and natural transformations developed by Kelly in \cite{kelly_many-variable_1972}; in fact, as we have seen in the introduction, it is at the heart of Kelly's abstract approach to coherence. An appropriate generalisation of this notion to dinatural transformations seems to be absent from the literature: in this section we propose a definition that, as we shall see, works well. The best place to start is to take a look at the usual definition for the natural case.
\begin{definition}\label{def:horizontal composition natural transformations} Consider (classical) natural transformations \[ \begin{tikzcd} \A \ar[r,bend left,"F"{above},""{name=F,below}]{} \ar[r,bend right,"G"{below},""{name=G}] & \B \ar[r,bend left,"H"{above},""{name=H,below}]{} \ar[r,bend right,"K"{below},""{name=K}] & \C \arrow[Rightarrow,from=F,to=G,"\phi"] \arrow[Rightarrow,from=H,to=K,"\psi"] \end{tikzcd} \] The horizontal composition $\hc \fst \snd \colon HF \to KG$ is the natural transformation whose $A$-th component, for $A \in \A$, is either leg of the following commutative square: \begin{equation}\label{eqn:horCompNatTransfSquare} \mkern0mu \begin{tikzcd} HF(A) \ar[r,"\psi_{F(A)}"] \ar[d,"H(\phi_A)"'] & KF(A) \ar[d,"K(\phi_A)"] \\ HG(A) \ar[r,"\psi_{G(A)}"] & KG(A) \end{tikzcd} \end{equation} \end{definition} Now, the commutativity of (\ref{eqn:horCompNatTransfSquare}) is due to the naturality of $\psi$; the fact that $\hc \phi \psi$ is in turn a natural transformation is due to the naturality of both $\phi$ and $\psi$. However, in order to \emph{define} the family of morphisms $\hc \phi \psi$, all we have to do is to apply the naturality condition of $\psi$ to the components of $\phi$, one by one. We apply the very same idea to dinatural transformations, leading to the following preliminary definition for classical dinatural transformations. \begin{definition}\label{def:horCompDef} Let $\fst\colon\fstDom \to \fstCoDom$ and $\snd \colon \sndDom \to \sndCoDom$ be dinatural transformations of type $ \begin{tikzcd}[cramped,sep=small] 2 \ar[r] & 1 & 2 \ar[l] \end{tikzcd} $, where $\fstDom, \fstCoDom \colon \Op\A \times \A \to \B$ and $\sndDom, \sndCoDom \colon \Op\B \times \B \to \C$. 
The \emph{horizontal composition} $\hc \fst \snd$ is the family of morphisms \[ \bigl((\hc \fst \snd)_A\colon \sndDom(\fstCoDom(A,A), \fstDom(A,A)) \to \sndCoDom(\fstDom(A,A),\fstCoDom(A,A))\bigr)_{A \in \A} \] where the general component $(\hc \fst \snd)_A$ is given, for any object $A \in \A$, by either leg of the following commutative hexagon: \[ \begin{tikzcd}[column sep=.8cm,font=\normalsize] & \sndDom(\fstDom(A,A),\fstDom(A,A)) \ar[r,"{\snd_{\fstDom(A,A)}}"] & \sndCoDom(\fstDom(A,A),\fstDom(A,A)) \ar[dr,"{\sndCoDom(1,\fst_A)}"] \\ \sndDom(\fstCoDom(A,A),\fstDom(A,A)) \ar[ur,"{\sndDom(\fst_A,1)}"] \ar[dr,"{\sndDom(1,\fst_A)}"'] & & & \sndCoDom(\fstDom(A,A),\fstCoDom(A,A)) \\ & \sndDom(\fstCoDom(A,A),\fstCoDom(A,A)) \ar[r,"{\snd_{\fstCoDom(A,A)}}"] & \sndCoDom(\fstCoDom(A,A),\fstCoDom(A,A)) \ar[ur,"{\sndCoDom(\fst_A,1)}"'] \end{tikzcd} \] \end{definition} \begin{remark}\label{rem:our definition of hc generalises natural case} If $F$, $G$, $H$ and $K$ all factor through the second projection $\Op\A \times \A \to \A$ or $\Op\B \times \B \to \B$, then $\phi$ and $\psi$ are just ordinary natural transformations and Definition~\ref{def:horCompDef} reduces to the usual notion of horizontal composition, Definition~\ref{def:horizontal composition natural transformations}. \end{remark} As in the classical natural case, we can deduce the dinaturality of $\hc \fst \snd$ from the dinaturality of $\fst$ and $\snd$, as the following Theorem states. (Recall that for $F \colon \A \to \B$ a functor, $\Op F \colon \Op\A \to \Op\B$ is the obvious functor which behaves like $F$.) \begin{theorem}\label{thm:horCompTheorem} Let $\fst$ and $\snd$ be dinatural transformations as in Definition \ref{def:horCompDef}. 
Then $\hc \fst \snd$ is a dinatural transformation \[ \hc \fst \snd \colon \sndDom(\Op\fstCoDom , \fstDom) \to \sndCoDom(\Op\fstDom,\fstCoDom) \] of type $ \begin{tikzcd}[cramped,sep=small] 4 \ar[r] & 1 & 4 \ar[l] \end{tikzcd} $, where $\sndDom(\Op\fstCoDom , \fstDom), \sndCoDom(\Op\fstDom,\fstCoDom) \colon \A^{[+,-,-,+]} \to \C$ are defined on objects as \begin{align*} \sndDom(\Op\fstCoDom , \fstDom)(A,B,C,D) &= \hcd A B C D \\ \sndCoDom(\Op\fstDom,\fstCoDom)(A,B,C,D) &= \hcc A B C D \end{align*} and similarly on morphisms. \end{theorem} \begin{proof} The proof consists in showing that the diagram that asserts the dinaturality of $\hc \fst \snd$ commutes: this is done in Figure~\ref{fig:DinaturalityHorizontalCompositionFigure}. \qed \end{proof} \begin{sidewaysfigure}[p] \centering \begin{tikzpicture}[every node/.style={scale=0.5}] \matrix[column sep=1cm, row sep=1.5cm]{ \node(1) {$\H(\G(A,A),\F(A,A))$}; & & & \node(2) {$\H(\F(A,A),\F(A,A))$}; & \node(3) {$\K(\F(A,A),\F(A,A))$}; & & & \node(4) {$\K(\F(A,A),\G(A,A))$};\\ & \node(5){$\H(\G(A,A),\F(B,A))$}; & \node(6) {$\H(\F(A,A),\F(B,A))$}; & & & \node(7) {$\K(\F(B,A),\F(A,A))$}; & \node (8){$\K(\F(B,A),\G(A,A))$};\\ \node(9) {$\H(\G(A,B),\F(B,A))$}; & & &\node(10){$\H(\F(B,A),\F(B,A))$}; & \node(11){$\K(\F(B,A),\F(B,A))$}; & & &\node(12){$\K(\F(B,A),\G(A,B))$};\\ &\node(13){$\H(\G(B,B),\F(B,A))$}; & \node(14){$\H(\F(B,B),\F(B,A))$}; & & & \node(15){$\K(\F(B,A),\F(B,B))$}; & \node(16){$\K(\F(B,A),\G(B,B))$};\\ \node(17){$\H(\G(B,B),\F(B,B))$}; & & &\node(18){$\H(\F(B,B),\F(B,B))$}; & \node(19){$\K(\F(B,B),\F(B,B))$}; & & &\node(20){$\K(\F(B,B),\G(B,B))$};\\ }; \graph[use existing nodes,edge quotes={sloped,anchor=south}]{ 9 ->["$\H(\G(1,f),\F(f,1))$"] 1 ->["$\H(\fst_A,1)$"] 2 ->["$\snd_{\F(A,A)}$"] 3 ->["$\K(1,\fst_A)$"] 4 ->["$\K(\F(f,1),\G(1,f))$",] 12; 9 ->["$\H(\G(1,f))$"] 5 ->["$\H(\fst_A,1)$"] 6 ->["$\H(1,\F(f,1))$"] 2; 3 ->["$\K(\F(f,1),1)$"] 7 ->["$\K(1,\fst_A)$"] 8 ->["$\K(1,\G(1,f))$"] 12; 6 
->["$\H(\F(f,1),1)$"] 10 ->["$\snd_{\F(B,A)}$"] 11 ->["$\K(1,\F(f,1))$"] 7; 9 ->["$\H(\G(f,1),1)$"] 13 ->["$\H(\fst_B,1)$"] 14 ->["$\H(\F(1,f),1)$"] 10; 11->["$\K(1,\F(1,f))$"] 15 ->["$\K(1,\fst_B)$"] 16 ->["$\K(1,\G(f,1))$"] 12; 14->["$\H(1,F(1,f))$"] 18 ->["$\snd_{\F(B,B)}$"] 19 ->["$\K(\F(1,f),1)$"] 15; 18 <-["$\H(\fst_B,1)$"] 17 <-["$\H(\G(f,1),\F(1,f))$"] 9; 12 <-["$\K(\F(1,f),\G(f,1))$"] 20 <-["$\K(1,\fst_B)$"] 19; 1 ->[bend left=20,dashed,"$(\hc \fst \snd)_A$"] 4; 17->[bend right=20,dashed,"$(\hc \fst \snd)_B$"] 20; }; \path[] (9) to [out=-80,in=170] node[anchor=mid,red]{Functoriality of $\H$} (18); \path[] (9) to [out=80,in=-170] node[anchor=mid,red]{Functoriality of $\H$} (2); \path[] (19) to [out=10,in=260] node[anchor=mid,red]{Functoriality of $\K$} (12); \path[] (3) to [out=-10,in=-260] node[anchor=mid,red]{Functoriality of $\K$} (12); \path (6) to node[anchor=mid,red]{Dinaturality of $\snd$} (7); \path (14) to node[anchor=mid,red]{Dinaturality of $\snd$} (15); \path (9) to node[anchor=mid,red]{Dinaturality of $\fst$} (10); \path (11) to node[anchor=mid,red]{Dinaturality of $\fst$} (12); \end{tikzpicture} \caption{Proof of Theorem \ref{thm:horCompTheorem}: dinaturality of horizontal composition in the classical case. Here $f\colon A \to B$.} \label{fig:DinaturalityHorizontalCompositionFigure} \end{sidewaysfigure} We can now proceed with the general definition, which involves transformations of arbitrary type. As the idea behind Definition~\ref{def:horCompDef} is to apply the dinaturality of $\snd$ to the general component of $\fst$ in order to define $\hc \fst \snd$, if $\snd$ is a transformation with many variables, then we have many dinaturality conditions we can apply to $\fst$, namely one for each variable of $\snd$ in which $\snd$ is dinatural. Hence, the general definition will depend on the variable of $\snd$ we want to use. 
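Before turning to the general definition, it may help to see the hexagon of Definition~\ref{def:horCompDef} at work in $\mathbf{Set}$. The following Python sketch is a purely illustrative aside, not part of the formal development: we choose $\phi$ to be the ``safe head'' natural transformation from lists to a list-encoded Maybe (so $F(A^-,A^+)$ is the set of lists over $A^+$ and $G(A^-,A^+)$ that of lists of length at most one), and $\psi$ the classic dinatural family $\psi_X \colon (X \Rightarrow X) \to (X \Rightarrow X)$, $f \mapsto f \circ f$, with $H(B^-,B^+) = K(B^-,B^+) = B^- \Rightarrow B^+$; all function names are our own.

```python
# Sanity check, in Set, of the commutative hexagon defining horizontal
# composition of (di)natural transformations; morphisms are Python functions.
# All names here are illustrative choices, not notation from the paper.

def compose(f, g):
    """Composition f . g of two functions."""
    return lambda x: f(g(x))

# phi_A : F(A,A) -> G(A,A), "safe head": List A -> Maybe A,
# encoding Maybe A as a list of length <= 1.
def phi(xs):
    return xs[:1]

# psi_X : H(X,X) -> K(X,X) with H(B-,B+) = K(B-,B+) = B- => B+;
# psi_X(f) = f . f is a well-known dinatural family ("Church numeral 2").
def psi(f):
    return compose(f, f)

# A sample element h of H(G(A,A), F(A,A)), i.e. h : Maybe A -> List A.
def h(m):
    return m * 3  # replicate the (at most one) element three times

# Upper leg of the hexagon: K(1, phi_A) . psi_{F(A,A)} . H(phi_A, 1)
upper = compose(phi, psi(compose(h, phi)))
# Lower leg of the hexagon: K(phi_A, 1) . psi_{G(A,A)} . H(1, phi_A)
lower = compose(psi(compose(phi, h)), phi)

# Dinaturality of psi forces the two legs to agree pointwise.
for xs in ([], [7], [1, 2, 3]):
    assert upper(xs) == lower(xs)
```

It is precisely the dinaturality of $\psi$ that makes the two legs agree; when $\psi$ has several dinatural variables, each of them yields in this way a different composite, which is why the general definition is indexed by a variable of $\psi$.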
For the sake of simplicity, we shall consider only the one-category case, that is when all functors in the definition involve one category $\C$; the general case follows with no substantial complications except for a much heavier notation. \begin{definition}\label{def:generalHorizontalCompositionDef} Let $\F \colon \C^\fstDomVar \to \C$, $\G \colon \C^\fstCoDomVar \to \C$, $\H \colon \C^\sndDomVar \to \C$, $\K \colon \C^\sndCoDomVar \to \C$ be functors, $\fst = (\fst_\bfA)_{\bfA \in \C^n} \colon \fstDom \to \fstCoDom$ be a transformation of type $ \begin{tikzcd}[cramped,sep=small] \length\fstDomVar \ar[r,"\fstTypL"] & \fstVarNo & \length\fstCoDomVar \ar[l,"\fstTypR"'] \end{tikzcd} $ and $\snd = (\snd_{\bf B})_{\bf B \in \C^m}\colon \sndDom \to \sndCoDom$ of type $ \begin{tikzcd}[cramped,sep=small] \length\sndDomVar \ar[r,"\sndTypL"] & \sndVarNo & \length\sndCoDomVar \ar[l,"\sndTypR"'] \end{tikzcd} $ a transformation which is dinatural in its $i$-th variable. Denoting with $\concat$ the concatenation of a family of lists, let \[ \ghcDom \colon \C^{{\concat_{u=1}^{\length\sndDomVar} \lambda^u}} \to \C, \quad \ghcCoDom \colon \C^{\concat_{v=1}^{\length\sndCoDomVar}\mu^v} \to \C \] be functors, defined similarly to $\sndDom(\Op\fstCoDom , \fstDom)$ and $\sndCoDom(\Op\fstDom,\fstCoDom)$ in Theorem \ref{thm:horCompTheorem}, where for all $j \in \sndVarNo$, $u \in\length\gamma$, $v\in\length\delta$: \[ \begin{tikzcd}[ampersand replacement=\&,row sep=.5em] \F_j= \begin{cases} F & j=i \\ \id\C & j \ne i \end{cases} \& \G_j= \begin{cases} G & j=i \\ \id\C & j \ne i \end{cases} \\ \lambda^u = \begin{cases} \alpha & \eta u = i \land \gamma_u=+ \\ \Not\beta\footnotemark & \eta u = i \land \gamma_u = - \\ [\gamma_u] & \eta u \ne i \end{cases} \& \mu^v = \begin{cases} \beta & \theta v = i \land \delta_v=+ \\ \Not\alpha & \theta v = i \land \delta_v = - \\ [\delta_v] & \theta v \ne i \end{cases} \end{tikzcd} \] \footnotetext{Remember that for any $\beta\in\List\{+,-\}$ we denote 
$\Not\beta$ the list obtained from $\beta$ by swapping the signs.}Define for all $u \in\length\gamma$ and $v\in\length\delta$ the following functions: \[ a_u = \begin{cases} \iota_n \sigma & \eta u = i \land \gamma_u=+ \\ \iota_n \tau & \eta u = i \land \gamma_u = - \\ \iota_m K_{\eta u} & \eta u \ne i \end{cases} \quad b_v = \begin{cases} \iota_n \tau & \theta v = i \land \delta_v=+ \\ \iota_n \sigma & \theta v = i \land \delta_v = - \\ \iota_m K_{\theta v} & \theta v \ne i \end{cases} \] with $K_{\eta u} \colon 1 \to m$ the constant function equal to $\eta u$, while $\iota_n$ and $\iota_m$ are defined as: \[ \begin{tikzcd}[row sep=0em] n \ar[r,"\iota_n"] & (i-1)+n+(m-i) \\ x \ar[|->,r] & i-1+x \end{tikzcd} \qquad \begin{tikzcd}[row sep=0em,ampersand replacement=\&] m \ar[r,"\iota_m"] \& (i-1)+n+(m-i) \\ x \ar[|->,r] \& \begin{cases} x & x < i \\ x +n -1 & x \ge i \end{cases} \end{tikzcd} \] The \emph{$i$-th horizontal composition} $\HC {[\fst]} {[\snd]} i$ is the equivalence class of the transformation \[ \HC \fst \snd i \colon \ghcDom \to \ghcCoDom \] of type \[ \begin{tikzcd}[column sep=1.5cm] \displaystyle\sum_{u=1}^{\length\gamma} \length{\lambda^u} \ar[r,"{[a_1,\dots, a_{\length\gamma}]}"] & (i-1) + n + (m-i) & \displaystyle\sum_{v=1}^{\length\delta} \length{\mu^v} \ar[l,"{[b_1,\dots, b_{\length\delta}]}"'] \end{tikzcd} \] whose general component, $(\HC \fst \snd i)_{\subst B \bfA i}$, is the diagonal of the commutative hexagon obtained by applying the dinaturality of $\snd$ in its $i$-th variable to the general component $\fst_\bfA$ of $\fst$: \[ \begin{tikzcd}[column sep=2em] & \H(\subst \bfB {F(\bfA\sigma)} i \eta) \ar[r,"\psi_{\subst \bfB {\F(\bfA\sigma)} i}"] & \K(\subst \bfB {\F(\bfA\sigma)} i \theta) \ar[dr,"\K(\substMV \bfB {\F(\bfA\sigma)} {\phi_\bfA} i \theta)"] \\ \H(\substMV \bfB {\G(\bfA\tau)} {\F(\bfA\sigma)} i \eta) \ar[ur,"\H(\substMV \bfB {\phi_\bfA} {\F(\bfA\sigma)} i \eta)"] \ar[dr,"\H(\substMV \bfB {\G(\bfA\tau)} {\phi_\bfA} \eta)"'] 
\ar[rrr,dotted,"(\HC \fst \snd i)_{\subst B \bfA i}"] & & & \K(\substMV \bfB {\F(\bfA\sigma)} {\G(\bfA\tau)} i \theta) \\ & \H(\subst \bfB {\G(\bfA\tau)} i \eta) \ar[r,"\psi_{\subst \bfB {\G(\bfA\tau)} i}"'] & \K(\subst \bfB {G(\bfA\tau)} i \theta) \ar[ur,"\K(\substMV \bfB {\phi_\bfA} {\G(\bfA\tau)} i \theta)"'] \end{tikzcd} \] \end{definition} In other words, the domain of $\HC \fst \snd i$ is obtained by substituting the arguments of $\H$ (the domain of $\snd$) that are in the $i$-th connected component of $\graph\snd$ with $\F$ (the domain of $\fst$) if they are covariant, and with $\Op\G$ (the opposite of the codomain of $\fst$) if they are contravariant; those arguments not in the $i$-th connected component are left untouched. Similarly the codomain. The type of $\HC \fst \snd i$ is obtained by replacing the $i$-th variable of $\snd$ with all the variables of $\fst$ and adjusting the type of $\snd$ with $\fstTypL$ and $\fstTypR$ to reflect this act. In the following example, we see what happens to $\graph\fst$ and $\graph\snd$ upon horizontal composition. \begin{example}\label{ex:hc example} Consider transformations $\delta$ and $\eval{}{}$ (see examples~\ref{ex:delta},\ref{ex:eval}). In the notations of Definition~\ref{def:generalHorizontalCompositionDef}, we have $F=\id\C \colon \C \to \C$, $G = \times \colon \C^{[+,+]} \to \C$, $H \colon \C^{[+,-,+]} \to \C$ defined as $H(X,Y,Z) = X \times (Y \implies Z)$ and $K = \id\C \colon \C \to \C$. The types of $\delta$ and $\eval{}{}$ are respectively \[ \begin{tikzcd}[font=\small] 1 \ar[r] & 1 & \ar[l] 2 \end{tikzcd} \qquad \text{and} \qquad \begin{tikzcd}[row sep=0em,font=\small] 3 \ar[r] & 2 & 1 \ar[l] \\ 1 \ar[r,|->] & 1 & 1 \ar[dl,|->,out=180,in=30] \\[-3pt] 2 \ar[ur,|->,out=0,in=210]& 2 & \\[-3pt] 3 \ar[ur,|->,out=0,in=210] \end{tikzcd} \] The transformation $\eval{}{}$ is extranatural in its first variable and natural in its second: we have two horizontal compositions. 
$(\HC \delta {\eval{}{}} 1 )_{A,B}$ is given by either leg of the following commutative square: \begin{equation}\label{delta inside eval} \begin{tikzcd} A \times \bigl( (A \times A) \implies B \bigr) \ar[r,"\delta_A \times (1 \implies 1)"] \ar[d,"1 \times (\delta_A \implies 1)"'] & (A \times A) \times \bigl( (A \times A) \implies B \bigr) \ar[d,"\eval {A \times A} B"] \\ A \times (A \implies B) \ar[r,"\eval A B"] & B \end{tikzcd} \end{equation} We have $\HC \delta {\eval{}{}} 1 \colon H(\id\C,\times,\id\C) \to \id\C(\id\C)$ where $\id\C(\id\C) = \id\C$ and \[ \begin{tikzcd}[row sep=0em] \C^{[+,-,-,+]} \ar[r,"{H(\id\C,\times,\id\C)}"] & \C \\ (X,Y,Z,W) \ar[|->,r] & \quad X \times \bigl( (Y \times Z) \implies W \bigr) \end{tikzcd} \] and it is of type \[ \begin{tikzcd}[row sep=0em] 4 \ar[r] & 2 & \ar[l] 1 \\ 1 \ar[r,|->] & 1 & 1 \ar[dl,|->,out=180,in=30] \\[-3pt] 2 \ar[ur,|->,out=0,in=210] & 2 \\[-3pt] 3 \ar[uur,|->,out=0,in=230] \\[-3pt] 4 \ar[uur,|->,out=0,in=230] \\ \end{tikzcd} \] Intuitively, $\graph{\HC \delta {\eval{}{}} 1}$ is obtained by substituting $\graph{\delta}=\begin{tikzpicture} \matrix[row sep=1em,column sep=0.5em]{ & \node (1) [category] {}; \\ & \node (A) [component] {}; \\ \node (2) [category] {}; & & \node (3) [category] {}; \\ }; \graph[use existing nodes]{ 1 -> A -> {2,3}; }; \end{tikzpicture}$ into the first connected component of $\graph{\eval{}{}}=\begin{tikzpicture} \matrix[row sep=1em, column sep=1em]{ \node (1) [category] {}; & & \node (2) [opCategory] {}; & & \node (3) [category] {}; \\ & \node (A) [component] {}; & & & \node (B) [component] {}; \\ & & & & \node (4) [category] {}; \\ }; \graph[use existing nodes]{ 1 -> A -> 2; 3 -> B -> 4; }; \end{tikzpicture}$, by ``bending'', as it were, $\graph\delta$ into the $U$-turn that is the first connected component of $\graph{\eval{}{}}$: \[ \begin{tikzpicture} \matrix[column sep=1em,row sep=2em]{ &\node[category] (1) {}; & & \node[opCategory] (2) {}; & & \node[opCategory] (3) {}; & & 
\node[category] (4) {};\\ &\node[component] (A) {};& & & & & & \node[component] (B) {};\\ \node[coordinate] (fake1) {}; & & \node[coordinate] (fake2) {}; &\node[coordinate] (fake3) {};& & \node[coordinate] (fake4) {}; & & \node[category] (5) {};\\ }; \graph[use existing nodes]{ 1 -> A -- fake1 --[out=-90,in=-90] fake4 -> 3; A -- fake2 --[out=-90,in=-90] fake3 -> 2; 4 -> B -> 5; }; \end{tikzpicture} \quad \text{or} \quad \begin{tikzpicture} \matrix[column sep=1em,row sep=2em]{ \node[category] (1) {};& & \node[opCategory] (2) {}; & & \node[opCategory] (3) {}; & & \node[category] (4) {};\\ \node[coordinate] (fake1) {}; & & & \node[component] (A) {}; & & & \node[component] (B) {};\\ & & & & & & \node[category] (5) {};\\ }; \graph[use existing nodes]{ 1 -- fake1 ->[out=-90,in=-90] A -> {2,3}; 4 -> B -> 5; }; \end{tikzpicture} \] Here the first graph corresponds to the upper leg of (\ref{delta inside eval}), the second to the lower one. Notice how the component $\eval {A \times A} B$ now has \emph{two} wires, one for each $A$ in the graph on the left.
The result is therefore \[ \graph{\HC \delta {\eval{}{}} 1} = \begin{tikzpicture} \matrix[column sep=1em,row sep=1.5em]{ \node[category] (1) {}; & \node[opCategory] (2) {}; & \node[opCategory] (3) {}; & \node[category] (4) {}; \\ & \node[component] (A) {}; & & \node[component] (B) {};\\ & & & \node[category] (5) {};\\ }; \graph[use existing nodes]{ 1 -> A -> {2,3}; 4 -> B -> 5; }; \end{tikzpicture} \] Turning now to the other possible horizontal composition, we have that $\HC \delta {\eval{}{}} 2 \colon H(\id\C,\id\C,\id\C) \to \id\C(\times)$ where $ H(\id\C,\id\C,\id\C) = H$ and $\id\C(\times)=\times$ by definition; it is of type \[ \begin{tikzcd}[row sep=0em] 3 \ar[r] & 2 & \ar[l] 2 \\ 1 \ar[r,|->] & 1 & 1 \ar[dl,|->,out=180,in=30] \\[-3pt] 2 \ar[ur,|->,out=0,in=210]& 2 & 2 \ar[l,|->] \\[-3pt] 3 \ar[ur,|->,out=0,in=210] \end{tikzcd} \] and $(\HC \delta {\eval{}{}} 2)_{A,B}$ is given by either leg of the following commutative square: \[ \begin{tikzcd}[column sep=3em] A \times (A \implies B) \ar[r,"1 \times (1 \implies \delta_B)"] \ar[d,"\eval A B"'] & A \times \bigl( A \implies (B \times B) \bigr) \ar[d,"\eval A {B \times B}"] \\ B \ar[r,"\delta_B"] & B \times B \end{tikzcd} \] Substituting $\graph\delta$ into the second connected component of $\graph{\eval{}{}}$, which is just a ``straight line'', results into the following graph: \[ \graph{\HC \delta {\eval{}{}} 2} = \begin{tikzpicture} \matrix[column sep=.5em,row sep=1em]{ \node[category] (1) {}; & & \node[opCategory] (2) {}; & & \node[category] (3) {}; \\ & \node[component] (A) {}; & & & \node[component] (B) {};\\ & & & \node[category] (4) {}; & & \node[category] (5) {};\\ }; \graph[use existing nodes]{ 1 -> A -> 2; 3 -> B -> {4,5}; }; \end{tikzpicture} \] \end{example} \subsection{Dinaturality of horizontal composition}\label{section dinaturality of horizontal composition} We aim to prove here that our definition of horizontal composition, which we have already noticed generalises the well-known version for 
classical natural transformations (Remark~\ref{rem:our definition of hc generalises natural case}), is a closed operation on dinatural transformations. For the rest of this section, we shall fix transformations $\fst$ and $\snd$ with the notations used in Definition~\ref{def:generalHorizontalCompositionDef} for their signatures; we also fix the ``names'' of the variables of $\fst$ as $\bfA=(A_1,\dots,A_n)$ and of $\snd$ as $\bfB=(B_1,\dots,B_m)$. In this spirit, $i$ is a fixed element of $\{1,\dots,m\}$, we assume $\snd$ to be dinatural in $B_i$ and we shall sometimes refer to $\HC \fst \snd i$ also as $\HC \fst \snd {B_i}$. As in the classical natural case (Definition~\ref{def:horizontal composition natural transformations}), only the dinaturality of $\snd$ in $B_i$ is needed to \emph{define} the $i$-th horizontal composition of $\fst$ and $\snd$. Here we want to understand in which variables the $i$-th horizontal composition \[ \HC \fst \snd {B_i} = \bigl( (\HC \fst \snd {B_i})_{\subst \bfB \bfA i} \bigr)= \bigl( (\HC \fst \snd {B_i})_{B_1,\dots, B_{i-1},A_1,\dots, A_n, B_{i+1}, \dots, B_m} \bigr) \] is itself dinatural. It is straightforward to see that $\HC \fst \snd {B_i}$ is dinatural in all its $B$-variables where $\snd$ is dinatural, since the act of horizontally composing $\fst$ and $\snd$ in $B_i$ has not ``perturbed'' $\sndDom$, $\sndCoDom$ and $\snd$ in any way except in those arguments involved in the $i$-th connected component of $\graph\snd$ (see example~\ref{ex:hc example}). Hence we have the following preliminary result. \begin{proposition} If $\snd$ is dinatural in $B_j$, for $j \ne i$, then $\HC \fst \snd {B_i}$ is also dinatural in $B_j$. \end{proposition} More interestingly, it turns out that $\HC \fst \snd {B_i}$ is also dinatural in all those $A$-variables where $\fst$ is dinatural in the first place. We aim then to prove the following Theorem.
\begin{theorem}\label{thm:horCompIsDinat} If $\fst$ is dinatural in its $k$-th variable and $\snd$ in its $i$-th one, then $\HC \fst \snd i$ is dinatural in its $(i-1+k)$-th variable. In other words, if $\fst$ is dinatural in $A_k$ and $\snd$ in $B_i$, then $\HC \fst \snd {B_i}$ is dinatural in $A_k$. \end{theorem} The proof of this theorem relies on the fact that we can reduce ourselves, without loss of generality, to Theorem~\ref{thm:horCompTheorem}. To prove that, we introduce the notion of \emph{focalisation} of a transformation on one of its variables: essentially, the focalisation of a transformation $\varphi$ is a transformation depending on only one variable, between functors that have only one covariant and one contravariant argument, obtained by fixing all the parts of the data involving variables different from the one we are focusing on. \begin{definition}\label{def:focalisation def} Let $\varphi = (\varphi_\bfA) = (\varphi_{A_1,\dots,A_p}) \colon T \to S$ be a transformation of type \[ \begin{tikzcd} \length\alpha \ar[r,"\sigma"] & p & \ar[l,"\tau"'] \length\beta \end{tikzcd} \] with $T \colon \C^\alpha \to \C$ and $S \colon \C^\beta \to \C$. Fix $k\in\{1,\dots,p\}$ and objects $A_1,\dots,A_{k-1}$, $A_{k+1},\dots,A_p$ in $\C$. Consider functors $\bar T k$, $\bar S k \colon \Op\C \times \C \to \C$ defined by \begin{align*} \bar T k (A,B) &= T(\substMV \bfA A B k \sigma) \\ \bar S k (A,B) &= S(\substMV \bfA A B k \tau) \end{align*} The \emph{focalisation of $\varphi$ on its $k$-th variable} is the transformation \[ \bar \varphi k \colon \bar T k \to \bar S k \] of type $ \begin{tikzcd}[cramped, sep=small] 2 \ar[r] & 1 & \ar[l] 2 \end{tikzcd} $ where \[ \bar \varphi k_X = \varphi_{\subst \bfA X k} = \varphi_{A_1\dots A_{k-1},X,A_{k+1}\dots A_p}. \] Sometimes we may write $\bar \varphi {A_k} \colon \bar T {A_k} \to \bar S {A_k}$ too, when we fix $A_1,\dots,A_p$ as the names of the variables of $\varphi$.
\end{definition} \begin{remark}\label{rem:focalisationIsDinaturalRemark} $\varphi$ is dinatural in its $k$-th variable if and only if $\bar \varphi k$ is dinatural in its only variable for all objects $A_1,\dots,A_{k-1},A_{k+1},\dots,A_p$ fixed by the focalisation of $\varphi$. \end{remark} The $\bar{} k$ construction depends on the $p-1$ objects we fix, but, in order not to make the notation too heavy, we shall always call those (arbitrary) objects $A_1,\dots,A_{k-1},A_{k+1},\dots,A_n$ for $\bar \fst k$ and $B_1,\dots,B_{i-1}$, $B_{i+1},\dots,B_m$ for $\bar \snd i$. \begin{lemma}\label{lemma:focalisationLemma} It is the case that $\HC \fst \snd i$ is dinatural in its $(i-1+k)$-th variable if and only if $\hc {\bar \fst k} {\bar \snd i}$ is dinatural in its only variable for all objects $B_1,\dots,B_{i-1}$, $A_1,\dots,A_{k-1}$, $A_{k+1},\dots,A_n$, $B_{i+1},\dots,B_m$ in $\C$ fixed by the focalisations of $\fst$ and $\snd$. \end{lemma} \begin{proof} The proof consists in unwrapping the two definitions and showing that they require the exact same hexagon to commute: see~\cite[Lemma 2.14]{santamaria_towards_2019}. \qed \end{proof} We can now prove that horizontal composition preserves dinaturality. \begin{proofDinTheorem} Consider transformations $\bar \fst k$ and $\bar \snd i$. By Remark \ref{rem:focalisationIsDinaturalRemark}, they are both dinatural in their only variable. Hence, by Theorem \ref{thm:horCompTheorem}, $\hc {\bar \fst k} {\bar \snd i}$ is dinatural and by Lemma \ref{lemma:focalisationLemma} we conclude. \qed \end{proofDinTheorem} It is straightforward to see that horizontal composition has a left and a right unit, namely the identity (di)natural transformation on the appropriate identity functor. \begin{theorem} Let $T \colon \B^\alpha \to \C$, $S \colon \B^\beta \to \C$ be functors, and let $\varphi \colon T \to S$ be a transformation of any type. Then \[ \hc \varphi {\id{\id \C}} = \varphi.
\] If $\varphi$ is dinatural in its $i$-th variable, for an appropriate $i$, then also \[ \HC {\id{\id \B}} \varphi i = \varphi. \] \end{theorem} \begin{proof} Direct consequence of the definition of horizontal composition.\qed \end{proof} \subsection{Associativity of horizontal composition}\label{section associativity horizontal composition} Associativity is a crucial property of any respectable algebraic operation. In this section we show that our notion of horizontal composition is at least this respectable. We begin by considering classical dinatural transformations $\fst \colon \F \to \G$, $\snd \colon \H \to \K$ and $\trd \colon \U \to \V$, where $\F,\G,\H,\K,\U,\V \colon \Op\C \times \C \to \C$ are functors, all of type $ \begin{tikzcd}[cramped,sep=small] 2 \ar[r] & 1 & \ar[l] 2 \end{tikzcd} $. \begin{theorem}\label{thm:associativity simple case} $\hc {\left( \hc \fst \snd \right)} \trd = \hc \fst {\left( \hc \snd \trd \right)}$. \end{theorem} \begin{proof} We first prove that the two transformations have the same domain and codomain functors. Since they both depend on one variable, this also immediately implies that they have the same type. We have $\hc \fst \snd \colon \H(\Op\G,\F) \to \K(\Op\F,\G)$, hence \[ \hc {\left( \hc \fst \snd \right)} \trd \colon \U\Bigl(\Op{\K(\Op\F,\G)},\H(\Op\G,\F)\Bigr) \to \V\Bigl( \Op{\H(\Op\G,\F)}, \K(\Op\F,\G) \Bigr). \] Notice that $\Op{\K(\Op\F,\G)} = \Op\K (\F, \Op \G)$ and $\Op{\H(\Op\G,\F)} = \Op\H (\G, \Op \F)$. Next, we have $\hc \snd \trd \colon \U(\Op\K, \H) \to \V(\Op\H, \K)$. Given that $\U(\Op\K,\H), \V(\Op\H,\K) \colon \C^{[+,-,-,+]} \to \C$, we have \[ \hc \fst {\left( \hc \snd \trd \right)} \colon \underbrace{\U(\Op\K, \H)(\F,\Op\G,\Op\G,\F)}_{\U\bigl(\Op\K(\F,\Op\G),\H(\Op\G,\F)\bigr)} \to \underbrace{\V(\Op\H, \K)(\G,\Op\F,\Op\F,\G)}_{\V\bigl( \Op\H(\G,\Op\F), \K(\Op\F,\G) \bigr)}. \] This proves $\hc {\left( \hc \fst \snd \right)} \trd$ and $\hc \fst {\left( \hc \snd \trd \right)}$ have the same signature.
Only equality of the single components is left to show. Fix then an object $A$ in $\C$. Figure~\ref{fig:Associativity} shows how to pass from $(\trd \ast \snd) \ast \fst$ to $\trd \ast (\snd \ast \fst)$ by pasting three commutative diagrams. In order to save space, we simply wrote ``$\H(\G,\F)$'' instead of the proper ``$\H(\Op\G(A,A),F(A,A))$'' and similarly for all the other instances of functors in the nodes of the diagram in Figure~\ref{fig:Associativity}; we also dropped the subscript for components of $\fst$, $\snd$ and $\trd$ when they appear as arrows, that is we simply wrote $\fst$ instead of $\fst_A$, since there is only one object involved and there is no risk of confusion. \qed \end{proof} \begin{sidewaysfigure} \footnotesize \begin{tikzpicture} \matrix[column sep=1cm, row sep=1cm]{ \node(1) {$\U(\K(\F,\G),\H(\G,\F))$}; & & \node(2) {$\U(\K(\F,\F),\H(\F,\F))$}; & \node(3) {$\U(\H(\F,\F),\H(\F,\F))$};\\ \node(4) {$\U(\K(\F,\F),\H(\G,\F))$}; & & \node(5) {$\U(\H(\F,\F),\H(\F,\F))$}; & \node(6) {$\V(\H(\F,\F),\H(\F,\F))$}; & \node(7) {$\V(\H(\F,\F),\K(\F,\F))$}; & & \node(8) {$\V(\H(\G,\F),\K(\F,\G))$};\\ \node(9) {$\U(\H(\F,\F),\H(\G,\F))$};&&&& \node(10){$\V(\H(\G,\F),\H(\F,\F))$};&& \node(11){$\V(\H(\G,\F),\K(\F,\F))$};\\ & & \node(12){$\U(\H(\G,\F),\H(\G,\F))$}; & \node(13){$\V(\H(\G,\F),\H(\G,\F))$};\\ }; \graph[use existing nodes]{ 1 ->["$\U(\K(1,\fst),\H(\fst,1))$"] 2 ->["$\U(\snd,1)$"] 3 ->["$\trd$"] 6; 1 ->["$\U(\K(1,\fst),1)$"'] 4 ->["$\U(\snd,1)$"'] 9 ->["$\U(1,\H(\fst,1))$",sloped] 5 ->["$\trd$"] 6; 6 ->["$\V(1,\snd)$"] 7 ->["$\V(\H(\fst,1),\K(1,\fst))$"] 8; 6 ->["$\V(\H(\fst,1),1)$",sloped] 10 ->["$\V(1,\snd)$"] 11 ->["$\V(1,\K(1,\fst))$"'] 8; 9 ->["$\U(\H(\fst,1),1)$"',sloped] 12 ->["$\trd$"] 13 ->["$\V(1,\H(\fst,1))$"', sloped] 10; }; \path (2) -- node[anchor=center,red]{\footnotesize Functoriality of $\U$} (5); \path (9) -- node[anchor=center,red]{\footnotesize Dinaturality of $\trd$} (10); \path (10) --node[anchor=center,red]{\footnotesize 
Functoriality of $\V$} (8); \end{tikzpicture} \caption{Associativity of horizontal composition in the classical case. The upper leg is $(\trd \ast \snd) \ast \fst $, whereas the lower one is $\trd \ast (\snd \ast \fst)$.} \label{fig:Associativity} \end{sidewaysfigure} We can now start discussing the general case for transformations with an arbitrary number of variables; we shall prove associativity by reducing ourselves to Theorem~\ref{thm:associativity simple case} using focalisation (see Definition~\ref{def:focalisation def}). For the rest of this section, fix transformations $\fst$, $\snd$ and $\trd$, dinatural in all their variables, with signatures: \begin{itemize} \item $\fst \colon \fstDom \to \fstCoDom$, for $\fstDom \colon \C^\fstDomVar \to \C$ and $\fstCoDom \colon \C^\fstCoDomVar \to \C$, of type $ \begin{tikzcd}[cramped,sep=small] \length\fstDomVar \ar[r,"\fstTypL"] & \fstVarNo & \ar[l,"\fstTypR"'] \length\fstCoDomVar \end{tikzcd} $; \item $\snd \colon \sndDom \to \sndCoDom$, for $\sndDom \colon \C^\sndDomVar \to \C$ and $\sndCoDom \colon \C^\sndCoDomVar \to \C$, of type $ \begin{tikzcd}[cramped,sep=small] \length\sndDomVar \ar[r,"\sndTypL"] & \sndVarNo & \ar[l,"\sndTypR"'] \length\sndCoDomVar \end{tikzcd} $; \item $\trd \colon \trdDom \to \trdCoDom$, for $\trdDom \colon \C^\trdDomVar \to \C$ and $\trdCoDom \colon \C^\trdCoDomVar \to \C$, of type $ \begin{tikzcd}[cramped,sep=small] \length\trdDomVar \ar[r,"\trdTypL"] & \trdVarNo & \ar[l,"\trdTypR"'] \length\trdCoDomVar \end{tikzcd} $ \end{itemize} For sake of simplicity, let us fix the name of the variables for $\fst$ as $\fstVariables{}{} = (A_1,\dots,A_n)$, for $\snd$ as $\sndVariables{}{} = (B_1,\dots,B_m)$ and for $\trd$ as $\trdVariables{}{} = (C_1,\dots,C_l)$. 
In this spirit we also fix the variables of the horizontal compositions, so for $i \in \{1,\dots,\sndVarNo\}$, the variables of $\HC \fst \snd i$ are \[ \sndVariables i {\fstVariables{}{}} = B_1,\dots,B_{i-1},A_1,\dots,A_n,B_{i+1},\dots,B_m \] and, similarly, for $j \in \{1,\dots,\trdVarNo\}$ the variables of $\HC \snd \trd j$ are $ \trdVariables j {\sndVariables{}{}}. $ The theorem asserting associativity of horizontal composition, which we prove in the rest of this section, is the following. \begin{theorem}\label{thm:associativityTheorem} For $i \in \{1,\dots,\sndVarNo\}$ and $j \in \{1,\dots,\trdVarNo\}$, \[ \HC {\left( \HC \fst \snd i \right)} \trd j = \HC \fst {\left(\HC \snd \trd j\right)} {j-1+i} \] or, in alternative notation, \begin{equation}\label{eqn:associativity equation} \HC {\left( \HC \fst \snd {B_i} \right)} \trd {C_j} = \HC \fst {\left(\HC \snd \trd {C_j}\right)} {B_i}. \end{equation} \end{theorem} We shall require the following, rather technical, lemma, whose proof is a matter of identity checking. \begin{lemma}\label{lemma:associativity techincal lemma} Let $\Phi = (\Phi_{V_1,\dots,V_p})$ and $\Psi = (\Psi_{W_1,\dots,W_q})$ be transformations in $\C$ such that $\Psi$ is dinatural in $W_s$, for $s \in \{1,\dots,q\}$. Let $V_1,\dots,V_{r-1}$, $V_{r+1},\dots,V_p$, $W_1,\dots,W_{s-1}$, $W_{s+1},\dots,W_q$ be objects of $\C$, and let $\bar \Phi {V_r}$ and $\bar \Psi {W_s}$ be the focalisations of $\Phi$ and $\Psi$ in their $r$-th and $s$-th variables respectively, using the fixed objects above. Let also $X$ be an object of $\C$.
Then \begin{enumerate}[(i)] \item $ \left( \hc { \bar \Phi {V_r} } { \bar \Psi {W_s} } \right)_X = \left( \HC \Phi \Psi {W_s} \right)_{W_1,\dots,W_{s-1},V_1,\dots,V_{r-1},X,V_{r+1},\dots,V_p,W_{s+1},\dots,W_q} = \left( \bar {\HC \Phi \Psi {W_s}} {V_r} \right)_X $ \item $\mathit{(co)dom}\left( \bar {\HC \Phi \Psi {W_s}} {V_r} \right) (x,y) = \mathit{(co)dom}\left( \hc {\bar \Phi {V_r}} {\bar \Psi {W_s}} \right) (x,y,y,x) $ for any morphisms $x$ and $y$. \end{enumerate} \end{lemma} \begin{remark}\label{rem:associativity techincal lemma remark} Part (i) asserts an equality between \emph{morphisms} and not \emph{transformations}, as $ \hc { \bar \Phi {V_r} } { \bar \Psi {W_s} }$ and $\HC \Phi \Psi {W_s}$ have different types and even different domain and codomain functors. \end{remark} \begin{proofAssociativity} One can show that $\HC {\left( \HC \fst \snd {B_i} \right)} \trd {C_j}$ and $ \HC \fst {\left(\HC \snd \trd {C_j}\right)} {B_i}$ have the same domain, codomain and type simply by computing them and observing they coincide. In particular, notice that they both depend on the following variables: $\trdVariables j {\sndVariables i {\fstVariables{}{}}}$. Here we show that their components are equal. Let us fix then $ C_1,\dots, C_{j-1}$, $B_1$, $\dots$, $B_{i-1}$, $A_1$, $\dots$, $A_{k-1}$, $X$, $A_{k+1}$, $\dots$, $A_n$, $B_{i+1}$, $\dots$, $B_m$, $C_{j+1}$, $\dots$, $C_l$ objects in $\C$. Writing just $V$ for this long list of objects, we have, by Lemma~\ref{lemma:associativity techincal lemma}, that \[ \left(\HC \fst {\left(\HC \snd \trd {C_j}\right)} {B_i}\right)_V = \left( \hc {\bar \fst {A_k}} {\bar {\HC \snd \trd {C_j}} {B_i}} \right)_X . 
\] Now, we cannot apply again Lemma~\ref{lemma:associativity techincal lemma} to $\bar {\HC \snd \trd {C_j}} {B_i}$ because of the observation in Remark~\ref{rem:associativity techincal lemma remark}, but we can use the definition of horizontal composition to write down explicitly the right-hand side of the equation above: it is the morphism \[ \codom{ \bar {\HC \snd \trd {C_j}} {B_i} } (\id{\bar \F {} (X,X)} , (\bar \fst {A_k})_X) \circ \left( \bar{\HC \snd \trd {C_j}}{B_i} \right)_{\bar \F {} (X,X)} \circ \dom{ \bar {\HC \snd \trd {C_j}} {B_i} }( {(\bar \fst {A_k})}_X ,\id{\bar \F {} (X,X)}) \] (Remember that $\bar \fst {A_k} \colon \bar \F {A_k} \to \bar \G {A_k}$, here we wrote $\bar F {} (X,X)$ instead of $\bar F {A_k}(X,X)$ to save space.) Now we \emph{can} use Lemma~\ref{lemma:associativity techincal lemma} to ``split the bar'', as it were: \begin{multline*} \codom{\hc {\bar \snd {B_i}} {\bar \trd {C_j}}} \bigl( {(\bar \fst {A_k})}_X, \id{\bar \F {} (X,X)}, \id{\bar \F {} (X,X)}, {(\bar \fst {A_k})}_X \bigr) \circ \\[.5em] \left( \hc {\bar \snd {B_i}} {\bar \trd {C_j}} \right)_{\bar \F {} (X,X)} \circ \\[.5em] \dom{\hc {\bar \snd {B_i}} {\bar \trd {C_j}}} \bigl(\id{\bar \F {} (X,X)}, {(\bar \fst {A_k})}_X, {(\bar \fst {A_k})}_X, \id{\bar \F {} (X,X)}\bigr) \end{multline*} This morphism is equal, by definition of horizontal composition, to \[ \left( \hc {\bar \fst {A_k}} {\left( \hc {\bar \snd {B_i}} {\bar \trd {C_j}} \right)} \right)_X \] which, by Theorem~\ref{thm:associativity simple case}, is the same as \[ \left( \hc {\left( \hc {\bar \fst {A_k}} {\bar \snd {B_i}} \right)} {\bar \trd {C_j}} \right)_X. \] An analogous series of steps shows how this is equal to $\left( \HC {\left(\HC \fst \snd {B_i}\right)} \trd {C_j} \right)_V$, thus concluding the proof. 
\qed \end{proofAssociativity} \subsection{Whiskering and horizontal composition} Let $\phi \colon F \to G$, with $F \colon \C^\alpha \to \C$ and $G \colon \C^\beta \to \C$, and $\psi \colon H \to K$, with $H,K \colon \Op\C \times \C\to \C$, be dinatural transformations of type $\begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd}$ and $\begin{tikzcd}[cramped,sep=small] 2 \ar[r] & 1 & \ar[l] 2 \end{tikzcd}$ respectively. Then $\hc \phi \psi \colon H(\Op G , F) \to K(\Op F, G)$ is of type \[ \begin{tikzcd} \length\beta + \length\alpha \ar[r,"{[\tau,\sigma]}"] & n & \ar[l,"{[\sigma,\tau]}"'] \length\alpha + \length\beta \end{tikzcd} \] and its general component $(\hc \phi \psi)_{\bfA}$, with $\bfA=(A_1,\dots,A_n)$, is either leg of \[ \begin{tikzcd}[column sep=2em] & H\bigl( F(\bfA\sigma), F(\bfA\sigma) \bigr) \ar[r,"\psi_{F(\bfA\sigma)}"] & K \bigl( F(\bfA\sigma), F(\bfA\sigma) \bigr) \ar[dr,"{K\bigl( F(\bfA\sigma),\phi_\bfA \bigr)}"] \\ H \bigl( G(\bfA\tau), F(\bfA\sigma) \bigr) \ar[ur,"{H \bigl( \phi_\bfA , F(\bfA\sigma) \bigr)}"] \ar[dr,"{ H \bigl( G(\bfA\tau), \phi_\bfA \bigr)}"'] & & & K \bigl( F(\bfA\sigma), G(\bfA\tau) \bigr) \\ & H \bigl( G(\bfA\tau), G(\bfA\tau) \bigr) \ar[r,"\psi_{G(\bfA\tau)}"'] & K \bigl( G(\bfA\tau), G(\bfA\tau) \bigr) \ar[ur,"{K \bigl( \phi_\bfA, G(\bfA\tau) \bigr)}"'] \end{tikzcd} \] If we look at the upper leg of the above hexagon, we may wonder: is it, in fact, the general component of the vertical composition \begin{equation}\label{eqn:whiskering vs horizontal composition} \bigl(\hc {(F,\phi)} K \bigr) \circ \bigl( \hc F \psi \bigr) \circ \bigl( \hc {(\phi,F)} H \bigr), \end{equation} where by $\hc {(F,\phi)} K$ we mean the simultaneous horizontal composition of $\id K$ with $\id F$ in its first variable and with $\phi$ in its second, namely $\bigl(\HC {\id F} {\id K} 1 \bigr) \circ \bigl( \HC \phi {\id K} 2 \bigr) = \bigl( \HC \phi {\id K} 2 \bigr) \circ \bigl( \HC {\id F} 
{\id K} 1 \bigr)$? In other words, is horizontal composition a vertical composite of \emph{whiskerings}, analogously to the classical natural case? \emph{No}, but it is intimately related to it, as we show now by computing the composite~\eqref{eqn:whiskering vs horizontal composition}. Let $\bfA=(A_1,\dots,A_n)$, $\bfB=(B_1,\dots,B_{\length\alpha})$ and $\bfC=(C_1,\dots,C_{\length\alpha})$. Then \begin{multline*} \bigl(\hc {(\phi,F)} H \bigr)_{\bfA \concat \bfB} = \begin{tikzcd}[ampersand replacement=\&,column sep=4em] H \bigl( G(\bfA\tau), F(\bfB) \bigr) \ar[r,"{H\bigl( \phi_\bfA, F(\bfB)\bigr)}"] \& H \bigl( F(\bfA\sigma), F(\bfB) \bigr) \end{tikzcd} \\ (\hc F \psi)_\bfC = \begin{tikzcd}[ampersand replacement=\&,column sep=4em] H \bigl( F(\bfC), F(\bfC) \bigr) \ar[r,"\psi_{F(\bfC)}"] \& K \bigl( F(\bfC), F(\bfC) \bigr) \end{tikzcd} \end{multline*} Therefore, upon composing $\hc {(\phi,F)} H $ with $\hc F \psi$, we have to impose $\bfA\sigma = \bfC = \bfB$, which means $A_{\sigma_i} = C_i = B_i$ for all $i \in \length\alpha$. If we take also $\bfD=(D_1,\dots,D_{\length\alpha})$ and $\bfE=(E_1,\dots,E_n)$, we have: \begin{multline*} \Bigl(\bigl( \hc F \psi \bigr) \circ \bigl( \hc {(\phi,F)} H \bigr)\Bigr)_{\bfA} = \begin{tikzcd}[ampersand replacement=\&,column sep=6em] H \bigl( G(\bfA\tau), F(\bfA\sigma) \bigr) \ar[r,"{\psi_{F(\bfA\sigma)} \circ H \bigl( \phi_\bfA, F(\bfA\sigma)\bigr)}"] \& K \bigl( F(\bfA\sigma), F(\bfA\sigma) \bigr) \end{tikzcd} \\ \bigl(\hc {(F,\phi)} K \bigr)_{\bfD \concat \bfE} = \begin{tikzcd}[ampersand replacement=\&,column sep=3em] K \bigl( F(\bfD), F(\bfE\sigma) \bigr) \ar[r,"{K\bigl( F(\bfD), \phi_\bfE \bigr)}"] \& K \bigl( F(\bfD), G(\bfE\tau) \bigr) \end{tikzcd} \end{multline*} So, to compose $\bigl( \hc F \psi \bigr) \circ \bigl( \hc {(\phi,F)} H \bigr)$ with $\bigl(\hc {(F,\phi)} K \bigr)$, we only need $\bfD = \bfA\sigma = \bfE\sigma$. Crucially, for all $k \in n \setminus \mathrm{Im}(\sigma)$, $A_k$ and $E_k$ need not to be equal. 
This means that, if $l=\length{n \setminus \mathrm{Im}(\sigma)}$, the composite~\eqref{eqn:whiskering vs horizontal composition} is a transformation depending on $(n-l) + 2 \cdot l$ variables (equivalently, $n+l$ variables), where the two instances of $\phi$, at the beginning and at the end of the general component of~\eqref{eqn:whiskering vs horizontal composition}, are computed in variables $\bfA$ and $\bfE$ respectively, where $A_{\sigma i} = E_{\sigma i}$ for all $i \in \length\alpha$, and possibly $A_k \ne E_k$ for $k \in n \setminus \mathrm{Im}(\sigma)$. Instead, the horizontal composition $\hc \phi \psi$ uses \emph{the same} general component of $\phi$ in both instances, as it is obtained by ``applying the dinaturality condition of $\psi$ to $\phi_\bfA$'': it requires a stronger constraint, namely $A_k = E_k$ for all $k\in n$. This means that $\hc \phi \psi$ is a sub-family of~\eqref{eqn:whiskering vs horizontal composition}, and it coincides with it precisely when $\sigma$ is surjective. Analogously, $\hc \phi \psi$ is in general a sub-family of \[ \bigl(\hc {(\phi,G)} K \bigr) \circ \bigl( \hc G \psi \bigr) \circ \bigl( \hc {(G,\phi)} H \bigr) \] and they coincide precisely when $\tau$ is surjective. This issue is part of the wider problem of incompatibility of horizontal and vertical composition, which we discuss in the next section. \begin{remark} If $\phi$ and $\psi$ are classical dinatural transformations as in Definition~\ref{def:horCompDef}, $\hc \phi \psi$ is indeed equal to the vertical composite of whiskerings~\eqref{eqn:whiskering vs horizontal composition}.
In this case, one can alternatively infer the dinaturality of $\hc \phi \psi$ (Theorem~\ref{thm:horCompTheorem}) from the easy-to-check dinaturality of $\hc {(\phi,F)} H$, $\hc F \psi$ and $\hc {(F,\phi)} K$ by applying Theorem~\ref{theorem:compositionality with complicated graphs}: one can draw the composite graph of three whiskerings (Figure~\ref{fig:composite graph whiskerings}) and notice that the resulting Petri Net is acyclic. The trellis of commutative diagrams in Figure~\ref{fig:DinaturalityHorizontalCompositionFigure}, which proves Theorem~\ref{thm:horCompTheorem}, is the algebraic counterpart of firing transitions in Figure~\ref{fig:composite graph whiskerings}: the dinaturality of $\phi$ corresponds to firing the top-left and bottom-right transitions, while the dinaturality of $\psi$ to firing the two transitions in the middle layer. \end{remark} \begin{figure} \centering \begin{tikzpicture} \matrix[row sep=1.5em,column sep=1.5em]{ \node[category] (1) {}; & & \node[opCategory] (2) {}; & & \node[opCategory] (3) {}; & & \node[category] (4) {}; \\ & \node[component] (5) {}; & & & \node[component] (6) {}; & & \node[component] (7) {};\\ \node[category] (8) {}; & & \node[opCategory] (9) {}; & & \node[opCategory] (10) {}; & & \node[category] (11) {}; \\ & &\node[component] (12) {}; & & \node[component] (13) {};\\ \node[category] (14) {}; & & \node[opCategory] (15) {}; & & \node[opCategory] (16) {}; & & \node[category] (17) {}; \\ \node[component] (18) {}; & & \node[component] (19) {}; & & & \node[component] (20) {};\\ \node[category] (21) {}; & & \node[opCategory] (22) {}; & & \node[opCategory] (23) {}; & & \node[category] (24) {}; \\ }; \graph[use existing nodes]{ 1 -> 5 -> 8 -> 12 -> 14 -> 18 -> 21; 22 -> 19 -> 15 -> 13 -> 9 -> 5 -> 2; 23 -> 20 -> 16 -> 12 -> 10 -> 6 -> 3; 4 -> 7 -> 11 -> 13 -> 17 -> 20 -> 24; }; \end{tikzpicture} \caption{The composite graph of the three whiskerings in~\eqref{eqn:whiskering vs horizontal composition} for $\phi$ and $\psi$ 
classical dinatural transformations.} \label{fig:composite graph whiskerings} \end{figure} \subsection{(In?)Compatibility with vertical composition}\label{section compatibility} Looking at the classical natural case, there is one last property to analyse: the \emph{interchange law}~\cite{mac_lane_categories_1978}. In the following situation, \[ \begin{tikzcd}[column sep=1.5cm] \A \arrow[r, out=60, in=120, ""{name=U, below}] \arrow[r, ""{name=D, }] \arrow[r,phantom,""{name=D1,below}] \arrow[r, bend right=60,""{name=V,above}] & \B \arrow[r, bend left=60, ""{name=H, below}] \arrow[r,""{name=E}] \arrow[r,phantom,""{name=E1,below}] \arrow[r, bend right=60,""{name=K,above}] & \C \arrow[Rightarrow, from=U, to=D,"\phi"] \arrow[Rightarrow, from=D1, to=V,"\psi"] \arrow[Rightarrow, from=H, to=E, "\phi'"] \arrow[Rightarrow, from=E1,to=K, "\psi'"] \end{tikzcd} \] with $\phi,\phi',\psi$ and $\psi'$ natural transformations, we have: \begin{equation}\label{interchange law} \hc {(\psi \circ \phi)} {(\psi' \circ \phi')} = (\hc \psi {\psi'}) \circ (\hc \phi {\phi'}). \tag{$\dagger$} \end{equation} The interchange law is the crucial property that makes $\C\mathrm{at}$ a 2-category. It is then certainly highly interesting to wonder whether a similar property holds for the more general notion of horizontal composition for dinatural transformations too. As we know all too well, dinatural transformations are far from being as well-behaved as natural transformations, given that they do not, in general, vertically compose; on the other hand, their horizontal composition always works just fine. Are these two operations compatible, at least when vertical composition is defined? The answer, unfortunately, is \emph{No}, at least if by ``compatible'' we mean ``compatible as in the natural case (\ref{interchange law})''. 
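For classical natural transformations, the law (\ref{interchange law}) can be observed concretely. The following Python sketch (the encoding is ours, not part of the article's formalism) takes all four functors to be the list functor and uses position-rearranging transformations, which are natural because they commute with the list functor's action on any function:

```python
fmap = lambda f, xs: [f(x) for x in xs]   # action of the list functor on morphisms

# Natural transformations List => List: pure rearrangements of positions,
# hence natural (they commute with fmap f for every f).
phi  = lambda xs: xs[::-1]         # reverse
psi  = lambda xs: xs[1:] + xs[:1]  # rotate left
phi2 = lambda xs: xs + xs          # duplicate
psi2 = lambda xs: xs[::2]          # keep even positions

def hc(inner, outer):
    # Horizontal composition (outer * inner) when all four functors are List:
    # either leg of the naturality square, here  fmap(inner) . outer.
    return lambda xss: fmap(inner, outer(xss))

compose = lambda g, f: (lambda x: g(f(x)))  # vertical composition

xss = [[1, 2, 3], [4, 5], [6]]
lhs = hc(compose(psi, phi), compose(psi2, phi2))(xss)   # (psi.phi) * (psi2.phi2)
rhs = compose(hc(psi, psi2), hc(phi, phi2))(xss)        # (psi*psi2) . (phi*phi2)
assert lhs == rhs == [[2, 1, 3], [6], [4, 5]]
```

The assertion holds for any choice of natural transformations and inputs here; in the dinatural case analysed next, the two sides of (\ref{interchange law}) cannot even be formed.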
Indeed, consider classical dinatural transformations \begin{equation}\label{compatibility situation} \begin{tikzcd}[column sep=0.75cm] \Op\A \times \A \arrow[rr, bend left=60, ""{name=U,below}] \arrow[rr, phantom, bend left=60, "F"{above}] \arrow[rr, "G"{name=D,anchor=center,fill=white,pos=0.34}] \arrow[rr, bend right=60,""{name=V,above}] \arrow[rr, bend right=60,"H"{below}] & & \B & \Op\B \times \B \arrow[rr, bend left=60, ""{name=H, below,}] \arrow[rr,phantom, bend left=60, "J"{above}] \arrow[rr,"K"{name=E,anchor=center,fill=white},pos=0.35] \arrow[rr, bend right=60,""{name=K,above}] \arrow[rr,phantom, bend right=60,"L"{below}] & & \C \arrow[Rightarrow, from=U, to=D,"\phi"] \arrow[Rightarrow, from=D, to=V,"\psi"] \arrow[Rightarrow, from=H, to=E, "\phi'"] \arrow[Rightarrow, from=E,to=K, "\psi'"] \end{tikzcd} \end{equation} such that $\psi\circ\phi$ and $\psi'\circ\phi'$ are dinatural. Then \[ \hc \phi {\phi'} \colon J(\Op G,F) \to K(\Op F,G) \qquad \hc \psi {\psi'} \colon K(\Op H,G) \to L(\Op G,H) \] which means that $\hc \phi {\phi'}$ and $\hc \psi {\psi'}$ are not even composable as families of morphisms, as the codomain of the former is not the domain of the latter. The problem stems from the fact that the codomain of the horizontal composition $\hc \phi {\phi'}$ depends on the codomain of $\phi'$ and also on the domain \emph{and} codomain of $\phi$, which are not the same as the domain and codomain of $\psi$: indeed, in order to be vertically composable, $\phi$ and $\psi$ must share only one functor, and not both. This does not happen in the natural case: the presence of mixed variance, which forces us to consider the codomain of $\phi$ in $\hc \phi {\phi'}$ and so on, is the real culprit here.
The failure of (\ref{interchange law}) is not completely unexpected: after all, our definition of horizontal composition is strictly more general than the classical one for natural transformations, as it considerably extends the range of functors and transformations it can be applied to. Hence it is not surprising that this comes at the cost of losing one of its properties, albeit such a desirable one. Of course, one can wonder whether a different definition of horizontal composition exists for which (\ref{interchange law}) holds. Although we cannot exclude this possibility \emph{a priori}, the fact that ours not only is a very natural generalisation of the classical definition for natural transformations (as it follows the same idea, see discussion after Definition~\ref{def:horizontal composition natural transformations}), but also enjoys associativity and unitarity, leads us to think that we \emph{do} have the right definition at hand. (As a side point, behold Figure~\ref{fig:DinaturalityHorizontalCompositionFigure}: its elegance cannot be the fruit of a wrong definition!) What we suspect, instead, is that a different \emph{interchange law} should be formulated, one that can accommodate the hexagonal shape of the dinaturality condition. Indeed, what proves (\ref{interchange law}) in the natural case is the naturality of either $\phi'$ or $\psi'$. For instance, the following diagrammatic proof uses the latter, for $\phi \colon F \to G$, $\psi \colon G \to H$, $\phi' \colon J \to K$, $\psi' \colon K \to L$ natural: \[ \begin{tikzcd}[row sep=2.5em,column sep=2.5em,font=\small] JF(A) \ar[r,"{\phi'_{F(A)}}"] \ar[rd,dashed,"{(\hc \phi {\phi'})_A}"'] & KF(A) \ar[r,"\psi'_{F(A)}"] \ar[d,"K(\phi_A)"'] & LF(A) \ar[d,"L(\phi_A)"] \\ & KG(A) \ar[r,"\psi'_{G(A)}"] \ar[dr,dashed,"{(\hc \psi {\psi'})}_A"'] & LG(A) \ar[d,"L(\psi_A)"] \\ & & LH(A) \end{tikzcd} \] (The upper leg of the diagram is $ \hc {(\psi \circ \phi)} {(\psi' \circ \phi')}$.)
The naturality condition of $\psi'$ is what causes $\phi$ and $\psi'$ to swap places, allowing $\phi$ and $\phi'$ to interact with each other via horizontal composition; similarly for $\psi$ and $\psi'$. However, for $\phi, \psi, \phi',\psi'$ dinatural as in (\ref{compatibility situation}), this does not happen: \[ \begin{tikzcd}[column sep=.5cm,font=\small] & & J(F,F) \ar[r,"\phi'"] & K(F,F) \ar[r,"\psi'"] & L(F,F) \ar[dr,"{L(1,\phi)}"] \\ & J(G,F) \ar[ur,"{J(\phi,1)}"] \ar[rrrr,dashed,"\hc {\phi} {({\psi'}\circ{\phi'})}"] & & & & L(F,G) \ar[dr,"{L(1,\psi)}"]\\ J(H,F) \ar[ur,"{J(\psi,1)}"] & & & & & & L(F,H) \end{tikzcd} \] Here, the upper leg of the diagram is again $\hc {(\psi \circ \phi)} {(\psi' \circ \phi')}$; we have dropped the subscripts of the transformations and we have written ``$J(H,F)$'' instead of ``$J(H(A,A),F(A,A))$'' to save space. The dinaturality conditions of $\phi'$ and $\psi'$ do not allow a place-swap for $\phi$ and $\phi'$ or for $\phi$ and $\psi'$; in fact, they cannot be applied at all! The only thing we can notice is that we can isolate $\phi$ from $\phi'$, obtaining the following: \[ \hc {(\psi \circ \phi)} {(\psi' \circ \phi')} = L(1,\psi) \circ \Bigl(\hc {\phi} {({\psi'}\circ{\phi'})}\Bigr) \circ J(\psi,1). \] Notice that the right-hand side is \emph{not} $\hc \psi {\Bigl(\hc {\phi} {({\psi'}\circ{\phi'})}\Bigr)}$, as one might suspect at first glance, simply because the domain of $\hc {\phi} {({\psi'}\circ{\phi'})}$ is not $J$ and its codomain is not $L$. It is clear, then, that the mere assumption that $\psi\circ\phi$ and $\psi'\circ\phi'$ are dinatural (for whatever reason) is not enough.
One chance of success could come from involving the graphs of our transformations; for example, if the composite graphs $\graph\psi \circ \graph\phi$ and $\graph{\psi'} \circ \graph{\phi'}$ are acyclic (hence dinatural, but for a ``good'' reason), then perhaps we could deduce a suitably more general, ``hexagonal'' version of (\ref{interchange law}) for dinatural transformations. It may also well be, of course, that there is simply no interchange law at all. This is still an open question, and a matter for further study. In the conclusions we shall make some additional comments in light of the calculus we will build in the rest of the article. \section{A category of partial dinatural transformations} Since dinatural transformations do not always compose, they do not form a category. However, the work done in Section~\ref{section vertical compositionality} permits us to define a category whose objects are functors of mixed variance and whose morphisms are transformations that are dinatural only in \emph{some} of their variables, as we shall see.
A first attempt would be to construct $\fc \B \C$ by defining: \begin{itemize}\label{first attempt} \item objects: pairs $(\alpha, F \colon \B^\alpha \to \C)$; \item morphisms: a morphism $(\alpha, F) \to (\beta, G)$ would be a tuple $(\phi, \graph\phi, \Delta_\phi)$ where $\phi \colon F \to G$ is a transformation whose standard graph is $\graph\phi$, and if $n$ is the number of connected components of $\graph\phi$ (hence, the number of variables of $\phi$), then $\Delta_\phi \colon n \to \{0,1\}$ would be the ``discriminant'' function that tells us in which variables $\phi$ is dinatural: if $\Delta_\phi(i)=1$, then $\phi$ is dinatural in its $i$-th variable; \item composition: given $(\phi,\graph\phi,\Delta_\phi) \colon (\alpha,F) \to (\beta,G)$ and $(\psi,\graph\psi,\Delta_\psi) \colon (\beta,G) \to (\gamma,H)$ morphisms, their composite would be $(\psi\circ\phi,\graph{\psi\circ\phi},\Delta_{\psi\circ\phi})$, where $\psi\circ\phi$ is simply the vertical composition of transformations $\phi$ and $\psi$, $\graph{\psi\circ\phi}$ is its standard graph, and $\Delta_{\psi\circ\phi}(x)$ is defined to be $1$ if and only if the $x$-th connected component of $\graph\psi \circ \graph\phi$ is acyclic and $\phi$ and $\psi$ are dinatural in all variables involved in the $x$-th connected component of the composite graph $\graph{\psi}\circ\graph\phi$, in the sense of Theorem~\ref{theorem:acyclicity implies dinaturality GENERAL}. \end{itemize} However, composition so defined fails to be associative in $\Delta$. 
Suppose we have three consecutive transformations $\phi$, $\psi$ and $\chi$, dinatural in all their variables, where \[ \graph\phi = \begin{tikzpicture} \matrix[column sep=2.4mm,row sep=0.4cm]{ & \node [category] (1) {}; & & & & \node [opCategory] (6) {}; \\ & \node [component] (A) {}; & & & & \node [component] (B) {};\\ \node [category] (2) {}; & & \node [category] (3){}; & & \node [opCategory] (5) {}; & & \node [opCategory] (7) {};\\ }; \graph[use existing nodes]{ 1 -> A -> {2,3}; 5 -> B -> 6; 7 -> B; }; \end{tikzpicture} \quad \graph\psi = \begin{tikzpicture} \matrix[column sep=2.4mm,row sep=0.4cm]{ \node [category] (2) {}; & & \node [category] (3){}; & & \node [opCategory] (5) {}; & & \node [opCategory] (7) {};\\ \node [component] (C) {}; & & & \node [component] (D) {}; & & &\node [component] (E) {}; \\ \node [category] (4) {}; & & & & & & \node [opCategory] (8) {};\\ }; \graph[use existing nodes]{ 2 -> C -> 4; 3 -> D -> 5; 8 -> E -> 7; }; \end{tikzpicture} \quad \graph\chi = \begin{tikzpicture} \matrix[column sep=2.4mm,row sep=0.4cm]{ \node[category](2){}; & & & & \node[opCategory](5){}; \\ & & \node[component](B){};\\ }; \graph[use existing nodes]{ 2 -> B -> 5; }; \end{tikzpicture} \] Of course vertical composition of transformations \emph{is} associative, therefore $(\chi \circ \psi) \circ \phi = \chi \circ (\psi \circ \phi)$ and $\graph{(\chi \circ \psi) \circ \phi} = \graph{\chi \circ (\psi \circ \phi)}$. 
Yet, $\Delta_{(\chi \circ \psi) \circ \phi} \ne \Delta_{\chi \circ (\psi \circ \phi)}$: indeed, by computing $\graph\chi \circ \graph\psi$ and then collapsing the connected components, we obtain \[ \graph{\chi\circ\psi} = \begin{tikzpicture} \matrix[column sep=2.4mm,row sep=0.4cm]{ \node [category] (1) {}; & \node[category] (7) {}; & & \node[opCategory] (8) {}; & \node[opCategory] (6) {}; \\ & & \node[component](D){}; \\ & & \node[component](B){};\\ }; \graph[use existing nodes]{ 1 -> B -> 6; 7 -> D -> 8; }; \end{tikzpicture} \quad \text{hence } \graph{\chi \circ \psi} \circ \graph\phi = \begin{tikzpicture} \matrix[column sep=2.4mm,row sep=0.4cm]{ &\node[category](1){}; & & & &\node[opCategory](6){};\\ &\node[component](A){};& & & &\node[component](C){};\\ \node[category](2){}; & &\node[category](7){}; & &\node[opCategory](8){}; & &\node[opCategory](5){};\\ & & &\node[component](D){};\\ & & &\node[component](B){};\\ }; \graph[use existing nodes]{ 1 -> A -> 2 -> B ; B -> 5 -> C -> 6; A -> 7 -> D -> 8 -> C; }; \end{tikzpicture} \] Since $\graph{\chi \circ \psi} \circ \graph\phi$ is acyclic, we have that $(\chi\circ\psi)\circ\phi$ is dinatural, thus $\Delta_{(\chi\circ\psi)\circ\phi} \colon 1 \to \{0,1\}$ is the function returning 1. 
On the other hand, however, we have \[ \graph\psi \circ \graph\phi = \begin{tikzpicture} \matrix[column sep=2.4mm,row sep=0.4cm]{ & \node [category] (1) {}; & & & & \node [opCategory] (6) {}; \\ & \node [component] (A) {}; & & & & \node [component] (B) {};\\ \node [category] (2) {}; & & \node [category] (3){}; & & \node [opCategory] (5) {}; & & \node [opCategory] (7) {};\\ \node [component] (C) {}; & & & \node [component] (D) {}; & & &\node [component] (E) {}; \\ \node [category] (4) {}; & & & & & & \node [opCategory] (8) {};\\ }; \graph[use existing nodes]{ 1 -> A -> {2,3}; 2 -> C -> 4; 3 -> D -> 5 -> B -> 6; 8 -> E -> 7 -> B; }; \end{tikzpicture} \quad \text{so } \graph{\psi\circ\phi} = \begin{tikzpicture} \matrix[column sep=3.5mm,row sep=0.4cm]{ \node [category] (1) {}; & & \node [opCategory] (2) {}; \\ & \node [component] (A) {}; \\ \node [category] (3) {}; & & \node [opCategory] (4) {}; \\ }; \graph[use existing nodes]{ 1 -> A -> 3; 4 -> A -> 2; }; \end{tikzpicture} \] which means that, when we glue together $\graph\chi$ and $\graph{\psi\circ\phi}$, we obtain: \[ \graph{\chi}\circ\graph{\psi\circ\phi}= \begin{tikzpicture} \matrix[column sep=3.5mm,row sep=0.4cm]{ \node[category](1){}; & & \node[opCategory](6){};\\ &\node[component](A){};\\ \node[category](2){}; & & \node[opCategory](5){};\\ &\node[component](B){};\\ }; \graph[use existing nodes]{ 1->A->2->B->5->A->6; }; \end{tikzpicture} \] which is cyclic, so $\Delta_{\chi\circ(\psi\circ\phi)} \colon 1 \to \{0,1\}$ returns 0. What went wrong? In the graph of $\psi\circ\phi$ there is a path from the bottom-right node to the bottom-left node, which then extends to a cycle once connected to $\graph{\chi}$. That path was created upon collapsing the composite graph $\graph\psi \circ \graph\phi$ into $\graph{\psi \circ \phi}$: but in $\graph\psi \circ \graph\phi$ there was no path from the bottom-right node to the bottom-left one. 
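This combinatorial failure can be replayed mechanically. The following Python sketch (the edge-list encoding is ours, not part of the formalism) builds the standard graphs of $\phi$, $\psi$ and $\chi$ above, composes and collapses them in the two orders, and tests for directed cycles:

```python
from collections import defaultdict

def compose(g1, g2):
    # Glue g2 below g1: g1's bottom interface places are identified with
    # g2's top interface places (here simply by sharing names).
    return {"edges": g1["edges"] + g2["edges"],
            "top": g1["top"], "bottom": g2["bottom"]}

def is_cyclic(g):
    # Standard depth-first search for a directed cycle.
    adj, nodes = defaultdict(list), set()
    for u, v in g["edges"]:
        adj[u].append(v)
        nodes.update((u, v))
    WHITE, GREY, BLACK = 0, 1, 2
    colour = dict.fromkeys(nodes, WHITE)
    def dfs(u):
        colour[u] = GREY
        for v in adj[u]:
            if colour[v] == GREY or (colour[v] == WHITE and dfs(v)):
                return True
        colour[u] = BLACK
        return False
    return any(colour[n] == WHITE and dfs(n) for n in nodes)

def collapse(g):
    # Collapse every weakly connected component of transitions and internal
    # places into a single transition, keeping only the interface places:
    # this is how the standard graph of a vertical composite is formed.
    interface = g["top"] | g["bottom"]
    und, inner = defaultdict(set), set()
    for u, v in g["edges"]:
        if u not in interface and v not in interface:
            und[u].add(v); und[v].add(u)
        inner.update(n for n in (u, v) if n not in interface)
    comp, k = {}, 0
    for n in sorted(inner):
        if n in comp:
            continue
        stack = [n]
        while stack:
            m = stack.pop()
            if m not in comp:
                comp[m] = k
                stack.extend(und[m])
        k += 1
    edges = {(u, "T%d" % comp[v]) if u in interface else ("T%d" % comp[u], v)
             for u, v in g["edges"] if (u in interface) != (v in interface)}
    return {"edges": sorted(edges), "top": g["top"], "bottom": g["bottom"]}

# Standard graphs of phi, psi, chi from the example above (places are
# digits, transitions are letters; edges run downwards on covariant
# strands and upwards on contravariant ones).
phi = {"edges": [("1","A"),("A","2"),("A","3"),("5","B"),("7","B"),("B","6")],
       "top": {"1","6"}, "bottom": {"2","3","5","7"}}
psi = {"edges": [("2","C"),("C","4"),("3","D"),("D","5"),("8","E"),("E","7")],
       "top": {"2","3","5","7"}, "bottom": {"4","8"}}
chi = {"edges": [("4","X"),("X","8")], "top": {"4","8"}, "bottom": set()}

left  = compose(collapse(compose(phi, psi)), chi)  # graph(chi) . graph(psi.phi)
right = compose(phi, collapse(compose(psi, chi)))  # graph(chi.psi) . graph(phi)
assert is_cyclic(left) and not is_cyclic(right)
```

The first composite is cyclic while the second is not, exactly as computed above, even though the uncollapsed triple composite is acyclic.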
And rightly so: to get a token to the bottom-left vertex of $\graph\psi \circ \graph\phi$, there is no need to put a token in the bottom-right vertex. Therefore, once we have formed $\graph{\psi\circ\phi}$, we have lost crucial information about which sources and sinks are \emph{directly} connected with which others, because we have collapsed the entire connected component into a single internal transition, with no internal places. When the composite graph is computed in the other order, instead, no new paths are created, hence no cycles appear where there should not be any. After all, by Theorem~\ref{theorem:acyclicity implies dinaturality GENERAL} we know that $\chi \circ \psi \circ \phi$ is dinatural because it can be written as the composite of two dinatural transformations, namely $\chi \circ \psi$ and $\phi$, whose composite graph is acyclic. This tells us the crucial reason why associativity fails in our preliminary definition of the category $\fc \B \C$: keeping track only of which connected component each of the arguments of the domain and codomain functors belongs to is not enough, as we are forgetting too much information, namely the paths that directly connect the white and grey boxes. Hence our transformations will have to be equipped with Petri Nets more complicated than their standard graph, ones that do contain internal places; upon composition we shall simply link the graphs together along the common interface, without collapsing entire connected components into a single transition. Recall from Definition~\ref{def:FBCF petri net} that an FBCF Petri Net is a net where all the places have at most one input and at most one output transition. We now introduce the category of FBCF Petri Nets, using the usual definition of morphism for bipartite graphs.
\begin{definition} The category $\PN$ consists of the following data: \begin{itemize} \item objects are FBCF Petri Nets $N=(P_N,T_N,\inp{},\out{})$ together with a fixed ordering of its connected components. Such an ordering will allow us to speak about the ``$i$-th connected component'' of $N$; \item a morphism $f \colon N \to M$ is a pair of functions $(f_P,f_T)$, for $f_P \colon P_N \to P_M$ and $f_T \colon T_N \to T_M$, such that for all $t \in T_N$ \[ \inp{f_T(t)} = \{f_P(p) \mid p \in \inp t \} \quad \text{and} \quad \out{f_T(t)} = \{ f_P(p) \mid p \in \out t \}. \] \end{itemize} \end{definition} Note that if $f \colon N \to M$ is a morphism in $\PN$ then $f$ preserves (undirected) paths, hence for $C$ a connected component of $N$ we have that $f(C)$ is connected. In particular, if $f$ is an isomorphism then $f(C)$ is a connected component of $M$. \begin{remark}\label{remark:finite sets are in PN} We have a canonical inclusion $\finset \to \PN$ by seeing a set as a Petri Net with only places and no transitions. \end{remark} For a function $x \colon A \to B$ of sets we call $\parts x \colon \parts A \to \parts B$ the action of the covariant powerset functor on $x$, that is the function such that $\parts x (S) = \{x(a) \mid a \in S\} $ for $S \subseteq A$. We then have that if $f \colon N \to M$ is a morphism in $\PN$, then \[ \begin{tikzcd} T_N \ar[r,"f_T"] \ar[d,"\inp{}"'] & T_M \ar[d,"\inp{}"] \\ \parts{P_N} \ar[r,"\parts{f_P}"] & \parts{P_M} \end{tikzcd} \quad \text{and} \quad \begin{tikzcd} T_N \ar[r,"f_T"] \ar[d,"\out{}"'] & T_M \ar[d,"\out{}"] \\ \parts{P_N} \ar[r,"\parts{f_P}"] & \parts{P_M} \end{tikzcd} \] commute by definition of the category $\PN$. It turns out that $\PN$ admits pushouts, hence we can form a category $\cospan\PN$. 
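The defining condition on morphisms of $\PN$ is straightforward to verify mechanically. Here is a minimal Python sketch (our dictionary encoding of the flow relations $\inp{}$ and $\out{}$ is hypothetical):

```python
def is_pn_morphism(N, M, f_P, f_T):
    # Check the defining condition of a morphism in PN: the image under f_P
    # of the input (resp. output) places of each transition t must be exactly
    # the set of input (resp. output) places of f_T(t).
    for t, pre in N["inp"].items():
        if M["inp"][f_T[t]] != {f_P[p] for p in pre}:
            return False
    for t, post in N["out"].items():
        if M["out"][f_T[t]] != {f_P[p] for p in post}:
            return False
    return True

# N: one transition t with input place a and output place b.
N = {"inp": {"t": {"a"}}, "out": {"t": {"b"}}}
# M: one transition u with input place x and output place y.
M = {"inp": {"u": {"x"}}, "out": {"u": {"y"}}}

assert is_pn_morphism(N, M, {"a": "x", "b": "y"}, {"t": "u"})      # valid
assert not is_pn_morphism(N, M, {"a": "y", "b": "x"}, {"t": "u"})  # swaps pre and post
```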
\begin{proposition}\label{prop: pushouts in PN} Let $N,M,L$ be in $\PN$, and consider the following diagram in $\PN$: \begin{equation}\label{diagram: pushout in PN} \begin{tikzcd}[column sep=3em] (P_N,T_N,\Inp {N} ,\Out {N} ) \ar[r,"{(g_P,g_T)}"] \ar[d,"{(f_P,f_T)}"'] & (P_L,T_L,\Inp L,\Out L ) \ar[d,"{(k_P,k_T)}"] \\ (P_M,T_M,\Inp M,\Out M) \ar[r,"{(h_P,h_T)}"] & (P_Q,T_Q,\Inp Q,\Out Q) \end{tikzcd} \end{equation} where \[ \begin{tikzcd} P_N \ar[r,"g_P"] \ar[d,"f_P"'] & P_L \ar[d,"k_P"] \\ P_M \ar[r,"h_P"] & P_Q \end{tikzcd} \quad \text{and} \quad \begin{tikzcd} T_N \ar[r,"g_T"] \ar[d,"f_T"'] & T_L \ar[d,"k_T"] \\ T_M \ar[r,"h_T"] & T_Q \end{tikzcd} \] are pushouts and $\Inp Q \colon T_Q \to \parts{P_Q}$ is the unique map (the dashed one) that makes the following diagram commute: \[ \begin{tikzcd} \parts{P_N} \ar[dddr,bend angle=20,bend right,"\parts{f_P}"'] \ar[rrrd,bend angle=20,bend left,"\parts{g_P}"] \\ & T_N \ar[r,"g_T"] \ar[d,"f_T"'] \ar[ul,"\Inp N"] & T_L \ar[d,"k_T"] \ar[r,"\Inp L"] & \parts{P_L} \ar[dd,"\parts{k_P}"] \\ & T_M \ar[r,"h_T"] \ar[d,"\Inp M"] & T_Q \ar[dr,dashed] \\ & \parts{P_M} \ar[rr,"\parts{h_P}"] & & \parts{P_Q} \end{tikzcd} \] $\Out Q \colon T_Q \to \parts{P_Q}$ is defined analogously. Then (\ref{diagram: pushout in PN}) is a pushout. \end{proposition} \begin{proof} It is easily checked that (\ref{diagram: pushout in PN}) satisfies the definition of pushout.\qed \end{proof} Recall from Remark~\ref{remark:finite sets are in PN} that finite sets can be seen as places-only Petri Nets: if $S$ is a set and $N$ is an object in $\PN$, then a morphism $f \colon S \to N$ in $\PN$ is a pair of functions $f=(f_P,f_T)$ where $f_T$ is the empty map $\emptyset \colon \emptyset \to T_N$. Hence, by a slight abuse of notation, we will refer to $f_P$ simply as $f$. 
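As an illustration of the special case of this pushout that will matter for composition, namely when the shared object is a places-only net (a finite set) with injective legs, here is a hypothetical sketch (the dictionary encoding and the helper name \texttt{glue} are our own, not from this work): places are identified along the interface, transitions remain disjoint, and their pre- and post-sets are pushed forward along the legs.

```python
# Illustrative only: nets are dicts {'P','T','inp','out'}.  We compute the
# pushout of  M <-rho- S -lam-> L  where S is a places-only net (a finite set)
# and rho, lam are injective: the places rho(s) and lam(s) are identified,
# everything else is kept disjoint by tagging with 'M' or 'L'.
def glue(M, L, S, rho, lam):
    repr_ = {('M', p): ('M', p) for p in M['P']}
    repr_.update({('L', p): ('L', p) for p in L['P']})
    for s in S:
        repr_[('L', lam[s])] = ('M', rho[s])   # identify the interface places
    h = lambda p: repr_[('M', p)]              # pushout leg  P_M -> P_Q
    k = lambda p: repr_[('L', p)]              # pushout leg  P_L -> P_Q
    Q = {'P': set(repr_.values()),
         'T': {('M', t) for t in M['T']} | {('L', t) for t in L['T']},
         'inp': {}, 'out': {}}
    for t in M['T']:
        Q['inp'][('M', t)] = {h(p) for p in M['inp'][t]}
        Q['out'][('M', t)] = {h(p) for p in M['out'][t]}
    for t in L['T']:
        Q['inp'][('L', t)] = {k(p) for p in L['inp'][t]}
        Q['out'][('L', t)] = {k(p) for p in L['out'][t]}
    return Q, h, k
```

Gluing the chain $a \to t_1 \to b$ onto $b' \to t_2 \to c$ along $b \sim b'$ yields a three-place chain whose shared place is internal, with one input and one output transition, so the result is again FBCF.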
For later convenience, we consider the following subcategory of $\cospan\PN$, whose morphisms are essentially Petri Nets $N$ in $\PN$ with ``interfaces'', that is specific places seen as ``inputs'' and ``outputs'' of $N$. Composition will then be computed by ``gluing together'' two consecutive nets along the common interface. \begin{definition}\label{definition:graph category} The category $\gc$ consists of the following data: \begin{itemize} \item objects are lists in $\List\{+,-\}$; \item morphisms $f \colon \alpha \to \beta$ are (equivalence classes of) cospans in $\PN$ of the form \[ \begin{tikzcd} \length\alpha \ar[r,"\lambda"] & N & \ar[l,"\rho"'] \length\beta \end{tikzcd} \] where \begin{itemize}[leftmargin=*] \item $\lambda \colon \length\alpha \to P_N$ and $\rho \colon \length\beta \to P_N$ are injective functions, hence we can see $\length\alpha$ and $\length\beta$ as subsets of $P_N$; \item $\mathit{sources}(N) = \{ \lambda(i) \mid \alpha_i=+ \} \cup \{ \rho(i) \mid \beta_i = - \}$; \item $\mathit{sinks}(N) = \{ \lambda(i) \mid \alpha_i=- \} \cup \{ \rho(i) \mid \beta_i = + \}$. \end{itemize} Two such cospans are in the same class if and only if they differ by an isomorphism of Petri Nets on $N$ coherent with $\lambda$, $\rho$ and the ordering of the connected components of $N$; \item composition is that of $\cospan\PN$. \end{itemize} \end{definition} \begin{proposition} Composition in $\gc$ is well defined. \end{proposition} \begin{proof} Consider $ \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\lambda"] & M & \ar[l,"\rho"'] \length\beta \end{tikzcd} $ and $ \begin{tikzcd}[cramped,sep=small] \length\beta \ar[r,"\lambda'"] & L & \ar[l,"\rho'"'] \length\gamma \end{tikzcd} $ two morphisms in $\gc$. 
By Proposition~\ref{prop: pushouts in PN} then, their composite is given by computing the pushouts \[ \begin{tikzcd} \length\beta \ar[r,"\lambda'"] \ar[d,"\rho"'] & P_L \ar[d,"k_P"] \\ P_M \ar[r,"h_P"] & P_Q \end{tikzcd} \quad \text{and} \quad \begin{tikzcd} \emptyset \ar[r,"\emptyset"] \ar[d,"\emptyset"'] & T_L \ar[d,"k_T"] \\ T_M \ar[r,"h_T"] & T_Q \end{tikzcd} \] Now, the injectivity of $\rho$ and $\lambda'$ implies that $k_P$ and $h_P$ are also injective, as the pushout (in $\Set$) of an injective map against another yields injective functions. $P_Q$, in particular, can be seen as the quotient of $P_M + P_L$ where the elements of $P_M$ and $P_L$ with a common pre-image in $\length\beta$ are identified. Next, the pushout of the empty map against itself yields the coproduct, thus $T_Q = T_M + T_L$ where $h_T$ and $k_T$ are the injections. Hence, the input function of the composite is defined as follows: \[ \begin{tikzcd}[row sep=0em,ampersand replacement=\&] T_M + T_L \ar[r,"\inp{}"] \& \parts{P_Q} \\ t \ar[r,|->] \& \begin{cases} \inp{_M(t)} & t \in T_M \\ \inp{_L(t)} & t \in T_L \end{cases} \end{tikzcd} \] and similarly for the output function. All in all, therefore, composition in $\gc$ is computed by ``gluing'' together the Petri Nets $M$ and $L$ along the common $\length\beta$-places; the resulting morphism of $\gc$ is \[ \begin{tikzcd}[column sep=3em] \length\alpha \ar[r,"h_P \circ \lambda"] & L \circ M & \length\gamma \ar[l,"k_P \circ \rho'"']. 
\end{tikzcd} \] Now, for all $i \in \length\beta$, if $\beta_i=+$ then $\rho(i)$ is a sink of $M$ and $\lambda'(i)$ a source of $L$; if $\beta_i=-$ instead then $\rho(i)$ is a source of $M$ and $\lambda'(i)$ a sink of $L$: in every case, once we glue together $M$ and $L$ along the $\length\beta$-places to form the composite net $L \circ M$, these become internal places of $L \circ M$, with at most one input and one output transition each (depending on whether they are proper sources or sinks in $M$ and $L$). Hence $L \circ M$ is still an FBCF Petri Net, and \begin{align*} \mathit{sources}(L \circ M) &= \bigl(\mathit{sources} (M) \setminus \rho(\length\beta) \bigr) \cup \bigl(\mathit{sources} (L)\setminus \lambda'(\length\beta) \bigr) \\ &= \{h_P \circ \lambda(i) \mid \alpha_i = +\} \cup \{k_P \circ \rho'(i) \mid \gamma_i =-\} \end{align*} and similarly for $\mathit{sinks}(L \circ M)$. \qed \end{proof} \paragraph{Generalised graphs of a transformation} We can now start working towards the definition of a category $\fc \B \C$ of functors of mixed variance and transformations that are dinatural only in some of their variables; $\fc \B \C$ will be a category over $\gcf$ in the sense that transformations in $\fc \B \C$ will carry along, as part of their data, certain cospans in $\PN$. The category of graphs $\gcf$ will be built from $\fc \B \C$ by forgetting the transformations. As such, $\gcf$ will be defined \emph{after} $\fc \B \C$. It is clear how to define the objects of $\fc \B \C$: they will be pairs $(\alpha,F \colon \B^\alpha \to \C)$. Morphisms are less obvious to define, as we learnt in our preliminary attempt on p.~\pageref{first attempt}. 
A morphism $(\alpha,F) \to (\beta,G)$ will consist of a transformation $\phi \colon F \to G$ of type $ \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd} $, but now together with a morphism $ \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma "] & N & \length\beta \ar[l,"\overline\tau "'] \end{tikzcd} $ in $\gc$ coherent with the type of $\phi$, in the sense that the Petri Net $N$, under certain conditions, looks exactly like $\graph\phi$ as in Definition~\ref{def:standard graph} except that it allows for internal places as well. For example, if $\psi_1$ and $\psi_2$ are two arbitrary consecutive transformations, $\graph{\psi_2} \circ \graph{\psi_1}$ will be coherent with the type of $\psi_2\circ\psi_1$. In other words, $N$ will have $n$ connected components, its sources (sinks) will be exactly the places corresponding to the positive (negative) entries of $\alpha$ and the negative (positive) entries of $\beta$, and elements in $\length\alpha$ ($\length\beta$) mapped by $\sigma$ ($\tau$) into the same $i \in \{1,\dots,n\}$ will belong to the $i$-th connected component of $N$. A priori $N$ can contain places with no inputs or outputs: this will be useful for the special case of $\phi = \id F$ as we shall see in Theorem~\ref{theorem: {B,C} is a category}; however, if all sources and sinks in $N$ are proper, then $N$ plays the role of a generalised $\graph{\phi}$. 
A cospan $ \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma"] & N & \length\beta \ar[l,"\overline\tau"'] \end{tikzcd} $ in $\PN$, which is a representative of a morphism in $\gc$ (hence $\overline\sigma$ and $\overline\tau$ are injective), is said to be \emph{coherent with the type of $\phi$} if and only if the following conditions are satisfied: \begin{itemize}[leftmargin=*] \item $N$ has $n$ connected components; \item for all $i \in \length\alpha$ and $j \in \length\beta$, $\overline\sigma(i)$ belongs to the $\sigma(i)$-th connected component of $N$ and $\overline\tau(j)$ belongs to the $\tau(j)$-th connected component of $N$. \end{itemize} In this case we say that $N$ is a \emph{generalised graph of $\phi$}. \end{definition} \begin{example}\label{example: graph and type are a generalised graph} For $\phi \colon F \to G$ a transformation of type $ \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd} $, recall that the set of places of $\graph\phi$ is $P = \length\alpha + \length\beta$. If we call $\injP {\length\alpha}$ and $\injP {\length\beta}$ the injections as in Definition~\ref{def:standard graph}, then \[ \begin{tikzcd} \length\alpha \ar[r,"\injP{\length\alpha}"] & \Gamma(\phi) & \ar[l,"\injP{\length\beta}"'] \length\beta \end{tikzcd} \] is indeed coherent with the type of $\phi$. Also $ \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd} $ itself, seen as a cospan in $\PN$, is coherent with itself. \end{example} \begin{remark} If $N$ is a generalised graph of $\phi$ as in the notations of Definition~\ref{definition: generalised graph of transformation} and does not have any place which is a source and a sink at once, then $N$ has exactly $\length\alpha + \length\beta$ sources and sinks and their union coincides with the joint image of $\overline\sigma$ and $\overline\tau$. 
Moreover, $\overline\sigma$ and $\overline\tau$ have to make sure that they map elements of their domain into places belonging to the correct connected component: in this way, $N$ reflects the type of $\phi$ in a Petri Net like $\graph\phi$, with the possible addition of internal places. \end{remark} We shall now show how composition in $\gc$ preserves generalised graphs, in the following sense. \begin{proposition}\label{proposition: composition in G preservers generalised graphs} Let $\phi \colon F \to G$ and $\psi \colon G \to H$ be transformations of type, respectively, $ \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd} $ and $ \begin{tikzcd}[cramped,sep=small] \length\beta \ar[r,"\eta"] & m & \length\gamma \ar[l,"\theta"'] \end{tikzcd} $; let also $u= \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma"] & N & \length\beta \ar[l,"\overline\tau"'] \end{tikzcd} $ and $v= \begin{tikzcd}[cramped,sep=small] \length\beta \ar[r,"\overline\eta"] & N' & \length\gamma \ar[l,"\overline\theta"'] \end{tikzcd} $ be cospans in $\PN$ coherent with the type of $\phi$ and $\psi$, respectively. Suppose the type of $\psi \circ \phi$ is given by \[ \begin{tikzcd} & & \length\gamma \ar[d,"\theta"] \\ & \length\beta \ar[d,"\tau"'] \ar[r,"\eta"] \ar[dr, phantom, "\ulcorner" very near start] & m \ar[d,"\xi"] \\ \length\alpha \ar[r,"\sigma"] & n \ar[r,"\zeta"] & l \end{tikzcd} \] and that the composite in $\gc$ of $u$ and $v$ is given by \begin{equation}\label{composite generalised graphs} \begin{tikzcd} & & \length\gamma \ar[d,"\overline\theta"] \\ & \length\beta \ar[d,"\overline\tau"'] \ar[r,"\overline\eta"] \ar[dr, phantom, "\ulcorner" very near start] & N' \ar[d,"\overline\xi"] \\ \length\alpha \ar[r,"\overline\sigma"] & N \ar[r,"\overline\zeta"] & N' \circ N \end{tikzcd} \end{equation} Then $v\circ u$ is coherent with the type of $\psi \circ \phi$. 
\end{proposition} \begin{proof} As we said in the discussion after Definition~\ref{definition:graph category}, $N' \circ N$ is obtained by gluing together $N$ and $N'$ along the $\length\beta$ places which they have in common. The number of connected components of $N' \circ N$ is indeed $l$ by construction. The morphisms $\overline\zeta$ and $\overline\xi$ in $\PN$ are pairs of injections that map each place and transition of $N$ and $N'$ to itself in the composite $N' \circ N$. This means that $\overline\zeta \overline\sigma(i)$ does belong to the $\zeta\sigma(i)$-th connected component of $N' \circ N$, as the latter contains the $\sigma(i)$-th c.c.\ of $N$; similarly $\overline\xi \overline\theta (j)$ belongs to the $\xi\theta(j)$-th c.c.\ of $N' \circ N$. \qed \end{proof} The morphisms of our generalised functor category $\fc \B \C$ will be, therefore, transformations $\phi$ equipped with a generalised graph $N$ and a discriminant function that tells us in which variables $\phi$ is dinatural. The Petri Net $N$ will not be arbitrary though: unless $\phi$ is an identity transformation, $N$ can be either $\graph\phi$ or $\graph{\phi_k} \circ \dots \circ \graph{\phi_1}$, for some consecutive transformations $\phi_1,\dots,\phi_k$ such that $\phi = \phi_k \circ \dots \circ \phi_1$. Therefore, only transformations which are \emph{explicitly} recognisable as the composite of two or more families of morphisms are allowed to have an associated Petri Net, containing internal places, that is not their standard graph. \begin{definition}\label{def: generalised functor category} Let $\B$ and $\C$ be categories. 
The \emph{generalised functor category} $\fc \B \C$ consists of the following data: \begin{itemize}[leftmargin=*] \item objects are pairs $(\alpha,F)$, for $\alpha \in \List\{+,-\}$ and $F \colon \B^\alpha \to \C$ a functor; \item morphisms $(\alpha,F) \to (\beta,G)$ are equivalence classes of tuples \[ \Phi = (\phi, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd}, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma"] & N & \length\beta \ar[l,"\overline\tau"'] \end{tikzcd}, \Delta_\Phi ) \] where: \begin{itemize}[leftmargin=*] \item $\phi \colon F \to G$ is a transformation of type $ \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd} $, \item $ \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma"] & N & \length\beta \ar[l,"\overline\tau"'] \end{tikzcd} $ is a representative of a morphism in $\gc$ coherent with the type of $\phi$, \item $\Delta_\Phi \colon n \to \{0,1\}$ is a function such that $\Delta_\Phi (i) = 1$ implies that the $i$-th connected component of $N$ is acyclic and $\phi$ is dinatural in its $i$-th variable. \end{itemize} Moreover: \begin{itemize}[leftmargin=*] \item If $N$ consists of $n$ places and no transitions, then $(\alpha,F) = (\beta,G)$, $\phi=\id F$, $\sigma=\tau=\overline\sigma=\overline\tau=\id{\length\alpha}$ and $\Delta_\Phi = K_1$, the constant function equal to $1$; in this case $\Phi$ is the identity morphism of the object $(\alpha,F)$. \item If $N = \graph\phi$, $\overline\sigma = \injP{\length\alpha}$ and $\overline\tau=\injP{\length\beta}$, we say that $\Phi$ is \emph{atomic}. \item If $N \ne \graph\phi$ and $\Phi \ne \id{(\alpha,F)}$, then there exist $\Phi_1, \dots, \Phi_k$ atomic such that $\Phi = \Phi_k \circ \dots \circ \Phi_1$ in $\fc \B \C$, according to the composition law to follow in this Definition. 
\end{itemize} We say that $\Phi \eq \Phi'$, for $\Phi' = (\phi', \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma'"] & n & \length\beta \ar[l,"\tau'"'] \end{tikzcd}, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline{\sigma'}"] & N' & \length\beta \ar[l,"\overline{\tau'}"'] \end{tikzcd}, \Delta_{\Phi'} )$, if and only if the transformations differ only by a permutation of their variables (in a coherent way with the rest of the data) and $N$ and $N'$ are coherently isomorphic: more precisely, when \begin{itemize}[leftmargin=*] \item there is a permutation $\pi \colon n \to n$ such that $\sigma'=\pi\sigma$, $\tau'=\pi\tau$, $\phi_{A_1,\dots,A_n}'=\phi_{A_{\pi 1},\dots,A_{\pi n}}$, $\Delta_{\Phi}=\Delta_{\Phi'} \pi$; \item there is an isomorphism $f=(f_P,f_T) \colon N \to N'$ in $\PN$ such that the following diagram commutes: \[ \begin{tikzcd} \length\alpha \ar[r,"\overline\sigma"] \ar[dr,"\overline{\sigma'}"'] & N \ar[d,"f"] & \length\beta \ar[l,"\overline\tau"'] \ar[dl,"\overline{\tau'}"] \\ & N' \end{tikzcd} \] mapping the $i$-th connected component of $N$ to the $\pi(i)$-th connected component of $N'$. 
\end{itemize} \item Composition of $\Phi$ as above and \[ \Psi = (\psi, \begin{tikzcd}[cramped,sep=small] \length\beta \ar[r,"\eta"] & m & \ar[l,"\theta"'] \length\gamma \end{tikzcd}, \begin{tikzcd}[cramped,sep=small] \length\beta \ar[r,"\overline\eta"] & N' & \ar[l,"\overline\theta"'] \length\gamma \end{tikzcd}, \Delta_\Psi ) \colon (\beta,G) \to (\gamma, H) \] is component-wise: it is the equivalence class of the tuple \begin{equation}\label{eqn:composition in {B,C}} \Psi \circ \Phi = ( \psi\circ\phi, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\zeta\sigma"] & l & \ar[l,"\xi\theta"'] \length\gamma \end{tikzcd}, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\zeta \overline\sigma"] & {N'} \circ N & \ar[l,"\overline\xi \overline\theta"'] \length\gamma \end{tikzcd}, \Delta_{\Psi\circ\Phi} ) \end{equation} where $\psi\circ\phi$ is the transformation of type given by the result of the pushout: \[ \begin{tikzcd} & & \length\gamma \ar[d,"\theta"] \\ & \length\beta \ar[d,"\tau"'] \ar[r,"\eta"] \ar[dr, phantom, "\ulcorner" very near start] & m \ar[d,"\xi"] \\ \length\alpha \ar[r,"\sigma"] & n \ar[r,"\zeta"] & l \end{tikzcd} \] $N' \circ N$ is computed by composing in $\gc$, that is by performing the pushout in $\PN$: \[ \begin{tikzcd} & & \length\gamma \ar[d,"\overline\theta"] \\ & \length\beta \ar[d,"\overline\tau"'] \ar[r,"\overline\eta"] \ar[dr, phantom, "\ulcorner" very near start] & N' \ar[d,"\overline\xi"] \\ \length\alpha \ar[r,"\overline\sigma"] & N \ar[r,"\overline\zeta"] & N' \circ N \end{tikzcd} \] and the discriminant $\Delta_{\Psi\circ\Phi} \colon l \to \{0,1\}$ is obtained by setting $\Delta_{\Psi\circ\Phi} (x) = 1$ if and only if the $x$-th connected component of $N'\circ N$ is acyclic \emph{and} for all $y \in \zeta^{-1}\{x\}$ and $z \in \xi^{-1}\{x\}$ we have that $\Delta_\Phi(y) = 1 = \Delta_\Psi(z)$. 
The latter condition is tantamount to asking that $\phi$ and $\psi$ are dinatural in all the variables involved by the $x$-th connected component of the composite graph ${N'}\circ N$ of $\psi\circ\phi$. \end{itemize} \end{definition} \begin{theorem}\label{theorem: {B,C} is a category} $\fc \B \C$ is indeed a category. \end{theorem} \begin{proof} First of all, if $\Phi$ and $\Psi$ as above are in $\fc \B \C$, it is not difficult to check that the equivalence class of $\Psi \circ \Phi$ as in~(\ref{eqn:composition in {B,C}}) does not depend on the choice of representatives for the classes of $\Phi$ and $\Psi$. Next, we aim to prove that $\Psi \circ \Phi$ is again a morphism of $\fc \B \C$. By Proposition~\ref{proposition: composition in G preservers generalised graphs} we have that ${N'} \circ N$ is a generalised graph for $\psi\circ\phi$. In order to prove that $\Delta_{\Psi\circ\Phi}$ correctly defines a morphism of $\fc \B \C$, that is that if $\Delta_{\Psi\circ\Phi}(i)=1$ then $\psi\circ\phi$ is indeed dinatural in its $i$-th variable, we first show that composition in $\fc \B \C$ is associative: once we have done that we will use Theorem~\ref{theorem:compositionality with complicated graphs} to conclude. 
Consider \begin{align*} \Phi_1 &= (\phi_1, \begin{tikzcd}[cramped,sep=small,ampersand replacement=\&] \length\alpha \ar[r,"\sigma_1"] \& n \& \length\beta \ar[l,"\tau_1"'] \end{tikzcd}, \begin{tikzcd}[cramped,sep=small,ampersand replacement=\&] \length\alpha \ar[r,"\overline{\sigma_1}"] \& N_1 \& \length\beta \ar[l,"\overline{\tau_1}"'] \end{tikzcd}, \Delta_\Phi ) \colon (\alpha, F) \to (\beta, G), \\ \Phi_2 &= ( \phi_2, \begin{tikzcd}[cramped,sep=small,ampersand replacement=\&] \length\beta \ar[r,"\sigma_2"] \& m \& \length\gamma \ar[l,"\tau_2"'] \end{tikzcd}, \begin{tikzcd}[cramped,sep=small,ampersand replacement=\&] \length\beta \ar[r,"\overline{\sigma_2}"] \& N_2 \& \length\gamma \ar[l,"\overline{\tau_2}"'] \end{tikzcd}, \Delta_{\Phi_2} ) \colon (\beta, G) \to (\gamma, H), \\ \Phi_3 &= ( \phi_3, \begin{tikzcd}[cramped,sep=small,ampersand replacement=\&] \length\gamma \ar[r,"\sigma_3"] \& p \& \length\delta \ar[l,"\tau_3"'] \end{tikzcd}, \begin{tikzcd}[cramped,sep=small,ampersand replacement=\&] \length\gamma \ar[r,"\overline{\sigma_3}"] \& N_3 \& \length\delta \ar[l,"\overline{\tau_3}"'] \end{tikzcd}, \Delta_{\Phi_3} ) \colon (\gamma,H) \to (\delta,K). 
\end{align*} We know that composition of cospans via pushout is associative, as well as composition of transformations; suppose therefore that $\phi_3 \circ \phi_2 \circ \phi_1$ has type given by: \[ \begin{tikzcd} & & & \length\delta \ar[d,"\tau_3"] \\ & & \length\gamma \ar[r,"\sigma_3"] \ar[d,"\tau_2"'] \ar[dr, phantom, "\ulcorner" very near start] & p \ar[d,"\xi_2"] \\ & \length\beta \ar[r,"\sigma_2"] \ar[d,"\tau_1"'] \ar[dr, phantom, "\ulcorner" very near start] & m \ar[r,"\zeta_2"] \ar[d,"\xi_1"'] \ar[dr, phantom, "\ulcorner" very near start] & q \ar[d,"\xi_3"] \\ \length\alpha \ar[r,"\sigma_1"] & n \ar[r,"\zeta_1"] & l \ar[r,"\zeta_3"] & r \end{tikzcd} \] and the generalised graph $N_3 \circ N_2 \circ N_1$ is obtained as the result of the following pushout-pasting: \[ \begin{tikzcd} & & & \length\delta \ar[d,"\overline{\tau_3}"] \\ & & \length\gamma \ar[r,"\overline{\sigma_3}"] \ar[d,"\overline{\tau_2}"'] \ar[dr, phantom, "\ulcorner" very near start] & N_3 \ar[d,"\overline{\xi_2}"] \\ & \length\beta \ar[r,"\overline{\sigma_2}"] \ar[d,"\overline{\tau_1}"'] \ar[dr, phantom, "\ulcorner" very near start] & N_2 \ar[r,"\overline{\zeta_2}"] \ar[d,"\overline{\xi_1}"'] \ar[dr, phantom, "\ulcorner" very near start] & N_3 \circ N_2 \ar[d,"\overline{\xi_3}"] \\ \length\alpha \ar[r,"\overline{\sigma_1}"] & N_1 \ar[r,"\overline{\zeta_1}"] & N_2 \circ N_1 \ar[r,"\overline{\zeta_3}"] & N_3 \circ N_2 \circ N_1 \end{tikzcd} \] We prove that $\Delta_{\Phi_3 \circ (\Phi_2 \circ \Phi_1)} = \Delta_{(\Phi_3 \circ \Phi_2) \circ \Phi_1}$. 
We have that $\Delta_{\Phi_3 \circ (\Phi_2 \circ \Phi_1)}(x) = 1$ if and only if, by definition: \begin{enumerate}[labelindent=0pt] \item[(1)] the $x$-th c.c.\ of $N_3 \circ N_2 \circ N_1$ is acyclic; \item[(2)] $\forall y \in \zeta_3^{-1}\{x\} \ldotp \Delta_{\Phi_2 \circ \Phi_1}(y) = 1$; \item[(3)] $\forall z \in (\xi_3 \circ \xi_2)^{-1}\{x\} \ldotp \Delta_{\Phi_3}(z) = 1$; \end{enumerate} which is equivalent to say that: \begin{enumerate}[labelindent=0pt] \item[(1)] the $x$-th c.c.\ of $N_3 \circ N_2 \circ N_1$ is acyclic; \item[(2a)] $\forall y \in l \ldotp \Bigl[ \zeta_3(y) = x \implies \text{$y$-th c.c.\ of $N_2 \circ N_1$ is acyclic} \Bigr] $; \item[(2b)] $\forall y \in l \ldotp \Bigl[ \zeta_3(y) = x \implies \forall a \in n \ldotp \Bigl( \zeta_1(a)=y \implies \Delta_{\Phi_1}(a)=1 \Bigr) \Bigr] $; \item[(2c)] $\forall y \in l \ldotp \Bigl[ \zeta_3(y) = x \implies \forall b \in m \ldotp \Bigl( \xi_1(b)=y \implies \Delta_{\Phi_2}(b)=1 \Bigr) \Bigr] $; \item[(3)] $\forall z \in p \ldotp \Bigl[ \xi_3\bigl(\xi_2(z)\bigr) = x \implies \Delta_{\Phi_3} (z) = 1 \Bigr] $. \end{enumerate} Call $A$ the conjunction of the conditions above. Next, we have that $\Delta_{(\Phi_3 \circ \Phi_2) \circ \Phi_1}(x)=1$ if and only if: \begin{enumerate}[labelindent=0pt] \item[(i)] the $x$-th c.c.\ of $N_3 \circ N_2 \circ N_1$ is acyclic; \item[(ii)] $\forall a \in n \ldotp \Bigl[ \zeta_3 \bigl( \zeta_1(a) \bigr) =x \implies \Delta_{\Phi_1}(a)=1\Bigr]$; \item[(iiia)] $\forall w \in q \ldotp \Bigl[ \xi_3(w)=x \implies \text{ $w$-th c.c.\ of $N_3 \circ N_2$ is acyclic } \Bigr]$; \item[(iiib)] $\forall w \in q \ldotp \Bigl[ \xi_3(w)=x \implies \forall b \in m \ldotp \Bigl( \zeta_2(b)=w \implies \Delta_{\Phi_2}(b)=1 \Bigr) \Bigr]$; \item[(iiic)] $\forall w \in q \ldotp \Bigl[ \xi_3(w)=x \implies \forall z \in p \ldotp \Bigl( \xi_2(z)=w \implies \Delta_{\Phi_3}(z)=1 \Bigr) \Bigr]$ \end{enumerate} Call $B$ the conjunction of these last five conditions. 
We prove that $A$ implies $B$; in a similar way one can prove the converse as well. \begin{enumerate}[labelindent=0pt] \item[(ii)] Let $a \in n$, suppose $\zeta_3\bigl(\zeta_1(a)\bigr) = x$. By (2b), with $y = \zeta_1(a)$, we have $\Delta_{\Phi_1}(a) = 1$. \item[(iiia)] Let $w \in q$, suppose $\xi_3(w)=x$. Then the $w$-th c.c.\ of $N_3 \circ N_2$ must be acyclic as it is part of the $x$-th c.c.\ of $N_3 \circ N_2 \circ N_1$, which is acyclic. \item[(iiib)] Let $w \in q$, suppose $\xi_3(w)=x$. Let also $b \in m$ and suppose $\zeta_2(b) = w$. Then $x = \xi_3\bigl( \zeta_2(b)\bigr) = \zeta_3 \bigl(\xi_1(b)\bigr)$. By (2c), with $y = \xi_1(b)$, we have $\Delta_{\Phi_2}(b)=1$. \item[(iiic)] Let $w \in q$, suppose $\xi_3(w)=x$. Let $z \in p$ be such that $\xi_2(z)=w$. Then $\xi_3\bigl(\xi_2(z)\bigr)=x$: by (3), we have $\Delta_{\Phi_3}(z)=1$. \end{enumerate} Hence composition is associative. Take now $\Phi$ and $\Psi$ consecutive morphisms of $\fc \B \C$ as in the Definition of $\fc \B \C$. Then $\Phi=\Phi_k \circ \dots \circ \Phi_1$ for some $\Phi_j$'s, in particular $\phi=\phi_k \circ \dots \circ \phi_1$ for some $\phi_j$'s, and $\Delta_\Phi (i) =1 $ precisely when the $i$-th connected component of $N$ is acyclic and for all $j \in \{1,\dots,k\}$ the transformation $\phi_j$ is dinatural in all its variables involved in the $i$-th c.c.\ of $N$: one can see this by simply unfolding the definition of $\Delta_{\Phi_k \circ \dots \circ \Phi_1}$, extending the case of $\Delta_{\Phi_3 \circ \Phi_2 \circ \Phi_1}$ above. Similarly for $\Psi=\Psi_{k'} \circ \dots \circ \Psi_1$, with $\psi = \psi_{k'} \circ \dots \circ \psi_1$. 
We have then that if \[ N' \circ N = \graph{\psi_{k'}} \circ \dots \circ \graph{\psi_1} \circ \graph{\phi_k} \circ \dots \circ \graph{\phi_1} \] is acyclic in its $x$-th connected component and for all $y \in \zeta^{-1}\{x\}$ and $z \in \xi^{-1}\{x\}$ we have that $\Delta_\Phi(y) = 1 = \Delta_\Psi(z)$, then all the $\phi_j$'s and $\psi_j$'s are dinatural in all their variables involved in the $x$-th connected component of $N' \circ N$: by Theorem~\ref{theorem:compositionality with complicated graphs}, we have that $\psi\circ\phi$ is dinatural in its $x$-th variable. Hence $\Psi\circ\Phi$ is still a morphism of $\fc \B \C$. All that is left to prove is that composition is unitary where the identity morphism of $(\alpha,F)$ is given by the equivalence class of \[ ( \id F, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\id{}"] & \length\alpha & \ar[l,"\id{}"'] \length\alpha \end{tikzcd}, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\id{}"] & \length\alpha & \ar[l,"\id{}"'] \length\alpha \end{tikzcd}, K_1 ), \] which is indeed a morphism of $\fc\B\C$ because, as discussed in Example~\ref{example: graph and type are a generalised graph} we have that $\length\alpha$ is a generalised graph for $\id F$; moreover, the identity transformation is indeed (di)natural in all its variables, therefore the constant function equal to $1$, $K_1$, is a valid discriminant function for $\id{\length\alpha}$. Let \[ \Phi = (\phi, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd}, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma"] & N & \length\beta \ar[l,"\overline\tau"'] \end{tikzcd}, \Delta_\Phi ) \colon (\alpha,F) \to (\beta,G). \] We prove that $\Phi \circ \id{(\alpha,F)} = \Phi$ and ${\id{(\beta,G)}} \circ \Phi = \Phi$ (by ``$\Phi$'' here we mean its equivalence class). 
It is clear that $\Phi \circ {\id{(\alpha,F)}}$ consists of $\phi$ together with its type and generalised graph as specified in $\Phi$. Also, $\Delta_{\Phi \circ \id{(\alpha,F)}}(x) = 1$ precisely when the $x$-th connected component of $N$ is acyclic and $\Delta_{\Phi}(x)=1$, by definition. Given that $\Delta_\Phi(x)=1$ implies that the $x$-th c.c.\ of $N$ is acyclic, we have that $\Delta_{\Phi \circ \id{(\alpha,F)}} = \Delta_\Phi$. One can prove in a similar way the other identity law. \qed \end{proof} \begin{remark} The condition ``$\Delta_\Phi(i)=1$ implies that the $i$-th connected component of $N$ is acyclic'' in Definition~\ref{def: generalised functor category} is designed to ignore dinaturality properties that happen to be satisfied ``by accident'', as it were, which could cause problems upon composition. Indeed, suppose that we have a transformation $\phi$ which is the composite of four transformations $\phi_1,\dots,\phi_4$, whose resulting generalised graph, obtained by pasting together $\Gamma(\phi_1),\dots,\Gamma(\phi_4)$, is as follows: \[ N= \quad \begin{tikzpicture} \matrix[column sep=2.4mm,row sep=0.4cm]{ \node (1) [category] {}; \\ \node (2) [component] {}; & & & \node (7) [component] {}; \\ \node (3) [category] {}; & & \node (8) [category] {}; & & \node (6) [opCategory] {}; \\ & \node (4) [component] {}; & & & \node (5) [component] {}; \\ & \node (A) [category] {}; & & & \node(F) [opCategory] {};\\ & \node (B) [component] {}; & & & \node(J) [component] {};\\ \node (C) [category] {}; & & \node(D) [category] {}; & & \node(E) [opCategory] {};\\ \node (H) [component] {}; & & & \node(I) [component] {};\\ \node (G) [category] {}; & & & \\ }; \graph[use existing nodes]{ 1 -> 2 -> 3 -> 4 -> A -> B -> {C, D}; C -> H -> G; D -> I -> E -> J -> F -> 5 -> 6 -> 7 -> 8 -> 4; }; \node[coordinate](p) at (-2,0) {}; \node[coordinate](q) at (2,0) {}; \draw [dashed] (3.west -| p) -- (6.east -| q); \draw [dashed] (A.west -| p) -- (F.east -| q); \draw [dashed] (C.west 
-| p) -- (E.east -| q); \end{tikzpicture} \] Call $\Phi$ the tuple in $\fc \B \C$ consisting of $\phi$ with its type $ \begin{tikzcd}[cramped,sep=small] 1 \ar[r] & 1 & 1 \ar[l] \end{tikzcd} $ and $N$ as a generalised graph, as a composite of the atomic morphisms of $\fc \B \C$ given by $\phi_1,\dots,\phi_4$. Suppose that $\phi$ happens to be dinatural in its only variable for some reason (extreme example: the category $\C$ is the terminal category). If in the definition of $\fc \B \C$ the only condition on $\Delta$ were ``$\Delta_\Phi(i) = 1$ implies $\phi$ dinatural in its $i$-th variable'', without requiring that the $i$-th connected component of $N$ be acyclic if $\Delta_\Phi(i)=1$, then equipping $\phi$ in $\Phi$ with a discriminant function $\Delta_\Phi$ defined as \[ \begin{tikzcd}[row sep=0pt] 1 \ar[r,"\Delta_\Phi"] & 1 \\ 1 \ar[r,|->] & 1 \end{tikzcd} \] would be legitimate. Compose now $\Phi$ with the identity morphism of $\fc \B \C$: by definition we would obtain again $\Phi$ except for the discriminant function, which would be defined as $\Delta_{\Phi \circ \id{}}(1)=0$ because the composite graph, which is $N$, is not acyclic. Composition would not be unitary! The condition ``the $i$-th connected component of $N$ is acyclic whenever $\Delta_\Phi(i)=1$'' in Definition~\ref{def: generalised functor category} is therefore not only sufficient, but also necessary for unitarity of composition in $\fc \B \C$. \end{remark} \begin{remark}\label{remark:non-atomic morphisms of {B,C}} Although it is impossible, in general, to judge whether a transformation is or is not a composite of others by looking at its type, one can distinguish atomic morphisms of $\FC \B \C$ from composite morphisms by looking at the generalised graph $N$ they come with. 
Indeed, if \[ \Phi = (\phi, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd}, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma"] & N & \length\beta \ar[l,"\overline\tau"'] \end{tikzcd}, \Delta_\Phi ) \] is a non-identity morphism of $\FC \B \C$, then $\Phi$ is atomic if and only if $N=\graph\phi$. If $N \ne \graph\phi$, then $N$ contains internal places as a result of composing together ``atomic'' graphs of transformations: that is, we have that $\phi = \phi_k \circ \dots \circ \phi_1$ for some transformations $\phi_i$, and $N=\graph{\phi_k} \circ \dots \circ \graph{\phi_1}$. This decomposition of $\phi$ and $N$ is not necessarily unique. \end{remark} \paragraph{The category of graphs} We can now finally identify the category $\gcf$ of graphs of transformations. To do so, we will first build a category $\GC$, which will consist of those morphisms in $\gc$ that are the generalised graph of a transformation in $\fc \B \C$, together with a discriminant function. The category of graphs $\gcf$ we seek will be defined as a subcategory of it. We begin by defining the notion of \emph{skeleton} of a morphism in $\gc$, as it will be useful later on. \begin{definition} Let $ f = \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma"] & N & \ar[l,"\overline\tau"'] \length\beta \end{tikzcd} $ be a morphism in $\gc$, and let $n$ be the number of connected components of $N$. The \emph{skeleton} of the cospan $f$ is an (equivalence class of) cospan(s) in $\finset$ \[ \begin{tikzcd} \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd} \] where $\sigma(i)$ is the number of the connected component of $N$ to which $\overline\sigma(i)$ belongs, and $\tau$ is defined similarly. \end{definition} \begin{remark} If $\phi$ is a transformation and $N$ is a generalised graph of $\phi$, then the type of $\phi$ is the skeleton of $N$. 
\end{remark} The category $\GC$ will then consist of only part of the data of $\fc \B \C$, obtained, as it were, by discarding functors and transformations, and only considering the graphs and the discriminant functions. \begin{definition}\label{definition:graph category definitive} The category $\GC$ of graphs consists of the following data. \begin{itemize}[leftmargin=*] \item Objects are lists in $\List\{+,-\}$. \item Morphisms $\alpha \to \beta$ are equivalence classes of pairs \[ \bigl( \begin{tikzcd} \length\alpha \ar[r,"\overline\sigma"] & N & \ar[l,"\overline\tau"'] \length\beta \end{tikzcd}, \Delta_N \bigr) \] where: \begin{itemize} \item $(\overline\sigma,\overline\tau,N)$ is a morphism in $\gc$, \item let $n$ be the number of connected components of $N$: then $\Delta_N \colon n \to \{0,1\}$ is called the \emph{discriminant function} and is such that $\Delta_N(i)=1$ implies that the $i$-th connected component of $N$ is acyclic. \end{itemize} A pair as above is equivalent to another $((\overline\sigma',\overline\tau',N'),\Delta_{N'})$, where $N'$ also has $n$ connected components, if and only if there exist an isomorphism $f \colon N \to N'$ in $\PN$ and a permutation $\pi \colon n \to n$ such that \[ \begin{tikzcd} \length\alpha \ar[r,"\overline\sigma"] \ar[dr,"\overline{\sigma'}"'] & N \ar[d,"f"] & \length\beta \ar[l,"\overline\tau"'] \ar[dl,"\overline{\tau'}"] \\ & N' \end{tikzcd} \quad \text{and} \quad \begin{tikzcd} n \ar[r,"\Delta_N"] \ar[d,"\pi"'] & \{0,1\} \\ n \ar[ur,"\Delta_{N'}"'] \end{tikzcd} \] commute and $f$ maps the $i$-th c.c.\ of $N$ to the $\pi(i)$-th c.c.\ of $N'$. \item Composition is defined exactly as in $\fc \B \C$. 
To wit, composition of \[ \bigl( \begin{tikzcd} \length\alpha \ar[r,"\overline\sigma"] & N & \ar[l,"\overline\tau"'] \length\beta \end{tikzcd}, \Delta_N \bigr) \quad \text{and} \quad \bigl( \begin{tikzcd} \length\beta \ar[r,"\overline\eta"] & N' & \ar[l,"\overline\theta"'] \length\gamma \end{tikzcd}, \Delta_{N'} \bigr) \] is the equivalence class of the pair \[ ( \begin{tikzcd} \length\alpha \ar[r,"\overline\zeta \overline\sigma"] & {N'} \circ N & \ar[l,"\overline\xi \overline\theta"'] \length\gamma \end{tikzcd}, \Delta_{N' \circ N} ) \] where $N' \circ N$ is the Petri Net given by the result of the pushout \[ \begin{tikzcd} & & \length\gamma \ar[d,"\overline\theta"] \\ & \length\beta \ar[d,"\overline\tau"'] \ar[r,"\overline\eta"] \ar[dr, phantom, "\ulcorner" very near start] & N' \ar[d,"\overline\xi"] \\ \length\alpha \ar[r,"\overline\sigma"] & N \ar[r,"\overline\zeta"] & N' \circ N \end{tikzcd} \] and $\Delta_{N' \circ N}$ is defined as follows. If $ \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd} $ and $ \begin{tikzcd}[cramped,sep=small] \length\beta \ar[r,"\eta"] & m & \length\gamma \ar[l,"\theta"'] \end{tikzcd} $ are the skeletons of $(\overline\sigma,\overline\tau,N)$ and $(\overline\eta,\overline\theta,N')$ respectively, then the skeleton of $(\overline\zeta\overline\sigma,\overline\xi\overline\theta,N'\circ N)$ is given by the pushout \[ \begin{tikzcd} & & \length\gamma \ar[d,"\theta"] \\ & \length\beta \ar[d,"\tau"'] \ar[r,"\eta"] \ar[dr, phantom, "\ulcorner" very near start] & m \ar[d,"\xi"] \\ \length\alpha \ar[r,"\sigma"] & n \ar[r,"\zeta"] & l \end{tikzcd} \] (cf.\ Proposition~\ref{proposition: composition in G preservers generalised graphs}). Define therefore $\Delta_{N' \circ N}(x)=1$ if and only if the $x$-th connected component of $N'\circ N$ is acyclic \emph{and} for all $y \in \zeta^{-1}\{x\}$ and $z \in \xi^{-1}\{x\}$ we have that $\Delta_N(y) = 1 = \Delta_{N'}(z)$. 
\end{itemize} \end{definition} \begin{definition} The category $\gcf$ of graphs is the wide subcategory of $\GC$ (that is, it contains all the objects of $\GC$) generated by equivalence classes of pairs \[ \bigl( \begin{tikzcd} \length\alpha \ar[r,"\overline\sigma"] & N & \ar[l,"\overline\tau"'] \length\beta \end{tikzcd}, \Delta_N \bigr) \] where $P_N=\length\alpha + \length\beta$, $\overline\sigma=\injP{\length\alpha}$, $\overline\tau = \injP{\length\beta}$ and, for every place $p$, $\length{\inp p} + \length{\out p} = 1$ (equivalently, $N$ has no internal places and every place is either a proper source or a proper sink). Hence, a general morphism of $\gcf$ is either: \begin{itemize} \item an identity $ \bigl( \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\id{}"] & \length\alpha & \ar[l,"\id{}"'] \length\alpha \end{tikzcd}, K_1 \bigr), $ \item a generator satisfying the conditions above; such morphisms are called \emph{atomic}, \item a finite composite of atomic morphisms. \end{itemize} \end{definition} The assignment $(\alpha,F) \mapsto \alpha$ and \[ \bigl[(\phi, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd}, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\injP{\length\alpha}"] & \ggraph\phi & \length\beta \ar[l,"\injP{\length\beta}"'] \end{tikzcd}, \Delta_\Phi )\bigr] \mapsto \Bigl[\bigl( \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\injP{\length\alpha}"] & \ggraph\phi & \length\beta \ar[l,"\injP{\length\beta}"'] \end{tikzcd}, \Delta_\Phi \bigr)\Bigr] \] mapping atomic morphisms of $\FC \B \C$ to atomic morphisms of $\gcf$ uniquely extends to a functor $\gf \colon \FC \B \C \to \gcf$. Moreover, $\gf$ has two special properties, by virtue of the ``modularity'' of our $\FC \B \C$ and $\gcf$ and the fact that all and only atoms in $\FC \B \C$ have atomic images: it reflects compositions and identities. 
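As an aside, the composition of graphs and discriminant functions given in Definition~\ref{definition:graph category definitive} can be modelled concretely. The following is a minimal executable sketch, under simplifying assumptions of our own that are not part of the formal development: generalised graphs are treated as plain directed graphs rather than Petri Nets, the discriminant function is stored per node (constant on connected components), and all names are illustrative.

```python
# Toy model of composition in the category of graphs with discriminants.
# Assumptions (ours, for illustration only): generalised graphs are plain
# directed graphs, not Petri Nets; the discriminant is stored per node,
# constant on connected components.
from collections import deque

class GraphMorphism:
    """A cospan |alpha| -> N <- |beta| together with a discriminant on N."""
    def __init__(self, nodes, edges, src_leg, tgt_leg, delta):
        self.nodes = set(nodes)        # places of N
        self.edges = list(edges)       # directed edges (u, v)
        self.src_leg = list(src_leg)   # i-th input  |-> node of N
        self.tgt_leg = list(tgt_leg)   # j-th output |-> node of N
        self.delta = dict(delta)       # node -> 0/1

def _components(nodes, edges):
    """Label each node with its (undirected) connected component."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    comp, k = {}, 0
    for n in nodes:
        if n not in comp:
            comp[n] = k; queue = deque([n])
            while queue:
                for m in adj[queue.popleft()]:
                    if m not in comp:
                        comp[m] = k; queue.append(m)
            k += 1
    return comp

def _cyclic_components(nodes, edges, comp):
    """Components containing a directed cycle (found by Kahn peeling)."""
    indeg = {n: 0 for n in nodes}
    for _, v in edges:
        indeg[v] += 1
    queue = deque(n for n in nodes if indeg[n] == 0)
    removed = set()
    while queue:
        u = queue.popleft(); removed.add(u)
        for x, v in edges:
            if x == u:
                indeg[v] -= 1
                if indeg[v] == 0:
                    queue.append(v)
    return {comp[n] for n in nodes if n not in removed}

def compose(f, g):
    """Pushout of f: alpha -> beta and g: beta -> gamma along beta."""
    assert len(f.tgt_leg) == len(g.src_leg)
    uf = {}
    def find(x):
        uf.setdefault(x, x)
        while uf[x] != x:
            uf[x] = uf[uf[x]]; x = uf[x]
        return x
    nodes = {('f', n) for n in f.nodes} | {('g', n) for n in g.nodes}
    for a, b in zip(f.tgt_leg, g.src_leg):   # glue f's outputs to g's inputs
        uf[find(('f', a))] = find(('g', b))
    rep = {n: find(n) for n in nodes}
    qnodes = set(rep.values())
    qedges = [(rep[('f', u)], rep[('f', v)]) for u, v in f.edges] \
           + [(rep[('g', u)], rep[('g', v)]) for u, v in g.edges]
    comp = _components(qnodes, qedges)
    cyclic = _cyclic_components(qnodes, qedges, comp)
    # Discriminant of the composite: 1 on a component iff it is acyclic
    # AND every constituent component mapped into it already had delta 1.
    ok = {c: c not in cyclic for c in set(comp.values())}
    for tag, h in (('f', f), ('g', g)):
        for n in h.nodes:
            if h.delta[n] == 0:
                ok[comp[rep[(tag, n)]]] = False
    delta = {q: int(ok[comp[q]]) for q in qnodes}
    return GraphMorphism(qnodes, qedges,
                         [rep[('f', n)] for n in f.src_leg],
                         [rep[('g', n)] for n in g.tgt_leg], delta)
```

In particular, this model reproduces the behaviour discussed in the earlier remark on unitarity: as soon as composition creates a directed cycle in a component, or some constituent component carried discriminant $0$, the composite component is assigned discriminant $0$.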
By ``reflects identities'' we mean that if $\Phi \colon (\alpha,F) \to (\alpha,F)$ is such that $\gf(\Phi)=\id{\length\alpha}$, then $\Phi=\id{(\alpha,F)}$. By ``reflects compositions'' we mean that if $\Phi$ is a morphism in $\FC \B \C$ and $\gf(\Phi)$ is not atomic, i.e.\ $\gf(\Phi) = (N_k,\Delta_k) \circ \dots \circ (N_1,\Delta_1)$ with $(N_i,\Delta_i)$ atomic in $\gcf$, then there must exist $\Phi_1,\dots,\Phi_k$ morphisms in $\FC \B \C$ such that: \begin{itemize} \item $\Phi = \Phi_k \circ \dots \circ \Phi_1$, \item $\gf(\Phi_i) = (N_i,\Delta_i)$. \end{itemize} Hence, say $\Phi = (\phi, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd}, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma"] & N & \length\beta \ar[l,"\overline\tau"'] \end{tikzcd}, \Delta_\Phi ) $: then there must exist transformations $\phi_i$ with graph $\graph{\phi_i}$ (hence atomic), dinatural according to $\Delta_i$, such that $\phi = \phi_k \circ \dots \circ \phi_1$, cf.\ Remark~\ref{remark:non-atomic morphisms of {B,C}}. In other words, $\gf$ satisfies the following definition. \begin{definition} Let $\D,\E$ be any categories. A functor $P \colon \D \to \E$ is said to be a \emph{weak Conduché fibration} (WCF) if, given $f \colon A \to B$ in $\D$: \begin{itemize} \item $P(f)=\id{}$ implies $f=\id{}$; \item given a decomposition $P(f)=u \circ v$ in $\E$, we have that there exist $g,h$ in $\D$ such that $f = g \circ h$, $P(g) = u$, $P(h)=v$. \end{itemize} We define $\WCFover \E$ to be the full subcategory of $\catover\E$ whose objects are the categories over $\E$ whose augmentation is a weak Conduché fibration. \end{definition} We have then proved the following theorem. \begin{theorem} $\FC \B \C$ is an object of $\,\,\WCFover\gcf$. 
\end{theorem} Conduché fibrations were introduced in~\cite{conduche_au_1972} as a re-discovery after the original work of Giraud~\cite{giraud_methode_1964} on exponentiable functors in slice categories. Our notion is weaker in not requiring the additional property of uniqueness of the decomposition $f=g \circ h$ up to equivalence, where we say that two factorisations $g \circ h$ and $g' \circ h'$ are equivalent if there exists a morphism $j \colon \codom h \to \dom {g'}$ such that everything in sight commutes in the following diagram: \[ \begin{tikzcd} & \codom{h} \ar[r,"g"] \ar[d,"j"] & B \\ A \ar[ur,"h"] \ar[r,"h'"'] & \dom{g'} \ar[ur,"g'"'] \end{tikzcd} \] We will not, in fact, need such uniqueness; moreover, it is not evident whether our $\gf$ is a Conduché fibration or not. \begin{remark} The fact that $\FC \B \C$ is not just an object of $\catover\gcf$, but even of $\WCFover\gcf$, will allow us to build the substitution category $\ring \A \B$ just for categories $\A$ over $\gcf$ whose augmentation is more than a mere functor: it is a weak Conduché fibration. The main advantage of restricting our attention to $\WCFover\gcf$ is that a category $\A$ in it inherits, in a sense, the modular structure of $\gcf$, as we shall see in the next Lemma. \end{remark} \begin{definition} Let $P \colon \D \to \gcf$ be an object of $\WCFover\gcf$. A morphism $d$ in $\D$ is said to be \emph{atomic} if $P(d)$ is atomic. \end{definition} \begin{lemma}\label{lemma:functors determined by atoms in WCF over E} Suppose that, in the following diagram, $P$ is a weak Conduché fibration and $Q$ is an ordinary functor. \[ \begin{tikzcd}[column sep={1cm,between origins}] \D \ar[rr,"Q"] \ar[dr,"P"'] & & \mathbb F \\ & \gcf \end{tikzcd} \] Then $Q$ is completely determined on morphisms by the image of atomic morphisms of $\D$. 
\end{lemma} \begin{proof} Let $d \colon D \to D'$ be a morphism in $\D$ with $P(D)=\alpha$, $P(D')=\beta$ and $P(d) = \bigl[ \bigl( \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma"] & N & \length\beta \ar[l,"\overline\tau"'] \end{tikzcd}, \Delta_d \bigr) \bigr] $. If $P(d)$ is not atomic, then either $P(d)=\id{}$, in which case $d=\id{}$ (because $P$ is a weak Conduché fibration), or $P(d)=(N_k,\Delta_k) \circ \dots \circ (N_1,\Delta_1)$ for some (not necessarily unique) atomic $(N_i,\Delta_i)$. Hence there must exist $d_1,\dots,d_k$ in $\D$ such that $d=d_k \circ \dots \circ d_1$ and $P(d_i)=(N_i,\Delta_i)$. Then $Q(d)$ will necessarily be defined as $\id{}$ in the first case, or as $Q(d_k) \circ \dots \circ Q(d_1)$ in the second case, otherwise $Q$ would not be a functor. \qed \end{proof} \section{The category of formal substitutions}\label{section:category of formal substitutions} Kelly~\cite{kelly_many-variable_1972}, after defining his generalised functor category $\fc \B \C$ for covariant functors and many-variable natural transformations only, proceeds by showing that the functor $\fc \B -$ has a left adjoint, which he denotes with $\ring - \B$. The category $\ring \A \B$ will be essential to capture the central idea of substitution. Here we aim to do the same in our more general setting where $\fc \B \C$ comprises mixed-variance functors and many-variable, partial dinatural transformations. First, we give an explicit definition of the functor $\FC \B - \colon \Cat \to \WCFover\gcf$. 
Given a functor $K \colon \C \to \C'$, we define $\FC \B K \colon \FC \B \C \to \FC \B {\C'}$ to be the functor mapping $(\alpha,F \colon \B^\alpha \to \C)$ to $(\alpha,KF \colon \B^\alpha \to \C')$; and if \[ \Phi = (\phi, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd}, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma"] & N & \length\beta \ar[l,"\overline\tau"'] \end{tikzcd}, \Delta_\Phi ) \colon (\alpha,F) \to (\beta,G) \] is a morphism in $\FC \B \C$, then $\FC \B K (\Phi)$ is obtained by whiskering $K$ with $\phi$, obtaining therefore a transformation with the same type and generalised graph as before, with the same dinaturality properties: \[ \FC \B K (\Phi) = ( K\phi, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd}, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma"] & N & \length\beta \ar[l,"\overline\tau"'] \end{tikzcd}, \Delta_\Phi ). \] In particular, $\FC \B K$ is clearly a functor over $\gcf$. It is a classic exercise in Category Theory to prove that $\fc \B -$ is continuous (see~\cite[Theorem 3.52]{santamaria_towards_2019}), a necessary condition for the existence of a left adjoint \[ \ring - \B \colon \WCFover\gcf \to \Cat. \] We shall prove that a left adjoint does exist by first constructing the category $\ring \A \B$ explicitly, and then showing the existence of a universal arrow $(\ring \A \B, F_\A \colon \A \to \FC \B {\ring \A \B})$ from $\A$ to $\FC \B -$: this will yield the desired adjunction. To see what $\ring \A \B$ looks like, we follow Kelly's strategy: we aim to prove that there is a natural isomorphism \[ \Cat ( \ring \A \B, \C) \cong \WCFover\gcf (\A, \FC \B \C) \] and we use this to deduce what $\ring \A \B$ must be. We write $\Gamma$ for the augmentation (a weak Conduché fibration) of any category over $\gcf$, and let $\Phi$ be an element of $\,\WCFover\gcf(\A,\FC \B \C)$. 
We now spell out all we can infer from this fact. To facilitate reading, and to comply with Kelly's notation in~\cite{kelly_many-variable_1972}, we shall now refer to the $\bfA$-th component of a transformation $\phi$, for $\bfA=(A_1,\dots,A_m)$ say, as $\phi(\bfA)$ instead of $\phi_{\bfA}$. \begin{enumerate}[(a),wide,labelindent=0pt] \item \label{PhiA} For all $A \in \A$, $\Gamma(A)=\alpha$ we have $\Phi A \colon \B^\alpha \to \C$ is a functor, hence \begin{enumerate}[label=(a.\roman*),wide,leftmargin=\parindent] \item for every $\bfB=(B_1,\dots,B_{\length\alpha})$ object of $\B^\alpha$, $\Phi A (\bfB)$ is an object of $\C$,\label{PhiA(B1...Balpha)} \item for all $\bfg=(g_1,\dots,g_{\length\alpha})$, with $g_i \colon B_i \to B_i'$ a morphism in $\B$, we have \[ \Phi(A)(\bfg) \colon \funminplus {\Phi A} {B_i'} {B_i} i {\length\alpha} \to \funminplus {\Phi A} {B_i} {B_i'} i {\length\alpha} \] is a morphism in $\C$.\label{PhiA(g1...galpha)} \end{enumerate} This data is subject to functoriality of $\Phi A$, that is: \begin{enumerate}[(1),wide,leftmargin=\parindent] \item For every $\bfB$ object of $\B^\alpha$, $\Phi A (\id\bfB) = \id{\Phi A (\bfB)}$\label{PhiA(1...1)=1PhiA}. \item For $\bfh=(h_1,\dots,h_{\length\alpha})$, with $h_i \colon B_i' \to B_i''$ morphism of $\B$, \[ \funminplus {\Phi A} {g_i \circ_{\Op\B} h_i} {h_i \circ_{\B} g_i} i {\length\alpha} = \funminplus {\Phi A} {g_i} {h_i} i {\length\alpha} \circ \funminplus {\Phi A} {h_i} {g_i} i {\length\alpha}. \]\label{PhiA(hg)=PhiA(h)PhiA(g)} \end{enumerate} \item \label{Phif}For all $f \colon A \to A'$ in $\A$ with $ \Gamma(f) = \Bigl[\bigl( \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma"] & N & \length\beta \ar[l,"\overline\tau"'] \end{tikzcd},\Delta_f \bigr) \Bigr] $, we have that $\Phi f$ is an equivalence class of transformations whose graphs are representatives of $\Gamma(f)$, such transformations being dinatural in some variables according to $\Delta_f$. 
Hence for all $\xi = \bigl((\overline\sigma, \overline\tau, N),\Delta_\xi\bigr) \in \Gamma(f)$ we have a transformation $\Phi f_\xi \colon \Phi A \to \Phi A'$ whose type $ \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd} $ is the skeleton of $(\overline\sigma,\overline\tau,N)$ and with discriminant function $\Delta_\xi$ that tells us in which variables $\Phi f_\xi$ is dinatural. Therefore to give $\Phi f$ one has to provide, for all $\xi = \bigl((\overline\sigma, \overline\tau, N),\Delta_\xi\bigr) \in \Gamma(f)$, for every $\bfB=(B_1,\dots,B_n)$ object of $\B^n$, a morphism in $\C$ \[ \Phi f_\xi (\bfB) \colon \Phi A (\bfB\sigma) \to \Phi A' (\bfB\tau) \] such that: \begin{enumerate}[(1),start=3,wide,leftmargin=\parindent] \item for all $\pi \colon n \to n$ permutation, $\Phi f_{\pi\xi}(\bfB) = \Phi f_\xi (\bfB\pi)$,\label{Phif_pixi(Bi)=Phif_xi(Bpii)} \item \label{Phif_xi dinatural} for $\bfB'=(B_1',\dots,B_n')$ in $\B^n$ and for $\bfg=(g_1,\dots,g_n) \colon \bfB \to \bfB'$ in $\B^n$, where if $\Delta_\xi(i)=0$ then $B_i=B_i'$ and $g_i = \id {B_i}$, the following hexagon commutes: \[ \begin{tikzcd}[font=\normalsize, column sep={0.5cm}] & \Phi A (\bfB\sigma) \ar[rrr,"{\Phi f_\xi (\bfB)}"] & && \Phi A' (\bfB\tau) \ar[dr,"\funminplus{\Phi A'}{B_{\tau i}}{g_{\tau i}} i {\length\beta}"] \\ \funminplus{\Phi A}{B_{\sigma i}'}{B_{\sigma i}} i {\length\alpha} \ar[ur,"{\funminplus{\Phi A}{g_{\sigma i}}{B_{\sigma i}} i {\length\alpha}}"] \ar[dr,"\funminplus{\Phi A} {{B_{\sigma i}}} {{g_{\sigma i}}} i {\length\alpha}"'] & & && & \funminplus {\Phi A'} {B_{\tau i}}{B_{\tau i}'} i {\length\beta} \\ & \Phi A (\bfB'\sigma) \ar[rrr,"{\Phi f_\xi (\bfB')}"'] & & &\Phi A'(\bfB'\tau) \ar[ur,"\funminplus{\Phi A'}{g_{\tau i}}{B_{\tau i}} i {\length\beta}"'] \end{tikzcd} \] \end{enumerate} \item The data provided in \ref{PhiA} and \ref{Phif} is subject to the functoriality of $\Phi$ itself, hence: 
\begin{enumerate}[(1),start=5,wide,leftmargin=\parindent] \item $\Phi(\id A) = \id{\Phi A}$, \label{Phi(1A)=1_Phi(A)} \item for $f \colon A \to A'$ and $f' \colon A' \to A''$, $\Phi(f' \circ_{\A} f) = {\Phi f'} \circ_{\FC \B \C} {\Phi f}$ \label{Phi(f2 f1)=Phi(f2) Phi(f1)}. \end{enumerate} \end{enumerate} We now mirror all the data and properties of a functor $\Phi \colon \A \to \FC \B \C$ over $\gcf$ to define the category $\ring \A \B$. \begin{definition}\label{definition A ring B} Let $\A$ be a category over $\gcf$ via a weak Conduché fibration $\Gamma \colon \A \to \gcf$, and let $\B$ be any category. The category $\ring \A \B$ of \emph{formal substitutions} of elements of $\B$ into those of $\A$ is the free category generated by the following data. We use the same enumeration as above to emphasise the correspondence between each piece of information. \begin{itemize}[wide=0pt,leftmargin=*] \item[\ref{PhiA(B1...Balpha)}] Objects are of the form $A[\bfB]$, for $A$ an object of $\A$ with $\Gamma(A)=\alpha$, and for $\bfB=(B_1,\dots,B_{\length\alpha})$ in $\B^\alpha$. As is standard in many-variable calculi, we shall drop a set of brackets and write $A[B_1,\dots,B_{\length\alpha}]$ instead of $A[(B_1,\dots,B_{\length\alpha})]$. \item[\ref{PhiA(g1...galpha)},\ref{Phif}] Morphisms are to be generated by \[ A[\bfg] \colon \funminplussq A {B_i'} {B_i} i {\length\alpha} \to \funminplussq A {B_i} {B_i'} i {\length\alpha} \] for $A$ in $\A$ with $\Gamma(A)=\alpha$, $\bfg=(g_1,\dots,g_{\length\alpha})$ and $g_i \colon B_i \to B_i'$ in $\B$, and by \[ f_{\xi}[\bfB] \colon A[\bfB\sigma] \to A'[\bfB\tau] \] for $f \colon A \to A'$ in $\A$, $ \xi = \bigl( \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma"] & N & \length\beta \ar[l,"\overline\tau"'] \end{tikzcd},\Delta_\xi \bigr) $ a representative of $\Gamma(f)$, $(\sigma,\tau,n)$ the skeleton of $(\overline\sigma,\overline\tau,N)$, $\bfB=(B_1,\dots,B_n)$ object of $\B^n$. 
\end{itemize} Such data is subject to the following conditions: \begin{itemize}[wide=0pt,leftmargin=*] \item[\ref{Phif_pixi(Bi)=Phif_xi(Bpii)}] For every permutation $\pi \colon n \to n$ and for every $\bfB=(B_1,\dots,B_n)$ object of $\B^n$ \[ f_{\pi\xi}[\bfB] = f_\xi[\bfB\pi]. \] \item[\ref{PhiA(1...1)=1PhiA},\ref{Phi(1A)=1_Phi(A)}] For all $A\in\A$ with $\Gamma(A)=\alpha$ and for every $\bfB=(B_1,\dots,B_{\length\alpha})$ object of $\B^\alpha$ \[ A[\id\bfB] = \id{A[\bfB]} = {\id A}[\bfB]. \] \item[\ref{PhiA(hg)=PhiA(h)PhiA(g)}] For all $A \in \A$ with $\Gamma(A)=\alpha$, for all $g_i \colon B_i \to B_i'$ and $h_i \colon B_i' \to B_i''$ in $\B$, $i \in \{1,\dots,\length\alpha\}$ \[ \funminplussq { A} {g_i \circ_{\Op\B} h_i} {h_i \circ_{\B} g_i} i {\length\alpha} = \funminplussq { A} {g_i} {h_i} i {\length\alpha} \circ \funminplussq { A} {h_i} {g_i} i {\length\alpha}. \] \item[\ref{Phi(f2 f1)=Phi(f2) Phi(f1)}] For all $f \colon A \to A'$ and $f' \colon A' \to A''$ in $\A$, for all \[ \bigl( \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma"] & N & \length\beta \ar[l,"\overline\tau"'] \end{tikzcd},\Delta \bigr) \in \Gamma(f) \quad \text{and} \quad \bigl( \begin{tikzcd}[cramped,sep=small] \length\beta \ar[r,"\overline\eta"] & M & \ar[l,"\overline\theta"'] \length\gamma \end{tikzcd},\Delta' \bigr) \in \Gamma(f'), \] with $(\sigma,\tau,n)$ and $(\eta,\theta,m)$ the skeletons of, respectively, $(\overline\sigma,\overline\tau,N)$ and $(\overline\eta,\overline\theta,M)$, and for all choices of a pushout \[ \begin{tikzcd} & & \length\gamma \ar[d,"\theta"] \\ & \length\beta \ar[d,"\tau"'] \ar[r,"\eta"] \ar[dr,phantom,very near start,"\ulcorner"] & m \ar[d,"\xi"] \\ \length\alpha \ar[r,"\sigma"] & n \ar[r,"\zeta"] & l \end{tikzcd} \] each choice determining the skeleton of (the first projection of) a representative of $\Gamma(f' \circ f)$, and for all $\bfB=(B_1,\dots,B_l)$ object of $\B^l$ \[ f'_{(\eta,\theta)}[\bfB\xi] \circ 
f_{(\sigma,\tau)}[\bfB\zeta] = (f'\circ f)_{(\zeta\sigma,\xi\theta)}[\bfB]. \] \item[\ref{Phif_xi dinatural}] For all $f \colon A \to A'$, $\xi= \bigl( \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma"] & N & \length\beta \ar[l,"\overline\tau"'] \end{tikzcd}, \Delta_\xi \bigr) \in \Gamma(f) $, with $(\sigma,\tau,n)$ the skeleton of $(\overline\sigma,\overline\tau,N)$, for all $\bfB=(B_1,\dots,B_n)$, $\bfB'=(B_1',\dots,B_n')$ objects of $\B^n$ and for all $\bfg=(g_1,\dots,g_n) \colon \bfB \to \bfB'$, with $B_i=B_i'$ and $g_i=\id{B_i}$ if $\Delta_\xi(i)=0$, the following hexagon commutes: \begin{equation}\label{f[g1...gn]} \begin{tikzcd}[column sep={0.5cm}] & A[\bfB\sigma] \ar[rrr,"{f_\xi [\bfB]}"] & && A' [\bfB\tau] \ar[dr,"\funminplussq{A'}{B_{\tau i}}{g_{\tau i}} i {\length\beta}"] \\ \funminplussq{A}{B_{\sigma i}'}{B_{\sigma i}} i {\length\alpha} \ar[ur,"{\funminplussq{A}{g_{\sigma i}}{B_{\sigma i}} i {\length\alpha}}"] \ar[dr,"\funminplussq{A} {{B_{\sigma i}}} {{g_{\sigma i}}} i {\length\alpha}"'] & & && & \funminplussq { A'} {B_{\tau i}}{B_{\tau i}'} i {\length\beta} \\ & A [\bfB'\sigma] \ar[rrr,"{f_\xi [\bfB']}"'] & & & A'[\bfB'\tau] \ar[ur,"\funminplussq{A'}{g_{\tau i}}{B_{\tau i}} i {\length\beta}"'] \end{tikzcd} \end{equation} We will denote the diagonal of \ref{f[g1...gn]} as $f[\bfg]$. \end{itemize} \end{definition} \begin{remark} By \ref{Phi(1A)=1_Phi(A)} and \ref{PhiA(hg)=PhiA(h)PhiA(g)}, we have \[ A[\bfg] = \id A [\bfg] \] and by \ref{PhiA(1...1)=1PhiA}, we have \[ f[\bfB] = f[\id\bfB] \] which is coherent with the usual notation of $A$ for $\id A$. 
\end{remark} Two consecutive morphisms both of type \ref{PhiA(g1...galpha)} or both of type \ref{Phif} can be merged into a single one by \ref{PhiA(hg)=PhiA(h)PhiA(g)} and \ref{Phi(f2 f1)=Phi(f2) Phi(f1)}; however, there is in general no way to swap the order of a morphism of type $A[\bfg]$ followed by one of the form $f_\xi[\bfB]$, because the only axiom relating the two kinds of generators is (\ref{f[g1...gn]}). Therefore, all we can say about a general morphism of $\ring \A \B$ is that it is a composite of alternating morphisms of types \ref{PhiA(g1...galpha)} and \ref{Phif}, subject to the equations \ref{PhiA(1...1)=1PhiA}-\ref{Phi(f2 f1)=Phi(f2) Phi(f1)}. \begin{remark} If $\A$ is such that $\length{\Gamma(A)}=1$ for all objects $A$ in $\A$, then $\ring \A \B$ is highly reminiscent of the category $\A \otimes \B$ as described by Power and Robinson in~\cite{power_premonoidal_1997}. The authors studied the \emph{other} symmetric monoidal closed structure of $\Cat$, where the exponential $[\B,\C]$ is the category of functors from $\B$ to $\C$ and morphisms are simply transformations (not necessarily natural), and $\otimes \B$ is the tensor functor that is the left adjoint of $[\B,-]$. The category $\A \otimes \B$ has as objects pairs $(A,B)$ of objects of $\A$ and $\B$, and a morphism from $(A,B)$ to $(A',B')$ is a finite sequence of non-identity arrows consisting of alternating chains of consecutive morphisms of $\A$ and $\B$. Composition is given by concatenation followed by the cancellations afforded by composition in $\A$ and $\B$, much like in our $\ring \A \B$. The only difference from their case is that we have the additional dinaturality equation \ref{Phif_xi dinatural}. For an arbitrary category $\A$ over $\gcf$, our $\ring \A \B$ would be a sort of generalised tensor product, where the number of objects of $\B$ we ``pair up'' with an object $A$ of $\A$ depends on $\Gamma(A)$. \end{remark} We are now ready to show that $\fc \B -$ has indeed a left adjoint. 
This is going to be a crucial step towards a complete substitution calculus for dinatural transformations; we shall discuss some ideas and conjectures about the following steps in the conclusions. \begin{theorem}\label{theorem:{B,-} has a left adjoint} The functor $\FC \B -$ has a left adjoint \[ \begin{tikzcd}[column sep=2cm,bend angle=30] {\WCFover\gcf} \ar[r,bend left,"\ring - \B"{name=A},pos=.493] & \Cat \ar[l,bend left,"\FC \B -"{name=B},pos=.507] \ar[from=A,to=B,phantom,"\bot"] \end{tikzcd} \] therefore there is a natural isomorphism \begin{equation}\label{natural isomorphism (A circ B, C) -> (A,{B,C})} \Cat \bigl( \ring \A \B , \C \bigr) \cong \WCFover\gcf \bigl( \A, \FC \B \C \bigr). \end{equation} Moreover, $\ring {} {} \colon \WCFover\gcf \times \Cat \to \Cat$ is a functor. \end{theorem} \begin{proof} Recall that to give an adjunction $ (\ring - \B) \dashv \FC \B -$ is equivalent to giving, for all $\A \in \WCFover\gcf$, a universal arrow $(\ring \A \B, F_\A \colon \A \to \FC \B {\ring \A \B})$ from $\A$ to the functor $\FC \B -$, where $F_\A$ is a morphism of $\WCFover\gcf$. This means that, for a fixed $\A$, we have to define a functor over $\gcf$ that makes the following triangle commute: \[ \begin{tikzcd}[column sep={1.5cm,between origins}] \A \ar[rr,"F_\A"] \ar[dr,"\Gamma"'] & & \FC \B {\ring \A \B} \ar[dl,"\gf"] \\ & \gcf \end{tikzcd} \] and that is universal among all arrows from $\A$ to $\FC \B -$: for all arrows $(\C, \Phi \colon \A \to \FC \B \C)$ from $\A$ to $\FC \B -$ ($\Phi$ being a functor over $\gcf$), there must exist a unique morphism in $\Cat$, that is, a functor $H \colon \ring \A \B \to \C$, such that \[ \begin{tikzcd} \A \ar[r,"F_\A"] \ar[dr,"\Phi"'] & \FC \B {\ring \A \B} \ar[d,"{\FC \B H}"] \\ & \FC \B \C \end{tikzcd} \] commutes. In the proof we will refer to properties \ref{PhiA(1...1)=1PhiA}-\ref{Phi(f2 f1)=Phi(f2) Phi(f1)} as given in the definition of $\ring \A \B$. 
Let then $\A$ be a category over $\gcf$ with $\Gamma \colon \A \to \gcf$ a weak Conduché fibration. We define the action of $F_\A$ on objects first. If $A$ is an object of $\A$ with $\Gamma(A)=\alpha$, then the assignment \[ \begin{tikzcd}[row sep=0em] \B^{\alpha} \ar[r,"F_\A(A)"] & \ring \A \B \\ \bfB \ar[|->,r] \ar{d}[description,name=A]{\bfg} & A[\bfB] \ar{d}[description,name=B]{{A[\bfg]}} \\[2em] \bfB' \ar[|->,r] & A[\bfB'] \arrow[from=A,to=B,|->] \end{tikzcd} \] is a functor by virtue of \ref{PhiA(1...1)=1PhiA} and \ref{PhiA(hg)=PhiA(h)PhiA(g)}. By a slight abuse of notation, call $F_\A(A)$ also the pair $(\alpha,F_\A(A))$, which is an object of $\FC \B {\ring \A \B}$. To define $F_\A$ on morphisms, let $f \colon A \to A'$ be a morphism in $\A$, with $\Gamma(A)=\alpha$, $\Gamma(A')=\beta$, let \[ \xi = \bigl( \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma"] & N & \length\beta \ar[l,"\overline\tau"'] \end{tikzcd}, \Delta_\xi \bigr) \in \Gamma(f), \] and call $ \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd} $ the skeleton of $(\overline\sigma,\overline\tau,N)$. We define $F_\A (f) \colon F_\A(A) \to F_\A(A')$ to be the equivalence class of the tuple \[ \bigl( F_\A (f)_\xi, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd}, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma"] & N & \length\beta \ar[l,"\overline\tau"'] \end{tikzcd}, \Delta_\xi \bigr) \] where $F_\A(f)_\xi$ is a transformation whose general component is \[ \begin{tikzcd}[row sep=1em,column sep=4em] F_\A(A)(\bfB\sigma) \ar[d,phantom,"\rotatebox{90}="] \ar[r,"{f_\xi[\bfB]}"] & F_\A(A')(\bfB\tau) \ar[d,phantom,"\rotatebox{90} ="] \\ A[\bfB\sigma] & A'[\bfB\tau] \end{tikzcd} \] Then $F_\A(f)_\xi$ is indeed dinatural in its $i$-th variable whenever $\Delta_\xi(i)=1$ because of~\ref{Phif_xi dinatural}. 
Moreover, $F_\A$ is well-defined on morphisms because of \ref{Phif_pixi(Bi)=Phif_xi(Bpii)} and is in fact a functor thanks to \ref{Phi(1A)=1_Phi(A)} and \ref{Phi(f2 f1)=Phi(f2) Phi(f1)}. Finally, $F_\A(f)$ so defined is indeed a morphism of $\FC \B {\ring \A \B}$: if $f$ is such that $\Gamma(f)$ is atomic, then $F_\A(f)$ is an atomic morphism of $\FC \B {\ring \A \B}$; if instead $\Gamma(f)=(N_k,\Delta_k) \circ \dots \circ (N_1,\Delta_1)$ where $(N_i,\Delta_i)$ is atomic, then there exists a factorisation $f=f_k \circ \dots \circ f_1$ in $\A$ with $\Gamma(f_i)=(N_i,\Delta_i)$ because $\Gamma$ is a weak Conduché fibration. By functoriality of $F_\A$, we have that $F_\A(f)=F_\A(f_k) \circ \dots \circ F_\A(f_1)$, hence it is a composite of atomic morphisms of $\FC \B {\ring \A \B}$. We now prove that $F_\A$ is universal. Let then $\Phi \colon \A \to \FC \B \C$ be a morphism in $\WCFover\gcf$, that is a functor over $\gcf$. We define $H \colon \ring \A \B \to \C$ as follows: \begin{itemize}[wide=0pt,leftmargin=*] \item[\ref{PhiA(B1...Balpha)}] For $A \in \A$ with $\Gamma(A)=\alpha$ and $\bfB \in \B^\alpha$, \[ H\bigl(A[\bfB]\bigr) = \Phi(A)(\bfB); \] \item[\ref{PhiA(g1...galpha)}] For $A \in \A$ with $\Gamma(A)=\alpha$, for $\bfg$ in $\B^\alpha$, \[ H\bigl(A[\bfg]\bigr) = \Phi(A)(\bfg); \] \item[\ref{Phif}] For $f \colon A \to A'$ in $\A$, $\xi = (N_\xi,\Delta_\xi) \in \Gamma(f)$ where $N_\xi$ has $n$ connected components, for $\bfB \in \B^n$, \[ H\bigl(f_\xi[\bfB]\bigr) = \Phi(f)_\xi(\bfB), \] where $\Phi(f)_\xi$ is the representative of $\Phi(f)$ whose type is given by the skeleton of $N_\xi$, cf.~the discussion on the data entailed by a functor $\Phi \colon \A \to \FC \B \C$ over $\gcf$ preceding Definition~\ref{definition A ring B}. 
\end{itemize} $H$ so defined on the generators of $\ring \A \B$ extends to a unique functor provided that $H$ preserves the equalities \ref{PhiA(1...1)=1PhiA}-\ref{Phi(f2 f1)=Phi(f2) Phi(f1)} in $\ring \A \B$, which it does as they have been designed \emph{precisely} to reflect all the properties of a functor $\Phi \colon \A \to \FC \B \C$, and $H$ is defined using $\Phi$ accordingly. Finally, by construction \[ \begin{tikzcd} \A \ar[r,"F_\A"] \ar[dr,"\Phi"'] & \FC \B {\ring \A \B} \ar[d,"{\FC \B H}"] \\ & \FC \B \C \end{tikzcd} \] commutes. The uniqueness of $H$ follows from the fact that the commutativity of the above triangle implies that $\Phi(A)=H(F_\A(A))$ for all $A \in \A$ and $\Phi(f)=H(F_\A(f))$, hence any such functor $H$ \emph{must} be defined as we did to make the triangle commutative. With such a universal arrow $(\ring \A \B, F_\A \colon \A \to \FC \B {\ring \A \B})$ we can define a functor $\ring - \B$ which is the left adjoint of $\FC \B -$. Given $F \colon \A \to \A'$ a functor over $\gcf$, by universality of $F_\A$ there exists a unique functor $\ring F \B \colon \ring \A \B \to \ring {\A'} \B$ that makes the following square commute: \[ \begin{tikzcd} \A \ar[r,"F_\A"] \ar[d,"F"'] & \FC \B {\ring \A \B} \ar[d,"\FC \B {\ring F \B}"] \\ \A' \ar[r,"F_{\A'}"'] & \FC \B {\ring {\A'} \B} \end{tikzcd} \] Such $\ring F \B$ is defined on objects as $\ring F \B \bigl( A[\bfB] \bigr) = (F_{\A'} \circ F)(A)(\bfB) = FA[\bfB]$ and on morphisms as \[ \ring F \B \bigl( A[\bfg] \bigr) = FA[\bfg], \quad \ring F \B \bigl( f[\bfB] \bigr) = Ff[\bfB]. 
\] Finally, $\ring{}{}$ extends to a functor \[ \begin{tikzcd}[row sep=0em] \WCFover\gcf \times \Cat \ar[r,"\ring{}{}"] & \Cat \\ \A\quad\quad\,\B \ar[r,|->] \ar[d,shift right=17mu,"F"'] \ar[d,shift left=17mu,"G"] & \ring \A \B \ar[d,"\ring F G"] \\[3em] {\A'}\quad\quad\B' \ar[r,|->] & \ring {\A'} {\B'} \end{tikzcd} \] where $\ring F G$ is defined as follows on the generators: \begin{itemize} \item $\ring F G \bigl( A[\bfB] \bigr) = FA[G\bfB]$, \item $\ring F G \bigl( A[\bfg] \bigr) = FA[G\bfg]$, \item $\ring F G \bigl( f[\bfB] \bigr) = Ff[G\bfB]$ \end{itemize} (where $G\bfB=(GB_1,\dots,GB_{\length\alpha})$ if $\bfB=(B_1,\dots,B_{\length\alpha})$). It is easy to see that $\ring F G$ is well defined (i.e.\ it preserves equalities in $\ring \A \B$), thanks to the functoriality of $F$ and $G$. It is also immediate to verify that $\ring{}{}$ is indeed a functor.\qed \end{proof} \section{Conclusions}\label{section:coda} The ultimate goal to achieve a complete substitution calculus of dinatural transformations is to obtain an appropriate functor over $\gcf$ \[ M \colon \ring{\FC \B \C} {\FC \A \B} \to \FC \A \C \] which, \emph{de facto}, realises a \emph{formal} substitution of functors into functors and transformations into transformations as an \emph{actual} new functor or transformation. As in Kelly's case, {horizontal} composition of dinatural transformations will be at the core, we believe, of the desired functor; the rules of vertical composition are, instead, already embodied into the definition of $\FC \B \C$. Such $M$ will arise as a consequence of proving that $\WCFover\gcf$ is a monoidal closed category, much like Kelly did, by showing that the natural isomorphism~(\ref{natural isomorphism (A circ B, C) -> (A,{B,C})}) extends to \[ \WCFover\gcf (\ring \A \B , \C) \cong \WCFover\gcf (\A , \FC \B \C). \] Necessarily then, we will first have to show that the substitution category $\ring \A \B$ is itself an object of $\WCFover\gcf$. 
Following Kelly's steps described in~\cite[\S 2.1]{kelly_many-variable_1972}, this will be done by extending our functor $\ring{}{} \colon \WCFover\gcf \times \Cat \to \Cat$ to a functor \[ \ring{}{} \colon \WCFover\gcf \times \WCFover\gcf \to \WCFover\gcf, \] exhibiting $\WCFover\gcf$ as a monoidal category, with tensor $\ring{}{}$. To do so in his case, Kelly defined $\ring \A \B$ just as before, ignoring the augmentation on $\B$, and then augmented $\ring \A \B$ using the augmentations of $\A$ and $\B$. In fact, what he did, using the category $\Per$ of permutations, was to regard $\Per$ as a category over itself in the obvious way and then to define a functor $P \colon \ring \Per \Per \to \Per$ that computes substitution of permutations into permutations. That done, he set $\Gamma \colon \ring \A \B \to \Per$ as a composite \[ \begin{tikzcd} \ring \A \B \ar[d,"\ring {\Gamma_\A} {\Gamma_\B}"'] \ar[r] & \Per \\ \ring \Per \Per \ar[ur,"P"'] \end{tikzcd} \] This suggests, as usual, doing the same in our case. Hence, the next step will be to come up with a substitution functor \[ S \colon \ring \gcf \gcf \to \gcf, \] which is tantamount to defining an operation of substitution of graphs, and then to define $\Gamma \colon \ring \A \B \to \gcf$ as \begin{equation}\label{augmentation of A ring B via G ring G} \begin{tikzcd} \ring \A \B \ar[d,"\ring {\Gamma_\A} {\Gamma_\B}"'] \ar[r] & \gcf\\ \ring \gcf \gcf \ar[ur,"S"'] \end{tikzcd} \end{equation} A possible hint at how to do this is given by how we defined the horizontal composition of dinatural transformations in Chapter~\ref{chapter horizontal}, and what happened to the graphs of the transformations (that is, we consider the special case of $\A = \B = \FC \C \C$). 
Looking back at Example~\ref{ex:hc example}, when we computed the first horizontal composition of $\delta$ and $(\eval A B)_{A,B}$, in fact we considered the formal substitution $\eval{}{}\bigl[\delta,([+],\id\C)\bigr]$ in $\ring {\FC \C \C} {\FC \C \C}$, which we then realised into the transformation $\HC \delta {\eval{}{}} 1$. This realisation part is what the desired functor $M$ will do, once properly defined. Now, consider, in $\ring \gcf \gcf$, the formal substitution $\graph{\eval{}{}}\bigl[\graph\delta,[+]\bigr]$, which is the image of $\eval{}{}\bigl[\delta,([+],\id\C)\bigr]$ along the functor $\ring \gf \gf \colon \ring {\FC \C \C} {\FC \C \C} \to \ring \gcf \gcf$. Since $M \colon \ring {\FC \C \C} {\FC \C \C} \to \FC \C \C$ ought to be a functor over $\gcf$, we have that $S\bigl(\graph{\eval{}{}}\bigl[\graph\delta,[+]\bigr]\bigr)$ should be the graph that $\HC \delta {\eval{}{}} 1$ has, which is \[ \begin{tikzpicture} \matrix[column sep=1em,row sep=1.5em]{ \node[category] (1) {}; & \node[opCategory] (2) {}; & \node[opCategory] (3) {}; & \node[category] (4) {}; \\ & \node[component] (A) {}; & & \node[component] (B) {};\\ & & & \node[category] (5) {};\\ }; \graph[use existing nodes]{ 1 -> A -> {2,3}; 4 -> B -> 5; }; \end{tikzpicture} \] The intuition for it was that we ``bent'' $\graph\delta$ into the U-turn that is the first connected component of $\graph{\eval{}{}}$. A possible approach to a general definition of substitution of graphs into graphs is the following: given two connected graphs $N_1$, $N_2$ in $\gcf$, the graph $S\bigl(N_1[N_2]\bigr)$ is the result of subjecting $N_2$ to all the ramifications and U-turns of $N_1$; in so doing, one would have to substitute a copy of $N_2$ in every \emph{directed path} of $N_1$. 
This idea is not original: it was suggested by Bruscoli, Guglielmi, Gundersen and Parigot~\cite{guglielmi_substitution} in private communications as a way to implement substitution of \emph{atomic flows}~\cite{GuglGundStra::Breaking:uq}, which are graphs extracted from certain formal proofs in \emph{Deep Inference}~\cite{Gugl:06:A-System:kl} and which look very much like morphisms in $\gcf$. How to turn such an intuitive idea into a formal, working definition is the subject of current investigations, and this task has already proved to be far from trivial. Once that is done, the rest should follow relatively easily, and we expect that the correct compatibility law for horizontal and vertical composition sought in Section~\ref{section compatibility} will become apparent once the substitution functor $M$ above is found as part of a monoidal closed structure. \section*{Acknowledgements} Most of the material in this article derives from Santamaria's PhD thesis~\cite{santamaria_towards_2019}, written under the supervision of McCusker, and it is, in part, a detailed version of~\cite{mccusker_compositionality_2018}. As such, Santamaria acknowledges the support of a research studentship from the University of Bath, as well as EPSRC grant EP/R006865/1 and the funding support of the Ministero dell'Universit\`a e della Ricerca of Italy under Grant No.~201784YSZ5, PRIN2017. The authors would like to thank John Power for suggesting the notations to handle the manipulation of tuples, which we believe provided a great improvement to the exposition of our theory with respect to~\cite{mccusker_compositionality_2018,santamaria_towards_2019}. We would also like to thank Alessio Guglielmi for his valuable insights on the simplification of the proof of Theorem~\ref{thm:acyclic-implies-reachable} with respect to~\cite{mccusker_compositionality_2018,santamaria_towards_2019}. 
Finally, we thank Zoran Petri\'c for his kind understanding of our lack of acknowledgement of his results in the past: we hope that with this paper we have finally given him the credit he deserves for his work.
\section{Introduction} The dynamics of polarizable point-like particles trapped by an optical cavity light field has been the subject of intense theoretical and experimental studies in the past decade~\cite{domokos2003mechanical,ritsch2013cold,aspelmeyer2014cavity,aspelmeyer2014cavitybook}. Beyond implementing improved neutral-atom cavity-QED systems~\cite{ye1999trapping,kruse2003cold,pinkse2000trapping}, recently proposed applications of such setups range from ultrahigh-$Q$ optomechanics~\cite{chang2012ultrahigh,ni2012enhancement,aspelmeyer2014cavity} to precision tests of quantum mechanics at a mesoscopic scale~\cite{romero2011large} and gravity~\cite{pikovski2012probing}. Following the first pioneering experiments more than a decade ago~\cite{ye1999trapping,pinkse2000trapping,kruse2003cold}, several groups have implemented reliable cavity-based optical traps in their experiments for various particle numbers ranging from a single or few atoms~\cite{schleier2011optomechanical,stamper2014cavity,brahms2012optical,thompson2013coupling} to Bose-Einstein condensates (BECs)~\cite{brennecke2007cavity,colombe2007strong,wolke2012cavity,bux2011cavity} or lately even considerably heavier nanoparticles~\cite{asenbaum2013cavity,kiesel2013cavity,millen2015cavity}. \par Particles in cavity fields, in contrast to free-space optical potentials, substantially act back on the field dynamics~\cite{domokos2001semiclassical,mekhov2012quantum}, which generates complex and rich nonlinear dynamics~\cite{gupta2007cavity,griesser2011nonlinear,diver2014nonlinear,goldwin2014backaction}. In the standard optomechanical limit of very tightly trapped particles or membranes, which can essentially be modeled by harmonic oscillators~\cite{marquardt2009optomechanics,schulze2010optomechanical}, a wealth of interesting physics beyond ground-state cooling appears in the strong-coupling regime. 
Typical examples are atom-field entanglement, nonlinear oscillations, and multistable behavior~\cite{marquardt2006dynamical,fernandez2007nonlinear,vukics2009cavity,griesser2011nonlinear,niedenzu2012quantum,dombi2013optical}. The system dynamics becomes even more complex and rich if one refrains from linearizing the particle motion and considers its full dynamics along the cavity potential~\cite{maschler2005cold,niedenzu2010microscopic}. \par In most cases the optical potential along the cavity axis is well approximated by a sinusoidal lattice potential with a depth proportional to the momentary intracavity photon number~\cite{ritsch2013cold}. While for deep potentials the harmonic-oscillator basis allows for analytic insight, it becomes inadequate for shallower lattices. The eigenfunctions of periodic potentials are delocalized Bloch functions, which can be transformed to localized Wannier functions~\cite{kohn1959analytic}. Unfortunately, no analytic solutions are known for either the Bloch or the Wannier functions, even for a fixed lattice depth. Hence, aiming for an explicit analytic treatment in the (dynamic) quantum-potential limit is a hopeless goal. In view of these complications, several semiclassical and mean-field models with factorized evolution of the particles and the field have been developed to obtain some first insights~\cite{domokos2001semiclassical,griesser2011nonlinear,schuetz2013cooling}. Here the field expectation value is governed by ordinary differential equations containing particle expectation values. This field is in turn inserted in the effective Hamiltonian for the particle motion~\cite{maschler2004quantum,maschler2005cold}. Even in this strongly simplified limit the nonlinearity of the interaction does not allow for a straightforward solution in the general case, and further assumptions are needed~\cite{vukics2009cavity}. 
\par In this paper we study the full quantum dynamics and the steady-state properties for the case of a single particle in a cavity-sustained optical lattice in the strongly coupled and strongly pumped limit. Hence, our treatment will centrally be based on straightforward numerical solutions of the corresponding quantum-optical master equation. Strong emphasis will be put on steady-state properties of the system in the limit of very low temperatures close to $T=0$, where semiclassical treatments predict a multitude of stationary solutions. To this end we will heavily rely on quantum Monte Carlo wave-function simulations~\cite{dalibard1992wave,dum1992monte,moelmer1993monte}, since a direct solution of the master equation becomes very slow and cumbersome owing to the large joint particle-field Hilbert space, even though we consider the simplest possible system involving only a single particle. \par This paper is organized as follows. In Sec.~\ref{sec_model} we introduce the model Hamiltonian and the master equation, from which we derive equations for the expectation values of the cavity field and the photon number, depending on the particle state. To get some first qualitative insight into the system behavior and to identify interesting parameter regimes, we start with simplified semiclassical models. Factorizing atom and field dynamics, we approximate the photon field by a classical field characterized by its mean photon number, which is determined by the spatial distribution of the particle. We then look for self-consistent steady-state solutions for the expected photon number. Section~\ref{sec_self-consistent} is devoted to studies of these self-consistency conditions in various limiting cases. We first consider the deep-trap limit of harmonic particle confinement, which allows for an analytic treatment. This analysis is afterwards extended to localized Wannier states in the general case of a periodic optical lattice. 
In Sec.~\ref{sec_numerical} we then numerically solve the full master equation in typical operating regimes determined before and also analyze the behavior of single Monte Carlo trajectories. Finally, in Sec.~\ref{sec_conclusions} the conclusions are drawn. \section{Model}\label{sec_model} \par \begin{figure} \centering \includegraphics[width=\columnwidth]{system} \caption{(Color online) A particle within a driven optical cavity. The longitudinal cavity pump $\eta$ builds up an intracavity field that drives the particle motion. The particle's motional state affects the cavity detuning (dynamic refractive index), which in turn influences the intracavity photon number. Photons leak through the cavity mirrors at a rate $2\kappa$.} \label{fig_system} \end{figure} \par We consider the standard case of a driven, damped cavity mode with a single polarizable particle of mass $m$ moving along the cavity axis, as sketched in Fig.~\ref{fig_system}. The Hamiltonian ($\hbar=1$) in a rotating frame with pump amplitude $\eta$, cavity detuning $\Delta_\mathrm{C}$, and effective particle-field interaction strength $U_0$ is then given by~\cite{hechenblaikner1998cooling} \begin{equation} \label{Hamiltonian} H = \frac{p^2}{2m}-\left[\Delta_\mathrm{C}-U_0 \cos^2(k_\mathrm{R} x)\right]a^{\dagger}a-i\eta\left(a-a^{\dagger}\right), \end{equation} where $k_\mathrm{R}=2\pi/\lambda$ is the single-photon recoil momentum, with $\lambda$ being the cavity mode wavelength. The particle position and momentum operators $x$ and $p$, and the photon mode annihilation (creation) operators $a^{\left(\dagger\right)}$, obey the standard canonical commutation relations \begin{equation} \comm{x}{p}=i , \; \comm{a} {a^{\dagger} }=1, \end{equation} with all other commutators vanishing. The coupling strength is parametrized by $U_0$, which denotes the optical potential depth per photon as well as the maximum cavity mode resonance frequency shift that a particle induces when placed at a field antinode. 
Here we consider a large negative $U_0$, which corresponds to high-field seeking particles. \par We assume operation far from any internal optical resonances, such that spontaneous light scattering and absorption losses from the particle into the mode can be neglected~\cite{domokos2003mechanical}. The dominant loss mechanism is then cavity damping, which at optical frequencies can be incorporated by a standard master equation treatment parametrized by a loss rate $\kappa$~\cite{gardinerbook}, \begin{equation}\label{eq_master} \dot{\rho} = -i \comm{H}{\rho}+\kappa \left(2a \rho a^{\dagger}-a^{\dagger}a\rho-\rho a^{\dagger}a \right). \end{equation} \par From this master equation we straightforwardly derive ordinary differential equations (ODEs) for the expectation values of the field amplitude, $\ew{a}=\alpha$, and the photon number, $\ew{n}=\ew{a^{\dagger} a}$, which read \begin{subequations}\label{eq_alpha_n} \begin{align} \dot{\alpha}&=\left[i\left(\Delta_\mathrm{C}-U_0\ew{\cos^2(k_\mathrm{R} x)}\right)-\kappa\right]\alpha+\eta \\ \dot{\ew{n}}&=\eta\left(\alpha+\alpha^{\ast}\right)-2\kappa\ew{n}. \end{align} \end{subequations} Within the semiclassical treatment with a $c$-number description of the field amplitude the particle-field density matrix is assumed to be separable. Obviously, the field dynamics depends on the motional state of the particle via the expectation value (bunching parameter) \begin{equation}\label{eq_ew_cos2} b\mathrel{\mathop:}=\ew{\cos^2(k_\mathrm{R} x)} \end{equation} in a nonlinear fashion. This parameter itself is, in turn, governed by the Schr{\"o}dinger equation containing the Hamiltonian~\eqref{Hamiltonian}, whose spatial eigenstates are Bloch functions according to the quantized lattice depth $V_0= |U_0| a^{\dagger}a$. This yields a different evolution for each photon-number component of the total wave function and thus a very complex time evolution. 
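Before turning to the fully coupled problem, the field equation alone can be checked in isolation: for a frozen spatial state, i.e., a fixed bunching parameter $b$, Eqs.~\eqref{eq_alpha_n} close and can be integrated directly. The following minimal sketch (plain Python; all parameter values are illustrative choices in units of $\omega_\mathrm{R}$, not taken from the figures) verifies that the field amplitude relaxes to the Lorentzian steady state.

```python
# Illustrative parameters in units of omega_R (hypothetical values).
kappa, eta = 1.0, 3.0
U0, Delta_c = -10.0, -12.0
b = 0.8  # frozen bunching parameter <cos^2(k_R x)>

def alpha_dot(alpha):
    """Right-hand side of the semiclassical field equation for fixed b."""
    return (1j * (Delta_c - U0 * b) - kappa) * alpha + eta

alpha, dt = 0.0 + 0.0j, 1e-3          # start from the empty cavity
for _ in range(20000):                # explicit Euler up to t = 20/omega_R
    alpha += dt * alpha_dot(alpha)

# Analytic steady state and the corresponding Lorentzian photon number.
alpha_ss = eta / (kappa - 1j * (Delta_c - U0 * b))
n_ss = eta**2 / (kappa**2 + (Delta_c - U0 * b)**2)
print(abs(alpha - alpha_ss), abs(alpha_ss)**2 - n_ss)
```

With the particle state frozen, the field simply decays toward this fixed point at rate $\kappa$; the complexity discussed above enters only once $b$ evolves together with the field.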
Hence, a full solution of the master equation~\eqref{eq_master} requires a numerical approach, which can be directly implemented using a truncated photon-number and momentum basis expansion. Note that due to the periodic nature of the potential we can work with periodic boundary conditions in real space and use a discrete momentum basis $\{\ket{p}=\ket{jk_\mathrm{R}}\}$ with $j \in \mathbb{Z}$. \par As these calculations are time consuming and the range of physical parameters $(\eta,\Delta_\mathrm{C},U_0,\omega_\mathrm{R})$ is large, we first try to get some qualitative insight and find interesting parameter regions using the factorized semiclassical approach involving Eqs.~\eqref{eq_alpha_n}. \section{Self-consistent semiclassical solutions of the coupled atom-field dynamics}\label{sec_self-consistent} Let us now analyze potential stationary solutions of the coupled ODE system~\eqref{eq_alpha_n}. As the field dynamics in the semiclassical approximation depends on the position distribution of the particle via the expectation value~\eqref{eq_ew_cos2} only, for the system to reach a steady state we need a stationary wave function. This leads to the self-consistency condition \begin{equation} \ew{n} = \frac{\eta^2}{\kappa^2+\left(\Delta_\mathrm{C}-U_0\ew{\cos^2(k_\mathrm{R} x)}\right)^2}, \label{eq_selcons} \end{equation} where the wave function of the particle has to be an eigenstate of the Hamiltonian~\eqref{Hamiltonian} with the photon-number operator $a^{\dagger}a$ replaced by $\langle n\rangle$. Note that the expectation value $\ew{\cos^2(k_\mathrm{R} x)}$ in the denominator on the right-hand side of Eq.~\eqref{eq_selcons} does not explicitly involve any field operators. Nevertheless, the time evolution of the spatial part of the wave function depends on the field intensity. Hence, the state can only be stationary if it is an eigenstate of the Hamiltonian~\eqref{Hamiltonian} for the momentary photon number. 
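For the lowest-energy particle state, a self-consistent solution of Eq.~\eqref{eq_selcons} can be found by damped fixed-point iteration: diagonalize the particle Hamiltonian for the current $\ew{n}$ in a truncated momentum basis, evaluate $\ew{\cos^2(k_\mathrm{R} x)}$ in its ground state, and update $\ew{n}$. The sketch below uses $U_0=-10\omega_\mathrm{R}$, $\kappa=\omega_\mathrm{R}$, and $\eta=6\omega_\mathrm{R}$ as in the figures; the detuning $\Delta_\mathrm{C}=-12\omega_\mathrm{R}$ and all cutoffs are illustrative choices, and the damping factor is just one simple way to stabilize the iteration.

```python
import numpy as np

omega_R, kappa = 1.0, 1.0
U0, eta, Delta_c = -10.0, 6.0, -12.0
j = np.arange(-12, 13)  # truncated momentum basis |j k_R>

def ground_bunching(n_mean):
    """<cos^2(k_R x)> in the particle ground state for lattice depth U0*n_mean."""
    # cos^2(k_R x) = 1/2 + (e^{2 i k_R x} + h.c.)/4 couples j -> j +/- 2.
    H = np.diag(omega_R * j.astype(float)**2 + 0.5 * U0 * n_mean)
    H += 0.25 * U0 * n_mean * (np.eye(j.size, k=2) + np.eye(j.size, k=-2))
    c = np.linalg.eigh(H)[1][:, 0]      # real ground-state amplitudes
    return 0.5 + 0.5 * np.sum(c[:-2] * c[2:])

n = 1.0                                 # initial guess for <n>
for _ in range(200):
    b = ground_bunching(n)
    n_target = eta**2 / (kappa**2 + (Delta_c - U0 * b)**2)
    n = 0.5 * n + 0.5 * n_target        # damped update toward self-consistency

print(n, b)  # self-consistent photon number and bunching parameter
```

For these values the iteration settles on a single bound solution with $b>\tfrac{1}{2}$, consistent with a high-field-seeking particle localized at the antinodes.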
Note that the pump amplitude $\eta$ is a free parameter in the above equation and in many cases for a given eigenstate of the particle Hamiltonian a self-consistent choice of $\eta$ can be made to fulfill Eq.~\eqref{eq_selcons}~\cite{maschler2004quantum}. We, however, opt for the opposite approach and determine self-consistent photon numbers for given pump strengths. \par Let us mention, though, that this is only a necessary condition and by no means sufficient for a stable stationary equilibrium subject to the quantum fluctuations of the system. At this point it can only serve as a guide towards interesting parameter regions, which is, e.g., the case when several different spatial eigenfunctions lead to the same pump amplitude $\eta$. We will discuss this in some more detail below for specific limiting examples. \subsection{Harmonic-oscillator expansion in a deep lattice} In the limit where the potential depth $V_0 \approx |U_0| \ew{n} $ strongly exceeds the recoil energy $E_\mathrm{R}\equiv\omega_\mathrm{R}\mathrel{\mathop:}= k_\mathrm{R}^2 /(2 m)$, the lowest-energy particle states are well localized within a single well of the optical lattice. For low enough temperatures the optical potential $V_{\text{eff}}(x)=U_0\ew{n}\cos^2(k_\mathrm{R} x)$ can then be approximated by a harmonic potential~\cite{maschler2004quantum}, \begin{equation} U_0\ew{n}\cos^2(k_\mathrm{R} x) \approx U_0\ew{n}\left(1-k_\mathrm{R}^2 x^2\right). \end{equation} The corresponding trapping frequency $\omega_{\mathrm{ho}}$ then reads \begin{equation} \frac{\omega_{\mathrm{ho}}}{\omega_\mathrm{R}} = 2\sqrt{\frac{|U_0|}{\omega_\mathrm{R}}\ew{n}} \gg 1, \end{equation} and we can analytically find the respective oscillator states $\ket{n_{\mathrm{ho}}}$ to this frequency. 
The expectation value in the denominator of Eq.~\eqref{eq_selcons} is then well approximated by $\ew{\cos^2(k_\mathrm{R} x)} \approx 1 - k_\mathrm{R}^2 \ew{x^2}$ with \begin{equation} \ew{x^2}_{n_{\mathrm{ho}}}=\bra{n_{\mathrm{ho}}}x^2\ket{n_{\mathrm{ho}}}=\frac{2n_{\mathrm{ho}}+1}{2m\omega_{\mathrm{ho}}}, \end{equation} such that \begin{equation} k_\mathrm{R}^2\ew{x^2}_{n_{\mathrm{ho}}}=\left(2n_{\mathrm{ho}}+1\right)\frac{\omega_\mathrm{R}}{\omega_{\mathrm{ho}}}=\frac{2n_{\mathrm{ho}}+1}{2\sqrt{\frac{|U_0|}{\omega_\mathrm{R}}\ew{n}}}. \end{equation} Hence, within the harmonic-oscillator approximation Eq.~\eqref{eq_selcons} becomes the simple algebraic equation \begin{align} \ew{n} = \frac{\eta^2}{\kappa^2+\left[\Delta_\mathrm{C}-U_0\left(1-\frac{2n_{\mathrm{ho}}+1}{2\sqrt{\frac{|U_0|}{\omega_\mathrm{R}}\ew{n}}}\right)\right]^2}, \label{eq_selconsho} \end{align} which can be easily solved for each choice of eigenstate number $n_\mathrm{ho}$. Figure~\ref{fig_selconsho} shows contours in the $\Delta_\mathrm{C}$-$\ew{n}$ plane for different values of $\eta$ for which Eq.~\eqref{eq_selconsho} holds. While the lowest-energy state $n_\mathrm{ho}=0$ results in a unique photon number, multiple (up to two) solutions are possible for higher excited states. \par \begin{figure} \centering \includegraphics[width=\columnwidth]{selfconsho4} \caption{(Color online) Self-consistent photon-number contours for a harmonically trapped particle as a function of the cavity-pump detuning for different pump amplitudes $\eta$. The four plots show contours in the $\Delta_\mathrm{C}$-$\ew{n}$ plane for different harmonic-oscillator eigenstates $\ket{n_{\text{ho}}}$, where Eq.~\eqref{eq_selconsho} holds self-consistently. All excited states, $n_{\text{ho}}\geq 1$, yield the possibility of two self-consistent solutions for a certain range of detuning $\Delta_\mathrm{C}$ and therefore may allow for optomechanical bistability. 
Parameters: $U_0=-10\omega_\mathrm{R}$ and $\kappa=\omega_\mathrm{R}$.} \label{fig_selconsho} \end{figure} \subsection{Self-consistent states for the full lattice dynamics} Several interesting aspects of the deep-trap harmonic-oscillator regime have been studied in the past~\cite{maschler2004quantum,diver2014nonlinear,vukics2009cavity}. In many respects the system is directly related to standard optomechanical models with quadratic coupling~\cite{marquardt2009optomechanics}. For ultracold particles and weaker optical potentials the motion is strongly delocalized in the lattice~\cite{ritsch2013cold}. Hence, we now turn to the full model and consider the particle's motion in the periodic optical lattice \begin{equation} V_{\text{eff}}(x) = U_0\ew{n}\cos^2(k_\mathrm{R} x). \end{equation} \par For very shallow lattices close to zero temperature, i.e., for a BEC in a cavity, a two-mode expansion of the wave function can be applied~\cite{brennecke2007cavity,szirmai2010quantum}, which again allows for analytic treatments and analogies with optomechanical couplings. However, the validity range of this model is limited in temperature, time, and coupling strength. As we are here more interested in the limit of strong nonlinear backaction in deep potentials, we cannot apply this simplification and have to solve the Schr{\"o}dinger equation for a periodic potential, which gives us the well-known Bloch states $\Psi_{mq}(x)$, where $m$ denotes the energy band and $q$ is the quasi-momentum~\cite{kohn1959analytic}. Being delocalized over the entire lattice, they are not the best basis to describe a single localized particle. Hence, we switch to a Wannier basis, where each basis state represents a localized wave function with its center of mass at a particular lattice site. Such basis states have been very successfully used to study ultracold particle dynamics in optical lattices~\cite{jaksch1998cold,zwerger2003mott}. 
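The Bloch problem for the $\cos^2$ lattice is readily solved by standard plane-wave diagonalization. The sketch below uses an illustrative photon number $\ew{n}=2$ (depth $V_0=-20\omega_\mathrm{R}$ for $U_0=-10\omega_\mathrm{R}$) and classifies the lowest bands as bound or free by the sign of their band-averaged energy, in the spirit of the $E<0$ criterion of Fig.~\ref{fig_selconswa}; the cutoff and the number of quasimomentum samples are arbitrary choices.

```python
import numpy as np

omega_R = 1.0
U0, n_mean = -10.0, 2.0          # illustrative photon number
V0 = U0 * n_mean                 # momentary lattice depth
l = np.arange(-15, 16)           # plane-wave cutoff

def bloch_hamiltonian(q):
    """H(q) in the basis e^{i(q + 2l)k_R x}; quasimomentum q in units of k_R."""
    H = np.diag(omega_R * (q + 2 * l)**2 + 0.5 * V0)
    H += 0.25 * V0 * (np.eye(l.size, k=1) + np.eye(l.size, k=-1))
    return H

qs = np.linspace(-1.0, 1.0, 41)  # first Brillouin zone
bands = np.array([np.linalg.eigvalsh(bloch_hamiltonian(q))[:6] for q in qs])
E_mean = bands.mean(axis=0)      # band-averaged energies

for m, E in enumerate(E_mean):
    print(m, "bound" if E < 0 else "free")
```

The maximally localized Wannier functions discussed next are then obtained from these Bloch states, after the Bloch phases have been suitably fixed.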
The Wannier functions for a given band index $m$ localized at lattice position $R$ are defined as~\cite{kohn1959analytic} \begin{equation} w_m(x-R):=\sqrt{\frac{a}{2\pi}}\int_{-\pi/a}^{\pi/a}\Psi_{mq}(x)e^{-iqR}\mathrm{d} q, \end{equation} where $a$ is the lattice periodicity. The Bloch functions $\Psi_{mq}(x)$ are only defined up to a phase. In order to obtain the \emph{maximally localized} (i.e., real and exponentially decaying) Wannier functions, these phases need to be properly adjusted~\cite{kohn1959analytic}. In what follows we choose for simplicity $R=0$. \par We are now able to restate the self-consistency equation~\eqref{eq_selcons} for each band index $m$ as \begin{equation} 0=\frac{\eta^2}{\kappa^2+\bigl(\Delta_\mathrm{C}-U_0b_m\bigr)^2}-\ew{n}, \label{eq_selconswa2} \end{equation} with \begin{equation} b_m = \int_{-\infty}^{\infty}[w_m(x)]^2\cos^2(k_\mathrm{R} x)\,\mathrm{d} x. \label{eq_potexp} \end{equation} Contrary to the harmonic oscillator wave functions, there is no analytic expression for Wannier functions and we have to numerically solve the Schr{\"o}dinger equation for each particular $\ew{n}$. Therefore $\ew{n}$ does not explicitly appear on the right-hand side of Eq.~\eqref{eq_selconswa2}, but enters implicitly through the shape of the wave function. As before, we can obtain the contours where Eq.~\eqref{eq_selconswa2} holds self-consistently in the $\Delta_\mathrm{C}$-$\ew{n}$ plane for the same values of $\eta$; see Fig.~\ref{fig_selconswa}. \par \begin{figure} \centering \includegraphics[width=\columnwidth]{selfconswa4} \caption{(Color online) Same as Fig.~\ref{fig_selconsho} for particles in localized Wannier states. Photon numbers above the dashed lines for $w_{m \geq 1}$ correspond to bound Wannier states ($E<0$). Higher bands exhibit up to three solutions of Eq.~\eqref{eq_selconswa2} for a given value of $\Delta_\mathrm{C}$. 
Parameters: $U_0=-10\omega_\mathrm{R}$ and $\kappa=\omega_\mathrm{R}$.} \label{fig_selconswa} \end{figure} \par The behaviors of the photon numbers for the lowest-energy states $n_{\mathrm{ho}}=0$ and $m=0$ in both cases are very much alike, because the corresponding lowest bound states in both models are similar. Indeed, the maximally localized Wannier functions converge to the harmonic oscillator functions for deep potentials~\cite{kohn1959analytic}. For the higher-energy eigenstates, however, the photon numbers differ significantly. The reason is that in a harmonic oscillator all states are bound, while Wannier states for increasing $m \geq 1$ undergo a transition from bound to free states for a given photon number (i.e., potential depth). Dashed lines in Fig.~\ref{fig_selconswa} indicate this boundary. For $m \geq 1$ sharp bends appear, yielding self-consistent contours reminiscent of nonlinear response curves. The origin of this peculiarity at the transition from free to bound states becomes evident if we look at the spatial particle density of the respective Wannier states. Figure~\ref{fig_wannloc} illustrates the behavior of the fourth band Wannier state $w_4$ for different mean photon numbers (i.e., potential depths). The key quantity here is the expectation value of the bunching parameter $b_m$ [Eq.~\eqref{eq_potexp}], which determines the backaction of the particle on the cavity field, i.e., its effective refractive index. For free particles, $\ew{E}>0$, the wave function is barely localized and $b_m \approx \frac{1}{2}$. Around $\ew{E}\approx 0$ the Wannier states localize around potential maxima, i.e., optical field nodes, which minimizes the backaction of the particle on the cavity, $b_m<\frac{1}{2}$, while for deeper potentials $\ew{E}<0$ and particles are drawn towards field antinodes and the index of refraction increases with potential depth, $b_m>\frac{1}{2}$. 
Thus the nonlinear behavior of the refractive index allows for multiple self-consistent solutions for certain ranges of the cavity detuning $\Delta_\mathrm{C}$. In particular, we also find solutions corresponding to unbound particle states (e.g., for $w_4$ in Fig.~\ref{fig_selconswa}). \par \begin{figure} \centering \includegraphics[width=\columnwidth]{wannfunlocplts} \caption{(Color online) Spatial particle distribution in a fourth band Wannier state for different photon numbers: (a) Free particle: The wave function is barely localized, $b_4 \lesssim 0.5$. (b) Transition to a bound state: The wave function is localized at potential \emph{maxima}, $b_4 < 0.5$. (c) Tight-binding regime: The wave function is strongly localized in a single potential well, $b_4 > 0.5$.} \label{fig_wannloc} \end{figure} \subsection{Stability of the self-consistent factorized solutions} As we saw above, for certain parameter ranges in both the harmonic oscillator and the Wannier contour plots more than one self-consistent solution appears. Whether or not these solutions have significant relevance for the full system dynamics depends on their stability properties, i.e., their response to small deviations in the photon number or the spatial distribution. Some qualitative insight can already be gained from Eq.~\eqref{eq_selcons}. The right-hand side of Eq.~\eqref{eq_selcons} depends on the shape of the wave function in real space, which in our semiclassical model implicitly depends on $\ew{n}$. The term on the right-hand side of Eq.~\eqref{eq_selcons} determines the mean photon number that is allowed by the spatial part of the wave function in steady state. If it increases or decreases faster than $\ew{n}$ itself at a self-consistent point, one may assume that the self-consistent configuration is unstable. 
Therefore we find that at \textit{stable} self-consistent configurations the inequality \begin{align} \frac{\partial}{\partial \ew{n}} \frac{\eta^2}{\kappa^2+\left(\Delta_\mathrm{C}-U_0\ew{\cos^2(k_\mathrm{R} x)}\right)^2} -1 < 0 \label{eq_selconstab} \end{align} must hold. This rather intuitive result is verified in Appendix~\ref{app_stability} via linear stability analysis. The stability regions for the fourth band (where up to three self-consistent solutions exist) are shown in Fig.~\ref{fig_selcons3d}. \par \begin{figure} \centering \includegraphics[width=\columnwidth]{selfcons3dw4} \caption{(Color online) Contour plot for the right-hand side of Eq.~\eqref{eq_selconswa2} for the fourth band Wannier state. Solid (dotted) lines mark stable (unstable) self-consistent solutions according to Eq.~\eqref{eq_selconstab}. Parameters: $\eta = 6\omega_\mathrm{R}$, $U_0=-10\omega_\mathrm{R}$, and $\kappa=\omega_\mathrm{R}$.} \label{fig_selcons3d} \end{figure} \section{Numerical analysis of the full coupled atom-field dynamics}\label{sec_numerical} In order to test the above analysis, we now strive to solve the full master equation [Eq.~\eqref{eq_master}]. As already mentioned, the large Hilbert space (even for a single particle) makes a direct numerical integration practically infeasible for realistic parameters and photon numbers. We therefore make use of quantum Monte Carlo wave-function simulations, in which single stochastic state vectors (instead of the whole density matrix) are evolved subject to a non-Hermitian effective Hamiltonian~\cite{dalibard1992wave,dum1992monte,moelmer1993monte}. This evolution is stochastically interrupted by quantum jumps corresponding to a projective removal of one photon. Mathematically, averaging over a large number of such trajectories then approximates the full density matrix. Interestingly, the jumps can be physically interpreted as detection events of photons leaking out of the resonator. 
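The jump--no-jump loop just described is compact enough to sketch directly. Below is a first-order Monte Carlo wave-function unraveling for the bare driven cavity (the particle's contribution to the detuning frozen out), for which the exact steady-state photon number $\eta^2/(\kappa^2+\Delta^2)$ is known; all parameter values are illustrative and the random seed is fixed for reproducibility.

```python
import numpy as np

rng = np.random.default_rng(7)
kappa, eta, Delta = 1.0, 1.5, 0.0     # illustrative values
n_ph = 12                             # Fock-space cutoff
a = np.diag(np.sqrt(np.arange(1, n_ph)), k=1)
num = a.conj().T @ a
H = -Delta * num - 1j * eta * (a - a.conj().T)
H_eff = H - 1j * kappa * num          # non-Hermitian effective Hamiltonian

dt, steps, n_traj = 5e-3, 3000, 50
n_avg = 0.0
for _ in range(n_traj):
    psi = np.zeros(n_ph, complex)
    psi[0] = 1.0                      # start from the empty cavity
    for _ in range(steps):
        dp = 2 * kappa * dt * np.real(psi.conj() @ num @ psi)
        if rng.random() < dp:         # quantum jump: one photon is removed
            psi = a @ psi
        else:                         # no-jump evolution, first order in dt
            psi = psi - 1j * dt * (H_eff @ psi)
        psi /= np.linalg.norm(psi)
    n_avg += np.real(psi.conj() @ num @ psi) / n_traj

print(n_avg, eta**2 / (kappa**2 + Delta**2))  # MCWF average vs. analytic value
```

In this linear example every trajectory relaxes toward the same coherent state, so the jumps hardly matter; the pronounced trajectory-to-trajectory switching discussed below only arises once the particle's motional degree of freedom is retained.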
Hence, single trajectories provide a microscopic view of the processes incorporated in the master equation since the ensemble average over many trajectories converges towards the solution of the latter. \par In what follows we compare our above predictions with the time evolution of single trajectories as well as with their ensemble average. The numerical implementation of these simulations was done within the C++QED framework, which allows efficient and fast simulations~\cite{vukics2007cppqed,vukics2012cppqedv2,sandner2014cppqedv2}. Since dynamic aspects have been eliminated in our self-consistency and stability analysis, it is not clear which of the self-consistent solutions appear in the dynamics and with what probabilities they occur. \subsection{Time evolution of single trajectories} We consider a small sample of single quantum trajectories in a multistable regime. Figure~\ref{fig_singtraj} shows the corresponding expectation values of the intracavity photon number $\ew{n}$ as well as of the kinetic energy $\ew{E_{\mathrm{kin}}}=\ew{p^2}/(2m)$. As one might expect, both quantities jump simultaneously between rather stable values. The latter can be identified as the possible semiclassical values found above. Each trajectory thus seems to switch between these states rather than forming state superpositions. Between jumps both quantities appear to fluctuate only weakly about the self-consistent values (upper three graphs). In some cases $\ew{n}$ jumps to very low values, where no bound state exists. In such cases the system continuously heats up (i.e., $\ew{E_{\mathrm{kin}}}$ increases) until a subsequent jump occurs and projects the particle back into a bound state (as for example in Fig.~\ref{fig_singtraj}b between the two quantum jumps at $\omega_\mathrm{R} t\approx 120$ and $150$; the significantly increased photon number after the second jump again allows for bound states). 
Figure~\ref{fig_singtraj}d shows an extreme case, where the particle remains essentially free for a long time. We find that there exists a multitude of stable self-consistent solutions of Eq.~\eqref{eq_selconswa2} around $\ew{n}=4$ for higher bands ($m \geq 6$), whose self-consistent photon numbers increase only slightly with increasing band index. A small plateau of $\ew{E_{\mathrm{kin}}}$ in Fig.~\ref{fig_singtraj}d can be interpreted as an occupation of the 12th Wannier state, $\ew{E_{\mathrm{kin}}}_{m=12}$. We also indicate the mean kinetic energy of the 14th excited band, $\ew{E_{\mathrm{kin}}}_{m=14}$, and observe that this energy is reached continuously rather than by discrete jumps as in the bound case. Below we will re-encounter traces of such trajectories in the ensemble-averaged solution of the master equation. \par Since the self-consistent photon numbers depend only weakly on the band index for $m \geq 6$, according to our semiclassical analysis several excited bands can coexist at a given value of $\ew{n}$. Wannier functions for a given potential depth are mutually orthogonal; transitions between bands can therefore only occur through fluctuations in the unbound regime, yielding much slower transition rates than in the bound regime. Trajectories may jump back to bound states (Figs.~\ref{fig_singtraj}a--c) or remain unbound (Fig.~\ref{fig_singtraj}d). The likelihood of jumping back to a bound state seems to decrease with kinetic energy. At this stage it appears that the momentum part of the wave function controls the expected intracavity photon number rather than vice versa. \par The correlated particle-field jumps reflect strong particle-field correlations and some amount of entanglement, as previously discussed in similar contexts~\cite{vukics2007microscopic,niedenzu2012quantum}. 
\par \begin{figure} \centering \includegraphics[width=\columnwidth]{singletrajpap} \caption{(Color online) Expectation values on single quantum trajectories for $\eta=6\omega_\mathrm{R}$, $\Delta_\mathrm{C}=-7.5\omega_\mathrm{R}$, $U_0=-10\omega_\mathrm{R}$, and $\kappa = \omega_\mathrm{R}$. The mean values jump between the stable values predicted by the self-consistency equation~\eqref{eq_selconswa2} (dashed lines). (a) and (b) Trajectories with several jumps from very high to very low photon numbers. (c) Trajectory with only a few jumps. (d) Trajectory with a large increase of $\ew{E_{\mathrm{kin}}}$ during evolution in a low-photon-number state. The kinetic-energy mean values of the 12th and 14th excited bands are shown as black solid lines. Although numerical values indicate that the trajectory could be in a definite band, the neat picture of correlated photon-number and momentum jumps clearly breaks down at this point. Note the different scaling of the $\ew{E_{\mathrm{kin}}}$ axis.} \label{fig_singtraj} \end{figure} \subsection{Stationary solution of the master equation via ensemble-averaged quantum trajectories} We now investigate the solution of the master equation~\eqref{eq_master} for the joint particle-field density matrix by averaging over a sufficiently large ensemble of Monte Carlo trajectories. First we check the distribution of photon numbers for a specific choice of parameters and compare it to the semiclassical results. In Fig.~\ref{fig_selconsnbar} we depict the simplest case of parameters, where only a single semiclassical solution exists. Interestingly, we see that the mean photon numbers obtained from the Monte Carlo simulations agree surprisingly well with the self-consistent solutions of Eq.~\eqref{eq_selconswa2} for the lowest Wannier state, as long as the cooling regime (large negative effective detuning) is maintained. 
Closer to resonance we see a deviation towards higher photon numbers, which indicates the appearance of motional excited states (cf.\ Fig.~\ref{fig_selconswa}). \begin{figure} \centering \includegraphics[width=\columnwidth]{fin7ncut96nbarpl} \caption{(Color online) Self-consistent solutions of Eq.~\eqref{eq_selconswa2} for the lowest Wannier states ($m=0$) compared to ensemble-averaged quantum simulation results. Solid lines mark self-consistent contours; data points are quantum simulation results. The black line separates the regions where $\Delta_{\mathrm{eff},\,0} < -\Delta_{\mathrm{eff},\,2}$ (left) and $\Delta_{\mathrm{eff},\,0} > -\Delta_{\mathrm{eff},\,2}$ (right) along the self-consistent contours of the zeroth band; cf.\ Eq.~\eqref{eq_wanndet}. In the left region quantum simulations are in accordance with the self-consistent solutions of Eq.~\eqref{eq_selconswa2} for the lowest band, while deviations arise in the right region due to population of the $m=2$ Wannier state. Parameters: $U_0=-10\omega_\mathrm{R}$ and $\kappa=\omega_\mathrm{R}$.} \label{fig_selconsnbar} \end{figure} Motivated by the single trajectories depicted in Fig.~\ref{fig_singtraj} one can deduce a microscopic interpretation of the dynamics. To each band may be assigned a kinetic temperature $k_\mathrm{B} T=2E_\mathrm{kin}=\ew{p^2}/m$, which increases with the band index~\footnote{A single particle does not have a temperature; neither does a single-particle quantum state. When we speak of temperature we think of averaging over a fictitious ensemble.}. The sign of the effective detuning \begin{equation}\label{eq_def_deltaeff} \Delta_{\mathrm{eff},\,m} \mathrel{\mathop:}= \Delta_\mathrm{C}-U_0b_m \end{equation} determines whether the corresponding band is heated ($+$) or cooled ($-$). Heating means that in a certain band the system tends towards populating higher excited bands, while cooling implies the opposite. 
Since the value of $\Delta_{\text{eff},\,m}$ is different for every band, some bands (the lower ones) are heated and others (the higher ones) are cooled. From the proportionality of the cooling/heating rates to $\Delta_{\text{eff},\,m}$, we conclude that higher bands appear in the ensemble-averaged steady-state solution if \begin{equation} \Delta_{\mathrm{eff},\,m} > -\Delta_{\mathrm{eff},\,m+2}. \label{eq_wanndet} \end{equation} Note that for symmetry reasons the dynamics induced by the Hamiltonian~\eqref{Hamiltonian} conserves the parity of the initial state. For a particle initially in the ground state, the lowest accessible excited state is the second band and consequently $m+2$ appears in Eq.~\eqref{eq_wanndet}. Hence the system effectively remains in the lowest band until $\Delta_{\mathrm{eff},\,0} > -\Delta_{\mathrm{eff},\,2}$; see Figs.~\ref{fig_selconsnbar} and~\ref{fig_rhoraywann6}. This implies that, though the system is effectively blue detuned, it does not get heated; see Fig.~\ref{fig_cpppvar}. For certain parameter values a further increase of $\Delta_\mathrm{C}$ around $\Delta_{\mathrm{eff},\,0}=0$ even yields further cooling before the second excited band is populated. \par \begin{figure} \centering \includegraphics[width=\columnwidth]{rhoraywann6r2} \caption{(Color online) Combined photon-number and momentum-state occupation probability after a time interval of $\Delta t= 310\omega_\mathrm{R}^{-1}$, starting from a state with zero momentum and one cavity photon, $k=0$ and $n=1$, for $U_0=-10\omega_\mathrm{R}$, $\kappa = \omega_\mathrm{R}$, $\eta = 6\omega_\mathrm{R}$, and different values of the detuning. The density matrix is approximated via quantum simulations with ensemble averages over 1000 trajectories for each parameter set. 
The numbers in each box give the detuning $\Delta_\mathrm{C}$ in units of $\omega_\mathrm{R}$.} \label{fig_rhoraywann6} \end{figure} \par \begin{figure} \centering \includegraphics[width=\columnwidth]{fin7ncut96pvarplred} \caption{(Color online) Quantum simulation results showing the particle's average kinetic energy $\ew{E_{\mathrm{kin}}}$ as a measure of its temperature as a function of the cavity detuning for different pump amplitudes. For $\eta=5\omega_\mathrm{R}$ an interval where the final temperature decreases with increasing detuning is clearly visible. The plot shows only a detail of the set of data points for better visibility of the effect. In order to eliminate temporal fluctuations at single time instances the plotted points are time averages of $\ew{E_{\mathrm{kin}}}$ over 100 values in the interval $\omega_\mathrm{R} t\in\left(305,310\right]$. The dashed lines are interpolations between data points and merely serve as a guide to the eye. The other parameters are $U_0=-10\omega_\mathrm{R}$ and $\kappa=\omega_\mathrm{R}$. } \label{fig_cpppvar} \end{figure} \par A more complete picture of these dynamics emerges if one includes momentum distributions, as depicted in Fig.~\ref{fig_rhoraywann6}, which represents the essence of the underlying physics. For large negative cavity detunings $\Delta_\mathrm{C}$ the photon-number distribution follows the mean values obtained for the lowest-band approximation from our semiclassical model [Eq.~\eqref{eq_selconswa2}]. With increasing $\Delta_\mathrm{C}$, higher bands $m+2$ appear once Eq.~\eqref{eq_wanndet} is satisfied and heating depletes the lowest band. Note that the population of higher momenta at relatively small photon number in Fig.~\ref{fig_rhoraywann6} (e.g., at $\Delta_\mathrm{C}=-7.5\omega_\mathrm{R}$) can be traced back to trajectories like the one shown in Fig.~\ref{fig_singtraj}d. 
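The band-selection rule of Eq.~\eqref{eq_wanndet} lends itself to a direct numerical check. The localization parameters $b_m$ below are hypothetical placeholders (decreasing with band index, as expected for increasingly delocalized states), not the Wannier-state values of our model.

```python
# Sketch of the parity-conserving band-selection rule: with hypothetical
# localization parameters b_m, find the detuning range in which the
# particle effectively remains in the lowest band.
U0 = -10.0                       # in units of omega_R, from the text
b = {0: 0.90, 2: 0.70, 4: 0.55}  # hypothetical <cos^2(k_R x)> per band

def delta_eff(m, delta_c):
    """Effective detuning Delta_eff,m = Delta_C - U0 * b_m."""
    return delta_c - U0 * b[m]

def lowest_band_escapes(delta_c):
    """Band m+2 = 2 appears once Delta_eff,0 > -Delta_eff,2."""
    return delta_eff(0, delta_c) > -delta_eff(2, delta_c)

for dc in (-12.0, -10.0, -8.0, -6.0):
    print(dc, "escapes" if lowest_band_escapes(dc) else "stays in m=0")
```

With these placeholder values the crossover sits at $\Delta_\mathrm{C} = -8\,\omega_\mathrm{R}$: more negative detunings keep the particle in the lowest band, less negative ones populate $m=2$.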
\section{Conclusions and outlook}\label{sec_conclusions} We have shown that the dynamics of a quantum particle trapped in a cavity-sustained optical lattice can reach a multitude of quasi-stationary solutions at the same operating parameters. Fluctuations of the cavity field as well as the quantum dynamics of the particle eventually trigger transitions between such states, which are observable in single Monte Carlo quantum trajectories and remain stable for extended periods. Key properties of these strongly correlated atom-field solutions can be understood from an analysis in terms of localized Wannier functions and a mean-field approximation of the cavity mode. Quantum simulations exhibit few but fast transitions between such quasi-stationary states. Averaged over a sufficiently large ensemble, the final density matrix in this regime is a mixture of several Bloch bands with corresponding photon-number distributions. While the density matrix is mostly a mixture of such quasi-stationary states, some atom-field entanglement can be present in the transition phase, where the photon-number expectation value lies in between two stationary values. \begin{acknowledgments} This work has been supported by the Austrian Science Fund FWF through the SFB F4013 FoQuS. \end{acknowledgments}
\section{Introduction} Many environments, such as dense vegetation and narrow caves, are not easily accessible by human beings. Unmanned Aerial Vehicles (UAVs) provide cost-effective alternatives to human beings for a large variety of tasks in such environments, including search, rescue, surveillance, and land inspection. In recent years, impressive progress has been made in UAVs, leading to revolutions in aerodynamic structures, mechanical transmission, actuators, computer control, etc. Despite these advances, existing UAV technology is still limited, as most systems can only operate in clear, open space \cite{Dey2011} or in fields with sparsely distributed tree obstacles \cite{Barry2017}, and most existing approaches for localization and planning fail in the presence of a large number of obstacles. Moreover, sensors used in these systems are often bulky, which hinders efficient navigation \cite{abdallah2019reliability}. It is highly desirable to build safe and efficient UAV systems that do not fail under different real-world conditions. Among many directions in technological innovation, bio-inspired technology provides a promising solution that may break the performance boundary of UAVs. Mammals, insects and other organisms often exhibit advanced capabilities and features that would be desirable for UAVs. They can rapidly pick out salient features buried in large amounts of data, and adapt themselves to the dynamics of their environments. Adopting prototypes that emulate the characteristics and functions found in living creatures may enable robots to maneuver more efficiently without the aid of approaches such as simultaneous localization and mapping (SLAM), GPS or inertial units. In recent years, bio-inspired approaches have already given rise to robots that operate in water \cite{yao2011applications}, in air \cite{duan2014pigeon} and on land \cite{zhou2012survey} and, in some cases, transition between different media. 
For UAVs in particular, the ``Microbot'' was created in 2002 by the California Institute of Technology \cite{bogue2015miniature}; it achieves independent flight by imitating the morphological properties of versatile bat wings. In 2011, AeroVironment successfully developed the ``Hummingbird'' by mimicking hummingbirds \cite{coleman2015design}. The Hummingbird is equipped to sustain flight with its own supply of energy, and its flapping wings can effectively control its attitude angles. Besides these examples, several other designs have been developed, including Robird \cite{folkertsma2017robird}, DelFly \cite{de2016delfly}, and Bat Bot \cite{ramezani2015bat}. In this research, we consider using the echolocation system of bats as a biological model for the study of highly parsimonious biosonar sensors for UAVs. Millions of years of evolution have endowed bats with remarkable skills for navigating freely in complex, unstructured environments. Relying on miniature sonar systems with a few transducers---a nose (or mouth) and two ears---bats achieve much better navigation performance than engineered systems. Specifically, an echolocating bat emits brief ultrasonic pulses through its mouth or nostrils and uses the returning echoes to navigate \cite{Griffin1958}. Based on bats' biosonar, we aim to develop a bat-inspired sonar sensing and navigation paradigm for quad-rotor UAVs. To achieve this, we adopt a data-driven approach that integrates large-scale simulations with statistical learning to gain insights and replicate bats' abilities. Results presented in this paper are based on our initial efforts in recreating the sensory world of bats via computer simulation. We develop an effective computer model to simulate sensing environments that consist of natural looking trees. The simulated environments are random and contain the full geometry of the tree foliage. 
While this model can be used as a general platform for studying the sensing mechanisms of different flying species, our ultimate goal is to build bat-inspired Quad-rotor UAVs---UAVs that can recreate bats' flying behavior (e.g., obstacle avoidance, path planning) in dense vegetation. To this end, we also introduce a foliage echo simulator that can produce simulated echoes by mimicking bats' biosonar. In Figure \ref{fig1}, we demonstrate how a bat is mimicked by a Quad-rotor while navigating across a tree. In our current model, a few simplifying modeling choices and assumptions are made. First, in order to create natural looking trees, the branching structures of trees are modeled by L-systems, whereas the detailed geometry of branches, sub-branches and leaves is created by randomizing a reference tree in a CAD object file. Additionally, the foliage echo simulator is simplified so that no shading effect is considered. We demonstrate our developed model by simulating real-world scenarios with multiple trees and computing the corresponding impulse responses along a Quad-rotor trajectory. \begin{figure}[h] \hspace{-0.5cm} \includegraphics[width=1.2\linewidth]{figures/batsasquad.pdf} \caption{A bat navigating around a tree. We mimic the highly developed bio-sonar system in bats by simulating sonar and leaf beampatterns and validate it through different experiments. A Quad-rotor using this sonar is visualized.} \label{fig1} \end{figure} The rest of this paper is organized as follows. In Section 2, we describe the method of simulating a sensing environment with multiple natural looking trees and the theory behind the foliage echo simulator. We elaborate on the experimental results and analyses in Section 3. Finally, in Section 4, a general conclusion and directions for future work are given. 
\section{Material and Methods} We develop a computational framework that consists of two simulators: one for the simulation of the sensing environment, which produces random trees with the necessary geometry (e.g., leaf locations, sizes, and orientations), and another for the simulation of foliage echoes, which produces sonar impulses by mimicking the biosonar system of bats. In this Section, we elucidate the main methodology used in these simulators. \\ We simulate the topology of each individual tree by combining Lindenmayer systems (L-systems) with modified CAD-implemented object files. An L-system is a graphical model commonly used to describe the growth pattern of plants \cite{Prusinkiewicz1996}. It defines the branching pattern of a plant through recursively applying certain production rules on a string of symbols. Each symbol in the string defines a structural component (e.g., branch, terminal). Each recursive iteration creates an additional level of growth of the string. The final string represents the branching structure of the grown tree. While L-systems are commonly used to produce branching structures \cite{shlyakhter2001reconstructing}, we found that they are not sufficient for generating natural looking trees because of over-simplified assumptions. For example, tree models based on L-systems often model branches as straight lines while ignoring the natural curvatures of the branches. Furthermore, most L-system models rely on a few parameters to control the lengths, thickness, and angles of branches. Although probability distributions can be introduced to randomize these parameters, they are often not enough to characterize all features of a particular tree species. For these reasons, we adopt the L-system to generate the first-level branching locations at the trunk. To generate the branches and sub-branches, we modify reference trees from CAD-developed object files by randomizing the branch curvatures, lengths, and sub-branch locations. 
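To illustrate the rewriting mechanism, a minimal bracketed L-system is sketched below. The production rule is the textbook example, not the rule used in our simulator; the point is only how recursive string rewriting yields a branching structure.

```python
# Minimal bracketed L-system string rewriting, as used for branching
# structures; the rule below is illustrative, not the one in our model.
def lsystem(axiom, rules, iterations):
    """Apply the production rules to every symbol, `iterations` times."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Classic example: F = segment, [ ] = push/pop a branch, +/- = turn.
rules = {"F": "F[+F]F[-F]F"}
grown = lsystem("F", rules, 2)
print(grown)
```

In our framework, each `[`...`]` pair marks a first-level branching point at the trunk, whose geometry is then replaced by a randomized CAD reference branch rather than a straight segment.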
This produces random trees that look more realistic. In Figure \ref{fig3}, we demonstrate the plot of a tree simulated by the L-system with the first-level branching structure only. Information about the branches, sub-branches, and leaves of the simulated trees is stored in a hierarchical organizational structure. This structure is associated with a 3D CAD drawing whose faces and vertices are modelled as meshes. This provides a complete 3D tree with planar faces for each branch. The planarity makes it easy to visualize the tree with short computing times based on the available data (i.e., polygons, vertices, textures, etc.), thereby offering a convenient way to assemble forest scenarios with multiple trees. The L-system itself does not carry full geometric information. Hence, in order to generate branches and sub-branches, we follow certain rules using 3D CAD tools that abide by the tree geometry (see Figure \ref{fig4}). Based on the simulator of a random tree, we are able to generate a community that consists of a random number of trees. We determine the number of trees and the locations of these trees in a 2-D region by sampling from an inhomogeneous Poisson process (IPP). Let $D \subset \mathbb{R}^2$ denote the 2-D region on which the community of trees will be built. The random locations (i.e., $(x,y)$ coordinates) of the trees are denoted by $S = \{s_i\}_{1\leq i\leq n}$. We assume that $S$ follows an IPP with intensity function $\lambda (s) : D \rightarrow \mathbb{R}^+ $, where $\lambda (s)$ is a user-specified parameter which describes how dense the trees are at every location. Small values of $\lambda (s)$ indicate sparse regions whereas high values indicate dense regions. The number of trees, $n$, follows a Poisson distribution with mean $\int_D \lambda (s)\, ds$. 
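A minimal sketch of drawing tree locations from an IPP on a square region, using the standard thinning (rejection) construction: sample a homogeneous process at the intensity maximum, then keep each candidate with probability $\lambda(s)/\lambda_{\max}$. The intensity function here is a hypothetical placeholder.

```python
import numpy as np

# Sketch of sampling tree locations on D = [0, L]^2 from an inhomogeneous
# Poisson process by thinning a homogeneous process of rate lam_max.
rng = np.random.default_rng(0)
L = 100.0

def intensity(x, y):
    """Hypothetical intensity: trees get denser toward larger x."""
    return 0.002 + 0.008 * x / L          # trees per unit area

lam_max = 0.010                           # upper bound on intensity over D
n_hom = rng.poisson(lam_max * L * L)      # homogeneous candidate count
xy = rng.uniform(0.0, L, size=(n_hom, 2)) # candidate locations
keep = rng.uniform(size=n_hom) < intensity(xy[:, 0], xy[:, 1]) / lam_max
trees = xy[keep]                          # thinned IPP sample
print(len(trees), "trees placed")
```

The retained points are then used as trunk positions at which individual random trees are instantiated.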
To simulate $S$ given $n$, we adopt a thinning approach \cite{Lewis1979}.\\ For the simulation of foliage echoes, we follow the approach of \cite{10.1371/journal.pone.0182824}. Here, we briefly summarize the method. In the current model, the leaves are simplified as circular disks. The simulated foliage echoes are stored as time-domain (discrete) signals. Let $Y = \{y_1,\ldots, y_n\}$ denote one time-domain signal to be simulated. Let $Y^* = \{y_1^*,\ldots, y_{n^{'}}^* \}$ denote the Fourier transform of $Y$ in the frequency domain. To obtain $Y$, we first compute $Y^*$ and apply an inverse fast Fourier transform. It is assumed that $y_k^*$ is nonzero only in the frequency range between 60 and 80 kHz, which corresponds to the strongest harmonic in the biosonar impulses of the \textit{Rhinolophus ferrumequinum} bat~\cite{andrews2003AC}. According to acoustic laws of sound reflection \cite{Bowman1987}, each Fourier component $y_k^*$ is the superposition of all the echoes from the reflecting facets within the main lobe of the sonar. It takes the form \begin{equation} y^*_k = \sum_{i=1}^m A_{ki}\cos(\phi_{ki}) + j \sum_{i=1}^m A_{ki}\sin(\phi_{ki}), \end{equation} where $m$ denotes the number of reflecting facets within the main lobe of the sonar, $A_{ki}$ is the amplitude at frequency $f_k$ (which is the frequency corresponding to $y^*_k$) for the $i$-th facet, and $\phi_{ki}$ is a phase delay parameter at $f_k$ for the $i$-th facet. 
The term $A_{ki}$ can be computed by \begin{equation} A_{ki} = S(az_i,el_i,f_k,r_i)L_i(\beta_i,a_i,f_k)\frac{\lambda_k}{2\pi r_i^2}, \end{equation} where $S(az_i,el_i,f_k,r_i)$ represents the sonar beampattern, with $az_i$ and $el_i$ being the azimuth and elevation angles of the line that connects the sonar with the $i$-th reflecting facet, $r_i$ is the distance between the sonar and the $i$-th reflecting facet, and $L_i(\beta_i,a_i,f_k)$ is the beampattern of the reflecting facet, with $\beta_i$ and $a_i$ being the incident angle and the radius of the $i$-th reflecting facet, respectively. The sonar beampattern has the general form \begin{multline} S(\cdot) = A_{1}\exp\{- ( a(x-x_0)^2 + \\ 2b(x-x_0)(y-y_0) + c(y-y_0)^2 )\} \label{eq:sbeam} \end{multline} where $A_{1}$ is the amplitude and $a, b, c$ are the parameters of the Gaussian function, whose values are determined from empirical data. The leaf beampattern can be approximated by a cosine function of the form \begin{equation} L_i(\cdot) = A\left(c\left(f_k,a_i \right) \cdot\cos \left(B c\left( f_k,a_i\right) \cdot\beta_i\right) \right) \label{eq:lbeam} \end{equation} where $c = 2 \pi a_i {f_k}/{v}$, with $v$ being the speed of sound, and $A$, $B$ are functions of $c$. A detailed description of (\ref{eq:sbeam}) and (\ref{eq:lbeam}) is beyond the scope of this paper and we refer the interested readers to~\cite{bowman1987book,adelman2014arXiv}. \begin{figure}[h] \hspace{-0.5cm} \includegraphics[width=1.2\linewidth]{figures/lsys.eps} \caption{L-system for branch generation on the trunk.} \label{fig3} \end{figure} \begin{figure}[h] \includegraphics[scale=0.3]{figures/branch.eps}% \qquad \hspace{-1cm} \includegraphics[scale=0.3]{figures/branchside.eps}% \caption{L-system and object file fusion for generating tree branches and sub branches}% \label{fig4}% \end{figure} The simulation of sensing environments and foliage echoes provides us with rich sensing data for various sensing tasks. 
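The echo assembly described above can be sketched as band-limited Fourier synthesis followed by an inverse FFT. The beampattern factors below are crude random placeholders standing in for Eqs.~(\ref{eq:sbeam}) and (\ref{eq:lbeam}), and the facet geometry is invented, so only the structure of the computation matches our simulator.

```python
import numpy as np

# Sketch of foliage-echo synthesis: superpose facet contributions in the
# 60-80 kHz band, then inverse-FFT to a time-domain echo.
v = 343.0                        # speed of sound (m/s)
fs = 400e3                       # sampling rate (Hz)
n = 4096                         # signal length (samples)
freqs = np.fft.rfftfreq(n, 1.0 / fs)
band = (freqs >= 60e3) & (freqs <= 80e3)

rng = np.random.default_rng(3)
m = 50                                   # facets within the main lobe
r = rng.uniform(2.0, 6.0, m)             # facet ranges (m), hypothetical
gain = rng.uniform(0.5, 1.0, m)          # placeholder beampattern products

Y = np.zeros(freqs.size, complex)
for fi in np.flatnonzero(band):
    f = freqs[fi]
    lam = v / f                          # wavelength at this frequency
    # amplitude ~ gain * lambda / (2 pi r^2); phase delay over path 2r
    A = gain * lam / (2 * np.pi * r**2)
    phi = -2 * np.pi * f * (2 * r) / v
    Y[fi] = np.sum(A * np.exp(1j * phi))
y = np.fft.irfft(Y, n)                   # time-domain echo signal
print("peak echo amplitude:", np.abs(y).max())
```

Replacing the placeholder gains with the actual sonar and leaf beampatterns evaluated per facet recovers the impulse responses analyzed in the next section.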
Within the simulated setup, we can design a Quad-rotor that mimics bat behaviors for tasks such as preying on insects. The dynamics of the Quad-rotor UAV can be taken from, e.g., \cite{bouabdallah2007design}. We employ a recently developed controller to stabilize the UAV in flight \cite{tanveer_recchiuto_sgorbissa_2019}. \section{Results and Discussion} We performed a pilot study by designing a simple sensing scene that involves multiple trees. These trees are constructed by combining an L-system with CAD-developed object files as described in the previous section. When visualizing the trees, leaves were approximated using the midpoints of the triangular meshes used to model leaves in CAD. We conduct several simulations in the MATLAB environment to demonstrate the performance of the model. The performance is evaluated on an Intel{\small\textregistered} Core\texttrademark{} i7-3632QM under Ubuntu 16.04 LTS. The simulation with multiple trees has been done on a ten-core server computer. Tree locations in the environment are determined by sampling from an IPP model. The trees are different from each other in terms of branch angles, sizes, and leaf distributions. Moreover, the initial branching pattern follows that of an L-system. In each simulation, we construct a tree (or trees) and analyze the impulse responses from simulated sonar echoes. The impulse responses are computed at different sonar locations in the environment to mimic a flying Quad-rotor. For example, we have computed impulses at regular intervals along a circular path around a tree and impulses for a trajectory directly towards the tree. 
The beam width of the sonar main lobe is chosen to be 10, 20, and 50 degrees. \begin{figure}[h] \hspace{-0.5cm} \includegraphics[width=1.2\linewidth]{figures/withoutleaf.pdf} \caption{Quad-rotor navigating with no leaves encountered in the main lobe of the sensor} \label{fig5} \end{figure} \begin{figure}[h] \hspace{-0.5cm} \includegraphics[width=1.2\linewidth]{figures/withleaf.pdf} \caption{Quad-rotor navigating with leaves encountered in the main lobe of the sensor} \label{fig6} \end{figure} \begin{figure}[h] \hspace{-0.5cm} \includegraphics[width=1\linewidth]{figures/impulse1.eps} \caption{Impulse response of the main lobe} \label{fig7} \end{figure} Figure \ref{fig5} presents a tree when the Quad-rotor navigates through the tree and observes no leaf in the sensor's main lobe; hence no output is generated. Figure \ref{fig6} demonstrates the situation when the sonar encounters leaves and branches, which results in an impulse response as shown in Figure \ref{fig7}. Figure \ref{fig8} presents two trees when the Quad-rotor navigates around the trees in a circular path. It encounters leaves and branches at four instances. The impulse responses of the four sample points are shown in Figure \ref{fig9}. In addition, Figure \ref{fig10} shows two and three trees with multiple sonar sample points. 
\begin{figure}[h] \hspace{-0.5cm} \includegraphics[width=1.2\linewidth]{figures/2tree4sonar.pdf} \caption{Quad-rotor navigating across 2 trees with leaves encountered in the main lobe of the sensor} \label{fig8} \end{figure} \begin{figure}[h] \centering \subfloat{\includegraphics[scale=0.29]{figures/impulse8.eps}} \subfloat{\includegraphics[scale=0.29]{figures/impulse9.eps}}\\ \subfloat{\includegraphics[scale=0.29]{figures/impulse10.eps}} \subfloat{\includegraphics[scale=0.29]{figures/impulse12.eps}} \caption{Impulse response of the main lobe at 4 different locations along a circular trajectory.} \label{fig9} \end{figure} \begin{figure}[h] \hspace{-0.5cm} \includegraphics[width=1.2\linewidth]{figures/10a9a.pdf} \caption{Quad-rotor navigating across different numbers of trees with leaves encountered in the main lobe of the sensor} \label{fig10} \end{figure} To analyze the computational complexity, we compute the total computation time while the Quad-rotor completes an entire trajectory. Under one setup, we increase the number of locations along the trajectory where the impulse responses are computed. Under another setup, we increase the number of trees (\texttt{T}) in the environment. The computation times for different scenarios are shown in Table~\ref{table1}. We observe that increasing the number of computation points has a direct effect on the computation time, which is quite intuitive. It is interesting to note that for a circular trajectory around one tree (\texttt{T} = 1) with radius of $6.2$, it takes only around one second on average to compute 15 impulse responses at regular intervals of 24 degrees. For more than one tree, we set the centre of the circular trajectory to be the mean position of the tree locations. We observe that increasing the number of trees varies the leaf densities and hence has a direct effect on the computation time. For \texttt{T} = 5, it takes only about 3 seconds on average to compute impulses at 15 points along the trajectory. 
Overall, our model performs fairly well in real-time. \setlength{\tabcolsep}{2.5pt} \begin{table}[h] { \begin{tabular}{c l ccccc } \hline \multicolumn{1}{c}{\multirow{1}{*}{Computation points}} & \multicolumn{5}{c}{Impulse computation time (s)} \\ {} & \multicolumn{1}{c}{$\texttt{T}=1$} & \multicolumn{1}{c}{$\texttt{T}=2$} & \multicolumn{1}{c}{$\texttt{T}=3$} & \multicolumn{1}{c}{$\texttt{T}=4$} & \multicolumn{1}{c}{$\texttt{T}=5$} & \\ \hline 1 & 0.73 & 0.77 & 0.86 & 0.93 & 1.07 \\ 5 & 0.88 & 1.00 & 1.19 & 1.33 & 1.52 \\ 10 & 0.97 & 1.21 & 1.42 & 1.72 & 2.11\\ 15 & 1.03 & 1.68 & 1.92 & 2.54 & 3.03 \\ \hline \end{tabular}} \caption{Total impulse computation time for an entire circular trajectory when the impulses are computed at $1,5,10$ and $15$ locations along the trajectory. \texttt{T} = \{1,\ldots,5\} represents different numbers of trees in the environment. The center of the circular trajectory is the mean location of the trees in the environment.} \label{table1} \end{table} \section{Conclusion} In this research we explore how to recreate bat behaviors on Quad-rotor UAVs in dynamic environments, thereby transforming nature into bio-technology. In particular, we propose a computational approach to simulate sensing environments and foliage echoes under different sensing scenarios. In this preliminary study, we mainly focused on model development and experimental validation in a simulated/known environmental setting. The impulse responses can be further analyzed using state-of-the-art artificial intelligence and machine learning methods to predict parameters such as leaf density and orientation. This is a promising direction since it enables navigation in unknown environments. Currently, the trajectories followed by the sonar are predefined and we only analyze the impulses generated at different time instances along the trajectory. 
An immediate next step is to extend this to an active navigation scenario in which an optimal path can be calculated. Another interesting future direction is to extend the framework towards task and motion planning in large knowledge-intensive domains, as recently done in~\cite{lo2018AAMAS,thomas2019ISRR}. We also plan to model the shading effect between leaves, for example by using an adjusted attenuation function. In order to deal with real-world uncertainties, we also plan to integrate our model with the inverse perspective mapping (IPM) approach. This can be done by mounting a camera on the UAV in order to obtain a bird's eye view \cite{10.1007/978-3-030-31993-9_21}.
\section{Introduction} Understanding the evolution of the early generations of stars and galaxies in the universe, and the accompanying reionization of the intergalactic medium, are primary objectives of contemporary astrophysics. In particular, whether extreme ultra-violet radiation from those early stars was the predominant driver of reionization is a crucial question, since if not then some other substantial source of ionizing radiation must be found \citep[e.g.][]{Robertson2010}. Determination of the electron scattering optical depth from microwave background observations indicates a peak era of reionization around $z\sim7$--10 \citep{Planck2016}. Considerable progress has been made in recent years in unveiling the galaxy populations at these redshifts, particularly thanks to the various deep field campaigns undertaken with the {\em Hubble Space Telescope} \citep[e.g.][]{Koekemoer2013}, most recently the Frontier Fields initiative employing gravitational lensing to probe to fainter levels than would otherwise be possible \citep[e.g.][]{Ishigaki2015}. This has suggested that a major proportion of star formation is occurring in very faint galaxies \citep[e.g.][]{Atek2015}, for which direct constraints on their number and properties are very limited. \begin{figure*}[t!] \begin{minipage}{70mm} \includegraphics[clip=true,angle=90,width=66mm]{120923A.eps} \end{minipage} \begin{minipage}{67mm} \vspace{5mm} \includegraphics[clip=true,angle=270,width=64mm]{120923A_sim.ps} \end{minipage} \caption{\footnotesize Left - the observed spectrum of GRB\,120923A \citep{Tanvir2017}, obtained from a 2\,hr VLT/X-shooter observation. This afterglow was particularly faint and challenging for current technology, and only allowed the redshift, $z\sim7.8$, to be determined from the Ly-$\alpha$ break.
Right - a simulated ELT/HARMONI spectrum of the same afterglow, illustrating the huge improvement in signal-to-noise expected, and consequent detection of metal lines and precise determination of the HI column from the fit to the Ly-$\alpha$ damping wing. } \label{120923A} \end{figure*} Several {\em Swift}-discovered long-duration gamma-ray bursts (long-GRBs) have been found approaching and within the era of reionization \citep[e.g.][]{Tanvir2009,Salvaterra2009,Cucchiara2011,Tanvir2017}. As outlined in this brief contribution, these high redshift long-GRBs have already produced unique insights into high-$z$ star formation, and have paved the way for the key high-$z$ science theme of the {\em THESEUS} mission. \section{The astronomical landscape of the late 2020s} Gamma-ray bursts are quintessential multi-wavelength, and indeed multi-messenger, phenomena, and so the scientific return obtained from GRB missions is enhanced greatly by the facilities available for complementary observations and follow-up. By the time {\em THESEUS} is operational, if selected for an M5 launch, the landscape of astrophysical hardware is likely to be significantly different from today. Supplementing the current generations of 8\,m class optical telescopes, ground-based radio, submm, and gravitational wave detectors, together with the X-ray, gamma-ray and optical observatories in space, we expect a new generation of 30\,m class ground-based optical/IR telescopes, the Square Kilometre Array, potentially third-generation gravitational wave detectors, the Large Synoptic Survey Telescope, and {\em ATHENA} in space. The {\em THESEUS} mission will provide the essential link to exploit the synergies between these facilities for transient science generally, and in the exploration of the early universe in particular. 
\section{The role of GRBs} Long-duration GRBs are found over a large span of cosmic history; they are born in massive star core-collapse, and lie at the star-forming hearts of galaxies. Thus they provide a range of unique probes of high-redshift galaxy evolution, which will be exploited by the {\em THESEUS} mission together with follow-up observations. \subsection{Evolution of the global star formation rate density} Since long-GRBs are core-collapse phenomena, they trace massive star formation \citep[e.g.][]{Blain2000}. Thus the observed GRB redshift distribution, providing the sample is redshift complete, can in principle be inverted to estimate the global star formation evolution without regard to whether the host galaxies are detected or not. Early attempts already showed that the long-GRB rate was surprisingly high at $z>4$, given the fairly rapid decrease in star formation being found by galaxy surveys \citep[e.g.][]{Kistler2009}. In practice, it is known that long-GRBs are preferentially created in lower metallicity environments, and suitably accounting for this effect, combined with a realisation that a greater proportion of high-z star formation is likely happening in very faint galaxies, has brought estimates into better agreement \citep{Perley2016}. However, this remains a critical question for reionization. \subsection{Locating star forming galaxies} By virtue of localising GRB afterglows and determining their redshifts, we can sample faint galaxy populations independently of their luminosities \citep[e.g.][]{McGuire2016}. This is in contrast to conventional galaxy surveys, which of course depend on detecting the galaxies in some band(s), and in the large majority of cases rely on photometric redshifts. This is a particular issue at $z\gtrsim6$, where the galaxy luminosity function becomes increasingly steep and faint-end dominated, and corrections for these missed galaxies are difficult and uncertain \citep[e.g.][]{Bouwens2017}.
By comparing the number of GRBs in hosts above some given detection threshold to the number below, one can directly estimate the correction factor for the proportion of star formation missed in conventional galaxy surveys \citep{Tanvir2012,Trenti2012,Basa2012}. \begin{figure*}[t!] \centerline{\includegraphics[clip=true,angle=270,width=110mm]{NH_v_z.ps}} \caption{\footnotesize Neutral hydrogen column density measured from fits to the Lyman-$\alpha$ absorption lines in GRB afterglow spectra. On the right-hand axis is shown the corresponding optical depth to H-ionizing extreme ultra-violet radiation. In nearly all cases the sight-lines are essentially opaque, implying a very low escape fraction. The running median and interquartile range of 20 events (red line and pink band) show little evidence for evolution between $2\lesssim z\lesssim5$. (Figure based on data presented in Tanvir et al. submitted.)} \label{nhz} \end{figure*} \subsection{Cosmic chemical evolution} GRB afterglows provide bright back-lights against which numerous absorption features created by intervening gas clouds in the host can often be seen. These provide not only gross metallicities, but detailed abundance patterns from which the enrichment by prior generations of stars can be inferred. Once again, this is independent of the host luminosity, unlike crude emission line diagnostics, and can be applied even at high redshifts \citep{Vreeswijk2004,Thoene2013,Sparre2014,Hartoog2015}. The complementarity of {\em THESEUS} GRB discoveries and optical/nIR follow-up with 30\,m class telescopes promises a major step forward, which is illustrated by the comparison of the VLT spectrum of GRB\,120923A and the simulated ELT spectrum of the same afterglow in Figure~\ref{120923A}.
\subsection{Ionizing radiation escape fraction} Direct observation of escaping Lyman continuum radiation from distant galaxies is challenging at $z\sim2$--4 \citep[e.g.][]{Japelj2017}, and essentially impossible at higher redshift due to strong IGM absorption. Afterglow spectra of GRBs frequently exhibit strong absorption due to hydrogen Lyman-$\alpha$, which allows calculation of the neutral hydrogen column density, and hence the opacity to ionizing radiation with $\lambda<912$\,\AA\ \citep{Chen2007,Fynbo2009}. As can be seen from Fig.~\ref{nhz}, over a wide range of redshift the large majority of GRB sight-lines are essentially opaque, and thus the bulk of ionizing radiation from the progenitor stars would not escape the host galaxies. Assuming these sight-lines are representative of the sight-lines to massive stars more generally, one can thus infer an average escape fraction of ionizing radiation, which, in a new study by Tanvir et al. (MNRAS submitted), is found to have a 98\% upper limit of 1.5\%. This is potentially a problem for the hypothesis that reionization was brought about by UV from massive stars, since that seems to require escape fractions of at least 10--20\%. Although this GRB sample is largely in the range $2\lesssim z \lesssim5$, there is little evidence of variation with redshift. From the {\em THESEUS} mission we expect to greatly increase the sample of $z>5$ GRBs with precise $N_{\rm HI}$ measures, thus providing a strong test of whether sufficient stellar ionizing radiation can escape from the locations of massive stars to drive reionization. \subsection{Topology of reionization} Just as the neutral gas in the host interstellar medium on the lines-of-sight to GRBs can be inferred from the Lyman-$\alpha$ absorption line, so any neutral gas in the intergalactic medium (IGM) proximate to the host also contributes to the absorption.
The shape of the damping wing in each case differs slightly, since the IGM absorption is an integrated effect of gas over a path length through the expanding universe. Thus in principle the two columns can be decomposed and hence the neutral fraction of the IGM at that location estimated. In practice, the method is difficult for low signal-to-noise spectra, and may be complicated by `proximity effects' such as inflows or outflows and local ionized regions around the host \citep{McQuinn2008}, although these effects are likely much less significant than in the case of bright quasars. This approach has been attempted in a couple of cases to date \citep{Totani2006,Hartoog2015}, with results consistent with a low neutral fraction at $z\sim6$. By obtaining similar results for a larger sample of sight-lines in the {\em THESEUS} era we will be able to investigate not only the overall timeline, but also the variation from place to place (and hence the topology) of reionization. \subsection{Population III stars} Inefficient cooling of metal-free gas tends to produce stars with a top-heavy initial mass function \citep[e.g.][]{Stacy2016}, and if some of these very massive pop III stars end their lives with high specific angular momentum they may produce energetic collapsars. If the jets they produce are sufficiently long-lived then they may produce a distinct class of pop III GRBs \citep{Meszaros2010,Yoon2015}. Even if not detectable directly, the chemical signatures of pop III enrichment may be witnessed in spectroscopy of high-redshift GRBs \citep{Ma2015}. \section{Conclusions} The study of distant galaxy populations, and their role in the reionization of the universe, has been the subject of major efforts, and is a primary science driver for {\em JWST}. Despite this, some key questions will continue to be very hard to answer, in particular the star formation occurring in faint galaxies, the build-up of heavy elements and the escape fraction of ionizing radiation.
Long-duration gamma-ray bursts provide unique routes to investigate these issues, which will be fully exploited using the large samples of high-$z$ GRBs found by {\em THESEUS} together with follow-up by next generation facilities. \bibliographystyle{aa}
\section{Introduction} With the significant sensitivity improvement of forthcoming next generation gravitational wave detectors like advanced LIGO \cite{Harry:2010zz}, advanced Virgo \cite{Weinstein:2011kh} and the LCGT \cite{Kuroda:2010zzb} there is a realistic chance that gravitational waves may be directly observed. In addition to transient events such as neutron star and/or black hole mergers or supernovae, which require that an event happens sufficiently near to us during the observation period \cite{Abadie:2012rq}, it is important to also consider continuous sources. Millisecond pulsars are a particularly promising class since they are very old and stable systems and therefore could be reliable sources of gravitational waves. Their fast rotation strongly favors gravitational wave emission \cite{Aasi:2013sia}, and the fact that their timing behavior is known to high precision \cite{Manchester:2004bp} greatly simplifies the analysis required to find a signal in the detector data. Emission due to deformation of these objects (``mountains''), which is usually parametrized by an ellipticity, is the standard paradigm for continuous gravitational wave searches \cite{Collaboration:2009rfa}, but global oscillation modes of a star can also emit copious gravitational waves that could be detectable if the oscillation reaches sufficiently high amplitudes. R-modes are the most interesting class \cite{Andersson:1997xt,Andersson:2000mf,Lindblom:1998wf,Owen:1998xg}, because they are generically unstable in millisecond pulsars and therefore will be present unless the dissipative damping is strong enough. If r-modes arise in a spinning neutron star, they affect the spindown (since they cause the star to lose angular momentum via gravitational radiation) and the cooling (since the damping forces on the r-mode generate heat).
To understand the interplay of these effects we have developed \cite{Alford:2012yn,Alford:2013pma} an effective description of the spindown evolution where complicated details about the star's interior are absorbed into a few effective parameters. The resulting spindown can be rather different from that predicted by simpler approaches, and includes strict bounds on the uncertainties in the final results. In this paper we will use this method to analyze the possible r-mode gravitational radiation of old neutron stars. Firstly, however, we provide some background and motivation. R-modes can occur in young or old pulsars. In the case of young sources \cite{Aasi:2013sia,Abadie:2010hv,Abbott:2008fx,Wette:2008hg} we have analyzed their r-mode evolution \cite{Alford:2012yn} and found that r-modes can provide a \textit{quantitative} explanation for their observed low spin rates. Moreover, the r-mode gravitational emission is expected to be strong, because a large r-mode amplitude would be required to spin down the known young pulsars to their current low spin frequencies within their lifetimes which are as short as a thousand years. These known pulsars are no longer in their r-mode spindown epoch, but there may be unobserved young neutron stars, e.g. associated with known supernova remnants such as SN 1987A, that are currently undergoing r-mode spindown, and several of them would be in the sensitivity range of advanced LIGO \cite{Alford:2012yn}, allowing this scenario to be falsified by future measurements. In this paper we focus on old neutron stars which have been spun up by accretion, and we perform an analysis of their expected r-mode gravitational wave radiation. In \cite{Alford:2013pma} novel r-mode instability regions in spindown timing parameter space have been derived that allow us to decide if r-modes can be present in old millisecond radio pulsars. As discussed there, there are two scenarios to explain the observed timing data. 
It might be that the ordinary nuclear matter model of neutron stars is incomplete, and there is additional damping (e.g.~from exotic forms of matter or currently overlooked physical processes) that stops r-modes from growing in these stars. In this case there will be no r-mode gravitational radiation from old neutron stars. The other possibility is the conventional scenario where only standard damping mechanisms are present in neutron stars. In this scenario most old millisecond pulsars \textit{will be} undergoing r-mode oscillations, since for expected r-mode saturation amplitudes the dissipative heating ensures that fast spinning sources can neither cool nor spin out of the parameter region where r-modes are unstable \cite{Alford:2013pma}. Yet, some slower spinning sources can escape the instability region and we will determine the limiting frequency. Therefore, there will be gravitational radiation from most old neutron stars in this scenario, and the purpose of this paper is to find out whether it could be detected on Earth. The detectability of known continuous sources is generally described by the ``spindown limit'' which is, for a specific source with known timing data, the maximum gravitational wave strain that can be emitted by that source. Despite the quite restrictive limits set by the spindown data, the large spin frequencies of millisecond pulsars could nevertheless lead to a detectable signal. Present gravitational wave detectors---like the original LIGO interferometer---did not probe the spindown limit for millisecond pulsars. However, next generation detectors including the advanced LIGO detector will be able to beat the spindown limit for various sources. Therefore, it is interesting to assess the chance to detect gravitational emission from oscillating millisecond pulsars.
We will introduce here the \textit{universal r-mode spindown limit} on the gravitational wave strain, which is more restrictive since it takes into account our understanding of the r-mode spindown and the complete information we have about these systems. Whereas deformations of a given source depend on its evolutionary history and could therefore vary significantly from one source to another, for proposed saturation mechanisms the r-mode saturation amplitude proves to be rather insensitive to details of a particular source, like its mass or radius \cite{Bondarescu:2013xwa,Alford:2011pi,Haskell:2013hja}. The expected gravitational wave strain of a given source can then be strongly constrained by the timing data of the \textit{entire set} of millisecond pulsars. Using our semi-analytic approach to pulsar evolution, and assuming that the same saturation and cooling mechanism (with given power-law dependence on temperature) operates in all the stars, we can then obtain the universal limit given in Eq.~(\ref{eq:universal-spindown-limit}). We will see that this is considerably below the standard spindown limits, indicating that it will be harder than previously expected to see r-mode gravitational waves from these sources. \section{R-mode spindown of millisecond pulsars\label{sec:R-mode-spindown-of}} As described in \cite{Alford:2012yn,Alford:2013pma} the r-mode evolution \cite{Owen:1998xg} can be discussed within an effective description, which relies on the fact that a compact star appears effectively as a point source and that the relevant material properties integrated over the star have simple power law dependencies on the macroscopic observables that change during the evolution. 
The relevant macroscopic quantities are the power emitted as gravitational waves $P_{G}$, the dissipated power $P_{D}$ that heats the star and the thermal luminosity $L$ that cools the star \begin{equation} P_{G}=\hat{G}\Omega^{8}\alpha^{2},\; P_{D}=\hat{D}T^{\delta}\Omega^{\psi}\alpha^{\phi},\; L=\hat{L}T^{\theta}\,,\label{eq:powers} \end{equation} in terms of the rotational angular velocity $\Omega=2\pi f$, the core temperature $T$ of the star and the dimensionless r-mode amplitude $\alpha$ defined in \cite{Lindblom:1998wf,Alford:2012yn}. The explicit form of the prefactors $\hat{G}$, $\hat{D}$ and $\hat{L}$ for different damping and cooling mechanisms \cite{Alford:2013pma} is given in tab. \ref{tab:parameterization}. \begin{table} \begin{tabular}{|c||c|} \hline parameter of the ... & integral expression\tabularnewline \hline \hline GW luminosity & $\hat{G}\equiv\frac{2^{17}\pi}{3^{8}5^{2}}\tilde{J}^{2}GM^{2}R^{6}$\tabularnewline \hline \hline Shear visc. dissipation & $\hat{D}=5\tilde{S}\Lambda_{{\rm QCD}}^{3+\sigma}R^{3}$\tabularnewline \hline Bulk visc. dissipation & $\hat{D}=\frac{2^{3}}{3^{3}7}\frac{\Lambda_{{\rm QCD}}^{9-\delta}\tilde{V}R^{7}}{\Lambda_{{\rm EW}}^{4}}$\tabularnewline \hline Ekman layer dissipation & $\hat{D}=5\left(\frac{2}{3}\right)^{\frac{9}{2}}\frac{3401+2176\sqrt{2}}{11!!}\sqrt{\hat{\eta}_{c}\rho_{c}}R_{c}^{4}$\tabularnewline \hline \hline Neutrino luminosity & $\hat{L}=\frac{4\pi R^{3}\Lambda_{{\rm QCD}}^{9-\theta}\tilde{L}}{\Lambda_{{\rm EW}}^{4}}$\tabularnewline \hline Photon luminosity & $\hat{L}=\frac{\pi^{3}}{15}R^{2}\hat{X}^{4}$\tabularnewline \hline \end{tabular}\caption{\label{tab:parameterization}Parameters in the general parameterization eq.~(\ref{eq:powers}) for the energy loss rates. 
The arising quantities are the mass $M$ and the radius $R$ of the star, the gravitational constant $G$, generic normalization scales $\Lambda_{{\rm QCD}}$ and $\Lambda_{{\rm EW}}$ and in case of Ekman damping the relevant quantities at the crust/core interface, see \cite{Lindblom:2000gu}. The dimensionless constants $\tilde{J}$, $\tilde{V},$ $\tilde{S}$ and $\tilde{L}$ contain the complete information about the interior of the star. Their definition and values for realistic neutron stars are given in \cite{Alford:2012yn}.} \end{table} R-modes are unstable and their fast growth has to be stopped by a non-linear dissipative saturation mechanism. Even though there are several interesting proposals \cite{Bondarescu:2013xwa,Haskell:2013hja,Alford:2011pi,Lin:2004wx,Lindblom:2000az,Wu:2000qy,Rezzolla:1999he} it is not yet settled which mechanism will dominate and saturate r-modes. For millisecond pulsars we expect moderate saturation amplitudes, in which case the pulsar spindown is determined by the equation \cite{Owen:1998xg} \begin{equation} \frac{d\Omega}{dt}=-\frac{3\hat{G}\alpha_{{\rm sat}}^{2}\left(T,\Omega\right)}{I}\Omega^{7}-\cdots\,,\label{eq:spindown} \end{equation} in terms of the moment of inertia of the star $I$ and the r-mode saturation amplitude $\alpha_{{\rm sat}}$. The observed total spindown rate will in general be larger since in addition to r-modes there are other spindown mechanisms given by the ellipsis. Nevertheless, by assuming that the observed spindown rate is entirely due to r-modes, observed pulsar timing data allows one to give upper bounds on the r-mode saturation amplitude. These bounds are shown for the observed radio pulsars included in the ATNF database \cite{Manchester:2004bp} in fig.~\ref{fig:alpha-bounds} and they require very low saturation amplitudes, $10^{-7}\lesssim\alpha_{{\rm sat}}\lesssim10^{-5}$ \cite{Alford:2013pma,Bondarescu:2013xwa}. 
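As an illustrative cross-check (not part of the original analysis), the bounds of fig.~\ref{fig:alpha-bounds} can be reproduced numerically by inverting eq.~(\ref{eq:spindown}) under the assumption that the entire observed spindown is due to r-modes. The stellar parameters in the sketch below ($M=1.4\,M_{\odot}$, $R=11.5$\,km, $I=10^{38}\,{\rm kg\,m^{2}}$, $\tilde{J}=1.635\times10^{-2}$) are fiducial values assumed here, and a factor $1/c^{7}$ is inserted by hand to restore SI units in $\hat{G}$:

```python
import math

# Fiducial neutron star parameters (assumed here, not taken from the text):
# M = 1.4 M_sun, R = 11.5 km, I = 1e38 kg m^2, J~ = 1.635e-2.
G, c = 6.674e-11, 2.998e8              # SI units
M, R = 1.4 * 1.989e30, 1.15e4
I, Jt = 1.0e38, 1.635e-2

# G^ from Table 1; the table is written in c = 1 units, so 1/c^7 restores SI
G_hat = (2**17 * math.pi / (3**8 * 5**2)) * Jt**2 * G * M**2 * R**6 / c**7

def alpha_sat_bound(f, fdot):
    """Upper bound on alpha_sat obtained by attributing the entire
    observed spindown rate to the r-mode torque of eq. (2)."""
    Omega, Omega_dot = 2 * math.pi * f, 2 * math.pi * abs(fdot)
    return math.sqrt(I * Omega_dot / (3 * G_hat * Omega**7))

# PSR J0034-0534: f = 533 Hz, fdot ~ -1.4e-15 s^-2 (timing values from Fig. 1)
print(f"{alpha_sat_bound(533.0, -1.4e-15):.2e}")   # ~1.3e-07
```

With these fiducial values the strongest bound comes out at $\approx1.3\times10^{-7}$ for J0034-0534, consistent with the $\approx1.2\times10^{-7}$ quoted in the figure caption.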
Similar low values are obtained from pulsars in low-mass X-ray binaries \cite{Alford:2013pma,Mahmoodifar:2013quw}. Moreover, one can see in fig.~\ref{fig:alpha-bounds} that faster spinning sources generally set more stringent bounds on the saturation amplitude of r-modes in the considered pulsar. \begin{figure} \includegraphics{alpha-bounds} \caption{\label{fig:alpha-bounds} Upper bounds on the r-mode saturation amplitude arising from the observed spindown of the pulsars in the ATNF database \cite{Manchester:2004bp}. The strongest bound $\alpha_{{\rm sat}}\lesssim1.2\times10^{-7}$ is obtained for the $533$ Hz pulsar J0034-0534, which has a spindown rate $\dot{f}\approx-1.4\cdot10^{-15}\,{\rm s}^{-2}$ (large triangle at lower right, red online). } \end{figure} The r-mode saturation amplitude can in general depend both on the temperature and the frequency of the star. We use a general parametrization of the saturation amplitude with a power-law form \begin{equation} \alpha_{{\rm sat}}\left(T,\Omega\right)=\hat{\alpha}_{{\rm sat}}T^{\beta}\Omega^{\gamma}\,,\label{eq:saturation-amplitude} \end{equation} as realized for the proposed saturation mechanisms \cite{Bondarescu:2013xwa,Haskell:2013hja,Alford:2011pi,Wu:2000qy}. Here the exponents are fixed (rational) numbers determined by the saturation mechanism, whereas the reduced amplitude $\hat{\alpha}_{{\rm sat}}$ is less well known and can also depend on parameters of the particular source, like the mass or the radius. Using this general approach it was found in \cite{Alford:2012yn} that the r-mode heating is significant even for small amplitude modes and the thermal evolution is systematically faster than the spindown.
Therefore the star reaches a thermal steady state where the dissipative r-mode heating balances the cooling due to photons and neutrinos and the temperature is given by \begin{equation} T_{{\rm hc}}=\left(\frac{\hat{G}\hat{\alpha}_{{\rm sat}}^{2}\Omega^{8+2\gamma}}{\hat{L}}\right)^{{\textstyle \frac{1}{\theta-2\beta}}}\,.\label{eq:heating-cooling} \end{equation} This leads to a spindown equation along this steady state curve \cite{Alford:2012yn,Alford:2013pma} \begin{equation} \frac{d\Omega}{dt}=-\frac{3\hat{G}^{\theta/\left(\theta-2\beta\right)}\hat{\alpha}_{{\rm sat}}^{2\theta/\left(\theta-2\beta\right)}}{I\hat{L}^{2\beta/\left(\theta-2\beta\right)}}\Omega^{n_{rm}}\,,\label{eq:effective-spindown} \end{equation} with an effective braking index \begin{equation} n_{{\rm rm}}=7\left(\!\frac{1\!+\!2\gamma/7\!+\!2\beta/\left(7\theta\right)}{1\!-\!2\beta/\theta}\!\right)\label{eq:effective-braking-index} \end{equation} that depends on the saturation mechanism and can be rather different from the generic r-mode spindown exponent $7$. The spindown equation has the solution \begin{align} \Omega\!\left(t\right) & =\left((\Omega_{i})^{z}+\frac{3z}{2}\left(\frac{\hat{G}^{\theta}\hat{\alpha}_{{\rm sat}}^{2\theta}}{\hat{L}^{2\beta}}\right)^{{\textstyle \frac{1}{\theta-2\beta}}}\frac{t-t_{i}}{I}\!\right)^{1/z}\:,\label{eq:spindown-solution}\\ z & \equiv\frac{2(3+\gamma)\theta+4\beta}{\theta-2\beta}\:.\nonumber \end{align} This solution has two limits. At early times the first term in the parenthesis of eq.~(\ref{eq:spindown-solution}) dominates, so that the star hardly spins down, i.e. $\Omega\approx\Omega_{i}$, and at late times the second term dominates, so that the spindown becomes independent of the initial angular velocity $\Omega_{i}$. The crossover point between these two regimes is determined by the reduced saturation amplitude $\hat{\alpha}_{{\rm sat}}$. For young pulsars the $\Omega_{i}$-independent late time behavior of the spindown law is relevant. 
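The exponents in eqs.~(\ref{eq:effective-braking-index}) and (\ref{eq:spindown-solution}) can be checked with a few lines of exact rational arithmetic. The sample exponents $(\beta,\gamma,\theta)$ below are hypothetical, chosen only to illustrate the consistency relation $z=n_{{\rm rm}}-1$, which must hold since $d\Omega/dt\propto-\Omega^{n_{{\rm rm}}}$ integrates to $\Omega^{-(n_{{\rm rm}}-1)}$ being linear in $t$:

```python
from fractions import Fraction as F

def braking_index(beta, gamma, theta):
    """Effective r-mode braking index n_rm of eq. (6)."""
    num = 1 + F(2, 7) * gamma + F(2, 7) * beta / theta
    den = 1 - 2 * beta / theta
    return 7 * num / den

def z_exponent(beta, gamma, theta):
    """Exponent z of the spindown solution, eq. (7)."""
    return (2 * (3 + gamma) * theta + 4 * beta) / (theta - 2 * beta)

# A temperature- and frequency-independent saturation amplitude
# (beta = gamma = 0) recovers the generic r-mode exponent 7:
print(braking_index(0, 0, F(4)))                             # 7
# Consistency check z = n_rm - 1, here for hypothetical exponents:
b, g, th = F(2), F(1, 2), F(8)
print(braking_index(b, g, th) - 1 == z_exponent(b, g, th))   # True
```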
For some old millisecond pulsars the spindown rate is so low that the ``early time'' regime is realized and they hardly spin down even over their billion year age. The r-mode evolution eq.~(\ref{eq:spindown-solution}) takes place unless the dissipation, which depends strongly on temperature and frequency, is strong enough to completely damp r-modes. R-modes are only unstable at sufficiently high frequencies: a typical instability region for a neutron star with standard damping mechanisms in a $T-\Omega$-diagram \cite{Lindblom:1998wf}{} is shown in fig.~\ref{fig:schematic evolution}. By ``standard damping'' we mean established mechanisms% \footnote{A potential Ekman layer at the crust-core boundary \cite{Lindblom:2000gu} does not qualitatively change this picture \cite{Alford:2013pma}.% }, namely shear viscosity due to leptonic and hadronic scattering \cite{Shternin:2008es} and bulk viscosity due to modified Urca reactions \cite{Sawyer:1989dp}. Ref.~\cite{Alford:2010fd} gives a general semi-analytic expression for the minimum frequency $\Omega_{{\rm min}}$ down to which r-modes can be unstable, and shows that this limit is extremely insensitive to unknown details of the source and the microphysics (see also \cite{Lindblom:1998wf}). Fig.~\ref{fig:schematic evolution} also shows two qualitatively different evolution trajectories. A recycled millisecond pulsar entering the instability region at point A in fig.~\ref{fig:schematic evolution} is slowly spun up and kept warm by accretion in a binary system, following the thick vertical line. The r-mode evolution, eq.~(\ref{eq:spindown-solution}), starts when accretion stops. This may occur when the star is spinning slowly (B) or quickly (C). In either case, even though the star is in the region of $\Omega$-$T$ space where according to the standard damping mechanism r-modes are unstable, the star then cools faster than it spins down (following the thin horizontal lines). 
If accretion brought the star to a high spin frequency (C) then the star cools until it reaches the steady state line (dashed line, given by eq.~(\ref{eq:heating-cooling})) at point (D); it then slowly spins down, following the steady-state line, and would only reach the boundary of the instability (E) after time scales that are longer than the age of known sources. Therefore we expect such a source not too far below point (D). Fig.~\ref{fig:schematic evolution} shows a steady-state line for low r-mode saturation amplitude, in which case the line is high enough that it exits the instability region at a frequency $\Omega_{f}$ that is significantly above the minimum frequency $\Omega_{{\rm min}}$. This means that if accretion leaves the star with a low spin frequency (B), below $\Omega_{f}$, then the star cools in less than a million years \cite{Yakovlev:2004iq} and reaches the boundary of the instability region (F). The value of $\Omega_{f}$ is \begin{equation} \Omega_{f}=\left(\frac{\hat{D}^{\theta-2\beta}\hat{\alpha}_{{\rm sat}}^{2\delta}}{\hat{G}^{\theta-\delta-2\beta}\hat{L}^{\delta}}\right)^{\frac{1}{6\theta-8\delta-12\beta-2\delta\gamma}}\ .\label{eq:final-frequency} \end{equation} It was shown in \cite{Alford:2012yn,Alford:2013pma} that this expression is extremely insensitive to the microphysical details. Whereas for the young sources discussed in \cite{Alford:2012yn} neutrino emission is the relevant cooling mechanism to determine the final spindown frequency, for the low saturation amplitudes relevant for millisecond pulsars photon cooling from the surface and damping due to shear viscosity dominate \cite{Alford:2013pma} in eq.~(\ref{eq:final-frequency}).
For a given upper bound on $\hat{\alpha}_{{\rm sat}}$, all sources below this \textit{universal r-mode frequency bound} $\Omega_{f}$ cannot be undergoing r-mode spindown since they either have been spun out of the instability region or cooled out of it in less than a million years, which is considerably shorter than their billion year age. In contrast, all sources spinning faster than $\Omega_{f}$ must be undergoing r-mode spindown (i.e.~they are on the steady-state curve in fig.~\ref{fig:schematic evolution}). The fastest spinning sources ($f\gtrsim600\,{\rm Hz}$) could only have left the instability region if the saturation amplitude were as low as $\alpha_{{\rm sat}}\lesssim10^{-10}$ \cite{Alford:2013pma}, which is orders of magnitude below what proposed saturation mechanisms can provide \cite{Bondarescu:2013xwa,Haskell:2013hja,Alford:2011pi,Wu:2000qy}. We conclude that fast spinning sources should be emitting gravitational waves via r-mode spindown and we will determine the required spin frequencies below. \begin{figure} \includegraphics{schematic-evolution} \caption{\label{fig:schematic evolution} Schematic evolution of recycled radio pulsars which have been spun up by accretion. Whereas the cooling (horizontal segments) takes less than a million years, the slow spindown along the steady state curve takes longer than a billion years. The occurrence of a long r-mode spindown epoch is determined by whether the frequency when accretion ends is below (B) or above (C) the universal r-mode frequency bound $\Omega_{f}$. See the text for details.} \end{figure} \section{Gravitational wave strain} R-modes emit gravitational waves due to their time-varying current quadrupole moment. The gravitational wave frequency $\nu$ emitted by the dominant fundamental ($m=2$) r-mode is related to the rotational angular velocity via $\nu=2/\left(3\pi\right)\Omega$ \cite{Owen:1998xg}.
The gravitational wave signal of a given source is characterized by the intrinsic gravitational wave strain, which describes the expected signal in a terrestrial detector and can be directly compared to the detector noise. For r-modes it takes the form \cite{Owen:2010ng} \begin{equation} h_{0}=\sqrt{\frac{2^{15}\pi^{7}}{5}}\frac{\tilde{J}GMR^{3}\nu^{3}\alpha_{{\rm sat}}}{D}\:,\label{eq:intrinsic-strain-amplitude} \end{equation} where $D$ is the distance to the source. In a recent study of the gravitational wave emission of young sources \cite{Alford:2012yn} it was found that for the large amplitudes required to explain the low spin frequencies of young pulsars, the late time behavior of the spindown evolution eq.~(\ref{eq:spindown-solution}) is relevant; in this case the strain eq.~(\ref{eq:intrinsic-strain-amplitude}) depends only on the age and the distance of the source \cite{Wette:2008hg}, and is independent of the saturation amplitude. Because of the restrictive bounds on the saturation amplitude shown in fig.~\ref{fig:alpha-bounds}, for some old radio pulsars the ``early time'' limit of the evolution is relevant, where the frequency barely changes and the strain depends linearly on the saturation amplitude. In fig.~\ref{fig:schematic evolution} such a source stays close to its starting point (D) on the spindown curve for more than a billion years. However, in general the time evolution is relevant and has to be taken into account. This can be seen in fig.~\ref{fig:strain-evo}, where the evolution of the gravitational wave strain is shown for saturation amplitudes relevant for millisecond pulsars. The dots also show, for various amplitudes, the end of the gravitational wave emission, where the source spins slowly enough that the r-mode is damped. As seen, for $\alpha_{{\rm sat}}<10^{-5}$ the time needed for a star to spin out of the r-mode instability region is considerably more than a billion years, longer than the age of these sources. 
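To make the strain formula eq.~(\ref{eq:intrinsic-strain-amplitude}) concrete, the sketch below evaluates it numerically in SI units, restoring the factor $c^{5}$ that is implicit in the natural-unit expression. All source parameters (mass, radius, spin, amplitude, distance) are fiducial assumptions chosen only for illustration, not values taken from the text.

```python
import math

# Physical constants (SI units)
G = 6.674e-11    # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8      # speed of light [m/s]
kpc = 3.086e19   # kiloparsec [m]
Msun = 1.989e30  # solar mass [kg]

def h0_rmode(J_tilde, M, R, nu, alpha_sat, D):
    """Intrinsic strain of eq. (intrinsic-strain-amplitude), with a factor
    c**5 restored so that the expression is dimensionless in SI units."""
    prefactor = math.sqrt(2**15 * math.pi**7 / 5)
    return prefactor * J_tilde * G * M * R**3 * nu**3 * alpha_sat / (c**5 * D)

# Fiducial source (assumed parameters, not values from the text):
h0 = h0_rmode(J_tilde=1.635e-2, M=1.4 * Msun, R=1.0e4, nu=600.0,
              alpha_sat=1e-7, D=1.0 * kpc)
```

For these assumed parameters the strain comes out in the $10^{-27}$ range, consistent with the magnitudes plotted in fig.~\ref{fig:strain-evo}.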
\begin{figure} \includegraphics{low-alpha-evolution} \caption{\label{fig:strain-evo}The time evolution of the emitted intrinsic gravitational wave strain amplitude (solid curves) and the endpoints of the gravitational wave emission (dots) \cite{Alford:2012yn} shown for different saturation amplitudes and for a fiducial source located at a distance of $1\,{\rm kpc}$. The universal late time behavior is also shown (dotted line).} \end{figure} \subsection{Standard r-mode spindown limit} Using the spindown equation (\ref{eq:spindown}) to eliminate the r-mode saturation amplitude $\alpha_{{\rm sat}}$ in eq.~(\ref{eq:intrinsic-strain-amplitude}), i.e.~employing the values given in fig.~\ref{fig:alpha-bounds}, yields the \textit{spindown limit}% \footnote{The expression given here is slightly smaller than the estimate given in \cite{Owen:2010ng}, since the rotational energy loss does not go entirely into gravitational waves but is partly dissipated to saturate the r-mode at a finite amplitude.% } \cite{Owen:2010ng,Aasi:2013sia} \begin{equation} h_{0}^{\left({\rm sl}\right)}=\sqrt{\frac{15}{4}\frac{GI\left|\dot{\Omega}\right|}{D^{2}\Omega}}\:,\label{eq:spindown-limit-strain} \end{equation} which provides an upper bound that is saturated when the entire rotational energy loss is due to the gravitational wave emission and accompanying dissipation caused by r-modes. The spindown limits for the observed radio pulsar data \cite{Manchester:2004bp} are shown in fig.~\ref{fig:spindown-limits} and are represented by inverted triangles. Here the spindown limits from r-modes (solid triangles) are compared to those for the typically considered case of elliptic star deformations (open triangles), which has recently been studied in detail by the LIGO collaboration \cite{Aasi:2013sia}. R-modes lead to a slightly higher strain but at a lower frequency \cite{Owen:2010ng}. 
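The spindown limit eq.~(\ref{eq:spindown-limit-strain}) is straightforward to evaluate. The sketch below restores the factor $c^{3}$ needed to make the expression dimensionless in SI units; the pulsar parameters (moment of inertia, frequency, spindown rate, distance) are fiducial assumptions for illustration only.

```python
import math

G = 6.674e-11   # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8     # speed of light [m/s]
kpc = 3.086e19  # kiloparsec [m]

def h0_spindown_limit(I, f, fdot, D):
    """R-mode spindown limit of eq. (spindown-limit-strain), with a factor
    c**3 restored so the expression is dimensionless in SI units."""
    Omega = 2.0 * math.pi * f
    Omega_dot = 2.0 * math.pi * abs(fdot)
    return math.sqrt(15.0 / 4.0 * G * I * Omega_dot / (c**3 * D**2 * Omega))

# Fiducial millisecond pulsar (assumed parameters):
h0_sl = h0_spindown_limit(I=1e38, f=300.0, fdot=-1e-15, D=1.0 * kpc)
```

For these assumed parameters the limit is of order $10^{-27}$, the magnitude range of the triangles in fig.~\ref{fig:spindown-limits}.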
These limits are also compared to the detector sensitivity at the 95\% confidence level (curves), given by $h_{0}^{95\%}\!\approx\!10.8\sqrt{S_{h}/\Delta t}$ \cite{Aasi:2013sia} in terms of the spectral density $S_{h}$ of the detector strain noise and the observation interval $\Delta t$. To assess fig.~\ref{fig:spindown-limits} it is important to recall that r-modes are only unstable at sufficiently large frequencies. The lowest frequency at which r-modes are unstable ($\Omega_{{\rm min}}$ in fig.~\ref{fig:schematic evolution}) \cite{Alford:2010fd,Lindblom:1998wf}, shown by the vertical line, sets a strict frequency limit below which no r-mode gravitational wave emission is possible. Despite this restriction, the figure shows that the spindown limit for several millisecond radio pulsars should be beaten by the advanced LIGO detector. However, even though the pulsars J0537-6910 and J0437-4715 could be significantly above the detector sensitivity, they are not promising sources. The pulsar J0537-6910 is actually a young pulsar that has been analyzed in detail in \cite{Alford:2012yn}, where it is shown that although it is slightly above the minimum frequency of the instability region, it is very likely outside of it. The $f=174\,{\rm Hz}$ pulsar J0437-4715 is the closest and brightest millisecond pulsar and therefore would be a natural target. However, this is also the only non-accreting source for which a temperature estimate is available \cite{Haskell:2012}, and this shows that it is outside of the instability region and similarly cannot emit gravitational waves due to r-modes. Moreover, its frequency is below likely values of $\Omega_{f}$ (eq.~(\ref{eq:final-frequency})) so, as we discussed at the end of section \ref{sec:R-mode-spindown-of} and will analyze in more detail below, it ought to have already cooled out of the instability region for a neutron star with standard damping mechanisms \cite{Alford:2013pma}. 
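The detection threshold quoted above is a one-line estimate. As a minimal sketch, the code below evaluates it for an assumed advanced-LIGO-like strain noise of $\sqrt{S_h}\sim4\times10^{-24}\,{\rm Hz^{-1/2}}$ and a year of coherent data; the noise value is an illustrative assumption, not a figure from the text.

```python
import math

def h0_95(S_h, delta_t):
    """Detection threshold h0_95% ≈ 10.8 * sqrt(S_h / Δt) quoted in the text,
    with S_h the strain noise spectral density and delta_t the observation time."""
    return 10.8 * math.sqrt(S_h / delta_t)

# Assumed strain noise ~ (4e-24 / sqrt(Hz))**2 and one year of coherent data:
one_year = 3.156e7  # seconds
threshold = h0_95(S_h=1.6e-47, delta_t=one_year)
```

This gives a threshold of a few times $10^{-27}$, comparable to the spindown limits of the strongest sources in fig.~\ref{fig:spindown-limits}.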
Thinking beyond the advanced LIGO sensitivity thresholds shown in fig.~\ref{fig:spindown-limits}, we note that planned detectors like the Einstein telescope, which has an order of magnitude higher sensitivity, would be able to detect the gravitational waves that would be emitted from many sources, if r-modes are responsible for the better part of their observed spindown rate. \begin{figure} \includegraphics{combined-spindown-limit} \caption{\label{fig:spindown-limits}The standard spindown limits of known radio pulsars compared to the characteristic strain amplitude for different detector configurations assuming a coherent analysis of a year of data. Open (magenta) triangles show the limits for the standard case of elliptic deformations of the star \cite{Aasi:2013sia} and filled (red) triangles for the case of r-mode gravitational wave emission. The solid (grey) curve gives the sensitivity of the original LIGO detector and the dashed (blue) and dot-dashed (green) curves show the sensitivity of the advanced LIGO detector in the standard and neutron star enhanced mode. The vertical line shows the limiting frequency below which r-modes are absent in a neutron star and the shaded band gives the uncertainty on it using the semi-analytic result \cite{Alford:2010fd} and the ranges for the underlying parameters used in \cite{Alford:2012yn}.} \end{figure} \subsection{Universal r-mode spindown limit} The spindown limit for a particular source only takes into account information about that source. Here we will derive a more restrictive limit taking into account the entire data set of radio pulsars. It is based on the observation that proposed r-mode saturation mechanisms are very insensitive to the details of a particular source \cite{Bondarescu:2013xwa,Alford:2011pi,Haskell:2013hja}. 
To make this statement quantitative, we factorize the reduced saturation amplitude given in eq.~(\ref{eq:saturation-amplitude}) by writing \begin{equation} \hat{\alpha}_{{\rm sat}}=\hat{\alpha}_{{\rm sat}}^{\left({\rm mic}\right)}\hat{\alpha}_{{\rm sat}}^{\left({\rm mac}\right)}\:, \end{equation} where $\hat{\alpha}_{{\rm sat}}^{\left({\rm mic}\right)}$ depends on the microphysics of the saturation mechanism and is source-independent, and $\hat{\alpha}_{{\rm sat}}^{\left({\rm mac}\right)}$ depends on the macroscopic properties of a specific source (mass, radius, etc) % \footnote{To make the factorization unambiguous, we will use the convention that once a set of macroscopic parameters has been chosen, the source-dependent macroscopic part $\hat{\alpha}_{{\rm sat}}^{\left({\rm mac}\right)}$ consists of powers of those parameters with no multiplicative prefactor. For the saturation mechanisms considered here there is a known set of macroscopic parameters (mass, radius, etc) but for generality we do not limit ourselves by writing $\hat{\alpha}_{{\rm sat}}^{\left({\rm mac}\right)}$ explicitly in terms of them.% }, which generically only vary within narrow margins. For a given saturation mechanism determined by $\beta$ and $\gamma$ in eq.~(\ref{eq:saturation-amplitude}) and a particular source-dependence encoded in $\hat{\alpha}_{{\rm sat}}^{\left({\rm mac}\right)}$ we can then use eq.~(\ref{eq:effective-spindown}) to determine the reduced microscopic saturation amplitudes $\hat{\alpha}_{{\rm sat}}^{\left({\rm mic}\right)}$ from given pulsar timing data. The smallest value obtained from the entire data set, which is realized for a particular source with frequency $f_{0}$ and spindown rate $\dot{f}_{0}$, can then be used to give a limit for a general source, spinning with frequency $f$, using eqs.~(\ref{eq:heating-cooling}), (\ref{eq:effective-spindown}) and (\ref{eq:intrinsic-strain-amplitude}). 
We find the \textit{universal r-mode spindown limit} \begin{equation} h_{0}^{\left({\rm usl}\right)}=\sqrt{\frac{15}{4}\frac{GI\left|\dot{f}_{0}\right|}{D^{2}f_{0}}}\left(\frac{\hat{\alpha}_{{\rm sat}}^{\left({\rm mac}\right)}}{\hat{\alpha}_{{\rm sat},0}^{\left({\rm mac}\right)}}\right)^{{\textstyle \frac{1}{1-2\beta/\theta}}}\left(\frac{f}{f_{0}}\right)^{{\textstyle \frac{3+\gamma+2\beta/\theta}{1-2\beta/\theta}}}\:.\label{eq:universal-spindown-limit} \end{equation} The first factor is just the standard spindown limit eq.~(\ref{eq:spindown-limit-strain}) for the source with the strongest bound on the reduced microscopic saturation amplitude, whereas the two others are correction factors involving information on the source to which this limit applies, with exponents $\beta$, $\gamma$, $\theta$ from eqs.~(\ref{eq:powers}) and (\ref{eq:saturation-amplitude}). Note that this result is independent of the details of the cooling mechanism encoded in $\hat{L}$ although the effective spindown law eq.~(\ref{eq:effective-spindown}) that was used to obtain eq.~(\ref{eq:universal-spindown-limit}) depends on it. It depends on the power law exponent $\theta$ of the cooling luminosity via the factor $2\beta/\theta$, which takes different values depending on whether the cooling is dominated by neutrinos ($\theta=8$ for modified Urca cooling) or photons ($\theta=4$). Recently it has been shown \cite{Alford:2013pma} that unless the dissipation is so strong that r-modes are completely damped away, radio pulsars should be surprisingly hot due to the strong heating from r-mode dissipation. Therefore, both photon and neutrino cooling can be relevant for observed radio pulsars. Since the detailed properties of particular sources are generally unknown the ratio of the macroscopic parts of the reduced saturation amplitudes in eq.~(\ref{eq:universal-spindown-limit}) can only be estimated. 
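The frequency scaling in eq.~(\ref{eq:universal-spindown-limit}) can be checked against the special cases discussed below. A minimal sketch using exact rational arithmetic; the mode-coupling exponents $\beta=-4/3$ and $\gamma=-2/3$ are read off from the $T^{-4/3}f^{-2/3}$ dependence of the mode-coupling saturation amplitude discussed later in the text.

```python
from fractions import Fraction as F

def frequency_exponent(beta, gamma, theta):
    """Exponent of (f/f0) in eq. (universal-spindown-limit):
    (3 + gamma + 2*beta/theta) / (1 - 2*beta/theta)."""
    x = F(2) * beta / theta
    return (F(3) + gamma + x) / (F(1) - x)

# Constant saturation amplitude (beta = gamma = 0): exponent 3.
assert frequency_exponent(F(0), F(0), F(8)) == 3

# Mode coupling, alpha_sat ~ T^(-4/3) f^(-2/3), i.e. beta = -4/3, gamma = -2/3:
assert frequency_exponent(F(-4, 3), F(-2, 3), F(8)) == F(3, 2)  # neutrino (modified Urca)
assert frequency_exponent(F(-4, 3), F(-2, 3), F(4)) == 1        # photon cooling
```

These reproduce the $(f/f_0)^{3}$, $(f/f_0)^{3/2}$ and $f/f_0$ scalings quoted in the surrounding discussion.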
However, from our theoretical understanding of compact stars (possible range of masses, radii, etc) we can, for a given saturation mechanism, determine bounds on this unknown factor that are tight enough that the universal spindown limit is still considerably more restrictive than the standard spindown limit. \begin{figure} \includegraphics{combined-universal} \caption{\label{fig:universal-spindown-limits}Comparison of different upper bounds on the strain amplitude of known radio pulsars due to r-mode emission. The spindown limit (red triangles) is obtained from the timing data of an individual source. The universal spindown limit takes into account that the saturation mechanism applies to the entire class of millisecond pulsars, and also provides a lower bound eq.~(\ref{eq:final-frequency}) on the frequency. We show universal spindown limits (green circles) and minimum frequency (green dotted vertical line with uncertainty band) for the toy model of a constant r-mode saturation amplitude \cite{Owen:1998xg}. We also show universal spindown limits (blue rectangles) and minimum frequency (blue dashed vertical line with uncertainty band) for a realistic saturation mechanism arising from mode-coupling and the damping of the daughter modes by shear viscosity \cite{Bondarescu:2013xwa}. For a given saturation mechanism, stars below the minimum frequency (open symbols) do not undergo r-mode oscillation.} \end{figure} The simplest and most often used toy model for r-mode saturation \cite{Owen:1998xg} assumes a constant saturation amplitude that is independent of both temperature and frequency, so $\beta=\gamma=0$. Although realistic models based on an explicit physical saturation mechanism have a more complicated dependence, this simple case is useful for illustrative purposes. 
In this case the saturation amplitude is also assumed to be independent of the source, so $\alpha_{{\rm sat}}=\hat{\alpha}_{{\rm sat}}^{\left({\rm mic}\right)}$, which is given in fig.~\ref{fig:alpha-bounds}. The strongest limit $\alpha_{{\rm sat}}\leq1.2\times10^{-7}$ is obtained for the fast pulsar J0034-0534 with $f_{0}\approx533\,{\rm Hz}$ and $\dot{f}_{0}\approx-1.4\times10^{-15}\,{\rm Hz\,s}^{-1}$. Using this bound in eq.~(\ref{eq:final-frequency}) shows that in the constant saturation model r-mode gravitational wave emission can only be present in sources spinning with frequencies $f\gtrsim225\,{\rm Hz}$ corresponding to gravitational wave frequencies $\nu\gtrsim300\,{\rm Hz}$, since slower spinning sources would have left the r-mode instability region (see fig.~\ref{fig:schematic evolution} and the accompanying discussion). The expression for the universal spindown limit shows that the bounds for other sources scale in this case as $\left(f/f_{0}\right)^{3}$. Therefore the universal spindown limits are significantly lower than the standard spindown limits since the saturation amplitude obtained from the entire data set is lower and the frequencies of most sources are lower than $f_{0}$. This is shown in fig.~\ref{fig:universal-spindown-limits}, which compares the universal spindown limits for the constant saturation amplitude model (circles) to the standard spindown limits (triangles) given before in fig.~\ref{fig:spindown-limits}. The dashed vertical line in fig.~\ref{fig:universal-spindown-limits} gives the universal r-mode frequency bound eq.~(\ref{eq:final-frequency}) below which r-modes cannot be present. Therefore slower spinning sources, which appeared to be rather promising when only taking into account the standard spindown limit, are entirely excluded, as is denoted by the open symbols. But even for faster spinning sources the universal spindown limits can be orders of magnitude smaller. 
Therefore, all limits for this saturation mechanism are considerably below the estimated sensitivity of advanced LIGO. R-mode saturation amplitudes obtained from realistic mechanisms have a temperature and/or frequency dependence. Moreover, the power law exponents that are found are generally negative and of order one. As an important realistic example we discuss the saturation due to mode coupling and the subsequent damping of the daughter modes \cite{Arras:2002dw}. The saturation amplitude from mode coupling has recently been revised \cite{Bondarescu:2013xwa}, taking into account that the dominant damping source for daughter modes in a neutron star is likely shear viscosity instead of the previously assumed boundary layer damping. The revised saturation amplitude could be low enough to be compatible with the restrictive bounds from the observed small spindown rates given in fig.~\ref{fig:alpha-bounds}. In the case of a star with an impermeable crust the saturation amplitude is given by \cite{Bondarescu:2013xwa} \begin{equation} \alpha_{{\rm sat}}=\frac{\left|C_{R}\right|}{\sqrt{\tilde{J}}}=\left(1.4\times10^{-7}\frac{K_{4}^{\frac{2}{3}}}{\kappa_{D}}\right)\left(\frac{1}{\sqrt{\tilde{J}}R_{10}^{\frac{4}{3}}}\right)T_{8}^{-\frac{4}{3}}f_{500}^{-\frac{2}{3}}\:.\label{eq:mode-coupling-alpha} \end{equation} Here the first parenthesis represents $\hat{\alpha}_{{\rm sat}}^{\left({\rm mic}\right)}$, which can be determined from the spindown data. The lowest bound on $\hat{\alpha}_{{\rm sat}}^{\left({\rm mic}\right)}$ is obtained from the $f_{0}=336\,{\rm Hz}$ radio pulsar J2229+2643. Using this bound in eq.~(\ref{eq:final-frequency}) shows that in the mode-coupling model r-modes are only present in sources that spin with frequencies $f\gtrsim160\,{\rm Hz}$ corresponding to gravitational wave frequencies $\nu\gtrsim215\,{\rm Hz}$. 
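The structure of eq.~(\ref{eq:mode-coupling-alpha}) can be sketched numerically. In the snippet below the microphysical factors $K_{4}$ and $\kappa_{D}$ are set to one as placeholders (an assumption for illustration, not fitted values), and the lower bound $\tilde{J}=1/(20\pi)$ is used as a default.

```python
import math

def alpha_sat_mode_coupling(T8, f500, J_tilde=1.0 / (20.0 * math.pi),
                            R10=1.0, K4=1.0, kappa_D=1.0):
    """Eq. (mode-coupling-alpha); the microphysical factors K4 and kappa_D
    are set to one here as placeholders (an assumption, not fitted values)."""
    micro = 1.4e-7 * K4**(2.0 / 3.0) / kappa_D
    macro = 1.0 / (math.sqrt(J_tilde) * R10**(4.0 / 3.0))
    return micro * macro * T8**(-4.0 / 3.0) * f500**(-2.0 / 3.0)

alpha = alpha_sat_mode_coupling(T8=1.0, f500=1.0)
# Doubling the temperature lowers the amplitude by a factor 2**(4/3):
ratio = alpha_sat_mode_coupling(T8=2.0, f500=1.0) / alpha
```

With these placeholder values the amplitude is of order $10^{-6}$, in the range of the bounds shown in fig.~\ref{fig:alpha-bounds}.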
In the mode-coupling mechanism the saturation amplitude has a strong temperature and frequency dependence, which gives a weaker scaling of the universal spindown limit. As seen in eq.~(\ref{eq:universal-spindown-limit}), the scaling ranges from $\left(f/f_{0}\right)^{3/2}$ if neutrino cooling dominates to $f/f_{0}$ if photon cooling is dominant. To obtain rigorous upper bounds we are conservative and use for each source the weaker of the two constraints obtained from neutrino and photon cooling. The second parenthesis in eq.~(\ref{eq:mode-coupling-alpha}) is the source-dependent factor $\hat{\alpha}_{{\rm sat}}^{\left({\rm mac}\right)}$, which is unknown. To estimate the uncertainty in this factor, we note that the radius of a neutron star $R_{10}$ (in units of $10\,{\rm km}$) is at present uncertain within $1\lesssim R_{10}\lesssim1.5$ and the factor $\tilde{J}$, defined in \cite{Owen:1998xg}, has been shown in \cite{Alford:2012yn} to be strictly bounded within $1/\left(20\pi\right)\leq\tilde{J}\leq3/\left(28\pi\right)$. Therefore, the uncertainty on the universal spindown limit from the source-dependent factor in eq.~(\ref{eq:universal-spindown-limit}) is $\hat{\alpha}_{{\rm sat}}^{\left({\rm mac}\right)}/\hat{\alpha}_{{\rm sat,0}}^{\left({\rm mac}\right)}\lesssim2.5$. Including this uncertainty, the results for the universal spindown limit for saturation due to mode-coupling are shown as well in fig.~\ref{fig:universal-spindown-limits} (rectangles) and the corresponding frequency below which r-modes are excluded is shown by the dashed vertical line. As can be seen, the universal spindown limits are above those for the constant saturation model (circles), but in most cases they are still significantly below the standard spindown limits, and for all sources they are below the sensitivity of the advanced LIGO detector. For some fast spinning sources the standard spindown limit is more restrictive, but these are far below the aLIGO sensitivity. 
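The quoted uncertainty factor of $\lesssim2.5$ follows directly from the stated ranges of $R_{10}$ and $\tilde{J}$, as the short check below confirms.

```python
import math

def alpha_mac(J_tilde, R10):
    """Source-dependent factor 1/(sqrt(J~) * R10^(4/3)) from eq. (mode-coupling-alpha)."""
    return 1.0 / (math.sqrt(J_tilde) * R10**(4.0 / 3.0))

J_lo, J_hi = 1.0 / (20.0 * math.pi), 3.0 / (28.0 * math.pi)  # strict bounds on J~
R_lo, R_hi = 1.0, 1.5                                        # radius in units of 10 km

# Largest possible ratio between two sources, ≈ 2.5 as quoted in the text:
ratio = alpha_mac(J_lo, R_lo) / alpha_mac(J_hi, R_hi)
```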
\section{Conclusions} We have analyzed the continuous gravitational wave emission of millisecond radio pulsars due to r-modes. As an improvement to the usual bound, given by the spindown limit, we have derived the \textit{universal r-mode spindown limit}, which exploits the fact that proposed r-mode saturation mechanisms are insensitive to the macroscopic star configuration (mass, radius, moment of inertia, etc) and thereby takes into account the whole class of sources. Using this additional information, we find that the universal spindown limit for the intrinsic gravitational wave strain amplitude can be significantly smaller than the usual spindown limit. Furthermore, we show that r-modes are damped in old millisecond radio pulsars spinning with frequencies below about $150-200\,{\rm Hz}$, so that corresponding gravitational wave emission is not expected to be present. Our results do not rely on explicit estimates for the r-mode saturation amplitude, which depend on the microphysics and are still very uncertain, but merely on the parametric temperature and frequency dependence, which is generic for a given saturation mechanism and given by characteristic rational power-law exponents. We compare our improved bounds to the detection thresholds for realistic searches with next generation detectors like advanced LIGO using a year of coherent data and find that for none of the known millisecond radio pulsars would r-mode gravitational waves be detectable in the near future. This is in contrast to r-mode emission from young sources, where several potential sources are in reach of advanced LIGO \cite{Alford:2012yn}. However, if the sensitivity could be improved by the combination of different detectors, the analysis of larger coherent data sets or other enhancements, the universal spindown limits of selected millisecond pulsars which are close to the detection limit might be beaten by next generation searches. 
For third generation detectors, like the planned Einstein telescope, there is a realistic chance to detect dozens of sources and our refined bounds identify those that are most promising. In contrast to the conventional spindown limit (triangles in fig.~\ref{fig:universal-spindown-limits}), which is sizable for some lower frequency sources, the universal spindown limit (circles or rectangles in fig.~\ref{fig:universal-spindown-limits}) shows that it is actually the mid- to high-frequency sources which feature the largest bounds. The universal spindown limit relies on the assumption that the same r-mode saturation mechanism is operating in the entire set of radio pulsars. In principle it cannot be completely excluded that different saturation mechanisms are at work in different sources. This could happen if there were classes of sources with qualitatively different structural or phase compositions. For the recycled old radio pulsars that we focus on in this paper, this is not a likely scenario. These stars are very stable systems that hardly change over time and have very similar properties. For instance the magnetic fields that could distinguish different radio pulsars are all rather small and are not expected to strongly affect the r-mode evolution besides the additional magnetic spindown. However, it is quite possible that old and young stars have different saturation mechanisms, in fact such a difference is required to explain the low spin frequencies of young pulsars \cite{Alford:2012yn} and the low spindown limits of old radio pulsars \cite{Alford:2013pma}. One possibility is enhanced dissipation due to the transformation of a neutron star or its core into a quark star owing to the density increase during the initial spindown \cite{Alford:2013rea}. 
Another option would be enhanced dissipation in a superfluid/superconductor \cite{Haskell:2013hja}, which is only present below the superfluid melting temperature; this transition might have been explicitly observed in the cooling of the neutron star in Cassiopeia A \cite{Page:2010aw}. Both of these transitions would happen in the dynamic early evolution of young sources, before they are a few hundreds of years old. In contrast, as noted above, recycled old radio pulsars are stable systems that hardly change over time. Therefore, the assumption that the same saturation mechanism is realized in the entire class of old millisecond radio pulsars is reasonable. In addition to the exciting prospect of directly detectable gravitational waves, the emission from oscillating pulsars presents a unique chance to directly probe the interior of a compact star. The amplitude of r-modes, which is encoded in the gravitational wave signal, can directly reveal the damping properties of the matter inside the star and thereby its composition. In addition to thermal measurements from low mass x-ray binaries \cite{Lindblom:1998wf,Haskell:2012} and pulsar timing data \cite{Alford:2013pma,Manchester:2004bp}, gravitational waves would provide a third messenger to probe the star's interior composition via r-modes. The combined analysis of these different data sets would provide a clearer picture of the star's interior and could allow us to discriminate different star compositions in the future. \begin{acknowledgments} This research was supported in part by the Offices of Nuclear Physics and High Energy Physics of the U.S. Department of Energy under contracts \#DE-FG02-91ER40628 and \#DE-FG02-05ER41375. \end{acknowledgments}
\section{Introduction} When an electron subjected to a source-drain bias $V$ is transferred from the negative electrode into (say) the empty LUMO of a neutral molecular system M$^0$ embedded in a molecular junction, a transient radical anion is created in a first process (M$^{0} \to$M$^{\bullet -}$), which is subsequently transferred to the positive electrode (M$^{\bullet -} \to$M$^{0}$). If these charge transfer processes are fast (strong electrode-molecule coupling), they cannot be considered as separate events; electron transport proceeds by coherent tunneling. The nuclei are frozen at the optimum molecular geometry $\mathbf{Q}_0$. This picture is ubiquitously adopted to describe vacuum molecular junctions within NEGF (nonequilibrium Green's functions) approaches \cite{Datta:05}. For molecular junctions immersed in electrolytes \cite{Wandlowski:08}, this picture has a counterpart called adiabatic transport \cite{Schmickler:93,Zhang:05,Medvedev:07}, which can be summarized as follows. At a given solvent configuration ${Q}$, the molecular device is traversed by a so-called partial electric current $j(V; {Q})$ \cite{Schmickler:93,Zhang:05,Medvedev:07}, which is the result of coherent tunneling. The slow solvent dynamics with respect to the electronic motion justifies the use of a classical effective coordinate $ {Q}$. Changes in $Q$ model the solvent's dipoles/charges that rearrange to stabilize the extra anion's charge and induce significant variations of the Gibbs adiabatic free energy $\mathcal{G} = \mathcal{G}({Q}; V)$ \cite{Schmickler:93,Zhang:05,Medvedev:07} at a time scale much shorter than the measurement times. Therefore, as schematically depicted in fig.~\ref{fig:dos}, to compare with experiments one has to consider the so-called total current $I(V)$, which is obtained by ensemble averaging the partial current with a weight function $\mathcal{P}(\mathcal{G}(Q); V)$ that depends on $\mathcal{G}({Q}; V)$ [cf.~eq.~(\ref{eq-I}) below]. 
\begin{figure} $ $\\[3.6ex] \centerline{\hspace*{-0ex}\includegraphics[width=0.35\textwidth,angle=0]{Fig1.eps}} $ $\\[0.6ex] \caption{Schematic representation of the adiabatic molecular transport in the Newns-Anderson model [eqs.~(\ref{eq-H-1}) and (\ref{eq-R})]. The total current represents a $Q$-ensemble average of LUMO-mediated tunneling, whose energy $\varepsilon_0(Q)$ fluctuates within an energy range $\sim 2 \lambda$ [cf.~eq.~(\ref{eq-R})]. See the main text for details.} \label{fig:dos} \end{figure} The Newns-Anderson model \cite{Anderson:61,Newns:69b} is widely employed in these studies \cite{Schmickler:86,Schmickler:93,Medvedev:07}. Since early studies on atoms adsorbed on a metal surface \cite{Newns:69b}, this model continues to be used to describe phenomena of recent interest for molecular electronics, e.~g., transition voltage spectroscopy \cite{Beebe:06,Baldea:2012a,Baldea:2012b,Baldea:2012g}. It models the molecule as a single level $\varepsilon_0$, which mediates the tunneling between the source and the drain. This level models the closest molecular orbital (LUMO for n-type conduction) to the metal's Fermi energy $\varepsilon_F$. For a molecule immersed in electrolytes, its energy $\varepsilon_0 = \varepsilon_0({Q})$ fluctuates due to solvent reorganization \cite{Schmickler:86,Schmickler:93,Medvedev:07}. The aim of the present Letter is fourfold: (i) to extend the Newns-Anderson model to include intramolecular nuclear reorganization; to show that this extension (ii) is relevant for molecules of current interest in molecular electronics but (iii) the underlying functional dependencies differ from those describing the solvent reorganization; (iv) to deduce these dependencies from electronic structure calculations. {Thus, information will be provided that can be subsequently used as input for transport studies based on realistic parameters. 
Transport calculations and comparison with experiments \cite{Tao:03,Venkataraman:06,Venkataraman:12} deserve a special analysis and will be the subject of a separate publication.} \section{Specific details} \label{sec:details} As a concrete molecule, (4, 4$^\prime$)-bipyridine (44BPY) will be considered in the present study. In view of its special structure (cf.~fig.~\ref{fig:44bpy}), with two active nitrogen atoms in para positions, 44BPY is particularly suitable for simultaneous binding to two metallic electrodes. It has been utilized to demonstrate for the first time the possibility of repeated formation of molecular junctions \cite{Tao:03}. Compounds based on 44BPY, commonly known as ``viologens'', attracted considerable attention for many decades. 44BPY molecules have been incorporated in redox active tunneling junctions to demonstrate the LUMO electrolyte gating \cite{Wandlowski:08}. Several theoretical studies devoted to electron transport in 44BPY \cite{Hou:05,Thygesen:05c,Bagrets:08,Venkataraman:12} considered the coherent tunneling at fixed geometry but did not examine the impact of intramolecular reorganization. The quantum chemical calculations for the present study have been done with the Gaussian 09 package \cite{g09} at the density functional theory (DFT) level by using the B3LYP functional. Basis sets of double-zeta quality augmented with diffuse functions (Dunning aug-cc-pVDZ) for the light atoms and with a relativistic core potential (cc-pVDZ-PP from ref.~\cite{Puzzarini:05}) for gold have been employed. \section{The spinless Newns-Anderson model} \label{sec:na} The central assumption of the transport approaches based on the Newns-Anderson model is that electric conduction is dominated by a single molecular level. As shown below, this should certainly be the case for molecular junctions based on 44BPY \cite{Tao:03,Venkataraman:09b,Venkataraman:12}. 
{From $\Delta$-DFT calculations \cite{Sham:88}, we deduced a HOMO-LUMO gap $\Delta_{ } = E_C + E_A - 2 E_N = 8.5$\,eV. This quantity, expressed in terms of the energies ($E$) of the cation and anion radicals, and neutral species (subscripts $C$, $A$, and $N$, respectively) is the counterpart of the so-called charge gap used in solid state or mesoscopic physics (see ref.~\cite{Baldea:2008} and citations therein). Screening effects narrow down this gap \cite{Thygesen:09c}, but it certainly exceeds the Kohn-Sham HOMO-LUMO gap ($\Delta_{KS} =4.97$\,eV \cite{Hou:05}), which is known to drastically underestimate $\Delta$.} If the electrodes' Fermi level were located midway between HOMO and LUMO (a situation wherein the Newns-Anderson model would inherently fail, since both HOMO and LUMO should contribute significantly), the transmission (Gamow formula) $T \approx \exp\left( - \frac{2 d}{\hbar} \sqrt{2 m \, \Delta / 2} \right)$ through an \emph{underestimated} energy barrier of a height $\Delta/2={\Delta_{KS}/2 = 2.49}$\,eV and a spatial extension $d = 2 d_{\mbox{N-Au}} + l_{\mbox{44BPY}} \simeq 2 \times 2.4 + 7.11$\,{\AA}$\simeq 11.9$\,{\AA} would yield a conductance $G/G_0 = T \approx {10^{-9}}$, which is completely at odds with the experimental values ($G/G_0 \sim 10^{-2} - 10^{-3}$ \cite{Tao:03,Venkataraman:12}). Here, $G_0 = 2 e^{2}/h = 77.48\,\mu$S is the conductance quantum. To conclude, the assumption of a dominant MO appears to be reasonable for 44BPY. Whether the LUMO (electron/$n$-type conduction) or the HOMO (hole/$p$-type conduction) is the dominant MO cannot be determined from transport measurements in two-terminal setups alone. This issue can be addressed, e.~g., in electrolyte gating \cite{Wandlowski:08} or thermopower studies \cite{Venkataraman:12}. Because they revealed an $n$-type conduction, we will restrict ourselves below to the case of a LUMO-mediated conduction. 
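The Gamow estimate quoted above is easy to reproduce numerically. The sketch below uses the barrier height $\Delta_{KS}/2=2.49$\,eV and width $d\simeq11.9$\,{\AA} stated in the text and confirms a transmission of order $10^{-9}$.

```python
import math

hbar = 1.0546e-34  # reduced Planck constant [J s]
m_e = 9.109e-31    # electron mass [kg]
eV = 1.602e-19     # electron volt [J]

# Barrier height Delta_KS/2 = 2.49 eV and width d ≈ 11.9 Å from the text:
phi = 2.49 * eV
d = 11.9e-10

# Gamow estimate T ≈ exp(-(2 d / hbar) * sqrt(2 m_e * phi))
T = math.exp(-2.0 * d / hbar * math.sqrt(2.0 * m_e * phi))
```

The result is of the order $10^{-9}$, i.e. many orders of magnitude below the measured conductances $G/G_0 \sim 10^{-2}-10^{-3}$.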
Within the most general Newns-Anderson model \cite{Anderson:61,Newns:69b}, the single MO of energy $\varepsilon_0$ it contains can be empty ($n_{\uparrow, \downarrow} = 0$), singly ($n_{\uparrow} + n_{\downarrow} = 1$), or doubly ($n_{\uparrow} = n_{\downarrow} = 1$) occupied, corresponding to the neutral, radical anion, and dianion species, respectively. The second-quantized Hamiltonian has the expression \begin{equation} \label{eq-H-2} H = \sum_{\sigma=\uparrow,\downarrow} \varepsilon_{0}\left(\mathbf{Q}\right) n_{\sigma} + U n_{\uparrow} n_{\downarrow} + \mathcal{E}_{ph}\left(\mathbf{Q}\right) , \end{equation} where $n_{\sigma} = c^{\dagger}_{\sigma} c_{\sigma}$ denote electron number operators. A Hubbard-type interaction accounts for the Coulomb repulsion $U$ between the two spin directions. In an STM setup, the molecule is coupled to two electrodes (substrate $s$ and tip $t$). The average molecule-electrode couplings $\tau_{s,t}$ determine a nonvanishing level width characterized by the broadening functions $\Gamma_{s,t} \propto \tau_{s,t}^{2}$ \cite{Datta:05}. The active MO is coupled to classical intramolecular (and, if the case, solvent) vibrational modes $\mathbf{Q}$ that reorganize upon charge transfer. They modulate the MO energy $ \varepsilon_{0}^{0} \to \varepsilon_{0}\left(\mathbf{Q}\right) $ and store an energy $\mathcal{E}_{ph}\left(\mathbf{Q}\right)$. This yields a $\mathbf{Q}$-dependence of the total energy ${E}\left(\mathbf{Q}\right) \equiv \langle H \rangle$. For convenience, the energy at the neutral optimum $\mathbf{Q}_{0}$ will be taken as energy zero, $E_N\left(\mathbf{Q}_{0}\right) = 0$. An important issue in molecular transport is whether the double occupancy of the active MO is significant or not. 
Although strictly valid only if a single-particle picture (on which the DFT description relies) is applicable, the analysis can be done by observing that eq.~(\ref{eq-H-2}) can be described in terms of two single-electron states of energies $\varepsilon_{1}\left(\mathbf{Q}\right) = \varepsilon_{0}\left(\mathbf{Q}\right)$ and $\varepsilon_{2}\left(\mathbf{Q}\right) = \varepsilon_{0}\left(\mathbf{Q}\right) + U$ ($\varepsilon_{1} < \varepsilon_{2}$, $U > 0$). The charge transfer efficiency is determined by the energy differences $\varepsilon_{1,2} - \varepsilon_F$. Fluctuations in $\varepsilon_{0}\left(\mathbf{Q}\right)$ induced by phonons are of the order of the reorganization energy $\lambda$, which typically amounts to a few tenths of an electron volt \cite{Wandlowski:08}. So, a doubly occupied LUMO gives a significant contribution only if $U$ is at most of the order of $\lambda$; otherwise, the state of energy $\varepsilon_{2}$, too high above the Fermi level, is blocked, and only that of energy $\varepsilon_{1}$ is relevant. The Coulomb blockade parameter $U$ cannot be directly determined from transport data in a simple manner \cite{Goldhaber-GordonNature:98,Baldea:2009a,Baldea:2010d}. Within DFT, $U \to U_{KS}$ obtained from the energy splitting of the Kohn-Sham LUMO ``orbitals'' is $U_{KS} = 1.54$, $1.52$, and $1.26$\,eV for a 44BPY molecule in vacuum, in (aqueous) solution, and in solution with one gold atom attached at each of the two nitrogen atoms, respectively. Similar to the case of the DFT HOMO-LUMO gap, $U_{KS}$ drastically underestimates $U$. A substantially higher value is obtained via the more adequate method of energy differences based on eq.~(\ref{eq-H-2}), $U = E_N + E_D - 2 E_A = 1.92$\,eV (instead of $1.26$\,eV), for the last of the three aforementioned situations (subscript $D$ stands for dianion). Still, what is really important for the present purpose is that $U$ is much larger than the reorganization energies (see below). 
Transfer processes with a doubly occupied LUMO are energetically too costly and can be ignored in electron transport through 44BPY. So, one can safely employ a spinless Newns-Anderson model Hamiltonian ($n = c^\dagger c$) \begin{equation} \label{eq-H-1} H = \varepsilon_{0}\left(\mathbf{Q}\right) \, n + \mathcal{E}_{ph}\left(\mathbf{Q}\right) , \end{equation} as done in existing studies, e.~g., refs.~\cite{Schmickler:86,Schmickler:93,Medvedev:07}. The total energies of the radical anion $E_{A}(\mathbf{Q})$ and neutral species $E_{N}(\mathbf{Q})$ can be used to microscopically compute the $\mathbf{Q}$-dependence of the parameters entering eq.~(\ref{eq-H-1}) \begin{equation} {\mathcal{E}}_{ph}(\mathbf{Q}) = E_{N}(\mathbf{Q}); \ \varepsilon_{0}(\mathbf{Q}) = E_{A}(\mathbf{Q}) - E_{N}(\mathbf{Q}) . \label{eq-param-NA} \end{equation} Notice that the LUMO energy $\varepsilon_{0}(\mathbf{Q})$ expressed above is measured with respect to the vacuum. The reorganization energies of the radical anion ($\lambda_{A}$) and the neutral ($\lambda_{N}$) species are important quantities defined by \begin{equation} \label{eq-lambda-na} \lambda_{N} = E_N\left(\mathbf{Q}_{A}\right) - E_N\left(\mathbf{Q}_{0}\right); \ \lambda_{A} = E_A\left(\mathbf{Q}_{0}\right) - E_A\left(\mathbf{Q}_{A}\right) , \end{equation} where $\mathbf{Q}_{A}$ denotes the radical anion optimum geometry. 
All the phenomenological approaches to molecular transport in electrolytes based on the spinless Newns-Anderson model of which we are aware (\emph{e.~g.}, refs.~\cite{Schmickler:86,Schmickler:96b,Zhang:05,Medvedev:07,Wandlowski:08}) assume harmonic $\mathbf{Q}$-dependencies of $E_{N}$ and $E_{A}$, and this yields ($\mathbf{Q} \equiv \{Q_{\nu}\}$; henceforth $\mathbf{Q}_0 \equiv \mathbf{0}$) \begin{eqnarray} \label{eq-harm} \mathcal{E}_{ph}\left(\mathbf{Q}\right) & = & \frac{1}{2}\sum_{\nu}\omega_{\nu} Q_{\nu}^{2} , \\ \label{eq-linear} \delta \varepsilon_{0}\left(\mathbf{Q}\right) & \equiv & \varepsilon_{0}\left(\mathbf{Q}\right) - \varepsilon_{0}\left(\mathbf{Q} = 0\right) = - \sum_{\nu}g_{\nu} Q_{\nu} . \end{eqnarray} In the cases where eqs.~(\ref{eq-harm}) and (\ref{eq-linear}) apply, the above reorganization energies are equal \begin{equation} \label{eq-lambda-a-n} \lambda_{N} = \lambda_{A} = \lambda \equiv \sum_{\nu} \frac{g^2_{\nu}}{2 \omega_{\nu}}. \end{equation} By linearly joining the minima of the radical anion and neutral species, one can introduce an effective coordinate $ \mathcal{R}$, $Q_{\nu} = \mathcal{R} g_{\nu} /\omega_{\nu}$, and recast eqs.~(\ref{eq-harm}) and (\ref{eq-linear}) as \cite{Schmickler:96b} \begin{equation} \label{eq-R} \mathcal{E}_{ph}(\mathcal{R}) = \lambda \, \mathcal{R}^2 ; \ \varepsilon_{0}(\mathcal{R}) = \varepsilon_{0} - 2 \lambda \, \mathcal{R} . \end{equation} The configurations $\mathbf{Q}_0$ and $\mathbf{Q}_A$ correspond to $\mathcal{R} = 0$ and $\mathcal{R} = 1$, respectively. 
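The reduction to the effective coordinate $\mathcal{R}$ is easy to verify numerically: substituting $Q_{\nu} = \mathcal{R} g_{\nu}/\omega_{\nu}$ into eqs.~(\ref{eq-harm}) and (\ref{eq-linear}) must reproduce eq.~(\ref{eq-R}) for any set of mode parameters. A minimal check (the couplings and frequencies below are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)
omega = rng.uniform(0.01, 0.2, size=5)  # mode frequencies (arbitrary units)
g = rng.uniform(-0.1, 0.1, size=5)      # linear couplings (arbitrary units)

lam = np.sum(g**2 / (2.0 * omega))      # reorganization energy lambda

def along_R(R):
    """Evaluate eqs. (harm) and (linear) on the path Q_nu = R g_nu/omega_nu."""
    Q = R * g / omega
    E_ph = 0.5 * np.sum(omega * Q**2)
    d_eps0 = -np.sum(g * Q)
    return E_ph, d_eps0

for R in (0.0, 0.5, 1.0, 2.0):
    E_ph, d_eps0 = along_R(R)
    assert np.isclose(E_ph, lam * R**2)        # E_ph(R) = lambda R^2
    assert np.isclose(d_eps0, -2.0 * lam * R)  # eps0(R) - eps0 = -2 lambda R
print(f"lambda = {lam:.4f} (arbitrary units)")
```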
\begin{figure} $ $\\[0.6ex] \centerline{\hspace*{-0ex}\includegraphics[width=0.35\textwidth,angle=0]{Fig2.eps}} $ $\\[0.6ex] \caption{The neutral 44BPY molecule consists of two pyridine rings twisted by $\theta = 37.2^\circ$.} \label{fig:44bpy} \end{figure} \section{Extending the Newns-Anderson model to describe intramolecular relaxation} \label{sec:breakdown} Most intramolecular modes are fast (frequencies $\omega_{\nu}$ comparable to or higher than $\Gamma_{s,t}$) and should be treated quantum mechanically \cite{Thoss:09,Thoss:10}. Fingerprints of the associated inelastic tunneling are, e.~g., the well-known peaks in the second derivative $d^2I/dV^2$ at resonant voltage values ($e V = \hbar \omega_{\nu}$). These modes reorganize negligibly and are therefore not of interest in the present context. But if (as in 44BPY, see below) slow intramolecular vibrations exist that are sufficiently strongly coupled to the LUMO, their reorganization during electron transfer is significant and should be considered. The problem is to investigate whether a certain molecule possesses such modes and to scrutinize whether they can be described by eqs.~(\ref{eq-harm}) and (\ref{eq-linear}). To this aim, we have performed electronic structure calculations for 44BPY. The reorganization energies computed from eqs.~(\ref{eq-lambda-na}) turn out to be different: $\lambda_N = 0.353$\,eV and $\lambda_A = 0.224$\,eV. The inequality $\lambda_N \neq \lambda_A$ demonstrates that the inner reorganization of the 44BPY molecule cannot be correctly described by eqs.~(\ref{eq-harm}) and (\ref{eq-linear}). The reorganization energies $\lambda_{N,A}$ computed via eqs.~(\ref{eq-lambda-na}) represent ``global'' quantities emerging from electronic structure calculations, wherein \emph{all} intramolecular vibrational modes are included within a classical picture. To gain further insight, we have split the reorganization energy into contributions of individual molecular vibrational modes. 
Composed of 20 atoms, the nonlinear 44BPY molecule has 54 normal vibrations. In the D$_2$ point group symmetry, they are distributed as $14 A + 12 B_1 + 14 B_2 + 14 B_3 $; $10 A + 9 B_1 + 9 B_2 + 9 B_3 $ are in-plane vibrations and $4 A + 3 B_1 + 5 B_2 + 5 B_3 $ are out-of-plane vibrations \cite{Zhuang:07}. Our extensive calculations confirmed the expectation that significant contributions to the reorganization energy arise from the in-plane normal modes with A symmetry. Out of the ten in-plane modes of A symmetry we have identified two modes that dominate the inner reorganization. As expected, they are related to the main structural differences between 44BPY$^{0}$ and 44BPY$^{\bullet -}$. One of these modes (normal coordinate $Q_{46}$) is related to the so-called quinoidal distortion \cite{Zhuang:07} of the neutral molecule upon electron attachment (44BPY$^{0} + e^{-}\to$ 44BPY$^{\bullet -}$), with a shortening of the inter-ring C-C bond and of the C-C bond parallel to it, and a lengthening of the C-C bond between them as well as of the C-N bond. Adiabatic energy curves $E_{N,A}(Q_{46})$ reveal a virtually perfect harmonic behavior, which agrees with eqs.~(\ref{eq-harm}) and (\ref{eq-linear}) and yields equal partial reorganization energies $\lambda_{N}^{(46)} = \lambda_{A}^{(46)}$. Because its frequency (the computed value $\omega_{46} = 1642\,\mbox{cm}^{-1}\simeq 0.2$\,eV excellently agrees with the strong Raman band observed experimentally at 1645\,cm$^{-1}$ \cite{Zhuang:07}) exceeds typical $\Gamma_{s,t}$-values \cite{Baldea:2012a,Baldea:2012b,Baldea:2012g}, this mode is too fast to significantly reorganize via the classical thermal activation considered in this study. The inequality $\lambda_{N} \neq \lambda_{A}$ traces back to the most appealing feature of the molecular structure, namely the inter-ring twisting angle $\theta$. The mode directly related to the inter-ring torsional motion represents the floppy (label $f$) degree of freedom of 44BPY. 
Adiabatic energy curves $E_{N,A}(Q_{f})$ along the normal coordinate $Q_{f}$ exhibit strong anharmonicities. The (harmonic) frequency of this floppy mode is very low ($\omega_f \simeq 62$\,cm$^{-1}$ in the isolated molecule, cf.~table \ref{table}), but the impact on reorganization is important because large-amplitude oscillations are significant. The partial reorganization energy of the radical anion $\lambda_{A}^{(f)} \simeq 0.16$\,eV is almost two times larger than that of the neutral molecule ($\lambda_{N}^{(f)} \simeq 0.09$\,eV). From the adiabatic $E_{N,A}$-curves obtained by quantum chemical calculations, the functional dependencies of the model parameters $\mathcal{E}_{ph}\left(Q_{f}\right)$ and $\varepsilon_{0}\left(Q_{f}\right)$ can be computed from eqs.~(\ref{eq-param-NA}). The results of these calculations are collected in fig.~\ref{fig:e-Q-f}a. They show significant deviations from the linear and quadratic $Q_f$-dependencies of $\varepsilon_{0}\left(Q_{f}\right)$ and $\mathcal{E}_{ph}\left(Q_{f}\right)$, respectively. The latter approximations are represented as dashed lines in fig.~\ref{fig:e-Q-f}a. Fig.~\ref{fig:e-Q-f} also shows that fourth-order polynomials \begin{eqnarray} \varepsilon_{0}\left(Q_f\right) & = & - EA_v + \omega_f \left( e_1 Q_f + e_2 Q_f^2 + e_3 Q_f^3 + e_4 Q_f^4\right) , \nonumber \\ \label{eq-fit} \mathcal{E}_{ph}(Q_f) & = & \omega_f \left( Q_{f}^{2}/2 + f_3 Q_f^3 + f_4 Q_f^4\right) \end{eqnarray} very accurately fit the DFT curves. Values of the vertical electron affinity $ EA_{v} \equiv - \varepsilon_{0}(0) \equiv E_N(\mathbf{Q}_0) - E_A(\mathbf{Q}_0)$ and the other quantities entering eq.~(\ref{eq-fit}) are given in table \ref{table}. 
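As an illustration of how eqs.~(\ref{eq-fit}) are used (a numerical consistency check of ours, with the vacuum coefficients from the first row of table~\ref{table}): combining the two fitted curves via eq.~(\ref{eq-param-NA}) and minimizing the anion curve $E_A(Q_f)$ on a grid recovers the partial reorganization energies quoted above.

```python
import numpy as np

CM1_TO_EV = 1.2398e-4              # 1 cm^-1 in eV
w_f = 61.9 * CM1_TO_EV             # floppy-mode frequency (table, vacuum row)
f3, f4 = 0.0399, 0.10038
e1, e2, e3, e4 = -9.0236, -0.9194, 0.04239, -0.0007
EA_v = 0.444                       # vertical electron affinity (eV)

def E_ph(Q):                       # second line of eq. (fit) = E_N(Q_f)
    return w_f * (Q**2 / 2 + f3 * Q**3 + f4 * Q**4)

def eps0(Q):                       # first line of eq. (fit)
    return -EA_v + w_f * (e1 * Q + e2 * Q**2 + e3 * Q**3 + e4 * Q**4)

Q = np.linspace(-1.0, 5.0, 60001)
E_A = E_ph(Q) + eps0(Q)            # anion adiabatic curve, eq. (param-NA)
i_A = int(np.argmin(E_A))
Q_A = Q[i_A]                       # anion optimum along the floppy mode

lam_N = E_ph(Q_A) - E_ph(0.0)               # E_N(Q_A) - E_N(0) -> ~0.09 eV
lam_A = (E_ph(0.0) + eps0(0.0)) - E_A[i_A]  # E_A(0) - E_A(Q_A) -> ~0.16 eV
print(f"Q_A ~ {Q_A:.2f}; lam_N^(f) ~ {lam_N:.3f} eV; lam_A^(f) ~ {lam_A:.3f} eV")
```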
\begin{center} \begin{largetable} \begin{tabular}{|l@{\hspace{1ex}}c@{\hspace{2.ex}}c@{\hspace{2.ex}}c@{\hspace{2.ex}}c@{\hspace{2.ex}}c@{\hspace{2.ex}}c@{\hspace{2.ex}}c@{\hspace{2.ex}}c@{\hspace{2.ex}}|} \hline & $\omega_f$ & $f_3$ & $f_4$ & $e_1$ & $e_2$ & $e_3$ & $e_4$ & $EA_v$ \\ \hline 44BPY in vacuum & 61.9 & 0.0399 & 0.10038 & -9.0236 & -0.9194 & 0.04239 & -0.0007 & 0.444 \\ 44BPY in solution & 58.18 & 0.0634 & 0.1144 & -9.6301 & -1.0751 & 0.04438 & 0.00115 & 2.206 \\ Au-44BPY-Au in vacuum & 63.2 & -0.0040 & 0.0943 & -9.0110 & -0.2416 & -0.3563 & 0.05714 & 1.586 \\ \hline \end{tabular} \caption{ Results for the 44BPY molecule in vacuum, in aqueous solution, and with two gold atoms connected to the two nitrogen atoms. The units for the frequency of the floppy mode $\omega_f$ and the vertical electron affinities $EA_v$ are cm$^{-1}$ and eV, respectively. The dimensionless coefficients $f_{3,4}$ and $e_{1,2,3,4}$ enter the interpolation formulas of eqs.~(\ref{eq-fit}). } \label{table} \end{largetable} \end{center} \begin{figure} $ $\\[1.0ex] \centerline{\hspace*{-0ex}\includegraphics[width=0.35\textwidth,angle=0]{Fig3a.eps}} $ $\\[0.2ex] \centerline{\hspace*{-0ex}\includegraphics[width=0.35\textwidth,angle=0]{Fig3b.eps}} $ $\\[0.2ex] \centerline{\hspace*{-0ex}\includegraphics[width=0.35\textwidth,angle=0]{Fig3c.eps}} \caption{The model parameters $\varepsilon_{0}$ and $\mathcal{E}_{ph}$ of a 44BPY molecule (a) in vacuum, (b) in aqueous solution, and (c) with two gold atoms attached. The relevant range of the normal coordinate of the floppy degree of freedom $Q_f$ includes values around the minima at $Q_f^{0}$ and $Q_f^{A}$ of the neutral and radical anion species, respectively. 
The results of the DFT calculations (solid lines) can be excellently fitted with polynomials [eqs.~(\ref{eq-fit}) and table \ref{table}] (represented by points), but they significantly depart from the linear and quadratic approximations of eqs.~(\ref{eq-linear}) and (\ref{eq-harm}), respectively (dashed lines). See the main text for details.} \label{fig:e-Q-f} \end{figure} \section{Beyond the case of an isolated molecule} \label{sec:nonisolated} The results discussed above refer to an isolated 44BPY molecule in vacuum. Cases of breakdown of the harmonic approximation for low frequency vibrations in isolated molecules are well known in molecular physics \cite{Tucker:94}. However, we are not aware of studies pointing out the failure of the harmonic approximation for floppy molecules used to fabricate molecular junctions, like in the case discussed above. For molecular transport, it is also important to consider the case of a 44BPY molecule immersed in an electrolyte and linked to metallic electrodes \cite{Tao:03,Wandlowski:08,Venkataraman:12}. In the present study on 44BPY in aqueous solution (label $sol$), the solvent has been considered within the polarized continuum model using the integral equation formalism (keyword SCRF=IEFPCM in Gaussian 09). The results (fig.~\ref{fig:e-Q-f}b and table \ref{table}) indicate that the most important solvent effect is an almost constant shift in the LUMO energy. It is related to the strong interaction of the anion with the water molecules. The largest contribution to the vertical electron affinity in solution comes from the anion's solvation free energy $ \Delta G_{A} \equiv E_{A}^{sol}(\mathbf{Q}_{A}^{sol}) - E_{A}(\mathbf{Q}_{A}) \simeq -2.01$\,eV. Here, $\mathbf{Q}_{A}^{sol}$ denotes the optimized geometry of the radical anion in solution. As a preliminary step in investigating the impact of electrodes, we have also considered the case of a 44BPY molecule with two gold atoms attached to the nitrogen atoms. 
The corresponding results (fig.~\ref{fig:e-Q-f}c) reveal an effect qualitatively similar to that of the solvent. So, apart from a nearly constant shift of the LUMO energy, for $Q_f$-values of interest (cf.~fig.~\ref{fig:e-Q-f}), the model parameters $\varepsilon_0(Q_f)$ and $\mathcal{E}_{ph}(Q_f)$ vary within ranges that appear to be rather insensitive to the presence of solvents or electrodes. The physics behind the similarity exhibited by these numerical results is the following. The twisting angle $\theta$ of the 44BPY molecule is determined by the competition between the $\pi$-electronic interaction of the pyridyl fragments, which favors $\pi$-electrons delocalized between coplanar pyridyl rings ($\theta = 0$), and the steric repulsion between the ortho H atoms, which is diminished by a twisted conformation ($\theta \neq 0$) \cite{Kassab:96}. The latter prevails in the neutral species 44BPY$^{0}$ (empty/oxidized LUMO), which is nonplanar (fig.~\ref{fig:44bpy}). By adding an extra electron (occupied/reduced LUMO), the energy gain resulting from $\pi$-electron delocalization between the two rings outweighs the steric repulsion, and the radical anion (44BPY$^{\bullet -}$) becomes planar. Therefore, a structural transition from the twisted to the planar conformation can only be induced if a significant negative charge is transferred to the 44BPY unit. This cannot be achieved by immersion in solution, nor even by attaching gold atoms. In the latter case, the gold atoms acquire a small negative charge ($\sim -0.096e$), which has a negligible impact on the torsion angle $\theta$. \section{Summary and outlook} \label{sec:conclusion} The present results emphasize the need to consider the inner relaxation in junctions based on molecules with floppy degrees of freedom; the reorganization energies deduced here ($\lambda \sim 0.1 - 0.2$\,eV) are comparable with those for outer (solvent) reorganization \cite{Wandlowski:08}. 
{To avoid misunderstandings, three comments are in order. (i) The present analysis of the inner reorganization (conformational distortions) has implicitly assumed a \emph{given} (e.~g., atop, hollow, bridge) contact geometry; variations in the binding geometry may be larger (e.~g., refs.~\citenum{Venkataraman:06,Wandlowski:11}) but they are of a different nature. (ii) The $Q_f$ dependence of the partial current and conductance discussed here refers to a \emph{given} molecule (44BPY); this is qualitatively different, e.~g., from the scaling $G \propto \cos^2 \theta$ discussed previously \cite{Venkataraman:06,Wandlowski:09,Wandlowski:11}, which refers to \emph{various} derivatives of a molecular family (e.~g., biphenyls) wherein the torsion angle $\theta$ is varied, e.~g., by a bridging (alkyl) chain. (iii) $\theta$ can\emph{not} be specified in terms of the single normal coordinate $Q_f$ only. Thermal averaging over $\theta$ (as done for biphenyls \cite{Venkataraman:06}) amounts to considering that, \emph{concomitant} with the floppy mode $\omega_f$, other modes of much higher frequency are also thermally activated. Because such fluctuations are energetically costly in biphenyls, their effect is reduced \cite{Venkataraman:06}.} While revealing that a spinless Newns-Anderson model is justified for 44BPY, the present study has shown that the ${Q}$-dependencies of eqs.~(\ref{eq-harm}) and (\ref{eq-linear}) assumed to work in electrolytes are inappropriate for the reorganization of low frequency intramolecular vibrations, which are characterized by a pronounced anharmonic behavior. New formulas for $\varepsilon_{0}({Q_f})$ and $\mathcal{E}_{ph}\left({Q_f}\right)$ have been deduced from microscopic calculations [cf.~eqs.~(\ref{eq-fit})], which replace the expressions of eqs.~(\ref{eq-harm}) and (\ref{eq-linear}) used for solvent reorganization. 
The $Q_f$-dependence of the model parameters is important because the ensemble average needed to compute the experimentally relevant quantity, namely the total current $I(V)$, requires an integration over $Q_f$ \begin{equation} \label{eq-I} I(V) = \langle j\left(V; Q_f\right)\rangle = \int j\left(V; Q_f\right) \mathcal{P}\left(Q_f; V\right) d\,Q_f . \end{equation} The fact that 44BPY possesses a single low frequency vibrational mode that significantly reorganizes represents an enormous simplification; otherwise, an ensemble average involving a $\mathbf{Q}$-integration over all of the 54 internal degrees of freedom would be a formidable numerical challenge. The thermal weight function $\mathcal{P}(Q_f; V) \propto \exp\left[ - \mathcal{G}(Q_f; V)/(k_B T)\right] $ is determined by the Gibbs free energy $\mathcal{G}( Q_f; V)$ of the (partial) oxidation state of the molecule (i.~e., LUMO occupancy $0 < n(Q_f; V) < 1$) \emph{linked} to biased electrodes. $\mathcal{G}$ depends on $Q_f$, $V$, and (if applicable) on the electrolyte's overpotential \cite{Wandlowski:08,Schmickler:93,Zhang:05,Medvedev:07}. The need to carry out ensemble averaging is an essential aspect, which renders purely ab initio approaches (as used for rigid molecules in vacuum) inapplicable; therefore, feasible approaches to date have to resort to models. The above results can (and will) be used in subsequent studies to deduce the current within the adiabatic transport approach, for which eq.~(\ref{eq-I}) can serve as starting point. Expressions for the partial current $j(V; \varepsilon_{0}(Q_f))$ and the level (LUMO) occupancy $n(\varepsilon_{0}(Q_f); V)$ are known \cite{Schmickler:86,Medvedev:07,Baldea:2010e,baldea:arXiv1108.1039B}. Along with the expression of $\mathcal{E}_{ph}(Q_f)$, the LUMO occupancy is required to express the $Q_f$- (and $V$-)dependent Gibbs adiabatic free energy $\mathcal{G}$. 
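Schematically, the average of eq.~(\ref{eq-I}) is a one-dimensional Boltzmann-weighted quadrature. In the sketch below, $\mathcal{G}$ and $j$ are toy placeholders (the actual adiabatic expressions are those of the references cited above); only the structure of the ensemble average is illustrated:

```python
import numpy as np

kB_T = 0.025  # eV, room temperature

def G_free(Q, V):
    """Toy placeholder for the Gibbs free energy G(Q_f; V) (NOT the adiabatic G)."""
    return 0.007 * (Q**2 / 2 + 0.1 * Q**4) - 0.05 * V * Q

def j_partial(Q, V):
    """Toy placeholder for the partial current j(V; Q_f) (Lorentzian level)."""
    eps = -0.4 + 0.01 * Q   # toy MO-energy offset
    gamma = 0.05
    return V * gamma**2 / (eps**2 + gamma**2)

def total_current(V):
    """Eq. (I): I(V) = <j(V; Q_f)> with thermal weight P ~ exp(-G/kT)."""
    Q = np.linspace(-6.0, 6.0, 2001)
    w = np.exp(-G_free(Q, V) / kB_T)  # unnormalized P(Q_f; V)
    P = w / w.sum()                   # normalized on the uniform grid
    return float(np.sum(j_partial(Q, V) * P))

print(total_current(0.1))
```

Replacing the two placeholders by the adiabatic expressions of the cited references turns this skeleton into the actual computation.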
One should emphasize at this point that $\mathcal{G}(Q_f; V)$ is not simply related to the adiabatic energies $E_{N,A}(Q_f)$; the former quantity characterizes a molecule linked to biased electrodes, while the latter quantities pertain to an isolated molecule. Earlier studies demonstrated that even a single reorganizable harmonic mode, eq.~(\ref{eq-R}), yields highly nontrivial adiabatic $\mathcal{G}$-surfaces \cite{Schmickler:86,Medvedev:07}. In view of the significant anharmonicities embodied in the expressions adequate for a floppy degree of freedom, one can expect more complex adiabatic surface topologies, with the need to distinguish between several important limiting cases. To simply motivate this, one should note that, unlike in electrolytes, reorganization effects in floppy molecules cannot be merely characterized with the aid of a single quantity ($\lambda = \lambda_N = \lambda_A$). As shown recently \cite{Baldea:2012b,Baldea:2012g}, the impact of fluctuations in the MO-energy offset $\overline{\varepsilon}_0 \equiv \varepsilon_0 - \varepsilon_F$ on the ohmic conductance can be significant. Molecule-electrode interactions can be important sources of such fluctuations \cite{Baldea:2012b}. In view of the present study, in molecular junctions like those based on 44BPY fabricated experimentally \cite{Tao:03,Wandlowski:08,Venkataraman:12}, one can expect that, even in the absence of other effects, reorganization effects represent an important source for large $\overline{\varepsilon}_0$-fluctuations ($\delta \overline{\varepsilon}_0 \sim 0.2 - 0.4$\,eV). As a straightforward application, one can employ eq.~(\ref{eq-I}) to compute conductance histograms, which can be directly compared with those available from experimental studies \cite{Tao:03,Venkataraman:12}. \acknowledgments The author thanks Horst K\"oppel for valuable discussions and the Deu\-tsche For\-schungs\-ge\-mein\-schaft (DFG) for financial support.
\section{Introduction} \label{sec:intro} The presence of an actively accreting supermassive black hole (SMBH) in a galaxy is demonstrated through signatures of energetic processes near the central engine. The primary signatures closest to the black hole are, in order of increasing distance, X-ray continuum from the hot corona \citep[found within a few Schwarzschild radii of the SMBH; e.g.,][]{2012MNRAS.422..129Z}, and ultraviolet and optical emission lines with widths greater than $\sim 1500\, {\rm km}\, {\rm s}^{-1}$ from the broad line region \citep[BLR -- found within 10s to 100s of light days from the SMBH; e.g.,][]{2000ApJ...533..631K, 2005ApJ...629...61K}. However, in heavily obscured active galactic nuclei (AGN) for which the line of sight hydrogen column density to the nucleus ($N_{\rm H}$) exceeds $10^{23}\: {\rm cm}^{-2}$, these signatures are not visible. For AGN with a characteristic luminosity of $\mathrm{10^{43}\: erg\,s^{-1}}$ (i.e., Seyfert galaxies), 60\% of sources are in this category \citep[e.g.,][]{2011ApJ...728...58B, 2015ApJ...815L..13R}. Obscured AGN can still be identified through emission from further out from the central black hole, such as mid-infrared (MIR) thermal continuum from the torus that is thought to surround the AGN accretion disk at distances of 0.1 pc $-$ 10s of pc \citep[e.g.,][]{2005ApJ...618L..17P, 2008ApJ...681..141R, 2013A&A...558A.149B, 2016ApJ...822L..10I, 2016ApJ...823L..12G, 2016ApJ...829L...7G}, and the high ionization forbidden lines of the narrow line region (NLR) which occupies 100s to 1000s of pc scales \citep[e.g.,][]{1993ApJ...404L..51N, 2002ApJ...574L.105B, 2006A&A...456..953B, 2017NatAs...1..679R}. 
However, because the torus and NLR are further away from the black hole, it is possible for accretion onto the SMBH to have shut off recently while the MIR and NLR emission is still preserved \citep[e.g., within the last 10s to 100s of years;][]{2017ApJ...844...21I}, creating an AGN that looks like a classical Seyfert~2 with the BLR obscured, but that in truth intrinsically lacks a BLR. This could be related to a so-called `true' Seyfert~2 galaxy \citep[e.g.,][]{bianchi08}. While so far in the literature these sources have been assumed to be actively accreting, the lack of a BLR could also be due to an AGN that has recently deactivated. X-rays with energies greater than 10~keV can penetrate thick obscuring columns and reveal the presence of an actively accreting central engine even in a heavily obscured Seyfert~2 galaxy. As the first focusing X-ray telescope in orbit with sensitivity above 10~keV, the {\it Nuclear Spectroscopic Telescope Array} \citep[\nustar;][]{2013ApJ...770..103H} has identified and studied actively accreting SMBHs obscured by even Compton-thick levels of absorption \citep[$N_{\rm H} > 1.5 \times 10^{24}\: {\rm cm}^{-2}$; e.g.,][]{2015ApJ...815...36A, 2016ApJ...819....4R, 2016ApJ...833..245B}. \nustar\, thus gives us an opportunity to measure what fraction of the local Seyfert 2 population is currently accreting, and thereby constrain the AGN duty cycle. To find Seyfert galaxies without a BLR requires a large sample of galaxies selected based on AGN signatures not blocked by obscuration, such as the warm dust from the torus. The most complete and brightest sample of such galaxies in the local universe is found in the 12~$\mathrm{\mu m}$ sample of galaxies \citep{1989ApJ...342...83S, 1993ApJS...89....1R}. 
This sample contains all galaxies in the second {\it Infrared Astronomical Satellite} \citep[{\it IRAS};][]{1984ApJ...278L...1N} point source catalogue that exceed 0.3~Jy in flux density at 12~$\mathrm{\mu m}$ that are also (a) brighter at 60 and 100~$\mathrm{\mu m}$ than at 12~$\mathrm{\mu m}$, and (b) located at declinations $\mathrm{|\delta|\:\geq}$ 25\degree{}. \cite{2008MNRAS.390.1241B} investigated a subset of Seyfert 2 galaxies from this sample that appeared to be unabsorbed in the X-rays, finding two strong `true' Seyfert 2 candidates, NGC 3147 and NGC 3660. The X-ray spectral properties of the full galaxy sample with \xmm\ data were presented in \cite{2011MNRAS.413.1206B} and \cite{2011MNRAS.414.3084B}. Of the Seyfert 2 galaxies in this sample, 10 show anomalously low observed (i.e., not absorption-corrected) 2-10 keV X-ray luminosities compared to their nuclear [\ion{O}{3}] luminosities \citep[Figure~\ref{fig:Proposal_Plot};][]{2017ApJ...846..102M}. That is, these galaxies have observed 2-10~keV X-ray luminosities significantly less than expected for their [\ion{O}{3}] luminosities based on our fit to the $L_{2-10}$ to $L_{\rm [OIII]}$ relation for the Seyfert~1 galaxies in the {\it IRAS} 12~$\mathrm{\mu m}$ sample: \begin{equation} \label{eq:Sy1line} \log(L_{2-10})=0.95\,\log(L_{\rm [OIII]})+3.89, \end{equation} where the luminosities are in units of ${\rm erg}\, {\rm s}^{-1}$. The X-ray luminosities were derived directly from the observed 2-10~keV fluxes listed in \citet{2008A&A...483..151P}, \citet{2011MNRAS.413.1206B}, and \citet{2011MNRAS.414.3084B}. The 10 anomalously X-ray faint Seyfert 2 galaxies are an order of magnitude below this relation. This makes them candidate Compton-thick AGN, or potentially turned-off Seyfert~2 AGN if their central engines are inactive. High-energy X-ray observations, as possible with \nustar, can distinguish these two scenarios. 
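The selection criterion can be stated compactly: a Seyfert~2 is flagged as an outlier when its observed 2-10~keV luminosity falls more than 1~dex below eq.~(\ref{eq:Sy1line}). A sketch (the luminosities in the example are illustrative, not values from the sample):

```python
def expected_log_lx(log_l_oiii):
    """Seyfert-1 relation of eq. (1): log L_2-10 = 0.95 log L_[OIII] + 3.89."""
    return 0.95 * log_l_oiii + 3.89

def is_xray_faint_outlier(log_l_oiii, log_lx_obs, dex_offset=1.0):
    """True if the observed L_2-10 lies more than dex_offset below the relation."""
    return log_lx_obs < expected_log_lx(log_l_oiii) - dex_offset

# Illustrative numbers (log erg/s); not taken from the actual sample:
print(expected_log_lx(41.0))              # ~42.84
print(is_xray_faint_outlier(41.0, 41.5))  # True
print(is_xray_faint_outlier(41.0, 42.5))  # False
```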
Of the 10 outlier galaxies, all but NGC 5953 had \nustar{} observations (Table \ref{tab:xraydata}) at the time of writing through a combination of archival data and dedicated observations from our Cycle~3 observing program (PID~3321). Three galaxies have already been reported as Compton-thick AGN in the literature based on these \nustar\ observations: NGC~1386 \citep{2015ApJ...805...41B}, NGC~4922 \citep{2017MNRAS.468.1273R}, and IC~3639 \citep{2016ApJ...833..245B}. However, since those publications, \citet{2018ApJ...854...42B} has released the BORUS X-ray spectral model which is designed for analyzing high-energy observations of heavily obscured AGN, allowing us to more accurately constrain the parameters of the obscuring torus. Therefore, we analyze all 9 outlier galaxies with \nustar\ data, including those that have already appeared in the literature. For calculating the distance scales on our images, we adopt the concordance cosmology, $\Omega_{\rm M} = 0.3$, $\Omega_\Lambda = 0.7$ and $H_0 = 70\, {\rm\,km\,s^{-1}\,Mpc^{-1}}$. For computing luminosities in XSPEC, we use the default cosmology, which is $\Omega_{\rm M} = 0.27$, $\Omega_\Lambda = 0.73$ and $H_0 = 70\, {\rm\,km\,s^{-1}\,Mpc^{-1}}$. The 9 galaxies in this sample are very low redshift so the differences between the two cosmologies are negligible. \begin{figure} \centering \plotone{Malkan_2017_Data_Annotated_Dots_Bigger_NGC5005_moved.pdf} \caption{Observed 2-10~keV X-ray luminosity vs. [\ion{O}{3}] luminosity for Seyfert~2 galaxies in the 12 $\mathrm{\mu m}$ sample, based on data from \citet{2017ApJ...846..102M}. The solid red line shows the mean $L_{\rm{2-10}}$ vs $L_{\rm{[OIII]}}$ relation for the Seyfert~1 galaxies in the $\mathrm{12\,\mu m}$ sample (Eq.\ \ref{eq:Sy1line}). The dashed red line is the same line shifted by an order of magnitude down in observed 2-10 keV X-ray luminosity. 
The ten galaxies with $L_{\rm{2-10}}$ more than an order of magnitude lower than the Seyfert~1 relation are labeled. The $L_{\rm{2-10}}$ vs $L_{\rm{[OIII]}}$ relation for Seyferts from \citet{2015MNRAS.454.3622B} is plotted in cyan for comparison.} \label{fig:Proposal_Plot} \end{figure} \smallskip \section{X-Ray Observations and Analysis} \label{sec:obs} The X-ray observations used in this paper are listed in Table~\ref{tab:xraydata}. We include all available \nustar\ data for the 9 X-ray faint galaxies. \nustar\ observes at 3-79~keV, though most sources are not detected out to the highest energies where the sensitivity of \nustar\ declines. Lower energy X-ray data are important for the spectral analysis, and several telescopes provide focused soft X-ray observations (0.5-10 keV). Where available, we preferentially use archival \chandra{} observations due to its sensitivity and high spatial resolution; with its 1\arcmin\ beam (half-power diameter), \nustar{} suffers confusion of off-nuclear point sources with the central AGN, which is particularly problematic for faint nuclei, as is the case for several of the galaxies analyzed here. When \chandra\ data were not available or were insufficient for understanding the true nature of some spectral features, we use archival \swift{} and/or \xmm{} data. All X-ray spectra were grouped with a minimum of one count per bin. We fit the data in XSPEC (version 12.11.1). Due to the low number of counts for all sources, the C-statistic was used for fitting. We subtracted the background instead of modeling it separately, in which case XSPEC uses the modified W-statistic. We next describe the X-ray observations by each satellite in more detail. \subsection{\nustar{}} By design, the entire sample presented here has \nustar{} observations. 
The \nustar{} data were reduced, filtered, and extracted using HEASOFT (version 6.28), the \nustar{} Data Analysis Software (NUSTARDAS; version 2.0.0), and the \nustar\ calibration database (CALDB; version 20200826). For the extractions, we used circular source regions 40\arcsec{} in radius centered on the galaxy nucleus positions and circular background regions 100\arcsec{} in radius. In the spectral fitting, we fixed the \nustar{} normalization constant to unity for FPMA and 1.04 for FPMB, where the latter is based on calibration observations of the bright source 3C~273 reported in \citet{2015ApJS..220....8M}. When multiple FPMA and FPMB observations were available, the normalization constants in the later observations were left as free parameters to account for variability. We used energies from 3 keV to 30 keV from the \nustar{} data for the spectral fitting; above 30 keV, the background dominates over the AGN emission for our sample. \subsection{\chandra{}} Archival \chandra\ ACIS observations were available for 8 of the 9 galaxies, with the exception being NGC~6890. For most of this sample of X-ray faint, nearby galaxies, the sensitive, higher angular resolution \chandra\ observations identify multiple sources within the \nustar\ beam, primarily due to X-ray binaries within the target galaxies. Using CIAO (version 4.12) and the \chandra\ CALDB (version 4.9.1), we extracted \chandra\ spectra of all sources within a 40\arcsec{} radius circular aperture around the core of each galaxy, matching the \nustar{} beam. As detailed in the notes on individual sources below, the \chandra\ aperture sizes varied depending on whether the source was unresolved and/or whether the target was at a larger off-axis angle, where the \chandra\ point spread function degrades. Sources in the \chandra{} images were identified by eye. A circular background region 10\arcsec{} in radius was used for all \chandra{} data.
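For circular apertures like those above, the background counts are scaled by the ratio of extraction-region areas before subtraction (the role played by the BACKSCAL keywords in the spectral files). A minimal sketch of that arithmetic, with hypothetical count totals (the function and its inputs are illustrative, not part of our pipeline):

```python
def net_rate(src_counts, bkg_counts, exposure_s, r_src_arcsec, r_bkg_arcsec):
    """Background-subtracted source count rate (cts/s) for circular apertures.

    The background counts are scaled by the ratio of aperture areas before
    subtraction.  All inputs here are hypothetical, not measured values.
    """
    area_ratio = (r_src_arcsec / r_bkg_arcsec) ** 2  # circle areas scale as r^2
    return (src_counts - bkg_counts * area_ratio) / exposure_s

# NuSTAR-like geometry: 40" source region, 100" background region.
rate = net_rate(src_counts=500, bkg_counts=1250, exposure_s=50_000,
                r_src_arcsec=40.0, r_bkg_arcsec=100.0)
```

With these numbers, the 1250 background counts scale down by a factor of $0.16$, leaving 300 net counts over 50~ks.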
We used energies from 0.5 keV to 8.0 keV from the \chandra{} data for the spectral fitting, and ignored off-nuclear sources with less than 10\% the net count rate of the central AGN. We used all archival \chandra{} data available for these sources, with the exception of NGC~3627, which had a 1.3~ks observation (ObsID: 394) that was ultimately discarded in favor of a much deeper observation (50.3~ks; ObsID: 9548). \subsection{\swift{}} Because NGC~6890 lacked \chandra\ observations, we analyzed data from the X-Ray Telescope (XRT) on \swift{} for this galaxy. We extracted the data using HEASOFT (version 6.28), the \swift{} XRT Data Analysis Software (SWXRTDAS; version 3.6.0), and the \swift\ CALDB (version 20200724). We used circular source regions of 25\arcsec{} radius and background regions of 50\arcsec{} radius for the spectral extraction. We used energies from 0.3 keV to 10.0 keV from the \swift{} data for the spectral fitting. \subsection{\xmm{}} We used \xmm{} data for NGC 5005 and NGC 6890, the former to further investigate unusual spectral features found in the \nustar{} data, and the latter because no \chandra{} observations exist for the source. We used all three of the European Photon Imaging Camera (EPIC) CCDs --- i.e., pn, MOS1, and MOS2 --- in the spectral fitting. We extracted the data using the \xmm\ Scientific Analysis System (SAS; version 18.0.0). Details on the \xmm{} spectral extractions are in the individual notes on each galaxy (\S\ref{sec:gal}). We used energies from 0.2 keV to 10.0~keV from the \xmm{} data for the spectral fitting. \subsection{X-ray Spectral Models} For each galaxy we began fitting with a simple CONSTANT*TBABS*POWERLAW model in XSPEC. The constant accounts for source variability and cross-normalization differences between the different telescopes; in the text, we refer to this constant as either the cross-calibration coefficient or the normalization constant.
The TBABS component models absorption of X-rays by the interstellar medium of our own Milky Way galaxy, with the Galactic hydrogen column densities along the line of sight to each galaxy taken from \citet{2016A&A...594A.116H}. The POWERLAW component fits a simple powerlaw to the data with two parameters: the spectral index, $\Gamma$, and the normalization, defined as the number of $\mathrm{photons\, keV^{-1}\, cm^{-2}\, s^{-1}}$ at 1 keV in the source reference frame. In luminous, unobscured AGN, Compton upscattering of thermal photons from the accretion disk by the SMBH corona generates a powerlaw X-ray spectrum across our observed range, and this component dominates the X-ray spectrum. In obscured AGN, this component is absorbed by gas, making the observed X-ray spectrum harder (i.e., a lower value of $\Gamma$). For heavily absorbed, Compton-thick AGN, few photons from the intrinsic spectrum escape below 10~keV. However, a small fraction of the intrinsic powerlaw generally escapes unabsorbed \citep[e.g.,][]{2021MNRAS.504..428G}. This scattered, unabsorbed powerlaw component is typically just a few percent of the intrinsic spectrum. In addition to this simple initial model, obscured AGN often exhibit a soft excess in the 0.5-2~keV range that is thought to be due to thermal emission from hot gas along the line of sight. We account for this emission by adding an APEC component, which models X-ray emission from a collisionally ionized plasma. Its parameters are the plasma temperature, elemental abundances, redshift of the source ($z$), and normalization. The APEC normalization is defined as $\frac{10^{-14}}{4\pi [D_{A}(1+z)]^{2}} \int n_{e} N_{\rm H}dV$, where $D_{A}$ is the angular diameter distance to the source, and $n_{e}$ and $N_{\rm H}$ are the electron and hydrogen number densities, respectively. For this analysis, we set the elemental abundances to solar.
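As an illustration of the POWERLAW normalization convention above, the sketch below integrates $E\,N(E)$ analytically to convert a photon normalization $K$ and index $\Gamma$ into a band-limited energy flux; the function name and numerical values are hypothetical, not fitted quantities from this work:

```python
import math

def powerlaw_flux(K, gamma, e_lo, e_hi):
    """Energy flux (erg cm^-2 s^-1) of N(E) = K * E**-gamma between e_lo and
    e_hi (keV), with K in photons keV^-1 cm^-2 s^-1 at 1 keV as in XSPEC.
    Integrates E * N(E) dE analytically."""
    KEV_TO_ERG = 1.602176634e-9
    if abs(gamma - 2.0) < 1e-12:  # the Gamma = 2 integral is logarithmic
        flux_kev = K * math.log(e_hi / e_lo)
    else:
        flux_kev = K * (e_hi ** (2.0 - gamma) - e_lo ** (2.0 - gamma)) / (2.0 - gamma)
    return flux_kev * KEV_TO_ERG

# Hypothetical values: K = 1e-4, Gamma = 1.8, over the 2-10 keV band.
f_2_10 = powerlaw_flux(1e-4, 1.8, 2.0, 10.0)
```

The $\Gamma = 2$ special case is handled separately because the integrand reduces to $K/E$, whose integral is logarithmic.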
Obscured AGN also typically show a prominent neutral Fe~K-alpha line at 6.4~keV and a Compton hump at $\sim 20$~keV. These features arise from reflected emission and scattering off gas around the central engine. The gas is believed to be toroidal in geometry and is presumed to be related to the cooler, more extended dusty torus that is responsible for AGN obscuration at visible wavelengths and AGN thermal emission at MIR wavelengths. We fit the iron line and Compton hump by adding a BORUS model to the overall spectral model, which allows us to constrain the geometry of the torus. BORUS models torus reprocessing of an intrinsic SMBH corona powerlaw spectrum. Its free parameters are the spectral index of the intrinsic powerlaw ($\Gamma$), the high-energy cutoff, the torus hydrogen column density ($N_{\rm H}$), the torus covering factor (defined as the cosine of the opening angle of the torus), the inclination angle of the torus ($\theta_{\rm inc}$), the relative abundance of iron compared to the solar abundance, the redshift to the source ($z$), and the normalization (defined in the same way as for the POWERLAW model). We consistently set the high-energy cutoff to 500~keV and the iron abundance to solar. We also set the spectral indices of the BORUS model and the POWERLAW model to be the same in all cases except NGC~6890. In the case of an AGN with a BORUS component, the POWERLAW component represents the fraction of the intrinsic powerlaw that is scattered and transmitted through the torus, and so it should have the same spectral index as the BORUS component. We also tried including a ZTBABS component in our fits; ZTBABS is similar to TBABS but represents absorption by hydrogen at the source, rather than in our Galaxy. However, none of the AGN in this sample ultimately required it.
As noted below, the spectral fits of a few extranuclear X-ray sources did improve upon including a ZTBABS component. For NGC 5005 we tried a ZGAUSS component in addition to a BORUS component. This model represents a Gaussian emission line profile. Its free parameters are the source frame line energy in keV, the source frame line width in keV, the redshift to the source, and the normalization (which is defined as the total photons $\mathrm{cm^{-2}\, s^{-1}}$ in the emission line in the source frame). A ZGAUSS model was ultimately preferred over a BORUS model for this source. Lastly, for NGC 3627 we used a CUTOFFPL instead of a POWERLAW component for the extranuclear point sources in the \nustar{} beam. This model component is the same as the POWERLAW component except that it includes an exponential rolloff, $KE^{-\Gamma}\, \exp(-E/\beta)$, where $K$ is the normalization, $E$ is the energy, $\Gamma$ is the spectral index, and $\beta$ is the e-folding energy of the rolloff. \begin{deluxetable}{lcccccc} \tablecaption{List of X-ray observations.}\label{tab:xraydata} \tablewidth{0pt} \tablehead{ \colhead{Target} & \colhead{R.A., Dec.} & \colhead{Observatory} & \colhead{ObsID} & \colhead{Date} & \colhead{Net Exposure Time} & \colhead{Net Count Rate}\\ \colhead{} & \colhead{(J2000)} & &\colhead{} & \colhead{} & \colhead{(ks)} & \colhead{(cts ${\rm ks}^{-1}$)} } \startdata NGC 1386 & 03:36:46.18, $-$35:59:57.87 & \chandra{} & 4076 & 2003-11-19 & 19.6 & 52.5\\ { } & { } & - & 12289 & 2011-4-13 & 17.3 & 48.7\\ { } & { } & - & 13185 & 2011-4-13 & 29.7 & 45.4\\ { } & { } & - & 13257 & 2011-4-14 & 33.8 & 47.0\\ { } & { } & \nustar{} & 60001063002 & 2013-7-9 & 18.8/18.4 & 9.2/10.2\\ { } & { } & - & 60201024002 & 2016-5-11 & 25.4/25.8 & 9.9/9.2\\ NGC 3627 & 11:20:14.96, +12:59:29.54 & \chandra{} & 9548 & 2008-3-31 & 49.6 & 6.1\\ { } & { } & \nustar{} & 60371003002 & 2017-12-23 & 49.1/48.9 & 3.3/2.3\\ NGC 3982 & 11:56:28.13, +55:07:30.86 & \chandra{} & 4845 & 2004-1-3 & 9.2 &
6.6\\ { } & { } & \nustar{} & 60375001002 & 2017-12-5 & 30.7/31.0 & 5.8/4.7\\ NGC 4501 & 12:31:59.161, +14:25:13.39 & \chandra{} & 2922 & 2002-12-9 & 17.1 & 11.7\\ { } & { } & \nustar{} & 60375002002 & 2018-1-28 & 58.0/59.4 & 4.2/3.4\\ { } & { } & - & 60375002004 & 2018-5-24 & 58.5/58.2 & 3.5/3.7\\ IC 3639 & 12:40:52.85, $-$36:45:21.11 & \chandra{} & 4844 & 2004-3-7 & 8.7 & 31.5\\ { } & { } & \nustar{} & 60001164002 & 2015-1-9 & 56.1/55.7 & 8.3/8.1\\ NGC 4922 & 13:01:24.90, +29:18:40.0 & \chandra{} & 4775 & 2004-11-2 & 3.8 & 11.8\\ { } & { } & - & 15065 & 2013-11-2 & 14.9 & 9.3\\ { } & { } & - & 18201 & 2016-3-6 & 5.8 & 10.7\\ { } & { } & \nustar{} & 60101074002 & 2015-11-9 & 20.2/20.1 & 4.2/2.8\\ NGC 5005 & 13:10:56.23, +37:03:33.14 & \chandra{} & 4021 & 2003-8-19 & 4.92 & 54.3\\ { } & { } & \xmm{} & 0110930501 & 2002-12-12 & 8.7/13.1/13.1 & 297.5/69.1/70.9\\ { } & { } & \nustar{} & 60001162002 & 2014-12-16 & 48.9/48.3 & 5.8/5.4\\ Mrk 463 & 13:56:02.87, +18:22:19.48 & \chandra{} & 4913 & 2004-6-11 & 49.3 & 24.3\tablenotemark{a}\\ { } & { } & - & 18194 & 2016-3-10 & 9.8 & 16.1\tablenotemark{a}\\ { } & { } & \nustar{} & 60061249002 & 2014-1-1 & 23.9/23.8 & 2.3/2.2\\ NGC 6890 & 20:18:18.10, $-$44:48:24.21 & \xmm{} & 0301151001 & 2005-9-29 & 0.9/7.5/7.8 & 131.1/26.3/28.3\\ { } & { } & \swift{} & 00088188001 & 2018-3-6 & 1.7 & 11.13\\ { } & { } & - & 00088188002 & 2018-5-25 & 2.0 & 20.1\\ { } & { } & \nustar{} & 60375003002 & 2018-5-25 & 34.6/34.5 & 59.5/56.2\\ \enddata \tablecomments{Net count rates for \chandra{} data are for the AGN core only. Exposure times and net count rates for \nustar{} observations are FPMA/FPMB. 
Exposure times and net count rates for \xmm{} observations are pn/MOS1/MOS2.} \tablenotetext{a}{For the brighter, eastern component of this merger system (see \S~3.8).} \end{deluxetable} \section{The Individual Galaxies} \label{sec:gal} We now discuss each of the nine galaxies in our sample individually, providing brief notes about the galaxy and then details of the X-ray observations and analysis. \subsection{NGC 1386} NGC 1386 is a barred spiral galaxy in the Fornax Cluster \citep{1989AJ.....98..367F} with prominent dust lanes, a ring of \ion{H}{2} regions, and AGN-ionized gas plumes visible in {\it Hubble} imagery of its central regions \citep{2000ApJS..128..139F}. It is optically classified as a Seyfert 2 galaxy \citep[e.g.,][]{1980ApJ...235..761P, 2011MNRAS.414.3084B} but it has also been classified as an S1i by \citet{2006A&A...455..773V} on the basis of a broad Paschen-beta (Pa$\beta$) component evident in its near-infrared (NIR) spectrum. At MIR wavelengths, NGC 1386 shows extended elliptical or bar-like emission \citep{2014MNRAS.439.1648A}. \citet{2014MNRAS.438.3434R} found no polycyclic aromatic hydrocarbon (PAH) features in its {\it Spitzer} nuclear spectrum, an absence likely attributable to ionization by the AGN. The AGN is a water megamaser source \citep{1997AAS...19110402B}; such sources typically show higher levels of obscuration \citep[e.g.,][]{2006A&A...450..933Z, 2016A&A...589A..59M}. \citet{2015ApJ...806...84L} reports that NGC 1386 has a mass outflow rate of $>1\,M_{\odot}\, {\rm yr}^{-1}$ and shows complex gas kinematics at its center, likely caused by an ionization cone intersecting the galactic disk at an angle. \citet{2017MNRAS.470.2845R} found even stronger outflows, comparable to those of a strong AGN, with a mass loss rate of $11 M_{\odot}\, {\rm yr}^{-1}$.
The outflow takes the form of two expanding shells of gas that are coincident with the axis of the radio emission, implying they are likely powered by a radio jet rather than simply by the AGN radiation. Between the broad Pa$\beta$ emission line and the radio maser activity, the broadband properties of NGC~1386 suggest a currently active, obscured Seyfert~2 galaxy. In the X-rays, \citet{2005MNRAS.356..295G} analyzed its \xmm{} data and concluded that the spectrum was best fit by either scattering and transmission components, or by thermal and reflection components. \citet{2006AA...448..499B} confirmed a reflection-dominated model was the best fit based on \chandra{} data, but concluded that spectral lines visible in the soft X-ray EPIC observations were more likely due to scattering off photoionized plasma rather than thermal emission. \citet{2012ApJ...758...82L} presented a joint analysis of \chandra{} and \xmm{} data in the 0.5-2.0 keV range and found that it was best fit with a two-temperature APEC model, indicating the presence of two thermal gas components, one with $kT \sim$ 0.13~keV and one with $kT \sim$ 0.67~keV. They noted this was similar to X-ray observations of starburst galaxies \citep[e.g.,][]{1998ApJS..118..401D, 2004ApJS..151..193S}. In addition to two APEC components, their model also contained two powerlaw components with spectral indices tied together, each subject to both Galactic absorption and absorption at the source. The latter was found to be $N_{\rm H}\:=\:3.14\times10^{23}\:{\rm cm}^{-2}$, and \citet{2012ApJ...758...82L} measured the AGN contribution to the 0.5-2.0 keV X-ray luminosity to be $\approx 70\%$. Recently, \citet{2021ApJ...910...19J} reported \chandra\ detection of extended hard X-ray emission across the ionization cones of NGC~1386. \citet{2011MNRAS.413.1206B} identified NGC 1386 as Compton-thick on the basis of its \xmm{} data, which shows a strong Fe K-alpha line ($\mathrm{EW_{6.4}=1710}$~eV in their model).
They confirmed it was reflection-dominated, and measured a hydrogen column density of $N_{\rm H}\:=\:1.51\times10^{24}\:{\rm cm}^{-2}$. Adding data taken by \nustar{} to the existing \xmm{} spectra, \citet{2015ApJ...805...41B} found a slightly higher column density, $N_{\rm H}\:=\:5.61\times10^{24}\:{\rm cm}^{-2}$. \citet{2016A&A...589A..59M} found similar results using a combination of a MyTORUS model \citep{2009MNRAS.397.1549M} and an emission line component at 6.5~keV. \subsubsection{X-ray Observations and Data Extraction} NGC~1386 was observed twice by \nustar\ and four times by \chandra; details, including observation dates and exposure times, are in Table~\ref{tab:xraydata}. Figure~\ref{fig:NGC1386_image} presents the third \chandra{} observation and the second \nustar{} FPMA observation with the extraction regions overlaid. The AGN \chandra{} spectrum was extracted with a circular source region 5.72\arcsec{} in radius. In addition to the AGN core, five extranuclear point sources in the \nustar{} beam were present in all four \chandra{} images. They were extracted using circular source regions 1.5\arcsec{} in radius. Since the count rates for all these sources were less than 10\% that of the core, they were ignored in the X-ray spectral fitting. \begin{figure} \centering \plotone{NGC1386_labeled.pdf} \caption{\chandra{} (ObsID: 13257) and \nustar{} (ObsID: 60201024002) FPMA images of NGC 1386. The larger, 40\arcsec{} radius circle denotes the \nustar{} extraction region, while the smaller circles denote the \chandra{} extraction regions. Five extra-nuclear point sources were visible in the \chandra{} observations, though all were sufficiently faint (i.e., $< 10$\% the flux of the nucleus) to be ignored in the AGN spectral analysis.} \label{fig:NGC1386_image} \end{figure} \subsubsection{X-ray Spectral Fitting} We first modeled the spectrum with TBABS*POWERLAW, which yielded a C-stat/d.o.f.\ of 2980.46/1698. 
A strong Fe~K-alpha emission line is evident in the unfolded spectrum (Figure \ref{fig:NGC1386_uf}), as is a prominent Compton hump at 10-20 keV. We added a BORUS component to the original TBABS*POWERLAW fit to account for these reflection features, fixing the spectral index of the BORUS component to that of the POWERLAW component. This improved C-stat/d.o.f.\ to 2067.40/1694. Strong residuals above the powerlaw component were present at energies 0.5-2.0 keV, so an APEC component was added, resulting in C-stat/d.o.f.\ = 1679.64/1692. While this is a statistically good fit, $N_{\rm H}$ remains unconstrained. We therefore opted to freeze $\cos(\theta_{\rm inc})$ at its best-fit value of 0.45 before refitting. The final fit had C-stat/d.o.f.\ = 1681.69/1693 and the parameters of the best-fit model are presented in Table \ref{tab:NGC1386params}. The 90\% confidence interval for the BORUS parameter $\log N_{\rm H}$ was unconstrained at the upper end, so it is listed as $\geq 24.5$. The powerlaw spectral index hit the upper bound of 2.6 in the model, so it is listed as $\geq 2.6$. The best-fit model is plotted over the unfolded spectrum in Figure \ref{fig:NGC1386_uf}.
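The C-stat values quoted in this section come from XSPEC's implementation of the Cash statistic for Poisson-distributed counts; a minimal sketch of the underlying statistic follows (XSPEC's background-subtracted W-variant differs in detail, and the counts below are hypothetical):

```python
import math

def cash_stat(data, model):
    """Cash statistic C = 2 * sum(m_i - d_i + d_i * ln(d_i / m_i)) for
    Poisson-distributed counts d_i and model predictions m_i; bins with
    d_i = 0 contribute 2 * m_i.  All counts here are hypothetical."""
    c = 0.0
    for d, m in zip(data, model):
        c += m - d
        if d > 0:
            c += d * math.log(d / m)
    return 2.0 * c

counts = [3, 0, 5, 2, 1]               # observed counts per bin
predicted = [2.5, 0.4, 4.8, 2.2, 1.1]  # model-predicted counts per bin
c_value = cash_stat(counts, predicted)
```

As with chi-squared, a lower value indicates a better fit, which is why each added component above is judged by the drop in C-stat relative to the degrees of freedom spent.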
\begin{deluxetable}{@{\extracolsep{10pt}}lccccccc@{}} \tablecaption{Parameters for best-fit NGC 1386 model.}\label{tab:NGC1386params} \tablewidth{2pt} \tablehead{\multicolumn{2}{c}{APEC} & \multicolumn{4}{c}{BORUS} & \multicolumn{2}{c}{POWERLAW}\\ \cline{1-2} \cline{3-6} \cline{7-8} \colhead{$kT$} & \colhead{Norm} & \colhead{$\log(N_{\rm H})$\tablenotemark{a}} & \colhead{$\mathrm{CF_{Tor}}$\tablenotemark{b}} & \colhead{$\cos(\theta_{\rm inc})$\tablenotemark{c}} & \colhead{Norm} & \colhead{$\Gamma$} & \colhead{Norm}\\ \colhead{(keV)} & \colhead{($10^{-5}$ cts $\mathrm{s^{-1}\:keV^{-1}}$)} & \colhead{} & \colhead{} & \colhead{} & \colhead{(cts $\mathrm{s^{-1}\:keV^{-1}}$)} & \colhead{} & \colhead{($10^{-5}$ cts $\mathrm{s^{-1}\:keV^{-1}}$)} } \startdata $0.82\pm{0.03}$ & $2.28^{+0.44}_{-0.38}$ & $\geq{24.5}$ & $0.49\pm{0.01}$ & $=0.45$\tablenotemark{d} & $0.09\pm{0.01}$ & $\geq 2.6$ & $5.65^{+0.98}_{-0.38}$ \enddata \tablecomments{Error bars represent 90\% confidence intervals. The \chandra{} normalization constant values were $0.91^{+0.16}_{-0.13}$ (ObsID: 4076), $0.97^{+0.17}_{-0.14}$ (ObsID: 12289), $0.90^{+0.15}_{-0.10}$ (ObsID: 13185), and $0.53^{+0.16}_{-0.08}$ (ObsID: 13257). The second \nustar{} FPMA and FPMB normalization constants were $0.99^{+0.15}_{-0.11}$ and $0.98^{+0.16}_{-0.11}$ (ObsID: 60201024002).} \tablenotetext{a}{$N_{\rm H}$ in units of $\mathrm{cm^{-2}}$.} \tablenotetext{b}{Covering factor of torus, equivalent to cosine of torus opening angle.} \tablenotetext{c}{Cosine of torus inclination angle.} \tablenotetext{d}{Frozen at this value.} \end{deluxetable} \begin{figure} \plotone{NGC1386_unfolded_spectrum_delchi_v3.pdf} \caption{Unfolded spectrum and best-fit model for NGC 1386. Black, red, green, and blue denote \chandra{} data (ObsIDs 4076, 12289, 13185, and 13257). Cyan and magenta denote FPMA and FPMB data for \nustar{} observation 60001063002.
Yellow and orange denote FPMA and FPMB for \nustar{} observation 60201024002.} \label{fig:NGC1386_uf} \end{figure} \subsection{NGC 3627} NGC 3627 (also known as Messier 66) is a barred spiral galaxy in the Leo triplet of galaxies, along with NGC 3623 and NGC 3628, and is undergoing tidal interactions with them \citep{1987ApJ...320..145S, 1991ApJ...370..176H, 1993ApJ...418..100Z, 1996A&A...306..721R}. It exhibits low-luminosity nuclear activity, though its status as a true SMBH-powered AGN (as opposed to simply having a nuclear starburst) has been the subject of debate in the literature. Its optical activity type has been variously characterized as a transition object \citep[e.g.][]{2005ApJ...620..113D}, Seyfert 2 \citep[e.g.][]{2011MNRAS.414.3084B}, ambiguous between the transition object and Seyfert 2 classes \citep{1997ApJS..112..315H}, or simply a LINER \citep{2006A&A...455..773V}. NGC 3627 presents a complex profile in the MIR, with diffuse emission across the entire galaxy \citep{2014MNRAS.439.1648A} from which a compact nuclear source cannot be clearly separated. In the X-rays, NGC 3627 was first detected by {\it ASCA} and {\it ROSAT}. \citet{2001MNRAS.324..737R} examined these observations and found that the spectrum was described well by a soft thermal component (0.5-1 keV) and a powerlaw component (2-5 keV). They measured the flux ratio between these components to be 0.56, indicating comparable intensities, and argued that the two spectral components therefore likely had a common, non-AGN origin. They noted this flux ratio was very similar to the {\it ASCA} flux ratio at the same energies for the starburst galaxy NGC 253. Therefore, they argued that NGC 3627 was unlikely to be a true AGN.
The first \chandra{} observation of NGC 3627, a 1.3~ks snapshot exposure, was initially published by \citet{2001ApJ...549L..51H}, who did not detect a dominant unresolved point source in the galaxy's core, only a scattering of sources. They therefore concluded that NGC 3627 was not a true AGN. Some later papers also suspected NGC 3627 not to be a true AGN, partially on this basis \citep[e.g.][]{2006A&A...455..173P, 2009A&A...506.1107G}; \citet{2006A&A...455..173P} put an upper limit of $L_{2-10}<7.6\times10^{37}\:{\rm erg}\:{\rm s}^{-1}$ on the nuclear 2-10~keV luminosity. In contrast, and based on the same observations, \citet{2009ApJ...699..281Z} argued that the \chandra{} image does show a dominant central point source within 1\arcsec{} of the galaxy's center, and they report a significantly higher 0.3-8~keV X-ray luminosity of $L_{0.3-8}=9.1\times10^{39}\:{\rm erg}\:{\rm s}^{-1}$. In NGC 3627's sole \xmm{} observation, \citet{2006A&A...455..173P} observed a point source at the galaxy nucleus, but noted it was equal in brightness to a second point source 10\arcsec{} away. Indeed, both \citet{2006A&A...455..173P} and \citet{2013A&A...556A..47H} agree that the \xmm{} data are heavily contaminated by emission from sources other than the galaxy core. \citet{2009A&A...506.1107G} failed to find an unresolved point source in the harder bands observed by \xmm{} (4.5-8.0 keV). Their estimate of the 2-10 keV luminosity is $L_{2-10} \sim 10^{39}\:{\rm erg}\:{\rm s}^{-1}$ based on the \xmm{} data, assuming a powerlaw index of $\Gamma = 1.8$ and Galactic absorption. They nonetheless identified NGC 3627 as a Compton-thick AGN candidate on the basis of its $L_{2-10}/L_{\rm{[OIII]}}$ ratio \citep{2009ApJ...704.1570G}. In contrast, \citet{2011MNRAS.413.1206B} measured an ionized hydrogen column density of $5.01 \times10^{21}\, \mathrm{cm^{-2}}$ in the \xmm{} spectrum, which would clearly place it in the Compton-thin regime.
\citet{2011MNRAS.413.1206B} modeled the \xmm{} observation of NGC 3627 with a soft thermal emission component and an ionized absorber component in addition to Galactic absorption and powerlaw components. A second, deeper (50.3~ks) \chandra{} observation of NGC 3627 was taken in 2008 as part of a campaign to observe the \textit{Spitzer} Infrared Nearby Galaxy Survey (SINGS) catalogue in X-rays \citep{2011ApJ...731...60G}. In this observation, one can see an unresolved nuclear point source embedded in diffuse emission \citep{2013ApJ...776...50C}. \citet{2020ApJ...905...29E} fit the \nustar{} data for NGC 3627 with a partial covering absorber that included Galactic absorption. They measured an absorbing hydrogen column density of $1.8\times10^{24}\, \mathrm{cm^{-2}}$, which would put the AGN in the Compton-thick category. After correcting for absorption, they classified NGC 3627 as an AGN in the early stages of fading, based on its being under-luminous in X-rays compared to the MIR. In their interpretation, NGC 3627 is observed at the beginning of the fading arc of the AGN duty cycle. \subsubsection{X-ray Observations and Data Extraction} NGC~3627 has been observed by \chandra{} twice, for 1.3~ks on 1999 November 3 (ObsID: 394) and for 50.3~ks on 2008 March 31. Given that the first, significantly shorter exposure does not clearly detect any sources at the galaxy center, we ignore those data in our analysis. Table~\ref{tab:xraydata} presents details of the latter \chandra\ observation, as well as the single \nustar\ observation of this galaxy to date. The \chandra{} and \nustar{} FPMA images of NGC 3627 are shown in Figure~\ref{fig:NGC3627_zoomout_image}. There are a large number of sources visible in the \chandra{} image, with one diffuse, irregularly shaped source associated with the nucleus.
In addition, there is a bright point source approximately 1.5\arcmin\ to the southeast whose brightness dwarfs that of the nucleus as well as the numerous point sources within the \nustar\ beam (Figure~\ref{fig:NGC3627_zoomin_image}). This source, associated with the ultraluminous X-ray source (ULX) M66~X-1 \citep{2011MNRAS.416.1844W}, dominates in the \nustar{} image, while the AGN, in contrast, is not clearly visible. Indeed, we used the \chandra-derived astrometric offset between the ULX and the AGN to place the AGN extraction aperture in the \nustar\ image. Figure \ref{fig:NGC3627_zoomin_image} presents a zoomed-in version of the \chandra{} image, highlighting the 22 off-nuclear point sources visible within the \nustar{} beam. The \chandra{} AGN spectrum was extracted using a circular source region 2.55\arcsec{} in diameter. The off-nuclear point sources were extracted using circular source regions 1.5\arcsec{} in diameter for sources 3, 5, 8, 12, 14, and 20; 1.2\arcsec{} in diameter for sources 4, 11, 15, 18, and 19; and 1\arcsec{} in diameter for the remaining sources. Because the nucleus is so faint, 15 \chandra\ point sources within the \nustar{} beam are brighter than 10\% of its count rate. For all other galaxies in our sample, we do joint fitting of the AGN and all off-nuclear point sources within the \nustar\ beam above that threshold. However, fitting this many sources jointly would be prohibitive and most of the \chandra\ flux within the \nustar\ beam comes from the brightest of these off-nuclear sources. Therefore, only the ten brightest point sources are included in the spectral fitting (i.e., sources 4, 5, 7, 8, 9, 10, 12, 14, 15, and 20). \begin{figure} \centering \plotone{NGC3627_zoomed_out_labeled.pdf} \caption{\chandra{} (left; ObsID: 9548) and \nustar{} FPMA (right) images of NGC 3627. 
The larger, 40\arcsec{} radius red circle denotes the \nustar{} extraction region for the AGN, while the smaller red circle denotes the \chandra{} extraction region for the AGN. The ULX M66~X-1 is highlighted with a green circle (3.75\arcsec{} diameter in \chandra; 40\arcsec{} radius in \nustar{}). M66~X-1 dominates the \nustar{} image, while the AGN is not clearly detected by \nustar{}.} \label{fig:NGC3627_zoomout_image} \end{figure} \begin{figure} \centering \plotone{NGC3627_zoomed_in_labeled.pdf} \caption{Zoomed-in and re-scaled \chandra{} image of NGC 3627 highlighting and labeling the plethora of off-nuclear point sources (small red circles) within the larger 40\arcsec{} radius \nustar{} beam. The ULX M66~X-1 is visible to the southeast (green circle).} \label{fig:NGC3627_zoomin_image} \end{figure} \subsubsection{X-ray Spectral Fitting} Due to the large number of point sources present in the \nustar{} beam, we first fit the off-nuclear point sources with their \chandra{} data alone. We then initially fit this galaxy with all parameters for the off-nuclear sources frozen based on their \chandra\ data, thereby avoiding an excess of free parameters, which can drive parameter values implausibly high or low. We started with a simple model consisting of a CONSTANT, a TBABS component frozen to the Galactic hydrogen column density, and 11 POWERLAW components, one for the AGN and one for each of the 10 brightest point sources, where the latter were frozen to the best-fit values from \chandra. This yielded a C-stat/d.o.f.\ of 1422.72/1347. However, this fit substantially overpredicted the \nustar{} data, likely because several of the off-nuclear point sources had hard spectra over the \chandra\ range that, when extrapolated, overpredicted their brightness at the higher energies of \nustar. We therefore decided to change the POWERLAW component in the off-nuclear point sources to a CUTOFFPL model.
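The effect of the switch is straightforward: a hard powerlaw fit over the \chandra{} band keeps rising relative to a rolled-off spectrum at \nustar{} energies, whereas the CUTOFFPL form $KE^{-\Gamma}\,\exp(-E/\beta)$ suppresses the high-energy flux. A sketch with hypothetical parameters (the $\beta = 1.4$~keV value echoes the frozen cutoff later adopted for Source 5, but the comparison itself is illustrative only):

```python
import math

def powerlaw(E, K, gamma):
    """Photon density K * E**-gamma (photons keV^-1 cm^-2 s^-1)."""
    return K * E ** -gamma

def cutoffpl(E, K, gamma, beta):
    """Powerlaw with an exponential rolloff of e-folding energy beta (keV)."""
    return K * E ** -gamma * math.exp(-E / beta)

# Hypothetical hard source: Gamma = 0.9 with a rolloff at beta = 1.4 keV.
K, gamma, beta = 1e-5, 0.9, 1.4
suppression = {E: cutoffpl(E, K, gamma, beta) / powerlaw(E, K, gamma)
               for E in (5.0, 10.0, 20.0)}  # fraction of the pure powerlaw left
```

With these parameters the cutoff model retains only a few percent of the extrapolated powerlaw flux at 5~keV, and far less at 10-20~keV, which is how switching to CUTOFFPL removes the excess predicted \nustar{} counts.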
We started with the high-energy cutoffs frozen at 500 keV for all the sources, and tested whether thawing each one would decrease C-stat or not. Out of all the sources, only thawing the cutoffs on Sources 5, 8, 12, and 14 improved the fit. This fit had a C-stat/d.o.f.\ of 1182.16/1318. We fit the initial model with Sources 5, 8, 12 and 14's high-energy cutoffs thawed, then froze the high-energy cutoffs before refitting. We then added an APEC component to the model, as there is an excess between 0.5 and 2 keV. This led to a C-stat/d.o.f.\ of 1154.65/1320. The final parameter values are tabulated in Table \ref{tab:NGC3627params}. The spectrum and best fit final model are plotted in Figure \ref{fig:NGC3627_uf}. \begin{deluxetable}{@{\extracolsep{10pt}}lccccc@{}} \tablecaption{Parameters for best-fit NGC 3627 model.}\label{tab:NGC3627params} \tablewidth{2pt} \tablehead{\colhead{} & \multicolumn{2}{c}{APEC} & \multicolumn{3}{c}{CUTOFFPL}\\ \cline{2-3} \cline{4-6} \colhead{Source} & \colhead{$kT$} & \colhead{Norm} & \colhead{$\Gamma$} & \colhead{Cutoff} & \colhead{Norm}\\ & \colhead{(keV)} & \colhead{($10^{-6}$ cts $\mathrm{s^{-1}\:keV^{-1}}$)} & \colhead{} & \colhead{(keV)} & \colhead{($10^{-6}$ cts $\mathrm{s^{-1}\:keV^{-1}}$)} } \startdata AGN & $0.83^{+0.12}_{-0.14}$ & $1.31^{+0.70}_{-0.43}$ & $1.04\pm{0.23}$ & -- & $2.29^{+1.54}_{-0.57}$\\ Src 4 & {} & {} & $1.31^{+0.22}_{-0.21}$ & 500\tablenotemark{a} & $2.2^{+2.3}_{-0.4}$\\ Src 5 & {} & {} & $-0.97\pm{0.32}$ & 1.40\tablenotemark{a} & $5.77^{+2.00}_{-4.68}$\\ Src 7 & {} & {} & $1.61^{+0.51}_{-0.46}$ & 500\tablenotemark{a} & $0.9^{+4.0}_{-0.2}$\\ Src 8 & {} & {} & $0.9^{+0.65}_{-0.64}$ & 0.51\tablenotemark{a} & $26.2^{+26.3}_{-17.6}$\\ Src 9 & {} & {} & $1.14^{+0.46}_{-0.41}$ & 500\tablenotemark{a} & $1.0^{+2.3}_{-0.4}$\\ Src 10 & {} & {} & $1.18^{+0.22}_{-0.20}$ & 500\tablenotemark{a} & $2.6^{+1.9}_{-0.5}$\\ Src 12 & {} & {} & $-0.56\pm{0.23}$ & 1.07\tablenotemark{a}& $26.8^{+7.4}_{-20.5}$\\ Src 14 & {} & {} & 
$1.39^{+0.19}_{-0.18}$ & 2.76\tablenotemark{a} & $7.7^{+26.2}_{-0.8}$\\ Src 15 & {} & {} & $2.26^{+0.39}_{-0.37}$ & 500\tablenotemark{a} & $1.84^{+6.89}_{-0.32}$\\ Src 20 & {} & {} & $1.54^{+0.27}_{-0.26}$ & 500\tablenotemark{a} & $2.7^{+3.4}_{-1.3}$\\ \enddata \tablecomments{The AGN was fit using a POWERLAW model (i.e., not a CUTOFFPL model). The CUTOFFPL normalizations for sources 4, 7, 9, 10, and 20 were estimated with the STEPPAR command. The \chandra{} instrumental normalization constants on each of the sources could not be constrained.} \tablenotetext{a}{Frozen at this value.} \end{deluxetable} \begin{figure} \plotone{NGC3627_unfolded_delchi_final.pdf} \caption{Unfolded spectrum and best-fit model for NGC 3627. Black denotes \chandra{} data and model for the AGN core. Red and green denote \nustar{} FPMA and FPMB data and models. The \chandra{} data and models for the off-nuclear sources are depicted in light grey.} \label{fig:NGC3627_uf} \end{figure} \subsection{NGC 3982} NGC 3982 is a barred spiral galaxy, classified as a Seyfert~1.9 since it possesses broad H$\mathrm{\alpha}$ but lacks broad H$\mathrm{\beta}$ in its optical spectrum \citep[e.g.][]{1997ApJS..112..315H, 2006A&A...455..773V}. Seyfert~1.9 galaxies are believed to be highly obscured, and are often lumped together with Seyfert~2 AGN in population studies \citep[e.g.][]{2001ApJ...554L..19T}. The nucleus of NGC~3982 is surrounded by a partial ring of star formation, at a radius of approximately 500 pc \citep{2017MNRAS.469.3405B}. At MIR wavelengths, NGC~3982 is a compact source with extended emission of unclear origin \citep{2014MNRAS.439.1648A}. \citet{2010ApJ...709.1257T} concluded that 81\% of the 19 $\mathrm{\mu m}$ emission originates from the AGN. \citet{2020ApJ...905...29E} identify NGC~3982 as a candidate fading AGN. 
In the X-rays, NGC 3982 was first observed with {\it ASCA} \citep{2001ApJ...556L..75M} and was later serendipitously observed with \chandra{} as part of the \chandra{} Deep Field North survey \citep{2003AJ....126..539A}. The \chandra{} spectrum was first analyzed by \citet{2005AA...444..119G}, where the low number of counts hampered attempts to fit the spectrum to a Compton-thick model, though they did report a hydrogen column density $N_{\rm H} >1.6 \times 10^{24}\, {\rm cm}^{-2}$ and a very high Fe K$\mathrm{\alpha}$ equivalent width (8 keV based on their ``local'' fit). These values suggest, though do not confirm, a Compton-thick nature for NGC~3982. Its \chandra{} spectrum was later re-analyzed by \citet{2007ApJ...656..105G} in an attempt to determine whether it was a `true' Seyfert~2, but the low number of counts prevented them from making a robust fit to the spectrum. However, because they did find evidence of photoelectric absorption, they concluded the `true' Seyfert~2 explanation for its 2-10 keV faintness seemed unlikely. \citet{2007ApJ...657..167S} presented a joint fit of \chandra\ and \xmm\ spectra of NGC~3982, where they measured $N_{\rm H} > 10^{24}\, {\rm cm}^{-2}$ and the Fe K$\mathrm{\alpha}$ equivalent width to be 6.31 keV. They therefore classified the AGN as Compton-thick. \citet{2009A&A...500..999A} also analyzed these \xmm{} data and measured somewhat less extreme values, finding $N_{\rm H} = 4.32 \times 10^{23}\, {\rm cm}^{-2}$ and an Fe K$\mathrm{\alpha}$ equivalent width of 0.8 keV. \citet{2011ApJ...729...52L} attempted to update the NGC~3982 Fe K$\mathrm{\alpha}$ properties using archival \chandra{} data and a ZGAUSS model, but they were unable to constrain the parameters. \citet{2012ApJ...758...82L} fit the 0.5-2 keV spectrum with a single APEC and two powerlaw components with the goal of measuring the relative contributions of star formation (APEC) and the AGN (powerlaw) to the soft X-ray luminosity.
The two powerlaws had their spectral indices tied together but separate absorption column densities, representing a partial covering geometry where some of the transmitted X-ray emission is absorbed and the rest is scattered along the line of sight. Adopting an absorption column density of $N_{\rm H} = 4.03 \times 10^{23}\, {\rm cm}^{-2}$ for the second powerlaw component, they estimated that 15\% of the soft X-ray emission was from the AGN. Most recently, \citet{2020ApJ...901..161K} fit NGC 3982's \xmm{} and \nustar{} spectra using a PEXMON model and three variants of a MYTorus model modified according to the procedures in \citet{2012MNRAS.423.3360Y}. The first variant was the standard MYTorus model, while the other two were decoupled versions where the torus viewing angle was fixed to 90 degrees and the two sides of the torus were modeled separately. One model treated the torus as uniform and the other modeled it as patchy. While the PEXMON model resulted in a Compton-thin column density of $N_{\rm H} = 6 \times 10^{23}\, {\rm cm}^{-2}$, the decoupled MYTorus models implied significantly higher, Compton-thick values of $N_{\rm H} = 5.3 \times 10^{24}\, {\rm cm}^{-2}$ for a uniform torus and $N_{\rm H} = 4.5 \times 10^{24}\, {\rm cm}^{-2}$ for a patchy torus. \subsubsection{X-ray Observations \& Data Extraction} NGC 3982 was observed once by \chandra{}, on 2004 January 1 (ObsID: 4845), and once by \nustar{}, on 2017 December 5 (ObsID: 60375001002). The net exposure times were 9.20~ks and 61.67~ks, respectively. In addition to the AGN, Figure~\ref{fig:NGC3982_image}, which presents these images, shows a bright, off-nuclear \chandra\ point source (Source 1) within the \nustar\ beam. For the \chandra\ data, we used a 2.5\arcsec{} radius circular aperture to extract the AGN, and a 1.5\arcsec{} radius circular aperture to extract Source~1. Since the Source 1 net count rate was more than 10\% that of the AGN, we included it in the X-ray spectral analysis.
\begin{figure} \centering \plotone{NGC3982_Annotated.pdf} \caption{\chandra{} and \nustar{} FPMA images of NGC 3982. The larger, 40\arcsec{} radius circle denotes the \nustar{} extraction region, while the smaller circles denote the \chandra{} extraction regions. An off-nuclear point source (Src 1) is visible in the \chandra{} image, and it was bright enough that it had to be accounted for in the spectral fitting.} \label{fig:NGC3982_image} \end{figure} \subsubsection{X-ray Spectral Fitting} We started by fitting Source 1's \chandra{} spectrum alone with a simple POWERLAW model, finding best-fit values for its POWERLAW spectral index of $\Gamma = 1.17$ and normalization of $5.36 \times 10^{-6}\, {\rm cts}\, {\rm s}^{-1}\, {\rm keV}^{-1}$. We then fit the AGN and Source 1 jointly, freezing the spectral parameters of Source 1 to the best-fit values from \chandra{}. We began with TBABS*(POWERLAW+POWERLAW) and found C-stat/d.o.f.\ = 479.68/463. We then added BORUS (C-stat/d.o.f.\ = 398.96/459) and APEC (C-stat/d.o.f.\ = 382.89/457) components to the AGN. $\mathrm{CF_{tor}}$ was unconstrained in this fit, but $\mathrm{\cos(\theta_{inc})}$ was constrained. We froze $\mathrm{\cos(\theta_{inc})}$ to its best-fit value of 0.86 before refitting, which allowed us to place a lower limit on $\mathrm{CF_{tor}}$. The final C-stat/d.o.f.\ is 396.15/458. The parameters of this final fit are tabulated in Table \ref{tab:NGC3982params} and the model is plotted over the X-ray data in Figure \ref{fig:NGC3982_uf}.
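Throughout this section, nested models are compared by the change in C-stat. For binned Poisson counts $d_i$ and model predictions $m_i$, the statistic XSPEC minimizes has a simple closed form; the helper below is our own illustrative sketch of that form, not part of XSPEC:

```python
import math

def cstat(observed, model):
    """Cash-type fit statistic for Poisson-distributed counts, in the form
    XSPEC minimizes as C-stat.  observed: counts per bin; model: predicted
    counts per bin (must be > 0).  Bins with zero counts contribute 2*m."""
    c = 0.0
    for d, m in zip(observed, model):
        if d > 0:
            c += 2.0 * (m - d + d * math.log(d / m))
        else:
            c += 2.0 * m
    return c
```

A perfect prediction gives a zero contribution (`cstat([5], [5.0])` is `0.0`), and for large counts C-stat behaves like $\chi^2$, which is why ratios such as C-stat/d.o.f.\ near unity indicate an acceptable fit.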
\begin{deluxetable}{@{\extracolsep{10pt}}lcccccccc@{}} \tablecaption{Parameters for best-fit NGC 3982 model.}\label{tab:NGC3982params} \tablewidth{2pt} \tablehead{\colhead{} & \multicolumn{2}{c}{APEC} & \multicolumn{4}{c}{BORUS} & \multicolumn{2}{c}{POWERLAW}\\ \cline{2-3} \cline{4-7} \cline{8-9} \colhead{Source} & \colhead{$kT$} & \colhead{Norm} & \colhead{log($N_{\rm H}$)} & \colhead{$\mathrm{CF_{Tor}}$} & \colhead{$\mathrm{\cos(\theta_{inc})}$} & \colhead{Norm} & \colhead{$\Gamma$} &\colhead{Norm}\\ \colhead{} & \colhead{(keV)} & \colhead{($10^{-5}$ cts $\mathrm{s^{-1}\:keV^{-1}}$)} & \colhead{} & \colhead{} & \colhead{} & \colhead{(cts $\mathrm{s^{-1}\:keV^{-1}}$)} & \colhead{} & \colhead{($10^{-6}$ cts $\mathrm{s^{-1}\:keV^{-1}}$)} } \startdata AGN & $0.16\pm{0.03}$ & $2.64^{+1.32}_{-1.07}$ & $\geq{25.3}$ & $\geq{0.92}$& $=0.86$\tablenotemark{a} & $0.15^{+0.05}_{-0.02}$ & $2.48^{+0.06}_{-0.29}$ & $9.64^{+12.0}_{-5.72}$\\ Src 1 & {} & {} & {} & {} & {} & {} & 1.18\tablenotemark{a} & 5.36\tablenotemark{a} \enddata \tablecomments{The instrumental normalization constant for the \chandra{} AGN data was $0.53^{+0.41}_{-0.11}$. The normalization constant for the \chandra{} data of Src 1 was $1.07^{+0.30}_{-0.26}$.} \tablenotetext{a}{Frozen at this value.} \end{deluxetable} \begin{figure} \plotone{NGC3982_unfolded_delchi_src1grey.pdf} \caption{Unfolded spectrum and best-fit model for NGC 3982. Black denotes \chandra{} data of the AGN core. Red and green denote FPMA and FPMB data from the \nustar{} observation of NGC 3982. The \chandra{} data of Src 1 are depicted in light grey.} \label{fig:NGC3982_uf} \end{figure} \subsection{NGC 4501} NGC 4501 (also known as Messier 88) is a spiral galaxy in the Virgo Cluster \citep{1982A&AS...47..505K}. In the optical, it has been classified as a Seyfert 2 \citep[e.g.,][]{1993ApJS...89....1R, 2006A&A...455..773V} but has occasionally been labeled a LINER \citep[e.g.,][]{1999RMxAA..35..187C,2017MNRAS.469.3405B}.
The galaxy has a concurrent starburst based on its MIR spectra \citep{2011MNRAS.414..500H}, though the central regions of the galaxy seem to consist only of evolved stars \citep{2017MNRAS.464..293R,2017MNRAS.469.3405B}. The galaxy is approaching the center of the Virgo cluster and has already become depleted of neutral hydrogen due to ram-pressure stripping \citep[e.g.,][]{2008A&A...483...89V, 2009A&A...502..427V, 2016A&A...587A.108N}. NGC 4501 is radio-loud \citep{2013ApJS..204...23V} and displays a powerlaw SED across the entire 1-10 $\mathrm{\mu m}$ range of its {\it Spitzer} spectrum, probably as a result of synchrotron emission from a jet \citep{2013ApJ...764..159L}. \citet{2010ApJ...709.1257T} report that 70\% of its 19 $\mathrm{\mu m}$ emission comes from the AGN. While these results seem to indicate a strong AGN MIR component, the AGN was barely detectable in subarcsecond MIR images from \citet{2014MNRAS.439.1648A}, and it was not detected in the {\it M}-band ($\mathrm{\lambda_{c}=4.66\, \mu m}$) by the Very Large Telescope (VLT) Infrared Spectrometer and Array Camera \citep[ISAAC;][]{2021ApJ...910..104I}. In the X-rays, NGC 4501 was first detected by {\it ASCA} \citep{2000ApJ...539..161T}, where its spectrum showed no evidence of heavy absorption or Fe K$\mathrm{\alpha}$ emission. \citet{2005ApJ...633...86S} analyzed the \chandra{} observation of NGC 4501 and found it contained multiple X-ray components of equal brightness instead of a dominant hard X-ray component. On this basis they classified NGC 4501 as a non-AGN LINER, though they did note that the lack of a dominant hard X-ray component could be caused by absorbing column densities of $\geq 10^{24}\,\mathrm{cm^{-2}}$. \citet{2012ApJ...758...82L} estimated that approximately 15\% of the soft (0.5-2 keV) X-ray emission in NGC 4501 was from the AGN.
The \xmm{} observations of NGC 4501 were first analyzed in detail by \citet{2006A&A...446..459C}, who found its 0.5-10~keV spectrum could be fit well with a soft thermal component and a powerlaw component. They concurred with \citet{2000ApJ...539..161T} that there was no evidence of heavy absorption. In contrast to these researchers' conclusions, \citet{2008MNRAS.390.1241B} argued that NGC 4501's \chandra{} observation does indeed show a hard X-ray component coincident with the galaxy's optical nucleus. They fit this hard component using a PEXMON model. Using the hard X-ray component to estimate the bolometric luminosity of the AGN, they concluded that the AGN was more likely heavily obscured than intrinsically faint. They also noted that previous studies using \xmm{} data had been hampered by contamination from extranuclear point sources. \subsubsection{X-ray Observations and Data Extraction} NGC~4501 was observed twice by \nustar\ and once by \chandra; details, including observation dates and exposure times, are in Table~\ref{tab:xraydata}. The \chandra{} and \nustar{} images of NGC~4501 are presented in Figure~\ref{fig:NGC4501_image}, with extraction regions overlaid. The \chandra{} AGN spectrum was extracted with a circular source region 4.78\arcsec{} in radius. In addition to the AGN core, eight extra-nuclear sources in the \nustar{} beam are visible in the \chandra{} image. These were extracted from the \chandra\ data with circular source regions 2\arcsec{} in radius, other than the second and eighth sources, which were extracted with circular source regions 1.5\arcsec{} in radius. Of these eight point sources, all but the fifth source (Source~5 in the labeled image) had greater than 10\% the count rate of the AGN core, and so they were included in the X-ray spectral fitting. \begin{figure} \centering \plotone{NGC4501_Annotated.pdf} \caption{\chandra{} and \nustar{} FPMA (ObsID: 60375002002) images of NGC 4501.
The larger, 40\arcsec{} radius circle denotes the \nustar{} extraction region, while the smaller circles denote the \chandra{} extraction regions. Eight off-nuclear point sources are visible in the \chandra{} image; all but Src 5 are sufficiently bright that they are included in the X-ray spectral fitting.} \label{fig:NGC4501_image} \end{figure} \subsubsection{X-ray Spectral Fitting} Because the large number of extra point sources would create too many free parameters for XSPEC to fit, we decided to repeat the procedure we initially attempted for NGC 3627's off-nuclear point sources. That is, we fit each off-nuclear source \chandra{} spectrum individually to find the best-fit parameters for its model components. Then, in the joint-fitting step with the \nustar{} data, model parameters were frozen to their best-fit values from \chandra{}. The only free parameter present for each off-nuclear source in the final fitting was its normalization constant. In the preliminary \chandra{} fitting, most of the off-nuclear sources were best fit by a simple TBABS*POWERLAW model. The exceptions were Source 1 and Source 4, which both required an additional ZTBABS component, and Source 7, which required an additional APEC component. We began the joint-fitting with a simple fit consisting of a normalization constant, TBABS, and eight powerlaws, one for the AGN and the rest for the off-nuclear sources. The resulting C-stat/d.o.f.\ was 1409.95/1489. We then added the other point-source model components and refit each time: the ZTBABS component on Source 4 (C-stat/d.o.f.\ = 1377.45/1489), the APEC component on Source 7 (C-stat/d.o.f.\ = 1360.06/1489), and the ZTBABS component on Source 1 (C-stat/d.o.f.\ = 1345.19/1489). There is a prominent hard component that rises towards 5 keV in the unfolded \chandra{} spectrum of the AGN (Figure \ref{fig:NGC4501_uf}), as would be expected for an Fe K$\mathrm{\alpha}$ line created by an obscuring torus along the line of sight.
However, this hard component is not seen in the \nustar{} data taken 12 years later. This raises two intriguing possibilities. It is possible the AGN has become less luminous in the intervening decade. Modeling all the sources as simple powerlaws, the total \chandra{} 3-8 keV flux within the \nustar\ beam was $1.9\times10^{-13}$ $\mathrm{erg\:cm^{-2}\:s^{-1}}$ with the AGN included, and $1.5\times10^{-13}$ $\mathrm{erg\:cm^{-2}\:s^{-1}}$ without the AGN. The 3-8 keV flux for the \nustar{} observations ranged from $1.4\times 10^{-13}$ $\mathrm{erg\:cm^{-2}\:s^{-1}}$ to $1.6\times10^{-13}$ $\mathrm{erg\:cm^{-2}\:s^{-1}}$. As the 3-8 keV \chandra{} flux without the AGN was always closer to the \nustar{} fluxes than the flux with it included, this raises the possibility of luminosity variation in NGC 4501. It is also possible that the obscuration of NGC 4501 has changed in the intervening time; if it became very heavily obscured, then even the hard X-ray component could be blocked. Neither possibility is out of the question, as AGN are known to sometimes vary in both luminosity \citep[e.g.][]{2015ApJ...800..144L,2017ApJ...835..144G} and obscuration \citep[e.g.][]{2014ApJ...788...76W,2015ApJ...804..107R} over the timescales in question. However, given that eight point sources other than the AGN are visible in the \nustar{} beam, it is also possible that the \nustar{} spectrum is simply contaminated by them, washing out the AGN's hard X-ray component. To test the first possibility (that the AGN varied in luminosity), we allowed the normalization of the AGN to vary freely. The AGN spectrum shows a clear soft excess around 1 keV, so we first added an APEC model. We then added a BORUS model to account for the hard component. This rendered $kT$ implausibly large, however, so we set a lower limit of 0.1 keV and an upper limit of 2.0 keV on $kT$.
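The flux comparison above can be checked directly from the quoted numbers; the sketch below (values copied from the text, in $\mathrm{erg\:cm^{-2}\:s^{-1}}$) verifies that the beam-total \chandra{} flux without the AGN falls inside the \nustar{} range while the flux with the AGN included does not:

```python
# 3-8 keV fluxes quoted in the text (erg cm^-2 s^-1)
chandra_with_agn = 1.9e-13     # Chandra beam total, AGN included
chandra_without_agn = 1.5e-13  # Chandra beam total, AGN excluded
nustar_range = (1.4e-13, 1.6e-13)  # span of the NuSTAR observations

# Does each Chandra flux fall inside the NuSTAR range?
without_in_range = nustar_range[0] <= chandra_without_agn <= nustar_range[1]
with_in_range = nustar_range[0] <= chandra_with_agn <= nustar_range[1]

print(without_in_range, with_in_range)  # True False
```

This is the arithmetic behind the statement that the \nustar{} fluxes are more consistent with the \chandra{} beam minus the AGN, motivating the luminosity-variation hypothesis tested next.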
Because the \nustar{} data do not show a reflection/torus component, the inclination angle and covering factor of the AGN torus cannot be measured with much accuracy. For this reason we froze the BORUS $\mathrm{CF_{Tor}}$ parameter to 0.5, and the BORUS $\mathrm{\cos(\theta_{inc})}$ parameter to 0.17 (corresponding to an inclination angle of 80 deg). The final fit had a C-stat/d.o.f.\ of 1296.11/1485. The parameters of this fit are tabulated in Table \ref{tab:NGC4501params}. The cross-calibration coefficient of the AGN in this fit was $2.66^{+1.33}_{-0.95}$, which includes 1.71 within its 90\% confidence interval. This is not an extreme value for this coefficient to take. As such, the claim that the AGN decreased in luminosity cannot be made with confidence. However, this still leaves open the possibility of the obscuration varying between the time of the \chandra{} observation and the time of the \nustar{} observations. To test this second possibility (that the AGN varied in obscuration), we untied the \chandra{} and \nustar{} values of the BORUS parameter $N_{\rm{H}}$ from each other, but did not allow the AGN to vary in luminosity between the \chandra{} and \nustar{} data. After refitting with these changes, the C-stat/d.o.f.\ was 1296.11/1485. The value of log($N_{\rm{H}}/\rm{cm}^{-2}$) in this model changed from $22.91^{+0.28}_{-0.21}$ in the \chandra{} observation to $22.43^{+0.28}_{-0.24}$ in the \nustar{} observations, a change too small to explain the absence of the hard component in the \nustar{} data. Considering this and the negligible improvement in C-stat/d.o.f.\ if we let the obscuration vary instead of the luminosity, we conclude that there is no evidence in our data of NGC 4501 obscuration variability. Given that we have no strong evidence of either luminosity or obscuration variability in this AGN, the most parsimonious explanation for the lack of a hard component in the \nustar{} data is contamination from the eight extra point sources.
It should be noted, however, that the possibility of variability cannot be ruled out with these data. The best fit with the cross-normalization constant on the AGN left free to vary is plotted in Figure \ref{fig:NGC4501_uf}. \begin{deluxetable}{@{\extracolsep{10pt}}lccccccccc@{}} \tablecaption{Parameters for best-fit NGC 4501 model.}\label{tab:NGC4501params} \tablewidth{2pt} \tablehead{\colhead{} & \colhead{ZTBABS} & \multicolumn{2}{c}{APEC} & \multicolumn{4}{c}{BORUS} & \multicolumn{2}{c}{POWERLAW}\\ \cline{2-2} \cline{3-4} \cline{5-8} \cline{9-10} \colhead{Source} & \colhead{$N_{\rm H}$} & \colhead{$kT$} & \colhead{Norm} & \colhead{log($N_{\rm H}$)} & \colhead{$\mathrm{CF_{Tor}}$} & \colhead{$\mathrm{\cos(\theta_{inc})}$} & \colhead{Norm} & \colhead{$\Gamma$} &\colhead{Norm}\\ \colhead{} & \colhead{($\mathrm{10^{22}\:cm^{-2}}$)} & \colhead{(keV)} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{} } \startdata AGN & {} & $0.75\pm{0.11}$ & $2.65^{+2.56}_{-1.48}$ & $22.87^{+0.25}_{-0.15}$ & 0.5\tablenotemark{a} & 0.17\tablenotemark{a} & $2.03^{+1.60}_{-0.44}$ & $\geq1.98$ & $3.24^{+3.30}_{-1.83}$\\ Src 1 & 0.72\tablenotemark{a} & {} & {} & {} & {} & {} & {} & 2.25\tablenotemark{a} & 15.2\tablenotemark{a}\\ Src 2 & {} & {} & {} & {} & {} & {} & {} & 1.21\tablenotemark{a} & 1.59\tablenotemark{a}\\ Src 3 & {} & {} & {} & {} & {} & {} & {} & 1.48\tablenotemark{a} & 3.05\tablenotemark{a}\\ Src 4 & 0.21\tablenotemark{a} & {} & {} & {} & {} & {} & {} & 2.12\tablenotemark{a} & 31.4\tablenotemark{a}\\ Src 6 & {} & {} & {} & {} & {} & {} & {} & 1.37\tablenotemark{a} & 4.17\tablenotemark{a}\\ Src 7 & {} & 1.09\tablenotemark{a} & 7.25\tablenotemark{a} & {} & {} & {} & {} & 2.33\tablenotemark{a} & 11.7\tablenotemark{a}\\ Src 8 & {} & {} & {} & {} & {} & {} & {} & 1.30\tablenotemark{a} & 2.68\tablenotemark{a} \enddata \tablecomments{The instrumental normalization constants for the \chandra{} data, in the order of sources from the table,
are $2.66^{+1.33}_{-0.95}$, $1.03^{+0.27}_{-0.23}$, $1.07^{+0.43}_{-0.34}$, $1.01^{+0.31}_{-0.26}$, $1.00^{+0.12}_{-0.11}$, $0.78^{+0.23}_{-0.19}$, $1.00^{+0.13}_{-0.12}$, and $1.01^{+0.32}_{-0.27}$. The \nustar{} normalization constants for ObsID 60375002004 are $0.88^{+0.15}_{-0.14}$ for FPMA and $0.98^{+0.18}_{-0.16}$ for FPMB. The normalizations for the model components are in units of $10^{-6}$ cts $\mathrm{s^{-1}\:keV^{-1}}$ for APEC, $10^{-3}$ cts $\mathrm{s^{-1}\:keV^{-1}}$ for BORUS, and $10^{-6}$ cts $\mathrm{s^{-1}\:keV^{-1}}$ for POWERLAW.} \tablenotetext{a}{Frozen at this value.} \end{deluxetable} \begin{figure} \plotone{NGC4501_unfolded_delchi_nustarsrcs_colored.pdf} \caption{Unfolded spectrum and best-fit model for NGC 4501. Black denotes \chandra{} data and model for the AGN core. Red and green denote FPMA and FPMB data and models for \nustar{} observation 60375002002, while blue and cyan denote FPMA and FPMB data and models for \nustar{} observation 60375002004. The \chandra{} data and models for the extra point sources are depicted in light grey.} \label{fig:NGC4501_uf} \end{figure} \subsection{IC 3639} IC 3639 is a barred spiral galaxy containing a Seyfert 2 nucleus, as well as a nuclear starburst within the central 80~pc of the galaxy \citep{1998ApJ...505..174G, 2018A&A...611A..46F}. It is part of a compact group of galaxies, though it lacks features indicative of recent mergers or interactions \citep{2001MNRAS.324..859B}. IC~3639 has polarized broad H$\mathrm{\alpha}$ emission, though the nature of that emission is uncertain: some researchers consider it indicative of a hidden broad-line region \citep{1997Natur.385..700H, 2001MNRAS.327..459L, 2003ApJS..148..353T}, while others claim it is a kinematic feature of the narrow-line region \citep{2007ApJ...656..105G}.
MIR interferometry reveals a compact, sub-arcsecond, unresolved nuclear point source \citep{2014MNRAS.439.1648A, 2016ApJ...822..109A} surrounded by a halo of MIR emission associated with the compact nuclear starburst \citep{2018A&A...611A..46F}. The starburst contributes 70\% of the observed MIR flux. The first published X-ray observations of IC~3639 suggested that it possessed a very high hydrogen column density and a strong Fe K$\mathrm{\alpha}$ line \citep{1999MmSAI..70...73R}. A more detailed analysis of \chandra{}, \suzaku{}, and \nustar{} data by \citet{2016ApJ...833..245B} confirmed it has a hydrogen column density of $\mathrm{10^{25}\,cm^{-2}}$ and an extreme Fe K$\mathrm{\alpha}$ equivalent width of 2.29 keV. They also found it has a 2-10 keV luminosity well below the expected value based on the luminosity of its [O III] line, assuming the relations of \citet{2006A&A...455..173P} and \citet{2015MNRAS.454.3622B}. Overall, \citet{2016ApJ...833..245B} conclude that IC 3639 is a Compton-thick AGN possessing an active central engine generating a strong reflection component in its X-ray spectrum. \subsubsection{X-Ray Observations \& Data Extraction} IC~3639 was observed once by both \nustar\ and \chandra. The observation dates and exposure times are in Table~\ref{tab:xraydata} and Figure~\ref{fig:IC3639_image} presents the images. The higher resolution \chandra{} data reveal a faint, off-nuclear point source (labeled ``Src 1'' in Figure \ref{fig:IC3639_image}) as well as the AGN in the 40\arcsec\ radius \nustar{} beam. The \chandra\ AGN spectrum was extracted with a circular region of 3.35\arcsec{} radius, while Source 1 was extracted with a 1.5\arcsec{} radius region. Since Source~1 has $\leq$10\% the net count rate of the AGN, its spectrum is not used in the spectral fitting. \begin{figure} \centering \plotone{IC3639_Annotated_v5.pdf} \caption{\chandra{} and \nustar{} FPMA images of IC 3639. 
The larger, 40\arcsec{} radius circle denotes the \nustar{} extraction region, while the smaller circles denote the \chandra{} extraction regions. An off-nuclear point source (Src 1) is visible in the \chandra{} image, but is sufficiently faint to be ignored in the X-ray spectral fitting.} \label{fig:IC3639_image} \end{figure} \subsubsection{X-Ray Spectral Fitting} We first fit the \chandra{} and \nustar{} data jointly with a simple absorbed powerlaw (TBABS*POWERLAW) model. The resulting C-stat/d.o.f.\ was 1062.90/830, indicating a poor fit. Looking at the unfolded spectrum for IC 3639 (Figure \ref{fig:IC3639_uf}), an extremely strong Fe K$\mathrm{\alpha}$ line can be seen around 6.4 keV. The unfolded spectrum also shows a substantial rise from 10-20 keV, with a pronounced Compton hump at 20 keV. We therefore added a BORUS component to the initial TBABS*POWERLAW model. Prominent residuals remained at 0.5-2.0 keV, so we also added an APEC component. The resulting best fit model has C-stat/d.o.f.\ = 606.54/824. The parameter values for the best fit model are tabulated in Table \ref{tab:IC3639params}. Since the upper error bar for $\mathrm{CF_{tor}}$, the lower error bar for $\mathrm{\cos(\theta_{inc})}$, and the lower error bar for the BORUS normalization were less than 0.005 in value, we have rounded them up to 0.01. The best fit model is plotted over the unfolded spectrum in Figure \ref{fig:IC3639_uf}. 
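To see why a column of $N_{\rm H} \sim 10^{25}\,\mathrm{cm^{-2}}$ removes the transmitted soft X-ray continuum entirely, leaving the reflection and scattered components that dominate the fit above, consider a toy absorbed powerlaw. The $E^{-3}$ cross-section scaling and the 1 keV normalization below are rough illustrative assumptions; TBABS itself uses full tabulated photoelectric cross-sections and abundances, and above $\sim$10 keV Compton scattering (handled here by BORUS, not this toy) takes over:

```python
import math

SIGMA_1KEV = 2.0e-22  # cm^2 per H atom at 1 keV -- rough illustrative value

def transmitted_powerlaw(E_keV, K, gamma, nH):
    """Toy absorbed powerlaw: K * E^-gamma attenuated by photoelectric
    absorption with a crude sigma ~ E^-3 scaling.  Illustration only."""
    tau = nH * SIGMA_1KEV * E_keV ** -3
    return K * E_keV ** -gamma * math.exp(-tau)
```

With $n_H = 10^{25}\,\mathrm{cm^{-2}}$ this gives an optical depth of order $10^2$ at 2 keV but only of order unity at 10 keV, so the direct continuum is suppressed by a huge factor in the soft band while the hard band survives in attenuated form.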
\begin{deluxetable}{@{\extracolsep{10pt}}lccccccc@{}} \tablecaption{Parameters for best-fit IC 3639 model.}\label{tab:IC3639params} \tablewidth{2pt} \tablehead{ \multicolumn{2}{c}{APEC} & \multicolumn{4}{c}{BORUS} & \multicolumn{2}{c}{POWERLAW}\\ \cline{1-2} \cline{3-6} \cline{7-8} \colhead{$kT$} & \colhead{Norm} & \colhead{$\log(N_{\rm H})$} & \colhead{$\mathrm{CF_{Tor}}$} & \colhead{$\mathrm{\cos(\theta_{inc})}$} & \colhead{Norm} & \colhead{$\Gamma$} &\colhead{Norm}\\ \colhead{(keV)} & \colhead{($10^{-5}$ cts $\mathrm{s^{-1}\:keV^{-1}}$)} & \colhead{} & \colhead{} & \colhead{} & \colhead{(cts $\mathrm{s^{-1}\:keV^{-1}}$)} & \colhead{} & \colhead{($10^{-5}$ cts $\mathrm{s^{-1}\:keV^{-1}}$)} } \startdata $0.85^{+0.12}_{-0.12}$ & $3.06^{+0.69}_{-0.64}$ & $25.00^{+0.06}_{-0.26}$ & $0.87^{+0.01}_{-0.12}$ & $0.77^{+0.07}_{-0.01}$ & $0.04^{+0.02}_{-0.01}$ & $\geq2.4$ & $5.01^{+2.59}_{-0.81}$ \enddata \tablecomments{Error bars shown are for 90\% confidence intervals. The \chandra{} instrumental normalization constant value was $0.59^{+0.24}_{-0.06}$.} \end{deluxetable} \begin{figure} \plotone{IC3639_unfolded_spectrum_delchi.pdf} \caption{Unfolded spectrum and best-fit model for IC 3639. Black denotes \chandra{} data, green denotes \nustar{} FPMA data, and red denotes \nustar{} FPMB data.} \label{fig:IC3639_uf} \end{figure} \subsection{NGC 4922} NGC 4922 is a pair of galaxies in the late stages of a merger \citep{2017MNRAS.468.1273R}. The northern galaxy has been classified as a luminous infrared galaxy \citep{2010ApJ...723..993D} and a Seyfert~2 \citep{2010ApJ...709..884Y}. It is also a water megamaser \citep{2004ApJ...617L..29B}. The southern galaxy is an elliptical galaxy with no obvious signs of activity \citep{1999MNRAS.302..561A}. In the X-rays, NGC 4922 was first studied in detail with \rosat{}, which detected extended soft X-ray emission across the entire merging pair \citep{1999MNRAS.302..561A}.
Further observations by \citet{2017MNRAS.468.1273R} revealed that the northern galaxy is brighter in X-rays, with the southern galaxy's nucleus only detectable in the 0.3-2 keV band by \chandra{}, and it was not detected by \nustar{}. Based on joint analysis of \chandra{} and \nustar{} observations, \citet{2017MNRAS.468.1273R} reported the northern galaxy to be Compton-thick, with $N_{\rm H}>4.27\:\times\:10^{24}\:{\rm cm}^{-2}$. \subsubsection{X-Ray Observations and Data Extraction} NGC~4922 was observed once by \nustar\ and three times by \chandra; the observation dates and exposure times are in Table~\ref{tab:xraydata}. The \chandra{} spectra were extracted using circular source regions 3.71\arcsec{} in radius. Figure \ref{fig:NGC4922_image} shows the second \chandra\ observation and the \nustar\ FPMA observation with the extraction regions overlaid. \begin{figure} \centering \plotone{NGC4922_annotated.pdf} \caption{\chandra{} (ObsID: 15065) and \nustar{} FPMA images of NGC 4922. The larger, 40\arcsec{} radius circle denotes the \nustar{} extraction region, while the smaller circle denotes the \chandra{} extraction region.} \label{fig:NGC4922_image} \end{figure} \subsubsection{X-Ray Spectral Fitting} We first fit the \chandra{} and \nustar{} data jointly with a simple TBABS*POWERLAW fit, using a Galactic hydrogen column density of $N_{\rm{H}}^{\rm{Gal}}\rm{=1.06\times{}10^{20}\,cm^{-2}}$. The resulting C-stat/d.o.f.\ was 407.61/423. The unfolded spectrum (Figure \ref{fig:NGC4922_uf}) shows a less prominent Compton rise than some of the other galaxies in the sample (e.g., NGC 1386), but it is present. A presumed Fe K$\mathrm{\alpha}$ line is also present at 6.4 keV. While the signal-to-noise is lower than in the aforementioned galaxies, NGC 4922 nonetheless shows the features typical of Compton-thick AGN. We therefore added a BORUS component to the initial TBABS*POWERLAW fit, fixing the BORUS spectral index to the POWERLAW spectral index.
This resulted in a C-stat/d.o.f.\ of 338.44/414. An excess of soft X-ray emission was present from 0.5-2.0 keV, so an APEC component was added, resulting in C-stat/d.o.f.\ = 324.51/412. Because $\mathrm{\cos(\theta_{inc})}$ was completely unconstrained with these model components, its value was frozen at 0.17 (or $\mathrm{\theta_{inc}}\:\approx\:80\degree$), representing a near edge-on line of sight. We then refit the model. The final C-stat/d.o.f.\ was 321.07/412. The resulting values for each of the model parameters are shown in Table \ref{tab:NGC4922params}. The model is plotted over the \chandra{} and \nustar{} data as the solid lines in Figure \ref{fig:NGC4922_uf}. \begin{deluxetable}{@{\extracolsep{10pt}}lccccccc@{}} \tablecaption{Parameters for best-fit NGC 4922 model.}\label{tab:NGC4922params} \tablewidth{2pt} \tablehead{\multicolumn{2}{c}{APEC} & \multicolumn{4}{c}{BORUS} & \multicolumn{2}{c}{POWERLAW}\\ \cline{1-2} \cline{3-6} \cline{7-8} \colhead{$kT$} & \colhead{Norm} & \colhead{$\log(N_{\rm H})$} & \colhead{$\mathrm{CF_{Tor}}$} & \colhead{$\mathrm{\cos(\theta_{inc})}$} & \colhead{Norm} & \colhead{$\Gamma$} &\colhead{Norm}\\ \colhead{(keV)} & \colhead{($10^{-6}$ cts $\mathrm{s^{-1}\:keV^{-1}}$)} & \colhead{} & \colhead{} & \colhead{} & \colhead{($10^{-4}$ cts $\mathrm{s^{-1}\:keV^{-1}}$)} & \colhead{} & \colhead{($10^{-6}$ cts $\mathrm{s^{-1}\:keV^{-1}}$)} } \startdata $1.06^{+0.34}_{-0.21}$ & $4.71^{+6.02}_{-2.71}$ & $23.89^{+0.11}_{-0.17}$ & $\geq0.25$& $=0.17$\tablenotemark{a} & $3.96^{+1.02}_{-2.50}$ & $1.75\pm{0.34}$ & $9.04^{+6.72}_{-3.67}$ \enddata \tablecomments{The \chandra{} instrumental normalization constant values were $1.35^{+0.75}_{-0.52}$ (ObsID: 4775), $1.17^{+0.56}_{-0.41}$ (ObsID: 15065), and $1.69^{+0.87}_{-0.62}$ (ObsID: 18201).} \tablenotetext{a}{Frozen at this value.} \end{deluxetable} \begin{figure} \plotone{NGC4922_unfolded_spectrum_delchi.pdf} \caption{Unfolded spectrum and best-fit model for NGC 4922.
Black, red and green denote \chandra{} data (ObsIDs 4775, 15065, 18201), while blue and cyan denote FPMA and FPMB data.} \label{fig:NGC4922_uf} \end{figure} \subsection{NGC 5005} NGC 5005 is a weakly barred spiral galaxy with a nucleus that is heavily shrouded in dust \citep{2000ApJ...532..323P}. Its AGN is known to be variable over timescales of months \citep{2012A&A...539A.104Y}. NGC 5005's optical classification has been ambiguous. \citet{1981ApJ...250...55S} were able to identify H$\mathrm{\alpha}$, [\ion{S}{2}], [\ion{O}{2}], and [\ion{O}{3}] emission lines in its nuclear spectrum, but no others. They did not specify a classification for it but regarded it as unlikely to be a Seyfert 2. Later papers in the literature have classified it as a LINER \citep[e.g.,][]{1992ApJ...393...90H, 1997ApJS..112..315H, 2006A&A...455..773V}, a Seyfert 2 \citep[e.g.,][]{2017MNRAS.464.2139A}, or both a LINER and a Seyfert 2 at the same time \citep[e.g.,][]{2006ApJS..166..498S, 2017ApJ...846..102M}. Palomar spectra for NGC 5005 show a broad H$\alpha$ component blended with narrow H$\alpha$ and [\ion{N}{2}] emission \citep{1996ApJ...471..190R, 1997ApJS..112..315H}, suggesting NGC 5005 is an unobscured AGN. However, \citet{2014A&A...563A.119B} were unable to find a broad H$\alpha$ component in later {\it Hubble} spectroscopy when using the [\ion{O}{1}] line as a template for deblending, and therefore concluded that either the broad H$\alpha$ detection in the Palomar data was spurious, or NGC~5005 is a changing-look AGN. \citet{2015ApJ...814..149C} did, in contrast, identify a broad H$\alpha$ line in the {\it Hubble} spectra when using the [\ion{S}{2}] line as a template for deblending, measuring a broad H$\alpha$ component with FWHM of $2610\, {\rm km}\, {\rm s}^{-1}$. 
A detailed analysis of new ground-based spectra as well as the archival {\it Hubble} spectra for NGC 5005 was published by \citet{2018MNRAS.480.1106C}, who found a broad H$\alpha$ component in the {\it Hubble} spectra, blended with [\ion{S}{2}] and [\ion{N}{2}]. The broad H$\alpha$ component had a FWHM of $2152\, {\rm km}\, {\rm s}^{-1}$, was very weak, and was not visible in their ground-based spectra. NGC 5005's core is embedded in extended MIR emission that appears to trace out its spiral structure \citep{2014MNRAS.439.1648A}. Based on {\it Spitzer} data, \citet{2010ApJ...709.1257T} estimated that only 44\% of its 19$\mathrm{\mu m}$ emission is from an AGN. Based on the NIR [\ion{Fe}{2}] and [\ion{P}{2}] forbidden line flux ratios, \citet{2016ApJ...833..190T} found that, unusually, NGC~5005's narrow line region seems to be predominantly shock-ionized rather than UV-ionized. In the X-rays, NGC 5005 was first detected by {\it ASCA}, where its spectrum was analyzed by \citet{1999ApJ...522..157R}. They reported a hydrogen column density of $N_{\rm H}>10^{24}\,\rm{cm}^{-2}$, implying a Compton-thick AGN. Further evidence of NGC 5005's Compton-thick nature comes from the unusually low ratio between its observed 2-10~keV X-ray and [\ion{O}{3}] luminosities. \citet{1999ApJ...522..157R} note, however, that NGC~5005 showed no evidence of a reflection component in its {\it ASCA} spectrum, with an upper limit of 0.9~keV on the equivalent width of the Fe K$\mathrm{\alpha}$ line. They concluded that the hydrogen column density was so thick that the soft X-ray emission from the AGN was completely absorbed, leaving only extended emission from a concurrent starburst to create the {\it ASCA} spectrum. Observations by \chandra{} and \xmm{} revealed new features of NGC~5005's X-ray emission.
The AGN core was found to be embedded in a background of extended X-ray emission that follows the contours of the galaxy \citep{2005MNRAS.356..295G}, and that might be responsible for a large soft excess observed in its 0.6-1~keV X-ray spectrum \citep{2006MNRAS.365..688G}. \citet{2005MNRAS.356..295G} concluded the X-ray spectrum was unlikely to be dominated by an inverse Compton component, and placed an upper limit on the equivalent width of an Fe K$\mathrm{\alpha}$ line of $\leq 0.24$~keV. In contrast to \citet{1999ApJ...522..157R}, they measured $N_{\rm H} \simeq 1.5\: \times\: 10^{20}\: {\rm cm}^{-2}$. Furthermore, their search of the available literature at the time \citep[e.g.,][]{1981ApJ...250...55S, 1988ApJS...67..249D, 1997ApJS..112..315H} revealed a wide range of reported [\ion{O}{3}] fluxes for NGC 5005, some of which were not overluminous compared to the X-ray flux. They therefore claimed NGC 5005 was misidentified as a Compton-thick AGN. These conclusions were further reinforced by later analyses of the \chandra{} and \xmm{} observations, with values of $N_{\rm H}$ closer to $\mathrm{10^{20}\:cm^{-2}}$ \citep{2011MNRAS.414.3084B} or $\mathrm{10^{21}\:cm^{-2}}$ \citep{2012A&A...539A.104Y} than to Compton-thick column densities. In summary, the latest analyses of optical and X-ray observations of NGC 5005 suggest that it might be intrinsically underluminous rather than heavily obscured. \subsubsection{X-ray Observations and Data Extraction} NGC 5005 has been observed once each by \nustar, \chandra, and \xmm; details of the observations, including the observation dates and exposure times are in Table~\ref{tab:xraydata}. \chandra{} and \nustar{} images of the galaxy are presented in Figure~\ref{fig:NGC5005_image}. The \chandra{} AGN spectrum was extracted from a 5.5\arcsec{} radius circular source region. 
Two extranuclear \chandra\ point sources (labeled ``Src 1'' and ``Src 2'') are visible within the \nustar{} beam; we extracted these with 1.5\arcsec{} radius circular source regions. Since their count rates were less than 10\% of the count rate of the AGN, they were ignored in the X-ray spectral fitting. Since no background flares were evident in the \xmm\ 10-12~keV lightcurve of NGC 5005, the EPIC pn spectrum was extracted from the full dataset. We used 30\arcsec\ circular source regions with 60\arcsec{} radius background regions. For the MOS data, we filtered out times with high background, defined as times when the 10-12~keV count rate was $> 0.35\, {\rm ct}\, {\rm s}^{-1}$. Using patterns 0-12, we extracted the MOS source spectra with 30\arcsec{} radius circular regions and 50-80\arcsec\ annular background regions. \begin{figure} \centering \plotone{NGC5005_annotated.pdf} \caption{\chandra{} and \nustar{} FPMA images of NGC 5005. The larger, 40\arcsec{} radius circle denotes the \nustar{} extraction region, while the smaller circles denote the \chandra{} extraction regions. Two off-nuclear point sources (Src 1 and Src 2) are visible in the \chandra{} image, but were faint enough to be ignored in the spectral fitting.} \label{fig:NGC5005_image} \end{figure} \subsubsection{X-ray Spectral Fitting} We began our analysis with the \chandra{} and \nustar{} data only. We started with a simple TBABS*POWERLAW fit, with the Galactic hydrogen column density set to $N_{\rm{H}}^{\rm{Gal}}\rm{=1.17\times{}10^{20}\,cm^{-2}}$. The C-stat/d.o.f.\ for this fit was 550.23/584, indicating that a powerlaw model captures most of this AGN's spectrum. Next we added an APEC component to account for the soft excess visible from 0.5-2.0 keV. This reduced C-stat/d.o.f.\ to 515.15/582, so the APEC component was kept. We then added a BORUS component, as would be appropriate for a Compton-thick AGN, which brought C-stat/d.o.f.\ down to 504.28/578.
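The C-stat values quoted throughout are XSPEC's implementation of the Cash statistic for Poisson-distributed counts. As a rough illustration only (the function and the toy count arrays below are ours, not part of the fitting pipeline), the quantity being minimized can be sketched as:

```python
import numpy as np

def cash_stat(data_counts, model_counts):
    """Cash (1979) fit statistic for Poisson data, the basis of XSPEC's
    C-stat: C = 2 * sum(m - d + d*ln(d/m)).  The d*ln(d/m) term is
    dropped for empty bins (d == 0).  Lower C means a better fit."""
    d = np.asarray(data_counts, dtype=float)
    m = np.asarray(model_counts, dtype=float)
    term = m - d
    nz = d > 0
    term[nz] += d[nz] * np.log(d[nz] / m[nz])
    return 2.0 * term.sum()

# A model matching the data exactly gives C = 0; here the extra 0.5
# predicted counts in the empty bin contribute 2 * 0.5 = 1.0.
print(cash_stat([12, 7, 0], [12, 7, 0.5]))  # -> 1.0
```

Comparing C-stat between nested models, as done above when adding APEC and BORUS components, then amounts to asking whether the decrease in C justifies the extra free parameters.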
However, looking at the unfolded spectrum of NGC 5005 (Figure \ref{fig:NGC5005_uf}), it is unclear whether a BORUS component is truly justified. The hard X-ray emission does not possess the Compton hump characteristic of a reflection-dominated spectrum, but instead appears to be flat or even declining. It may possess a broad line component in the \nustar{} spectrum, visible as a bump of emission from 4-8 keV. Together, these facts suggest the AGN spectrum might be better fit with just a ZGAUSS component rather than an entire BORUS component. We added a ZGAUSS component to the TBABS*(APEC+POWERLAW) model, fixing the line energy at 6.4 keV and the line width at $10^{-3}$ keV. This did not significantly change C-stat/d.o.f., though allowing the line width to freely vary brought C-stat/d.o.f.\ down to 497.25/580. To further ascertain the nature of the unusual bump at 4-8 keV we extracted spectra from the \xmm{} observation of NGC 5005. The bump from 4-8 keV seen in the \nustar{} spectrum is not clearly seen in its \xmm{} spectra; however, the \xmm{} data were taken a decade earlier, so the lack of the line may simply be due to variability. To determine whether the line was truly absent from the \xmm{} and \chandra{} data, we first fit the \nustar{} data alone to a TBABS*(ZGAUSS+POWERLAW) model to find the best fit parameters for the line. The C-stat/d.o.f.\ of this fit was 391.98/468; for comparison, the C-stat/d.o.f.\ for a TBABS*(BORUS+POWERLAW) fit to the \nustar{} data was 394.76/467. The resulting line parameters were an energy of 5.91 keV, a line width of 0.76 keV, and a normalization of $5.26\times 10^{-6}\ {\rm cts}\, {\rm s}^{-1}\, {\rm keV}^{-1}$. We then fit the \xmm{} and \chandra{} data alone with a TBABS*(APEC+ZGAUSS+POWERLAW) model, with the ZGAUSS energy and width set to the values measured from the \nustar{} data alone. The normalizations were left to freely vary.
The resulting normalizations were consistent with the \nustar{} data for both the \xmm{} and \chandra{} data. We ran 10,000 Monte Carlo simulations to estimate the false alarm probability for the putative line. We simulated fake \nustar{} observations in XSPEC with the parameters of the best fit to the \nustar{} data using only a POWERLAW component, then tried fitting the data with both a POWERLAW model and a POWERLAW+ZGAUSS model. The normalization of the ZGAUSS component was left to freely vary, while the line width was fixed at the value measured from \nustar{}. We then stepped through values of the line energy and saved the best fit. The resulting decrease in C-stat was greater than the decrease for the real data only in 4 out of 10,000 runs. The same was true if we instead fit the data with a POWERLAW+ZGAUSS model where the line was unresolved (width fixed at $3\times10^{-3}$ keV). We therefore estimate the false positive rate as 0.04\%. This is a $>$3.3 sigma detection, and we treat the line as real. For the final fit, we froze the ZGAUSS parameters to the best-fit values from the \nustar{} data alone. The resulting C-stat/d.o.f.\ was 1872.72/2130. The parameters for this model are listed in Table \ref{tab:NGC5005params}. It is plotted over the \xmm{}, \chandra{} and \nustar{} data in Figure \ref{fig:NGC5005_uf}.
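The conversion from this Monte Carlo false-alarm rate to a Gaussian-equivalent significance can be reproduced with the Python standard library; this is an illustrative sketch using the numbers quoted above, not code from our analysis:

```python
from statistics import NormalDist

# 4 of 10,000 simulated spectra gave a larger C-stat improvement
# than the real data did.
n_exceed, n_sims = 4, 10_000
p_value = n_exceed / n_sims  # false-alarm probability = 0.0004

# One-sided Gaussian-equivalent significance of the line detection.
sigma = NormalDist().inv_cdf(1.0 - p_value)
print(f"p = {p_value:.4f} -> {sigma:.2f} sigma")  # p = 0.0004 -> 3.35 sigma
```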
\begin{deluxetable}{@{\extracolsep{10pt}}lcccccc@{}} \tablecaption{Parameters for best-fit NGC 5005 model.}\label{tab:NGC5005params} \tablewidth{2pt} \tablehead{\multicolumn{2}{c}{APEC} & \multicolumn{3}{c}{ZGAUSS} & \multicolumn{2}{c}{POWERLAW}\\ \cline{1-2} \cline{3-5} \cline{6-7} \colhead{$kT$} & \colhead{Norm} & \colhead{Line Energy} & \colhead{$\mathrm{\sigma}$} & \colhead{Norm} & \colhead{$\Gamma$} &\colhead{Norm}\\ \colhead{(keV)} & \colhead{($10^{-5}$ cts $\mathrm{s^{-1}\:keV^{-1}}$)} & \colhead{(keV)} & \colhead{(keV)} & \colhead{($10^{-6}$ cts $\mathrm{s^{-1}\:keV^{-1}}$)} & \colhead{} & \colhead{($10^{-5}$ cts $\mathrm{s^{-1}\:keV^{-1}}$)} } \startdata $0.79\pm{0.03}$ & $4.62^{+0.67}_{-0.59}$ & $5.91^{+0.60}_{-0.62}$\tablenotemark{a} & $0.74^{+0.47}_{-0.48}$\tablenotemark{a}& ${5.26}^{+4.39}_{-3.08}$\tablenotemark{a}& ${1.69}\pm{0.05}$ & $6.61^{+0.88}_{-0.79}$ \enddata \tablecomments{The \xmm{} pn, MOS1, and MOS2 instrumental normalization constant values were $1.14^{+0.14}_{-0.13}$, $1.14^{+0.15}_{-0.13}$, and $1.17^{+0.15}_{-0.13}$ (ObsID: 0110930501). The \chandra{} normalization constant value was $0.59^{+0.10}_{-0.08}$ (ObsID: 4021). All parameters for APEC and POWERLAW components were measured using a fit with the ZGAUSS components fixed to the values from the \nustar{} data alone.} \tablenotetext{a}{Measured from fit to \nustar{} data alone} \end{deluxetable} \begin{figure} \plotone{NGC5005_unfolded_final.pdf} \caption{Unfolded spectrum and best-fit model for NGC 5005. Black denotes \xmm{} pn data, red denotes \xmm{} MOS1 data, green denotes \xmm{} MOS2 data, blue denotes \chandra{} data, cyan denotes FPMA data, and magenta denotes FPMB data.} \label{fig:NGC5005_uf} \end{figure} \subsection{Mrk 463} Mrk 463 is a complex ongoing merger with two galactic nuclei and prominent tidal tails visible in optical light \citep{1989AJ.....97.1306H}. 
It has long been known to be an ultraluminous infrared galaxy and possess a Seyfert 2 AGN \citep{1988ApJ...328L..35S}. In fact, Mrk 463 possesses dual AGN \citep{2008MNRAS.386..105B}, with the eastern AGN more luminous than the western AGN. The eastern AGN possesses a hidden BLR in polarized light \citep{2001ApJ...554L..19T}. Two-sided conical [\ion{O}{3}] outflows extend from the eastern nucleus \citep{1995A&A...298..343C}, creating an extended emission line region to the south of the galaxy, similar to a voorwerp \citep{2018ApJ...854...83T}. The eastern nucleus and its ionization cones generate radio fluxes comparable to a radio-loud quasar or radio galaxy \citep{1991AJ....102.1241M}, which is highly unusual for a Seyfert AGN. Based on the amount of energy required to create the observed ionization and emission line features, \citet{2018ApJ...854...83T} argued the eastern AGN was $\sim$3-20 times more luminous $\sim$40,000 years ago. They argue that it might become a bona fide quasar in the future as the galaxy merger progresses. Mrk 463 displays prominent photoionized metal lines in its \xmm{} spectra, including from heavily ionized \ion{Fe}{26} \citep{2004AJ....127..758I} and the \ion{O}{7} radiation recombination continuum \citep{2008MNRAS.386..105B}. It also has a neutral Fe K$\mathrm{\alpha}$ line in its \xmm{} spectra \citep{2004AJ....127..758I}. Both \citet{2004AJ....127..758I} and \citet{2008MNRAS.386..105B} concluded that Mrk 463 is overall Compton thin. Using the \chandra{} data, \citet{2008MNRAS.386..105B} detected a strong Fe K$\mathrm{\alpha}$ line in the eastern nucleus (EW $\simeq$ 250 eV), while only an upper limit could be placed on the Fe K$\mathrm{\alpha}$ line from the western nucleus. The eastern nucleus was also more heavily absorbed. They therefore concluded the eastern nucleus is more obscured than the western nucleus, a claim that is also supported by NIR data. 
\subsubsection{X-Ray Observations \& Data Extraction} Mrk~463 has been observed once by \nustar{} and twice by \chandra; details of the observations, including the observation dates and exposure times, are in Table~\ref{tab:xraydata}. The image from the first \chandra{} observation (ObsID: 4913) and the FPMA image from the \nustar{} observation are shown side by side in Figure \ref{fig:Mrk463_image}. The higher-resolution \chandra{} image clearly resolves the brighter eastern AGN and the fainter western AGN. An extra-nuclear point source (Source 1) is present in the \nustar{} beam. While at first glance the eastern AGN appears to be an elongated ellipse (and indeed was extracted as such by \citealp{2008MNRAS.386..105B}), closer inspection reveals the northern lobe of the ellipse is not part of the AGN, but rather an area of fainter, extended emission that is not detected above 2 keV in energy. It was therefore extracted as a separate source, labeled Source 2. Both extra-nuclear sources have more than 10\% of the count rate of the fainter, western AGN in the 0.5-8.0 keV band, so they were ultimately used in the fitting. The eastern and western AGNs were extracted with circular source regions of radius 2\arcsec{} and 1.759\arcsec{}, respectively, while Source 1 and Source 2 were extracted with circular source regions of radius 2\arcsec{} and 1.772\arcsec{}, respectively. \begin{figure} \centering \plotone{Mrk463_annotated.pdf} \caption{\chandra{} (ObsID: 4913) and \nustar{} FPMA images of Mrk 463. The larger, 40\arcsec{} radius circle denotes the \nustar{} extraction region, while the smaller circles denote the \chandra{} extraction regions.
Two extra-nuclear point sources (Source 1 and Source 2) were visible in all \chandra{} observations and were used in the fitting process.} \label{fig:Mrk463_image} \end{figure} \subsubsection{X-ray Spectral Fitting} Similarly to NGC 3627 and NGC 4501, we jointly fit the data, freezing Source 1's and Source 2's parameters to the best-fit values from \chandra{} alone. We began with a fit that was simply four POWERLAW components. The soft X-ray spectra of the two AGN and Source 2 (see Figure \ref{fig:Mrk463_uf}) suggest the need for an APEC component, though this is not necessarily true for Source 1. We therefore added APEC components to all the sources but Source 1. The resulting C-stat/d.o.f.\ was 1673.97/1669. The \nustar{} spectra (see Figure \ref{fig:Mrk463_uf}) show a prominent Fe K$\mathrm{\alpha}$ line and Compton hump, while the \chandra{} spectra for both AGN show a pronounced rise towards the Fe K$\mathrm{\alpha}$ line from 4-6 keV. We therefore added a BORUS component to both AGN. Adding it to the east AGN brought C-stat/d.o.f.\ down to 1517.85/1610, while adding it to the west AGN brought C-stat/d.o.f.\ down to 1504.83/1610. However, this caused the APEC $kT$ on the east AGN to become implausibly small ($\approx8\times10^{-3}$ keV), the APEC $kT$ on the west AGN to become implausibly large ($\approx62$ keV), and the constant on the west AGN to become implausibly large ($\approx 4.65\pm{14}$). To resolve these issues we constrained the normalization constants of the west AGN to lie between a lower limit of 0.5 and an upper limit of 2. C-stat/d.o.f.\ was brought down to 1469.63/1606. Most of the BORUS parameters remained unconstrained, however, so we froze $\mathrm{\cos(\theta_{inc})}$ to 0.17 (i.e., $\mathrm{\theta_{inc}}$ = 80\degree). This provided some improvement in C-stat/d.o.f., but some of the BORUS parameters were still not converging. We therefore froze the APEC $kT$ and normalization of both AGN.
The final estimates for each parameter are tabulated in Table \ref{tab:Mrk463params}. The best fit model is plotted over the data as the solid lines in Figure \ref{fig:Mrk463_uf}. \begin{deluxetable*}{@{\extracolsep{10pt}}lcccccccc@{}} \tablecaption{Parameters for best-fit Mrk463 model.}\label{tab:Mrk463params} \tablewidth{2pt} \tablehead{\colhead{} & \multicolumn{2}{c}{APEC} & \multicolumn{4}{c}{BORUS\tablenotemark{a}} & \multicolumn{2}{c}{POWERLAW}\\ \cline{2-3} \cline{4-7} \cline{8-9} \colhead{Source} & \colhead{$kT$} & \colhead{Norm} & \colhead{log(${N_{\rm{H}}}$)} & \colhead{$\mathrm{CF_{Tor}}$} & \colhead{$\mathrm{\cos(\theta_{inc})}$} & \colhead{Norm} & \colhead{$\Gamma$} &\colhead{Norm}\\ \colhead{} & \colhead{(keV)} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{} } \startdata E AGN & $0.91^{+0.10}_{-0.08}$ & $12.8^{+3.7}_{-8.2}$ & $23.86^{+0.12}_{-0.07}$ & $0.27^{+0.09}_{-0.07}$& $0.17$\tablenotemark{b} & $2.73^{+0.18}_{-0.59}$ & $\leq1.54$ & $27.8^{+5.8}_{-15.9}$\\ W AGN & $1.11^{+0.23}_{-0.20}$ & $1.42^{+7.34}_{-0.66}$ & $23.50^{+0.10}_{-0.22}$ & ${0.28}^{+0.43}_{-0.03}$ & 0.17\tablenotemark{b} & $4.50^{+2.39}_{-3.73}$ & $\geq{1.58}$ & $1.30^{+6.00}_{-0.54}$\\ Src 1 & {} & {} & {} & {} & {} & {} & 2.17\tablenotemark{b} & 4.03\tablenotemark{b}\\ Src 2 & 0.55\tablenotemark{b} & 2.6\tablenotemark{b} & {} & {} & {} & {} & 2.49\tablenotemark{b} & 2.76\tablenotemark{b} \enddata \tablecomments{The instrumental normalization constants for \chandra{} observations 4913 and 18194 were $0.66^{+2.96}_{-0.08}$ and $0.65^{+2.68}_{-0.11}$ for the East AGN, $1.00^{+0.16}_{-0.14}$ and $0.71^{+0.45}_{-0.32}$ for Source 1, and $1.01^{+0.13}_{-0.12}$ and $1.61^{+0.69}_{-0.54}$ for Source 2. No errors on the normalization constants for the West AGN could be calculated because a hard lower limit of 0.5 and a hard upper limit of 2.0 were placed on them. 
The normalizations of the model components are in units of $10^{-6}$ cts $\mathrm{s^{-1}\:keV^{-1}}$ for APEC, $10^{-3}$cts $\mathrm{s^{-1}\:keV^{-1}}$ for BORUS, and $10^{-6}$ cts $\mathrm{s^{-1}\:keV^{-1}}$ for POWERLAW. } \tablenotetext{a}{Parameters for this component recovered by freezing APEC component and refitting.} \tablenotetext{b}{Frozen at this value} \end{deluxetable*} \begin{figure} \plotone{Mrk463_APEC_frozen_spectrum_delchi.pdf} \caption{Unfolded spectrum and model for Mrk 463. The model shown is the fit with all the APEC parameters frozen (i.e. the fit that was used to recover the BORUS parameters). Black denotes \chandra{} observation 4913 of the east AGN. Red denotes \chandra{} observation 4913 of the west AGN. Green denotes \chandra{} observation 18194 of the east AGN. Blue denotes \chandra{} observation 18194 of the west AGN. Cyan and magenta represent \nustar{} FPMA and FPMB data respectively. The \chandra{} observations of Source 1 and Source 2 are depicted in light grey.} \label{fig:Mrk463_uf} \end{figure} \subsection{NGC 6890} NGC 6890 is a spiral galaxy. Its optical activity has traditionally been classified as Seyfert 2 \citep[e.g.,][]{1996ApJ...471..190R}, but it has also been more specifically classified as a S1.9 \citep{2006A&A...455..773V}. NGC 6890's MIR spectrum is dominated by a red continuum suggestive of cool dust and polycyclic aromatic hydrocarbon features \citep{2006AJ....132..401B}, where the latter is indicative of star formation \citep{2009ApJ...705...14D}. Based on its {\it Spitzer} IRS spectrum, \citet{2010ApJ...709.1257T} argued roughly 90\% of the 19$\mathrm{\mu m}$ emission is due to the AGN. Its MIR morphology is circular and centered on the nucleus \citep{2014MNRAS.439.1648A} but might be somewhat extended \citep{2016ApJ...822..109A}. The \xmm{} observations of NGC~6890 were first analyzed by \citet{2007ApJ...657..167S}, who fit it with an unabsorbed powerlaw. 
However, since its 2-10 keV X-ray flux was significantly depressed compared to its [\ion{O}{3}] flux, they still regarded it as a Compton-thick AGN. In contrast, \citet{2011ApJ...729...52L} found that the spectrum was best fit with two absorbed powerlaws. They also detected an Fe K$\mathrm{\alpha}$ line at the 93\% confidence level. The equivalent width was 1.21 keV if they used a global fit, and 0.93 keV if they used a local fit. \citet{2011MNRAS.413.1206B} presented a more detailed analysis of the \xmm{} data, including fits with PEXMON and TORUS models to account for a reflection component. They measured a hydrogen column density of $\mathrm{10^{21}\:cm^{-2}}$ for this reflection component, which would put it outside the Compton-thick regime. \subsubsection{X-Ray Observations \& Data Extraction} NGC~6890 has not been observed by \chandra, but has been observed by \xmm\ once, by \swift\ twice, and by \nustar\ once. The \nustar\ observation was concurrent with the second \swift\ observation. Details of these observations, including their observation dates and exposure times, are in Table~\ref{tab:xraydata}. For the \xmm{} data, we filtered out times with high background, defined as when the count rate in the 10–12 keV range was $\rm{>}$0.4 cts $\rm{s^{-1}}$ for the pn and $\rm{>}$0.35 cts $\rm{s^{-1}}$ for the MOS cameras. We extracted the source spectra with 30\arcsec{} radius circular regions, and background spectra from an annulus of 50–80\arcsec{} with patterns 0–4 for the pn and patterns 0–12 for the MOS cameras. \begin{figure} \centering \plotone{NGC6890_nustar_image_labeled.pdf} \caption{\nustar{} FPMA image of NGC 6890. The \xmm{} data, which have lower angular resolution than \chandra, did not detect any off-nuclear point sources within the \nustar{} beam (shown in red).} \label{fig:NGC6890_image} \end{figure} \subsubsection{X-ray Spectral Fitting} We began with a simple CONSTANT*TBABS*POWERLAW model.
The C-stat/d.o.f.\ for this fit was 1951.31/1580, implying room for improvement. A BORUS component improved C-stat/d.o.f.\ to 1736.27/1576. We then added an APEC component, which improved C-stat/d.o.f.\ to 1724.84/1574. Looking at the unfolded spectrum (Figure \ref{fig:NGC6890_uf}), the BORUS component seems to have a higher intensity in the \swift{} and \nustar{} data than in the \xmm{} data. The APEC component of the \xmm{} data is of similar flux density to the \nustar{} data, but energies in the \xmm{} data above 1 keV do not match up with the \nustar{} data. To test the possibility that the BORUS component was varying, we created fits where $N_{\rm{H}}$ and the BORUS normalization varied between each observation. When these parameters were left free to vary, their values for the \xmm{} MOS1 and MOS2 data were tied to the EPIC pn value, and their \nustar{} FPMB and FPMA values were tied together. The values for the \swift{} observations were left to freely vary independently. We set the cross-normalization constants all to 1. For the case of $N_{\rm{H}}$ varying, C-stat/d.o.f.\ was 1886.20/1576. For the case of the BORUS normalization varying, C-stat/d.o.f.\ was 1828.16/1576. For the case of both $N_{\rm{H}}$ and the BORUS normalization varying, C-stat/d.o.f.\ was 1836.111/1573. The best fit seemed to be the one where only the BORUS normalization varied. However, adding $N_{\rm{H}}$ variability should not have made the fit worse than the fit with the normalization varying alone, which suggests the fit had settled into a local minimum. We therefore kept $N_{\rm{H}}$ free, reset log($N_{\rm{H}}/\rm{cm}^{-2}$) to 25.5 for \xmm{} and 23 for \swift{} and \nustar{}, and refit. This led to a C-stat/d.o.f. = 1810.31/1573. However, the POWERLAW component was underestimating the \xmm{} data. We therefore untied the POWERLAW spectral index from the BORUS spectral index. This allowed the scattered powerlaw to differ from the intrinsic powerlaw input to the BORUS model. This fit had a C-stat/d.o.f.\ of 1711.07/1572.
We next tied the values of $N_{\rm{H}}$ and the BORUS normalization for the second \swift{} observation and \nustar{} together, since these observations were contemporaneous. Lastly we thawed the cross-normalization constants, setting limits of 0.5-2.0 on all of them. The final C-stat/d.o.f.\ was 1699.97/1569, the parameters of the final fit are in Table \ref{tab:NGC6890params}, and the fit is plotted over the data in Figure \ref{fig:NGC6890_uf}. \begin{deluxetable}{@{\extracolsep{10pt}}lccccccccc@{}} \tablecaption{Parameters for best-fit NGC 6890 model.}\label{tab:NGC6890params} \tablewidth{0pt} \tablehead{\colhead{Observation} & \multicolumn{2}{c}{APEC} & \multicolumn{5}{c}{BORUS} & \multicolumn{2}{c}{POWERLAW}\\ \cline{2-3} \cline{4-8} \cline{9-10} \colhead{} & \colhead{$kT$} & \colhead{Norm} & \colhead{$\Gamma$} & \colhead{log($N_{\rm{H}}$)} & \colhead{$\mathrm{CF_{Tor}}$} & \colhead{$\mathrm {\cos(\theta_{inc})}$} & \colhead{Norm} & \colhead{$\Gamma$} &\colhead{Norm}\\ \colhead{} & \colhead{(keV)} & \colhead{} &\colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{} } \startdata \xmm{} & $0.73^{+0.14}_{-0.15}$ & $7.14^{+9.64}_{-3.54}$ & $\leq1.41$ & $24.12^{+0.29}_{-1.10}$ & $\leq0.10$ & $\leq0.10$ & $1.89^{+13.1}_{-0.78}$ & $2.71^{+0.15}_{-0.20}$ & $2.95^{+4.28}_{-0.27}$\\ 1st \swift{} & {} & {} & {} & $23.40^{+0.86}_{-0.88}$ & {} & {} & $5.81^{+6.67}_{-3.88}$ & {} & {}\\ 2nd \swift{}/\nustar{} & {} & {} & {} & $23.01^{+0.02}_{-0.12}$ & {} & {} & $49.5^{+11.4}_{-1.3}$ & {} & {}\\ \enddata \tablecomments{The instrumental normalization constant for \xmm{} MOS2 was $1.46^{+0.19}_{-0.18}$. The rest were pegged at the upper limit of 2. The BORUS $\Gamma$ and $\mathrm{CF_{Tor}}$ constraints were derived by freezing the BORUS log($N_{\rm{H}}$) and Normalization at their best fit values in all observations. 
The normalizations are in units of $10^{-6}$ cts $\mathrm{s^{-1}\:keV^{-1}}$ for APEC, $10^{-3}$ cts $\mathrm{s^{-1}\:keV^{-1}}$ for BORUS, and $10^{-5}$ cts $\mathrm{s^{-1}\:keV^{-1}}$ for POWERLAW. } \end{deluxetable} \begin{figure} \plotone{NGC6890_unfolded_delchi.pdf} \caption{Unfolded spectrum and best-fit model for NGC 6890. Black denotes \xmm{} pn data, red denotes \xmm{} MOS1 data, green denotes \xmm{} MOS2 data, blue denotes data from \swift{} observation 00088188001, cyan denotes data from \swift{} observation 0008818800, and magenta and yellow denote \nustar{} FPMA and FPMB data, respectively.} \label{fig:NGC6890_uf} \end{figure} \section{Discussion} In this section we discuss the implications of our results. \S~4.1 discusses how the intrinsic luminosities of the AGN were derived. \S~4.2 compares these luminosities to those expected from scaling relations. \S~4.3 discusses the obscuration levels measured from the X-ray spectral fits. \S~4.4 describes how Eddington ratios were computed and whether there are any correlations observed with Eddington ratio. Lastly \S~4.5 explains in further detail special features observed in the individual galaxies. 
\begin{deluxetable*}{llccccccc} \tablecaption{Summary of AGN Properties.\label{tab:summary}} \tablewidth{1pt} \tablecolumns{9} \tablehead{ \colhead{Object} & \colhead{Type} & \colhead{$\log{(M_{\rm BH}}$)} & \colhead{$\log{(N_{\rm H})}$}& \colhead{$\log{(L_{\rm 2-10})}$} & \colhead{$\log{(L_{\rm [OIII]})}$} & \colhead{$\log{(L_{\rm MIR})}$} & \colhead{${L_{\rm bol}/L_{\rm Edd}}$} & \colhead{Refs}\\ \colhead{} & \colhead{} & \colhead{(${M_{\odot}}$)} & \colhead{($\mathrm{cm^{-2}}$)} & \colhead{$\mathrm{(erg\:s^{-1})}$} & \colhead{$\mathrm{(erg\:s^{-1})}$} & \colhead{$\mathrm{(erg\:s^{-1})}$} & \colhead{} & \colhead{} } \startdata NGC 1386 & S1i & 7.24 & $\geq24.5$ & $42.29\pm{0.05}$ & 40.16 & $42.39\pm{0.08}$ & $1.38\times10^{-2}$ & 1,2,6,10\\ NGC 3627 & S3 & 6.93 & - & $38.38^{+0.16}_{-0.10}$ & 39.40 & $40.60\pm{0.11}$ & $3.67\times10^{-6}$ & 1,3,6,10\\ NGC 3982 & S1.9 & 6.95 & $\geq25.3$ & $42.83^{+0.13}_{-0.08}$ & 39.87 & $41.56\pm{0.06}$ & $9.50\times10^{-2}$ & 1,4,6,10\\ NGC 4501 & S2 & 7.30 & $22.87^{+0.25}_{-0.15}$ & $41.50^{+0.25}_{-0.11}$ & 39.86 & $40.56\pm{0.06}$ & $1.93\times10^{-3}$ & 1,3,6,10\\ IC 3639 & S1h & 7.01 & $25.00^{+0.06}_{-0.26}$ & $43.07^{+0.18}_{-0.12}$ & 42.0 & $43.52\pm{0.04}$ & 0.146 & 1,4,7,10\\ NGC 4922 & S2 & {} & $23.89^{+0.11}_{-0.17}$ & $42.29^{+0.12}_{-0.47}$ & 42.3 & {} & {} & 1,8\\ NGC 5005 & S3b\tablenotemark{a} & 8.27 & - & $40.17^{+0.04}_{-0.05}$ & 39.03 & $40.78\pm{0.12}$ & $9.67\times10^{-6}$ & 1,4,6,10\\ Mrk 463 & E: S1h & {} & $23.86^{+0.12}_{-0.07}$ & $44.01^{+0.03}_{-0.10}$ & 42.62\tablenotemark{b} & 44.83 & {} & 1,9,11\\ {} & W: S2 & {} & $23.50^{+0.10}_{-0.22}$ & $43.57^{+0.19}_{-0.76}$ & & \nodata & {} & \\ NGC 6890\tablenotemark{c} & S1.9 & 7.07 & & & 42.02 & $42.60\pm{0.09}$ & & 1,5,6,10 \\ --- 2009 Sep & & & $24.12^{+0.29}_{-1.10}$ & $42.25^{+0.89}_{-0.24}$ & & & $1.86\times10^{-2}$ & \\ --- 2018 Mar & & & $23.40^{+0.86}_{-0.88}$ & $42.73^{+0.33}_{-0.48}$ & & & $5.70\times10^{-2}$ & \\ --- 2018 May & & & 
$23.01^{+0.02}_{-0.12}$ & $43.66^{+0.09}_{-0.01}$ & & & 0.530 & \\ \enddata \tablenotetext{a}{Broad component detected in H$\alpha$, no others.} \tablenotetext{b}{Combined $\log{(L_{\rm [OIII]})}$ for E and W components of Mrk~463, not corrected for intrinsic dust extinction.} \tablenotetext{c}{In temporal order: 2009 Sep = \xmm{} observation; 2018 Mar = first \swift{} observation; 2018 May = second \swift\ observation + \nustar{} observation.} \tablecomments{S1i indicates a Seyfert 2 with broad lines detected in the infrared, S1h indicates a Seyfert 2 with a hidden BLR detected in polarized light, S1.9 denotes a Seyfert with broad H$\mathrm{\alpha}$ but no broad H$\mathrm{\beta}$, and S3 indicates a LINER. ${L_{\rm 2-10}}$ is the intrinsic absorption-corrected X-ray luminosity. MIR luminosities are at 12 $\mu$m. Bolometric luminosities were computed using $K_{X}(L_{X})$ from Table 1 of \citet{fred}. Error bars on luminosities are given if available. Refs gives the references for the optical classification, ${M_{\rm BH}}$, ${L_{\rm [OIII]}}$, and ${L_{\rm MIR}}$.} \tablerefs{(1) \citet{bob}, (2) \citet{2002ApJ...579..530W}, (3) \citet{2016ApJ...818...47S}, (4) \citet{2016ApJ...831..134V} \& references therein, (5) \citet{2010MNRAS.406..493M}, (6) \citet{2011MNRAS.414.3084B} and references therein, (7) \citet{2016ApJ...833..245B}, (8) \citet{2021ApJ...908..221L} and references therein, (9) \citet{2005ApJ...634..161H} and references therein, (10) \citet{2015MNRAS.454..766A}, (11) \citet{2016MNRAS.463.2405A}.} \end{deluxetable*} \subsection{Intrinsic Luminosities} For heavily obscured AGN the intrinsic X-ray emission is represented by a powerlaw spectrum emitted from the corona. The majority of this emission is then reprocessed by the torus to give the spectral components that we model using the BORUS model. Only a few percent of the intrinsic emission is transmitted or scattered out as a POWERLAW component.
For all but NGC~6890, the BORUS spectral index was fixed to have the same value as the scattered POWERLAW spectral index. We derived the errorbars on the intrinsic luminosities by turning the upper and lower errors on the BORUS $\Gamma$ and norm into fractional errors, then adding the fractional errors on the two parameters in quadrature to derive the fractional error on the luminosities. For NGC 3627 and NGC 5005, we added a CFLUX component to the POWERLAW components of their models. This component calculates the flux of the model component it is added to when the spectrum is fitted. We then converted the fluxes to luminosities using the Local-Group-corrected redshift distances listed in NED. The errors on intrinsic luminosity for these galaxies were derived from the 90\% confidence intervals reported by the CFLUX component. The 9 AGN in the sample have low observed \xmm{} 2-10 keV luminosities. Recall this can mean one of two things: that the AGN is heavily obscured, or that the AGN has recently deactivated. Observations in the hard X-ray band from \nustar{} are essential for distinguishing between the two scenarios. With hard X-ray data, we can model the spectrum more accurately, and from that model we can estimate the true intrinsic 2-10 keV luminosity. In the case of an obscured AGN, we would expect to see additional flux at higher energies, where the photons have enough energy to penetrate the obscuring torus. This would lead to a modeled intrinsic X-ray luminosity that is higher than that originally derived from observed 2-10 keV fluxes. If an AGN has recently deactivated, we will not observe additional X-rays from the AGN at higher energies. This means the 2-10 keV band will capture most of the AGN's X-rays, and so the intrinsic luminosity inferred from the model will be similar to that inferred with the 2-10 keV observed fluxes alone.
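The two calculations described above, the $4\pi d^{2}$ flux-to-luminosity conversion and the quadrature combination of fractional errors, can be sketched as follows; the flux, distance, and error values in the usage lines are purely illustrative, not measurements from this paper:

```python
import math

MPC_TO_CM = 3.0857e24  # centimeters per megaparsec

def flux_to_luminosity(flux_cgs, distance_mpc):
    """L = 4*pi*d^2*F for an isotropic source; flux in erg s^-1 cm^-2,
    distance in Mpc, returned luminosity in erg s^-1."""
    d_cm = distance_mpc * MPC_TO_CM
    return 4.0 * math.pi * d_cm ** 2 * flux_cgs

def combined_fractional_error(*fractional_errors):
    """Add independent fractional errors in quadrature, as done here for
    the BORUS Gamma and normalization uncertainties."""
    return math.sqrt(sum(f * f for f in fractional_errors))

# Illustrative values only: a 1e-12 erg/s/cm^2 flux at 20 Mpc, with 10%
# and 20% fractional errors on the two fit parameters.
L = flux_to_luminosity(1e-12, 20.0)
frac = combined_fractional_error(0.1, 0.2)
print(f"L = {L:.2e} erg/s, frac. err. = {frac:.3f}")
```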
For most of the AGN in the sample, there was a jump by several orders of magnitude between the observed and intrinsic X-ray luminosities. This indicates that they are obscured AGN, as the hard X-ray data indicate their modeled spectra have to be more luminous in the 2-10 keV band than directly observed. For NGC 3627 and NGC 5005 however, the change in X-ray luminosity between observed and intrinsic was within an order of magnitude. This indicates that they are not as heavily obscured as the other AGN in the sample. The intrinsic X-ray luminosity of AGN generally correlates with its [\ion{O}{3}] luminosity and MIR luminosity. For this reason we drew [\ion{O}{3}] and $\mathrm{12\mu m}$ luminosities from the literature. We started with the [\ion{O}{3}] luminosities shown in Figure \ref{fig:Proposal_Plot}, which were derived from emission line fluxes in \citet{2017ApJ...846..102M}. These fluxes are themselves generally the average of multiple fluxes from papers in the 1990s, which themselves are often drawn from publications in the 1980s. In all cases, these values were not reddening-corrected. We therefore decided to rely on reddening-corrected intrinsic [\ion{O}{3}] luminosities from more recent literature. For consistency's sake we use the \citet{2011MNRAS.414.3084B} values for the [\ion{O}{3}] luminosities when available. Mrk~463 does not have any reddening-corrected luminosities, but all the other AGN do. As expected, in most cases the updated, reddening-corrected [\ion{O}{3}] luminosities are greater than the original, uncorrected values, though this was not the case for NGC 1386, whose updated value is 73\% lower. For NGC 1386, the \citet{2017ApJ...846..102M} [\ion{O}{3}] flux was derived from an average of the fluxes from \citet{1997ApJ...486..132B} and \citet{1994ApJ...436..586M}. The two fluxes differ wildly: the former is $\mathrm{5.07\times10^{-12}\, erg\, s^{-1}\, cm^{-2}}$ whereas the latter is $\mathrm{7.94\times10^{-13}\, erg\, s^{-1}\, cm^{-2}}$.
This skews the luminosity high. The larger flux value ultimately comes from \citet{1986A&AS...66..335V}, and was measured using an aperture of 2\arcsec{} $\times$ 4\arcsec{}. The lower flux value ultimately comes from data that was never published. The updated NGC 1386 [\ion{O}{3}] luminosity comes from \citet{2011MNRAS.414.3084B}. They reference \citet{1997AJ....114.1345V} which used a smaller slit width of 2.4\arcsec{}. It therefore seems likely that the difference between the original and updated luminosities is due to a difference in aperture sizes. However, it should be noted that even if the \citet{2017ApJ...846..102M} values of the [\ion{O}{3}] luminosity are used, our conclusions in \S~4.2 do not change substantially. NGC 3627 still remains the outlier amongst the AGN in the sample. The $\mathrm{12\mu m}$ luminosities are derived mostly from \citet{2015MNRAS.454..766A}, which had the subarcsecond resolution necessary to resolve the nuclear MIR emission and separate it from the overall host galaxy emission. The exceptions to this are NGC 4922, for which no $\mathrm{12\mu m}$ luminosities could be found in the literature, and Mrk 463, for which the $\mathrm{12\mu m}$ luminosity was taken from \citet{2016MNRAS.463.2405A}. The intrinsic 2-10 keV X-ray luminosities, along with the [\ion{O}{3}] and $\mathrm{12\mu m}$ luminosities from the literature, are tabulated in Table \ref{tab:summary}. \subsection{Scaling Relations} \begin{figure} \centering \plotone{Malkan_2017_Data_Annotated_updatedLOIII_Dots_Bigger_NoNGC5953.pdf} \caption{Replot of Figure \ref{fig:Proposal_Plot} but with the 9 galaxies with \nustar{} data with updated [\ion{O}{3}] luminosities (blue squares) compared to values plotted in Figure~1 (black circles). All except Mrk 463 have now been corrected for reddening. As expected most AGN increase in luminosity, though NGC 1386 and NGC 5005 are now less luminous; see text for details. 
} \label{fig:Proposal_plot_updated_LOIII} \end{figure} \begin{figure} \centering \plotone{Malkan_2017_Data_Annotated_updated_Lx_DotsBigger.pdf} \caption{Intrinsic 2-10 keV X-ray luminosities versus updated [\ion{O}{3}] luminosities for the galaxies in our sample. The intrinsic luminosities are plotted alongside their former positions from Figure \ref{fig:Proposal_plot_updated_LOIII}. The Mrk 463 2-10 keV luminosity is the combined luminosity of the eastern and western AGNs. IC 3639 has been moved slightly to the left to better distinguish it from NGC 6890.} \label{fig:Proposal_plot_updated_Lx} \end{figure} \begin{figure} \centering \plotone{Lx_vs_MIR_annotated_white_DotsBigger.pdf} \caption{Intrinsic 2-10 keV X-ray luminosities versus $12\mu {\rm m}$ luminosities for the galaxies in our sample. The blue triangles are the galaxies plotted with observed 2-10 keV luminosities. The black points with errorbars use the intrinsic 2-10 keV luminosities. The red line is the mean $L_{2-10}$ vs $L_{\rm 12\mu m}$ relation for the complete reliable sample in \citet{2015MNRAS.454..766A}. The scatter of this relation is 0.33 dex which is depicted as the light red shaded region. $L_{2-10}$ errors were derived from our measurements as explained in Section 4. Errors on $L_{\rm 12\mu m}$ are derived from the literature.} \label{fig:Lx_vs_LMIR} \end{figure} The X-rays from an AGN originate from the corona, which is located very close to the central black hole. In contrast, the MIR emission from the torus and the [\ion{O}{3}] emission from the NLR originate from much further out. Therefore, if an AGN deactivates the corona will fade out well before the torus and NLR do. We therefore expect a recently deactivated AGN to have an intrinsic X-ray luminosity well below that which is expected based on its [\ion{O}{3}] and MIR luminosities.
If in contrast the AGN is merely heavily obscured, we would expect to find an intrinsic X-ray luminosity consistent with its [\ion{O}{3}] and MIR luminosities. In Figure \ref{fig:Proposal_plot_updated_LOIII} and Figure \ref{fig:Proposal_plot_updated_Lx} we progressively plot the updated [\ion{O}{3}] luminosities and the intrinsic 2-10 keV luminosities of our sample atop the original data from \citet{2017ApJ...846..102M}. The red line is the mean $L_{\rm [OIII]}$ vs $L_{2-10}$ relation for Seyfert 1 galaxies in the 12 $\mathrm{\mu m}$ sample. The blue line is from \citet{2015MNRAS.454.3622B} and represents the mean $L_{\rm [OIII]}$ vs $L_{2-10}$ correlation for the Seyfert galaxies in their sample. It is noteworthy that had we started with the updated [\ion{O}{3}] luminosities, NGC 5005 would not have been in our sample. Most of the galaxies lie within 1 dex of the mean relation when the intrinsic 2-10 keV luminosity is considered. NGC 3982 lies more than 1 dex above it, though still within the scatter of the other Seyfert 2 galaxies in the 12 $\mathrm{{\mu m}}$ sample. NGC 4922 is more than 1 dex below the mean relation. By far the most discrepant galaxy, however, is NGC 3627, located more than 2 dex below the mean correlation. Since the corona is much smaller than the NLR, it is possible these offsets from the mean relation are due to recent increases or decreases in the energy output of the central engine. If these offsets are taken as evidence of recent rising/fading, then the change has occurred over the past 100s-1000s of years. In Figure \ref{fig:Lx_vs_LMIR}, we plot the observed and intrinsic 2-10 keV luminosities of our sample versus their $\mathrm{12\mu m}$ luminosities. The red line is from \citet{2015MNRAS.454..766A} and represents their estimate of the ${L_{\rm 12\mu m}}$ vs ${L_{2-10}}$ correlation using their entire reliable sample. This relation has an intrinsic scatter of 0.33 dex, which is shown as the light red shaded region.
With the original observed estimates of the 2-10 keV luminosity, all the AGN except for NGC 4501 are located more than 0.33 dex below the mean relation. With the absorption-corrected intrinsic 2-10 keV luminosity, NGC 3982, NGC 4501, and NGC 6890 lie more than 1 dex above it. For NGC 6890 this is clearly due to the increase in luminosity observed in the X-ray data. Because the torus is located further out than the corona, this could imply the corona has gotten brighter in recent years for the other two X-ray overluminous AGN as well, while the torus has yet to respond to the increase in luminosity. Once again, NGC 3627 is the furthest below the mean relation, more than 1 dex below it. If this is taken to represent fading of the X-ray corona relative to the torus, this would mean it has faded only recently, over the past few decades; indeed, this is the conclusion of \citet{2020ApJ...905...29E}. However, NGC 3627's $\mathrm{12\mu m}$ emission is distributed throughout the galaxy, so it is unclear how much of the nuclear $\mathrm{12\mu m}$ contribution is from an AGN torus as opposed to star formation. \subsection{Obscuration} Of the galaxies in our sample, all but two have the X-ray spectra typical of obscured AGN. The hydrogen column densities of the AGN are summarized in Table \ref{tab:summary}. We replicate the result that NGC 1386 and IC 3639 are Compton-thick. NGC 3982 is also Compton-thick. NGC 4501, NGC 4922, and both AGNs in Mrk 463 are obscured, but not quite at the Compton-thick level. NGC 6890 was nearly Compton-thick during the time of its \xmm{} observations, but became definitively Compton-thin during later observations. This makes it a new X-ray changing-look AGN \citep[e.g.][]{2003MNRAS.342..422M}. NGC 3627 and NGC 5005 are unobscured.
It is noteworthy that our selection method (searching for AGN that are underluminous in soft X-rays relative to their [\ion{O}{3}] luminosity), which is a common method of selecting Compton-thick AGN candidates, resulted in a sample where only 33\% of the objects were actually Compton-thick at the time of their \nustar{} observations. Most (77\%) of the AGN are indeed heavily obscured ($N_{\rm H} > 10^{23} \,{\rm cm}^{-2}$). \subsection{Eddington Ratios} We computed the bolometric luminosities from the intrinsic 2-10 keV luminosities using the general expression for $\kappa_{X}=L_{\rm bol}/L_{X}$ from Table 1 of \citet{fred}. We then converted these to Eddington ratios using the most recent black hole masses available in the literature. We find that the obscured AGN are accreting at higher rates (i.e. $L_{\rm bol}/L_{\rm Edd}>10^{-3}$) than the two AGN that do not show evidence of obscuration (NGC 3627 and NGC 5005; $L_{\rm bol}/L_{\rm Edd}\sim10^{-6}$). Of note is that the most heavily obscured AGN, IC 3639, has an Eddington ratio of 0.146, more typical of quasars than Seyfert galaxies, and that NGC 6890's Eddington ratio increased by an order of magnitude between its \xmm{} and \nustar{} observations, for a final ratio of 0.530. There is no clear correlation between Eddington ratio and the position of the AGN on the $L_{\rm [OIII]}$ vs $L_{2-10}$ relation for the sample as a whole, as the high Eddington ratio AGN IC 3639 and NGC 6890 are near the mean relation (as are the low Eddington ratio AGN NGC 1386, NGC 4501, and Mrk 463), while the low Eddington ratio NGC 3982 lies more than 1 dex above it. The AGN with the very lowest Eddington ratios (NGC 3627 and NGC 5005) are located at two very different positions in the graph, with NGC 5005 being only 1 dex away from the mean relation, while NGC 3627 lies more than 2 dex away.
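The conversion just described amounts to $L_{\rm bol}=\kappa_X L_X$ followed by division by the Eddington luminosity, $L_{\rm Edd}\approx1.26\times10^{38}\,(M_{\rm BH}/M_{\odot})$ erg s$^{-1}$. A minimal sketch of the arithmetic; the $\kappa_X$ value and black-hole mass below are placeholders for illustration, not our measured parameters (the actual $\kappa_X$ comes from Table 1 of the cited work):

```python
def eddington_luminosity(m_bh_msun):
    """Eddington luminosity in erg/s for a black-hole mass in solar masses."""
    return 1.26e38 * m_bh_msun

def eddington_ratio(l_x, kappa_x, m_bh_msun):
    """L_bol / L_Edd given an intrinsic 2-10 keV luminosity l_x (erg/s).

    kappa_x is the 2-10 keV bolometric correction, L_bol / L_X.
    """
    return kappa_x * l_x / eddington_luminosity(m_bh_msun)

# Illustrative numbers only (kappa_x and the mass are placeholders):
ratio = eddington_ratio(l_x=1.0e42, kappa_x=20.0, m_bh_msun=1.0e7)
```

For these placeholder inputs the ratio is of order $10^{-2}$, i.e. in the Seyfert-like regime rather than the quasar-like regime.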
The same is true for the ${L_{\mathrm{12\mu m}}}$ vs ${L_{2-10}}$ relation, as the low Eddington ratio AGN NGC 3982 and NGC 4501 lie more than 1 dex above the mean relation, like the high Eddington ratio NGC 6890. NGC 5005 is within the intrinsic scatter of the relation, but NGC 3627 is not. \subsection{Notes about Individual Galaxies} \subsubsection{NGC 3627} The \nustar{} data for NGC~3627 were recently analyzed by \citet{2020ApJ...905...29E}, who concluded that they show evidence of a Compton-thick nature for this AGN. Our own analysis of the data (\S~3.2), however, shows no evidence for a reflection component, and thus no evidence for obscuration. This would seem to favor the fading AGN scenario for this galaxy, since it is well below the X-ray luminosity expected for its MIR luminosity. However, it is still possible that a stronger AGN could be hidden behind extremely high levels of obscuration (such that not even the hard X-rays are able to escape). In that case we would still expect strong MIR emission, as the dusty torus would still be heated. Given that the AGN does not dominate over the MIR background of its host galaxy, it seems likely this AGN is intrinsically low-luminosity. If we accept the measured 2-10 keV luminosity of $\mathrm{10^{38.38}\,erg\,s^{-1}}$ as the true intrinsic luminosity, NGC 3627's luminosity is below the Eddington luminosity of a stellar black hole ($\mathrm{1.26\times10^{39}\,erg\,s^{-1}}$ for a 10 $\mathrm{M_{\odot}}$ black hole). It therefore might not even qualify as a currently active AGN by some definitions. \subsubsection{NGC 5005} Lacking a prominent hard X-ray reflection component, NGC 5005 does not present a typical obscured AGN X-ray spectrum (\S~3.7). This could indicate that the AGN is currently inactive, and we are only seeing softer X-ray emission from star formation. However, since its optical spectrum exhibits a broad H$\alpha$ component, the simplest conclusion is that this AGN is actually unobscured.
This is in contrast to many of its classifications in the literature, which refer to it as a Seyfert 2. It is conceivable that the central black hole in this source has faded relative to the BLR, but the rapidity with which this would have to occur, given the 10s-100s of light days size of the BLR, makes this very unlikely. As noted previously, \nustar{} data on NGC 5005 show a broad emission line centered on 5.91 keV, but this is not seen in the \xmm{} and \chandra{} data. Based on our MC simulations, this line is a real feature. Its breadth resembles the relativistic iron lines that have been observed in other AGN \citep[e.g.,][]{2006ApJ...652.1028B, 2013Natur.494..449R, 2020MNRAS.499.1480W}, and its centroid energy, below the 6.4 keV rest-frame energy of the transition, suggests the line is gravitationally redshifted, like some of the relativistically broadened lines observed in Seyfert galaxies \citep[e.g.,][]{2007MNRAS.382..194N}. We fit the \nustar{} spectra of NGC 5005 with the relativistic reflection model RELXILL \citep[version 1.4.3;][]{2014ApJ...782...76G,2014MNRAS.444L.100D,2016A&A...590A..76D}. We first used all the default parameter values for the model except for the iron abundance and redshift, which we froze to solar and the redshift of the galaxy, respectively. The fit was better than both a BORUS+POWERLAW model fit just to the \nustar{} data, and the ZGAUSS*POWERLAW fit used in Section 3. However, the black hole spin parameter, $a$, could not be well constrained, and the reflection fraction was too high to be physical (i.e. reflection fraction $>10$ for spin $a<0.9$). The reflection fraction is defined as the ratio of the amount of observed radiation reflected off of the accretion disk to the amount of radiation directly transmitted to the observer from the corona. For a given spin value of the black hole there is a maximum possible value of this fraction \citep[see Figure 3 in][]{2014MNRAS.444L.100D}.
We therefore fit the spectrum with the black hole spin fixed to a variety of values, with the upper limit on the reflection fraction set to the upper limits from \citet{2014MNRAS.444L.100D}. The C-stat declined continuously as the spin increased, and the best fit was obtained with a near-maximum spin value ($a=0.998$). Given the strength of the broad line in NGC 5005, a high spin is clearly favored, as this allows a higher reflection fraction (in this case, $>12$). We have plotted that fit in Figure \ref{fig:relxill}. The C-stat/d.o.f.\ for this fit was comparable to that of the ZGAUSS+POWERLAW fit, but not lower. \begin{figure} \centering \plotone{NGC5005_relxill_unfolded_a=0998.pdf} \caption{Unfolded spectrum and best-fit model for NGC 5005 \nustar{} data using a TBABS*RELXILL model and realistic reflection fraction values. Black is FPMA data, red is FPMB data. The spin in this case is a=0.998.} \label{fig:relxill} \end{figure} \subsubsection{NGC 6890} NGC 6890 varied in both obscuration and luminosity between the time of its \xmm{} and \nustar{} observations (\S~3.9). The observed change in luminosity makes NGC 6890 different from many other X-ray changing-look Seyfert galaxies, which have been traditionally interpreted as varying in obscuration alone \citep[e.g.][]{2002ApJ...571..234R}, the most famous of which is NGC 1365, which shows rapid variability in absorption levels \citep{2002ApJ...571..234R,2014ApJ...788...76W,2015ApJ...804..107R}. However, other types of changing-look AGN, such as changing-look quasars \citep[which vary between optical classifications; e.g.][]{2020MNRAS.491.4925G}, are thought to indeed be due to physical changes in the accretion disk \citep[e.g.][]{2018ApJ...864...27S,2018MNRAS.480.4468R,2020ApJ...890L..29A} and/or accretion rate \citep[e.g.][]{2015ApJ...800..144L,2016MNRAS.455.1691R,2016MNRAS.457..389M,2017ApJ...835..144G,2018ApJ...858...49W}.
NGC 6890's increase in luminosity by an order of magnitude implies a change in the central engine. This might make it more similar to optical changing-look AGN than to NGC 1365, and/or lend credence to the hypothesis that a decrease in the magnitude of an X-ray reflection component could also be caused by AGN brightening in addition to reduced obscuration \citep{2003MNRAS.342..422M}. \section{Conclusions} In this paper, we presented \nustar{} observations of 9 AGN underluminous in the 2-10 keV X-rays from the 12 $\mathrm{\mu m}$ galaxy sample. We combined these \nustar{} data with \chandra{}, \swift{}, and \xmm{} data as necessary to perform broad-band X-ray spectral fitting and determine whether these AGN were truly intrinsically underluminous and potentially deactivated, or simply heavily obscured. We find that all but NGC 3627 and NGC 5005 are obscured AGN; those two are instead intrinsically low-luminosity. Of the two low-luminosity AGN, NGC 3627 appears not to be active. Since this galaxy preserves NLR [\ion{O}{3}] emission and nuclear MIR emission, we conclude that it is a potentially recently deactivated AGN. \bigskip{} The scientific results reported in this article are based on data obtained from the \chandra{} Data Archive. This work is based on observations obtained with \xmm{}, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA. We acknowledge the use of public data from the \swift{} data archive. This research has made use of data and/or software provided by the High Energy Astrophysics Science Archive Research Center (HEASARC), which is a service of the Astrophysics Science Division at NASA/GSFC and the High Energy Astrophysics Division of the Smithsonian Astrophysical Observatory. This work has made use of data obtained from the \nustar{} mission, a project led by Caltech, funded by NASA and managed by NASA/JPL. MLS wants to thank Lisbeth D.
Jensen for her help in digging through the data used in \citet{2017ApJ...846..102M}. J.A.G. acknowledges support from NASA grant 80NSSC21K1567 and from the Alexander von Humboldt Foundation.
\section{Introduction} \label{sec:introduction} Massive galaxies provide important constraints for our understanding of galaxy evolution. Observational and theoretical progress has led to a reasonably clear picture: Massive galaxies formed most of their stars in relatively short, intense bursts at high redshift that were triggered by major, gas-rich mergers and regulated by the energy injection from star formation and active galactic nuclei \citep[e.g.,][]{hopkins06, granato06, chakrabarti08, narayanan08}. This basic scenario is simple and elegant, and broadly consistent with ensemble studies of statistical samples of galaxies with intense star formation and powerful AGN. However, given the complexity of these processes, neither ensemble studies nor models alone can show conclusively if the different components of this model -- merger, star formation and AGN activity -- interact as postulated, and if this has the predicted impact on the interstellar medium. This is particularly true for the role of the energy injection by the AGN, for which we do not yet have a good physical understanding. The most direct way of overcoming these limitations is through detailed observations of the gas kinematics and energetics in massive galaxies that are in key phases along this sequence. Ideally, such an approach must focus on an {\it in situ} study of massive galaxies during their major phase of growth -- hence we must observe galaxies at high redshift. Here we present an analysis of the warm ionized interstellar medium in two such galaxies, which is based on deep rest-frame optical integral-field spectroscopy obtained with the Very Large Telescope of ESO. SWIRE~J022513.90-043419.9 and SWIRE~J022550.67-042142.2 (SW022513 and SW022550 hereafter) are two obscured quasars at z$\ge$3.5 \citep{polletta08} and are the two most luminous 24$\mu$m emitters at high redshift in the SWIRE survey \citep{lonsdale03}.
We also include the results of a recent CO(4--3) analysis into our discussion, which has been presented by \citet{polletta10}. \citet{polletta08} presented a detailed analysis of the multi-wavelength photometric properties of these galaxies. Both galaxies host powerful starbursts (${\cal L}\sim 10^{13} {\cal L}_{\odot}$) and luminous, obscured quasars with bolometric luminosities $> 10^{13} {\cal L}_{\odot}$. Centimeter radio observations suggest the presence of moderately powerful radio sources of order $10^{25}$ W Hz$^{-1}$ at 1.4 GHz in both targets, which appear too powerful to be entirely powered by star formation. The rest-UV spectrum of SW022550 shows bright, high-ionization emission lines, in particular NV$\lambda$1240, and weak UV continuum emission, which are the typical signatures of luminous, obscured quasars \citep{polletta08}. SW022513 is well detected in the X-ray \citep{pierre07} with a luminosity, ${\cal L}$(2-10 keV) = $8.2\times 10^{44}$ erg s$^{-1}$ in the hard X-ray that is 7 times brighter than the soft X-ray luminosity at 0.5-2 keV. It is thus a luminous absorbed X-ray source, e.g. a factor 3 brighter than the archetypal type-2 quasar CDFS-202 at z$=$3.700 \citep{norman02}. SW022513 is most probably Compton-thick since its hard X-ray band luminosity is only about 1/50 of that measured around 24$\mu$m. \citet{polletta08} also present low-resolution ISAAC longslit spectroscopy in the near-infrared for both targets, and rest-frame UV spectroscopy for SW022550, showing that both sources are luminous line emitters. Recent IRAM Plateau de Bure millimeter interferometry of the CO(4--3) line shows that both galaxies are luminous CO line emitters with ${\cal L}_{CO}\sim 5\times 10^{10} {\cal L}_{\odot}$ K km s$^{-1}$ pc$^2$ each, corresponding to $4\times 10^{10}$ M$_{\odot}$ of cold molecular gas \citep{polletta10}.
SW022550 has a broad, double-horned CO line profile, whereas the CO(4--3) line in SW022513 is also broad, FWHM$=$1000 km s$^{-1}$, and featureless, and is not very well fit with a Gaussian profile. Short gas consumption timescales suggest the galaxies may be near the end of their epoch of intense star formation, which is when galaxy evolution models postulate the impact of the AGN should be greatest \citep{springel05, narayanan08}. Targeting particularly powerful quasars is important to isolate the impact of the AGN from that of the starburst (or potentially a merger). By contrasting our targets with galaxies that have equally intense star formation but less powerful AGN, we may hope to identify the unique signatures of the AGN. In addition we may set upper limits on what impact a luminous, obscured AGN may possibly have on the gas kinematics of its host galaxy. Both aspects are important to quantify the impact of AGN on the evolution of their host galaxies, as they may serve as benchmarks to develop analyses of less powerful, more frequent forms of feedback later on -- and to infer whether these are indeed observationally feasible with present-day instruments. Both galaxies have kpc-sized narrow-line regions (NLRs) that have strongly disturbed gas kinematics and forbidden line emission with widths of up to a few 1000 km s$^{-1}$. SW022513 has complex line profiles with broad, blueshifted components seen in [OIII]$\lambda\lambda$4959,5007 and H$\beta$. These are qualitatively similar to, but broader than, the blue wing previously found by \citet{alexander10} in the submillimeter-selected quasar SMMJ1237+6203. Our analysis focuses on the gas kinematics, including the ionized and molecular gas studied by \citet{polletta10}. We compare with the energy output of SW022513 and SW022550 through AGN and star formation and discuss what physical processes may contribute to driving such an outflow.
Throughout our analysis we adopt a flat cosmology where H$_0$ = 70 km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\Lambda}$=0.7, $\Omega_{M}=0.3$. With this cosmology, the luminosity distance to SW022513 at z=3.42 is 29.7 Gpc. 1 arcsec corresponds to a projected distance of 7.4 kpc. SW022550 is at a luminosity distance D$_L=$34.4 Gpc, and 1 arcsec corresponds to a projected distance of 7.0 kpc. \section{Observations and data reduction} \label{sec:observations} We observed both targets with the SINFONI integral-field spectrograph on the VLT. SINFONI is an image slicer which gives a field of view of 8\arcsec$\times$8\arcsec\ with a pixel scale of~250 mas in the seeing-limited mode. We used the H$+$K grating which covers the near-infrared H and K bands simultaneously from 1.45$\mu$m to 2.45$\mu$m at a spectral resolving power of R$\sim$1500 ($\sim$200 km s$^{-1}$). Both targets were observed in October and November 2009 under good and stable atmospheric conditions. The program was carried out in service mode under Program ID 384.B-0161. We obtained 3 hrs of on-source observing time per target, split into individual exposures of 300 seconds to allow for a good sky subtraction. Since we expected that both sources were smaller than the field of view, we adopted a dither pattern where the sources fall onto different parts of the detector, but remain within the field of view at all times during observations. This allows us to use one exposure to subtract the sky from the subsequent exposure, and makes taking dedicated sky frames unnecessary. Data reduction relies on the standard IRAF tools to reduce longslit spectra \citep{tody93}, which we modified to meet the special requirements of integral-field spectroscopy. Our data reduction has been extensively described elsewhere \citep[e.g.,][]{nesvadba06b, nesvadba07b, nesvadba08b}.
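As a cross-check, the luminosity distances and angular scales adopted above follow from a straightforward numerical integration for the stated flat cosmology; the sketch below uses a trapezoidal rule (the integration scheme and step count are our choices, not prescribed by the analysis):

```python
import math

C_KM_S = 299792.458  # speed of light in km/s

def comoving_distance(z, h0=70.0, om=0.3, ol=0.7, n=10000):
    """Line-of-sight comoving distance in Mpc for a flat LCDM cosmology,
    via trapezoidal integration of dz / E(z)."""
    dz = z / n
    total = 0.0
    for i in range(n + 1):
        zp = i * dz
        e = math.sqrt(om * (1.0 + zp) ** 3 + ol)
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * dz / e
    return (C_KM_S / h0) * total

def luminosity_distance(z, **kw):
    """D_L = (1 + z) * D_C for a flat cosmology, in Mpc."""
    return (1.0 + z) * comoving_distance(z, **kw)

def kpc_per_arcsec(z, **kw):
    """Projected physical scale from the angular-diameter distance."""
    d_a_mpc = comoving_distance(z, **kw) / (1.0 + z)
    return d_a_mpc * 1000.0 / 206265.0  # 1 rad = 206265 arcsec

# z = 3.42: D_L comes out near 29.7 Gpc, and the scale near 7.4 kpc/arcsec
```

Running this for z = 3.42 reproduces the quoted 29.7 Gpc and 7.4 kpc/arcsec to well within a percent.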
The absolute flux scale was determined from standard star observations at the end of each one-hour long observing sequence, and we used the same stars to measure the seeing, which typically has a full-width-at-half-maximum of 0.5\arcsec\ to 0.6\arcsec. \section{Results} \label{sec:results} \subsection{Continuum emission} \label{ssec:continuum} We detect continuum emission at the position expected from the IRAC 3.6$\mu$m imaging in both galaxies. The spectral coverage of our H$+$K data corresponds to 3300\AA\ to 5500\AA\ and to 2900\AA\ to 5000\AA\ in the rest-frame of SW022513 and SW022550, respectively. The continuum is relatively faint in both galaxies, but nonetheless detected at a significance of 1$-$2$\sigma$ per pixel in each spectral bin of SW022513 and SW022550. To obtain a more robust measurement, we collapsed both cubes at wavelengths without strong line emission, and detected the continuum at significances of $>$3$\sigma$ and up to 14$\sigma$ per spatial pixel in the collapsed image over the area of one PSF. We did not attempt to constrain the overall spectral shape of the continuum, but we note that it seems to have a blue slope in both targets, broadly consistent with originating from the AGN. In this case, the continuum could be either produced by scattered or direct light from the AGN, or represent nebular continuum \citep[e.g.,][]{vernet01}. A more detailed discussion of the continuum emission is beyond the scope of this paper. The continuum of SW022550 is compact; in SW022513 it is marginally extended along the North-South axis at $\sim 3\sigma$ significance (per spatial pixel, see Figure~\ref{fig:SW022513_maps}). We also detect very faint continuum emission associated with the faint line emission north of the nucleus in SW022513. \subsection{Emission-line gas} \label{ssec:emissionlinegas} \subsubsection{SW022513} We show the integrated spectrum of SW022513 in Figure~\ref{fig:SW022513_intspec}.
Line properties are broadly consistent with those found by \citet{polletta08} with ISAAC longslit spectroscopy at 3$\times$ lower spectral resolution and lower signal-to-noise ratios. The lines are spectrally well resolved with typical widths of FWHM$\ge$700 km s$^{-1}$. Due to the broad width of the lines we did not resolve the individual components of the [OII]$\lambda\lambda$3726,3729 doublet. SW022513 has luminous [OII]$\lambda$3727, [OIII]$\lambda\lambda$4959, 5007, and H$\beta$ line emission, with [OIII]$\lambda$5007/H$\beta = 6$. [NeV]$\lambda\lambda$3346,3426 fall outside the atmospheric windows, but we detect [NeIII]$\lambda$3869. We will argue in \S\ref{ssec:localizing} that, for galaxies with the overall characteristics of SW022513, the [OIII]/H$\beta$ ratio and the detection of [NeIII] suggest that most of the warm ionized gas is photoionized by the AGN. Line emission in SW022513 is extended over sizes of 1.6\arcsec$\times$2.5\arcsec\ in right ascension and declination, respectively. The [OIII]$\lambda$5007 emission-line morphology is shown in the upper left panel of Figure~\ref{fig:SW022513_maps}. The [OIII]$\lambda$5007 line flux does not peak on the continuum peak, but is offset by 0.75\arcsec\ to the South (corresponding to a projected distance of about 5 kpc). Emission-line profiles are complex. We identify broad, blueshifted components in [OIII]$\lambda\lambda$4959,5007 and H$\beta$. Unlike [NeIII]$\lambda$3869 and [OII]$\lambda\lambda$3726,3729, the H$\beta$ and [OIII]$\lambda$5007 lines are fairly bright and do not suffer blending of several components with uncertain relative line widths. They are therefore particularly suited to investigate their line profiles, and we will focus our discussion on those two lines. Line profile fits to [OIII]$\lambda$4959 yield similar results as [OIII]$\lambda$5007 within the measurement uncertainties. 
The broadest [OIII]$\lambda$5007 component is found near the continuum peak, with FWHM$=$5078 km s$^{-1}$ and an offset of $-$1314 km s$^{-1}$ relative to the narrow [OIII]$\lambda$5007 component measured in the same spectrum. At the same position we also detect a broad component in H$\beta$, with FWHM$=$1000 km s$^{-1}$ and a blueshift of -183~km~s$^{-1}$ relative to the narrow component (which has FWHM=369 km~s$^{-1}$). All emission-line properties and their uncertainties are listed in Tables~\ref{tab:spectraSW022513_contpeak}, \ref{tab:spectraSW022513_OIIIpeak}, and \ref{tab:spectraSW022513_north}. We obtained maps of the relative velocities and line widths from fitting emission lines extracted from apertures of 3 pixels $\times$3 pixels (0.4\arcsec$\times$0.4\arcsec) at all positions where [OIII]$\lambda$5007 line emission was detected at $>$3$\sigma$, adopting the procedure outlined in \citet{nesvadba08b}. To account for the complex profile of the line, we performed fits with 2 Gaussian components for each line. We fitted the [OIII]$\lambda\lambda$4959,5007 doublet simultaneously, requiring that both lines have the same redshift and line width, and a ratio as expected between the two components of R(5007,4959)$=$3. Overall this gives a line fit of 4 Gaussian components, where the narrow and broad components of [OIII]$\lambda$4959 and [OIII]$\lambda$5007 are required to have the same kinematics and a line ratio consistent with their Einstein coefficients. The maps of relative velocities and line widths (full width at half maximum) of the narrow [OIII]$\lambda$5007 component are shown in the upper middle and right-hand panel of Figure~\ref{fig:SW022513_maps}, respectively. Velocities are relatively uniform across most of the source, with a sudden redshift jump of $\sim$200 km s$^{-1}$ northward of the nucleus. The corresponding maps for the broad component are shown in the lower panel of Figure~\ref{fig:SW022513_maps}.
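The constraint scheme described above -- a narrow and a broad component per line, kinematics tied across the [OIII]$\lambda\lambda$4959,5007 doublet, and a fixed 3:1 flux ratio -- can be encoded as a model with a reduced parameter set. A schematic version follows (parameter names and values are illustrative; an actual fit would optimize these parameters against the extracted spectra):

```python
import math

C_KM_S = 299792.458
REST_4959, REST_5007 = 4959.0, 5007.0  # rest wavelengths in Angstrom

def gaussian(x, amp, center, fwhm):
    """Single Gaussian evaluated at x, parameterized by its FWHM."""
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return amp * math.exp(-0.5 * ((x - center) / sigma) ** 2)

def oiii_doublet(wave, z, amp_n, fwhm_n, amp_b, fwhm_b, dv_b):
    """Narrow + broad component for each doublet line.

    The kinematics are shared between 4959 and 5007, and the 5007
    amplitude is fixed to 3x that of 4959. fwhm_* and dv_b are in km/s
    (dv_b shifts the broad component); amplitudes refer to 4959.
    """
    total = 0.0
    for rest, scale in ((REST_4959, 1.0), (REST_5007, 3.0)):
        c_narrow = rest * (1.0 + z)
        c_broad = c_narrow * (1.0 + dv_b / C_KM_S)
        for amp, fwhm, cen in ((amp_n, fwhm_n, c_narrow),
                               (amp_b, fwhm_b, c_broad)):
            fwhm_wave = cen * fwhm / C_KM_S  # velocity width -> wavelength
            total += gaussian(wave, scale * amp, cen, fwhm_wave)
    return total
```

The tied parameterization reduces the four Gaussians to seven free parameters instead of twelve, which is what makes the fit stable at the signal-to-noise of individual spatial pixels.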
We detect a very broad (FWHM$\sim$5000 km s$^{-1}$) component of [OIII]$\lambda\lambda$4959,5007 associated with the continuum peak, and a somewhat less extreme, but nonetheless broad component with FWHM$\sim$1000 km s$^{-1}$ associated with the peak of the [OIII]$\lambda\lambda$4959,5007 emission. To map line ratios relative to [OIII]$\lambda$5007 we also fitted the spatially resolved emission of H$\beta$ and [OII]$\lambda\lambda$3726,3729, where we assumed a single Gaussian distribution for the [OII] doublet, which is not spectrally resolved. In Figure~\ref{fig:SW022513_ratiomaps} we show the maps of line ratios relative to the narrow [OIII]$\lambda$5007 component. The ratios of [OII]$\lambda$3727 and of H$\beta$ to [OIII]$\lambda$5007 are smallest near the [OIII]$\lambda$5007 emission-line peak, and have larger values near the brightest continuum emission. H$\beta$ and [OII]$\lambda\lambda$3726,3729 are too faint to map the kinematics of multiple components. Therefore we extracted integrated spectra associated with the peaks in [OIII]$\lambda\lambda$4959,5007 and continuum emission respectively, and from the faint, extended region to the North. These spectra are shown in Figure~\ref{fig:SW022513_spectra} and their properties are summarized in Tables~\ref{tab:spectraSW022513_contpeak}, \ref{tab:spectraSW022513_OIIIpeak}, and \ref{tab:spectraSW022513_north}. \subsubsection{SW022550} SW022550 is a luminous line emitter in the rest-frame UV as discussed by \citet{polletta08}. Unfortunately at z$=$3.867 it is at a somewhat unfavorable redshift for ground-based observations, where [OIII]$\lambda$5007 and [OII]$\lambda\lambda$3726,3729 fall outside the atmospheric windows. However, we did identify the [OIII]$\lambda$4959 line at $\lambda$=24182 \AA, corresponding to a redshift $z=3.876$, which is consistent with the redshifts measured in the rest-frame UV.
The [OIII]$\lambda$4959 line is very broad, FWHM$=$2212 km s$^{-1}$, not very different from the width of the rest-frame UV lines, which have line widths of up to a few 1000 km s$^{-1}$ \citep{polletta08}. The line flux is F(4959)$=$8.2$\times 10^{-16}$ erg s$^{-1}$ cm$^{-2}$. For F(4959)/F(5007) $=$ 1/3 this corresponds to a [OIII]$\lambda$5007 flux of F(5007)$=$2.5$\times 10^{-15}$ erg s$^{-1}$ cm$^{-2}$. We did not detect H$\beta$ but place an upper limit of $4.3\times 10^{-16}$ erg s$^{-1}$ cm$^{-2}$ at 3$\sigma$ significance, assuming it has a width similar to that of [OIII]$\lambda$4959. We also detect the [NeV]$\lambda\lambda$3346,3426 doublet, although [NeV]$\lambda$3346 is heavily blended with night-sky line residuals, making it difficult to measure anything but the line core. The two lines of the doublet provide redundant kinematic information and are well fitted with a common redshift of z$=$3.861 and line width of FWHM$=$2985 km s$^{-1}$. For [NeV]$\lambda$3426 we measure a flux of $4.3\times 10^{-16}$ erg s$^{-1}$ cm$^{-2}$, and the core of [NeV]$\lambda$3346 is consistent with being 3$\times$ fainter, as expected from the transition probabilities of the two lines. The rest-frame optical spectral properties of SW022550 are listed in Table~\ref{tab:spectraSW022550}. All line emission appears compact and associated with the compact continuum emission. \section{Two quasars with giant narrow-line regions at z$\ge$3.5} \label{ssec:localizing} Our spectroscopic results suggest that the AGN is the dominant source of ionization in our targets. For SW022550 this follows directly from the detection of [NeV]$\lambda$3426. Line emission from this galaxy is not spatially resolved; the size of the seeing disk of 0.6\arcsec\ implies an upper limit on the radius of the NLR of about 2.5 kpc, provided that we did not miss fainter, extended structures due to the somewhat unfavorable redshift of the source (\S\ref{ssec:emissionlinegas}).
At the somewhat lower redshift of SW022513, [NeV]$\lambda$3426 does not fall into the atmospheric windows. We do, however, observe [NeIII]$\lambda$3869 and find a relatively high [OIII]/H$\beta$ ratio of 6. A priori, both features could be produced either by the AGN or by intense star formation in galaxies with high ionization parameters and low metallicities, leading to hot electron temperatures, as, e.g., in high-redshift HII or Lyman-break galaxies \citep[e.g.,][]{pettini01, fosbury03, nesvadba06a, nesvadba07a, nesvadba08a}. \citet{villar08} find that star formation may photoionize parts of the gas in 3/50 type-2 quasars drawn from the SDSS; however, these are much less powerful than the targets we observe, and have less luminous optical lines, e.g., ${\cal L}([OIII])\sim 10^{42}$ erg s$^{-1}$ compared to a few$\times 10^{44}$ erg s$^{-1}$ for SW022513 (Table~\ref{tab:spectraSW022513}). We are not aware of any star-forming galaxy without an AGN and with ${\cal L}([OIII])=10^{44}$ erg s$^{-1}$. In addition, the bright 24$\mu$m and millimeter continuum emission of SW022513 and SW022550 shows that these are dusty galaxies. Dust and metal lines are very efficient coolants, so that for a given ionizing spectrum (e.g., due to intense star formation), we may expect lower electron temperatures than in HII galaxies and LBGs, which would lead to lower [NeIII] fluxes and lower [OIII]/H$\beta$ ratios. The FIR/millimeter properties of SW022513 and SW022550 are indistinguishable from those of submillimeter-selected starburst galaxies at z$\ge$2, which have supersolar metallicities \citep{tecza04, swinbank04, nesvadba07a} and fall near the local mass-metallicity relationship of \citet{tremonti04}. In the absence of an AGN, these galaxies have [OIII]/H$\beta \le 1$ \citep{takata06,nesvadba07a}, and [NeIII] is generally not observed.
For SW022550 and SW022513, the mass-metallicity relationship would imply a gas-phase oxygen abundance of $12+log(O/H)\sim 9.1$, supersolar even when accounting for the (large) scatter in the relationship (and supersolar by 0.4 dex when taken at face value). \citet{polletta08} came to a similar conclusion from the rest-frame UV line ratios of SW022550. Roughly solar metallicities have also been found by \citet{humphrey08} for dusty, massive radio galaxies at high redshift. The line emission in SW022513 is well resolved spatially. This implies that the QSO narrow-line region (NLR) extends to radii of at least 1.25\arcsec\ (corresponding to $\sim$10 kpc). Most notably, the peak of the narrow [OIII]$\lambda\lambda$4959,5007 line is offset by 0.75\arcsec\ (5 kpc) to the south of the continuum peak and the peak of broad [OIII]$\lambda\lambda$4959,5007 emission. A spatial offset between broad and narrow line emission has previously been reported for the submillimeter-selected quasar SMMJ1237$+$6203 at z$=$2.1 \citep{alexander10}. Without additional constraints, this offset could be interpreted as the signature of an outflow of turbulent gas located several tens of kpc from the galaxy, which is itself traced by the narrow-line emitting gas. Our data on SW022513 allow us to perform a more complete analysis, which suggests a different scenario. First, the broad [OIII] line emission is spatially coincident with the continuum peak, implying that the emission arises in the nuclear region of the galaxy and thus near the AGN. We also find that near the peak of narrow [OIII] emission, the [OII]/[OIII] and the H$\beta$/[OIII] line ratios are smaller than near the continuum peak, and hence near the peak of broad [OIII] and the likely location of the AGN (Figure~\ref{fig:SW022513_ratiomaps}). This implies that the narrow-line region is more highly ionized than the gas immediately surrounding the AGN.
This increase in ionization can be caused by a harder radiation field at constant ionization parameter (the ratio of radiation field intensity to gas density), by an increasing ionization parameter, or by a decrease in gas-phase metallicity as we move from the continuum to the narrow-line peak. It could also be a combination of all three \citep[see][for a more detailed analysis in related environments]{humphrey08}. Given the bright FIR luminosity of SW022513, the extinction in the circum-nuclear region is likely to be high. However, large-scale extinction of the emission-line gas cannot alone account for the variation in the [OIII]/[OII] line ratio. The same trend is also found in [OIII]/H$\beta$, and given their similar wavelengths (including the [OIII] line at 4959\AA) the influence of extinction must be relatively small. In relation to the nebula itself, the effect of dust on the ionization of the gas is complex. Dust provides an additional source of cooling and competes directly with the gas for ionizing photons due to its large cross section over a wide range of wavelengths, but it also depletes metals from the gas (some of which provide important cooling lines, like oxygen and carbon); a dusty gas will therefore have a different ionization structure depending on the gas-to-dust ratio and the amount of metal depletion. It is thus not entirely clear what the impact of dust will be, but it could well increase the ionization of the gas. All these scenarios would produce the situation we observe and suggest that the characteristics of the extended ionized gas in SW022513 are consistent with a narrow-line region surrounding a luminous obscured quasar and extending over a radius of $\sim$10 kpc. Radii of a few tens of kpc are expected for the narrow-line regions of the most powerful quasars \citep{netzer04} and are consistent with empirical constraints for optically selected quasars \citep{bennert02} and also type-2 quasars \citep{greene11}.
In addition, the asymmetry of the narrow [OIII] line morphology relative to the nucleus supports this scenario, as narrow-line regions are often highly elongated and asymmetric. Given the bright FIR luminosity of our targets of $10^{13} {\cal L}_{\odot}$ \citep{polletta08}, we expect that these galaxies are highly dust-enshrouded, and that most of the rest-frame UV/optical light is obscured on kpc scales, with ionizing and non-ionizing radiation escaping to larger distances only along relatively few lines of sight that are comparably free of dust. This is analogous to the situation observed in the extended narrow-line regions of local Seyfert 2s \citep[e.g.,][see \citealt{lehnert09} for an example at z$>$2]{veilleux03}. This does not imply that the quasar also dominates the gas kinematics. Quasar illumination cones without a strong mechanical effect are known at low redshift \citep{veilleux03} as well as at z$=$2 \citep{lehnert09}. Note that this applies to radio-quiet quasars. Radio-loud AGN, including radio galaxies, often have much more extended emission-line regions with very energetic emission-line kinematics \citep[e.g.][]{mccarthy96, villar99, baum00, villar03, nesvadba06b, nesvadba08b}. \subsection{Ionized gas mass} \label{ssec:masses} The molecular gas strongly outweighs the ionized gas in strongly star-forming mergers at z$\sim$2 \citep{nesvadba07a}, but not in powerful radio galaxies at similar redshifts \citep[][Nesvadba et al., in prep.]{nesvadba08b}, where much of the interstellar medium appears photoionized by the AGN \citep[e.g.,][]{villar97,villar03,humphrey08} and accelerated by the radio source \citep[e.g.,][]{nesvadba08b}. We will now use the H$\beta$ emission-line flux measured in SW022513 to illustrate that the ionized gas mass in this obscured quasar could plausibly be as large as the molecular gas mass.
Similar to \citet{nesvadba06b,nesvadba08b} we will assume case B recombination to estimate the ionized gas mass from the luminosity of the Balmer lines, setting \begin{eqnarray} {\cal M}_{H\beta}\ = 28.2\times 10^8\ {\cal L}_{H\beta,43}\ n_{e,100}^{-1}\ M_{\odot}, \end{eqnarray} where ${\cal L}_{H\beta,43}$ is the H$\beta$ luminosity in units of $10^{43}$ erg s$^{-1}$, and $n_{e,100}$ is the electron density in units of 100 cm$^{-3}$. This relationship is equivalent to Equation (1) of \citet{nesvadba06b} assuming a Balmer decrement H$\alpha$/H$\beta =$2.9, and follows directly from \citet{osterbrock89}. This estimate has two major uncertainties. First, the [OII]$\lambda\lambda$3726,3729 line doublet is blended, so we have no direct estimate of the electron density. Second, from H$\beta$ alone we cannot measure the extinction. We can loosely constrain these values with rather generic arguments. First, the presence of luminous forbidden emission lines like the [OII]$\lambda\lambda$3726,3729 doublet suggests that electron densities are below the critical densities for collisional deexcitation, a few 1000 cm$^{-3}$, and we will in the following adopt a fiducial value of 1000 cm$^{-3}$ \citep[radio galaxies and submillimeter galaxies at similar redshifts have electron densities of a few 100 cm$^{-3}$;][]{nesvadba06b,nesvadba07b,nesvadba08b}. Second, we will assume an average extinction of about $A_V\sim 2$ mag, which is the average in submillimeter-selected galaxies without strong AGN \citep{smail04} that are the closest analogs to our galaxies in the far-infrared/submillimeter \citep[and consistent with a nuclear A$_V\sim$4.6 mag derived by][]{polletta08}. \citet{nesvadba08b} find A$_V=$1$-$4 mag from the H$\alpha$/H$\beta$ line ratios in the extended ionized gas of z$\sim$2 radio galaxies.
For SW022513, this gives an extinction-corrected H$\beta$ luminosity of ${\cal L}_{H\beta} \sim 7\times 10^{44}$ erg s$^{-1}$, a factor $\sim 10$ higher than the observed value (Table~\ref{tab:spectraSW022513}). With these two assumptions, we find an ionized gas mass of $2\times 10^{10}$ M$_{\odot}$ (compared to $2\times10^{9}$ M$_{\odot}$ if we strictly use the observed H$\beta$ flux). This can obviously only be an order-of-magnitude mass estimate, but it illustrates that the mass of ionized gas in SW022513 is likely between 10\% and 100\% of the molecular gas mass. This is in the same range as found for radio galaxies, and a factor 100$-$1000 higher than for strongly star-forming galaxies. For SW022550 we only have an upper limit on the H$\beta$ flux, which is not very constraining. The above considerations suggest that the ionized gas mass estimate and derived quantities can only be accurate at an order-of-magnitude level. \subsection{What is the systemic redshift?} \label{ssec:systemicredshift} The lack of robust measurements of the systemic redshift is a major uncertainty in all studies of the gas dynamics in galaxies at high redshift, and yet such measurements are indispensable for interpreting the emission-line kinematics. At distances where direct measurements of stellar absorption-line kinematics in AGN hosts are possible, observations suggest that the narrow emission-line components show only moderate offsets from the systemic velocity. This includes Seyfert galaxies \citep{nelson96} as well as powerful type-2 quasars in the SDSS \citep{greene05}, and also galaxies with very broad blueshifted [OIII]$\lambda\lambda$4959,5007 components \citep[][]{wong06}. The narrow [OIII]$\lambda$5007 component in SW022513 has a relatively small velocity gradient, $\Delta v \sim$200 km s$^{-1}$ (Figure~\ref{fig:SW022513_maps}), and lines of different species of ionized gas have very similar velocities.
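The ionized gas masses quoted above follow directly from the case B scaling relation given earlier; a minimal sketch, where the fiducial $n_e = 1000$ cm$^{-3}$ and the factor $\sim$10 extinction correction are the assumptions stated in the text:

```python
def ionized_gas_mass(L_hbeta, n_e):
    """Ionized gas mass in Msun from the case B relation above:
    M = 28.2e8 * (L_Hbeta / 1e43 erg/s) / (n_e / 100 cm^-3) Msun."""
    return 28.2e8 * (L_hbeta / 1e43) / (n_e / 100.0)

# fiducial n_e = 1000 cm^-3; extinction-corrected vs. observed Hbeta
m_corr = ionized_gas_mass(7e44, 1000.0)   # ~2e10 Msun
m_obs  = ionized_gas_mass(7e43, 1000.0)   # ~2e9  Msun
```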
This is broadly consistent with what would be expected for rotation or merger-driven kinematics of galaxies with stellar masses of $\sim 10^{11} M_{\odot}$ and is in the velocity range of submillimeter galaxies at z$\ge$2 \citep[][]{swinbank06,nesvadba07a} as well as nearby ULIRGs \citep{colina05}. We will thus in the following assume that the narrow optical emission lines of SW022513 trace the systemic redshift to within $\le 100$ km s$^{-1}$. Specifically, we will use the redshift of the narrow H$\beta$ component near the continuum peak (which we identify as the nucleus, \S\ref{ssec:localizing}), z$=$3.4247 (Table~\ref{tab:spectraSW022513_contpeak}), to approximate the systemic redshift. H$\beta$ is measured at good signal-to-noise (8.5$\sigma$) and is more representative of the overall gas kinematics than [OIII]$\lambda\lambda$4959,5007, which are very sensitive to ionization effects. \subsection{Blue wings as signatures of outflows} \label{ssec:bluwings} If the narrow lines of AGN host galaxies are approximate tracers of the systemic velocity, then the broad, blueshifted components often found in [OIII]$\lambda\lambda$4959,5007 as well as other lines most likely trace gas that is in outflow \citep[e.g.,][]{heckman81,greene05b, morganti05, nesvadba07b, komossa08, holt08, nesvadba08b, greene09, spoon09a, spoon09b, nesvadba10a}. Scalings between the [OIII]$\lambda$5007 width and AGN power \citep[either radio power,][or bolometric luminosity, \citealt{greene05b}]{heckman81} broadly support this picture. Following these previous analyses, we therefore consider the broad wings of [OIII] and other lines in SW022513 and the broad [OIII] lines in SW022550 as signatures of outflows. In SW022513 the broadest line widths of FWHM$\ge$5000 km s$^{-1}$ seen in [OIII] are broader than those of the other lines in the same aperture, and broader than those of all other lines, including [OIII], in all other apertures.
The [OIII] emissivity is very sensitive to electron temperature and excitation conditions, but not to the total gas mass, so this component is unlikely to trace large amounts of gas. The most strongly blueshifted gas seen in the more representative H$\beta$ line, blueshifted by of order 1000 km s$^{-1}$ relative to systemic, likely gives a more robust estimate of the overall gas kinematics (see also \S\ref{ssec:molgas}). We ran a Monte-Carlo simulation to determine down to what H$\beta$/[OIII]$\lambda$5007 line ratio we could have detected a broad H$\beta$ component with FWHM$=$5000 km s$^{-1}$ and at the same redshift as the broad [OIII]$\lambda\lambda$4959,5007 lines \citep[see \S~3.1 of][for more details]{nesvadba10b}. In 1000 realizations we found that H$\beta$ would have been detected at 3$\sigma$ for [OIII]$\lambda$5007/H$\beta \le 6.6$, compared to [OIII]$\lambda$5007/H$\beta =$3.8 for the narrow component in the same spectrum. If the assumptions of \S\ref{ssec:masses} hold, then this would imply a $3\sigma$ upper limit to the mass of high-velocity ionized gas (with FWHM$=$5000 km s$^{-1}$) of $1.5\times 10^{8} M_{\odot}$ (neglecting extinction), or $1.5\times 10^9$ M$_{\odot}$ (for A$_V=$2 mag; see \S\ref{ssec:masses}), roughly 10\% of the mass found at more moderate velocities. Interestingly, the higher [OIII]$/$H$\beta$ ratio suggests this gas is more highly ionized than the narrow-line emitting gas. Nonetheless, H$\beta$ also shows the most strongly blueshifted components near the nucleus (see Tables \ref{tab:spectraSW022513_contpeak} to \ref{tab:spectraSW022513_north}). This is expected if the gas flows are driven by the AGN and decelerate as they interact with ambient gas at larger radii. \subsection{Comparison with molecular gas} \label{ssec:molgas} \citet{polletta10} recently discussed integrated CO(4--3) emission-line spectroscopy of SW022513 and SW022550 obtained with the IRAM Plateau de Bure Interferometer at a spatial resolution of $\ge$4\arcsec.
Both galaxies have luminous millimeter CO(4--3) line emission that corresponds to $\sim 4\times 10^{10}$ M$_{\odot}$ of molecular gas, assuming a factor of 0.8 M$_{\odot}$ (K km s$^{-1}$ pc$^2$)$^{-1}$ to convert CO luminosity to a molecular gas mass. SW022550 has a double-peaked profile with two components separated by 440 km s$^{-1}$ in velocity. SW022513 has a single, very broad component with FWHM$\sim$1000 km s$^{-1}$, which is not very well fitted with a single Gaussian. In Figure~\ref{fig:SW022550_CO_Hbeta} we compare the CO(4--3) and [OIII]$\lambda$4959 line profiles of SW022550. The redshift of [OIII], z$=$3.876$\pm$0.001, falls between that of the two CO peaks. In SW022513, the CO(4--3) line profile shows an excellent match with the H$\beta$ profile extracted from the nuclear aperture centered on the continuum peak (Figure~\ref{fig:SW022513_CO_Hbeta}). This includes in particular the broad, blueshifted wings with velocities of up to $-$1000 km s$^{-1}$ relative to the narrow H$\beta$ component. As we argued in \S\ref{ssec:systemicredshift}, the narrow H$\beta$ component most likely approximates the systemic velocity to within about 100 km s$^{-1}$. Comparison with telluric night-sky lines suggests that the absolute wavelength scale in our SINFONI data is accurate to about 20 km s$^{-1}$. In the millimeter, wavelengths are measured relative to a local oscillator; uncertainties in the wavelength calibration are therefore much smaller and can be neglected. Formally, our Gaussian fits imply a velocity offset of $-$183$\pm$67 km s$^{-1}$ for an assumed Gaussian line core of the blueshifted H$\beta$ component, and of $-$181$\pm$47 km s$^{-1}$ for the Gaussian core of the CO(4--3) line. Molecular line emission is therefore found with blueshifts of up to $-$900 to $-$1000 km s$^{-1}$ relative to the narrow H$\beta$ component, which, as argued in \S\ref{ssec:systemicredshift}, is our most robust measure of the systemic redshift.
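The conversion between CO line luminosity and molecular gas mass used above is a single multiplicative factor; a hedged sketch (treating the CO(4--3) luminosity as a proxy for CO(1--0) is our assumption, not spelled out in the text):

```python
ALPHA_CO = 0.8   # Msun per (K km/s pc^2); ULIRG-like conversion factor

def molecular_gas_mass(L_co_prime):
    """H2 mass (Msun) from a CO line luminosity L' in K km/s pc^2,
    assuming the CO(4-3) line luminosity approximates CO(1-0)
    (an assumption made here for illustration)."""
    return ALPHA_CO * L_co_prime

# the quoted ~4e10 Msun corresponds to L' = 5e10 K km/s pc^2
L_prime = 4e10 / ALPHA_CO
```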
Regardless of the detailed match between molecular and ionized gas, and neglecting possible projection effects, this is likely to be more than the escape velocity (\S\ref{ssec:willthegasescape}). \section{The wind in a z$=$3.5 quasar} \label{sec:wind} Our targets have all the hallmarks of being in a short, decisive, and very complex stage of their evolution, where star formation, AGN, and gravitational interactions are likely to release an energy of $10^{59-60}$ erg, the equivalent of the binding energy of the host galaxy and its dark-matter halo. At their current luminosities, the AGN in SW022513 and SW022550 may release such energies in a few $10^{6-7}$ yrs, which corresponds to the lifetimes over which AGN may maintain bolometric luminosities $>10^{46}$ erg s$^{-1}$ \citep{hopkins05}. Similarly, the canonical energy release of $10^{51}$ erg per supernova explosion corresponds to a total energy of a few $\times 10^{58}$ erg released in the formation of $10^{11}$ M$_{\odot}$ of stars, and observations have shown that the hydrostatic pressure of intense starbursts may balance gravity in z$=$2 galaxies very similar to our targets \citep{nesvadba07a}. Major mergers naturally release the equivalent of the binding energy of the merger remnant during the interaction. Hence, each of these mechanisms could have a strong influence on the kinematics and thermal state of the gas in our targets. Which mechanism dominates the gas dynamics in SW022550 and SW022513? To address this question, one must consider not only the total energy injection, but also the timescales over which this energy is released, and the efficiency with which the energy output is turned into an input of (thermal or mechanical) energy into the ambient gas.
\citet{narayanan08} find in SPH simulations of a gas-rich major merger associated with intense star formation and AGN activity that the AGN affects the gas more drastically than star formation and gravitational collapse because of the shorter energy injection time. Star formation and gravitational interaction release their energy in a few $10^{8}$ yrs and up to $10^9$ yrs, respectively, much longer than the $10^{6-7}$ yrs lifetime of the AGN. Observations lead to a similar conclusion. The spectral signatures of winds in submillimeter galaxies without a strong AGN component appear much more subtle, with smaller blueshifts and a smaller mass of entrained material \citep{nesvadba07b}. Overall, these galaxies have velocity gradients and line widths of up to a few 100 km s$^{-1}$, and more regular line profiles \citep[e.g.,][]{swinbank05, swinbank06, takata06}, in spite of stellar masses and star-formation rates very similar to those of our targets. Low-redshift ULIRGs, including very advanced mergers, have velocity gradients and line widths in a similar range \citep{colina05}, much smaller than $\sim$1000 km s$^{-1}$ at any stage of merging. SW022513 and SW022550 both show signatures that the AGN does affect the warm ionized ISM, evidenced by the broad, blue components in SW022513 and the very broad [OIII]$\lambda$4959 in SW022550 (FWHM$>$2000 km s$^{-1}$). However, FWHM$>$1000 km s$^{-1}$ is only found around the nucleus in both cases, and, since we cannot constrain extinction very well, it is not clear whether the warm ionized gas represents a major fraction of the ISM (\S\ref{ssec:masses}). The CO profiles of both galaxies, which trace about $4\times 10^{10}$ M$_{\odot}$ of molecular gas in each source, are very different in SW022513 and SW022550 \citep{polletta10}.
In SW022550 the CO profile resembles the fairly common double-peaked profiles found, e.g., in submillimeter galaxies, where they are commonly interpreted as signatures of mergers or rotating disks \citep[and where even disks may be signatures of (advanced) mergers;][]{downes98}, in agreement with the merger models of \citet{narayanan08}. In either case, the barycenter falls roughly in between the two CO peaks. This could suggest that the AGN affects only a small part of the multiphase ISM in SW022550, traced by the broad [OIII] line. SW022513 is, however, different. The CO line profile is irregular, being neither a clear single- nor a double-peaked Gaussian. The line extends to large relative velocities of up to $-$1000 km s$^{-1}$ from systemic (\S\ref{ssec:molgas}), similar to the ionized gas near the AGN (Fig.~\ref{fig:SW022513_CO_Hbeta}). Outflows driven by the AGN are the most common interpretation of blue, broad wings of {\it ionized} gas (\S\ref{ssec:bluwings}), which may suggest that parts of the {\it molecular} gas in SW022513 may be tracing outflowing gas as well. Winds are inherently multi-phase phenomena, where the warm ionized gas seen through optical line emission is being entrained by a hot, tenuous medium. It may therefore {\it a priori} not be entirely surprising to find an associated phase of molecular gas. However, this does raise the fundamental question of how molecular gas, which is 1$-$2 orders of magnitude denser ($n\sim 10^{3-4}$ cm$^{-3}$) than ionized gas ($n\sim 10^{2-3}$ cm$^{-3}$), is accelerated to high velocities. This difficulty could be alleviated if much of the gas is diffuse and distributed across the galaxy, perhaps through tidal effects or a starburst-driven wind \citep{narayanan08}.
It is also possible that the molecular gas is forming {\it in situ} in the outflow, as suggested by recent studies of the detailed mass and energy exchange in turbulent, multiphase gas in nearby extragalactic environments with galaxy-wide shocks, including radio galaxies \citep[][see also Krause \& Alexander 2007]{guillard09, nesvadba10a, ogle10}. In these scenarios, the ionized line emission may arise from the turbulent mixing interfaces associated with the same clouds, which would explain why ionized and molecular gas have similar velocities. \citet[][]{papadopoulos10, papadopoulos08} recently found bright CO(6--5) and CO(3--2) line emission in the nearby radio galaxy 3C293, which is a poster child of a jet-driven outflow \citep{morganti05, emonts05}. CO line ratios in 3C293 suggest that most of the molecular gas is dense, turbulent, and gravitationally unbound, as expected in a multiphase scenario. Possible kinematic signatures of outflows from star-forming AGN host galaxies traced through CO line emission have previously been reported in a few nearby cases \citep{appleton02, sakamoto06, alatalo10, iono07, feruglio10, irwin10}. These galaxies have AGN power and star-formation rates that are lower by 1$-$2 orders of magnitude than in SW022513. Outflow velocities are typically a few 100 km s$^{-1}$, and entrainment rates can be up to a few 100 M$_{\odot}$ yr$^{-1}$. The CO profiles found by \citet{sakamoto06,alatalo10} are well matched by HI absorption-line profiles tracing neutral outflows, providing robust evidence for the outflow interpretation. However, typically these winds include only a small fraction of the CO luminosity, unlike in SW022513, where most of the CO emission appears to be blueshifted. This may partially be due to the higher-excitation gas probed in the J$=$4--3 transition, or it may imply that more gas is being entrained than in low-redshift galaxies.
This is not implausible, given the much greater AGN power in SW022513 compared to nearby Seyfert galaxies, and the large molecular gas mass. We will in the following quantify the necessary and available amounts of kinetic energy in SW022513 to investigate whether the AGN may plausibly drive the gas to the velocities observed. \subsection{Kinetic energy of the gas} \label{ssec:ekingas} In Figure~\ref{fig:SW022513_CO_Hbeta} we compared the line profiles of H$\beta$ and CO, and argued that, by analogy with a large number of previous AGN studies, the systemic velocity is typically well approximated by the narrow component of optical emission lines, whereas the outflowing component is in the blue wing. To estimate the kinetic energy in the outflowing gas, we therefore decompose the CO and H$\beta$ line profiles into a 'systemic' and a blueshifted component, finding that about 40\% of the total CO(4-3) emission-line luminosity is in the outflowing component (and about 30\% of H$\beta$). Assuming that the CO-to-H$_2$ conversion factor and the CO excitation are independent of velocity, this suggests that 40\% of the molecular gas is in the blueshifted component. Based on these assumptions, we estimate the kinetic energy directly from the line profile, by setting \begin{eqnarray} E_{kin} = 1/2\ \sum_{v=0}^{v_{max,blue}}M_i\ v^2_i, \end{eqnarray} where $M_i$ is the gas mass in each velocity bin $v_i$ (relative to the systemic velocity, \S\ref{ssec:systemicredshift}), and including only the flux from the blueshifted component. This corresponds to a kinetic energy of about $4\times 10^{58}$ erg. Adding the ionized gas mass would add another $0.2-2\times 10^{58}$ erg, depending on extinction (see \S\ref{ssec:masses}). Obviously, these estimates have large systematic uncertainties related to the molecular gas mass estimate, excitation conditions, and velocity estimates including projection effects. We therefore consider this estimate accurate to about an order of magnitude. 
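The order-of-magnitude kinetic energy quoted above can be reproduced with an illustrative Gaussian decomposition of the line profile. The centroid and width of the blueshifted component below are assumptions chosen for illustration (roughly matching the broad blue wing discussed in the text), not fitted values from the data:

```python
import numpy as np

MSUN = 1.989e33   # g
KMS = 1e5         # cm/s per km/s

def ekin_blue(m_blue, v_cen, fwhm):
    """E_kin = 1/2 sum_i m_i v_i^2 over a Gaussian blueshifted component
    with total mass m_blue (Msun), centroid v_cen and FWHM (km/s)."""
    sig = fwhm / 2.355
    v = np.linspace(v_cen - 4 * sig, v_cen + 4 * sig, 400)   # velocity bins, km/s
    w = np.exp(-0.5 * ((v - v_cen) / sig) ** 2)
    w /= w.sum()                                             # mass fraction per bin
    return 0.5 * m_blue * MSUN * np.sum(w * (v * KMS) ** 2)  # erg

# 40% of the 4e10 Msun of molecular gas in the blueshifted component
# (Sect. 6.1); centroid and width below are illustrative assumptions
E = ekin_blue(0.4 * 4e10, -400.0, 1000.0)
```

For these illustrative parameters the sum gives a few $\times 10^{58}$ erg, consistent with the order-of-magnitude character of the estimate in the text.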
\subsection{Is the AGN capable of driving the outflow?} At the beginning of this section we argued that a starburst-driven wind and an interaction are unlikely to accelerate significant amounts of gas to velocities of $-$1000 km s$^{-1}$. We will now investigate whether two popular AGN feedback mechanisms may plausibly explain the observed gas kinematics. \subsubsection{Radiation pressure from AGN and starburst} \label{ssec:radiationpressure} Radiation pressure is often invoked to explain how AGN may expel significant amounts of gas from their host galaxies \citep[e.g.,][]{king03,murray05}. \citet{murray05} discussed the outflow velocities that can be produced by radiation pressure from AGN and intense starbursts, finding that (their Equation~17) \begin{eqnarray} V(r) = 2\sigma \sqrt{ (\frac{{\cal L}}{{\cal L}_M} - 1) \ln{\frac{r}{R_0}}}, \end{eqnarray} where $\sigma$ is the stellar velocity dispersion of the host galaxy, and ${\cal L}$ is the quasar luminosity. $R_0$ is the launch radius of the outflow, and $r$ the radius at which the velocity of the wind is measured. ${\cal L}_M = \frac{4\ f_g\ c} {G} \sigma^4$ is a critical luminosity that depends on the stellar velocity dispersion, $\sigma$, the speed of light, $c$, the gravitational constant, $G$, and the gas fraction, $f_g$. For ${\cal L}>{\cal L}_M$, radiation pressure may launch a wind. These equations are appropriate for the limiting case of an optically thick wind, in which case the interaction is most efficient. \citet{polletta08} estimated the stellar mass and the luminosities of the starburst and AGN from the dust and stellar emission of SW022513 with an exquisite set of multi-waveband photometry. They find a stellar mass of M$_{stellar}\sim 2-4\times 10^{11}$ M$_{\odot}$, and bolometric luminosities of $6\times 10^{46}$ erg s$^{-1}$ and $4.8\times 10^{46}$ erg s$^{-1}$ for the starburst and AGN, respectively.
For pressure-supported galaxies with an approximately isothermal mass profile, this mass range corresponds to stellar velocity dispersions of $\sigma= 300-350$ km s$^{-1}$. This can be found from setting M$=c\ \sigma^2\ r_e / G$, where $\sigma$ is the stellar velocity dispersion, $r_e$ the effective radius, $M$ the stellar mass, $G$ the gravitational constant, and $c$ a structure constant. For our calculations we adopted $r_e$ = 2$-$3 kpc and $c=5$ \citep{bender92}. To give a lower limit on the gas fraction, we use the molecular gas mass estimate of \citet{polletta10}, $4\times 10^{10}$ M$_{\odot}$, which gives a gas fraction of order f$_g= 0.1-0.2$ for a stellar mass of M$_{stellar}=2-4\times 10^{11}$ M$_{\odot}$ \citep{polletta08}. Following \citet{murray05}, to launch a wind in a galaxy with M$_{stellar}=2 - 4 \times 10^{11}$ M$_{\odot}$, $\sigma$=300$-$350 km s$^{-1}$, and M$_{gas}=4\times 10^{10}$ M$_{\odot}$ would require ${\cal L}_M=2.5 - 3 \times 10^{47}$ erg s$^{-1}$. This is the most optimistic case consistent with our observational constraints, and would require a bolometric luminosity that is about 2.5$-$3$\times$ greater than observed. To accelerate a wind to a terminal velocity of $\ge 1000$ km s$^{-1}$ (as suggested by the broad blueshifted components in SW022513) would then require a bolometric luminosity of at least $4.5-5\times 10^{47}$ erg s$^{-1}$ for a galaxy with $\sigma =$300$-$350 km s$^{-1}$, a factor 4$-$5 larger than measured. Thus, the luminosity of SW022513 is at least a factor 4 lower than required, even if we consistently use the most optimistic estimates implied by our observations. It can be lower by up to a factor of $\sim$10 for other plausible choices of parameters.
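The numbers above follow from the formulas just quoted; a sketch in cgs units (the specific parameter choices, e.g. $f_g=0.2$, $r_e=2$ kpc, and $R_0=0.3$ kpc, are single points within the ranges discussed, picked for illustration):

```python
import math

G = 6.674e-8        # cm^3 g^-1 s^-2
C_LIGHT = 2.998e10  # cm/s
MSUN = 1.989e33     # g
KPC = 3.086e21      # cm

def sigma_stellar(m_star, r_e, c_struct=5.0):
    """Stellar velocity dispersion (cm/s) from M = c sigma^2 r_e / G."""
    return math.sqrt(G * m_star / (c_struct * r_e))

def critical_luminosity(sigma, f_gas):
    """L_M = 4 f_g c sigma^4 / G (erg/s), Murray et al. (2005)."""
    return 4.0 * f_gas * C_LIGHT * sigma ** 4 / G

def required_luminosity(v_term, sigma, f_gas, r, r0):
    """Luminosity needed to reach terminal velocity v_term (cm/s),
    from inverting V(r) = 2 sigma sqrt((L/L_M - 1) ln(r/R0))."""
    lm = critical_luminosity(sigma, f_gas)
    return lm * (1.0 + (v_term / (2.0 * sigma)) ** 2 / math.log(r / r0))

sig = sigma_stellar(2e11 * MSUN, 2.0 * KPC)    # ~300 km/s
lm = critical_luminosity(sig, 0.2)             # ~2.7e47 erg/s
lreq = required_luminosity(1000.0e5, sig, 0.2, 5.0 * KPC, 0.3 * KPC)
```

For these inputs $\sigma \approx 300$ km s$^{-1}$, ${\cal L}_M \approx 2.7\times 10^{47}$ erg s$^{-1}$, and the luminosity required for a 1000 km s$^{-1}$ terminal velocity is $\sim 5\times 10^{47}$ erg s$^{-1}$, within the ranges quoted in the text.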
For these estimates we assume a launch radius of the wind, $R_0$, of a few 100 pc \citep[the sizes of the circumnuclear molecular disks found in low-redshift ULIRGs,][and the lowest value in the AGN feedback models of \citealt{ciotti09,ciotti10}]{downes98}, and an outflow radius, $r$, of 5~kpc, which roughly corresponds to the radius of the narrow-line region in SW022513. A larger launch radius (perhaps more plausible given the large gas masses) or a larger size of the outflow region increases the required luminosity. This suggests that radiation pressure may only drive the outflow if all parameters take values that are strictly at their lower limits and if we underestimate the luminosity by at least a factor~4. Otherwise, driving an outflow like that in SW022513 through radiation pressure as proposed by \citet{murray05} would require an AGN and starburst with a bolometric luminosity at least a factor of a few higher than in SW022513, although this is one of the most luminous obscured high-z AGN in the SWIRE survey \citep{polletta08}. \subsubsection{Radio source} Examples of outflows in the literature associated with radio-loud AGN are numerous, and include not only powerful radio sources \citep[e.g.,][]{morganti05, holt08, nesvadba08b, spoon09a, spoon09b, nesvadba10a}, but also Seyfert galaxies with low radio power \citep{capetti99, ulvestad99, reynolds09, barbosa09}. Weak radio sources are common in Seyfert galaxies \citep{gallimore06}, but they are not always easily identified, and are typically spatially associated with non-circular gas motions. The same is found for radio-quiet quasars \citep[e.g.,][]{leipski06a,leipski06b}. Note that ``radio-quiet'' does not mean ``radio-silent'', but that the radio power is less than 10$\times$ the optical luminosity of the quasar, irrespective of whether the optical continuum is dominated by direct or scattered AGN light, nebular continuum emission, or the obscured or unobscured stellar continuum of the host galaxy.
Even relatively low-power radio sources can accelerate gas to high velocities of $\ge$1000 km s$^{-1}$ \citep[e.g.][]{capetti99} and produce equally large line widths \citep[e.g.,][]{capetti99, reynolds09}. \citet{heckman81} found that radio power correlates with the FWHM of the [OIII]$\lambda$5007 line. Measuring the [OIII]$\lambda$5007 FWHM in SW022513 from the integrated spectrum, we find that it falls well within the scatter of the \citeauthor{heckman81} relationship (Figure~\ref{fig:heckman}). The same is true for SW022550 and SMMJ1237+6203 \citep[with the FWHM of][]{alexander10}. Weak radio sources, where the internal pressure does not greatly exceed the ambient pressure, may deposit most of their energy in the ambient medium \citep{gallimore06} and may thus be more efficient in inflating bubbles than more powerful radio sources, where only a few percent of the kinetic jet energy is deposited in the ambient medium \citep[e.g.,][]{nesvadba08b}. \citet{polletta08} estimated the radio power of SW022513 (and SW022550) to be of order $10^{25}$ W Hz$^{-1}$. This roughly corresponds to the FRI/FRII divide at low redshift, and appears as {\it moderately} strong only if compared to the very powerful radio-loud quasars and radio galaxies at high redshift. At more moderate redshifts, radio sources with fairly similar power can trigger significant outflows \citep[e.g.][]{morganti05,spoon09a} and have a profound impact on the molecular gas in their hosts \citep[e.g.,][]{papadopoulos10, nesvadba10a, ogle10, alatalo10}. The interaction efficiency of a radio source with the ambient gas depends critically on the gas density \citep[e.g.,][]{capetti99}, which in high-redshift galaxies is likely higher than at low redshift. For example, \citet{deyoung93} suggest that trapping a radio source with about $10^{25}$ W Hz$^{-1}$ in the ambient gas requires a volume-averaged gas density of order 100 cm$^{-3}$. 
Assuming a volume-averaged density may appear somewhat artificial, but has previously been shown to provide reasonable constraints \citep{deyoung93,capetti99}, and is certainly well matched to the crudeness of our observational constraints. For low-redshift galaxies, plausible estimates of the volume-averaged ambient gas density are of order 1-10 cm$^{-3}$ \citep{deyoung93}; however, a value of 100 cm$^{-3}$ is very similar to the average density 50-380 cm$^{-3}$ in SW022513 and SW022550 implied by the molecular mass of $4\times 10^{10}$ M$_{\odot}$ \citep{polletta10}, and assuming a spherical gas distribution with radius 1-2 kpc and a filling factor of unity. As a consequence, the higher gas densities in high redshift galaxies could boost the effect of moderately strong radio sources on their surrounding gas compared to nearby galaxies, where such jets escape more easily. In many similarly strong, low-redshift radio sources the total gas mass involved in the outflow is much smaller than in SW022513 \citep[e.g.,][]{morganti05}. The H$\beta$ luminosity in the component with FWHM$=$1000 km s$^{-1}$ implies an ionized gas mass of $1.2\times 10^{8-9}$ M$_{\odot}$, estimated with the method, assumptions, and uncertainties presented in \S\ref{ssec:masses}. We therefore test explicitly if the radio source in SW022513 may provide enough mechanical energy to produce the observed emission-line kinematics of H$\beta$ and CO. \citet{willott99} estimate the jet kinetic energy from the observed radio power ${\cal L}_{151,28}$ at 151 MHz, setting ${\cal L}_{mech} = 3\times 10^{38}\ f_W^{3/2} {\cal L}_{151,28}^{6/7}$ W. ${\cal L}_{151,28}$ is the observed radio power at 151 MHz in units of $10^{28}$ W Hz$^{-1}$ sr$^{-1}$, and $f_W$ is a fudge factor taking into account the most salient astrophysical uncertainties. Typically $f_W = 10$ \citep[see also][]{cattaneo09}. 
Estimating the 151 MHz radio luminosity from the measured radio fluxes at 1.4 GHz and 610 MHz \citep{polletta08}, we find a mechanical energy injection rate of $2\times 10^{44}$ erg s$^{-1}$. To estimate the total kinetic energy released by the radio source, we need to estimate the AGN lifetime. \citet{blundell99} argue that the most powerful radio sources at high redshift must be very young, about $10^7$ yrs. We will use this estimate as a lower limit on the age of the radio source. However, \citet{sajina07} find that 1/3$-$1/4 of intense infrared-selected starbursts at z$\ge$2 have moderately radio-loud AGN, with radio power and star-formation properties broadly similar to our sources. This suggests that moderately powerful radio sources are not uncommon amongst intensely star forming galaxies at high redshift. Assuming that all such galaxies are moderately bright radio sources at some time of their evolution yields a lower limit on the lifetime of radio sources in individual galaxies relative to the lifetime of the starburst. For typical star-formation timescales of a few $10^8$ yrs, this would suggest that moderately radio-loud AGN at z$\sim$2 have longer radio lifetimes than the most extreme radio sources, up to of order $10^8$ yrs, more resembling the long activity periods of weak radio sources in nearby Seyfert galaxies \citep{gallimore06} than the very short lifetimes (of order $10^7$ yrs) of very powerful radio galaxies at high-z \citep[e.g.,][]{blundell99}. Even if the current energy injection rate of $2\times 10^{44}$ erg s$^{-1}$ in SW022513 were only maintained for $1\times 10^7$ yrs, the radio source would provide sufficient mechanical energy, $7\times 10^{58}$ erg, to accelerate large amounts of molecular and ionized gas as is observed, if all of the jet's mechanical energy is transferred to the gas at a conversion efficiency of 100\%. 
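Both of these order-of-magnitude estimates are straightforward to verify. A molecular gas mass of $4\times 10^{10}$ M$_{\odot}$ ($\simeq 8\times 10^{43}$ g) spread over a uniform sphere of radius $r=1-2$ kpc corresponds to a mean density of hydrogen nuclei of
\begin{equation}
n\simeq\frac{{\rm M}_{gas}}{m_H\,\frac{4}{3}\pi r^3}\approx 50-380\ {\rm cm^{-3}},\nonumber
\end{equation}
as quoted above, and maintaining the energy injection rate of $2\times 10^{44}$ erg s$^{-1}$ over $10^7$ yrs ($\simeq 3\times 10^{14}$ s) gives a total mechanical energy of order $2\times 10^{44}\ {\rm erg\ s^{-1}}\times 3\times 10^{14}\ {\rm s}\approx 6-7\times 10^{58}$ erg.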
For longer timescales, the same could be achieved with lower conversion efficiencies, of order a few tens of percent, closer to those found in very powerful radio galaxies at z$\ge$2 \citep{nesvadba08b}. \subsection{Will the gas escape?} \label{ssec:willthegasescape} AGN-driven outflows were postulated by galaxy evolution models to sweep up and unbind the remaining gas at the end of merger-triggered major episodes of star formation at high redshift. Major gas-rich mergers are the most plausible processes that may trigger intense star formation and AGN activity in high-redshift galaxies; however, our data do not reveal multiple continuum components for either source (\S\ref{ssec:continuum}). This could imply that both components are at distances less than the spatial resolution of our data ($\sim$4$-$5 kpc) or that their stellar components have low surface brightnesses, perhaps because they are being tidally disrupted. Either interpretation would be consistent with an advanced merger stage. The same is suggested by the short gas consumption timescales of a few $\times 10^7$ yrs, about 10\% of the age of the starburst \citep{polletta10}. Finding outflows of molecular and ionized gas associated with an obscured AGN is therefore certainly consistent with this broad evolutionary picture. However, a critical question is whether the gas will escape. In \S\ref{ssec:radiationpressure} we estimated that the velocity dispersion corresponding to the stellar mass of SW022513 derived by \citet{polletta08} is likely about $\sigma=$300-350 km s$^{-1}$, which would imply an escape velocity $\sqrt{2}\ \sigma \sim 500$ km s$^{-1}$. Finding gas with velocities of up to 1000 km s$^{-1}$ certainly suggests that at least a fraction of the outflowing gas may ultimately become unbound, depending on how much energy is being dissipated by the outflow before the gas has reached large radii. 
Recent observations of molecular gas in nearby radio galaxies finding roughly similar amounts of thermal and kinetic energy suggest that dissipational processes in the molecular gas could be important for the dynamics of molecular winds \citep{nesvadba10a}. The most direct evidence for gas escaping from the galaxy would therefore be the detection of outflowing gas that extends well beyond the radius of the host galaxies, with large velocity offsets, and aligned with the radio axis, as has been found in powerful radio galaxies at z$\ge$2 \citep{nesvadba06b, nesvadba07b, nesvadba08b, nesvadba10b}. At any rate, our observations suggest that ``radio-quiet'', powerful AGN at high redshift may accelerate significant fractions of their interstellar medium to velocities near or above the escape velocity even if their radio sources are rather inconspicuous. \section{Summary} We present an analysis of deep rest-frame optical integral-field spectroscopy of two powerful obscured quasars at z$\ge$3.5, SW022513 and SW022550. These are the most luminous 24$\mu$m emitters in the SWIRE survey at z$\ge$2 and have previously been discussed by \citet{polletta08,polletta10}. Our main results are as follows: (1) The optical line emission in both galaxies is dominated by the luminous narrow-line region ionized by the hard quasar spectrum. Emission lines are very broad in SW022550, FWHM$=$2200 km s$^{-1}$ for [OIII]$\lambda$4959. Emission-line profiles in SW022513 are complex. For example, [OIII]$\lambda$5007 is dominated by a `narrow' component with FWHM$=$1275 km s$^{-1}$ and has a broad, blueshifted component of up to FWHM$=$5000 km s$^{-1}$. These line widths reflect the kinematics of the narrow-line region; the broad-line region is obscured. (2) In SW022513 the line emission is spatially extended. We identify the nucleus with the continuum peak and site of broadest [OIII]$\lambda\lambda$4959,5007 line emission (FWHM$=$5000 km s$^{-1}$). 
The peak in narrow [OIII]$\lambda\lambda$4959,5007 line emission is offset by 0.75\arcsec\ to the South ($\sim$5 kpc). This suggests that the ionized gas reaches the largest velocities near the nucleus, and is surrounded by an extended narrow-line region. For bright AGN like the one in SW022513, narrow-line region sizes of tens of kpc are possible. (3) For SW022513, and comparing with the CO(4--3) observations of \citet{polletta10}, we find that the ionized gas mass amounts to 10\%-100\% of the molecular mass. CO(4--3) and H$\beta$ line profiles are well matched, suggesting that both may originate from the same gas clouds. Using the narrow H$\beta$ component near the nucleus to define the systemic redshift (as commonly done in AGN host galaxies) we find that about 40\% of the CO line emission is from gas that does not participate in systemic motion, but is blueshifted to velocities of up to $-$1000 km s$^{-1}$. Such large velocity offsets are not expected from merger models that include starburst-driven winds, nor from direct empirical evidence, but could be a signature of AGN-driven winds as postulated, e.g. by \citet{narayanan08}. In SW022550 the CO(4--3) line profile is double-peaked and very different from that of [OIII], which may suggest that the bulk of molecular gas is not affected by the AGN in a similar way, although the FWHM of the [OIII] line of $>$2000 km s$^{-1}$ suggests that the AGN interacts with the ISM nonetheless. (4) SW022513 and SW022550 host AGN with bolometric luminosities of $\sim 5 \times 10^{46}$ erg s$^{-1}$ and moderately strong radio sources. Comparing with the expected characteristics of radiation-pressure driven winds \citep[following][]{murray05} and mechanical energy injection from the radio source \citep[following][]{willott99} we find that it is difficult to produce the observed velocities with radiation pressure, whereas observations and basic energy considerations suggest the radio source as a possible driver. 
In this case, moderately radio-loud ULIRGs like SW022513 and SW022550 could be `scaled-up' versions of nearby Seyfert galaxies and `radio-quiet' (but not radio-silent) quasars with weak radio sources. \section*{Acknowledgments} We are very grateful to the staff at Paranal and at IRAM for having carried out the observations on which our analysis is based. We also thank the referee, Montserrat Villar-Martin, for comments that were very helpful in improving the paper. \bibliographystyle{mn2e}
\section{Introduction} Loop quantum gravity (LQG) proposes a non-perturbative mathematical formulation of the kinematics of quantum gravity. The Hilbert space is generated by states defined over oriented graphs whose edges are labeled by irreducible representations of the $\mathrm{SU}(2)$ group and whose vertices are decorated with intertwiners ($\mathrm{SU}(2)$ invariant tensors). These are the so-called spin networks. Despite the several advances that have taken place in this field, one of the main challenges faced by the theory is the systematic implementation of the dynamics. Our goal is to focus on a specific model in order to propose a suitable Hamiltonian for it. We use the $\mathrm{U}(N)$ framework for $\mathrm{SU}(2)$-intertwiners \cite{un1,un2,un3} to study the spin network Hilbert space of the 2-vertex graph (2 nodes joined by an arbitrary number $N$ of links) from a new point of view. We identify a global symmetry that selects a homogeneous and isotropic sector of this system \cite{2vertex} and we construct the operators that leave this sector invariant. They will be the building blocks to construct the Hamiltonian operator. On the other hand, the recent spinor representation for LQG \cite{return,Freidel:2010bw,Livine:2011gp} opens a new way to study several aspects of LQG. We apply this new formalism to the 2-vertex graph and we propose a classical action with an interaction term which encodes the effective dynamics of this system. This interaction term is, indeed, the classical counterpart of the quantum Hamiltonian obtained within the $\mathrm{U}(N)$ framework. \section{The $\mathrm{U}(N)$ framework} The $\mathrm{U}(N)$ framework introduced in \cite{un1,un2} is very useful to study the Hilbert space of intertwiners with $N$ legs and to build appropriate semi-classical states \cite{un3}. 
The basic tool is the Schwinger representation of the ${\mathfrak su}(2)$ Lie algebra in terms of a pair of harmonic oscillators $a$ and $b$: $$ J_z=\f12(a^\dagger a-b^\dagger b),\quad J_+=a^\dagger b,\quad J_-=a b^\dagger\,. $$ Labeling the $N$ legs with the index $i$, we identify $\mathrm{SU}(2)$ invariant operators acting on pairs of (possibly equal) legs $i,j$ \cite{un1,un3}: \begin{equation} E_{ij}=a^\dagger_ia_j+b^\dagger_ib_j, \quad (E_{ij}^\dagger=E_{ji}),\quad\qquad F_{ij}=a_i b_j - a_j b_i,\quad (F_{ji}=-F_{ij}).\nonumber \end{equation} The operators $E$ form a $\u(N)$-algebra and they also form a closed algebra together with the operators $F,F^\dagger$. Notice that the diagonal operators give the energy on each leg, $E_{ii}=E_i$, whose value is twice the spin $j_i$ of the ${\mathfrak su}(2)$ representation carried by that leg. This spin $j_i$ is identified geometrically as the area associated to the leg $i$ and the total energy $E=\sum_i E_i$ gives twice the total area $J=\sum_i j_i$ associated to the intertwiner. The $E_{ij}$-operators change the energy/area carried by each leg, while still conserving the total energy, whereas the operators $F_{ij}$ (resp. $F^\dagger_{ij}$) decrease (resp. increase) the total energy $E$ by 2: \begin{equation} [E,E_{ij}]=0,\qquad [E,F_{ij}]=-2F_{ij},\quad [E,F^\dagger_{ij}]=+2F^\dagger_{ij}.\nonumber \end{equation} The operators $E_{ij}$ then allow one to navigate from state to state within each subspace $\cH_N^{(J)}$ of $N$-valent intertwiners with fixed total area $J$; and the operators $F^\dagger_{ij}$ and $F_{ij}$ allow one to go from one subspace $\cH_N^{(J)}$ to the next $\cH_N^{(J\pm 1)}$, thus endowing the full space of $N$-valent intertwiners with a Fock space structure with creation operators $F^\dagger_{ij}$ and annihilation operators $F_{ij}$. Besides, it was proven \cite{un2} that each subspace $\cH_N^{(J)}$ carries an irreducible representation of $\mathrm{U}(N)$ generated by the $E_{ij}$ operators. 
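As a quick check that the Schwinger construction indeed reproduces the ${\mathfrak su}(2)$ algebra, it suffices to use the oscillator commutators $[a,a^\dagger]=[b,b^\dagger]=1$:
\begin{equation}
[J_+,J_-]=[a^\dagger b,\,ab^\dagger]=a^\dagger a\,[b,b^\dagger]-[a,a^\dagger]\,b^\dagger b=a^\dagger a-b^\dagger b=2J_z,\nonumber
\end{equation}
and similarly $[J_z,J_\pm]=\pm J_\pm$. On a leg carrying spin $j_i$, the total occupation number of the two oscillators is $a_i^\dagger a_i+b_i^\dagger b_i=2j_i$, which is the identification $E_i=2j_i$ used above.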
Finally, it is worth pointing out that the operators $E_{ij},F_{ij},F^\dagger_{ij}$ satisfy certain quadratic constraints, which correspond to a matrix algebra \cite{2vertex}. \section{The 2 vertex model and the quantum Hamiltonian} We consider the simplest class of non-trivial graphs for spin network states in LQG: a graph with two vertices linked by $N$ edges, as shown in fig.\ref{2vertexfig}. \begin{figure}[h] \begin{center} \includegraphics[height=35mm]{2_vertex_figure.eps} \caption{The 2-vertex graph with vertices $\alpha$ and $\beta$ and the $N$ edges linking them.} \label{2vertexfig} \end{center} \end{figure} There are matching conditions \cite{un2} imposing that each edge carries a unique $\mathrm{SU}(2)$ representation (same spin seen from $\alpha$ and from $\beta$). This translates into an equal energy condition: \begin{equation} {\mathcal E}_i\,\equiv\,E^{(\alpha)}_i -E^{(\beta)}_i \,=\,0.\nonumber \end{equation} These constraints ${\mathcal E}_k$ turn out to be part of a larger $\mathrm{U}(N)$ symmetry algebra. Indeed, we introduce the more general operators: \begin{equation} {\mathcal E}_{ij}\,\equiv\, E^{(\alpha)}_{ij}-E^{(\beta)}_{ji} \,=\,E^{(\alpha)}_{ij}-(E^{(\beta)}_{ij})^\dagger,\nonumber \end{equation} that form a $\mathrm{U}(N)$ algebra and that reduce to the matching conditions in the case $i=j$. Now, one can show \cite{2vertex} that by imposing the global $\mathrm{U}(N)$-invariance generated by the ${\mathcal E}_{ij}$'s on our 2-vertex system, one obtains a single state $|J\rangle$ for each total boundary area $J$. Thus, the $\mathrm{U}(N)$ invariance is restricting our system to states that are homogeneous and isotropic (the quantum state is the same at every point of space, i.e. at $\alpha$ and $\beta$, and all directions or edges are equivalent). We propose a dynamics for this system compatible with the $\mathrm{U}(N)$-invariance. 
Investigating the structure of the $\mathrm{U}(N)$-invariant operators, we propose the most general $\mathrm{U}(N)$ invariant Hamiltonian (allowing only elementary changes in the total area), up to a renormalization by an $E$-dependent factor: \begin{equation} H \,\equiv\, \lambda\sum_{ij}E^{(\alpha)}_{ij}E^{(\beta)}_{ij}+ \left(\sigma \sum_{ij}F^{(\alpha)}_{ij}F^{(\beta)}_{ij} +\bar{\sigma} \sum_{ij}F^{\alpha\dagger}_{ij}F^{\beta\dagger}_{ij}\right)\,, \end{equation} where the coupling $\lambda$ is real while $\sigma$ can be complex a priori, so that the operator $H$ is Hermitian. We studied the properties of this Hamiltonian on the $\mathrm{U}(N)$ invariant Hilbert space. Its action on states $|J\rangle$ is known and its spectral properties have been analyzed \cite{2vertex}. It turns out that it shares several mathematical analogies with the evolution operator used in loop quantum cosmology. At the physical level, interpreted as a cosmological model, this simple dynamical 2-vertex model also leads generically to a big bounce and avoids the big bang singularity. \section{Spinors and effective dynamics} Based on the Schwinger representation of $\mathrm{SU}(2)$ in terms of harmonic oscillators, it is possible to give a representation of the classical phase space of LQG in terms of spinor variables \cite{return,Livine:2011gp}. The quantization of this classical system will lead us back to the $\mathrm{U}(N)$ framework for intertwiners. Focusing on this classical system, we write an action principle with an effective dynamics of the spinors reflecting the quantum dynamics defined above. Let us start by introducing the usual spinor notation, defining the spinor $z$ and its dual: $$ |z\rangle=\matr{c}{z^0\\z^1}\in{\mathbb C}^2, \qquad \langle z|=\matr{cc}{\bar{z}^0 &\bar{z}^1}, \qquad |z]\equiv \begin{pmatrix}-\bar{z}^1\\\bar{z}^0 \end{pmatrix}. 
$$ In order now to describe $N$-valent intertwiners, we consider $N$ spinors $z_i$ satisfying a closure condition\footnotemark\,that, in terms of their components, is given by: \begin{equation} \sum_i |z_i\rangle\langle z_i|\propto\mathbb{I}\,\Leftrightarrow\, \sum_i z^0_i\,\bar{z}^1_i=0,\quad \sum_i \left|z^0_i\right|^2=\sum_i \left|z^1_i\right|^2=\f12\sum_i \langle z_i|z_i\rangle. \label{closure} \end{equation} Solutions are parameterized in terms of a positive number $\lambda\in{\mathbb R}_+$ and a unitary matrix $u\in\mathrm{U}(N)$ up to $\mathrm{U}(N-2)\times \mathrm{SU}(2)$ right-transformations with $z_i^0=\sqrt{\lambda}\,u_{i1}$ and $z_i^1=\sqrt{\lambda}\,u_{i2}$. \footnotetext{We associate to each spinor $z_i$ a 3-vector $\vec{X}(z_i)=\langle z_i|\vec{\sigma}|z_i\rangle$ by projecting it onto the Pauli matrices. Then the closure constraint is $\sum_i \vec{X}(z_i)=0$ and we identify $\vec{X}(z_i)$ as the normal vector to the dual surface to the leg $i$. } The phase space is defined by the canonical Poisson bracket $\{z^a_i,\bar{z}^b_j\}\,\equiv\,i\,\delta^{ab}\delta_{ij}\,$. Quantization promotes $z_i$ and $\bar{z}_i$ to the annihilation and creation operators of harmonic oscillators. Then the classical matrices $M_{ij}=\langle z_i |z_j \rangle$ and $Q_{ij}=[z_j |z_i\rangle$ are the classical counterparts of the operators $E$ and $F$. The $\mathrm{U}(N)$-action on spinors is the simple $N\times N$ matrix action $(Uz)_i=\sum_j U_{ij}z_j$. We define the ``homogeneous cosmological'' sector as the $\mathrm{U}(N)$-invariant sector, satisfying $\langle z_i^\alpha |z_j^\alpha \rangle=\langle z_i^\beta |z_j^\beta \rangle$ and invariant under $z^\alpha,z^\beta\,\rightarrow Uz^\alpha,\bar{U}z^\beta$; this imposes that all the $\alpha$-spinors are equal to the $\beta$-spinors up to a global phase, $\bar{z}^{(\alpha)}_i\,=\,e^{i\phi}\,z^{(\beta)}_i$. 
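That this parameterization indeed solves the closure condition (\ref{closure}) is immediate from the orthonormality of the first two columns of $u\in\mathrm{U}(N)$:
\begin{equation}
\sum_i z_i^0\,\bar{z}_i^1=\lambda\sum_i u_{i1}\bar{u}_{i2}=0,\qquad \sum_i \left|z_i^0\right|^2=\lambda\sum_i\left|u_{i1}\right|^2=\lambda=\sum_i \left|z_i^1\right|^2,\nonumber
\end{equation}
so that $\sum_i\langle z_i|z_i\rangle=2\lambda$ and the parameter $\lambda$ directly measures the total area carried by the $N$ legs.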
We then obtain a reduced phase space with two parameters, the total area $\lambda$ and its conjugate angle $\phi$ encoding the curvature. Our ansatz for the dynamics of this ``cosmological'' sector is: \begin{equation} S_{inv}[\lambda,\phi] \,=\, -2 \int dt\,\left(\lambda \partial_t \phi -\lambda^2\left(\gamma^0-\gamma^+e^{2i\phi} -\gamma^-e^{-2i\phi}\right)\right), \end{equation} which corresponds to the quantum Hamiltonian defined above. In this classical case, the equations of motion can be solved exactly \cite{return}, displaying certain interesting analogies with (the effective dynamics of) loop quantum cosmology, showing that the dynamics of the $\mathrm{U}(N)$-invariant sector of the 2-vertex graph model can be interpreted as describing homogeneous and isotropic cosmology. \section{Conclusions} The $\mathrm{U}(N)$ framework and the spinor representation introduced and studied in \cite{un1,un2,un3,2vertex,return,Freidel:2010bw,Livine:2011gp} represent a new and refreshing way to tackle several important issues in loop quantum gravity. In this work we have discussed these new frameworks and we have reviewed a proposal for the dynamics of the homogeneous and isotropic sector of the model, both at the quantum (using the $\mathrm{U}(N)$ framework) and the classical (using spinors) level. In this process, we have described the main features of the $\mathrm{U}(N)$ framework, like the Fock space structure of the Hilbert space of intertwiners with $N$ legs. We further used this $\mathrm{U}(N)$ structure on the 2-vertex graph to define a symmetry reduction to the homogeneous and isotropic sector. We have then introduced a Hamiltonian consistent with this symmetry reduction, which can be solved exactly and shown to be analogous to the dynamics of loop quantum cosmology. \section*{Acknowledgments} This work was in part supported by the Spanish MICINN research grants FIS2008-01980, FIS2009-11893 and ESP2007-66542-C04-01 and by the grant NSF-PHY-0968871. 
IG is supported by the Department of Education of the Basque Government under the ``Formaci\'{o}n de Investigadores'' program. \section*{References}
\section{Introduction} Conformally invariant fields play a very important role in physics. In particular, the success of string theory is largely due to the fact that local conformal invariance provides us with the means of exactly solving the underlying 2D problem. Conformal invariance is pertinent not only to deal with the 2D dynamics on the string world sheet, but also manifests itself in field theoretical implications of string theory like the AdS/CFT correspondence suggesting new calculational methods to describe a strongly coupled regime. On the other hand, conformally invariant fields have important implications in cosmology of the early Universe. Cosmological evolution driven by the conformal anomaly of conformally invariant fields \cite{FHH,Starobinsky} represented perhaps the first examples of self-consistent inflationary models. These earlier works disregarded the formulation of initial conditions which later were considered in the form of the no-boundary proposal \cite{HH} within the conformal anomaly context and, more recently, using the AdS/CFT correspondence \cite{HHR}. Some of these ideas were recently generalized in a new model of quantum initial conditions for the Universe \cite{slih}. In this model, one considers a (possibly large) number of conformal fields as the initial matter content of the Universe, also endowed with a cosmological constant, and allows the possibility that the initial state of the Universe be a mixed state, rather than a pure state as in the Hartle-Hawking proposal. Using the conformal invariance, it is then possible in this model to compute the statistical sum in quantum gravity of spatially closed cosmologies \cite{slih}. Indeed, the initial state is represented by the microcanonical density matrix \cite{why} whose statistical sum can be calculated within the $1/N_{\rm cdf}$-expansion in the number, $N_{\rm cdf}$, of conformally invariant quantum fields under the assumption that they outnumber all other degrees of freedom. 
This statistical sum is dominated by the set of the quasi-thermal cosmological instantons. A first very interesting outcome of this calculation is the fact that its consistency requires the effective cosmological constant of the early Universe to belong to a finite range, strictly bounded from above and from below \cite{slih}. It also shows that the vacuum Hartle-Hawking instantons are excluded from the initial conditions, having {\em infinite positive} Euclidean gravitational effective action \cite{slih} due to the contribution of the conformal anomaly. Here we are going to analyze the cosmological evolution in this model with the initial conditions set by the instantons of \cite{slih}. In particular, we will derive the modified Friedmann equation incorporating the effect of the conformal anomaly at late radiation and matter domination stages. As will be shown, this equation has several interesting properties. First it shows that the vacuum (Casimir) part of the energy density is ``degravitated'' via the effect of the conformal anomaly. Namely the Casimir energy does not weigh. Second, we will show that this equation, together with the recovery of the general relativistic behavior, can feature a stage of cosmological acceleration followed by what we call a {\em big boost} singularity. At this singularity the scale factor acceleration grows in finite proper time up to infinity with a finite limiting value of the Hubble factor, when the Universe again enters a quantum phase demanding for its description a UV completion of the low-energy semiclassical theory. Finally we discuss the possibility of realizing this scenario within the AdS/CFT and braneworld setups, in particular when the conformal anomaly and its effective action are induced on 4D boundary/brane from the type IIB supergravity theory in the 5D bulk. We also comment on the relation between our model and the DGP setup. 
\section{Cosmological initial conditions generated by the conformal anomaly} The statistical sum for the microcanonical ensemble in spatially closed cosmology ($S^3$-topology of spatial sections) was shown to be represented by the path integral over the periodic scale factor $a(\tau)$ and lapse function $N(\tau)$ of the minisuperspace metric \begin{eqnarray} ds^2 = N^2(\tau)\,d\tau^2+a^2(\tau)\,d^2\Omega^{(3)} \label{FRW} \end{eqnarray} on the toroidal spacetime of $S^1\times S^3$ topology \cite{why} \begin{eqnarray} e^{-\varGamma}=\!\!\int\limits_{\,\,\rm periodic} \!\!\!\! D[\,a,N\,]\; e^{-\varGamma_E[\,a,\,N\,]}. \label{1} \end{eqnarray} Here $\varGamma_E[\,a,\,N]$ is the Euclidean effective action of all inhomogeneous ``matter" fields which include also metric perturbations on minisuperspace background of (\ref{FRW}). Under the assumption that the system is dominated by free matter fields conformally coupled to gravity this action is exactly calculable by the conformal transform from (\ref{FRW}) to static Einstein metric with $a={\rm const}$ \cite{slih}. In units of the Planck mass $m_P=(3\pi/4G)^{1/2}$ it reads \begin{eqnarray} &&\varGamma_E[\,a,N\,]=m_P^2\int d\tau\,N \left\{-aa'^2 -a+ \frac\Lambda3 a^3 +B\!\left(\frac{a'^2}{a} \label{FrieEu} -\frac{a'^4}{6 a}\right) +\frac{B}{2a}\,\right\}+ F(\eta),\\ &&F(\eta)=\pm\sum_{\omega}\ln\big(1\mp e^{-\omega\eta}\big),\,\,\,\,\, \eta=\int d\tau N/a. \label{effaction} \end{eqnarray} Here $a'\equiv da/Nd\tau$, the first three terms in curly brackets represent the classical Einstein action with a primordial cosmological constant $\Lambda$, the $B$-terms correspond to the contribution of the conformal anomaly and the contribution of the vacuum (Casimir) energy $(B/2a)$ of conformal fields on a static Einstein spacetime. 
$F(\eta)$ is a free energy of these fields -- a typical boson or fermion sum over field oscillators with energies $\omega$ on a unit 3-sphere, $\eta$ playing the role of the inverse temperature --- an overall circumference of the toroidal instanton measured in units of the conformal time. The constant $B$, \begin{eqnarray} B=\frac{3\beta}{4 m_P^2}=\frac{\beta G}\pi, \label{B} \end{eqnarray} is determined by the coefficient $\beta$ of the topological Gauss-Bonnet invariant $E = R_{\mu\nu\alpha\gamma}^2-4R_{\mu\nu}^2 + R^2$ in the overall conformal anomaly of quantum fields \begin{equation} g_{\mu\nu}\frac{\delta \varGamma_E}{\delta g_{\mu\nu}} = \frac{1}{4(4\pi)^2}g^{1/2} \left(\alpha \Box R + \beta E + \gamma C_{\mu\nu\alpha\beta}^2\right), \label{anomaly} \end{equation} ($C^2_{\mu\nu\alpha\beta}$ is the Weyl tensor squared term). For the model of $N_0$ scalar, $N_{1/2}$ Weyl spinor and $N_{1}$ gauge vector fields it reads \cite{Duffanomaly} \begin{eqnarray} \beta=\frac1{360}\,\big(2 N_0+11 N_{1/2}+ 124 N_{1}\big). \label{100} \end{eqnarray} The coefficient $\gamma$ does not contribute to (\ref{FrieEu}) because the Weyl tensor vanishes for any FRW metric. The situation with the coefficient $\alpha$ is more complicated. A nonvanishing $\alpha$ induces higher derivative terms $\sim \alpha (a'')^2$ in the action and, therefore, adds one extra degree of freedom to the minisuperspace sector of $a$ and $N$ and results in instabilities\footnote{In Einstein theory this sector does not contain physical degrees of freedom at all, which solves the problem of the formal ghost nature of $a$ in the Einstein Lagrangian. Addition of higher derivative term for $a$ does not formally lead to a ghost -- the additional degree of freedom has a good sign of the kinetic term as it happens in $f(R)$-theories, but still leads to instabilities discovered in \cite{Starobinsky}.}. 
But $\alpha$ can be renormalized to zero by adding a finite {\em local} counterterm $\sim R^2$ admissible by the renormalization theory. We assume this {\em number of degrees of freedom preserving} renormalization to keep the theory consistent at both the classical and quantum levels \cite{slih}. It is interesting that this finite renormalization changes the value of the Casimir energy of conformal fields in closed Einstein cosmology in such a way that for all spins this energy is universally expressed in terms of the same conformal anomaly coefficient $B$ (corresponding to the $B/2a$ term in (\ref{FrieEu})) \cite{slih}. As we will see, this leads to the gravitational screening of the Casimir energy, mediated by the conformal anomaly of quantum fields. Ultimately, the effective action (\ref{FrieEu}) contains only two dimensionful constants -- the Planck mass squared (or the gravitational constant) $m_P^2=3\pi/4G$ and the cosmological constant $\Lambda$. They have to be considered as renormalized quantities. Indeed, the effective action of conformal fields contains divergences, the quartic and quadratic ones being absorbed by the renormalization of the initially singular bare cosmological and gravitational constants to yield finite renormalized $m_P^2$ and $\Lambda$. Logarithmically divergent counterterms have the same structure as curvature invariants in the anomaly (\ref{anomaly}). When integrated over the spacetime closed toroidal FRW instantons they identically vanish because $\Box R$ is a total derivative, the Euler number of $S^3\times S^1$ is zero, $\int d^4x g^{1/2}E=0$, and $C_{\mu\nu\alpha\beta}=0$. 
There is however a finite tail of these vanishing logarithmic divergences in the form of the conformal anomaly action, which incorporates the coefficient $\beta$ of $E$ in (\ref{anomaly}) and constitutes a major contribution to $\varGamma_E$ --- the first two $B$-dependent terms of (\ref{effaction})\footnote{These terms can be derived from the metric-dependent Riegert action \cite{Riegert} or the action in terms of the conformal factor relating two metrics \cite{FrTs,BMZ}, and generalize the action of \cite{FHH} to the case of a spatially closed cosmology with $\alpha=0$.}. Thus, in fact, this model, when considered in the leading order of the $1/N$-expansion (thereby disregarding loop effects of the graviton and other non-conformal fields), is renormalizable in the minisuperspace sector of the theory. The path integral (\ref{1}) is dominated by the saddle points --- solutions of the equation $\delta\varGamma_E/\delta N(\tau)=0$, which reads \begin{eqnarray} &&-\frac{a'^2}{a^2}+\frac{1}{a^2} -B \left(\frac12\,\frac{a'^4}{a^4} -\frac{a'^2}{a^4}\right) = \frac\Lambda3+\frac{C}{ a^4}, \label{efeq}\\ &&C = \frac{B}2 +\frac{dF(\eta)}{d\eta},\,\,\,\, \eta = 2k \int_{\tau_-}^{\tau_+} \frac{d\tau}{a}. \label{bootstrap} \end{eqnarray} Note that the usual (Euclidean) Friedmann equation is modified by the anomalous $B$-term and the radiation term $C/a^4$. The constant $C$ sets the amount of radiation and satisfies the bootstrap equation (\ref{bootstrap}), where $B/2$ is the contribution of the Casimir energy, and \begin{eqnarray} \frac{dF(\eta)}{d\eta}= \sum_\omega\frac{\omega}{e^{\omega\eta}\mp 1} \label{dFdeta} \end{eqnarray} is the energy of the gas of thermally excited particles with the inverse temperature $\eta$. The latter is given in (\ref{bootstrap}) by the $k$-fold integral between the turning points of the scale factor history $a(\tau)$, $\dot a(\tau_\pm)=0$. 
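As a quick numerical sanity check of the last relation (an illustrative Python sketch with an arbitrary sample spectrum, not taken from the text), a bosonic free energy of the form $F(\eta)=\sum_\omega\ln(1-e^{-\omega\eta})$, consistent with the conventions above, indeed differentiates to the thermal sum $\sum_\omega \omega/(e^{\omega\eta}-1)$:

```python
import math

# Illustrative check (the sample spectrum is arbitrary, not the S^3 spectrum):
# for a bosonic free energy F(eta) = sum_w ln(1 - exp(-w*eta)),
# dF/deta = sum_w w/(exp(w*eta) - 1), the thermal energy sum in the text.
omegas = [1.0, 2.0, 2.0, 3.0]

def F(eta):
    return sum(math.log(1 - math.exp(-w * eta)) for w in omegas)

def dF(eta):
    return sum(w / (math.exp(w * eta) - 1) for w in omegas)

eta, h = 0.7, 1e-6
central_diff = (F(eta + h) - F(eta - h)) / (2 * h)
assert abs(central_diff - dF(eta)) < 1e-6
```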
This $k$-fold nature implies that in the periodic solution the scale factor oscillates $k$ times between its maximum and minimum values $a_\pm=a(\tau_\pm)$. As shown in \cite{slih}, such solutions represent the garland-type instantons which exist only in the limited range of the cosmological constant \begin{eqnarray} 0<\Lambda_{\rm min}<\Lambda< \frac{3\pi}{2\beta G}, \label{landscape} \end{eqnarray} and eliminate the vacuum Hartle-Hawking instantons corresponding to $a_-=0$\footnote{Hartle-Hawking instantons are ruled out in the statistical sum by their infinite positive effective action, which is due to the contribution of the conformal anomaly drastically changing predictions of the tree-level theory.}. The period of these quasi-thermal instantons is not a freely specifiable parameter, but follows as a function of $G$ and $\Lambda$ from Eqs. (\ref{efeq})-(\ref{bootstrap}). Therefore the model does not describe a canonical ensemble, but rather a microcanonical ensemble (see \cite{why}) with only two freely specifiable dimensional parameters --- the renormalized gravitational and cosmological constants discussed above. The upper bound of the range (\ref{landscape}) is entirely caused by the quantum anomaly --- this is a new quantum gravity scale which tends to infinity when one switches the quantum effects off, $\beta\to 0$. The lower bound $\Lambda_{\rm min}$ is the effect of both radiation and anomaly, and can be obtained numerically for any field content of the model. For a large number of conformal fields, and therefore a large $\beta$, the lower bound is of the order $\Lambda_{\rm min}\sim 1/\beta G$. 
Thus the restriction (\ref{landscape}) can be regarded as a solution of the cosmological constant problem in the early Universe, because by specifying a sufficiently high number of conformal fields one can achieve a primordial value of $\Lambda$ well below the Planck scale, where the effective theory applies, but high enough to generate a sufficiently long inflationary stage. This restriction can also potentially be considered as a selection criterion for the landscape of string vacua \cite{slih,why}. \section{Cosmological evolution} The gravitational instantons of the above type can be regarded as setting initial conditions for the cosmological evolution in the physical spacetime with the Lorentzian signature. This can be viewed as nucleation of the Lorentzian spacetime from the Euclidean spacetime at the maximum value of the scale factor $a_+=a(\tau_+)$ at the turning point of the Euclidean solution $\tau_+$ --- the minimal (zero extrinsic curvature) surface of the instanton. For the contribution of the one-fold instanton to the density matrix of the Universe this nucleation process is depicted in Fig. \ref{Fig.1}. \begin{figure}[h] \centerline{\epsfxsize 4.4cm \epsfbox{hh5.eps}} \caption{\small The contribution of the one-fold instanton to the density matrix of the Universe, whose two arguments are associated with the surfaces $\Sigma$ and $\Sigma'$. Dashed lines depict the Lorentzian Universe nucleating at $\Sigma$ and $\Sigma'$. \label{Fig.1}} \end{figure} The Lorentzian evolution can be obtained by analytically continuing the Euclidean time into the complex plane by the rule $\tau=\tau_++it$. Correspondingly, the Lorentzian effective equation follows from the Euclidean one (\ref{efeq}) as \begin{eqnarray} &&\frac{\dot a^2}{a^2}+\frac{1}{a^2} -B \left(\frac12\,\frac{\dot a^4}{a^4} +\frac{\dot a^2}{a^4}\right) = \frac\Lambda3+\frac{C}{ a^4}, \end{eqnarray} where the dot from now on denotes the derivative with respect to the Lorentzian time $t$. 
This can be rewritten in the form \begin{eqnarray} \frac{\dot a^2}{a^2}+\frac{1}{a^2}- \frac{B}2 \left(\frac{\dot a^2}{a^2} +\frac{1}{a^2}\right)^2 = \frac\Lambda3+\frac{C-B/2}{ a^4}, \label{Friedmann3} \end{eqnarray} and solved for the Hubble factor as \begin{eqnarray} &&\frac{\dot a^2}{a^2}+\frac{1}{a^2}= \frac1B\left\{1- \sqrt{1-2B\left(\frac\Lambda3 +\frac{\cal C}{a^4}\right)}\right\}, \label{Friedmann0}\\ &&{\cal C} \equiv C-\frac{B}2. \label{calC} \end{eqnarray} We have thus obtained a modified Friedmann equation in which the overall energy density, including both the cosmological constant and radiation, contributes very nonlinearly to the square of the Hubble factor. An interesting property of this equation is that the Casimir energy does not weigh --- the term $B/2a^4$ is completely subtracted from the full radiation density $C/a^4$ on the right-hand side of (\ref{Friedmann3}) and under the square root of (\ref{Friedmann0}). Only ``real" thermally excited quanta contribute to the right-hand side of (\ref{Friedmann0}). Indeed, using (\ref{bootstrap}), the radiation contribution ${\cal C}/a^4$ is seen to read simply as \begin{eqnarray} \frac{\cal C}{a^4} = \frac1{a^4} \sum_\omega\frac{\omega}{e^{\omega\eta}\mp 1}. \label{primrad} \end{eqnarray} This is an example of the gravitational screening which is now being intensively sought for the cosmological constant \cite{WoodardTsamis,DvaliKhouryetal}. As we see, this mechanism is mediated by the conformal anomaly action, but it applies not to the cosmological constant, but rather to the Casimir energy, which has the equation of state of radiation $p=\varepsilon/3$. This gravitational screening is essentially based on the above mentioned renormalization that eradicates higher derivatives from the effective action and thus keeps the minisuperspace sector free of dynamical degrees of freedom. 
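The passage from (\ref{Friedmann3}) to (\ref{Friedmann0}) is just the minus-root of a quadratic equation in the combination $\dot a^2/a^2+1/a^2$; a minimal numerical sketch (arbitrary sample values in units with $G=1$, not from the text) confirms it:

```python
import math

# The quadratic h - (B/2)*h^2 = Lambda/3 + (C - B/2)/a^4 in
# h = (da/dt)^2/a^2 + 1/a^2 is solved by the minus-root branch (Friedmann0).
# Sample values are arbitrary illustrative numbers.
B, Lam, C, a = 0.3, 0.9, 0.05, 2.0
rhs = Lam / 3 + (C - B / 2) / a**4            # right-hand side of (Friedmann3)
h = (1 - math.sqrt(1 - 2 * B * rhs)) / B      # Eq. (Friedmann0)
assert abs(h - (B / 2) * h**2 - rhs) < 1e-12  # reproduces Eq. (Friedmann3)
```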
After nucleation from the Euclidean instanton at the turning point with $a=a_+$ and $\dot a_+=0$ the Lorentzian Universe starts expanding, because \begin{eqnarray} \ddot a_+=-a\left.\frac{1+ \dot a^2-2\Lambda a^2/3}{a^2-B-B\dot a^2}\right|_{\,\tau_+} =a_+\frac{\sqrt{1-4C\Lambda/3}}{a_+^2-B}>0 \label{nucleation} \end{eqnarray} (this equation can be derived from (\ref{Friedmann0}) or obtained by analytic continuation from the Euclidean variational equation $\delta \varGamma_E[a,N]/\delta a=0$). Therefore, the radiation quickly dilutes, so that the primordial cosmological constant starts dominating and can generate an inflationary stage. It is natural to assume that the primordial $\Lambda$ is not fundamental, but is due to some inflaton field. This effective $\Lambda$ is nearly constant during the Euclidean stage and the inflation stage, and subsequently leads to a conventional exit from inflation by the slow roll mechanism\footnote{In the Euclidean regime this field also stays in the slow roll approximation, but in view of the oscillating nature of the scale factor it does not monotonically decay. Rather, it follows these oscillations with a much lower amplitude and remains nearly constant during the whole Euclidean evolution, however long this evolution is (as happens for garland instantons with the number of folds $k\to\infty$).}. During a sufficiently long inflationary stage, particle production of conformally non-invariant matter takes over from the polarization effects of conformal fields. After being thermalized at the exit from inflation, this matter gives rise to an energy density $\varepsilon(a)$ which should replace the energy density of the primordial cosmological constant and radiation. Therefore, at the end of inflation the combination $\Lambda/3+{\cal C}/a^4$ should be replaced according to \begin{eqnarray} \frac\Lambda3+\frac{\cal C}{a^4}\to \frac{8\pi G}3\,\varepsilon(a)\equiv \frac{8\pi G}3\,\rho(a)+\frac{\cal C}{a^4}. 
\label{split} \end{eqnarray} Here $\varepsilon(a)$ denotes the full energy density including the component $\rho(a)$ resulting from the decay of $\Lambda$ and the radiation density of the primordial conformal matter ${\cal C}/a^4$. The dependence of $\varepsilon(a)$ on $a$ is of course determined by the equation of state via the stress tensor conservation, and $\rho(a)$ also includes its own radiation component emitted by and staying in (quasi)equilibrium with the baryonic part of the full $\varepsilon(a)$. Thus the modified Friedmann equation finally takes the form \begin{eqnarray} \frac{\dot a^2}{a^2}+\frac{1}{a^2}= \frac\pi{\beta G}\left\{\,1- \sqrt{\,1-\frac{16 G^2}3\, \beta\varepsilon}\,\right\}, \label{modFriedmann} \end{eqnarray} where we have expressed $B$ according to (\ref{B}). In the limit of small subplanckian energy density $\beta G^2\varepsilon\equiv\beta\varepsilon/\varepsilon_P\ll 1$ the modified equation goes over into the ordinary Friedmann equation, in which the parameter $\beta$ completely drops out, \begin{eqnarray} \frac{\dot a^2}{a^2}+\frac{1}{a^2} =\frac{8\pi G}3\,\varepsilon,\,\,\,\, G^2\varepsilon\ll\frac1\beta. \label{GR} \end{eqnarray} Therefore, within this energy range the standard cosmology is recovered. Depending on the effective equation of state, a wide set of the known scenarios of late cosmological evolution can be obtained, including quintessence \cite{quintessence} and other scenarios of cosmological acceleration \cite{otherDE}. \section{Big boost cosmological acceleration} The range of applicability of the GR limit (\ref{GR}) depends however on $\beta$. This makes a very interesting mechanism possible for a very large $\beta$. Indeed, the value of the argument of the square root in (\ref{modFriedmann}) can be sufficiently far from 1 even for small $\varepsilon$, provided $\beta\sim N_{\rm cdf}\gg 1$. 
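This interplay between the GR limit and the large-$\beta$ deviation can be illustrated by a small numerical sketch of (\ref{modFriedmann}) (illustrative values in units with $G=1$, not from the text):

```python
import math

# The deviation of the square-root argument in (modFriedmann) from 1 is
# controlled by beta*G^2*eps, so a large beta keeps the quantum correction
# sizable even at a small energy density eps (sample values below).
G, eps = 1.0, 1e-8

def hubble(beta):
    return (math.pi / (beta * G)) * (1 - math.sqrt(1 - 16 * G**2 * beta * eps / 3))

gr = 8 * math.pi * G * eps / 3                 # ordinary Friedmann rhs (GR)
assert abs(hubble(1.0) - gr) / gr < 1e-6       # beta*G^2*eps << 1: GR recovered
assert abs(hubble(1.5e7) - gr) / gr > 0.1      # beta*G^2*eps ~ 0.1: large deviation
```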
Moreover, one can imagine a model with a variable number of conformal fields $N_{\rm cdf}(t)$, inducing a time-dependent and implicitly scale-factor-dependent $\beta$, $\beta=\beta(a)$. If $\beta(a)$ grows with $a$ faster than the rate of decrease of $\varepsilon(a)$, then the solution of (\ref{modFriedmann}) can reach the singular point, labeled below by $\infty$, at which the square root argument vanishes and the cosmological acceleration becomes infinite. This follows from the expression \begin{eqnarray} \frac{\ddot a}a=\frac\pi{\beta G} \left\{\left(1-\sqrt{\,1-16 G^2 \beta\varepsilon/3}\,\right) \left(1-\frac12\frac{a(G\beta)'}{G\beta}\right)+ \frac43\frac{a(G^2\beta\varepsilon)'}{\sqrt{\,1-16 G^2 \beta\varepsilon/3}}\right\}, \label{acceleration} \end{eqnarray} where the prime denotes the derivative with respect to $a$. This expression becomes singular at $t=t_\infty$, even though the Hubble factor remains finite, when \begin{eqnarray} &&(G^2\beta\varepsilon)_\infty=\frac3{16},\\ &&\left(\frac{\dot a^2}{a^2}+\frac{1}{a^2}\right)_\infty =\frac{16\pi}3 (G\varepsilon)_\infty. \label{boostHubble} \end{eqnarray} Assuming for simplicity that the matter density has a dust-like behavior and $\beta$ grows by a power law in $a$, \begin{eqnarray} G\varepsilon\sim \frac1{a^3},\,\,\,\, G\beta\sim a^n,\,\,\,\,n>3, \label{behavior} \end{eqnarray} one easily finds an inflection point $t=t_*$ at which the cosmological acceleration starts after the deceleration stage, \begin{eqnarray} &&(G^2\beta\varepsilon)_*=\frac3{4}\frac{n-2}{(n-1)^2},\\ &&\left(\frac{\dot a^2}{a^2}+\frac{1}{a^2}\right)_* =\frac{8\pi}3 (G\varepsilon)_*\frac{n-1}{n-2}. \label{H*} \end{eqnarray} Of course, for the acceleration stage to start, the Universe should reach this inflection point $t_*$ before recollapsing from the point of its maximal expansion. 
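For the power laws (\ref{behavior}) the inflection point (\ref{H*}) can be cross-checked numerically: the curly bracket of (\ref{acceleration}) vanishes exactly at $(G^2\beta\varepsilon)_*$ and changes sign there (a hedged sketch, not from the text):

```python
import math

# With G*eps ~ a^-3 and G*beta ~ a^n one has a*(G*beta)'/(G*beta) = n and
# a*(G^2*beta*eps)' = (n-3)*x, where x = G^2*beta*eps. The curly bracket of
# the acceleration formula then vanishes at x = (3/4)*(n-2)/(n-1)^2.
def bracket(x, n):
    s = math.sqrt(1 - 16 * x / 3)
    return (1 - s) * (1 - n / 2) + (4 / 3) * (n - 3) * x / s

for n in (4, 5, 7):
    x_star = 0.75 * (n - 2) / (n - 1)**2
    assert abs(bracket(x_star, n)) < 1e-12
    # deceleration below the inflection point, acceleration above it
    assert bracket(0.9 * x_star, n) < 0 < bracket(1.1 * x_star, n)
```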
This requirement imposes certain restrictions on the coefficients of the asymptotic behavior of $G\varepsilon$ and $G\beta$, (\ref{behavior}), which depend on the details of the history of the equation of state for $\varepsilon(a)$ and the dynamics of the number of conformal fields. We will consider these details elsewhere. Here it is worth only mentioning that the matter density in the vicinity of $t_*$ gravitates with a slightly rescaled gravitational constant, Eq.(\ref{H*}), while at the singularity the effective gravitational constant doubles, see Eq.(\ref{boostHubble}). It is also useful to comment on the duration of the acceleration stage before reaching the singularity. If we identify our epoch with some instant $t_0$ soon after $t_*$, then this duration can be estimated by disregarding the spatial curvature term, which gives \begin{eqnarray} t_\infty-t_*\sim\sqrt{B_0}\sim H_0^{-1}, \end{eqnarray} comparable to the age of the Universe. Thus, although the acceleration stage does not pass the eternity test of \cite{Polyakov}, its duration is very large. Nevertheless, in this model the evolution ends with a curvature singularity, $\ddot a\to\infty$, reachable in a finite proper time. Unlike the situation with the big brake singularity of \cite{bigbrake}, the evolution cannot be extended beyond this singularity analytically, even by smearing it or taking into account its weak integrable nature. In contrast to \cite{bigbrake}, the acceleration is positive, which allows us to call this singularity a {\em big boost}. The effect of the conformal anomaly drives the expansion of the Universe to the maximum value of the Hubble constant (\ref{boostHubble}), after which the solution becomes complex. This, of course, does not make the model a priori inconsistent, because for $t\to t_\infty$ an infinitely growing curvature invalidates the semiclassical and $1/N$ approximations. 
This is a new, essentially quantum stage which requires a UV completion of the effective low-energy theory. \section{AdS/CFT and Randall-Sundrum braneworld setup} What can be the mechanism of a variable and indefinitely growing $\beta$? One such mechanism is well known --- phase transitions in cosmology between different vacua can give a mass $m$ to an initially massless particle. This results in the loss of conformal invariance of the corresponding particle, which, instead of contributing to the vacuum polarization by its own $\beta$ factor, starts generating a Coleman-Weinberg type potential $\sim m^4\ln(m^2/\mu^2)$. However, this effect is weak and decreases the effective value of $\beta$, which is not what we are after. Another mechanism was suggested in \cite{why}. It relies on the possible existence, motivated by string theory, of extra dimensions whose size evolves in time. Indeed, theories with extra dimensions provide a qualitative mechanism to promote $\beta$ to the level of a moduli variable indefinitely growing with the evolving size $L$ of those dimensions. This is because $\beta$ basically counts the number $N_{\rm cdf}$ of conformal degrees of freedom, $\beta\sim N_{\rm cdf}$ (see Eq.(\ref{100})). If one considers a string theory in a spacetime with more than four dimensions, the extra dimensions being compact with typical size $L$, the effective 4-dimensional fields arise as Kaluza-Klein (KK) and winding modes with masses (see e.g. \cite{Polch}) \begin{eqnarray} m_{n,w}^2=\frac{n^2}{L^2}+\frac{w^2}{\alpha'^2}\,L^2 \end{eqnarray} (where $n$ and $w$ are respectively the KK and winding numbers), which break their conformal symmetry. These modes remain approximately conformally invariant as long as their masses are much smaller than the spacetime curvature, $m_{n,w}^2\ll H_0^2\sim m_P^2/N_{\rm cdf}$. Therefore the number of conformally invariant modes changes with $L$. 
Simple estimates show that the number of pure KK modes ($w=0$, $n\leq N_{\rm cdf}$) grows with $L$ as $N_{\rm cdf}\sim (m_P L)^{2/3}$, whereas the number of pure winding modes ($n=0$, $w\leq N_{\rm cdf}$) grows with decreasing $L$ as $N_{\rm cdf}\sim(m_P\alpha'/L)^{2/3}$. Thus, it is possible to obtain a growing $\beta$ in both cases of expanding or contracting extra dimensions. In the first case it is the growing tower of superhorizon KK modes which makes the horizon scale $H_0\sim m_P/\sqrt{N_{\rm cdf}}\sim m_P/(m_P L)^{1/3}$ indefinitely decrease with $L\to\infty$. In the second case it is the tower of superhorizon winding modes which makes this scale decrease with the decreasing $L$ as $H_0\sim m_P(L/m_P\alpha')^{1/3}$. At the qualitative level of this discussion so far, such a scenario is flexible enough to accommodate the present day acceleration scale (though at the price of fine-tuning an enormous coefficient of expansion or contraction of $L$). However, string (or rather string-inspired) models can offer a more explicit construction of the ideas put forward previously, as well as help address various phenomenological questions which arise in their consideration. In particular, one obvious question which arises, should the model considered here be realistic, is what the possible observable effects of the large number of required conformal fields are. Here some guidance can be obtained from the AdS/CFT picture. Indeed, in this picture \cite{AdS/CFT} a higher dimensional theory of gravity, namely type IIB supergravity compactified on $AdS_5 \times S^5$, is seen to be equivalent to a four dimensional conformal theory, namely ${\cal N}=4$ $SU(N)$ SYM, thought to live on the boundary of $AdS_5$ space-time. 
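The KK scaling quoted above follows from a simple self-consistency requirement: modes with $m_n=n/L$ below $H_0\sim m_P/\sqrt{N_{\rm cdf}}$ stay conformal, so $N_{\rm cdf}=Lm_P/\sqrt{N_{\rm cdf}}$, i.e. $N_{\rm cdf}=(m_PL)^{2/3}$. A hedged fixed-point sketch (illustrative numbers, not from the text):

```python
# Self-consistent count of light KK modes: N = L*m_P/sqrt(N) has the
# fixed point N = (m_P*L)^(2/3); the iteration converges since the map
# has slope -1/2 at the fixed point.
mP_L = 1.0e6        # illustrative value of the product m_P * L
N = 1.0
for _ in range(200):
    N = mP_L / N**0.5
assert abs(N - mP_L**(2 / 3)) / mP_L**(2 / 3) < 1e-6
```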
An interesting arena for a slight generalization of these ideas is the Randall-Sundrum model \cite{Randall:1999vf}, where a 3-brane is put inside $AdS_5$ space-time, resulting in a large-distance recovery of 4D gravity without the need for compactification. This model has a dual description: on the one hand, it can simply be considered from a 5D gravity perspective; on the other hand, thanks to the AdS/CFT picture, it can also be described by a 4D conformal field theory coupled to gravity. Indeed, in this picture, the 5D SUGRA --- a field-theoretic limit of the type IIB string --- induces on the conformal boundary of the underlying AdS background the quantum effective action of the conformally invariant 4D ${\cal N}=4$ $SU(N)$ SYM theory coupled to the 4D geometry of the boundary. The multiplets of this CFT contributing according to (\ref{100}) to the total conformal anomaly coefficient $\beta$ are given by $(N_0,N_{1/2},N_1)=(6N^2,4N^2,N^2)$ \cite{DuffLiu}, so that \begin{eqnarray} \beta=\frac12\,N^2. \end{eqnarray} The parameters of the two theories are related by the equation \cite{AdS/CFT,Gubser,HHR} \begin{eqnarray} \frac{L^3}{2 G_5}=\frac{N^2}{\pi}, \end{eqnarray} where $L$ is the radius of the 5D $AdS$ space-time with the negative cosmological constant $\Lambda_5=-6/L^2$ and $G_5$ is the 5D gravitational constant. The radius $L$ is also related to the 't Hooft parameter of the SYM coupling $\lambda=g_{SYM}^2 N$ and the string length scale $l_s=\sqrt{\alpha'}$ by $L=\lambda^{1/4} l_s$. The generation of the 4D CFT from the local 5D supergravity holds in the limit when both $N$ and $\lambda$ are large. This guarantees the smallness of string corrections and establishes the relation between the weakly coupled tree-level gravity theory in the bulk ($G_5\to 0$, $L\to\infty$) and the strongly coupled 4D CFT ($g_{SYM}^2\gg 1$). 
Moreover, as said above, the AdS/CFT correspondence explains the mechanism of recovering general relativity on the 4D brane of the Randall-Sundrum model \cite{Gubser,HHR}. The 4D gravity theory is induced on the brane from the 5D theory with the negative cosmological constant $\Lambda_5=-6/L^2$. In the one-sided version of this model the brane has a tension $\sigma=3/8\pi G_5L$ (the 4D cosmological constant is given by $\Lambda_4=8\pi G_4\sigma$), and the 4D gravitational constant $G_4\equiv G$ turns out to be \begin{eqnarray} G=\frac{2G_5}L. \end{eqnarray} One recovers 4D General Relativity at low energies and for distances larger than the radius of the AdS bulk, $L$. Thus, the CFT dual description of the 5D Randall-Sundrum model is very similar to the model considered above. Moreover, even though the CFT effective action is not exactly calculable for $g_{SYM}^2\gg 1$, it is generally believed that its conformal anomaly is protected by extended SUSY \cite{TseytlinLiu} and is exactly given by the one-loop result (\ref{anomaly}). Therefore it generates the exact effective action of the anomalous (conformal) degree of freedom given by (\ref{effaction}), which guarantees a good $1/N_{\rm cdf}$-approximation for the gravitational dynamics. Further applying the above relations, one obtains a relation between our $\beta$ coefficient and the radius $L$ of the $AdS$ space-time: $\beta G=\pi L^2/2$. 
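The quoted value of $\beta G$ is a one-line consequence of the relations above; a hedged numerical sketch (arbitrary sample values, not from the text):

```python
import math

# Chain of relations: beta = N^2/2, L^3/(2*G5) = N^2/pi, G = 2*G5/L
# together give beta*G = pi*L^2/2.
N, L = 50.0, 3.0
G5 = math.pi * L**3 / (2 * N**2)   # from L^3/(2*G5) = N^2/pi
G = 2 * G5 / L
beta = N**2 / 2
assert abs(beta * G - math.pi * L**2 / 2) < 1e-12
```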
Introducing this into the modified Friedmann equation (\ref{modFriedmann}), the latter becomes explicitly dependent on the size of the 5D AdS spacetime, \begin{eqnarray} \frac{\dot a^2}{a^2}+\frac{1}{a^2}= \frac2{L^2}\left\{\,1- \sqrt{\,1-L^2 \left(\frac{8\pi G}3\, \rho+\frac{\cal C}{a^4}\right)}\,\right\}, \label{modFriedmann2} \end{eqnarray} where we have reintroduced the decomposition (\ref{split}) of the full matter density into the decay product of the inflationary and matter domination stages $\rho$ and the thermal excitations of the primordial CFT (\ref{primrad}). For low energy density, $GL^2\rho\ll 1$ and $L^2 {\cal C}/a^4\ll 1$, in the approximation beyond the leading order, cf. Eq.(\ref{GR}), this equation reads\footnote{We assume that the dark radiation term is redshifted with growing $a$ faster than the matter term, and expand to second order in $\rho$ but only to first order in ${\cal C}$.} \begin{eqnarray} \frac{\dot a^2}{a^2}+\frac{1}{a^2} \simeq\frac{8\pi G}3\,\rho\, \left(1+\frac{2\pi GL^2}3\rho\right) + \frac{\cal C}{a^4} \end{eqnarray} and coincides with the modified Friedmann equation in the Randall-Sundrum model \cite{BinDefLan} \begin{eqnarray} \frac{\dot a^2}{a^2}+\frac{1}{a^2} =\frac{8\pi G}3\,\rho\, \left(1+\frac{\rho}{2\sigma}\right)+ \frac{\cal C}{a^4}, \label{RS} \end{eqnarray} where $\sigma=3/8\pi G_5L=3/4\pi GL^2$ is the Randall-Sundrum brane tension and ${\cal C}$ is the braneworld constant of motion \cite{BinDefLan,bulkBH}. Note that the thermal radiation on the brane (of non-Casimir energy nature) is equivalent to the mass of the bulk black hole associated with this constant. This fact can be regarded as another manifestation of the AdS/CFT correspondence in view of the known duality between the bulk black hole and the thermal CFT on the brane \cite{bulkBH}. Thus, indeed, the anomaly driven cosmology coincides with the Randall-Sundrum one in the limit of low density of matter and radiation. 
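The low-density match with (\ref{RS}) can be verified numerically by comparing the exact square root of (\ref{modFriedmann2}) with the Randall-Sundrum form (illustrative values in units with $G=L=1$, not from the text):

```python
import math

# Exact rhs of (modFriedmann2) versus the Randall-Sundrum form (RS) with
# sigma = 3/(4*pi*G*L^2), for small G*L^2*rho and L^2*C/a^4.
G, L, a, calC, rho = 1.0, 1.0, 1.0, 1e-6, 1e-4
X = 8 * math.pi * G * rho / 3 + calC / a**4
exact = (2 / L**2) * (1 - math.sqrt(1 - L**2 * X))
sigma = 3 / (4 * math.pi * G * L**2)
rs = 8 * math.pi * G * rho / 3 * (1 + rho / (2 * sigma)) + calC / a**4
assert abs(exact - rs) < 1e-5 * exact   # agreement to second order in rho
```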
Interestingly, this comparison between our model and the Randall-Sundrum framework also allows one to gain some insight into the phenomenologically allowed physical scales. Indeed, it is well known that the presence of an extra dimension in the Randall-Sundrum model, or in the dual language, that of the CFT, manifests itself typically at distances lower than the $AdS$ radius $L$. Hence, it is perfectly possible to have a large number of conformal fields in the Universe, {\it \`a la} Randall-Sundrum, without noticing their presence in everyday experiments, provided $L$ is small enough. Moreover, if one uses the scenario of \cite{slih} to set the initial conditions for inflation, it provides an interesting connection between the Hubble radius of inflation, given by eq. (\ref{landscape}), and the distance at which the presence of the CFT would manifest itself in gravity experiments, both being given by $L$. Last, it seems natural, in a string theory setting, to imagine that the $AdS$ radius $L$ can depend on time, and hence on the scale factor. In this case, assuming that the AdS/CFT picture still holds when $L$ is adiabatically evolving, one can consider the possibility that $GL^2\varepsilon$ is large, and that $L^2(t)$ grows faster than $G\varepsilon(t)$ decreases during the cosmological expansion. One would then get the cosmological acceleration scenario of the above type followed by the big boost singularity. In this case, however, should this acceleration scenario correspond to the present day accelerated expansion, $L$ should be of the order of the present size of the Universe, i.e. $L^{-2}\sim H_0^2$. Since the Randall-Sundrum mechanism recovers 4D GR only at distances beyond the curvature radius of the AdS bulk, $r\gg L$, this means that the local gravitational physics of our model (\ref{modFriedmann2}) at the acceleration stage is very different from 4D general relativity. 
Thus this mechanism can hardly be a good candidate for generating dark energy in real cosmology. \section{Anomaly driven cosmology and the DGP model} It is interesting that there exists an even more striking example of a braneworld setup dual to our anomaly driven model. This is the generalized DGP model \cite{DGP} including, together with the 4D and 5D Einstein-Hilbert terms, also the 5D cosmological constant, $\Lambda_5$, in the special case of the {\em vacuum} state on the brane with a vanishing matter density $\rho=0$. In contrast to the Randall-Sundrum model, for which this duality holds only in the low energy limit of small $\rho$ and small ${\cal C}/a^4$, the vacuum DGP cosmology {\em exactly} corresponds to the model of \cite{slih} with the 4D cosmological constant $\Lambda$ simulated by the 5D cosmological constant $\Lambda_5$. Indeed, in this model (provided one neglects the bulk curvature), gravity interpolates between a 4D behaviour at small distances and a 5D behaviour at large distances, with the crossover scale between the two regimes given by $r_c$, \begin{eqnarray} \frac{G_5}{2G}=r_c, \label{DGPscale} \end{eqnarray} and in the absence of stress-energy exchange between the brane and the bulk, the modified Friedmann equation takes the form \cite{DGPDeffayet} \begin{eqnarray} \frac{\dot a^2}{a^2}+\frac{1}{a^2}- r_c^2 \left(\,\frac{\dot a^2}{a^2} +\frac{1}{a^2}-\frac{8\pi G}3\,\rho\right)^2 = \frac{\Lambda_5}{6} +\frac{{\cal C}}{ a^4}. \label{FriedmannDGP} \end{eqnarray} Here ${\cal C}$ is the same constant of integration of the bulk Einstein equations as above, which corresponds to a nonvanishing Weyl tensor in the bulk (or a mass of the Schwarzschild geometry in the bulk) \cite{BinDefLan,bulkBH}. 
It is remarkable that this equation with $\rho=0$ exactly coincides with the modified Friedmann equation of the anomaly driven cosmology (\ref{Friedmann3}) under the identifications \begin{eqnarray} &&B\equiv\frac{\beta G}\pi=2 r_c^2, \label{1000}\\ &&\Lambda=\frac{\Lambda_5}2. \end{eqnarray} These identifications imply that in the DGP limit $G\ll r_c^2$, the anomaly coefficient $\beta$ is much larger than 1. This looks very much like the generation of the vacuum DGP model for any value of the dark radiation ${\cal C}/a^4$ from the anomaly driven cosmology with a very large $\beta\sim m_P^2 r_c^2\gg 1$. However, there are several differences. A first important difference between the conventional DGP model and the anomaly driven DGP is that the latter does not incorporate the self-accelerating branch \cite{DGPDeffayet,DDG} of the former. This corresponds to the fact that only one sign of the square root is admissible in Eq.(\ref{Friedmann0}) --- a property dictated by the instanton initial conditions at the nucleation of the Lorentzian spacetime from the Euclidean one (see Eq.(\ref{nucleation})). So, one does not have to worry about possible instabilities associated with the self-accelerating branch \cite{instabilityofacceleratingbranch}. Another important difference concerns the way the matter energy density manifests itself in the Friedmann equation in the non-vacuum case. In our 4D anomaly driven model it enters the right hand side of the equation as a result of the decay (\ref{split}) of the effective 4D cosmological constant $\Lambda$, while in the DGP model it appears inside the parenthesis on the left hand side of equation (\ref{FriedmannDGP}). 
Therefore, the DGP Hubble factor reads as \begin{eqnarray} \frac{\dot a^2}{a^2}+\frac{1}{a^2}= \frac{8\pi G}3\, \rho+ \frac1{2r_c^2}\left\{\,1- \sqrt{\,1-4r_c^2 \left(\frac{\textstyle\Lambda_5}{\textstyle 6} +\frac{\textstyle\cal C}{\textstyle a^4}- \frac{\textstyle 8\pi G}{\textstyle 3}\, \rho\right)}\,\right\} \label{modFriedmannDGP} \end{eqnarray} (note the negative sign of $\rho$ under the square root and the extra first term on the right hand side) and in the limit of small $\rho$, ${\cal C}/a^4$ and $\Lambda_5$ yields a behavior very different from the GR limit (\ref{GR}) of the anomaly driven model, \begin{eqnarray} \frac{\dot a^2}{a^2}+\frac{1}{a^2}\simeq \frac{\textstyle\Lambda_5}{\textstyle 6} +\frac{\textstyle\cal C}{\textstyle a^4} +r_c^2 \left(\frac{\textstyle\Lambda_5}{\textstyle 6} +\frac{\textstyle\cal C}{\textstyle a^4}- \frac{\textstyle 8\pi G}{\textstyle 3}\, \rho\right)^2. \end{eqnarray} For vanishing $\Lambda_5$ and ${\cal C}/a^4$ this behavior corresponds to the 5D dynamical phase \cite{DGPDeffayet,DDG} which is realized in the DGP model for a very small matter energy density on the brane, $\rho\ll 3/32\pi G r_c^2\sim m_P^2/r_c^2$. Of course, in this range the DGP braneworld reduces to a vacuum brane, but one can also imagine that the 5D cosmological constant decays into matter constituents, similarly to (\ref{split}), and thus simulates the effect of $\rho$ in Eq.(\ref{modFriedmann}). This could perhaps provide a closer correspondence between the anomaly driven cosmology and the non-vacuum DGP case. But here we prefer to postpone the discussion of such scenarios to future analyses and, instead, focus on the generalized {\em single-branch} DGP model to show that it also admits a cosmological acceleration epoch followed by the big boost singularity. 
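One can check directly that the root (\ref{modFriedmannDGP}) solves (\ref{FriedmannDGP}): with $y=h-8\pi G\rho/3$ the latter is a quadratic equation in $y$. A hedged numerical sketch (arbitrary sample values in units with $G=1$, not from the text):

```python
import math

# (FriedmannDGP) written in y = h - 8*pi*G*rho/3 is y - r_c^2*y^2 = R with
# R = Lambda_5/6 + C/a^4 - 8*pi*G*rho/3; (modFriedmannDGP) is its minus root.
G, rc, Lam5, calC, a, rho = 1.0, 0.5, 0.8, 0.02, 3.0, 0.03
R = Lam5 / 6 + calC / a**4 - 8 * math.pi * G * rho / 3
h = 8 * math.pi * G * rho / 3 + (1 - math.sqrt(1 - 4 * rc**2 * R)) / (2 * rc**2)
y = h - 8 * math.pi * G * rho / 3
assert abs(h - rc**2 * y**2 - (Lam5 / 6 + calC / a**4)) < 1e-12
```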
Indeed, for positive $\Lambda_5$ satisfying a very weak bound \begin{eqnarray} \Lambda_5>\frac3{2r_c^2} \label{bound} \end{eqnarray} Eq.(\ref{modFriedmannDGP}) has a solution for which, during the cosmological expansion with $\rho\to 0$, the argument of the square root vanishes and the acceleration tends to infinity (prime again denotes the derivative with respect to $a$) \begin{eqnarray} \frac{\ddot a}a\simeq \frac{\textstyle\left[\,r_c^2\Big( \frac{\textstyle\Lambda_5}{\textstyle 6} +\frac{\textstyle\cal C}{\textstyle a^4} -\frac{\textstyle 8\pi G}{\textstyle 3}\, \rho \Big)\,\right]'} {\textstyle r_c^2\,\sqrt{\,1-4r_c^2 \Big( \frac{\textstyle\Lambda_5}{\textstyle 6} +\frac{\textstyle\cal C}{\textstyle a^4} -\frac{\textstyle 8\pi G}{\textstyle 3}\,\rho \Big)}}\to\pm\infty. \end{eqnarray} This is the big boost singularity labeled similarly to (\ref{boostHubble}) by $\infty$ and having a finite Hubble factor \begin{eqnarray} &&\left(\frac{\textstyle\Lambda_5}{\textstyle 6} +\frac{\textstyle\cal C}{\textstyle a^4}- \frac{\textstyle 8\pi G}{\textstyle 3}\, \rho\right)_\infty =\frac1{4r_c^2},\\ &&\left(\frac{\dot a^2}{a^2} +\frac{1}{a^2}\right)_\infty =\frac{\Lambda_5}6+\frac1{4r_c^2}. \end{eqnarray} For the effective $a$-dependence of $r_c^2$ and $G\rho$ analogous to (\ref{behavior}), $r_c^2(a)\sim a^n$ and $G\rho(a)\sim 1/a^3$, the acceleration becomes positive at least for $n\geq 0$, \begin{eqnarray} \frac{\ddot a}a\simeq \frac{\textstyle n+ 32\pi G\,r_c^2\rho} {\textstyle 4r_c^2\,\sqrt{\,1+4r_c^2 \Big(\frac{\textstyle 8\pi G}{\textstyle 3}\, \rho-\frac{\textstyle\Lambda_5}{\textstyle 6} -\frac{\textstyle\cal C}{\textstyle a^4}\Big)}} \to+\infty. \end{eqnarray} Thus, the {\em single-branch} DGP cosmology can also lead to a big boost version of acceleration. For that to happen, one does not actually need a growing $r_c$ (which can be achieved at the price of having a time dependent $G_5$ --- itself some kind of a modulus, in a string inspired picture). 
The DGP crossover scale $r_c$ can be constant, $n=0$, and the big boost singularity will still occur provided the lower bound (\ref{bound}) is satisfied\footnote{Or, more precisely, its small modification due to the dark radiation contribution ${\cal C}/a^4$, which is very small at late stages of expansion.}. When $\Lambda_5$ violates this bound, the acceleration stage is eternal with the asymptotic value of the Hubble factor squared $\dot a^2/a^2=\big(1-\sqrt{1-2r_c^2\Lambda_5/3}\big)/2r_c^2$. \section{Conclusions} To summarize, we have obtained the modified Friedmann equation in the anomaly driven cosmology with the microcanonical density matrix initial conditions suggested in \cite{slih,why}. This equation exhibits a gravitational screening of the quantum Casimir energy of conformal fields --- this part of the total energy density does not weigh, being degravitated due to the contribution of the conformal anomaly. Also, in the low-density limit this equation not only recovers the general relativistic behavior, but also establishes a good correspondence with the dynamics of the Randall-Sundrum cosmology via the AdS/CFT duality. Moreover, for a very large and rapidly growing value of the Gauss-Bonnet coefficient $\beta$ in the conformal anomaly this equation features a regime of cosmological acceleration followed by a big boost singularity. At this singularity the scale factor acceleration grows to infinity in finite proper time with a finite limiting value of the Hubble factor, whereupon the Universe again enters a quantum phase demanding a UV completion of the low-energy semiclassical theory for its description. A natural mechanism for a growing $\beta$ can be based on the idea of an adiabatically evolving scale associated with extra dimensions \cite{why} and realized within the picture of AdS/CFT duality, according to which the conformal field theory is induced on the 4D brane from the 5D non-conformal theory in the bulk.
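The two late-time regimes can be checked numerically. The sketch below (our own illustrative code with arbitrary parameter values, {\tt rc2} denoting $r_c^2$; ${\cal C}/a^4$, $\rho$ and the $1/a^2$ curvature term are set to their late-time limits) confirms the quoted Hubble values in both the big boost case and the sub-bound eternal-acceleration case:

```python
import math

# Illustrative numerical check of the two late-time regimes of the
# generalized DGP Friedmann equation; the parameter values below are
# arbitrary and chosen only so the relevant bound holds or fails.
# We set C/a^4 -> 0, rho -> 0 and drop the 1/a^2 curvature term.

def hubble_sq(rc2, lam5):
    """H^2 from the modified Friedmann equation with rho = C/a^4 = 0."""
    return (1.0 / (2.0 * rc2)) * (1.0 - math.sqrt(1.0 - 4.0 * rc2 * lam5 / 6.0))

rc2 = 1.0

# Case 1: Lambda_5 above the bound 3/(2 r_c^2) = 1.5. The square-root
# argument vanishes when Lambda_5/6 - (8 pi G/3) rho = 1/(4 r_c^2)
# (C/a^4 neglected); at that point H^2 = Lambda_5/6 + 1/(4 r_c^2).
lam5_big = 2.0
rho_term = lam5_big / 6.0 - 1.0 / (4.0 * rc2)       # (8 pi G/3) rho at the boost
H2_boost = rho_term + (1.0 / (2.0 * rc2)) * (1.0 - 0.0)   # sqrt argument is zero
assert abs(H2_boost - (lam5_big / 6.0 + 1.0 / (4.0 * rc2))) < 1e-12

# Case 2: Lambda_5 below the bound -> eternal acceleration with
# H^2 = (1 - sqrt(1 - 2 r_c^2 Lambda_5 / 3)) / (2 r_c^2).
lam5_small = 1.0
H2_inf = hubble_sq(rc2, lam5_small)
expected = (1.0 - math.sqrt(1.0 - 2.0 * rc2 * lam5_small / 3.0)) / (2.0 * rc2)
assert abs(H2_inf - expected) < 1e-12
```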
As is well known, this duality underlies the justification of the 4D general relativistic limit in the Randall-Sundrum model \cite{Gubser,HHR}. Here we observed an extended status of this duality from the cosmological perspective --- the generalized Randall-Sundrum model with the Schwarzschild-AdS bulk was shown to be equivalent to the anomaly driven cosmology for small energy density. In particular, the radiation content of the latter was shown to be equivalent to the dark radiation term ${\cal C}/a^4$ pertinent to the Randall-Sundrum braneworld with a bulk black hole of mass ${\cal C}$ (the well-known duality between the bulk black hole and the thermal CFT on the brane \cite{bulkBH}). It is interesting that the initial conditions of the anomaly driven model establish a relation between the amount of radiation ${\cal C}$ and the product of the renormalized cosmological and gravitational constants $G\Lambda\sim\Lambda/m_P^2$ --- a corollary of the closed system of equations (\ref{efeq})-(\ref{bootstrap}) \cite{slih}. Such a relation is not known in the standard $SU(N)$ AdS/CFT version of the Randall-Sundrum scenario valid for the effective $\Lambda=0$ and large $N^2\sim N_{\rm cdf}\sim\beta\gg 1$. But this is consistent with the fact that the solution of the bootstrap equations (\ref{efeq})-(\ref{bootstrap}) has a scaling behavior in $N_{\rm cdf}\sim N^2$, $\Lambda\to\Lambda/N_{\rm cdf}$, ${\cal C}\to N_{\rm cdf}{\cal C}$ \cite{slih}, which simply implies the limit $N\to\infty$ in this scenario. This limit justifies the semiclassical approximation, applicable in accordance with (\ref{landscape}) in the range of curvature much below the Planck scale. Another intriguing observation concerns establishing the {\em exact} correspondence between the anomaly driven cosmology and the vacuum DGP model generalized to the case of a nonvanishing $\Lambda_5$. In this case a large $\beta$ is responsible for the large crossover scale $r_c$, (\ref{DGPscale}).
For positive $\Lambda_5$ satisfying the lower bound (\ref{bound}) this model also features a big boost scenario even for stabilized $\beta$. Below this bound (but still positive, $\Lambda_5>0$, because a negative $\Lambda_5$ would imply a point of maximal expansion from which the Universe starts recollapsing) the cosmological evolution eventually enters an eternal acceleration scenario. However, the DGP model with matter on the brane can hardly be equivalent to the 4D anomaly driven cosmology, unless one has some mechanism for $\Lambda_5$ decaying into and simulating a matter density on the brane. Unfortunately, the AdS/CFT correspondence with an adiabatically evolving scale of the extra dimension cannot incorporate the phenomenology of the observable dark energy, because the local gravitational physics of this model becomes very different from 4D general relativity. Thus, the macabre prospect of being ripped apart by infinite tidal forces at the big boost singularity seems to be postponed. The AdS/CFT correspondence --- a perfect nonperturbative tool in mathematical physics --- still remains a thing in itself from the perspective of applied astroparticle cosmology. In general, the idea of a very large central charge of the CFT algebra, underlying the solution of the hierarchy problem in the dark energy phenomenon and particle phenomenology, seems to be hovering in the current literature \cite{bigcentalcharge,Dvalispecies}. Our idea of a big growing $\beta$ belongs to the same scope, but its realization seems to escape the phenomenological framework. Probably some other modification of this idea can be more productive. In particular, another qualitative mechanism for a running $\beta$ could be based on the field-theoretic implementation of winding modes. These modes do not seem to play an essential role in the AdS/CFT picture with a big scale of extra dimensions $L$, because they are heavy in the big $L$ limit.
On the contrary, this mechanism is expected to work in the opposite case of contracting extra dimensions, for which the restrictions imposed by local gravitational physics do not seem to apply (as long as for $L\to 0$ the short-distance corrections go deeper and deeper into the UV domain). We hope to consider the mechanism of winding modes in accelerating cosmology elsewhere. \section*{Acknowledgements} A.B. is grateful for the hospitality of the Laboratory APC, CNRS, University Paris 7, Paris, where a major part of this work was done. His work was also supported by the Russian Foundation for Basic Research under the grant No 05-01-00996 and the grant LSS-4401.2006.2. A.Yu.K. was supported by the RFBR grant 05-02-17450 and the grant LSS-1157.2006.2. A.B. and C.D. wish to thank G. Gabadadze and R. Woodard for discussions.
\section{Introduction} Storm surge is one of the most severe natural disasters: it can lead to significant flooding in coastal areas, bringing multi-billion-dollar damages, and is responsible on average for half of the lives lost in hurricanes \citep{rappaport2014fatalities}. Inflation-normalized direct economic damages to the U.S. from storm surges (1900-2005) are estimated at \$10 billion per year on average and are increasing \citep{pielke2008normalized,klotzbach2018continental}. Since 2005, there have been 12 hurricanes whose total U.S. damages exceeded \$10 billion \citep{NOAA2019}. For instance, Hurricane Katrina (2005) caused over 1500 deaths and total estimated damages of \$75 billion in the New Orleans area and along the Mississippi coast as a result of storm surge \citep{FEMA2006}. To mitigate these impacts, studies are carried out to evaluate the probabilistic hazard \citep[e.g.,][]{niedoroda2010analysis,Cialone2017,garner2017impact} and risk \citep[e.g.,][]{aerts2013low,fischbach2016bias} from coastal flooding through a synthesis of computer modeling, statistical modeling, and extreme-event probability computation: computer modeling is used to predict the storm surge hazard initialized by hurricanes, statistical modeling is used to determine the distribution of hurricane characteristics, and extreme-event probability computation is used to assess the flood hazard. These studies support the development and application of flood insurance rates, building codes, land use planning/development, infrastructure design and construction, and related goals by providing hazard levels at a range of frequencies \citep[e.g.,][]{aerts2014evaluating}. Similarly, forecast simulations are used to support a wide array of operational needs, most notably disaster mitigation and evacuation planning/preparedness \citep[e.g.,][]{blanton2018integrated,georgas2016stevens}.
The ADCIRC ocean circulation model \citep{Luettich2004,westerink2008basin} is the primary computer model in the U.S. to predict storm surges in coastal areas. It was certified by the Federal Emergency Management Agency (FEMA) for coastal flood hazard studies and has been successfully used in a large number of applications, including FEMA flood hazard map updates \citep[e.g.,][]{FEMA2008, niedoroda2010analysis, Jensen2012, Hesser2013} and in support of United States Army Corps of Engineers (USACE) projects \citep[e.g.,][]{Bunya2010, Wamsley2013,Cialone2017}. These studies develop surge and wave hazard elevations corresponding to a range of frequencies, from the 50\% annual exceedance level to the 0.01\% annual exceedance level. In the risk assessment of coastal flood hazard, ADCIRC needs to be run for a large number of storm characteristics. ADCIRC can be run at different levels of accuracy depending on the sophistication of the physics incorporated in the mathematical models, the accuracy of the numerical solvers, and the resolution of the meshes. Although ADCIRC can be run efficiently in parallel on large supercomputers \citep{Tanaka2011}, its computational cost scales with the cube of the spatial resolution, meaning that very high-fidelity models are several orders of magnitude more expensive than lower-fidelity ones. By incorporating the physics of ocean waves, ADCIRC can generate storm surges with even greater fidelity, but this adds another order of magnitude to the run time compared to the uncoupled ADCIRC model \citep{Dietrich2012}. For instance, a single high-resolution, coupled simulation in Southwestern Florida takes roughly 2000 core-hours on a high-performance supercomputer \citep{xsede}. As a result, research often uses coarser models without wave effects, even though the importance of more advanced, detailed models in estimating storm surge has been well demonstrated \citep[e.g.,][]{dietrich2010high,marsooli2018numerical,yin2016coupled}.
An important scientific demand is to develop an \emph{emulator}, a fast probabilistic approximation of a simulator, that can produce highly accurate storm surges over a large spatial domain. The need to support such improvements is particularly great in probabilistic flood studies dealing with climate change, where a broad host of variables needs to be considered. For instance, risk studies with relatively simple surge models estimate the current coastal flooding risk to New York City alone at over 100 million U.S. dollars per year \citep{aerts2013low,aerts2014evaluating}. But these estimates are highly sensitive to underlying assumptions that remain to be explored \citep[e.g.,][]{de2011effect,fischbach2016bias}. Similarly, high-fidelity studies of coastal flooding changes associated with sea level rise and/or climate change have shown complex, nonlinear changes in flooding patterns that simpler studies cannot uncover \citep{liu2019physical,garner2017impact,bilskie2019development}. The main scientific goal of this article is to develop an emulator that can not only predict highly accurate storm surges over a large spatial domain but can also be run very quickly to support coastal flood hazard studies. Developing an emulator directly for the high-fidelity storm surge model can be computationally prohibitive, since the high-fidelity model requires a tremendous amount of computing resources for even a single run. An alternative is to develop an emulator that combines a limited number of highly accurate simulations from a high-fidelity but expensive surge model with a larger number of less accurate simulations from a low-fidelity but cheaper surge model. Combining simulations from different fidelity levels relies on the idea that, after quantifying the discrepancy between models at different fidelity levels, information from the low-fidelity surge model can facilitate prediction at high fidelity.
Several statistical approaches have been proposed to combine output from simulators at different fidelity levels based on a well-known geostatistical method called \emph{cokriging} \citep[see Chapter 3 of][]{Cressie1993}. The idea of emulating multiple computer models originated in \cite{Kennedy2000} with an autoregressive cokriging model. The work in \cite{Kennedy2000} has been extended in several ways. For instance, \cite{Qian2008} propose a Bayesian hierarchical formulation. \cite{Gratiet2013} devises an efficient Bayesian approach to estimate model parameters. \cite{konomikaragiannisABTCK2019} introduce nonstationarity by partitioning the input space via a Bayesian tree structure. \cite{Ma2019} develops objective Bayesian analysis for an autoregressive cokriging model. All these works focus on univariate computer model output without addressing the high-dimensionality challenge in the storm surge application. A particular assumption in univariate autoregressive cokriging models is a hierarchically nested design. This assumption further limits the applicability of autoregressive cokriging models to the storm surge application. In this paper, we develop a data-augmentation technique to allow for the possibility of a non-nested design. We further propose a parallel partial cokriging emulator that can deal with high-dimensional output under a non-nested design in the storm surge application. The inference is performed via an empirical Bayesian approach with the proposed data-augmentation technique. To estimate model parameters, we devise a Monte Carlo expectation-maximization (MCEM) algorithm. To make predictions, we devise a sequential prediction approach in which prediction at a higher fidelity level builds on predictions made at the lower fidelity level.
The proposed emulator explicitly introduces nonstationarity to model the mean parameters and variance parameters in the Gaussian processes at each fidelity level and also allows fast computations in an empirical Bayesian framework. It provides a fast solution for synthesizing highly multivariate output from multiple computer models, with substantial contributions to advancing coastal flood hazard studies and storm surge forecasting. The remainder of the article is organized as follows. Section~\ref{sec: storm surge} introduces the storm surge application with two storm surge simulators. In Section~\ref{sec: model formulation}, we present the proposed methodology to handle high-dimensional output and non-nested designs. In Section~\ref{sec: application}, an analysis of the storm surge simulators is performed with the proposed methodology. Section~\ref{sec: Discussion} concludes with a discussion and possible extensions. \section{Application of interest: Storm Surge} \label{sec: storm surge} This section describes the storm surge simulators, highlighting their discrepancy, and the simulation design used for this study. \subsection{Storm Surge Simulators} ADCIRC is a hydrodynamic circulation numerical model that solves the shallow-water equations for water levels and horizontal currents using the finite-element method over an unstructured gridded domain representing bathymetric and topographic features \citep{Luettich2004}. Information about ADCIRC can be accessed at \url{https://adcirc.org}. In what follows, we will refer to ADCIRC as the low-fidelity simulator, while we will refer to the coupled ADCIRC + SWAN model as the high-fidelity simulator. The latter incorporates the Simulating WAves Nearshore (SWAN) wave model \citep{Booij1999, Zijlema2010} in order to enhance system physics and accuracy. This is achieved by tightly coupling the ADCIRC and SWAN models, simulating them on the same unstructured mesh \citep{Dietrich2011, Dietrich2012}.
This coupling is important for accurate prediction of waves in the nearshore, which ride on top of the storm surge and bring substantial destructive power to coastal structures and defenses. \begin{figure}[htbp] \begin{subfigure}{.7\textwidth} \centering \makebox[\textwidth][c]{ \includegraphics[width=1.0\textwidth, height=0.2\textheight]{ADCIRC_SWAN.pdf}} \end{subfigure}% \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=.9\linewidth, height=0.25\textheight]{storm_param.pdf} \end{subfigure} \caption{Diagram of storm surge models based on ADCIRC and storm parameters. The left panel shows ADCIRC and its coupling with SWAN. The right panel shows storm parameters when a storm approaches the coastline \citep{Toro2010}. ADCIRC takes a bathymetry and topography mesh and wind and pressure fields as inputs. Storm parameters are used to derive the wind and pressure fields.} \label{fig: ADCIRC + SWAN} \end{figure} Figure~\ref{fig: ADCIRC + SWAN} shows a basic diagram of the ADCIRC simulator and the ADCIRC + SWAN simulator. In this study, we focus on six input parameters to characterize the storm: $\Delta P$, $R_p$, $V_f$, $\theta$, $B$, $\mathbf{\boldsymbol \ell}$, with their physical meaning given in Table~\ref{table: input parameters}. These parameters will be treated as inputs in the ADCIRC simulator. Although the behavior of hurricanes is much more complex than this characterization, this simplified storm parameterization is acceptable for the probabilistic characterization of future storms, since no practical or robust model exists to represent these effects in surge-frequency calculations for future storms. In this application, the response variable of interest is the peak surge elevation (PSE) for each landfalling hurricane simulated from these surge models, where the peak is taken across time over the course of one storm.
\begin{table}[htbp] \centering \normalsize \caption{Storm characteristic parameters.} {\resizebox{1.0\textwidth}{!}{% \setlength{\tabcolsep}{2.0em} \begin{tabular}{c l } \toprule \noalign{\vskip 1.5pt} Input variables & Physical meaning \\ \noalign{\vskip 1.5pt} \midrule \noalign{\vskip 2.5pt} $\Delta P$ & central pressure deficit of the storm (mb) \\ \noalign{\vskip 2.5pt} $R_p$ & scale pressure radius in nautical miles \\ \noalign{\vskip 2.5pt} $V_f$ & storm's forward speed (m$/$s) \\ \noalign{\vskip 2.5pt} $\theta$ & storm's heading in degrees clockwise from north \\ \noalign{\vskip 2.5pt} $B$ & Holland's $B$ parameter (unitless) \\ \noalign{\vskip 2.5pt} $\mathbf{\boldsymbol \ell}$ & landfall location in latitude and longitude \\ \noalign{\vskip 2.5pt} \bottomrule \end{tabular}% }} \label{table: input parameters} \end{table} \subsection{Model Validation} In this work, our goal is to develop an emulator for the coupled ADCIRC + SWAN model for storm surge prediction. Detailed validation of the coupled ADCIRC + SWAN model has been performed against a variety of data sources (tide harmonic constituent data, surge measurements, water level gages, and wave buoy data) and storm events (Hurricane Charley 2004, Tropical Storm Gabrielle 2001, Hurricane Donna 1960) in FEMA coastal flood hazard studies \citep[e.g.,][]{FEMA2006, FEMA2008, FEMA2017}. The coupled ADCIRC + SWAN model has been validated against observed surges in several studies of historical storms and shows good model performance, with typical errors below 0.3 meters \citep{Dietrich2012,Dietrich2011,dietrich2010high,FEMA2017,Cialone2017}. From a physics perspective, ADCIRC + SWAN explicitly incorporates ubiquitous wave effects on water levels and currents through the SWAN model. Before modern computing, wave setup effects were determined via approximate equations and added onto surge hazard estimates.
Now, model coupling is standard practice for both researchers and practitioners to more correctly represent the physical processes, with clear benefits \citep[e.g.,][]{Kerr2013, Dietrich2011, Hope2013}. ADCIRC + SWAN has been used in all regions of the U.S. Gulf and Atlantic coasts (including our study region) by both FEMA and USACE in their flood hazard studies. Flood forecasting work still struggles to utilize coupled models because of the computational cost, and our work also has the potential to aid such efforts, partly thanks to the speed of emulator construction. There is a substantial need to develop an efficient emulator for the high-fidelity simulator ADCIRC + SWAN to aid flood hazard studies and forecasting work. \subsection{Model Simulation Setup} \label{sec: model simulation} In FEMA coastal flood hazard studies \citep[e.g.,][]{FEMA2006, FEMA2008, FEMA2017}, storm surge hazard assessment is accomplished via the annual exceedance probability (AEP) for hurricane-prone areas. The quantification of AEP requires large-scale numerical simulations from ADCIRC. Statistical modeling is used to develop characteristics of synthetic storms based on historical tropical cyclones. The wind and pressure fields are used as inputs into hydrodynamic models such as ADCIRC to predict storm surges. In this application, the ADCIRC simulator and the ADCIRC + SWAN simulator are run on the same mesh with 148,055 nodes (spatial points). The mesh and simulation characteristics were constructed for a FEMA coastal flood hazard study in Southwest Florida \citep{FEMA2017}, and these simulations were carried out using the same standards and methods documented in that study. The peak surge hazard estimates produced by that study are considered to represent current conditions (i.e., sea level and climate in the years around when the study was done), and the joint probability distribution of tropical cyclone parameters is constructed from regional historical data.
We primarily focus on peak storm surges at $N=9,284$ spatial locations in the Cape Coral subregion of the study area, since Cape Coral is a study region in FEMA Region IV's mission. Section~\ref{sec: nodes} of the Supplementary Material gives a brief description of the coastal flood study and shows the full mesh of ADCIRC and the selected spatial locations. To design the experiment, we select 50 unique combinations of storm parameters $(\Delta P, R_p, V_f, \theta, B)$ based on the maximin Latin hypercube design (LHD) in the input domain $[30, 70]\times [16, 39] \times [3, 10] \times [15, 75] \times [0.9, 1.4]$. This parameter range corresponds to a core region of major surge hazards of interest in the FEMA coastal study \citep{FEMA2017}. For each combination of these 5 storm parameters, the landfall location $\boldsymbol \ell$ is repeated with one $R_p$ spacing along the coastline in Cape Coral. For each combination of the 5 storm parameters, the initial landfall location is randomly chosen, meaning that no two storms make landfall at the same location, and the number of landfalls for each of the 50 parameter combinations varies, with a higher number for smaller storms; this reflects the smaller spatial scale of storm surge for smaller storms, which necessitates denser sampling to capture the more localized response. The meteorological forcing in both the ADCIRC simulator and the ADCIRC + SWAN simulator is produced by a single group (Oceanweather, Inc.) using the work in \cite{Cardone2009}. In total, we obtained 226 inputs, meaning that on average each of the 50 parameter combinations is used to generate 4.5 storms. These 226 inputs will be referred to as ${\cal X}_0$. We randomly selected 60 inputs from the 226 inputs to run the ADCIRC + SWAN simulator; these will be referred to as ${\cal X}_2$. Then we randomly selected 140 inputs from the remaining 166 inputs to run the ADCIRC simulator; these will be referred to as ${\cal X}_1^1$.
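For illustration, a generic best-of-many maximin Latin hypercube design over the quoted parameter box can be sketched as follows. This is our own minimal implementation (all function names are ours), not the design code used in the FEMA study:

```python
import numpy as np

# A minimal sketch of a maximin Latin hypercube design for the 50
# combinations of (Delta P, R_p, V_f, theta, B); a generic best-of-many
# maximin LHS, not the exact design used in the study.

rng = np.random.default_rng(0)
bounds = np.array([[30, 70], [16, 39], [3, 10], [15, 75], [0.9, 1.4]])

def lhs(n, d, rng):
    """One random Latin hypercube sample on [0,1]^d: one point per stratum."""
    perms = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T  # (n, d)
    return (perms + rng.random((n, d))) / n

def maximin_lhs(n, d, rng, n_try=200):
    """Keep the candidate LHS maximizing the minimum pairwise distance."""
    best, best_score = None, -np.inf
    for _ in range(n_try):
        cand = lhs(n, d, rng)
        dists = np.linalg.norm(cand[:, None, :] - cand[None, :, :], axis=-1)
        score = dists[np.triu_indices(n, k=1)].min()
        if score > best_score:
            best, best_score = cand, score
    return best

design01 = maximin_lhs(50, 5, rng)
# Rescale from the unit cube to the physical parameter box.
design = bounds[:, 0] + design01 * (bounds[:, 1] - bounds[:, 0])
assert design.shape == (50, 5)
assert (design >= bounds[:, 0]).all() and (design <= bounds[:, 1]).all()
```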
To characterize the difference between the ADCIRC simulator and the ADCIRC + SWAN simulator, we also randomly chose 50 inputs from the 60 inputs to run the ADCIRC simulator; these will be referred to as ${\cal X}_1^2$. Let ${\cal X}_1:={\cal X}_1^1 \cup {\cal X}_1^2$ be the collection of 200 inputs from ${\cal X}_1^1$ and ${\cal X}_1^2$. Notice that only 50 inputs in the ADCIRC + SWAN simulator are nested within the 200 inputs in the ADCIRC simulator. Figure~\ref{fig: storm surges at two input settings} shows peak surge elevations over $9,284$ spatial locations from the ADCIRC simulator and the ADCIRC + SWAN simulator at two different input settings $\mathbf{x}_1=(48.30,$ $20.48, 6.187, 62.28, 1.260, -82.08, 26.59)^\top$ and $\mathbf{x}_2=(68.85, 34.34, 8.778, 55.57, 1.066, -82.15,$ $26.69)^\top$. The output surfaces also show very different variations in different regions at each fidelity level. This indicates that a spatially-varying mean function or a spatially-varying variance function may help capture the spatial variations in the output space. The third column shows that the PSEs differ by at most 0.3 meters between the low- and high-fidelity simulators, and also shows that the discrepancy has different spatial structures. In some specific regions, such as Fort Myers Beach (top right panel), very sharp changes can be detected. Physically, the changes in the surge elevation arise from differences in the spatial and temporal structures of the surge-only versus the wave response to storm forcing. For instance, in the top panel, the addition of wave-driven water level setup leads to greater overtopping (a situation in which waves are higher than the dunes or structures they encounter) of coastal barrier islands, bringing more water into the semi-protected bays in the southeastern portion of the figure. It can be difficult to detect these sorts of patterns without modeling wind-driven wave effects.
Although the discrepancy between the low-fidelity simulator and high-fidelity simulator is ``small'', the accuracy of storm surges has a substantial impact on risk assessment of storm surges in coastal areas, and the increase in computational cost is substantial according to previous studies \citep[e.g.,][]{FEMA2017, Cialone2017, dietrich2010high}. In what follows, we develop a cokriging based emulator to approximate the high-fidelity simulator by combining simulations from a limited number of high-fidelity runs and a larger number of low-fidelity simulation runs. \begin{figure} \begin{subfigure}{.333\textwidth} \centering \makebox[\textwidth][c]{ \includegraphics[width=1.0\linewidth]{map_l_run_54_selected_LHS4A.png}} \caption{ADCIRC: $\mathbf{x}_1$.} \end{subfigure}% \begin{subfigure}{.333\textwidth} \centering \includegraphics[width=1.0\linewidth]{map_h_run_54_selected_LHS4A.png} \caption{ADCIRC+SWAN: $\mathbf{x}_1$.} \end{subfigure}% \begin{subfigure}{.333\textwidth} \centering \includegraphics[width=1.0\linewidth]{map_hl_diff_run_54_selected_LHS4A.png} \caption{Difference at $\mathbf{x}_1$.} \end{subfigure} \begin{subfigure}{.333\textwidth} \centering \makebox[\textwidth][c]{ \includegraphics[width=1.0\linewidth]{map_l_run_161_selected_LHS4A.png}} \caption{ADCIRC: $\mathbf{x}_2$.} \end{subfigure}% \begin{subfigure}{.333\textwidth} \centering \includegraphics[width=1.0\linewidth]{map_h_run_161_selected_LHS4A.png} \caption{ADCIRC+SWAN: $\mathbf{x}_2$.} \end{subfigure}% \begin{subfigure}{.333\textwidth} \centering \includegraphics[width=1.0\linewidth]{map_hl_diff_run_161_selected_LHS4A.png} \caption{Difference at $\mathbf{x}_2$.} \end{subfigure} \caption{Comparison of model runs from the ADCIRC simulator and the ADCIRC + SWAN simulator at two input settings. The first and second columns show the model runs from the low-fidelity simulator and the high-fidelity simulator. 
The third column shows the difference between the high-fidelity run and the low-fidelity run.} \label{fig: storm surges at two input settings} \end{figure} \section{Methodology: Multifidelity Computer Model Emulation} \label{sec: model formulation} We give a brief introduction to the general autoregressive cokriging framework, explain why it cannot be directly applied to our application, and then present our proposed approach, called \emph{parallel partial cokriging}, that is able to handle the application under consideration. \subsection{Background on Autoregressive Cokriging Modeling}\label{sec: univariate model} Assume that the computer model can be run at $s$ levels of sophistication corresponding to output functions $y_{1}(\cdot),\ldots,y_{s}(\cdot)$. The computer model associated with $y_{t}(\cdot)$ is assumed to be more accurate than the one associated with $y_{t-1}(\cdot)$ for $t=2,\ldots,s$. Let ${\cal X}$ be a compact subset of $\mathbb{R}^{d}$, which is assumed to be the input space of the computer model. Further assume that the computer model $y_{t}(\cdot)$ is run at a set of input values denoted by ${\cal X}_{t}\subset{\cal X}$ for $t=1,\ldots,s$, where ${\cal X}_{t}$ contains $n_{t}$ input values. The autoregressive cokriging model at any input $\mathbf{x}\in\mathcal{X}_{t}$ is \begin{align} y_{t}(\mathbf{x})=\gamma_{t-1}(\mathbf{x})y_{t-1}(\mathbf{x})+\delta_{t}(\mathbf{x}),\quad t=2,\ldots,s,\label{eqn: AR cokriging} \end{align} where $\delta_{t}(\cdot)$ is the unknown location discrepancy representing the local adjustment from level $t-1$ to level $t$, and $\gamma_{t-1}(\mathbf{x})$ is the scale discrepancy representing the scale change from level $t-1$ to level $t$ at input $\mathbf{x}$. This well-interpreted model is induced from the so-called Markovian assumption: once $y_{t-1}(\mathbf{x})$ is known, no further information about $y_{t}(\mathbf{x})$ is gained by observing $y_{t-1}(\mathbf{x}')$ for any $\mathbf{x}'\ne \mathbf{x}$ \citep{o1998markov}.
To account for uncertainties in the unknown functions $y_{1}(\cdot)$ and $\delta_{t}(\cdot)$, we assign Gaussian process priors \begin{align} \begin{split}y_{1}(\cdot)\mid\boldsymbol \beta_{1},\sigma_{1}^{2},\boldsymbol \phi_{1} & \sim\mathcal{GP}(\mathbf{h}_{1}^{\top}(\cdot)\boldsymbol \beta_{1},\,\sigma_{1}^{2}r(\cdot,\cdot|\boldsymbol \phi_{1})),\\ \delta_{t}(\cdot) & \sim\mathcal{GP}(\mathbf{h}_{t}^{\top}(\cdot)\boldsymbol \beta_{t},\,\sigma_{t}^{2}r(\cdot,\cdot|\boldsymbol \phi_{t})), \end{split} \label{eqn: cokriging model} \end{align} for $t=2,\ldots,s$. Here, $\mathbf{h}_{t}(\cdot)$ is a vector of basis functions, $\boldsymbol \beta_{t}$ is a vector of coefficients at code level $t$, and $r(\cdot,\cdot\mid\boldsymbol \phi_{t})$ is a correlation function with parameters $\boldsymbol \phi_{t}$ \citep{williams2006gaussian}. The scale discrepancy $\gamma_{t-1}(\mathbf{x})$ can be modeled via a basis function representation, i.e., $\gamma_{t-1}(\mathbf{x})=\mathbf{w}_{t-1}(\mathbf{x})^{\top}\boldsymbol \xi_{t-1}$, where $\mathbf{w}_{t-1}(\mathbf{x})$ is a vector of known basis functions and $\boldsymbol \xi_{t-1}$ is a vector of unknown coefficients. In practice, the basis functions $\mathbf{h}_{t}(\cdot)$ and $\mathbf{w}_{t}(\cdot)$ should be determined by exploratory data analysis following standard Gaussian process modeling procedure. A direct implementation of the above model in the storm surge application would be to treat the spatial coordinates as part of the inputs $\mathbf{x}$. Such a straightforward implementation, however, would be computationally infeasible because evaluating the corresponding likelihood would require inverting a covariance matrix of excessively large size. In our application we have $9,284\times200$ model output values for the low-fidelity simulator ADCIRC and $9,284\times60$ for the high-fidelity simulator ADCIRC + SWAN.
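To make the two-level ($s=2$) structure concrete, the following sketch simulates from the model with known hyperparameters on a nested one-dimensional design and predicts the high-fidelity output sequentially (emulate $y_1$ first, then the discrepancy $\delta$). All names, kernel choices, and parameter values here are illustrative, not the paper's estimation procedure:

```python
import numpy as np

# A minimal sketch of the s=2 autoregressive cokriging structure
# y_2(x) = gamma * y_1(x) + delta(x), with known hyperparameters and a
# nested design; purely illustrative.

rng = np.random.default_rng(1)

def sq_exp(a, b, phi):
    """Squared-exponential correlation r(a, b | phi) for 1-d inputs."""
    return np.exp(-((a[:, None] - b[None, :]) / phi) ** 2)

def gp_interp(x_new, x_obs, y_obs, phi, nugget=1e-8):
    """Zero-mean GP conditional mean (simple kriging) given noiseless data."""
    K = sq_exp(x_obs, x_obs, phi) + nugget * np.eye(len(x_obs))
    return sq_exp(x_new, x_obs, phi) @ np.linalg.solve(K, y_obs)

# Nested design: X2 (high fidelity) is a subset of X1 (low fidelity).
x1 = np.linspace(0, 1, 25)
x2 = x1[::5]
gamma, phi1, phi2 = 1.3, 0.3, 0.2

# Draw y1 and delta from their GP priors, then form y2 on X2.
L1 = np.linalg.cholesky(sq_exp(x1, x1, phi1) + 1e-8 * np.eye(len(x1)))
y1 = L1 @ rng.standard_normal(len(x1))
L2 = np.linalg.cholesky(sq_exp(x2, x2, phi2) + 1e-8 * np.eye(len(x2)))
delta = L2 @ rng.standard_normal(len(x2))
y2 = gamma * y1[::5] + delta

# Sequential prediction: emulate y1 first, then the discrepancy delta.
def predict_y2(x_new):
    y1_hat = gp_interp(x_new, x1, y1, phi1)
    d_hat = gp_interp(x_new, x2, y2 - gamma * y1[::5], phi2)
    return gamma * y1_hat + d_hat

# With noiseless data the emulator interpolates the high-fidelity runs
# up to small nugget-level numerical error.
assert np.allclose(predict_y2(x2), y2, atol=1e-2)
```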
Consequently, fitting the autoregressive cokriging model would require about $1.4\times10^{19}=(9284\times 260)^{3}$ flops to evaluate the likelihood and about 46,612 gigabytes of memory to store the covariance matrix in double precision. In addition, the computation of the predictive distribution would be infeasible on standard computers, as the large number of unknown parameters could not be analytically integrated out; this is because the experimental design in our application is not necessarily fully nested across levels of code, as required in existing autoregressive cokriging implementations \citep[e.g.,][]{Kennedy2000,Qian2008,Gratiet2013}. Another challenge is to model nonstationarity in the output space, where the storm surges in Cape Coral show strongly heterogeneous spatial dependence structures. \subsection{The Parallel Partial Cokriging Emulator} \label{sec: multivariate model} We propose the parallel partial cokriging emulator, which couples ideas from the standard parallel partial (PP) Gaussian process \citep{Gu2016} with the cokriging model. This approach mitigates the aforementioned challenges in the storm surge application. We consider that at each level the output functions and their additive discrepancy functions are modeled as multivariate GPs where each dimension corresponds to a spatial location. This induces a cokriging model with high-dimensional output due to the massive number of spatial locations available. Two popular ideas for modeling multivariate output in computer models are the basis-function representation with a principal component analysis by \cite{Higdon2008} and the separable covariance function formulation between input space and output space by \cite{Conti2010}. These approaches have not been developed in the multifidelity setting, and such an extension is not straightforward; hence we briefly discuss whether they can provide useful ideas for our proposed development in the multifidelity setting.
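These cost figures follow from simple arithmetic (assuming 8-byte double precision and $10^9$ bytes per gigabyte):

```python
# A quick arithmetic check of the cost estimates quoted above.
n = (200 + 60) * 9284          # total number of output values across levels
flops = n ** 3                  # cost of a dense Cholesky-style likelihood
memory_gb = n ** 2 * 8 / 1e9    # dense covariance matrix, double precision

assert abs(flops - 1.4e19) / 1.4e19 < 0.01
assert int(memory_gb) == 46612
```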
The basis-function representation approach requires the output to be represented in terms of a few principal components. However, we found that this is not possible in the storm surge application, due to the large changes in the storm output across different input simulations. The separable model \citep{Conti2010} is computationally infeasible due to the massive output that the storm surge simulators generate for each simulation. To deal with high-dimensional output in the separable model, \cite{Gu2016} propose the PP Gaussian process emulator, which assumes conditionally independent Gaussian processes for each spatial coordinate with the same range parameter. They showed that the PP Gaussian process emulator produces the same posterior predictive mean as the separable model and a nearly identical posterior predictive variance. This approach is able to borrow information across the data-poor input dimensions when the data are rich in the spatial dimension. The computational cost is only linear in the number of simulator outputs $N$, and the computation can even be parallelized to ensure scalability for massive outputs. In addition, we can develop an imputation technique to incorporate non-nested designs of experiments. However, the adoption of the PP ideas in the multifidelity setting with possibly non-nested designs is not straightforward. In what follows, we present our proposed statistical approach. \subsubsection{The Parallel Partial Cokriging Model} \label{sec: PP cokriging model} In the storm surge application, we have $s=2$ fidelity levels for computer models: ADCIRC and ADCIRC + SWAN. Each simulation of these computer models generates output values at the same $N=9,284$ spatial locations. Let $n_{t}$ denote the number of computer simulations at fidelity level $t$, where $n_1=200$ and $n_2=60$ in the application. Let $\mathbf{y}_{t,j}$ be a vector of output values over all inputs in ${\cal X}_{t}$ at coordinate $j$ and code level $t$. 
Let $\mathbf{y}^{t,\mathscr{D}}:=\{\mathbf{y}_{t,1}^\top, \ldots, \mathbf{y}_{t,N}^\top\}^\top$ be a vector of output values across all spatial locations and input values at level $t$. Let $\mathbf{y}_{j}^{\mathscr{D}}:=(\mathbf{y}_{1,j}^{\top},\ldots,\mathbf{y}_{s,j}^{\top})^{\top}$ be a vector of output values at coordinate $j$ over all inputs in ${\cal X}$ and all code levels. For each coordinate $j=1,...,N$ and fidelity level $t=2,\ldots,s$, we specify the cokriging model for any input $\mathbf{x} \in \mathcal{X}_t$ as \begin{align} \label{eqn: AR in multivariate model} y_{t,j}(\mathbf{x})=\gamma_{t-1,j}(\mathbf{x})y_{t-1,j}(\mathbf{x})+\delta_{t,j}(\mathbf{x}) \end{align} where $y_{1,j}(\mathbf{x})$, $\delta_{t,j}(\mathbf{x})$ and $\gamma_{t-1,j}(\mathbf{x})$ are unknown functions. We assign Gaussian process priors to the unknown functions \begin{align} \begin{split}y_{1,j}(\cdot)\mid\boldsymbol \beta_{1,j},\sigma_{1,j}^{2},\boldsymbol \phi_{1} & \sim\mathcal{GP}(\mathbf{h}_{1}^{\top}(\cdot)\boldsymbol \beta_{1,j},\,\sigma_{1,j}^{2}r(\cdot,\cdot|\boldsymbol \phi_{1})),\\ \delta_{t,j}(\cdot) & \sim\mathcal{GP}(\mathbf{h}_{t}^{\top}(\cdot)\boldsymbol \beta_{t,j},\,\sigma_{t,j}^{2}r(\cdot,\cdot|\boldsymbol \phi_{t})), \end{split} \label{eqn: model for coordinate j} \end{align} where $\mathbf{h}_t(\cdot)$ is a vector of common fixed basis functions across all $N$ spatial locations. In the storm surge application, these basis functions are assumed to be constant functions based on exploratory analysis. For each coordinate $j$, we assume different regression parameters $\mathbf{b}_j:=\{\boldsymbol \beta_{1,j},$ $\ldots,\boldsymbol \beta_{s,j}\}$, different variance parameters $\boldsymbol \sigma^2_j:=\{\sigma_{1,j}^{2},\dots,$ $\sigma_{s,j}^{2}\}$, and different scale discrepancy parameters $\boldsymbol \gamma_j:=\{\gamma_{1,j},\ldots,\gamma_{s-1,j}\}$, where each scale discrepancy parameter is further assumed to be an unknown constant in the application. 
$r(\cdot,\cdot|\boldsymbol \phi_{t})$ is a correlation function with correlation parameters $\boldsymbol \phi_t:=(\phi_{t,1}, \ldots, \phi_{t,d})^\top$ at fidelity level $t$, where $d=6$ denotes the number of input parameters that characterize the storm characteristics. The correlation parameters $\boldsymbol \phi_t$ are assumed to be the same across different spatial coordinates to simplify computations, following \citet{Gu2016}. As explained in Section~\ref{sec: univariate model}, the PP idea is the most appropriate for the storm surge application. Following \cite{Sacks1989}, we use the product form of the correlation structure to allow anisotropy in each input dimension, i.e., $r(\mathbf{x},\mathbf{x}'\mid\boldsymbol \phi_{t})=\prod_{i=1}^{d}r(x_{i},x_{i}'\mid\phi_{t,i})$, where each $r(x_{i},x_{i}'\mid\phi_{t,i})$ is chosen to be the Mat\'ern correlation function, although other choices are available \citep[see][]{Ma2019cov}. The Mat\'ern correlation function is \begin{align*} r(u\mid\phi)=\frac{2^{1-\upsilon}}{\Gamma(\upsilon)}\left(\frac{\sqrt{2\upsilon}u}{\phi}\right)^{\upsilon}\mathcal{K}_{\upsilon}\left(\frac{\sqrt{2\upsilon}u}{\phi}\right), \end{align*} where $u=|x_i-x_i'|$ is the distance in the $i$th input dimension, $\mathcal{K}_{\upsilon}$ is the modified Bessel function of the second kind, and $\upsilon$ is a smoothness parameter that controls the differentiability of the process. Here, we set $\upsilon=2.5$ because the underlying physical process in our application is typically smooth, and this choice gives reasonable prediction results. We call the proposed model defined in \eqref{eqn: AR in multivariate model} and \eqref{eqn: model for coordinate j} the \emph{parallel partial cokriging} model. The term ``parallel partial'' reflects the fact that our model can be thought of as involving different autoregressive cokriging models at each coordinate which share common input correlation parameters. 
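As an illustration, at $\upsilon=2.5$ the Bessel-function expression above reduces to the well-known closed form $r(u\mid\phi)=\bigl(1+\sqrt{5}u/\phi+5u^{2}/(3\phi^{2})\bigr)\exp(-\sqrt{5}u/\phi)$, so the anisotropic product-form correlation can be sketched in a few lines. The function names below are our own illustration, not part of any package:

```python
import numpy as np

def matern_2p5(u, phi):
    # Matern correlation with smoothness 2.5: the closed form of the
    # Bessel-function expression specialized to upsilon = 5/2.
    a = np.sqrt(5.0) * np.asarray(u) / phi
    return (1.0 + a + a ** 2 / 3.0) * np.exp(-a)

def product_correlation(X1, X2, phi):
    # Anisotropic product-form correlation
    #   r(x, x' | phi) = prod_i r(|x_i - x_i'|, phi_i).
    # X1: (n1, d) inputs, X2: (n2, d) inputs, phi: length-d range parameters.
    R = np.ones((X1.shape[0], X2.shape[0]))
    for i in range(X1.shape[1]):
        u = np.abs(X1[:, i][:, None] - X2[None, :, i])
        R *= matern_2p5(u, phi[i])
    return R
```

With distinct inputs the resulting matrix is symmetric, has unit diagonal, and is positive definite, as required for a valid correlation matrix.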
As mentioned in Section~\ref{sec: univariate model}, likelihood-based inference requires that the collection of input runs at each level be nested, i.e., ${\cal X}_{t}\subset{\cal X}_{t-1}$, in order to have closed-form inference. In the next section, we present a data-augmentation technique to deal with possibly non-nested designs so that statistical inference based on the PP cokriging model can be carried out. \subsubsection{Data Augmentation for Non-Nested Design} \label{sec: DA} The specification of convenient priors that facilitate the tractability of the marginal likelihood and the analytic integration of the unknown parameters requires the available experimental design to be fully hierarchically nested, i.e., $\mathcal{X}_{t+1}\subset \mathcal{X}_{t}$. This restrictive requirement can be seen by examining the likelihood that results from \eqref{eqn: AR in multivariate model} and \eqref{eqn: model for coordinate j}, and it is inherited from the cokriging model. As this requirement is not satisfied in our application, we propose a data-augmentation remedy that imputes the missing output values in order to create a fully nested design. We replace the original input domain ${\cal X}_{t}$ by $\tilde{{\cal X}}_{t}={\cal X}_{t}\cup\mathring{{\cal X}}_{t}$, where $\mathring{{\cal X}}_{t}:={\cal X}_{(t+1):s}\setminus{\cal X}_{t}$ represents the collection of missing inputs that have not been run by the simulator at fidelity level $t$ but are needed to form a nested design, and ${\cal X}_{(t+1):s}:=\cup_{k=t+1}^{s}{\cal X}_{k}$ represents the collection of observed inputs from fidelity level $t+1$ up to the highest fidelity level $s$. Let $\mathring{\mathbf{y}}_{t,j}$ be a vector of missing output values over all inputs in $\mathring{{\cal X}}_{t}$ at coordinate $j$ and code level $t$. In what follows, we use $\tilde{n}_t$ to denote the number of input values in the augmented set $\tilde{{\cal X}}_t$. 
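The set operations defining $\mathring{{\cal X}}_{t}$ and $\tilde{{\cal X}}_{t}$ can be illustrated with a toy two-level non-nested design; this sketch and its function name are our own illustration, not part of the \texttt{ARCokrig} implementation:

```python
# Toy sketch of the design augmentation, mirroring the definitions
#   ring_t  = X_{(t+1):s} \ X_t   (inputs to impute at level t)
#   tilde_t = X_t  U  ring_t      (the augmented design)

def augment_designs(designs):
    """designs: list of input sets X_1, ..., X_s, from lowest to highest fidelity.
    Returns (augmented, missing), with augmented[t] nested in augmented[t-1]."""
    s = len(designs)
    augmented, missing = [], []
    for t in range(s):
        higher = set().union(*designs[t + 1:]) if t + 1 < s else set()
        ring = higher - designs[t]            # missing inputs at level t
        missing.append(ring)
        augmented.append(designs[t] | ring)   # the augmented design tilde X_t
    return augmented, missing

# Non-nested two-level example: input 4 was run only at high fidelity.
aug, miss = augment_designs([{0, 1, 2, 3}, {2, 3, 4}])
assert aug[1] <= aug[0] and miss[0] == {4}   # nested after augmentation
```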
Let $\mathring{\mathbf{y}}_{j}^{\mathscr{D}}:=(\mathring{\mathbf{y}}_{1,j}^{\top},\ldots,\mathring{\mathbf{y}}_{s,j}^{\top})^{\top}$ be a vector of missing output values at coordinate $j$ over all code levels, where $\mathring{\mathbf{y}}_{s,j}^{\top}$ is defined to be empty for notational convenience. Let $(\mathring{\mathbf{y}}_t)_{j=1}^N:=(\mathring{\mathbf{y}}_{t,1}^\top, \ldots, \mathring{\mathbf{y}}_{t,N}^\top)^\top$ be a vector of missing output at code level $t$ over all $N$ spatial coordinates. Let $\tilde{\mathbf{y}}_{t,j}:=(\mathbf{y}_{t,j}^{\top},\mathring{\mathbf{y}}_{t,j}^{\top})^{\top}$ be a vector of augmented output over all inputs at code level $t$ at the $j$th spatial coordinate and $\tilde{\mathbf{y}}_{j}^{\mathscr{D}}:=(\mathbf{y}_{j}^{\mathscr{D}\top},\mathring{\mathbf{y}}_{j}^{\mathscr{D}\top})^{\top}$ be a vector of augmented output over all inputs at coordinate $j$. Then the augmented sampling distribution at coordinate $j$ is \begin{align} \begin{split} L(\tilde{\mathbf{y}}_{j}^{\mathscr{D}}\mid\boldsymbol \beta_{j},\boldsymbol \gamma_{j},\boldsymbol \sigma_{j}^{2},\boldsymbol \phi)& \propto\pi(\tilde{\mathbf{y}}_{1,j}\mid\boldsymbol \beta_{1,j},\sigma_{1,j}^{2},\boldsymbol \phi_{1}) \prod_{t=2}^{s}\pi(\tilde{\mathbf{y}}_{t,j}\mid\boldsymbol \beta_{t,j},\gamma_{t-1,j},\sigma_{t,j}^{2},\boldsymbol \phi_{t}), \end{split} \end{align} with \begin{align*} \pi(\tilde{\mathbf{y}}_{1,j}\mid\boldsymbol \beta_{1,j},\sigma_{1,j}^{2},\boldsymbol \phi_{1}) &= \mathcal{N}(\tilde{\mathbf{H}}_{1} \boldsymbol \beta_{1,j}, \sigma^2_{1,j} \tilde{\mathbf{R}}_1), \\ \pi(\tilde{\mathbf{y}}_{t,j}\mid\boldsymbol \beta_{t,j},\gamma_{t-1,j},\sigma_{t,j}^{2},\boldsymbol \phi_{t}) &= \mathcal{N}( \tilde{\mathbf{H}}_{t} \boldsymbol \beta_{t,j} + \tilde{W}_{t-1,j} \gamma_{t-1,j}, \sigma^2_{t,j} \tilde{\mathbf{R}}_{t}), \end{align*} where $\tilde{\mathbf{H}}_{t}:=\mathbf{h}_t(\tilde{{\cal X}}_t)$, $\tilde{\mathbf{R}}_t:=r(\tilde{{\cal X}}_t, \tilde{{\cal X}}_t \mid
\boldsymbol \phi_t)$, and $\tilde{W}_{t-1,j}:=\mathbf{y}_{t-1,j}(\tilde{{\cal X}}_t)$. The proposed augmentation allows the joint likelihood of the complete output values to be factorized into Gaussian likelihood kernels of smaller dimensionality, so that we can specify conditionally conjugate priors and break the training problem into smaller, more tractable ones. In Figure~\ref{fig: univariate example 1} of the Supplementary Material, a toy example illustrates the performance of autoregressive cokriging with a \emph{non-nested design} under the proposed data-augmentation technique. This example shows that the information from a low-fidelity code can be used to better infer the high-fidelity code and that the data-augmentation technique works efficiently. \subsubsection{Empirical Bayesian Inference via an MCEM algorithm} \label{sec: parameter estimation} Let $\tilde{\mathbf{y}}^{\mathscr{D}}:=(\tilde{\mathbf{y}}_{1}^{\mathscr{D}\top},\ldots,\tilde{\mathbf{y}}_{N}^{\mathscr{D}\top})^{\top}$ be a vector of augmented outputs over all $N$ spatial locations. We introduce the following notation: $\boldsymbol \beta:=(\boldsymbol \beta_{1}^{\top},\ldots,\boldsymbol \beta_{N}^{\top})^{\top}$, $\boldsymbol \gamma:=(\boldsymbol \gamma_{1}^{\top},\ldots,\boldsymbol \gamma_{N}^{\top})^{\top}$, and $\boldsymbol \sigma^{2}:=(\boldsymbol \sigma_{1}^{2\top},\ldots,\boldsymbol \sigma_{N}^{2\top})^{\top}$. The overall augmented sampling distribution across all $N$ spatial locations is the product of the per-coordinate augmented sampling distributions \begin{eqnarray} L(\tilde{\mathbf{y}}^{\mathscr{D}}\mid\boldsymbol \beta,\boldsymbol \gamma,\boldsymbol \sigma^{2},\boldsymbol \phi)=\prod_{j=1}^{N}L(\tilde{\mathbf{y}}_{j}^{\mathscr{D}}\mid\boldsymbol \beta_{j},\boldsymbol \gamma_{j},\boldsymbol \sigma_{j}^{2},\boldsymbol \phi). 
\end{eqnarray} We specify the following a priori model for the unknown parameters \begin{align} \label{eqn: prior model in multivariate model} \begin{split}\pi(\boldsymbol \beta,\boldsymbol \gamma,\boldsymbol \sigma^{2},\boldsymbol \phi) &=\pi(\boldsymbol \phi)\prod_{j=1}^{N}\left\{ \pi(\boldsymbol \beta_{s,j}, \sigma_{s,j}^2) \prod_{t=1}^{s-1}\pi(\boldsymbol \beta_{t,j},\gamma_{t,j},\sigma_{t,j}^{2})\right\} , \end{split} \end{align} where independent Jeffreys priors can be assigned on $\boldsymbol \beta_{t,j}, \gamma_{t-1,j}, \sigma_{t,j}^{2}$: i.e., $\pi(\boldsymbol \beta_{t,j}, \gamma_{t-1,j}, \sigma_{t,j}^{2})$ $\propto \frac{1}{\sigma^2_{t,j}}$ for $t=2, \ldots, s$, and $\pi(\boldsymbol \beta_{1,j}, \sigma^2_{1,j}) \propto \frac{1}{\sigma^2_{1,j}}$ at each coordinate $j$. For each level $t$, we assign a jointly robust prior \citep{Gu2019} on $\boldsymbol \phi_{t}$, which is a proper prior with desirable properties for Gaussian process emulation. We highlight that these Jeffreys priors are ``conjugate'' for the augmented likelihood but not for the observed one. After integrating out the model parameters $\{\boldsymbol \beta,\boldsymbol \gamma,\boldsymbol \sigma^{2}\}$, the conditional distribution of $\tilde{\mathbf{y}}^{\mathscr{D}}$ given $\boldsymbol \phi$ is \begin{align} \label{eqn: aug marginal dist} \begin{split} \pi(\tilde{\mathbf{y}}^{\mathscr{D}}\mid\boldsymbol \phi) & =\prod_{j=1}^{N}\pi(\tilde{\mathbf{y}}_{1,j}\mid\boldsymbol \phi_{1})\prod_{t=2}^{s}\pi(\tilde{\mathbf{y}}_{t,j}\mid\boldsymbol \phi_{t},\tilde{\mathbf{y}}_{t-1,j}), \end{split} \end{align} where each conditional distribution is given in Section~\ref{sec: MCEM}. 
The augmented posterior distribution $\pi(\boldsymbol \phi \mid \tilde{\mathbf{y}}^{\mathscr{D}})$ can be obtained via Bayes' theorem: $\pi(\boldsymbol \phi \mid \tilde{\mathbf{y}}^{\mathscr{D}}) \propto \pi(\tilde{\mathbf{y}}^{\mathscr{D}}\mid\boldsymbol \phi) \times \pi (\boldsymbol \phi)$, where $\pi (\boldsymbol \phi)$ is a proper prior as mentioned earlier. To estimate $\boldsymbol \phi$, we adopt an empirical Bayesian approach that maximizes the integrated posterior $\pi(\boldsymbol \phi\mid \tilde{\mathbf{y}}^{\mathscr{D}})$, since empirical Bayes approaches run faster on personal computers than fully Bayesian ones, which often require Markov chain Monte Carlo methods. As direct maximization of $\pi(\boldsymbol \phi\mid \tilde{\mathbf{y}}^{\mathscr{D}})$ is impossible due to its intractable form, we introduce a Monte Carlo expectation-maximization algorithm to tackle this challenge. The detailed development is given in Section~\ref{sec: MCEM} of the Supplementary Material. \subsubsection{Prediction} \label{sec: prediction} For any new input $\mathbf{x}_0\in {{\cal X}}$, the goal is to make predictions for $\{y_{s,j}(\mathbf{x}_0), j=1, \ldots, N\}$ based upon the data $\mathbf{y}^{\mathscr{D}}$. With the prior model~\eqref{eqn: prior model in multivariate model}, the predictive distribution of interest is $\pi(y_{s,j}(\mathbf{x}_0) \mid \mathbf{y}^{\mathscr{D}}, \boldsymbol \phi)$ for $j=1, \ldots, N$. In what follows, we derive a new approach to predicting $y_{s,j}(\mathbf{x}_0)$ and term it the \emph{sequential prediction} approach. The idea is to add the new input $\mathbf{x}_0$ to each collection of missing inputs $\mathring{{\cal X}}_t$ such that a hierarchically nested design can be obtained. 
To fix the notation, we define $\mathring{{\cal X}}_t^0: = \mathring{{\cal X}}_t \cup \{\mathbf{x}_0\}$ and $\tilde{{\cal X}}_t^0 := {{\cal X}}_t \cup \mathring{{\cal X}}_t^0$. Hence the collection of inputs $\{ \tilde{{\cal X}}_t^0: t=1, \ldots, s\}$ also forms a nested design with $\tilde{{\cal X}}_t^0 \subset \tilde{{\cal X}}_{t-1}^0$. Let $\mathbf{y}(\mathbf{x}_0):=(\mathbf{y}_1(\mathbf{x}_0)^\top, \ldots, \mathbf{y}_N(\mathbf{x}_0)^\top)^\top$ with $\mathbf{y}_j(\mathbf{x}_0):=(y_{1,j}(\mathbf{x}_0), \ldots, y_{s,j}(\mathbf{x}_0))^\top$. The predictive distribution of $\mathbf{y}(\mathbf{x}_0)$ given $\tilde{\mathbf{y}}^{\mathscr{D}}$ and $\boldsymbol \phi$ is the product of $N$ independent distributions with \begin{align*} \pi(\mathbf{y}(\mathbf{x}_0) \mid \tilde{\mathbf{y}}^{\mathscr{D}}, \boldsymbol \phi) = \prod_{j=1}^N \pi(\mathbf{y}_j(\mathbf{x}_0)\mid \tilde{\mathbf{y}}_j^{\mathscr{D}}, \boldsymbol \phi). \end{align*} The following result gives the predictive distribution at each spatial coordinate. Its proof follows from standard kriging theory. \begin{proposition}[\textbf{Sequential Prediction}] \label{thm: predict} Given the PP cokriging model and the non-informative priors~\eqref{eqn: prior model in multivariate model}, the predictive distribution across all code levels at spatial coordinate $j$ for $j=1, \ldots, N$ is \begin{align} \label{eqn: predictive distribution in multivariate model} \begin{split} \pi(\mathbf{y}_j(\mathbf{x}_0) \mid \tilde{\mathbf{y}}_j^{\mathscr{D}}, \boldsymbol \phi) &= \pi(y_{1,j}(\mathbf{x}_0) \mid \tilde{\mathbf{y}}_{1,j}, \boldsymbol \phi_1) \prod_{t=2}^{s-1} \pi(y_{t,j}(\mathbf{x}_0) \mid \tilde{\mathbf{y}}_{t,j}, \tilde{\mathbf{y}}_{t-1,j}, \\ &\quad y_{t-1,j}(\mathbf{x}_0), \boldsymbol \phi_t) \times \pi(y_{s,j}(\mathbf{x}_0) \mid y_{s-1,j}(\mathbf{x}_0), \mathbf{y}_{s,j}, \boldsymbol \phi_s). 
\end{split} \end{align} The conditional distributions on the right-hand side of \eqref{eqn: predictive distribution in multivariate model} are Student-$t$ distributions with detailed formulas given in Section~\ref{app: sequential pred} of the Supplementary Material. \end{proposition} Proposition~\ref{thm: predict} shows that a random sample from the predictive distribution can be drawn sequentially from a collection of conditional distributions in an efficient manner, since the total computational cost required for such a simulation is $O(\sum_{t=1}^s \tilde{n}_t^3)$ at each spatial coordinate. As the correlation matrix is the same across all spatial locations at each fidelity level, the total computational cost to obtain a single random sample from the predictive distribution across all spatial locations is $O(\sum_{t=1}^s \tilde{n}_t^3 + N \sum_{t=1}^s \tilde{n}_t^2 )$, which is linear in $N$ when $\sum_{t=1}^s \tilde{n}_t^2 \ll N$. Notice that a sample from $ \pi(\mathbf{y}_{s}(\mathbf{x}_0) \mid \mathbf{y}^{\mathscr{D}}, \boldsymbol \phi)$ can be obtained via the composition sampling technique based on $\pi(\mathbf{y}_{s}(\mathbf{x}_0) \mid \mathbf{y}^{\mathscr{D}}, \boldsymbol \phi) = \int \pi(\mathbf{y}_{s}(\mathbf{x}_0) \mid \tilde{\mathbf{y}}^{\mathscr{D}}, \boldsymbol \phi) \pi(\mathring{\mathbf{y}}^{\mathscr{D}} \mid {\mathbf{y}}^{\mathscr{D}}, \boldsymbol \phi)\, d \mathring{\mathbf{y}}^{\mathscr{D}}$, by recycling the realizations of $\mathring{\mathbf{y}}^{\mathscr{D}}$ from Algorithm \ref{alg: MCEM in mult} of the Supplementary Material. In Section~\ref{app: pred} of the Supplementary Material, we also derive the traditional prediction formula based on the ideas in \cite{Kennedy2000} and \cite{Gratiet2013}, which is referred to as the \emph{one-step prediction} formula. The sequential prediction formula in Proposition~\ref{thm: predict} has several advantages over the one-step prediction formula in Section~\ref{app: pred} of the Supplementary Material. 
First, the high-dimensionality of the simulator output makes the one-step prediction formula computationally infeasible in the storm surge application, since the one-step prediction formula has $O(N\sum_{t=1}^s \tilde{n}_t^3)$ computational cost. Second, to obtain the predictive distribution $\pi(y_{s,j}(\mathbf{x}_0)\mid \mathbf{y}^{\mathscr{D}}, \boldsymbol \phi)$, the model parameters $\{\boldsymbol \beta, \boldsymbol \gamma, \boldsymbol \sigma^2 \}$ have to be numerically integrated out in the one-step prediction formula. Thus, Monte Carlo approximation is required to account for uncertainty in both the model parameters $\{ \boldsymbol \beta, \boldsymbol \gamma, \boldsymbol \sigma^2 \}$ and the missing data $\mathring{\mathbf{y}}^{\mathscr{D}}$, which further hinders the practicality of this formula. In contrast, the sequential prediction formula automatically integrates out the model parameters $\{ \boldsymbol \beta, \boldsymbol \gamma, \boldsymbol \sigma^2 \}$ without relying on Monte Carlo approximations. \subsubsection{Computational Cost} \label{sec: cost} In the PP cokriging model, the computational cost can be broken into two parts: one related to parameter estimation and the other related to prediction. In parameter estimation, each iteration of the MCEM algorithm developed in Algorithm~\ref{alg: MCEM in mult} of the Supplementary Material requires the computation of $\hat{Q}_{t,M}(\boldsymbol \phi_{t}\mid\boldsymbol \phi_{t}^{[\ell]})$ and its numerical optimization with respect to the correlation parameters $\boldsymbol \phi_t$ at each level of code. The evaluation of $\hat{Q}_{t,M}(\boldsymbol \phi_{t}\mid\boldsymbol \phi_{t}^{[\ell]})$ requires matrix inversions and matrix multiplications of size $\tilde{n}_t\times \tilde{n}_t$. Such an evaluation requires $O(MN \tilde{n}_t^2 + \tilde{n}_t^3)$ computational cost across $N$ spatial locations and $M$ Monte Carlo samples. 
If the numerical optimization requires $k$ evaluations of $\hat{Q}_{t,M}(\boldsymbol \phi_{t}\mid\boldsymbol \phi_{t}^{[\ell]})$ to find the optimal value, the overall computational cost in each iteration of the MCEM algorithm is $O(kMN \sum_{t=1}^s \tilde{n}_t^2 + k \sum_{t=1}^s\tilde{n}_t^3)$ without any parallelization. Notice that parallelization across $t$ is possible according to Algorithm~\ref{alg: MCEM in mult} of the Supplementary Material. This is a one-time computational cost. In the predictive distribution~\eqref{eqn: predictive distribution in multivariate model}, each conditional distribution requires matrix inversions and matrix multiplications of size $\tilde{n}_t\times \tilde{n}_t$. This requires $O(\tilde{n}^3_t + N\tilde{n}_t^2)$ computational cost. One random sample generated from the predictive distribution at one new input value requires $O(\sum_{t=1}^s \tilde{n}^3_t+ N\sum_{t=1}^s \tilde{n}_t^2)$ computational cost. As $\tilde{n}_t$ is typically small (a few hundred at most) in many real applications, the computational cost of prediction is linear in the number of spatial locations, $N$. This indicates the scalability of the proposed method to handle high-dimensional output for multifidelity computer models. The proposed data-augmentation idea for missing-data imputation allows for efficient parameter estimation and prediction. Applying the data-augmentation idea independently at each spatial location would amount to independent cokriging emulation at each spatial location, which has two main drawbacks. First, independent cokriging emulation often fails to obtain stable estimates of the correlation parameters and results in unreliable predictions. Second, the computational cost of independent emulation across $N$ spatial locations is $O(N\sum_{t=1}^s \tilde{n}^3_t)$ for making a prediction at one input value, which is much more expensive than that of the PP cokriging model. 
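These cost comparisons can be made concrete with a quick back-of-envelope calculation. The sketch below drops constants and assumes, for this application, augmented run counts of $\tilde{n}_1=210$ (200 observed plus 10 imputed, since 50 of the 60 high-fidelity inputs are nested) and $\tilde{n}_2=60$; the naive single-GP cost uses the $9{,}284\times 260$ observed outputs quoted in Section~\ref{sec: multivariate model}:

```python
# Rough flop counts (constants dropped) for prediction at one new input
# over N spatial locations. The augmented run counts n_tilde are our
# assumption for this application, not values reported in the text.

N = 9284                  # spatial locations per simulator run
n_tilde = [210, 60]       # assumed augmented runs per fidelity level

# Sequential PP cokriging: O(sum n^3 + N * sum n^2)
pp_cokriging = sum(n ** 3 for n in n_tilde) + N * sum(n ** 2 for n in n_tilde)

# Independent cokriging at each location: O(N * sum n^3)
independent = N * sum(n ** 3 for n in n_tilde)

# Naive joint GP over all observed outputs: O((N * (200 + 60))^3)
naive_joint = (N * (200 + 60)) ** 3

print(f"PP cokriging     : {pp_cokriging:.2e} flops")
print(f"independent      : {independent:.2e} flops")
print(f"naive joint model: {naive_joint:.2e} flops")
```

Under these assumptions the independent approach is roughly two orders of magnitude more expensive than PP cokriging, and the naive joint model recovers the $\sim 1.4\times 10^{19}$ flop figure quoted earlier.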
Our preliminary analysis of the storm surge application indicated that independent emulation would take about 1800 core hours to obtain predictions at 9,284 spatial locations over 166 inputs using the \texttt{Stampede2} computing resources at the Texas Advanced Computing Center (TACC). In contrast, the PP cokriging emulation took only about 5 core hours. Hence, the independent cokriging implementation is practically undesirable for applied problems in coastal flood hazard studies and storm surge forecasting. \section{Analysis of Storm Surge Simulations} \label{sec: application} In this section, the PP cokriging emulator is used to analyze high-dimensional output from the ADCIRC simulator and the ADCIRC + SWAN simulator. The emulation results and numerical comparisons are presented to illustrate the advantages of the parallel partial cokriging model with high-dimensional output. The proposed PP cokriging methodology is implemented in the \textsf{R} package \texttt{ARCokrig} available at \url{https://github.com/pulongma/ARCokrig}. The PP cokriging model is trained on 200 inputs from the ADCIRC simulator and 60 inputs from the ADCIRC + SWAN simulator, where 50 inputs from the second fidelity level are nested within the first fidelity level. To measure predictive performance, we run the ADCIRC + SWAN simulator at the 166 inputs that remain from the original 226 inputs after excluding the 60 training inputs, as described in Section~\ref{sec: model simulation}. In addition, we also train the PP kriging emulator via the \textsf{R} package \texttt{RobustGaSP} with the same 60 high-fidelity runs used in the PP cokriging emulator. As the landfall location is along the coastline, we define a distance measure $d_{\boldsymbol \ell}$ to replace the actual longitude and latitude coordinates of the landfall location. 
Specifically, we first choose a reference location $\boldsymbol \ell_0$ to be the landfall location that is farthest to the northwest along the coastline. Then for any landfall location $\boldsymbol \ell$, $d_{\boldsymbol \ell}$ is defined as the spherical distance between $\boldsymbol \ell$ and $\boldsymbol \ell_0$. As the coastline is unique, the landfall location determines the distance measure $d_{\boldsymbol \ell}$ and vice versa. In the implementation of the PP kriging emulator and the PP cokriging emulator, the input variables are $\Delta P$, $R_p$, $V_f$, $\theta$, $B$, and $d_{\boldsymbol \ell}$. Evaluation of predictive performance at the 166 held-out inputs is based on root-mean-squared-prediction errors (RMSPE), coverage probability of the 95\% equal-tail credible interval (CVG(95\%)), average length of the 95\% equal-tail credible interval (ALCI(95\%)), and the continuous rank probability score \citep[CRPS;][]{Gneiting2007}. In addition, we also compute the Nash-Sutcliffe model efficiency coefficient (NSME): $$\text{NSME}:= 1 - \frac{\sum_{j=1}^N \sum_{\mathbf{x} \in A}\{m_j(\mathbf{x}) - y_{2,j}(\mathbf{x})\}^2}{\sum_{j=1}^N \sum_{\mathbf{x} \in A} \{y_{2,j}(\mathbf{x})-\bar{y}_{2,j}\}^2},$$ where $A$ denotes the set of held-out inputs, $m_j(\mathbf{x})$ is the predicted value of the high-fidelity code $y_{2,j}(\cdot)$ at input $\mathbf{x}$ and the $j$-th spatial coordinate, and $\bar{y}_{2,j}: = \sum_{\mathbf{x} \in {\cal X}_2} y_{2,j}(\mathbf{x})/n_2$ is the average of code $y_{2,j}(\cdot)$ at the $j$-th spatial coordinate. The NSME compares the residual variance to the total variance: the closer NSME is to 1, the more accurate the model. If the ADCIRC simulator is used to predict the ADCIRC + SWAN simulator at these 166 inputs, the NSME is $-1.089$, which indicates that the mean of the training data in the high-fidelity simulator is a better predictor than the low-fidelity simulator at these inputs. 
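The NSME can be computed in a few lines. The sketch below uses the standard Nash--Sutcliffe form, with squared deviations of the held-out high-fidelity output from its per-location training mean in the denominator; the array shapes and function name are our own illustration:

```python
import numpy as np

def nsme(pred, obs, train_mean):
    # Nash-Sutcliffe model efficiency over N locations x n* held-out inputs.
    # pred, obs: (N, n_star) arrays of predictions and held-out outputs;
    # train_mean: length-N per-location means of the high-fidelity training runs.
    resid = np.sum((pred - obs) ** 2)
    total = np.sum((obs - train_mean[:, None]) ** 2)
    return 1.0 - resid / total

# Perfect predictions give NSME = 1; predicting the per-location mean
# gives NSME = 0 when the training mean equals the held-out mean.
rng = np.random.default_rng(0)
obs = rng.normal(size=(5, 7))
m = obs.mean(axis=1)
assert np.isclose(nsme(obs, obs, m), 1.0)
assert np.isclose(nsme(np.tile(m[:, None], (1, 7)), obs, m), 0.0)
```

A negative value, as for the low-fidelity simulator here, means the predictor does worse than simply reporting the training mean at every location.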
\subsection{Emulation Accuracy} \label{sec: emulation result for storm surge} In the PP cokriging model, we include constant basis functions $\mathbf{h}_t(\cdot)$ according to exploratory data analysis. The scale discrepancy function $\gamma_{t-1,j}$ is assumed to be an unknown constant that does not depend on the input. This assumption still allows the scale discrepancy function to vary across different code levels and spatial coordinates. For parameter estimation, the MCEM algorithm was initialized with multiple starting values and took about 5 hours to achieve convergence for a pre-specified tolerance on a 2-core MacBook Pro with 8 GB of random access memory. The predictive mean and predictive variance are approximated by 30 random draws from the distribution \eqref{eqn: predictive distribution in multivariate model}; negligible improvement is seen from increasing the number of draws. The estimated range parameters show that the peak surge elevation is highly dependent on the central pressure deficit ($\Delta P$) and Holland's B parameter ($B$), since these two inputs have relatively large range parameters compared to their input ranges in the training sets. The small impact of the landfall location ($\boldsymbol \ell$) is due to our focus on a small coastal region in Cape Coral. A direct approach to emulating the high-fidelity model is the PP kriging emulator \citep{Gu2016} trained only on model runs from the high-fidelity simulator. The results in Table~\ref{table: CV for cokriging} show that the PP cokriging emulator gives better prediction results than the PP kriging emulator: it gives smaller RMSPE and CRPS, a coverage probability close to the nominal 0.95 level, and a short length for the 95\% credible interval. 
As the root-mean-squared difference between the low-fidelity simulator runs and the high-fidelity simulator runs is 0.132, the PP cokriging emulator gives a much smaller RMSPE than the low-fidelity simulator. The PP kriging emulator has an NSME of $-1.086$, which indicates that it performs only slightly better than the low-fidelity simulator. The PP cokriging emulator has an NSME of 0.998, which indicates that it performs much better than both the low-fidelity simulator and the PP kriging emulator. The scatter plot of the predicted PSE against the held-out PSE in Figure~\ref{fig: prediction versus held-out} shows that the PP cokriging emulator performs much better than the PP kriging emulator at the input setting $\mathbf{x}_1$, since its predicted PSE are scattered around the 45-degree line. In contrast, the PP kriging emulator performs poorly, mainly because it is trained on only 60 high-fidelity simulator runs, which have limited ability to explore the complex input space. A similar scatter plot at the input setting $\mathbf{x}_2$, given in Figure~\ref{fig: prediction versus held-out 2} of the Supplementary Material, leads to the same conclusion. The PP cokriging emulator can thus predict high-fidelity storm surges in a computationally efficient way, facilitating the quantification of storm surge hazards. This is especially important in mitigating the risk of hurricane-driven storm surges: data-driven decisions can be made much more quickly to avoid severe damage. 
\begin{figure} \begin{subfigure}{0.5\textwidth} \centering \makebox[\textwidth][c]{ \includegraphics[width=1.0\linewidth, height=0.2\textheight]{CV_PPkriging_testing_run_54_selected_LHS4A.png}} \caption{PP kriging} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=1.0\linewidth, height=0.2\textheight]{CV_PPcokriging_testing_run_54_selected_LHS4A.png} \caption{PP cokriging} \end{subfigure} \caption{Scatter plot of predicted PSE against held-out PSE over $N=9,284$ spatial locations at the input setting $\mathbf{x}_1$. } \label{fig: prediction versus held-out} \end{figure} In the storm surge application, the high-fidelity simulator is about 10 times slower than the low-fidelity simulator, so increasing the number of model runs in the high-fidelity simulator is computationally prohibitive. The computational cost of predicting a new high-fidelity model run via the PP cokriging emulator is negligible compared to that needed to obtain a single run from the actual ADCIRC + SWAN simulator. This implies that emulating the high-fidelity simulator using our proposed PP cokriging emulator, which combines only a small number of high-fidelity runs with a few hundred low-fidelity runs, is preferable to using the low-fidelity simulator in terms of both accuracy and computational cost. The capability to explore more of the parameter space with the low-fidelity simulator, without substantial loss of accuracy thanks to the PP cokriging emulator, greatly enhances the feasibility of achieving high-precision modeling results without a massive computational budget. \begin{table}[htbp] \centering \normalsize \caption{Predictive performance of emulators at $n^*=166$ held-out inputs over all $N=9,284$ spatial locations. 
PP = parallel partial.} {\resizebox{1.0\textwidth}{!}{% \setlength{\tabcolsep}{1.5em} \begin{tabular}{l c c c c c} \toprule \noalign{\vskip 1.5pt} & RMSPE & CVG(95\%) & ALCI(95\%) & CRPS & NSME \\ \noalign{\vskip 1.5pt} \noalign{\vskip 1.5pt} \hline \noalign{\vskip 3pt} \noalign{\vskip 1.5pt} {PP kriging} &0.151 & 0.910 & 0.512 & 0.135 & -1.086 \\ \noalign{\vskip 4pt} {PP cokriging} & 0.043 & 0.993 & 0.306 & 0.051 & 0.998 \\ \noalign{\vskip 1.5pt} \bottomrule \end{tabular}% }} \label{table: CV for cokriging} \end{table} \subsection{Uncertainty Analysis} Cross-validation in the previous section showed that the PP cokriging emulator can provide very accurate predictions when compared to the true high-fidelity surge model runs in an overall sense. Figure~\ref{fig: map of storm surges at two input setting} compares the predicted storm surges against held-out storm surge from the high-fidelity surge model across $N=9,284$ spatial locations at two storm inputs that are used in Figure~\ref{fig: storm surges at two input settings} and Figure~\ref{fig: prediction versus held-out}. At these two inputs, the PP cokriging emulator seems to have large predictive uncertainties in the southeast region of Cape Coral and to have small predictive uncertainties in the Pine Island Sound Aquatic Preserve and the Caloosahatchee River. The largest predictive standard deviation in the PP cokriging emulator across all spatial locations is around 0.2, which is smaller than the difference between the high-fidelity surge model and the low-fidelity surge model. This indicates that the PP cokriging emulator can better approximate the high-fidelity surge model than the low-fidelity surge model at these two inputs in Cape Coral. 
\begin{figure} \begin{subfigure}{.333\textwidth} \centering \makebox[\textwidth][c]{ \includegraphics[width=1.0\linewidth,height=0.2\textheight]{map_tesing_run_54_selected_LHS4A.png}} \end{subfigure}% \begin{subfigure}{.333\textwidth} \centering \includegraphics[width=1.0\linewidth,height=0.2\textheight]{map_predmu_run_54_selected_LHS4A.png} \end{subfigure}% \begin{subfigure}{.333\textwidth} \centering \includegraphics[width=1.0\linewidth,height=0.2\textheight]{map_predSE_run_54_selected_LHS4A.png} \end{subfigure} \begin{subfigure}{.333\textwidth} \centering \includegraphics[width=1.0\linewidth,height=0.2\textheight]{map_tesing_run_161_selected_LHS4A.png} \end{subfigure}% \begin{subfigure}{.333\textwidth} \centering \includegraphics[width=1.0\linewidth,height=0.2\textheight]{map_predmu_run_161_selected_LHS4A.png} \end{subfigure}% \begin{subfigure}{.333\textwidth} \centering \includegraphics[width=1.0\linewidth,height=0.2\textheight]{map_predSE_run_161_selected_LHS4A.png} \end{subfigure} \caption{High-fidelity runs and predicted peak surge elevations with predictive standard errors at two input settings. The first column shows the high-fidelity runs at two different input settings. The second and third columns show the corresponding predicted PSE and associated predictive standard errors.} \label{fig: map of storm surges at two input setting} \end{figure} Next, we explore the relationship between storm inputs and error structures in the PP cokriging emulator. We compute the prediction errors across all spatial locations at all held-out inputs. The scatter plot of emulation error against each storm parameter in Figure~\ref{fig: error analysis} shows that the majority of emulation errors range from -0.5 to 0.5. This indicates that the PP cokriging emulator can capture the input-output relationship quite well. As we can see, the emulation errors become larger as the central pressure deficit and the forward speed increase. 
The scale pressure radius seems to impact the emulation error in the opposite way to the central pressure deficit. The emulation errors also differ across spatial locations, as shown in Figure~\ref{fig: error analysis}. This indicates that the current PP cokriging emulator captures part of the inhomogeneous structure in the output space, while some variation due to inputs remains. We discuss how nonstationarity in the input space can be introduced in Section~\ref{sec: Discussion}. Finally, we show the parameter estimates for $\boldsymbol \beta_1$, $\boldsymbol \sigma_1$, $\boldsymbol \beta_2$, $\boldsymbol \gamma_1$, and $\boldsymbol \sigma_2$ in Figure~\ref{fig: map of parameter estimates}. As we can see, these estimated parameters show strong spatially-varying structures across different regions. The estimated regression parameters $\hat{\boldsymbol \beta}_1$ and standard deviations $\hat{\boldsymbol \sigma}_1$ at the low-fidelity level seem to be smoother than the corresponding estimates at fidelity level 2. This is because more of the variation is captured by the Gaussian process at the low-fidelity level, and the remaining variation captured by the discrepancy function $\delta_{2,j}(\cdot)$ is small. This implies that the Gaussian process at the low-fidelity level fits the ADCIRC model runs well and that the discrepancy between the low-fidelity and high-fidelity simulators is relatively small. The estimated scale discrepancy parameters $\hat{\boldsymbol \gamma}_1$ also show strongly heterogeneous spatial structures across locations, with values slightly greater than 1. This indicates that the high-fidelity simulator tends to generate slightly higher storm surge values than the low-fidelity simulator, although this trend is very weak. The estimated standard deviations $\hat{\boldsymbol \sigma}_1$ and $\hat{\boldsymbol \sigma}_2$ seem to have more local structure than their corresponding regression parameters.
This makes sense because we expect the regression trend in Gaussian processes to capture large-scale variations, and covariance structure to capture small-scale variations. \begin{figure}[htbp] \centering \makebox[\textwidth][c]{ \includegraphics[width=1.0\textwidth, height=0.4\textheight]{Error_analysis_all_selected.png}} \caption{Prediction errors across all $N=9,284$ spatial locations against each storm parameter at all held-out inputs.} \label{fig: error analysis} \end{figure} \begin{figure} \captionsetup[subfigure]{justification=centering} \begin{subfigure}{.333\textwidth} \centering \makebox[\textwidth][c]{ \includegraphics[width=1.0\linewidth, height=0.2\textheight]{Estimation_beta1_selected_LHS4A.png}} \caption{$\hat{\boldsymbol \beta}_1$} \end{subfigure}% \begin{subfigure}{.333\textwidth} \centering \includegraphics[width=1.0\linewidth,height=0.20\textheight]{Estimation_sigma1_selected_LHS4A.png} \caption{$\hat{\boldsymbol \sigma}_1$} \end{subfigure}% \begin{subfigure}{.333\textwidth} \centering \includegraphics[width=1.0\linewidth,height=0.2\textheight]{Estimation_gamma1_selected_LHS4A.png} \caption{$\hat{\boldsymbol \gamma}_1$} \end{subfigure} \begin{subfigure}{.333\textwidth} \centering \includegraphics[width=1.0\linewidth,height=0.20\textheight]{Estimation_beta2_selected_LHS4A.png} \caption{$\hat{\boldsymbol \beta}_2$} \end{subfigure}% \begin{subfigure}{.333\textwidth} \centering \includegraphics[width=1.0\linewidth,height=0.20\textheight]{Estimation_sigma2_selected_LHS4A.png} \caption{$\hat{\boldsymbol \sigma}_2$} \end{subfigure} \caption{Estimated parameters across all spatial locations. The estimated parameters show strong heterogeneous spatial patterns.} \label{fig: map of parameter estimates} \end{figure} \section{Discussion} \label{sec: Discussion} Coastal flood hazard studies by FEMA and USACE use ADCIRC + SWAN to quantify the storm surge hazard, where simulation from this computer model is time-consuming and resource-intensive. 
We have built a parallel partial cokriging emulator to predict storm surges using simulations from both ADCIRC + SWAN and ADCIRC. The proposed emulator produces highly accurate storm surge predictions in Cape Coral, Southwestern Florida, and allows efficient computation of rare-event probabilities in coastal flood hazard studies. Its prediction has similar accuracy to the highly complex and computationally demanding storm surge model ADCIRC + SWAN. The PP cokriging emulator has a linear computational cost in terms of output values and also induces nonstationarity in output space, which is crucial to deal with the non-smooth response surface in the storm surge application. Combined with historical storm surge data, the proposed emulator can be further used to model the discrepancy between the observed surge data and ADCIRC + SWAN so that it can be used to predict the real-world process of storm surges. However, this is very challenging, since there are not many historical hurricanes making landfall in Southwestern Florida, and measurements of input variables that characterize historical hurricanes are rarely available. The proposed PP cokriging emulator assumes conditional independence across spatial locations, which essentially leads to a separable covariance structure between input space and output space and simplifies computations. This assumption can help capture nonstationary spatial patterns in the storm surge application. To quantify storm surge hazards, the marginal distribution is needed to compute the rare-event probability. The proposed methodology has the capability to obtain this marginal distribution in a computationally efficient way over a large spatial domain. If interest lies in joint modeling across spatial locations, one can choose a spatial window to enable joint modeling. A related concern is the assumption of common correlation parameters at all spatial locations.
If correlation parameters were allowed to differ at each spatial location, the computational cost would no longer be linear in the number of model outputs; this linearity is a key advantage when the storm surge hazard is assessed over a large spatial domain. One can potentially partition the domain into a set of subregions and allow different correlation parameters across these subregions. In the storm surge application, we used a limited number of runs to train the emulator due to computational constraints. To aid coastal flood hazard studies and storm surge forecasting, one may use the proposed PP cokriging emulator to set up the design in a statistically optimal way, such as via sequential design \citep{LeGratiet2015}, or use a larger number of model runs. The latter problem can be tackled via computationally efficient Gaussian process approximation approaches \citep[e.g.][]{Gramacy2015}. The storm surge output shows quite rough structure across the spatial domain due to hurricane characteristics and heterogeneous topography. One can introduce nonstationarity in input space via treed Gaussian processes \citep{Gramacy2008, konomikaragiannisABTCK2019}. Another interesting direction for the proposed methodology concerns non-nested designs: how much is gained by allowing the design not to be hierarchically nested, relative to the traditional nested design. These possible directions could be pursued in future work. Finally, the PP cokriging emulator provides an efficient way to generate high-fidelity storm surges over a large domain in space or in the storm parameter space. These capabilities are applicable across a wide range of storm surge work. Most notably, this method should enable more precise, lower-cost estimation of flood hazards across a wide range of event probabilities, as well as surge forecasting. These analyses support hazard delineation for insurance rate maps, siting of critical infrastructure, and design and planning of coastal protections.
\section*{Supplementary Material} The Supplementary Material contains technical details and additional results. \begin{singlespace} \bibliographystyle{biom} \setlength{\bibsep}{5pt}
\section{Introduction} Various intriguing hints of New Physics (NP) have been reported in recent years in the form of lepton flavour universality (LFU) violations in semileptonic $B$ decays. In particular the $R(D^{(*)})={\cal B}(B\to D^{(*)}\tau\nu) / {\cal B}(B\to D^{(*)}\ell\nu)$ observable in the $b\to c \tau \nu$ charged-current transition, with $\ell=e,\mu$, has been measured by the BaBar~\cite{Lees:2012xj,Lees:2013uzd}, Belle~\cite{Huschle:2015rga,Sato:2016svk,Hirose:2016wfn} and LHCb~\cite{Aaij:2015yra,Aaij:2017uff,Aaij:2017deq} collaborations to be consistently above the Standard Model (SM) predictions. Once global fits are performed~\cite{Amhis:2016xyh,HFLAV:2018}, the combined statistical significance of the anomaly is just above the $\sim 4\sigma$ level. Other deviations from the SM have been observed in the LFU ratios of neutral-current $B$ decays, $R(K^{(*)})={\cal B}(B\to K^{(*)}\mu^+\mu^-) / {\cal B}(B\to K^{(*)}e^+ e^-)$~\cite{Aaij:2014ora,Aaij:2017vbb}. Also in this case the overall significance is around $4\sigma$. This discrepancy, if interpreted as due to some NP contribution in the $b\to s \ell \bar \ell $ transition, is further corroborated by another deviation measured in the angular distributions of the process $B\to K^{*}\mu^+\mu^-$~\cite{Aaij:2015oid,Aaij:2013qta}, for which, however, SM theoretical predictions are less under control. Finding a combined explanation for both anomalies in terms of some Beyond the SM (BSM) physics faces various challenges.
In particular, in the SM the $b\to c\tau \nu$ transition occurs at tree-level and an explanation of the $R(D^{(*)})$ anomaly generally requires NP close to the TeV scale, for which several constraints from direct searches for new states at collider experiments as well as in precision electroweak measurements and other flavour observables can be stringent, see \emph{e.g.} Refs.~\cite{ Datta:2012qk,Bhattacharya:2014wla, Alonso:2015sja,Greljo:2015mma,Calibbi:2015kma,Bauer:2015knc,Fajfer:2015ycq, Barbieri:2015yvd,Buttazzo:2016kid,Das:2016vkr, Boucenna:2016qad,Becirevic:2016yqi,Hiller:2016kry,Bardhan:2016uhr,Bhattacharya:2016mcc,Barbieri:2016las,Becirevic:2016oho,Bordone:2017anc,Megias:2017ove,Crivellin:2017zlb,Cai:2017wry,Altmannshofer:2017poe,Sannino:2017utc,Buttazzo:2017ixm,Azatov:2018knx,Kumar:2018kmr,Becirevic:2018afm}. On the other hand, the neutral current $b\to s \ell^+ \ell^-$ transition occurs in the SM through a loop-induced process, thus hinting at a higher NP scale or smaller couplings responsible for the $R(K^{(*)})$ anomaly. Concerning the $R(D^{(*)})$ observables, it has recently been proposed that the measured enhancement with respect to the SM prediction can also be obtained by adding a new right-handed fermion, singlet under the SM gauge group, hereafter dubbed $N_R$~\cite{Asadi:2018wea,Greljo:2018ogz} (see also~\cite{Fajfer:2012jt,He:2012zp,Becirevic:2016yqi,Cvetic:2017gkt,Fraser:2018aqj} for earlier related studies). Unlike other explanations, where the NP contributions directly enhance the $b\to c \tau \nu_\tau$ transition, this solution allows one to evade the stringent constraints arising from the $SU(2)_L$ doublet nature of the SM $\nu_\tau$ neutrino. In this case the $B\to D^{(*)}\tau\nu$ decay rate becomes the sum of two non-interfering contributions: ${\cal B}(B\to D^{(*)}\tau\nu) = {\cal B}(B\to D^{(*)}\tau\nu_\tau) + {\cal B}(B\to D^{(*)}\tau N_R) $. Several effective operators involving $N_R$ can be written at the $B$-meson mass scale.
In order to ensure that the differential distributions in the $B\to D^{(*)}\tau N_R$ process are compatible with the SM ones, as implicit in the global fits where the experimental acceptances are not assumed to be drastically modified by the presence of extra NP contributions, we assume that the sterile neutrino has a mass below $\sim \mathcal{O}(100) \textrm{ MeV}$ \cite{Greljo:2018ogz} and that the dominant contribution to the $R(D^{(*)})$ anomaly is given by a right-right vector operator \begin{equation} \mathcal{L}_{BSM}^{b\to c \tau \nu} = \frac{c_{R_D}}{\Lambda^2} \left( \bar c_R \gamma_\mu b_R \right)\left( \bar \tau_R \gamma^\mu N_R \right) + h.c.~. \label{eq:bctnuBSM} \end{equation} Matching to the observed excess one finds \cite{Amhis:2016xyh} (Summer 2018 update \cite{HFLAV:2018}) \begin{equation} R_{D^{(*)}} \equiv \frac{R(D)}{R(D)_{SM}} = \frac{R(D^*)}{R(D^*)_{SM}} = 1 + \left| \frac{c_{R_D} v^2}{2 \Lambda^2 V_{cb}} \right|^2 = 1.218 \pm 0.052~, \label{eq:RDst} \end{equation} where $v \approx 246 \textrm{ GeV}$ is the vacuum expectation value of the SM Higgs field. This gives an NP scale required to fit the observed excess \begin{equation} \Lambda / \sqrt{c_{R_D}} = (1.27^{+ 0.09}_{-0.07} )~ \textrm{ TeV}~. \label{eq:NPRDsize} \end{equation} Such a low NP scale strongly suggests that this operator could be generated by integrating out at tree-level some heavy mediator. There are only three possible new degrees of freedom which can do that: \begin{itemize} \item a charged vector \quad $W^\prime_\mu \sim ({\bf 1}, {\bf 1}, +1)$, \item a vector leptoquark \quad $U_1^\mu \sim ({\bf 3}, {\bf 1}, +2/3)$, \item a scalar leptoquark \quad $S_1 \sim ({\bf \bar 3}, {\bf 1}, +1/3)$, \end{itemize} where in parentheses we indicate their $\textrm{SU}(3)_c \times \textrm{SU}(2)_L \times \textrm{U}(1)_Y$ quantum numbers~\footnote{We normalise the weak hypercharge as $Q=T^{3L}+Y$.}.
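The effective scale in Eq.~\eqref{eq:NPRDsize} follows from inverting Eq.~\eqref{eq:RDst} at the central value of the excess. A quick numerical check in Python (the value of $|V_{cb}|$ used here is an assumed illustrative input, not quoted in the text):

```python
from math import sqrt

v = 246.0      # GeV, SM Higgs vev
Vcb = 0.040    # assumed |V_cb|; an illustrative input value
dR = 0.218     # central value of R_D(*)/R_D(*)^SM - 1

# From |c_RD v^2 / (2 Lambda^2 V_cb)|^2 = dR:
lam = sqrt(v**2 / (2.0 * Vcb * sqrt(dR)))   # Lambda / sqrt(c_RD), in GeV
print(f"Lambda/sqrt(c_RD) = {lam / 1e3:.2f} TeV")  # prints 1.27 TeV
```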
The case of the $W^\prime_\mu$ has been recently studied in detail in Refs.~\cite{Asadi:2018wea,Greljo:2018ogz}. In this work we focus on the two coloured leptoquark (LQ) models. Interestingly enough, both LQs can also contribute to the neutral-current $b\to s\mu^+ \mu^-$ transition. In particular, the vector LQ $U_1$ contributes to that process at tree-level while the scalar $S_1$ only at one loop. By considering the most general gauge invariant Lagrangians and assuming a specific flavour structure, we study in detail the conditions under which the two LQ models can simultaneously explain both the $R(D^{(*)})$ and the $R(K^{(*)})$ measured values, taking into account all the relevant flavour and collider limits. Our findings show that the vector LQ provides a successful combined explanation of both anomalies, while being consistent with other low- and high-$p_T$ experiments. Instead, while the scalar LQ can address $R(D^{(*)})$, a combined explanation including $R(K^{(*)})$ is in tension with bounds arising from $B_s-\bar B_s$ mixing. Also, by studying the present limits and future projections for collider searches, we find that the Large Hadron Collider (LHC) will be able to completely test both models already with $\sim 300\;$fb$^{-1}$ of integrated luminosity. For both models we then show that additional contributions to the mass of the active neutrinos generated by the operator responsible for reproducing the $R(K^{(*)})$ anomaly point to a specific extension of our framework, where neutrino masses are generated via the inverse see-saw mechanism~\cite{Mohapatra:1986aw,Mohapatra:1986bd,Dias:2012xp}. We finally study the cosmological bounds on the right-handed neutrino $N_R$ and discuss the conditions under which it can be identified with a Dark Matter (DM) candidate. We show that an ${\cal O}(1)\;$keV sterile neutrino can behave as DM only when the operators responsible for the explanation of the $R(K^{(*)})$ anomaly are turned off.
In this case $N_R$ can reproduce the whole DM abundance observed in the Universe under the condition of additional entropy injection in the visible sector after the $N_R$ decoupling, while being compatible with bounds arising from the presence of extra degrees of freedom in the early Universe and from structure formation at small scales. Very recently, while this work was already in the final stages of preparation, Ref.~\cite{Robinson:2018gza}, which has some overlap with our paper, appeared on the arXiv. In particular~\cite{Robinson:2018gza} also studies explanations of $R(D^{(*)})$ anomalies with the two LQs considered here, as well as with other states which generate operators different from the right-right one, and studies the present LHC limits from LQ pair production. In this work we go beyond that analysis by studying in detail the possibility of a {\emph{combined explanation}} with the $b\to s \ell^+ \ell^-$ neutral-current anomalies, by also studying LHC constraints from off-shell exchange of LQs, which turn out to be very relevant, by discussing a possible scenario that can account for the generation of neutrino masses, and by presenting a detailed study of the cosmological aspects of the sterile neutrino relevant for the anomalies. The layout of the paper is as follows. In Sec.~\ref{sec:models} we introduce the two LQ models with a right-handed neutrino and we describe their flavour structure and their implications for the relevant flavour observables. Limits arising from LHC searches are shown in Sec.~\ref{sec:collider}, while possible model extensions that can account for the generation of neutrino masses are discussed in Sec.~\ref{sec:neutrino}. Sec.~\ref{sec:cosmo} is dedicated to the discussion of the cosmological properties of $N_R$. We finally conclude in Sec.~\ref{sec:concl}.
\section{Simplified models and flavour observables} \label{sec:models} In this Section we separately describe the interaction Lagrangians of the two candidate LQs, $U_1$ and $S_1$, in the presence of a right-handed SM singlet $N_R$, assuming baryon and lepton number conservation. We work in the down-quark and charged-lepton mass basis, so that $q_L^i = (V_{ji}^* u_L^j, d_L^i)^T$ and $\ell_L^\alpha = (\nu_L^\alpha, e_L^\alpha)^T$. Integrating out the LQs at the tree-level one generates a set of dimension-six operators, $\mathcal{L}^{\rm EFT} = - \frac{1}{v^2} \sum_x C_x O_x$, whose structures and corresponding values of the Wilson coefficients are indicated in Tab.~\ref{tab:operators}. For both mediators we study whether the charged-current anomalies can be addressed while at the same time being consistent with all other experimental constraints. Furthermore, we also consider the possibility of addressing with the same mediators the neutral-current $R(K^{(*)})$ anomalies. \begin{table}[t!] \begin{center} \begin{tabular}{ l | c | c | c } Operator & Definition & Coeff. $U_1$ & Coeff.
$S_1$ \\ \hline \hline $(O_{l q}^1)_{\alpha \beta i j}$ & $(\bar{l}_L^\alpha \gamma_\mu l_L^\beta) (\bar{q}_L^i \gamma^\mu q_L^j)$ & $2 \xi \; g_{i\beta}^{q} g^{q*}_{j\alpha}$ & $- \xi \; \lambda_{i\alpha}^{q*} \lambda^q_{j\beta}$ \\ $(O_{l q}^3)_{\alpha \beta i j}$ & $(\bar{l}_L^\alpha \gamma_\mu \sigma^a l_L^\beta) (\bar{q}_L^i \gamma^\mu \sigma^a q_L^j)$ & $2 \xi \; g_{i\beta}^{q} g^{q*}_{j\alpha}$ & $\xi \; \lambda_{i\alpha}^{q*} \lambda^q_{j\beta}$ \\ $(O_{l e q u}^1)_{\alpha \beta i j}$ & $(\bar{l}_L^\alpha e_R^\beta) \epsilon (\bar{q}_L^i u_R^j)$ & 0 & $- 2 \xi \; \lambda^u_{j\beta} \lambda_{i\alpha}^{q*}$ \\ $(O_{l e q u}^3)_{\alpha \beta i j}$ & $(\bar{l}_L^\alpha \sigma_{\mu\nu} e_R^\beta) \epsilon (\bar{q}_L^i \sigma^{\mu\nu} u_R^j)$ & 0 & $\frac{1}{2} \xi \; \lambda^u_{j\beta} \lambda_{i\alpha}^{q*}$\\ $(O_{e u})_{\alpha \beta i j}$ & $(\bar{e}_R^\alpha \gamma_\mu e_R^\beta) (\bar{u}_R^i \gamma^\mu u_R^j)$ & 0 & $- 2 \xi \; \lambda_{i\alpha}^{u\,*} \lambda^u_{j\beta}$ \\ $(O_{e d})_{\alpha \beta i j}$ & $(\bar{e}_R^\alpha \gamma_\mu e_R^\beta) (\bar{d}_R^i \gamma^\mu d_R^j)$ & $4 \xi g_{i\beta}^{d} g^{d*}_{j\alpha}$ & 0 \\ $(O_{N d})_{i j}$ & $(\bar N_R \gamma_\mu N_R) (\bar{d}_R^i \gamma^\mu d_R^j)$ & 0 & $- 2 \xi \; \lambda_{i N}^{d\,*} \lambda^d_{j N}$\\ $(O_{N u})_{i j}$ & $(\bar N_R \gamma_\mu N_R) (\bar{u}_R^i \gamma^\mu u_R^j)$ & $4 \xi g_{i N}^{u} g^{u*}_{j N}$ & 0 \\ $(O_{e N u d})_{\alpha i j}$ & $(\bar{e}_R^\alpha \gamma_\mu N_R) (\bar{u}_R^i \gamma^\mu d_R^j)$ & $4 \xi g_{i}^{u N} g^{d*}_{j\alpha}$ & $- 2 \xi \; \lambda_{i\alpha}^{u\,*} \lambda^d_{j}$\\ $(O_{l N q d}^1)_{\alpha i j}$ & $(\bar{l}_L^\alpha N_R) \epsilon (\bar{q}_L^i d_R^j)$ & 0 & $- 2 \xi \; \lambda^d_{j N} \lambda_{i\alpha}^{q*}$ \\ $(O_{l N q d}^3)_{\alpha i j}$ & $(\bar{l}_L^\alpha \sigma_{\mu\nu} N_R) \epsilon (\bar{q}_L^i \sigma^{\mu\nu} d_R^j)$ & 0 & $\frac{1}{2} \xi \; \lambda^d_{j N} \lambda_{i\alpha}^{q*}$ \\ $(O_{l e d q})_{\alpha \beta i j}$ & $(\bar{l}_L^\alpha 
e_R^\beta) (\bar d_R^i q_L^j)$ & $-8 \xi g^{d}_{i\beta} g_{j\alpha}^{q*}$ & 0 \\ $(O_{l N u q})_{\alpha i j}$ & $(\bar{l}_L^\alpha N_R) (\bar u_R^i q_L^j)$ & $-8 \xi g^{u}_{i N} g_{j\alpha}^{q*}$ & 0 \\ \end{tabular} \end{center} \caption{Dimension-six operators and corresponding Wilson coefficients obtained integrating out at tree-level the $U_1$ and $S_1$ mediators. $\xi=v^2/(4 m_{U,S}^2)$.} \label{tab:operators} \end{table} \subsection{Vector LQ ${\mathbf{U_1}}$} The general interaction Lagrangian of the vector LQ $U_1 \sim ({\bf 3}, {\bf 1}, +2/3)$ with SM fermions and a right-handed neutrino $N_R$ reads \begin{equation}\label{eq:lag_U} \mathcal{L} = U_1^\mu ({\bf g}^u_i \bar u_R^i \gamma_\mu N_R + {\bf g}^d_{i\alpha} \bar d_R^i \gamma_\mu e_R^\alpha + {\bf g}^q_{i\alpha} \bar q_L^i \gamma_\mu l_L^\alpha)+h.c.~, \end{equation} where ${\bf g}^{q,d}$ are $3\times 3$ matrices while ${\bf g}^{u}$ is a $3$-vector in flavour space. The integration of the $U_1$ state produces the seven dimension-six operators indicated in Tab.~\ref{tab:operators}, where $\xi = v^2 / (4 m_{U}^2)$. From these operators it is clear that this vector LQ can contribute to $R(D^{(*)})$ in several ways: \begin{enumerate} \item[\emph{i)}] via the vector $LL$ operator $O^3_{lq}$ proportionally to $g^q_{b(s)\tau}$; \item[\emph{ii)}] via the scalar operator $O_{ledq}$ proportionally to $g^d_{b\tau} g^q_{b(s)\tau}$; \item[\emph{iii)}] via the scalar operator $O_{lNuq}$ proportionally to $g^u_{c N} g^q_{b\tau}$; \item[\emph{iv)}] via the vector $RR$ operator $O_{eNud}$ proportionally to $g^u_{c N} g^d_{b\tau}$. \end{enumerate} The first three solutions involve a large coupling to third-generation left-handed quarks and leptons and have been studied widely in the literature \cite{Barbieri:2016las,Barbieri:2017tuq,Cline:2017aed,Buttazzo:2017ixm,Assad:2017iib,Calibbi:2017qbu,DiLuzio:2017vat,Bordone:2017bld,Greljo:2018tuh,Blanke:2018sro,Bordone:2018nbg}. 
Such structures can potentially lead to some tension with $Z$ boson coupling measurements, LFU tests in $\tau$ decays, and $B_s-\bar B_s$ mixing. To avoid these issues and since our goal is to study mediators contributing to $R(D^{(*)})$ mainly via the operator in Eq.~\eqref{eq:bctnuBSM}, we set $g^q_{i\tau} \approx 0$ and focus instead on case \emph{iv)}. In order to explain both the $R(D^{(*)})$ and $R(K^{(*)})$ anomalies we assume the LQ couplings to fermions to have the following flavour structure: \begin{equation} {\bf g}^q = \left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & g_{s\mu}^q & 0 \\ 0 & g_{b\mu}^q & 0 \end{array}\right)~, \qquad {\bf g}^d = \left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & g_{b\tau}^d \end{array}\right)~, \qquad {\bf g}^u = \left( 0, ~ g_{cN}^u, 0 \right)^T~, \label{eq:flav_structure_U} \end{equation} with $g^d_{b\tau} g^u_{cN} \sim \mathcal{O}(1)$, $g^q_{b\mu}, g^q_{s\mu} \ll 1$. Note that one could potentially also add a coupling to the right-handed top, but since it does not contribute to the flavour anomalies we neglect it in the following. By fitting the excess in the charged-current LFU ratios one obtains with this coupling structure \begin{equation} \delta R_{D^{(*)}} =\frac{|g_{c N}^{u\,*} g^d_{b\tau}|^2}{m_{U}^4} \frac{v^4}{4 | V_{cb}|^2}=0.218 \pm 0.052 \end{equation} hence \begin{equation} \label{eq:RDfit_vec} |g_{c N}^{u} g^d_{b\tau}| \sim 0.62 \sqrt{\frac{\delta R_{D^{(*)}} }{0.218}}\left(\frac{m_{U}}{1\;{\rm TeV}}\right)^2. \end{equation} With the couplings in Eq.~\eqref{eq:flav_structure_U}, the vector LQ also contributes at the tree-level to $b \to s \mu^+ \mu^-$ transitions via the two operators $O^{1,3}_{lq}$.
By fitting the anomaly and matching to the standard weak Hamiltonian notation we get \begin{equation} \Delta C_9^\mu = - \Delta C_{10}^\mu = - \frac{\pi v^2}{\alpha V_{tb} V_{ts}^*} \frac{g^q_{b\mu} (g^q_{s\mu})^*}{m_{U}^2} = -0.61 \pm 0.12~, \end{equation} where we used the result of the global fit in \cite{Altmannshofer:2017yso} (see also \cite{Descotes-Genon:2015uva,DAmico:2017mtc,Capdevila:2017bsm,Ciuchini:2017mik,Ghosh:2017ber,Hiller:2017bzc,Bardhan:2017xcc}). This corresponds to \begin{equation} g^q_{b\mu} (g^q_{s\mu})^* = \left( -0.93 \pm 0.18 \right) \times 10^{-3} \left(\frac{m_{U}}{1 \textrm{ TeV}}\right)^2~. \label{eq:RKfitU1} \end{equation} The vector LQ, with the couplings required to fit the $B$-anomalies as detailed above, contributes also to other flavour and precision observables. While all constraints can be successfully satisfied, we list in the following the most relevant ones. The contribution to the $B_c \to \mu N$ decay width and the corresponding limit \cite{Alonso:2016oyd} are given by \begin{equation} \mathcal{B}(B_c \to \mu N) = \frac{\tau_{B_c} f_{B_c}^2 m_{B_c}}{64 \pi} \left| \frac{c_{lNuq}}{\Lambda^2} \frac{m^2_{B_c}}{(\overline{m}_b + \overline{m}_c)} \right|^2 \lesssim 5\% \quad \rightarrow \quad |g^{q}_{b\mu} g^u_{c N}| \lesssim 0.23 \left( \frac{m_U}{1 \textrm{ TeV}} \right)^2~, \label{eq:BcU1limit} \end{equation} where $f_{B_c} \approx 0.43 \textrm{ GeV}$ \cite{Aoki:2016frl}, $m_{B_c} \approx 6.275 \textrm{ GeV}$ and $\tau_{B_c} \approx 0.507 \times 10^{-12} s$ \cite{Olive:2016xmw}. 
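The bound in Eq.~\eqref{eq:BcU1limit} can be reproduced numerically from the quoted meson parameters; the $\overline{\rm MS}$ quark masses below are assumed inputs, since the text does not list them:

```python
from math import pi, sqrt

hbar = 6.582119569e-25            # GeV s, to convert the lifetime
tau_Bc = 0.507e-12 / hbar         # B_c lifetime in GeV^-1
f_Bc, m_Bc = 0.43, 6.275          # GeV, decay constant and mass
mb, mc = 4.18, 1.27               # GeV, assumed MS-bar quark masses
m_U = 1000.0                      # GeV, benchmark LQ mass

def br_Bc_muN(g_prod):
    """B(B_c -> mu N) for |g^q_{b mu} g^u_{c N}| = g_prod, using
    |c_lNuq|/Lambda^2 = 2 g_prod / m_U^2 (Tab. 1 with xi = v^2/4 m_U^2)."""
    c_eff = 2.0 * g_prod / m_U**2
    return tau_Bc * f_Bc**2 * m_Bc / (64.0 * pi) * (c_eff * m_Bc**2 / (mb + mc))**2

g_max = sqrt(0.05 / br_Bc_muN(1.0))   # invert the 5% limit
print(f"|g^q_bmu g^u_cN| < {g_max:.2f} (m_U = 1 TeV)")  # prints 0.23
```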
A chirally-enhanced contribution is also generated for the $D_s \to \mu N$ decay, which is measured at a few percent level: \begin{equation} \mathcal{B}(D_s \to \mu N) = \frac{\tau_{D_s} f_{D_s}^2 m_{D_s}}{64 \pi} \left( \frac{1}{(\Lambda_{\rm eff}^{cs})^4} + \left| \frac{2 g^{q\,*}_{s\mu} g^u_{cN}}{m_U^2} \frac{m^2_{D_s}}{(\overline{m}_s + \overline{m}_c)} \right|^2 \right) = (5.56 \pm 0.25) \times 10^{-3}~, \label{eq:DsU1} \end{equation} where $\Lambda_{\rm eff}^{cs} = (1 / 2\sqrt{2} G_F V_{cs})^{1/2}$, $f_{D_s} \approx 0.25 \textrm{ GeV}$ \cite{Aoki:2016frl}, $m_{D_s} \approx 1.986 \textrm{ GeV}$ and $\tau_{D_s} \approx 5 \times 10^{-13} s$ \cite{Olive:2016xmw}, which gives an upper 95\% CL bound $|g^{q}_{s\mu} g^u_{c N}| \lesssim 0.18 \left( \frac{m_U}{1 \textrm{ TeV}} \right)^2$. The prediction for the lepton flavour violating (LFV) decay $B_s \to \tau \mu$ from the $(O_{ledq})_{\mu\tau bs}$ operator is given by \begin{equation} \mathcal{B}(B_s \to \tau \mu) = \frac{\tau_{B_s} f_{B_s}^2 m_{B_s}}{32 \pi} \left( 1 - \frac{m_\tau^2}{m_{B_s}^2} \right)^2 \left| \frac{c_{ledq}}{\Lambda^2} \frac{m^2_{B_s}}{(\overline{m}_b + \overline{m}_s)} \right|^2 \approx 5.4 \times 10^{-5} \left| \frac{g^{q\, *}_{s\mu} g^d_{b\tau}}{10^{-2}} \left( \frac{ 1\textrm{ TeV}}{m_U } \right)^2 \right|^2~, \end{equation} where $f_{B_s} \approx 0.224 \textrm{ GeV}$ \cite{Aoki:2016frl}, $m_{B_s} \approx 5.37 \textrm{ GeV}$ and $\tau_{B_s} \approx 1.51 \times 10^{-12} s$ \cite{Olive:2016xmw}. The only weak constraint on this decay is the indirect one arising from the total lifetime measurements of the $B_s$ meson, but in the future this process could be directly looked for at Belle-II. A contribution to $B_s-\bar B_s$ mixing is generated at the loop level and is proportional to $(g^q_{b\mu} (g^q_{s\mu})^*)^2$, which makes it negligibly small given Eq.~\eqref{eq:RKfitU1}. 
These couplings also induce a tree-level contribution to $b\to c\mu\nu$, which is constrained at the $\sim 1\%$ level; however, the prediction for this observable is also well below the experimental bound due to the small size of the couplings. Finally, we note that at one loop the vector LQ also generates contributions to $Z$ couplings to SM fermions, precisely measured at LEP-1. These effects can also be understood from the renormalisation group (RG) evolution of the operators in Tab.~\ref{tab:operators} from the scale $m_{U}$ down to the electroweak scale \cite{Feruglio:2016gvd,Feruglio:2017rjo,Cornella:2018tfd}. The relevant deviations in $Z$ couplings are:\footnote{Defined as $g_{f_{L,R}}^Z = g_{f_{L,R}}^{Z, \textrm{SM}} + \Delta g_{f_{L,R}}^Z$, where $g_{f_{L,R}}^{Z, \textrm{SM}} = (T_{3L}^f - Q^f s^2_{\theta_W})$. The limit on $\Delta g_{\nu_R}^Z$ comes from $N_\nu = \Gamma_{inv} / \Gamma_{\nu\bar\nu}^\textrm{SM} = 2 + \left| 1 + 2 \Delta g_{\nu_L^\mu}^Z\right|^2 + \left| 2 \Delta g_{\nu_R}^Z \right|^2 = 2.9840 \pm 0.0082$.} \begin{equation}\begin{split} |\Delta g_{\tau_R}^Z| &= \frac{v^2}{16 \pi^2 m_U^2} \frac{g_Y^2 |g^d_{b\tau}|^2}{3} \log\frac{m_U}{m_Z} \approx (3.8 \times 10^{-5}) \frac{|g_{b\tau}^d |^2}{(m_U / 1 \textrm{ TeV})^2} < 1.2 \times 10^{-3} \\ |\Delta g_{N_R}^Z| &= \frac{v^2}{32 \pi^2 m_U^2} \frac{4 g_Y^2 |g^u_{c N}|^2}{3} \log\frac{m_U}{m_Z} \approx (7.5 \times 10^{-5}) \frac{|g_{cN}^u |^2}{(m_U / 1 \textrm{ TeV})^2} < 2 \times 10^{-3} \label{eq:RGEZbounds}~, \end{split}\end{equation} where the 95\% confidence level (CL) limits have been taken from Ref.~\cite{ALEPH:2005ab}. It is clear that the $\mathcal{O}(1)$ couplings required to address the $R(D^{(*)})$ anomalies do not induce any dangerous effects in these observables.
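The loop factors in Eq.~\eqref{eq:RGEZbounds} are easily checked numerically; the value of the hypercharge coupling used below is an assumed input, and the small residual differences from the quoted coefficients trace back to that choice:

```python
from math import log, pi

v, m_U, m_Z = 246.0, 1000.0, 91.1876   # GeV
gY2 = 0.127                            # assumed value of g_Y^2

# Coefficients of |g^d_btau|^2 and |g^u_cN|^2 in the Z-coupling shifts:
dg_tau = v**2 / (16.0 * pi**2 * m_U**2) * gY2 / 3.0 * log(m_U / m_Z)
dg_N = v**2 / (32.0 * pi**2 * m_U**2) * 4.0 * gY2 / 3.0 * log(m_U / m_Z)
print(f"|dg_tauR| ~ {dg_tau:.1e} |g_btau|^2")  # compare 3.8e-5 in the text
print(f"|dg_NR|   ~ {dg_N:.1e} |g_cN|^2")      # compare 7.5e-5 in the text
```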
We conclude this section by stressing that the vector LQ $U_1$ with the coupling structure in Eq.~\eqref{eq:flav_structure_U} is able to successfully fit both charged- and neutral-current $B$-physics anomalies, while at the same time satisfying all other flavour and precision constraints with no tuning required. In Sec.~\ref{sec:collider} we show how this mediator can also pass all available limits from direct searches, but it should be observed with more data gathered at the LHC. Finally, in Sections~\ref{sec:neutrino} and ~\ref{sec:cosmo} we show how the sterile neutrino $N_R$ can satisfy all constraints from both neutrino physics and cosmology. \subsection{Scalar LQ ${\mathbf{S_1}}$} The general interaction Lagrangian for the scalar LQ $S_1 \sim ({\bf \bar 3}, {\bf 1}, +1/3)$ and a right-handed neutrino $N_R$ is \begin{equation}\label{eq:lag_S1} \mathcal{L} = S_1 \left( {\bf \lambda}^u_{i,\alpha} \bar u_R^{c,i} e_R^\alpha + {\bf \lambda}^d_i \bar d_R^{c,i} N_R + {\bf \lambda}^q_{i,\alpha} \bar q_L^{c,i} \epsilon \ell_L^\alpha \right) + h.c.~, \end{equation} where ${\bf \lambda}^{q,u}$ are $3\times 3$ matrices while ${\bf \lambda}^{d}$ is a $3$-vector in flavour space and the superscript $c$ denotes charge conjugation. The operators generated by integrating out this LQ are listed in Tab.~\ref{tab:operators}. As for the vector LQ, the scalar one can also contribute to $R(D^{(*)})$ in several ways, including via a large coupling to third-generation left-handed quarks and leptons \cite{Gripaios:2009dq,Sakaki:2013bfa,Hiller:2014yaa,Gripaios:2014tna,Bauer:2015knc,Das:2016vkr,Becirevic:2016oho,Hiller:2016kry,Crivellin:2017zlb,Cai:2017wry,Dorsner:2017ufx,Buttazzo:2017ixm,Fajfer:2018bfj,Marzocca:2018wcf}, which however leads to tension with electroweak precision tests and $B_s-\bar B_s$ mixing \cite{Buttazzo:2017ixm,Marzocca:2018wcf}.
We thus focus on the case where $\lambda^q_{i\tau} \ll 1$ and where the leading contribution to $b \to c \tau \nu$ arises from the operator in Eq.~\eqref{eq:bctnuBSM}. Contrary to the vector LQ, the scalar one does not contribute to $b\to s\mu^+\mu^-$ at tree level. It does, however, contribute at one loop \cite{Bauer:2015knc} via box diagrams proportional to the $\lambda^q_{s\mu} \lambda^q_{b\mu}$ couplings. Our goal is thus to fit $R(D^{(*)})$ at tree level via right-handed currents involving $N_R$, while possibly fitting $R(K^{(*)})$ at one loop with the corresponding couplings to left-handed fermions. In this spirit we require the following couplings to be non-vanishing: \begin{equation} {\bf \lambda}^q = \left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & \lambda_{s\mu}^q & 0 \\ 0 & \lambda_{b\mu}^q & 0 \end{array}\right)~, \qquad {\bf \lambda}^u = \left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & \lambda_{c\tau}^u \\ 0 & 0 & 0 \end{array}\right)~, \qquad {\bf \lambda}^d = \left( 0, ~ 0, ~ \lambda_{bN}^d \right)^T~. \label{eq:flav_structure_S} \end{equation} In the limit where one does not address $R(K^{(*)})$, {\emph{i.e.}} $\lambda^q_{q\mu} \approx 0$, the only NP contribution to $R(D^{(*)})$ is given by the operator in Eq.~\eqref{eq:bctnuBSM}: \begin{equation} \delta R_{D^{(*)}} =\frac{|\lambda_{c\tau}^{u\,*} \lambda^d_{b N}|^2}{4 m_{S}^4} \frac{v^4}{4| V_{cb}|^2}=0.218 \pm 0.052~, \label{eq:fitRDS1} \end{equation} which further implies \begin{equation} \label{eq:RDfit} |\lambda_{c\tau}^{u} \lambda^d_{b N}| \sim 1.25 \sqrt{\frac{\delta R_{D^{(*)}} }{0.218}}\left(\frac{m_{S}}{1\;{\rm TeV}}\right)^2. \end{equation} Thus, with ${\cal O}(1)$ couplings, the scalar LQ should also live at the TeV scale in order to explain the measured values of $R(D^{(*)})$.
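Inverting Eq.~\eqref{eq:fitRDS1} at the central value $\delta R_{D^{(*)}} = 0.218$ reproduces the numerical coefficient of Eq.~\eqref{eq:RDfit} (a sketch; $V_{cb} \approx 0.041$ and $m_S = 1$~TeV are assumed inputs):

```python
import math

# Solve Eq. (fitRDS1) for the coupling product at the central value
# delta R = 0.218 (V_cb ~ 0.041 is an assumed input; m_S = 1 TeV).
v, V_cb, m_S = 246.0, 0.041, 1000.0   # GeV
delta_R = 0.218

lam_prod = math.sqrt(delta_R * 16 * m_S**4 * V_cb**2 / v**4)
print(f"|lam_u_ctau * lam_d_bN| ~ {lam_prod:.2f}")  # close to the ~1.25 of Eq. (RDfit)
```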
In the more general case, the couplings in $\lambda^q$ in Eq.~\eqref{eq:flav_structure_S} also induce additional contributions to $R(D^{(*)})$, which can be relevant since, as shown below, $\lambda^q_{b\mu}$ should be large if one aims to fit $R(K^{(*)})$: \begin{equation}\begin{split} R_D = \frac{R(D)}{R(D)_{\textrm{SM}}} &\approx 1 + 0.14 |\lambda_{c\tau}^{u} \lambda^d_{b N}|^2 \left(\frac{m_{S}}{1\;{\rm TeV}}\right)^{-4} + 0.19 |\lambda_{c\tau}^{u} \lambda^q_{b\mu}|^2 \left(\frac{m_{S}}{1\;{\rm TeV}}\right)^{-4} = 1.36 \pm 0.15~, \\ R_{D^*} = \frac{R(D^*)}{R(D^*)_{\textrm{SM}}} &\approx 1 + 0.14 |\lambda_{c\tau}^{u} \lambda^d_{b N}|^2 \left(\frac{m_{S}}{1\;{\rm TeV}}\right)^{-4} + 0.032 |\lambda_{c\tau}^{u} \lambda^q_{b\mu}|^2 \left(\frac{m_{S}}{1\;{\rm TeV}}\right)^{-4} = 1.186 \pm 0.062~, \label{eq:RDRDstS1} \end{split}\end{equation} with a correlation of $-0.203$. The operator $\propto \lambda^u_{c\tau} \lambda^{q \, *}_{b\mu}(\bar\nu_L^\mu \tau_R)(\bar b_L c_R)$ also induces a chirally enhanced contribution to the LFV process $B_c \to \tau \bar\nu^\mu_L$: \begin{equation} \mathcal{B}(B_c \to \tau \bar\nu^\mu_L) = \frac{\tau_{B_c} f_{B_c}^2 m_{B_c}}{64 \pi} \left( 1 - \frac{m_\tau^2}{m_{B_c}^2} \right)^2 \left| \frac{\lambda^u_{c\tau} \lambda^q_{b\mu}}{2 m_S^2} \frac{m^2_{B_c}}{(\overline{m}_b + \overline{m}_c)} \right|^2 \lesssim 5\%~. \end{equation} The corresponding constraint, \begin{equation} |\lambda^u_{c\tau} \lambda^q_{b\mu}| \lesssim 0.66 \left( \frac{m_S}{1 \textrm{ TeV}} \right)^2~, \end{equation} makes the contribution of these couplings to $R(D^{(*)})$ in Eq.~\eqref{eq:RDRDstS1} subleading, so that the contribution to the charged-current anomalies reduces to the expression in Eq.~\eqref{eq:fitRDS1}.
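The coefficient $0.14$ multiplying $|\lambda^u_{c\tau}\lambda^d_{bN}|^2$ in Eq.~\eqref{eq:RDRDstS1} can be checked against the prefactor of Eq.~\eqref{eq:fitRDS1} (a sketch; $V_{cb} \approx 0.041$ is an assumed input):

```python
# The coefficient multiplying |lam_u lam_d|^2 in Eq. (RDRDstS1) follows from
# the prefactor of Eq. (fitRDS1): v^4 / (16 m_S^4 |V_cb|^2), with V_cb ~ 0.041
# an assumed input and m_S = 1 TeV.
v, V_cb, m_S = 246.0, 0.041, 1000.0   # GeV
coeff = v**4 / (16 * m_S**4 * V_cb**2)
print(f"coefficient ~ {coeff:.2f}")   # ~ 0.14, matching Eq. (RDRDstS1)
```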
The couplings to quark and lepton doublets $\lambda^q_{q\mu}$ generate a $b\to c\mu \nu$ charged-current transition, and hence a violation of LFU in $b\to c \ell \nu$ processes, which is constrained at the percent level \cite{Jung:2018lfu}: \begin{equation} \delta R_{b\to c}^{\mu e} \approx 0.03 \left( \frac{1 \textrm{ TeV}}{m_S} \right)^2 \text{Re}\left[ \lambda_{b\mu}^{q\,*} \left( \lambda_{b\mu}^{q} + V_{cs} \frac{\lambda_{s\mu}^{q} }{V_{cb}} \right) \right] < \mathcal{O}(1\%). \end{equation} Since, as shown below, the coupling $\lambda_{b\mu}^{q}$ has to be larger than 1 in order to fit $R(K^{(*)})$, it is necessary to tune the term in parentheses as \begin{equation} \lambda^q_{s\mu} \sim - \frac{V_{cb}}{V_{cs}} \lambda_{b\mu}^q~. \label{eq:tuning_bcmunu} \end{equation} This relation also suppresses the non-interfering contribution to the same observable from the $(O^{1,3}_{lNqd})_{\mu c b}$ operators. Note that this relation corresponds to aligning the coupling to $t_L \mu_L$ in the up-quark mass basis, so that the LQ coupling to $c_L$ is strongly suppressed. The same couplings also induce a possibly large tree-level contribution to $b \to s \nu^\mu_L \nu^\mu_L$. The 95\% CL limit on $\mathcal{B}(B \to K^* \nu \nu)$ fixes the upper bound \begin{equation} R_{\nu\nu}: \quad - 1.2 \left( \frac{m_S}{1 \textrm{ TeV}}\right)^2 < \frac{\lambda_{b\mu}^{q} \lambda_{s\mu}^{q\,*}}{V_{tb} V_{ts}^*} < 2.2 \left( \frac{m_S}{1 \textrm{ TeV}}\right)^2 \quad \longrightarrow \quad |\lambda_{b\mu}^{q}|^2 \lesssim 2.2 \left( \frac{m_S}{1 \textrm{ TeV}}\right)^2~, \label{eq:RnunuBoundS1} \end{equation} where in the second step we used the condition in Eq.~\eqref{eq:tuning_bcmunu}.
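A quick check that the tuning of Eq.~\eqref{eq:tuning_bcmunu} turns the CKM combination entering Eq.~\eqref{eq:RnunuBoundS1} into essentially $|\lambda^q_{b\mu}|^2$ (a sketch; CKM elements are taken real, with assumed numerical values):

```python
# With the tuning lam_s = -(V_cb/V_cs) lam_b of Eq. (tuning_bcmunu), the CKM
# combination entering Eq. (RnunuBoundS1) reduces to ~ |lam_b|^2
# (CKM elements taken real, with assumed numerical values).
V_cb, V_cs, V_tb, V_ts = 0.0405, 0.9745, 1.0, -0.0405

lam_b = 1.0                            # normalise to |lam_b|^2 = 1
lam_s = -(V_cb / V_cs) * lam_b
ratio = lam_b * lam_s / (V_tb * V_ts)  # coefficient of |lam_b|^2
print(f"ratio / |lam_b|^2 ~ {ratio:.2f}")  # ~ 1, hence the bound |lam_b|^2 < ~2.2
```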
The loop contribution to $B \to K^{(*)} \mu^+ \mu^-$ is given by \cite{Bauer:2015knc} \begin{equation} \Delta C_9^\mu = - \Delta C_{10}^\mu \approx \frac{m_t^2}{16\pi \alpha m_S^2} |V_{t d_i}^* \lambda^q_{d_i \mu}|^2 - \frac{\sqrt{2}}{128 \pi \alpha G_f m_S^2} \left( \frac{\lambda^q_{b\mu} \lambda^{q\, *}_{s\mu}}{V_{tb} V_{ts}^*} \right) |V_{t d_i}^* \lambda^q_{d_i \mu}|^2 = -0.61 \pm 0.12 \label{eq:fit_bsmumu_1} \end{equation} Imposing the condition of Eq.~\eqref{eq:tuning_bcmunu} we obtain \begin{equation} |\lambda_{b\mu}^{q}|^2 \approx 0.87 + 3.84 \left( \frac{m_{S}}{1\;\rm{TeV}}\right) \sqrt{\frac{\Delta C_9^\mu}{-0.61}}. \label{eq:fit_bsmumu_3} \end{equation} Hence an $\cal O$(1) $\lambda^q_{b\mu}$ coupling is needed to explain the $R(K^{(*)})$ anomaly. This is compatible with the constraint in Eq.~\eqref{eq:RnunuBoundS1} for $m_S \gtrsim 2 \textrm{ TeV}$. As for the case of the vector LQ, the RG evolution of the effective operators down to the electroweak scale generates an effect in $Z$ couplings. In this setup this is particularly relevant for the $Z\mu\mu$ one, due to the contribution proportional to $y_t^2$: \begin{equation} \Delta g^Z_{\mu_L} = \frac{v^2}{64 \pi^2 m_S^2} \left( 6 y_t^2 + \frac{g_Y^2}{3} - g^2 \right) |\lambda_{b\mu}^{q}|^2 \log \frac{m_S}{m_Z} \approx (1.1 \times 10^{-3}) \frac{|\lambda_{b\mu}^{q}|^2}{(m_S / 1\textrm{ TeV})^2} < 2.2 \times 10^{-3}, \end{equation} which is compatible with Eq.~\eqref{eq:fit_bsmumu_3} for $m_S \gtrsim 2.2 \textrm{ TeV}$. The effects in $Z\tau_R\tau_R$ and $ZN_R N_R$ are similar to those in Eq.~\eqref{eq:RGEZbounds} and do not pose relevant constraints. \begin{figure}[t] \begin{center} \includegraphics[width=0.48\textwidth]{figures/S1_RK_fit} \caption{\small 95\% CL limits from flavour observables and $Z$ couplings measurements on $\lambda^q_{b\mu}$ as a function of the scalar LQ mass. 
The green (yellow) region represents the parameter space which fits $R(K^{(*)})$ at $1\sigma$ ($2\sigma$).} \label{fig:S1_RK_fit} \end{center} \end{figure} At one loop, the couplings $\lambda^q_{b\mu}$ and $\lambda^q_{s\mu}$ also contribute to $B_s - \bar B_s$ mixing: \begin{equation} \frac{C_0^{\rm NP}}{C_0^{\textrm{SM}}} = \frac{1}{C_0^{\textrm{SM}}}\frac{v^2}{4 m_S^2} \left( \frac{\lambda_{b\mu}^{q} \lambda_{s\mu}^{q\,*}}{V_{tb} V_{ts}^*} \right)^2 \approx 0.24 \left(\frac{1 \textrm{ TeV}}{m_S} \right)^2 \left|\frac{\lambda^q_{b\mu}}{2} \right|^4 = \Big\{~^{0.07 \pm 0.09 \text{ -- UTfit \cite{UTFIT:2016}}}_{-0.11 \pm 0.06 \text{ -- DKL \cite{DiLuzio:2017fdq}}}~, \end{equation} where $C_0^{\rm{SM}} = 4\pi \alpha S_0(x_t) / s_w^2 \approx 1$. It is clear that some tension with the value required to fit $R(K^{(*)})$, Eq.~\eqref{eq:fit_bsmumu_3}, is present for any value of $m_S$. These limits are shown in Fig.~\ref{fig:S1_RK_fit}. While the model is compatible with the experimental bounds on $B_s$ mixing within $2\sigma$ if the result from UTfit \cite{UTFIT:2016} is considered, the bound from Ref.~\cite{DiLuzio:2017fdq} (see also Refs.~\cite{Bazavov:2016nty,Blanke:2016bhf}) excludes the $R(K^{(*)})$ solution, unless some other NP contribution to $B_s - \bar B_s$ mixing cancels the one from $S_1$. \section{Collider searches} \label{sec:collider} In Sec.~\ref{sec:models} we have shown that, in order to explain the observed value of $R(D^{(*)})$, both the vector and the scalar LQ should have a mass that, for ${\cal O}(1)$ values of the couplings, is around 1 TeV, implying the possibility of testing their existence at high-energy collider experiments. At the LHC, LQs can be searched for in three main ways: \emph{i)} they can be produced on-shell via QCD interactions; \emph{ii)} they can be singly produced via their couplings to SM fermions; \emph{iii)} they can be exchanged in the t-channel in $q\bar q$ scattering.
In this Section we illustrate the main constraints arising from LHC searches on the two LQ models considered, from both pair production and off-shell exchange. Single-production modes, instead, while they will be relevant in the future for large LQ masses, do not at present offer competitive bounds; see {\emph{e.g.}} Ref.~\cite{Dorsner:2018ynv}. \subsection{Vector Leptoquark ${\mathbf{U_1}}$} \subsubsection*{Pair-production} The interactions of Eq.~\eqref{eq:lag_U} can be constrained in several ways by LHC searches. When LQs are produced on-shell and in pairs through QCD interactions, their phenomenology is dictated only by the relative weight of their branching ratios. As we discussed in Sec.~\ref{sec:models}, the couplings $g^q_{s\mu}$ and $g^q_{b\mu}$ in Eq.~\eqref{eq:lag_U} can account for $R(K^{(*)})$ at tree level, which implies that they should be considerably smaller than $g^d_{b\tau}$ and $g^u_{cN}$, the couplings responsible for explaining $R(D^{(*)})$, also at tree level, see Eq.~\eqref{eq:RKfitU1} and Eq.~\eqref{eq:RDfit_vec}. For this reason $g^q_{s\mu}$ and $g^q_{b\mu}$ can be neglected when studying the LHC phenomenology of the vector LQ. The relative rate of the dominant decay channels is thus set by the following ratio \begin{equation} \frac{\Gamma(U_1 \to b \bar \tau)}{\Gamma(U_1 \to c \bar N_R)}\sim\frac{|g^d_{b\tau}|^2}{|g^u_{c N}|^2}. \end{equation} Regarding production, LQs can be copiously produced in pairs at the LHC through the QCD interactions described by the following Lagrangian \begin{equation}\label{eq:lag_kin_U1} {\cal L}_{\rm kin.}^{U_1} = -\frac{1}{2}U_{1\,\mu\nu}^\dag U_1^{\mu\nu}- i g_s \kappa U_{1}^{\mu\,\dag} T^a U_{1}^{\nu}G_{\mu\nu}^a + m_{U}^2 U_{1\,\mu}^\dag U_1^\mu. \end{equation} Here $g_s$ is the strong coupling constant, $G_{\mu\nu}^a$ the gluon field strength tensor, $T^a$ are the $SU(3)_c$ generators with $a=1,...,8$, and $\kappa$ is a dimensionless parameter that depends on the ultraviolet origin of the vector LQ.
The choices $\kappa=0,1$ correspond to the minimal-coupling and the Yang-Mills case, respectively. Apart from the choice of $\kappa$, the cross-section depends only on the LQ mass~\footnote{In reality, additional model-dependent processes can contribute to the LQ pair-production cross section. We have checked, however, that for perturbative values of the LQ couplings they are subdominant with respect to the leading QCD ones. This is also true for the case of the scalar LQ discussed in Sec.~\ref{sec:LHC_S1}.}. For our analysis we compute the LQ pair-production cross-section at LO in QCD with {\tt MadGraph5\_aMC@NLO}~\cite{Alwall:2014hca}, through the implementation of the Lagrangian of Eq.~\eqref{eq:lag_kin_U1} in {\tt FeynRules} performed in~\cite{Dorsner:2018ynv}, which has been made publicly available~\footnote{Unless explicitly stated otherwise, all the cross-sections used in this work have been computed with {\tt MadGraph5\_aMC@NLO}. When the relevant model files were not publicly available, we have implemented the relevant Lagrangians with the {\tt FeynRules} package and exported them in the {\tt UFO} format~\cite{Degrande:2011ua}.}. The CMS collaboration has performed various analyses targeting pair-produced LQs. In particular, the analysis in~\cite{Sirunyan:2017yrk}, recently updated in \cite{CMS-PAS-EXO-17-016}, searched for a pair of LQs decaying into a $2b2\tau$ final state, setting a limit of $\sim 5\;$fb on the inclusive cross-section times branching ratio for a LQ with a mass of 1 TeV.
In the case of the $2c2N_R$ final state, we can reinterpret the existing experimental limits on first- and second-generation squarks decaying into a light jet and a massless neutralino~\cite{Aaboud:2017vwy}, for which the ATLAS collaboration provided the upper limits on the cross-sections for various squark masses on~{\tt HEPData}; these have then been used to compute the bounds as a function of the LQ mass~\footnote{The limits derived in this way agree with those obtained by the CMS collaboration by reinterpreting SUSY searches in~\cite{Sirunyan:2018kzh}.}. The bounds arising from LQ pair-production searches are shown as green and blue shaded areas in Fig.~\ref{fig:LHC_U1} for $\kappa=0$ (left panel) and 1 (right panel) in the $m_{U}-g^d_{b\tau}$ plane. Here $g^u_{cN}$ has been fixed to match the central value of $R({D^{(*)}})$ according to Eq.~\eqref{eq:RDfit_vec}. Also shown are the projections for an LHC integrated luminosity of 300 fb$^{-1}$, obtained by rescaling the current limits on the cross section by the factor $\sqrt{300\;{\rm fb}^{-1}/{\cal L}_0}$, with ${\cal L}_0$ the current luminosity of the considered analysis. Altogether, we see that current direct searches are able to constrain vector LQs up to $\sim1.3\;$TeV for $\kappa=0$ and $\sim 1.8\;$TeV for $\kappa=1$ when the dominant decay mode is into a $2c2N_R$ final state, with slightly weaker limits in the case of an inclusive $2b2\tau$ decay. \begin{figure}[t] \begin{center} \includegraphics[width=0.48\textwidth]{figures/LHC_U1_ALL_k0.pdf}\hfill \includegraphics[width=0.48\textwidth]{figures/LHC_U1_ALL_k1.pdf} \caption{ \small Limits arising from direct and indirect LHC searches in the $m_{U}-{g}^d_{b\tau}$ plane, with ${g}^u_{c N}$ fixed to fit the central value of $R_{D^{(*)}}$ for $\kappa=0$ (left) and $\kappa=1$ (right). Current limits are shown as shaded areas, while projections for 300 fb$^{-1}$ of integrated luminosity as dashed lines.
The arrow indicates the region excluded by the $\tau\nu$ search. The region where $g^u_{c N}$ becomes non perturbative is also illustrated. } \label{fig:LHC_U1} \end{center} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[width=0.48\textwidth]{figures/LHC_U1.pdf} \caption{\small Present and projected limits from $\tau \nu$ searches in the $m_{U}-|{ g}^u_{cN}{ g}^d_{b\tau}|$ plane. Also shown are the 68\% and 95\% CL intervals around the central values of $R_{D^{(*)}}$, Eq.~\eqref{eq:RDst}.} \label{fig:LHC_U1_2} \end{center} \end{figure} \subsubsection*{Off-shell exchange} From the Lagrangian of Eq.~\eqref{eq:lag_U}, and with the assumptions of Eq.~\eqref{eq:flav_structure_U}, we see that other relevant constraints can arise from $\bar c c \to N_R N_R$, $\bar b b \to \tau \tau$ and $\bar b c \to \bar \tau N_R$ processes which occur through the exchange of a t-channel LQ. In particular, $\bar b c \to \bar \tau N_R$ directly tests the same interactions responsible for explaining the $R(D^{(*)})$ anomalies. The ATLAS collaboration published a search for high-mass resonances in the $\tau \nu$ final state with 36~fb$^{-1}$ of luminosity \cite{Aaboud:2018vgh}, which we can use to obtain limits in our model. To do this, we computed with {\tt MadGraph5\_aMC@NLO} the fiducial acceptance $\mathcal{A}$ and reconstruction efficiency $\epsilon$ in our model as a function of the threshold in the transverse mass $m_T$, and used the model-independent bound on $\sigma(pp \to \tau \nu + X) \times \mathcal{A} \times \epsilon$ as a function of $m_T$ published in \cite{Aaboud:2018vgh} to derive the constraints. We then rescale the expected limits on the cross section with the square root of the luminosity to derive the estimate for future projections. The present and future-projected limits in the $m_U$ vs. 
$|g^u_{cN} g^d_{b\tau}|$ plane derived in this way are shown in Fig.~\ref{fig:LHC_U1_2}, together with the band showing the region which fits the $R(D^{(*)})$ anomaly. We notice that, while the present limits are not yet sensitive enough to test the parameter space relevant for the anomalies, with 300~fb$^{-1}$ most of the relevant space will be covered experimentally. Moreover, with increasing luminosity this channel will put \emph{upper limits} on the LQ mass (once a successful fit of the $R({D^{(*)}})$ anomaly is imposed), complementing the \emph{lower limits} usually derived from pair-production searches. The $c \bar c \to N_R \bar N_R$ channel gives rise to a fully invisible final state. In this case one can require the presence of an initial-state-radiation jet on which to trigger, thus obtaining a monojet signature. The CMS collaboration has performed this analysis for the case of a coloured scalar mediator connecting the SM visible sector with a dark matter candidate~\cite{Sirunyan:2017jix}. Assuming couplings to up-type quarks only, and fixing this coupling to one, they obtain a bound of $1350\;{\rm{GeV}}$ on the LQ mass. This corresponds to a parton-level cross-section of $\sim 16\;$fb for $p_T^j>250\;$GeV, which we use as an upper limit on the monojet cross-section to set the limits on the vector LQ mass and couplings. For the $b \bar b \to \tau \tau$ process, we impose the bound obtained in~\cite{Faroughy:2016osc} and rescale it by the $\sqrt{\mathcal{L}}$ factor to obtain the estimated projected sensitivity. The current and projected constraints arising from the off-shell analyses are shown together with those from LQ pair-production searches in Fig.~\ref{fig:LHC_U1}. We observe that monojet and $\tau\tau$ searches nicely complement direct searches at small and large $g^d_{b\tau}$, respectively.
Impressively, the off-shell search for $\tau N_R$, which excludes the region to the {\emph{right}} of the contours, will completely close the parameter space already with 300 fb$^{-1}$ of integrated luminosity, thus making this scenario falsifiable in the near future. \subsection{Scalar LQ $\mathbf{S_1}$} \label{sec:LHC_S1} \subsubsection*{Pair-production} As in the vector case, the interactions of the scalar LQ in Eq.~\eqref{eq:lag_S1} can be constrained in several ways. The on-shell production of a pair of scalar LQs is the dominant search channel at the LHC, and depends only on the LQ mass and branching ratios.\footnote{To compute the LQ pair-production rates we have used the next-to-leading-order QCD cross sections for squark pair production from the LHC Higgs Cross Section Working Group~\url{https://twiki.cern.ch/twiki/bin/view/LHCPhysics/SUSYCrossSections}.} Since in Sec.~\ref{sec:models} we showed that the couplings $\lambda^q_{s\mu}$ and $\lambda^q_{b\mu}$ of $S_1$ that are needed to fit $R(K^{(*)})$ might be incompatible (depending on the SM prediction considered) with the constraints arising from $B_s - \bar B_s$ mixing, we set them to zero in the forthcoming discussion. For LQ pair-production searches the phenomenology of the scalar LQ is thus determined by the following ratio \begin{equation} \frac{\Gamma(S_1 \to \bar b \bar N_R)}{\Gamma(S_1 \to \bar c \bar \tau)}\sim\frac{|\lambda^d_{bN}|^2}{|\lambda^u_{c\tau}|^2}. \end{equation} The CMS analysis~\cite{Sirunyan:2018kzh} searches for LQs decaying into the $b\bar b \nu \bar \nu$ final state. This analysis can be directly applied to the case of the scalar LQ, given that the only difference from the decay mode targeted by the experimental analysis is the nature of the final-state neutrino, which, however, does not strongly affect the kinematics of the event. For the $2c2\tau$ final state no direct searches exist.
The CMS analysis in~\cite{Sirunyan:2017yrk}, recently updated in \cite{CMS-PAS-EXO-17-016}, targets the $b\bar b \tau^+\tau^-$ decay mode and in principle cannot be applied to our scenario. We however observe that, for 100\% branching ratios, the cross section in the analysis signal region ($\sigma_{\rm SR}$) for the ${\rm LQ}\to c\tau$ or $b\tau$ cases is given by \begin{equation} \begin{split} &\sigma_{\rm SR}^{{\rm LQ}\to c \tau} = \sigma_{\rm Th.}^{{\rm LQ}} \times [{\cal A}\times \epsilon]_{LQ\to c \tau} \times ( 2 \epsilon_{c}(1-\epsilon_{c})+\epsilon_{c}^2) \\ &\sigma_{\rm SR}^{{\rm LQ}\to b \tau} = \sigma_{\rm Th.}^{{\rm LQ}} \times [{\cal A}\times \epsilon]_{{\rm LQ}\to b \tau} \times ( 2 \epsilon_{b}(1-\epsilon_{b})+\epsilon_{b}^2) \\ \end{split} \end{equation} where $\epsilon_c$ is the probability to mis-identify a $c$-jet as a $b$-jet, $\epsilon_b$ is the $b$-jet tagging efficiency, $[{\cal A}\times \epsilon]_i$ is the acceptance for the considered final state and $\sigma_{\rm Th.}^{{\rm{LQ}}}$ is the LQs pair production cross section. Since the kinematics of the event is not expected to change if a final state quark is a $b$-jet or a $c$-jet, the ratio of the number of events in the signal region for the case of the $b\tau$ and $c\tau$ final state is simply given by~\footnote{The analysis requires only one $b$-tag jet, while no flavour requirement is imposed on the second jet.} \begin{equation} \label{eq:btagrescaling} \frac{\sigma_{\rm SR}^{{\rm{LQ}}\to c \tau}}{\sigma_{\rm SR}^{{\rm{LQ}}\to b \tau}} = \frac{2 \epsilon_{c}(1-\epsilon_{c})+\epsilon_{c}^2}{2 \epsilon_{b}(1-\epsilon_{b})+\epsilon_{b}^2}, \end{equation} {\emph{i.e.}} the cross section is rescaled by a factor only dictated by the jet tagging efficiencies. In particular the upper limit on the cross section has to be divided by the factor in Eq.~\eqref{eq:btagrescaling} which is smaller than 1. 
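As a numerical illustration of Eq.~\eqref{eq:btagrescaling}, with the working-point efficiencies adopted below ($\epsilon_b \approx 70\%$, $\epsilon_c \approx 20\%$) the rescaling factor evaluates to (a sketch):

```python
# Numerical value of the rescaling factor in Eq. (btagrescaling), for the
# working-point efficiencies adopted in the text: eps_b ~ 0.7, eps_c ~ 0.2.
def at_least_one_tag(eps):
    """Probability that at least one of the two jets is (mis)tagged."""
    return 2 * eps * (1 - eps) + eps**2

factor = at_least_one_tag(0.20) / at_least_one_tag(0.70)
print(f"rescaling factor ~ {factor:.2f}")  # ~ 0.40: the c-tau limit is ~2.5x weaker
```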
For concreteness we use the 70\% $b$-tag efficiency working point of~\cite{Sirunyan:2017yrk}, from which we obtain $\epsilon_{c}\sim 20\%$~\cite{Sirunyan:2017ezt}. The bounds arising from LQ pair-production searches are shown as green and orange shaded areas in Fig.~\ref{fig:LHC_S1} (left) in the $m_{S}-\lambda^d_{bN}$ plane for the $2b2N_R$ and $2c2\tau$ final states respectively, where $\lambda^u_{c\tau}$ has been fixed to match the central value of $R({D^{(*)}})$, see Eq.~\eqref{eq:RDfit}. We again show the projections for a higher LHC integrated luminosity, namely 300~fb$^{-1}$. Altogether, we see that current direct searches are able to constrain scalar LQs with a mass of $\sim 1\;$TeV when the dominant coupling is the one to $b N$, while a weaker constraint of $\sim 600\;$GeV can be set if the dominant coupling is the one to $c\tau$; these limits become $\sim 1.3\;$TeV and 1 TeV respectively with 300~fb$^{-1}$. \begin{figure}[t!] \begin{center} \includegraphics[width=0.48\textwidth]{figures/LHC_S1_All.pdf} \hfill \includegraphics[width=0.485\textwidth]{figures/LHC_S1.pdf} \caption{\small (Left) Limits arising from direct and indirect LHC searches in the $m_{S}-\lambda^d_{bN}$ plane, with $\lambda^u_{c\tau}$ fixed to fit the central value of $R_{D^{(*)}}$. Current limits are shown as shaded areas, while projections for $300\;{\rm{fb}}^{-1}$ of integrated luminosity as dashed lines. The arrow indicates the region excluded by the $\tau\nu$ search. The region where $\lambda^u_{c\tau}$ becomes non perturbative is also illustrated.\\ (Right) Limits from $\tau \nu$ searches in the $m_{S}-|\lambda^u_{c\tau}\lambda^d_{bN}|$ plane.
Also shown are the 68\% and 95\% CL intervals around the central values of $R_{D^{(*)}}$, Eq.~\eqref{eq:RDst}.} \label{fig:LHC_S1} \end{center} \end{figure} \subsubsection*{Off-shell exchange} Similarly to the vector LQ, the scalar $S_1$ can also be exchanged in the t-channel in $c \bar b \to \tau \bar N_R$, $b \bar b \to N_R \bar N_R$, and $c\bar c \to \tau\tau$ processes. Also in this case, the $c \bar b \to \tau \bar N_R$ process directly tests the same couplings involved in the explanation of the $R(D^{(*)})$ anomalies. The experimental limits, and future projections, are obtained from the ATLAS analysis \cite{Aaboud:2018vgh} in the same way as described for the vector LQ case. The derived limits in the $m_{S}-|\lambda^d_{bN}\lambda^u_{c\tau}|$ plane, superimposed with the 68\% and 95\% CL intervals around the central values for $R(D^*)$, are shown in the right panel of Fig.~\ref{fig:LHC_S1}. Also in the scalar LQ case this search will put an \emph{upper limit} on the LQ mass $m_S$ once the fit of the charged-current flavour anomalies is imposed, and the high-luminosity phase of the LHC, with 3000 fb$^{-1}$ of integrated luminosity, will cover the whole relevant parameter space. The $b \bar b \to N_R \bar N_R$ final state can be constrained by monojet searches in a way analogous to that used for the vector LQ. The excluded parameter space is shown as a purple region in the left panel of Fig.~\ref{fig:LHC_S1}. The limits on the $c\bar c \to \tau\tau$ process can be obtained from the ones computed in~\cite{Faroughy:2016osc} for the $b \bar b \to \tau \tau$ case (shown in the bottom panel of Fig.~6 of~\cite{Faroughy:2016osc}) by taking into account the different parton luminosities of the two initial-state quarks.
In particular, we approximate the ratio $R_{cb}(\hat{s}) = \mathcal{L}_{cc}(\hat{s})/\mathcal{L}_{bb}(\hat{s}) \approx 2.5$ as constant and rescale the limit on the $y^{b\tau}_{L}$ coupling in \cite{Faroughy:2016osc}, neglecting the interference of the signal with the SM background: $\text{limit}(|\lambda^u_{c\tau}|) \approx \text{limit}(|y^{b\tau}_{L}|) R_{cb}^{1/4}$. The resulting excluded region is shown as a red region in the left panel of Fig.~\ref{fig:LHC_S1}. Altogether, the current and projected constraints arising from these three analyses are shown together with the one arising from LQ pair-production searches in the left panel of Fig.~\ref{fig:LHC_S1}. We observe that $\tau\tau$ searches nicely complement direct searches at small $\lambda^d_{bN}$, while also in this case searches for $\tau N_R$, which again exclude the region to the right of the contours, will almost completely close the parameter space already with 300 fb$^{-1}$ of integrated luminosity. \section{Neutrino masses and decays} \label{sec:neutrino} The phenomenology of both the SM-like and the sterile neutrino crucially depends on whether only the $R(D^{(*)})$ anomalies are addressed or whether the neutral-current ones are addressed as well. This is particularly relevant for the vector LQ, since this state allows one to explain both without any tension with flavour, precision, or collider constraints. For this reason, in the following we discuss the two scenarios separately, stressing the main consequences of each. \subsection{Addressing only $R(D^{(*)})$} The operator responsible for reproducing the $R(D^{(*)})$ anomalies, Eq.~\eqref{eq:bctnuBSM}, generates a Dirac mass term $\mathcal{L} \sim m^D \bar{\nu}^\tau_L N_R + h.c.$ at two loops, where one can estimate \cite{Greljo:2018ogz,Asadi:2018wea} \begin{equation} m^D_{R({D^{(*)}})} \sim \frac{g^2}{2(16 \pi^2)^2} \frac{c_{R_D} m_b m_c m_\tau V_{cb}}{\Lambda^2} \sim 10^{-3}~\text{eV}~.
\end{equation} Such a small contribution to neutrino masses does not affect their phenomenology in a relevant way and can therefore be largely neglected. In this scenario the leading decay mode of the heavy neutrino is $N_R \to \nu_\tau \gamma$, which also arises at two loops from the same operator, with a lifetime (see Ref.~\cite{Greljo:2018ogz} and references therein) \begin{equation} \label{eq:nugamma} \tau_{N_R \to \nu_\tau \gamma} \sim 10^{25} \left( \frac{\text{keV}}{m_{N_R}} \right)^{3} s ~, \end{equation} which is much larger than the age of the Universe. \subsection{Addressing also $R(K^{(*)})$} \begin{figure}[t] \begin{center} \includegraphics[width=0.48\textwidth]{figures/nu_mass_U1} \caption{\small Diagram responsible for generating a $\nu-N_R$ Dirac mass term at one loop in the vector LQ model in case both charged- and neutral-current anomalies are addressed.} \label{fig:nuMassU1} \end{center} \end{figure} If one also wants to address the neutral-current anomalies $R(K^{(*)})$, the situation becomes more complicated. In the following we focus on the model with the vector LQ, since it is the one which allows one to do so without introducing tension with other observables. The chirality-flipping operators $O_{lNuq}$ induce a Dirac mass term between $N_R$ and $\nu_\mu$ at one loop, see Fig.~\ref{fig:nuMassU1}, with less suppression from light fermion masses: \begin{equation} m^D_{(R(D^{(*)}) + R(K^{(*)}))_{U}} \sim \frac{1}{16 \pi^2} g^u_{cN} g^q_{b\mu} m_c V_{cb} \sim 10 ~\text{keV}~, \end{equation} where we used the constraint in Eq.~\eqref{eq:BcU1limit}. Such large neutrino masses are of course incompatible with experiments. One possible solution is to finely tune these radiative contributions against the corresponding bare Dirac neutrino mass parameter, in order to obtain small masses. A more natural and elegant solution can instead be found by applying the inverse see-saw mechanism \cite{Mohapatra:1986aw,Mohapatra:1986bd} (see also \cite{Dias:2012xp}).
This mechanism was also employed recently in the context of the $B$-meson anomalies in Ref.~\cite{Greljo:2018tuh}. In its simplest realisation, it consists in adding another sterile state\footnote{In this subsection we use the tilde to denote gauge eigenstates, and reserve the notation without the tilde for the mass eigenstates.} $\tilde S_L$ with a small Majorana mass $\mu_S$ and a Dirac mass $M_R$ with $\tilde N_R$. Defining $n = (\tilde \nu_L, \tilde N_R^c, \tilde S_L)^t$, the mass Lagrangian ${\cal L}_n=-1/2\,\bar n\, M_n\, n^c$ can be written in terms of the following mass matrix: \begin{equation} M_n = \left( \begin{array}{ccc} 0 & m^D & 0 \\ m^D & 0 & M_R \\ 0 & M_R & \mu_S \end{array}\right)~. \end{equation} Diagonalising this matrix in the limit $\mu_S \ll m^D < M_R$, the spectrum presents a light SM-like Majorana neutrino with mass \begin{equation} m_{\nu_L}^{\rm light} \sim \left( \frac{m^D}{\sqrt{(m^D)^2 + M_R^2}} \right)^2 \mu_S \label{eq:numassISS} \end{equation} and two heavy pseudo-Dirac neutrinos $N_{R_{1,2}}$ with masses $m_{N_R} \sim \sqrt{(m^D)^2 + M_R^2}$ and a splitting of order $\mu_S$. A small enough $\mu_S$ can therefore control the smallness of the light neutrino masses without the need for any fine-tuning. The mixing angle between the light neutrinos and the sterile one is given by \begin{equation} \theta_{\nu_\mu N} \sim \frac{m^D}{M_R} \lesssim 10^{-2}~, \end{equation} where we used the (conservative) experimental bound of Ref.~\cite{Drewes:2015iva} for sterile neutrinos with masses $m_{N_R} \sim 10 \textrm{ MeV}$. Indeed, this limit puts a lower bound on the mass of the sterile neutrinos, $m_{N_R} \gtrsim 10^2 m^D \sim 1 \textrm{ MeV}$, which is relevant for the cosmological analysis of the model.
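The approximate spectrum of Eq.~\eqref{eq:numassISS} can be verified by diagonalising the mass matrix numerically (a sketch; the sample values, in arbitrary units, are chosen only to satisfy $\mu_S \ll m^D < M_R$):

```python
import numpy as np

# Numerical illustration of the inverse see-saw spectrum, Eq. (numassISS),
# for sample values (arbitrary units) with mu_S << m_D < M_R (assumed inputs).
m_D, M_R, mu_S = 0.1, 1.0, 1e-6

M_n = np.array([[0.0, m_D, 0.0],
                [m_D, 0.0, M_R],
                [0.0, M_R, mu_S]])
eigs = np.linalg.eigvalsh(M_n)

light = min(abs(eigs))                               # SM-like state
heavy = max(abs(eigs))                               # pseudo-Dirac pair
light_approx = (m_D**2 / (m_D**2 + M_R**2)) * mu_S   # analytic estimate
heavy_approx = np.sqrt(m_D**2 + M_R**2)
print(light, light_approx)
print(heavy, heavy_approx)
```

The light eigenvalue is suppressed by $\mu_S$, while the heavy pair sits at $\sim\sqrt{(m^D)^2+M_R^2}$, as stated above.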
In this case, the main decay modes of the sterile neutrino are $N_R \to 3 \nu$ and $N_R \to \nu_\mu e^+e^-$, via the mixing with $\nu_\mu$ and off-shell $Z$-boson exchange \cite{Greljo:2018ogz}: \begin{equation}\begin{split} \tau_{N_R \to 3 \nu} &\approx \left( \frac{G_f^2}{144 \pi^3} \left( 3 |g^Z_{\nu_L}|^2 + |g^Z_{e_L}|^2 + |g^Z_{e_R}|^2 \right) \theta_{\nu_\mu N}^2 m_{N_R}^5 \right)^{-1} \\ &\sim 2.5 \times 10^{5} \left( \frac{10 \textrm{ MeV}}{m_{N_R}} \right)^{5} \left( \frac{10^{-6}}{\theta^{\,2}_{\nu_\mu N}} \right)~ s ~. \label{eq:NR2nudecay} \end{split}\end{equation} In this scenario $N_R$ decouples from the SM thermal bath at a temperature of $\sim 300$ MeV (see next section), then becomes non-relativistic and behaves like matter, comes to dominate the energy density after big bang nucleosynthesis (BBN), and decays into neutrinos and electrons before the epoch of matter-radiation equality. This would generate a large contribution to the SM neutrino and electron energy densities before the CMB epoch, which is not cosmologically viable. To avoid this problem $N_R$ should decay before BBN, which requires $\tau_{N_R} < 1~$s. Looking at the leading decay mode, Eq.~\eqref{eq:NR2nudecay}, a simple way to achieve this is to increase both $m_{N_R} \approx M_R$ and $m^D$, such that $m_{N_R} \gtrsim 130 \textrm{ MeV}$ and $\theta_{\nu_\mu N} \sim 10^{-3}$ (satisfying the limits from Ref.~\cite{Drewes:2015iva}). In this case a suitably short lifetime can be obtained. Such a sterile-neutrino mass is close to the bound where it could potentially affect the kinematics of $B \to D^{(*)} \tau \nu$; however, a precise analysis of this scenario can only be performed with all the details of the experimental analysis available. Interestingly, there are almost no constraints on $\theta_{\nu_\mu N}$ in the window of $\sim 30-40$ MeV (roughly the mass difference between the charged pion and the muon; see for example \cite{Drewes:2018gkc}).
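The numerical estimate in Eq.~\eqref{eq:NR2nudecay} can be reproduced directly (a sketch; $s_W^2 \approx 0.231$ and the standard values of $G_F$ and $\hbar$ are assumed inputs):

```python
import math

# Cross-check of the lifetime estimate in Eq. (NR2nudecay) for m_N = 10 MeV
# and theta^2 = 1e-6 (s_W^2 ~ 0.231; G_F and hbar are assumed standard inputs).
G_F = 1.166e-5        # GeV^-2
hbar = 6.582e-25      # GeV s
sw2 = 0.231
m_N = 0.01            # GeV
theta2 = 1e-6

# 3|g_nu|^2 + |g_eL|^2 + |g_eR|^2 with g_nu = 1/2, g_eL = -1/2 + s_W^2, g_eR = s_W^2
couplings = 3 * 0.5**2 + (-0.5 + sw2)**2 + sw2**2
rate = G_F**2 / (144 * math.pi**3) * couplings * theta2 * m_N**5   # GeV
tau = hbar / rate                                                  # seconds
print(f"tau ~ {tau:.1e} s")   # ~ 2.5e5 s, as quoted in the text
```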
This window provides an opportunity for a short enough lifetime of $N_R$ in this model. Future measurements by DUNE \cite{Ballett:2018fah} and NA62~\cite{Drewes:2018gkc} will be able to test the scenarios with $m_{N_R}\gtrsim 130$ MeV and with $m_{N_R}\in[30,40]$ MeV. Another possibility is to add a mixing of $N_R$ with the $\tau$ neutrino, by adding a suitable Dirac mass term. In this case the limits on $\theta_{\nu_\tau N}$ \cite{Drewes:2015iva} are much weaker, allowing $\theta_{\nu_\tau N}^{\,2} \lesssim 10^{-3}$ for $m_{N_R} \approx 100 \textrm{ MeV}$ and even larger values for lighter masses. This makes it possible to reduce the $N_R$ lifetime even further, while keeping the $N_R$ mass below the 100 MeV threshold. \section{Cosmology of $N_R$} \label{sec:cosmo} In this section we discuss cosmological bounds and opportunities in the presence of right-handed neutrinos. As we saw in the previous section, if we only want to address the $R(D^{(*)})$ anomaly the right-handed neutrino can be as light as $10^{-3}$ eV and is cosmologically stable. Instead, if we also address the $R(K^{(*)})$ anomaly then it is much heavier and has a shorter lifetime. In particular, we showed that it must decay before BBN in order to be a viable option. In this section we focus on the case where only $R(D^{(*)})$ is addressed and $N_R$ is cosmologically stable. \subsection{Relic density} Addressing only $R(D^{(*)})$, $N_R$ can be light and has a lifetime longer than the age of the universe. It therefore contributes to the DM relic density. Fitting the $R(D^{(*)})$ anomaly fixes the strength of the interaction of $N_R$ with the right-handed $b,c,\tau$. This in turn implies that $N_R$ was in thermal equilibrium in the early universe, and determines when it decoupled from the thermal bath. Solving the Boltzmann equation (see Appendix \ref{sec:Boltzmann}) we find that $N_R$ freezes out at a temperature of $\sim 300$ MeV, slightly above the QCD phase transition.
Since $m_{N_R} \lesssim 100$ MeV in order to explain $R(D^{(*)})$, it is relativistic at freeze-out. Its relic abundance today, assuming a lifetime longer than the age of the universe, is then~\cite{Gershtein:1966gg, Cowsik:1972gh} \begin{align} \Omega_N h^2 & = \frac{s_0 m_{N_R } }{\rho_c}\left[ \left (\frac{n}{s}\right)_{\rm today} = \left (\frac{n}{s}\right)_{\rm decoupling} \right] \nonumber \\ & =\frac{s_0 m_{N_R}}{\rho_c}\left[\frac{\frac{3}{4\pi^2}\times 2 \times \zeta(3)T_{\rm dec}^3}{\frac{2 \pi^2}{45}T_{\rm dec}^3 g_{*S}(T_{\rm dec})}\right]= 0.12 \ \frac{50}{g_{*S}(T_{\rm dec})} \ \frac{m_{N_R}}{50 \ \hbox{eV}} \, . \label{relicNR} \end{align} Here $s_0 = 2891$ cm$^{-3}$ is the present entropy density and $\rho_c = 1.05 \times 10^4 \ h^2$ eV cm$^{-3}$ the critical energy density~\cite{Olive:2016xmw}. We find a yield $\left(\frac{n}{s}\right)_{\rm today}$ which ranges between $8.3 \times 10^{-3}$ and $1.3 \times 10^{-2}$, and correspondingly \footnote{ The final yield depends on whether the UV completion of the model allows, on top of $b c \leftrightarrow N_R\tau$, also one of the $N_R N_R\leftrightarrow bb,\tau\tau, cc$ scattering processes. In the latter case the freeze-out of $N_R$ is slightly delayed and the yield turns out to be slightly higher, see Appendix \ref{sec:Boltzmann}. The value of $g_{*S}(T)$ has a strong dependence on $T$ when we are close to the QCD phase transition, as in this case. We use $g_{*S}(T_{\rm dec}) = 50$ in the estimates that follow. The reader should keep in mind that, while in the right ballpark, this number has some degree of uncertainty. } $g_{*S}(T_{\rm dec})$ in the range between 35 and 60. For the sake of the estimates which follow, we take $g_{*S}(T_{\rm dec}) = 50$ as our reference value. We see that $m_{N_R} \approx 50 \;$eV can account for the required amount of DM in the universe. However this is now a hot relic, and as such it is not consistent with structure formation. 
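The numbers in Eq.~\eqref{relicNR} are easy to reproduce. A short sketch (assuming the quoted values of $s_0$, $\rho_c$ and $g_{*S}$):

```python
import math

zeta3 = 1.2020569031595943  # Riemann zeta(3)

# Yield n/s of a relativistic fermion with 2 internal degrees of freedom,
# frozen in at decoupling, as in Eq. (relicNR):
#   n/T^3 = (3/4) * 2 zeta(3) / pi^2,  s/T^3 = (2 pi^2 / 45) g_*S.
def yield_ns(g_star_S):
    n_over_T3 = (3.0 / 4.0) * 2 * zeta3 / math.pi**2
    s_over_T3 = (2 * math.pi**2 / 45.0) * g_star_S
    return n_over_T3 / s_over_T3

s0 = 2891.0     # present entropy density, cm^-3
rho_c = 1.05e4  # critical density, h^2 eV cm^-3

def omega_h2(m_eV, g_star_S):
    return s0 * m_eV * yield_ns(g_star_S) / rho_c

Y = yield_ns(50.0)          # reference value g_*S(T_dec) = 50
ow = omega_h2(50.0, 50.0)   # m_NR = 50 eV
```

This reproduces the quoted yield $\sim 8.3\times 10^{-3}$ at $g_{*S} = 50$ (rising toward $1.2\times 10^{-2}$ at $g_{*S}=35$) and $\Omega_N h^2 \approx 0.12$ for $m_{N_R} = 50$ eV.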
To make it comply with these bounds, we can simply lower its mass. For $m_{N_R} \lesssim$ eV, the right handed neutrino makes up less than 2\% of the DM abundance and it is safely within the structure formation bound~\cite{Boyarsky:2008xj}. \subsection{$\Delta N_{\rm eff}$} Such a light $N_R$ contributes to the number of effective relativistic species, $N_{\rm eff}$. The quantity $\Delta N_{\rm eff}$ is defined as the ratio of the energy density in dark radiation and that in one species of SM neutrino at the time of BBN, \bel{ndef} \Delta N_{\rm eff} = { 3 \rho_{dr} ( t_{BBN}) \over \rho_{\nu} (t_{BBN}) } = \left(\frac{T_{N,\rm BBN}}{T_{\nu, \rm BBN}}\right)^4 . \end{equation} The ratio of the temperatures can be found using the total entropy conservation in the visible sector, just after the right-handed neutrino decoupled from the thermal bath~\cite{Steigman:2013yua}: \begin{eqnarray} \frac{T_{N,\rm BBN}}{T_{\nu, \rm BBN}} = \left( \frac{g_{*S}(T_{ \rm BBN})}{g_{*S}(T_{\rm dec})} \right)^{1/3} \, . \end{eqnarray} Thus, from Eq.~\eqref{ndef}, we get \begin{eqnarray} \Delta N_{\rm eff} = \left(\frac{10.73}{ g_{*S}(T_{\rm dec})}\right)^{4/3} \sim 0.13 \left(\frac{50}{g_{*S}(T_{\rm dec})}\right)^{4/3}~, \end{eqnarray} which is within the experimental constraints \cite{Ade:2015xua}. We then conclude that a minimal model with a single right-handed neutrino $N_R$ lighter than an eV can explain the $R(D^{(*)})$ anomalies and evade all the relevant cosmological constraints. However $N_R$ can only be a small fraction of the DM in this case. \subsection{The dark matter option and entropy injection} We have shown that in the minimal scenario $N_R$ is a hot relic and can only constitute a small fraction of the observed DM energy density. It is interesting to explore the possibility of raising the $N_R$ mass to the keV range to make it a warm dark matter candidate. From Eq.~\eqref{relicNR} we see that $m_{N_R} \sim$ keV results in overclosure of the universe. 
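Raising $m_{N_R}$ to the keV range overshoots the observed abundance by a large factor; a one-line estimate of the required dilution, using the linear mass scaling of Eq.~\eqref{relicNR}:

```python
# Relic density scales linearly with the mass, Eq. (relicNR):
#   Omega h^2 = 0.12 * (50 / g_*S) * (m / 50 eV).
def omega_h2(m_eV, g_star_S=50.0):
    return 0.12 * (50.0 / g_star_S) * (m_eV / 50.0)

over = omega_h2(1000.0)     # m_NR = 1 keV: overcloses the universe
D_needed = over / 0.12      # dilution needed to bring it back to Omega h^2 = 0.12
```

For $m_{N_R} = 1$ keV one finds $\Omega_N h^2 = 2.4$, i.e. a dilution factor of order $20$ is needed.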
We can then consider adding to the model a second heavier right-handed neutrino, $\chi_R$, whose decay produces enough entropy to dilute the abundance of $N_R$ \cite{Scherrer:1984fd,Kolb:1990vq}\footnote{For a recent application of the entropy dilution in the models with right-handed neutrinos see \cite{Nemevsek:2012cd,King:2012wg,Bezrukov:2009th}.}. The dilution factor, defined as \begin{equation} D\equiv\frac{S_{\rm{after}~ \chi~\rm{ decay} }}{S_{\rm{before}~ \chi ~ \rm{decay}}}~, \end{equation} modifies the relic density and $\Delta N_{\rm eff}$ as \begin{eqnarray} \label{eq:reldensity} \Omega_N h^2= \frac{1}{D} 0.12 \frac{50}{g_{*S}(T_{\rm dec})} \ \frac{m_{N_R}}{50 \ \hbox{eV}} \, , \nonumber\\ \Delta N_{\rm eff} =\frac{1}{D^{4/3}} \left(\frac{10.73}{ g_{*S}(T_{\rm dec})}\right)^{4/3}. \end{eqnarray} Note that we need $D$ of order 20 if we want to push $m_{N_R}$ to the keV range. In what follows we study if we can achieve such a dilution in a rather minimal setup. We assume that the heavier right-handed neutrino $\chi_R$, analogously to $N_R$, is subject to the interaction \begin{equation} \label{eff-lag} {\cal L}_{\chi_R} = \frac{\lambda}{\Lambda_\chi^2}(\bar c_R \gamma_\mu b_R ) (\bar \tau_R \gamma^\mu \chi_R)~. \end{equation} We want $\chi_R$ to decouple from the thermal bath at high temperature (but still below $\Lambda_\chi$, so the use of the effective interaction is justified), to come to dominate the energy density of the universe, then to decay and reheat the universe between 300 MeV (the decoupling temperature of $N_R$) and BBN. We discuss each step in turn. $\chi_R$ decouples from the thermal bath when $\Gamma = n \langle \sigma v \rangle \simeq H$, with $\sigma = \frac{\lambda^2 s}{16 \pi \Lambda_\chi^4}$ (here $s$ is the centre of mass energy squared). 
Assuming $\chi_R$ is relativistic at decoupling, we find \begin{equation} \label{eq:couplingtemp} T_{\chi} =3\times 10^{-2} g_*^{1/6} \lambda^{-2/3} \textrm{ GeV} \, , \end{equation} and a yield \begin{equation} \label{chiyield} Y_\chi = \frac{n_\chi}{s} = \frac{45}{\pi^4 g_{*S}(T_{\chi, {\rm decoupling}})} \, . \end{equation} Then, as the universe expands and the temperature decreases, $\chi_R$ becomes non relativistic, and eventually dominates the energy density. It decays when $\Gamma_\chi \simeq H_\chi$, with the Hubble parameter \begin{equation} H^2_\chi = \frac{\rho_\chi}{3 M_p^2} = \frac{M_\chi s(T_{\rm{before}~ \chi~ \rm{decay}}) Y_\chi}{3 M_p^2} \, , \end{equation} and the decay rate into $b,c,\tau$ \begin{equation} \label{chidecay} \Gamma_{\chi} \simeq\frac{1}{1536\,\pi^3} \, \frac{\lambda^2}{\textrm{ TeV}^4} \, M_{\chi}^5 \, . \end{equation} We find the reheat temperature, $T_{\rm{after}~ \chi~ \rm{decay}}$, assuming that the energy density of $\chi_R$ is instantaneously converted into radiation at decay, \begin{equation} \label{Treheat} \frac{\pi^2}{30} g_* T^4_{\rm{after}~ \chi~ \rm{decay}}=\rho_\chi\simeq 3 \Gamma_\chi^2 M_p^2 \, . \end{equation} This temperature must be above BBN, but below the $N_R$ decoupling temperature: \begin{equation} \label{eq:ineqw} 1 \ {\rm MeV} <T_{{\rm after}~ \chi~ {\rm decay}}< 300 \ {\rm MeV}~. \end{equation} The dilution factor can be expressed as \cite{Scherrer:1984fd,Kolb:1990vq} \begin{eqnarray} \label{entropybound} D= \frac{g_*(T_{\rm{after}~ \chi~\rm{ decay}}) T^3_{\rm{after}~ \chi~ \rm{decay} }}{g_*(T_{\rm{before}~ \chi~ \rm{decay}})T^3_{\rm{before}~ \chi ~\rm{ decay}}}\simeq 1.8 \langle g_*^{1/3}\rangle^{3/4} \frac{M_{\chi} Y_\chi}{\sqrt{M_p \Gamma_\chi}} = 1.8\frac{M_{\chi} Y_\chi}{ T_{\rm{after}~ \chi~\rm{ decay}}} \, . \end{eqnarray} $D$ is shown in Fig.~\ref{fig:s} in the $M_\chi$ vs. $\lambda$ plane as black contours, where we see that the entropy injection factor can reach at most $\sim 100$. 
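The chain of estimates above can be followed numerically. A rough sketch (the reduced Planck mass and the $g_*$ values are assumptions with the usual SM temperature dependence; the sample point $M_\chi = 10$ GeV, $\lambda \sim 10^{-6}$ is chosen by hand to land in the allowed reheat window):

```python
import math

M_p = 2.435e18   # reduced Planck mass in GeV (assumed standard value)
Lam = 1.0e3      # Lambda_chi = 1 TeV, in GeV

# Illustrative g_* values: at chi_R decoupling (hot) and at reheating (cold).
g_hot, g_cold = 106.75, 10.73

def chain(M_chi, lam):
    """Follow the chain of estimates for a given chi_R mass (GeV) and coupling."""
    T_chi = 3e-2 * g_hot**(1.0 / 6.0) * lam**(-2.0 / 3.0)  # decoupling temperature
    Y_chi = 45.0 / (math.pi**4 * g_hot)                    # relativistic yield n/s
    Gamma = lam**2 * M_chi**5 / (1536.0 * math.pi**3 * Lam**4)
    # Instantaneous reheating: (pi^2/30) g_* T^4 = 3 Gamma^2 M_p^2.
    T_after = (90.0 / (math.pi**2 * g_cold))**0.25 * math.sqrt(Gamma * M_p)
    D = 1.8 * M_chi * Y_chi / T_after
    return T_chi, T_after, D

T_chi, T_after, D = chain(10.0, 9.0e-7)
# Cross-check against the closed-form scaling D ~ 0.02 (T_chi/T_after)^(3/5).
D_closed = 0.02 * (T_chi / T_after)**0.6 * (Lam / 1.0e3)**0.8
```

At this sample point the reheat temperature lands at $\sim 2$ MeV, safely inside the window of Eq.~\eqref{eq:ineqw}, with $T_\chi \sim 700$ GeV and $D \sim 40$; pushing $T_\chi$ to the TeV boundary saturates $D \sim 100$, consistent with the contours of Fig.~\ref{fig:s}.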
It is instructive to trade the parameters $M_\chi$ and $\lambda$ for $T_{\chi}$ and $T_{\rm{after}~ \chi~ \rm{decay}}$, using Eqs.~\eqref{Treheat}, \eqref{eq:couplingtemp}, \eqref{chidecay}. Then the expression for $D$ becomes \begin{eqnarray} D= \frac{1.8 M_{\chi} Y_\chi}{T_{\rm{after}~ \chi ~ \rm{decay} }}\simeq 0.02 \left(\frac{T_\chi}{T_{\rm{after} ~\chi~ \rm{decay}}}\right)^{\frac{3}{5}}\left(\frac{\Lambda_\chi}{\hbox{TeV}}\right)^{4/5}, \end{eqnarray} which indicates that the maximal value can be achieved for the maximal decoupling temperature $T_\chi$ and the minimal reheat temperature. As in our scenario we restrict to a decoupling temperature below the mediator mass, $\sim 1\;{\rm TeV}$, the maximal entropy dilution that can be achieved is $D_{max}\sim 100$. If we consider a higher decoupling temperature, the dilution factor does not improve. The reason is that above the mediator mass the cross section for the $bc \leftrightarrow \tau\chi$ scattering process scales as $1/s$, rather than $s/\Lambda_\chi^4$. As a result the reaction rate $n \langle \sigma v \rangle$ is linear in $T$, implying that the process is out of equilibrium at very high temperatures, and freezes in at lower temperatures. When the temperature drops below $\Lambda_\chi$ we are back to the scenario we have studied above. \begin{figure}[t] \begin{center} \includegraphics[scale=0.7]{figures/sinjected} \end{center} \caption{\small Isocontours of the entropy injection $D$ (black lines). The lower red area is excluded because the reheat temperature $T_{\text{after } \chi \text{ decay}}$ is below $1$ MeV, and the upper red area because the reheat temperature is above the right-handed neutrino decoupling temperature. Red contours indicate $T_{\text{after } \chi \text{ decay}}$ in MeV. $T_\chi$, shown on the right axis, is the decoupling temperature of $\chi_R$ from the interaction in Eq.~\eqref{eff-lag}. 
\label{fig:s}} \end{figure} We are now in the position to assess whether such a dilution factor leads to a successful model. We see from Eq.~\eqref{eq:reldensity} that we can raise $m_{N_R}$ up to 5 keV and have $N_R$ contribute to the totality of the dark matter energy density. However, for this mass the decay $N_R \to \nu \gamma$ is too fast (see Eq.~\eqref{eq:nugamma}) and excluded by X-ray measurements, which put a bound $\tau_{N_R \to \nu \gamma} > 10^{26-27}$ s in that region \cite{Essig:2013goa, Boyarsky:2018tvu}. To avoid this bound, we should push $m_{N_R}$ down to $\sim 1$ keV. This is in some tension with constraints on warm dark matter from the Lyman-$\alpha$ forest (see for example \cite{Boyarsky:2018tvu}), which prefer a sterile neutrino heavier than 3-5 keV. However, due to the large entropy dilution, our $N_R$ is slightly colder and likely to comply with the Lyman-$\alpha$ bound also when $m_{N_R} \sim 1$ keV. Further detailed studies are needed to confirm whether this is the case. \section{Conclusions} \label{sec:concl} The set of deviations from the Standard Model observed in various $B$-meson decays, from different experiments and in various different observables, is one of the most compelling experimental hints for BSM physics at the TeV scale ever obtained. Even more interesting is the possibility that all the observed deviations could be explained in a coherent manner by the same new physics. This has been the focus of a large effort from the theory community in recent years and several attempts have been put forward to achieve this goal. It became clear that this is not an easy task, in particular due to the fact that the large size of the required new physics effect to fit the $R(D^{(*)})$ anomalies generates tensions with either high-$p_T$ searches or other flavour observables.
In this spirit, it has become important to look for other possible solutions to the anomalies with different theoretical assumptions, which might help to evade the constraints. One such possibility is that the BSM operator contributing to $R(D^{(*)})$ does not involve a SM neutrino but a sterile right-handed neutrino $N_R$. If the operator has a suitable right-right vector structure and the sterile neutrino is light enough, the kinematics of the process remain SM-like and the solution is viable. In this paper we study two possible tree-level mediators for such an operator in a simplified model approach: the vector leptoquark $U_1^\mu$ and the scalar leptoquark $S_1$. In the first part of the paper we explore the possibility that these mediators could generate both charged- and neutral-current $B$-physics anomalies. We find that the vector $U_1^\mu$, which contributes to $b\to s \mu \mu$ at the tree level, provides a viable fit with no tension with any other flavour observable. The scalar $S_1$, instead, contributes to the neutral-current process at one loop, thus requiring larger couplings to fit $R(K^{(*)})$. This generates a tension with the bound from $B_s$-$\bar B_s$ mixing which makes the combined solution of both classes of anomalies from this mediator disfavoured. For both models we study the present constraints, and future projections, from direct searches at the LHC, including all relevant on-shell LQ pair-production modes as well as channels where the LQ is exchanged off-shell in the $t$-channel. We find that at present both scenarios are viable, but already with 300 fb$^{-1}$ of luminosity the LHC will test almost all the viable parameter space. In particular, the search in the $\tau \nu$ final state, which directly tests the interactions relevant for the $R(D^{(*)})$ anomalies, puts \emph{upper limits} on the LQ mass and in the future will completely cover the region which fits the anomalies.
In the second part of the paper we study the phenomenology of the sterile neutrino $N_R$. This depends crucially on whether both classes of anomalies are addressed or only the charged-current ones. In the former case a Dirac mass term with the muon neutrino is generated at one loop with a size of tens of keV. In order to keep the SM neutrinos light it is possible to employ the inverse see-saw mechanism, by introducing another sterile neutrino with a small Majorana mass and a large Dirac mass with $N_R$. The outcome of this is that the SM neutrinos are light but the sterile ones are above 10 MeV. The mixing between the muon and sterile neutrino induces a fast decay of $N_R$, rendering it cosmologically unstable. To avoid issues with the thermal history of the Universe it should decay before BBN, which requires its mass to be $\sim 100 \textrm{ MeV}$. If instead only the $R(D^{(*)})$ anomalies are addressed the picture changes completely. In this case a Dirac mass term with the tau neutrino is generated at two loops and it is small enough not to have any impact on neutrino phenomenology. The main decay of $N_R$ in this case is into $\nu_\tau \gamma$ and arises at two loops as well, with a lifetime much longer than the age of the Universe. In order not to overclose the Universe, its mass should be below $\sim 50\;$eV, which makes it a hot relic. The constraints on the allowed amount of hot dark matter impose an upper limit on its contribution to the present dark matter density, which translates into an upper bound for the mass $m_{N_R} \lesssim$ eV. If the sterile neutrino is to constitute the whole dark matter, an entropy injection at late times is necessary in order to dilute its abundance. This can be obtained, for example, by adding another heavy sterile neutrino which decays into SM particles after $N_R$ decouples. In this case we find that $N_R$ could be a warm dark matter candidate with a mass of a few keV.
This option is highly constrained by current cosmological and astrophysical observations. While our model seems to have a small region of viable parameter space, a conclusive statement requires further detailed studies. To conclude, the $U_1^\mu$ model presented in this paper allows one to fit both charged- and neutral-current anomalies with no tension at all with present low- and high-$p_T$ bounds. The sterile neutrino in this case is cosmologically unstable, decaying before BBN. If instead one aims only at solving the $R(D^{(*)})$ anomalies, the neutrino is stable and, if it is light enough, it satisfies all cosmological constraints. With some additions to the model, in particular a mechanism for entropy injection after it decouples, it can also be a candidate for dark matter at the keV scale. \subsection*{Acknowledgements} We thank Marco Nardecchia for discussions and for collaborating in the early stages of the work. We also thank Andrea Romanino and Serguey Petcov for useful discussions.
\section{Introduction} A homogeneous polynomial $F$ is a \defining{direct sum} if there exist non-zero polynomials $F_1$, $F_2$ such that $F = F_1 + F_2$ and $F_1 = F_1(t_1,\dotsc,t_s)$, $F_2 = F_2(t_{s+1},\dotsc,t_n)$ for some linearly independent linear forms $t_1,\dotsc,t_n$. For example, $F=xy$ is a direct sum, as $F = \frac{1}{4}(x+y)^2 - \frac{1}{4}(x-y)^2$. In coordinate-free terms, $F \in S^d V$ is a direct sum if $F = F_1 + F_2$ for nonzero $F_i \in S^d V_i$, $i = 1,2$, such that $V_1 \oplus V_2 = V$. Most polynomials are not direct sums, see Lemma~\ref{lemma: general indecomposable}. Nevertheless it can be difficult to show that a particular polynomial is not a direct sum. For instance, Sepideh Shafiei shared with us the following question: is the generic determinant $\det_n = \det((x_{i,j})_{i,j=1}^n)$, a homogeneous form of degree $n$ in $n^2$ variables, a direct sum? For $n=2$, $\det_2 = x_{1,1}x_{2,2} - x_{1,2}x_{2,1}$ is visibly a direct sum. On the other hand, for $n>2$ it is easy to see the determinant is not decomposable as a direct sum in the original variables, but it is not immediately clear whether it is decomposable after a linear change of coordinates. We answer this question in the negative, see Corollary~\ref{cor_determinant_is_not_dir_sum}. \begin{problem}\label{problem: dir sum} Give necessary or sufficient conditions for a polynomial to be a direct sum. \end{problem} We approach this problem through \textit{apolarity}. Suppose $S = \mathbb{C}[x_1,\dotsc,x_n]$ and $T = \mathbb{C}[\alpha_1,\dotsc,\alpha_n]$. When the number of variables is small we may write $S = \mathbb{C}[x,y]$ and $T = \mathbb{C}[\alpha, \beta]$, or $S = \mathbb{C}[x,y,z]$ and $T = \mathbb{C}[\alpha, \beta, \gamma]$. (For simplicity we assume throughout that our base field is the field of complex numbers $\mathbb{C}$. However, our results also hold for other algebraically closed base fields of any characteristic. 
We comment on the applicable modifications in Section~\ref{sect: other fields}.) We let $T$ act on $S$ by letting $\alpha_i$ act as the partial differentiation operator $\partial/\partial x_i$. This action is denoted by the symbol $\aa$, as in $\alpha \beta^2 \aa x^2 y^3 z^4 = \partial^3 x^2 y^3 z^4 / \partial x \partial y^2 = 12 x y z^4$. This is the \defining{apolarity action}; $T$ is called the \defining{dual ring} of $S$. Let $F \in S$ be a homogeneous polynomial of degree $d$. The \defining{apolar} or \defining{annihilating ideal} $F^\perp \subset T$ is the set of polynomials $\Theta \in T$ such that $\Theta \aa F = 0$. The quotient $A_F = T/F^\perp$ is called the \defining{apolar algebra} of $F$. The \defining{Waring rank} $r(F)$ of $F$ is the least $r$ such that $F = \ell_1^d + \dotsb + \ell_r^d$ for some linear forms $\ell_i$. A lower bound for Waring rank, following from ideas of Sylvester in 1851 \cite{Sylvester:1851kx}, is that $r(F)$ is bounded below by the maximum value of the Hilbert function of $A_F$. Ranestad and Schreyer \cite{MR2842085} have recently shown that the Waring rank of $F$ is bounded below by $\frac{1}{\delta} \length(A_F)$, where $\delta$ is the greatest degree of a minimal generator of $F^\perp$ and $\length(A_F)$ is the length of the apolar algebra, that is, the sum of all the values of the Hilbert function of $A_F$. The bound of Ranestad--Schreyer is best when $\delta$ is small, that is when $F^\perp$ is generated in small degrees. So it is natural to ask when this occurs, or conversely when $F^\perp$ has high-degree generators. \begin{problem}\label{problem: apolar gen degrees} Give necessary or sufficient conditions for $F^\perp$ to be generated in low degrees or in high degrees; that is, for the greatest degree $\delta$ of a minimal generator of $F^\perp$ to be small or large. \end{problem} As we shall see, ``small'' and ``large'' should be considered relative to the degree $d$ of $F$. 
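The apolarity action is straightforward to experiment with on a computer. A minimal plain-Python sketch (the dictionary representation of polynomials, mapping exponent tuples to coefficients, is our choice and not from the text), verifying the worked example $\alpha \beta^2 \aa x^2 y^3 z^4 = 12 x y z^4$:

```python
def diff(F, i):
    """Partial derivative of a polynomial F (dict: exponent tuple -> coeff)
    with respect to variable number i."""
    out = {}
    for exps, c in F.items():
        if exps[i] > 0:
            e = list(exps)
            e[i] -= 1
            out[tuple(e)] = out.get(tuple(e), 0) + c * exps[i]
    return {k: v for k, v in out.items() if v != 0}

def apolar(theta, F):
    """Apply a differential operator theta (dict: alpha-exponent tuple -> coeff)
    to F via repeated partial differentiation."""
    result = {}
    for op_exps, c in theta.items():
        G = F
        for i, a in enumerate(op_exps):
            for _ in range(a):
                G = diff(G, i)
        for exps, d in G.items():
            result[exps] = result.get(exps, 0) + c * d
    return {k: v for k, v in result.items() if v != 0}

# alpha * beta^2 applied to x^2 y^3 z^4 gives 12 x y z^4:
F = {(2, 3, 4): 1}       # x^2 y^3 z^4
theta = {(1, 2, 0): 1}   # alpha beta^2
image = apolar(theta, F)
```

Membership in $F^\perp$ is then the statement that `apolar` returns the empty dictionary.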
It is through serendipity that while simultaneously studying Problems \ref{problem: dir sum} and \ref{problem: apolar gen degrees}, as separate problems, the authors noticed that they were actually not separate. These two problems are linked by the following result (see also \cite[Lem.~2.9, Lem.~3.27]{KleppePhD2005}). \begin{thm}\label{thm: direct sum generator} If $F$ is a direct sum then $F^\perp$ has a minimal generator of degree $\deg(F)$. \end{thm} Sepideh Shafiei has shown that the apolar ideal of the generic determinant $\det_n$ is generated in degree $2$ \cite{Shafiei:ud}. Thus \begin{cor}\label{cor_determinant_is_not_dir_sum} For $n > 2$ the generic determinant is not a direct sum. \end{cor} Other results of Shafiei concerning apolar ideals of permanents, Pfaffians, etc., have similar consequences for direct sum indecomposability of these forms. Despite its centrality in linking Problems~\ref{problem: dir sum} and~\ref{problem: apolar gen degrees}, Theorem~\ref{thm: direct sum generator} is surprisingly easy to prove, see Section~\ref{sect: max deg apolar gen of direct sum}. The converse to Theorem~\ref{thm: direct sum generator} does not hold. \begin{example}\label{example: border rank 2 apoequ not dirsum} $F = xy^2 \in S = \mathbb{C}[x,y]$ has $F^\perp = \langle \alpha^2, \beta^3 \rangle \subset T = \mathbb{C}[\alpha, \beta]$ with the minimal generator $\beta^3$ of degree $3$, but $F$ is not a direct sum. Indeed, in two variables a direct sum $x^d - y^d$ factors as $x^d - y^d = \prod_{k=1}^d (x - \zeta^k y)$ ($\zeta$ a primitive $d$-th root of unity), with distinct linear factors, while $xy^2$ does not have distinct factors; or use Proposition~\ref{prop: smith stong}. 
\end{example} \begin{example}\label{example: plane cubic apoequ not dirsum} The cubic $F = x^2 y - y^2 z = y(x^2 - yz) \in \mathbb{C}[x,y,z]$ has $F^\perp = \langle \gamma^2, \alpha\gamma, \alpha^2 + \beta\gamma, \beta^3, \alpha\beta^2 \rangle$, so $F^\perp$ has two minimal generators of degree $3$. Thus $F$ satisfies the necessary condition of Theorem~\ref{thm: direct sum generator}. However $F$ is not a direct sum by Proposition~\ref{prop: smith stong}. \end{example} Note however that $xy^2$ is a limit of direct sums: \[ xy^2 = \lim_{t \to 0} \frac{1}{3t} \Big( (y+tx)^3 - y^3 \Big) \] as is $y(x^2+yz)$: \[ y(x^2+yz) = \lim_{t \to 0} \frac{1}{6t^2} \Big( (y + tx + 2t^2 z)^3 + (y - tx)^3 - 2y^3 \Big) . \] We will show that if $F^\perp$ has a minimal generator of degree $\deg(F)$, then $F$ is a limit of direct sums. But the converse does not hold: not every limit $F$ of direct sums has the property that $F^\perp$ has a minimal generator of degree $\deg(F)$. \begin{example}\label{example: nonconcise limit of dirsum} For $t \neq 0$, $x^d - ty^d$ is a direct sum and $\lim_{t \to 0} x^d - ty^d = x^d$. However $(x^d)^\perp = \langle \alpha^{d+1},\beta \rangle$ has no minimal generator of degree $d$. A perhaps more satisfying example is $\lim_{t \to 0} xyz - tw^3 = xyz$, again a limit of direct sums, with $(xyz)^\perp = \langle \alpha^2, \beta^2, \gamma^2, \delta \rangle$, having no minimal generator of degree $3$. \end{example} It is no coincidence that in both of these examples the limit polynomial uses fewer variables than the direct sums at $t \neq 0$. We will show that in general, if $F$ is a limit of direct sums which cannot be written using fewer variables, then $F^\perp$ has a minimal generator of degree $\deg(F)$. We now introduce terminology to give a precise statement of these results. First note that $F^\perp$ is a homogeneous ideal containing $\langle \alpha_1,\dotsc,\alpha_n \rangle^{d+1}$, all forms of degree at least $d+1$. 
Thus $F^\perp$ is generated in degree at most $d+1$: the $\delta$ in the Ranestad--Schreyer theorem satisfies $1 \leq \delta \leq d+1$. We mention the following observation, previously noted by Casnati and Notari \cite[Rem.~4.3]{MR2769229}. \begin{prop} \label{prop: d+1 generator rank 1} $F^\perp$ has a minimal generator of degree $d+1$ if and only if $F = \ell^d$ is a power of a linear form. \end{prop} A proof is given in Section~\ref{sect: max deg apolar gen of direct sum}. For brevity we refer to a minimal generator of $F^\perp$ as an \defining{apolar generator} of $F$. Any apolar generator of degree equal to $\deg(F)$ is called an \defining{equipotent apolar generator}. We introduce the notation $\DirSum = \DirSum_{n;d}$ for the set of direct sums (of degree $d$ in $n$ variables), $\ApoEqu$ for the set of forms with an equipotent apolar generator, and $\Con$ for the set of forms that cannot be written using fewer variables. (Such forms are called \defining{concise}, see Section~\ref{sect: conciseness}.) We will show that every form with an equipotent apolar generator is a limit of direct sums, so that we have the following inclusions: \[ \begin{array}{ccccc} \DirSum &\subset& \ApoEqu &\subset& \overline{\DirSum} \\ \cup & & \cup & & \cup \\ \DirSum \cap \Con &\subset& \ApoEqu \cap \Con &\subset& \overline{\DirSum} \cap \Con \end{array} \] In fact most of these inclusions are strict in general. The vertical inclusions clearly are strict as soon as $n \geq 2$. We have $\DirSum \cap \Con \subsetneqq \ApoEqu \cap \Con$ (and of course $\DirSum \subsetneqq \ApoEqu$) by Examples \ref{example: border rank 2 apoequ not dirsum} and \ref{example: plane cubic apoequ not dirsum}. And we have $\ApoEqu \subsetneqq \overline{\DirSum}$ by Example \ref{example: nonconcise limit of dirsum}. Surprisingly, the last remaining inclusion is in fact an equality (compare with \cite[Cor.~4.7]{KleppePhD2005}). 
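The apolar ideals quoted in Examples~\ref{example: border rank 2 apoequ not dirsum} and~\ref{example: plane cubic apoequ not dirsum} can be checked by direct differentiation. A plain-Python sketch for $F = x^2y - y^2z$ and $xy^2$ (again with polynomials stored as dictionaries from exponent tuples to coefficients):

```python
def diff(F, i):
    # Partial derivative of F (dict: exponent tuple -> coeff) w.r.t. variable i.
    out = {}
    for exps, c in F.items():
        if exps[i] > 0:
            e = list(exps); e[i] -= 1
            out[tuple(e)] = out.get(tuple(e), 0) + c * exps[i]
    return {k: v for k, v in out.items() if v != 0}

def apolar(theta, F):
    # Apply the operator theta (dict of alpha-exponent tuples -> coeffs) to F.
    result = {}
    for op, c in theta.items():
        G = F
        for i, a in enumerate(op):
            for _ in range(a):
                G = diff(G, i)
        for exps, d in G.items():
            result[exps] = result.get(exps, 0) + c * d
    return {k: v for k, v in result.items() if v != 0}

# F = x^2 y - y^2 z, variables ordered (x, y, z).
F = {(2, 1, 0): 1, (0, 2, 1): -1}

# The five claimed minimal generators of F^perp:
generators = [
    {(0, 0, 2): 1},                 # gamma^2
    {(1, 0, 1): 1},                 # alpha gamma
    {(2, 0, 0): 1, (0, 1, 1): 1},   # alpha^2 + beta gamma
    {(0, 3, 0): 1},                 # beta^3
    {(1, 2, 0): 1},                 # alpha beta^2
]
results = [apolar(g, F) for g in generators]   # all should vanish

# The generators alpha^2, beta^3 of (x y^2)^perp:
F2 = {(1, 2, 0): 1}
results2 = [apolar({(2, 0, 0): 1}, F2), apolar({(0, 3, 0): 1}, F2)]
```

Note the degree-$3$ generators $\beta^3$ and $\alpha\beta^2$ for $F$, matching the count in the example; by contrast, e.g. $\beta^2$ does not annihilate $F$.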
\begin{thm}\label{thm: apoequ = closure of dirsum} For $n \geq 2$ and $d \geq 3$, every form with an equipotent apolar generator is a limit of direct sums and conversely, every concise limit of direct sums has an equipotent apolar generator. In particular $\ApoEqu \cap \Con = \overline{\DirSum} \cap \Con$. \end{thm} The theorem is proved in Section~\ref{sect: maximal degree apolar gens and limits}. One direction is proved in Theorem~\ref{thm: apoequ => limit of dirsum} and the other direction is proved in Theorem~\ref{thm: limit of s fold direct sum then s-1 max deg apo gens}. Moreover, Theorem~\ref{thm: apoequ => limit of dirsum} provides a normal form for the limits of direct sums which are not direct sums. In such cases, for some choice of basis $x_1,\dotsc,x_k$, $y_1,\dotsc,y_k$, $z_1,\dotsc,z_{n-2k}$ of $V$: \begin{multline} F(\fromto{x_1}{x_k},\fromto{y_1}{y_k},\fromto{z_1}{z_{n-2k}}) \\ = \sum_{i=1}^k x_i \frac{\partial H(\fromto{y_1}{y_k})}{\partial y_i} + G(\fromto{y_1}{y_k},\fromto{z_1}{z_{n-2k}}) \label{equ_normal_form_for_limit_of_dir_sums} \end{multline} for homogeneous polynomials $H(y)$ in $k$ variables and $G(y,z)$ in $n-k$ variables, both of degree $d$. One might hope naively to prove at least one direction of Theorem~\ref{thm: apoequ = closure of dirsum} by arguing that if $F_t \to F$, then presumably $F_t^\perp \to F^\perp$. If for each $t \neq 0$, $F_t$ is a direct sum, then $F_t^\perp$ has a minimal generator of degree $d = \deg F$ by Theorem~\ref{thm: direct sum generator}; and then one might hope to finish by appealing to the semicontinuity of graded Betti numbers, to show that $F^\perp$ also has at least one minimal generator of degree $d$. However this argument cannot succeed, as $F_t \to F$ does not imply $F^\perp_t \to F^\perp$ as a flat limit. For instance, consider the family of polynomials $F_t = tx^d + xy^{d-1}$ in $x$ and $y$ parametrized by~$t$, with $d \ge 4$. 
We have $F_t \to F_0 = xy^{d-1}$, and \[ F_t^{\perp} = \begin{cases} \langle \alpha^2\beta, \alpha^{d-1} + dt\beta^{d-1} \rangle & \text{for $t\ne 0$, or} \\ \langle \alpha^2, \beta ^d \rangle & \text{for $t = 0$.} \end{cases} \] Thus the flat limit $\lim_{t\to 0} (F_t^{\perp}) = \langle \alpha^2\beta, \alpha^{d-1}, \beta^d \rangle \subsetneqq F_0^{\perp}$. Nevertheless, for those cases in which $F_t \to F$, the $F_t$ are direct sums, and $F^\perp_t \to F^\perp$ is a flat family, it follows that $F^\perp$ has a degree $d$ generator by semicontinuity. When such a family $\{F_t\}$ exists, we say $F$ is an \defining{apolar limit} of direct sums. The locus of apolar limits of direct sums is denoted $\ApoLim$. We have \begin{thm}\label{thm: apolim and apoequ} $\ApoLim \subset \ApoEqu$, and: \begin{enumerate} \item There exists $n$ such that for $d \ge 2n$, the inclusion is strict: $\ApoLim \subsetneqq \ApoEqu$. \item If $d=3$ or if $n=3$, then $\ApoLim = \ApoEqu$. \end{enumerate} \end{thm} In other words, the inclusion is strict for some $n$ and sufficiently large $d$, but there is equality in some cases, including $d=3$ or $n=3$. The inclusion $\ApoLim \subset \ApoEqu$ follows by the semicontinuity of graded Betti numbers, as described above. The existence of $n$ for which the inclusion is strict is explained in Section~\ref{sect: non-apolar limit}, particularly, in Proposition~\ref{prop: limit not apolar limit}. The proof of the equality for $d=3$ is straightforward, and it is explained in Proposition~\ref{prop: cubic apolar limit}. The proof of the equality for $n=3$ is obtained by longer, but elementary methods in Theorem~\ref{thm: apolim=apoequ in the plane}. Certainly, using more refined techniques one may be able to determine whether $\ApoLim = \ApoEqu$ also in other cases. In any case, we emphasize that because of Theorem~\ref{thm: apolim and apoequ}, the naive hope described above cannot suffice to prove Theorem~\ref{thm: apoequ = closure of dirsum}. 
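The failure of flat convergence for this family can be seen very concretely: $\alpha^2$ annihilates $F_0$ but no $F_t$ with $t \neq 0$, so it cannot arise in any limit of the ideals $F_t^\perp$. A plain-Python sketch for $d = 4$ (dictionary representation of polynomials as before, an implementation choice):

```python
def diff(F, i):
    # Partial derivative of F (dict: exponent tuple -> coeff) w.r.t. variable i.
    out = {}
    for exps, c in F.items():
        if exps[i] > 0:
            e = list(exps); e[i] -= 1
            out[tuple(e)] = out.get(tuple(e), 0) + c * exps[i]
    return {k: v for k, v in out.items() if v != 0}

def apply_op(op, F):
    # Apply the monomial operator alpha^op[0] beta^op[1] to F.
    G = F
    for i, a in enumerate(op):
        for _ in range(a):
            G = diff(G, i)
    return G

def F_t(t):
    # The family F_t = t x^4 + x y^3 of the text, for d = 4.
    return {(4, 0): t, (1, 3): 1}

# alpha^2 kills the limit F_0 = x y^3 ...
kill_limit = apply_op((2, 0), F_t(0))
# ... but gives 12 t x^2 on F_t, so alpha^2 lies in F_0^perp and in no F_t^perp.
miss_family = apply_op((2, 0), F_t(1))
# By contrast, alpha^2 beta annihilates every member of the family.
kill_family = [apply_op((2, 1), F_t(t)) for t in (0, 1, 2)]
```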
That is, for some $n$ and $d$, there are forms in $\ApoEqu$ whose apolar generators of degree $d$ do not arise via semicontinuity of graded Betti numbers for any family of direct sums. The strictness of $\ApoLim \subsetneqq \ApoEqu$ forces a more delicate argument for Theorem~\ref{thm: apoequ = closure of dirsum}. The strictness of the inclusion is a consequence of the existence of certain zero-dimen\-sio\-nal Gorenstein local schemes with restricted deformations, which we call \emph{uncleavable schemes}. A scheme supported at a single point is uncleavable if all its deformations are supported at a single point, see Section~\ref{sect: non-apolar limit} for references and more details. We show that at least for $n=14$ we have $\ApoLim\subsetneqq \ApoEqu$. This is because the shortest non-smoothable zero-dimensional Gorenstein scheme of length $14$ is uncleavable. However, we expect that $\ApoLim \subsetneqq \ApoEqu$ should hold for all sufficiently large $n$. We explain in detail in Section~\ref{sect: non-apolar limit} which deformation-theoretic properties of schemes of length $n$ we need in order to obtain $\ApoLim \neq \ApoEqu$. See also \cite[Sect.~4.2, Cor.~4.24]{KleppePhD2005} for related examples. \medskip It is also interesting to study the case in which $F^\perp$ is generated in low degrees. For example, $(\det_n)^\perp$ is generated in degree $2$, as is $(x_1\dotsm x_n)^\perp = \langle \alpha_1^2,\dotsc,\alpha_n^2 \rangle$. See Table~\ref{table:plane cubics} for examples of plane cubics. We show that an upper bound for the degrees of minimal generators of $F^\perp$ forces an upper bound on the degree of $F$; equivalently, if $F$ has a high degree relative to the number of variables then $F^\perp$ must have high degree minimal generators.
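One half of Shafiei's statement — that the expected quadrics do annihilate the determinant — is easy to verify directly (this only checks containment in $(\det_3)^\perp$, not that these quadrics generate it). A plain-Python sketch:

```python
from itertools import permutations

def diff(F, v):
    # Partial derivative of F (dict: exponent tuple -> coeff) w.r.t. variable v.
    out = {}
    for exps, c in F.items():
        if exps[v] > 0:
            e = list(exps); e[v] -= 1
            out[tuple(e)] = out.get(tuple(e), 0) + c * exps[v]
    return {k: x for k, x in out.items() if x != 0}

def sign(perm):
    # Sign of a permutation, by sorting with transpositions.
    s, p = 1, list(perm)
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

# det_3 as a polynomial in the 9 variables x_{ij}, variable index 3*i + j.
det3 = {}
for perm in permutations(range(3)):
    exps = [0] * 9
    for i in range(3):
        exps[3 * i + perm[i]] = 1
    det3[tuple(exps)] = sign(perm)

def d2(F, u, v):
    return diff(diff(F, u), v)

# Quadrics annihilating det_3: squares, and products within a row or a column.
square = d2(det3, 0, 0)     # d^2 / dx11^2
same_row = d2(det3, 0, 1)   # d^2 / (dx11 dx12)
same_col = d2(det3, 0, 3)   # d^2 / (dx11 dx21)
# A product across different rows and columns does NOT annihilate:
cross = d2(det3, 0, 4)      # d^2 / (dx11 dx22), equal to x33
```

The same machinery confirms $(x_1 x_2 x_3)^\perp \supseteq \langle \alpha_1^2, \alpha_2^2, \alpha_3^2 \rangle$, since each second derivative of a multilinear form in a repeated variable vanishes.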
\begin{thm}\label{thm: apolar generator degree lower bound} If $F$ is a homogeneous form of degree $d$ in $n$ variables and $\delta$ is the maximal degree of a minimal generator of $F^\perp$, then $d \leq (\delta-1)n$. \end{thm} In particular, if $F^\perp$ is generated by quadrics then $d \leq n$. It would be interesting to classify polynomials $F$ of degree $d=n$ such that $F^\perp$ is generated by quadrics. See Section~\ref{sect: lower bound on degree} for a brief discussion and the proof of the theorem. \begin{notation} Throughout the paper, $F \in S^d V$ is a homogeneous form of degree $d$ in $n = \dim V$ variables. More generally, $F$ may be a divided-powers form of degree $d$; see \cite[App.~A]{MR1735271}. \end{notation} \begin{remark} Direct sums and their limits have also appeared in other articles. In \cite{MR1067383}, functions (not necessarily polynomials) are called \emph{decomposable} when they are sums of functions in independent variables. In \cite{MR1096431} they are called \emph{sum-maps}, while in \cite{MR1127057} and \cite{MR1969308} they are called \emph{direct sums}. In \cite{MR2548229}, polynomials with a direct sum decomposition are called \emph{polynomials of Sebastiani--Thom type}. They are called \emph{connected sums} in \cite{Shafiei:ud}, following \cite{MR2177162}, \cite{MR2738376}, where the term \emph{connected sum} is used to refer to a closely related concept; see Section \ref{sect: connected sums}. In \cite{MR1119265}, forms (homogeneous polynomials) $p$ and $q$ over $\mathbb{C}$ are called \emph{unitarily disjoint} if they depend on disjoint sets of variables after a unitary linear change of variables with respect to a fixed Hermitian product on the space of linear forms. In \cite{KleppePhD2005}, direct sum decompositions are called \emph{regular splittings} and limits of direct sum decompositions are called \emph{degenerate splittings}.
In \cite{khoury_jayanthan_srinivasan} and references therein the authors study apolar algebras of homogeneous forms $F$, which are either direct sums $F=x^d + G(y_1,\dotsc, y_{n-1})$ of ``type'' $(1,n-1)$ or their limits $x y^{d-1} + G(y, z_1,\dotsc, z_{n-2})$ --- compare with the normal form \eqref{equ_normal_form_for_limit_of_dir_sums} for $k=1$. Their work is motivated by earlier articles \cite{MR2158748}, \cite{elkhoury_srinivasan_a_class_of_Gorenstein_Artin_algebra_of_codim_4}, where the special case of $n=4$ has been studied. In this series of articles the direct sums and their limits serve the purpose of a classification of Gorenstein Artin algebras with prescribed invariants. Our results in this article may have similar applications, which need to be further studied. \end{remark} \subsection{Outline of paper} In the remainder of this Introduction we give proofs of some elementary statements including Theorem~\ref{thm: direct sum generator} and Proposition~\ref{prop: d+1 generator rank 1}. In Section~\ref{sect: background} we review background, including: apolarity; conciseness; secant varieties and border rank; the easy cases of binary forms and plane cubics; semicontinuity of graded Betti numbers; Gorenstein Artin algebras; and connected sums. In Section~\ref{S: dimension of dirsum} we discuss the dimension of the direct sum locus and uniqueness of direct sum decompositions. In Section~\ref{sect: apolar generators and limits of direct sums} we collect results that relate quadratic apolar generators to direct sums and to maximal degree apolar generators. We prove Theorem~\ref{thm: apoequ = closure of dirsum}. Then we prove Theorem~\ref{thm: apolar generator degree lower bound}. In Section~\ref{sect: variation in families} we prove Theorem~\ref{thm: apolim and apoequ}. In Section~\ref{sect: generalizations} we generalize some of our results to linear series of forms (instead of a single form). 
We consider ``almost direct sums.'' Finally we discuss the generalization of our results to algebraically closed fields in any characteristic. \subsection{Equipotent apolar generator of a direct sum}\label{sect: max deg apolar gen of direct sum} We begin with a few elementary statements. \begin{proof}[Proof of Theorem~\ref{thm: direct sum generator}] Say $F = G-H$ where $G \in S^x = \mathbb{C}[x_1,\dotsc,x_i]$, $H \in S^y = \mathbb{C}[y_1,\dotsc,y_j]$, and $G, H \neq 0$. Let us denote the dual rings $T^\alpha = \mathbb{C}[\alpha_1,\dotsc,\alpha_i]$, $T^\beta = \mathbb{C}[\beta_1,\dotsc,\beta_j]$. We work in $S = S^x \otimes S^y = \mathbb{C}[x_1,\dotsc,x_i,y_1,\dotsc,y_j]$ with dual ring $T = T^\alpha \otimes T^\beta = \mathbb{C}[\alpha_1,\dotsc,\alpha_i,\beta_1,\dotsc,\beta_j]$. We have $G^\perp \cap H^\perp \subset F^\perp$, where $G^\perp$ and $H^\perp$ are computed in $T$ rather than $T^\alpha$, $T^\beta$. On the other hand if $\Theta \in (F^\perp)_k$, then $\Theta \aa G = \Theta \aa H \in S^x_{d-k} \cap S^y_{d-k}$. But this intersection is zero if $k \neq d$, so we must have $\Theta \aa G = \Theta \aa H = 0$. Thus $(G^\perp \cap H^\perp)_k = (F^\perp)_k$ for all $k \neq d$. Now let $\delta_1 \in T^\alpha_d$ such that $\delta_1 \aa G = 1$ and let $\delta_2 \in T^\beta_d$ such that $\delta_2 \aa H = 1$. Such elements exist in abundance: there is an affine hyperplane of them in $T^\alpha_d$ and in $T^\beta_d$, by the hypothesis that $G$ and $H$ are nonzero. Let $\Delta = \delta_1 + \delta_2$. Then $\Delta \aa G = \Delta \aa H = 1$, so $\Delta \notin G^\perp \cap H^\perp$, but $\Delta \aa F = 0$. This element $\Delta$ is a minimal generator of $F^\perp$: it cannot be generated in lower degrees, since all elements in lower degrees lie in $G^\perp \cap H^\perp$. \end{proof} For future reference we record the additional details given in the above proof (see also \cite[Lem.~3.27]{KleppePhD2005}). 
\begin{lemma}\label{lem: apolar of direct sum} Let $F = G-H$ be a direct sum decomposition of degree $d$, with $G$, $H$ nonzero. Then \[ F^{\perp} = G^\perp \cap H^\perp + \langle \Delta \rangle \] where $\Delta = \delta_1 + \delta_2 \in T_d$ is homogeneous of degree $d$, $\delta_1 \aa G = \delta_2 \aa H = 1$, $\delta_1$ can be written using only variables dual to variables of $G$, and $\delta_2$ can be written using only variables dual to variables of $H$. \end{lemma} \begin{proof} The only statement left to prove is that any degree $d$ element of $F^{\perp}$ is in the ideal $G^\perp \cap H^\perp + \langle \Delta \rangle$. Let $\Theta \in (F^\perp)_d$. Then $\Theta \aa G = \Theta \aa H \in \mathbb{C}$; call this value $c$. We have $\Theta - c\Delta \in G^\perp \cap H^\perp$. \end{proof} See also \cite[Lemma~3.1]{Casnati:2013ad} for a description of $F^\perp$ in terms of the extensions of the ideals $G^\perp \cap T^\alpha$, $H^\perp \cap T^\beta$. \begin{cor} If $F = F_1 + \dotsb + F_s$ is a direct sum of $s$ terms then $F^\perp$ has at least $s-1$ equipotent apolar generators. \end{cor} \begin{proof} This follows by induction on $s$ from Lemma~\ref{lem: apolar of direct sum}. Explicitly, if $F = F_1 + \dotsb + F_s$ then \begin{multline*} \dim F^\perp_d = 1 + \dim(F_1^\perp \cap (F_2 + \dotsb + F_s)^\perp)_d \\ = \dotsb = s-1 + \dim (F_1^\perp \cap \dotsb \cap F_s^\perp)_d , \end{multline*} so $F^\perp$ has at least $s-1$ minimal generators of degree $d$. \end{proof} We call $F$ an \defining{$s$-fold direct sum} when $F$ can be written as a direct sum of $s$ terms, that is, $F = F_1 + \dotsb + F_s$ with each $F_i \in S^d V_i$ nonzero and $V_1 \oplus \dotsb \oplus V_s = V$. We will frequently use the following simple characterization of direct sums. \begin{cor}\label{cor_quadratic_generators_of_a_direct_sum} Let $V = V_1 \oplus V_2$ and $F \in S^d V$.
Then the following are equivalent: \begin{enumerate} \item \label{item_a_direct_sum} $F = F_1 + F_2$ where $F_1 \in S^d V_1$ and $F_2 \in S^d V_2$, \item \label{item_quadratic_generators} $V_1^* V_2^* \subset F^\perp$, \item \label{item_zero_locus} $V_1 \cup V_2$ contains the common affine scheme-theoretic zero locus $V((F^\perp)_2)$ of the quadrics in $F^{\perp}$. \end{enumerate} \end{cor} Note that in \ref{item_a_direct_sum} $F$ is not necessarily a direct sum, as $F_1$ or $F_2$ could be zero. This can be remedied by adding the assumptions that $F^{\perp}$ has no linear generators and that both vector spaces $V_1$ and $V_2$ are non-trivial. \begin{proof} If $F= F_1 + F_2$, then clearly the reducible quadrics in $V_1^* V_2^*$ annihilate $F$. In the other direction, if $V_1^* V_2^* \subset F^\perp$, and we give $V_1$ a basis $\fromto{x_1}{x_a}$ and $V_2$ a basis $\fromto{y_1}{y_{n-a}}$, then condition \ref{item_quadratic_generators} implies that $F$ cannot have mixed terms divisible by $x_i y_j$. Thus $F$ is as in \ref{item_a_direct_sum}. Finally, \ref{item_zero_locus} is simply a geometric rephrasing of \ref{item_quadratic_generators}, using the correspondence between ideals and affine schemes. \end{proof} We give an alternate proof of the statement observed by Casnati and Notari \cite[Rem.~4.3]{MR2769229}, that a form $F$ of degree $d$ has a minimal apolar generator of degree $d+1$ if and only if $F = x^d$ for some linear form $x$, that is, if and only if $F$ has Waring rank $1$. This proof illustrates in a simple case some of the techniques we will use later. \begin{proof}[Proof of Proposition~\ref{prop: d+1 generator rank 1}] If $F = x_1^d$ then $F^\perp = \langle \alpha_1^{d+1},\alpha_2,\dotsc,\alpha_n \rangle$. Conversely, suppose $F$ has degree $d$ and $F^\perp_{d+1}$ has a minimal generator.
Let $I = (F^\perp)_{\leq d}$, the ideal generated by forms in $F^\perp$ of degree at most $d$, and note that $I_{d+1}\subset T_{d+1} = F^\perp_{d+1}$ has codimension at least $1$, because otherwise no generator would be needed. Then there is a nonzero polynomial $G$ of degree $d+1$ annihilated by $I$, since $G$ is annihilated by $I$ if and only if it is annihilated by $I_{d+1}$ (see \cite[Prop.~3.4(iii)]{MR3121848}). Moreover $I_{d} = (F^\perp)_{d}$ has codimension exactly $1$, by the symmetry of the Hilbert function of $A_F$ (see \textsection\ref{section: gorenstein artin}): $\dim (A_F)_d = \dim (A_F)_0 = 1$. Now $I_d \subset (G^\perp)_d \subsetneqq T_d$, so $I_d = (G^\perp)_d$. Thus the Hilbert function of $A_G$ has $h_{A_G}(d) = 1$. By symmetry $h_{A_G}(1) = 1$. Then $G$ has only one essential variable (see \textsection\ref{sect: conciseness}) so $G$ can be written as a homogeneous form of a single variable; necessarily $G$ has Waring rank $1$. Say $G = x^{d+1}$. We have $G^\perp\subset F^{\perp}$, since $(G^\perp)_{\le d} = I_{\le d} = (F^{\perp})_{\le d}$ and $(F^{\perp})_{\ge d+1} = T_{\ge d +1}$. So $F = \alpha \aa G = c x^d$ for some $\alpha$ by Lemma~\ref{lemma: apolar containment}, i.e., by the inclusion-reversing part of the Macaulay inverse system Theorem. \end{proof} \subsection*{Acknowledgements} As indicated by appropriate citations, many of the statements in this article are either a part of the third author's PhD thesis \cite{KleppePhD2005}, or they are similar to statements in that thesis. We thank Sepideh Shafiei for suggesting to us the problem of direct sum decomposability of determinants and for sharing with us her work in preparation. We thank Joachim Jelisiejew, Grzegorz Kapustka, Jan O.~Kleppe, Robert Lazarsfeld, and Anna Otwinowska for helpful comments, and Kristian Ranestad for connecting the unpublished thesis of the third author with the work of the other authors. 
We especially thank the anonymous referee for unusually thorough reviews which included a number of helpful suggestions. The computer algebra software packages Macaulay2 \cite{M2} and Magma \cite{magma} were useful in compiling Table~\ref{table:plane cubics} and in calculations of examples. \section{Background}\label{sect: background} For a homogeneous ideal $I$ in the polynomial ring $S$, a \defining{minimal generator} of $I$ is a non-zero homogeneous element of the graded module $I/\mathfrak{m}I$ where $\mathfrak{m} = \langle x_1,\dotsc,x_n \rangle$ is the irrelevant ideal. By the ``\defining{number}'' of minimal generators of a given degree $k$ we mean the dimension of the $k$-th graded piece $(I/\mathfrak{m}I)_k$. Following the convention of \cite{hartshorne}, by an \emph{(algebraic) variety}, we always mean an irreducible algebraic set. By a \defining{general element} of an algebraic variety we always mean any element of some suitably chosen open dense subset. When $V$ is a vector space, $\mathbb{P} V$ denotes the projective space of lines through the origin of $V$. When $v \in V$ is a nonzero vector, $[v]$ denotes the point in $\mathbb{P} V$ determined by $v$, that is, the line through the origin of $V$ spanned by $v$. \subsection{Apolarity} Let $S$ be a polynomial ring and $T$ its dual ring. For a fixed homogeneous $F \in S_d$ of degree $d$, the $i$-th \defining{catalecticant} $C^i_F$ is a linear map $T_{d-i} \to S_i$ defined by $C^i_F(\Theta) = \Theta \aa F$. The term ``catalecticant'' was introduced by Sylvester in 1851~\cite{Sylvester:1851kx}. The images of the catalecticants are the inverse systems studied by Macaulay~\cite{MR1281612}. The catalecticant maps give an isomorphism between $A_G = T/G^\perp$ and the principal $T$-submodule of $S$ generated by $G$, consisting of elements $\Theta \aa G$ for $\Theta \in T$. \begin{lemma}\label{lemma: apolar containment} Suppose $F, G \in S$ are two homogeneous polynomials. 
If $G^\perp \subset F^{\perp}$, then $F = \Theta \aa G$ for some $\Theta \in T$. \end{lemma} Indeed, by the inclusion-reversing part of Theorem 21.6 of \cite{eisenbud:comm-alg}, the $T$-submodule of $S$ generated by $F$ is contained in the $T$-submodule generated by $G$. One connection between apolarity and geometry is indicated by Exercise 21.6 of \cite{eisenbud:comm-alg}, which relates the apolar ideals of plane conics to their ranks. Another connection is given by the following well-known lemma (see for example \cite[Proposition 4.1]{MR1215329}). \begin{lemma}\label{lem: apolarity singularity} Let $\alpha \in T_1$ be a linear form. Then $\alpha^k \in F^\perp$ if and only if $F$ vanishes to order at least $d-k+1$ at the corresponding point in the projective space $[\alpha] \in \mathbb{P} T_1 \cong \mathbb{P} S_1^*$. \end{lemma} In particular, $\alpha^{d-1} \in F^\perp$ if and only if $V(F)$ is singular at $[\alpha]$. \begin{proof} The condition $\alpha^k \in F^\perp$ is equivalent to $\Theta \alpha^k \aa F = 0$ for all $\Theta \in T_{d-k}$, equivalently $\alpha^k \aa (\Theta \aa F) = 0$ for all $\Theta \in T_{d-k}$. For such $\Theta$, $\Theta \aa F$ is a form of degree $k$, so $\alpha^k \aa (\Theta \aa F)$ is equal to the evaluation of $\Theta \aa F$ at the point $[\alpha]$ (up to a scalar multiple). This vanishes for all $\Theta \in T_{d-k}$ precisely when $F$ vanishes at $[\alpha]$ to order at least $d-k+1$. \end{proof} More detailed treatments of apolarity may be found in \cite[Lect.~8]{Geramita}, \cite[Sect.~1.1]{MR1735271}, and \cite{MR3121848}. \subsection{Conciseness}\label{sect: conciseness} A homogeneous form $F \in \mathbb{C}[x_1,\dotsc,x_n]$ is \defining{concise} (with respect to $x_1,\dotsc,x_n$) if $F$ cannot be written as a polynomial in fewer variables. That is, if there are linearly independent linear forms $t_1,\dotsc,t_k$ such that $F \in \mathbb{C}[t_1,\dotsc,t_k] \subset \mathbb{C}[x_1,\dotsc,x_n]$, then $k = n$.
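Conciseness is easy to test in examples: the number of variables genuinely needed by $F$ is the rank of the first catalecticant $C^1_F$, that is, the dimension of the space spanned by the first partial derivatives of $F$. A small sympy check of ours (not part of the text): $F = (x+y)^3$, viewed in $\mathbb{C}[x,y,z]$, needs only one variable and so is not concise, while the Fermat cubic is concise; the ranks computed below are $1$ and $3$ respectively.

```python
import sympy as sp
from itertools import combinations_with_replacement as cwr

x, y, z = sp.symbols('x y z')
V = (x, y, z)

def span_dim(F):
    """Rank of the first catalecticant C^1_F: the dimension of the space
    spanned by the first partial derivatives of F."""
    d = sp.Poly(F, *V).total_degree()
    partials = [sp.diff(F, v) for v in V]
    # coefficient matrix of the partials over the degree-(d-1) monomials
    monoms = [sp.Mul(*c) for c in cwr(V, d - 1)]
    M = sp.Matrix([[sp.Poly(p, *V).coeff_monomial(m) for m in monoms]
                   for p in partials])
    return M.rank()

F = (x + y)**3          # only one essential variable: not concise
G = x**3 + y**3 + z**3  # concise in three variables
print(span_dim(F), span_dim(G))
```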
In coordinate-free terms, $F \in S^d V$ is concise (with respect to $V$) if $F \in S^d W$, $W \subset V$ implies $W = V$. Concise polynomials are also called \defining{nondegenerate}, but we will follow the terminology of the tensor literature. The following are equivalent: \begin{enumerate} \item $F \in S^d V$ is concise. \item The hypersurface $V(F) \subset \mathbb{P} V^*$ is not a cone. \item There is no point in $\mathbb{P} V^*$ at which $F$ vanishes to order $d$. \item The catalecticant $C^1_F$ is onto. \item The apolar ideal $F^\perp$ has no linear elements: $F^\perp_1 = 0$. \end{enumerate} We define the \defining{span} of $F$, denoted $\langle F \rangle$, to be the image of the catalecticant $C^1_F$. We have $F \in S^d \langle F \rangle$. With this notation, $G + H$ is a direct sum decomposition if and only if $\langle G \rangle \cap \langle H \rangle = \{0\}$ and $G,H \neq 0$. The elements of $\langle F \rangle$ are called \defining{essential variables} of $F$; by the number of essential variables of $F$ we mean the dimension $\dim \langle F \rangle$ of $\langle F \rangle$ as a $\mathbb{C}$-vector space. See \cite{MR2279854}. The locus in $S^d V$ of non-concise polynomials is a Zariski-closed subset called the \defining{subspace variety} and denoted $\Sub$. Its complement is the open set $\Con$. \subsection{Secant varieties and border rank} Let $v_d\colon \mathbb{P} V \to \mathbb{P}(S^d V )$ be the Veronese map, $v_d([\ell]) = [\ell^d]$. Recall $F \in S^d V $ has Waring rank $r$ if and only if $F$ is a sum of $r$ $d$-th powers of linear forms in $V $, but not fewer. Equivalently, $[F] \in \mathbb{P} (S^d V )$ lies in the linear span of some $r$ points in the Veronese variety $v_d(\mathbb{P} V )$, but does not lie in the span of any fewer points. The Zariski closure of the set of projective points corresponding to affine points of rank at most $r$ is the $r$-th secant variety $\sigma_r(v_d(\mathbb{P} V ))$ of the Veronese variety. 
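The passage to the Zariski closure genuinely adds points: for example (a sympy check of ours, with the illustrative choice $d=4$), the form $xy^{d-1}$ is a limit of forms of Waring rank $2$, hence lies in $\sigma_2(v_d(\mathbb{P} V))$, although its own rank turns out to be $d$.

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
d = 4  # illustrative choice; the same limit works for any d >= 2

# For t != 0 this is a combination of two d-th powers of linear forms,
# so it has rank at most 2; over C the scalars can be absorbed into
# the linear forms.
Gt = sp.expand(((y + t*x)**d - y**d) / (d*t))
limit = Gt.subs(t, 0)  # the flat limit as t -> 0
print(limit)
```

Here the limit is $xy^{d-1}$, a point on a tangent line to the Veronese variety.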
The \defining{border rank} of $F$, denoted $br(F)$, is the least $k$ such that $[F]$ lies in the $k$-th secant variety of the Veronese variety. Evidently $br(F) \leq r(F)$ and strict inequality may occur. Note $\dim \langle F \rangle \leq br(F)$. Indeed, $\dim \langle F \rangle \leq r(F)$ clearly, so $\dim \langle F \rangle \leq r$ for all $F$ in a dense subset of $\sigma_r(v_d(\mathbb{P} V ))$. Since $\dim \langle F \rangle = \rank C^1_F$ varies lower semicontinuously in $F$, we have $\dim \langle F \rangle \leq r$ for all $F$ in $\sigma_r(v_d(\mathbb{P} V ))$. The second secant variety $\sigma_2(v_d(\mathbb{P} V ))$ is the disjoint union of the set of points of rank $2$, the set $v_d(\mathbb{P} V )$ itself, and (for $d>2$) the set of points on tangent lines to $v_d(\mathbb{P} V )$. Points of the third type have border rank $2$, and hence only $2$ essential variables. Such a point necessarily has the form $x y^{d-1}$ after a linear change of variables; we have $r(x y^{d-1}) = d$. Thus $br(F)=2$ if and only if, in suitable coordinates, either $F=x^d+y^d$ and $r(F)=2$, or $F=x y^{d-1}$ and $r(F)=d$. We remark that the extreme case of a direct sum, namely an $n$-fold direct sum in the $n$-dimensional vector space $V$, coincides with a sufficiently general element of the $n$-th secant variety $\sigma_n(v_d(\mathbb{P} V))$. In particular, the closure of the set of such extreme direct sums is equal to this secant variety. \subsection{Binary forms} The following lemma is standard; see, for example, Theorem 1.44 of \cite{MR1735271}. \begin{lemma} The apolar ideal of a homogeneous binary form $F$ of degree $d$ is a complete intersection ideal generated in degrees $r$ and $d+2-r$ for some integer $1 \leq r \leq (d+2)/2$. The border rank of $F$ is $r$. \end{lemma} \begin{cor} Let $F$ be a binary form of degree $d$. The apolar ideal of $F$ has a generator of degree $d$ if and only if $F$ has border rank $2$. \end{cor} Note that the condition $br(F)=2$ excludes polynomials of rank $1$, so $F$ must be concise.
Thus the locus of concise forms with an equipotent apolar generator is exactly the locus of concise forms which are limits of direct sums, that is, $\ApoEqu \cap \Con = \overline{\DirSum} \cap \Con$. This is the case $n=2$ of Theorem~\ref{thm: apoequ = closure of dirsum}. \subsection{Plane cubics}\label{sect: plane cubics} If a plane cubic $F$ is a direct sum then in suitable coordinates we may write $F = x^3 + G(y,z)$ where $G$ is a nonzero binary cubic form. We may choose coordinates so that $G(y,z)$ is $y^3$, $y^3+z^3$, or $y^2 z$, that is, $r(G) = 1$, $2$ or $3$. Thus, up to a change of coordinates, there are exactly three plane cubics which are direct sums. We summarize the types of plane cubics in Table~\ref{table:plane cubics}, adapted from \cite{Landsberg:2009yq}. The columns mean the following: $\beta_{1,i}$ is the number of minimal apolar generators of degree $i$, $r$ is the Waring rank, and $br$ is the border rank. (We omit $\beta_{1,4}=1$ for $F = x^3$.) The rows representing direct sums are in \textbf{bold face} and the rows representing non-concise polynomials are in \textit{italic face}.
\begin{table}[hbt] \begin{center} \begin{tabular}{l l c l l l l} Description & normal form & $\beta_{1,1}$ & $\beta_{1,2}$ & $\beta_{1,3}$ & $r$ & $br$ \\ \hline\hline \textit{triple line} & $x^3$ & $2$ & $0$ & $0$ & $1$ & $1$ \\ \hline \textit{\textbf{three concurrent lines}} & $x^3-y^3$ & $1$ & $1$ & $1$ & $2$ & $2$ \\ \hline \textit{double line + line} & $x^2y$ & $1$ & $1$ & $1$ & $3$ & $2$ \\ \hline \textbf{irreducible (Fermat)} & $x^3 + y^3 + z^3$ & & $3$ & $2$ & $3$ & $3$ \\ \hline irreducible & $y^2z - x^3 - xz^2$ & & $3$ & $0$ & $4$ & $4$ \\ \hline \textbf{cusp} & $y^2z - x^3$ & & $3$ & $2$ & $4$ & $3$ \\ \hline triangle & $xyz$ & & $3$ & $0$ & $4$ & $4$ \\ \hline conic + transversal line & $x(x^2+yz)$ & & $3$ & $0$ & $4$ & $4$ \\ \hline irreducible, smooth & $y^2z - x^3$ & & $3$ & $0$ & $4$ & $4$ \\ \qquad ($a^3 \neq 0, -27/4$) & \qquad $- axz^2 - z^3$ \\ \hline irreducible, singular & $y^2z - x^3$ & & $3$ & $0$ & $4$ & $4$ \\ \qquad ($a^3 = -27/4$) & \qquad $- axz^2 - z^3$ \\ \hline conic + tangent line & $y(x^2 + yz)$ & & $3$ & $2$ & $5$ & $3$ \\ \hline \end{tabular} \end{center} \caption{Plane cubic curves.}\label{table:plane cubics} \end{table} This table shows the case $n=3$, $d=3$ of Theorem~\ref{thm: apoequ = closure of dirsum}. \begin{cor} Let $F$ be a concise plane cubic. The apolar ideal of $F$ has a minimal generator of degree $3$ if and only if $F$ is a limit of direct sums. \end{cor} \begin{proof} Table \ref{table:plane cubics} shows that a concise plane cubic has a minimal apolar generator of degree $3$ if and only if the cubic has border rank $3$, which is equivalent to its being a limit of Fermat cubics. \end{proof} \subsection{Semicontinuity of graded Betti numbers}\label{sect: semicontinuity of graded Betti numbers} In this section we work over an arbitrary algebraically closed field $\Bbbk$. Let $I$ be a homogeneous ideal in a polynomial ring $T =\Bbbk[\alpha_1,\dotsc,\alpha_n]$ with standard grading. 
The \defining{graded Betti numbers} of $I$ are defined as follows. Fix a minimal free resolution of $T/I$, \[ 0 \leftarrow T \leftarrow \bigoplus T(-j)^{\beta_{1,j}} \leftarrow \bigoplus T(-j)^{\beta_{2,j}} \leftarrow \dotsb \] The $\beta_{i,j}$ are the graded Betti numbers of $I$ (more precisely, of $T/I$). We have $\beta_{i,j} = \dim_{\Bbbk} \Tor_i(T/I,\Bbbk)_{j} = \dim_{\Bbbk} \Tor_{i-1}(I,\Bbbk)_{j}$ \cite[Prop.~1.7]{eisenbud:syzygies}, where the second equality holds because $I$ has no degree-zero part. In the proof of Theorem~\ref{thm: limit of s fold direct sum then s-1 max deg apo gens} we will use the fact that when the ideal $I$ varies in a flat family, the graded Betti numbers vary upper-semicontinuously. That is, if $I_t$ is a flat family of ideals, then $\beta_{i,j}(I_0) \geq \beta_{i,j}(I_t)$ for general $t$. Boraty\'nski and Greco proved that when the ideal $I$ varies in a flat family, the Hilbert functions and Betti numbers vary semicontinuously \cite{MR839041}. Ragusa and Zappal\'a proved the semicontinuity of graded Betti numbers of flat families of zero-dimensional ideals \cite[Lem.~1.2]{MR2165191}. Semicontinuity of graded Betti numbers more generally seems to be a well-known ``folk theorem''; for example, different ideas for proofs are sketched in \cite[Remark following Theorem~1.1]{MR2084070}, and in \cite[Corollary~3.3]{MR3060753}. We give a quick proof here for the sake of completeness. \begin{prop}\label{prop: semicontinuity of graded Betti numbers} Let $T = \Bbbk[\alpha_1,\dotsc,\alpha_n]$ with standard grading, and consider the power series ring $\Bbbk[[U]]$, with $\deg U = 0$. Suppose $I \subset T \otimes_{\Bbbk} \Bbbk[[U]]$ is a homogeneous ideal, flat over $\Spec(\Bbbk[[U]])$. For $\mathfrak{p} \in \Spec(\Bbbk[[U]])$ let $I_{\mathfrak{p}} = I \otimes k(\mathfrak{p})$. Fix any $i$ and $j$. Then the function $\mathfrak{p} \mapsto \beta_{i,j}(I_\mathfrak{p})$ is upper-semicontinuous.
\end{prop} \begin{proof} Start with the Koszul resolution of $\Bbbk = T/(\alpha_1,\dotsc,\alpha_n)$, regarded as a sheaf on $\Spec (\Bbbk[[U]])$ (although independent of $U$). Tensor the resolution with $I$, take the degree $j$ part of the resulting complex, and denote by $I_{k, \mathfrak{p}}$ the $k$-th graded piece of $I_{\mathfrak{p}}$. The $\Tor$ we are interested in is the homology of this complex of vector spaces: \[ \dotsb \leftarrow \Wedge{i-1} V^{*} \otimes I_{j-i+1,\mathfrak{p}} \leftarrow \Wedge{i} V^{*} \otimes I_{j-i,\mathfrak{p}} \leftarrow \Wedge{i+1} V^{*} \otimes I_{j-i-1,\mathfrak{p}} \leftarrow \dotsb \] where $V^{*}$ is the vector space spanned by $\alpha_1,\dotsc,\alpha_n$. By \cite[Exer.~20.14]{eisenbud:comm-alg}, the dimensions of the vector spaces $I_{q,\mathfrak{p}}$ are locally (in $\mathfrak{p}$) constant. Locally in $\mathfrak{p}$, then, this is a complex of fixed finite dimensional vector spaces with differentials given by matrices whose entries are regular functions on $\Spec(\Bbbk[[U]])$. The homology of this complex at the middle term displayed above is $\Tor_{i}(I_{\mathfrak{p}},\Bbbk)_{j}$, whose dimension is the graded Betti number $\beta_{i+1,j}(I_{\mathfrak{p}})$; the dimensions of the homology of such complexes vary upper-semicontinuously in $\mathfrak{p}$, and since $i$ and $j$ are arbitrary, the claim follows. \end{proof} \begin{remark} Graded Betti numbers of flat families of ideal sheaves on projective space are \emph{not} semicontinuous. For example, let three points in $\mathbb{P}^2$ move from non-collinear position for $u \neq 0$ to collinear position when $u=0$. For $u\neq0$, the ideal sheaf $\tilde I_u$ is generated by three quadrics having two linear syzygies. At $u=0$ the ideal sheaf $\tilde I_0$ is a complete intersection of type $(1,3)$ (with one linear generator, one cubic generator, and just one syzygy). The point is that the sheaf $\tilde I_0$ is the sheafification of the flat limit ideal $I_0$. In the above example the flat limit ideal has an embedded point at the origin, which is lost in the sheafification.
\end{remark} Our brief proof does not recover the ``consecutive cancellation'' as in \cite{MR2084070}, but we will not use consecutive cancellation. \subsection{Gorenstein Artin algebras}\label{section: gorenstein artin} Let $A$ be an algebra. Most of the time we will consider \defining{standard graded algebras}, that is, $A$ is a graded algebra with $A_0 = \mathbb{C}$ and $A$ is generated in degree $1$. In this situation, the \defining{embedding dimension} of $A$ is $\dim A_1$. Let $\mathfrak{m} = \bigoplus_{i>0} A_i$ be the graded maximal ideal. The \defining{socle} of a graded algebra $A$ is the ideal $\Soc(A) = (0 : \mathfrak{m})$, that is, the annihilator of the graded maximal ideal in $A$. When $A$ is Artinian the socle includes $A_d$ where $d = \max\{i : A_i \neq 0\}$. When $A$ is Artinian, $A$ is \defining{Gorenstein} if and only if $\Soc(A)$ is $1$-dimensional. The \defining{socle degree} of $A$ is $\max\{i : A_i \neq 0 \}$. We use \cite[Cor.~21.16]{eisenbud:comm-alg}. Say $F$ is a concise homogeneous form of degree $d$ in $n$ variables and $I = F^\perp$ is a zero-dimensional Gorenstein ideal, so $A = T/I$ is a Gorenstein Artin algebra. Then $A$ has socle degree $d = \deg F$ and embedding dimension $n$. Let $A = T/I$ have the minimal free resolution $M_{\bullet}$: \[ 0 \leftarrow T\! = \! M_0 \stackrel{d_1}{\leftarrow} M_1 \stackrel{d_2}{\leftarrow} \dotsb \stackrel{d_{n}}{\leftarrow} M_n \leftarrow 0. \] The resolution $M_{\bullet}$ is self-dual, that is, isomorphic to its dual, up to shifts in grading and homological degrees. We call this isomorphism the \defining{Gorenstein symmetry}. In particular, writing each $M_i = \bigoplus_j T(-a^i_j)$, we have: \[ M_n = M_0^* = T(-d-n) \quad \text{and} \quad M_{n-i} = M_i^* = \bigoplus T(-d-n+a^i_j). \] The main focus of this paper is Gorenstein ideals having a minimal generator in degree $d$, that is $\beta_{1,d}(I) > 0$. 
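The self-duality just described forces, in particular, the symmetry of the Hilbert function of $A_F$. As a small numerical illustration (our own sympy computation, not part of the text): for the Fermat cubic $F = x^3+y^3+z^3$, the Hilbert function of $A_F = T/F^\perp$, computed as the ranks of the catalecticant maps, is the symmetric sequence $(1,3,3,1)$, matching socle degree $3$ and embedding dimension $3$.

```python
import sympy as sp
from itertools import combinations_with_replacement as cwr

x, y, z = sp.symbols('x y z')
V = (x, y, z)
F = x**3 + y**3 + z**3
d = 3

def hilbert(F, k):
    """h_{A_F}(k) = rank of the catalecticant T_k -> S_{d-k}: the dimension
    of the span of the k-th order partial derivatives of F."""
    derivs = [sp.diff(F, *c) if c else F for c in cwr(V, k)]
    monoms = [sp.Mul(*m) for m in cwr(V, d - k)]
    M = sp.Matrix([[sp.Poly(p, *V).coeff_monomial(m) for m in monoms]
                   for p in derivs])
    return M.rank()

print([hilbert(F, k) for k in range(d + 1)])
```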
Throughout the article we will frequently use the following consequence of the Gorenstein symmetry: \begin{equation}\label{equ: Gorenstein symmetry for maximal degree apo gens} \beta_{1,d}(F^{\perp}) = \beta_{n-1,n}(F^{\perp}), \text{ thus } \beta_{1,d}(F^{\perp}) > 0 \Longleftrightarrow \beta_{n-1,n}(F^{\perp}) >0. \end{equation} As we shall see, $\beta_{n-1,n}(F^{\perp})$ can be easier to control than $\beta_{1,d}(F^{\perp})$. We will also use the more elementary symmetry of the Hilbert function of a graded Gorenstein Artin algebra, $h_A(i) = h_A(d-i)$ for a Gorenstein Artin algebra $A$ of socle degree $d$. See for example \cite[Theorem~4.1]{MR0485835}. We will make use of the following two results. The first is a special case of Thm.~8.18 of \cite{eisenbud:syzygies}. \begin{lemma}[{\cite[Thm.~8.18]{eisenbud:syzygies}}]\label{lem: beta gt 0 then minors} Suppose $I \subset T$ is a homogeneous ideal with $\beta_{n-1,n}(I) > 0$ and no linear generators. Then there exists a choice of coordinates $\fromto{\alpha_1}{\alpha_n}$ of $T$ and linearly independent linear forms $\fromto{\ell_1}{\ell_k} \in T_1$ for some $0<k<n$ such that the $2\times 2$ minors of the following matrix are contained in $I$: \begin{equation}\label{equ: n times 2 matrix with linear forms} \begin{pmatrix} \alpha_1 & \cdots & \alpha_k & \alpha_{k+1}& \cdots & \alpha_n \\ \ell_1 & \cdots & \ell_k & 0 & \cdots & 0 \\ \end{pmatrix}. \end{equation} \end{lemma} The second is a special case of Thm.~8.11 of \cite{eisenbud:syzygies}. \begin{lemma}[{\cite[Thm.~8.11]{eisenbud:syzygies}}]\label{lem: beta inequality subideal} Suppose $I \subset T$ is a homogeneous ideal containing no linear forms and $J \subset I$ is a homogeneous subideal. Then $\beta_{n-1,n}(J) \leq \beta_{n-1,n}(I)$. \end{lemma} In Section~\ref{sect: variation in families} we will also mention Gorenstein Artin algebras which are not necessarily graded. 
More precisely we will consider finite Gorenstein schemes that are spectra of those algebras. These schemes arise naturally when treating deformations of Gorenstein Artin schemes. \subsection{Connected sum}\label{sect: connected sums} When $A$, $A'$ are graded Gorenstein Artin algebras over a field $\Bbbk$, both of socle degree $d$, the (formal) \defining{connected sum} $A \# A'$ is defined as follows \cite{MR2177162,MR2738376}. $A \# A'$ is the graded algebra with graded pieces \[ (A \# A')_k = \begin{cases} \Bbbk, & k=0, \\ A_k \oplus A'_k, & 0 < k < d, \\ \Bbbk & k=d, \end{cases} \] in which the products of two elements in $A$ or in $A'$ are as before modulo the identification of $A_d \cong A'_d \cong (A \# A')_d$, and the product of a positive-degree element in $A$ with one in $A'$ is zero. (See also \cite{MR2929675,Ananthnarayan:2014gf} for more general constructions.) That this is named the ``connected sum'' of algebras is motivated by the following example. If $X, Y$ are $d$-dimensional connected closed manifolds with cohomology rings $A_X$, $A_Y$, then the cohomology ring of the connected sum $X \# Y$ is the connected sum of the cohomology rings: $A_{X \# Y} = A_X \# A_Y$. When a polynomial is a direct sum as we have defined it, its apolar algebra is a connected sum in the above sense (see also \cite[Lem.~3.27]{KleppePhD2005}). \begin{prop}\label{prop: direct sum poly -> connected sum apolar algebra} If $F = G - H$ is a direct sum decomposition then $A_F = A_{G} \# A_{H}$. \end{prop} \begin{proof} Let $d = \deg F$, $T$, $T^\alpha$, and $T^{\beta}$ be as in the proof of Lemma~\ref{lem: apolar of direct sum}. By Lemma~\ref{lem: apolar of direct sum} the annihilators satisfy $(G^\perp)_k \cap (H^\perp)_k = (F^\perp)_k$ when $k < d$. Note that $T^\alpha_1 \subset H^\perp$ and $T^\beta_1 \subset G^\perp$. Thus for $p,q>0$, $T^\alpha_p \otimes T^\beta_q \subset G^\perp \cap H^\perp \subset F^\perp$. 
Recall that $G^\perp$ is the apolar ideal of $G$ in $T$, i.e., of $G$ as an element of $S$; the apolar ideal of $G$ in $T^\alpha$ (considering $G \in S^x$) is $G^\perp \cap T^\alpha$, and similarly for $H$. Hence for $0 < k < d$, \begin{multline*} (A_F)_k = T_k / F^\perp_k = \Big( \bigoplus_{p+q=k} T^\alpha_p \otimes T^\beta_q \Big) / F^\perp \\ = (T^\alpha)_k/(G^\perp \cap T^\alpha)_k \oplus (T^\beta)_k/(H^\perp \cap T^\beta)_k = (A_{G})_k \oplus (A_{H})_k \end{multline*} as claimed. \end{proof} We can use this to give a simple ``toy'' application of our results. Suppose $X$ and $Y$ are $d$-dimensional connected closed complex manifolds with cohomology rings $A_X$, $A_Y$, and suppose that these rings are standard graded (which is by no means typical: cohomology rings of manifolds can contain generators of different degrees). Write $A_X = S^x/G^\perp$ and $A_Y = S^y/H^\perp$. Then the connected sum $X \# Y$ has cohomology ring $A_{X \# Y} \cong A_X \# A_Y \cong (S^x \otimes S^y)/(G+H)^\perp$. Therefore if $M$ is a $d$-dimensional connected closed complex manifold whose cohomology ring $A_M = S/F^\perp$ is standard graded and $F$ is not decomposable as a direct sum, then $M$ is not decomposable as a connected sum, at least not into factors whose cohomology rings are standard graded. In particular if $F^\perp$ has no minimal generator in degree $d$ then this holds by Theorem~\ref{thm: direct sum generator}. There are well-known topological consequences of a direct sum decomposition, for example involving monodromy \cite{MR0293122} and logarithmic vector fields \cite{MR2548229}. It is not immediately obvious what \emph{geometric} consequences may follow from a direct sum decomposition. R.~Lazarsfeld shared with the fourth author the observation that if $F = F_1 + F_2$ is a direct sum then $\Sing(V(F)) = \Sing(V(F_1)) \cap \Sing(V(F_2))$, that is, the singular locus of $F$ is an intersection of two cones with disjoint vertices. 
Furthermore, defining $\Sigma_a(G) = \{ p \mid \mult_p(G) > a \}$, the common zero locus of the $a$-th partial derivatives of $G$ (so that $\Sigma_0(G) = V(G)$, $\Sigma_1(G) = \Sing V(G)$), we have $\Sigma_a(F) = \Sigma_a(F_1) \cap \Sigma_a(F_2)$ for all $a>0$. One necessary condition for $F$ to be a direct sum can be deduced immediately from Proposition~4.2 of \cite{MR2738376}, which we state here for the reader's convenience. We use the following terminology, taken from \cite{MR2738376}: A \defining{standard graded Poincar\'e duality algebra} of \defining{formal dimension $d$} is precisely a (standard graded) Gorenstein Artin algebra of socle degree $d$ (together with a choice of a nonzero socle element, which we ignore). The \defining{rank} of such an algebra $H$ is the dimension of $H_1$. The \defining{$\times$-length} of a subspace $V \subset H_1$ is the least integer $c$ such that any product of $c+1$ elements of $V$ is zero in $H$ if such an integer exists, otherwise the $\times$-length of $V$ is infinite. In particular, $V$ has $\times$-length strictly less than $d$ if and only if any product of $d$ elements of $V$ is zero in $H$. \begin{prop}[{Proposition~4.2 of \cite{MR2738376}}]\label{prop: smith and stong original} Let $H$ be a standard graded Poincar\'e duality algebra of formal dimension $d$. Suppose there is a codimension one subspace $V \subset H_1$ of $\times$-length strictly less than $d$. Then, either \begin{enumerate} \item $H$ is indecomposable with respect to the connected sum operation $\#$, or \item $H$ has rank two and $H \cong \mathbb{F}[x,y]/(xy,x^d-y^d) \cong (\mathbb{F}[x]/(x^{d+1})) \# (\mathbb{F}[y]/(y^{d+1}))$. \end{enumerate} \end{prop} Note that we work over the base field $\mathbb{F} = \mathbb{C}$. The resulting necessary condition for $F$ to be a direct sum is the following: \begin{prop}\label{prop: smith stong} If $F$ has a linear factor then either $F$ is not a direct sum, or $F = x^d - y^d$ for some linear forms $x, y$. 
\end{prop} \begin{proof} By changing coordinates if necessary, suppose $x_1$ divides $F$. Let $W = x_1^\perp = \langle \alpha_2,\dotsc,\alpha_n \rangle \subset V^* = T_1$. This is a codimension $1$ subspace whose $d$-th power is in $F^\perp$, that is $w_1 \dotsm w_d \in F^\perp$ for every $w_1,\dotsc,w_d \in W$. Indeed, each monomial appearing in $F$ has at least one factor $x_1$ and hence is annihilated by every product of $d$ elements of $W$. Let $H = A_F = T/F^\perp$. Then $W_1 = W/F^\perp_1$ is a codimension $1$ subspace of $H_1$ such that the product of any $d$ elements in $W_1$ is zero in $H$. By Proposition~\ref{prop: smith and stong original}, either $H$ is indecomposable with respect to the connected sum operation, or $H \cong \mathbb{C}[\alpha,\beta]/(\alpha\beta,\alpha^d-\beta^d) \cong (\mathbb{C}[\alpha]/\alpha^{d+1}) \# (\mathbb{C}[\beta]/\beta^{d+1})$. In the first case it follows that $F$ is not a direct sum. In the second case it follows that $F = x^d - y^d$ after a suitable change of coordinates. \end{proof} We remark in passing that this proposition is essentially just a restatement of Proposition~\ref{prop: smith and stong original}. We have seen that if $F$ has a linear factor then there is a codimension $1$ subspace $W_1 \subset (A_F)_1$ such that the product of any $d$ elements in $W_1$ is zero in $A_F$, i.e., the $\times$-length of $W_1$ is less than $d$. Conversely if $W_1$ is such a subspace, say $W_1 = x_1^\perp = \langle \alpha_2,\dotsc,\alpha_n \rangle$, then every degree $d$ monomial in $\alpha_2,\dotsc,\alpha_n$ annihilates $F$, so $F$ does not contain any terms that are monomials in just the variables $x_2,\dotsc,x_n$. That is, every monomial appearing in $F$ has at least one factor $x_1$, so $F$ is divisible by $x_1$. Thus the hypotheses of Proposition~\ref{prop: smith and stong original} and Proposition~\ref{prop: smith stong} are equivalent. Similarly, the conclusions are equivalent. 
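Statements such as Proposition~\ref{prop: direct sum poly -> connected sum apolar algebra} are easy to verify by machine. The following pure-Python sketch (the dictionary encoding of polynomials and the helper names are ours, chosen for illustration) checks it on the direct sum $F = x^3 - y^3$: the Hilbert function of $A_F$, computed as catalecticant ranks, agrees with the connected-sum prediction assembled from $A_{x^3}$ and $A_{y^3}$.

```python
# Sanity check of "direct sum -> connected sum" on F = x^3 - y^3 = G - H.
# Polynomials are dicts {exponent tuple: coefficient}; apolarity acts by
# partial differentiation, so dim (A_F)_k is the rank of the k-th
# catalecticant (all degree-k monomial operators applied to F).
from fractions import Fraction
from math import factorial

def monomials(nvars, deg):
    """All exponent tuples of total degree deg in nvars variables."""
    if nvars == 1:
        return [(deg,)]
    return [(i,) + rest
            for i in range(deg + 1)
            for rest in monomials(nvars - 1, deg - i)]

def contract(theta, F):
    """Apply the monomial differential operator theta to F."""
    out = {}
    for mono, c in F.items():
        if all(m >= t for m, t in zip(mono, theta)):
            new = tuple(m - t for m, t in zip(mono, theta))
            scale = 1
            for m, t in zip(mono, theta):
                scale *= factorial(m) // factorial(m - t)
            out[new] = out.get(new, 0) + c * scale
    return {m: c for m, c in out.items() if c}

def rank(rows):
    """Rank of a rational matrix, by Gaussian elimination."""
    rows = [[Fraction(x) for x in r] for r in rows if any(r)]
    r = 0
    for col in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def hilbert_function(F, nvars, d):
    """dim (A_F)_k for k = 0, ..., d, via catalecticant ranks."""
    hf = []
    for k in range(d + 1):
        cols = monomials(nvars, d - k)
        rows = []
        for th in monomials(nvars, k):
            g = contract(th, F)
            rows.append([g.get(m, 0) for m in cols])
        hf.append(rank(rows))
    return hf

hF = hilbert_function({(3, 0): 1, (0, 3): -1}, 2, 3)  # F = x^3 - y^3
hG = hilbert_function({(3,): 1}, 1, 3)                # G = x^3
hH = hilbert_function({(3,): 1}, 1, 3)                # H = y^3
# Connected-sum prediction: 1 in degrees 0 and d, componentwise sums between.
predicted = [1] + [hG[k] + hH[k] for k in range(1, 3)] + [1]
print(hF, predicted)  # both [1, 2, 2, 1]
```

The same helpers can be reused on any small example; exact `Fraction` arithmetic avoids any floating-point rank ambiguity.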
\section{Dimension of direct sum locus and uniqueness}\label{S: dimension of dirsum} We discuss the uniqueness of the subspaces over which $F \in \DirSum$ splits and we compute the dimension of $\DirSum$. \subsection{Uniqueness of direct sum decompositions} Thom conjectured in \cite{MR1067383} that every germ at $0$ of an analytic function $F$ has a unique finest decomposition as a sum of germs of functions in independent variables, up to analytic equivalence. This means that if \[ F = F_1 + F_2 +\dotsb +F_k \] with $F_i$ in independent variables and each $F_i$ cannot be written as such a sum, then Thom expected that for any other such decomposition $F = G_1 + G_2 +\dotsb +G_l$, one must have $k=l$ and there exists an analytic isomorphism near $0$ preserving $F$ and transporting $G_i$ to $F_i$ (up to permuting the $G_i$). This was proved for quasi-homogeneous functions in \cite{MR582497}. One may ask if a homogeneous polynomial has a unique finest decomposition as a sum of polynomials in independent variables. More generally, for a homogeneous polynomial $F$, we say that one direct sum decomposition is finer than another if every direct summand subspace appearing in the second decomposition is a direct sum of subspaces appearing in the first (finer) one. That is, if $F = G_1 + \dotsb + G_k$ with $G_i \in S^d V_i$ for $i=1,\dotsc,k$ and $V_1 \oplus \dotsb \oplus V_k = V$ and also $F = G'_1 + \dotsb + G'_l$ with $G'_j \in S^d V'_j$ for $j=1,\dotsc,l$ and $V'_1 \oplus \dotsb \oplus V'_l = V$, then the direct sum decomposition $G_1 + \dotsb + G_k$ is finer than $G'_1 + \dotsb + G'_l$ if every $V'_j$ is a direct sum of one or more of the $V_i$. Clearly if $F$ is concise then a direct sum decomposition $F = G_1 + \dotsb + G_k$ is maximally fine if and only if each summand $G_i \in S^d V_i$ is concise with respect to $V_i$ and indecomposable as a direct sum. And clearly every concise $F$ has a maximally fine direct sum decomposition.
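For a concrete instance of a maximally fine decomposition, consider the Fermat cubic $F = x^3 + y^3 + z^3$, which decomposes over three one-dimensional subspaces. A small pure-Python check (the exponent-tuple encoding and helper names are ours) confirms that the quadratic part of the apolar ideal has dimension $3$ and contains $\alpha\beta$, $\alpha\gamma$, $\beta\gamma$, whose common zero locus is the three coordinate points, i.e., the three lines of the finest decomposition.

```python
# For the Fermat cubic F = x^3 + y^3 + z^3, the products of distinct dual
# variables annihilate F, and (F^perp)_2 has dimension 6 - dim (A_F)_2 = 3,
# so (F^perp)_2 = <ab, ac, bc>, cutting out the three coordinate points.
from fractions import Fraction
from math import factorial

def monomials(nvars, deg):
    if nvars == 1:
        return [(deg,)]
    return [(i,) + rest
            for i in range(deg + 1)
            for rest in monomials(nvars - 1, deg - i)]

def contract(theta, F):
    out = {}
    for mono, c in F.items():
        if all(m >= t for m, t in zip(mono, theta)):
            new = tuple(m - t for m, t in zip(mono, theta))
            scale = 1
            for m, t in zip(mono, theta):
                scale *= factorial(m) // factorial(m - t)
            out[new] = out.get(new, 0) + c * scale
    return {m: c for m, c in out.items() if c}

def rank(rows):
    rows = [[Fraction(x) for x in r] for r in rows if any(r)]
    r = 0
    for col in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

F = {(3, 0, 0): 1, (0, 3, 0): 1, (0, 0, 3): 1}
square_free = [(1, 1, 0), (1, 0, 1), (0, 1, 1)]
annihilate = all(contract(th, F) == {} for th in square_free)
cat2 = []
for th in monomials(3, 2):
    g = contract(th, F)
    cat2.append([g.get(m, 0) for m in monomials(3, 1)])
dim_perp_2 = 6 - rank(cat2)   # dim T_2 = 6
print(annihilate, dim_perp_2)  # True 3
```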
The uniqueness question asks whether every concise $F$ has a unique maximally fine direct sum decomposition. In fact, quadrics decompose as direct sums over many splittings of the vector space: for example $x^2 + y^2 = (c x + s y)^2 + (s x - c y)^2$ for any $c,s$ such that $c^2+s^2=1$. For this reason we usually restrict to degrees $d \geq 3$, and sometimes $d \geq 4$. In these degrees the question of uniqueness has a positive answer. \begin{thm}[{\cite[Thm.~3.7]{KleppePhD2005}}]\label{thm: unique finext direct sum decomposition} Let $F$ be a concise form of degree $d \geq 3$. Then $F$ has a unique maximally fine direct sum decomposition. \end{thm} In fact \cite[Thm.~3.7]{KleppePhD2005} holds in any characteristic and gives a description of the subspaces appearing in the maximally fine direct sum decomposition. Moreover, \cite[Prop.~3.1]{MR2680196} provides an analogous uniqueness decomposition for connected sums of Gorenstein Artin algebras. However the proof of Theorem~\ref{thm: unique finext direct sum decomposition} requires some preparation which lies outside the scope of this paper. Here we show a weaker statement: essentially that the direct sum decomposition is uniquely determined for forms in an open dense subset of $\DirSum$. This is sufficient for our purposes and does not require many tools other than those already introduced. It is easy for binary forms. \begin{prop} Every direct sum in two variables of degree $d \geq 3$ has a uniquely determined decomposition. \end{prop} \begin{proof} There is a unique (up to scalar) generator of the apolar ideal in degree $2$ (and another in degree $d$). Writing $F = x^d - y^d$, this quadratic apolar generator is $Q = \alpha \beta$, and the pair of subspaces $\langle x \rangle$, $\langle y \rangle$ over which $F$ decomposes is determined as the pair of lines corresponding to the pair of points in projective space $\{ [x], [y] \} = V(Q)$. 
\end{proof} To go further we use the notion of \defining{compressed} algebras, see Definition 3.11 and Proposition 3.12 of \cite{MR1735271}. We recall, not the most general definition, but just the definition in the case that $A = A_F = T/F^\perp$ is a graded Gorenstein Artin algebra of socle degree $d$. In this case $A$ is compressed if, for each $i=0,\dotsc,d$, we have $\dim A_i = \min(\dim S^i(A_1), \dim S^{d-i}(A_1))$. If we have chosen $T$ and the isomorphism $A = A_F = T/F^\perp$ in such a way that $F$ is concise, then $A$ is compressed if and only if $\dim A_i = \min(\dim T_i,\dim T_{d-i})$. When $F \in S^d V$ is general, $A_F$ is compressed \cite[Proposition~3.12]{MR1735271}. (Recall that a general element of a variety is any element of a suitable dense open subset of the variety.) \begin{lemma}\label{lemma: general indecomposable} Let $d \geq 4$ and $n = \dim V \geq 2$. For $F \in S^d V$ general, $F^{\perp}$ has no quadratic generators and $F$ is not decomposable as a direct sum. \end{lemma} \begin{proof} We have that $A_F$ is compressed. This implies $F^\perp$ has no generators in degrees less than or equal to $d/2$, as $\dim (F^\perp)_i = \dim T_i - \dim A_i = 0$ for $0 \leq i \leq d/2$. In particular $(F^\perp)_2 = 0$ so $F$ is not decomposable as a direct sum (see Corollary~\ref{cor_quadratic_generators_of_a_direct_sum}). \end{proof} Table~\ref{table:plane cubics} shows that a general cubic in $n=3$ variables is not decomposable as a direct sum. But a general binary cubic is decomposable as a direct sum: Let $F = F(x,y)$ be a binary cubic with distinct roots. By a linear substitution we may move those roots to be the cubic roots of unity; in these coordinates $F = x^3-y^3$. 
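Compressedness is also easy to observe experimentally. The following pure-Python sketch (the specific six linear forms are our ad hoc choice of "general enough" forms; the helper names are ours) computes the Hilbert function of the apolar algebra of a sum of six fourth powers in three variables. The result $(1,3,6,3,1)$ attains $\min(\dim T_i, \dim T_{4-i})$, so the algebra is compressed, $(F^\perp)_2 = 0$, and $F$ is not a direct sum, illustrating Lemma~\ref{lemma: general indecomposable}.

```python
# A quartic in 3 variables with compressed apolar algebra: F = sum of l^4
# over six sufficiently general linear forms l. The Hilbert function of
# A_F is (1,3,6,3,1) = min(dim T_i, dim T_{4-i}), so (F^perp)_2 = 0.
from fractions import Fraction
from math import factorial

def monomials(nvars, deg):
    if nvars == 1:
        return [(deg,)]
    return [(i,) + rest
            for i in range(deg + 1)
            for rest in monomials(nvars - 1, deg - i)]

def contract(theta, F):
    out = {}
    for mono, c in F.items():
        if all(m >= t for m, t in zip(mono, theta)):
            new = tuple(m - t for m, t in zip(mono, theta))
            scale = 1
            for m, t in zip(mono, theta):
                scale *= factorial(m) // factorial(m - t)
            out[new] = out.get(new, 0) + c * scale
    return {m: c for m, c in out.items() if c}

def rank(rows):
    rows = [[Fraction(x) for x in r] for r in rows if any(r)]
    r = 0
    for col in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def hilbert_function(F, nvars, d):
    hf = []
    for k in range(d + 1):
        cols = monomials(nvars, d - k)
        rows = []
        for th in monomials(nvars, k):
            g = contract(th, F)
            rows.append([g.get(m, 0) for m in cols])
        hf.append(rank(rows))
    return hf

def power4(lin):
    """Expand (c1 x + c2 y + c3 z)^4 as an exponent-tuple dict."""
    out = {}
    for e in monomials(3, 4):
        coef = factorial(4)
        for ci, ei in zip(lin, e):
            coef = coef * ci ** ei // factorial(ei)
        if coef:
            out[e] = out.get(e, 0) + coef
    return out

forms = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
         (1, 1, 1), (1, -1, 2), (2, 1, -1)]
F = {}
for lin in forms:
    for m, c in power4(lin).items():
        F[m] = F.get(m, 0) + c
print(hilbert_function(F, 3, 4))  # [1, 3, 6, 3, 1]
```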
\begin{prop}\label{prop: general dirsum unique decomposition} For $d \geq 4$ and $n \geq 2$, there is a dense subset of $\DirSum$ (that is, a union of a dense subset of each irreducible component of $\DirSum$) such that for $F$ in this subset, $F$ decomposes as a direct sum over a uniquely determined pair of subspaces. \end{prop} \begin{proof} Let $F \in \DirSum$ be arbitrary, $F = G + H$, with $G \in S^d V_1$, $H \in S^d V_2$, $V_1 \oplus V_2 = V$. Now, let $G' \in S^d V_1$, $H' \in S^d V_2$ be general, and $F' = G' + H'$. As $G' \to G$ and $H' \to H$ we have $F' \to F$. Clearly $F'$ decomposes as a direct sum over $V_1 \oplus V_2$; we claim that this is the unique pair of subspaces over which $F'$ decomposes. We have $({F'}^\perp)_2 = ({G'}^\perp)_2 \cap ({H'}^\perp)_2$, see Lemma~\ref{lem: apolar of direct sum}. Since $A_{G'}$ and $A_{H'}$ are compressed, ${G'}^\perp$ and ${H'}^\perp$ have no quadratic generators other than $({G'}^\perp)_2 = V_2^* V^*$ and $({H'}^\perp)_2 = V^* V_1^*$. In particular, then, $({F'}^\perp)_2 = V_1^* V^* \cap V^* V_2^* = V_1^* V_2^*$ and $V_1 \cup V_2$ is the zero locus $ V(({F'}^\perp)_2)$. Hence $V_1$ and $V_2$ are uniquely determined by $F'$, as claimed. This shows that there is a dense subset of $\DirSum$ whose elements decompose as direct sums over uniquely determined pairs of subspaces. \end{proof} On the other hand, there exists an open dense subset of cubic direct sums in three variables for which there is not a unique decomposition as a direct sum over two subspaces. Indeed, a cubic direct sum in three variables can be written as $F = x^3 + G(y,z)$. If $G$ is a general cubic binary form, then (with another change of coordinates) $F = x^3 + y^3 + z^3$. 
However, we do see that $F$ decomposes as a direct sum over $V = \langle x \rangle \oplus \langle y \rangle \oplus \langle z \rangle$, and this finest decomposition is uniquely determined by $F$, as $\{ [x], [y], [z] \} = V(\alpha \beta, \alpha \gamma, \beta \gamma) = V((F^\perp)_2)$. \subsection{Dimension} For $V_1 \oplus V_2 = V$ let $\DirSum^*(V_1,V_2) = S^d(V_1) \oplus S^d(V_2)$; note that this contains degenerate sums involving $0 \in S^d(V_1)$ or $0 \in S^d(V_2)$. For $a+b=n$, $a \leq b$, let $\DirSum^*(a,b)$ be the union of the $\DirSum^*(V_1,V_2)$ for $\dim V_1 = a$, $\dim V_2 = b$. Each $\DirSum^*(a,b)$ is irreducible, as it is the image of the map \[ \begin{split} S^d(\mathbb{C}^a) \times S^d(\mathbb{C}^b) \times \operatorname{GL}_n(\mathbb{C}) &\to S^d(V),\\ (G, H, M) &\mapsto G(m_1,\dotsc,m_a) + H(m_{a+1},\dotsc,m_n) \end{split} \] where the $m_i$ are the columns of the matrix $M$. Of course this map is not injective. For each $a+b=n$ let $\DirSum(a,b) \subset \DirSum^*(a,b)$ be the subset of $F$ which are indeed decomposable as direct sums in which one term involves $a$ variables and the other involves $b$ variables, i.e., discarding those elements of $\DirSum^*(a,b)$ in which one or both terms are identically zero. Further, let $\DirSum^{\circ}(a,b) \subset \DirSum(a,b)$ be the subset of concise forms $F$. Then $\DirSum^{\circ}(a,b)$ is a Zariski open subset of $\DirSum^*(a,b)$, since its complement is defined by rank conditions on the catalecticant $C^1_F$. Now $\DirSum = \bigcup_{a+b=n} \DirSum(a,b)$. We see that $\DirSum$ contains the dense subset $\DirSum^{\circ} = \bigcup_{a+b=n} \DirSum^{\circ}(a,b)$, i.e., a union of a dense open subset of each $\DirSum(a,b)$. \begin{prop}\label{prop: dimension of dirsum} For $d \geq 4$ and $n \geq 3$, $\dim \DirSum^*(a,b) = 2ab + \binom{d+a-1}{a-1} + \binom{d+b-1}{b-1}$ and $\dim \DirSum = 2(n-1) + 1 + \binom{d+n-2}{n-2}$.
\end{prop} \begin{proof} Let $\DirSum^{\circ \circ}(a,b) \subset \DirSum^{\circ}(a,b)$ be the set of $F = G+H$ such that $A_G$ and $A_H$ are compressed. There is a map from $\DirSum^{\circ \circ}(a,b)$ to $G(a,V) \times G(b,V)$, sending $F$ to the pair of subspaces over which it decomposes, whose general fiber has dimension $\binom{d+a-1}{a-1} + \binom{d+b-1}{b-1}$. Since $\dim G(a,V) \times G(b,V) = a(n-a) + b(n-b) = 2ab$, this shows $\dim \DirSum^*(a,b)$ is as claimed. This dimension is maximized when $(a,b) = (1,n-1)$. \end{proof} A more refined dimension formula is found in \cite[Thm.~3.47]{KleppePhD2005}. Moreover an analogous formula for connected sum Gorenstein algebras is in \cite[Prop.~4.4]{MR2738376}. \section{Apolar generators and limits of direct sums}\label{sect: apolar generators and limits of direct sums} Let $F\in S^d V$ be a homogeneous polynomial of degree $d$. Recall that an equipotent generator of $F$ is a minimal generator of the ideal $F^{\perp}$ of degree $d$. In this section we collect results that relate quadratic generators to direct sums and to equipotent apolar generators. Then we relate equipotent apolar generators to limits of direct sums. \subsection{Quadratic generators} Forms with an equipotent apolar generator have similar characteristics to forms which are direct sums. Perhaps the best illustration of this is the behavior of quadratic apolar generators. We first make the following easy observation: \begin{prop}\label{prop: direct sum quadratic generators} If $F$ is a concise direct sum in $n$ variables then $F$ has at least $n-1$ quadratic apolar generators. \end{prop} It was previously shown by Meyer and Smith that $F$ has at least one quadratic apolar generator \cite[Lem.~VI.2.1]{MR2177162}, without assuming $F$ to be concise. Moreover, \cite[Thm.~3.35]{KleppePhD2005} provides a calculation of all graded Betti numbers of $F^{\perp}$. \begin{proof} Say $F \in S^d V$ is a direct sum over $V = V_1 \oplus V_2$ with $\dim V_i = v_i$, $v_1 + v_2 = \dim V = n$.
Then by Corollary~\ref{cor_quadratic_generators_of_a_direct_sum} we have $V_1^* V_2^* \subset (F^\perp)_2$, a subspace of dimension $v_1 v_2 \geq n-1$. By hypothesis there are no linear forms in $F^\perp$ so everything in $(F^\perp)_2$ is a minimal generator. \end{proof} Conversely, if $V = V_1 \oplus V_2$ and $V_1^{*} V_2^{*} \subset F^\perp$ then $F = F_1 + F_2$ where $F_1 \in S^d V_1$, $F_2 \in S^d V_2$, see Corollary~\ref{cor_quadratic_generators_of_a_direct_sum}. If furthermore $F$ is concise then $F_1, F_2 \neq 0$ and $F$ is a direct sum. More generally, if $V = V_1 \oplus \dotsb \oplus V_s$, then $F = F_1 + \dotsb + F_s$ where $F_i \in S^d V_i$ if and only if $\bigoplus_{i < j} V_i^* V_j^* \subset F^\perp$ as quadratic generators, where $V_i^* = \bigcap_{j \neq i} V_j^\perp$. (In coordinates, if each $V_i$ has a basis $x_{i,1},\dotsc,x_{i,n_i}$ then $V_i^*$ is spanned by the dual basis elements $\alpha_{i,1},\dotsc,\alpha_{i,n_i}$.) If this holds and furthermore $F$ is concise then each $F_i \neq 0$ and $F$ is a direct sum of $s$ terms. \begin{cor} If $F$ is a concise form in $n$ variables which is a direct sum of $s \geq 2$ terms then $F^\perp$ has at least $(s-1)(2n-s)/2$ quadratic generators. \end{cor} \begin{proof} When $F$ is a direct sum over $V = V_1 \oplus \dotsb \oplus V_s$, $F^\perp$ contains $\bigoplus_{i < j} V_i^* V_j^*$ as quadratic generators, see Corollary~\ref{cor_quadratic_generators_of_a_direct_sum}. The fewest quadratic generators arise when the summands $V_1,\dotsc,V_s$ have dimensions $1,\dotsc,1,n+1-s$, yielding the statement. \end{proof} Less obviously we have \begin{prop}\label{prop: degree d generator quadratic generators} If $F$ is a concise form in $n$ variables and $F$ has an equipotent apolar generator then $F$ has at least $n-1$ quadratic apolar generators. \end{prop} \begin{proof} Since $F$ is concise, $F^{\perp}$ has no linear generators. 
Then the quadratic elements provided by Lemma~\ref{lem: beta gt 0 then minors} are minimal generators, and there are at least $n-1$ independent ones, for example the $2 \times 2$ minors given by the first and $i$-th columns of \eqref{equ: n times 2 matrix with linear forms} for \mbox{$2 \leq i \leq n$}. \end{proof} Note that the Fermat hypersurface $x_1^d + \dotsb + x_n^d$ has $\binom{n}{2}$ quadratic apolar generators. This is the maximum number possible for smooth forms, as the following easy observation shows. \begin{prop} If $F$ defines a smooth hypersurface of degree $d \geq 3$ in $\mathbb{P}^{n-1}$ then $F^\perp$ has at most $\binom{n}{2}$ quadratic generators. More generally if the set of points in $\mathbb{P}^{n-1}$ at which $F$ vanishes to order $\geq a$ has dimension $k$, then $\dim (F^\perp)_{d-a+1} \leq \binom{n+d-a}{d-a+1} - n + k +1$. \end{prop} \begin{proof} Otherwise $\mathbb{P} F^\perp_{d-a+1} \subset \mathbb{P} T_{d-a+1}$ necessarily has a $(k+1)$-dimensional intersection with the Veronese variety $v_{d-a+1}(\mathbb{P} T_1)$, since it has codimension $\binom{n+d-a}{d-a+1}-n$. For each $[\alpha^{d-a+1}]$ in this intersection, $F$ vanishes to order at least $a$ at $[\alpha]$ by Lemma \ref{lem: apolarity singularity}. This gives a $(k+1)$-dimensional set along which $F$ vanishes to order at least $a$. The first statement follows with $a=d-1$ and $k=-1$ when $V(F)$ is smooth. \end{proof} Having the maximum number of quadratic apolar generators does not characterize Fermat hypersurfaces, however; the concise plane cubics all have the maximum number of quadratic apolar generators, see Table~\ref{table:plane cubics}. \subsection{Equipotent apolar generators and limits of direct sums}\label{sect: maximal degree apolar gens and limits} Fix a degree $d$ and number of variables $n$. Let $V$ be a vector space with $\dim V = n$. In this section we prove first that if $F \in S^d V$ has an equipotent apolar generator then $F$ is a limit of direct sums.
We next prove that if $F$ is also concise then the converse holds. This assumption is needed, by Example~\ref{example: nonconcise limit of dirsum}. \begin{thm}\label{thm: apoequ => limit of dirsum} If $F$ has an equipotent apolar generator then $F$ is a limit of direct sums. Moreover, either $F$ is a direct sum or it can be written in the following normal form, for some choice of basis $x_1,\dotsc,x_k$, $y_1,\dotsc,y_k$, $z_1,\dotsc,z_{n-2k}$ of $V$: \[ F(x,y,z) = \sum x_i \frac{\partial H(y)}{\partial y_i} + G(y,z). \] Here $G \in S^d\langle\fromto{y_1}{y_k},\fromto{z_1}{z_{n-2k}} \rangle$ and $H \in S^d\langle\fromto{y_1}{y_k}\rangle$. \end{thm} \begin{proof} We immediately reduce to the case that $F$ is concise: If $F$ is concise over $W \subset V$, we will write $F$ as a limit of direct sums which are in $S^d W$. We assume $F^\perp$ has a generator in degree $d = \deg F$. By Gorenstein symmetry \eqref{equ: Gorenstein symmetry for maximal degree apo gens}, $\beta_{n-1, n}(F^{\perp})>0$. By Lemma~\ref{lem: beta gt 0 then minors} there are linearly independent linear forms $\ell_1,\dotsc,\ell_k$ for some $0 < k < n$ such that $F^\perp$ contains the $2 \times 2$ minors of the matrix \[ \begin{pmatrix} \alpha_1 & \cdots & \alpha_k & \alpha_{k+1} & \cdots & \alpha_n \\ \ell_1 & \cdots & \ell_k & 0 & \cdots & 0 \end{pmatrix} . \] Let $L : T_1 \to T_1$ be the linear map given by $L(\alpha_i) = \ell_i$ for $1 \leq i \leq k$, $L(\alpha_i) = 0$ for $i > k$. That is, for all $i, j$, $\alpha_i L(\alpha_j) - \alpha_j L(\alpha_i) \in F^\perp$. By linearity, $v L(w) - w L(v) \in F^\perp$ for all $v, w \in T_1$. Let $\tilde{L} : \Wedge{2} T_1 \to F^\perp$ be defined by $\tilde{L}(v \wedge w) = v L(w) - w L(v) \in F^\perp$. Since $0 < k < n$, $L$ is not zero or a scalar multiple of the identity and has a nontrivial kernel. We begin by changing basis in $V^*$ (and dually in $V$) to put $L$ into Jordan normal form. 
It turns out that if $L$ has distinct eigenvalues then $F$ decomposes as a direct sum over the generalized eigenspaces of $L$; otherwise, if $L$ is a nonzero nilpotent matrix, then $F$ is a limit of direct sums. Suppose first that $\lambda_i \neq \lambda_j$ are distinct eigenvalues of $L$. Then there are some positive integers $\nu_i$, $\nu_j$ such that $(L-\lambda_i)^{\nu_i} \alpha_i = (L-\lambda_j)^{\nu_j} \alpha_j = 0$ but $(L-\lambda_i)^{\nu_i-1} \alpha_i, (L-\lambda_j)^{\nu_j-1} \alpha_j \neq 0$. We show that $\alpha_i \alpha_j \in \image(\tilde{L}) \subset F^\perp$, by induction on $\nu_i+\nu_j$. The induction begins with $\nu_i=\nu_j=1$. Then $\tilde{L}(\alpha_i \wedge \alpha_j) = \alpha_i L(\alpha_j) - \alpha_j L(\alpha_i) = (\lambda_j-\lambda_i) \alpha_i \alpha_j$. If, say, $\nu_i > 1$, so $L(\alpha_i) = \lambda_i \alpha_i + \alpha_{i-1}$, then $\tilde{L}(\alpha_i \wedge \alpha_j) = (\lambda_j - \lambda_i)\alpha_i \alpha_j - \alpha_{i-1}\alpha_j$, plus a term $\alpha_i \alpha_{j-1}$ if also $\nu_j > 1$. Since $\alpha_{i-1}\alpha_j$ (and likewise $\alpha_i \alpha_{j-1}$) lies in $\image(\tilde{L})$ by induction, and $\lambda_i \neq \lambda_j$, we conclude $\alpha_i \alpha_j \in \image(\tilde{L}) \subset F^\perp$. This shows that, for the generalized eigenspace decomposition $V^* = \bigoplus_{\lambda} V^*_\lambda$, we have $(V^*_\lambda)(\bigoplus_{\mu \neq \lambda} V^*_\mu) \subset F^\perp$ for each eigenvalue $\lambda$. Thus $F = \sum_\lambda F_\lambda$, $F_\lambda \in S^d V_\lambda$ where $V_\lambda = (V^*_\lambda)^* = \bigcap_{\mu \neq \lambda} (V^*_\mu)^\perp$ by Corollary~\ref{cor_quadratic_generators_of_a_direct_sum}. Since $F$ is concise each $F_\lambda$ must be nonzero (in fact, concise with respect to $V_\lambda$). In this case, then, $F$ is a direct sum. Now we see what happens when $L$ has just one eigenvalue. Then $L$ is nilpotent, since $\ker L \ne 0$. We claim $\image(\widetilde{L^\nu}) \subset F^\perp$ for all $\nu \geq 1$.
Indeed, \[ \begin{split} \widetilde{L^\nu} (\alpha \wedge \beta) &= \alpha \cdot L^\nu(\beta) - \beta \cdot L^\nu(\alpha) \\ &= \alpha L(L^{\nu-1} \beta) - (L^{\nu-1} \beta)(L \alpha) + (L \alpha)(L^{\nu-1} \beta) - \beta (L^{\nu-1} L \alpha) \\ &= \tilde{L}(\alpha \wedge L^{\nu-1} \beta) + \widetilde{L^{\nu-1}}(L(\alpha) \wedge \beta) \end{split} \] which is in $F^\perp$ by induction. Now suppose $L^{\nu+1} = 0 \neq L^{\nu}$; we replace $L$ with $L^{\nu}$, so we can assume $L^2 = 0 \neq L$. Say $k = \rank L$. Then the Jordan normal form of $L$ yields a basis $\alpha_1,\dotsc,\alpha_k$, $\beta_1,\dotsc,\beta_k$, $\gamma_1,\dotsc,\gamma_{n-2k}$ of $V^*$ such that $L(\beta_i) = \alpha_i$ and $L(\alpha_i) = L(\gamma_i) = 0$. So $L$ can be written in block form with respect to this basis, \[ L = \begin{blockarray}{cccc} {\scriptstyle \alpha} & {\scriptstyle \beta} & {\scriptstyle \gamma} & ~ \\ \begin{block}{(ccc)c} 0 & I & 0 & {\scriptstyle \alpha} \\ 0 & 0 & 0 & {\scriptstyle \beta} \\ 0 & 0 & 0 & {\scriptstyle \gamma} \\ \end{block} \end{blockarray}. \] We give $V$ the dual basis $x_1,\dotsc,x_k$, $y_1,\dotsc,y_k$, $z_1,\dotsc,z_{n-2k}$. Now $\tilde{L}(\alpha_i \wedge \beta_j) = \alpha_i \alpha_j \in F^\perp$, and $\tilde{L}(\gamma_i \wedge \beta_j) = \gamma_i \alpha_j \in F^\perp$. Since \[ \langle \alpha_1,\dotsc,\alpha_k \rangle \langle \alpha_1,\dotsc,\alpha_k, \gamma_1,\dotsc,\gamma_{n-2k} \rangle \subset F^\perp , \] we have $F = \sum x_i H_i(y) + G(y,z)$ where $\deg H_i = d-1$, $\deg G = d$. Furthermore $\tilde{L}(\beta_i \wedge \beta_j) = \alpha_j \beta_i - \alpha_i \beta_j \in F^\perp$, so $\partial H_i / \partial y_j = \partial H_j / \partial y_i$. Thus there exists $H(y)$ such that $H_i = \partial H / \partial y_i$. This shows the normal form part of the statement of the theorem. 
Finally we write $F$ as a limit of direct sums as follows: \begin{equation}\label{eq: limit of dirsum} F = \lim_{t \to 0} \frac{1}{t} \Big( H(y_1 + t x_1, \dotsc, y_k + t x_k) + t G(y,z) - H(y) \Big), \end{equation} which for $t \neq 0$ is a direct sum over $\langle y_i + t x_i \rangle \oplus \langle y,z \rangle$. \end{proof} \begin{example} Let $F = xy^{d-1}$ so that $F^\perp = \langle \alpha^2, \beta^d \rangle$. Then $F^\perp$ contains the $2 \times 2$ minors of the matrix \[ \begin{pmatrix} \alpha & \beta \\ 0 & \alpha \end{pmatrix}. \] In the notation of the above proof, $L : T_1 \to T_1$ is given by $L(\alpha) = 0$, $L(\beta) = \alpha$. Then $L$ is nilpotent, $L^2 = 0$. The next step in the proof provides a decomposition $F = x H_1(y)$, where $\deg H_1 = d-1$; so $H_1(y) = y^{d-1}$. We have $H_1 = \partial H/\partial y$ where $H(y) = (1/d) y^d$. In the proof's notation, $G=0$. Of course then \[ F = xy^{d-1} = \lim_{t \to 0} \frac{1}{dt} \left( (y+tx)^d - y^d \right) . \] \end{example} \begin{example} Let $F = x^2 y - y^2 z$. We saw in Example~\ref{example: plane cubic apoequ not dirsum} that $F^\perp = \langle \gamma^2, \alpha \gamma, \alpha^2 + \beta \gamma, \beta^3, \alpha \beta^2 \rangle$, so $F$ has two equipotent apolar generators. And $F^\perp$ contains the $2 \times 2$ minors of the matrix \[ \begin{pmatrix} \alpha & \beta & \gamma \\ -\gamma & \alpha& 0 \end{pmatrix}. \] In the notation of the above proof, the endomorphism corresponding to this matrix is $L : T_1 \to T_1$, given by $L(\alpha) = -\gamma$, $L(\beta) = \alpha$, $L(\gamma) = 0$. Note that $L$ is nilpotent, $L^3 = 0$. We replace $L$ with $L' = L^2$, represented by the matrix \[ \begin{pmatrix} \alpha & \beta & \gamma \\ 0 & -\gamma & 0 \end{pmatrix}. \] Again $L'$ is nilpotent, $L'^2 = 0$. Although the labeling of variables is different than in the proof, the proof's next step yields $F = z H_1(y) + G(x,y)$ where $H_1(y) = -y^2$; apparently $G(x,y) = x^2 y$. Then $H(y) = -(1/3) y^3$. 
We get \[ \begin{split} F &= \lim_{t \to 0} \frac{1}{t} \left( H(y+tz) + tG(x,y) - H(y) \right) \\ &= \lim_{t \to 0} \frac{1}{3t} \left( -(y+tz)^3 + 3 t x^2 y + y^3 \right) , \end{split} \] a limit of direct sums over the subspaces $\langle y+tz \rangle \oplus \langle x,y \rangle$ for $t \neq 0$. \end{example} Theorem~\ref{thm: apoequ => limit of dirsum} has several parallels in \cite{KleppePhD2005}. The linear map $L$ in its proof corresponds to one of the matrices in \cite[Def.~2.14]{KleppePhD2005}. The nilpotent case is covered by \cite[Thm.~4.5]{KleppePhD2005}. In the case of distinct eigenvalues, the projections onto the distinct eigenspaces give the orthogonal idempotent matrices discussed in \cite[Prop.~3.5]{KleppePhD2005}. An extended result in this case is given in \cite[Thm.~3.7]{KleppePhD2005}. Now we prove the converse (cf.\ \cite[Lem.~4.2]{KleppePhD2005}). \begin{thm}\label{thm: limit of s fold direct sum then s-1 max deg apo gens} If $F$ is a concise limit of $s$-fold direct sums then $F$ has at least $s-1$ equipotent apolar generators. \end{thm} \begin{proof} Suppose $F=F_0$ is a concise limit of $s$-fold direct sums, $F_0 = \lim F_t$. Let $J$ be the flat limit of the ideals $F_t^\perp$. We have $J \subset F^{\perp}$, since the apolarity pairing depends continuously on $F_t$. Indeed, for $\Theta \in J$, $\Theta = \lim \Theta_t$ for $\Theta_t \in F_t^\perp$, so $\Theta_t \aa F_t = 0$ for $t \neq 0$; hence $\Theta \aa F = \lim \Theta_t \aa F_t = 0$, so $\Theta \in F^\perp$. By Proposition~\ref{prop: semicontinuity of graded Betti numbers}, upper-semicontinuity of graded Betti numbers, $\beta_{n-1,n}(J) \geq s-1$. Now there is no general inequality between the graded Betti numbers of an arbitrary homogeneous ideal $I$ and a homogeneous subideal $J \subset I$; $\beta_{i,j}(I) > \beta_{i,j}(J)$, $\beta_{i,j}(I) < \beta_{i,j}(J)$, and $\beta_{i,j}(I) = \beta_{i,j}(J)$ all are possible.
However in this simple case we do have the inequality we are looking for by Lemma \ref{lem: beta inequality subideal}. That is, $\beta_{n-1,n}(F^{\perp}) \ge \beta_{n-1,n}(J) \geq s-1$, since $F$ is concise, meaning $(F^{\perp})_1=0$. By Gorenstein symmetry \eqref{equ: Gorenstein symmetry for maximal degree apo gens} there are at least $s-1$ minimal generators of degree $d$ in $F^{\perp}$. \end{proof} This completes the proof of Theorem~\ref{thm: apoequ = closure of dirsum}, which comprises Theorems~\ref{thm: apoequ => limit of dirsum} and \ref{thm: limit of s fold direct sum then s-1 max deg apo gens}. \begin{example}\label{example: form of plane direct sums} Let $F$ be a concise plane curve ($n=3$) of degree $d$ having an equipotent apolar generator. Either $F$ is a direct sum, $F = x^d + G(y,z)$, or else $F$ is a limit of direct sums of the form $F = x y^{d-1} + G(y,z)$ by the normal form part of Theorem~\ref{thm: apoequ => limit of dirsum}. Note that if $G(y,z)$ includes terms $a y^{d-1} z + b y^d$ then replacing $x$ with $x + az + by$ gives us \[ F = x y^{d-1} + z^2 G_{d-2}(y,z) \] where $\deg G_{d-2} = d-2$. Conversely if $F$ is of this form then $F$ is a limit of direct sums and has an equipotent apolar generator. Thus a concise plane curve $F$ is a limit of direct sums and has an equipotent apolar generator if and only if, after a linear change of coordinates, either $F = x^d + G(y,z)$ or $F = x y^{d-1} + z^2 G_{d-2}(y,z)$. \end{example} \subsection{Lower bound for degree of apolar generators}\label{sect: lower bound on degree} Here we prove Theorem~\ref{thm: apolar generator degree lower bound}, a lower bound for the maximum degree of the apolar generators of a form $F$ in terms of the degree $d$ of $F$ and the number $n$ of variables: we show that $F^\perp$ always has a minimal generator of degree at least $(d+n)/n$. 
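The bound is attained: for the monomial $G = x_1 x_2 x_3$ (so $d = n = 3$) the apolar ideal is $\langle \alpha_1^2, \alpha_2^2, \alpha_3^2 \rangle$, generated entirely in degree $2 = (d+n)/n$ (the apolar ideal of a monomial $x^a$ is generated by the powers $\alpha_i^{a_i+1}$). The short pure-Python sketch below (encoding and helper names ours) verifies that the three squares annihilate $G$ and exhaust the quadratic part of $G^\perp$.

```python
# Sharpness of the bound: for G = x1 x2 x3 (d = n = 3) the apolar ideal
# is <a1^2, a2^2, a3^2>, so every minimal generator has degree (d+n)/n = 2.
# We check that the squares annihilate G and that
# dim (G^perp)_2 = dim T_2 - dim (A_G)_2 = 6 - 3 = 3.
from fractions import Fraction
from math import factorial

def monomials(nvars, deg):
    if nvars == 1:
        return [(deg,)]
    return [(i,) + rest
            for i in range(deg + 1)
            for rest in monomials(nvars - 1, deg - i)]

def contract(theta, F):
    out = {}
    for mono, c in F.items():
        if all(m >= t for m, t in zip(mono, theta)):
            new = tuple(m - t for m, t in zip(mono, theta))
            scale = 1
            for m, t in zip(mono, theta):
                scale *= factorial(m) // factorial(m - t)
            out[new] = out.get(new, 0) + c * scale
    return {m: c for m, c in out.items() if c}

def rank(rows):
    rows = [[Fraction(x) for x in r] for r in rows if any(r)]
    r = 0
    for col in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

G = {(1, 1, 1): 1}
squares = [(2, 0, 0), (0, 2, 0), (0, 0, 2)]
annihilates = all(contract(th, G) == {} for th in squares)
cat2 = []
for th in monomials(3, 2):
    g = contract(th, G)
    cat2.append([g.get(m, 0) for m in monomials(3, 1)])
dim2 = 6 - rank(cat2)
print(annihilates, dim2)  # True 3
```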
\begin{lemma}\label{lemma: socle degree of complete intersection} Suppose $I \subset T$ is a homogeneous complete intersection ideal of codimension $n$. Then $I = G^\perp$ for some $G \in S$. If the minimal generators of $I$ are homogeneous of degrees $\fromto{\delta_1}{\delta_n}$, then $\deg G = \delta_1 + \dots + \delta_n - n$. \end{lemma} \begin{proof} The existence of $G$ follows from Theorem 21.6 of \cite{eisenbud:comm-alg}. The degree of $G$ is equal to the socle degree of $A_G \cong T/I$, which is $\sum_{i=1}^n (\delta_i - 1)$ by, for example, Exercise 21.16 of \cite{eisenbud:comm-alg}. \end{proof} We will deduce Theorem~\ref{thm: apolar generator degree lower bound} from the following slightly stronger proposition. \begin{prop}\label{prop: apolar generator degree lower bound strengthening} Let $F$ be a homogeneous form of degree $d$ in $n$ variables. Suppose $F^\perp$ has minimal generators $\Theta_1,\dotsc,\Theta_s$ such that $\deg \Theta_i = d_i$ for each $i$, and $d_1 \leq \dotsb \leq d_s$. Let $\delta$ be an integer such that the ideal $(F^\perp)_{\leq \delta}$ is $\mathfrak{m}$-primary. Assume $d_k = \delta < d_{k+1}$ or $k = s$ and $\delta = d_s$; necessarily $k \geq n$. Then $d \leq d_k + d_{k-1} + \dotsb + d_{k-n+1} - n$. \end{prop} \begin{proof} For each $i = k, k-1, \dotsc, k-n+1$, let $\Psi_i \in (F^\perp)_{d_i}$ be general. Since $(F^\perp)_{\leq d_k}$ is $\mathfrak{m}$-primary, $(F^\perp)_{d_i}$ is a basepoint free linear series on $V(\Psi_k,\dotsc,\Psi_{i+1})$ for each $i$. Then by Bertini's Theorem \cite[Thm.~I.6.3]{MR725671} and downward induction on $i$, the ideal $(\Psi_k,\Psi_{k-1},\dotsc,\Psi_{i})$ is a complete intersection for each $i \geq k-n+1$. In particular $I = \langle \Psi_k,\dotsc,\Psi_{k-n+1} \rangle$ is a complete intersection of codimension $n$. By Lemma~\ref{lemma: socle degree of complete intersection}, $I=G^{\perp}$ for a form $G$ of degree $d_k+\dotsb+d_{k-n+1}-n$. 
And $F = \Theta \aa G$ for some $\Theta \in T$ by Lemma~\ref{lemma: apolar containment}, so $\deg F \leq \deg G$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm: apolar generator degree lower bound}] Let $d_1 \leq \dotsb \leq d_s$ be the degrees of the minimal generators of $F^\perp$, as in the previous proposition. Then regardless of the value of $k$ we have $d \leq d_k + \dotsb + d_{k-n+1} - n \leq n \delta - n$. \end{proof} Suppose $F$ is a concise homogeneous polynomial in $n$ variables for which $\delta =2$, i.e., $F^{\perp}$ is generated by quadrics. Then Theorem~\ref{thm: apolar generator degree lower bound} implies $d = \deg F \le n$. Moreover, the proof of the theorem shows $F = \Theta \aa G$ for some $\Theta \in T$ and $G\in S$ such that $\deg G = n$ and $G^{\perp}$ is a complete intersection of $n$ quadrics. For example, let $F$ be a determinant of a generic $k \times k$ matrix, \[ F= \det \begin{pmatrix} x_{11} & \cdots & x_{1k} \\ \vdots & & \vdots \\ x_{k1} & \cdots & x_{kk} \end{pmatrix}. \] In this case $\deg F =k$ and the number of variables is $k^2$. Then as $G$ we may take the monomial $\prod_{i,j=1}^k x_{ij}$. However it is not true that every homogeneous polynomial of the form $\Theta \aa G$ must have $(\Theta \aa G)^{\perp}$ generated by quadrics. For example, let $G = x_1 \dotsm x_6$, let $\Theta = \alpha_1 \alpha_2 \alpha_3 - \alpha_4 \alpha_5 \alpha_6$, and let $F = \Theta \aa G = x_4 x_5 x_6 - x_1 x_2 x_3$. Then $G^\perp$ is a complete intersection of quadrics but $F^\perp$ has a minimal generator of degree $3$ by Theorem~\ref{thm: direct sum generator}, namely, $\alpha_4 \alpha_5 \alpha_6 + \alpha_1 \alpha_2 \alpha_3$. The problem of classification of all homogeneous polynomials $F$ with $F^{\perp}$ generated by quadrics appears to be difficult. 
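As a sanity check (ours, not part of the original text), the apolar computations in the last example can be verified in Python with sympy, using that the dual variables $\alpha_i$ act as $\partial/\partial x_i$:

```python
# Verify that Theta acts on G = x1...x6 to give F = x4 x5 x6 - x1 x2 x3,
# and that a4 a5 a6 + a1 a2 a3 annihilates F (the degree-3 minimal
# generator of F^perp mentioned in the text).  Dual variables act as
# partial derivatives.
import sympy as sp

x = sp.symbols('x1:7')  # x1, ..., x6
G = x[0] * x[1] * x[2] * x[3] * x[4] * x[5]

def act(dual_monomial, H):
    # Apply the differential operator given by a product of dual variables.
    for v in dual_monomial:
        H = sp.diff(H, v)
    return H

F = x[3] * x[4] * x[5] - x[0] * x[1] * x[2]
theta_G = act((x[0], x[1], x[2]), G) - act((x[3], x[4], x[5]), G)
print(sp.expand(theta_G - F))                                   # 0
print(act((x[3], x[4], x[5]), F) + act((x[0], x[1], x[2]), F))  # 0
```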
We expect that the answer must be complex: if $F$ is the permanent of a generic symmetric matrix then $F^\perp$ has minimal generators of degree $3$, while the apolar ideal of the determinant of the same matrix has only quadratic generators, see Theorems 3.23 and 3.11 of \cite{Shafiei:2013fk}. However, the above discussion shows that it might be helpful to first classify $G$ with $d= \deg G = n$ and $G^{\perp}$ generated by quadrics. Even the classification of $G$ is difficult. For $d=n=2$, any rank two quadric $G$ has $G^{\perp}$ generated by quadrics. For $d=n=3$, the plane cubic $G$ has $G^{\perp}$ generated by quadrics if and only if it is concise and has no degree $3$ minimal generators, equivalently, $G$ is not a limit of direct sums, see Table~\ref{table:plane cubics}. In particular, the general plane cubic has its apolar ideal generated by quadrics. For $d=n\ge 4$, the general form produces a \emph{compressed algebra} (see proof of Lemma~\ref{lemma: general indecomposable}), and thus has no quadratic generators in the apolar ideal. \section{Variation in families}\label{sect: variation in families} If $F_t \to F$, it does not necessarily follow that $F_t^\perp \to F^\perp$ or $A_{F_t} \to A_F$ as flat families. For example, $x^d + t y^d \to x^d$ as $t \to 0$, but $(x^d)^\perp = \langle \alpha^{d+1},\beta \rangle$ is not the flat limit of the ideals $(x^d + ty^d)^\perp = \langle t\alpha^d - \beta^d, \alpha\beta \rangle$. This can also occur if all polynomials in the family are concise, for example $x^d + y^d + t(x+y)^d \to x^d + y^d$ as $t \to 0$. When $\{F_t\}$ is a family of polynomials such that $\{F_t^\perp\}$ is a flat family, we say $\{F_t\}$ is an \defining{apolar family} and $F_t \to F_0$ is an \defining{apolar limit}. It is equivalent to say $\{A_{F_t}\}$ is a flat family. 
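The failure of flatness in the first example can be checked concretely. Below is a sketch (ours, not part of the paper) in Python with sympy, using the standard fact that $h_{A_F}(k)$ equals the rank of a catalecticant, i.e.\ the dimension of the span of the order-$(d-k)$ partial derivatives of $F$; for $F_t = x^3 + t y^3$ the Hilbert function jumps at $t = 0$:

```python
# Hilbert function of A_F computed as h(k) = dim span{D^a F : |a| = d-k},
# i.e. ranks of catalecticants.  For F_t = x^3 + t*y^3 the values at
# t != 0 differ from those at t = 0, so the family is not apolar.
import sympy as sp
from itertools import combinations_with_replacement

x, y = sp.symbols('x y')

def hilbert_function(F, variables, d):
    hf = []
    for k in range(d + 1):
        partials = []
        for combo in combinations_with_replacement(variables, d - k):
            H = F
            for v in combo:
                H = sp.diff(H, v)
            partials.append(sp.Poly(H, *variables))
        monos = sorted({m for p in partials for m in p.monoms()})
        M = sp.Matrix([[p.nth(*m) for m in monos] for p in partials])
        hf.append(M.rank())
    return hf

print(hilbert_function(x**3 + y**3, (x, y), 3))  # [1, 2, 2, 1]  (any t != 0)
print(hilbert_function(x**3, (x, y), 3))         # [1, 1, 1, 1]  (t = 0)
```

Since the Hilbert function is not locally constant at $t = 0$, the family is not apolar.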
Since we only consider homogeneous polynomials, $\{F_t\}$ is an apolar family if and only if the Hilbert functions of the $A_{F_t}$ are locally constant \cite[Exer.~20.14]{eisenbud:comm-alg}. When this holds, in particular their sum, the length of $A_{F_t}$, is locally constant. On the other hand, the values of the Hilbert function are lower semicontinuous in $t$ since they are the ranks of catalecticants, which are linear maps depending regularly on $t$. Thus if $\length(A_{F_t})$ is constant in $t$ then the Hilbert function must also be constant. We underline that this implication is dramatically false if we consider flat families of apolar algebras of non-homogeneous polynomials, see for instance \cite{MR713096} or \cite{casnati_jelisiejew_notari_Hilbert_schemes_via_ray_families}. \begin{remark} Families of homogeneous polynomials with constant Hilbert function are intensively studied. If $T$ is a finite sequence of positive integers, then the set of all homogeneous polynomials of degree $d$ with Hilbert function $T$ is denoted in the literature by $Gor(T)$, see for instance \cite{MR1735271}. In particular, a family $F_t$ is an apolar family if and only if for some $T$ we have $F_t \in Gor(T)$ for all $t$. \end{remark} \begin{prop}\label{prop: cubic apolar limit} Every concise limit of cubic forms is an apolar limit. \end{prop} \begin{proof} The Hilbert function of the apolar algebra of any concise cubic form in $n$ variables is $1,n,n,1$, so every concise cubic form has apolar length $2n+2$ and every family of concise cubic forms is automatically an apolar family. \end{proof} Proposition \ref{prop: cubic apolar limit} shows that when $d=3$, $\ApoLim \cap \Con = \ApoEqu \cap \Con$. We will show that for some $n$ and sufficiently large $d$, $\ApoLim \cap \Con \subsetneqq \ApoEqu \cap \Con$. Then we will show that for $n=3$ once again $\ApoLim \cap \Con = \ApoEqu \cap \Con$.
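The Hilbert function claim in the proof of Proposition~\ref{prop: cubic apolar limit} can be confirmed symbolically; a sketch (ours, for illustration) for the Fermat cubic in $n = 3$ variables, computing $h_{A_F}(k)$ as the dimension of the span of the order-$(3-k)$ partials:

```python
# Hilbert function (1, n, n, 1) of the apolar algebra of the Fermat cubic
# x^3 + y^3 + z^3 (n = 3), with h(k) = dim span{D^a F : |a| = 3 - k}.
import sympy as sp
from itertools import combinations_with_replacement

x, y, z = sp.symbols('x y z')
F = x**3 + y**3 + z**3

hf = []
for k in range(4):
    partials = []
    for combo in combinations_with_replacement((x, y, z), 3 - k):
        H = F
        for v in combo:
            H = sp.diff(H, v)
        partials.append(sp.Poly(H, x, y, z))
    monos = sorted({m for p in partials for m in p.monoms()})
    hf.append(sp.Matrix([[p.nth(*m) for m in monos] for p in partials]).rank())
print(hf)  # [1, 3, 3, 1]
```

The apolar length here is $2n + 2 = 8$, matching the count used in the proof.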
First, we introduce cactus rank and use it to examine some cases in which we can show that a form $F$ has numerous equipotent apolar generators. \subsection{Cactus rank and number of maximal degree apolar generators} \label{sect: cactus rank} In this section we examine some cases in which we can show that a form $F$ not only has a maximal degree apolar generator, but in fact has several. Throughout this section we assume $n = \dim V \geq 2$. \begin{prop}\label{prop: rank n is direct sum} Suppose $F$ is concise and in addition the Waring rank $r(F)=n$. Then $F$ is a direct sum of $n$ terms and $F$ has at least $n-1$ equipotent apolar generators. Furthermore if $d>2$, $F$ has exactly $n-1$ equipotent apolar generators. \end{prop} \begin{proof} Since $F$ is concise with $r(F)=n$, we may write $F = \ell_1^d + \dotsb + \ell_n^d$ with the linear forms $\ell_1,\dotsc,\ell_n$ linearly independent. Thus, up to a choice of coordinates, $F$ is the equation of the Fermat hypersurface. \end{proof} \begin{prop}\label{prop: border rank n is a limit of direct sums} Suppose $F$ is concise and in addition the border rank $br(F)=n$. Then $F$ is a limit of direct sums of $n$ terms and $F$ has at least $n-1$ equipotent apolar generators. \end{prop} \begin{proof} $F$ is a limit of polynomials as in Proposition~\ref{prop: rank n is direct sum}. The second statement follows by Theorem~\ref{thm: limit of s fold direct sum then s-1 max deg apo gens}. \end{proof} There is another notion of rank of polynomials, namely the \defining{cactus rank} \cite{MR2842085}. It is also called the \defining{scheme length} in \cite[Definition~5.1]{MR1735271}. The cactus rank of $F \in S^d V$ is the minimal length of a zero dimensional subscheme $R \subset \mathbb{P} V$ such that $[F] \in \langle v_d(R) \rangle$, or equivalently $I(R) \subset F^{\perp}$ \cite[Prop.~3.4(vi)]{MR3121848}. We prove an analogue of Propositions~\ref{prop: rank n is direct sum} and \ref{prop: border rank n is a limit of direct sums} for cactus rank: \begin{thm}\label{thm: cactus rank n is a limit of direct sums} Suppose $F$ is concise and in addition the cactus rank $cr(F)=n$.
Then $F$ is a limit of direct sums and $F$ has at least $n-1$ equipotent apolar generators. \end{thm} Unlike in Propositions \ref{prop: rank n is direct sum} and \ref{prop: border rank n is a limit of direct sums}, we do not claim that $F$ is a limit of $n$-fold direct sums. This theorem is proven in three steps. The first step (Lemma~\ref{lemma: cactus rank n and d ge 2n}) is the same statement, but with an extra assumption that $d \ge n+1$. In the second step we use the first step to prove a property about syzygies of zero dimensional schemes embedded in a concisely independent way, which might be of interest on its own. In the final step we use the syzygies of schemes to prove the theorem. To obtain a number of minimal generators in some degree we compare two ideals $J \subset I \subset T$ (for example $I= F^{\perp}$), where $J$ is generated by $I_{\le \delta}$. Then we compare the Hilbert functions of $T/I$ and $T/J$. The smallest integer $d$ where $h_{T/I}(d) \ne h_{T/J}(d)$ is a degree in which there must be a minimal generator of $I$; in fact there are at least $h_{T/J}(d)-h_{T/I}(d)$ minimal generators of degree $d$. \begin{lemma}\label{lemma: cactus rank n and d ge 2n} With $F$ as in Theorem~\ref{thm: cactus rank n is a limit of direct sums}, if in addition $d \ge n+1$, then $h_{A_F}$, the Hilbert function of $A_F$, is $(1,n,n,\dotsc, n,1, 0, \dotsc)$ and $F^{\perp}$ has exactly $n-1$ minimal generators in degree $d$. \end{lemma} \begin{proof} Consider an ideal $I \subset T$ defining the scheme realizing the cactus rank of $F$. That is, $I$ is a saturated homogeneous ideal, $I \subset F^{\perp}$ and $B= T/I$ is a graded algebra with constant Hilbert polynomial equal to $cr(F)=n$. Let $I' = I_{\leq n}$ be the ideal generated by the forms in $I$ of degree less than or equal to $n$, and let $B' = T/I'$. First note that we have the following inequality of Hilbert functions: $h_{A_F} \le h_B \le n$. Since $F$ is concise, $n= h_{A_F}(1)=h_{A_F}(d-1)$. 
Thus $h_B(1) =n$, and since $h_B$ is nondecreasing \cite[Rem.~2.8]{MR2309930}, we must have $h_B(i) =n$ for all $i \ge 1$. In particular $h_{B'}(n) = h_B(n) = n$ since $I'_n = I_n$, and $h_{B'}(n+1) \geq h_B(n+1) = n$. On the other hand, by Macaulay's Growth Theorem \cite[Thm~3.3]{MR1648665} or \cite[Cor.~5.1]{MR3121848} we have $h_{B'}(n+1) \leq h_{B'}(n)$. Thus $B'$ realizes the maximal possible growth of a Hilbert function from $h_{B'}(n) = h_{B'}(n+1)$ onwards and hence by Gotzmann's Persistence Theorem \cite[Thm~3.8]{MR1648665} or \cite[Cor.~5.3]{MR3121848} we have $h_{B'}(i) = n = h_B(i)$ for all $i \geq n$. Thus $I'_i = I_i$ for all $i \geq n$. This shows that the ideal $I$ is generated by $I_{\le n}$. By Macaulay's Growth Theorem we have $n = h_{A_F}(d-1) \leq h_{A_F}(d-2) \leq \dotsb \leq h_{A_F}(n) \leq n$. In particular, $I$ and the ideal generated by $I_n = (F^{\perp})_{n}$ agree in degrees $n, n+1, \dotsb, d-1$. However $h_{A_F}(d) = 1$ while $h_B(d) = n$. Thus $F^{\perp}$ needs exactly $n-1$ minimal generators in degree $d$. Moreover, $I$ is saturated and $F^{\perp}_{\le n}$ is a saturation of $(F^{\perp})_{n} = I_{n}$, hence: \[ F^{\perp}_{\le n} = ((F^{\perp})_{n})^{sat} = (I_{n})^{sat} \subset I^{sat} = I \subset F^{\perp}. \] Therefore $F^{\perp}$ and $I$ agree up to degree $d-1$, and the Hilbert function of $A_F$ is $(1,n,n,\dots, n, 1)$. \end{proof} We will consider $R \subset \mathbb{P} V$, a zero dimensional locally Gorenstein subscheme. Such schemes arise naturally when considering cactus rank. Namely it follows from \cite[Lem.~2.3]{MR3121848} that if $cr(F) =n$, then there exists a zero dimensional locally Gorenstein subscheme $R$ such that $\length R =n$ and $F \in \langle v_d(R) \rangle$. Here we will study such $R$ which are embedded into $\mathbb{P} V$ in \emph{a concisely independent way}, that is $\length R = \dim V$ and $R$ is not contained in any hyperplane. 
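A basic example (ours, for illustration): if $e_1, \dotsc, e_n$ is a basis of $V$, then the reduced scheme $R = \{[e_1], \dotsc, [e_n]\} \subset \mathbb{P} V$ is embedded in a concisely independent way, since $\length R = n = \dim V$ and no hyperplane contains all of the points $[e_i]$.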
Note that every finite scheme can be embedded in a concisely independent way: By, for example, \cite[Lem.~2.3]{MR3092255}, if $R \subset \mathbb{P} V$ is a finite scheme of length $r$, then the Veronese re-embedding $v_{r-1}(R) \subset \mathbb{P}(S^{r-1} V)$ spans an $(r-1)$-dimensional projective subspace in which $R$ is embedded concisely independently. \begin{example}\label{example_concise scheme} Suppose $G$ is a concise cubic in $6$ variables $\fromto{x_1}{x_6}$. Let $R = \Spec A_G$ be the zero-dimensional Gorenstein scheme of length $14$ determined by $G$. We will now describe in some detail a concisely independent embedding of $R$ into $\mathbb{P}^{13} = \mathbb{P} V$, where $V = \langle \fromto{x_1}{x_6}, \fromto{y_1}{y_6}, z, w \rangle$, and $T= \mathbb{C}[\fromto{\alpha_1}{\alpha_6}, \fromto{\beta_1}{\beta_6}, \gamma, \delta ]$ is the coordinate ring. This embedding will play a role in Example~\ref{example_limit_polynomial_but_not_apolar_limit}, our explicit example of a homogeneous form which is a limit of direct sums but not an apolar limit of direct sums. Consider the ideal $I$ generated by: \begin{enumerate} \item \label{item_generators_of_I__apolar_to_G} The apolar ideal of $G$ in $\mathbb{C}[\fromto{\alpha_1}{\alpha_6}]$. This provides $15$ quadric minimal generators and perhaps some cubics. \item \label{item_generators_of_I__alpha_beta} 30 quadrics $\alpha_i\beta_j$ for $i \ne j$, $i,j \in \setfromto{1}{6}$. \item \label{item_generators_of_I__gamma_smthg} 13 quadrics $\alpha_i \gamma$, $\beta_i \gamma$, $\gamma^2$ for $i \in \setfromto{1}{6}$. \item \label{item_generators_of_I__beta_beta} 21 quadrics $\beta_i \beta_j$ for $i,j \in \setfromto{1}{6}$. \item \label{item_generators_of_I__alpha_beta_minus_gamma_delta} 6 quadrics $\alpha_i \beta_i - \gamma \delta$ for $i \in \setfromto{1}{6}$. 
\item \label{item_generators_of_I__Theta_minus_beta_delta} 6 quadrics obtained in the following way: For $i \in \setfromto{1}{6}$ let $\Theta_i \in \mathbb{C}[\fromto{\alpha_1}{\alpha_6}]$ be a quadric such that $\Theta_i \aa G = x_i$ (these quadrics exist, since $G$ is concise with respect to $\langle \fromto{x_1}{x_6}\rangle$). Then include in $I$ the following quadrics: $\Theta_i - \beta_i\delta$. \end{enumerate} Altogether we obtain $91$ quadrics and perhaps some cubics (depending on $G$). The radical of the homogeneous ideal $I$ is: \[ \sqrt{I} = \langle \fromto{\alpha_1}{\alpha_6}, \fromto{\beta_1}{\beta_6}, \gamma \rangle. \] To see this, note that $\alpha_i^4 \in I$ by \ref{item_generators_of_I__apolar_to_G}, $\beta_i^2 \in I$ by \ref{item_generators_of_I__beta_beta}, $\gamma^2 \in I$ by \ref{item_generators_of_I__gamma_smthg}. Thus the projective scheme defined by $I$ is supported at the single point $[w] \in \mathbb{P} V$, which is contained in the open subset $\delta \ne 0$. Evaluating the generators of $I$ at $\delta =1$, the reader can easily check that the scheme supported at $[w]$ is isomorphic to $R$, and also that there are no linear forms in $T_1$ that vanish on this scheme. Also it is not difficult to see that the Hilbert function of $T/I$ is $(1, 14, 14, 14, \dotsc)$. We combine this information with the fact that the Hilbert function of a saturated ideal is non-decreasing, see \cite[Rem.~2.8]{MR2309930}. We conclude that $I$ is the saturated ideal defining $R \subset \mathbb{P} V$, and the embedding of $R$ is concisely independent, because $I_1=0$ and $\length R = \dim V =14$. Finally, we remark that for general $G \in S^3 \mathbb{C}^6$, the scheme $R$ is the shortest non-smoothable Gorenstein scheme. See \cite[Lemma~6.21]{MR1735271}, where it is shown that $R$ is non-smoothable, and \cite{Casnati20141635}, where it is shown that all shorter Gorenstein schemes are smoothable.
\end{example} \begin{prop}\label{prop: concisely independent then beta ge n-1} Suppose $R$ is a finite Gorenstein scheme as above, $\length R =n$ and $R \subset \mathbb{P} V$ is concisely independent. Let $J \subset T$ be the saturated homogeneous ideal of $R$. Then $\beta_{n-1,n}(J) \ge n-1$. \end{prop} \begin{proof} Consider a general $F\in \langle v_d (R) \rangle$ for some $d \ge 2n$. It follows from \cite[Lem.~2.3]{MR3121848} that $F$ is not contained in $\langle v_d(Q) \rangle$ for any $Q \subsetneqq R$. It further follows from \cite[Cor.~2.7]{MR3092255} that $R$ is determined by $F$. Namely, $R$ is the unique subscheme of $\mathbb{P} V$ of length $n$ such that $F \in \langle v_d (R) \rangle$. Also $cr(F) =n$. By \cite[Thm.~1.6]{MR3121848}, $J = (F^{\perp})_{\le n}$. In particular, $h_{A_F}(1) = h_{T/J}(1) =n$, that is $F$ is concise. By Lemma~\ref{lemma: cactus rank n and d ge 2n} and Gorenstein symmetry \eqref{equ: Gorenstein symmetry for maximal degree apo gens} we have $\beta_{n-1, n}(F^{\perp}) \ge n-1$. The syzygies involve only quadratic generators of $F^{\perp}$, so they also exist in $(F^{\perp})_{\le n} = J$, and $\beta_{n-1, n}(J) \ge n-1$ as claimed. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm: cactus rank n is a limit of direct sums}] Let $R\subset \mathbb{P} V$ be a locally Gorenstein scheme of length $n$ such that $F \in \langle v_d(R) \rangle$, whose existence is guaranteed by the definition of cactus rank and \cite[Lem.~2.3]{MR3121848}. Let $J \subset T$ be the homogeneous saturated ideal defining $R$. We have $J \subset F^{\perp}$, and thus \[ \beta_{n-1, n}(F^{\perp}) \ge \beta_{n-1, n}(J) \ge n-1 \] by Lemma \ref{lem: beta inequality subideal} and Proposition~\ref{prop: concisely independent then beta ge n-1}. By Gorenstein symmetry \eqref{equ: Gorenstein symmetry for maximal degree apo gens} the ideal $F^{\perp}$ must have at least $n-1$ minimal generators of degree $d$, as claimed. 
\end{proof} \subsection{Cleavable and uncleavable schemes}\label{sect: non-apolar limit} In this section we give examples of limits of direct sums which cannot be obtained as apolar limits of direct sums; these are points in $\ApoEqu$ that are not in $\ApoLim$. Recall that we are working over the field $\mathbb{C}$, in particular in characteristic $0$. Let $n_0$ be the minimal integer such that there exists a non-smoothable locally Gorenstein scheme of length $n_0$. As we will see, for our purposes we do not need to know the value of $n_0$, just that there is such a value. And in fact the value of $n_0$ may depend on the characteristic. Although the value of $n_0$ does not matter for us, it turns out that in characteristic $0$ this value has been determined very recently. It is well known that $12 \le n_0 \le 14$, see \cite[Sect.~6, Sect.~8.1]{MR3121848} for an overview and references, and the recent work \cite{Casnati20141635} proving $n_0\ne 11$. Even more recently, Gianfranco Casnati, Joachim Jelisiejew, and Roberto Notari have shown that $n_0=14$ \cite[Thms A and B]{casnati_jelisiejew_notari_Hilbert_schemes_via_ray_families}. That is, they prove that all Gorenstein schemes of length at most $13$ are smoothable. This has been predicted by Anthony Iarrobino (private communication). See also \cite{MR3225123} for a related partial result. For the remainder of this section we work in characteristic $0$, so the reader may take $n_0$ to be $14$. Nevertheless, to emphasize that the particular value does not matter and in order to write statements which are easier to generalize to other characteristics---and also because the proof that $n_0=14$ in characteristic $0$ was in fact found after a first version of this paper was posted---we continue to write $n_0$. \begin{prop}\label{prop: cr n br gt n} Let $n = \dim V \ge n_0$, and $d \ge 2n-1$. Then there exist concise polynomials $F \in S^d V$ with $cr(F) =n$, but $br(F) > n$. 
\end{prop} \begin{proof} Let $R$ be any non-smoothable Gorenstein scheme of length $n$. Embed $R \subset \mathbb{P} V$ in a concisely independent way. Let $F \in \langle v_d (R) \rangle$ be a general element. Then $F$ is not contained in $\langle v_d (R') \rangle$ for any $R' \subsetneqq R$ by \cite[Lem.~3.5(iii)]{MR3092255}. By \cite[Cor.~2.7]{MR3092255} we cannot have $F \in \langle v_d (Q) \rangle$ for any scheme $Q \subset \mathbb{P} V$ of length less than $n$, so $cr(F) = n$. For the same reason, since $R$ is not smoothable, there exists no smoothable scheme $Q \subset \mathbb{P} V$ of length at most $n$, with $F \in \langle v_d (Q) \rangle$. Were $br(F) \le n$, then there would have to be such a $Q$, see for example \cite[Prop.~11]{BGI}, \cite[Lem.~2.6]{MR3092255}, or \cite[Prop.~2.5]{MR3121848}. Thus $br(F) > n$. \end{proof} \begin{remark} We have seen that if $F$ has an equipotent apolar generator then $F$ is a limit of direct sums. And we have seen that if $F$ is a concise form in $n$ variables and $F$ is a limit of direct sums of $n$ terms then $F$ has $n-1$ equipotent apolar generators. Now, a form $F$ as in Proposition~\ref{prop: cr n br gt n} has $n-1$ equipotent apolar generators by Theorem~\ref{thm: cactus rank n is a limit of direct sums}, so it is a limit of direct sums, but it is not necessarily a limit of direct sums of $n$ terms. Thus the closure of the locus of Fermat polynomials is contained in the locus of forms with $n-1$ equipotent apolar generators, but this containment can be strict. \end{remark} \begin{prop} Let $n < n_0$. Then every homogeneous concise polynomial $F \in S^d V$ with $cr(F) =n$ has $br(F) = n$. \end{prop} \begin{proof} By \cite[Thm.~1.4(i)]{MR3121848} we have $br(F)\le n$. Since $F$ is concise we also have $br(F) \ge n$. \end{proof} We would like to introduce some terminology about zero-dimensional schemes. 
\begin{defn} Suppose $\mathcal{R} \to B$ is a flat family of zero-dimensional schemes, $b\in B$ is a closed point, and $R=\mathcal{R}_b$ is the special fiber over $b$. We say $\mathcal{R} \to B$ is a \defining{cleaving of $R$} if the base $B$ is irreducible, the special fiber $R$ is supported at a single point, and the general fiber is not supported at a single point. If $R$ admits a cleaving, then we say $R$ is \defining{cleavable}. Otherwise, i.e., if $R$ is a finite scheme supported at a single point that does not admit any cleaving, we say $R$ is \defining{uncleavable}. \end{defn} We remark that in \cite{casnati_jelisiejew_notari_Hilbert_schemes_via_ray_families} cleavable schemes are called \emph{limit-reducible}, and uncleavable schemes are called \emph{strongly non-smoothable}. In \cite{MR805334} a component of the Hilbert scheme containing uncleavable schemes is called an \emph{elementary component}. Note however that not every scheme which belongs to an elementary component is uncleavable, as such a component also intersects other components of the Hilbert scheme. \begin{lemma}\label{lemma_properties_of_cleaving} The following are elementary properties of cleavings and (un-)cleavable schemes. \begin{enumerate} \item A single reduced point is uncleavable. All other smoothable schemes are cleavable. \item A general fiber of any cleaving of a zero dimensional Gorenstein scheme supported at a single point is a Gorenstein scheme. (More generally, every deformation of a finite Gorenstein scheme is Gorenstein.) \item Every non-smoothable Gorenstein scheme of length $n_0=14$ is local (that is, supported at a single point) and uncleavable. \item Every Gorenstein scheme of length less than $n_0$ is cleavable, unless it is a single reduced point. \end{enumerate} \end{lemma} \begin{proof} The first property is clear. To be Gorenstein is an open condition on the Hilbert scheme, thus the second property follows.
Also the third property is straightforward --- if there exists a cleaving of a Gorenstein scheme $R$ of length $n_0$, then $R$ is a flat limit of a disjoint union of two shorter Gorenstein schemes. By definition of $n_0$, both shorter schemes must be smoothable. Thus $R$ is smoothable. The final property is clear: by the definition of $n_0$, every Gorenstein scheme of length less than $n_0$ is smoothable, and all smoothable schemes are flat limits of disjoint points, hence they admit a cleaving. \end{proof} Note that not every non-smoothable Gorenstein scheme is uncleavable. Potentially it could happen that every non-smoothable Gorenstein scheme of some fixed length admits a cleaving to a disjoint union of two shorter schemes, at least one of which is non-smoothable. Thus while every $n \geq n_0$ is clearly the length of a non-smoothable Gorenstein scheme, it could potentially happen that some $n > n_0$ is not the length of any uncleavable Gorenstein scheme. And in fact it is not immediately clear how to show that any $n > n_0$ is the length of an uncleavable Gorenstein scheme. It would be rather surprising if $n_0$ were the only length of an uncleavable Gorenstein scheme, or if there were only finitely many such lengths. If that were the case, the Gorenstein schemes would be ``finitely generated'', that is, there would be a finite number of schemes (``generators'') such that all the others can be obtained in a flat family (with irreducible base) from a disjoint union of ``generators''. It seems more plausible that every sufficiently large integer is the length of some uncleavable Gorenstein scheme, or at least that there are infinitely many such lengths. But it is beyond the scope of this paper to determine all the possible lengths of uncleavable Gorenstein schemes. For the purpose of Theorem~\ref{thm: apolim and apoequ} it is enough that there exists such a length, namely $n_0$.
Let us briefly remark that there exist uncleavable schemes of any (sufficiently high) finite length if we drop the Gorenstein assumption \cite[Thm~2]{shafarevich_deformations_of_commutative_algebras_of_socle_degree_2}. In the Gorenstein case, it is expected that for every $n \ge 8$ a general Gorenstein scheme with Hilbert function $(1,n,n,1)$ is uncleavable, see \cite[Lem.~6.21]{MR713096}. So, let $R$ be an uncleavable Gorenstein scheme of length $n_1 > 1$. In particular, by Lemma~\ref{lemma_properties_of_cleaving}, we have $n_1 \ge n_0$. For example, we may choose $R$ such that $n_1 = n_0$. In characteristic $0$, another possible value of $n_1 > 14$ would be the minimal length of a non-smoothable Gorenstein scheme contained in $\mathbb{P}^5$ (or, respectively, in $\mathbb{P}^4$). It is known that such $n_1 \le 42$ (respectively, $n_1\le 140$). \begin{prop}\label{prop: limit not apolar limit} Suppose $n=n_1$ and $d \ge 2n_1$, with $R \subset \mathbb{P} V$ a concisely independently embedded uncleavable Gorenstein scheme of length $n_1$. Let $F \in S^d V$ be a concise polynomial such that $cr(F) =n_1$ and $F\in \langle v_d(R) \rangle$. (For example, if $n_1=n_0$, take $F$ with $cr(F) =n_0$ and $br(F) > n_0$.) Then $F$ is a limit of direct sums, but it is not an apolar limit of direct sums. \end{prop} See Example~\ref{example_limit_polynomial_but_not_apolar_limit} for an explicit example of such a polynomial $F$. \begin{proof} Suppose on the contrary that $F$ is an apolar limit of direct sums $F_t = G_t + H_t$. The Hilbert function of $A_F$ is $(1, n, n, \dotsc, n, 1,0,\dotsc)$ by Lemma~\ref{lemma: cactus rank n and d ge 2n}. Consider the two Hilbert functions $h_{A_{G_t}}$ and $h_{A_{H_t}}$. We must have $h_{A_{G_t}}(k)+h_{A_{H_t}}(k) = n$ for all $1\leq k \leq d-1$. In particular, since $h_{A_{G_t}}(k),h_{A_{H_t}}(k) \ge 1$, we have $h_{A_{G_t}}(k),h_{A_{H_t}}(k) \leq n-1$ for all $1\leq k \leq d-1$.
By \cite[Lem.~5.2]{MR3121848} we must have $h_{A_{G_t}} = (1,a,a,\dotsc, a,1,0,\dotsc)$ and $h_{A_{H_t}} = (1,b,b,\dotsc, b,1,0,\dotsc)$ for all $t$ close to $0$, where $a+b=n$. By \cite[Thm.~1.6]{MR3121848}, $G_t \in \langle v_d(Q'_t)\rangle$ and $H_t \in \langle v_d(Q''_t)\rangle$ for schemes $Q'_t$ and $Q''_t$ of length $a$ and $b$, respectively. Denote the flat limit $Q =\lim_{t\to 0} (Q'_t \sqcup Q''_t)$. The length of $Q$ is $a+b=n$. Moreover, for each $t$ close but not equal to zero, $Q'_t$ and $Q''_t$ are embedded (respectively) into disjoint linear subspaces $\mathbb{P}^{a-1}_t$ and $\mathbb{P}^{b-1}_t$. Thus the defining ideal $I(Q'_t \sqcup Q''_t)$ of $Q'_t \sqcup Q''_t$ satisfies: \[ I(Q'_t \sqcup Q''_t) = I(Q'_t) \cap I(Q''_t) = (G_t^{\perp})_{\le n} \cap (H_t^{\perp})_{\le n}. \] The last equality $I(Q'_t)= (G_t^{\perp})_{\le n}$ (and analogously for $Q''_t$ and $H_t$) follows from \cite[Thm.~1.6(iii)]{MR3121848}. Since $F_t = G_t + H_t$ is a direct sum, by Lemma~\ref{lem: apolar of direct sum} one has $(G_t^{\perp})_{\le n} \cap (H_t^{\perp})_{\le n} = (F_t^{\perp})_{\le n}$. Since we are considering an apolar limit, we may pass to the limit with ideals: \begin{align*} J & = \lim_{t\to 0} (I(Q'_t \sqcup Q''_t)) = \lim_{t\to 0} (F_t^{\perp})_{\le n} = (F^{\perp})_{\le n}. \end{align*} $J$ is a homogeneous ideal defining $Q= \lim_{t\to 0} (Q'_t \sqcup Q''_t)$, though at this point potentially $J$ is not saturated. However, using $J= (F^{\perp})_{\le n}$, and by \cite[Thm.~1.6(iii)]{MR3121848} applied to $F$, the ideal $J$ must be saturated and $F \in \langle v_d(Q) \rangle$. By the uniqueness in \cite[Thm.~1.6(ii)]{MR3121848} we have $R = Q$, which is a contradiction, since $Q$ is a limit of smaller disjoint schemes, i.e., $Q$ is cleavable, while $R$ is not. In the case $n_1 = n_0$, the scheme $Q$ is smoothable of length $n_0$, so the above considerations imply that the border rank of $F$ is (at most) $n$, a contradiction with our assumption $br(F) > n$.
\end{proof} \begin{example} \label{example_limit_polynomial_but_not_apolar_limit} Let $G$ be a general homogeneous cubic in $6$ variables $\fromto{x_1}{x_6}$, and let \[ F = (d-2) z^{d-3} G + z^{d-2} (x_1 y_1 + x_2 y_2 + x_3 y_3 + x_4 y_4 + x_5 y_5 + x_6 y_6) + \frac{1}{d-1} z^{d-1} w. \] Consider the concisely independent scheme $R$ defined from $G$ as in Example~\ref{example_concise scheme}. Then $F$ is apolar to $R$, which can be verified by acting on $F$ with the generators \ref{item_generators_of_I__apolar_to_G}--\ref{item_generators_of_I__Theta_minus_beta_delta} of Example~\ref{example_concise scheme}. Thus $F \in \langle v_d(R) \rangle$. Moreover, since $G$ is general, in characteristic $0$, $R$ is a shortest non-smoothable Gorenstein scheme, see \cite[Thm A]{casnati_jelisiejew_notari_Hilbert_schemes_via_ray_families} and \cite[Lem.~6.21]{MR1735271}. In particular, $R$ is uncleavable by Lemma~\ref{lemma_properties_of_cleaving}. Thus, by Proposition~\ref{prop: limit not apolar limit}, $F$ is a limit of direct sums, but not an apolar limit of direct sums. \end{example} \subsection{Apolar limits in the plane} We show that in the plane, $\ApoLim = \ApoEqu$. Recall that when considering forms in $3$ variables we write $S = \mathbb{C}[x,y,z]$ and $T = \mathbb{C}[\alpha, \beta, \gamma]$. \begin{thm}\label{thm: apolim=apoequ in the plane} Let $F$ be a concise form of degree $d$ in $n=3$ variables having an equipotent apolar generator. Then $F$ is an apolar limit of direct sums. \end{thm} Note that in contrast to the situation of Proposition~\ref{prop: cubic apolar limit}, where every family of direct sum cubic forms having as the limit a concise cubic form must be an apolar family, we do not claim here that every family $F_t \to F$ is necessarily an apolar family. Rather, we claim only that there exists \textit{some} apolar family of direct sums $F_t \to F$. 
As we will see in the proof of Theorem \ref{thm: apolim=apoequ in the plane}, ``typically'' (we do not specify precisely what ``typically'' means here; see the proof below for an explicit statement), the limit indicated in Example~\ref{example: form of plane direct sums} provides an apolar limit. However, this limit does not work in all cases, as illustrated by the following example: \begin{example} Consider the following sextic in three variables: \[ F = xy^5 + y^3 z^3. \] As indicated by Example~\ref{example: form of plane direct sums}, this is a limit of direct sums. Indeed: \[ F_t = \tfrac{1}{t} \left( (y+\tfrac{1}{6} t x)^6 + t y^3 z^3 - y^6 \right) \] is a family of homogeneous polynomials ($t$ is a parameter of the family) with $F_0 = F$; for $t\ne 0$, after an easy coordinate change $F_t$ becomes: \[ x^6 + y^3 z^3 - y^6. \] In particular, $F_t$ for $t\ne 0$ is a direct sum and the Hilbert function of $A_{F_t}$ is $1,3,4,5,4,3,1$, while the Hilbert function of $A_F = A_{F_0}$ is $1,3,4,4,4,3,1$. Thus the family $F_t$ is not an apolar family. However, another family: \[ \tfrac{1}{t} \left( (y+\tfrac{1}{6} t x)^6 - y^6 + t y^3 z^3 - \tfrac{1}{400} t^2 z^6 \right) \] presents $F$ as an apolar limit of direct sums. \end{example} \begin{proof}[Proof of Theorem~\ref{thm: apolim=apoequ in the plane}] By Example~\ref{example: form of plane direct sums}, either $F$ is a direct sum, in which case the statement is trivial, or else after a change of coordinates, $F = xy^{d-1} + G(y,z)$. Write $G(y,z) = \sum_{q=0}^d \binom{d}{q} a_q y^q z^{d-q}$. First we find the Hilbert function of $A_F$. In order to reduce subscripts, we write $h_F$ for the Hilbert function of $A_F$. We have $h_F(0) = h_F(d) = 1$. We claim that for $1 \leq k \leq d-1$, $h_F(k) = h_{\gamma^2 \aa G}(k-1)+2$. Recall that for a form $H \in S_e$ of degree $e$ and an integer $i \geq 0$, $T_i \aa H$ is the linear subspace $T_i \aa H = \{ \Theta \aa H \mid \Theta \in T_i \} \subseteq S_{e-i}$.
First, $h_{\gamma^2 \aa G}(k-1) = \dim T_{d-k-1} \gamma^2 \aa G$, and $T_{d-k-1} \gamma^2 \aa G$ is spanned by, for $0 \leq i \leq d-k-1$, \[ \begin{split} \beta^i \gamma^{d-k-1-i} \gamma^2 \aa G &= \frac{d!}{(k-1)!} \sum_{q=i}^{i+k-1} \binom{k-1}{q-i} a_q y^{q-i} z^{k+i-q-1} \\ &= \frac{d!}{(k-1)!} \sum_{j=0}^{k-1} \binom{k-1}{j} a_{j+i} y^j z^{k-j-1} . \end{split} \] So $h_{\gamma^2 \aa G}(k-1) = \rank M$ where $M_{ij} = \binom{k-1}{j} a_{j+i}$ for $0 \leq i \leq d-k-1$, $0 \leq j \leq k-1$. Meanwhile $h_F(k) = \dim T_{d-k} \aa F$, and $T_{d-k} \aa F$ is spanned by $T_{d-k-1} \gamma \aa F = T_{d-k-1} \gamma \aa G$ together with $\{\alpha^j \beta^{d-k-j} \aa F \mid 0 \leq j \leq d-k\}$. Note that $\alpha^2 \aa F = 0$. We have \[ \alpha \beta^{d-k-1} \aa F = \frac{(d-1)!}{k!} y^k, \qquad \beta^{d-k} \aa F = \frac{(d-1)!}{(k-1)!} x y^{k-1} + \beta^{d-k} \aa G . \] Here, $\beta^{d-k} \aa F$ is linearly independent of the other spanning elements since it is the only one with a monomial involving $x$. And $T_{d-k-1} \gamma \aa G$ is spanned by, for $0 \leq i \leq d-k-1$, \[ \begin{split} \beta^i \gamma^{d-k-1-i} \gamma \aa G &= \frac{d!}{k!} \sum_{q=i}^{i+k} \binom{k}{q-i} a_q y^{q-i} z^{k+i-q} \\ &= \frac{d!}{k!} \sum_{j=0}^k \binom{k}{j} a_{j+i} y^j z^{k-j} . \end{split} \] Thus $T_{d-k-1} \gamma \aa G + \mathbb{C} y^k$ is spanned by $y^k$ together with, for $0 \leq i \leq d-k-1$, \[ \sum_{j=0}^{k-1} \binom{k}{j} a_{j+i} y^j z^{k-j} . \] So $\dim(T_{d-k-1} \gamma \aa G + \mathbb{C} y^k) = 1 + \rank N$ where $N_{ij} = \binom{k}{j} a_{j+i}$ for $0 \leq i \leq d-k-1$, $0 \leq j \leq k-1$. Note $N$ is obtained from $M$ by rescaling columns, so $\rank M = \rank N$. This proves that $h_F(k) = h_{\gamma^2 \aa G}(k-1) + 2$, as claimed. 
For any $1 \leq r \leq \frac{d}{2}$, let $h^r$ be the following function: \[ h^r(k) = \begin{cases} 1, & k = 0 \\ k+2, & 1 \leq k \leq r-1 \\ r+2, & r \leq k \leq d-r \\ (d-k)+2, & d-r+1 \leq k \leq d-1 \\ 1, & k = d \\ 0, & k<0 \text{ or } k>d . \end{cases} \] What we have shown is that $h_F = h^r$ where $r = br(\gamma^2 \aa G)$. Let \[ H^r = \{ F = xy^{d-1} + G(y,z) \mid br(\gamma^2 \aa G) = r \} . \] Write $\hat\sigma_r(v_e(\mathbb{P}^1))$ for the affine cone over the $r$-th secant variety of $v_e(\mathbb{P}^1)$. Then $F \mapsto \gamma^2 \aa G$ maps $H^r$ onto $\hat\sigma_r(v_{d-2}(\mathbb{P}^1)) \setminus \hat\sigma_{r-1}(v_{d-2}(\mathbb{P}^1))$, the set of binary forms of border rank $r$. Also the fibers of the map are irreducible, specifically copies of $\mathbb{A}^2$, as the fiber through $F$ consists of $F + ay^d + by^{d-1} z$. Thus $H^r$ is irreducible. The claim of the theorem is that for all $F \in H^r$ we can obtain $F$ as a limit of direct sums which have Hilbert function $h^r$. In the following paragraph we prove the claim under the additional assumption that $F$ is a general element of $H^r$. Then, in the final paragraph, we use this ``generic'' case and the irreducibility of $H^r$ to conclude the statement for all polynomials $F$ in question. So suppose $F \in H^r$ is general. Then $\gamma^2 \aa G \in \hat\sigma_r(v_{d-2}(\mathbb{P}^1))$ is also a general element, so $\gamma^2 \aa G = \ell_1^{d-2} + \dotsb + \ell_r^{d-2}$ where the $[\ell_i] \in \mathbb{P}^1$ are in general position. In this case it is easy to integrate $\gamma^2 \aa G$ (twice) and hence $G = c_1 \ell_1^d + \dotsb + c_r \ell_r^d + ay^{d-1}z + by^d$. Therefore $F = (x+az+by)y^{d-1} + (c_1 \ell_1^d + \dotsb + c_r \ell_r^d)$. Let $x' = x+az+by$. For $t \neq 0$, let $F'_t = \frac{1}{t}(y + \frac{t}{d} x')^d + (c_1 \ell_1^d + \dotsb + c_r \ell_r^d - \frac{1}{t} y^d)$.
Since the $\ell_i$ are general and $r+1 \leq \frac{d+2}{2}$, we have for general $t \neq 0$, \[ r \left( c_1 \ell_1^d + \dotsb + c_r \ell_r^d - \frac{1}{t}y^d \right) = br \left( c_1 \ell_1^d + \dotsb + c_r \ell_r^d - \frac{1}{t}y^d \right) = r+1 . \] Since $F'_t$ is a direct sum, $h_{F'_t} = h^r = h_F$ (the first equality follows from Proposition~\ref{prop: direct sum poly -> connected sum apolar algebra}). Thus $F'_t \to F$ is an apolar limit. By the argument in the previous paragraph, the irreducible set $H^r$ is contained in the Zariski closure of the locus of direct sums with Hilbert function $h^r$. So every $F \in H^r$ is a limit of direct sums with Hilbert function $h^r$, i.e., it is an apolar limit of direct sums. This proves the claim of the theorem. \end{proof} Note that the apolar ideal of a form in $3$ variables is a height $3$ Gorenstein ideal, and is therefore generated by the principal Pfaffians of a skew-symmetric matrix \cite{MR0453723}. Nevertheless we have not used this information; instead, the key ingredient was information about apolar ideals of forms in one less variable. It would be interesting to investigate whether the structure described in \cite{MR0453723} can lead to a generalization of Theorem~\ref{thm: apolim=apoequ in the plane} for forms in $n=4$ variables; compare with \cite{elkhoury_srinivasan_a_class_of_Gorenstein_Artin_algebra_of_codim_4}. It would also be interesting to study limits of direct sums of type $(1,n-1)$, $s$-fold direct sums of type $(1,1,\dotsc,1,n-s+1)$, direct sums of type $(1,\dotsc,1,2,\dotsc,2)$, and so on. Note however that for limits of direct sums of type $(1,n-1)$ one cannot expect a result similar to Theorem~\ref{thm: apolim=apoequ in the plane}. This is because for $n=14$, the polynomial $F$ presented in Example~\ref{example_limit_polynomial_but_not_apolar_limit} is a limit of direct sums of type $(1,13)$, and it has been proved that it is not an apolar limit.
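As a sanity check, the Hilbert functions quoted in the example above (and the span-of-partials description of $h_F$ used in the proof) can be computed mechanically: $h_F(k)$ is the dimension of the span of the $k$-th partial derivatives of $F$. A minimal SymPy sketch over $\mathbb{Q}$ (the helper name is ours):

```python
import itertools
import sympy as sp

x, y, z = sp.symbols('x y z')

def hilbert_function(F, d, gens=(x, y, z)):
    """h(k) = dimension of the span of all k-th partial derivatives of F,
    i.e. the Hilbert function of the apolar algebra A_F."""
    h = []
    for k in range(d + 1):
        rows = []
        for combo in itertools.combinations_with_replacement(gens, k):
            g = sp.diff(F, *combo) if combo else F
            # record the derivative as a monomial -> coefficient dictionary
            rows.append(dict(sp.Poly(g, *gens).terms()))
        monoms = sorted({m for r in rows for m in r})
        # rank of the coefficient matrix = dimension of the span
        M = sp.Matrix([[r.get(m, 0) for m in monoms] for r in rows])
        h.append(M.rank())
    return h

print(hilbert_function(x*y**5 + y**3*z**3, 6))       # [1, 3, 4, 4, 4, 3, 1]
print(hilbert_function(x**6 + y**3*z**3 - y**6, 6))  # [1, 3, 4, 5, 4, 3, 1]
```

Both printed sequences agree with the example: $1,3,4,4,4,3,1$ for $F = xy^5 + y^3z^3$, and $1,3,4,5,4,3,1$ for the direct sum $x^6 + y^3z^3 - y^6$.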
\section{Generalizations and further questions}\label{sect: generalizations} \subsection{Linear series}\label{sect: linear series} There are natural generalizations of some of our results to linear series. Let $W \subseteq S^d V$ be a linear series. A \defining{simultaneous power sum decomposition} of $W$ is a collection of linear forms $\ell_1,\dotsc,\ell_r$ such that $W$ is contained in the span of $\ell_1^d,\dotsc,\ell_r^d$; equivalently, for each $F \in W$, there are scalars $c_1,\dotsc,c_r$ such that $F = \sum c_i \ell_i^d$. The \defining{simultaneous Waring rank} of $W$, denoted $r(W)$, is the least length $r$ of a simultaneous power sum decomposition. Let $S = \mathbb{C}[V] = \mathbb{C}[x_1,\dotsc,x_n]$, where $x_1,\dotsc,x_n$ is a basis for $V$, and let $T = \mathbb{C}[V^*] = \mathbb{C}[\alpha_1,\dotsc,\alpha_n]$ act on $S$ by letting $\alpha_i$ act as the partial differentiation operator $\partial/\partial x_i$. The \defining{apolar annihilating ideal} $W^\perp \subset T$ is $W^\perp = \{ \Theta \mid \Theta \aa F = 0, \forall F \in W \}$. That is, $W^\perp = \bigcap_{F \in W} F^\perp$. The apolar algebra $A_W = T / W^\perp$ is a level Artinian algebra with socle degree $d$ and type equal to the dimension of $W$, meaning that its socle is entirely in degree $d$ and has dimension equal to $\dim W$. In particular $A_W$ is Gorenstein if and only if $W$ is one-dimensional, i.e., spanned by a single form. There is an Apolarity Lemma just as in the case of a single form. This is well-known to experts; see for example \cite[Thm.~2.3]{MR2223453}. For the reader's convenience we state it here: \begin{lemma} With notation as above, $\ell_1,\dotsc,\ell_r$ is a simultaneous power sum decomposition of $W$ if and only if the ideal $I = I(\{[\ell_1],\dotsc,[\ell_r]\})$ satisfies $I \subseteq W^\perp$. 
\end{lemma} \begin{proof} $\ell_1,\dotsc,\ell_r$ is a simultaneous power sum decomposition of $W$ if and only if $W$ lies in the span of the $\ell_i^d$, if and only if every $F \in W$ can be written as a linear combination of the $\ell_i^d$, if and only if $I \subseteq F^\perp$ for every $F \in W$ (by the usual Apolarity Lemma), if and only if $I \subseteq \bigcap F^\perp = W^\perp$. \end{proof} Thus $r(W) \geq \dim (A_W)_a$ for every $0 \leq a \leq d$: indeed, if $\ell_1,\dotsc,\ell_r$ is a simultaneous power sum decomposition of $W$ with defining ideal $I$ then $r \geq \codim I_a \geq \codim (W^\perp)_a = \dim (A_W)_a$ for $a > 0$ (and the case $a=0$ is trivial). There is a generalization of the Ranestad--Schreyer lower bound \cite{MR2842085} for Waring rank to linear series, with essentially the same proof as for the case of a single form. We briefly review the proof for completeness. \begin{prop} With notation as above, let $\length(A_W)$ be the length of $A_W$ and suppose $W^\perp$ is generated in degrees less than or equal to $\delta$. Then $r(W) \geq \length(A_W) / \delta$. \end{prop} \begin{proof} Suppose $\ell_1,\dotsc,\ell_r$ is a simultaneous power sum decomposition of $W$ with defining ideal $I$. The vanishing locus $V((W^\perp)_\delta)$ in affine space is just the origin (i.e., a scheme supported at the origin). By Bertini's theorem, the linear series $(W^\perp)_\delta$ has no basepoints in projective space. Let $G \in (W^\perp)_\delta$ be a general form. Then $G$ does not vanish at any projective point $[\ell_i]$, so the affine hypersurface $V(G)$ does not contain any line which is an irreducible component of $V(I)$. Therefore by Bezout's theorem, the intersection of $V(G)$ and $V(I)$ has degree equal to $\delta r$. But this intersection contains the scheme $V(W^\perp)$ which has length equal to $\length(A_W)$. So $\delta r \geq \length(A_W)$.
\end{proof} A \defining{direct sum decomposition} of $W$ is an expression $V = V_1 \oplus V_2$ and subspaces $W_1 \subseteq S^d V_1$, $W_2 \subseteq S^d V_2$, such that $W \subset W_1 \oplus W_2$ and the projections $W \to W_1$, $W \to W_2$ are isomorphisms. Equivalently, for a basis $F^1,\dotsc,F^k$ of $W$, each $F^i = F^i_1 + F^i_2$ with $F^i_1 \in W_1$, $F^i_2 \in W_2$, and the $F^i_1$ are linearly independent and so are the $F^i_2$. Now we can generalize some of our results to the case of linear series. Here is a generalization of Theorem~\ref{thm: direct sum generator}: \begin{prop} If a linear series $W$ of degree-$d$ forms admits a direct sum decomposition then $W^\perp$ has at least $s = \dim W$ minimal generators of degree $d$. \end{prop} \begin{proof} Say $W \subset W_x \oplus W_y$ is a direct sum decomposition where \[W_x \subseteq (S^x)_d = \mathbb{C}[x_1,\dotsc,x_i]_d, \ \ W_y \subseteq (S^y)_d = \mathbb{C}[y_1,\dotsc,y_j]_d, \ \text{ and }W_x, W_y \neq 0.\] We denote the dual rings $T^x = \mathbb{C}[\alpha_1,\dotsc,\alpha_i]$, $T^y = \mathbb{C}[\beta_1,\dotsc,\beta_j]$, $T = T^x \otimes T^y$. We have $W_x^\perp \cap W_y^\perp \subseteq W^\perp$. If $0 \leq k \leq d$ and $\Theta \in (W^\perp)_k$ then for every $F \in W$, say $F = G-H$ where $G \in W_x$, $H \in W_y$, we have $\Theta \aa (G-H) = 0 = \Theta \aa G - \Theta \aa H$. So $\Theta \aa G \in S^x_{d-k}$ and $\Theta \aa H \in S^y_{d-k}$ are equal, which implies $\Theta \aa G = \Theta \aa H = 0$ or $d-k = 0$. Thus $(W_x^\perp \cap W_y^\perp)_k = W^\perp_k$ for $0 \leq k < d$. Let $F^1,\dotsc,F^s$ be a basis for $W$ and for each $i$ let $F^i = F^i_x - F^i_y$, $F^i_x \in W_x$, $F^i_y \in W_y$. For $1 \leq j \leq s$ let $\delta_{x,j} \in T^x_d$ be such that $\delta_{x,j} \aa F^i_x = 1$ if $i=j$, $0$ if $i \neq j$. Similarly let $\delta_{y,j} \in T^y_d$ be such that $\delta_{y,j} \aa F^i_y = 1$ if $i=j$, $0$ if $i \neq j$.
There are such elements by the linear independence of the $F^i_x$ and the $F^i_y$. Let $\Delta_j = \delta_{x,j} + \delta_{y,j}$. Then $\Delta_j \aa F^i = 0$ for each $i$, so $\Delta_j \in W^\perp$, but $\Delta_j \aa F^j_x = \Delta_j \aa F^j_y = 1$ so $\Delta_j \notin W_x^\perp \cap W_y^\perp$. Hence each $\Delta_j$ is a minimal generator of $W^\perp$. The $\Delta_j$ are linearly independent since if $\sum a_j \Delta_j = 0$ then $a_i = (\sum a_j \Delta_j) \aa F^i_x = 0$ for each $i$. \end{proof} Next we give a generalization of Proposition~\ref{prop: d+1 generator rank 1}. \begin{prop} Let $W \subseteq S^d V$. Then $W^\perp$ has a minimal generator of degree $d+1$ if and only if there is a nonzero $(d+1)$-form $G \in S^{d+1} V$ such that $T_1 \aa G \subseteq W$. \end{prop} \begin{proof} First suppose $W^\perp$ has a minimal generator of degree $d+1$. Let $I \subset W^\perp$ be the ideal generated by elements of degree $\leq d$. Then $I_{d+1} \neq T_{d+1}$. Let $G \in S^{d+1} V$ be a nonzero element annihilated by $I_{d+1}$. Since $G$ is annihilated by $I_{d+1}$, $G$ is annihilated by $I$, so $(W^\perp)_d = I_d \subseteq (G^\perp)_d$. If $F \in T_1 \aa G$ then $G^\perp \subseteq F^\perp$. In particular $(W^\perp)_d \subseteq F^\perp$, so $F \in W$. This shows $T_1 \aa G \subseteq W$. Conversely, suppose $G \in S^{d+1} V$ is a nonzero $(d+1)$-form such that $T_1 \aa G \subseteq W$. As before, let $I \subseteq W^\perp$ be the ideal generated by elements of degree $\leq d$. In particular, $I_d \subseteq (W^\perp)_d \subseteq ((T_1 \aa G)^\perp)_d$. Then $I_{d+1} \subseteq (G^\perp)_{d+1}$: indeed, if $\Theta = \sum \alpha_i \theta_i$, each $\theta_i \in I_d$, then $\Theta \aa G = \sum \theta_i \aa (\alpha_i \aa G) = 0$ since each $\alpha_i \aa G \in W$ and each $\theta_i \in I_d \subseteq (W^\perp)_d$. In particular, $I_{d+1} \neq T_{d+1} = (W^\perp)_{d+1}$. So $W^\perp$ has a minimal generator of degree $d+1$. 
\end{proof} Recall that the $r$-th \defining{prolongation} of $W \subseteq S^d V$ is the set of $(d+r)$-forms $G$ such that $T_r \aa G \subseteq W$ \cite[Definition~1.1]{MR2541390}. The above Proposition shows that $W^\perp$ has a minimal generator of degree $d+1$ if and only if the first prolongation of $W$ is nonzero. We give a generalization of Theorem~\ref{thm: apolar generator degree lower bound}. First we generalize Lemma~\ref{lemma: apolar containment}. \begin{lemma} Suppose $W \subseteq S^d V$, $U \subseteq S^e V$ are linear series such that $U^\perp \subseteq W^\perp$. Then $e \geq d$ and $W \subseteq T_{e-d} \aa U$. That is, for every $F \in W$, there are $G \in U$ and $\Theta \in T_{e-d}$ such that $F = \Theta \aa G$. \end{lemma} \begin{proof} This follows by the inclusion-reversing part of \cite[Thm.~21.6]{eisenbud:comm-alg}. \end{proof} Now here is a generalization of Theorem~\ref{thm: apolar generator degree lower bound}. \begin{prop} Let $W \subseteq S^d V$ be a linear series and $n = \dim V$. Suppose $W^\perp$ has minimal generators $\Theta_1,\dotsc,\Theta_s$ such that $\deg \Theta_i = d_i$ for each $i$, and $d_1 \leq \dotsb \leq d_s$. Let $\delta$ be an integer such that the ideal $(W^\perp)_{\leq \delta}$ is $\mathfrak{m}$-primary. Assume either $d_k = \delta < d_{k+1}$, or $k = s$ and $\delta = d_s$; necessarily $k \geq n$. Then $d \leq d_k + d_{k-1} + \dotsb + d_{k-n+1} - n$. \end{prop} The proof is similar to the proof of Proposition~\ref{prop: apolar generator degree lower bound strengthening}. As before, we immediately deduce \begin{cor} Let $W \subseteq S^d V$ be a linear series. Let $n = \dim V$. Suppose $W^\perp$ is generated in degrees less than or equal to $\delta$. Then $d \leq (\delta-1)n$. \end{cor} It would be interesting to see if our other results, such as Theorem~\ref{thm: apoequ = closure of dirsum}, can be generalized to linear series.
The proofs we have given have used Gorenstein duality, which is not available here, since the ideals $W^\perp$ are not in general Gorenstein. \subsection{Overlapping sums} Let $F = G_1 - G_2$, $G_i \in S^d V_i$, $G_i \neq 0$ for $i=1,2$. Theorem~\ref{thm: direct sum generator} shows that if $V_1 \cap V_2 = \{0\}$, then $F^\perp$ has a minimal generator of degree $d$. Here we are interested in allowing $V_1 \cap V_2$ to be nonzero, so that $G_1$ and $G_2$ form an ``almost direct sum'' or ``overlapping sum.'' We give a statement for the case $\dim(V_1 \cap V_2) = 1$. \begin{prop} Let $F = G_1 - G_2$, $G_i \in S^d V_i$, with $G_i$ concise in $V_i$ for $i=1,2$, and suppose $V_1 \cap V_2$ is one-dimensional, spanned by $x$. Moreover, suppose $V_i \ne \langle x \rangle$, and $\max(\deg_x(G_1),\deg_x(G_2)) < d/2$. Let $s = \max\{t \mid x^t \in T \aa G_1 \cap T \aa G_2\}$. Then $F^\perp$ has a minimal generator of degree $d-s$. \end{prop} \begin{proof} As in the proof of Theorem~\ref{thm: direct sum generator}, $G_1^\perp \cap G_2^\perp \subset F^\perp$. We claim $F^\perp_a = (G_1^\perp \cap G_2^\perp)_a$ for $0 \leq a < d-s$. If $\Theta \in F^\perp_a$ then $\Theta \aa G_1 = \Theta \aa G_2 \in S^{d-a} \langle x \rangle$, that is, $\Theta \aa G_1 = \Theta \aa G_2 = c x^{d-a}$ for some scalar $c$. We have $x^s \in T \aa G_1 \cap T \aa G_2$, $x^{s+1} \notin T \aa G_1 \cap T \aa G_2$; more generally, $x^k \in T \aa G_1 \cap T \aa G_2$ if and only if $k \leq s$. So we must have $c=0$ or $d-a \leq s$, equivalently $\Theta \in G_1^\perp \cap G_2^\perp$ or $a \geq d-s$. This proves the claim. Note that if $x^t \in T \aa G_1$ then $t \leq \deg_x(G_1)$. Thus $s \leq \deg_x(G_1) < d/2$. Now there exist $\delta_1, \delta_2 \in T_{d-s}$ such that $\delta_i \aa G_i = x^s$, $\delta_1 \aa G_2 = \delta_2 \aa G_1 = 0$. To see this, let $V_1$ have basis $\{x,y_1,\dotsc,y_j\}$ and let $V_2$ have basis $\{x,z_1,\dotsc,z_k\}$.
Let $\{\alpha,\beta_1,\dotsc,\beta_j\}$ be the dual basis for $V_1^*$ and $\{\alpha,\gamma_1,\dotsc,\gamma_k\}$ be the dual basis for $V_2^*$. Let $T^\beta = \mathbb{C}[\alpha,\beta_1,\dotsc,\beta_j]$ and $T^\gamma = \mathbb{C}[\alpha,\gamma_1,\dotsc,\gamma_k]$. We have $T = \mathbb{C}[\alpha,\beta_1,\dotsc,\beta_j,\gamma_1,\dotsc,\gamma_k]$. There is a $\delta'_1 \in T_{d-s}$ such that $\delta'_1 \aa G_1 = x^s$. Since every term of $\delta'_1$ that involves a $\gamma_i$ annihilates $G_1$, we can delete those terms to get an element $\delta_1 \in T^\beta_{d-s}$ such that $\delta_1 \aa G_1 = x^s$. Every term of $\delta_1$ that has a $\beta_i$ annihilates $G_2$. If $\delta_1$ has a term $c \alpha^{d-s}$, $c \neq 0$, then by the hypothesis $\deg_x(G_2) < d/2 < d-s$ we see that this term also annihilates $G_2$. So $\delta_1 \aa G_2 = 0$ as desired. The element $\delta_2$ is produced similarly. Let $\Delta = \delta_1 + \delta_2$. Then $\Delta \in F^\perp_{d-s}$, but $\Delta \notin G_1^\perp \cap G_2^\perp$. Hence $\Delta$ is a minimal generator of $F^\perp$: it cannot be generated in lower degrees, since all elements in lower degrees lie in $G_1^\perp \cap G_2^\perp$. \end{proof} \begin{cor} Let $F = G_1 - G_2$, $G_i \in S^d V_i$, $G_i$ concise in $V_i$ for $i=1,2$, and suppose $V_1 \cap V_2$ is one-dimensional, spanned by $x$. Moreover, suppose $V_i \ne \langle x \rangle$ and $t = \max(\deg_x(G_1),\deg_x(G_2)) < d/2$. Then $F^\perp$ has a minimal generator of degree at least $d-t$, in particular strictly greater than $d/2$. \end{cor} It would also be interesting to investigate cases with larger overlap, or more than two overlapping summands. \subsection{Other base fields}\label{sect: other fields} Most of our results, including results overlapping with \cite{KleppePhD2005}, also hold over any algebraically closed field $\Bbbk$.
The main difference is that we need to consider $S= \Bbbk[\fromto{x_1}{x_n}]^{DP}$ to be the divided power algebra, rather than the polynomial ring, and the apolarity action of $T$ on $S$ is now as if the differentiation were naive: $\alpha_i \aa {x_i}^{(d)} = {x_i}^{(d-1)}$ (no coefficient $d$). See \cite[Appendix A]{MR1735271}, \cite[Section A2.4]{eisenbud:comm-alg}. All occurrences of powers of linear forms in $S$ should now be replaced by the divided powers; for instance, $x^{d-1}y$ should be $x^{(d-1)}y$, etc. In particular, the Veronese embedding $v_d\colon \mathbb{P} V \to \mathbb{P} (S^d V) $ is now $v_d([x])= [x^{(d)}]$ and the Waring rank is computed with respect to the image of $v_d$ (and similarly the border rank, cactus rank, etc.). In this setup Theorems~\ref{thm: direct sum generator}, \ref{thm: apoequ = closure of dirsum} and \ref{thm: apolar generator degree lower bound} and their proofs remain valid over any algebraically closed base field $\Bbbk$ of any characteristic, so in particular: direct sums and their concise limits have equipotent apolar generators; any concise divided powers form with an equipotent apolar generator is a limit of direct sums; and the bound $d \leq (\delta-1)n$ relating the greatest degree $\delta$ of apolar generators and the degree $d$ of the form is valid. Proposition~\ref{prop: d+1 generator rank 1} only requires the change of $\ell^d$ into $\ell^{(d)}$, so that forms of degree $d$ with an apolar generator in degree $d+1$ essentially depend on just one variable. Proposition~\ref{prop: smith stong} is proved over any field in \cite{MR2738376}. Theorem~\ref{thm: apolim and apoequ} consists of two parts. The first consists of Theorem~\ref{thm: apolim=apoequ in the plane} (in three variables all limits of direct sums are apolar limits of direct sums) and Proposition~\ref{prop: cubic apolar limit} (all limits of cubics are apolar limits), which are valid with no significant change to the statements or proofs.
In fact the proof of Theorem~\ref{thm: apolim=apoequ in the plane} becomes slightly simpler, as all coefficients like $\frac{d!}{k!}$ or $\binom{k}{q-i}$ are replaced with just $1$, and in the end the matrices $M$ and $N$ are simply equal. The second part is Proposition~\ref{prop: limit not apolar limit} (there exists a limit of direct sums which is not an apolar limit of direct sums). To prove this statement, we used results from \cite{MR3121848}, which is written over $\mathbb{C}$, so Proposition~\ref{prop: limit not apolar limit} is not proven over $\Bbbk$. Similarly, the other results presented in Sections~\ref{sect: cactus rank} and \ref{sect: non-apolar limit} also depend on \cite{MR3121848}. However, the first and the second named authors believe that the results of \cite{MR3121848} used in this article can be generalized to any characteristic. Section~\ref{sect: plane cubics} and Table~\ref{table:plane cubics} describe the behavior and classification of plane cubics. In positive characteristic these are different, particularly in characteristics $2$ and $3$. The numerous examples throughout the paper might be valid only in some characteristics, while in other characteristics they need to be appropriately adjusted. The exact values of the integers $n_0$ (the length of the shortest non-smoothable scheme) and $n_1$ (the possible lengths of uncleavable schemes) may be different in positive characteristic, particularly in characteristics $2$ and $3$. No other changes are needed to make this paper valid over any algebraically closed field.
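To illustrate the divided-power convention concretely: over a field of characteristic $p$, the classical action $\alpha^a \aa x^b = \frac{b!}{(b-a)!}\,x^{b-a}$ can vanish for purely coefficient reasons, while the divided-power contraction $\alpha^a \aa x^{(b)} = x^{(b-a)}$ never does. A one-variable sketch (the helper names are ours; each returns a pair of the coefficient and the remaining exponent):

```python
def poly_contract(a, b, p):
    """Classical differentiation alpha^a acting on x^b over GF(p):
    the coefficient is b*(b-1)*...*(b-a+1) reduced mod p."""
    if a > b:
        return (0, 0)
    coeff = 1
    for k in range(b, b - a, -1):
        coeff = (coeff * k) % p
    return (coeff, b - a)

def divided_contract(a, b):
    """Divided-power contraction alpha^a . x^(b) = x^(b-a): coefficient 1."""
    return (1, b - a) if a <= b else (0, 0)

# In characteristic 2, the second derivative of x^2 is 2 = 0,
# so x^2 looks "annihilated"; the divided power x^(2) survives:
print(poly_contract(2, 2, 2))   # (0, 0)
print(divided_contract(2, 2))   # (1, 0)
```

In characteristic $2$ the square $x^2$ is killed by $\alpha^2$ only because $2! = 0$, whereas $x^{(2)}$ contracts to $1$; this is why powers of linear forms are replaced by divided powers throughout this subsection.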
\begin{document} \title{Muon specific two-Higgs-doublet model} \author{Tomohiro Abe} \affiliation{ Institute for Advanced Research, Nagoya University, Furo-cho Chikusa-ku, Nagoya, Aichi, 464-8602 Japan } \affiliation{ Kobayashi-Maskawa Institute for the Origin of Particles and the Universe, Nagoya University, Furo-cho Chikusa-ku, Nagoya, Aichi, 464-8602 Japan } \author{Ryosuke Sato} \affiliation{Department of Particle Physics and Astrophysics, Weizmann Institute of Science, Rehovot 7610001, Israel} \author{Kei Yagyu} \affiliation{INFN, Sezione di Firenze, and Department of Physics and Astronomy, University of Florence, Via G. Sansone 1, 50019 Sesto Fiorentino, Italy} \begin{abstract} We investigate a new type of a two-Higgs-doublet model as a solution of the muon $g-2$ anomaly. We impose a softly-broken $Z_4$ symmetry to forbid tree level flavor changing neutral currents in a natural way. This $Z_4$ symmetry restricts the structure of Yukawa couplings. As a result, extra Higgs boson couplings to muons are enhanced by a factor of $\tan\beta$, while their couplings to all the other standard model fermions are suppressed by $\cot\beta$. Thanks to this coupling property, we can avoid the constraint from leptonic $\tau$ decays in contrast to the lepton specific two-Higgs-doublet model, which can explain the muon $g-2$ within the 2$\sigma$ level but cannot within the $1\sigma$ level due to this constraint.
We find that the model can explain the muon $g-2$ within the 1$\sigma$ level satisfying constraints from perturbative unitarity, vacuum stability, electroweak precision measurements, and current LHC data. \end{abstract} \maketitle \newpage \section{Introduction} It is well known that the measured value of the muon anomalous magnetic moment ($g-2$)~\cite{Bennett:2006fi} deviates from the standard model (SM) prediction~\cite{Davier:2010nc, Hagiwara:2011af} by more than 3$\sigma$. This large deviation has been a long standing problem in particle physics, and many models beyond the SM have been studied to solve this discrepancy~\cite{0902.3360}. Since new experiments are planned at Fermilab \cite{Grange:2015fou} and J-PARC \cite{Iinuma:2011zz}, it is worthwhile to find a good benchmark model that solves this problem. Among various scenarios, the lepton specific (Type-X) two-Higgs-doublet model (THDM) gives a simple solution to explain the muon $g-2$ anomaly~\cite{1409.3199,1412.4874}.\footnote{ Other scenarios of THDMs without the natural flavor conservation~\cite{Glashow:1976nt} are discussed in Refs.~\cite{1502.07824, 1511.05162, 1511.08880}.} This model is known as one of the four THDMs~\cite{Barger:1989fj,hep-ph/9401311,0902.4665} with a softly-broken $Z_2$ symmetry which is imposed to avoid flavor changing neutral currents (FCNCs) at the tree level~\cite{Glashow:1976nt}. This model contains additional Higgs bosons, namely a CP-even ($H^0$), a CP-odd ($A^0$), and charged ($H^{\pm}$) Higgs bosons. Their couplings to the SM charged leptons are enhanced by a factor of $\tan\beta$, which is the ratio of the two vacuum expectation values (VEVs) of the two doublet Higgs fields. Although this enhancement can significantly reduce the discrepancy in the muon $g-2$, its amount is severely constrained by precision measurements of the leptonic $\tau$ decay $\tau \to \mu \nu_\tau \bar{\nu}_\mu$, whose amplitude with $H^\pm$ mediation is proportional to $\tan^2\beta$.
Consequently, it turns out to be difficult to explain the muon $g-2$ anomaly within the 1$\sigma$ level~\cite{1504.07059,1605.06298}. In this paper, we propose a new type of THDM that avoids the constraint from the $\tau$ decay without losing the advantage of the Type-X THDM. We impose a softly-broken $Z_4$ symmetry to forbid tree level FCNCs in a natural way, as in the Type-X THDM. This $Z_4$ symmetry is also important to restrict the structure of Yukawa couplings. As a result, only the additional Higgs boson couplings to muons are enhanced by a factor of $\tan\beta$, while their couplings to all the other SM fermions are suppressed by $\cot\beta$. We call this model the ``muon specific THDM ($\mu$THDM)''. Thanks to this coupling property, the large contribution to the leptonic $\tau$ decay amplitude proportional to $\tan^2\beta$ present in the Type-X THDM disappears in the $\mu$THDM, because of the cancellation of the $\tan\beta$ factor between the tau and the muon Yukawa couplings to $H^\pm$. This is a crucial difference between this model and the Type-X THDM. We will show that the $\mu$THDM can explain the muon $g-2$ anomaly within the $1\sigma$ level in the parameter space allowed by bounds from perturbative unitarity, vacuum stability, electroweak precision measurements, and current LHC data. This paper is organized as follows. After describing our model in Sec.~\ref{sec:model}, we discuss constraints on model parameters from perturbative unitarity, vacuum stability, electroweak precision measurements, and current LHC data in Sec.~\ref{sec:g2}. In addition, we show that the parameter space which explains the muon $g-2$ anomaly within $1\sigma$ is allowed by these constraints. We devote Sec.~\ref{sec:conclusion} to our conclusion. \section{Model}\label{sec:model} \subsection{Lagrangian} The Higgs sector of the $\mu$THDM is composed of two SU(2)$_L$ doublet scalar fields $H_1$ and $H_2$. We impose a softly-broken $Z_4$ symmetry to prevent tree level FCNCs.
The charge assignment for the SM fermions and the Higgs fields is summarized in Table~\ref{tab:matter}.\footnote{Our model can be extended so as to realize non-zero masses of left-handed neutrinos and large mixing angles between $\nu_\mu$ and $\nu_{e,\tau}$, which are observed by neutrino experiments. We discuss such an extension without a hard breaking of the $Z_4$ symmetry in Appendix~\ref{app:neutrino}.} \begin{table} \caption{Particle contents and the charge assignment.} \label{tab:matter} \begin{tabular}{|c||c|c|c||c|c|c||c|c|c||c|c|} \hline & $q_L^{j}$ & $u_R^j$ & $d_R^j$ & $\ell_L^e$ & $\ell_L^\tau$ & $\ell_L^\mu$ & $e_R$ & $\tau_R$ & $\mu_R$ & $H_1$ & $H_2$ \\ \hline SU(3)$_c$ & 3 & 3 & 3 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ SU(2)$_L$ & 2 & 1 & 1 & 2 & 2 & 2 & 1 & 1 & 1 & 2 & 2 \\ U(1)$_Y$ & 1/6 & 2/3 & $-1/3$ & $-1/2$ & $-1/2$ & $-1/2$ & $-1$ & $-1$ & $-1$ & 1/2 & 1/2 \\ Z$_4$ & 1 & 1 & 1 & 1 & 1 & $i$ & 1 & 1 & $i$ & $-1$ & 1 \\ \hline \end{tabular} \end{table} The Yukawa interaction terms under this charge assignment are given by\footnote{ We discuss the possibility of other discrete symmetries which realize this Yukawa structure in Appendix~\ref{sec:Zn}. } \begin{align} {\cal L}^{\text{Yukawa}} =& - \bar{q}_L \tilde{H}_2 Y_u u_R - \bar{q}_L H_2 Y_d d_R - \bar{L}_L H_1 Y_{\ell 1} E_R - \bar{L}_L H_2 Y_{\ell 2} E_R + (h.c.), \label{yuk} \end{align} where $\tilde{H}_2 = i\sigma^2 H^{*}_2$, and $Y_u$, $Y_d$, $Y_{\ell 1}$ and $Y_{\ell 2}$ are $3 \times 3$ matrices in generation space. The left(right)-handed lepton field $L_L$ $(E_R)$ is defined as \begin{align} L_L = (\ell_L^e , \ell_L^{\tau} , \ell_L^{\mu} )^T, \quad E_R = (e_R^{} , \tau_R^{} , \mu_R^{})^T.
\end{align} The $Z_4$ symmetry restricts the structure of the lepton Yukawa matrices as follows: \begin{align} Y_{\ell 1} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & y_{\mu} \end{pmatrix} , \quad Y_{\ell 2} = \begin{pmatrix} y_{e} & y_{e\tau} & 0 \\ y_{\tau e} & y_{\tau} & 0 \\ 0 & 0 & 0 \end{pmatrix} . \label{eq:texture_of_leptonYukawa} \end{align} We can take $y_{e \tau} = y_{\tau e} = 0$ by field rotations without loss of generality. The Higgs potential takes the same form as in the THDM with a softly-broken $Z_2$ symmetry: \begin{align} V=& m_1^2 H_1^{\dagger} H_1 + m_2^2 H_2^{\dagger} H_2 - \left(m_3^2 H_1^{\dagger} H_2 + (h.c.) \right) + \frac{ \lambda_1}{2} (H_1^{\dagger} H_1)^2 + \frac{\lambda_2}{2} (H_2^{\dagger} H_2)^2 \notag \\ & + \lambda_3 (H_1^{\dagger} H_1) (H_2^{\dagger} H_2) + \lambda_4 (H_1^{\dagger} H_2) (H_2^{\dagger} H_1) + \left( \frac{1}{2} \lambda_5 (H_1^{\dagger} H_2)^2 + (h.c.) \right), \label{eq:HiggsPotential} \end{align} where $m_1^2$, $m_2^2$, $\lambda_1$, $\lambda_2$, $\lambda_3$, and $\lambda_4$ are real. In general, $m_3^2$ and $\lambda_5$ are complex, but we assume these two parameters to be real for simplicity, by which the Higgs potential is CP-invariant. We parametrize the component fields of the Higgs doublets by \begin{align} H_i =& \left( \begin{matrix} \pi^{+}_i \\ \frac{1}{\sqrt{2}} \left( v_i + \sigma_i - i \pi_i^{3} \right) \end{matrix} \right) , \quad \quad (i = 1, 2), \label{components} \end{align} where $v_1~(v_2)$ is the VEV of the $H_1~(H_2)$ field. 
It is convenient to express these two VEVs in terms of $v$ and $\tan\beta$ defined by $v \equiv \sqrt{v_1^2 + v_2^2} \simeq (\sqrt{2}G_F)^{-1/2}\simeq 246$ GeV with $G_F$ being the Fermi constant and $\tan\beta \equiv v_2/v_1$, respectively.\footnote{The exact relation between $v$ and $G_F$ is given in Appendix \ref{sec:EWPTapp}.} The mass eigenstates of the scalar bosons and their relation to the gauge eigenstates expressed in Eq.~(\ref{components}) are given by the following rotations: \begin{align} \left( \begin{matrix} \pi_{Z} \\ A^0 \\ \end{matrix} \right) =& \left( \begin{matrix} \cos\beta & \sin\beta \\ -\sin\beta & \cos\beta \end{matrix} \right) \left( \begin{matrix} \pi_1^{3} \\ \pi_2^{3} \end{matrix} \right) , \\ \left( \begin{matrix} \pi_{W^{\pm}} \\ H^{\pm} \\ \end{matrix} \right) =& \left( \begin{matrix} \cos\beta & \sin\beta \\ -\sin\beta & \cos\beta \end{matrix} \right) \left( \begin{matrix} \pi_1^{\pm} \\ \pi_2^{\pm} \end{matrix} \right) , \\ \left( \begin{matrix} H^0 \\ h \\ \end{matrix} \right) =& \left( \begin{matrix} \cos \alpha & \sin \alpha \\ -\sin \alpha & \cos \alpha \end{matrix} \right) \left( \begin{matrix} \sigma_1 \\ \sigma_2 \\ \end{matrix} \right) , \label{alpha} \end{align} where $\pi_{W^\pm}^{}$ and $\pi_Z^{}$ are the Nambu-Goldstone bosons which are absorbed into the longitudinal components of the $W^\pm$ and $Z$ bosons, respectively. We identify $h$ with the Higgs boson with a mass of 125~GeV discovered at the LHC. The mixing angle $\alpha$ is expressed in terms of the potential parameters as \begin{align} \tan2\alpha = \frac{2(v^2\lambda_{345}-M^2)\tan\beta}{v^2(\lambda_1 - \lambda_2\tan^2\beta) - M^2(1-\tan^2\beta)}, \end{align} where \begin{align} \lambda_{345} \equiv& \lambda_3 + \lambda_4 + \lambda_5, \quad M^2 \equiv \frac{1 + t_\beta^2 }{t_\beta} m_3^2.
\label{eq:def_M} \end{align} The CP-conserving Higgs potential can then be described by the following 8 independent parameters: \begin{align} m_{H^\pm}^{},~m_A^{},~m_H^{},~m_h,~M^2,~\alpha,~\beta,~v, \label{para} \end{align} where $m_{H^\pm}^{},~m_A^{},~m_H^{}$ and $m_h$ denote the masses of $H^\pm$, $A^0$, $H^0$ and $h$, respectively. We introduce the following shorthand notations for later convenience: \begin{align} s_x = \sin x, \quad c_x = \cos x, \quad t_x = \tan x. \end{align} \subsection{Yukawa couplings in large $\tan\beta$ regime} From Eq.~(\ref{yuk}), we can extract the interaction terms of the Higgs boson mass eigenstates with the third-generation fermions and the muon as follows: \begin{align} {\mathcal L}_\text{int} &= -\sum_{f=t,b,\tau}\frac{m_f}{v}\left[ \left(s_{\beta-\alpha}+\frac{c_{\beta-\alpha}}{t_\beta}\right){\overline f}fh +\left(c_{\beta-\alpha} - \frac{s_{\beta-\alpha}}{t_\beta}\right){\overline f}fH^0 +2i\frac{I_f}{t_\beta}{\overline f}\gamma_5fA^0\right]\notag\\ & -\frac{m_\mu}{v}\left[ \left(s_{\beta-\alpha}-t_\beta c_{\beta-\alpha}\right){\overline \mu}\mu h +\left(c_{\beta-\alpha} +t_\beta s_{\beta-\alpha} \right){\overline \mu}\mu H^0 +it_\beta {\overline \mu}\gamma_5 \mu A^0\right]\notag\\ &-\frac{\sqrt{2}}{v} \left\{\frac{1}{t_\beta}\left[\overline{t} \left( m_b \text{P}_R-m_t\, \text{P}_L\right)b\,H^+ + m_{\tau} \overline{\nu_\tau} \, P_R \, \tau \,H^+ \right] -t_\beta m_{\mu} \overline{\nu_\mu} \, P_R \, \mu \,H^+ +(h.c.) \right\}, \label{int} \end{align} where $P_{L}(P_{R})$ is the projection operator for left(right)-handed fermions and $I_f=+1/2 \, (-1/2)$ for $f = t \, (b,\tau,\mu)$.
The masses of the fermions are given by \begin{align} m_\mu = \frac{v}{\sqrt{2}}\frac{y_\mu}{\sqrt{1+t_\beta^2}},~~ m_f = \frac{v}{\sqrt{2}}\frac{y_f t_\beta}{\sqrt{1+t_\beta^2}}~~~(f = t,b,\tau). \label{mass} \end{align} From Eq.~(\ref{int}), it is clear that only the muon couplings to the extra Higgs bosons are enhanced by taking large $\tan\beta$. In order to solve the muon $g-2$ anomaly, we need a large value of $\tan\beta$ to obtain significant loop effects of the extra Higgs bosons, as we will show in the next section. Let us discuss here how large a value of $\tan\beta$ we can take without spoiling perturbativity. From Eq.~(\ref{mass}) we obtain \begin{align} y_\mu = \frac{\sqrt{2}m_\mu}{v} \sqrt{1 + t^2_\beta} \simeq 0.6 \left(\frac{t_\beta}{1000} \right). \end{align} For example, $t_\beta \lesssim 5000$ for $y_{\mu} \lesssim 3$. Clearly from Eq.~(\ref{mass}), all the other Yukawa couplings approach the corresponding SM values at large $\tan\beta$, so they do not violate perturbativity in this limit. \subsection{Scalar quartic couplings in large $\tan\beta$ regime} Next, we discuss the behavior of the Higgs quartic couplings in the large $\tan\beta$ regime.
All these couplings (times $v^2$) can be rewritten in terms of the parameters shown in Eq.~(\ref{para}) as \begin{align} \lambda_1 v^2 =& \left( m_h^2 c_{\beta - \alpha}^2 + m_H^2 s_{\beta - \alpha}^2- M^2\right)t_\beta^2 + 2(m_H^2 - m_h^2)s_{\beta - \alpha} c_{\beta - \alpha}\, t_\beta + m_h^2 s_{\beta - \alpha}^2 + m_H^2 c_{\beta - \alpha}^2 , \label{lam1} \\ \lambda_2 v^2 =& m_h^2 s_{\beta - \alpha}^2 + m_H^2 c_{\beta - \alpha}^2 - 2(m_H^2 - m_h^2) \frac{c_{\beta - \alpha} s_{\beta - \alpha}}{t_\beta} +( m_h^2 c_{\beta - \alpha}^2 + m_H^2 s_{\beta - \alpha}^2- M^2 )\frac{1}{t_\beta^2} ,\\ \lambda_3 v^2 =&(m_H^2 - m_h^2) c_{\beta - \alpha} s_{\beta - \alpha} \left( t_\beta - \frac{1}{t_\beta} \right) + 2 m_{H^{\pm}}^2 - M^2 + (m_H^2 - m_h^2)(c_{\beta - \alpha}^2 - s_{\beta - \alpha}^2 ) ,\\ \lambda_{4}v^2 =& M^2 - m_A^2 - 2 (m_{H^{\pm}}^2 - m_A^2) ,\\ \lambda_{5}v^2 =& M^2 - m_A^2 . \label{lam5} \end{align} We find that in the large $\tan\beta$ regime, $\lambda_1$ and $\lambda_3$ can become very large because they contain terms proportional to $t_\beta^2$ and $t_\beta$, respectively, which spoils the validity of perturbative calculations. In order to keep $\lambda_1$ and $\lambda_3$ at reasonable values, we can choose $M^2$ and $s_{\beta - \alpha}$ so as to cancel the large contributions from the $t_\beta^2$ and $t_\beta$ terms as follows: \begin{align} M^2 & = m_h^2 c_{\beta - \alpha}^2 + m_H^2 s_{\beta - \alpha}^2 - 2 s_{\beta - \alpha} c_{\beta - \alpha} ( m_h^2 - m_H^2) \frac{1}{t_\beta} - X v^2 \frac{1}{t_\beta^2} , \label{eq:adjust_M2}\\ s_{\beta - \alpha} & = 1 , \label{eq:adjust_sBA} \end{align} where $X$ is an arbitrary number.
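This cancellation can be cross-checked numerically. The following minimal Python sketch (with illustrative input values, not fit results from this paper) evaluates $\lambda_1$ from Eq.~(\ref{lam1}) with $M^2$ fixed by Eq.~(\ref{eq:adjust_M2}) in the alignment limit, and confirms that no $t_\beta$-enhanced piece survives:

```python
# Illustrative inputs in GeV (not fit results from the text)
v, mh, mH = 246.0, 125.0, 600.0
tb, X = 3000.0, 0.7
sba, cba = 1.0, 0.0   # alignment limit, Eq. (eq:adjust_sBA)

# M^2 fixed as in Eq. (eq:adjust_M2)
M2 = (mh**2 * cba**2 + mH**2 * sba**2
      - 2.0 * sba * cba * (mh**2 - mH**2) / tb - X * v**2 / tb**2)

# lambda_1 from Eq. (lam1): the t_beta^2- and t_beta-enhanced pieces cancel
lam1 = ((mh**2 * cba**2 + mH**2 * sba**2 - M2) * tb**2
        + 2.0 * (mH**2 - mh**2) * sba * cba * tb
        + mh**2 * sba**2 + mH**2 * cba**2) / v**2

print(lam1, mh**2 / v**2 + X)   # both ~0.958, i.e. m_h^2/v^2 + X
```

The same exercise for $\lambda_2$--$\lambda_5$ reproduces the simplified expressions quoted in the text.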
It is worth noting that in the limit $s_{\beta - \alpha} \to 1$ (the so-called alignment limit~\cite{Gunion:2002zf}), the SM-like Higgs boson $h$ couplings to weak bosons $g_{hVV}^{}$ $(V=W,Z)$ and fermions $g_{hff}$ become equal to those of the SM Higgs boson at the tree level, because these are given by $g_{hVV}^{} = g_{hVV}^{\text{SM}} s_{\beta - \alpha}$, $g_{hff}^{} = g_{hff}^{\text{SM}} (s_{\beta - \alpha} + c_{\beta-\alpha}/t_\beta)$ ($f \neq \mu $) and $g_{h\mu\mu}^{} = g_{h\mu\mu}^{\text{SM}} (s_{\beta - \alpha} - c_{\beta-\alpha}t_\beta)$. Because no large deviation in the Higgs boson couplings from the SM prediction has been found in current LHC data~\cite{1606.02266}, our choice $s_{\beta - \alpha} =1$ is consistent with these measurements. After imposing Eqs.~\eqref{eq:adjust_M2} and \eqref{eq:adjust_sBA}, we find \begin{align} \lambda_1 =& \frac{m_h^2}{v^2} + X ,\\ \lambda_2 =& \frac{m_h^2}{v^2} + \frac{X}{t_\beta^4},\\ \lambda_{3} =& \frac{2 m_{H^{\pm}}^2 -2m_H^2 + m_h^2}{v^2} + \frac{X}{t_\beta^2} ,\\ \lambda_{4} =& \frac{m_H^2 + m_A^2 - 2m_{H^{\pm}}^2}{v^2} - \frac{X}{t_\beta^2} ,\\ \lambda_{5} =& \frac{m_H^2 - m_A^2}{v^2}- \frac{X}{t_\beta^2}. \label{lam55} \end{align} These $\lambda$'s are at most ${\cal O}(1)$ as long as we take $m_{H^\pm}^2 \sim m_{A}^2 \sim m_H^2$, so that we can still treat them perturbatively. We take $X = 0$ for simplicity in the following analysis. Constraints from perturbative unitarity are discussed in Sec.~\ref{sec:PU}. \section{Muon $g-2$ and Constraints on parameter space \label{sec:g2}} In this section, we discuss the muon $g-2$ anomaly and various constraints on the model parameters. \subsection{Muon $g-2$} In the scenario with $s_{\beta-\alpha} = 1$ discussed in the previous section, new contributions to $a_\mu \equiv (g-2)/2$ come purely from loops of $H^0$, $A^0$ and $H^\pm$, because the couplings of $h$ become exactly the same as those of the SM Higgs boson at the tree level.
One-loop diagram contributions to $\delta a_\mu \equiv a_\mu - a_\mu^{\text{SM}}$ from additional Higgs boson loops are calculated as~\cite{Dedes:2001nx} \begin{align} \delta a_\mu^H &= \frac{G_F m_\mu^2}{4\sqrt{2}\pi^2} t^2_\beta \left( \frac{c_{\beta - \alpha}}{t_\beta} + s_{\beta - \alpha} \right)^2 r_H^{} f_H(r_H^{}),\\ \delta a_\mu^A &= \frac{G_F m_\mu^2}{4\sqrt{2}\pi^2} t^2_\beta r_A f_A(r_A),\\ \delta a_\mu^{H^\pm} &= \frac{G_F m_\mu^2}{4\sqrt{2}\pi^2} t_\beta^2 r_{H^\pm} f_{H^\pm}(r_{H^\pm}), \end{align} where $r_{H,A,H^\pm} = m_\mu^2 / m_{H,A,H^\pm}^2$ and \begin{align} f_H(r) &= \int_0^1 dx \frac{x^2(2-x)}{r x^2 - x + 1}, \\ f_A(r) &= \int_0^1 dx \frac{-x^3}{r x^2 - x + 1}, \\ f_{H^\pm}(r) &= \int_0^1 dx \frac{-x^2(1-x)}{r x^2 + (1-r) x}. \end{align} For $r_{H,A,H^\pm} \ll 1$, we can approximate the above formulae as follows: \begin{align} \delta a_\mu^H &\simeq \frac{G_F m_\mu^2}{4\sqrt{2}\pi^2}t_\beta^2 \left(s_{\beta - \alpha} + \frac{c_{\beta - \alpha}}{t_\beta} \right)^2 \frac{m_\mu^2}{m_H^2} \left( - \frac{7}{6} - \ln \frac{m_\mu^2}{m_H^2} \right) ,\\ \delta a_\mu^A &\simeq \frac{G_F m_\mu^2}{4\sqrt{2}\pi^2} t_\beta^2 \frac{m_\mu^2}{m_A^2} \left( \frac{11}{6} + \ln \frac{m_\mu^2}{m_A^2} \right) ,\\ \delta a_\mu^{H^\pm} &\simeq \frac{G_F m_\mu^2}{4\sqrt{2}\pi^2} t_\beta^2 \frac{m_\mu^2}{m_{H^\pm}^2} \left( -\frac{1}{6} \right). \end{align} We here briefly mention the contribution from two-loop Barr-Zee diagrams~\cite{BZ1,BZ2}. In the Type-II and Type-X THDMs, the Barr-Zee diagrams also give important contributions, because the tau and/or bottom Yukawa couplings to the additional Higgs bosons can be enhanced by $\tan\beta$. As a result, these two-loop contributions can be comparable to the one-loop diagrams. However, in the present model, both the tau and bottom Yukawa couplings are suppressed by $\cot\beta$, as seen in Eq.~(\ref{int}). Therefore, the contribution from two-loop diagrams is simply suppressed by the loop factor, so it cannot be important.
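The small-$r$ behavior of the loop functions can be verified by direct numerical integration. The sketch below uses a pure-Python composite Simpson rule and an illustrative value $r=10^{-4}$ (realistic values of $m_\mu^2/m_{H,A,H^\pm}^2$ are even smaller):

```python
import math

def simpson(f, a=0.0, b=1.0, n=200000):
    # composite Simpson rule on a uniform grid; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2.0 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3.0

r = 1.0e-4   # r = m_mu^2 / m_phi^2 (illustrative value only)

f_H = simpson(lambda x: x**2 * (2.0 - x) / (r * x**2 - x + 1.0))
f_A = simpson(lambda x: -x**3 / (r * x**2 - x + 1.0))
# the H^+- integrand simplifies after cancelling one power of x
f_Hp = simpson(lambda x: -x * (1.0 - x) / (r * x + 1.0 - r))

print(f_H, -7.0 / 6.0 - math.log(r))    # both ~ 8.04
print(f_A, 11.0 / 6.0 + math.log(r))    # both ~ -7.38
print(f_Hp, -1.0 / 6.0)                 # both ~ -0.167
```

The agreement only improves for the smaller, realistic values of $r$, where the logarithms dominate.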
We thus only consider the one-loop diagrams for the muon $g-2$. Numerical results for $\delta a_\mu$ are shown in Fig.~\ref{fig:g-2_and_constraints} on the $m_H^{}$--$\tan\beta$ plane. The blue and cyan regions show the regions of parameter space where we can explain the muon $g-2$ within 1$\sigma$ and 2$\sigma$, respectively. Here, we consider the case in which $H^{0}$ is the lightest of all the additional Higgs bosons, and we display three cases for the mass difference between $m_H^{}$ and $m_A^{}(=m_{H^\pm}^{})$: 80 (left), 90 (center) and 100 (right) GeV. We can see that the prediction for $\delta a_\mu$ does not change much among these three cases. We find that the muon $g-2$ discrepancy is reduced to the 1$\sigma$ level by taking, e.g., $m_H^{}=300\,(600)$ GeV with $\tan\beta=1000\,(3000)$. \begin{figure}[tb] \includegraphics[width=0.31\hsize]{./graphs/EWPT_PU_80-crop.pdf} \quad \includegraphics[width=0.31\hsize]{./graphs/EWPT_PU_90-crop.pdf} \quad \includegraphics[width=0.31\hsize]{./graphs/EWPT_PU_100-crop.pdf} \quad \caption{ Regions where the prediction for the muon $g-2$ is consistent with the measurement within 1$\sigma$ (blue) and 2$\sigma$ (cyan). The green (darker green) region indicates a cutoff scale below 100 (10)~TeV from the perturbative unitarity bound (see Sec.~\ref{sec:PU}). The red region indicates a cutoff scale below 10~TeV from the vacuum stability bound (see Sec.~\ref{sec:PU}). The gray region is excluded by the electroweak precision measurements at 95\% CL (see Sec.~\ref{sec:ewpt}). } \label{fig:g-2_and_constraints} \end{figure} \subsection{Constraints on scalar quartic couplings} \label{sec:PU} The scalar quartic couplings $\lambda_1$--$\lambda_5$ in the Higgs potential can be constrained by taking into account the following theoretical arguments. Such constraints can be translated into bounds on the physical Higgs boson masses and mixing angles via Eqs.~(\ref{lam1})--(\ref{lam5}).
First, the Higgs potential must be bounded from below in any direction of the scalar field space. A sufficient condition to guarantee the vacuum stability is given by~\cite{PHRVA.D18.2574,Sher:1988mj,PHLTA.B449.89,PHLTA.B471.182} \begin{align} \lambda_1>0, \quad \lambda_2>0, \quad \sqrt{\lambda_1\lambda_2}+\lambda_3+\text{MIN}(0,\lambda_4+\lambda_5,\lambda_4-\lambda_5)>0. \label{eq:stability} \end{align} Next, perturbative unitarity requires that the $s$-wave amplitude matrices for elastic $2 \to 2$ scattering processes of scalar bosons be small enough that $S$-matrix unitarity is satisfied. This perturbative unitarity condition is expressed as \begin{align} |a_{i,\pm}^0|\leq\frac{1}{2}, \label{pv1} \end{align} where $a_{i,\pm}^0$ are the eigenvalues of such $s$-wave amplitude matrices. In the CP-conserving THDMs, these eigenvalues are given by~\cite{PHLTA.B313.155,hep-ph/0006035,PHRVA.D72.115010,Kanemura:2015ska}: \begin{align} a_{1,\pm}^0 &= \frac{1}{32\pi} \left[3(\lambda_1+\lambda_2)\pm\sqrt{9(\lambda_1-\lambda_2)^2+4(2\lambda_3+\lambda_4)^2}\right],\\ a_{2,\pm}^0 &= \frac{1}{32\pi}\left[(\lambda_1+\lambda_2)\pm\sqrt{(\lambda_1-\lambda_2)^2+4\lambda_4^2}\right],\\ a_{3,\pm}^0 &= \frac{1}{32\pi}\left[(\lambda_1+\lambda_2)\pm\sqrt{(\lambda_1-\lambda_2)^2 + 4\lambda_5^2} \right],\\ a_{4,\pm}^0 &= \frac{1}{16\pi}(\lambda_3+2\lambda_4 \pm \lambda_5),\\ a_{5,\pm}^0 &= \frac{1}{16\pi}(\lambda_3\pm\lambda_4),\\ a_{6,\pm}^0 &= \frac{1}{16\pi} (\lambda_3 \pm \lambda_5). \label{pv2} \end{align} We impose the above two conditions, given in Eqs.~(\ref{eq:stability}) and (\ref{pv1}), at an arbitrary energy scale $\mu$. In this case, all the scalar quartic couplings $\lambda_1$--$\lambda_5$ should be understood as functions of $\mu$, whose energy dependence is determined by solving the renormalization group equations. In addition, we require that no Landau pole appears up to a certain energy scale, and we call this the triviality bound.
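As an illustration, the following sketch evaluates the eigenvalues listed above for a sample point (degenerate additional Higgs bosons at 400 GeV, alignment limit, $X=0$; values chosen purely for illustration), with the quartic couplings taken from Eqs.~(\ref{lam1})--(\ref{lam55}):

```python
import math

v, mh = 246.0, 125.0
mH = mA = mHpm = 400.0   # illustrative degenerate heavy spectrum
# alignment limit with X = 0: quartic couplings from the simplified relations
l1 = l2 = mh**2 / v**2
l3 = (2.0 * mHpm**2 - 2.0 * mH**2 + mh**2) / v**2
l4 = (mH**2 + mA**2 - 2.0 * mHpm**2) / v**2
l5 = (mH**2 - mA**2) / v**2

def pm(f):
    # evaluate an expression for both signs
    return [f(+1.0), f(-1.0)]

# s-wave eigenvalues a_{i,+-}^0 as listed in the text
a = pm(lambda s: (3.0 * (l1 + l2) + s * math.sqrt(9.0 * (l1 - l2)**2 + 4.0 * (2.0 * l3 + l4)**2)) / (32.0 * math.pi))
a += pm(lambda s: ((l1 + l2) + s * math.sqrt((l1 - l2)**2 + 4.0 * l4**2)) / (32.0 * math.pi))
a += pm(lambda s: ((l1 + l2) + s * math.sqrt((l1 - l2)**2 + 4.0 * l5**2)) / (32.0 * math.pi))
a += pm(lambda s: (l3 + 2.0 * l4 + s * l5) / (16.0 * math.pi))
a += pm(lambda s: (l3 + s * l4) / (16.0 * math.pi))
a += pm(lambda s: (l3 + s * l5) / (16.0 * math.pi))

# vacuum stability condition, Eq. (eq:stability)
stable = l1 > 0 and l2 > 0 and math.sqrt(l1 * l2) + l3 + min(0.0, l4 + l5, l4 - l5) > 0

print(max(abs(x) for x in a), stable)   # ~0.026, True: safely within |a| <= 1/2
```

At the electroweak scale such a point is safe; the nontrivial constraints arise only after RG running of the couplings, as described below.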
From the above considerations, we define the cutoff scale of the theory $\Lambda_{\text{cutoff}}$ as the lowest energy scale at which one of the three conditions, i.e., the perturbative unitarity, vacuum stability or triviality bound, ceases to be satisfied. The renormalization group equations are expressed by a set of $\beta$-functions for the dimensionless parameters, defined by \begin{align} \mu \frac{d}{d\mu} c = \frac{1}{(4\pi)^2} \beta_{c}. \end{align} We calculate the $\beta$-functions by using \texttt{SARAH}~\cite{1309.7223}. They are approximately given as follows: {\allowdisplaybreaks \begin{align} \beta_{g_1} \simeq& 7 g_{1}^{3}, \\ \beta_{g_2} \simeq& -3 g_{2}^{3}, \\ \beta_{g_3} \simeq& -7 g_{3}^{3}, \\ \beta_{\lambda_1} \simeq& +\frac{3}{4} g_{1}^{4} +\frac{3}{2} g_{1}^{2} g_{2}^{2} +\frac{9}{4} g_{2}^{4} -3 g_{1}^{2} \lambda_1 -9 g_{2}^{2} \lambda_1 +12 \lambda_{1}^{2} +4 \lambda_{3}^{2} +4 \lambda_3 \lambda_4 +2 \lambda_{4}^{2} +2 \lambda_{5}^{2} \nonumber \\ &+4 \lambda_1 y_\mu^2 -4 y_\mu^4, \\ \beta_{\lambda_2} \simeq& +\frac{3}{4} g_{1}^{4} +\frac{3}{2} g_{1}^{2} g_{2}^{2} +\frac{9}{4} g_{2}^{4} -3 g_{1}^{2} \lambda_2 -9 g_{2}^{2} \lambda_2 +12 \lambda_{2}^{2} +4 \lambda_{3}^{2} +4 \lambda_3 \lambda_4 +2 \lambda_{4}^{2} +2 \lambda_{5}^{2} \nonumber \\ & +12 \lambda_2 y_t^2 -12 y_t^4, \\ \beta_{\lambda_3} \simeq& \lambda_3 \left( 2 y_\mu^2 +6 y_t^2 -3 g_{1}^{2} -9 g_{2}^{2} +6 \lambda_1 +6 \lambda_2 +4 \lambda_{3} \right) \nonumber\\ & +\frac{3}{4} g_{1}^{4} -\frac{3}{2} g_{1}^{2} g_{2}^{2} +\frac{9}{4} g_{2}^{4} +2 \lambda_1 \lambda_4 +2 \lambda_2 \lambda_4 +2 \lambda_{4}^{2} +2 \lambda_{5}^{2} , \\ \beta_{\lambda_4} \simeq & 3 g_{1}^{2} g_{2}^{2} +8 \lambda_{5}^{2} + \lambda_4 \left( 2 \lambda_1 +2 \lambda_2 +8 \lambda_3 +4 \lambda_{4} -3 g_{1}^{2} -9 g_{2}^{2} +2 y_\mu^2 +6 y_t^2 \right) ,\\ \beta_{\lambda_5} \simeq & \lambda_5 \left( 2 \lambda_1 +2 \lambda_2 +8 \lambda_3 +12 \lambda_4 -3 g_{1}^{2} -9 g_{2}^{2} +2 y_\mu^2 +6 y_t^2 \right), \\ \beta_{y_t}
\simeq& \frac{9}{2} y_t^3 +y_t \Big( -8 g_{3}^{2} -\frac{17}{12} g_{1}^{2} -\frac{9}{4} g_{2}^{2} \Big), \\ \beta_{y_\mu} \simeq& \frac{5}{2} y_\mu^3 - \frac{3}{4} y_\mu \Big(5 g_{1}^{2} + 3 g_{2}^{2} \Big). \end{align}} Here, we take into account the $y_t$ and $y_\mu$ dependence, while all the other Yukawa couplings are neglected because of their smallness. In addition, we ignore higher-loop contributions. In Fig.~\ref{fig:g-2_and_constraints}, we show the dependence of $\Lambda_{\text{cutoff}}$ on the $m_H^{}$--$\tan\beta$ plane. The regions filled in green (darker green) indicate $\Lambda_{\text{cutoff}}\leq 100$ (10) TeV due to the perturbative unitarity bound or the triviality bound. In addition, the regions filled in red indicate $\Lambda_{\text{cutoff}}\leq 10$ TeV due to the vacuum stability condition. If we assume that the model is valid up to 10~TeV and explains the muon $g-2$ within 1$\sigma$, then the mass of $H^0$ should be smaller than 800~GeV. \subsection{Constraints from the electroweak precision measurements \label{sec:ewpt}} The oblique $S$, $T$ and $U$ parameters introduced by Peskin and Takeuchi~\cite{PRLTA.65.964,PHRVA.D46.381} provide a convenient formalism to discuss the constraints on model parameters from electroweak precision measurements. However, we cannot simply apply this formalism to our model, because those parameters are formulated under the assumption that new particles do not give sizable direct corrections to light fermion (including the muon) scattering processes $f_1\bar{f}_2 \to f_3\bar{f}_4$ through vertex corrections and wave function renormalizations. The other assumption is that the new physics scale is sufficiently higher than the electroweak scale. In our setup, neither of these assumptions can be justified. Hence we need to modify the formulation with the $S$, $T$ and $U$ parameters by taking into account vertex corrections and wave function renormalizations.
By varying the four model parameters ($m_H$, $m_A$, $m_{H^{\pm}}$, $t_\beta$), we find the minimum value of $\chi^2$ to be $\chi_{\text{min.}}^2 = 23.7587$, which is obtained at ($m_H$, $m_A$, $m_{H^{\pm}}$, $t_\beta$) = (59.4~GeV, 398~GeV, 402~GeV, 686). We calculate $\Delta \chi^2 \equiv \chi^2 - \chi_{\text{min.}}^2$ by varying $m_H$ and $t_\beta$ with fixed values for $m_A$ and $m_{H^{\pm}}$. The result is shown in Fig.~\ref{fig:g-2_and_constraints}, where the gray region is excluded at 95\% CL. The details of our analysis are given in Appendix \ref{sec:EWPTapp}. \subsection{Constraints and signatures at the LHC experiment\label{sec:LHC}} Finally, we discuss the constraints on parameters from current LHC data. In our model, the quark Yukawa couplings to the additional Higgs bosons are highly suppressed by $\cot\beta$ in the large $\tan\beta$ regime. Therefore, the additional neutral Higgs bosons $A^0$ and $H^0$ cannot be produced via the gluon fusion process: $gg \to A^0/H^0$. For the same reason, the $gb \to tH^-$ process for $H^\pm$ production also does not work. Moreover, the vector boson fusion process $qQ \to q'Q'H^0$ is negligible, because the $H^0VV$ couplings are proportional to $c_{\beta-\alpha}$. As a result, the main production mode for these Higgs bosons is pair production via $s$-channel exchange of a virtual gauge boson: \begin{align} pp \to Z^* \to H^0 A^0,~~pp \to W^* \to H^\pm A^0/H^\pm H^0,~~pp \to \gamma^*/Z^* \to H^+ H^-. \label{prod} \end{align} Because of the muon-specific nature of the couplings, the decay branching ratios for $H^{0}$, $A^0$, and $H^\pm$ with the parameter choice in Fig.~\ref{fig:g-2_and_constraints} are given as follows: \begin{align} \text{Br}(H^{0} &\to \mu \bar{\mu}) \simeq 1, \\ \text{Br}(A^{0} &\to \mu \bar{\mu}) + \text{Br}(A^{0} \to H^0 Z) \simeq 1,\\ \text{Br}(H^{-} &\to \mu \bar{\nu}_{\mu}) + \text{Br}(H^{-} \to H^0 W^{-}) \simeq 1.
\end{align} The relative magnitude between the above two branching ratios of $A^0$ and that of $H^\pm$ mainly depends on the values of $t_\beta$ and the mass difference between $m_H^{}$ and $m_{H^\pm}^{}$. For example, we obtain $\text{Br}(A^{0} \to \mu \bar{\mu})$ and $\text{Br}(H^{-} \to \mu \bar{\nu}_{\mu})$ to be about 89(99.1)\% and 96(99.7)\% for $m_{H}^{}=300(600)$ GeV, $m_{H^\pm}^{}- m_H^{} = 100$ GeV and $t_\beta = 1000(3000)$, respectively. Therefore, the collider signature of the model is multi-muon final states. \begin{table} \centering \caption{The parameter points that we investigate. $\sigma_{13\text{TeV}}^{}$ is defined in Eq.~(\ref{13tev}). $N_{\mu\text{-THDM}}$ is the expected signal event numbers in the last bin of Fig.~2(b) in Ref.~\cite{CMS:2017wua}. ${\cal L}_{3\sigma}$ is the integrated luminosity at which we can expect 3$\sigma$ deviation from the SM prediction if we apply the same analysis as Ref.~\cite{CMS:2017wua}. The data points with ``-'' in the last column are already excluded. } \label{tab:LHC} \begin{tabular}{|c|c|c||c|c|c|} \hline $m_{H^0}$ [GeV] & $m_{A^{0}}(=m_{H^{\pm}})$ [GeV] & $\tan\beta$ & $\sigma_{13\text{TeV}}$ [fb] & $N_{\mu\text{-THDM}}$ & ${\cal L}_{3\sigma}$ [fb$^{-1}$] \\ \hline 600 & 700 & 3000 & 0.41 & 6.6 & -\\ 620 & 710 & 3000 & 0.369 & 5.9 & -\\ 640 & 730 & 3100 & 0.316 & 5.2 & 44\\ 660 & 750 & 3300 & 0.2707 & 4.5 & 58\\ 680 & 770 & 3400 & 0.2334 & 3.9 & 75\\ 700 & 790 & 3700 & 0.20 & 3.4 & 97\\ \hline \end{tabular} \end{table} We show the production cross sections in some parameter points given in Table~\ref{tab:LHC}. 
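To illustrate how event counts such as the $N_{\mu\text{-THDM}}$ column of Table~\ref{tab:LHC} translate into exclusions, one can run a toy single-bin CLs computation with the observed and expected-background counts quoted below in the text (3 and 3.5 events). This sketch ignores systematic uncertainties and shape information, so its exclusion boundary differs somewhat from that of the full analysis:

```python
import math

def pois_cdf(n, mu):
    # P(N <= n) for a Poisson distribution with mean mu
    return math.exp(-mu) * sum(mu**k / math.factorial(k) for k in range(n + 1))

def cls(s, b=3.5, n_obs=3):
    # single-bin CLs = CL_{s+b} / CL_b, with the observed count as test statistic
    return pois_cdf(n_obs, s + b) / pois_cdf(n_obs, b)

# signal yields taken from the N_mu-THDM column of Table (tab:LHC)
for s in (6.6, 5.9, 5.2, 3.4):
    print(s, round(cls(s), 3))
```

Larger signal yields give smaller CLs values; in this toy version the heaviest-yield points fall below the conventional 0.05 exclusion threshold, consistent with the exclusions quoted in the table.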
Here, the production cross section is defined as the sum of all the modes given in Eq.~(\ref{prod}) at 13~TeV, \begin{align} \sigma_{13\text{TeV}} \equiv& \sum_{X = A^0, H^{\pm}} \sigma(pp \to H^0 X) + \sum_{Y = H^{\pm}} \sigma(pp \to A^0 Y) + \sigma(pp \to H^+ H^-).\label{13tev} \end{align} We generate \texttt{UFO} files~\cite{1108.2040} by using \texttt{FeynRules 2.3.3}~\cite{1310.1921}, and use \texttt{MadGraph 5}~\cite{1106.0522} to estimate the production cross sections. Signal events are simulated by using \texttt{MadGraph 5}, \texttt{PYTHIA 6.428}~\cite{hep-ph/0603175}, and \texttt{DELPHES 3.3.3}~\cite{1307.6346}. We compare the number of events predicted in our model with that of the CMS result for the multi-lepton signal search at 13~TeV with 35.9~fb$^{-1}$ data~\cite{CMS:2017wua}. We find that the last bin of Fig.~2(b) in Ref.~\cite{CMS:2017wua} gives the most stringent bound on the mass of $H^0$, because our model predicts three-muon final states with large $p_T$, e.g., via $pp \to H^0H^\pm \to \mu^+\mu^-\mu^\pm\nu$. The observed (expected) background event number in the bin is 3 (3.5). The expected signal event numbers at several parameter points are shown in Table~\ref{tab:LHC}. We use the CLs method~\cite{hep-ex/9902006,Read:2000ru,Read:2002hq}, and find that the region with $m_{H}^{} \lesssim 640$ GeV is excluded at 95\% CL. We also show the integrated luminosity required to give a 3$\sigma$ deviation from the SM expectation for each parameter point. We can see that the allowed parameter points ($m_{H}^{} \geq 640$ GeV) could give a 3$\sigma$ deviation during the LHC Run 2 experiment. \section{Conclusions \label{sec:conclusion}} We have investigated a new type of THDM, the $\mu$THDM, as a solution of the muon $g-2$ anomaly.
Unlike the other THDMs with a softly-broken $Z_2$ symmetry, this model predicts that only the muon couplings to the additional Higgs bosons are enhanced by $\tan\beta$, while all the other SM fermion couplings to them are suppressed by $\cot\beta$. Thanks to this coupling property, the $\mu$THDM can avoid the strong constraint from the leptonic $\tau$ decay, in contrast to the Type-X THDM, which cannot explain the muon $g-2$ within the $1\sigma$ level due to this constraint. We find that the $\mu$THDM can explain the muon $g-2$ within the 1$\sigma$ level while satisfying constraints from perturbative unitarity, vacuum stability, electroweak precision measurements, and current LHC data. We have found that large $\tan\beta$ is required to solve the muon $g-2$ anomaly within the $1\sigma$ level. Its typical value is ${\cal O}(1000)$, with the masses of the additional Higgs bosons in the range of 100--1000 GeV. A large $\tan\beta$ is equivalent to a large muon Yukawa coupling, $y_{\mu} \sim {\cal O}(1)$. In order to see the effect of such a large Yukawa coupling, we have studied the constraints from the perturbative unitarity and the vacuum stability conditions. We have found that the smaller mass regime for the additional Higgs bosons is preferable. For example, if we require the cutoff scale of this model to be above 10~TeV, $H^0$ should be lighter than 800 GeV in the case of $m_A^{}=m_{H^\pm}^{} = m_H^{}+90$ GeV and $\sin(\beta-\alpha) = 1$. Another consequence of the large Yukawa coupling is multi-muon final states at the LHC. We have found that the region with $m_H^{} \lesssim 640$ GeV is excluded at 95\% CL by LHC data at 13~TeV collision energy with 35.9~fb$^{-1}$ of integrated luminosity. From these constraints, we conclude that the cutoff scale of the $\mu$THDM is higher than 10~TeV but has to be lower than 100~TeV if the model solves the muon $g-2$ anomaly at the 1$\sigma$ level.
Finally, we briefly discuss how to weaken the constraint from the multi-muon signature at the LHC and make the cutoff scale higher. One possible way is to add neutral and stable particles which couple to the additional Higgs bosons. Then new decay modes of the additional Higgs bosons can open up, and the rate of the multi-muon final state can be reduced. Another way is to embed this model into the context of composite THDMs~\cite{1105.5403,1612.05125}, whose typical cutoff scale is around 10~TeV. In that case, the model should emerge from (unknown) UV dynamics. \section*{Acknowledgments} We would like to thank Howard E.~Haber and Pedro Ferreira for their comments. We also thank Mihoko M.~Nojiri and Michihisa Takeuchi for their comments on LHC phenomenology. This work was supported by JSPS KAKENHI Grant Number 16K17715 [TA].
\section{Introduction} \label{sec:intro} Modern cosmology is becoming more and more mature as accumulating observational data become available. However, we still fundamentally lack an understanding of the physical nature of the dark components introduced to explain the dominant source of gravity that gathers material to form rich structures in the late universe (dark matter) and the accelerating cosmic expansion (dark energy), which together fill the majority of the cosmological energy budget. Aiming at gaining further insight into those substances, a growing number of large-scale observational programs are ongoing and planned \citep[e.g.][]{2014PASJ...66R...1T,laureijs2011,2009arXiv0912.0201L,DESI}. Of crucial importance from the theoretical point of view is our ability to prepare an accurate model template with which one can confront such observational data for their interpretation. Since a larger survey means a smaller statistical error, the relative contribution of the systematic error arising from the inaccuracy of the template becomes more important. Given the gigantic area coverage and depth of ambitious future programs, a truly accurate theoretical framework for predicting the observed large-scale structure is necessary for these surveys to attain their full potential in inferring the underlying theory governing the universe. One of the most difficult aspects of large-scale structure prediction is the complicated relation between the matter density fluctuations dominated by invisible dark matter and visible structures such as galaxies \cite{kaiser84}. The so-called galaxy bias cannot be predicted from first principles, unless one can model all the baryonic effects relevant for the formation and evolution of galaxies. While hydrodynamical simulations might be one way to proceed, the vast dynamical range, from kpc to Gpc in length scale, is a big obstacle.
Typically, one comes up with empirical subgrid models and calibrates them against observed statistics of galaxies \citep[see e.g., ][for recent attempts]{2014Natur.509..177V,2014MNRAS.445..175G,Vogelsberger_2014,2015MNRAS.450.1937C,2015MNRAS.446..521S,2014MNRAS.444.1453D,2018MNRAS.475..676S,2019ComAC...6....2N}. Alternatively, one can formulate the statistical properties of galaxies on large scales via a perturbative expansion in which poorly known galaxy physics is parameterized by a set of effective bias operators. The strength of these operators is controlled by free coefficients, which should be treated as nuisance parameters. The recently developed Effective Field Theory of Large Scale Structure (EFTofLSS) provides a systematic way to derive all possible operators and corresponding bias coefficients that are allowed by symmetry \citep{McDonald:2009dh,baumann12,carrasco12,Assassi:2014fva,Senatore:2014via,Senatore:2014eva,Senatore:2014vja,Lewandowski:2014rca,Lewandowski:2015ziq} \citep[also see][for a review]{Desjacques18}. Since this approach, in principle, does not assume any specific model of galaxy formation, it provides us with a conservative theoretical model for the galaxy density and velocity fields on large scales. The generality of the effective field theory approach comes at the price of having to marginalize over many free coefficients, which can compromise cosmological constraints. These constraints can become weaker than those from other theoretical templates in which a specific bias prescription is employed, such as halo-model approaches. The detailed balance between the robustness and the tightness of the cosmological constraints has been addressed in recent studies \citep[e.g.,][]{Hand:2017ilm,2020PhRvD.101b3510K,Osato_2019}. There are several non-trivial choices behind the application of the EFT to the data. First, one should determine the wavenumber up to which the EFT calculation at a chosen perturbative order is reliable.
This data cut should be carefully tested to avoid biased parameter estimates. Then, one has to decide how many nuisance parameters to keep in the fit (there are about 10 at the one-loop order) and what priors to use. Indeed, at the power spectrum level many EFT operators are degenerate with one another. Thus, one has to accurately determine their principal components to make the cosmological analysis efficient. All these subtleties should be examined and validated in a transparent manner to convince the community of the robustness of the EFT approach. To that end, in this paper, we conduct a first \textit{blind} test of EFTofLSS for the clustering of galaxies in redshift space. Two independent groups, which will be referred to as ``West Coast'' (D'Amico, Senatore and Zhang) and ``East Coast'' (Ivanov, Simonovi\'c and Zaldarriaga), have analyzed the mock data generated by yet another group (Nishimichi and Takada, simply ``Japan Team'' hereafter). In this process, the true cosmological parameters used to generate the simulation mock data were known only to the Japan Team. The two analyzing teams participated in the challenge on the condition that the results would be published regardless of the outcome, and that the pipelines could not be modified after unblinding. We present these results in this paper in their original form. To complement the results of the blinded analysis and to gain more insight into the origin of the cosmological information, we briefly discuss post-unblinding analyses. The layout of this paper is as follows. We first describe the design of our mock challenge program in Sec.~\ref{sec:challenges}. We then specify the mock simulations in Sec.~\ref{sec:mock}. The theoretical template and the method to conduct parameter inference are explained in Sec.~\ref{sec:theory}. Then the results of the blinded analysis are summarized in Sec.~\ref{sec:res_blind}. We conclude this study in Sec.~\ref{sec:conclusion}.
\section{Design of Blinded Cosmology Challenge} \label{sec:challenges} Throughout this paper, we consider a flat $\Lambda$CDM cosmology. This is motivated by the recently claimed tension in the values of the Hubble parameter, one from local measurements such as the distance ladder, and the other from the Cosmic Microwave Background (CMB) assuming a flat $\Lambda$CDM model \citep[see][and references therein]{2019NatRP...2...10R}. In such a situation, a robust measurement from other independent observational channels would be important, and indeed, galaxy clustering, when the full-shape information of its spectra is analysed, has been shown to serve as such a probe \citep{DAmico:2019fhj,Ivanov:2019pdj,Colas:2019ret,2020A&A...633L..10T}. Also important might be a similar, but weaker, tension in the amplitude of the density fluctuations in the current universe \citep{Hildebrandt17,2018PhRvD..98d3526A,Hikage19}, which is known to be degenerate with the matter density parameter in late-time observables. Through this challenge, we wish to demonstrate the current status of galaxy-clustering analyses, in particular those based on an EFT approach to describing the nonlinear nature of the cosmological large-scale structure. \subsection{Cosmological parameters} \label{subsec:cosmo} To assess the reliability of the galaxy-clustering analyses within the flat $\Lambda$CDM model, three cosmological parameters, $\ln(10^{10}A_\mathrm{s})$, $\Omega_\mathrm{m}$ and $H_0$, are randomly drawn from independent normal distributions. These parameters are the logarithm of the amplitude of the primordial power spectrum at $k_0=0.05\,\mathrm{Mpc}^{-1}$, the matter density parameter at present and the current Hubble expansion rate in km/s/Mpc, respectively.
While the mean values of the normal distributions are set to be the best-fit values determined by the Planck satellite \cite{planck-collaboration:2015fj}, we adopt standard deviations four times larger than those of the same experiment in order to test the validity of the model in a broader parameter space. While all of the information above is shared among all the collaborators, the three random numbers drawn were kept only within the Japan Team until we finally unblinded them. On the other hand, we fix the baryon fraction, $f_\mathrm{b} = 0.1571$, and the spectral index, $n_\mathrm{s}=0.9649$. These values are shared with the two US teams. In typical current large-scale structure survey analyses, these two parameters are not very well determined due to the weak sensitivity of the target galaxy observables, unless one adds priors motivated by CMB observations and/or big-bang nucleosynthesis, while it would be possible to constrain them from futuristic galaxy surveys. Therefore, letting the US analysis teams know their exact values loosely corresponds to adding CMB priors\footnote{It is not trivial how one can best arrange a challenge where external prior information is added. To keep the analysis fully blinded, the Japan Team decided not to give any prior information to the analysis teams for the challenge presented in this paper.}. Further, for simplicity and to avoid the complication of dealing with massive neutrinos both in theory and in simulations, we set the neutrino masses to be exactly zero. Under the above settings, the linear matter-density transfer function is computed using the public Boltzmann solver \textsc{CAMB} \cite{camb}. The parameter file passed to this code by the Japan Team is provided to the US teams after the values of $\omega_\mathrm{b}$, $\omega_\mathrm{c}$, $H_0$ and $A_\mathrm{s}$ are erased. The main goal of the challenge is to infer the three cosmological parameters $A_\mathrm{s}$, $\Omega_\mathrm{m}$ and $H_0$. 
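The parameter draw described above can be sketched in a few lines. This is an illustrative reconstruction, not the Japan Team's code; the Planck-like means and 1-$\sigma$ widths below are stand-in values, and the actual draw (based on the Planck best fit) remains blinded:

```python
import numpy as np

# Illustrative sketch of the blinded parameter draw. The means and 1-sigma
# widths below are Planck-like stand-ins, NOT the values actually used.
rng = np.random.default_rng(2020)  # the true seed is, of course, blinded

means = {"ln10^10As": 3.09, "Omega_m": 0.316, "H0": 67.3}   # assumed means
sigma = {"ln10^10As": 0.034, "Omega_m": 0.009, "H0": 0.66}  # assumed widths

# Standard deviations are inflated by a factor of four, as described above.
blinded = {p: rng.normal(means[p], 4.0 * sigma[p]) for p in means}
```

The three draws are independent, so the joint prior volume is simply the product of the three widened normals.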
It was agreed among all the teams that, once these cosmological parameters are unblinded, the results reported by that time may not be modified any more. \subsection{Target observables} \label{subsec:obs} We focus on galaxy clustering in redshift space in the initial challenge presented in this paper. More specifically, we work in Fourier space and analyse the multipole moments of the galaxy power spectrum. This includes physical and observational effects such as Baryon Acoustic Oscillations (BAO; \cite{1970ApJ...162..815P,1970Ap&SS...7....3S,1984ApJ...285L..45B,1987MNRAS.226..655B,1989ApJS...71....1H}), redshift-space distortions (RSD; \cite{jackson72,kaiser87}) and the Alcock-Paczynski (AP; \cite{alcock79}) effect, where the AP effect is induced artificially by distorting the simulation boxes (see the next section for further detail). On top of these distinctive features, the mock data should contain cosmological information through the overall shape of the power spectra, which might be hindered by the presence of various nonlinear effects. The aim of this challenge is to assess how robustly one may extract the fundamental cosmological parameters within the flat $\Lambda$CDM framework. The Japan Team constructs mock galaxy catalogs and measures the multipole moments of the power spectra. To discriminate the \textit{systematic} error from the statistical error, this experiment is done in huge simulation volumes, much larger than the current surveys. The galaxy catalogs are constructed to roughly mimic the CMASS and the LOWZ catalogs from the 12th Data Release of the Sloan Digital Sky Survey \citep[][hereafter SDSS DR12]{2015ApJS..219...12A}. The details of these simulations will follow in the next section. 
Since the galaxy bias is formulated in the EFT to be as general as possible, based only on symmetry considerations without assuming any specific model with which galaxies are defined, the details of the mock galaxies should not have a significant impact on the blinded analysis as long as one sticks to an EFT approach. However, other approaches such as the halo model would be directly impacted by information on the exact procedure with which the mock galaxies are distributed within the simulation volume. Therefore, no further information on the mock galaxies detailed in the next section was provided to the US teams before unblinding. For completeness, the set of mock data as well as the information on the simulations provided to the US teams are summarised at a dedicated website (\url{http://www2.yukawa.kyoto-u.ac.jp/~takahiro.nishimichi/data/PTchallenge/}). All the data and the information were shared through this website. Interested readers may download the same set of data and participate in the blinded challenge by analysing the data using their own theoretical template, as the exact cosmological parameter values are shown neither in this paper nor on the website. \section{Generating mock redshift-space power spectra of BOSS-like galaxies} \label{sec:mock} The Japan Team works on the construction of mock galaxy catalogs and the measurement of the power spectra. The settings of the numerical simulations, the prescription for the mock galaxies and the analysis methods to determine their statistics are described in this section. \subsection{Specification of simulations} \label{subsec:simu} We follow the gravitational dynamics of ten random realizations of the matter density field expressed by $3,072^3$ mass elements sampled in comoving periodic cubes with the side length $L = 3,840\,h^{-1}\mathrm{Mpc}$. 
The total volume, $566\,(h^{-1}\mathrm{Gpc})^{3}$, is about a hundred times that of the CMASS and LOWZ samples from SDSS BOSS DR12, which together have a volume coverage of $5.7 \, (h^{-1}\mathrm{Gpc})^3$ \citep{2013AJ....145...10D}. The large volume of our simulations allows us to determine the statistics of the mock galaxies very precisely with little sample-variance error. Therefore, we can conduct a fairly stringent test of the systematic error due to an imperfect modeling of the target statistics. The initial conditions are generated with a code developed in \cite{nishimichi09} and then parallelized in \cite{Valageas11a} based on second-order Lagrangian Perturbation Theory (2LPT; \cite{scoccimarro98,crocce06a}). Following the result presented in \cite{DQ1}, the starting redshift of the simulations is set at $z=29$ to roughly optimize the total systematic error arising from the artificial growing mode due to the grid pre-initial condition \cite{Marcos06,Joyce07,Garrison16} and the truncation of the LPT at the second order, given the mean inter-particle distance of the simulations. We prepare ten independent random realizations, each of which is then evolved by the public Tree-Particle Mesh code \textsc{Gadget2}~\cite{gadget2} with $6,144^3$ grid points for the fast Fourier transform (FFT) and a tree softening length of $62.5\,h^{-1}\mathrm{kpc}$. The other simulation parameters that control the force accuracy as well as the time-stepping criteria are the same as in \cite{DQ1}. We store the particle snapshots at $z=3$, $2$, $1$, $0.61$, $0.51$ and $0.38$. We populate galaxies at the lowest three redshifts, and call the catalogs CMASS2 ($z=0.61$), CMASS1 ($z=0.51$) and LOWZ ($z=0.38$) in what follows. \subsection{Mock galaxy identification} \label{subsec:galaxies} After obtaining the particle snapshots, we run the \textsc{Rockstar} halo finder~\cite{Behroozi:2013}, which is based on a six-dimensional phase-space friends-of-friends algorithm. 
This code identifies not only isolated ``central'' halos but also ``satellite'' halos existing as substructures of more massive halos, without any distinction at first in the primary output files. For simplicity, we treat each of them irrespective of whether it is a central or a satellite halo and populate a galaxy only according to the virial mass assigned by \textsc{Rockstar}. We impose a soft cutoff on the virial mass to select massive halos, populating galaxies randomly with the probability \begin{eqnarray} P(M_\mathrm{vir}) = \frac{1}{2}\left[1+\mathrm{erf}\left(\frac{\log_{10}M_\mathrm{vir}-\log_{10}M_\mathrm{min}}{\sigma_{\log_{10}M}}\right)\right],\label{eq:HOD} \end{eqnarray} where ${\rm erf}(x)$ is the error function. We have two parameters, $\log_{10}M_\mathrm{min}$ and $\sigma_{\log_{10}M}$, which determine the typical minimum mass and the profile of the soft mass cutoff, respectively. We set $\log_{10} M_\mathrm{min} = 13.08$, $12.97$ and $12.95$ for LOWZ, CMASS1 and CMASS2, respectively ($M_{\rm min}$ is given in units of $h^{-1}M_\odot$), while the value of $\sigma_{\log_{10}M}$ is fixed to $0.35$ for all of the samples. These choices are made such that the resultant clustering signal of the mock galaxies, especially the amplitude of the power spectra at small $k$, becomes roughly consistent with the observation (see the next subsection for more detail). We assume that the populated mock galaxies are located at the center-of-mass position of the core particles determined by \textsc{Rockstar}. Similarly, we assign the center-of-mass velocities of the same core particles to the mock galaxies, which are used when we displace the positions of mock galaxies to redshift space \citep{2020PhRvD.101b3510K}. 
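Eq.~(\ref{eq:HOD}) amounts to an independent Bernoulli draw per (sub)halo. A minimal sketch of this selection (illustrative only; the halo masses below are synthetic, not from the simulations):

```python
import numpy as np
from scipy.special import erf

def p_populate(log10_mvir, log10_mmin, sigma_logm=0.35):
    """Soft mass-cutoff probability of Eq. (eq:HOD)."""
    return 0.5 * (1.0 + erf((log10_mvir - log10_mmin) / sigma_logm))

# Populate a toy halo catalog with the CMASS2 cut, log10(Mmin / h^-1 Msun) = 12.95.
rng = np.random.default_rng(1)
log10_mvir = rng.uniform(12.0, 15.0, size=100_000)  # synthetic halo masses
hosts_galaxy = rng.random(log10_mvir.size) < p_populate(log10_mvir, 12.95)
```

By construction, a halo exactly at $M_\mathrm{min}$ hosts a galaxy with probability $1/2$, and the transition width in $\log_{10}M_\mathrm{vir}$ is set by $\sigma_{\log_{10}M}$.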
\begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{PTchallenge_catalog_property.pdf} \end{center} \caption{The abundance of halos per unit logarithmic mass interval (upper) and the mean number of mock galaxies per halo (lower) as a function of the virial mass of halos. The means over the ten random realizations are shown at three output redshifts of the simulations as indicated by the figure legend.} \label{fig:HOD} \end{figure} We show the abundance of (central) halos as well as the mean number of galaxies per central halo as a function of the virial mass in Fig.~\ref{fig:HOD}. Here, we define ``central'' halos from the \textsc{Rockstar} catalog as those for which no more massive halo exists within a sphere of radius $R_{\rm vir}^{\rm cen}$, where $R_{\rm vir}^{\rm cen}$ is the virial radius of the central halo. Note that an isolated halo is also identified as a central halo according to this definition. On the other hand, the halos which reside around a more massive neighbor, within the neighbor's virial radius, are identified as ``satellite'' (sub)halos. The particular definition does not really affect our mock galaxy catalog due to our recipe (Eq.~\ref{eq:HOD}) to populate galaxies. The lower panel of Fig.~\ref{fig:HOD} shows the average number of mock galaxies in central halos, i.e., the halo occupation distribution (HOD), as a function of the central halo mass. Note that, unlike the standard HOD prescription, the HOD of our mock catalog is not given a priori, but rather is \textit{measured} from the mocks with the central/satellite split. Nevertheless, the shape of the HOD in our mock catalogs looks similar to what can be found in the literature, e.g., \citep{2011ApJ...728..126W,2015ApJ...806....2M}. 
Two regimes appear: halos around the soft cutoff near $M_{\rm vir}=10^{13}\,h^{-1}M_\odot$ host only one galaxy (i.e., a central galaxy), while massive halos above $10^{14}\,h^{-1}M_\odot$ receive a significant contribution from satellite galaxies, displaying a power-law-like form in the HOD. \subsection{Measurement of the mock signal and error} \label{subsec:measurement} Here we describe the method to measure the power spectra and estimate the data covariance from the mock galaxy catalogs. The measurement is based on an FFT of the density field. We first assign the mock galaxies in redshift space to $n_\mathrm{g}^3 = 2048^3$ grid points using the Cloud-in-Cell (CIC) interpolation scheme. We employ the distant-observer approximation in the mapping to redshift space. We follow Ref.~\cite{Sefusatti16} to correct for the aliasing effect \cite{jing05} by the so-called interlacing method. To do this, we prepare another density grid, with the mass assignment done after shifting the galaxy positions by half the grid size along all three Cartesian axes, and then correct for the phase shift by multiplying the field in Fourier space by an appropriate factor. By taking the average of the two density grids, the original and the interlaced, we can get rid of the aliasing effect due to the odd images, which are the dominant source of aliasing for standard cosmological power spectra with amplitudes decaying toward higher wavenumbers. The effect of the CIC window function will eventually be removed later in Eq.~(\ref{eq:estimator}). We then reinterpret the wavenumbers, taking into account the AP effect. 
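The interlacing step can be illustrated in one dimension as follows. This is a sketch, not the measurement code; the sign convention of the half-cell phase correction is one common choice:

```python
import numpy as np

def cic_assign_1d(x, n_grid, box):
    """Cloud-in-Cell assignment of particle positions x to a periodic 1D grid."""
    h = box / n_grid
    f = x / h
    i = np.floor(f).astype(int)
    w = f - i
    rho = np.zeros(n_grid)
    np.add.at(rho, i % n_grid, 1.0 - w)
    np.add.at(rho, (i + 1) % n_grid, w)
    return rho

def interlaced_density_k(x, n_grid, box):
    """Average the original and half-cell-shifted grids in Fourier space;
    the phase-corrected average cancels the odd aliased images."""
    h = box / n_grid
    rho1 = np.fft.rfft(cic_assign_1d(x, n_grid, box))
    rho2 = np.fft.rfft(cic_assign_1d((x + 0.5 * h) % box, n_grid, box))
    k = 2.0 * np.pi * np.fft.rfftfreq(n_grid, d=h)
    rho2 *= np.exp(1j * k * 0.5 * h)  # undo the half-cell shift
    return 0.5 * (rho1 + rho2)
```

The odd images pick up a phase $e^{i\pi(\text{odd})}=-1$ between the two grids and cancel in the average, while the even images (handled analytically in the shot-noise correction below) survive.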
Namely, we rescale the fundamental modes along each of the three axes as \begin{eqnarray} &&\tilde{k}_{\mathrm{f},x} = \tilde{k}_{\mathrm{f},y} = \frac{D_\mathrm{A}^\mathrm{(true)}(z)}{D_\mathrm{A}^\mathrm{(fid)}(z)}k_\mathrm{f},\nonumber\\ &&\tilde{k}_{\mathrm{f},z} = \frac{H^\mathrm{(fid)}(z)}{H^\mathrm{(true)}(z)}k_\mathrm{f},\label{eq:AP} \end{eqnarray} where $k_\mathrm{f} = 2\pi / L$ is the original fundamental mode in the absence of the AP effect. In the above, we take the $z$ direction in the simulation box as the line-of-sight direction, and the superscripts, (true) and (fid), indicate that the comoving angular diameter distance, $D_\mathrm{A}(z)$, or the Hubble expansion rate, $H(z)$, are calculated assuming the correct, blinded cosmological parameters, or fiducial cosmological parameters, respectively. Here, we adopt a flat $\Lambda$CDM cosmology with $\Omega_\mathrm{m}^\mathrm{(fid)}=0.3$ as the fiducial cosmology, and this information is shared with the two US analysis teams. The value of $\Omega_\mathrm{m}^\mathrm{(fid)}$ that was used to create the mock catalogs should not be confused with the true cosmological parameter $\Omega_\mathrm{m}$, which was used in the simulations and which was kept secret from the analyzing teams. 
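In code, the rescaling of Eq.~(\ref{eq:AP}) and the corresponding distorted volume $\tilde V=(2\pi)^3/(\tilde k_{\mathrm{f},x}\tilde k_{\mathrm{f},y}\tilde k_{\mathrm{f},z})$ amount to the following (a sketch; the function names are ours):

```python
import numpy as np

def ap_fundamental_modes(k_f, da_true, da_fid, h_true, h_fid):
    """Fundamental modes distorted by the AP effect, Eq. (eq:AP)."""
    k_perp = (da_true / da_fid) * k_f  # x and y directions
    k_para = (h_fid / h_true) * k_f    # line-of-sight (z) direction
    return k_perp, k_para

def ap_volume(box, da_true, da_fid, h_true, h_fid):
    """Distorted box volume, V~ = (2 pi)^3 / (kx * ky * kz)."""
    k_f = 2.0 * np.pi / box
    k_perp, k_para = ap_fundamental_modes(k_f, da_true, da_fid, h_true, h_fid)
    return (2.0 * np.pi) ** 3 / (k_perp * k_perp * k_para)
```

When the true and fiducial cosmologies coincide, the distortion drops out and the volume reduces to $L^3$.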
The Japan Team then estimates the first three non-zero multipole moments, the monopole ($\ell=0$), quadrupole ($\ell=2$) and hexadecapole ($\ell=4$), by taking weighted averages of the squared Fourier modes: \begin{eqnarray} \hat{P}_\ell (k_i) = \frac{2\ell+1}{N_i}\sum_{\tilde{\mbox{\boldmath$k$}} \in \mathrm{bin}\,i} \mathcal{P}_\ell(\mu_{\tilde{\mbox{\boldmath$k$}}})\hat{P}(\tilde{\mbox{\boldmath$k$}}),\label{eq:estimator} \end{eqnarray} with \begin{eqnarray} \hat{P}(\tilde{\mbox{\boldmath$k$}}) = \frac{\tilde{V}\left|\delta_{\tilde{\mbox{\boldmath$k$}}}\right|^2-\tilde{P}_\mathrm{shot}(\tilde{\mbox{\boldmath$k$}})}{W_\mathrm{CIC}^2(\tilde{\mbox{\boldmath$k$}})}, \end{eqnarray} where the distorted volume $\tilde{V}$ is given by \begin{eqnarray} \tilde{V} = \left(\frac{D_\mathrm{A}^\mathrm{(fid)}(z)}{D_\mathrm{A}^\mathrm{(true)}(z)}\right)^2\frac{H^\mathrm{(true)}(z)}{H^\mathrm{(fid)}(z)} \, L^3,\label{eq:AP_vol} \end{eqnarray} analogously to Eq.~(\ref{eq:AP}) to account for the AP effect, the summation runs over wavevectors $\tilde{\mbox{\boldmath$k$}}^{\mathrm{T}} = (\tilde{k}_{\mathrm{f},x} i_x, \tilde{k}_{\mathrm{f},y} i_y, \tilde{k}_{\mathrm{f},z} i_z)$ specified by an integer vector $(i_x,i_y,i_z)$, $\mathcal{P}_\ell$ denotes the $\ell$-th order Legendre polynomial, $\mu_{\tilde{\mbox{\boldmath$k$}}}$ is the cosine of the angle between the wavevector $\tilde{\mbox{\boldmath$k$}}$ and the $z$-direction, and $N_i$ stands for the number of Fourier modes contained in the $i$-th wavenumber bin. In the above, we have subtracted the shot noise, $\tilde{P}_\mathrm{shot}$, from the measured power spectrum\footnote{Notice that this contribution comes from the zero-lag correlator inherent in point processes and is thus exactly $1/n_\mathrm{g}$ for any tracer with a number density $n_\mathrm{g}$. However, on the modeling side, the stochastic contribution in galaxy spectra uncorrelated with large-scale density fluctuations is sometimes also referred to as the shot noise. 
In this definition, it is well known that the shot noise, i.e., the level of stochasticity, can deviate from the $1/n_\mathrm{g}$ Poissonian noise. While we omit this in the analyses shown in the main text, the possible impact of treating it as an additional free parameter is discussed in the Appendix}. We evaluate the shot noise taking into account the interlacing technique for the aliasing correction and the CIC window function. Denoting \begin{eqnarray} \tilde{\kappa}_a = \frac{\pi \tilde{k}_a}{2\tilde{k}_{\mathrm{Ny},a}}, \end{eqnarray} with $\tilde{k}_{\mathrm{Ny},a} = \tilde{k}_{\mathrm{f},a} n_\mathrm{g}/2$ being the direction-dependent Nyquist frequency ($a=x$, $y$ or $z$), the resultant expression for the wavevector-dependent shot-noise contribution is given as \begin{eqnarray} \tilde{P}_\mathrm{shot}(\tilde{\mbox{\boldmath$k$}}) &=& \sum_{n_x,n_y,n_z: \mathrm{even}} W_\mathrm{CIC}^2(\tilde{\mbox{\boldmath$k$}}+2\tilde{\mbox{\boldmath$k$}}_\mathrm{Ny} \mbox{\boldmath$n$}^{\mathrm{T}}) \frac{\tilde{V}}{N_\mathrm{gal}},\nonumber\\ &=& \left[\prod_{a=x,y,z} C_a(\tilde{k}_a)\right] \frac{\tilde{V}}{N_\mathrm{gal}}, \end{eqnarray} with $W_\mathrm{CIC}$ being the CIC window function \begin{eqnarray} W_\mathrm{CIC}(\tilde{\mbox{\boldmath$k$}}) = \prod_{a=x,y,z} \mathrm{sinc}^{2}\tilde{\kappa}_a, \end{eqnarray} and the final shot-noise correction factor, $C_a$, resulting from the infinite summation over even integers, can be computed analytically as \begin{eqnarray} C_a(\tilde{k}_a) = \frac{1}{12}\left(1+\cos\tilde{\kappa}_a\right)^2 \left(2+\cos\tilde{\kappa}_a\right). \end{eqnarray} See Ref.~\cite{jing05} for a similar expression but without the interlacing correction that erases the odd images. The estimator, Eq.~(\ref{eq:estimator}), is computed in 100 wavenumber bins from the first bin edge at zero to the final bin edge at $1\,h\,\mathrm{Mpc}^{-1}$, evenly spaced by $0.01\,h\,\mathrm{Mpc}^{-1}$. 
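The closed form for $C_a$ can be checked numerically against a truncated version of the even-image sum. Below is a small sketch (ours, not the Japan Team's code), with $\mathrm{sinc}\,\tilde\kappa \equiv \sin\tilde\kappa/\tilde\kappa$:

```python
import numpy as np

def w_cic_axis(kappa):
    """One axis of the CIC window, sinc^2(kappa), kappa = pi*k/(2*k_Ny).
    Note that numpy's sinc(x) = sin(pi x)/(pi x), hence the rescaling."""
    return np.sinc(kappa / np.pi) ** 2

def c_shot(kappa):
    """Closed-form interlaced shot-noise factor C_a quoted in the text."""
    return (1.0 + np.cos(kappa)) ** 2 * (2.0 + np.cos(kappa)) / 12.0

def c_shot_bruteforce(kappa, n_images=200):
    """Truncated sum over the surviving (even) aliased images, which shift
    kappa by multiples of 2*pi."""
    m = np.arange(-n_images, n_images + 1)
    return np.sum(w_cic_axis(kappa + 2.0 * np.pi * m) ** 2)
```

At $\tilde\kappa=0$ only the $m=0$ image contributes and $C_a=1$; at the Nyquist frequency ($\tilde\kappa=\pi/2$) the closed form gives $C_a=1/6$.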
The representative wavenumber of each bin, $k_i$ in Eq.~(\ref{eq:estimator}), is computed as the average of the norms of the wavevectors that actually enter the bin: \begin{eqnarray} k_i = \frac{1}{N_i}\sum_{\tilde{\mbox{\boldmath$k$}} \in \mathrm{bin}\,i} \left|\tilde{\mbox{\boldmath$k$}}\right|.\label{eq:k_i} \end{eqnarray} The pairs of numbers, $(k_i,\hat{P}_\ell(k_i))$, are provided to the analysis teams as the mock measurements, and the above prescription for the representative wavenumber of each $k$ bin was communicated to the analysis teams. The data files also contain estimates of the covariance matrix. It is obtained assuming Gaussianity \citep{2020PhRvD.101b3510K}: \begin{eqnarray} \mathrm{Cov}_{ij}^{\ell \ell'} &=& \left\langle\left(\hat{P}_\ell(k_i)-\langle\hat{P}_\ell(k_i)\rangle\right)\left(\hat{P}_{\ell'}(k_j)-\langle\hat{P}_{\ell'}(k_j)\rangle\right)\right\rangle,\nonumber\\ &=& \delta_{ij}^\mathrm{K}\frac{(2\ell+1)(2\ell'+1)}{N_i^2}\nonumber\\ &&\times \sum_{\tilde{\mbox{\boldmath$k$}} \in \mathrm{bin}\,i}\mathcal{P}_\ell(\mu_{\tilde{\mbox{\boldmath$k$}}})\mathcal{P}_{\ell'}(\mu_{\tilde{\mbox{\boldmath$k$}}}) \left[P(\tilde{\mbox{\boldmath$k$}})+P_\mathrm{shot}\right]^2,\label{eq:covar} \end{eqnarray} where $P(\tilde{\mbox{\boldmath$k$}})$ is the expectation value of $\hat{P}(\tilde{\mbox{\boldmath$k$}})$. The expression reduces to the real-space formula of Ref.~\cite{feldman94} when $\ell=\ell'=0$. In reality, however, we have to make use of a noisy estimate of the power spectrum $\hat{P}(\tilde{\mbox{\boldmath$k$}})$ for each wavevector $\tilde{\mbox{\boldmath$k$}}$ instead of $P(\tilde{\mbox{\boldmath$k$}})$, and this can impact the estimation of the covariance matrix significantly. Therefore, instead of computing Eq.~(\ref{eq:covar}) directly, we first bin the Fourier modes in 10 evenly-spaced $|\mu_{\tilde{\mbox{\boldmath$k$}}}|$ bins and take the average of $\hat{P}(\tilde{\mbox{\boldmath$k$}})$ within each bin to suppress the noise. 
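The per-bin Gaussian covariance of Eq.~(\ref{eq:covar}) can be sketched as follows (our illustrative implementation, with each $\mu$ entry standing for an equal share of the modes in the bin):

```python
import numpy as np
from scipy.special import eval_legendre

def gaussian_cov(ell1, ell2, mu, p_kmu, p_shot, n_modes):
    """Diagonal Gaussian covariance of Eq. (eq:covar) for a single k bin.

    mu, p_kmu : arrays over mu entries (e.g. the 10 |mu| bins of the text);
    each entry is assumed to carry n_modes / len(mu) Fourier modes."""
    weight = n_modes / len(mu)
    s = np.sum(eval_legendre(ell1, mu) * eval_legendre(ell2, mu)
               * (p_kmu + p_shot) ** 2) * weight
    return (2 * ell1 + 1) * (2 * ell2 + 1) / n_modes ** 2 * s
```

For $\ell=\ell'=0$ and a $\mu$-independent spectrum this reduces to $(P+P_\mathrm{shot})^2/N_i$, consistent with the real-space limit mentioned above.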
The binned estimates are then used in Eq.~(\ref{eq:covar}), but with the summation now running over bins instead of individual wavevectors, to obtain our estimate of the covariance matrix. The Japan Team considers two settings for the covariance matrix. The first is to use the volume and the shot noise consistent with the mock simulations. In addition, they provide another estimate scaled to the BOSS DR12 catalogs, obtained by substituting the number density from the observation and then scaling the number of Fourier modes according to the ratio of the surveyed and the simulated volumes. The set of estimates, $\hat{P}_\ell(k_i)$ and $\mathrm{Cov}_{ij}^{\ell\ell'}$, the latter now having only diagonal entries with respect to the subscripts $i$ and $j$ due to the Gaussian approximation, is tabulated for each of the ten random realizations and provided through the website. The Japan Team leaves to the US teams the decision on exactly how to use these estimates: which survey specification to adopt for the estimation of the covariance matrix, and whether to combine the ten realizations and analyse the averaged spectra just once, to analyse each realization one by one, or to further estimate the non-Gaussian error from the realization-to-realization scatter. \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{PTchallenge_multipole.pdf} \end{center} \caption{First three multipole moments (monopole, quadrupole and hexadecapole) of the power spectrum in redshift space measured from our mock galaxy catalogs at three redshifts (the solid lines). The $1$-$\sigma$ uncertainty intervals assuming the survey parameters of SDSS Data Release 12 are shown by the shaded regions. Also shown, with error bars, are the measurements taken from Ref.~\cite{Beutler17} based on SDSS DR12. For these data points, the measurements from the sample in the North Galactic Cap (NGC) and the South Galactic Cap (SGC) are shown separately by different symbols as indicated by the figure legend. 
Note that the Alcock-Paczynski effect is artificially induced assuming $\Omega_\mathrm{m}=0.3$ in the redshift-distance conversion. The analysis teams can only access exactly the data vector shown in this figure. The analyses presented in this paper are based on the monopole and the quadrupole moments from the catalog at $z=0.61$.} \label{fig:multipole} \end{figure} We show in Fig.~\ref{fig:multipole} the average multipole moments of the power spectra at the three redshifts corresponding to LOWZ, CMASS1 and CMASS2. The solid lines show the mock measurements, where the shaded region around each line denotes the $1$-$\sigma$ error scaled to the SDSS BOSS DR12 survey parameters. The three lines in each panel depict the monopole, quadrupole and hexadecapole from top to bottom. Also shown by the symbols with error bars are the actual measurements from the BOSS data by \cite{Beutler17}. The measurements from the North and South Galactic Caps are respectively plotted by the upward and the downward triangles. Overall, the mock data follow the observed spectra. The monopole moment especially exhibits an excellent agreement, because the model parameters used to distribute the mock galaxies were chosen to match this moment. There is, however, a small mismatch in the quadrupole moment: the observed data show a stronger damping behaviour toward higher wavenumbers. It is beyond the scope of the current investigation to see whether or not this can be alleviated by further tuning the model parameters without spoiling the success in the monopole. This is nontrivial, since the cosmological parameters adopted in the mock simulations could be off from the true unknown parameters governing our Universe, or the recipe to populate mock galaxies might not be flexible enough to match reality. \section{Theoretical template} \label{sec:theory} In this section we describe the implementation of the theoretical model by the two teams participating in the cosmological analysis challenge. 
The employed methodologies are almost identical to the ones used in the analysis of the actual BOSS data by the same teams \citep{DAmico:2019fhj,Ivanov:2019pdj,Colas:2019ret}. Both teams participating in the PT challenge use, essentially, the same theoretical template. However, there are differences in the implementation of IR resummation and in the choice of nuisance parameters and their priors. Moreover, the two teams use fully independent pipelines based on different software. This section describes in detail the pipelines used by the two teams and focuses on methodological differences. \subsection{Common basis for the EFT formulation} \label{subsec:common} On general grounds, it is believed that any physical system has a {\it unique} and {\it correct} description at long wavelengths, where the microscopic details of the physical system under consideration can be encoded in just a few coefficients of the terms in the equations of motion. In the context of the long-distance universe, this description is believed to be the Effective Field Theory of Large-Scale Structure (EFTofLSS)~\cite{Baumann:2010tm,Carrasco:2012cv}. The originality of the EFTofLSS with respect to other pre-existing perturbative methods that were applied in the context of LSS is two-fold. First is the presence of suitable terms in the equations of motion that encode the effect of short-distance non-linearities and galaxies at long distances, and that cannot be predicted without detailed knowledge of galaxy physics, and therefore are generically fit to observations. Second, the equations of motion in the EFTofLSS have non-linear terms that are proportional to some parameters. Due to the many phenomena that control the evolution of our universe, there are several of these parameters, such as the size of the density perturbation or the ratio of a given wavelength with respect to the size of the displacements induced by short-distance modes~\cite{Senatore:2014via}. 
For all of these parameters but one, an iterative solution is performed. For the remaining parameter, the one encoding the effect of long-wavelength displacements, a non-linear solution is performed instead, which goes under the name of IR-Resummation~\cite{Senatore:2014via,Baldauf:2015xfa,Senatore:2017pbn,Lewandowski:2018ywf,Blas:2016sfa}. Different incarnations of the EFTofLSS make this expansion more or less manifest. For example, the Lagrangian-space EFTofLSS~\cite{Porto:2013qua} automatically solves non-linearly for the effect of long displacements, and so it is identical to the Eulerian EFTofLSS that we use here after the latter has been IR-Resummed~\cite{Senatore:2014via}. In the EFTofLSS, the description of the clustering of galaxies in redshift space is performed in the following way. First, the dark matter and baryonic fields are described in terms of fluids with a non-trivial stress tensor. Galaxies are biased tracers, in the sense that, if $\delta_g$ is the galaxy overdensity, we have~\cite{Senatore:2014eva} \begin{align}\label{eq:biasgeneral} &\delta_g(x,t)=\sum_n\int dt' K_n(t,t')\, \tilde {\cal{O}}_n(x_{\rm fl}, t')\\ \nonumber &\qquad\quad=\sum_{n,m} b_{n,m}(t)\, {\cal{O}}_{n,m}(x, t) \end{align} where the $\tilde{\cal{O}}_n$ are all possible fields, such as, for example, the dark matter density, that, by general relativity, can affect the formation of galaxies. The $K_n(t,t')$ are kernels that describe how a field at a certain time affects the galaxies at later times, and $x_{\rm fl}$ is the location at time $t'$ of the fluid element that is at $x$ at time $t$. The last step of the above equation can be performed using the perturbative expression for the matter and baryonic fields. In fact, in perturbation theory the time- and space-dependent parts factorize in a form given, schematically, by $\delta(\vec k,t)\sim \sum_n f_n(t) \delta^{(n)}(\vec k)$, where $\delta^{(n)}$ is of order $n$ in the expansion parameters. 
This allows us to define the biases $b$ as $b_{n,m}(t)\sim \int dt' K_n(t,t') f_m(t')$. This provides the first complete parametrization of the bias expansion, though many earlier attempts were made and substantial but partial successes were obtained. Next, we need to describe the observed density field in redshift space. This is a combination of the density field in configuration space and the density times powers of the velocity field of galaxies, such as $\rho(\vec x,t)\, v^i(\vec x,t),\ \rho(\vec x,t)\, v^i(\vec x,t) v_i(\vec x,t),\ldots$. Again, these short-distance-dependent terms are described as above as biased tracers of the density and baryonic fields~\cite{Senatore:2014vja}. Because of what we just discussed, the range over which different implementations of the EFTofLSS can differ is extremely limited: they may choose a different basis for the EFT parameters, they may add an incomplete, and therefore different, set of higher-order counterterms to partially include the effect of some higher-order calculation that was not performed, or they may have different implementations or approximations for the IR-Resummation. We list them in detail next. \subsection{Group dependent implementation} \label{subsec:group} Although both teams use the same theoretical model, there are several important methodological differences. Moreover, the two groups have made very different choices in the model implementation and numerical algorithms. This section describes in detail the pipelines used by the two teams. \subsubsection{East Coast Team} \label{subsubsec:TeamEC} The East Coast Team used only the monopole and the quadrupole in the analysis. The team analyzed the challenge data with and without the hexadecapole moment and found identical constraints.\footnote{On the scales of interest the hexadecapole signal is dominated by leakage contributions from the monopole and quadrupole. 
These contributions appear due to discreteness effects, i.e.~because the monopole and quadrupole are not exactly orthogonal to the hexadecapole on a finite grid. Even with the gigantic volume of the challenge simulation and the wide binning, the hexadecapole moment happened to be dominated by the systematic leakage from lower multipole moments. } For these reasons, the East Coast Team refrained from using the hexadecapole moment in the baseline analysis. The theoretical model used by the East Coast Team for these two multipoles can be written schematically as \begin{equation} P_\ell(k) = P_\ell^{\rm tree}(k) + P_\ell ^{\rm loop} (k) + P_\ell^{\rm ctr}(k) + P^{\nabla^4_z \delta}_\ell(k) \;. \end{equation} The tree-level contribution is given by the Kaiser formula \cite{kaiser87}. The loop corrections are calculated using the standard one-loop power spectra for dark matter and biased tracers (see, e.g., \cite{Bernardeau:2001qr,Blas:2015qsi,Desjacques18} and references therein). The bias model consists of the following bias operators \cite{Assassi:2014fva,Mirbabayi:2014zca,Senatore:2014eva} \begin{equation} \delta_g (\boldsymbol{k})= b_1 \delta (\boldsymbol{k}) + \frac {b_2}2 \delta^2 (\boldsymbol{k}) + b_{\mathcal G_2} \mathcal G_2(\boldsymbol{k}) \;, \end{equation} where the momentum-space representation of the $\mathcal G_2$ operator is given by \begin{equation} \mathcal G_2(\boldsymbol{k}) = \int \frac {d^3\boldsymbol{p}}{(2\pi)^3} \left[ \frac{(\boldsymbol{p}\cdot(\boldsymbol{k}-\boldsymbol{p}))^2}{p^2|\boldsymbol{k}-\boldsymbol{p}|^2} - 1 \right] \delta(\boldsymbol{p}) \delta(\boldsymbol{k}-\boldsymbol{p}) \;. \end{equation} The one-loop power spectrum has one extra bias operator multiplied by an additional parameter $b_{\Gamma_3}$. However, this contribution is almost fully degenerate with the counterterms and the $\mathcal{G}_2$ operator on the scales of interest. Given this strong degeneracy, the East Coast Team set $b_{\Gamma_3}=0$ in the baseline analysis. 
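The $\mathcal G_2$ kernel above is equivalent, in configuration space, to $\mathcal G_2(x)=\sum_{ij}\left(\partial_i\partial_j\phi\right)^2-\delta^2$ with $\nabla^2\phi=\delta$, which is how it is conveniently evaluated on a grid. A sketch with FFTs (ours, for illustration only):

```python
import numpy as np

def g2_field(delta, box):
    """Evaluate G2(x) = sum_ij (d_i d_j phi)^2 - delta^2 on a periodic grid,
    with phi the inverse-Laplacian of delta; this matches the Fourier-space
    kernel (p.(k-p))^2 / (p^2 |k-p|^2) - 1 quoted above."""
    n = delta.shape[0]
    k1 = 2.0 * np.pi * np.fft.fftfreq(n, d=box / n)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    k2 = kx ** 2 + ky ** 2 + kz ** 2
    k2[0, 0, 0] = 1.0                  # avoid 0/0; the zero mode is zeroed below
    dk = np.fft.fftn(delta)
    g2 = -delta ** 2
    for ki in (kx, ky, kz):
        for kj in (kx, ky, kz):
            tij_k = dk * ki * kj / k2  # Fourier transform of d_i d_j phi
            tij_k[0, 0, 0] = 0.0
            g2 += np.fft.ifftn(tij_k).real ** 2
    return g2
```

A single plane wave is a convenient sanity check: for one Fourier mode the tidal term equals $\delta^2$ exactly, so $\mathcal G_2$ vanishes identically.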
Running the MCMC chains with and without $b_{\Gamma_3}$, it was checked that this choice does not affect the constraints on cosmological parameters. The standard one-loop counterterms for the monopole and the quadrupole are \cite{Senatore:2014vja} \begin{equation} P_0^{\rm ctr}(k) = -2c^2_0 k^2 P_{11}(k) \;,\quad P_2^{\rm ctr}(k) =-\frac{4f}{3}c^2_2 k^2 P_{11}(k)\,, \end{equation} where $f=\mathrm{d}\ln D_+/\mathrm{d} \ln a$ is the logarithmic growth rate, $D_+$ denotes the linear growth factor and $P_{11}(k)$ is the linear power spectrum. The purpose of these counterterms is to fix the UV-dependence of the loops and to partly take into account the effects of the fingers-of-God \cite{jackson72}. The East Coast Team also added an extra $k^4$ term shared between the multipoles, \begin{equation} \label{eq:k4} P^{\nabla^4_z \delta}(k,\mu) = - c (\mu k f)^4 (b_1+f\mu)^2 P_{11}(k) \;. \end{equation} This new counterterm takes into account the next-to-leading-order effect of the fingers-of-God. Note that on general grounds one also expects the presence of a stochastic contribution of the form \cite{Senatore:2014vja,Perko:2016puo}, \begin{equation} P_{\text{RSD, stoch}}= -c_{\epsilon }k^2 \mu^2\,. \end{equation} This contribution happens to be very degenerate with the counterterm \eqref{eq:k4} on the scales of interest for the analysis, and it was therefore not included in the model by the East Coast Team. The East Coast Team has implemented IR-Resummation and the Alcock-Paczynski effect as explained in detail in Refs.~\cite{2019JCAP...11..034C,2019arXiv190905277I}. Importantly, the East Coast Team used the IR-resummation algorithm based on the wiggly-smooth decomposition directly in Fourier space \cite{Baldauf:2015xfa,Blas:2016sfa,Ivanov:2018gjr}, which allowed for a significant boost of computational speed. This scheme is efficient and numerically stable. 
Moreover, it is based on a solid systematic parametric expansion that guarantees that the error is under control at every order of IR resummation. It was explicitly checked that the residuals introduced by this procedure are much smaller than the two-loop contributions that are not included in the model, in full agreement with theoretical expectations \cite{Blas:2016sfa,Ivanov:2018gjr}. The labels that indicate IR-Resummation and the AP effect were omitted in all equations in this section to avoid clutter. However, the reader should keep in mind that they are always included in the model. The total number of nuisance parameters used in the blinded analysis of the East Coast Team is 6: three counterterms ($c^2_0$, $c^2_2$, $c$) and three bias parameters ($b_1$, $b_2$, $b_{\mathcal G_2}$). Since the shot noise contribution has been subtracted from the measured spectra, the corresponding parameter was not fitted, in contrast to Ref.~\cite{2019arXiv190905277I}. As far as the cosmological parameters are concerned, the basis that was used consists of the dimensionless Hubble constant $h$ ($H_0=h\cdot 100$ km/s/Mpc), the physical matter density $\omega_\mathrm{m}$, and the normalization $A^{1/2}$ defined with respect to the best-fit Planck value for the base $\Lambda$CDM cosmology, \begin{equation} \begin{split} &A^{1/2}\equiv \left(\frac{A_{\rm s}}{A_{{\rm s},\,\text{Planck}}}\right)^{1/2}\,,\\ & \text{where } \quad A_{{\rm s},\,\text{Planck}} = 2.0989\cdot 10^{-9}\,. \end{split} \end{equation} All varied cosmological and nuisance parameters were assigned flat priors without boundaries, i.e. $(-\infty,\infty)$. The evaluation of perturbation theory integrals was performed using the FFTLog method of \cite{Simonovic:2017mhp} implemented as a module in the \textsc{CLASS} Boltzmann solver \cite{class2,Chudaykin:2020aoj}.
Using the IR-Resummation based on the wiggly-smooth decomposition, a single evaluation of the theory model takes $\mathcal O(1)$ sec at high precision settings. This allows for a new evaluation of the non-linear power spectra at every step of the MCMC chain, which is what is done in the East Coast Team analysis. The MCMC analysis was performed using the \textsc{Montepython~v3.0} \cite{Audren:2012wb,Brinckmann:2018cvx} sampler interfaced with the modified version of the \textsc{CLASS} code. The nuisance parameters were sampled in the ``fast mode'' \cite{Lewis:2002ah} at a negligible computational cost. Since the $k$-binning of the challenge spectra is very wide ($\Delta k = 0.01~h\,\mathrm{Mpc}^{-1}$) compared to the fundamental mode of the box, the theoretical predictions had to be properly averaged over each bin. The boundaries of the bins were estimated using the simulation volume, known to both teams. The East Coast Team checked that the estimated boundaries allow one to accurately reproduce the provided weighted means of the $k$-bins, and found that averaging the theory over the bin, as opposed to evaluating it at the bin mean, can induce roughly $\mathcal O(0.5)\sigma$ shifts in cosmological parameters. \subsubsection{West Coast Team} \label{subsubsec:TeamWC} The implementation of the West Coast Team is the result of a long journey in which each of the ingredients of the EFTofLSS necessary for applying it to data was developed one by one, tested on simulations, and shown to be successful. Though not all of those results are directly used in the analysis, neither the West Coast Team nor, probably, anyone else would ever have applied the model to the data without those intermediate successes.
We therefore find it nice to add, in each instance where the EFTofLSS is applied to data, the following footnote where we acknowledge at least a fraction of those important developments\footnote{The initial formulation of the EFTofLSS was performed in Eulerian space in~\cite{Baumann:2010tm,Carrasco:2012cv}, and then extended to Lagrangian space in~\cite{Porto:2013qua}. The dark matter power spectrum has been computed at one-, two- and three-loop orders in~\cite{Carrasco:2012cv, Carrasco:2013sva, Carrasco:2013mua, Carroll:2013oxa, Senatore:2014via, Baldauf:2015zga, Foreman:2015lca, Baldauf:2015aha, Cataneo:2016suz, Lewandowski:2017kes,Konstandin:2019bay}. Some additional theoretical developments of the EFTofLSS that accompanied these calculations were a careful understanding of renormalization~\cite{Carrasco:2012cv,Pajer:2013jj,Abolhasani:2015mra} (including rather-subtle aspects such as lattice-running~\cite{Carrasco:2012cv} and a better understanding of the velocity field~\cite{Carrasco:2013sva,Mercolli:2013bsa}), of the several ways for extracting the value of the counterterms from simulations~\cite{Carrasco:2012cv,McQuinn:2015tva}, and of the non-locality in time of the EFTofLSS~\cite{Carrasco:2013sva, Carroll:2013oxa,Senatore:2014eva}. These theoretical explorations also include an instructive study in 1+1 dimensions~\cite{McQuinn:2015tva}. In order to correctly describe the Baryon Acoustic Oscillation~(BAO) peak, an IR-resummation of the long displacement fields had to be performed. This has led to the so-called IR-Resummed EFTofLSS~\cite{Senatore:2014via,Baldauf:2015xfa,Senatore:2017pbn,Lewandowski:2018ywf,Blas:2016sfa}. A method to account for baryonic effects was presented in~\cite{Lewandowski:2014rca}. The dark-matter bispectrum has been computed at one-loop in~\cite{Angulo:2014tfa, Baldauf:2014qfa}, the one-loop trispectrum in~\cite{Bertolini:2016bmt}, and the displacement field in~\cite{Baldauf:2015tla}. 
The lensing power spectrum has been computed at two loops in~\cite{Foreman:2015uva}. Biased tracers, such as halos and galaxies, have been studied in the context of the EFTofLSS in~\cite{ Senatore:2014eva, Mirbabayi:2014zca, Angulo:2015eqa, Fujita:2016dne, Perko:2016puo, Nadler:2017qto} (see also~\cite{McDonald:2009dh}), the halo and matter power spectra and bispectra (including all cross correlations) in~\cite{Senatore:2014eva, Angulo:2015eqa}. Redshift space distortions have been developed in~\cite{Senatore:2014vja, Lewandowski:2015ziq,Perko:2016puo}. Clustering dark energy has been included in the formalism in~\cite{Lewandowski:2016yce,Lewandowski:2017kes,Cusin:2017wjg,Bose:2018orj}, primordial non-Gaussianities in~\cite{Angulo:2015eqa, Assassi:2015jqa, Assassi:2015fma, Bertolini:2015fya, Lewandowski:2015ziq, Bertolini:2016hxg}, and neutrinos in~\cite{Senatore:2017hyk,deBelsunce:2018xtd}. Faster evaluation schemes for some of the loop integrals have been developed in~\cite{Simonovic:2017mhp}.}. The model for the West Coast Team and the analysis techniques are the same as the ones used in~\cite{DAmico:2019fhj,Colas:2019ret}, to which we refer for details. The one-loop redshift-space galaxy power spectrum reads: \begin{align}\label{eq:powerspectrum}\nonumber P_{g}(k,\mu) & = Z_1(\mu)^2 P_{11}(k) \\ \nonumber & + 2 \int \frac{d^3q}{(2\pi)^3}\; Z_2(\boldsymbol{q},\boldsymbol{k}-\boldsymbol{q},\mu)^2 P_{11}(|\boldsymbol{k}-\boldsymbol{q}|)P_{11}(q) \\ \nonumber &+ 6 Z_1(\mu) P_{11}(k) \int\, \frac{d^3 q}{(2\pi)^3}\; Z_3(\boldsymbol{q},-\boldsymbol{q},\boldsymbol{k},\mu) P_{11}(q)\nonumber \\\nonumber & + 2 Z_1(\mu) P_{11}(k)\left( c_\text{ct}\frac{k^2}{{ k^2_\textsc{m}}} + c_{r,1}\mu^2 \frac{k^2}{k^2_\textsc{m}} + c_{r,2}\mu^4 \frac{k^2}{k^2_\textsc{m}} \right)\\ &+ \frac{1}{\bar{n}_g}\left( c_{\epsilon,1}+ c_{\epsilon, 2}\frac{k^2}{k_\textsc{m}^2} + c_{\epsilon,3} f\mu^2 \frac{k^2}{k_\textsc{m}^2} \right).
\end{align} $k^{-1}_\textsc{m}$ controls the bias derivative expansion, and we set it to be $\simeq k^{-1}_\textsc{nl}$, the scale controlling the dark matter derivative expansion. We set $k_\textsc{nl}=0.7 h {\rm Mpc}^{-1}$. $\bar{n}_g$ is the mean galaxy density. In the next-to-last line of Eq.~(\ref{eq:powerspectrum}), the term in $c_{\rm ct}$ represents a linear combination of a higher derivative bias~\cite{Senatore:2014eva} that appears in Eq.~(\ref{eq:biasgeneral}) and the speed of sound of dark matter~\cite{Baumann:2010tm,Carrasco:2012cv}: $\delta(\vec k,t)\supset k^2 \delta_{\rm lin}(\vec k, t)$. The terms in $c_{r,1}$ and $c_{r,2}$ represent the redshift-space counterterms~\cite{Senatore:2014vja}: $\delta_{\rm redshift}(\vec k,t)\supset k^2 \mu^2 \delta(k,t),\ k^2 \mu^4 \delta(k,t) $. In the last line of Eq.~(\ref{eq:powerspectrum}), we have the stochastic counterterms: $c_{\epsilon,1}$ and $c_{\epsilon,2}$ originate from the Taylor expansion of Eq.~(\ref{eq:biasgeneral})~\cite{Senatore:2014eva}, while $c_{\epsilon,3}$ originates from the redshift-space expressions~\cite{Senatore:2014vja}. The redshift-space galaxy density kernels $Z_1,Z_2$ and $Z_3$ are given in Appendix~\ref{sec:galaxykernels}. These kernels depend on the bias coefficients that we define as explained below Eq.~(\ref{eq:biasgeneral}). By choosing only the linearly-independent ones, this gives rise to the so-called basis of descendants. While up to cubic order this basis is equivalent to more standard bases, already at quartic perturbative order new terms appear.
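The multipole moments that are compared to the data follow from Eq.~(\ref{eq:powerspectrum}) through the standard projection onto Legendre polynomials $\mathcal{L}_\ell(\mu)$,
\begin{equation}
P_\ell(k)=\frac{2\ell+1}{2}\int_{-1}^{1}\mathrm{d}\mu\;\mathcal{L}_\ell(\mu)\,P_g(k,\mu)\,.
\end{equation}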
The IR-resummation is performed in a numerically efficient way using the original method for configuration and redshift space developed in~\cite{Senatore:2014via,Senatore:2017pbn,Lewandowski:2018ywf}, where all the errors are parametrically controlled by the perturbative order of the calculation ({\it i.e.} no uncontrolled approximations are present)~\footnote{Especially within the observational community, a non-linear treatment of the BAO based on the decomposition of the wiggle and smooth part of the power spectrum has been popular for a long time (see for example~\cite{Eisenstein:2006nj}). However, this Team does not find this decomposition to be under parametric control (i.e. there is no small parameter controlling its correctness). It is possible to go from the original IR-Resummation to the simplified ones based on the decomposition by performing a series of approximations (see Appendix of~\cite{Lewandowski:2018ywf}). Of course, this does not mean that the errors that are introduced are large or significant, as can be \emph{a posteriori} checked on numerical simulations.}. We define the following combinations of parameters: $c_2 = (b_2 + b_4) / \sqrt{2}$, $c_4 = (b_2 - b_4) / \sqrt{2}$, $c_{\epsilon,\rm mono} = c_{\epsilon,\rm 2} + f c_{\epsilon,\rm 3} / 3$ and $c_{\epsilon,\rm quad} = 2 f c_{\epsilon,\rm 3} / 3$. As we analyze only the monopole and the quadrupole, we set $c_{r,2} = 0$ since the two redshift-space counterterms are degenerate in this case, but we allow a larger prior on $c_{r,1}$ to absorb the contribution of $c_{r,2}$ in the quadrupole. Additionally, since the shot noise is known and has been subtracted from the data, we set $c_{\epsilon,1}=0$. This leaves us with the set ($b_1$, $c_2$, $b_3$, $c_4$, $c_{ct}$, $c_{r,1}$, $c_{\epsilon, \rm mono}$, $c_{\epsilon, \rm quad}$) of 8 parameters. The PT challenge data are precise enough to determine all EFT parameters with no priors.
However, we impose the following priors, motivated by the fact that all EFT parameters are expected to be $\mathcal{O}(1)$~\footnote{ Notice that the consistency of the EFTofLSS is based on a power counting argument that assumes that the subsequent terms of the perturbative expansion are much smaller than the ones that are kept. In order for this to be the case, it is essential that the physical nuisance parameters are kept $\mathcal{O}(1)$, once the relevant physical scales have been factorized. }: \begin{equation} \begin{split} & b_1\in [0, 4]_{\rm flat} \, , \quad c_2 \in [-4, 4]_{\rm flat} \, , \quad b_3 \in 10_{\rm gauss} \, , \\ & c_4 \in 2_{\rm gauss}\, , \quad c_{\rm ct} \in 4_{\rm gauss} \, , \quad c_{r,1} \in 8_{\rm gauss}\, , \\ & c_{\epsilon,\rm mono} \in 2_{\rm gauss}\, , \quad c_{\epsilon,\rm quad} \in 4_{\rm gauss}\, . \end{split} \end{equation} As is evident from Eqs.~(\ref{eq:powerspectrum}) and (\ref{eq:redshift_kernels}), some EFT parameters appear linearly in the model power spectrum, and therefore quadratically in the Likelihood. If we are not interested in the actual values of these parameters, as is the case here, we can marginalize over them analytically, obtaining a marginalized likelihood that is a function of only three EFT parameters: $b_1$, $c_2$ and~$c_4$. Given that the $k$-bins ($\Delta k=0.01h/{\rm Mpc}$) contain many fundamental modes, the West Coast Team averages the predictions of the model over each bin. As a check, the Team verified that the provided effective $k$ of each bin was correctly reproduced. In terms of the cosmological parameters, the West Coast Team has parameterized its analysis in terms of the dimensionless Hubble constant $h$ ($H_0=h\cdot 100$ km/s/Mpc), the present-day matter density fraction $\Omega_\mathrm{m}$, and the normalization of the power spectrum $A_\mathrm{s}$.
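The analytic marginalization mentioned above is a standard Gaussian-integral manipulation: for a model $P = P_{\rm nl}(\theta) + \sum_j \alpha_j T_j(\theta)$ with a Gaussian likelihood and Gaussian priors $\alpha_j\sim\mathcal N(0,\sigma_j^2)$, the integral over the $\alpha_j$ can be done in closed form. A minimal numerical sketch (illustrative code, not the Team's actual pipeline; all names are ours):

```python
import numpy as np

def marginalized_chi2(data, cov, m0, templates, prior_sigmas):
    """-2 ln L (up to a constant) after analytically marginalizing the
    parameters that enter the model linearly,
        P = m0 + sum_j alpha_j * templates[j],
    with Gaussian priors alpha_j ~ N(0, prior_sigmas[j]^2)."""
    cinv = np.linalg.inv(cov)
    r = data - m0                         # residual w.r.t. the nonlinear part
    T = np.atleast_2d(templates)          # shape (n_linear, n_k)
    # A = T C^-1 T^T + prior precision;  b = T C^-1 r
    A = T @ cinv @ T.T + np.diag(1.0 / np.asarray(prior_sigmas) ** 2)
    b = T @ cinv @ r
    return r @ cinv @ r - b @ np.linalg.solve(A, b) + np.log(np.linalg.det(A))
```

The remaining MCMC then runs only over the parameters that enter non-linearly (here $b_1$, $c_2$ and $c_4$, plus the cosmological parameters).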
The evaluation of the perturbation theory integrals was performed either by direct numerical integration or by the FFTLog method of~\cite{Simonovic:2017mhp}, obtaining the same result. \section{Results of blinded analysis} \label{sec:res_blind} In this section we display the results obtained by the two teams. The input values of the cosmological parameters were unblinded after each team had submitted its results for the consensus data cuts. We present these results in the original form prepared by each team independently. Both teams have chosen to analyze the mean power spectrum (at $z=0.61$) over 10 realizations with the covariance estimated from the inverse sum of covariances for 10 single boxes, \begin{equation} \bar C = \left(\sum_i C_i^{-1}\right)^{-1}\,, \quad \bar P = \bar C \sum_i C^{-1}_i P_i\,, \end{equation} where $P_i$, $C_i$ are the power spectrum and covariance of the $i$-th box and ${\bar P}$, ${\bar C}$ are the final mean and covariance that have been analyzed. This procedure ensures that the analysis is approximately equivalent to fitting the spectrum from a single simulation box of 566~($h^{-1}\text{Gpc})^3$ volume. We stress that the obtained statistical errors on cosmological parameters correspond to the total volume of the 10 simulation boxes, i.e.~566~($h^{-1}\text{Gpc})^3$. \subsection{East Coast Team} \label{subsec:TeamA} Although the East Coast Team submitted its baseline results for the average over 10 challenge boxes at $z=0.61$, it also analyzed the data at other redshifts and found consistent results across all challenge spectra. Prior to unblinding, the East Coast Team had submitted results for 8 different evenly-spaced values of $k_{\rm max}$ in the range $(0.08-0.2)\;h\,\mathrm{Mpc}^{-1}$.
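The combination of the 10 realizations described above is a standard inverse-covariance weighting; a minimal sketch (illustrative code, names our own):

```python
import numpy as np

def combine_realizations(spectra, covariances):
    """Combine measurements P_i with covariances C_i into
    Cbar = (sum_i C_i^{-1})^{-1} and Pbar = Cbar @ sum_i C_i^{-1} P_i."""
    inv_cs = [np.linalg.inv(C) for C in covariances]
    cbar = np.linalg.inv(sum(inv_cs))
    pbar = cbar @ sum(ic @ P for ic, P in zip(inv_cs, spectra))
    return pbar, cbar
```

For identical covariances this reduces to the plain mean of the $P_i$ with covariance $C/N$, which is why the combined statistical power corresponds to the total simulated volume.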
\begin{figure*}[ht] \begin{center} \includegraphics[width=0.49\textwidth]{triangle_2D_east_lowk} \includegraphics[width=0.49\textwidth]{triangle_2D_east_highk} \end{center} \caption{Marginalized posteriors for the three varied cosmological parameters as a function of $k_{\rm max}$ (quoted in $h\,\mathrm{Mpc}^{-1}$ in the figure legend) obtained by the East Coast Team. Dashed lines mark the input parameters which were revealed once the Team submitted its final result.} \label{fig:contours1} \end{figure*} The marginalized posteriors for the three cosmological parameters are shown in Fig.~\ref{fig:contours1} for several choices of $k_{\rm max}$. Between $k_{\rm max}=0.08\;h\,\mathrm{Mpc}^{-1}$ and $k_{\rm max}=0.14\;h\,\mathrm{Mpc}^{-1}$ the different contours are compatible within $1\sigma$. When pushing to higher values of $k_{\rm max}$, the shifts in the central values of the posterior distributions become significant. Note that for $k_{\rm max}>0.14\;h\,\mathrm{Mpc}^{-1}$ the contours of $h$ and $\omega_\mathrm{m}$ remain consistent even though the remaining parameter exhibits clear shifts. The East Coast Team quoted its final results for a conservative choice of $k_{\rm max}=0.12\;h\,\mathrm{Mpc}^{-1}$ because this is the scale up to which the Team believed the theoretical modeling to be sufficiently accurate, given the sub-percent statistical error bars and the size of the neglected nonlinear corrections (see Fig.~\ref{fig:breakdown1}, in which we display an estimate of the two-loop correction from \cite{Baldauf:2016sjb}). The 1d marginalized limits for the cosmological parameters and the linear bias $b_1$ are given in Table~\ref{tab:res}. After the true parameters were unblinded, the values obtained by the East Coast Team were replaced by relative differences. For convenience, the values of $\sigma_8$, $\Omega_\mathrm{m}$ and $\ln(10^{10}A_\mathrm{s})$ derived from the East Coast Team MCMC chains are also quoted.
As we have seen after unblinding, the true values of $\omega_\mathrm{m}$ and $h$ reside within the $2\sigma$ posterior regions even at $k_{\rm max}=0.2~h\,\mathrm{Mpc}^{-1}$, while the clustering amplitude measurement is consistent up to $k_{\rm max}=0.14~h\,\mathrm{Mpc}^{-1}$. Importantly, the Team has also inferred a correct value of the linear bias\footnote{ The true value of the linear bias was estimated as follows. The Japan Team measured the real-space matter-matter auto-spectrum along with the galaxy-matter cross spectrum. Then the ratio $b_1=P_{gm}/P_{mm}$, evaluated in the very first $k$-bin and averaged over the ten realizations, was taken as an estimate of the bias parameter. } coefficient $b_1$. \begin{table*}[t!] \begin{tabular}{|c|c|c|} \hline $ k_{\rm max} =0.12~h\,\mathrm{Mpc}^{-1}$ & best-fit & mean $\pm 1\sigma$ \\ [0.5ex] \hline\hline $ \Delta A^{1/2}/A^{1/2} \cdot 10^{2}$ & $-0.15$ & $-0.16\pm 1.0$ \\ \hline $\Delta h/h \cdot 10^{2}$ & $-0.55$ & $-0.59 \pm 0.46$ \\ \hline $\Delta \omega_\mathrm{m}/\omega_\mathrm{m}\cdot 10^2$ & $0.2 $ & $0.15 \pm 1.4$ \\ \hline $\Delta b_1/b_1\cdot 10^2$ & $0.20$ & $0.22 \pm 1.2$ \\ \hline \hline $\Delta \Omega_\mathrm{m}/\Omega_\mathrm{m} \cdot 10^2$ & $1.3$ & $1.2 \pm 0.9$ \\ \hline $\Delta \ln(10^{10}A_\mathrm{s})/\ln(10^{10}A_\mathrm{s})\cdot 10^2$ & $-0.098$ & $-0.11 \pm 0.69$ \\ \hline $\Delta \sigma_8/\sigma_8 \cdot 10^2$ & $-0.094$& $-0.022 \pm 0.92$ \\ \hline \end{tabular} \caption{ The baseline results obtained by the East Coast Team for $k_{\rm max}=0.12h\,\mathrm{Mpc}^{-1}$ at $z=0.61$. Only the cosmological parameters and $b_1$ are shown. Note that the lower part of the table shows the results for the derived parameters $\Omega_{\rm m}$, $\ln(10^{10}A_\mathrm{s})$ and $\sigma_8$.
} \label{tab:res} \end{table*} \begin{figure*}[ht] \begin{center} \includegraphics[width=0.49\textwidth]{Pks_z061} \centering \includegraphics[width=0.49\textwidth]{residuals_kmax012_z061} \centering \includegraphics[width=0.49\textwidth]{P0contr_z061_bestfit_kmax012} \centering \includegraphics[width=0.49\textwidth]{P2contr_z061_bestfit_kmax012} \end{center} \caption{ \textit{Upper panel}: Comparison of the data for the monopole and the quadrupole (the error bars are there, albeit barely visible) with the best-fit model (left panel) obtained by the East Coast Team, and the residuals for the monopole and the quadrupole for the best-fit model with $\chi^2/{\rm dof}= 12/(24-9)$ (right panel). Note that the quadrupole data points are slightly shifted for better visibility. \textit{Lower panel}: Different contributions to the monopole (left panel) and quadrupole (right panel) power spectra. The data errors and the two-loop estimate are also displayed. We plot the absolute values; some terms are negative.} \label{fig:breakdown1} \end{figure*} Fig.~\ref{fig:breakdown1} shows the comparison of the best-fit model at $k_{\rm max}=0.12~h\,\mathrm{Mpc}^{-1}$ to the data, together with the residuals. The quality of the fit is quite good, $\chi^2/{\rm dof}= 12/(24-9)$, consistent with the hypothesis that this statistic follows the $\chi^2$-distribution with $15$ degrees of freedom. The lower panel of Fig.~\ref{fig:breakdown1} displays a breakdown of the different contributions to the best-fit model. The linear theory contribution dominates on all scales, which is consistent with the applicability of perturbation theory. Towards $k_{\rm max} = 0.12\;h\,\mathrm{Mpc}^{-1}$ the loop corrections (including the $k^2$-counterterms) become progressively more important. Note that the one-loop corrections are detectable already on very large scales, $\sim 0.02~h\,\mathrm{Mpc}^{-1}$.
The $k^4$-counterterm is important only for the quadrupole around~$k_{\rm max}= 0.12\;h\,\mathrm{Mpc}^{-1}$, where it dominates over the other loop corrections. \subsection{West Coast Team} \label{subsec:TeamB} As specified before, the West Coast Team has analyzed the mean over the 10 boxes in the high-redshift bin at $z=0.61$, using the covariance on the mean. Originally, for the purpose of parameter estimation, the Team presented the results up to $k_{\rm max}=0.12 \,h\,\mathrm{Mpc}^{-1}$, since this is the $k_{\rm max}$ at which the Team predicted the estimates to be still unbiased. The marginalized posteriors for the cosmological parameters are shown in Fig.~\ref{fig:west-contours}, and the best-fit values and means are listed in Table~\ref{tab:west-results}. When the true results were revealed, it was found that $A_\mathrm{s}$ and $H_0$ lie within the $1$-$\sigma$ region of the estimates of the West Coast Team, and $\Omega_\mathrm{m}$ within the $1.5$-$\sigma$ region. $b_1$ is also correctly reproduced within the $1$-$\sigma$ interval. Additionally, one can see that the pre-unblinding results at $k_{\rm max}=0.14 \,h\,\mathrm{Mpc}^{-1}$, which, however, was not the $k_{\rm max}$ at which the Team anticipated the model to be most accurate, are even closer to the true values. In Fig.~\ref{fig:west-bestfit} the Team shows that the data are well fitted by the theoretical model with the best-fit parameters, with $-2 \log \mathcal{L}/\textrm{dof} = 16/(24-6)$, corresponding to a very good $p$-value~\footnote{ Notice that the Likelihood of this team is not Gaussian.}. In the lower panel, different contributions to the best-fit power spectra are shown, to check the self-consistency of the perturbative expansion. It is apparent that the one-loop term is safely less than $10\%$ of the linear one at all $k$'s. In addition to the one-loop term, an estimate of the two-loop contribution, i.e.
$P_{\rm 1-loop}^2/P_{\rm lin}$, is shown: clearly, at least for the quadrupole, this estimate is of the order of the error on the data at the highest $k$. This is an additional indication that for roughly $k_{\rm max} \gtrsim 0.12\text{-}0.14 \, h\,\mathrm{Mpc}^{-1}$ the one-loop model will not be an accurate description of the data, and parameter estimation will suffer from theory systematics. \emph{After unblinding}, the West Coast Team submitted additional results at $k_{\rm max}=0.14, 0.16, 0.18, 0.20 \, h\,\mathrm{Mpc}^{-1}$. It was subsequently decided that it would be interesting to explore the $k_{\rm max}$-dependence of the theory-systematic error: in fact, though this had already been analyzed by the Team in both of its original papers~\cite{DAmico:2019fhj,Colas:2019ret}, the challenge simulation is different and its volume is larger. At the higher $k_{\rm max}$'s, the Team performs the (analytical) marginalization over the additional $c_{\epsilon,\rm mono}$ parameter, with a Gaussian prior with $\sigma_{c_{\epsilon, \rm mono}}=2$. The effect of adding this parameter is completely negligible at low $k_{\rm max}$: in fact, the Team chose to safely set it to zero for the original chains. Indeed, one can check that the results are unchanged at low $k_{\rm max}$ when adding this parameter. However, because of the small error bars of the simulation data, at higher $k_{\rm max}$ this parameter has to be added to the model. The trend as a function of $k_{\rm max}$ is apparent from Fig.~\ref{fig:west-contours}. $\Omega_\mathrm{m}$ and $H_0$ are well recovered up to $k_{\rm max} = 0.18 \, h\,\mathrm{Mpc}^{-1}$, approximately within the $1\text{-}\sigma$ region, while the estimate of the clustering amplitude $A_\mathrm{s}$ starts to deviate significantly from the true value after $k_{\rm max} \gtrsim 0.14 \, h\,\mathrm{Mpc}^{-1}$.
\begin{figure*}[ht] \begin{center} \includegraphics[width=0.49\textwidth]{triangle_2D_west_lowk_final.pdf} \includegraphics[width=0.49\textwidth]{triangle_2D_west_highk_final.pdf} \end{center} \caption{Marginalized posteriors for the three varied cosmological parameters as a function of $k_{\rm max}$ (quoted in $h\,\mathrm{Mpc}^{-1}$ in the figure legend) obtained by the West Coast Team. Dashed lines mark the input parameters which were revealed once the Team submitted its final result, similarly to Fig.~\ref{fig:contours1}. } \label{fig:west-contours} \end{figure*} \begin{table*}[ht] \centering \begin{tabular}{|l|c|c|} \hline Param & best-fit & mean$\pm\sigma$ \\ \hline $\Delta\Omega_\mathrm{m}/\Omega_\mathrm{m} \cdot 10^2$ &$1.3$ & $1.2_{-0.8}^{+0.8}$ \\ $\Delta h / h \cdot 10^2$ &$-0.7$ & $-0.6_{-0.6}^{+0.6}$ \\ $\Delta \ln (10^{10}A_{\rm s}) / \ln (10^{10}A_{\rm s}) \cdot 10^2$ &$0.1$ & $0.1_{-0.7}^{+0.7}$ \\ $\Delta b_1 /b_1\cdot 10^2$ &$0.8$ & $0.7_{-1.1}^{+1.0}$ \\ \hline \end{tabular} \caption{ Similar to Table~\ref{tab:res}, but the results obtained by the West Coast Team for $k_{\rm max} = 0.12~h\,\mathrm{Mpc}^{-1}$ at $z=0.61$. Only cosmological parameters and $b_1$ are shown.} \label{tab:west-results} \end{table*} \begin{figure}[h!] \begin{center} \includegraphics[width=0.49\textwidth]{ptchallenge_bestfit_012.pdf}\\ \includegraphics[width=0.49\textwidth]{ptchallenge_residual_012.pdf}\\ \includegraphics[width=0.49\textwidth]{ptchallenge_linloop_012.pdf} \end{center} \caption{\textit{Upper panel}: Comparison of the data for the monopole (black) and the quadrupole (blue) with the best-fit model obtained by the West Coast Team. \textit{Middle panel}: Residuals for the monopole and the quadrupole for the best-fit model with the partially-marginalized Likelihood giving $-2 \log \mathcal{L}/{\rm dof}= 16/(24-6)$ for $k_{\rm max} = 0.12~h\,\mathrm{Mpc}^{-1}$. \textit{Lower panel}: Different contributions to the monopole and quadrupole power spectra. 
We plot just the absolute values, some terms are negative.} \label{fig:west-bestfit} \end{figure} \subsection{Comparison of the two analyses} \label{subsec:blind_summary} \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth] {1D_posterior_cosmo_main_rev1.pdf} \end{center} \caption{One dimensional marginalized posterior distributions for the three main cosmological parameters as well as the linear bias parameter as a function of the maximum wavenumber $k_\mathrm{max}$ considered in the analysis. The 68\% credible intervals derived by the East and West Coast Team are shown respectively by the blue and red error bars with the mean marked by the upward and downward triangles, respectively. Overplotted by the shaded regions are those scaled to the volume of SDSS DR12. The error bars are slightly shifted horizontally to avoid a heavy overlap. } \label{fig:1D_cosmo_main} \end{figure} \begin{figure*} \begin{center} \includegraphics[width=0.48\textwidth]{triangle_2D_main_kmax008_rev1.pdf} \includegraphics[width=0.48\textwidth]{triangle_2D_main_kmax012_rev1.pdf} \includegraphics[width=0.48\textwidth]{triangle_2D_main_kmax016_rev1.pdf} \includegraphics[width=0.48\textwidth]{triangle_2D_main_kmax020_rev1.pdf} \end{center} \caption{Two dimensional marginal posterior distributions for the three main cosmological parameters and the linear bias parameter. The 68\% and 95\% credible intervals derived by the East and West Coast Team are shown respectively by the cyan and orange contours. The corresponding one dimensional marginal distributions are shown in the diagonal panels by the solid and dashed lines. The maximum wave number included in this analysis is $k_\mathrm{max} = 0.08$ (upper left), $0.12$ (upper right), $0.16$ (lower left) and $0.2\,h\,\mathrm{Mpc}^{-1}$ (lower right). Three degeneracy directions for some parameter combinations are also displayed in the contour panels by the thick dashed lines (see text for more detail). 
} \label{fig:2D_4panels} \end{figure*} So far we have presented the analyses done by the two teams. We now compare the two and discuss how different model assumptions lead to the different cosmological-parameter constraints. First, since the two teams employ different sets of varied cosmological parameters, a direct comparison between Figs.~\ref{fig:contours1} and \ref{fig:west-contours} is not straightforward. We therefore compare the constraints in the common parameter space $(\Omega_\mathrm{m}, H_0, A_\mathrm{s})$. We first show in Fig.~\ref{fig:1D_cosmo_main} the one-dimensional marginalized errors on these parameters as a function of the maximum wavenumber, $k_\mathrm{max}$, used in the analysis. The $1$-$\sigma$ credible intervals by the East (West) Coast Team are shown by the upward (downward) triangles with error bars. Also shown by the shaded regions are the same intervals, but scaled to the volume of SDSS BOSS DR12 according to the ratio of the simulated and observed volumes\footnote{We adopt the total volume of SDSS BOSS DR12, $5.7(h^{-1}\mathrm{Gpc})^3$, instead of that of CMASS2.}. Overall, the ground truth values of the three cosmological parameters stay within, or only slightly outside, the $1$-$\sigma$ interval up to $k_\mathrm{max} = 0.14~h\,\mathrm{Mpc}^{-1}$. The inferred primordial scalar amplitude, $A_\mathrm{s}$, in particular, is always within the interval up to this $k_\mathrm{max}$ for both teams. Above this $k_\mathrm{max}$, however, $A_\mathrm{s}$ starts to deviate from the ground truth in a systematic and statistically significant way. This is consistent with the expectation that two-loop corrections become important at these scales. In contrast,~$H_0$ and $\Omega_\mathrm{m}$ stay roughly within $1\text{-}\sigma$ from the true value all the way up to $k_{\rm max}=0.2~h\,\mathrm{Mpc}^{-1}$.
However, if one focuses on the shaded regions corresponding to the statistical error of the actual BOSS survey, the ground truth values are always well within the $1$-$\sigma$ interval, which justifies the $k_{\rm max}$ choice of the analyses from the same teams in Refs.~\cite{DAmico:2019fhj,Ivanov:2019pdj,Colas:2019ret}. While the size of the error bars shrinks towards higher $k_\mathrm{max}$, the gain is small beyond $k_\mathrm{max} \gtrsim 0.14~h\,\mathrm{Mpc}^{-1}$. This could be caused by a combination of two effects. Firstly, the relative contribution of the shot noise to the data covariance becomes important. Secondly, the EFT parameters controlling the nonlinear corrections become important in such a way that the additional information coming from small-scale modes mainly determines these parameters rather than the cosmological parameters. Looking at the trend in the error bars in more detail, the results from the two teams are clearly different, especially for $k_\mathrm{max} \lesssim 0.1~h\,\mathrm{Mpc}^{-1}$, where the error bars of the West Coast Team are up to a factor of $\sim2$ smaller. This difference is driven by the prior treatment. The East Coast Team had no priors on the chosen set of nuisance parameters, whereas the West Coast Team has always kept the nuisance parameters within physically-motivated bounds. Thus, the observed difference of the results between the two teams implies that for $k_\mathrm{max} \lesssim 0.1~h\,\mathrm{Mpc}^{-1}$ the data are not good enough to break degeneracies between the cosmological and nuisance parameters. These degeneracies get broken at larger wavenumbers, where the results of the two teams agree regardless of the priors on the nuisance parameters. Let us briefly discuss some cosmological implications of our blinded analysis. The cosmological information probed by galaxy redshift surveys can be crudely divided into four different categories: \begin{itemize} \item Shape information.
The shape of the galaxy power spectrum is controlled by the physical matter density $\omega_\mathrm{m}$. This parameter is measured from the data regardless of the choice of rulers such as $H_0$. $\omega_\mathrm{m}$ is extracted from the features of the power spectrum, such as the form of the BAO peaks, the baryonic suppression, the turnover, and the overall slope. \item Distance information, mainly encoded through the volume-averaged distance\footnote{It is defined as $D_\mathrm{V}(z)=\left(z(1+z)^2D^2_\mathrm{A}(z)/H(z)\right)^{1/3}$, where $D_\mathrm{A}(z)=\frac{1}{1+z}\int_0^z \frac{dz'}{H(z')}$ and $H^2(z)=H^2_0(\Omega_{\rm m}(1+z)^3+1-\Omega_\mathrm{m})$ in flat $\Lambda$CDM.} $D_\mathrm{V}(z)$. This quantity essentially controls the freedom to shift the power spectrum along the $k$ axis. In the flat $\Lambda$CDM framework this distance depends only on two cosmological parameters, $\omega_\mathrm{m}$ and $H_0$. Since $\omega_\mathrm{m}$ is measured from the shape, the constraint on $D_\mathrm{V}$ translates directly into a constraint on $H_0$. Note that $\Omega_\mathrm{m}$ in this picture can be seen as a parameter derived from a combination of the shape and distance parameters. \item Redshift space distortions. Observing galaxies in redshift space allows one to measure the unbiased rms velocity fluctuation $f\sigma_8(z)=f(z)D_+(z)\sigma_8$. In $\Lambda$CDM, $D_+$ and $f$ depend only on $\Omega_{\rm m}$, which is constrained from the shape and the distance. In this way the RSD measurements directly constrain $A_\mathrm{s}$. \item The Alcock-Paczynski geometric distance information. The AP effect allows one to measure the combination $H(z)D_\mathrm{A}(z)$. However, in $\Lambda$CDM this combination is a slowly varying function of the cosmological parameters at small redshifts. Thus, it does not contribute significantly to the overall constraints on $\Omega_\mathrm{m}$, see Ref.~\cite{Ivanov:2019pdj} for more detail.
\end{itemize} We can see in Fig.~\ref{fig:2D_4panels} that our results are indeed fully in line with these theoretical expectations. First, let us focus on the two-dimensional posterior in the $(\Omega_\mathrm{m}$--$H_0)$ plane. The degeneracy direction is observed to rotate as the maximum wavenumber increases. When $k_\mathrm{max}=0.08\,h\,\mathrm{Mpc}^{-1}$, $\Omega_\mathrm{m}$ and $H_0$ are negatively correlated. At the other end, the correlation turns positive for $k_\mathrm{max}=0.16$ and $0.2\,h\,\mathrm{Mpc}^{-1}$. We can interpret this as the outcome of the change in the relative importance of the BAO feature. Although the first BAO peak is already included at $k_\mathrm{max}=0.08\,h\,\mathrm{Mpc}^{-1}$, the dominant constraint at this maximum wavenumber comes from the overall shape information, e.g., the matter-radiation equality scale ($\theta_\mathrm{eq} = 1/(k_\mathrm{eq}D_\mathrm{V}) \propto \Omega_\mathrm{m}^{-0.83}h^{-1}$, where $k_\mathrm{eq}$ denotes the equality wavenumber). Indeed, the contours from the two teams are roughly oriented along this direction, depicted by the red dashed line. At $k_\mathrm{max}=0.2\,h\,\mathrm{Mpc}^{-1}$, as we can clearly see the BAO feature up to the third peak (see Fig.~\ref{fig:multipole}), the BAO scale (the blue dashed line in Fig.~\ref{fig:2D_4panels}: $\theta_\mathrm{BAO} = r_\mathrm{s}/D_\mathrm{V}$ with the sound horizon scale $r_\mathrm{s}$) plays a more significant role. The measurement of the relative location of these two characteristic scales allows us to determine the physical density $\omega_\mathrm{m} = \Omega_\mathrm{m} h^2$, and together with the distance measurement through the cosmology dependence of the redshift-distance conversion (i.e., a measurement of $D_\mathrm{V}$), we can break the degeneracy between $\Omega_\mathrm{m}$ and $H_0$. 
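This shape-plus-distance argument can be made concrete numerically. The sketch below evaluates $D_\mathrm{V}(z)$ from the flat $\Lambda$CDM expressions quoted in the footnote above; the function names, the simple trapezoidal integration, and the example parameter values are ours, not the paper's.

```python
import math

C_KMS = 299792.458  # speed of light in km/s


def hubble(z, Om, h):
    """H(z) in km/s/Mpc for flat LambdaCDM, as in the footnote."""
    return 100.0 * h * math.sqrt(Om * (1.0 + z) ** 3 + 1.0 - Om)


def d_v(z, Om, h, n=4096):
    """Volume-averaged distance D_V(z) = [z (1+z)^2 D_A^2(z) c/H(z)]^(1/3), in Mpc."""
    dz = z / n
    chi = 0.0
    for i in range(n):  # trapezoid rule for the comoving distance integral
        z0, z1 = i * dz, (i + 1) * dz
        chi += 0.5 * dz * (C_KMS / hubble(z0, Om, h) + C_KMS / hubble(z1, Om, h))
    d_a = chi / (1.0 + z)  # angular-diameter distance
    return (z * (1.0 + z) ** 2 * d_a ** 2 * C_KMS / hubble(z, Om, h)) ** (1.0 / 3.0)
```

Because $D_\mathrm{V}$ in this model depends only on $\omega_\mathrm{m}=\Omega_\mathrm{m}h^2$ and $H_0$, fixing $\omega_\mathrm{m}$ from the shape turns a measurement of $D_\mathrm{V}$ into a measurement of $H_0$, as stated above.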
Once $D_\mathrm{V}(z)$ and $\omega_{\rm m}$ are fixed, the other parameters, such as the distance parameters $H(z)$ and $D_\mathrm{A}(z)$ (with $h$ kept in the units, i.e., $h\,\mathrm{Mpc}^{-1}$ or $h^{-1}\mathrm{Mpc}$) or the growth parameter $f(z)$, are merely dependent parameters fully determined by $\Omega_\mathrm{m}$, as long as we stick to the flat $\Lambda$CDM cosmology. Had we fitted the data with a more general expansion model, e.g.~dynamical dark energy or modified gravity models, the posterior distributions of these parameters would have been different. These parameters extracted from our MCMC chains, together with some other useful parameters, are displayed in Fig.~\ref{fig:1D_other}. \begin{figure}[h!] \begin{center} \includegraphics[width=0.48\textwidth]{1D_posterior_others_final.pdf} \end{center} \caption{One-dimensional marginalized posterior distributions of {\it derived} parameters for the flat $\Lambda$CDM model as a function of the maximum wavenumber included in the analysis, $k_\mathrm{max}$. The fractional error is shown, with the uncertainty in $H_0$ absorbed into the $h$-units of the distance parameters (i.e., $D_\mathrm{A}$ is expressed in $h^{-1}\mathrm{Mpc}$ and $H$ in $h\,\mathrm{Mpc}^{-1}$). } \label{fig:1D_other} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{triangle_2D_amplitude_kmax012_rev1.pdf} \end{center} \caption{Two-dimensional marginalized posterior distributions for amplitude-related parameters relevant for the RSD measurement from the analyses at $k_\mathrm{max}=0.12\,h\,\mathrm{Mpc}^{-1}$. The degeneracy directions expected from linear RSD measurements, along which $f\sigma_8$, $b_1\sigma_8$, or $f/b_1$ is constant, are shown by the dashed lines. Note that $f(z)$ and $\sigma_8(z)$ are derived parameters fully fixed once $\Omega_\mathrm{m}$ and $A_\mathrm{s}$ are given within the flat $\Lambda$CDM model. 
} \label{fig:2D_amplitude} \end{figure} Apart from the shape-related parameters, the determination of the amplitude parameter is of interest. We can see in Fig.~\ref{fig:2D_4panels} that the posterior of the amplitude parameter, $A_\mathrm{s}$, is strongly correlated with the linear bias parameter $b_1$. To understand this more clearly, we show the constraints on the parameters relevant for the measurement of RSD (the one-dimensional and two-dimensional marginalized posteriors in Figs.~\ref{fig:1D_other} and \ref{fig:2D_amplitude}, respectively). In the two-dimensional contour plot, we can see that the amplitude parameter scaled to the redshift of the survey volume, $\sigma_8(z) = [D_+(z)/D_+(z=0)]\,\sigma_8$, is strongly degenerate with the linear bias parameter, $b_1$, just as we have seen for $A_\mathrm{s}$ and $b_1$. In fact, they are expected to be fully degenerate in the absence of RSD information in linear theory. We can also see in Fig.~\ref{fig:1D_cosmo_main} that $b_1$ starts to depend weakly on $k_\mathrm{max}$ above $\sim 0.14\,h\,\mathrm{Mpc}^{-1}$, with statistical significance, and a similar departure from the ground truth value happens at the same place but in the opposite direction for $\sigma_8(z)$, as shown in Fig.~\ref{fig:1D_other}. The other famous degeneracy directions, $f\sigma_8$ or $f/b_1$, which are the direct observables from linear RSD, do not appear in our contours in Fig.~\ref{fig:2D_amplitude}. This is again because the flat $\Lambda$CDM assumption makes $f$ a dependent variable fully determined by $\Omega_\mathrm{m}$. What we see here is that the constraint on $\Omega_\mathrm{m}$ through the shape and distance measurements discussed above, combined with the measurement of $f\sigma_8$ from RSD, allows us to constrain $\sigma_8$ (and thus $A_\mathrm{s}$) directly. 
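The logic of this last step — $\Omega_\mathrm{m}$ fixes $f(z)$, so a measured $f\sigma_8$ yields $\sigma_8$ — can be sketched in a few lines. Note that $f(z)\simeq\Omega_\mathrm{m}(z)^{0.55}$ is a standard $\Lambda$CDM approximation we introduce here for illustration; it is not taken from the paper.

```python
def f_growth(z, Om):
    """Linear growth rate f(z) ~ Omega_m(z)^0.55, a standard flat-LCDM
    approximation (our assumption, not the paper's exact computation)."""
    om_z = Om * (1.0 + z) ** 3 / (Om * (1.0 + z) ** 3 + 1.0 - Om)
    return om_z ** 0.55


def sigma8_from_rsd(fsigma8, z, Om):
    """Given a measured f*sigma8(z) and Omega_m (fixed by shape + distance),
    recover sigma8(z) -- the degeneracy-breaking step described in the text."""
    return fsigma8 / f_growth(z, Om)
```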
\section{Conclusion and outlook} \label{sec:conclusion} In this paper we have presented the results of the blinded cosmology challenge initiated to test theoretical models for redshift-space galaxy clustering. The task was to assess whether the theoretical model, here the EFTofLSS, can recover the blinded cosmological parameters of an N-body simulation from mock data of redshift-space power spectrum multipoles for BOSS-like galaxies. The sufficiently large volume, dynamical range, and high resolution of the challenge simulation allow one to pin down any potential inaccuracy of the theoretical modeling relative to the statistical errors of a BOSS-like survey. The simulations were run by a team (``Japan Team'') that kept the true parameters secret. The mock data were analyzed by two other independent teams (``East Coast Team'' and ``West Coast Team'') who volunteered to participate in the challenge. The rule of the challenge was that the true parameters could be unblinded only when the analyzing teams submitted their final results to the simulation team. All three teams agreed that the submitted results be presented in this paper, without any change, after the unblinding. Both analyzing teams used the same theoretical model based on the effective field theory of large-scale structure. However, there are some nontrivial differences, whose impact on the final cosmological inference should be tested quantitatively with care. The corresponding pipelines were the ones applied to the real BOSS data in Refs.~\cite{DAmico:2019fhj,Ivanov:2019pdj,Colas:2019ret}. We have discussed in detail the methodological and technical differences between these two pipelines. Despite these differences, both teams have successfully recovered the true cosmological parameters within the expected statistical error bars. This suggests that perturbation theory, once consistently implemented, can be used as a standard tool for unbiased estimation of cosmological parameters from galaxy surveys. 
The enormously large total simulation volume used in the challenge helped to assess the systematic error due to incomplete theoretical modeling by suppressing the statistical error to a level much lower than in current surveys. The biased cosmological inference beyond the maximum wavenumber reported for the challenge, $k_\mathrm{max}=0.12\,h\,\mathrm{Mpc}^{-1}$, consistently determined by both teams, indicates the typical systematic error one can make in actual surveys with much smaller observed volumes (see, e.g., Fig.~\ref{fig:1D_cosmo_main} up to $k_\mathrm{max}=0.2\,h\,\mathrm{Mpc}^{-1}$). For instance, the analyses of SDSS BOSS galaxies in Refs.~\cite{DAmico:2019fhj,Ivanov:2019pdj,Colas:2019ret} adopt $k_\mathrm{max}$ around $0.2\,h\,\mathrm{Mpc}^{-1}$ (0.18 to 0.25, depending on the paper and the redshift bin of the galaxy sample). While the detailed choice of varied cosmological parameters as well as the way of combining with CMB constraints differ from what is presented here, one can make a reasonable guess about the potential systematic biases on the inferred cosmological parameters of these papers from our results. Of the three cosmological parameters that we considered here, the scalar amplitude parameter, $A_\mathrm{s}$, is the most severely biased beyond $k_\mathrm{max}=0.12\,h\,\mathrm{Mpc}^{-1}$, reaching $\sim 4\%$\footnote{We estimate this theory systematic error as the distance from the truth of the $1 \sigma$ region of the posterior, as done in~\cite{DAmico:2019fhj}.} at $k_\mathrm{max}=0.2\,h\,\mathrm{Mpc}^{-1}$, while the two other parameters, $\Omega_\mathrm{m}$ and $H_0$, are fairly unbiased even when the EFT template starts to fail. This indicates that the latter two are mostly constrained through the shape of the spectrum (mostly the distinctive BAO feature). The situation should be the same in actual observational data analyses such as the ones listed above. 
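The footnote's definition of the theory systematic can be written down explicitly; the sketch below is our reading of that definition (the distance between the truth and the nearest edge of the $1\sigma$ posterior interval, quoted as a fraction of the truth):

```python
def theory_systematic(mean, sigma, truth):
    """Fractional theory systematic: how far the truth lies outside the
    1-sigma posterior interval [mean - sigma, mean + sigma].
    Returns 0 if the truth is inside the interval."""
    gap = abs(mean - truth) - sigma
    return max(gap, 0.0) / abs(truth)
```

For instance, a posterior whose $1\sigma$ edge sits $4\%$ above the truth yields a value of $0.04$ under this convention.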
Although the precise value of the detected parameter bias on $A_\mathrm{s}$ can depend on the details of the halo-galaxy connection, mainly through the uncertainty in the strength of the redshift-space distortions, it is reassuring that our worst-case value of $4\%$ is still below the statistical error from Refs.~\cite{DAmico:2019fhj,Ivanov:2019pdj,Colas:2019ret}, which is $12\%$ to $19\%$ ($68\%$ C.L.) depending on the paper. Future experiments with even larger survey volumes and higher galaxy number densities will allow us to lower these uncertainties, and in that case one has to be more careful about the parameter bias due to model inaccuracy, either by lowering $k_\mathrm{max}$ or by improving the model itself. We investigate the parameter constraints for a hypothetical survey with the volume of the Dark Energy Spectroscopic Instrument (DESI: \cite{DESI}) in Appendix~\ref{app:desi}. We are currently exploring a number of post-unblinding research directions. The first is a thorough investigation of the information content of redshift galaxy surveys. Second, it would be interesting to see how much the $k_{\rm max}$ value at which one-loop perturbation theory breaks down depends on the properties of the galaxy population, e.g., assembly bias or satellite fraction. Third, it will be interesting to see how well perturbation theory performs for other observables, e.g.~galaxy-galaxy weak lensing or the redshift-space bispectrum. These research avenues are left for future work. We have presented the results obtained by the analyzing teams in such a way that the true parameters remain blinded to the readers. This is done in case other researchers would like to test their theory models on the challenge spectra. All challenge data are available online at \url{http://www2.yukawa.kyoto-u.ac.jp/~takahiro.nishimichi/data/PTchallenge/}. We encourage all groups working on galaxy clustering analysis to participate in the challenge. 
\acknowledgements{TN, LS, MT and MZ acknowledge a warm hospitality of the BCCP-IAS workshop ``The Nonlinear Universe 2018'' held at Smartno, Slovenia, where this work was initiated. This work is supported in part by World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan, and by MEXT/JSPS KAKENHI Grant Numbers JP17K14273 (TN), JP15H05887 (MT), JP15H05893 (MT), JP15K21733 (MT), and JP19H00677 (TN, MT). TN also acknowledges financial support from Japan Science and Technology Agency (JST) CREST Grant Number JPMJCR1414 and by JST AIP Acceleration Research Grant Number JP20317829, Japan. Numerical computations were carried out on Cray XC50 at Center for Computational Astrophysics, National Astronomical Observatory of Japan. GDA is partially supported by Simons Foundation Origins of the Universe program (Modern Inflationary Cosmology collaboration). LS is partially supported by Simons Foundation Origins of the Universe program (Modern Inflationary Cosmology collaboration) and by NSF award 1720397. MZ is supported by NSF grants AST1409709, PHY-1820775 the Canadian Institute for Advanced Research (CIFAR) program on Gravity and the Extreme Universe and the Simons Foundation Modern Inflationary Cosmology initiative. MI is partially supported by the Simons Foundation's Origins of the Universe program and by the RFBR grant 20-02-00982~A. }
\subsection{Passive dynamic walker} \label{sec:PDwalker} \begin{figure}[htpb] \centering \includegraphics[width=0.9\linewidth]{walker.png} \caption{\label{fig:walker} Model and state parameters for a passive dynamic walker from Coleman and Ruina~\cite{Coleman:1998}. This system is stable, but the system's region of stability is small.} \end{figure} We applied the particle trace method to assess the parametric (modeling) limits with respect to stability of a bipedal passive dynamic walker~\cite{Coleman:1998} (illustrated in Figure~\ref{fig:walker}). We sampled over the inertial and kinematic parameters of the walker, resulting in four hundred different particles. A fixed point cycle computation process (as described in~\cite{Coleman:1998}) was applied to each particle to compute initial conditions that yield a walking cycle. Each particle trace was simulated for sufficient duration to allow the biped to walk up to 20 steps. \begin{figure}[htpb] \includegraphics[width=\linewidth]{walker-margins.png} \caption{\label{fig:walker-data} Stability region for the passive dynamic walker. Upright time over an 8s walk for varied c.o.m. positions ($x_{\textrm{cm}}$) and foot mass ($m_1$). $x_{\textrm{cm}}$ is offset from an initial value of 0.0 and $m_1$ is offset from an initial value of 1.0 kg (initially matching $m_2$).} \end{figure} All modeling parameters described in Figure~\ref{fig:walker} were perturbed in our assessment: $\{I_{xx}, I_{yy}, I_{zz},I_{xy}, I_{xz}, I_{yz},$ $m_1, \alpha, x_{cm}, y_{cm}, z_{cm} , l, w, r, m_2\} \sim \hspace{-1mm} \mathcal{N}(\mu_\textrm{param},\sigma_\textrm{param})$ where $\mu_\textrm{param}$ is the mean (expected) parameter value for a working simulated system and standard deviation $\sigma_\textrm{param}$ is 5\% of $\mu_\textrm{param}$. Region(s) of feasible walker parameters can be determined by examining the duration that parameterizations remain upright. 
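The perturbation scheme just described — every parameter drawn from a normal distribution centered on its nominal value with a standard deviation of 5\% of that value — can be sketched as follows. The nominal values below are placeholders for illustration; the actual nominal model comes from Coleman and Ruina.

```python
import random

# Placeholder nominal parameters (the real nominal walker model is from
# Coleman & Ruina; these values are illustrative only).
NOMINAL = {"m1": 1.0, "m2": 1.0, "l": 1.0, "w": 0.3, "r": 0.5, "alpha": 0.1}


def sample_particles(nominal, n=400, frac=0.05, seed=0):
    """Draw n perturbed parameter sets; each parameter ~ N(mu, frac*|mu|),
    matching the paper's 5%-of-mean standard deviation."""
    rng = random.Random(seed)
    return [
        {k: rng.gauss(mu, frac * abs(mu)) for k, mu in nominal.items()}
        for _ in range(n)
    ]
```

Each sampled dictionary then parameterizes one simulated walker (one particle trace); note that parameters with a zero nominal value, such as the c.o.m.\ offset, get zero spread under this rule and would need an absolute perturbation scale instead.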
\begin{figure}[htpb] \includegraphics[width=\linewidth]{walker-time-thin.png} \caption{\label{fig:walker-data2} 400 randomly sampled particles of the passive walker simulated over 30s (virtual time); the walker's roll orientation is plotted over the course of the experiment. Termination points (falls) are marked with circles.} \end{figure} \textbf{Results:} \quad Figure~\ref{fig:walker-data} depicts the stability region of the passive walker with respect to a grid sampling over the c.o.m. offset (front-back) and left leg mass. Grid sampling becomes costly when attempted over all parameters due to the curse-of-dimensionality. A pseudorandom sampling over all fifteen parameters detects similar points of failure for the walking robot, corresponding to the steep ledges seen in Figure~\ref{fig:walker-data}. Figure~\ref{fig:walker-data2} displays one state value (roll) with respect to time. State data was collected from four hundred particle traces for this plot. Falls occur in clusters (at 2s and 8s) as the walker passes through a region of its state space with a grazing bifurcation (usually a scuffed or stubbed foot for this system). Figure~\ref{fig:walker-data2} depicts a single bifurcating event, scuffing a foot on the ground, generating a compact cluster of state data between particles. Figure~\ref{fig:walker-data} indicates that a more robust walker can be obtained by increasing $m_1$ slightly ($\approx 0.02$~kg). 
\subsection{Detecting grazing bifurcation for a quadruped} \label{sec:bifurcation} \begin{figure}[htpb] \centering \vspace{0.1in} \begin{tabular}{ccc} \begin{adjustbox}{valign=t} \includegraphics[trim={15cm 0cm 15cm 2cm},clip,height=1.57in]{step-1.png} \end{adjustbox} &\hspace{-0.15in} \begin{adjustbox}{valign=t} \begin{tabular}{@{}c@{}} \includegraphics[trim={10cm 7cm 12cm 5cm},clip,height=0.75in]{stepH-2.png} \\[0.2ex] \includegraphics[trim={10cm 7cm 12cm 5cm},clip,height=0.75in]{stepL-2.png}\\ \end{tabular} \end{adjustbox} &\hspace{-0.15in} \begin{adjustbox}{valign=t} \begin{tabular}{@{}c@{}} \includegraphics[trim={10cm 7cm 12cm 5cm},clip,height=0.75in]{stepH-3.png}\\[0.2ex] \includegraphics[trim={10cm 7cm 12cm 5cm},clip,height=0.75in]{stepL-3.png} \end{tabular} \end{adjustbox} \end{tabular} \caption{\label{fig:bifurcation-image}A time-lapse of the virtual \software{Links} robot stepping over (top) or into (bottom) an obstacle given high and low step heights, respectively.} \end{figure} We simulated placing an 18 DoF quadrupedal robot model (\emph{Links}) next to a curb obstacle and directed the quadruped to step over the curb with variable step height (all other parameters fixed). A time-lapse depiction of the diverging behavior is shown in Figure~\ref{fig:bifurcation-image}. The robot will clearly collide with the curb if it does not step high enough. We predict that a grazing bifurcation will occur if the step height is approximately equal to the curb height: small changes in initial conditions, modeling parameters, or sensing (of, e.g., curb geometry) will determine whether or not the robot strikes the obstacle. We ran three trials, each of which uses one of three preset gait control policies that attempt two, three, and four cm step heights. The curb height was fixed at three cm. 
\begin{figure}[H] \centering \includegraphics[width=\linewidth]{bifurcation} \caption{\label{fig:links-bifurcation} Virtual quadrupedal robot base yaw when turning into a 3 cm tall curb obstacle. Each line represents a particle, and each color represents a policy. \textbf{Red} particles following a 4 cm step height policy (marked as 0.04 m on the plot) step over the curb and continue to turn. \textbf{Blue} particles following a 2 cm step height policy (marked as 0.02 m on the plot) strike the curb and are prevented from turning. \textbf{Green, dotted} particles following a 3 cm step height policy (marked as 0.03 m on the plot), where the step height matches the curb height (3 cm), experience bifurcation by only occasionally striking the obstacle. The behavior of these particles is less predictable.} \end{figure} \begin{figure*} \begin{center} \vspace{0.1in} \includegraphics[width=0.99\linewidth]{RSS-videos/fall.png} \end{center} \vspace{-0.1in} \caption{\label{fig:links-walking}A two second time-lapse of \software{Links} walking with a gait period duration of: 0.6 seconds (Top); 1.0 seconds (Middle); 1.5 seconds (Bottom). The robot became progressively less stable as the gait period duration increased.} \end{figure*} \textbf{Results:} \ Grazing bifurcation drives the traces using the three cm step-height policy into one of two distinct clusters of states that correspond to robot models clearing and striking the curb, respectively (see Figure~\ref{fig:links-bifurcation}). Comparing the sequence of events in each particle trace for this policy, the cluster that corresponds to robots stepping over the obstacle contains the contact sequence \{~RH/ground, LH/ground, RF/ground, LF/ground~\}, and the cluster that corresponds to robots striking the obstacle contains the contact sequence \{~RH/ground, LH/ground, RF/obstacle, RF/ground, LF/ground~\}. 
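Comparing the sequence of events in each particle trace amounts to grouping traces by their ordered list of contact events; a minimal sketch (the event-string encoding is ours, mirroring the two sequences reported above):

```python
def cluster_by_contact_sequence(traces):
    """Group particle-trace indices by their ordered contact-event sequence."""
    clusters = {}
    for i, events in enumerate(traces):
        clusters.setdefault(tuple(events), []).append(i)
    return clusters


# The two sequences observed for the 3 cm step-height policy:
CLEAR = ["RH/ground", "LH/ground", "RF/ground", "LF/ground"]
STRIKE = ["RH/ground", "LH/ground", "RF/obstacle", "RF/ground", "LF/ground"]
```

Two traces land in the same cluster exactly when they experienced the same contact events in the same order, which is how the clearing and striking populations separate.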
The sampling strategy efficiently uncovers the grazing bifurcation, as the variance of the green paths in Figure~\ref{fig:links-bifurcation} indicates; a grid search over the quadruped's 36-dimensional state space is intractable. We expect that the robot's \emph{in situ} performance would be difficult to predict under this policy because the robot's behavior is sensitive to modeling and estimation uncertainty. In contrast, the four cm step height policy allowed the quadruped to step over the curb in all traces, and the two cm step height policy caused the robot to collide with the curb in all traces. It is reasonable to expect that the real robot would behave predictably under both of these policies. \subsection{Assessing grasping plan robustness} \label{sec:bifurcation-arm} We simulated an 11 DoF fixed-base manipulator robot performing a picking task (reaching, grasping, and lifting) on a ball within its reach. Two distinct plans were generated to achieve the task: Path A corresponds to the gripper approaching from above the ball, moving in a straight line between the gripper's initial position and the expected position of the ball; Path B moves sideways from above the ball during the approach to grasp the ball from its side. We ran two trials, with each trial generating forty particles. The trials were differentiated only by their approach trajectory and the resulting grasp orientation on the ball. A trial was considered successful if the final position of the ball matched the planned one. We expected that the relative success rates of the particle traces executing the two plans would reveal the more robust one; we posit that the absolute success rate of a plan would indicate its robustness when executed \emph{in situ}, but we did not test this hypothesis. 
\begin{figure}[htpb] \centering \includegraphics[width=\linewidth]{final} \caption{\label{fig:final-pos} Final position of the ball after the pick behavior following Path A (Red Circle) and Path B (Blue Triangle). Points arrayed along the bottom of the plot are resting on the ground plane.} \end{figure} \textbf{Results:} \quad Both paths result in the ``original'' (unperturbed model) robot successfully picking the ball from the given initial conditions. These paths could correspond to a brittle plan generated using existing techniques. Path A resulted in an 88\% success rate, while Path B successfully completed the picking task 63\% of the time (a 40\% relative performance differential). A finger tapping the sphere and causing it to roll was a typical cause of failure for Path~B. Figure~\ref{fig:final-pos} shows the final position of the ball in each of the particle traces. Examples of successful and failed attempts using each trajectory are depicted in Figure~\ref{fig:plans}, which provides a visual realization of what a particle trace might look like within this framework. \begin{figure}[htpb] \centering \includegraphics[height=2in]{traj1-good} \includegraphics[height=2in]{traj1-bad} \caption{\label{fig:plans} Path A (top) and Path B (bottom) attempting to grasp the ball. Successes (left) maintain hold of the ball and failures (right) drop the ball or push it away.} \vspace{0.1in} \includegraphics[height=2in]{traj2-good} \includegraphics[height=2in]{traj2-bad} \end{figure} \subsection{Statistical behavior of a rigid robot from particle traces} \label{section:exp:monte-carlo} The degree of correlation between the behavior of robots simulated over a timespan of seconds or minutes and that of their physically situated counterparts depends on many factors. A flexible robot may evince little of the behavior of its virtual counterpart simulated using multibody dynamics, for example. 
This section focuses on the correlation between simulated and \emph{in situ} behavior for a scenario that \emph{should} be well modeled using fast simulation tools. This issue is important because one can only expect grazing bifurcations located in simulation to be informative if there is some correlation between simulated and \emph{in situ} behavior. \paragraph{Robot} The robot used in physical trials, \software{Links}, is an 18 degree-of-freedom (12 actuated) quadruped robot constructed from Dynamixel actuators and steel links (see Figure~\ref{fig:links}). Base orientation is recorded by an IMU that produces updates at 100 Hz. Modeling uncertainties and errors on even such a small robot are legion and include, but are not limited to, the rigid body assumption, gear backlash, communication delay, IMU sensing delay, the rigid contact assumption, and back EMF. The modeling and state parameters sampled are listed in Figure~3. \begin{figure}[htpb] \centering \includegraphics[height=1.3in]{links-situ}\includegraphics[height=1.3in]{links-sim} \caption{\label{fig:links} (Left) The \software{Links} robot in position to begin a walking experiment. (Right) A snapshot of our physical model of \software{Links} in simulation.} \end{figure} \paragraph{Dynamics model} The virtual quadrupedal robot was modeled using a box for the base link inertia and geometry, cylinders for the limb link inertias and geometries, and spheres for the foot inertias and geometries. Modeling parameters for the virtual quadruped were set from measurements collected from the \software{Links} robot. There are no compliant elements in the structure of \software{Links} (unless one counts the transmission), allowing it to be readily modeled as a multi-rigid body. \paragraph{Control Policy} We used a simple gait planning system to move the robot in a walking gait around a one foot diameter circle. 
The desired planar motion of the base of the robot $\{\dot{x},\dot{y},\dot{\theta}\}$ is input to the planner, which produces a trajectory for each foot that attempts to drive the robot base toward the intended operational-space configuration while maintaining sticking contact with the ground. We adjusted a single gait parameter (gait period duration) and observed how it affected the behavior of the robot. Gait period duration was adjusted from an empirically observed stable value of 0.6 seconds per gait cycle upward to a value where we had previously observed definite failure: 1.5 seconds per gait cycle. Each particle was traced over 20s of virtual time or until a fall, and \software{Links} was permitted to walk for 20s of wall time. \begin{comment} \subsection{Smooth behavior} - idea: we need confidence our approach will work, particularly when run on real robots with many hidden variables and black box systems. so we test this on a system without obvious non-smooth dynamics \newline - keep the joints away from limits \newline - goal: make sure that we can capture the extents in state space of a smooth behavior. If we can't do that, then we can't hope to identify divergences from non-smooth events (why?) \newline - experimental description here \end{comment} \begin{figure}[htpb] \centering \includegraphics[width=\linewidth]{real-stability} \includegraphics[width=\linewidth]{sim-stability} \caption{\label{fig:links-data} Roll orientation data for the \software{Links} robot walking in a circle: (Top) \emph{in situ} and (Bottom) \emph{in sim}. Each line is labeled with its corresponding value of the gait period duration.} \end{figure} \textbf{Results:} \quad A time-lapse depiction of this experiment is presented in Figure~\ref{fig:links-walking}. We recorded the roll orientation of the robot base and labeled a configuration a fall if the roll exceeded $\frac{\pi}{2}$ radians from vertical. 
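The fall label used here — roll exceeding $\frac{\pi}{2}$ radians from vertical — reduces to a simple scan over a recorded roll trajectory; the function name and uniform sampling below are our illustrative choices:

```python
import math


def time_until_fall(times, rolls, threshold=math.pi / 2):
    """Return the first time at which |roll| exceeds the fall threshold.
    If the robot never falls, return the final time (it survived the run)."""
    for t, r in zip(times, rolls):
        if abs(r) > threshold:
            return t
    return times[-1]
```

Applying this to each particle trace and averaging yields the per-policy "duration until a fall" statistic plotted against gait period duration in Figure~\ref{fig:links-time}.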
We observed that \software{Links} completed the 20s walk \emph{in situ} without falling when no particle traces exhibited a fall. When the robots fell in some traces, \software{Links} walked for 10s \emph{in situ} before falling. When the models in all traces fell (after the first step), \software{Links} fell on its first step \emph{in situ} as well (see Figure~\ref{fig:links-data}). While a simulation of the robot from modeling and estimates might not have exhibited the robot's \emph{in situ} behavior for a given policy, the aggregate behavior over all particle traces matched the \emph{in situ} performance well (see Figure~\ref{fig:links-time}). \begin{figure}[htpb] \includegraphics[width=\linewidth]{average-runtime} \caption{\label{fig:links-time} Duration of time until a fall of the locomoting robot plotted with respect to the gait period duration parameter. ($\times$ mark, dotted line) Duration of wall time until a fall of the \software{Links} robot. ($\circ$ mark, solid line) Average duration of virtual time until a fall for all particle traces.} \end{figure} \begin{comment} \emph{When all particle traces in a trial correspond to walking without a fall, \software{Links} did not fall over}. We infer from this evidence that gait period durations between 0.6 and 0.7 are robust to modeling error. Although a single simulation trial never quantitatively matches the behavior of the \software{Links} robot, the distribution of particles states over time encompassed the qualitative behavior of the physical system. Results such as these emphasize the need for simulated testing involving multiple samples and perturbed robot models; without the information provided by multiple samples, we would predict that the robot that fell half-way through the experiment would either fail immediately or not at all. 
The consensus of multiple particles is needed to determine the trustworthiness of simulated results and the predictability of a robot's behavior, before moving to a physical experiment. Determining the validity of the conjecture that, \emph{if no particles experience a failure for a certain control policy when performing a simulated task, then the physical robot will likely succeed in the physically simulated task when the tested control policy}, is a topic for further study. \end{comment} \section{INTRODUCTION} \label{section:intro} The standard approach to validating robot behavior is simulated testing followed by \emph{in situ} testing. This approach does not inspire confidence as simulations often fail to reflect real world behavior and \emph{in situ} testing is tedious and slow. This problem has instigated research into formal verification methods for robotics (e.g.,~\cite{Johnson:2015,Posa:2015}), which appears promising; intense study is currently underway to scale these approaches to higher degree of freedom systems. This paper explores an alternative path that is straightforward, easily implemented, and uses techniques already familiar to many roboticists to bridge the extremes of isolated physical simulation tests and full-on testing on real robotic hardware. Our approach focuses specifically on robots that physically interact with their environment via contact (i.e., manipulation and locomotion). Contact is a governing factor for the movement of legged robots about their environment and for the manner in which robot hands pick up, move, operate, and otherwise manipulate objects in their environment. 
As the photos in Figure~\ref{fig:expectation} depict, the unexpected presence or absence of contact can cause catastrophic failure.\footnote{Russ Tedrake claimed that this problem was a dominant cause of failure of the robots in DARPA's Robotics Challenge in a plenary session at Humanoids 2015.} We seek to iteratively improve control policy and physical design robustness by identifying and modifying policies and designs that are sensitive to modeling and estimation errors. We perform this task by detecting and addressing novel, nonsmooth, bifurcating events that appear between simulations of perturbed robot models (henceforth denoted \emph{particle traces}). {\it Our results indicate that novel, divergent behavior can be identified efficiently, at least for some tasks performed by some robots, with even a small number of samples}. We assess the particle trace approach using (1) a quadrupedal robot and a physically simulated model of this robot performing locomotion tasks; (2) a virtual manipulator robot performing a picking action; and (3) a physically simulated passive dynamic walker~\cite{Coleman:2001}, for which we assess the walking stability empirically over perturbations in modeling parameters. This last task demonstrates the computational efficiency of the sampling approach---particularly in relation to ongoing related work in assessing stability of hybrid dynamical systems, which has proven difficult to scale to higher dimensional systems. \begin{comment} If we find that numerous trials of a task performed on a simulated robot whose model has been perturbed so that it differs from the expected model used by a controller, we can be confident that a controller using the expected model on inherently imperfect robotic hardware (subject to manufacturing uncertainties among other sources of error) will experience a similar level of success as the imperfect simulated models. 
We demonstrate the successful application of our strategy to determine whether a task can be performed successfully on a physical robot---or similarly provide confidence that a successful outcome of a high dimensional task with numerous modeling parameters is likely on imperfectly modeled hardware. We intend to use this method as an intermediate testing phase between verifying that a planner or controller works in simulation and introducing the new software to robotic hardware. Simulation of a robot using this method can then provide an expectation of the correctness of a robot's task performance (hopefully preventing the scenarios seen in Figure~\ref{fig:expectation}), \emph{even if the simulation cannot accurately predict the state space evolution of the physical system.} \end{comment} \begin{figure} \begin{center} \includegraphics[width=\linewidth]{fall.png} \end{center} \caption{\label{fig:expectation} Two robots performing tasks with anticipated contact (images captured from a video of DARPA's recent robotics challenge). Both control strategies---(top) turning a door handle, (bottom) standing up from a seat---quickly diverge from plan without the anticipated contact, and the robots fall catastrophically. The particle trace approach can be applied to identify brittle aspects of a plan.} \end{figure} \section{BACKGROUND AND RELATED WORK} Our present work considers robot dynamics that are well modeled by rigid bodies and rigid or nearly rigid contact. The approach is not necessarily predicated on these assumptions, but the necessary simulations must be sufficiently fast (likely precluding most deformable body simulations). 
\subsection{Nonsmooth mechanics} In addition to the challenges of analyzing nonlinear dynamics stability (of multi-rigid body systems), the problem discussed in this paper requires consideration of \emph{nonsmooth mechanical systems}~\cite{Brogliato:1996}, for which velocities can change discontinuously due to impacts and even non-impacting contact with Coulomb friction~\cite{Stewart:2000a}. \begin{comment} Nonsmooth mechanical systems can be modeled using the differential complementarity problem~\cite{Pang:2008} below: \begin{align} \dot{\vect{q}} & = \vect{v}(t) \\ \dot{\vect{v}} & = f(t, \vect{q}(t), \vect{v}(t), \vect{u}(t)) \\ \textrm{subject\ to\ } \vect{0} \le \vect{u}\ & \bot\ g(\vect{q}(t)) \ge \vect{0} \end{align} where $\vect{q} \in \mathbb{R}^n$ are the generalized coordinates of the robotic system in this application, $\mathbf{f}(.)$ is an ordinary differential equation, $\mathbf{g}(.)$ are unilateral (i.e., contact, joint limit) constraints, $\vect{u} \in \mathbb{R}^p$ are forces used to satisfy the unilateral constraints, and the operator $\bot$ is commonly used in literature on complementarity problems to indicate the constraint $\tr{\vect{a}}\vect{b} = 0$ for $\vect{a}, \vect{b} \in \mathbb{R}^m$. The equations above provide a compact representation of rigid body dynamics with contact and joint limits, using the models and equations set forth in~\cite{Stewart:1996,Anitescu:1997,Miller:2003}. \end{comment} Multibody dynamics with rigid contact and Coulomb friction---which captures important stick-slip transitions---can be modeled as a differential algebraic equation (DAE): \begin{align} \ddot{\vect{q}} & = \vect{f}(\vect{q}, \dot{\vect{q}}, \vect{u}) \\ \vect{0} & = \vect{\phi}(\vect{q}) \label{eqn:DAE2} \end{align} where $\vect{\phi}(.)$ is a set of \emph{active} algebraic constraints, out of $m$ total constraints. Some constraints are always active, like bilateral joint constraints. 
Other constraints are only active if certain conditions are met; e.g., a contact constraint between two polyhedra would only be active when the bodies are in contact at that point and they would otherwise (i.e., without the constraint in place) interpenetrate at that point: \begin{align} \dot{\phi}_i(\vect{q}, \dot{\vect{q}}) = 0 \textrm{ if } \phi_i(\vect{q}) = 0 \textrm{ and } \lambda_i > 0, \textrm{ for } i=1,\ldots,m \label{eqn:DAE1} \end{align} where $\lambda_i$ acts as a Lagrange multiplier (i.e., it is non-negative when the constraint is active and zero otherwise). These kinds of problems can be formalized as a differential complementarity problem~\cite{Pang:2008}. \subsection{Stability analysis and control of nonsmooth systems} A number of researchers have studied stability analysis and control of nonsmooth mechanical systems~\cite{Brogliato:1996,Tomlin:2000,Tomlin:2003,Prajna:2007,Prajna:2007a,Leine:2008,Leine:2008a,Papachristodoulou:2009,Posa:2015}, for which hybrid dynamical systems have been a common formal model. These systems have been applied toward the study of walking machines and robots, which we have also used as an illustrative application. \subsection{Computer Animation} \cite{Twigg:2007} uses \emph{visual plausibility}, qualitatively undetectable perturbations to collision parameters, to generate a set of possible ``worlds''. Their idea is essentially the inverse of ours: where we focus on using various perturbations to a simulation to characterize robotic behavior and identify possible divergences, they perturb simulations to try to find plausible, but low-probability events. \subsection{Robust Control} Robust controllers seek control policies that are effective given bounded errors~\cite{Bemporad:2007}, which generally appear as initial state and control signal perturbations; work in robust model predictive control (MPC) also accounts for error in the system model. 
Such systems are tested and/or designed by stress-testing that perturbs modeling assumptions about initial state, control signal, environment geometry, or contact data, and by selecting the controller that performs best across all cases. Robust control has been used for improving the reliability or reducing uncertainty of locomoting systems~\cite{Wang:2009a, Mombaur:2005, Saglam:2014a, Burden:2015}, effecting grasping behaviors~\cite{Kim:2013,Mahler:2015,Weisz:2012,Zheng:2005}, and planning trajectories that reduce system uncertainty~\cite{Johnson:2016}. Validating such robust controllers through stress-testing, commonly known as \emph{falsification}, seeks to find counterexamples to the robustness claims of a controller~\cite{Branicky:2005,Abbas:2013,Esposito:2005}. \subsection{Monte Carlo method and particle approaches} There exists a tenuous relationship between our approach and Monte Carlo and particle-based approaches for state estimation of nonsmooth systems~\cite{Duff:2011,Zhang:2013b,Koval:2013,Li:2015,Li:2015a}. The extent of the similarity is that both use stochasticity in addition to probability distributions over state to generate time series datasets (\emph{traces}) for each perturbation to dynamical system (\emph{particle}) parameters. These works expose the interaction between dynamics and rigid contact mechanics, as developed in the theory of linear complementarity systems~\cite{Shen:2005}. \section{Approach} The particle trace approach attempts to locate novel contact events, task failures, and other nonsmooth hybrid state transitions that can have a large impact on system state. \input{methods.tex} \section{Experiments} \input{experiments.tex} \section{Discussion} We have been able to locate seemingly hard-to-locate grazing bifurcations (i.e., ones that lie in small volumes in state space) for high dimensional systems with very few particle traces. 
Since each particle trace is completely independent, particle traces can be generated in an ``embarrassingly parallel'' manner. So not only is the particle trace approach versatile and simple to implement, it can be quite fast given sufficient computational resources. We believe the following questions now require much deeper investigation: \1 What scenarios can be constructed for which grazing bifurcations are computationally demanding to find (much like the ``bugtrap'' scenarios for motion planning)?; \2 What other dynamic scenarios can statistical ensembles of physically simulated robots reliably characterize (and where will such simulations fail to characterize behavior)?; \3 Since simulations are capable of generating huge quantities of data, how can such state space telemetry data be efficiently ``mined'' to locate clusters of similar high-level behavior? \bibliographystyle{plain} \subsection{Sampling approach} Each particle's parameterization incorporates the robot, model, environment, and other simulation features that result from a pseudo-random sampling on each of the uncertain elements of a robotic simulation. Each particle is then ``traced'' over a user-specified time by simulating the sensing, dynamics, and control policy (see Figure~\ref{fig:sample-parameter}). This paper focuses on multi-rigid body dynamics with rigid contact because these models capture the first-order effects without excessive parameter tuning and because the models' dynamics can be simulated orders of magnitude more rapidly than the next more representative set of models (i.e., deformable bodies). Sensory simulation is limited to IMU data in the present work. \subsection{Generating particles} A particle's parameterization determines the evolution of the simulated robotic system and serves as the only cause of diverging behavior between the individual particle traces. 
\emph{The perturbation to the ``true'' model is not observed by the robot's planning, control, and state estimation systems, which generally assume a known robot model (henceforth denoted the \emph{expected model}) when, e.g., calculating dynamic and kinematic information.} \begin{figure}[htpb] \vspace{0.1in} \hspace{-0.02in}\includegraphics[width=\linewidth]{initial-jitter.png} \caption{\label{fig:sample-parameter} Example of a \emph{particle trace}. When a new particle is generated: \1 Initial values of parameters are determined before the first simulation update. \2 Parameters that experience noise online (at each simulation update) are modified from their initial value by sampling a jitter value. } \end{figure} We sample a random perturbation to each particle parameter from an estimated distribution on its uncertainty (usually limiting samples to within three standard deviations, excluding the tails). Each model and estimation parameter of the particle is then offset from its expected model value using the sampled perturbation \emph{before} starting the simulation. While a particle is traced as the robot follows its control policy over the course of simulation, additional perturbations to sensor noise and control lag jitter\footnote{Control lag jitter is a small delay that is added/subtracted from the control lag and randomly selected on each control loop iteration.} are sampled \emph{on each control loop iteration}. Figure~\ref{fig:sample-parameter} illustrates this sampling process for a single particle parameter. We have used normally or uniformly distributed uncertainty on particle parameters (link dimension, link density, joint axes, control lag), and homoscedastic (fixed over time), normally distributed uncertainty for parameter noise (contact friction, control lag jitter, sensor data). 
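The sampling scheme above can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' implementation: the function names, the parameter names, and the toy model values are all assumptions; the only elements taken from the text are the one-time offset of each model parameter before simulation, the per-control-step jitter, and the rejection of samples beyond three standard deviations.

```python
import random

def sample_perturbation(rng, sigma, max_sigmas=3.0):
    """Zero-mean Gaussian perturbation, rejecting tail samples beyond
    max_sigmas standard deviations (as described in the text)."""
    while True:
        d = rng.gauss(0.0, sigma)
        if abs(d) <= max_sigmas * sigma:
            return d

def make_particle(rng, expected_model, sigmas):
    """Offset every expected-model parameter once, before simulation starts."""
    return {name: value + sample_perturbation(rng, sigmas[name])
            for name, value in expected_model.items()}

def jitter(rng, value, sigma):
    """Homoscedastic per-control-step noise (e.g. sensor data, lag jitter)."""
    return value + sample_perturbation(rng, sigma)

rng = random.Random(0)
# Hypothetical expected-model parameters and uncertainty scales:
expected = {"thigh_length": 0.20, "foot_radius": 0.03, "link_density": 1000.0}
sigmas = {"thigh_length": 0.005, "foot_radius": 0.001, "link_density": 25.0}
particle = make_particle(rng, expected, sigmas)
```

The robot's controller would continue to use `expected`, while the simulator integrates `particle`; `jitter` would be re-applied on every control loop iteration.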
The experiments described in \S\ref{section:experiments} used Gaussian and uniform distributions over wide ranges (to effect a safety factor), and we did not attempt to tune the distribution parameters. The number of distributions and parameters (see Figures~3~\&~\ref{fig:links-geometry}) would likely make such tuning infeasible in any case. Further work will be necessary to assess the ramifications of modeling parameters, state estimates, and sensory noise distributions that tend to follow heteroscedastic (varying over time), leptokurtic (fat-tailed), or skewed distributions. \subsection{Computation Time} Each particle can be integrated stably in the \software{Moby} simulator at approximately real-time speed: each second of time in simulation (virtual time) takes about a second to compute (wall time). Pseudorandom sampling decouples the particle traces, allowing any number of particle traces~$s$ to be produced in parallel in time $\mathcal{O}(\frac{s T}{c m})$, linear with respect to the number of particle traces ($s$), where $m$ is the real-time factor of the simulation (\mbox{$m > 1$} being faster than real-time), $T$ is the duration of the experiment in virtual time, and $c$ is the number of processor cores available on the machine. With enough cores ($c \geq s$), this algorithm can run in constant time. 
\begin{figure}[h] \begin{tcfigure}{Sampleable parameters for \software{Links} robot} \small \textbf{\emph{(Parameters determined at the start of a particle trace)}} \textbf{\small Model:} \footnotesize link density: \{$1\times$base, $4\times$hip, $4\times$thigh, $4\times$shin, $4\times$foot\} link length: \{$4\times$hip, $4\times$thigh, $4\times$shin\} link radius: \{$4\times$hip, $4\times$thigh, $4\times$shin, $4\times$foot\} joint axis (conical error): 12$\times$actuated joints \textbf{\small Environment:} \footnotesize surface friction, surface compliance, contact model, surface geometry \textbf{\small Initial state:} \footnotesize $x,y,z, \psi,\phi, \theta, q_1 \cdots q_{12}, \dot{x},\dot{y},\dot{z}, \dot{\psi},\dot{\phi}, \dot{\theta}, \dot{q}_1 \cdots \dot{q}_{12}$ \textbf{\small Other:} \footnotesize control lag \vspace{0.05in} \hrule \vspace{0.05in} {\small \emph{\textbf{(Parameters determined during particle trace execution)}} } \textbf{\small Encoder noise:} {\footnotesize $q_1 \cdots q_{12}, \dot{q}_1 \cdots \dot{q}_{12}, \ddot{q}_1 \cdots \ddot{q}_{12}$} \textbf{\small Force sensor noise:} {\footnotesize $u_1 \cdots u_{12}$} \textbf{\small IMU noise:} {\footnotesize $\ddot{x},\ddot{y},\ddot{z},\ddot{\psi},\ddot{\phi}, \ddot{\theta}$} \textbf{\small GPS noise:} {\footnotesize $x,y$ } \textbf{\small Magnetometer noise:} {\footnotesize $\psi,\phi, \theta$} \textbf{\small State Estimation noise:} {\footnotesize $z, \dot{x},\dot{y},\dot{z},\dot{\psi},\dot{\phi}, \dot{\theta} $} \textbf{\small Sensed actuator torque noise:} {\footnotesize $u_{\mathrm{des},1} \cdots u_{\mathrm{des},12}$} \textbf{\small Other:} {\footnotesize control lag jitter } \end{tcfigure} \addtocounter{figure}{1} \end{figure} \begin{figure}[h] \centering \vspace{0.1in} \includegraphics[width=.8\linewidth]{param.png} \caption{\label{fig:links-geometry} A depiction of the probabilistic geometric parameters of a legged robot: shin length, thigh length, and foot radius.} \end{figure} 
\subsection{Identifying divergent behaviors} \label{section:method:identifying} We hypothesize that a robot's behavior is likely to be predictable if no nonsmooth events occur or if nonsmooth events occur at similar times between particles. Similarly, we hypothesize that behavior is harder to predict if some particle traces experience novel nonsmooth events or nonsmooth events occurring in novel sequences. Accordingly, our approach searches for both ``grazing'' events (events likely to occur in only very particular conditions) and near-miss events. Such events are known in the hybrid dynamical systems literature as ``grazing bifurcations''~\cite{Budd:1996}. Our second hypothesis anticipates that outcomes will be challenging to predict when a robot operates around such regions of state space. As examples, a slightly longer leg than is modeled might cause the foot to scuff the floor unexpectedly, and a foot heavier than its model might be unable to clear the top of a step when climbing stairs. Given the immense computational resources that can be applied to produce the particle traces, we require a means of identifying divergent behavior in potentially massive amounts of generated data. We identify divergent behavior automatically by searching for \1 novel events; \2 a novel \emph{sequence} of events; and \3 a novel outcome to similar events (between particles). \emph{Event} is used to denote a mode switch, which can occur upon impacts and upon switching between sliding, sticking, and rolling contact. Novelty would normally be determined against a baseline, expected behavior. But often such expectations are hard to predict given a control policy or task description in a complex environment. Instead, novelty is identified when an event's time, location, or object pair differs from that experienced by other particle traces at a similar time. 
Divergent behavior can also be detected at a goal-oriented level (i.e., rather than detecting novel events) by, e.g., searching for failure to perform a specific task: falls during locomotion, or dropped objects in a pick-and-place task, etc. Normal behavior can also be determined through consensus, where an outlier would be indicative of divergent behavior. Our demonstrations in \S\ref{section:experiments} detect divergent behavior through task failure detection. Further study will focus on efficient detection of divergent behavior between particle traces. 
\section{Appendix A} \begin{figure*}[] \includegraphics[width=1.00\textwidth]{bleu_mixed_block_avg_exp} \hspace*{-5pt}\includegraphics[width=1.00\textwidth]{comet_mixed_block_avg_exp} \caption{Comparison of different training regimes for EN$\rightarrow$CS translation on newstest20 in terms of BLEU (top) and COMET (bottom). Background colors for the block-BT regime show which part of the training data was used for a given part of the training. Green means authentic parallel data, blue is CS$\rightarrow$EN backtranslation and red is EN$\rightarrow$CS forward translation.} \label{fig:mixed_avg_exp} \end{figure*} \section{Introduction} \label{sec:introduction} This work focuses on exploring two methods used in NMT to improve translation quality: backtranslation and Minimum Bayes Risk decoding with a neural-based evaluation metric as the utility function. The methods used and related work are presented in Section 2. In Section 3 we describe our experimental setting and results. \section{Methods} In this section, we describe the methods we used to build our system. \subsection{Block backtranslation} The translation quality of NMT depends heavily on the amount of parallel training data. It has been shown that the authentic bilingual data can be partially supplemented by synthetic parallel data, i.e., machine-translated monolingual text \cite{bojar-tamchyna-2011-improving,sennrich-etal-2016-improving,xie-etal-2018-noising,edunov-etal-2018-understanding}. Often, the synthetic and authentic parallel data are mixed in the training dataset, but previous research shows that simply mixing the two types of text does not yield optimal translation quality. We use block backtranslation (\textit{block-BT}) in a configuration similar to \citet{popel-etal-2020-cubbitt}. This method creates blocks of parallel and synthetic data and presents them to the neural network separately, switching between the two types during the training. 
Since last year's WMT submission using block-BT by \citet{gebauer-etal-2021-cuni} did not find any improvements, presumably due to an improperly chosen block size, we decided to verify the effectiveness of this method once again. \paragraph{Averaging type} Previous work on \textit{block-BT} shows the importance of averaging the checkpoints to combine information from different blocks of training data in order to obtain good performance. We compare checkpoint averaging with another method of combining older sets of the model's parameters with the current one -- \textit{exponential smoothing}. After each update $u$, the smoothed parameters $\bar{\Theta}_u$ are obtained by averaging the current parameters $\Theta_u$ (with smoothing factor $\alpha$) with the smoothed parameters after the previous update $\bar{\Theta}_{u-1}$: $$\bar{\Theta}_{u}=\alpha\Theta_{u}+(1-\alpha)\bar{\Theta}_{u-1} $$ Previous work by \citet{Popel2018MachineTU} contains experiments with exponential averaging, but only on the level of already saved checkpoints, not online during the training after each update as in our work. \paragraph{Minimum Bayes Risk decoding} NMT models predict a conditional probability distribution over translation hypotheses given a source sentence. To select the most probable translation under the model (mode of the model's distribution), an approximation of MAP (\textit{maximum-a-posteriori}) decoding is used, most commonly beam search \cite{beam_search}. However, beam search and MAP decoding in general have many shortcomings described in recent work \cite{stahlberg-byrne-2019-nmt,meister-etal-2020-beam} and other approaches have been proposed to generate a high-quality hypothesis from the model. One of them, MBR (Minimum Bayes Risk) decoding \cite{GOEL2000115,kumar-byrne-2004-minimum}, has been proposed as an alternative to MAP. MBR does not produce a translation with the highest probability, but a translation with the best value of a utility function. This utility function is usually an automatic machine translation evaluation metric. 
However, to optimize towards the best utility function value, it would be necessary to know the ideal selection of hypotheses. In the case of MT, that would mean a perfect, best possible translation, which of course is not known during the translation process. For this reason, an approximation of the ideal translation is used, based on the model's probability distribution \cite{eikema_mbr}. This can be implemented as generating a list of hypotheses (e.g. using sampling or beam search) and then computing the utility function of each hypothesis using all the other hypotheses as the ideal translation approximation (i.e. as references). This approximation of MBR decoding can be seen as consensus decoding -- the hypothesis most similar to all the others is chosen. Also, in this implementation, it is more appropriate to name the process reranking, rather than decoding, and we will do so from now on. Even though MBR is able to optimize towards many metrics and increase the scores, these gains did not translate into better human evaluation of the final translations when using traditional metrics based on surface similarities, like BLEU, as the utility function. Recent successes in the development of novel metrics for machine translation have renewed interest in this method \cite{comet_mbr,bleurt_mbr,muller-sennrich-2021-understanding}. \section{Experiments} In this section we present our experimental setup and results. \subsection{Tools} We tokenize the text into subwords using FactoredSegmenter\footnote{\url{https://github.com/microsoft/factored-segmenter}} and SentencePiece~\cite{kudo-richardson-2018-sentencepiece}. We use MarianNMT~\cite{junczys-dowmunt-etal-2018-marian-fast} to train the models. BLEU scores are computed using SacreBLEU~\cite{post-2018-call}; for COMET scores~\cite{rei-etal-2020-comet} we use the original implementation.\footnote{\url{https://github.com/Unbabel/COMET}} \subsection{Datasets} We train English-Czech NMT models for our experiments. 
We train our models on CzEng 2.0~\citep{kocmi-2020-announcing}. We use all three subsets of the CzEng corpus: the originally parallel part, which we call \textit{auth}, Czech monolingual data translated into English using MT (\textit{csmono}) and English monolingual data translated into Czech using MT (\textit{enmono}). We use \texttt{newstest2020}~\cite{barrault-etal-2020-findings} as our dev set and \texttt{newstest2021}~\cite{akhbardeh-etal-2021-findings} as our test set. For experiments concerning translation of named entities (NE), we used a test set originally designed for Czech NLG in the restaurant industry domain~\cite{dusek-jurcicek-2019-neural}.\footnote{\url{https://github.com/UFAL-DSG/cs_restaurant_dataset}} It contains sentences which include names of restaurants and addresses in Czech and their translations in English. We will call this test set the \texttt{restaurant} test set. \iffalse We also prepared a second smaller test set for named entities, aimed to test the copying behavior from Czech to English.\footnote{Link to the dataset} We created a template of four sentences in Czech and English, fit to be completed by a name of a city district. We used all city district names in a large Czech city. The Czech side contains different declensions of the district names, while in English, only the singular nominative form is used. \todo[]{show example} Thus, to obtain perfect accuracy, the model needs to convert the Czech word form to the singular nominative (if not in that form already) and then copy it into the English translation. We will denote this test set \texttt{districts}. \textbf{The results for the \textit{districts} test set will be completed in the camera-ready version.} \fi \subsection{Models} We train Transformer-base (which we denote \textit{base}) and Transformer-big (\textit{big 6-6}) models with standard parameters~\cite{vaswani-2017-attention} as pre-configured in MarianNMT. 
For the largest model (\textit{big 12-6}), we use Transformer-big with 12 encoder layers and depth scaled initialization \cite{junczys-dowmunt-2019-microsoft,zhang-etal-2019-improving}.\footnote{Training scripts available at: \url{https://github.com/cepin19/wmt22_general}} We also used a learning rate of $1\mathrm{e}{-4}$ for the 12-layer model instead of the $3\mathrm{e}{-4}$ used for the other models. We trained all models for at least 1.4M updates. We computed validation BLEU scores every 5k updates and we stopped if the score did not improve for 30 consecutive validations. \todo[]{HW, GPUs, training times, base vs. big vs deeper} We trained the models on a heterogeneous grid server, which includes combinations of Quadro RTX 5000, GeForce GTX 1080 Ti, RTX A4000 and GeForce RTX 3090 cards. Typical training time of the base models for 1.4M updates on four GTX 1080 Ti cards was 7 days. \subsection{Block-BT settings} For all our experiments, we save a checkpoint every 5k updates and we vary only the size of the blocks during which the training data stay of the same type (20k, 40k, 80k and 160k updates). The length of the blocks is the same for all block types. We cycle through the block types in the following order: \textit{auth}$\rightarrow$\textit{csmono}$\rightarrow$\textit{auth}$\rightarrow$\textit{enmono}. For checkpoint averaging, we average 8 consecutive checkpoints. For exponential smoothing, we use the default Marian configuration ($\alpha=0.001$, with some slight modifications based on the number of updates since the start of training and the batch size). We also look at the effects of using only backtranslation, or both back- and forward-translation. 
\begin{figure*}[ht] \centering \includegraphics[width=0.95650\textwidth]{bleu_mixed_block_avg_exp_no_comb} \hspace*{-5pt}\includegraphics[width=0.95650\textwidth]{comet_mixed_block_avg_exp_no_comb} \caption{Comparison of different training regimes for EN$\rightarrow$CS translation on newstest20 in terms of BLEU (top) and COMET (bottom). Background colors for block-BT regime show which part of training data was used for a given part of the training. Green means authentic parallel data, blue is CS$\rightarrow$EN backtranslation and red is EN$\rightarrow$CS forward translation.} \label{fig:mixed_avg_exp_no_comb} \end{figure*} \subsection{Block-BT results} \begin{table*}[!htp]\centering \small \begin{tabular}{lrrrrrrr}\toprule \textbf{Model size} &\textbf{Block size} &\textbf{Avg type} &\textbf{update (k)} &\textbf{ BLEU} &\textbf{update (k)} &\textbf{ COMET} \\\midrule \multirow{18}{*}{base} &mixed &exp &1340 &34.7 &1760 &0.7337 \\ &mixed &exp+avg8 &1365 &34.7 &965 &0.7326 \\ \cmidrule{2-7} &\multirow{4}{*}{20k} &- &1360 &34.6 &640 &0.7324 \\ & &exp &410 &34.9 &725 &0.7406 \\ & &avg8 &660 &34.8 &1385 &0.7349 \\ & &exp+avg8 &420 &34.9 &735 &0.7399 \\ \cmidrule{2-7} &\multirow{4}{*}{40k} &- &610 &34.8 &1415 &0.7363 \\ & &exp &1130 &35.3 &1290 &\textbf{0.7474} \\ & &avg8 &780 &35.5 &1420 &0.7462 \\ & &exp+avg8 &1150 &35.5 &1075 &0.7466 \\ \cmidrule{2-7} &\multirow{4}{*}{80k} &- &1250 &34.9 &960 &0.7393 \\ & &exp &1210 &35.2 &1450 &0.7447 \\ & &avg8 &985 &35.5 &665 &\textbf{0.7474} \\ & &exp+avg8 &585 &35.3 &1150 &0.7455 \\ \cmidrule{2-7} &\multirow{4}{*}{160k} &- &1130 &34.9 &1210 &0.7387 \\ & &exp &1125 &35.3 &1285 &0.7453 \\ & &avg8 &1135 &35.5 &1305 &0.7467 \\ & &exp+avg8 &1145 &35.3 &1310 &\textbf{0.7473} \\ \midrule \multirow{2}{*}{big 6-6} & \multirow{2}{*}{40k} &exp &445 &35.4 &1125 &0.7546 \\ & &exp+avg8 &300 &35.4 &1310 &0.7567 \\ \midrule \multirow{1}{*}{big 12-6} &\multirow{1}{*}{40k} &exp & 130 &36.1 & 1210 & 0.7848 \\ \bottomrule \end{tabular} \caption{Best 
COMET and BLEU scores on EN-CS newstest2020 for all the combinations of model size, training regime and block size. We report the best score and the number of updates after which this score was reached.} \label{tab:big_results} \end{table*} \paragraph{Training regime and averaging method} First, we compare different training regimes: \textit{mixed-BT}, where all the training datasets are concatenated and shuffled together, and \textit{block-BT} with 40k-update-long blocks and two possible averaging types -- exponential smoothing (\textit{exp}) or checkpoint averaging (\textit{avg8}). Figure \ref{fig:mixed_avg_exp_no_comb} shows the behavior of BLEU and COMET scores on \texttt{newstest2020} during the training for these configurations in the interval between 480k and 1280k updates. The behaviour is not stable earlier than 480k steps and 1280k is the nearest lower multiple of the largest block size. The \textit{40k block} curve represents the model without any averaging, \textit{40k block avg8} is the model trained without exponential smoothing but each checkpoint was averaged with 7 previous checkpoints for the evaluation, and the \textit{40k block exp} model was trained with continuous exponential smoothing. Finally, we also experimented with a combination of both -- exponential smoothing and averaging after the training. The combination does not improve over the separate averaging techniques and we omitted the curve from the figure for clarity. In both metrics, \textit{block-BT} with either form of averaging outperforms \textit{mixed-BT} training. Without any averaging, the advantage of \textit{block-BT} over \textit{mixed-BT} is smaller. The type of averaging does not seem to play a large role -- checkpoint averaging, exponential smoothing and their combination yield very similar best scores. The best scores on \texttt{newstest2020} for each combination of parameters are presented in Table~\ref{tab:big_results}. 
The curves for checkpoint averaging and exponential smoothing behave similarly, with exponential averaging reacting faster to a change of block. Additionally, the \textit{avg8} models have higher peaks in \textit{enmono} (red) blocks, especially for BLEU scores. The shape of the curves could be tuned by changing the frequency of saving checkpoints and the number of checkpoints to be averaged for the checkpoint averaging method, or by changing the $\alpha$ factor for exponential smoothing. There are differences in behaviour between BLEU and COMET score curves. Most notably, COMET is less sensitive to the transition from \textit{auth} (green) to \textit{csmono} (blue) blocks. We hypothesize this is caused by the lower sensitivity of the COMET score to wrong translations of NEs and rare words \cite{comet_mbr}. We present further experiments in this direction later. \todo[]{There are also peaks in forward translation, especially for BLEU and avg8 they seem higher than in auth regions, investigate in NE part.} \begin{figure*}[ht!] 
\centering \includegraphics[width=0.9565\textwidth]{bleu_block_size_exp} \hspace*{-5pt}\includegraphics[width=0.9565\textwidth]{comet_block_size_exp} \caption{Comparison of how the block size affects the behavior of BLEU (top) and COMET (bottom) scores during the training for block-BT with exponential smoothing of the parameters, without checkpoint averaging, on EN-CS \texttt{newstest2020}.} \label{fig:comet_block_size_exp} \end{figure*} \begin{figure*}[h] \centering \includegraphics[width=0.9565\textwidth]{bleu_block_size_noexp_avg} \hspace*{-5pt}\includegraphics[width=0.9565\textwidth]{comet_block_size_noexp_avg} \caption{Comparison of how the block size affects the behavior of BLEU (top) and COMET (bottom) scores during the training for block-BT with checkpoint averaging and no exponential smoothing of the parameters, on EN-CS \texttt{newstest2020}.} \label{fig:comet_block_size_noexp_avg} \end{figure*} \paragraph{Block size} We assess the influence of the block size for both averaging methods. We compare block sizes of 20k, 40k, 80k and 160k updates. The behaviour of COMET and BLEU scores is presented in Figures \ref{fig:comet_block_size_exp} and \ref{fig:comet_block_size_noexp_avg} for exponential smoothing and checkpoint averaging, respectively. The best scores are again shown in Table \ref{tab:big_results}. We see that the 20k block size yields noticeably worse results than the other sizes when checkpoint averaging is used. The negative effect of the small block size is less pronounced with exponential smoothing, yet still present. The other block sizes perform similarly in both metrics. This result is expected, since for 8-checkpoint averaging with a 5k-update checkpointing interval, a block size of at least 40k updates is necessary to fit all 8 checkpoints and thus explore all possible ratios of \textit{auth} and \textit{mono} data.
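The arithmetic behind this lower bound can be made explicit; a minimal sketch, using the 8-checkpoint averaging and 5k checkpointing interval stated above:

```python
def min_block_size(n_checkpoints, checkpoint_interval):
    """Smallest block length (in updates) such that all n averaged
    checkpoints can come from a single data block."""
    return n_checkpoints * checkpoint_interval

# 8-checkpoint averaging with checkpoints saved every 5k updates:
needed = min_block_size(8, 5_000)   # 40k updates, matching the threshold seen above
# a 20k block contains only 4 of the 8 checkpoints to be averaged:
fit_in_20k = 20_000 // 5_000
```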
\begin{figure*}[h] \centering \includegraphics[width=0.9565\textwidth]{bleu_czen_mixed_block_exp_avg.pdf} \hspace*{-5pt}\includegraphics[width=0.9565\textwidth]{comet_czen_mixed_block_exp_avg.pdf} \caption{Comparison of different training regimes for CS$\rightarrow$EN translation on \texttt{newstest2020} in terms of BLEU (top) and COMET (bottom). Background colors for the block-BT regime show which part of the training data was used in a given part of the training. Green means authentic parallel data, blue is CS$\rightarrow$EN forward translation and red is EN$\rightarrow$CS backtranslation.} \label{fig:mixed_avg_exp_czen} \end{figure*} \begin{table}[p] \scriptsize { \begin{tabular}{l@{~~~}r@{~~~}r@{~~~}r@{~~~}r@{~~~}r@{~~~}r}\toprule & & &\multicolumn{2}{c}{\textbf{best BLEU}} &\multicolumn{2}{c}{\textbf{best COMET}} \\ \textbf{Model} &\textbf{Block} &\textbf{Avg type} &\textbf{update (k)} &\textbf{BLEU} &\textbf{update (k)} &\textbf{COMET} \\ \midrule \multirow{6}{*}{base} &\multirow{2}{*}{mixed} &exp &1405 &25.2 &1220 &0.4149 \\ & &exp+avg8 &1430 &25.1 &1220 &0.4114 \\ &\multirow{4}{*}{40k} &- &580 &25.3 &1040 &0.4086 \\ & &exp &755 &25.3 &570 &0.4183 \\ & &avg8 &765 &25.4 &1060 &0.4175 \\ & &exp+avg8 &1080 &25.2 &1230 &0.4186 \\ \bottomrule \end{tabular}} \caption{COMET and BLEU scores for the Czech to English direction. The best checkpoints were chosen based on their performance on \texttt{newstest2020}.}\label{tab:csen_results} \end{table} \paragraph{Reverse direction} For the reverse direction, Czech to English, we performed a less extensive evaluation. We only compare \textit{mixed} and \textit{block-BT} with 40k blocks and either exponential smoothing or checkpoint averaging. The behavior of the metrics is shown in Figure \ref{fig:mixed_avg_exp_czen} and the final best scores on \texttt{newstest2020} are presented in Table \ref{tab:csen_results}. \textit{Block-BT} still outperforms \textit{mixed} training, but by a smaller margin than in the other direction. We attribute this difference to the fact that the Czech side of the CzEng dataset is more often translationese (originally English text translated into Czech) and thus differs more from the \textit{csmono} data, which leaves more room for gains in the EN$\rightarrow$CS direction.
\paragraph{Backtranslation direction} \begin{table}[!htp]\centering \scriptsize \begin{tabular}{lllrrrr}\toprule & & &\multicolumn{2}{c}{\textbf{BLEU}} &\multicolumn{2}{c}{\textbf{COMET}} \\ \bf dir & \bf regime & \bf datasets & \bf Dev & \bf Test & \bf Dev & \bf Test \\\midrule \multirow{6}{*}{encs} &\multirow{3}{*}{mixed} &all &34.7 &20.9 &0.7337 &0.6206 \\ & &auth+cs &31.5 &19.5 &0.6904 &0.5779 \\ & &auth+en &34.8 &20.6 &0.7258 &0.6097 \\ &\multirow{3}{*}{block} &all &35.3 &\textbf{21.1} &0.7474 &\textbf{0.6245} \\ & &auth+cs &33.9 &19.9 &0.7232 &0.5908 \\ & &auth+en &\textbf{35.4} &20.7 &\textbf{0.7497} &0.6147 \\ \midrule \multirow{3}{*}{csen} &\multirow{1}{*}{mixed} &all &25.2 &- &0.4149 &- \\ &\multirow{2}{*}{block} &all &25.3 &- &0.4183 &- \\ & &auth+en &24.3 &- &0.3682 &- \\ \bottomrule \end{tabular} \caption{Results for various dataset combinations on the dev (\texttt{newstest2020}) and test (\texttt{newstest2021}) sets. COMET scores are computed by the wmt20-comet-da model.}\label{tab:bt_direction} \end{table} We also evaluate the influence of using only backtranslations (i.e. \textit{csmono} for en$\rightarrow$cs) as additional synthetic data (monolingual data in the target language automatically translated into the source language), or of also adding forward translations (automatic translations from the source language into the target language; \textit{enmono}), and we present the results in Table \ref{tab:bt_direction}. Interestingly, the results show large gains in both BLEU and COMET when using forward translation. We hypothesize this is caused by the good quality of the model used to create the translations for \textit{enmono}. In such a case, the translation model plays the role of the teacher in teacher--student training and might lead to good-quality results.
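For clarity, the assembly of the three en$\rightarrow$cs training sets compared above can be sketched as follows. The \texttt{translate} helper is a hypothetical stand-in for the model producing the synthetic sides; the real pipeline of course runs a full MT system.

```python
def make_training_sets(auth_pairs, en_mono, cs_mono, translate):
    """Assemble the en->cs training sets used in the comparison.
    `translate(sentences, direction)` is a hypothetical stand-in
    for the helper MT model.
    - csmono (backtranslation): Czech mono -> synthetic English sources
    - enmono (forward translation): English mono -> synthetic Czech targets
    """
    csmono = list(zip(translate(cs_mono, "cs-en"), cs_mono))  # (synthetic en, real cs)
    enmono = list(zip(en_mono, translate(en_mono, "en-cs")))  # (real en, synthetic cs)
    return {"auth": list(auth_pairs), "csmono": csmono, "enmono": enmono}

# dummy stand-in translator for illustration only
dummy = lambda sents, direction: ["<%s:%s>" % (direction, s) for s in sents]
sets = make_training_sets([("src", "tgt")], ["en sentence"], ["cs sentence"], dummy)
```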
\paragraph{Named entities test sets} \begin{figure*}[h] \centering \includegraphics[width=0.9565\textwidth]{bleu_czen_block40k_exp_ne_acc.pdf} \includegraphics[width=0.9565\textwidth]{comet_czen_block40k_exp_ne_acc.pdf} \caption{Behaviour of BLEU (top) and COMET (bottom) on \texttt{newstest2020} and NE translation accuracy on the \texttt{restaurant} test set for Czech to English translation with block-BT using exponential smoothing.} \label{fig:ne_acc} \end{figure*} \begin{table}[!htp]\centering \small \begin{tabular}{lrrr}\toprule \textbf{Update (k)} &\textbf{COMET} &\textbf{BLEU} &\textbf{Acc}\\\midrule 570 &\textbf{0.4183} &24.9 &0.607\\ 755 &0.4038 &\textbf{25.3} &0.629 \\ 590 &0.4099 &24.9 &\textbf{0.636} \\ \bottomrule \end{tabular} \caption{Best checkpoints of the Czech to English model trained with 40k blocks and exponential smoothing, in terms of COMET (first row) and BLEU (second row) on \texttt{newstest2020}, and NE translation accuracy on the \texttt{restaurant} test set (third row).} \label{tab:ne_acc} \end{table} From anecdotal evidence, we have seen that checkpoints with a large influence of backtranslated data perform worse on named entity (NE) translation, and COMET and BLEU scores might not reflect this drop in accuracy. We evaluate the models in terms of accuracy of NE translation on the \texttt{restaurant} test set. We selected the Czech to English direction, since the evaluation is easier given the lower morphological richness of the target language. Figure \ref{fig:ne_acc} shows a comparison of the behavior of NE translation accuracy on the \texttt{restaurant} test set and of the COMET and BLEU scores on \texttt{newstest2020} for block-BT with exponential smoothing. NE accuracy peaks towards the end of \textit{auth} regions (green). Both COMET and BLEU scores also peak during the \textit{auth} part of the training, but, especially for COMET, the peak occurs earlier after the switch to \textit{auth}.
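The exact scoring script for the \texttt{restaurant} test set is not reproduced here; one simple proxy for NE translation accuracy is a case-insensitive containment check of the reference entity in the hypothesis, sketched below (a simplifying assumption, not necessarily the metric used):

```python
def ne_accuracy(hypotheses, reference_entities):
    """Fraction of test sentences whose reference named entity appears
    verbatim (case-insensitively) in the system hypothesis.
    Assumes one annotated entity per test sentence."""
    hits = sum(1 for hyp, ent in zip(hypotheses, reference_entities)
               if ent.lower() in hyp.lower())
    return hits / len(hypotheses)

# toy example with hypothetical restaurant names (one hit out of two)
acc = ne_accuracy(["Table at U Fleku booked.", "Reserved at Golden Tiger."],
                  ["U Fleku", "U Zlateho Tygra"])
```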
Overall, the BLEU curve correlates better with the NE accuracy curve. We hypothesize this might be related to the fact that COMET was found to be insensitive to NE errors by \citet{amrhein2022identifying}. However, it seems that the shift between the accuracy and the other two metrics is not too large in our setting, and choosing the best-performing model in terms of either COMET or BLEU should not hurt NE translation by a large amount. We investigate this further in Table \ref{tab:ne_acc} -- we chose the checkpoint with the best COMET (first row) and the best BLEU (second row) on \texttt{newstest2020} and the checkpoint with the best NE translation accuracy on the \texttt{restaurant} test set (third row), and compute all three metrics for these three checkpoints. The best COMET checkpoint obtains an accuracy of 60.7\% on the \texttt{restaurant} test set and the best BLEU checkpoint reaches an accuracy of 62.9\%, while the best accuracy reached by any checkpoint is 63.6\%. \subsection{MBR reranking} \begin{table}[!htp]\centering \scriptsize \begin{tabular}{lrrrrrr}\toprule \textbf{i} &\textbf{auth} &\textbf{cs} &\textbf{en} &\textbf{AVG comet20} &\textbf{MBR comet20} &\textbf{comet21} \\\midrule 1 &- &- &- &0.7322 &0.7888 &0.0885 \\ 2 &9 &2 &1 &\textbf{0.7430} &0.8082 &0.0946 \\ 3 &4 &4 &4 &0.7408 &0.8182 &0.0972 \\ 4 &12 &0 &0 &0.7425 &0.8010 &0.0929 \\ 5 &0 &12 &0 &0.7303 &0.8104 &0.0949 \\ 6 &0 &0 &12 &0.7372 &0.7960 &0.0918 \\ 7 &1 &7 &4 &0.7370 &\textbf{0.8232} &\textbf{0.0981} \\ 8 &0 &7 &5 &0.7361 &\textbf{0.8232} &\textbf{0.0980} \\ 9 &2 &7 &3 &0.7377 &\textbf{0.8231} &\textbf{0.0981} \\ \bottomrule \end{tabular} \caption{Results of MBR reranking on \texttt{newstest2020} for different selections of the hypothesis n-best lists produced by checkpoints from different training blocks. In total, 12 n-best lists produced by transformer-base models are concatenated, and the first three columns show how many n-best lists are used from each block type (the checkpoints in each block type are sorted by COMET computed by the wmt20-da model, so the lists come from the best-performing checkpoints). \textit{AVG comet20} shows the average wmt20-da COMET score of the first hypotheses of the n-best lists used, \textit{MBR comet20} shows the wmt20-da score of the final sentences after MBR reranking, and \textit{comet21} shows the scores of the same sentences computed by the wmt21-da model.}\label{tab:mbr} \end{table} We used MBR reranking to rerank the concatenation of n-best lists produced by various checkpoints. In total, we used 6-best lists from 12 checkpoints, i.e. 72 candidate hypotheses for each sentence. We divided the checkpoints based on which block of the training data they were saved in, and sorted them by COMET score on \texttt{newstest2020}. Using different strategies, we selected the best-performing checkpoints to provide the n-best lists. We present the results in Table \ref{tab:mbr}. The first row shows results for the mixed-BT regime, i.e.
we concatenated the n-best lists produced by the 12 best-performing mixed-BT checkpoints. In the second row, block-BT checkpoints were used to create the n-best lists, selected based only on their COMET scores, without regard to the type of block they were saved in. In the third row, we combine n-best lists from the 4 best checkpoints from each type of block. In rows 4-6, we use the best checkpoints from each type of block separately. In the final three rows, we show the optimal selections, which yielded the best scores. The results suggest that larger diversity across the block types of the checkpoints improves MBR results: the combination of n-best lists produced by checkpoints from diverse block types provides better hypotheses for MBR, even though the average COMET score of these checkpoints is lower than for the less diverse selection (see rows 2 and 3). \subsection{Submission} \begin{table}[h]\centering \footnotesize \begin{tabular}{l@{~~}r@{~~}r@{~~}r@{~~}r@{~~}r}\toprule \textbf{auth} &\textbf{cs} &\textbf{en} &\textbf{AVG comet20} &\textbf{MBR comet20} &\textbf{comet21} \\\midrule 9 &2 &8 &0.7802 &0.8566 &0.1114 \\ \bottomrule \end{tabular} \caption{Our final submission for the EN-CS general translation task, based on outputs of the transformer-big 12-6 model.
Meaning of the columns is identical to Table \ref{tab:mbr}.}\label{tab:final} \end{table} \begin{table}[h]\centering \footnotesize \begin{tabular}{l@{~~~}r@{~~~}r@{~~~}r}\toprule System &\llap{COMET-B} &COMET-C &ChrF-all \\\midrule Online-W &97.8 &79.3 &70.4 \\ Online-B &97.5 &76.6 &71.3 \\ CUNI-Bergamot * &\textbf{96.0} &\textbf{79.0} &65.1 \\ JDExploreAcademy * &95.3 &77.8 &\textbf{67.2} \\ Lan-Bridge &94.7 &73.8 &70.4 \\ Online-A &92.2 &71.1 &67.5 \\ CUNI-DocTransformer * &91.7 &72.2 &66.0 \\ CUNI-Transformer * &86.6 &68.6 &64.2 \\ Online-Y &83.7 &62.3 &64.5 \\ Online-G &82.3 &61.5 &64.6 \\ \bottomrule \end{tabular} \caption{Results of automatic metrics on the WMT22 General Task test set. Constrained submissions are marked by an asterisk; the best scores among constrained submissions are in bold. COMET-B and COMET-C are COMET scores for the two different references; ChrF is computed using both references together.}\label{tab:test_set} \end{table} Our primary submission is based on the \textit{big 12-6} model and MBR reranking. We explored all possible combinations of 18 checkpoints from different blocks, as described in the previous section. The results of the best combination are shown in Table \ref{tab:final}. We present the results of the official evaluation of our task in Table \ref{tab:test_set}. There were 5 submitted systems (4 constrained) and 5 online services. Our submission ranks first in COMET score among the constrained systems and third in ChrF. \section{Conclusion} We describe our submission to WMT22 and the experiments that led to the development of our system. We confirm the effectiveness of block-BT on the recent COMET metric. We demonstrate the behavior of the translation quality over the course of the training and discuss the effects of various settings. We also show that MBR reranking can benefit from the more diverse checkpoints created by block-BT training.
\section{Acknowledgements} \small This work was supported by GAČR EXPRO grant NEUREM3 (19-26934X), the Bergamot project (European Union's Horizon 2020 research and innovation programme under grant agreement No 825303) and by the Ministry of Education, Youth and Sports of the Czech Republic, Project No. LM2018101 LINDAT/CLARIAH-CZ. \FloatBarrier
\section*{Introduction} Let $A$ be a (generally, not necessarily associative) algebra, and $A = \bigoplus_{g \in \Gamma} A_g$ its grading over a set $(\Gamma, *)$, i.e. $*: \Gamma \times \Gamma \to \Gamma$ is a partial binary operation defined for each pair $(g,h)$ such that $A_g A_h \ne 0$, in which case $A_g A_h \subseteq A_{g * h}$. How are the identities satisfied by the algebra $A$ related to identities satisfied by the grading set $\Gamma$? Since the operation $*$ on $\Gamma$ is partial, in the latter case it makes sense to speak about the (im)possibility of completing $*$ in such a way that it satisfies one or another identity, or, in stricter terms, about the (im)possibility of embedding $(\Gamma, *)$ into an appropriate magma\footnote{ ``Magma'' means a set with an (everywhere defined) binary operation on it, without any additional conditions. In the older literature, the term ``groupoid'' was used instead, but that term has since been taken over by category theorists.} $(G, \cdot)$ such that $g * h = g \cdot h$ whenever $A_g A_h \ne 0$. It is immediate that commutativity or anticommutativity of $A$ implies that $\Gamma$ can be embedded into a commutative magma. Elementary manipulations involving the homogeneous components $A_g$ of graded Lie and associative algebras may suggest that both the Jacobi identity and associativity of the algebra $A$ are strongly connected with the associativity of the grading set $\Gamma$. In the Lie case, it was believed for a while (and even claimed in the influential paper \cite{zass} as Theorem 1(a)) that each grading of a Lie algebra is a \emph{semigroup grading}, i.e. that the grading set $(\Gamma, *)$ can be embedded into a semigroup.
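Semigroup embeddability has an easily checkable necessary condition: whenever both bracketings of a triple are built from defined products, they must agree. The following small checker (purely illustrative, not part of the original argument) returns an offending triple, if any:

```python
def associativity_obstruction(op):
    """op maps pairs (g, h) -> g*h for the defined products only.
    Return a triple (g, h, k) such that both bracketings (g*h)*k and
    g*(h*k) are defined via op but disagree, or None if no such triple
    exists.  An offending triple rules out any embedding of the partial
    operation into a semigroup."""
    elements = {x for pair in op for x in pair} | set(op.values())
    for (g, h), gh in op.items():
        for k in elements:
            if (gh, k) in op and (h, k) in op and (g, op[(h, k)]) in op:
                if op[(gh, k)] != op[(g, op[(h, k)])]:
                    return (g, h, k)
    return None

# a fragment of (Z, +) restricted to {0, 1}: no obstruction, as expected
ok = associativity_obstruction({(0, 0): 0, (0, 1): 1, (1, 0): 1})
```

The condition is only necessary: a partial operation passing this check may still fail to embed into a semigroup for subtler reasons.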
This is indeed so for all gradings of Lie and associative algebras appearing naturally (root space decompositions with respect to a Cartan subalgebra, gradings arising from various group or Hopf algebra actions on the algebra, $\mathbb Z$-gradings providing a connection between Lie and Jordan algebras, semigroup algebras and their twisted variants, the grading by Pauli matrices motivated by physics, etc.). However, in \cite{elduque-1} and \cite{elduque-2} examples of non-semigroup gradings of Lie algebras were given. The aim of this note is to provide an example of a non-semigroup grading of an associative algebra. This is done in \S \ref{sec-nonsemigr}, following the approach of \cite[\S 3]{delta}, where it was shown how non-semigroup gradings of Lie algebras can be constructed using $\delta$-derivations. \S \ref{sec-quest} contains some further questions. \section{An example of a non-semigroup grading}\label{sec-nonsemigr} In the associative case, instead of $\delta$-derivations we may consider the slightly more general notion of $(\delta,\gamma)$-derivations, i.e. linear maps $D: A \to A$ on an algebra $A$ such that $$ D(xy) = \delta D(x)y + \gamma x D(y) $$ for any $x,y \in A$ and some fixed elements $\delta, \gamma$ of the ground field. In the Lie case, due to anticommutativity, any such condition implies that either $\delta = \gamma$, i.e. $D$ is a $\delta$-derivation, or that $D$ is an element of the ``generalized centroid'', i.e. $$ D(xy) = (\delta + \gamma)D(x)y = (\delta + \gamma)xD(y) $$ for any $x,y\in A$, the latter condition being too restrictive to be interesting. (The same dichotomy holds for commutative algebras.) \begin{lemma}\label{lemma-a} Let $A$ be a finite-dimensional algebra over an algebraically closed field $K$, and let $A = \bigoplus_{\lambda \in K} A_\lambda$ be the root space decomposition with respect to a $(\delta,\gamma)$-derivation of $A$. Then $A_\lambda A_\mu \subseteq A_{\delta\lambda + \gamma\mu}$ for any $\lambda, \mu \in K$.
\end{lemma} Note that the algebra $A$ here and below is not assumed to be associative, or Lie, or to satisfy any other distinguished identity. \begin{proof} It is trivial to check that if $x$ and $y$ are eigenvectors of a $(\delta,\gamma)$-derivation of $A$, corresponding to eigenvalues $\lambda$ and $\mu$ respectively, then the product $xy$ is an eigenvector corresponding to $\delta\lambda + \gamma\mu$ (or zero, if $\delta\lambda + \gamma\mu$ is not an eigenvalue). Then proceed by induction on the sum of multiplicities of the respective eigenvalues, in exactly the same way as in, for example, \cite[Chapter III, \S 2]{jacobson}. \end{proof} The following is a slightly modified ``nonassociative'' analogue of the Lie-algebraic statement \cite[Proposition 3.1]{delta}. \begin{proposition} Let $A$ be a finite-dimensional algebra over an algebraically closed field, and $D$ a $(\delta,\gamma)$-derivation of $A$. Suppose that there are roots $\lambda, \mu, \eta, \theta, \xi$ (not necessarily distinct) in the root space decomposition of $A$ with respect to $D$ such that \begin{gather} 0 \ne A_\lambda A_\eta \subseteq A_\theta, \quad A_\theta A_\mu \ne 0 , \label{eq-cond} \\ 0 \ne A_\eta A_\mu \subseteq A_\xi, \quad A_\lambda A_\xi \ne 0 , \label{eq-cond1} \end{gather} and $(\delta^2 - \delta)\lambda \ne (\gamma^2 - \gamma)\mu$. Then the said root space decomposition is a non-semigroup grading of $A$. \end{proposition} Note that the conditions (\ref{eq-cond}) and (\ref{eq-cond1}) are somewhat weaker than $(A_\lambda A_\eta) A_\mu \ne 0$ and $A_\lambda (A_\eta A_\mu) \ne 0$, respectively. \begin{proof} The conditions (\ref{eq-cond}) and (\ref{eq-cond1}) ensure that both expressions $(\lambda * \eta) * \mu$ and $\lambda * (\eta * \mu)$ are defined. If the root space decomposition of $A$ with respect to $D$ is a semigroup grading, then these two expressions are equal: $(\lambda * \eta) * \mu = \lambda * (\eta * \mu)$.
By Lemma \ref{lemma-a}, this equality is equivalent to $(\delta^2 - \delta)\lambda = (\gamma^2 - \gamma)\mu$, a contradiction. \end{proof} \begin{corollary} The conclusion of the Proposition holds in each of the following cases: \begin{enumerate} \item $\delta = \gamma \ne 0,1$, and $\lambda \ne \mu$; \item $\delta \ne \gamma$, $\delta + \gamma \ne 1$, and $\lambda = \mu \ne 0$. \end{enumerate} \end{corollary} \begin{proof} Obvious. \end{proof} Now we will provide an example of a family of associative algebras having $\delta$-derivations as in part (i) of the Corollary, and hence admitting a non-semigroup grading. Let $V$ be a vector space over a field $K$, and let $f_L,f_R,g_L,g_R: V \to V$ be four linear maps. Consider the vector space direct sum \begin{equation*} Ke \oplus Ka \oplus V \oplus V^\prime , \end{equation*} where $Ke$ and $Ka$ are one-dimensional vector spaces spanned by elements $e$ and $a$ respectively, and $V^\prime$ is a second copy of $V$, identified with $V$ via a nondegenerate linear map $v \mapsto v^\prime$. Define the multiplication on this direct sum as follows: \begin{gather*} e^2 = e , \quad a^2 = 0 , \quad av = f_L(v)^\prime , \quad va = f_R(v)^\prime , \quad av^\prime = g_L(v) , \quad v^\prime a = g_R(v) , \end{gather*} where $v \in V$, and the rest of the products between basis elements are zero. The associativity of the algebra so defined, let us denote it by $A(f_L,f_R,g_L,g_R)$, is equivalent to the following conditions: \begin{align*} f_L \circ g_L &= g_L \circ f_L = 0 \\ f_R \circ g_R &= g_R \circ f_R = 0 \\ g_R \circ f_L &= g_L \circ f_R \\ f_R \circ g_L &= f_L \circ g_R . \end{align*} \begin{lemma}\label{lemma-der} Suppose that each of the maps $f_L,f_R,g_L,g_R$ is nonzero, and $(\delta,\gamma) \ne (0,0)$.
Then each $(\delta,\gamma)$-derivation $D$ of the algebra $A(f_L,f_R,g_L,g_R)$ is of the following form: \begin{align*} D(e)\phantom{\prime} &= \begin{cases} 0 & \text{ if } \delta + \gamma \ne 1 \\ \beta e & \text{ if } \delta + \gamma = 1 \end{cases} \\ D(a)\phantom{\prime} &= \alpha a + v_a + w_a^\prime \\ D(v)\phantom{\prime} &= \varphi(v) + \psi(v)^\prime , \quad v \in V \\ D(v^\prime) &= \widetilde{\varphi}(v) + \widetilde{\psi}(v)^\prime \end{align*} where $\alpha, \beta \in K$, $v_a, w_a \in V$, $\varphi,\widetilde{\varphi},\psi,\widetilde{\psi}: V \to V$ are linear maps, and the following conditions are satisfied: \begin{alignat*}{2} &(\delta f_R + \gamma f_L)(v_a) &\>= 0 \\ &(\delta g_R + \gamma g_L)(w_a) &\>= 0 \end{alignat*} and \begin{align*} \widetilde\varphi \circ f_L &= \gamma g_L \circ \psi \\ \widetilde\psi \circ f_L &= \delta\alpha f_L + \gamma f_L \circ \varphi \\ \widetilde\varphi \circ f_R &= \delta g_R \circ \psi \\ \widetilde\psi \circ f_R &= \gamma\alpha f_R + \delta f_R \circ \varphi \\ \varphi \circ g_L &= \delta\alpha g_L + \gamma g_L \circ \widetilde\psi \\ \psi \circ g_L &= \gamma f_L \circ \widetilde\varphi \\ \varphi \circ g_R &= \gamma\alpha g_R + \delta g_R \circ \widetilde\psi \\ \psi \circ g_R &= \delta f_R \circ \widetilde\varphi . \end{align*} \end{lemma} \begin{proof} Direct calculations. \end{proof} The non-vanishing conditions of Lemma \ref{lemma-der} are merely technical, imposed to avoid consideration of numerous tedious degenerate cases. We may specialize this setup in many different ways to get an example of an algebra having a $(\delta,\gamma)$-derivation satisfying the conditions of the Proposition or its Corollary, and hence admitting a non-semigroup grading.
One of the easiest ways is to set $f_L = f_R = g_L = g_R = f$, where $f \circ f = 0$ (say, $V$ is $2$-dimensional, and $f$ has the matrix $\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ in the canonical basis), $\delta = \gamma = -1$, $\alpha = \beta = 0$, $v_a = w_a = 0$, and $\psi = \widetilde\varphi = 0$, $\varphi = \id_V$, $\widetilde\psi = -\id_V$. Then $D$ from Lemma \ref{lemma-der} is a $(-1)$-derivation (or \emph{antiderivation}) of the algebra $A(f,f,f,f)$. The eigenvalues of $D$ are $0, 1, -1$, with eigenspaces $A_{0} = Ke \oplus Ka$, $A_1 = V$, and $A_{-1} = V^\prime$. Then, by part (i) of the Corollary, the root space decomposition $A(f,f,f,f) = A_0 \oplus A_1 \oplus A_{-1}$ is a non-semigroup grading. This fact can also be verified directly: as $A_0^2 = Ke \subset A_0$, $A_0 A_1 = A_1 A_0 = (\im f)^\prime \subset A_{-1}$, and $A_0 A_{-1} = A_{-1} A_0 = \im f \subset A_1$, we have the following (partial) operation on the grading set: $$ 0*0 = 0 , \quad 0 * 1 = 1 * 0 = -1, \quad 0 * (-1) = (-1) * 0 = 1 , $$ which contradicts associativity: $$ 1 = 0*(-1) = (0*0)*(-1) \ne 0*(0*(-1)) = 0*1 = -1 . $$ (Note that this is the same non-associative grading set as in the Lie-algebraic example in \cite{elduque-2}.) The algebra $A(f,f,f,f)$ is, obviously, commutative, with a commutative grading. By modifying this example to make the maps $f_L$, $f_R$, $g_L$, $g_R$ different, it is possible to get various examples of associative non-commutative algebras with a non-semigroup grading, commutative or not. The relevant calculations are trivial, but somewhat cumbersome, and are left to the interested reader.
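The contradiction above uses only products that are actually defined, so it can be verified mechanically; a few lines of Python (purely illustrative) confirm that no total associative extension of this partial operation exists:

```python
# the defined products of the grading set {0, 1, -1} from the example above
op = {(0, 0): 0, (0, 1): -1, (1, 0): -1, (0, -1): 1, (-1, 0): 1}

left = op[(op[(0, 0)], -1)]    # (0*0)*(-1) = 0*(-1)
right = op[(0, op[(0, -1)])]   # 0*(0*(-1)) = 0*1
assert (left, right) == (1, -1)
# both bracketings use only defined products, yet disagree, so the
# operation cannot be completed to an associative one on any superset
```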
\section{Further questions}\label{sec-quest} If $L = \bigoplus_{g \in \Gamma} L_g$ is a Lie algebra graded by an \emph{abelian group} $\Gamma$, then its universal enveloping algebra $U(L)$ is a $\Gamma$-graded associative algebra, with the graded components $U(L)_g$ linearly spanned by monomials of the form $x_1 \dots x_k$, where $x_i \in L_{g_i}$ and $\sum_{i=1}^k g_i = g$ (see, e.g., \cite[Theorem 4.3]{strade-farn}). The algebra $U(L)$ is infinite-dimensional, which, perhaps, is not that interesting in our context. In positive characteristic it is possible, however, to define the same grading on the finite-dimensional \emph{restricted} universal enveloping algebra of a graded restricted Lie algebra. However, the facts that multiplication in $\Gamma$ is defined everywhere, and is associative, are crucial in this construction, and it is unclear how to extend or modify it to gradings by an arbitrary set $\Gamma$. \begin{question} Is it possible to construct a grading of the (restricted) universal enveloping algebra, given an (arbitrary, not necessarily semigroup) grading of the underlying Lie algebra? \end{question} A positive answer to this question would produce a plethora of non-semigroup gradings of finite-dimensional associative algebras in positive characteristic, different from those exhibited in \S \ref{sec-nonsemigr}: take any of the examples from \cite{elduque-1} or \cite{elduque-2} over a field of characteristic $p>0$, pass, if necessary, to the $p$-envelope, and consider the restricted universal enveloping algebra. \begin{question} What is the minimal dimension of an associative algebra admitting a non-semigroup grading? \end{question} It is probably possible to prove, following the approach of \cite[Theorem in \S 1]{elduque-2} and the classification of low-dimensional associative algebras, that any grading of an associative algebra of dimension $\le 3$ is a semigroup grading.
Since the underlying algebra is not necessarily commutative, there are a priori many more possibilities for a noncommutative partial operation on a $2$- or $3$-element grading set. The relevant calculations should be straightforward, but definitely cumbersome. We also failed to find examples of non-semigroup gradings of associative algebras of dimensions $4$ and $5$. The minimal dimension of an algebra with a non-semigroup grading following the scheme of \S \ref{sec-nonsemigr} is $6$. By analogy with the question about gradings of simple Lie algebras from \cite{elduque-1}, one may ask \begin{question} Is it true that any grading of a full matrix algebra is a semigroup grading? \end{question} Note that this question cannot be approached by constructing an appropriate $(\delta,\gamma)$-derivation as in \S \ref{sec-nonsemigr}: it is easy to see that any $(\delta,\gamma)$-derivation of a full matrix algebra is either an (inner) derivation, or a scalar multiple of the identity map (see, e.g., \cite[Theorem 1]{shest} for a slightly more general statement). Finally, note that, in principle, the same approach as in \S \ref{sec-nonsemigr} may be used to construct examples of non-semigroup gradings in varieties of algebras satisfying other identities of degree $3$ (such as Leibniz, Zinbiel, left-symmetric, Lie-admissible, and Alia algebras). Another interesting topic would be to explore the question from the point of view of operadic Koszul duality: for example, does the presence or absence of non-semigroup gradings of algebras over a binary quadratic operad $\mathscr P$ entail the same for algebras over the operad Koszul dual to $\mathscr P$? \section*{Acknowledgements} Thanks are due to Miroslav Korbel\'a\v{r} for asking questions which prompted me to write this note. This work was supported by the Statutory City of Ostrava (grant 0924/2016/Sa\v{S}), and the Ministry of Education and Science of the Republic of Kazakhstan (grant 0828/GF4).
\section{Introduction} {\it EPOXI} (EPOCh + DIXI) is a NASA Discovery Program Mission of Opportunity using the Deep Impact flyby spacecraft \citep{Blume05}. From January through August 2008, the EPOCh (Extrasolar Planet Observation and Characterization) Science Investigation used the HRI camera \citep{Hampton05} with a broad visible bandpass filter to gather precise, rapid cadence photometric time series of known transiting exoplanet systems. The majority of these targets were each observed nearly continuously for several weeks at a time. In Table 1 we give basic information about the seven EPOCh targets and the number of transits of each that EPOCh observed. One of the EPOCh science goals is a search for additional planets in these systems. Such planets would be revealed either through the variations they induce on the transits of the known exoplanet, or directly through the transit of the second planet itself. The search for additional planets in the EPOCh observations of the M~dwarf GJ~436 was presented in \cite{Ballard10}. Because GJ~436 is a nearby M~dwarf, it is the only EPOCh target for which we are sensitive to planets as small as 1.25 $R_{\oplus}$. In this work, we conduct a search for photometric transits of additional planets; the transit times of the known exoplanets observed by EPOCh are presented in \cite{Christiansen10b}. The search for transiting planets in the EPOCh light curves is scientifically compelling because the discovery of two transiting bodies in the same system permits the direct observation of their mutual dynamical interactions. This enables constraints on the masses of the two bodies independent of any radial velocity measurement \citep{Holman05, Agol05}, as has been done for the multiple transiting planet system Kepler 9 \citep{Holman10}. There are also separate motivations for searches for additional planets around the EPOCh targets. 
The search for additional transits in the EPOCh observations is complementary to existing constraints on additional planets from photometric observations, radial velocity measurements, and transit timing analyses of the known exoplanet. Here we briefly summarize such work to date for our five targets. \cite{Smith09} investigated 24 light curves of known transiting exoplanets, including the {\it EPOXI} targets HAT-P-4, TrES-2, WASP-3, and HAT-P-7, and found that they were sensitive to additional transits of Saturn-sized planets with orbital periods less than 10 days with greater than 50\% certainty, although that probability is less for HAT-P-4 \citep{Kovacs07} because of decreased phase coverage. Transit timing analyses of TrES-3b \citep{ODonovan07} have ruled out planets in interior and exterior 2:1 resonances \citep{Gibson09}, although the transit times obtained by \cite{Sozzetti09} for TrES-3b may suggest a deviation from a constant period that could be attributed to an additional body. \cite{Freistetter09} found that a broad range of orbits around TrES-2 \citep{ODonovan06} would be dynamically stable for additional planets, although the constraints presented by \cite{Rabus09} for TrES-2 have ruled out a 5 $M_{\oplus}$ planet in the 2:1 resonance specifically, and \cite{Holman07} found no deviations in the transit timing residuals from the predicted ephemeris. Additionally, \cite{Raetz09} observed a candidate transit in their photometry of TrES-2 which might be attributed to an additional body in the system in an orbit external to TrES-2b. However, \cite{Kipping10} investigated the TrES-2 {\slshape Kepler} observations and found no unexpected photometric decrements and no significant transit timing or transit duration variations. \cite{Maciejewski10} performed an analysis of the transit times of WASP-3b \citep{Pollacco08}, and found evidence for a planet with a mass of 15 $M_{\oplus}$ in an orbit close to a 2:1 resonance with the known planet.
In the HAT-P-7 \citep{Pal08} radial velocity measurements, \cite{Winn09} found a drift that provides evidence for a third body. This radial velocity trend is consistent with any period longer than a few months. Finally, the light curves obtained by the {\slshape Kepler} Mission \citep{Borucki10} will ultimately enable exquisite constraints on the presence of additional planets in two of the systems which were also observed by {\it EPOXI}: TrES-2 and HAT-P-7. The remainder of this paper is organized as follows. In Section 2, we describe the photometry pipeline we created to produce the time series. In Section 3, we describe the search we conduct for additional transiting planets. We present a Monte Carlo analysis of the EPOCh observations of HAT-P-4, TrES-3, TrES-2, WASP-3, and HAT-P-7, and demonstrate the sensitivity to detect additional planets in the Neptune-sized and Saturn-sized radius ranges. In Section 4, we present our best candidate transit signals, and from the search for additional transits we place upper limits on the radii of additional putative planets in these systems in the exterior 3:2 and 2:1 resonances with the known exoplanets. \section{Observations and Data Reduction} The photometric pipeline we developed for the EPOCh data is discussed at length in \cite{Ballard10} (concerning GJ~436 in particular), and is summarized in \cite{Christiansen10a} (concerning HAT-P-7) and \cite{Christiansen10b} (concerning HAT-P-4, TrES-3, TrES-2, and WASP-3). We outline here the basic steps we undertake to produce the final EPOCh time series. We acquired observations of the five EPOCh targets presented here nearly continuously for approximately two-week intervals. These intervals were interrupted for several hours at approximately 2-day intervals for data downloads. We also obtained for TrES-2, WASP-3, and HAT-P-7 approximately one day of ``pre-look'' observations, implemented to optimize pointing for each target, that predate the continuous observations by a week.
The basic characteristics of the targets and observations are given in Tables 1 and 2. Observations of this type were not contemplated during development of the original Deep Impact mission; the spacecraft was not designed to maintain very precise pointing over the timescale of a transit (Table \ref{tbl-2}). Furthermore, the available onboard memory precludes storing the requisite number of full-frame images (1024$\times$1024 pixels). Hence, the observing strategy during the later observations (TrES-2, WASP-3, and HAT-P-7) used 256$\times$256 sub-array mode for those times spanning the transit, and 128$\times$128 otherwise. This strategy assured complete coverage at transit, with minimal losses due to pointing jitter exceeding the 128$\times$128 sub-array at other times. We elected to exclude the following EPOCh data from the search for additional transits: first, the observations of XO-2, for which we gathered only partial transits and had relatively sparse phase coverage due to pointing jitter and data transfer losses. Second, we did not use the observations from the second EPOCh campaign for HAT-P-4 (from Jun 29 to Jul 7, 2008), which we could not calibrate to the same level of precision as the original observations for reasons explained below. Our sensitivity to additional transit signals in the HAT-P-4 light curve, which should theoretically have improved with additional observations well separated in time, was in reality diminished due to the increased correlated noise in the second campaign HAT-P-4 observations. For this reason, we elected to use only the original 22 days of observations in the search for additional transits.
\begin{deluxetable}{cccc} \tabletypesize{\scriptsize} \tablecaption{EPOCh Targets \label{tbl-1}} \tablewidth{0pt} \tablehead{ \colhead{Name} & \colhead{$V$ Magnitude} & \colhead{Number of Transits Observed\tablenotemark{a}} & \colhead{Dates Observed [2008]} } \startdata HAT-P-4 & 11.22 & 10 & Jan 22--Feb 12, Jun 29--Jul 7 \\ TrES-3 & 11.18 & 7 & Mar 8--Mar 10, Mar 12--Mar 18\\ XO-2 & 12.40 & 3 & Mar 11, Mar 23--Mar 28\\ GJ 436 & 10.67 & 8 & May 5--May 29\\ TrES-2 & 11.41 & 9 & Jun 27--Jun 28, Jul 9--Jul 18, Jul 21--Aug 1\\ WASP-3 & 10.64 & 8 & Jul 18--Jul 19, Aug 1--Aug 9, Aug 11--Aug 17 \\ HAT-P-7 & 10.50 & 8 & Aug 9--Aug 10, Aug 18--Aug 31\\ \enddata \tablenotetext{a}{Some transits are partial.} \end{deluxetable} \begin{deluxetable}{cc} \tabletypesize{\scriptsize} \tablecaption{Characteristics of the EPOCh Observations \label{tbl-2}} \tablewidth{0pt} \tablehead{ \colhead{} & \colhead{} } \startdata Telescope aperture & 30 cm \\ Spacecraft memory & 300 Mb \\ Bandpass & 350-1000 nm \\ Integration time & 50 seconds \\ Pointing jitter & $\pm$ 20 arc-sec per hour \\ Defocus & 4 arc-sec FWHM \\ Pixel scale & 0.4 arc-sec per pixel \\ Subarray size & 256$\times$256 pixels spanning transit, 128$\times$128 otherwise\tablenotemark{a} \\ \enddata \tablenotetext{a}{With the exception of the HAT-P-4 observations during 2008 January and February and TrES-3 observations, which were conducted entirely in 128$\times$128 subarray mode.} \end{deluxetable} We used the existing Deep Impact data reduction pipeline to perform bias and dark subtractions, as well as preliminary flat fielding \citep{Klaasen05}. We first determined the position of the star on the CCD using PSF fitting, by maximizing the goodness-of-fit (with the $\chi^{2}$ statistic as an estimator) between an image and a model PSF (oversampled by a factor of 100) with variable position, additive sky background, and multiplicative brightness scale factor.
We then processed the images to remove sources of systematic error due to the CCD readout electronics. We first scaled down the two central rows by a constant value, then we scaled down the central columns by a separate constant value. Finally, in the case of 256$\times$256 images, we scaled the entire image by a multiplicative factor to match the 128$\times$128 images (the determination of the optimal correction values is performed independently for each target). We performed aperture photometry on the corrected images, using an aperture radius of 10 pixels, corresponding to twice the HWHM of the PSF. To remove remaining correlated noise due to the interpixel sensitivity variations on the CCD, we fit a 2D spline surface to the brightness variations on the array as follows. We randomly drew a subset of several thousand out-of-transit and out-of-eclipse points from the light curve (from a data set ranging from 11,000 total points in the case of TrES-3 to 20,000 points in the case of HAT-P-4), recorded their X and Y positions, and calculated a robust mean of the brightness of the 30 nearest spatial neighbors for each selected point. To determine the robust mean, we used an iterative sigma-clipping routine that recalculates the mean after excluding outliers further than 3 sigma from the mean estimate at each iteration (the routine concludes after the iteration when no new outliers are identified). Given the set of X and Y positions and the average brightness values of the 30 points which lie nearest those positions, we fit a spline surface to the brightness variations in X and Y using the IDL routine \verb=GRID_TPS=. This spline surface has the same resolution as the CCD, and approximates a flat field of the CCD which has been convolved by a smoothing kernel with width equal to the average distance required to enclose 30 neighboring points. 
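The iterative sigma-clipping routine used to compute the robust mean of the 30 nearest spatial neighbors can be sketched as follows (a minimal illustration with our own function names, not the pipeline code itself):

```python
import numpy as np

def robust_mean(values, n_sigma=3.0, max_iter=20):
    """Iterative sigma-clipped mean: at each iteration, points further than
    n_sigma standard deviations from the current mean estimate are excluded,
    and the routine concludes after the iteration when no new outliers are
    identified (a sketch of the procedure described in the text)."""
    vals = np.asarray(values, dtype=float)
    mask = np.ones(vals.size, dtype=bool)
    for _ in range(max_iter):
        mean, std = vals[mask].mean(), vals[mask].std()
        new_mask = np.abs(vals - mean) <= n_sigma * std
        if np.array_equal(new_mask, mask):  # converged: no new outliers found
            break
        mask = new_mask
    return vals[mask].mean()
```

In the pipeline this mean is taken over the brightness values of the 30 nearest spatial neighbors of each selected point before fitting the spline surface.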
We then corrected each data point individually by bilinearly interpolating on the spline surface to find the expected brightness of the star at each X and Y position. We then divide each observation by its expected brightness to remove the effects of interpixel sensitivity variations. We used only a small fraction of the observations to create the spline surface in order to minimize the potential transit signal suppression introduced by flat fielding the data with itself. To produce the final time series, we iterated the above steps, fitting for the row and column multiplicative factors, the sub-array size scaling factor, and the 2D spline surface that minimized the out-of-transit standard deviation of the photometric time series. We include two additional steps in the reduction of these data that were not included in the \cite{Ballard10} reduction of the GJ~436 EPOCh observations. First, during the second campaign observations of HAT-P-4 and TrES-2, we observed an increase in brightness when the position of the star was located in the lower right-hand quadrant of the CCD. At the image level, we observed a bright striping pattern in this quadrant that caused the measured brightness of the star to increase as soon as the PSF entered this quadrant. We found that the dependence of the brightness increase in this quadrant was correlated with the Instrument Control Board Temperature value recorded for each image in the FITS header. For the HAT-P-4 second campaign and TrES-2 observations, we first fit a spline to the dependence of the photometric residuals (after the bootstrap flat field was applied, described below) on the Instrument Control Board Temperature, using residuals for which the entire PSF of the star fell into the CCD quadrant in question. At its most prominent, this effect produced a brightness increase of 4 mmag.
We then performed aperture photometry again on these targets and corrected each image by interpolating the Instrument Control Board Temperature for that image onto the spline, multiplying this correction value by the fraction of the PSF core that fell into the quadrant in question, and dividing this value from the photometry. We found that this iterative procedure largely removed this quadrant-dependent effect. In the latter half of the TrES-2 observations, we no longer observed the brightness increase in this CCD quadrant. Therefore, we found that the correction procedure was only necessary for the second campaign HAT-P-4 and the first portion of the TrES-2 observations. However, because of the 6-month separation of the second campaign HAT-P-4 observations from the original HAT-P-4 observations, the behavior of the CCD had sufficiently altered to disallow the combination of the data into a single 2D spline surface. The separate 2D spline correction of the second campaign observations (spanning only 8 days), coupled with the residual striping artifacts, sufficiently decreased the precision of the second campaign observations that we elected to exclude them from the search for additional transiting planets around HAT-P-4. Second, we include one final correction after we have removed the interpixel brightness variations with the 2D spline, which is to perform an additional point-by-point correction to the data taken during transit and secondary eclipse of the known exoplanets. The bootstrap flat field randomly selects a set of points to create the spline surface, instead of using all the data to create this surface; this minimizes the suppression of additional transits. Our sensitivity to additional transits is sufficiently diminished during the transits of the known planet that we are more concerned with removing correlated noise, and less concerned about avoiding additional transit suppression.
The two reasons for the diminished sensitivity during transit windows are, first, that we fit a slope with time to the points immediately outside of the transit of the known exoplanet (from 3 minutes to 30 minutes before and after transit) and divide by the slope in order to normalize each transit before we fit for the system parameters (this procedure is also detailed in \citealt{Ballard10}). This has the potential to remove a decrement due to an additional transit. Second, there is also the possibility of an occultation of one planet by another. We therefore elected to perform a point-by-point correction after the 2D spline for points occurring during the transit and eclipse of the known exoplanet, wherein we find a robust mean of the 30 nearest neighbors to each point (using the same iterative routine described above) and divide each point in transit or eclipse by this value individually. This has the benefit of removing additional correlated noise during transit and eclipse, while still minimizing signal suppression of additional putative planets outside of these time windows. After we take these steps to address the systematics associated with the observations, we achieve a precision for the unbinned observations which is approximately twice the photon noise limited precision for all five targets. We estimate the photon limited precision at the image level, by converting the stellar flux from watts per square meter per steradian per micron (W/m$^{2}\cdot$sr$\cdot\mu$m) to electron counts as follows. We first divide by the conversion factor in the FITS header (keyword RADCALV = 0.0001175 W/m$^{2}\cdot$sr$\cdot\mu$m per DN/s), then multiply by the exposure time (INTTIME = 50.0005 s), and finally multiply by the gain (28.80 e/DN, per \citealt{Klaasen08} for the HRI camera). We then estimate the photometric error by calculating $1/\sqrt{N}$, where N is the number of electrons.
We have excluded read noise, bias, and dark current from the estimation of the photon limited precision because these quantities contribute negligibly to the total number of electrons measured within the aperture; we briefly summarize our reasoning here. Using the results of the calibration tests on the HRI instrument shown in \cite{Klaasen08}, we estimate the read noise and dark current (given the CCD temperature of 160 K, as recorded in the image headers) to contribute less than a DN. Calculating the median bias value per pixel from the overclocked pixels associated with each image, and then multiplying this median value by the number of pixels contained within the aperture, we determine that the bias contributes less than 500 DN. When compared to the total measured DN flux contained in the aperture, which is of order 10$^{5}$ DN for the dimmest target star, TrES-3, we conclude that read noise, bias, and dark current are negligible. We repeat the photon limited precision calculation on 50 images for each target, and take the mean of these values to be our estimate for the photon limited precision. Our precision of 1.21 mmag for HAT-P-4 is 94\% above the limit, 2.17 mmag for TrES-3 is 106\% above the limit, 1.62 mmag for TrES-2 is 136\% above the limit, 0.97 mmag for WASP-3 is 106\% above the limit, and 0.86 mmag for HAT-P-7 is 91\% above the limit. The EPOCh precision for GJ~436 of 0.51 mmag was only 56\% above the photon noise limit, which we attribute to the longer baseline of observations with fewer gaps in phase coverage, both of which enabled us to create a higher quality 2D spline flat field \citep{Ballard10}. Figure \ref{fig:lightcurve1} shows five EPOCh time series after the 2D spline correction is applied; these light curves are identical to the ones presented in \cite{Christiansen10a,Christiansen10b}. 
In the right panel adjacent to each time series, we show how the time series, after the 2D spline is applied, bins down as compared to Gaussian noise over timescales of 7 hours (512 points) or less. We selected the longest contiguous portion of the light curve between transits (and excluding secondary eclipse) to calculate the standard deviation as a function of bin size; this unbinned portion typically comprises about 2500 points. We compare the expected Gaussian scatter for a bin size of 1 hour (assuming that $\sigma$ decreases as $N^{-1/2}$, and normalizing to the observed rms of the unbinned data) to the measured scatter, and find that the presence of correlated noise inflates the 1$\sigma$ error bar by a factor of 1.86 for HAT-P-4, 1.58 for TrES-3, 1.90 for TrES-2, 2.77 for WASP-3, and 3.14 for HAT-P-7 for 1 hour timescales. \begin{figure}[h!] \begin{center} \includegraphics[height=6.9in]{f1.eps} \caption{\textit{Left panels:} {\it EPOXI} time series for targets HAT-P-4, TrES-3, TrES-2, WASP-3, and HAT-P-7. For TrES-3 (second panel from top), we show the light curve with original modulation due to star spots at bottom. \textit{Right panels:} The standard deviation versus bin size for each target, compared to the ideal Gaussian limit (shown with a line, normalized to match the value at N=1).} \label{fig:lightcurve1} \end{center} \end{figure} We also investigate the transit signal suppression introduced by using a flat field created from the out-of-transit and out-of-eclipse data itself. We avoid the suppression of known transits in each data set by iteratively excluding those observations (using an ephemeris for the known planet derived from the EPOCh observations) from the points used to generate the flat field surface, so that we only use the presumably constant out-of-transit and out-of-eclipse observations to sample the CCD sensitivity.
However, if the transit of an additional planet occurs while the stellar PSF is lying on a part of the CCD that is never visited again, the 2D spline algorithm instead models the transit as diminished pixel sensitivity in that CCD location. To quantify the suppression of additional transits that results from using the 2D spline, we inject transit light curves with periods ranging from 0.5 days to 7 days in intervals of 30 minutes in phase (ranging from a phase of zero to a phase equal to the period) into the EPOCh light curve just prior to the 2D spline step. After performing the 2D spline, we then phase the data at the known injected period and fit for the best radius, using $\chi^{2}$ as the goodness-of-fit statistic. We show in Figure \ref{fig:suppression} the radius suppression as a function of period for the five EPOCh targets. The HAT-P-4 observations occur over a longer duration with fewer gaps in phase coverage, so even at a period of 7 days, we have 95\% confidence that the radius of an additional transiting planet will not be suppressed to less than 60\% its original value. For example, an additional 8 $R_{\oplus}$ planet orbiting HAT-P-4 will appear no smaller than 0.6$\times$8 $R_{\oplus}$, or 4.8 $R_{\oplus}$, with 95\% confidence. However, for a target with sparser phase coverage, such as WASP-3, we have 95\% confidence that the radius will not be suppressed to less than 45\% its original value. The same 8 $R_{\oplus}$ planet orbiting WASP-3 will therefore appear no smaller than 0.45$\times$8 $R_{\oplus}$, or 3.6 $R_{\oplus}$, with the same confidence. The average (50\% confidence) suppression value of 75\% across all periods and for all targets reflects the average density of points on the CCD (and thus the quality of the 2D spline), which is indicative of the pointing jitter of the instrument. We describe our incorporation of signal suppression into our search for additional planets in greater detail in Section 3.2. \begin{figure}[h!] 
\begin{center} \includegraphics[height=6.5in]{f2.eps} \caption{The 50\% and 95\% confidence values for suppression of additional transits as a function of orbital period in the EPOCh observations. We have 50\% confidence that the transit signal will not be suppressed more than the value of the dashed line at that period, and 95\% confidence that the transit signal will not be suppressed more than the value of the solid line.} \label{fig:suppression} \end{center} \end{figure} \section{Analysis} \subsection{Search for Additional Transiting Planets} We search the {\it EPOXI} time series for evidence for additional shallow transits. We developed software to search for these additional transits, which is discussed at length in \cite{Ballard10}. The steps involved in the procedure are summarized in this section. We conduct a Monte Carlo analysis to assess how accurately we could recover an injected planetary signal in each of the EPOCh light curves. We evaluate our sensitivity to transit signals on a grid in radius and period space sampled at regular intervals in $R_{P}^{2}$ and regular frequency spacing in $P$. We create an optimally spaced grid as follows: for the lowest period at each radius, we determine the radii at which to evaluate the adjacent periods by solving for the radius at which we achieve equivalent signal-to-noise (for this reason, we expect significance contours to roughly coincide with the grid spacing). We use the \cite{Mandel02} routines for generating limb-darkened light curves given these parameters to compute a grid of models corresponding to additional possible planets. If we make the simplifying assumptions of negligible limb darkening of the host star, a circular orbit, and an orbital inclination angle $i$ of 90$^{\circ}$, the set of light curves for additional transiting bodies is a three parameter family. These parameters are radius of the planet $R_{p}$, orbital period of the planet $P$, and orbital phase $\phi$. 
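Under these simplifying assumptions, a member of this three-parameter family reduces to a periodic boxcar of depth $(R_{p}/R_{\star})^{2}$. A toy sketch of such a model (our own names; the actual search uses the \cite{Mandel02} limb-darkened light curves):

```python
import numpy as np

def boxcar_transit(t, rp_rstar, period, phase, duration):
    """Toy light curve under the simplifying assumptions in the text:
    negligible limb darkening, circular orbit, i = 90 deg, so each transit
    is a boxcar of depth (Rp/R*)^2.  `duration` would follow from the
    transit-duration expression of Seager & Mallen-Ornelas (2003)."""
    t = np.asarray(t, dtype=float)
    # time offset from the nearest mid-transit, folded on the period
    t_fold = ((t / period - phase) % 1.0) * period
    t_fold = np.minimum(t_fold, period - t_fold)
    flux = np.ones_like(t)
    flux[t_fold < duration / 2.0] -= rp_rstar ** 2
    return flux
```

A model grid over ($R_{p}$, $P$, $\phi$) is then just this function evaluated over the observation timestamps for each grid point.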
To generate these light curves, we also use the stellar radius values determined by \cite{Christiansen10a,Christiansen10b} from the {\it EPOXI} data, with the corresponding stellar masses, from the literature, that were used to calculate those radii. Those radius and mass values are 1.60 $R_{\odot}$ and 1.26 $M_{\odot}$ \citep{Kovacs07} for HAT-P-4, 0.82 $R_{\odot}$ and 0.93 $M_{\odot}$ \citep{Sozzetti09} for TrES-3, 0.94 $R_{\odot}$ and 0.98 $M_{\odot}$ \citep{Sozzetti07} for TrES-2, 1.35 $R_{\odot}$ and 1.24 $M_{\odot}$ \citep{Pollacco08} for WASP-3, and 1.82 $R_{\odot}$ and 1.47 $M_{\odot}$ \citep{Pal08} for HAT-P-7. At each test radius and period, we inject planetary signals with 75 randomly assigned phases into the residuals of EPOCh light curves with the best transit model divided out, and then attempt to recover blindly the injected signal by minimizing the $\chi^{2}$ statistic. The period range of injected signals is selected for each target individually, to ensure the injected transit signal comprises at least two transits in most cases. For a target with high phase coverage, like HAT-P-4, we inject signals with periods up to 7 days, but for targets with observations of a shorter duration and sparser phase coverage, like TrES-3 or WASP-3, we inject signals up to 3.5 and 2.5 days, respectively. For planets with periods longer than these values, we may detect the planet, but with a single transit, we expect only a very weak constraint on the period. We first conduct a coarse $\chi^{2}$ grid search in radius, period, and phase. We select the spacing of this grid to minimize processing time while ensuring that the transit was not missed; we polish the parameters with a finer $\chi^{2}$ search after the initial coarse search.
We sample the $\chi^{2}$ space at 300 points in period space (at even frequency intervals between 0.5 and 8.5 days), 50 points in radius space (between 0.5 and 5.5 Earth radii) and a variable number of points in phase space set by the resolution of the transit duration for each period. We use an expression for the transit duration $\tau$ given by \cite{Seager03}: \begin{equation} \sin i\,\sin\left(\frac{\pi\tau}{P}\right)=\sqrt{\left(\frac{R_{\star}+R_{P}}{a}\right)^{2}-\cos^{2}i}. \label{eq:duration} \end{equation} For each test model, we compute the $\chi^{2}$, using the out-of-transit standard deviation to estimate the error in each point. After the grid $\chi^{2}$ minimum is determined, we use the \verb=amoeba= minimization routine \citep{Nelder65} to more finely sample the $\chi^{2}$ space in order to find the $\chi^{2}$ minimum from the specified nearest grid point. We also investigate whether aliases of the best-fit period from the $\chi^{2}$ grid improve the fit. We find that roughly half of the best solutions from the grid are aliases of the injected period, most at either half or twice the value of the injected period, but we test aliases at every integer ratio from 1/35 to 35 times the given period (although aliases other than 1:2, 2:1, 3:1, 1:3, 2:3, or 3:2 occur less than 3\% of the time for all targets). We also repeat the finer grid search at the three next lowest $\chi^{2}$ minima, in case the best solution (or an alias of the best solution) lies closer to that grid point. For all injected signals, we recover a solution which is a better fit (in the $\chi^{2}$ sense) than the injected signal. For this reason, we are confident that we are sampling the $\chi^{2}$ space sufficiently finely to locate the best solution. We quantify the success of this analysis by how well the search blindly recovers the known injected transit signal.
We define the error on the recovered parameter, for instance period, to be $|P_{\mathrm{injected}}-P_{\mathrm{observed}}|/P_{\mathrm{injected}}$. Figure \ref{fig:success_results_all} shows this relative error in radius, with 95\% confidence, for all searches. As we note in the last paragraph of Section 2, we anticipate suppression of additional transit signals from the bootstrap flat field treatment of the {\it EPOXI} data. We evaluate the suppression we expect at the period values used in the Monte Carlo analysis, using the results shown in Figure \ref{fig:suppression}. We incorporate this expected suppression by vertically shifting the effective radius values of the grid points at which we evaluate our sensitivity to additional transits. For example, for HAT-P-4 at 1.63 days, all grid points have been shifted upward in radius by a factor of $1/0.7 \approx 1.43$, because we anticipate that the radius will be suppressed to no smaller than 70\% its original value. For this reason, the recovery statistics corresponding to a 3.0 $R_{\oplus}$ transit depth in the final light curve would be accurate for an original transit signal of a 4.3 $R_{\oplus}$ planet once we fold in our expectation of signal suppression. \begin{figure}[h!] \begin{center} \includegraphics[height=6.5in]{f3.eps} \caption{Constraints on radius from the Monte Carlo analysis. For each point in radius and period, we create 75 light curves with random orbital phases, inject them into the EPOCh residuals, and attempt to recover them blindly. The diamonds indicate the grid of radii and periods at which we evaluate our sensitivity; the contours are produced by interpolating between these points. The contours indicate the relative error in radius (absolute value of recovered-injected/injected radius) that encloses 95\% of the results.} \label{fig:success_results_all} \end{center} \end{figure} We also evaluate the overall detection probability for putative transiting planets.
Given the cadence and coverage of the {\it EPOXI} observations, we determine the number of in-transit points we expect for a given radius, period, and phase (where the phase is evaluated from 0 to 1 periods, in increments of 30 minutes). We then evaluate the expected significance of the detection, assuming a boxcar-shaped transit at the depth of $(R_{P}/R_{\star})^{2}$, and the standard deviation of the time series. At each phase and period individually, we scale down $R_{P}$ to incorporate the signal suppression at that ephemeris. We use the improvement in the $\chi^{2}$ over the null hypothesis to define a positive detection, after we have removed the best candidate transit signal (described in Section 4.1). If we do not first remove this signal, then we are a priori defining a ``detectable'' signal to be any signal more prominent than the best candidate signal, and we would be unable to evaluate this signal's authenticity. We set our detection limit at an improvement in $\chi^{2}$ over the null hypothesis that signifies a correctly recovered period (which we define as a period error of $<$1\%). This number varies among the EPOCh targets due to the precision of the observations and the contamination by correlated noise. The detection probability of additional transiting planets, as a function of their radius and orbital period, is shown in Figure \ref{fig:coverage}. For HAT-P-4, TrES-3, and TrES-2, the $\Delta\chi^{2}$ cutoff is set at 250, 200, and 200, respectively. For WASP-3, the $\Delta\chi^{2}$ criterion for detection is 400, and for HAT-P-7, the cutoff is 500. There are five exceptions to these threshold values across the five targets: in our analysis of WASP-3, we find two instances of a significance higher than 400, but an incorrect period value: these signals both comprise 4 full transit events, and are recovered at a 4:1 alias of the true period of 2.34 days.
Due to instances of correlated noise that produce positive deviations during two of the four transit events, decreasing the measured depths by 2 mmag, a better solution is found at the 4:1 alias than at the true period. For HAT-P-7, we find three similar cases of a 2:1 alias providing a better solution than the injected period, although the significance of the detection is above the threshold value of 500. For these three injected signals, which comprise three transits, two of the transits are recovered correctly, and the third overlays a single instance of correlated noise that decreases the depth of the transit by 0.5 mmag. We investigated the true signal-to-noise ratio (including correlated noise) associated with a single detectable transit with the cutoff $\Delta\chi^{2}$ significance for each target. We use a method similar to the one described by \cite{Winn08} to determine the contribution of correlated noise to the standard deviation over a transit duration timescale. We first solved for the transit depth associated with the cutoff $\Delta\chi^{2}$ value, assuming a single boxcar transit with standard deviation equal to the out-of-transit and out-of-eclipse standard deviation of the unbinned time series. We next found the standard deviation at a bin size corresponding to a transit duration for each target. We assume an edge-on transit (an assumption we also used for the Monte Carlo analysis) and the shortest period where we expect mostly single transits (this period is slightly larger than the largest period used for the Monte Carlo analysis; that period range was selected so that we would expect at least two transits in nearly all cases). This approximate orbital period is 7.5 days for HAT-P-4, 4.0 days for TrES-3, 5.0 days for TrES-2, 3.0 days for WASP-3, and 5.0 days for HAT-P-7.
Using the cutoff transit depth and the standard deviation associated with the transit duration for each target, we find that the signal-to-noise ratio associated with the detection criterion is approximately constant across the targets, ranging between 5 and 8. The variation in the $\Delta\chi^{2}$ value can be attributed in part to the varying presence of correlated noise in the different data sets (and also to the number of points associated with each transit, which depends on the transit duration). We confirm empirically that planets of these radii are detectable by examining the detection probability as a function of radius and orbital period shown in Figure \ref{fig:coverage}. We convert the cutoff transit depth to a planetary radius, assuming the stellar radius derived from the EPOCh observations and an average suppression of the radius to 0.75 of its original value (roughly constant for all EPOCh targets, as shown in Figure \ref{fig:suppression}). This radius value physically corresponds to the minimum planetary radius detectable by EPOCh from a single transit. This value is 7.1 $R_{\oplus}$ for HAT-P-4, 6.2 $R_{\oplus}$ for TrES-3, 5.3 $R_{\oplus}$ for TrES-2, 6.5 $R_{\oplus}$ for WASP-3, and 7.9 $R_{\oplus}$ for HAT-P-7. Comparing to the nearest radius value in Figure \ref{fig:coverage}, we find that indeed, at the shortest orbital period where we should expect to see single transits, we can detect a planet with the radius associated with the detection criterion at high significance. At longer orbital periods, we still expect single transits, but the likelihood that the single transit occurs during a gap in the phase coverage increases. \begin{figure}[h!] \begin{center} \includegraphics[height=6.5in]{f4.eps} \caption{Detection probability versus period for planets ranging in size from 3 to 10 $R_{\oplus}$.
The detection criterion is set by the percentage of phases at a given period for which the number of points observed in transit produces a $\chi^{2}$ improvement of the cutoff significance, compared to the null hypothesis ($\Delta\chi^{2}$ of 250 for HAT-P-4, 200 for TrES-3 and TrES-2, 400 for WASP-3, and 500 for HAT-P-7). We assume a boxcar-shaped transit at the depth of $(R_{P}/R_{\star})^{2}$. The vertical lines show the positions of the 3:2 and 2:1 resonances with the known planet, and the cross-hatching shows the location of orbits which are not guaranteed to be stable by Hill's criterion per \cite{1993G}.} \label{fig:coverage} \end{center} \end{figure} \section{Discussion} \subsection{Best Candidate Transit Signals} We present our best candidate transits here, for each of the five EPOCh targets. Figure \ref{fig:best_sols} shows each of the individual candidate transit events that comprise the best candidate signal, as well as the entire phased and binned signal. For HAT-P-4, the best candidate is a 2.7 $R_{\oplus}$ planet in a 3.1 day orbit; the $\Delta\chi^{2}$ significance is 61 (as compared to a detection criterion of 250). For TrES-3, the best candidate is a 2.9 $R_{\oplus}$ planet in a 2.63 day orbit; the $\Delta\chi^{2}$ significance is 87 (as compared to a detection criterion of 200). For TrES-2, the best candidate is a 3.6 $R_{\oplus}$ planet with a period of 7.22 days; the significance is a $\Delta\chi^{2}$ of 269 (as compared to a detection criterion of 200). For WASP-3, the best candidate is a 4.2 $R_{\oplus}$ planet with a period of 5.9 days; the $\Delta\chi^{2}$ significance is 232 (as compared to a detection criterion of 400). For HAT-P-7, the best candidate is a 4.4 $R_{\oplus}$ planet with a 3.9 day orbit; the significance of the detection is a $\Delta\chi^{2}$ of 201 (as compared to a detection criterion of 500).
The only candidate signal above the $\Delta\chi^{2}$ detection threshold is the one in the TrES-2 light curve; this candidate signal comprises two transit events (the other predicted events occur during gaps in the phase coverage). One of the candidate transit events occurs in a portion of the CCD that is never visited afterward; these data are therefore uncalibrated by the 2D spline flat field and are unreliable. Without the transit signal that occurs in the uncalibrated area of the CCD, the $\Delta\chi^{2}$ significance of the remaining transit is 80, which is well below the detection threshold of 200. \begin{figure}[h!] \begin{center} \includegraphics[height=6.5in]{f5.eps} \caption{The best candidate transits for the five EPOCh targets. The individual candidate transit events comprising each signal are shown at left; the phased and binned signal is shown at right. A time of zero on each X axis corresponds to the time of the first transit of the known planet observed by EPOCh. The $\Delta\chi^{2}$ significances of the HAT-P-4, TrES-3, WASP-3, and HAT-P-7 candidate signals fall below the detection criteria. While the significance of the TrES-2 candidate is above the detection criterion, one of the candidate transits (shown in the leftmost panel) occurs in a sparsely sampled part of the CCD, so the observations are uncalibrated and unreliable. Excising this candidate event, the significance of the remaining signal falls below the detection threshold.} \label{fig:best_sols} \end{center} \end{figure} \subsection{Radius constraints} From the results of our Monte Carlo analysis and phase coverage analysis, we can rule out transiting planets in the sub-Saturn radius range for HAT-P-4, TrES-3, and WASP-3, the Saturn-sized radius range for HAT-P-7, and the Neptune-sized radius range for TrES-2. We consider in particular our sensitivity to additional planets in the dynamically favorable 3:2 and 2:1 resonance orbits with the known exoplanets.
In Figure \ref{fig:coverage}, we show the detection probability as a function of period for planets ranging in size from 3 to 10 $R_{\oplus}$, with positions of the exterior 3:2 and 2:1 resonances marked by vertical lines. We also indicate in Figure \ref{fig:coverage} the regions not guaranteed to be stable by Hill's criterion, per the formula given in \cite{1993G}. Assuming an eccentricity of zero for both the known and putative additional planet, the planetary orbits are assumed to be stable if the following condition holds, where $\mu_{1}=m_{1}/M_{\star}$, $\mu_{2}=m_{2}/M_{\star}$, $\alpha=\mu_{1}+\mu_{2}$, and $\delta=\sqrt{a_{2}/a_{1}}$: \begin{equation} \alpha^{-3}\left(\mu_{1}+\frac{\mu_{2}}{\delta^{2}}\right)\left(\mu_{1}+\mu_{2}\delta\right)^{2}>1+3^{4/3}\cdot\frac{\mu_{1}\mu_{2}}{\alpha^{4/3}}. \label{eq:hillradius} \end{equation} We solve numerically for the boundaries in $\delta$ of the stable region, using the stellar masses and known-planet masses given by \cite{Kovacs07} for HAT-P-4b, \cite{Sozzetti09} for TrES-3b, \cite{Sozzetti07} for TrES-2b, \cite{Pollacco08} for WASP-3b, and \cite{Pal08} for HAT-P-7b, and conservatively using a putative mass for the second body equal to the mass of Saturn. This results in an overestimate of the extent of the Hill-unstable region for the planets with masses smaller than Saturn; while we find we are sensitive to planets with radii well below that of Saturn, the mass of putative additional planets depends on their composition and is uncertain. However, the critical $\delta$ values vary slowly with increased mass of the putative additional planet, so that increasing the mass to that of Jupiter changes the periods associated with the closest Hill-stable orbits by only 7\% at most for these systems. In some cases, the 3:2 orbital resonance is not guaranteed to be Hill-stable (though it may be stable); the exact boundary of the stable region depends on the mass we assume for the additional planet.
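The criterion in Equation~(\ref{eq:hillradius}) is straightforward to evaluate, and the outer stability boundary in $\delta$ can be located by bisection, since $\delta=1$ is always unstable and sufficiently wide separations are always stable. A minimal sketch (the masses below are illustrative placeholders, not the published values for these systems):

```python
def hill_stable(mu1, mu2, delta):
    """Gladman (1993) Hill criterion for two planets on circular
    orbits. mu_i = m_i / M_star, delta = sqrt(a2 / a1). Returns
    True if the configuration is guaranteed Hill-stable."""
    alpha = mu1 + mu2
    lhs = alpha**-3 * (mu1 + mu2 / delta**2) * (mu1 + mu2 * delta)**2
    rhs = 1.0 + 3.0**(4.0 / 3.0) * mu1 * mu2 / alpha**(4.0 / 3.0)
    return lhs > rhs

def critical_delta(mu1, mu2, lo=1.0, hi=10.0, tol=1e-10):
    """Bisect for the boundary delta above which the criterion
    guarantees stability (delta = 1, coincident orbits, is always
    unstable; the left-hand side grows without bound as delta
    increases, so exactly one crossing exists for delta > 1)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if hill_stable(mu1, mu2, mid):
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return hi

# Illustrative masses: a Jupiter-mass known planet (mu1 ~ 1e-3)
# and a Saturn-mass putative companion (mu2 ~ 3e-4).
d_crit = critical_delta(1e-3, 3e-4)   # boundary in delta = sqrt(a2/a1)
```

Converting the critical $\delta$ back to a period ratio via Kepler's third law ($P_2/P_1 = \delta^{3}$, since $\delta^{2}=a_2/a_1$) then gives the closest guaranteed-stable orbits marked in Figure~\ref{fig:coverage}.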
From the detection probabilities shown in Figure \ref{fig:coverage}, in the HAT-P-4 system, we are sensitive to planets as small as 8 $R_{\oplus}$ in the 3:2 and 2:1 resonances with HAT-P-4b (which has a period of 3.06 days) with 95\% confidence. In the TrES-3 system, with the known exoplanet in a 1.3 day orbit, we would have detected a 5 $R_{\oplus}$ planet in the 3:2 and 2:1 resonances with 70\% and 50\% probability, respectively, and an 8 $R_{\oplus}$ planet in either orbit with nearly 100\% probability. Around TrES-2 we are sensitive to the smallest planets, and would have detected a 4 $R_{\oplus}$ planet with 65\% probability in the 3:2 resonance with TrES-2b (which has a period of 2.47 days). In both the 3:2 and 2:1 resonances, we had a high probability of detecting a 5.0 $R_{\oplus}$ planet: $>$95\% in the case of the 3:2 resonance, and 90\% in the 2:1 resonance. Around WASP-3, we had a 50\% chance of detecting a 5.0 $R_{\oplus}$ planet in the 3:2 resonance with WASP-3b (which has a period of 1.85 days), and would have seen a planet as small as 8 $R_{\oplus}$ in either the 3:2 or 2:1 resonance with $>$95\% probability. Around HAT-P-7, we would have detected a Saturn-sized 10 $R_{\oplus}$ planet in either the 3:2 or 2:1 resonances with 95\% probability, and had a 70\% chance of detecting an 8 $R_{\oplus}$ planet. If we assume an inclination equal to that of the known exoplanet, we can rule out additional transiting planets around HAT-P-4, WASP-3, and HAT-P-7 in the 3:2 and 2:1 resonances of the sizes stated above, since we would still expect additional planets to transit at those orbital distances. However, the known exoplanets in both the TrES-3 and TrES-2 systems are already in grazing orbits, so additional planets in the exterior 3:2 and 2:1 resonances would not be expected to transit if they were strictly coplanar with the known exoplanet.
However, if the orbits of additional planets were misaligned by 2.0$^{\circ}$ in the case of TrES-3 and 1.4$^{\circ}$ in the case of TrES-2 (using the planetary inclinations and stellar radii from \citealt{Christiansen10a,Christiansen10b}) then we would observe a transit in both of the 3:2 and 2:1 exterior resonances. The orbital inclinations of the ice and gas giants in our solar system vary by up to nearly 2$^{\circ}$ \citep{Cox00}, so it is feasible that an additional planet in these systems could transit. \section{Acknowledgments} We are extremely grateful to the {\it EPOXI} Flight and Spacecraft Teams that made these difficult observations possible. At the Jet Propulsion Laboratory, the Flight Team has included M. Abrahamson, B. Abu-Ata, A.-R. Behrozi, S. Bhaskaran, W. Blume, M. Carmichael, S. Collins, J. Diehl, T. Duxbury, K. Ellers, J. Fleener, K. Fong, A. Hewitt, D. Isla, J. Jai, B. Kennedy, K. Klassen, G. LaBorde, T. Larson, Y. Lee, T. Lungu, N. Mainland, E. Martinez, L. Montanez, P. Morgan, R. Mukai, A. Nakata, J. Neelon, W. Owen, J. Pinner, G. Razo Jr., R. Rieber, K. Rockwell, A. Romero, B. Semenov, R. Sharrow, B. Smith, R. Smith, L. Su, P. Tay, J. Taylor, R. Torres, B. Toyoshima, H. Uffelman, G. Vernon, T. Wahl, V. Wang, S. Waydo, R. Wing, S. Wissler, G. Yang, K. Yetter, and S. Zadourian. At Ball Aerospace, the Spacecraft Systems Team has included L. Andreozzi, T. Bank, T. Golden, H. Hallowell, M. Huisjen, R. Lapthorne, T. Quigley, T. Ryan, C. Schira, E. Sholes, J. Valdez, and A. Walsh. Support for this work was provided by the {\it EPOXI} Project of the National Aeronautics and Space Administration's Discovery Program via funding to the Goddard Space Flight Center, and to Harvard University via Co-operative Agreement NNX08AB64A, and to the Smithsonian Astrophysical Observatory via Co-operative Agreement NNX08AD05A.
\section{Introduction} \footnotetext{Coteries SA, EPFL Innovation Park, Lausanne, Switzerland} Large autoregressive language models have drawn wide attention due to their zero-shot and few-shot capabilities, allowing them to be used for a wide variety of Natural Language Processing tasks without the need for task-specific finetuning or annotation data~\cite{radford2019language,brown2020language}. Additionally, previous work highlights the improved sample and compute efficiency of larger models, generally justifying the move towards larger models~\cite{kaplan2020scaling}. Although large language models, such as GPT-3~\cite{brown2020language}, have been trained on multilingual corpora, the performance on NLP tasks may vary significantly between languages. Assessing zero-shot performance in non-English languages is challenging due to the limited number of human-curated benchmarks available. However, with the exception of recent work in machine translation~\cite{tran2021facebook}, multilingual models generally perform worse than mono- or bilingual language models~\cite{arivazhagan2019massively}. Monolingual autoregressive language models in French have previously been proposed. GPT-fr~\cite{simoulin2021modele} and PAGnol~\cite{launay2021pagnol} have been trained on filtered versions of Common Crawl\footnote{\url{https://commoncrawl.org/}} and CCNet~\cite{wenzek2019ccnet}, respectively. Both works highlight the importance of deduplicating and filtering of pre-training data and use decoder-only transformer architectures, closely following the GPT models, with model sizes reaching 1B and 1.5B parameters, respectively. Notably, these works do not directly compare performance against extreme-scale multilingual models, such as GPT-3, in particular with regard to zero-shot tasks. Previous work on the various encoding biases in large language models highlights the importance of dataset curation and documentation~\cite{bender2021dangers,caswell2021quality}.
Experiments conducted on GPT-3 (which has been trained on 570GB of text data from Common Crawl) show that the model may generate toxic sentences even when prompted with non-toxic text~\cite{gehman2020realtoxicityprompts}. Although applying filtering of training data using automated toxicity scores may introduce classifier-specific biases~\cite{welbl2021challenges}, this technique remains more effective than decoder-based detoxification using methods such as swear word filters, PPLM~\cite{dathathri2019plug}, soft prompt tuning~\cite{lester2021power} or toxicity control tokens~\cite{keskar2019ctrl}. As a consequence of the aforementioned risks, the trend towards larger models coincides with a trend to not release models publicly. Controlling access to large language models may protect against certain bad actors but also limits reproducibility and research efforts to mitigate the negative properties of such models. In a push for building models in the open, EleutherAI, a grassroots collective of researchers, released GPT-J~\cite{gpt-j}, a 6B parameter English language model. This model was trained on the Pile [20], an 825GB text corpus by the same collective. The contributions of this paper are as follows: (1) We introduce Cedille, an openly available French language model built on GPT-J, which is capable of achieving competitive zero-shot performance against existing French language models and GPT-3. (2) We release the toxicity scores of the complete French C4 dataset, and (3) we provide a comparison of Cedille's toxicity to that of other language models (including GPT-3). \section{Methods} \label{sec:methods} \subsection{Model architecture} \label{sub:model_architecture} Our model architecture is identical to GPT-J~\cite{gpt-j}.
GPT-J uses a similar transformer architecture to the one used in 6.7B GPT-3 with three main differences: (1) No sparse attention patterns were used; (2) the dimension of the attention head was increased from 128 to 256; and (3) Rotary positional embeddings~\cite{su2021roformer} were used instead of sinusoidal embeddings. See Table~\ref{tab:tab_model} for more details. \begin{Table} \renewcommand*{\arraystretch}{1.2} \begin{tabular}{l|l} Number of parameters & \num{6053381344} \\ \hline Number of layers $N$ & 28 \\ \hline Model dimensions $d_{\text{model}}$ & \num{4096} \\ \hline Feed-forward dimension $d_{\text{ff}}$ & \num{16384} \\ \hline Number of attention heads $n_{\text{heads}}$ & \num{16} \\ \hline Head dimension $d_{\text{head}}$ & \num{256} \\ \hline Context size & \num{2048} \\ \hline Vocab size & \num{50257} \end{tabular} \captionof{table}{Cedille model details.} \label{tab:tab_model} \end{Table} \subsection{Training data} \label{sub:training_data} Cedille is trained on a filtered version of the French part of the multilingual C4 (mC4) dataset~\cite{xue2020mt5}, which contains 332M documents or 1.1TB of uncompressed text. mC4 is extracted from 71 Common Crawl snapshots (years 2013 to 2020) and uses CLD3\footnote{\url{https://github.com/google/cld3}}, a small feed-forward neural network, for language identification. mC4 filters out pages with fewer than three lines of at least 200 characters. We apply two different forms of filtering to the dataset: 1) toxicity filtering using the Detoxify model~\cite{Detoxify} and 2) loss filtering using the FlauBERT model~\cite{le2019flaubert}. For both filtering steps we compute the metric on a per-document level for the entire base dataset. In some cases chunking the documents into splits of \num{1200} characters was necessary due to the fixed context size of the models used. Chunks smaller than 600 characters were not evaluated. The predictions were run on TPU v3-8 machines with 8-fold data parallelism each.
Each percentile as well as the tails of both the loss and the toxicity distribution were sampled and manually inspected to find suitable cut-off values for filtering. The inspection of these samples revealed that both the toxicity and loss cut-off values were appropriate\footnote{Despite the positive visual inspection, a bug in the loss computation was discovered much later in the analysis. Further investigation revealed that roughly 10\% of samples were wrongly included in the final dataset as a result. Although it cannot be fully ruled out, we do not believe that a systematic bias was introduced.}. We removed documents corresponding to a toxicity score higher than 0.5, corresponding to 0.25\% of the content (0.8M documents). For the loss filtering we considered the loss distribution of each of the \num{2048} files and removed documents below the 0.2 loss percentile (corresponding to a loss value of roughly 4.5) and above an absolute loss value of 10. This corresponded to a removal of roughly 20\% of all documents (66M documents). The combined filtering led to a final training set of 265M documents, which corresponds to roughly 773GB of uncompressed text. The text was then run through the \texttt{fix\_text} method of the Python library ftfy~\cite{speer-2019-ftfy} using NFKC normalization and encoded using the unmodified GPT-2 tokenizer. Documents were simply concatenated and split into samples of \num{2049} tokens. The final training set yielded a total of 130M samples corresponding to 268B tokens. \subsection{Training process} \label{sub:training_process} Cedille was trained starting from the official GPT-J model checkpoint using the mesh-transformer-jax codebase~\cite{mesh-transformer-jax}. Training was conducted on a v3-128 TPU VM using 16-fold data parallelism and 8-fold model sharding. For all our experiments we used an effective batch size of 256. We used a linear warmup of 42k steps up to a peak learning rate of 5e-5 and a cosine decay to 1e-5. Weight decay was set to 0.1.
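A minimal sketch of this learning-rate schedule, assuming the cosine decay runs from the end of warmup to the final training step (the exact decay horizon used by the training codebase is our assumption; the text only states the warmup length, peak, and final learning rate):

```python
import math

PEAK_LR, FINAL_LR = 5e-5, 1e-5
WARMUP_STEPS, TOTAL_STEPS = 42_000, 150_000

def learning_rate(step):
    """Linear warmup from 0 to the peak LR over WARMUP_STEPS,
    then a cosine decay from the peak LR down to FINAL_LR by
    TOTAL_STEPS (held constant afterwards)."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    progress = (step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS)
    progress = min(progress, 1.0)
    return FINAL_LR + 0.5 * (PEAK_LR - FINAL_LR) * (1 + math.cos(math.pi * progress))
```

For example, `learning_rate(21_000)` is halfway up the warmup ramp (2.5e-5), and the schedule reaches exactly 1e-5 at step 150k.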
Cedille was trained for 150k steps, which corresponds to 0.3 epochs on the training set or 78.7B tokens. The starting and final training perplexities were 6.13 and 3.89, respectively. During training we monitored the loss on a dataset of French news stories published too recently to be part of the training data. \subsection{Evaluation} \label{sub:evaluation} Zero-shot performance was evaluated using a forked version of the lm-evaluation-harness codebase~\cite{eval-harness}. In particular, we added a different way of evaluating perplexity using strides (see section~\ref{sec:perplexity}), implemented the various benchmarks discussed in this work, and integrated the mesh-transformer-jax library (for evaluating checkpoints on TPUs) and the Pagnol model families. Benchmarking was conducted on v3-8 TPU VMs and on A100 GPUs. Toxicity evaluation was conducted using a modified version of the real-toxicity-prompts codebase\footnote{\url{https://github.com/allenai/real-toxicity-prompts}}. The main difference is the use of the Detoxify model in order to predict toxicity (see section~\ref{sec:tox_analysis}). Our adapted codebase is available at \url{https://github.com/coteries/real-toxicity-prompts}. 
\section{Tasks} \subsection{Perplexity} \label{sec:perplexity} \begin{Table} \renewcommand*{\arraystretch}{1.2} \begin{tabular}{lrrr} \toprule Model & \#params & Byte-PPL & Token-PPL \\ \midrule GPT-3 (ada) & 1.3B\footnote{OpenAI has not officially disclosed the size of the models provided by their API; however, recent experiments suggest the mapping presented in the table~\cite{gpt3_model_sizes}.} & 1.930 & 7.952 \\ GPT-3 (babbage)& 6.7B & 1.973 & 6.447 \\ GPT-3 (curie)& 13B & 1.809 & 5.082 \\ GPT-3 (davinci)& 175B & 1.656 & 3.993 \\ GPT-J & 6.05B & 1.746 & 5.797 \\ \textbf{Cedille}& 6.05B &\textbf{1.646} & \textbf{3.932} \\ Pagnol (small)& 124M & 1.852 & 17.802 \\ Pagnol (medium)& 335M & 1.775 & 14.623 \\ Pagnol (large)& 773M & 1.725 & 12.791 \\ GPT-fr (base)& 1B & 2.090 & 11.882 \\ \bottomrule \end{tabular} \captionof{table}{Byte-level and token-level perplexity scores on the WikiText-fr benchmark (lower is better).} \label{tab:tab_ppl} \end{Table} Zero-shot perplexity was evaluated on the test subset of the WikiText-fr\footnote{\url{https://huggingface.co/datasets/asi/wikitext_fr}} dataset~\cite{simoulin2021modele}, containing articles from the French Wikipedia which are part of the ``quality articles'' or ``good articles'' categories, similar to the English WikiText-103 dataset~\cite{merity2016pointer}. The test set contains 589k words or 3.7M characters of cleaned French text from 60 articles. We evaluated perplexity by concatenating the text without further preprocessing and using a sliding window approach~\cite{ppl_fixed_length} with a stride of 512 tokens. Therefore, models with a context window of \num{1024} tokens (GPT-fr, Pagnol) had 512 tokens of context, whereas models with a context window of \num{2048} tokens had \num{1536} tokens of context. Table~\ref{tab:tab_ppl} shows the summed log-likelihoods normalized both by the number of characters and by the number of tokens.
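The sliding-window bookkeeping can be sketched as follows: each token is scored exactly once, with up to (context $-$ stride) tokens of preceding context, matching the context lengths stated above:

```python
def sliding_windows(n_tokens, context, stride):
    """Return (start, end, score_from) triples: the model is run on
    tokens[start:end], but log-likelihoods are only summed from index
    score_from onward. Every token is scored exactly once, and in the
    steady state each scored chunk of `stride` tokens sees
    (context - stride) tokens of preceding context."""
    windows = []
    pos = 0  # index of the next token to be scored
    while pos < n_tokens:
        end = min(pos + stride, n_tokens)
        start = max(0, end - context)
        windows.append((start, end, pos))
        pos = end
    return windows
```

With `context=2048` and `stride=512`, a steady-state window such as `(512, 2560, 2048)` scores tokens 2048-2559 with 1536 tokens of context, as in the evaluation described above; with `context=1024` the same construction yields 512 tokens of context.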
Note that the token-level perplexity for GPT-fr and Pagnol is not directly comparable to that of the other models, as they do not use the (English) GPT-2 tokenizer. Cedille achieves the lowest perplexity score out of the analyzed models, clearly outcompeting existing French language models and narrowly outcompeting GPT-3 (davinci). Unsurprisingly, models with larger context windows generally perform better at this task. It is noteworthy that the test dataset is likely contained in the training data, as no dataset-specific filtering of the training data was conducted as part of this work. \subsection{Summarization} \label{sec:summarization} We evaluated the summarization capabilities on the OrangeSum benchmark, as introduced in the BARThez work~\cite{eddine2020barthez} as a French equivalent of XSum~\cite{narayan2018don}. The benchmark contains news articles published between February 2011 and September 2020, scraped from the French website ``Orange Actu''. The models were given the news article in the test subset using the following prompt: \texttt{\{article text\}\textbackslash nPour r\'esumer :} The models were tasked to generate 100 tokens using top-$k$ of 2 and a temperature of 1, following the methodology in~\cite{radford2019language}. We used greedy decoding (top-$k=1$) for GPT-3, since at the time of this work being conducted, the API didn't allow for other top-$k$ values. When the prompt exceeded the context window of the model it was left-side truncated. The output was then clipped to contain at most 3 sentences (using simplistic sentence splitting at the period character). Table~\ref{tab:tab_sum} shows the ROUGE score~\cite{lin2004rouge} of the output compared to the titles of the corresponding articles.
\begin{Table} \renewcommand*{\arraystretch}{1.2} \begin{tabular}{lrrr} \toprule Model & $R_1$ & $R_2$ & $R_L$ \\ \midrule GPT-3 (ada) & 13.95 & 4.75 & 11.59 \\ GPT-3 (babbage) & 4.62 & 1.76 & 3.86 \\ GPT-3 (curie) & 5.28 & 2.21 & 4.42 \\ \textbf{GPT-3 (davinci)} & \textbf{15.49} & \textbf{5.82} & \textbf{13.05} \\ GPT-J & 14.46 & 4.72 & 11.68 \\ Cedille & 14.74 & 4.83 & 11.86 \\ Pagnol (small) & 8.52 & 1.61 & 7.24 \\ Pagnol (medium) & 8.98 & 1.86 & 7.55 \\ Pagnol (large) & 9.19 & 1.85 & 7.71 \\ GPT-fr (base) & 10.15 & 2.60 & 8.27 \\ \bottomrule \end{tabular} \captionof{table}{Performance of summarization in French. Shown are the ROUGE scores on the OrangeSum dataset (higher is better).} \label{tab:tab_sum} \end{Table} Generally, we observed some variance due to the non-greedy sampling procedure. However, computational limitations and cost made it difficult to estimate this variance. We also observed that the choice of the prefix (``Pour r\'esumer :'') strongly influences the scores. Some of the evaluated models are also more likely to generate bullet point summaries, rather than a single sentence, which may again lead to different sentence splitting. This may explain the increased score for GPT-3 (ada) compared to larger GPT-3 models. Nevertheless, the scores provided in Table~\ref{tab:tab_sum} give some rough indication of summarization performance. \subsection{Question Answering (QA)} \label{sec:qa} Question answering (QA) was evaluated on FQuAD (French Question Answering Dataset)~\cite{d2020fquad}, a dataset inspired by the English SQuAD equivalent~\cite{rajpurkar2016squad}. The models were evaluated on the validation subset, which contains 3188 human-curated question-answer pairs, based on 768 high-quality French Wikipedia articles. 
\begin{Table} \renewcommand*{\arraystretch}{1.2} \begin{tabular}{lrr} \toprule Model & $F1$ & Exact match (\%)\\ \midrule GPT-3 (ada) & 19.09 & 4.48 \\ GPT-3 (babbage) & 26.16 & 8.81 \\ \textbf{GPT-3 (curie)} & \textbf{39.49} & \textbf{17.84} \\ GPT-3 (davinci) & - & - \\ GPT-J & 26.14 & 6.96 \\ Cedille & 34.59 & 12.23 \\ Pagnol (small) & 10.66 & 0.43 \\ Pagnol (medium) & 13.80 & 0.84 \\ Pagnol (large) & 17.67 & 2.72 \\ GPT-fr (base) & 15.15 & 2.03 \\ \bottomrule \end{tabular} \captionof{table}{Question-answering F1 and exact match scores in French on the FQuAD benchmark (higher is better).} \label{tab:tab_fquad} \end{Table} The models were evaluated using the SQuAD v2 metric~\cite{rajpurkar2016squad}, which also takes into consideration ``no answer'' probabilities, i.e.\ cases when no answer to a particular question is possible given the context. The models were tasked to generate 100 tokens and at most 1 sentence using greedy sampling and the following prompt: \texttt{Titre: \{title\}\textbackslash nContexte: \{context\}\textbackslash n\textbackslash n \\ Question: \{question\}\textbackslash n\textbackslash nR\'eponse:} The ``no answer'' probabilities were calculated against the string: \texttt{\{prompt\} Sans r\'eponse.} However, all questions in the evaluated data contained exactly one answer. The results in Table~\ref{tab:tab_fquad} show that GPT-3 is very competitive on this task, with GPT-3 (curie) outperforming Cedille and all other evaluated models. GPT-3 (davinci) was not evaluated on this task for cost reasons, as OpenAI did not support our request for funding at the time of writing. The results may be contrasted to a finetuned version of CamemBERT~\cite{martin2019camembert} which yields an F1 of 88\% and exact match of 78\% on this dataset~\cite{d2020fquad}. \subsection{Translation} \label{sec:translation} Zero-shot translation was evaluated for the language pair English and French on the WMT14 dataset~\cite{bojar2014findings}.
Traditionally, such benchmarks are evaluated using the BLEU score~\cite{papineni2002bleu}. The datasets contain \num{3003} samples each and are provided by the sacrebleu library~\cite{post-2018-call}. The zero-shot task is formulated using the following pattern: \texttt{\{source\_lang\} phrase: \{text\}\textbackslash n\{target\_lang\} phrase:} where \texttt{source\_lang} and \texttt{target\_lang} are French or English, depending on the direction. Greedy sampling is used to generate 256 tokens. The output was clipped to at most 1 sentence. Cedille outperforms other models for the direction English to French, highlighting the strong French writing capabilities (see Table~\ref{tab:tab_translation}). Likewise, GPT-3 (davinci) performs better for the French to English direction. Monolingual models, such as Pagnol and GPT-fr, perform worse at this task, presumably due to the limited amount of English that was part of their pretraining data. Often, smaller models were unable to follow the instructions and simply repeated the context in the given language. As opposed to summarization and question-answering benchmarks, the target is generally not part of the context, therefore simply repeating the input normally results in a low score. As of 2021, dedicated neural machine translation solutions, such as Very Deep Transformers, reach 46.4 BLEU for English to French translation~\cite{liu2020very}.
\begin{Table} \renewcommand*{\arraystretch}{1.2} \begin{tabular}{lrr} \toprule Model & BLEU (en\textrightarrow fr) & BLEU (fr\textrightarrow en)\\ \midrule GPT-3 (ada) & 2.71 & 16.64 \\ GPT-3 (babbage) & 3.20 & 24.56 \\ GPT-3 (curie) & 13.45 & 27.15 \\ \textbf{GPT-3 (davinci)} & 20.40 & \textbf{27.70} \\ GPT-J & 14.71 & 26.06 \\ \textbf{Cedille} & \textbf{24.89} & 20.59 \\ Pagnol (small) & 0.76 & 1.20 \\ Pagnol (medium) & 1.07 & 1.48 \\ Pagnol (large) & 1.06 & 3.47 \\ GPT-fr (base) & 1.47 & 1.57 \\ \bottomrule \end{tabular} \captionof{table}{BLEU scores for translation on WMT14 for the English-French language pair (higher is better).} \label{tab:tab_translation} \end{Table} \section{Toxicity analysis} \label{sec:tox_analysis} In order to evaluate the toxicity of the model we closely followed the work conducted in~\cite{gehman2020realtoxicityprompts}. We studied the case of unprompted (i.e.\ conditioned only on a start-of-sentence token) and prompted generation. The original work in~\cite{gehman2020realtoxicityprompts} used the Perspective API, a service that uses machine learning classifiers to estimate the perceived toxicity of text. In this work, we employ the Detoxify tool~\cite{Detoxify} instead. We made this choice as the underlying models used by Perspective evolve with time and are not released publicly, which limits experimental reproducibility. Detoxify assigns a toxicity score between 0 and 1, with 1 denoting ``a very hateful, aggressive, or disrespectful comment''. We refer to content with a score $>0.5$ as ``toxic''. We use the ``multilingual'' Detoxify model from release v0.4.0, and compare the toxicity of Cedille's output to that of four other models: GPT-2 (117M), GPT-3 (davinci), GPT-J, and GPT-fr (base). \subsection{Unprompted toxicity} \label{sec:tox_unprompted} For the unprompted toxicity we analyze the expected maximum toxicity, i.e.\ the expected worst-case toxicity score given $N$ unprompted generations.
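This expected-maximum statistic can be estimated by bootstrap resampling of the per-generation toxicity scores. A minimal sketch (the scores list stands in for Detoxify outputs on real generations):

```python
import random

def expected_max_toxicity(scores, n, iters=1000, seed=0):
    """Bootstrap estimate of the expected worst-case toxicity among
    n generations: repeatedly resample n scores with replacement and
    average the maxima over the bootstrap iterations."""
    rng = random.Random(seed)
    maxima = [max(rng.choices(scores, k=n)) for _ in range(iters)]
    return sum(maxima) / len(maxima)
```

By construction the estimate is non-decreasing in $n$: drawing more generations can only push the worst case up, which is why the curves for all models rise with the number of generations.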
Figure~\ref{fig:fig_tox} shows bootstrap estimates (\num{1000} iterations) of the expected maximum toxicity for $N$ generations with variance bounds as shades. In this setting, Cedille consistently generates content with lower expected maximum toxicity than GPT-2, GPT-J, and GPT-3. After 100 generations, this value is under \num{0.5} for GPT-fr and Cedille (0.41 and 0.48, respectively), which means that the worst content from these models is not expected to be toxic. This is in contrast with the other models, for which the expected maximum toxicity values are 0.64, 0.54, and 0.56. After 10K generations, Cedille and GPT-fr are the only models for which the expected worst outputs don't reach a toxicity level of 1.0. We expect all other models to have at least one output that is maximally toxic as detected by Detoxify. Generally the two models that perform best are GPT-fr and Cedille, which were both trained on carefully filtered datasets, pointing to the importance of dataset curation when considering the safety of language models. Without any conditioning, the multilingual models almost exclusively generate English content: this is the case of GPT-2, GPT-J and GPT-3. However, with the Detoxify model being multilingual, the toxicity scores remain comparable. \begin{figure*} \centering \includegraphics[width=.5\textwidth]{figures/eos_prompts.pdf} \caption{Unprompted expected maximum toxicity against increasing numbers of generations.} \label{fig:fig_tox} \end{figure*} \subsection{Prompted toxicity} \label{sec:tox_prompted} For prompted toxicity we used a set of 50 French prompts with values of toxicity spanning the full range, with a mean of 0.34. The set of prompts was selected randomly from the RealToxicityPrompt dataset and manually translated from English to French by a French native speaker. We used a smaller number of prompts than in~\cite{gehman2020realtoxicityprompts} due to limited computing resources.
The French prompts cause the multilingual models (GPT-2, GPT-J and GPT-3) to generate French content. For each prompt, each model generates 50 completions. We used nucleus sampling with $p=0.9$ to generate up to 20 tokens per continuation, following the protocol from~\cite{gehman2020realtoxicityprompts}. Table~\ref{tab:tab_prompted_tox} shows two properties: 1) the expected maximum toxicity over 25 generations (with standard deviations in parentheses) and 2) the empirical probability of generating toxic text at least once among 25 generations. \begin{Table} \renewcommand*{\arraystretch}{1.2} \begin{tabular}{lrr} \toprule Model & Exp. max tox. & Prob. toxicity \\ \midrule GPT-2\footnote{Upon manual inspection, it appeared that GPT-2 is unable to generate sensible French content, and as such the resulting toxicity values can't be compared to other models.} & \textit{0.63 (0.23)} & \textit{0.66} \\ GPT-3 (davinci) & 0.68 (0.27) & 0.74 \\ GPT-J & 0.73 (0.26) & 0.78 \\ \textbf{Cedille} & \textbf{0.66 (0.27)} & \textbf{0.72} \\ GPT-fr (base) & 0.73 (0.27) & 0.78 \\ \bottomrule \end{tabular} \captionof{table}{Toxicity of prompted generations.} \label{tab:tab_prompted_tox} \end{Table} For both properties, Cedille outperforms the other models (setting aside GPT-2, whose outputs are not comparable; see the table note). We can see again that Cedille is less toxic than GPT-J, indicating that the training not only improved the model's French capabilities, but also increased its safety. \section{Conclusions} In this work we introduced Cedille, a large auto-regressive French language model. Our work shows that monolingual models such as Cedille can be competitive with extreme-scale multilingual language models such as GPT-3. Compared to existing French language models, Cedille is capable of performing well on zero-shot natural language understanding tasks and reaches a new state-of-the-art perplexity score on the French WikiText corpus. 
Lastly, our approach of toxicity filtering of the training data led to a decrease in both maximum toxicity and the likelihood of toxic output. As a result of the finetuning approach starting from GPT-J, Cedille has been exposed to a large amount of both English and French language data from the Pile and French mC4. This combination allows for competitive zero-shot translation scores for the French-English language pair. Early experiments indicate that finetuning an existing English language model and adapting it to French is more efficient than training from scratch, even with considerable compute and data investments (see appendix). Given the scarcity of high-quality human-curated datasets in non-English languages it is especially challenging to provide a fair comparison of language models. For the zero-shot benchmarks we observed a high degree of sensitivity towards evaluation settings such as prefixes, sampling parameters, and type of evaluation metric. The scores should therefore only be considered as rough guidance, and model performance may be highly task specific. In this work we have not provided performance metrics for other NLP tasks such as text classification or word sense disambiguation. Furthermore, this work focused on zero-shot evaluation, ignoring few-shot or finetuning approaches. Apart from training larger models, a possible path forward is to deduplicate training data. This method has been shown to improve end-task performance significantly~\cite{wenzek2019ccnet,lee2021deduplicating} but was not conducted as part of this work. A possible direction for further reducing language model toxicity is the integration of human feedback in the training process~\cite{ouyangtraining}. \paragraph{Data availability.} Cedille is available under the MIT License on the Hugging Face model hub: \url{https://huggingface.co/Cedille/fr-boris}, and on our GitHub repository: \url{https://github.com/coteries/cedille-ai}. 
Regarding the French mC4 toxicity scores and toxicity analysis code, please refer to: \url{https://github.com/coteries/real-toxicity-prompts}. \paragraph{Funding.} This work was funded by, and conducted at, Coteries SA\footnote{\url{https://coteries.com}}. The model was trained on Cloud TPUs provided by Google's TPU Research Cloud program. \paragraph{Acknowledgments.} We thank S\'ebastien Flury and Fran\c{c}ois Bochatay for their guidance and feedback. Tiago Castanheiro, Flavien Bonvin and Livio Gamassia implemented the web-based Playground used to evaluate the model. Tiago Castanheiro, Flavien Bonvin, Sacha Toufani, Livio Gamassia, and Kasper Andkjaer tested out multiple versions of the model. S\'ebastien Von Roth designed the Cedille logo as well as the visual design of the Playground and Cedille website\footnote{\url{https://cedille.ai}}. Sonja Dossenbach assembled the dataset of recent French news. We are grateful to EleutherAI for publicly releasing the GPT-J model and offering us support on their Discord server\footnote{\url{https://discord.gg/zBGx3azzUn}}. We thank the TPU Research Cloud team for their access to Cloud TPUs and their support. \raggedcolumns \printbibliography \end{multicols} \pagebreak \beginsupplement \begin{center} \noindent\rule{\textwidth}{1.5pt} \\ \vspace{.2cm} \textsc{\Huge{Supplementary Material}} \\ \vspace{.1cm} \noindent\rule{\textwidth}{1.5pt}\\ \vspace{.5cm} \end{center} \section{Experiments training from scratch} Given the amount of compute and data available, training from scratch rather than finetuning was considered. We experimented with training Cedille from scratch using both the GPT-2 tokenizer (Cedille-fs-GPT2, vocab size \num{50400}) and the GPT-fr tokenizer (Cedille-fs-GPTfr, vocab size \num{50000}) for 60k steps, using a peak learning rate of 1.2e-4, an end learning rate of 1.2e-5, and \num{7281} warm-up steps. 
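For reference, such a schedule can be sketched as below. Only the peak, end, and warm-up values are stated above; the linear warm-up and cosine decay shapes are assumptions for illustration:

```python
import math

PEAK_LR, END_LR = 1.2e-4, 1.2e-5
TOTAL_STEPS, WARMUP_STEPS = 60_000, 7_281

def learning_rate(step):
    """Assumed schedule: linear warm-up to the peak LR,
    then cosine decay down to the end LR."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    progress = (step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS)
    return END_LR + 0.5 * (PEAK_LR - END_LR) * (1 + math.cos(math.pi * progress))

# The peak is reached at the end of warm-up, the end LR at the final step.
```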
These two variants are therefore only trained on one third of the data compared to the released Cedille model (150k steps). In order to have a fair comparison we show the result of Cedille after the same number of steps (Cedille-60k). All models were trained on the same filtered mC4 dataset, as described in this work. As shown in Table~\ref{tab:tab_from_scratch}, Cedille-60k outperforms the from-scratch variants on the WikiText-fr benchmark. However, due to compute limitations we did not run the variants for longer than 60k steps, and it is possible that we could have reached similar performance after 150k steps. Furthermore, the two variants perform very similarly even though they use different tokenizers. We therefore conclude that although a dedicated French tokenizer is far more efficient at encoding French text than the GPT-2 tokenizer, its benefit with regard to end-task performance was minimal in our experiments. \begin{table}[h!] \renewcommand*{\arraystretch}{1.2} \centering \begin{tabular}{lrr} \toprule Model & PPL (byte) & PPL (token) \\ \midrule GPT-J & 1.746 & 5.797 \\ \textbf{Cedille-60k} & \textbf{1.673} & \textbf{4.112} \\ Cedille-fs-GPT2 & 1.794 & 4.972 \\ Cedille-fs-GPTfr & 1.775 & 6.856 \\ \bottomrule \end{tabular} \caption{ Byte-level and token-level perplexities for the WikiText-fr benchmark. Cedille-60k is the Cedille model at checkpoint 60k (out of 150k), Cedille-fs-GPT2 and Cedille-fs-GPTfr are models trained for 60k steps on the same dataset, but with random weight initialization. } \label{tab:tab_from_scratch} \end{table} \end{document}
\section{I. $J_{z}/t=2$ results} Fig.~\ref{fig:Jz2p0OccET} shows the single-particle Green's function and SDW-XY correlation function at the ribbon edge as a function of the interlayer interaction $J/t$ when $J_{z}/t=2$. The bulk quantum critical point is obtained from the energy curves and SDW-XY magnetic structure factors, which are shown in the following sections. The $J_{z}/t=2$ case shows behavior similar to the $J_{z}/t=0$ and $1$ cases: the single-particle Green's function at the ribbon edge decays exponentially before the bulk quantum phase transition, while the SDW-XY correlation function still decays as a power law. \begin{figure}[htp!] \centering \includegraphics[width=0.8\columnwidth]{Suppl_Jz2p0_EdgeCorr.eps} \caption{Single-particle Green's function (a) and SDW-XY correlation function (b) at the ribbon edge as a function of $J/t$ when $J_{z}/t=2$.} \label{fig:Jz2p0OccET} \end{figure} \section{II. Magnetic orders} The Ising-like $J_{z}$ term in our Hamiltonian can be decomposed into the following three terms, \begin{equation} -\frac{J_{z}}{4}\sum_{i}\left[(\hat{n}_{1i\uparrow}-\hat{n}_{1i\downarrow})-(\hat{n}_{2i\uparrow}-\hat{n}_{2i\downarrow})\right]^{2}=-\frac{J_{z}}{4}\sum_{\xi,i,\sigma}\hat{n}_{\xi i\sigma}+\frac{J_{z}}{2}\sum_{\xi,i}\hat{n}_{\xi i\uparrow}\hat{n}_{\xi i\downarrow}+2J_{z}\sum_{i}\hat{S}_{1i}^{z}\hat{S}_{2i}^{z}. \end{equation} The first term is an on-site potential, the second is an on-site Coulomb repulsion, and the third is an Ising exchange interaction between sites in the two layers. When $J_{z}\gg J$, the $J_{z}$ term drives the system into an Ising antiferromagnetically ordered (SDW-Z) state. 
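For completeness, the decomposition above follows from expanding the square and using $\hat{n}_{\xi i\sigma}^{2}=\hat{n}_{\xi i\sigma}$ together with $\hat{S}_{\xi i}^{z}=(\hat{n}_{\xi i\uparrow}-\hat{n}_{\xi i\downarrow})/2$:

```latex
\begin{align*}
\left[(\hat{n}_{1i\uparrow}-\hat{n}_{1i\downarrow})-(\hat{n}_{2i\uparrow}-\hat{n}_{2i\downarrow})\right]^{2}
&=\sum_{\xi}(\hat{n}_{\xi i\uparrow}-\hat{n}_{\xi i\downarrow})^{2}
  -2(\hat{n}_{1i\uparrow}-\hat{n}_{1i\downarrow})(\hat{n}_{2i\uparrow}-\hat{n}_{2i\downarrow})\\
&=\sum_{\xi,\sigma}\hat{n}_{\xi i\sigma}-2\sum_{\xi}\hat{n}_{\xi i\uparrow}\hat{n}_{\xi i\downarrow}
  -8\,\hat{S}_{1i}^{z}\hat{S}_{2i}^{z},
\end{align*}
```

and multiplying by $-J_{z}/4$ reproduces the three terms term by term.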
We define the SDW-Z antiferromagnetic order parameter along the $z$ direction as follows \begin{equation} \begin{split} M_{AA}^{zz}(\mathbf{r}_{j}-\mathbf{r}_{i})=&S_{A_{1}A_{1}}^{zz}(\mathbf{r}_{j}-\mathbf{r}_{i})-S_{A_{1}A_{2}}^{zz}(\mathbf{r}_{j}-\mathbf{r}_{i}) -S_{A_{2}A_{1}}^{zz}(\mathbf{r}_{j}-\mathbf{r}_{i})+S_{A_{2}A_{2}}^{zz}(\mathbf{r}_{j}-\mathbf{r}_{i})\\ S_{mn}^{zz}(\mathbf{r}_{j}-\mathbf{r}_{i})=&\bra{\Psi}\hat{S}_{i}^{z}\hat{S}_{j}^{z}\ket{\Psi}/\braket{\Psi|\Psi},\quad i\in m,\ j\in n \end{split} \end{equation} From Fig.~\ref{fig:SzzStrFct}, there are neither SDW-XY nor SDW-Z magnetic orders (and no time-reversal symmetry breaking) in the whole $J/t>0$ parameter regime when $J_{z}/t\le 2.0$. However, when $J_{z}/t=3.0$, SDW-Z order emerges in the middle of the $J/t$ parameter region. \begin{figure}[htp!] \centering \includegraphics[width=0.7\columnwidth]{Suppl_MagneticOrder.eps} \caption{SDW-XY (a,c) and SDW-Z (b,d) structure factors as a function of $J/t$ and linear system size $L$ for $J_{z}/t=2$ and $J_{z}/t=3$. There is no SDW-Z magnetic order when $J_{z}/t\le 2$ in the whole $J/t$ regime. Around the bulk quantum phase transition critical point (QCP), the SDW-XY structure factor increases as a power law with system size $L$; however, the exponent is less than 2, which means that no SDW-XY magnetic order develops around the QCP in the thermodynamic limit.} \label{fig:SzzStrFct} \end{figure} \section{III. Energy curves} We plot the expectation values of four parts of the Hamiltonian in Fig.~\ref{fig:EngySite} as a function of $J/t$ for different $J_{z}/t$ values. From the inflection points of the energy curves and the magnetic structure factors shown in Fig.~\ref{fig:SzzStrFct}, we can obtain the approximate bulk quantum phase transition points without calculating the energy gaps. \begin{figure}[htp!] 
\centering \includegraphics[width=0.7\columnwidth]{Suppl_EnergyPerSite.eps} \caption{Ground state energy per site as a function of $J/t$ when $J_{z}/t=1$ and $2$. The linear system size used here is $L=15$. Combined with Fig.~\ref{fig:SzzStrFct}, we obtain the phase diagram shown in Fig.~1(b) of the main text.} \label{fig:EngySite} \end{figure} \section{IV. Other matrix elements of edge Green's function and O(4) correlation function} In the main text, we only show the Green's function between the $A_{1}$ and $B_{1}$ sublattices in the same layer along the ribbon edge, {\it{i.e.}}, an off-diagonal term of the edge Green's function matrix. Here we show that the diagonal parts of the Green's function matrix behave similarly to the off-diagonal part. \begin{figure}[htp!] \centering \includegraphics[width=0.4\columnwidth]{Suppl_Trace_EdgeGF.eps} \caption{The trace of the single-particle Green's function matrix at the ribbon edge as a function of $J/t$ when $J_{z}/t=0$.} \label{fig:TraceOccET} \end{figure} Fig.~\ref{fig:TraceOccET} shows the trace of the single-particle Green's function matrix $\text{Tr}\mathbf{G}_{\mathbf{r}}^{\uparrow}=G_{A_{1}A_{1}}^{\uparrow} +G_{A_{2}A_{2}}^{\uparrow}+G_{B_{1}B_{1}}^{\uparrow}+G_{B_{2}B_{2}}^{\uparrow}$ at the ribbon edge as a function of $J/t$ when $J_{z}/t=0$. The diagonal part of the single-particle Green's function at the edge also decays exponentially before the bulk quantum phase transition. For the SDW-XY correlation matrix, we have shown $|N_{AA}^{+-}|$ (with combined elements) in the main text. 
Here, we also show the power-law decay of $|N_{BB}^{+-}|$ and $|S_{A_{1}B_{1}}^{+-}|$ before the bulk quantum phase transition in Fig.~\ref{fig:OtherSDWXY}, where $N_{BB}^{+-}$ is defined as \begin{equation} N_{BB}^{+-}(\mathbf{r}_{j}-\mathbf{r}_{i})=\frac{1}{2}[S_{B_{1}B_{1}}^{\pm}(\mathbf{r}_{j}-\mathbf{r}_{i})-S_{B_{1}B_{2}}^{\pm}(\mathbf{r}_{j}-\mathbf{r}_{i}) -S_{B_{2}B_{1}}^{\pm}(\mathbf{r}_{j}-\mathbf{r}_{i})+S_{B_{2}B_{2}}^{\pm}(\mathbf{r}_{j}-\mathbf{r}_{i})]. \end{equation} \begin{figure}[htp!] \centering \includegraphics[width=0.8\columnwidth]{Suppl_SpmET_BB_AB_R.eps} \caption{The SDW-XY correlation functions $|N_{BB}^{+-}|$ and $|S_{A_{1}B_{1}}^{+-}|$ at the ribbon edge as a function of $J/t$ when $J_{z}/t=0$.} \label{fig:OtherSDWXY} \end{figure} \section{V. Finite-size effects} In the main text, we mainly use the $L_{a_{1}}=27, L_{a_{2}}=9$ system size in the PQMC calculations. Here, we show that $L_{a_{2}}=9$, the width of the ribbon, is large enough to capture the thermodynamic-limit behavior. As shown in Fig.~\ref{fig:La2FSE}, when we increase $L_{a_{2}}$ from 5 to 11, little change can be observed in either the single-particle Green's function or the two-particle bosonic correlation function. \begin{figure}[htp!] \centering \includegraphics[width=0.8\columnwidth]{Suppl_La2WidthFSE.eps} \caption{The single-particle Green's function and SDW-XY correlation function at the ribbon edge change little when we increase $L_{a_{2}}$ from 5 to 11. The insets show the y-axis values of the right-most points as a function of ribbon width $L_{a_{2}}$.} \label{fig:La2FSE} \end{figure} \section{VI. Strange correlator} Apart from creating a physical spatial edge to study the edge physics, we can also calculate the strange correlator, which reflects the physical edge between two topologically distinct many-body ground-state wave functions~\cite{YZYou2014b,HQWu2015}. 
\begin{equation} C(r,r')=\frac{\bra{\Omega}\hat{\phi}(r)\hat{\phi}(r')\ket{\Psi}}{\braket{\Omega|\Psi}} \end{equation} That is, we define the single-particle and spin strange correlators by replacing the bra state in Eq.~(4) of the main text with a topologically trivial state $\bra{\Omega}$. The single-particle strange correlator also decays exponentially before the bulk quantum phase transition, while the spin strange correlator retains its power-law decay, indicating that the interacting QSH phase $\ket{\Psi}$ is topologically distinct from the trivial phase $\bra{\Omega}$, and that gapless bosonic modes exist at the spatial interface between the two systems. \begin{figure}[h] \centering \includegraphics[width=0.8\columnwidth]{Suppl_StrangeCorrelator.eps} \caption{Single-particle and SDW-XY strange correlators as a function of $J/t$ when $J_{z}/t=0$.} \label{fig:StrCorr} \end{figure} \section{VII. On-site $U$ interaction} \label{sec:UInt} The phase diagram of the bilayer KMH model with the on-site interaction $U\sum_{i}(\hat{n}_{i\uparrow}+\hat{n}_{i\downarrow}-1)^{2}$ and the inter-layer $J$ interaction is shown in Fig.~\ref{fig:UJPhase}(a). The phase boundaries were obtained in our previous paper, Ref.~\cite{YYHe2016a}, from the bosonic gap closing and the onset of a nonzero magnetic order parameter. Based on the exponential decay of the edge single-particle Green's function in Fig.~\ref{fig:UJPhase}(b) and the power-law decay of the edge SDW-XY correlation function in Fig.~5 of the main text, we conclude that the quantum spin Hall phase at \textit{finite} interactions $U$ and $J$ shown in Fig.~\ref{fig:UJPhase}(a) is also a bosonic SPT phase. \begin{figure}[h] \centering \includegraphics[width=0.8\columnwidth]{Suppl_UJPhaseDiagram.eps} \caption{(a) Phase Diagram of bilayer KMH model with on-site $U$ interaction and inter-layer $J$ interaction. The red line shows the vertical phase path we used in Fig.~5 in the main text. 
The exponential decay of the single-particle Green's function at the ribbon edge indicates that the fermions remain gapped as $U/t$ is increased at $J/t=2.75$.} \label{fig:UJPhase} \end{figure} \end{document}
\section*{\noindent Editor\hfill} David Garfinkle\\ \smallskip Department of Physics, Oakland University, Rochester, MI 48309\\ Phone: (248) 370-3411\\ Internet: \htmladdnormallink{\protect {\tt{garfinkl-at-oakland.edu}}} {mailto:[email protected]}\\ WWW: \htmladdnormallink {\protect {\tt{http://www.oakland.edu/?id=10223\&sid=249\#garfinkle}}} {http://www.oakland.edu/?id=10223&sid=249\#garfinkle}\\ \section*{\noindent Associate Editor\hfill} Greg Comer\\ \smallskip Department of Physics and Center for Fluids at All Scales,\\ St. Louis University, St. Louis, MO 63103\\ Phone: (314) 977-8432\\ Internet: \htmladdnormallink{\protect {\tt{comergl-at-slu.edu}}} {mailto:[email protected]}\\ WWW: \htmladdnormallink{\protect {\tt{http://www.slu.edu/colleges/AS/physics/profs/comer.html}}} {http://www.slu.edu//colleges/AS/physics/profs/comer.html}\\ \bigskip \hfill ISSN: 1527-3431 \bigskip DISCLAIMER: The opinions expressed in the articles of this newsletter represent the views of the authors and are not necessarily the views of APS. The articles in this newsletter are not peer reviewed. \begin{rawhtml} <P> <BR><HR><P> \end{rawhtml} \end{flushleft} \pagebreak \section*{Editorial} The next newsletter is due December 2016. This and all subsequent issues will be available on the web at \htmladdnormallink {\protect {\tt {https://files.oakland.edu/users/garfinkl/web/mog/}}} {https://files.oakland.edu/users/garfinkl/web/mog/} All issues before number {\bf 28} are available at \htmladdnormallink {\protect {\tt {http://www.phys.lsu.edu/mog}}} {http://www.phys.lsu.edu/mog} Any ideas for topics that should be covered by the newsletter should be emailed to me, or Greg Comer, or the relevant correspondent. Any comments/questions/complaints about the newsletter should be emailed to me. 
A hardcopy of the newsletter is distributed free of charge to the members of the APS Topical Group on Gravitation upon request (the default distribution form is via the web) to the secretary of the Topical Group. It is considered a lack of etiquette to ask me to mail you hard copies of the newsletter unless you have exhausted all your resources to get your copy otherwise. \hfill David Garfinkle \bigbreak \vspace{-0.8cm} \parskip=0pt \section*{Correspondents of Matters of Gravity} \begin{itemize} \setlength{\itemsep}{-5pt} \setlength{\parsep}{0pt} \item Daniel Holz: Relativistic Astrophysics \item Bei-Lok Hu: Quantum Cosmology and Related Topics \item Veronika Hubeny: String Theory \item Pedro Marronetti: News from NSF \item Luis Lehner: Numerical Relativity \item Jim Isenberg: Mathematical Relativity \item Katherine Freese: Cosmology \item Lee Smolin: Quantum Gravity \item Cliff Will: Confrontation of Theory with Experiment \item Peter Bender: Space Experiments \item Jens Gundlach: Laboratory Experiments \item Warren Johnson: Resonant Mass Gravitational Wave Detectors \item David Shoemaker: LIGO Project \item Stan Whitcomb: Gravitational Wave detection \item Peter Saulson and Jorge Pullin: former editors, correspondents at large. \end{itemize} \section*{Division of Gravitational Physics (DGRAV) Authorities} Chair: Laura Cadonati; Chair-Elect: Peter Shawhan; Vice-Chair: Emanuele Berti. Secretary-Treasurer: Thomas Baumgarte; Past Chair: Deirdre Shoemaker; Members-at-large: Steven Drasco, Tiffany Summerscales, Duncan Brown, Michele Vallisneri, Kelly Holley-Bockelmann, Leo Stein. Student Members: Megan Jones, Jessica McIver. 
\parskip=10pt \vfill\eject \section*{\centerline {DGRAV}} \addtocontents{toc}{\protect\medskip} \addtocontents{toc}{\bf DGRAV News:} \addcontentsline{toc}{subsubsection}{ \it DGRAV, by Deirdre Shoemaker} \parskip=3pt \begin{center} Deirdre Shoemaker, Georgia Institute of Technology \htmladdnormallink{deirdre.shoemaker-at-physics.gatech.edu} {mailto:[email protected]} \end{center} We did it! After years of trying, the APS Topical group on Gravitation has become the APS Division of Gravitational Physics (DGRAV), as announced at the 2016 GGR Business meeting in Salt Lake City. What did it take to make it to division? What does it mean to you? As a topical group, we needed to achieve 3\% of APS' total membership in two consecutive years. We achieved our first 3.07\% in January 2015 and then 3.08\% in January 2016. Once we reached this milestone, we petitioned the APS Council to become a division. We were the first group in 17 years to petition for division status. What's next? We need you, the DGRAV membership, to approve our new Division Bylaws. They have mainly changed to reflect the new name of our unit and to add an elected official, the DGRAV Councilor. As quoted from the bylaws, ``The Division Councilor shall serve as liaison between the Council of the Society and the Executive Committee of the Division. Following each Council meeting, the Division Councilor shall report to the Chair and the Secretary-Treasurer regarding Council actions that affect the status and operations of the Division. Reports shall be made to the entire Executive Committee during their regularly scheduled meetings.'' The term of the Councilor is four years, beginning on January of the year following the election. The Councilor may not serve more than two consecutive terms. Our new bylaws have one additional change to include the position of webmaster. Many of you may not be aware that this Newsletter is our official newsletter for the APS. 
Most of the units in APS have APS handle their newsletters, but Matters of Gravity actually predates the GGR! In the same spirit of the position of Editor of the Newsletter standing outside of the executive committee of DGRAV, we have added a webmaster. The webmaster has the duty of managing our website, \htmladdnormallink {\protect {\tt {http://dgrav.org}}} {http://dgrav.org}, and our social media presence. Beverly Berger formed and chaired GGR in 1995. Let's give her and all the others (yes including any of your family and friends that joined GGR!) a virtual round of applause for making this possible. \section*{\centerline {we hear that \dots}} \addtocontents{toc}{\protect\medskip} \addcontentsline{toc}{subsubsection}{ \it we hear that \dots , by David Garfinkle} \parskip=3pt \begin{center} David Garfinkle, Oakland University \htmladdnormallink{garfinkl-at-oakland.edu} {mailto:[email protected]} \end{center} Ronald Drever, Kip Thorne, and Rainer Weiss were awarded the Shaw Prize in Astronomy and the Kavli Prize in Astrophysics. Emanuele Berti was elected Vice Chair of DGRAV; Kelly Holley-Bockelmann and Leo Stein were elected members at large of the Executive Committee of DGRAV. Megan Jones was elected Student Representative of DGRAV. Gregory Adkins has been awarded the APS Prize for a Faculty Member for Research in an Undergraduate Institution. Raymond Beausoleil has been awarded the APS Distinguished Lectureship Award on the Applications of Physics. Douglas Finkbeiner, Shane Larson, Pierre Michel, Dwight Neuenschwander, Scott Ransom, Stephan Schlamminger, and Rodger Thompson have been elected APS Fellows. Hearty Congratulations! 
\section*{\centerline {GW150914 - A Watershed Event for Gravity}} \addtocontents{toc}{\protect\medskip} \addtocontents{toc}{\bf Research Briefs:} \addcontentsline{toc}{subsubsection}{ \it GW150914 , by Gabriela Gonz\'alez and David Reitze} \parskip=3pt \begin{center} Gabriela Gonz\'alez, Louisiana State University \htmladdnormallink{gonzalez-at-lsu.edu} {mailto:[email protected]} \end{center} \begin{center} David Reitze, LIGO Laboratory and Caltech \htmladdnormallink{reitze-at-ligo.caltech.edu} {mailto:[email protected]} \end{center} \section*{Introduction} Those of us who work on LIGO will forever remember exactly where we were on September 14, 2015 when we first learned of a nearly simultaneous `trigger' recorded on the LIGO Hanford and Livingston detectors. That trigger would eventually become GW150914, the first gravitational wave ever recorded. Perhaps equally momentous, it would also reveal the first ever observation of a binary black hole system in the universe colliding to merge and form a new black hole. Almost exactly one hundred years after gravitational waves were first theorized by Einstein, the discovery by the LIGO Scientific Collaboration and Virgo Collaboration marked the culmination of a scientific quest that began over 50 years ago. Over that time period, this pursuit has brought together well over a thousand researchers worldwide to develop the new science of gravitational wave physics and astronomy. And like most endeavors of this historical magnitude, the path had a few twists and turns along the way. \section*{History} Prior to 1960, no one seriously contemplated developing detectors for gravitational waves because, quite simply, no one seriously believed that gravitational waves could ever be detected. That changed when Joseph Weber began using large cylindrical aluminum bars to search for gravitational waves. 
While his claims of gravitational wave detections in the 1960s and 70s ultimately proved to be incorrect (resulting in some acrimonious scientific debates), Weber's experimental efforts led Michael Gertsenshtein and Vladislav Pustovoit and, independently Weber himself and also Rainer Weiss to propose using laser interferometers as detectors. Gravitational-wave interferometer research programs sprang up in the 1970s at MIT (Weiss), the University of Glasgow (Ron Drever and Jim Hough), and the Max Planck Institute in Garching, Germany (Hans Billing). Independently, Kip Thorne (Caltech) began a research group at Caltech focusing on gravitational wave theory, and began collaborating with Vladimir Braginsky (Moscow State University) on some of the more intriguing quantum aspects of suspended-mirror gravitational-wave interferometers. This ultimately blossomed into an experimental effort at Caltech, led by Drever (who had moved from Glasgow) and Stan Whitcomb. Each of these groups began tackling the challenge of building and understanding the complex subtleties of operating ultrasensitive suspended mirror interferometers. The period between 1984 and 1992 saw both innovative advances in interferometer designs and the formulation of a joint Caltech-MIT collaboration to design and build two kilometer-scale gravitational-wave observatories. However, funding for LIGO didn't come about quickly or easily. When LIGO was first proposed as a large-scale project, it was met with great resistance. It was deemed too risky and too expensive; the chance of failure was too high, and the scientific payoff too low relative to more established types of astronomy to justify the expenditure. Nonetheless, the US National Science Foundation recognized both the huge scientific potential in gravitational wave physics and astronomy and the cutting edge technology that could result from designing and building a gravitational wave detector. 
The NSF took a huge risk in funding LIGO, but it was a measured risk. The technological leap from the prototypes to the advanced detectors was deemed too great to be carried out in a single step. A two stage approach was adopted in which an initial set of interferometers (Initial LIGO) would be built with a sensitivity where gravitational waves might be detected (but more likely not), followed by the construction of a second set of interferometers (Advanced LIGO) that would have a high probability of detection. In 1991 the US Congress appropriated LIGO's first year of funding. In 1992 Hanford, Washington and Livingston, Louisiana were chosen as the sites for LIGO's interferometers, and a cooperative agreement for the management of LIGO was signed between NSF and Caltech. In 1994 Barry Barish (Caltech) was appointed LIGO Director and oversaw LIGO's construction phase as well as the installation and commissioning of LIGO's initial interferometers. In 1997, the LIGO Scientific Collaboration was created to organize and coordinate LIGO's technical and scientific research and data analysis, and for expanding LIGO to include scientists from institutions beyond Caltech and MIT. Initial LIGO was operated from 2002 through 2010, producing over 100 papers and quite a few interesting upper limits on gravitational wave emissions from compact binary systems, pulsars, and even the primordial universe. The initial LIGO interferometers were decommissioned in 2010 to make way for the Advanced LIGO interferometers. Designed to be ten times more sensitive to gravitational wave strains, Advanced LIGO is completely new - every component and subsystem has been re-designed and rebuilt to achieve the sensitivity goal. Following an installation period lasting four years, the Advanced LIGO interferometers were completed in 2014 and commissioned until September 2015 when the inaugural observing run `O1' began. 
\section*{The Advanced LIGO Interferometers} The Advanced LIGO interferometers are the most sensitive scientific instruments ever conceived and built. Consisting of two identical 4 km arm length interferometers in Hanford, WA and Livingston, LA, they are a technological tour-de-force, bringing together a wide array of technologies that have redefined the state-of-the-art throughout almost every facet of their design - the world's most stable high power lasers, the most precisely figured and coated ``test mass'' mirrors, the most sophisticated low frequency seismic isolation and mirror suspension systems, one of the world's largest high vacuum systems, and gluing it all together, hundreds of feedback control loops that are capable of sensing and maintaining the 4 km length of the interferometers to almost ${10}^{-19}$ m in their most sensitive bands. Owing to its cutting edge nature, the successful construction and commissioning of Advanced LIGO required solving a very large number of problems - related to the handling of high optical power, angstrom-level polishing of massive optical components, development of strong but exquisitely delicate silica fibers for suspending the 40 kg test masses (three suspensions experienced fiber breakage during the installation phase despite stringent protocols), and precision control engineering in a high-vacuum environment. In the end, everything came together! The superb LIGO commissioning team made rapid progress, and by September 2015, the Livingston and Hanford interferometers were achieving sensitivities four times better than those achieved by initial LIGO. During the first week of September, the decision was made to officially begin the run on September 18, 2015. The interferometers were in an `Engineering Run' phase, and had been taking science-quality data since mid-August. This turned out to be very fortunate. 
\section*{The Discovery} During `Engineering Run 8', which began on August 17, operations were conducted 24 hours a day and seven days a week. Most of those hours were dedicated to tests for calibrating the detector, tuning the injection methods for simulating gravitational waves in the detector, setting up automated alerts for possible gravitational wave detections, measuring the effects of induced environmental noise in the detector, and many more tasks that needed to be ready before the official first `O1' run. During the times that the two LIGO detectors were running unperturbed by these tests, online algorithms were constantly running to search for gravitational waves and test their performance against the new Advanced LIGO data. When coincident ``triggers'' produced by these methods exceeded a (low) significance threshold, automated database entries were produced with a lot of numbers and plots - there had been several entries with injection tests, as well as weak coincidences that were not statistically significant. On September 14, at 5:51am US Eastern time, a program looking for short transients registered a very significant coincidence. Attentive scientists in Europe and early risers in the US noticed the trigger, and produced a time-frequency plot which showed what is expected from a binary coalescence - but a very short duration one, which would ordinarily correspond to two merging black holes. At first glance, this looked much more like an injection than an astrophysical signal - there was no injection label associated with the trigger, but could it still be an injection? Perhaps a blind injection? Many emails and phone calls later, it was clear that it was not an injection - it was a real coincidence! The next question that was asked - Could it be an instrumental artifact? - was also eventually ruled out. 
On September 14, many decisions were taken to start the ``discovery process'': the instrument configuration was frozen as much as possible to make sure there was not an instrumental source of transients, coincident or not, that looked like binary coalescences. After a few days, a minimal set of tests was finished, and the instrument was put into ``normal'' observation mode. \begin{figure}[ht!] \centering \includegraphics[width=0.5\textwidth]{image1.jpg} \caption{Figure on cover of Physical Review Letters, Feb 12, 2016, showing the data in the LIGO Hanford (blue) and Livingston (red) detectors due to the coalescence of two black holes.} \end{figure} We estimated we needed at least 15 days of two-interferometer coincident data with no similar transients to bound the significance at the magical 5-sigma threshold. By the end of October, we had collected a sufficient amount of data, and separate `offline' analyses using matched-filtering to look for binary coalescences were ready to `open the box' to answer the question - did the significance hold up? It did! A few champagne corks popped, but we were not done yet - long periods over the holidays were still needed to finish the review of the methods used to find the event, review the data calibration and its errors, estimate the parameters of the system, write a paper with the observations and other papers with the details, and wait for external peer review of the main result. While the LSC and Virgo had approved in early September 2015 (!) a ``detection procedure'' to validate the first detection, publish and announce it, very few if any of us thought that we'd have to apply it so soon. But we were very happy to do so! 
Readers of this newsletter have probably already read the articles containing the details: on February 11, the LIGO Scientific and Virgo Collaborations proudly announced the discovery of two black holes, 29 and 36 solar masses respectively, that had merged into a single black hole more than a billion years ago, producing ripples of space-time that passed through Earth and are still traveling through the Universe, after leaving a small but very detectable signal first in the LIGO Livingston detector, and seven milliseconds later in the LIGO Hanford detector. The signals were consistent with Einstein's theory of General Relativity, and using his theory we could derive bounds for many parameters of the system: the initial and final masses, spins, distance, inclination, and (very rough) sky localization. It was a single detection, but contained lots of information! All of the papers produced by the LIGO and Virgo collaborations on GW150914 are available at \htmladdnormallink {\protect {\tt {http://papers.ligo.org}}} {http://papers.ligo.org} \section*{The Future is bright!} The news of the first gravitational wave detection made the front page of most of the major newspapers worldwide, resulting in tremendous interest from both the scientific and general public. But this is just the beginning: LIGO and Virgo are still finishing the analysis of the rest of the data - taken until January 12 - when the first observational run finished. The analysis of the remaining O1 data should give us more information about the rates of black hole mergers that can be measured. The detectors are designed to be about three times more sensitive than they were in the O1 run, and scientists are hard at work on improving the instruments, and making them more robust. A second observational run is planned for the fall of 2016, lasting about six months - we expect more detections, and are now prepared to become a real Observatory. 
Soon, the Virgo detector will finish installation and commissioning, and join the network to provide much better sky localization of the sources. As the LIGO and Virgo detectors improve both sensitivity and duty cycle, we will see many black hole mergers - and likely other sources, including binary neutron star or neutron star-black hole mergers, a stochastic background of those mergers, possibly nearby rotating neutron stars in our galaxy - and perhaps more excitingly, gravitational waves of unknown origin. Farther into the future, but on the horizon, detectors in Japan and India will join the network of ground-based interferometers. Moreover, the success of the LISA Pathfinder mission bodes well for a future gravitational-wave detector in space. The future is gravity-bright! \section*{\centerline {Remembering Felix Pirani}} \addtocontents{toc}{\protect\medskip} \addtocontents{toc}{\bf Obituaries:} \addcontentsline{toc}{subsubsection}{ \it Remembering Felix Pirani, by Stanley Deser} \parskip=3pt \begin{center} Stanley Deser, Brandeis University \htmladdnormallink{deser-at-brandeis.edu} {mailto:[email protected]} \end{center} Felix Pirani, who died at age 87 on the last day of 2015, was one of the leaders in the postwar renaissance of General Relativity. When I first met him, at the famous ``GR-0'' 1955 Bern conference, we constituted half of the younger generation there (and two of those were mere tourists). A prodigy, Felix entered UBC at 14, writing his first paper as an undergraduate. He learned Relativity at Toronto and Carnegie Tech (where he got his DSc) from Schild. Synge and Infeld were also important mentors. His thesis was one of the early attempts, and the first serious postwar one, at quantizing GR. A subsequent PhD, with Hermann Bondi at Cambridge, led him into cosmology. This was followed by a year with Synge at DIAS in Dublin, which he devoted to establishing the reality of gravitational radiation, a controversial topic at the time. 
He then joined Bondi's newly formed group at King's College, London, where he was to remain until his 1983 retirement. In 1967, he became head of the group and produced a steady flow of PhDs (many of whom remained close friends) while obtaining major new results. Two early examples: a brilliant application of the then new Petrov classification to establish, invariantly, the reality of gravitational waves; with Bondi and Ivor Robinson, a pioneering study of wave solutions in GR. During the year he spent as visiting professor at Brandeis in the sixties, I had the pleasure of collaborating with him on several papers, thus seeing firsthand his mastery of our field; his 1964 Brandeis Summer Institute lectures on gravitational radiation are still a classic reference. But his contribution to GR is far greater than the standard references with which his name is associated. He was one of our giants. Felix was also a man of many interests, political and artistic; indeed, he started a successful career as a mosaicist upon retirement. \vfill\eject \section*{\centerline {Remembering David Finkelstein}} \addcontentsline{toc}{subsubsection}{ \it Remembering David Finkelstein, by Predrag Cvitanovi\'c} \parskip=3pt \begin{center} Predrag Cvitanovi\'c, Georgia Institute of Technology \htmladdnormallink{predrag.cvitanovic-at-physics.gatech.edu} {mailto:[email protected]} \end{center} Theoretical physicist David Ritz Finkelstein, Professor Emeritus in the School of Physics of Georgia Tech, died peacefully at home on January 24, 2016. He was born in 1929, in New York City. He went to Stuyvesant High School in Manhattan, and worked as a page in the NY Public Library, which gave him access to the stacks where he spent much time reading. He started out as an engineering major at City College of New York, but graduated in 1949 with honors in both physics and mathematics, and the CCNY Physics Medal. 
In 1953 he received a PhD in Physics from MIT (advisor: Felix Villars), with a thesis entitled ``Non-linear meson theory of nuclear forces.'' From 1953 to 1960 he worked at Stevens Institute of Technology. He was Young Men's Philanthropic League Professor of Physics at Yeshiva University from 1959 to 1976. In Sidney Coleman's words, he ``was a brilliant scientist with a passion for long shots. This meant that nine times out of ten he devoted his talents to ideas that do not pay off, but one time out of ten, they do pay off. When this happened, David's work was found to be of a great significance, extraordinary penetration, and ten years ahead of everyone else's, as was the case when topological conservation laws entered the mainstream of quantum field theory.'' In a 1955 paper Finkelstein addressed the question of whether ``anomalous'' spin 1/2 had been overlooked for the gravitational field. His discovery of the topological origin of such anomalous spins and a speculation that maybe all physical variables are topological in origin was the thread that led him to kinks and the unidirectional membrane in the 50s and 60s, as well as to anyons in the 80s, antecedents of anomalous quantum numbers in the fractional Hall effect and in high temperature superconductivity. Finkelstein was the first to understand the nature of the black hole horizon. In 1958 he (age 29) described what is now known as a black hole (``unidirectional membrane''). The paper influenced Landau, Penrose and eventually Wheeler, and was instrumental in bringing general relativity into mainstream physics and today's vibrant black-hole research. But it took time. At the 1962 Relativistic Theories of Gravitation annual conference Finkelstein reported on his Schwarzschild black hole to an audience of three who turned out to be janitorial staff. In 1957, following a seminar Finkelstein gave in London, he met Roger Penrose, then a graduate student from Cambridge. 
The seminar, on extending Schwarzschild's metric, both into the past and into the future across null horizons---a basic ingredient of the current understanding of black holes---had been a revelation to Penrose. After the seminar Penrose explained to Finkelstein his spin-networks, and the two men exchanged their research subjects, forever after. Finkelstein's extension of the Schwarzschild metric provided Penrose with an opening into general relativity, the subject which has animated his research ever since. Finkelstein picked up in return on the combinatorial aspects of quantum spin as a possible route to delving more deeply into the quantum nature of reality and took such ideas to greater lengths than anyone else. Another colleague whose career was shaped by Finkelstein's insights is Lenny Susskind. In 1967 he told Susskind: ``Forget perturbation theory. Black holes are the key.'' He explained--this was before Bekenstein--that the information in a region of space could not be as rich as the volume because most states would collapse to form a black hole. Finkelstein understood the holographic principle long before 't Hooft and Susskind. While his ``unidirectional membrane'' is today considered his key contribution to physics, for Finkelstein that calculation was only an exercise, an illustration for his overarching program to bring topology into quantum physics. He was the first to discover ``kinks'', topological charges and topological spin-statistics theorems, with Misner (1959) and Rubinstein (1962). Until the work of Finkelstein and Rubinstein on topology in quantum field theory, quantum field theory meant Feynman diagrams. He was arguably the first to understand the role of quantum vacua, and his papers were among the earliest on solitons in quantum theories, leading to instantons, Higgs particles, etc. It took quite a few years for the rest of the world to catch up. 
The 1962-63 papers with Jauch, Schiminovich and Speiser were the first to formulate a unified SU(2) gauge theory of massive vector bosons and light, introducing a type of ``Higgs mechanism'' before Higgs, and a type of electroweak unification before Glashow, Salam and Weinberg. However, by the late 1960s the limitations of the quaternionic formulation led him to shelve the whole quaternionic project. In 1976 Finkelstein became the chairman of the Belfer School of Yeshiva University Physics Department, and in 1978 its dean of Natural Sciences and Mathematics. On the basis of this administrative experience, he was appointed Director of the School of Physics at the Georgia Institute of Technology. Nobody understood his job colloquium, but all were impressed by recommendation letters the likes of which Georgia Tech had never seen. No letter mentioned the fact that his administrative jobs at Yeshiva were ceremonial, to finalize the closing of the already shuttered graduate sciences program. The first thing he did upon arrival was to inform the departmental secretary (in those halcyon days the department was run by a single secretary) that he was going to Majorca. So for a month the Chair could not be reached. But when, by midyear, he failed to submit a budget, he was deposed by senior faculty, and replaced by an acting chair. Finkelstein went into a funk for two weeks, and emerged a changed man, having recognized that his failure as an administrator had given him more time for research. He was charismatic and involved a number of dedicated students in his efforts to quantize geometry. If he did not show up at work, his students would go to his house, where he met them in his bathrobe. He was blissfully useless for any faculty committee task. 
In Finkelstein's own words: ``My committee services within Georgia Tech have not been onerous; I do not look this gift horse in the mouth, but serve as requested by the School of Physics.'' He taught whatever he wanted to teach, so he had to be deposed for the second time, this time by an uprising of undergraduates who found themselves taught quantum logic instead of the introduction to quantum mechanics that they had signed up for. In 2003 Finkelstein noticed in a lecture that between glancing at a formula in his notes, and writing it on the blackboard, he had forgotten it. So he went to the Chair, and arranged for his retirement. Soon enough he found out that it was statins that were making him stupid - once he went off statins, his sharp intellect came back. In retrospect, he found this misinformed decision was one of the best of his life - now he could devote himself fully to his research. A few days before his death from idiopathic pulmonary fibrosis he had his laptop in his bed, and was still working. David was a man who truly loved life. He is survived by his wife Shlomit Ritz Finkelstein, his children Daniel Finkelstein, Beth Bosworth, Eve Finkelstein, and Aria Ritz Finkelstein, his five grandchildren and two great-granddaughters. \vfill\eject \section*{\centerline {Steve, the Physicist}} \addcontentsline{toc}{subsubsection}{ \it Steve, the Physicist, by Bernard Whiting and Eric Poisson} \parskip=3pt \begin{center} Bernard Whiting, University of Florida \htmladdnormallink{bernard-at-phys.ufl.edu} {mailto:[email protected]} \end{center} \begin{center} Eric Poisson, University of Guelph \htmladdnormallink{epoisson-at-uoguelph.ca} {mailto:[email protected]} \end{center} As a community, we experienced a most bitter-sweet moment earlier this year when the LIGO announcement of gravitational wave detection came just a few days after we had suffered the sudden and unexpected loss of Steven Detweiler as a friend and colleague on February 8th, 2016. 
Gravitational wave physics was the area of research which drove Steve all his working life. Thus, it was with some irony that we found ourselves grieving at his passing while celebrating his life's interest during the announcement of the LIGO detection. Steve was not a member of the LIGO Scientific Collaboration that finally made the detection because he wanted the freedom to direct his own research. Nevertheless, he was an avid follower of the LIGO experiment and was fortunate enough to have heard, a week or so before his last early morning run, that a detection had been made. What would have thrilled Steve more than the announcement itself would have been the news that the waves detected actually came from colliding black holes with an unexpected size, about thirty times the mass of our Sun --- not smaller, and not much larger ({\it i.e.}, not supermassive) either. The study of black holes, and the gravitational waves they produce, had been a constant theme permeating Steve's scientific research. Steve initially began his work on black holes in conjunction with the Nobel Laureate, Subrahmanyan Chandrasekhar. It had been recently realized that when a black hole is disturbed --- say, by something falling in --- it will actually vibrate, and those vibrations initiate waves which propagate away throughout spacetime. The interaction between each black hole and its surrounding spacetime is very stiff, so the vibrations and gravitational waves are rapidly damped down and soon die away to nothing at all. But the frequencies and damping times are unique characteristics of the black hole they come from, and Steve was among the first to calculate and tabulate them for later use. In fact, that ``ring-down'' was one of the tell-tale features which the LIGO collaboration was so anxious to identify, since its unique character ensures that a black hole origin is absolutely unambiguous. 
For more than a decade, Steve's research had focused on the gravitational self-force. This refers to the generally tiny effect the motion of a celestial body experiences due to the fact that its movement interacts with its own gravitational field, and it explains why two black holes in orbit around each other will eventually merge. The focus of Steve's effort was for a future, space-based, gravitational wave detector, originally called LISA. The perturbative methods which have been used in calculations for decades are ideal for this work. Steve's valued contribution was in convincing the community of what should actually be compared between different calculations, and in carrying out the computations to high numerical precision. The self-force world was now waiting for him to take this work to the next level and produce second order results to compare with post-Newtonian theory. One of Steve's very creative ideas, which is barely known among relativists, but is quite well known in the pulsar timing community, was his suggestion to use precise pulsar timing to detect gravitational waves from the collision of supermassive black holes in the centers of other galaxies. These very large wavelength waves would cause fluctuations in the arrival times of signals from pulsars --- rotating neutron stars in our own galaxy --- and the accumulation of the impact of these effects from all over the universe would represent an inescapable, noise-like, signal in the timing precision. This was a brilliant idea, far ahead of its time, leading to a technique that is only now coming to fruition. Pulsars were not even known when Einstein developed his general relativity theory. Though he might have been skeptical about the success of Steve's suggestion, he would certainly have appreciated its inspirational character. And when it does work, Steve will be fully vindicated. 
Among his personal traits, Steve was perhaps most admired for his fearless attitude and his fertile creativity. It may be strange to speak of a creative mind when describing a scientist. After all, science is supposed to be precise and exact, so where can creativity fit in? But the best scientists discover fresh insights into the workings of the world, and this does indeed require a creative mind. Steve's mind was full of insights, and we have all benefited greatly from his unique ways of thinking. This may explain why Steve was admired, but it is not why he was loved by all who had the chance to know him and spend time with him. It is not just that he would come across as a friendly, nice, and decent guy. In addition, Steve had a heightened empathy that allowed him to establish a deep and lasting rapport with new friends and colleagues, often almost immediately. Steve's scientific contributions span four decades, with many breakthrough works and deeply original ideas substantiating the impact he has had on the field of general relativity. Yet he was nominated for a Fellowship of the American Physical Society only in 2013. This was long overdue. His citation was ``for his many and varied contributions to gravitational physics, which include the computation of black-hole quasi-normal modes, the elucidation of pulsar timing to measure gravitational waves, and foundational contributions to the gravitational self-force.'' We should also note Steve's prescient work on Kaluza-Klein theory, which appeared in his most cited paper, yet one to which he himself barely referred. Suffice it to say that Steve had diverse interests, and colleagues were still referring to him for input. He is already sorely missed. 
\vfill\eject \section*{\centerline {Remembering Sergio Dain}} \addcontentsline{toc}{subsubsection}{ \it Remembering Sergio Dain, by Luis Lehner and Manuel Tiglio} \parskip=3pt \begin{center} Luis Lehner, Perimeter Institute \htmladdnormallink{llehner-at-perimeterinstitute.ca} {mailto:[email protected]} \end{center} \begin{center} Manuel Tiglio, University of California San Diego \htmladdnormallink{tiglio-at-ucsd.edu} {mailto:[email protected]} \end{center} It is with our deepest sadness that we report on the sudden passing away of our friend and colleague Professor Sergio Dain. Sergio lost his brave yet short battle with cancer on the 24th of February 2016, at the early age of 46. Sergio got his Licenciatura and PhD at the University of C\'ordoba in Argentina in 1993 and 1999 respectively, spending a significant fraction of his PhD studies at the Albert Einstein Institute (AEI) for Gravitational Physics in Golm, Germany. The research themes for his Licenciatura and PhD were on asymptotically flat spacetimes and topics in gravitational radiation. Afterwards, he spent several years at the AEI as a postdoctoral researcher, before returning to C\'ordoba in 2006 as a faculty member and as an independent investigator at CONICET. After his PhD, Sergio's interests transitioned to geometrical analysis, where he made important contributions to the initial data problem in General Relativity and conserved quantities in black hole collisions. He then produced a series of seminal papers establishing geometrical inequalities between angular momentum and mass in General Relativity. These influential results were recognized in numerous ways, in particular through invitations to give plenary lectures at, for example, the 20th International Conference on General Relativity and Gravitation and the 10th Amaldi meeting, held in Warsaw in 2013, as well as at the Fields Institute in Toronto in 2015. 
Sergio had deep and broad interests, always generously shared his research ideas and was a great mentor and role model. About a dozen undergraduate and graduate students received their degrees in Argentina under his supervision, and he also mentored four post-docs. Beyond his research activities, Sergio was avidly engaged in the promotion of Science in Argentina at different levels, and also served the General Relativity community, for instance on the Editorial Board of General Relativity and Gravitation. Beyond all of the above, to many of us, Sergio was a dear, exceptional friend and an outstanding person. We will always remember him for his sensitivity, fondness for the arts, his great sense of humor, contagious laughter, unconditional friendship, and a deep devotion to science. \vfill\eject \section*{\centerline {Is there a place for gravitational physics} \centerline {in the modern, corporate university?}} \addtocontents{toc}{\protect\medskip} \addtocontents{toc}{\bf Editorial:} \addcontentsline{toc}{subsubsection}{ \it Gravitational physics in the modern university, by David Garfinkle} \parskip=3pt \begin{center} David Garfinkle \htmladdnormallink{garfinkl-at-oakland.edu} {mailto:[email protected]} \end{center} \vskip0.5truein (Note: the opinions expressed in this editorial are solely those of the author and do not represent any sort of official pronouncement by either DGRAV or APS). \vskip0.25truein Over the past two decades or so, some disturbing trends have taken place in universities in the US: tuition has gone way up (as has student debt). State support for public universities has gone way down. The percentage of faculty who are tenured or tenure track keeps going down. The number of administrators keeps going up. And more and more university boards are taking the attitude that universities should be run like businesses. This is bad news for everyone. 
However, I will argue that it is even worse news for physics than for other areas of academic endeavor, and even worse news for gravitational physics than for other areas of physics. First let's consider what is driving these trends. The current trend in state level politics is for state governments to spend less money. Since broad support for intellectual pursuits has always been shaky at best in the US, when state governments think about what to cut, support for higher education is always at or near the top of the list. In response, state universities make up for lost revenue by raising tuition, and cut costs by hiring more part time faculty and fewer full time faculty. In principle, these trends in state universities need not have affected private universities at all. However, private universities, especially elite ones, have generally positioned their tuition higher than that of state universities and have marketed themselves as using that higher tuition to provide a higher quality education. Thus, when state universities raised tuition, it is not surprising that private universities followed suit. However, there is one point missing from the previous analysis: if universities were really motivated to cut costs, wouldn't they hire fewer administrators, not more? What then explains the explosion of ``administrative bloat''? One possible answer is provided by Benjamin Ginsberg in his book {\it The Fall of the Faculty}. Ginsberg argues that administrative bloat is driven by the need of administrators to feel important. Thus, for example, the more people who are working for the Dean, the more important he feels, and since administrative tasks don't actually have to be productive, it is easy for the Dean to invent activities for the administrators in his office to do. If Ginsberg's analysis is correct, then from an administrator's point of view, both students and faculty are merely means to generate revenue that can be used to hire more administrators. 
Thus the administration of a university would be motivated to enroll as many students as possible and to charge them as high a tuition as possible; and to hire as few faculty as possible, with the largest teaching load possible, and at the lowest salary possible. That is, new faculty hires are likely to be overwhelmingly non-tenure track faculty with such a high teaching load that they have no time at all for scholarship. However, the administration cannot hire all part time, non-tenure track faculty, because some tenure track faculty are needed to teach the upper level courses in each department taken by majors and graduate students. Thus the administration of a university is likely to target those few tenure track hires to the departments with the largest number of majors. This is bad news for physics, since the number of physics majors tends to be rather small compared to those in other departments. With a small number of majors, a physics department in a modern university is likely to be regarded by the administration as a service department, delivering its credit hours in introductory courses taken by those students whose major requires an introductory physics course, such as engineering or health science. Thus, if the administration hires any tenure track physicists at all, they are likely to be in those areas that most relate to those departments served by the physics department: areas such as materials science or biophysics, but certainly not gravitational physics. Thus the rise of the modern, corporate, administrative university is bad news for scholarship in all areas. But it is even worse news for physics than for other departments, and even worse news for gravitational physics than for other areas of physics. 
Of course, there will always be elite universities who justify their high tuition in part by the prestige of their faculty, and these universities will continue to hire tenure track physicists (possibly including gravitational physicists) who will continue to do excellent research. And from the point of view of the administrations of these elite universities, it is an additional motivating factor that the physicists they hire can bring in grant money whose overhead can be used to hire more administrators. However, these bright spots, important as they are, are likely to become ever fewer and farther between if the trend of the modern, corporate, administrative university continues. What (if anything) can be done about the problem of administrative bloat in the modern, corporate university? Actually, it is somewhat surprising that steps have not already been taken to curtail this problem. Recall that the money wasted on hiring superfluous administrators comes from students, their families, state governments, and the federal government. Together this could be a powerful constituency that should be highly motivated to do something about the problem. However, the modern, corporate university makes use of an extensive modern public relations apparatus to construct and manipulate its image. Rather than do something about administrative bloat, this PR apparatus simply denies that the problem exists and blames rising tuition and rising number of administrators on other factors, such as smaller state support, rising government regulation, rising costs of employee health care, and supposed inefficiency of faculty practices. This last claim is especially dangerous, as it also provides an excuse for university policies that take ever more decisions out of the hands of faculty and give them to administrators. 
Both individual faculty and the AAUP (American Association of University Professors) have long sounded the alarm about administrative bloat and about the increasing tendency to treat running a university as though it were the same as running a business. However, against the professional PR apparatus of the university boards and administrations, the AAUP and individual faculty are losing the war of words. Perhaps it is time for professional organizations (such as APS) to weigh in on this issue. \end{document}
\section{Introduction} Bayesian vector autoregressions (VARs) with multivariate stochastic volatility, first developed in \citet{CS05} and \citet{primiceri05}, are now the workhorse models in empirical macroeconomics. These multivariate stochastic volatility models, however, have the undesirable property that the implied likelihoods are not invariant to the order of the dependent variables.\footnote{This non-invariance problem is explicitly acknowledged and discussed in both \citet{CS05} and \citet{primiceri05}. See also the discussion in \citet{CCM19}.} This ordering issue has become an increasingly pertinent problem due to two prominent developments in the VAR literature. First, in the last two decades there has been a gradual departure from conventional recursive or zero identification restrictions to other more credible identification schemes---such as identification by sign restrictions \citep{Faust98,CD02, Uhlig05}---that do not restrict the order of the variables. Despite this development, models of \citet{CS05} and \citet{primiceri05} continue to be used to first obtain reduced-form estimates, which are then taken as inputs in the subsequent structural analysis. Since the reduced-form estimates are not order invariant, the results from the structural analysis depend on the order of the variables in a subtle way, often without explicit recognition by the user.\footnote{The implications of this non-invariance problem for structural analysis have been illustrated in \citet{Bognanni18} and \citet{Hartwig2019}.} Second, following the seminal contributions by \citet{BGR10} and \citet{koop13}, there is an increasing desire to use large VARs involving more than dozens of dependent variables for structural analysis. This development is partly motivated by the concern of informational deficiency of using a limited information set---by expanding the set of relevant variables, one can alleviate this concern \citep[see, e.g.,][]{HS91, LR93, LR94}. 
However, unless there is a natural variable ordering (e.g., using recursive identification restrictions), the ordering issue becomes more severe as the number of ways to order the variables increases exponentially with the number of variables.

In view of these developments, we consider an alternative Bayesian VAR based on factor stochastic volatility that is constructed to be invariant to the order of the dependent variables. Factor stochastic volatility models are commonly used for modeling high-dimensional financial data, but are less widely employed in empirical macroeconomics.\footnote{A notable exception is \citet{KH20}, who use Bayesian VARs with factor stochastic volatility for macroeconomic forecasting. \citet{CCM18} consider a related multiplicative 2-factor stochastic volatility model to study the impact of macroeconomic and financial uncertainty.}

In specifying a suitable factor stochastic volatility model, there is often a tension between identification and order invariance. On the one hand, one can identify the factors and the associated factor loadings by fixing the orientation of the factors \citep[e.g., as in][]{GZ96,CNS06}. But this identification strategy essentially fixes the order of the variables, and therefore the identified model is not order invariant \citep[see, e.g., the discussion in][]{CLS18}. On the other hand, one could avoid fixing the orientation of the factors and obtain an order-invariant model, but then it is unclear whether the factors and the loadings are identified.\footnote{For example, \citet{Kastner19} does not impose any orientation restrictions on the factors, arguing that identification of the factor loadings is not necessary for his purpose of estimating the reduced-form covariance matrix.} We solve this dilemma between achieving identification and order invariance by carefully teasing out a set of conditions that are strong enough for identification, yet weak enough that the model remains order invariant.
More specifically, we construct a VAR in which the innovations have a factor structure, and both the factors and the idiosyncratic errors follow stochastic volatility processes. We first show that the likelihood implied by this model is invariant to the order of the dependent variables. We then discuss sufficient conditions for identification of the factors and the factor loadings, building upon the approach in \citet{SF01} and extending it to a more general setting in which both the factors and the idiosyncratic errors are heteroscedastic. Under mild regularity conditions, we show that the factor loadings under our setup are identified up to permutation and sign changes. Furthermore, with additional sign restrictions that satisfy a set of conditions, we show that the factor loadings and the associated factors are point-identified. To determine the number of factors, we develop an estimator of the marginal likelihood based on an importance sampling approach to evaluate the observed-data or integrated likelihood. Through a series of Monte Carlo experiments, we show that our marginal likelihood estimator works well and is able to select the correct number of factors under a variety of settings. We then discuss how our VAR with factor stochastic volatility (VAR-FSV) can be used for structural analysis. More specifically, we develop various structural analysis tools for VAR-FSV similar to those designed for standard structural VARs. In particular, we describe methods to construct structural impulse response functions, forecast error variance decompositions and historical decompositions. We demonstrate the methodology by revisiting the 6-variable VAR identified by a set of sign restrictions on the contemporaneous impact matrix considered in \citet{FRS19}. We augment their system to a 20-variable VAR by including additional, seemingly relevant macroeconomic and financial variables, which helps alleviate the concern of informational deficiency. 
In addition, the impulse responses obtained using the VAR-FSV with the sign restrictions imposed are point-identified. Empirically, we show that by including the additional variables and sign restrictions, one can substantially sharpen inference.

Our paper is related to the recent work by \citet{Korobilis20}, who uses a VAR with a factor error structure for structural analysis. His work is motivated by the computational challenge of imposing a large number of sign restrictions to obtain admissible draws using conventional accept-reject methods \citep[such as the widely used algorithm in][]{RWZ10}. This computational hurdle has so far limited the use of sign restrictions to relatively small systems with at most half a dozen dependent variables.\footnote{Large VARs, on the other hand, are mostly identified using recursive or zero restrictions. See, for example, \citet{LSZ96}, \citet{BGR10} and \citet{ER17}.} Instead of using standard structural VARs, \citet{Korobilis20} assumes that the factors in his model play the role of structural shocks, and shows that in this case structural analysis can be done efficiently even when one imposes a large number of sign restrictions. His model, however, is homoscedastic, and consequently it is only set-identified. By contrast, in our VAR-FSV both the factors and the idiosyncratic errors follow stochastic volatility processes. This feature not only accommodates the empirical finding that macroeconomic and financial variables typically exhibit time-varying volatility \citep[see, e.g.,][]{clark11, CR15}, but also allows us to achieve point-identification of the factors and the factor loadings.

Our work also contributes to the recent literature on using heteroscedasticity to identify conventional structural VARs, including \citet{WD15}, \citet{LLM10}, \citet{HL14}, \citet{BB20}, \citet{Lewis21} and \citet{BPSS21}.
Our paper considers the alternative setting of a VAR with a factor stochastic volatility specification and establishes sufficient conditions for identification. One key advantage of using VAR-FSV for structural analysis, compared to structural VARs, is that under VAR-FSV it is computationally feasible to estimate large systems and impose a large number of sign restrictions.

Our work is also related to the growing literature on constructing multivariate stochastic volatility models that are order invariant. One approach is based on Wishart or inverse-Wishart processes; examples include \citet{PG2006}, \citet{AM09}, \citet{CDLS18} and \citet{SZ20}. These models, however, are typically computationally intensive to estimate as the estimation involves drawing from non-standard high-dimensional distributions. As such, these models are generally not applicable to large datasets. An alternative approach is based on the common stochastic volatility models in \citet{CCM16} and \citet{chan20}. Although these models are designed for large systems and can be estimated quickly, they are more restrictive since the time-varying error covariance matrix depends on a single stochastic volatility process---in particular, the error variances are always proportional to each other. There are also order-invariant models that are based on the discounted Wishart process, such as those in \citet{Uhlig97}, \citet{WH06} and \citet{Bognanni18}. These models are convenient to estimate as they admit Kalman-filter type filtering and smoothing algorithms. The cost of this tractability, however, is that they are generally too tightly parameterized, and consequently, they tend to underperform in forecasting macroeconomic variables relative to standard stochastic volatility models such as those of \citet{CS05} and \citet{primiceri05} \citep[see][for an example]{ARRS21}.
Lastly, the recent paper by \citet{CKY21} extends the stochastic volatility model of \citet{CS05} by avoiding the use of the Cholesky decomposition so that the extension is order-invariant. So far this reduced-form VAR has been used only for forecasting, and further research is needed to incorporate identification restrictions for structural analysis.

The rest of this paper is organized as follows. Section~\ref{s:VAR-FSV} first introduces the VAR with factor stochastic volatility. Its theoretical properties, including order invariance and sufficient conditions for identification, are discussed in Section \ref{s:properties}. We then outline a posterior sampler and a marginal likelihood estimator for the model in Section \ref{s:estimation} and Section~\ref{s:ML}, respectively. Next, Section~\ref{s:tools} develops various structural analysis tools for the VAR-FSV model, including algorithms to construct structural impulse response functions and to perform various decompositions. Then, Section~\ref{s:MC} presents Monte Carlo results to illustrate how well the marginal likelihood estimator works under a variety of settings. We next demonstrate the proposed methodology via a structural analysis with sign restrictions in Section~\ref{s:application}. Finally, Section~\ref{s:conclusion} concludes and discusses some future research directions.

\section{A Bayesian VAR with Factor Stochastic Volatility} \label{s:VAR-FSV}

In this section we outline a Bayesian VAR with factor stochastic volatility (FSV) and the associated prior distributions. To that end, let $\mathbf{y}_t$ be an $n\times 1$ vector of dependent variables at time $t$.
Then, for $t=1,\ldots, T$, consider the following VAR-FSV model:
\begin{align}
\mathbf{y}_t & = \mathbf{a}_0 + \mathbf{A}_1 \mathbf{y}_{t-1} + \cdots + \mathbf{A}_p\mathbf{y}_{t-p} + \vect{\epsilon}_t, \label{eq:yt} \\
\vect{\epsilon}_t & = \mathbf{L} \mathbf{f}_t + \mathbf{u}_t^y, \label{eq:epsilont}
\end{align}
where $\mathbf{f}_t = (f_{1,t},\ldots, f_{r,t})'$ denotes an $r\times 1$ vector of latent factors and $\mathbf{L}$ is an $n\times r$ matrix of factor loadings. Note also that $\mathbf{L}$ is unrestricted. The disturbances $\mathbf{u}_t^y$ and the latent factors $\mathbf{f}_t$ are assumed to be independent at all leads and lags. Moreover, they are specified as jointly Gaussian:
\begin{equation} \label{eq:ft} \begin{pmatrix}\mathbf{u}_t^y \\ \mathbf{f}_t \end{pmatrix} \sim\distn{N} \left(\begin{pmatrix} \mathbf{0}\\ \mathbf{0} \end{pmatrix}, \begin{pmatrix} \vect{\Sigma}_t & \mathbf{0} \\ \mathbf{0} & \vect{\Omega}_t \end{pmatrix}\right), \end{equation}
where $\vect{\Sigma}_t = \text{diag}(\text{e}^{h_{1,t}},\ldots, \text{e}^{h_{n,t}})$ and $ \vect{\Omega}_t = \text{diag}(\text{e}^{h_{n+1,t}},\ldots, \text{e}^{h_{n+r,t}})$ are diagonal matrices. For $t=2,\ldots, T$, the log-volatilities evolve as:
\begin{align}
h_{i,t} & = \mu_{i} + \phi_i(h_{i,t-1} - \mu_i) + u_{i,t}^h, \quad u_{i,t}^h\sim\distn{N}(0,\sigma_i^2), \quad i=1,\ldots, n, \label{eq:ht1} \\
h_{n+j,t} & = \phi_{n+j} h_{n+j,t-1} + u_{n+j,t}^h, \quad u_{n+j,t}^h\sim\distn{N}(0,\sigma_{n+j}^2), \quad j=1,\ldots, r, \label{eq:ht2}
\end{align}
where we impose $|\phi_1|<1, \ldots, |\phi_{n+r}|<1$ to ensure stationarity. Finally, the initial conditions follow the stationary distributions $h_{i,1} \sim \distn{N}(\mu_i,\sigma_i^2/(1-\phi_i^2)), i=1,\ldots, n,$ and $h_{n+j,1} \sim \distn{N}(0,\sigma_{n+j}^2/(1-\phi_{n+j}^2)), j=1,\ldots, r$.
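To make the data-generating process above concrete, the following sketch simulates a small VAR-FSV system. All dimensions and parameter values here are hypothetical, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (hypothetical, small for clarity)
n, r, p, T = 4, 2, 1, 200

# Log-volatility AR(1) parameters for the n idiosyncratic and r factor processes
mu   = rng.normal(0.0, 0.5, n)          # unconditional means (idiosyncratic only)
phi  = rng.uniform(0.9, 0.98, n + r)    # persistence, |phi_i| < 1
sig2 = rng.uniform(0.01, 0.05, n + r)   # innovation variances sigma_i^2

# VAR coefficients and factor loadings (hypothetical values)
a0 = rng.normal(0, 0.1, n)
A1 = 0.5 * np.eye(n)
L  = rng.normal(0, 1, (n, r))

# Initialize log-volatilities from their stationary distributions
h = np.empty(n + r)
h[:n] = mu + rng.normal(0, np.sqrt(sig2[:n] / (1 - phi[:n] ** 2)))
h[n:] = rng.normal(0, np.sqrt(sig2[n:] / (1 - phi[n:] ** 2)))  # zero mean (normalization)

y = np.zeros((T + 1, n))  # y[0] is the initial condition
for t in range(1, T + 1):
    f = np.exp(h[n:] / 2) * rng.normal(size=r)   # f_t ~ N(0, Omega_t)
    u = np.exp(h[:n] / 2) * rng.normal(size=n)   # u_t^y ~ N(0, Sigma_t)
    y[t] = a0 + A1 @ y[t - 1] + L @ f + u
    # AR(1) transitions of the log-volatilities
    h[:n] = mu + phi[:n] * (h[:n] - mu) + rng.normal(0, np.sqrt(sig2[:n]))
    h[n:] = phi[n:] * h[n:] + rng.normal(0, np.sqrt(sig2[n:]))

print(y.shape)  # (201, 4)
```

Note how the factor volatilities are mean-zero in logs, matching the normalization in \eqref{eq:ht2}.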
Note that the stationary distributions of the log-volatilities associated with the idiosyncratic errors have nonzero means, whereas the means of those associated with the factors are set to be zero for normalization. To facilitate estimation, we rewrite the VAR in \eqref{eq:yt} as \begin{equation} \label{eq:yt2} \mathbf{y}_t = (\mathbf{I}_n\otimes \mathbf{x}_t')\vect{\beta} + \vect{\epsilon}_t, \end{equation} where $\mathbf{I}_n$ is the identity matrix of dimension $n$, $\otimes$ is the Kronecker product, $\vect{\beta} = \text{vec}([\mathbf{a}_0, \mathbf{A}_1, \ldots, \mathbf{A}_p]')$ and $\mathbf{x}_t = (1,\mathbf{y}_{t-1}',\ldots,\mathbf{y}_{t-p}')'$ is a $k \times 1$ vector of intercept and lagged values with $k=np+1$. Next, we specify the prior distributions on the model parameters. Let $\vect{\beta}_i$ and $\mathbf{l}_i$ denote the VAR coefficients and the elements of $\mathbf{L}$ in the $i$-th equation, respectively, for $i=1,\ldots, n$. We assume the following independent priors on $\vect{\beta}_i$ and $\mathbf{l}_i$ for $i=1,\ldots, n$: \[ \vect{\beta}_i\sim\distn{N}(\vect{\beta}_{0,i},\mathbf{V}_{\vect{\beta}_i}),\quad \mathbf{l}_i\sim\distn{N}(\mathbf{l}_{0,i},\mathbf{V}_{\mathbf{l}_i}). \] We elicit the prior mean vector $\vect{\beta}_{0,i}$ and the prior covariance matrix $\mathbf{V}_{\vect{\beta}_i}$ similar to the Minnesota prior \citep{DLS84, litterman86, KK93}. Specifically, for growth rates data, we set $\vect{\beta}_{0,i} = \mathbf{0}$ to shrink the VAR coefficients to zero. For level data, $\vect{\beta}_{0,i}$ is set to be zero as well except for the coefficient associated with the first own lag, which is set to be one. The prior covariance matrix $\mathbf{V}_{\vect{\beta}_i}$ is constructed so that it depends on two key hyperparameters, $\kappa_1$ and $\kappa_2$, that control respectively the overall shrinkage strength of `own' lags and `other' lags. 
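One common way to operationalize such a Minnesota-style prior covariance is sketched below. The specific scalings (the $1/\text{lag}^2$ decay, the variance-ratio adjustment for `other' lags, and the diffuse intercept variance) are our own illustrative assumptions, not necessarily the paper's exact construction.

```python
import numpy as np

def minnesota_prior_cov(i, n, p, kappa1, kappa2, s2, kappa_intercept=100.0):
    """Build a diagonal prior covariance V_beta_i for equation i of an
    n-variable VAR(p). A sketch under assumed scalings:
      - 'own' lags shrunk by kappa1, 'other' lags by kappa2,
      - both decaying with the squared lag length,
      - s2 (length-n residual sample variances) used for cross-variable scaling.
    """
    v = [kappa_intercept]                # diffuse prior on the intercept
    for lag in range(1, p + 1):
        for j in range(n):
            if j == i:                   # 'own' lag: overall shrinkage kappa1
                v.append(kappa1 / lag**2)
            else:                        # 'other' lag: shrinkage kappa2, rescaled
                v.append(kappa2 * s2[i] / (lag**2 * s2[j]))
    return np.diag(v)

V = minnesota_prior_cov(i=0, n=3, p=2, kappa1=0.04, kappa2=0.0016, s2=np.ones(3))
print(V.shape)  # (7, 7), since k = n*p + 1 = 7
```

Setting $\kappa_2 \ll \kappa_1$ encodes the usual Minnesota belief that a variable's own lags are more informative than other variables' lags.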
For a more detailed discussion of the Minnesota prior, see, e.g., \citet{KK10}, \citet{DNS11} or \citet{karlsson13}. Finally, for the parameters in the stochastic volatility equations, we assume the priors:
\[ \mu_i \sim \distn{N}(\mu_{0,i},V_{\mu_i}), \; \phi_j\sim \distn{N}(\phi_{0,j},V_{\phi_j})1(|\phi_j|<1),\; \sigma_{j}^2 \sim \distn{IG}(\nu_{j},S_{j}), \]
$i=1,\ldots, n$ and $j=1,\ldots, n+r$.

\section{Order Invariance and Identification} \label{s:properties}

In this section we describe a few important properties of the VAR-FSV model specified in \eqref{eq:yt}-\eqref{eq:ht2}. First, the likelihood implied by the model is invariant to the order of the variables (after permuting the relevant parameters appropriately). To see that, let $\mathbf{P}$ be an $n\times n$ permutation matrix such that $\mathbf{P}\bP' = \mathbf{P}'\mathbf{P} = \mathbf{I}_n$. For the $n$-variate Gaussian density $f_{\distn{N}}(\cdot ; \vect{\mu},\vect{\Sigma})$ with mean vector $\vect{\mu}$ and covariance matrix $\vect{\Sigma}$, it is easy to see that $f_{\distn{N}}(\mathbf{x} ; \vect{\mu},\vect{\Sigma}) = f_{\distn{N}}(\mathbf{P}\mathbf{x} ; \mathbf{P}\vect{\mu}, \mathbf{P}\vect{\Sigma}\mathbf{P}')$. Next, we derive an expression for the likelihood function. To that end, stack $\mathbf{h}_t^y = (h_{1,t},\ldots,h_{n,t})'$ and $\mathbf{h}_t^f = (h_{n+1,t},\ldots,h_{n+r,t})'$. We similarly define $\vect{\phi}_y, \vect{\phi}_f, \vect{\sigma}^{2}_y $ and $\vect{\sigma}^{2}_f$. In addition, we let $\mathbf{h}_t = (\mathbf{h}_t^{y'},\mathbf{h}_t^{f'})'$, $\vect{\phi} = (\vect{\phi}_y', \vect{\phi}_f')'$, $\vect{\sigma}^2 = (\vect{\sigma}^{2'}_y, \vect{\sigma}^{2'}_f)'$ and $\vect{\mu} = (\mu_1,\ldots, \mu_n)'$.
Then, the state equations \eqref{eq:ht1}-\eqref{eq:ht2} imply that the densities of $\mathbf{h}_t^y$ and $\mathbf{h}_t^f,$ for $t=2,\ldots, T$, are, respectively,
\[ f_{\distn{N}}(\mathbf{h}_t^y; \vect{\mu} + \vect{\phi}_y\odot(\mathbf{h}_{t-1}^y-\vect{\mu}),\text{diag}(\vect{\sigma}^{2}_y)) \text{ and } f_{\distn{N}}(\mathbf{h}_t^f; \vect{\phi}_f\odot \mathbf{h}_{t-1}^f,\text{diag}( \vect{\sigma}^{2}_f)), \]
where $\odot$ is the element-wise multiplication. Moreover, the initial conditions $\mathbf{h}_1^y$ and $\mathbf{h}_1^f$ have, respectively, the densities
\[ f_{\distn{N}}(\mathbf{h}_1^y; \vect{\mu},\text{diag}(\vect{\sigma}^{2}_y\oslash(\mathbf{1} - \vect{\phi}_y\odot\vect{\phi}_y))) \text{ and } f_{\distn{N}}(\mathbf{h}_1^f; \mathbf{0},\text{diag}(\vect{\sigma}^{2}_f\oslash(\mathbf{1} - \vect{\phi}_f\odot\vect{\phi}_f))), \]
where $\oslash$ denotes the element-wise division; these are the stationary distributions with variances $\sigma_i^2/(1-\phi_i^2)$. Next, using the representation in \eqref{eq:yt2} and integrating out the factors, the density of $\mathbf{y}_t$ given the parameters and log-volatilities is $f_{\distn{N}}(\mathbf{y}_t; (\mathbf{I}_n \otimes \mathbf{x}_t')\vect{\beta},\mathbf{L}\vect{\Omega}_t\mathbf{L}'+ \vect{\Sigma}_t)$.
Stacking $\mathbf{y}=(\mathbf{y}_1',\ldots, \mathbf{y}_T')'$, the likelihood function, or more precisely the integrated or observed-data likelihood, can therefore be written as
\begin{equation}\label{eq:like} \begin{split} p(\mathbf{y} \,|\,\vect{\beta}, & \mathbf{L},\vect{\mu},\vect{\phi},\vect{\sigma}^2) = \\ & \int f_{\distn{N}}(\mathbf{h}_1^y; \vect{\mu},\text{diag}(\vect{\sigma}^{2}_y\oslash(\mathbf{1} - \vect{\phi}_y\odot\vect{\phi}_y))) f_{\distn{N}}(\mathbf{h}_1^f; \mathbf{0},\text{diag}(\vect{\sigma}^{2}_f\oslash(\mathbf{1} - \vect{\phi}_f\odot\vect{\phi}_f))) \\ & \times \prod_{t=2}^T f_{\distn{N}}(\mathbf{h}_t^y; \vect{\mu} + \vect{\phi}_y\odot(\mathbf{h}_{t-1}^y-\vect{\mu}),\text{diag}(\vect{\sigma}^{2}_y))f_{\distn{N}}(\mathbf{h}_t^f; \vect{\phi}_f\odot \mathbf{h}_{t-1}^f,\text{diag}( \vect{\sigma}^{2}_f)) \\ & \times \prod_{t=1}^T f_{\distn{N}}(\mathbf{y}_t; (\mathbf{I}_n \otimes \mathbf{x}_t')\vect{\beta},\mathbf{L}\vect{\Omega}_t\mathbf{L}'+ \vect{\Sigma}_t) \text{d} \mathbf{h}. \end{split} \end{equation}
Now, for an arbitrary permutation matrix $\mathbf{P}$, suppose we permute the order of the dependent variables $\tilde{\mathbf{y}}_t = \mathbf{P}\mathbf{y}_t$ and the associated lagged values $\tilde{\mathbf{x}}_t' = (1,(\mathbf{P}\mathbf{y}_{t-1})',\ldots, (\mathbf{P}\mathbf{y}_{t-p})') = \mathbf{x}_t'\mathbf{Q}'$, where $\mathbf{Q} = \text{diag}(1,\mathbf{I}_p\otimes\mathbf{P})$.
We claim that the likelihood implied by the VAR-FSV model is invariant to the permutation $\mathbf{P}$ in the sense that \[ p(\mathbf{y} \,|\,\vect{\beta},\mathbf{L},\vect{\mu},\vect{\phi},\vect{\sigma}^2) = p(\tilde{\mathbf{y}} \,|\, \tilde{\vect{\beta}},\tilde{\mathbf{L}},\tilde{\vect{\mu}},\tilde{\vect{\phi}},\tilde{\vect{\sigma}}^2), \] where $\tilde{\mathbf{L}} = \mathbf{P}\mathbf{L}, \tilde{\vect{\beta}} = (\mathbf{P}\otimes\mathbf{Q}) \vect{\beta}, \tilde{\vect{\mu}} = \mathbf{P}\vect{\mu}, \tilde{\vect{\phi}} = ((\mathbf{P}\vect{\phi}_y)', \vect{\phi}_f')'$ and $\tilde{\vect{\sigma}}^{2} = ((\mathbf{P}\vect{\sigma}_y^2)', \vect{\sigma}_f^{2'})'$.\footnote{Note that the permuted vector $\tilde{\vect{\beta}}$ consists of the VAR coefficients of the following system stacked by rows: \[ \tilde{\mathbf{y}}_t = \tilde{\mathbf{a}}_0 + \tilde{\mathbf{A}}_1 \tilde{\mathbf{y}}_{t-1} + \cdots + \tilde{\mathbf{A}}_p \tilde{\mathbf{y}}_{t-p} + \tilde{\vect{\epsilon}}_t, \] where $ \tilde{\mathbf{a}}_0 = \mathbf{P}\mathbf{a}_0$ and $\tilde{\mathbf{A}}_j = \mathbf{P}\mathbf{A}_j\mathbf{P}', j=1,\ldots,p.$ That is, \[ \tilde{\vect{\beta}} = \text{vec}\begin{pmatrix} (\mathbf{P}\mathbf{a}_0)' \\ (\mathbf{P}\mathbf{A}_1\mathbf{P}')' \\ \vdots \\ (\mathbf{P}\mathbf{A}_p\mathbf{P}')'\end{pmatrix} = \text{vec}\left( \mathbf{Q} \begin{pmatrix} \mathbf{a}_0' \\ \mathbf{A}_1' \\ \vdots \\ \mathbf{A}_p'\end{pmatrix} \mathbf{P}'\right) = (\mathbf{P}\otimes\mathbf{Q})\text{vec}\begin{pmatrix} \mathbf{a}_0' \\ \mathbf{A}_1' \\ \vdots \\ \mathbf{A}_p'\end{pmatrix} = (\mathbf{P}\otimes\mathbf{Q})\vect{\beta}. \]} That is, we obtain the same likelihood value for any permutation of $\mathbf{y}_t$ if the lagged values and the parameters are permuted accordingly. This claim of order invariance can be readily verified as follows. 
First, noting that
\begin{align*} \mathbf{P}(\mathbf{I}_n \otimes \mathbf{x}_t')\vect{\beta} & = (\mathbf{P}\otimes 1)(\mathbf{I}_n \otimes \mathbf{x}_t')(\mathbf{P}'\otimes\mathbf{Q}') (\mathbf{P}\otimes\mathbf{Q}) \vect{\beta} \\ & = \left((\mathbf{P} \mathbf{I}_n \mathbf{P}') \otimes (1 \mathbf{x}_t' \mathbf{Q}')\right) (\mathbf{P}\otimes\mathbf{Q}) \vect{\beta} \\ & = (\mathbf{I}_n \otimes \tilde{\mathbf{x}}_t')\tilde{\vect{\beta}}, \end{align*}
we therefore obtain
\[ f_{\distn{N}}(\mathbf{y}_t; (\mathbf{I}_n \otimes \mathbf{x}_t')\vect{\beta},\mathbf{L}\vect{\Omega}_t\mathbf{L}'+ \vect{\Sigma}_t) = f_{\distn{N}}(\tilde{\mathbf{y}}_t; (\mathbf{I}_n \otimes \tilde{\mathbf{x}}_t')\tilde{\vect{\beta}}, \tilde{\mathbf{L}}\vect{\Omega}_t\tilde{\mathbf{L}}' + \tilde{\vect{\Sigma}}_t), \]
where $\tilde{\vect{\Sigma}}_t = \mathbf{P}\vect{\Sigma}_t\mathbf{P}'$. Similarly, we also have
\begin{align*} f_{\distn{N}}(\mathbf{h}_1^y; \vect{\mu},\text{diag}(\vect{\sigma}^{2}_y\oslash(\mathbf{1} - \vect{\phi}_y\odot\vect{\phi}_y))) & = f_{\distn{N}}(\mathbf{P}\mathbf{h}_1^y; \mathbf{P}\vect{\mu},\text{diag}((\mathbf{P}\vect{\sigma}^{2}_y)\oslash(\mathbf{1} - (\mathbf{P}\vect{\phi}_y)\odot(\mathbf{P}\vect{\phi}_y)))),\\ f_{\distn{N}}(\mathbf{h}_t^y; \vect{\mu} + \vect{\phi}_y\odot(\mathbf{h}_{t-1}^y-\vect{\mu}),\text{diag}(\vect{\sigma}^{2}_y)) & = f_{\distn{N}}(\mathbf{P}\mathbf{h}_t^y; \mathbf{P}\vect{\mu} + (\mathbf{P}\vect{\phi}_y)\odot(\mathbf{P}\mathbf{h}_{t-1}^y-\mathbf{P}\vect{\mu}), \text{diag}(\mathbf{P}\vect{\sigma}^{2}_y)). \end{align*}
Since the Gaussian densities in \eqref{eq:like} are equal to their permuted counterparts, the integrand in $p(\tilde{\mathbf{y}} \,|\, \tilde{\vect{\beta}},\tilde{\mathbf{L}},\tilde{\vect{\mu}},\tilde{\vect{\phi}},\tilde{\vect{\sigma}}^2)$ is exactly the same as that in \eqref{eq:like}. The only difference between the two integrals is the order of integration: $(\mathbf{h}_t^y,\mathbf{h}_t^f)$ versus $(\mathbf{P}\mathbf{h}_t^y,\mathbf{h}_t^f)$.
But since the integral is finite, one can change the order of integration without changing its value. Hence, the desired result follows. The following proposition summarizes this result.

\begin{proposition}[Order Invariance] \label{thm:invar} \rm Let $p(\mathbf{y} \,|\,\vect{\beta},\mathbf{L},\vect{\mu},\vect{\phi},\vect{\sigma}^2)$ denote the likelihood of the VAR-FSV model with lagged values $\mathbf{x}_1,\ldots, \mathbf{x}_T$. Let $\mathbf{P}$ be an arbitrary $n\times n$ permutation matrix and define $\tilde{\mathbf{y}}_t = \mathbf{P}\mathbf{y}_t$ and $\tilde{\mathbf{x}}_t' = \mathbf{x}_t'\mathbf{Q}'$, where $\mathbf{Q} = \text{diag}(1,\mathbf{I}_p\otimes\mathbf{P})$. Then, the VAR-FSV with dependent variables $\tilde{\mathbf{y}}_t $ and lagged values $\tilde{\mathbf{x}}_t$ has the same likelihood. More precisely, \[ p(\mathbf{y} \,|\,\vect{\beta},\mathbf{L},\vect{\mu},\vect{\phi},\vect{\sigma}^2) = p(\tilde{\mathbf{y}} \,|\, \tilde{\vect{\beta}},\tilde{\mathbf{L}},\tilde{\vect{\mu}},\tilde{\vect{\phi}},\tilde{\vect{\sigma}}^2), \] where $\tilde{\mathbf{L}} = \mathbf{P}\mathbf{L}, \tilde{\vect{\beta}} = (\mathbf{P}\otimes\mathbf{Q}) \vect{\beta}, \tilde{\vect{\mu}} = \mathbf{P}\vect{\mu}, \tilde{\vect{\phi}} = ((\mathbf{P}\vect{\phi}_y)', \vect{\phi}_f')'$ and $\tilde{\vect{\sigma}}^{2} = ((\mathbf{P}\vect{\sigma}_y^2)', \vect{\sigma}_f^{2'})'$. \end{proposition}

Next, we discuss sufficient conditions for identification of the factor loadings and latent factors. We mainly follow the approach in \citet{SF01}, but consider a more general setting in which the idiosyncratic errors $\mathbf{u}_t^y$ in \eqref{eq:epsilont} are also heteroscedastic. First, note that it follows from \eqref{eq:epsilont} and \eqref{eq:ft} that the covariance matrix of $\vect{\epsilon}_t$ is given by $\text{Var}[\vect{\epsilon}_t \,|\, \vect{\Omega}_t,\vect{\Sigma}_t] = \mathbf{L}\vect{\Omega}_t\mathbf{L}'+ \vect{\Sigma}_t :=\vect{\Gamma}_t$.
The covariance structure of any observationally equivalent model to \eqref{eq:yt}--\eqref{eq:ft} with the same number of factors must satisfy $\vect{\Gamma}_t = \mathbf{L}^* \vect{\Omega}_t^* \mathbf{L}^{*'} + \vect{\Sigma}_t^*$ for all $t$, where $ \mathbf{L}^*$ is $n\times r$ and $\vect{\Omega}_t^*$ is $r\times r$. Furthermore, for a square matrix $\mathbf{A}$ of dimension $m$, we define $\text{vecd}(\mathbf{A})$ to be the $m\times 1$ vector that stores its diagonal elements. Now, we consider the following assumptions that are used throughout the paper. \begin{assumption} \label{ass1} \rm The stochastic processes in $\text{vecd}(\vect{\Omega}_t)$ are linearly independent, i.e., there does not exist $\vect{\delta} \in \mathbb{R}^{r}$, $\vect{\delta} \neq \mathbf{0}$ such that $\vect{\delta}'\text{vecd}(\vect{\Omega}_t) = 0$ for all $t$. \end{assumption} \begin{assumption} \label{ass2} \rm If any row of the matrix of factor loadings $\mathbf{L}$ is deleted, there remain two disjoint submatrices of rank~$r$. \end{assumption} Assumption~\ref{ass1} requires that no stochastic volatilities in the common factors can be expressed as a linear combination of other factor stochastic volatilities. Under our factor stochastic volatility model, this assumption is automatically satisfied. Assumption~\ref{ass2} limits the extent of sparseness in the factor loadings matrix to ensure one can separately identify the common and the idiosyncratic components. This assumption can be traced back to \citet{AR56}, and is widely adopted in the literature. Implicitly, it also requires that $r \leq (n-1)/2$. Since factor models are mostly applied to situations where the number of variables $n$ is much larger than the number of factors $r$, this is not a stringent condition. With Assumptions 1 and 2, one can show that the factor stochastic volatility model specified in \eqref{eq:epsilont}-\eqref{eq:ft} is identified up to permutations and sign changes of the factors. 
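This equivalence up to permutations and sign changes can be checked numerically. The sketch below (with hypothetical parameter values, and a symmetric $2\times 2$ swap so that $\mathbf{P}_r' = \mathbf{P}_r$) verifies that a signed permutation of the factors leaves the implied covariance of $\vect{\epsilon}_t$ unchanged:

```python
import numpy as np

rng = np.random.default_rng(3)
n, r = 5, 2

L     = rng.normal(size=(n, r))
Omega = np.diag(rng.uniform(0.5, 2.0, r))  # Omega_t at some fixed t
Sigma = np.diag(rng.uniform(0.1, 0.3, n))  # Sigma_t

# A signed permutation: a permutation P_r followed by a reflection P_pm
P_r  = np.array([[0.0, 1.0], [1.0, 0.0]])  # swap the two factors
P_pm = np.diag([1.0, -1.0])                # flip the sign of the second

L_star     = L @ P_r @ P_pm
Omega_star = P_pm @ P_r @ Omega @ P_r.T @ P_pm.T

# Both parameterizations imply the same covariance of eps_t
G1 = L @ Omega @ L.T + Sigma
G2 = L_star @ Omega_star @ L_star.T + Sigma
print(np.allclose(G1, G2))  # True
```

The check holds at every $t$, which is exactly why extra restrictions are needed to pin down a unique rotation.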
The identification results are summarized in the following proposition.

\begin{proposition}[Identification of the Common Variance Component] \label{thm:iden1} \rm Under Assumption~1 and Assumption~2, the only observationally equivalent model to \eqref{eq:epsilont}-\eqref{eq:ht2} is ${\mathbf{L}}^*={\mathbf{L}}\mathbf{P}_{r}\mathbf{P}_{\pm}$, $\vect{\Omega}_t^* = \mathbf{P}_{\pm}\mathbf{P}_{r}\vect{\Omega}_t\mathbf{P}_{r}'\mathbf{P}_{\pm}'$ and ${\vect{\Sigma}}_t^*={\vect{\Sigma}}_t$, where $\mathbf{P}_{r}$ is a permutation matrix of dimension $r$ and $\mathbf{P}_{\pm}$ is a reflection matrix in which each diagonal entry is either~$+1$ or~$-1$. \end{proposition}

We prove the proposition by adapting the results in \citet{AR56} and \citet{SF01} to our setting. The details are provided in Appendix~A. Proposition~\ref{thm:iden1} contains two sets of identification results. First, it shows that the common and idiosyncratic variance components can be separately identified. Second, the common variance component is identified up to permutations and sign switches of the latent factors.

So far we have only considered the case where all $r$ factors are heteroskedastic. It turns out this is not necessary for identification of the common variance component. More generally, one can show that part of the factor loadings matrix is identified even when some of the factors are homoskedastic (their variances are normalized to be one). The following proposition summarizes such a partial identification result.

\begin{proposition}[Partial Identification of the Common Variance Component When the Number of Heteroskedastic Factors Is $r_1 < r$] \label{thm:iden2} \rm Let $\vect{\Omega}_t = \diag(\vect{\Omega}_{1t},\mathbf{I}_{r_2})$, where $\vect{\Omega}_{1t}$ is an $r_1\times r_1$ covariance matrix and $\mathbf{I}_{r_2}$ is the $r_2$-dimensional identity matrix with $r=r_1+r_2$.
Similarly partition $\mathbf{L} = (\mathbf{L}_1, \mathbf{L}_2)$ such that $\mathbf{L}_1$ is $n\times r_1$ and $\mathbf{L}_2$ is $n\times r_2$. If $\text{diag}(\vect{\Omega}_{1t},1)$ satisfies Assumption 1 and $\mathbf{L}$ satisfies Assumption~2, then $\mathbf{L}_1$ is identified up to permutations and sign switches. \end{proposition} The proof of this proposition is given in Appendix~A. The condition that $\text{diag}(\vect{\Omega}_{1t},1)$ satisfies Assumption 1---i.e., $(\text{vecd}(\vect{\Omega}_{1t})', 1)'$ are linearly independent for all $t$---requires all stochastic processes in $\text{vecd}(\vect{\Omega}_{1t})$ to be non-degenerate. (Otherwise those homoskedastic factors should be relocated to the homoskedastic part.) It is also worth noting that Proposition~\ref{thm:iden2} does not imply that for $r_1<r$, the common variance component is not identifiable. In fact, it turns out that the minimum number of heteroskedastic factors for identifying $\mathbf{L}$ (up to permutations and sign switches) is $ r_1 = r-1$. We summarize this result in the following corollary. \begin{corollary} \label{thm:coro1} \rm Under the assumptions in Proposition~\ref{thm:iden2}, if the number of heteroskedastic factors is $r_1\geq r-1$, then $\mathbf{L}$ is identified up to permutations and sign switches. \end{corollary} The reason why only $r_1 = r-1$ heteroskedastic factors are needed for identification is intuitive. Under the assumptions in Proposition~\ref{thm:iden2}, when $r_1 = r-1$, only one element in $\vect{\Omega}_t$ is normalized to one; the remaining $r-1$ stochastic processes in $\vect{\Omega}_{1t}$ are linearly independent. Consequently, $\vect{\Omega}_t$ also satisfies Assumption 1. And Corollary~\ref{thm:coro1} follows from Proposition~\ref{thm:iden1}. For $r_1 < r-1$, part of the factor loadings matrix $\mathbf{L}$ is invariant under general orthogonal transformation. To see that, suppose $r_1 < r-1$, and hence $\mathbf{L}_2$ has at least $r_2 \geq 2$ columns. 
Let $\mathbf{R}_{f_2}$ be an $r_2\times r_2$ orthogonal matrix other than a permutation $\mathbf{P}_{r_2}$ or a reflection $\mathbf{P}_{\pm}$ such that $\mathbf{R}_{f_2}\mathbf{R}_{f_2}' = \mathbf{I}_{r_2}$.\footnote{The only one-dimensional orthogonal matrices are reflections, namely, $+1$ and $-1$. Hence, $r_2$ must be at least 2.} Then, we have $\mathbf{L}\vect{\Omega}_t\mathbf{L}' = \mathbf{L}_1\vect{\Omega}_{1t}\mathbf{L}_1' + \mathbf{L}_2\mathbf{L}_2' = \mathbf{L}_1\vect{\Omega}_{1t}\mathbf{L}_1' + \mathbf{L}_2\mathbf{R}_{f_2} \mathbf{R}_{f_2}'\mathbf{L}_2'.$ Hence, $\mathbf{L}^* = (\mathbf{L}_1,\mathbf{L}_2\mathbf{R}_{f_2})$ and $\vect{\Omega}^*_t = \vect{\Omega}_t$ form an observationally equivalent model.

For point-identification, one needs additional restrictions on $\mathbf{L}$ (or the latent factors) to pin down the unique permutation and sign configuration. As is common in macroeconomic analysis using VARs, sign restrictions implied by economic theory are often available to assist structural identification. For a recent contribution linking sign restrictions and factor models, see \citet{Korobilis20}. Below we describe how we can incorporate sign restrictions to achieve point-identification. To that end, let $\mathbf{S}$ denote the $n\times r$ matrix that collects the corresponding restrictions on the factor loadings matrix $\mathbf{L}$. The entries of $\mathbf{S}$ can take four values: $+1$, $-1$, $0$ and N/A, denoting a positive restriction, a negative restriction, a zero restriction and no restriction, respectively. For example, if economic theory implies that $\mathbf{L}_{ij}>0 $, then $\mathbf{S}_{ij} = +1$; if there are no restrictions on $\mathbf{L}_{ij} $, then $\mathbf{S}_{ij} = \text{N/A}$.
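To illustrate how such a restriction matrix narrows down the admissible rotations, the following sketch (with a hypothetical $3\times 2$ matrix $\mathbf{S}$) enumerates all signed permutations of the columns and keeps those that reproduce the same restriction pattern:

```python
import numpy as np
from itertools import permutations, product

# Encode restrictions: +1, -1, 0, and np.nan for 'N/A' (no restriction)
S = np.array([[1.0,    np.nan],
              [np.nan, -1.0  ],
              [np.nan, np.nan]])   # hypothetical 3 x 2 restriction matrix

r = S.shape[1]
admissible = []
for perm in permutations(range(r)):
    for signs in product([1.0, -1.0], repeat=r):
        # Column j of S*P is sign_j times column perm[j] of S
        SP = S[:, list(perm)] * np.array(signs)
        # Equivalent under the restrictions iff the patterns match exactly
        same = np.array_equal(np.isnan(SP), np.isnan(S)) and \
               np.array_equal(np.nan_to_num(SP), np.nan_to_num(S))
        if same:
            admissible.append((perm, signs))

print(admissible)  # [((0, 1), (1.0, 1.0))] -- only the identity survives
```

Here each column of $\mathbf{S}$ carries at least one sign restriction and no column coincides with another up to sign, so the enumeration leaves only the identity rotation.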
Recall that under Assumptions 1-2, Proposition~\ref{thm:iden1} dictates that the factor loadings matrix $\mathbf{L}^*$ of any observationally equivalent model must be of the form $\mathbf{L}^* = \mathbf{L}\mathbf{P}$, where $\mathbf{P}$ is a product of a reflection and a permutation. To be observationally equivalent with the sign restrictions imposed---i.e., satisfying exactly the same sign restrictions---we must have $\mathbf{S}\mathbf{P} = \mathbf{S}$. Intuitively then, for point-identification of $\mathbf{L}$ there must be enough structure in $\mathbf{S}$ such that the only possible $\mathbf{P}$ is the identity matrix. Now, suppose that each column of $\mathbf{S}$ has at least one sign restriction and no column is the same as, or the negative of, any other column. These conditions are sufficient as they rule out any permutations or sign changes except the identity. To see that the conditions are necessary, suppose there is a column that has no sign restrictions. Then, changing the sign of the associated column in $\mathbf{L}$ (and the associated rows in $\mathbf{f}_t$) would leave the model observationally equivalent. Next, if one column is the same as, or the negative of, another column, then we can permute (and change signs if necessary) the relevant columns to leave the model observationally equivalent. We summarize these results in the following corollary.

\begin{corollary} \label{thm:coro2} \rm Under Assumptions 1-2, the necessary and sufficient conditions for point-identification of the factor loadings matrix are that each column of $\mathbf{S}$ has at least one sign restriction and no column is the same as, or the negative of, any other column. \end{corollary}

\section{Bayesian Estimation} \label{s:estimation}

In this section we describe an efficient posterior sampler to estimate the model in \eqref{eq:yt}--\eqref{eq:ht2} with sign or zero restrictions specified in $\mathbf{S}$.
Below we note a few details in our implementation with the goal of improving speed and sampling efficiency. First, even though the factors $\mathbf{f}_1,\ldots, \mathbf{f}_T$ are conditionally independent given the data and other parameters, we sample them jointly in one step using the precision sampler of \citet{CJ09}---instead of drawing them sequentially in a for-loop---to speed up the computations. Second, since VARs tend to have a lot of parameters even for small and medium systems, we implement an equation-by-equation estimation approach similar in spirit to that in \citet{CCM19} to sample the VAR coefficients. Specifically, given the latent factors $\mathbf{f}$, the VAR becomes $n$ unrelated regressions, and one can sample the VAR coefficients equation by equation without any loss of efficiency. Third, with the sign restrictions imposed in $\mathbf{S}$, the full conditional distribution of the factor loadings in each equation becomes a truncated multivariate normal distribution. To sample from such a distribution, we use the algorithm in \citet{botev17} that is based on quadratic programming. For notational convenience, stack $\mathbf{y} = (\mathbf{y}_1',\ldots, \mathbf{y}_T')',$ $\mathbf{f} = (\mathbf{f}_1',\ldots, \mathbf{f}_T')',$ $\mathbf{h} = (\mathbf{h}_1',\ldots, \mathbf{h}_{T}')'$ and $\vect{\beta} = (\vect{\beta}_1',\ldots, \vect{\beta}_n')'$. In addition, let $\mathbf{y}_{i,\boldsymbol{\cdot}} = (y_{i,1},\ldots, y_{i,T})'$ denote the vector of observed values for the $i$-th variable, $i=1,\ldots, n$. We similarly define $\mathbf{h}_{i,\boldsymbol{\cdot}} = (h_{i,1},\ldots, h_{i,T})', i=1,\ldots, n+r$. 
Then, posterior draws can be obtained by sampling sequentially from: \begin{enumerate} \item $p(\mathbf{f} \,|\, \mathbf{y}, \vect{\beta}, \mathbf{L}, \mathbf{h}, \vect{\mu}, \vect{\phi},\vect{\sigma}^2) = p(\mathbf{f} \,|\, \mathbf{y}, \vect{\beta}, \mathbf{L}, \mathbf{h})$; \item $p(\vect{\beta},\mathbf{L} \,|\, \mathbf{y}, \mathbf{f}, \mathbf{h}, \vect{\mu}, \vect{\phi},\vect{\sigma}^2) = \prod_{i=1}^n p(\vect{\beta}_i,\mathbf{l}_i \,|\, \mathbf{y}_{i,\boldsymbol{\cdot}}, \mathbf{f}, \mathbf{h}_{i,\boldsymbol{\cdot}})$; \item $p(\mathbf{h} \,|\, \mathbf{y}, \mathbf{f},\vect{\beta}, \mathbf{L}, \vect{\mu}, \vect{\phi},\vect{\sigma}^2) = \prod_{i=1}^{n+r} p(\mathbf{h}_{i,\boldsymbol{\cdot}} \,|\, \mathbf{y}, \mathbf{f}, \vect{\beta}, \mathbf{L}, \vect{\mu}, \vect{\phi},\vect{\sigma}^2)$; \item $p(\vect{\sigma}^2 \,|\, \mathbf{y}, \mathbf{f}, \vect{\beta}, \mathbf{L}, \mathbf{h}, \vect{\mu}, \vect{\phi}) = \prod_{i=1}^{n+r} p(\sigma_i^2 \,|\, \mathbf{h}_{i,\boldsymbol{\cdot}}, \mu_i, \phi_i)$; \item $p(\vect{\mu} \,|\, \mathbf{y}, \mathbf{f}, \vect{\beta}, \mathbf{L}, \mathbf{h}, \vect{\phi},\vect{\sigma}^2) = \prod_{i=1}^{n} p(\mu_i \,|\, \mathbf{h}_{i,\boldsymbol{\cdot}}, \phi_i, \sigma^2_i) $; \item $p(\vect{\phi} \,|\, \mathbf{y}, \mathbf{f}, \vect{\beta}, \mathbf{L}, \mathbf{h}, \vect{\mu},\vect{\sigma}^2) = \prod_{i=1}^{n+r} p(\phi_i \,|\, \mathbf{h}_{i,\boldsymbol{\cdot}}, \mu_i, \sigma^2_i) $. \end{enumerate} \textbf{Step 1}. As mentioned above, since the factors $\mathbf{f}_1,\ldots, \mathbf{f}_T$ are conditionally independent given other parameters, in principle one can sample each factor sequentially in a for-loop. Here, however, we vectorize all the operations and sample them jointly in one step to improve computational speed. 
More specifically, we first stack the VAR in \eqref{eq:yt}-\eqref{eq:epsilont} over $t=1,\ldots, T$ and write it as: \[ \mathbf{y} = \mathbf{X}\vect{\beta} + (\mathbf{I}_T\otimes \mathbf{L})\mathbf{f} + \mathbf{u}^y, \quad \mathbf{u}^y \sim \distn{N}(\mathbf{0},\vect{\Sigma}), \] where $\mathbf{X}$ is the matrix of intercepts and lagged values and $\vect{\Sigma} = \text{diag}(\vect{\Sigma}_1,\ldots, \vect{\Sigma}_T)$ with $\vect{\Sigma}_t = \text{diag}(\text{e}^{\mathbf{h}_t^y})$. In addition, it follows from \eqref{eq:ft} that $(\mathbf{f} \,|\, \mathbf{h})\sim \distn{N}(\mathbf{0},\vect{\Omega}),$ where $\vect{\Omega} = \text{diag}(\vect{\Omega}_1,\ldots, \vect{\Omega}_T)$ with $\vect{\Omega}_t = \text{diag}(\text{e}^{\mathbf{h}_t^f})$. Then, by standard linear regression results \citep[see, e.g.,][chapter 12]{CKPT19}, we have \begin{equation}\label{eq:postf} (\mathbf{f} \,|\,\mathbf{y}, \vect{\beta}, \mathbf{L}, \mathbf{h}) \sim \distn{N}(\hat{\mathbf{f}},\mathbf{K}_{\mathbf{f}}^{-1}), \end{equation} where \begin{equation} \label{eq:Kf} \mathbf{K}_{\mathbf{f}} = \vect{\Omega}^{-1} + (\mathbf{I}_T\otimes \mathbf{L}')\vect{\Sigma}^{-1}(\mathbf{I}_T\otimes \mathbf{L}), \quad \hat{\mathbf{f}} = \mathbf{K}_{\mathbf{f}}^{-1}(\mathbf{I}_T\otimes \mathbf{L}')\vect{\Sigma}^{-1}(\mathbf{y}-\mathbf{X}\vect{\beta}). \end{equation} Note that the precision matrix $\mathbf{K}_{\mathbf{f}}$ is a band matrix, i.e., it is sparse and all the nonzero entries are arranged along the diagonal bands above and below the main diagonal. As such, one can use the precision sampler of \citet{CJ09} to sample $\mathbf{f}$ efficiently. \textbf{Step 2}. Next, we sample $\vect{\beta}$ and $\mathbf{L}$ jointly to improve sampling efficiency. Given the latent factors $\mathbf{f}$, the VAR becomes $n$ unrelated regressions and we can sample $\vect{\beta}$ and $\mathbf{L}$ equation by equation. 
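Since the factor equation carries no dynamics, $\mathbf{K}_{\mathbf{f}}$ in \eqref{eq:Kf} is in fact block diagonal (a special case of a band matrix). A toy numpy sketch, standing in for the \citet{CJ09} precision sampler, checks the joint mean $\hat{\mathbf{f}}$ against $T$ separate per-period GLS calculations and draws from $\distn{N}(\hat{\mathbf{f}},\mathbf{K}_{\mathbf{f}}^{-1})$ via the Cholesky factor of the precision (all inputs randomly generated for illustration):

```python
import numpy as np

def block_diag(blocks):
    """Stack square matrices along the diagonal (numpy-only helper)."""
    m = sum(b.shape[0] for b in blocks)
    out = np.zeros((m, m))
    i = 0
    for b in blocks:
        k = b.shape[0]
        out[i:i + k, i:i + k] = b
        i += k
    return out

rng = np.random.default_rng(1)
T, n, r = 4, 3, 2
L = rng.standard_normal((n, r))
resid = rng.standard_normal(T * n)          # stands in for y - X beta
Sigma_t = [np.diag(rng.uniform(0.5, 2.0, n)) for _ in range(T)]
Omega_t = [np.diag(rng.uniform(0.5, 2.0, r)) for _ in range(T)]

# K_f = Omega^{-1} + (I_T kron L') Sigma^{-1} (I_T kron L), as in the text.
Sigma_inv = block_diag([np.linalg.inv(S) for S in Sigma_t])
Omega_inv = block_diag([np.linalg.inv(O) for O in Omega_t])
IL = np.kron(np.eye(T), L)
K_f = Omega_inv + IL.T @ Sigma_inv @ IL
f_hat = np.linalg.solve(K_f, IL.T @ Sigma_inv @ resid)

# The joint mean must agree with T separate per-period GLS calculations.
for t in range(T):
    Kt = np.linalg.inv(Omega_t[t]) + L.T @ np.linalg.inv(Sigma_t[t]) @ L
    bt = L.T @ np.linalg.inv(Sigma_t[t]) @ resid[t * n:(t + 1) * n]
    assert np.allclose(f_hat[t * r:(t + 1) * r], np.linalg.solve(Kt, bt))

# One draw f ~ N(f_hat, K_f^{-1}) via the Cholesky factor of the precision:
# if z ~ N(0, I), then f_hat + C'^{-1} z has covariance (C C')^{-1} = K_f^{-1}.
C = np.linalg.cholesky(K_f)                 # K_f = C C'
f_draw = f_hat + np.linalg.solve(C.T, rng.standard_normal(T * r))
assert f_draw.shape == (T * r,)
```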
Recall that $\mathbf{y}_{i,\boldsymbol{\cdot}} = (y_{i,1},\ldots, y_{i,T})'$ is defined to be the $T\times 1$ vector of observations for the $i$-th variable; and that $\vect{\beta}_i$ and $\mathbf{l}_i$ represent, respectively, the VAR coefficients and the factor loadings in the $i$-th equation. Then, the $i$-th equation of the VAR can be written as \[ \mathbf{y}_{i,\boldsymbol{\cdot}} = \mathbf{X}_i\vect{\beta}_i + \mathbf{F} \mathbf{l}_i + \mathbf{u}_{i,\boldsymbol{\cdot}}^y, \] where $\mathbf{F} = (\mathbf{f}_{1,\boldsymbol{\cdot}},\ldots, \mathbf{f}_{r,\boldsymbol{\cdot}})$ is the $T\times r$ matrix of factors with $\mathbf{f}_{i,\boldsymbol{\cdot}} = (f_{i,1},\ldots, f_{i,T})'$. The vector of disturbances $\mathbf{u}_{i,\boldsymbol{\cdot}}^y= (u_{i,1},\ldots, u_{i,T})'$ is distributed as $\distn{N}(\mathbf{0},\vect{\Omega}_{\mathbf{h}_{i,\boldsymbol{\cdot}}})$, where $\vect{\Omega}_{\mathbf{h}_{i,\boldsymbol{\cdot}}} =\text{diag}(\text{e}^{h_{i,1}},\ldots, \text{e}^{h_{i,T}})$.\footnote{Note that zero restrictions on $ \mathbf{l}_i$ can be easily handled by redefining $\mathbf{l}_i $ and $ \mathbf{F}$ appropriately. For example, if the first element of $\mathbf{l}_i $ is restricted to be zero, we can define $\tilde{\mathbf{l}}_i $ to be the vector consisting of the second to $r$-th elements of $\mathbf{l}_i$ and $\tilde{\mathbf{F}} = (\mathbf{f}_{2,\boldsymbol{\cdot}},\ldots, \mathbf{f}_{r,\boldsymbol{\cdot}})$. Then, we replace $\mathbf{F} \mathbf{l}_i$ by $\tilde{\mathbf{F}}\tilde{\mathbf{l}}_i$.} Letting $\vect{\theta}_i = (\vect{\beta}_i',\mathbf{l}_i')'$ and $\mathbf{Z}_i = (\mathbf{X}_i,\mathbf{F})$, we can write the $i$-th equation more compactly as \[ \mathbf{y}_{i,\boldsymbol{\cdot}} =\mathbf{Z}_i\vect{\theta}_i + \mathbf{u}_{i,\boldsymbol{\cdot}}^y. \] Let $R_i\subset \mathbb{R}^r$ be the support of $\mathbf{l}_i$ defined by the sign restrictions specified in the $i$-th row of $\mathbf{S}$. 
Then, using standard linear regression results, we obtain: \[ (\vect{\theta}_i \,|\, \mathbf{y}_{i,\boldsymbol{\cdot}}, \mathbf{f}, \mathbf{h}_{i,\boldsymbol{\cdot}}) \sim \distn{N}(\hat{\vect{\theta}}_i,\mathbf{K}_{\vect{\theta}_i}^{-1})1(\mathbf{l}_i\in R_i), \] where \[ \mathbf{K}_{\vect{\theta}_i} = \mathbf{V}_{\vect{\theta}_i}^{-1} + \mathbf{Z}_i'\vect{\Omega}_{\mathbf{h}_{i,\boldsymbol{\cdot}}}^{-1}\mathbf{Z}_i, \quad \hat{\vect{\theta}}_i = \mathbf{K}_{\vect{\theta}_i}^{-1}(\mathbf{V}_{\vect{\theta}_i}^{-1}\vect{\theta}_{0,i} + \mathbf{Z}_i'\vect{\Omega}_{\mathbf{h}_{i,\boldsymbol{\cdot}}}^{-1} \mathbf{y}_{i,\boldsymbol{\cdot}}) \] with $\mathbf{V}_{\vect{\theta}_i} = \text{diag}(\mathbf{V}_{\vect{\beta}_i},\mathbf{V}_{\mathbf{l}_i})$ and $\vect{\theta}_{0,i} = (\vect{\beta}_{0,i}',\mathbf{l}_{0,i}')'$. A draw from this truncated multivariate normal distribution can be obtained using the algorithm in \citet{botev17}. The remaining steps are standard and we leave the details to Appendix B. Some simulation results are reported in Appendix D to show that the posterior sampler works well and the posterior estimates track the true values closely. It is worth noting that Proposition~\ref{thm:iden1} only guarantees that the factors and factor loadings are identified up to permutations and sign changes. Hence, in practice one might encounter the so-called label-switching problem. One way to handle this issue is to post-process the posterior draws to sort them into the correct categories; see, e.g., \citet{KS19} for such an approach. In our empirical application we impose sign restrictions that satisfy Corollary \ref{thm:coro2}---and consequently the factors and factor loadings are point-identified. Next, we document the runtimes of estimating the VAR-FSV of different dimensions to assess how well the posterior sampler scales to higher dimensions. 
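The \citet{botev17} minimax-tilting algorithm is beyond a short sketch, but a naive rejection sampler conveys the idea of drawing from a sign-truncated multivariate normal (it is workable only when the restricted region has non-negligible probability; all numbers below are illustrative):

```python
import numpy as np

def sample_sign_truncated_normal(mean, cov, sign, rng, max_tries=10_000):
    """Draw from N(mean, cov) subject to elementwise sign restrictions.

    sign[j] = +1 forces x[j] > 0, -1 forces x[j] < 0, 0 leaves x[j] free.
    Plain rejection sampling -- a crude stand-in for Botev (2017).
    """
    C = np.linalg.cholesky(cov)
    for _ in range(max_tries):
        x = mean + C @ rng.standard_normal(len(mean))
        if np.all(x[sign == 1] > 0) and np.all(x[sign == -1] < 0):
            return x
    raise RuntimeError("rejection failed; use a tilted/specialized method")

rng = np.random.default_rng(2)
mean = np.array([0.5, -0.3, 0.0])
cov = np.array([[1.0, 0.2, 0.0],
                [0.2, 1.0, 0.1],
                [0.0, 0.1, 1.0]])
sign = np.array([1, -1, 0])                 # l_1 > 0, l_2 < 0, l_3 free
draws = np.array([sample_sign_truncated_normal(mean, cov, sign, rng)
                  for _ in range(200)])
assert np.all(draws[:, 0] > 0) and np.all(draws[:, 1] < 0)
```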
More specifically, we report in Table~\ref{tab:times} the computation times (in minutes) to obtain 10,000 posterior draws from the VAR-FSV of dimensions $n= 15, 30, 50$ and sample sizes $T=300, 800$. The posterior sampler is implemented in $\mathrm{M}\mathrm{{\scriptstyle ATLAB}}$ on a typical desktop with an Intel Core i5-9600 @3.10 GHz processor and 16 GB memory. It is evident from the table that even for high-dimensional applications with 50 variables, the VAR-FSV model with sign restrictions imposed on the factor loadings can be estimated fairly quickly. \begin{table}[H] \centering \caption{The computation times (in minutes) to obtain 10,000 posterior draws from the VAR-FSV model with $n$ variables and $T$ observations. All VARs have $r=4$ factors and $p = 4$ lags.} \label{tab:times} \begin{tabular}{cccccc} \hline \hline \multicolumn{3}{c}{$T = 300$} & \multicolumn{3}{c}{$T = 800$} \\ $n = 15$ & $n=30$ & $n=50$ & $n = 15$ & $n=30$ & $n=50$ \\ \hline 12.5 & 25.7 & 45.0 & 33.3 & 67.3 & 110.2 \\ \hline \hline \end{tabular} \end{table} \section{Bayesian Model Comparison} \label{s:ML} This section first gives a brief overview on the theory of Bayesian model comparison via the marginal likelihood. Then, we introduce an algorithm to evaluate the likelihood, or more precisely the integrated likelihood marginal of the latent states, implied by the VAR-FSV model. Finally, we present an adaptive importance sampling algorithm to estimate the marginal likelihood under the VAR-FSV model. Suppose we wish to compare a collection of models $\{M_{1},\ldots, M_{K} \}$, where each model $M_k$ is defined by a likelihood function $p(\mathbf{y}\,|\, \vect{\theta}_k, M_{k})$ and a prior on the model-specific parameter vector $\vect{\theta}_k$ denoted by $p(\vect{\theta}_k \,|\, M_k)$. 
The gold standard for Bayesian model comparison is the Bayes factor in favor of $M_i$ against $M_j$, defined as \[ \text{BF}_{ij} = \frac{p(\mathbf{y}\,|\, M_i)}{p(\mathbf{y}\,|\, M_j)}, \] where \begin{equation}\label{eq:ml} p(\mathbf{y}\,|\, M_{k}) = \int p(\mathbf{y}\,|\, \vect{\theta}_k, M_{k}) p(\vect{\theta}_k\,|\, M_{k})\text{d}\vect{\theta}_k \end{equation} is the \emph{marginal likelihood} under model $M_k$, $k=i,j.$ This Bayes factor is related to the posterior odds ratio between the two models: \[ \frac{\mathbb P(M_i\,|\,\mathbf{y})}{\mathbb P(M_j\,|\,\mathbf{y})} = \frac{\mathbb P(M_i)}{\mathbb P(M_j)}\times \text{BF}_{ij}, \] where $\mathbb P(M_i)/\mathbb P(M_j)$ is the prior odds ratio. It is clear that if both models are equally probable \textit{a priori}, i.e., $\mathbb P(M_i) = \mathbb P(M_j)$, then the posterior odds ratio between the two models is equal to the Bayes factor. Hence, the Bayes factor has a natural interpretation and is easy to understand. For example, under equal prior odds, if $\text{BF}_{ij} = 50$, then model $M_i$ is 50 times more likely than model $M_j$ given the data. For a more detailed discussion of the Bayes factor and its role in Bayesian model comparison, see \citet{koop03} or \citet{CKPT19}. From here onwards we suppress the model indicator. \subsection{Integrated Likelihood Evaluation} \label{ss:intlike} To estimate the marginal likelihood, we first present an efficient way to evaluate the likelihood, or more precisely the integrated likelihood marginal of the latent states, given in~\eqref{eq:like}. 
For notational convenience, we rewrite~\eqref{eq:like} as \begin{equation}\label{eq:like2} p(\mathbf{y} \,|\,\vect{\beta}, \mathbf{L},\vect{\mu},\vect{\phi},\vect{\sigma}^2) = \int p(\mathbf{y}\,|\,\vect{\beta},\mathbf{L},\mathbf{h}) p(\mathbf{h} \,|\, \vect{\mu},\vect{\phi},\vect{\sigma}^2) \text{d}\mathbf{h}, \end{equation} where the conditional density of $\mathbf{y}$ given $\mathbf{h}$ but marginal of $\mathbf{f}$ has the explicit expression \[ p(\mathbf{y}\,|\,\vect{\beta},\mathbf{L},\mathbf{h}) = (2\pi)^{-\frac{Tn}{2}}\prod_{t=1}^T|\mathbf{L}\vect{\Omega}_t\mathbf{L}'+\vect{\Sigma}_t|^{- \frac{1}{2}} \text{e}^{-\frac{1}{2}(\mathbf{y}_t - (\mathbf{I}_n\otimes\mathbf{x}_t')\vect{\beta})'(\mathbf{L}\vect{\Omega}_t\mathbf{L}'+\vect{\Sigma}_t)^{-1} (\mathbf{y}_t - (\mathbf{I}_n\otimes\mathbf{x}_t')\vect{\beta})}. \] The second term of the integrand, $p(\mathbf{h} \,|\, \vect{\mu},\vect{\phi},\vect{\sigma}^2)$, is a $T(n+r)$-variate Gaussian density implied by the state equations specified in \eqref{eq:ht1}-\eqref{eq:ht2}. Its analytical expression is given in Appendix~C, and in particular, its precision matrix is banded. Hence, both densities can be evaluated quickly. The main difficulty in evaluating the integrated likelihood in \eqref{eq:like2}, however, is that it requires integrating out all the latent log-volatilities, which involves solving a $T(n+r)$-dimensional integral. In what follows, we adapt the importance sampling approach developed for time-varying parameter VARs in \citet{CE18} to our setting.\footnote{There is a long tradition of using importance sampling to evaluate the integrated likelihood of stochastic volatility models. 
Earlier papers, such as \citet{DK97}, \citet{SP97}, \citet{KH02}, \citet{FW08}, \citet{McCausland12}, have focused mostly on univariate stochastic volatility models.} More specifically, given an importance sampling density $g$---that might depend on model parameters and the data---we evaluate the integrated likelihood via importance sampling: \begin{equation}\label{eq:IS_intlike} \hat{p}(\mathbf{y} \,|\, \vect{\beta}, \mathbf{L},\vect{\mu},\vect{\phi},\vect{\sigma}^2) = \frac{1}{R_1} \sum_{r=1}^{R_1}\frac{p(\mathbf{y}\,|\,\vect{\beta},\mathbf{L},\mathbf{h}^{(r)})p(\mathbf{h}^{(r)}\,|\, \vect{\mu},\vect{\phi},\vect{\sigma}^2)} {g(\mathbf{h}^{(r)}; \mathbf{y}, \vect{\beta}, \mathbf{L},\vect{\mu},\vect{\phi},\vect{\sigma}^2)}, \end{equation} where $\mathbf{h}^{(1)},\ldots, \mathbf{h}^{(R_1)}$ are independent draws from $g$. The choice of the importance sampling density $g$ is vital as it determines the variance of the estimator. In general, we would like to use an importance sampling density that well approximates the integrand in \eqref{eq:like2}. Our particular choice is motivated by the observation that there is, in fact, a theoretical zero-variance importance sampling density---it is $p(\mathbf{h}\,|\,\mathbf{y},\vect{\beta},\mathbf{L},\vect{\mu},\vect{\phi},\vect{\sigma}^2)$, the conditional posterior distribution of $\mathbf{h}$ given the other parameters but marginal of $\mathbf{f}$. In practice, however, this density cannot be used as an importance sampling density as it is non-standard (e.g., its normalizing constant is unknown and it is unclear how one can efficiently generate samples from this density). But this observation provides us with guidance for selecting a good importance sampling density. In particular, we aim to approximate this ideal importance sampling density using a Gaussian density. This is accomplished as follows. 
We first develop an expectation-maximization (EM) algorithm to locate the mode of $\log p(\mathbf{h}\,|\,\mathbf{y},\vect{\beta},\mathbf{L},\vect{\mu},\vect{\phi},\vect{\sigma}^2)$, denoted as $\hat{\mathbf{h}}$. Then, we obtain the negative Hessian of this log-density evaluated at the mode, denoted as $\mathbf{K}_{\mathbf{h}}$. The mode and the negative Hessian are then used, respectively, as the mean vector and precision matrix of the Gaussian approximation. That is, the importance sampling density is $\distn{N}(\hat{\mathbf{h}},\mathbf{K}_{\mathbf{h}}^{-1}).$ We leave the technical details to Appendix~C. Below we comment on a few computational details. First, in the M-step of the EM algorithm, one needs to solve a $T(n+r)$-dimensional maximization problem, which is in general extremely computationally intensive. In our case, however, we are able to obtain analytical expressions of the gradient and the Hessian of the objective function (i.e., the Q-function), which allows us to implement the Newton-Raphson method. Furthermore, one can show that the Hessian is a) negative definite everywhere in $\mathbb{R}^{T(n+r)}$, and b) a band matrix. The former property guarantees rapid convergence of the Newton-Raphson method, while the latter property substantially speeds up the computations. Second, to construct the importance sampling estimator in \eqref{eq:IS_intlike}, one needs to both evaluate and sample from the $T(n+r)$-dimensional Gaussian importance sampling density $R_1$ times. For very high-dimensional Gaussian densities, both operations are generally computationally costly. For our Gaussian importance sampling density, however, we can show that its precision matrix is banded. As such, samples from this Gaussian density can be obtained quickly using the precision sampler in \citet{CJ09}. Evaluation of the density can be done just as quickly. We summarize the evaluation of the integrated likelihood in Algorithm \ref{alg:intlike}. 
\begin{algorithm}[H] \caption{Integrated likelihood estimation.} \label{alg:intlike} Given the parameters $\vect{\beta}$, $\mathbf{L},$ $\vect{\mu},\vect{\phi}$ and $\vect{\sigma}^2$, complete the following two steps. \begin{enumerate} \item Obtain the mean vector $\hat{\mathbf{h}}$ and precision matrix $\mathbf{K}_{\mathbf{h}}$ of the Gaussian importance sampling density detailed in Appendix C. \item For $r = 1,\ldots, R_1$, simulate $\mathbf{h}^{(r)} \sim \distn{N}(\hat{\mathbf{h}},\mathbf{K}_{\mathbf{h}}^{-1})$ using the precision sampler in \citet{CJ09}, and compute the average \[ \hat{p}(\mathbf{y} \,|\, \vect{\beta}, \mathbf{L},\vect{\mu},\vect{\phi},\vect{\sigma}^2) = \frac{1}{R_1} \sum_{r=1}^{R_1}\frac{p(\mathbf{y}\,|\,\vect{\beta},\mathbf{L},\mathbf{h}^{(r)})p(\mathbf{h}^{(r)}\,|\, \vect{\mu},\vect{\phi},\vect{\sigma}^2)} {g(\mathbf{h}^{(r)}; \mathbf{y}, \vect{\beta}, \mathbf{L},\vect{\mu},\vect{\phi},\vect{\sigma}^2)}. \] \end{enumerate} \end{algorithm} \subsection{Marginal Likelihood Estimation} Next, we discuss the marginal likelihood estimation of the VAR-FSV model using an adaptive importance sampling approach called the improved cross-entropy method. This method requires little explicit analysis from the user and is applicable to a wide variety of problems (in contrast to the importance sampling estimator of the integrated likelihood presented in Algorithm \ref{alg:intlike}, which requires substantial model-specific analysis). More specifically, suppose we wish to estimate the marginal likelihood $p(\mathbf{y})\equiv p(\mathbf{y}\,|\, M_k)$ given in~\eqref{eq:ml} using the following importance sampling estimator: \begin{equation} \label{eq:ISml} \widehat{p(\mathbf{y})}_{\rm IS} = \frac{1}{R_2}\sum_{r=1}^{R_2} \frac{p(\mathbf{y}\,|\, \vect{\theta}^{(r)})p(\vect{\theta}^{(r)})}{g(\vect{\theta}^{(r)})}, \end{equation} where $\vect{\theta}^{(1)},\ldots, \vect{\theta}^{(R_2)}$ are independent draws from the importance sampling density $g(\cdot)$. 
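A conjugate toy example illustrates both the unbiasedness of an estimator of the form \eqref{eq:ISml} and the zero-variance property of the posterior as importance density (the model and numbers below are ours, purely for illustration):

```python
import numpy as np

def npdf(x, mean, sd):
    """Univariate normal density (numpy-only helper)."""
    return np.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(3)
y = 1.3                                     # a single observation
# Toy conjugate model: y = theta + e, theta ~ N(0,1), e ~ N(0,1).
# Marginal likelihood p(y) = N(y; 0, 2); posterior theta|y ~ N(y/2, 1/2).
truth = npdf(y, 0.0, np.sqrt(2.0))

# With the exact posterior as importance density g, every importance ratio
# p(y|theta) p(theta) / g(theta) equals p(y) exactly: zero variance.
theta = rng.normal(y / 2, np.sqrt(0.5), size=1000)
ratios = (npdf(y, theta, 1.0) * npdf(theta, 0.0, 1.0)
          / npdf(theta, y / 2, np.sqrt(0.5)))
assert np.allclose(ratios, truth)
assert abs(ratios.mean() - truth) < 1e-12
```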
In particular, for our FSV model, $\vect{\theta} = \{\vect{\beta},\mathbf{L},\vect{\sigma}^2,\vect{\mu},\vect{\phi}\}$. While this importance sampling estimator is, in theory, unbiased and simulation consistent for any $g$---as long as it dominates $p(\mathbf{y}\,|\,\cdot)p(\cdot)$, i.e., $g(\vect{\theta})=0\Rightarrow p(\mathbf{y}\,|\,\vect{\theta})p(\vect{\theta})=0$---in practice its performance heavily depends on the choice of~$g$. Here we follow \citet{CE15} and use the improved cross-entropy method to construct $g$ optimally.\footnote{The original cross-entropy method was developed for rare-event simulation by \citet{rubinstein97, rubinstein99} using a multi-level procedure to construct the optimal importance sampling density. \citet{CE15} later show that this optimal importance sampling density can be obtained more accurately in one step using Markov chain Monte Carlo methods.} To motivate the improved cross-entropy method, first note that the ideal zero-variance importance sampling density is the posterior density $p(\vect{\theta} \,|\,\mathbf{y})$. That is, if we use $g^*(\vect{\theta}) = p(\vect{\theta}\,|\,\mathbf{y}) = p(\mathbf{y}\,|\,\vect{\theta})p(\vect{\theta})/p(\mathbf{y})$ as the importance sampling density, then the associated estimator in~\eqref{eq:ISml} has zero variance. Unfortunately, $g^*$ cannot be used in practice as its normalization constant is precisely the marginal likelihood, the unknown quantity we aim to estimate. This nevertheless provides a benchmark to construct an optimal importance sampling density. More specifically, we aim to find a density that is `close' to this benchmark $g^*$ that can be used as an importance sampling density. To that end, consider a parametric family $\mathcal{G} = \{ g(\vect{\theta};\mathbf{v}) \}$ indexed by the parameter vector $\mathbf{v}$. We then find the density $g(\vect{\theta};\mathbf{v}^*)\in\mathcal{G}$ such that it is, in a well-defined sense, the `closest' to $g^*$. 
One convenient measure of closeness between densities is the Kullback-Leibler divergence or the cross-entropy distance. More precisely, for two density functions $g_1$ and $g_2$, the cross-entropy distance from $g_1$ to $g_2$ is defined as: \[ \mathcal{D}(g_1,g_2) = \int g_1(\mathbf{x})\log \frac{g_1(\mathbf{x})}{g_2(\mathbf{x})}\text{d}\mathbf{x}. \] Given this measure of closeness, we obtain the density $g(\cdot;\mathbf{v})\in\mathcal{G}$ such that $\mathcal{D}(g^*,g(\cdot;\mathbf{v}))$ is minimized, i.e., $\mathbf{v}_{\text{ce}}^* = \mathop{\rm argmin}_{\mathbf{v}}\mathcal{D}(g^*,g(\cdot;\mathbf{v}))$. It can be shown that solving the CE minimization problem is equivalent to finding \[ \mathbf{v}^*_{\text{ce}} = \mathop{\rm argmax}_{\mathbf{v}}\int p(\mathbf{y}\,|\,\vect{\theta})p(\vect{\theta})\log g(\vect{\theta};\mathbf{v})\text{d}\vect{\theta}. \] In general this optimization problem is difficult to solve analytically as it involves a high-dimensional integral. Instead, we consider its stochastic counterpart: \begin{equation} \label{eq:maxMC} \widehat{\mathbf{v}}^*_{\text{ce}} = \mathop{\rm argmax}_{\mathbf{v}} \frac{1}{M}\sum_{m=1}^M \log g(\vect{\theta}_m; \mathbf{v}), \end{equation} where $\vect{\theta}_1,\ldots, \vect{\theta}_M$ are posterior draws from $p(\vect{\theta}\,|\,\mathbf{y}) \propto p(\mathbf{y}\,|\,\vect{\theta})p(\vect{\theta})$. It is useful to note that $\widehat{\mathbf{v}}^*_{\text{ce}}$ is exactly the maximum likelihood estimate for $\mathbf{v}$ if we view $g(\vect{\theta};\mathbf{v})$ as the likelihood function with parameter vector $\mathbf{v}$ and $\vect{\theta}_1,\ldots, \vect{\theta}_M$ as an observed sample. Since finding the maximum likelihood estimator is a standard problem, solving~\eqref{eq:maxMC} is typically easy. For example, analytical solutions to \eqref{eq:maxMC} are available for the exponential family \citep[e.g.,][p. 70]{rk:ce}. Next, we discuss the choice of the parametric family $\mathcal{G}$. 
One convenient class of densities is one in which each member $g(\vect{\theta} ; \mathbf{v})$ is a product of probability densities, e.g., $g(\vect{\theta}; \mathbf{v}) = g(\vect{\theta}_1; \mathbf{v}_1)\times\cdots\times g(\vect{\theta}_B; \mathbf{v}_B)$, where $\vect{\theta} = \{\vect{\theta}_1,\ldots, \vect{\theta}_B\}$ and $\mathbf{v} = \{\mathbf{v}_1,\ldots,\mathbf{v}_B\}$. One main advantage of this choice is that we can then reduce the generally high-dimensional maximization problem~\eqref{eq:maxMC} into $B$ separate low-dimensional maximization problems. For example, for our FSV model, we divide $\vect{\theta} = \{\vect{\beta},\mathbf{L},\vect{\sigma}^2,\vect{\mu},\vect{\phi}\}$ into 5 natural blocks, and consider the parametric family \[ \begin{split} \mathcal{G} = & \left\{ g_{\distn{N}}(\vect{\beta}; \mathbf{v}_{1,\vect{\beta}}, \mathbf{v}_{2,\vect{\beta}}) g_{\distn{N}}(\mathbf{L}; \mathbf{v}_{1,\mathbf{L}}, \mathbf{v}_{2,\mathbf{L}}) \prod_{i=1}^{n+r} g_{\distn{IG}}(\sigma_{i}^2; v_{1,\sigma_i^2}, v_{2,\sigma_i^2}) \prod_{i=1}^{n}g_{\distn{N}}(\mu_{i}; v_{1,\mu_i}, v_{2,\mu_i}) \right. \\ & \quad \times \left. \prod_{i=1}^{n+r}g_{\distn{N}}(\phi_i; v_{1,\phi_i}, v_{2,\phi_i})1\left(|\phi_i|<1\right) \right\}, \end{split} \] where $g_{\distn{N}}$ and $g_{\distn{IG}}$ are, respectively, the Gaussian and the inverse-gamma densities. Given this choice of the parametric family, the maximization problem in \eqref{eq:maxMC} can be readily solved (either analytically or using numerical optimization). 
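For a single Gaussian block, solving the sample CE criterion \eqref{eq:maxMC} numerically indeed recovers the analytical maximum likelihood fit, i.e., the sample moments of the draws; a quick sketch with synthetic draws standing in for posterior draws:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
theta = rng.normal(1.5, 0.7, size=2000)     # stand-ins for posterior draws

# Sample counterpart of the CE criterion for one Gaussian block:
# maximize the average log density over (mu, log sigma).
def neg_ce_objective(v):
    mu, log_s = v
    s2 = np.exp(2.0 * log_s)
    return np.mean(0.5 * np.log(2.0 * np.pi * s2)
                   + (theta - mu) ** 2 / (2.0 * s2))

res = minimize(neg_ce_objective, x0=np.zeros(2))
mu_hat, s_hat = res.x[0], np.exp(res.x[1])

# The numerical optimum coincides with the analytical MLE: the sample
# mean and (ddof = 0) standard deviation of the draws.
assert abs(mu_hat - theta.mean()) < 1e-3
assert abs(s_hat - theta.std()) < 1e-3
```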
Given the optimal importance density, denoted as $g(\vect{\beta},\mathbf{L},\vect{\sigma}^2,\vect{\mu},\vect{\phi}; \mathbf{v}^*)$, we construct the following importance sampling estimator: \begin{equation}\label{eq:IS_ml} \widehat{p(\mathbf{y})} = \frac{1}{R_2} \sum_{r=1}^{R_2}\frac{p(\mathbf{y}\,|\, \vect{\beta}^{(r)},\mathbf{L}^{(r)},\vect{\sigma}^{2(r)},\vect{\mu}^{(r)},\vect{\phi}^{(r)}) p(\vect{\beta}^{(r)},\mathbf{L}^{(r)},\vect{\sigma}^{2(r)},\vect{\mu}^{(r)},\vect{\phi}^{(r)})} {g(\vect{\beta}^{(r)},\mathbf{L}^{(r)},\vect{\sigma}^{2(r)},\vect{\mu}^{(r)},\vect{\phi}^{(r)};\mathbf{v}^*)}, \end{equation} where $(\vect{\beta}^{(1)},\mathbf{L}^{(1)},\vect{\sigma}^{2(1)},\vect{\mu}^{(1)},\vect{\phi}^{(1)}), \ldots, (\vect{\beta}^{(R_2)},\mathbf{L}^{(R_2)},\vect{\sigma}^{2(R_2)},\vect{\mu}^{(R_2)},\vect{\phi}^{(R_2)})$ are independent draws from $g(\vect{\beta},\mathbf{L},\vect{\sigma}^2,\vect{\mu},\vect{\phi}; \mathbf{v}^*)$ and $p(\mathbf{y}\,|\,\vect{\beta},\mathbf{L},\vect{\sigma}^2,\vect{\mu},\vect{\phi})$ is the integrated likelihood, which can be estimated using the estimator in \eqref{eq:IS_intlike}. We refer the readers to \citet{CE15} for a more thorough discussion of this adaptive importance sampling approach. We summarize the algorithm in Algorithm~\ref{alg:ml}. Note that Algorithm~\ref{alg:ml} has two nested importance sampling steps, and it falls within the importance sampling squared (IS$^2$) framework in \citet*{TSPK14}. We follow their recommendation to set $R_1$, the simulation size of the inner importance sampling loop (the importance sampling step for estimating the integrated likelihood), adaptively so that the variance of the log integrated likelihood is around 1. See also the discussion in \citet*{PdGK12}. \begin{algorithm}[H] \caption{Marginal likelihood estimation via the improved cross-entropy method.} \label{alg:ml} The marginal likelihood $p(\mathbf{y})$ can be estimated using the following steps. 
\begin{enumerate} \item Obtain $M$ posterior draws and use them to solve the CE minimization problem in \eqref{eq:maxMC} to obtain the optimal importance sampling density $g(\vect{\beta},\mathbf{L},\vect{\sigma}^2,\vect{\mu},\vect{\phi}; \mathbf{v}^*)$. \item For $r = 1,\ldots, R_2$, simulate $(\vect{\beta}^{(r)},\mathbf{L}^{(r)},\vect{\sigma}^{2(r)},\vect{\mu}^{(r)},\vect{\phi}^{(r)}) \sim g(\vect{\beta},\mathbf{L},\vect{\sigma}^2,\vect{\mu},\vect{\phi};\mathbf{v}^*)$ and compute the average \[ \widehat{p(\mathbf{y})} = \frac{1}{R_2} \sum_{r=1}^{R_2}\frac{\hat{p}(\mathbf{y}\,|\,\vect{\beta}^{(r)},\mathbf{L}^{(r)},\vect{\sigma}^{2(r)},\vect{\mu}^{(r)},\vect{\phi}^{(r)})p(\vect{\beta}^{(r)},\mathbf{L}^{(r)},\vect{\sigma}^{2(r)},\vect{\mu}^{(r)},\vect{\phi}^{(r)})} {g(\vect{\beta}^{(r)},\mathbf{L}^{(r)},\vect{\sigma}^{2(r)},\vect{\mu}^{(r)},\vect{\phi}^{(r)};\mathbf{v}^*)}, \] where the integrated likelihood estimate $\hat{p}(\mathbf{y}\,|\,\vect{\beta}^{(r)},\mathbf{L}^{(r)},\vect{\sigma}^{2(r)},\vect{\mu}^{(r)},\vect{\phi}^{(r)})$ is computed using Algorithm \ref{alg:intlike} with $R_1$ independent draws. \end{enumerate} \end{algorithm} \section{Structural Analysis with the VAR-FSV} \label{s:tools} The VAR-FSV in \eqref{eq:yt}-\eqref{eq:epsilont} can be used to draw structural inference by employing standard tools such as impulse response functions, forecast error variance decompositions and historical decompositions. In particular, letting $\mathbf{A}(L) = \mathbf{I}_n - \mathbf{A}_1 L - \dotsm - \mathbf{A}_p L^p$, where $L$ is the lag operator, the representation \begin{equation} \mathbf{y}_t = \vect{\Phi}(1)\vect{\mu} + \vect{\Phi}(L)\vect{\epsilon}_t, \label{eq:reduced} \end{equation} where $\vect{\Phi}(L) = \mathbf{A}(L)^{-1}$, is well-defined assuming $\det \mathbf{A}(z) \ne 0$ for all $|z| \le 1, z\in \mathbb{C}$ (i.e., the process $\{\mathbf{y}_t : t \in \mathbb{Z}\}$ is covariance-stationary). 
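The coefficients of $\vect{\Phi}(L) = \mathbf{A}(L)^{-1}$ needed for impulse responses satisfy the standard recursion $\vect{\Phi}_0 = \mathbf{I}_n$ and $\vect{\Phi}_h = \sum_{j=1}^{\min(h,p)} \mathbf{A}_j \vect{\Phi}_{h-j}$; a short numpy sketch, validated against cases where the coefficients are known in closed form:

```python
import numpy as np

def ma_coefficients(A_list, H):
    """Phi_0, ..., Phi_H of Phi(L) = A(L)^{-1} from lag matrices A_1..A_p,
    via Phi_0 = I and Phi_h = sum_{j=1}^{min(h,p)} A_j Phi_{h-j}."""
    n = A_list[0].shape[0]
    p = len(A_list)
    Phi = [np.eye(n)]
    for h in range(1, H + 1):
        Phi.append(sum(A_list[j] @ Phi[h - 1 - j] for j in range(min(h, p))))
    return Phi

# Scalar AR(1) check: y_t = a y_{t-1} + e_t has Phi_h = a^h.
a = 0.8
Phi_ar1 = ma_coefficients([np.array([[a]])], H=5)
assert all(np.allclose(Phi_ar1[h], a ** h) for h in range(6))

# VAR(1) check: Phi_h = A_1^h.
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
Phi = ma_coefficients([A1], H=3)
assert np.allclose(Phi[2], A1 @ A1)
assert np.allclose(Phi[3], A1 @ A1 @ A1)
```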
Although $\vect{\epsilon}_t$ does not contain structural shocks (since its elements are correlated), the reduced-form representation \eqref{eq:reduced} can be matched to a hypothetical structural representation of the form \begin{equation} \mathbf{y}_t = \vect{\Phi}(1)\vect{\mu} + \tilde{\vect{\Phi}}_t(L) \mathbf{u}_t, \label{eq:structural} \end{equation} where $\mathbf{u}_t$ is a vector of structural shocks, and hence, its elements are uncorrelated. Note that $\tilde{\vect{\Phi}}_t(L)$ is time-varying because $\text{Var}(\vect{\epsilon}_t) = \mathbf{L}\vect{\Omega}_t\mathbf{L}' + \vect{\Sigma}_t$ is time-varying; consequently, hypothetical structural representations that can be matched to \eqref{eq:reduced} will generally have time-varying parameters. The standard structural VAR approach is to assume that (i) $\mathbf{u}_t$ is $n \times 1$ and (ii) $\vect{\epsilon}_t = \tilde{\vect{\Phi}}_{0,t} \mathbf{u}_t$, where $\tilde{\vect{\Phi}}_{0,t}$ is an $n\times n$ matrix with $\rank \tilde{\vect{\Phi}}_{0,t} = n$ and $\text{Var}(\mathbf{u}_t) = \mathbf{I}_n$. Then, $\tilde{\vect{\Phi}}_t(L) = \vect{\Phi}(L)\tilde{\vect{\Phi}}_{0,t}$ and $\tilde{\vect{\Phi}}_{0,t}$ satisfies \[ \tilde{\vect{\Phi}}_{0,t}\tilde{\vect{\Phi}}_{0,t}' = \mathbf{L}\vect{\Omega}_t\mathbf{L}' + \vect{\Sigma}_t. \] In this case, identification of $\tilde{\vect{\Phi}}_{0,t}$ requires additional restrictions since \[ \check{\vect{\Phi}}_{0,t}\check{\vect{\Phi}}_{0,t}' = \mathbf{L}\vect{\Omega}_t\mathbf{L}' + \vect{\Sigma}_t \] for all $\check{\vect{\Phi}}_{0,t} = \tilde{\vect{\Phi}}_{0,t}\mathbf{R}_t$, for any orthogonal matrix $\mathbf{R}_t$ (i.e., satisfying $\mathbf{R}_t'\mathbf{R}_t = \mathbf{R}_t\mathbf{R}_t' = \mathbf{I}_n$). 
An alternative way to obtain structural inference in our setting---similar to \citet{Korobilis20}---is to assume that $\tilde{\vect{\Phi}}_t(L)$ is $n\times(r+n)$ and $\mathbf{u}_t$ is $(r+n)\times 1$, such that \begin{align} \tilde{\vect{\Phi}}_{0,t} &= \begin{pmatrix}\mathbf{L}\vect{\Omega}_t^\frac{1}{2} & \vect{\Sigma}_t^\frac{1}{2}\end{pmatrix}, & \mathbf{u}_t &= \begin{pmatrix}\tilde{\mathbf{f}}_t \\ \tilde{\mathbf{u}}_t^y\end{pmatrix}, \end{align} where $\tilde{\mathbf{f}}_t = \vect{\Omega}_t^{-\frac{1}{2}}\mathbf{f}_t$ and $\tilde{\mathbf{u}}_t^y = \vect{\Sigma}_t^{-\frac{1}{2}} \mathbf{u}_t^y$. In this case, $\tilde{\vect{\Phi}}_t(L) = \vect{\Phi}(L)\tilde{\vect{\Phi}}_{0,t}$ is also $n \times (r+n)$, which, in departure from standard structural VARs, leads to a `short' system \citep{FGS19}. Identification of impulse response functions and forecast error variance decompositions in short systems is generally problematic \citep{PR22,CF22}. However, in the formulation above, $\tilde{\vect{\Phi}}_{0,t}$ is identified due to $\vect{\Sigma}_t$ being restricted to a diagonal matrix and $\mathbf{L}$ being identified by sign restrictions as described in Section \ref{s:properties}. Hence, impulse response functions and forecast error variance decompositions to all shocks in $\mathbf{u}_t$ are identified, even though $\mathbf{u}_t$ is generally \emph{not recoverable} from past and future observations of $\mathbf{y}_t$, as defined in \citet{CJ21}. In our setting, the main interest lies in quantifying the effects of shocks in $\tilde{\mathbf{f}}_t$, and therefore, sign restrictions on $\mathbf{L}$ play the role of endowing these shocks with economic meaning. The remaining elements in $\tilde{\mathbf{u}}_t^y$ are not of direct interest and we do not treat them as economically meaningful shocks. 
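That this $\tilde{\vect{\Phi}}_{0,t}$ reproduces the reduced-form covariance is a one-line numerical check (random diagonal volatilities, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
n, r = 5, 2
L = rng.standard_normal((n, r))
Omega_t = np.diag(rng.uniform(0.5, 2.0, r))   # diag(e^{h_t^f})
Sigma_t = np.diag(rng.uniform(0.5, 2.0, n))   # diag(e^{h_t^y})

# Phi_{0,t} = (L Omega_t^{1/2}, Sigma_t^{1/2}) is n x (r + n); elementwise
# sqrt is the matrix square root here because both matrices are diagonal.
Phi0 = np.hstack([L @ np.sqrt(Omega_t), np.sqrt(Sigma_t)])
assert Phi0.shape == (n, r + n)
assert np.allclose(Phi0 @ Phi0.T, L @ Omega_t @ L.T + Sigma_t)
```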
Nevertheless, the restriction that $\vect{\Sigma}_t$ is diagonal, along with an economically meaningful interpretation of $\tilde{\mathbf{f}}_t$, plays a crucial role in the overall identification strategy. We provide explicit expressions for computing impulse response functions and forecast error variance decompositions in Appendix F. Finally, computing historical decompositions requires $\mathbf{f}_t$ and $\mathbf{u}_t^y$ (see Appendix F for details). The fact that $\mathbf{u}_t$ is not recoverable implies that historical decompositions may not be point identified. In a Bayesian setting, however, the posterior distribution of a historical decomposition at each horizon may still be constructed using draws from the posterior distribution of the VAR-FSV parameters together with draws of $\mathbf{f}_t$. In the algorithm developed in Section \ref{s:estimation}, draws of $\mathbf{f}_t$ are a by-product of the simulation, while $\mathbf{u}_t^y$ is easily obtained as \[ \mathbf{u}_t^y = \mathbf{A}(L)\mathbf{y}_t - \vect{\mu} - \mathbf{L}\mathbf{f}_t, \] for each draw of $\vect{\mu}, \mathbf{A}_1, \dotsc, \mathbf{A}_p, \mathbf{L}, \mathbf{f}_t$. Therefore, draws from the posterior distribution of a historical decomposition are straightforward to compute. In addition, $\mathbf{u}_t$ can be regarded as being recoverable in the limit as $n\longrightarrow \infty$ from the VAR residual $\vect{\epsilon}_t$ (and therefore past and present $\mathbf{y}_t$) under a suitable assumption on the factor loadings $\mathbf{L}$. To see this, let $\mathbf{L}^+$ denote the Moore–Penrose inverse of $\mathbf{L}$. By Assumption \ref{ass2}, $\rank \mathbf{L} = r$ and $\mathbf{L}^+ = (\mathbf{L}'\mathbf{L})^{-1}\mathbf{L}'$. It also follows that a right inverse (although not the Moore–Penrose inverse) of $\tilde{\vect{\Phi}}_{0,t}$ is \begin{align} \tilde{\vect{\Phi}}_{0,t}^{-R} &= \begin{pmatrix} \vect{\Omega}_t^{-\frac{1}{2}}\mathbf{L}^+ \\ \vect{\Sigma}_t^{-\frac{1}{2}}(\mathbf{I}_n - \mathbf{L}\bL^+) \end{pmatrix}. 
\intertext{Consequently,} \tilde{\vect{\Phi}}_{0,t}^{-R} \tilde{\vect{\Phi}}_{0,t} &= \begin{pmatrix} \mathbf{I}_r & \vect{\Omega}_t^{-\frac{1}{2}}\mathbf{L}^+ \vect{\Sigma}_t^\frac{1}{2} \\ 0 & \mathbf{I}_n - \vect{\Sigma}_t^{-\frac{1}{2}}\mathbf{L}\bL^+\vect{\Sigma}_t^\frac{1}{2} \end{pmatrix}. \label{eq:recov} \end{align} In the factor model literature, a standard assumption \citep[e.g.][]{bai2003,FGLR09} is that $n^{-1}\mathbf{L}'\mathbf{L} \longrightarrow \vect{\Lambda}$ as $n \longrightarrow \infty$, where $\vect{\Lambda}$ is a constant (strictly) positive-definite matrix. This implies that the factors $\mathbf{f}_t$ are \emph{pervasive} in the sense that they significantly affect most of the variables \emph{on impact}.\footnote{It is worth emphasising, however, that this \emph{does not} imply $\mathbf{u}_t^y$ is a vector of \emph{idiosyncratic} errors, as defined by \cite{FHLR00,FL01} in the context of generalised dynamic factor models. In particular, the overall effect of $\mathbf{u}_t^y$ on $\mathbf{y}_t$ is $\vect{\Phi}(L)\mathbf{u}_t^y$, which is generally pervasive, albeit with a delay.} An immediate consequence of the pervasiveness assumption, together with the regularity condition that $\text{Var}(\epsilon_{i,t}) < \infty$ for all $i = 1, \dotsc, n$, is $\mathbf{L}^+\vect{\Sigma}_t^\frac{1}{2}\tilde{\mathbf{u}}_t^y \overset{m.s.}{\longrightarrow} \mathbf{0}$. Combining this result with \eqref{eq:recov} yields \begin{equation} \tilde{\vect{\Phi}}_{0,t}^{-R}\vect{\epsilon}_t - \mathbf{u}_t = \left(\tilde{\vect{\Phi}}_{0,t}^{-R} \tilde{\vect{\Phi}}_{0,t} - \mathbf{I}_{r+n}\right)\mathbf{u}_t \overset{m.s.}{\longrightarrow} \mathbf{0}. 
\end{equation} Consequently, $\mathbf{u}_t$ is recoverable from $\vect{\epsilon}_t$ in the limit.\footnote{A more general result on recoverability with a fixed $n$ is given in \citet{CJ21}.} \section{A Monte Carlo Study: Determining the Number of Factors} \label{s:MC} In this section we conduct a series of simulation experiments to assess how well the proposed marginal likelihood estimator determines the number of factors. More specifically, we generate datasets from the VAR-FSV in \eqref{eq:yt}--\eqref{eq:ht2}, but we change the error structure to $\vect{\epsilon}_t = \mathbf{L} \mathbf{f}_t + \sqrt{\theta} \mathbf{u}_t^y$, where $\theta$ controls the signal-to-noise ratio, following \citet{bai2002determining}. Parameter values are set so that when $\theta=r$, the idiosyncratic component has the same variance as the common component. In particular, we generate $L_{ij} \sim \distn{N}(0, 1)$ for $i=1,\ldots,n$ and $j=1,\ldots,r$ and set $\mu_i=0$ for $i=1,\ldots,n,$ so that the log-volatility processes associated with the idiosyncratic errors have zero unconditional mean. The remaining parameters are generated as follows. The intercepts are drawn independently from the uniform distribution $\distn{U}(-10, 10)$. For the VAR coefficients, the diagonal elements of the first VAR coefficient matrix are iid $\distn{U}(0,0.5)$ and the off-diagonal elements are iid $\distn{U}(-0.2,0.2)$; all other elements of the $j$-th ($j > 1$) VAR coefficient matrices are iid $\distn{N}(0,0.1^2/j^2)$. Finally, for the log-volatility processes, we set $\phi_i=0.98$ and $\sigma_i^2 = 0.1^2$ for $i=1,\ldots,n+r$. In this Monte Carlo study, we consider four combinations of the true number of factors and the noise-scale parameter, $(r,\theta) \in \{(1,1), (3,3), (5,5), (5,10)\}$, together with the number of variables $n \in \{15, 30\}$ and sample size $T \in \{300,500,800\}$. For each set of $(r,\theta, n, T)$, we generate 100 datasets. 
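The data-generating process just described can be sketched as follows. This is an illustrative reimplementation, not the authors' code; in particular, the log-volatility recursions are written as zero-mean stationary AR(1) processes, which is consistent with the parameter values stated above but simplifies the exact specification in \eqref{eq:ht2}, and the lag length $p$ is an assumption since it is not stated in this excerpt:

```python
import numpy as np

def simulate_var_fsv(n=15, r=3, T=300, p=2, theta=3.0, seed=0):
    """Generate one dataset from the Monte Carlo DGP described in the text."""
    rng = np.random.default_rng(seed)
    mu = rng.uniform(-10, 10, n)                 # intercepts ~ U(-10, 10)
    L = rng.standard_normal((n, r))              # loadings L_ij ~ N(0, 1)
    A = np.zeros((p, n, n))                      # VAR coefficient matrices
    A[0] = rng.uniform(-0.2, 0.2, (n, n))        # off-diagonal, first lag
    np.fill_diagonal(A[0], rng.uniform(0, 0.5, n))  # diagonal, first lag
    for j in range(1, p):                        # lag j+1: N(0, 0.1^2/(j+1)^2)
        A[j] = rng.normal(0, 0.1 / (j + 1), (n, n))
    phi, sig = 0.98, 0.1                         # log-volatility AR(1)
    h = np.zeros(n + r)                          # zero unconditional mean
    Y = np.zeros((T + p, n))
    for t in range(p, T + p):
        h = phi * h + sig * rng.standard_normal(n + r)
        f = np.exp(0.5 * h[n:]) * rng.standard_normal(r)    # factors
        uy = np.exp(0.5 * h[:n]) * rng.standard_normal(n)   # idiosyncratic
        eps = L @ f + np.sqrt(theta) * uy
        Y[t] = mu + sum(A[j] @ Y[t - j - 1] for j in range(p)) + eps
    return Y[p:]

Y = simulate_var_fsv()
```

Each call with a fresh seed produces one of the 100 datasets for a given $(r,\theta,n,T)$ configuration.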
For each dataset, we estimate VAR-FSV models with $r=1,\ldots, 6$ factors and compute the associated marginal likelihood values. For this Monte Carlo experiment, a total of 14,400 separate MCMC runs and marginal likelihood estimations are performed (24 settings $\times$ 6 factor models $\times$ 100 datasets). Among the 6 factor models for each dataset and parameter setting, we select the one with the largest marginal likelihood value. Table~\ref{tab:MCfactor} reports the selection frequency. \begin{table}[H] \centering \caption{Selection frequency (as a proportion of the 100 datasets) of the number of factors $r$. The DGP is $\vect{\epsilon}_t=\mathbf{L}\mathbf{f}_t+\sqrt{\theta} \mathbf{u}_t^{\mathbf{y}}$. } \label{tab:MCfactor} \begin{tabular}{cccccccccc} \hline \hline $n$ & $\theta$ & True $r$ & $T$ & $r=1$ & $r=2$ & $r=3$ & $r=4$ & $r=5$ & $r=6$ \\ \hline 15 & 1 & 1 & 300 & 0.90 & 0.10 & 0 & 0 & 0 & 0 \\ \rowcolor{lightgray} & & & 500 & 0.96 & 0.04 & 0 & 0 & 0 & 0 \\ & & & 800 & 0.99 & 0.01 & 0 & 0 & 0 & 0 \\ \rowcolor{lightgray} & 3 & 3 & 300 & 0 & 0.13 & 0.83 & 0.04 & 0 & 0 \\ & & & 500 & 0 & 0.02 & 0.97 & 0.01 & 0 & 0 \\ \rowcolor{lightgray} & & & 800 & 0 & 0 & 0.99 & 0.01 & 0 & 0 \\ & 5 & 5 & 300 & 0 & 0.01 & 0.12 & 0.49 & 0.38 & 0 \\ \rowcolor{lightgray} & & & 500 & 0 & 0 & 0.01 & 0.25 & 0.74 & 0 \\ & & & 800 & 0 & 0 & 0.01 & 0.05 & 0.94 & 0 \\ \rowcolor{lightgray} & 10 & 5 & 300 & 0 & 0.07 & 0.30 & 0.46 & 0.16 & 0.01 \\ & & & 500 & 0 & 0 & 0.10 & 0.48 & 0.42 & 0 \\ \rowcolor{lightgray} & & & 800 & 0 & 0 & 0.02 & 0.11 & 0.87 & 0 \\ 30 & 1 & 1 & 300 & 0.76 & 0.24 & 0 & 0 & 0 & 0 \\ \rowcolor{lightgray} & & & 500 & 0.97 & 0.02 & 0 & 0 & 0 & 0.01 \\ & & & 800 & 1.00 & 0 & 0 & 0 & 0 & 0 \\ \rowcolor{lightgray} & 3 & 3 & 300 & 0 & 0.02 & 0.86 & 0.11 & 0.01 & 0 \\ & & & 500 & 0 & 0.01 & 0.98 & 0.01 & 0 & 0 \\ \rowcolor{lightgray} & & & 800 & 0 & 0 & 1.00 & 0 & 0 & 0 \\ & 5 & 5 & 300 & 0 & 0 & 0.01 & 0.18 & 0.80 & 0.01 \\ \rowcolor{lightgray} & & & 500 & 0 & 0 & 0 & 0.02 & 0.97 & 0.01 \\ 
& & & 800 & 0 & 0 & 0 & 0.01 & 0.99 & 0 \\ \rowcolor{lightgray} & 10 & 5 & 300 & 0 & 0.01 & 0.10 & 0.36 & 0.53 & 0 \\ & & & 500 & 0 & 0 & 0 & 0.16 & 0.84 & 0 \\ \rowcolor{lightgray} & & & 800 & 0 & 0 & 0 & 0.02 & 0.98 & 0 \\ \hline \hline \end{tabular} \end{table} The Monte Carlo results show that the marginal likelihood estimator generally performs well in selecting the correct number of factors under a variety of settings. For example, for $n=15,$ $T=300,$ $r = 3$ and $\theta = 3$ (moderate signal-to-noise ratio), the marginal likelihood estimator picks the correct number of factors 83\% of the time; in the remaining cases, the model with one fewer factor (13\%) or one more factor (4\%) is selected. In addition, as the sample size $T$ increases to 500, the selection frequency of the $r=3$ factor model increases to 97\%. More generally, the selection frequency of the correct number of factors increases with the sample size $T$ in all cases considered, consistent with the marginal likelihood being a consistent model selection criterion. \section{Application: The Role of Financial Shocks in Economic Fluctuations} \label{s:application} We illustrate the proposed methodology by revisiting the structural analysis in \citet{FRS19} that is based on a standard structural VAR. More specifically, they use a 6-variable structural VAR to study the impacts of 5 structural shocks---demand, supply, monetary, investment and financial shocks---on a number of key economic variables, where these structural shocks are identified using sign restrictions on the contemporaneous impact matrix. 
The size of the VAR in their application is typical among empirical studies that use sign restrictions for identification because of the computational burden.\footnote{For their 6-variable structural VAR, \citet{FRS19} report estimation time of about a week using a 12-core workstation.} However, there are a number of reasons in favor of using a larger set of macroeconomic and financial variables. First, in practice the mapping from variables in an economic model to the data is often not unique. For example, as argued in \citet{LMW21}, the economic variable inflation could be matched to data based on the CPI, the PCE index, or the GDP deflator, and it is not obvious which time series should be used. Instead of arbitrarily choosing one inflation measure, it is more appropriate to include multiple time series corresponding to the same economic variable in the analysis. Second, one might be concerned about the problem of informational deficiency that arises from using a limited information set. More specifically, influential papers such as \citet{HS91} and \citet{LR93, LR94} have pointed out that when the econometrician considers a narrower set of variables than the economic agent, the underlying model used by the econometrician is non-fundamental. That is, current and past observations of the variables do not span the space spanned by the structural shocks. As a consequence, structural shocks cannot be recovered from the model. A natural way to alleviate this concern of informational deficiency is to use a larger set of relevant variables \citep[see, e.g.,][for a recent review on non-fundamentalness]{gambetti21}. In view of these considerations, we augment the 6-variable VAR with additional macroeconomic and financial variables, and consider a 20-variable VAR with factor stochastic volatility identified using sign restrictions. There are two related papers that use large VARs to study the role of financial shocks in economic fluctuations. 
First, \citet{chan21} considers a 15-variable structural VAR with a new asymmetric conjugate prior to identify the financial shocks. Given the larger system and the large number of sign restrictions, estimation time is about a week to obtain 1,000 admissible draws using the algorithm of \citet{RWZ10}. In contrast, the proposed approach takes less than a minute to obtain the same number of admissible draws, and is applicable to even larger systems. Second, \citet{Korobilis20} uses a 15-variable VAR with a factor error structure to identify the financial shocks, which can also be estimated quickly. The main advantage of our approach, however, is that the structural shocks obtained using our factor stochastic volatility model are point-identified, whereas they are only set-identified under a homoskedastic VAR. In practice, our approach can often provide sharper inference. \subsection{Data} \label{ss:data} We use a dataset that consists of 20 US quarterly variables, which are constructed from raw time series taken from different sources, including the Federal Reserve Bank of Philadelphia and the FRED database at the Federal Reserve Bank of St. Louis. For easy comparison with the results in \citet{FRS19}, we use the same sample period that spans from 1985:Q1 to 2013:Q2. The complete list of these time series and their sources is given in Appendix~E. We include the same 6 variables used in the baseline model in \citet{FRS19}, namely, real GDP, GDP deflator, 3-month treasury rate, ratio of private investment over output, S\&P 500 index and a credit spread defined as the difference between Moody's baa corporate bond yield and the federal funds rate. In addition, we include 14 macroeconomic and financial variables, such as the ratio of total credit over real estate value, labor market variables, mortgage rates, as well as other measures of inflation, interest rates and stock prices. 
These 20 variables are listed in Table~\ref{tab:sign} and the details of the raw data are given in Appendix~E. \subsection{Sign Restrictions and Impulse Responses} \label{ss:IR} In this section we re-examine the empirical application in \citet{FRS19} that identifies 5 structural shocks using a structural VAR with sign restrictions on the contemporaneous impact matrix. We first use the proposed VAR-FSV model to replicate their baseline results from a 6-variable VAR, but here we impose the sign restrictions on the factor loadings instead of the impact matrix. We then consider a larger VAR-FSV model with 20 variables to identify the same structural shocks. To begin, we employ the same 6 variables and the associated sign restrictions used in \citet{FRS19}, which are presented in the first six rows of Table~\ref{tab:sign}. The sign restrictions to identify the supply, demand, monetary, investment and financial shocks are exactly the same as in \citet{FRS19}, and we refer the readers to their paper for more details. Here we only note that, in order to distinguish investment and financial shocks from demand shocks, they are assumed to have different effects on the ratio of investment over output. In particular, positive investment and financial shocks have a positive effect on the ratio, motivated by the idea that investment and financial shocks create investment booms. By contrast, positive demand shocks reduce the ratio of investment over output, i.e., even though the level of investment could increase in response to demand shocks, it does not increase as much as other components of output. We compute the impulse responses from the VAR-FSV with 5 factors, where the sign restrictions are imposed on the factor loadings. Since \citet{FRS19} use an improper/non-informative prior in their analysis, to make our results comparable, we consider a proper but relatively vague prior by setting $\kappa_1=\kappa_2=1$.\footnote{The variables are expressed in levels. 
As such, the prior means of the first own lags are all set to 1, whereas those of other VAR coefficients are set to 0. In addition, the prior mean of $\mu_i$, the mean of the idiosyncratic log-volatility for the $i$-th variable, is set to $\log(0.1\times \text{Var}(\mathbf{y}_{i,\cdot}))$. That is, \textit{a priori} about 10\% of the sample variance is attributed to the idiosyncratic component. Finally, the prior variance $V_{\mu_{i}}$ is set to 10 for $i=1,\ldots, n$.} We use the Gibbs sampler described in Section~\ref{s:estimation} to obtain 50,000 posterior draws, storing every 10-th draw, after a burn-in period of 5,000. \begin{table}[H] \centering \caption{Sign restrictions and identified structural shocks.} \label{tab:sign} \resizebox{\textwidth}{!}{\begin{tabular}{lccccc} \hline \hline & Supply & Demand & Monetary & Investment & Financial \\ \hline GDP & $+$ & $+$ & $+$ & $+$ & $+$ \\ \rowcolor{lightgray} GDP deflator & $-$ & $+$ & $+$ & $+$ & $+$ \\ 3-month tbill rate & NA & $+$ & $-$ & $+$ & $+$ \\ \rowcolor{lightgray} Investment/output & NA & $-$ & NA & $+$ & $+$ \\ S\&P 500 & $+$ & NA & NA & $-$ & $+$ \\ \rowcolor{lightgray} Spread & NA & NA & NA & NA & NA \\ Spread 2 & NA & NA & NA & NA & NA \\ \rowcolor{lightgray} Credit/Real estate value & NA & NA & NA & NA & NA \\ Mortgage rates & NA & NA & NA & NA & NA \\ \rowcolor{lightgray} Personal consumption expenditures & $+$ & $+$ & $+$ & $+$ & $+$ \\ Industrial production & $+$ & $+$ & $+$ & $+$ & $+$ \\ \rowcolor{lightgray} Industrial production: final & $+$ & $+$ & $+$ & $+$ & $+$ \\ CPI & $-$ & $+$ & $+$ & $+$ & $+$ \\ \rowcolor{lightgray} PCE index & $-$ & $+$ & $+$ & $+$ & $+$ \\ Employment & NA & NA & NA & NA & NA \\ \rowcolor{lightgray} All employees: Manufacturing & NA & NA & NA & NA & NA \\ 1-year tbill rate & NA & $+$ & $-$ & $+$ & $+$ \\ \rowcolor{lightgray} 10-year tnote rate & NA & $+$ & $-$ & $+$ & $+$ \\ DJIA & $+$ & NA & NA & $-$ & $+$ \\ \rowcolor{lightgray} NASDAQ & $+$ & NA & NA & $-$ & $+$ \\ \hline \hline \end{tabular}} {\raggedright \footnotesize{Note: the variable spread is defined as 
the difference between Moody's baa corporate bond yield and the federal funds rate. Spread 2 is the difference between Moody's baa corporate bond yield and the 10-year treasury yield.} \par} \end{table} Figure~\ref{fig:small-IRF-Financial} plots the impulse responses of the 6 variables to a one-standard-deviation financial shock. Despite the differences in methodology, the impulse responses are very similar to those given in \citet{FRS19}. Consistent with the findings in \citet{FRS19}, the results show that financial shocks have a substantial impact on output, stock prices and investment, but have a limited impact on inflation (measured by GDP deflator). Furthermore, even though the impact on the spread is unrestricted, we find that its reaction to financial shocks is significantly counter-cyclical. These results highlight one advantage of the proposed methodology: the median impulse responses from the VAR-FSV are very similar to those obtained using a standard structural VAR, but instead of using an accept-reject algorithm to obtain admissible draws, the sign restrictions can be easily incorporated in the estimation, which makes it much faster. \begin{figure}[H] \begin{center} \includegraphics[height=12cm]{Fig-small-IRF-Financial.eps} \end{center} \caption{Impulse responses from a 6-variable VAR-FSV with 5 factors to a one-standard-deviation financial shock. The shaded region represents the 16-th and 84-th percentiles.} \label{fig:small-IRF-Financial} \end{figure} Next, we augment the 6-variable VAR with 14 additional macroeconomic and financial variables. Many of these new variables are alternative data series corresponding to the same economic variable. For example, in addition to GDP deflator as prices, we also include CPI and PCE index as alternative measures. Similarly, Dow Jones Industrial Average and NASDAQ indexes are added as alternative measures of stock prices. 
Furthermore, other seemingly relevant variables, such as labor market and national accounts variables, are also included to alleviate the concern of informational deficiency. The additional variables and the corresponding sign restrictions are listed in rows 7--20 of Table~\ref{tab:sign}. With $n=20$ variables and $r=5$ factors, this large VAR-FSV satisfies the condition that $r \leq (n-1)/2$. In addition, it is easy to verify that the sign restrictions given in Table~\ref{tab:sign} satisfy the conditions in Corollary~2, and therefore the latent factors, which we interpret as structural shocks, are point-identified. Given the large number of variables, it is crucial to apply proper shrinkage on the VAR coefficients. Following the bulk of the literature \citep[e.g.,][]{CCM15}, we set $\kappa_1 = 0.04$ and $\kappa_2 = 0.04^2$---i.e., the VAR coefficients associated with lags of other variables are shrunk more strongly to 0 than those on own lags. Again, we obtain 50,000 posterior draws after a burn-in period of 5,000 to compute the impulse responses. The results are reported in Figure~\ref{fig:IRF-Financial}. The impulse responses from this 20-variable VAR-FSV are qualitatively similar to those from the smaller system, but the inference is much sharper. Specifically, the credible intervals of the 6 impulse response functions are much narrower, highlighting the benefits of incorporating more relevant information---more variables and sign restrictions as well as a more informative prior---to sharpen inference. For example, the credible intervals associated with the responses of investment and stock prices exclude zero for the first 32 quarters after the initial impact of a financial shock. This is in contrast to the much wider credible intervals from the 6-variable VAR (the median impulse responses of stock prices even become negative at longer horizons). 
The results from this large system therefore better highlight the impact of a positive financial shock, which \citet{FRS19} define as ``a shock that generates an investment and a stock market boom.'' \begin{figure}[H] \begin{center} \includegraphics[height=12cm]{Fig-IRF-Financial.eps} \end{center} \caption{Impulse responses from a 20-variable VAR-FSV with 5 factors to a one-standard-deviation financial shock. The shaded region represents the 16-th and 84-th percentiles.} \label{fig:IRF-Financial} \end{figure} Next, we plot in Figure~\ref{fig:IRF-other4} the median impulse responses of the 6 variables to the remaining 4 structural shocks. These impulse responses are similar to those presented in Figure 3 in \citet{FRS19}. In particular, we confirm that supply shocks generate large effects not only on output, but also on investment and stock prices. On the other hand, demand shocks have smaller effects on output, investment and stock prices, at least for short to medium horizons, but they are the main driver of prices. Finally, while we also find that monetary shocks have a protracted positive effect on output, their effects on stock prices are more subdued. \begin{figure}[H] \begin{center} \includegraphics[height=12cm]{Fig-IRF-other4.eps} \end{center} \caption{Median impulse responses from a 20-variable VAR-FSV with 5 factors to a one-standard-deviation supply, demand, monetary, and investment shock.} \label{fig:IRF-other4} \end{figure} \subsection{Historical and Forecast Error Variance Decompositions} \label{ss:decompositions} To quantify how much of the historical fluctuations in GDP and spread can be attributed to each of the structural shocks, we compute the historical decompositions of these two variables using the formulas derived in Appendix F. The results are reported in Figure~\ref{fig:HD_GDP} and Figure~\ref{fig:HD_spread}. 
\begin{figure}[H] \begin{center} \includegraphics[height=8cm]{Fig-HD-GDP.eps} \end{center} \caption{Historical decompositions of GDP from a 20-variable VAR-FSV with 5 factors.} \label{fig:HD_GDP} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[height=8cm]{Fig-HD-Spread.eps} \end{center} \caption{Historical decompositions of spread from a 20-variable VAR-FSV with 5 factors.} \label{fig:HD_spread} \end{figure} These historical decompositions from the VAR-FSV are in line with those obtained using a standard structural VAR presented in \citet{FRS19}. In particular, financial shocks play a large role in explaining the historical fluctuations in both GDP and spread, especially in the lead-up to and aftermath of the Great Recession of 2007--2009. Next, we quantify the share of the prediction mean squared errors of 6 selected variables accounted for by each of the 5 structural shocks at different forecast horizons. More specifically, using the expressions developed in Appendix F, we compute the forecast error variance decompositions of the variables, and the results are presented in Table~\ref{tab:FEVD}. 
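The FEVD computations rely on the expressions in Appendix F, which is not reproduced in this excerpt. For orientation only, the following generic numpy sketch computes a forecast error variance decomposition for a VAR with an $n \times q$ impact matrix held fixed over the forecast horizon; in the VAR-FSV the impact matrix is time-varying, so this is a simplified illustration rather than the formula actually used:

```python
import numpy as np

def fevd(A_list, Phi0, H):
    """Forecast error variance decomposition at horizon H.

    A_list : list of n x n VAR coefficient matrices A_1, ..., A_p.
    Phi0   : n x q impact matrix (treated as constant here).
    Returns an n x q matrix whose (i, k) entry is the share of the
    H-step forecast error variance of variable i due to shock k.
    """
    n, p = Phi0.shape[0], len(A_list)
    Psi = [np.eye(n)]                    # MA coefficients Psi_0, Psi_1, ...
    for h in range(1, H):
        Psi.append(sum(A_list[j] @ Psi[h - 1 - j] for j in range(min(h, p))))
    contrib = sum((P @ Phi0) ** 2 for P in Psi)   # cumulative squared responses
    return contrib / contrib.sum(axis=1, keepdims=True)

# Toy check: with A_1 = 0.5*I and impact matrix I, each variable's
# forecast error variance is due entirely to its own shock.
shares = fevd([0.5 * np.eye(3)], np.eye(3), 20)
```

Each row of `shares` sums to one across shocks by construction.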
\begin{table}[H] \footnotesize \centering \caption{Forecast error variance decompositions from a 20-variable VAR-FSV with 5 factors (as a proportion of the variation explained by the factors).} \label{tab:FEVD} \begin{tabular}{llccccc} \hline \hline & Horizon & Supply & Demand & Monetary & Investment & Financial \\ \hline GDP & 1 & 0.08 & 0.13 & 0.45 & 0.11 & 0.23 \\ \rowcolor{lightgray} & 5 & 0.12 & 0.11 & 0.46 & 0.08 & 0.23 \\ & 20 & 0.16 & 0.10 & 0.46 & 0.06 & 0.22 \\ \rowcolor{lightgray} GDP deflator & 1 & 0.11 & 0.84 & 0.01 & 0.04 & 0.00 \\ & 5 & 0.09 & 0.84 & 0.01 & 0.05 & 0.01 \\ \rowcolor{lightgray} & 20 & 0.07 & 0.85 & 0.01 & 0.05 & 0.02 \\ Interest rate & 1 & 0.07 & 0.00 & 0.00 & 0.58 & 0.35 \\ \rowcolor{lightgray} & 5 & 0.10 & 0.00 & 0.00 & 0.58 & 0.32 \\ & 20 & 0.12 & 0.00 & 0.01 & 0.58 & 0.29 \\ \rowcolor{lightgray} Investment & 1 & 0.07 & 0.01 & 0.44 & 0.16 & 0.33 \\ & 5 & 0.15 & 0.02 & 0.40 & 0.09 & 0.34 \\ \rowcolor{lightgray} & 20 & 0.23 & 0.02 & 0.37 & 0.06 & 0.32 \\ S\&P 500 & 1 & 0.92 & 0.00 & 0.00 & 0.01 & 0.07 \\ \rowcolor{lightgray} & 5 & 0.91 & 0.00 & 0.00 & 0.01 & 0.07 \\ & 20 & 0.91 & 0.00 & 0.00 & 0.01 & 0.08 \\ \rowcolor{lightgray} Spread & 1 & 0.02 & 0.00 & 0.00 & 0.13 & 0.85 \\ & 5 & 0.04 & 0.00 & 0.01 & 0.15 & 0.79 \\ \rowcolor{lightgray} & 20 & 0.07 & 0.01 & 0.04 & 0.19 & 0.69 \\ \hline \hline \end{tabular} \end{table} Overall, financial shocks play a large role in explaining the forecast error variances of the majority of the variables. The two exceptions are prices (measured by the GDP deflator), which are mainly driven by demand shocks, and stock prices (measured by the S\&P 500 index), which are mostly driven by supply shocks. \section{Concluding Remarks} \label{s:conclusion} We have considered an order-invariant VAR with factor stochastic volatility and shown how the presence of multivariate stochastic volatility allows for statistical identification of the model. 
Furthermore, we have worked out sufficient conditions in terms of sign restrictions on the impact of the structural shocks for point-identification of the corresponding structural model. To estimate the proposed order-invariant VAR, we developed an efficient MCMC algorithm that can incorporate a large number of variables and sign restrictions. In an empirical application involving 20 macroeconomic and financial variables, we demonstrated the ability of our methods to produce more precise impulse responses compared to a medium-sized structural VAR. \newpage \section*{Appendix A: Proofs of Propositions} In this appendix we provide the proofs of the propositions and corollaries stated in the main text. To that end, we first consider the following two lemmas. \begin{lemma}[\citet{magnus2019matrix}, Theorem 2.13, p.~43] \rm A necessary and sufficient condition for the matrix equation $\mathbf{A}\mathbf{X}\mathbf{B} = \mathbf{C}$ to have a solution is that \begin{equation} \mathbf{A}\bA^+\mathbf{C}\mathbf{B}^+\mathbf{B} = \mathbf{C}, \label{MNeq1} \end{equation} where $\mathbf{D}^+$ denotes the Moore–Penrose inverse of $\mathbf{D}$. In this case, the general solution is \begin{equation} \mathbf{X} = \mathbf{A}^+\mathbf{C}\mathbf{B}^+ +\mathbf{Q}-\mathbf{A}^+\mathbf{A}\mathbf{Q}\mathbf{B}\bB^+, \label{MNeq2} \end{equation} where $\mathbf{Q}$ is an arbitrary matrix of the appropriate dimension. In particular, if $\mathbf{A}$ has full column rank and $\mathbf{B}$ has full row rank, then the unique solution is given by \begin{equation} \mathbf{X} = \mathbf{A}^+\mathbf{C}\mathbf{B}^+. \label{MNeq3} \end{equation} \end{lemma} \textbf{Proof of Lemma 1}: The proof that \eqref{MNeq1} is the necessary and sufficient condition and that the general solution has the form in \eqref{MNeq2} follows directly from \citet{magnus2019matrix}. 
For the uniqueness result \eqref{MNeq3}, note that if $\mathbf{A}$ and $\mathbf{B}$ have full column and row rank, respectively, their Moore–Penrose inverses can be computed as \[ \mathbf{A}^+ = (\mathbf{A}'\mathbf{A})^{-1}\mathbf{A}', \quad \mathbf{B}^+ = \mathbf{B}'(\mathbf{B}\bB')^{-1}. \] It then follows that $\mathbf{A}^+\mathbf{A}=\mathbf{I}$ and $\mathbf{B} \mathbf{B}^+=\mathbf{I}$, and therefore \eqref{MNeq2} reduces to \eqref{MNeq3}. $\qed$ The next lemma adapts a theorem in \citet{AR56} to our setting with heteroskedastic factors. \begin{lemma}[\citet{AR56}, Theorem 5.1, p.~118] \label{lemma2} \rm Under Assumption~\ref{ass2}, the common and idiosyncratic variance components are separately identified. That is, for two observationally equivalent models such that $\vect{\Gamma}_t =\mathbf{L}\vect{\Omega}_t\mathbf{L}'+ \vect{\Sigma}_t = {\mathbf{L}}^*{\vect{\Omega}}^*_t{\mathbf{L}}^{*\prime}+ {\vect{\Sigma}}^*_t$, we have $\mathbf{L}^*\vect{\Omega}^*_t\mathbf{L}^{*\prime} =\mathbf{L}\vect{\Omega}_t\mathbf{L}'$ and $\vect{\Sigma}_t^* = \vect{\Sigma}_t$. \end{lemma} \noindent \textbf{Proof of Lemma~2}: Suppose we have two observationally equivalent models such that $\vect{\Gamma}_t =\mathbf{L}\vect{\Omega}_t\mathbf{L}'+ \vect{\Sigma}_t = {\mathbf{L}}^*{\vect{\Omega}}^*_t{\mathbf{L}}^{*\prime}+ {\vect{\Sigma}}^*_t$. We wish to show that $\mathbf{L}^*\vect{\Omega}^*_t\mathbf{L}^{*\prime} =\mathbf{L}\vect{\Omega}_t\mathbf{L}'$ and $\vect{\Sigma}_t^* = \vect{\Sigma}_t$. Since the off-diagonal elements of $\mathbf{L}\vect{\Omega}_t\mathbf{L}'$ and of $\mathbf{L}^*\vect{\Omega}^*_t\mathbf{L}^{*\prime}$ are the corresponding off-diagonal elements of $\vect{\Gamma}_t$, it suffices to show that the diagonal elements of $\mathbf{L}\vect{\Omega}_t\mathbf{L}'$ are equal to the diagonal elements of $\mathbf{L}^*\vect{\Omega}^*_t\mathbf{L}^{*\prime}$. First note that Assumption~\ref{ass2} implies that $2r +1 \leq n$. 
Furthermore, let \[ \mathbf{L}=\begin{pmatrix} \mathbf{L}_1 \\ l_{r+1} \\ \mathbf{L}_2 \\ \mathbf{L}_3 \\ \end{pmatrix} , \ \ \ \mathbf{L}^*=\begin{pmatrix} \mathbf{L}_1^* \\ l_{r+1}^* \\ \mathbf{L}_2^* \\ \mathbf{L}_3^* \\ \end{pmatrix} \] where $\mathbf{L}_1$ and $\mathbf{L}_2$ are nonsingular square matrices of dimension $r\times r$, $l_{r+1}$ is the $(r+1)$-th row, and $\mathbf{L}_3$ is of dimension $(n-2r-1)\times r$ (it can be null if $n=2r+1$); $\mathbf{L}^*$ is partitioned into submatrices similarly. Then, we have \[ \mathbf{L}\vect{\Omega}_t\mathbf{L}'=\begin{pmatrix} \mathbf{L}_1\vect{\Omega}_t\mathbf{L}_1' & \mathbf{L}_1 \vect{\Omega}_t l_{r+1}' & \mathbf{L}_1 \vect{\Omega}_t \mathbf{L}_2' & \mathbf{L}_1 \vect{\Omega}_t \mathbf{L}_3' \\ l_{r+1} \vect{\Omega}_t \mathbf{L}_1' & l_{r+1} \vect{\Omega}_t l_{r+1}' & l_{r+1} \vect{\Omega}_t \mathbf{L}_2' & l_{r+1} \vect{\Omega}_t \mathbf{L}_3' \\ \mathbf{L}_2 \vect{\Omega}_t \mathbf{L}_1' & \mathbf{L}_2 \vect{\Omega}_t l_{r+1}' & \mathbf{L}_2 \vect{\Omega}_t \mathbf{L}_2' & \mathbf{L}_2 \vect{\Omega}_t \mathbf{L}_3' \\ \mathbf{L}_3 \vect{\Omega}_t \mathbf{L}_1' & \mathbf{L}_3 \vect{\Omega}_t l_{r+1}' & \mathbf{L}_3 \vect{\Omega}_t \mathbf{L}_2' & \mathbf{L}_3 \vect{\Omega}_t \mathbf{L}_3' \\ \end{pmatrix} \] and $\mathbf{L}^*\vect{\Omega}^*_t\mathbf{L}^{*\prime}$ has the same form. Since $\mathbf{L}_1 \vect{\Omega}_t l_{r+1}'$, $ \mathbf{L}_1 \vect{\Omega}_t \mathbf{L}_2' $, $l_{r+1} \vect{\Omega}_t \mathbf{L}_2'$ are off-diagonal, $\mathbf{L}_1 \vect{\Omega}_t l_{r+1}'=\mathbf{L}_1^* \vect{\Omega}_t^* l_{r+1}^{*\prime}$, $ \mathbf{L}_1 \vect{\Omega}_t \mathbf{L}_2'=\mathbf{L}_1^* \vect{\Omega}_t^* \mathbf{L}_2^{*\prime} $ and $l_{r+1} \vect{\Omega}_t \mathbf{L}_2'=l_{r+1}^* \vect{\Omega}_t^* \mathbf{L}_2^{*\prime}$. Note that since $\mathbf{L}_1$ and $\mathbf{L}_2$ are nonsingular, so is $\mathbf{L}_1 \vect{\Omega}_t \mathbf{L}_2'$. 
Next, since $\mathbf{L}\vect{\Omega}_t\mathbf{L}'$ is of rank $r$, any square submatrix of dimension larger than $r$ is singular. In particular, \begin{align*} 0&=\begin{vmatrix} \mathbf{L}_1^* \vect{\Omega}_t^* l_{r+1}^{*\prime} & \mathbf{L}_1^* \vect{\Omega}_t^* \mathbf{L}_2^{*\prime}\\ l_{r+1}^* \vect{\Omega}_t^* l_{r+1}^{*\prime} & l_{r+1}^* \vect{\Omega}_t^* \mathbf{L}_2^{*\prime} \\ \end{vmatrix} = \begin{vmatrix} \mathbf{L}_1 \vect{\Omega}_t l_{r+1}' & \mathbf{L}_1 \vect{\Omega}_t \mathbf{L}_2'\\ l_{r+1}^* \vect{\Omega}_t^* l_{r+1}^{*\prime} & l_{r+1} \vect{\Omega}_t \mathbf{L}_2' \\ \end{vmatrix} \\ &=(-1)^r l_{r+1}^* \vect{\Omega}_t^* l_{r+1}^{*\prime} |\mathbf{L}_1 \vect{\Omega}_t \mathbf{L}_2'| + f(\mathbf{L}\vect{\Omega}_t\mathbf{L}'). \end{align*} Similarly, $0 = (-1)^r l_{r+1} \vect{\Omega}_t l_{r+1}^{\prime} |\mathbf{L}_1 \vect{\Omega}_t \mathbf{L}_2'| + f(\mathbf{L}\vect{\Omega}_t\mathbf{L}')$. Since $|\mathbf{L}_1 \vect{\Omega}_t \mathbf{L}_2'| \neq 0$, we must have $l_{r+1} \vect{\Omega}_t l_{r+1}^{\prime}=l_{r+1}^* \vect{\Omega}_t^* l_{r+1}^{*\prime}$. In the same fashion, we can show that the other diagonal elements of $\mathbf{L}\vect{\Omega}_t\mathbf{L}'$ are equal to those of $\mathbf{L}^*\vect{\Omega}^*_t\mathbf{L}^{*\prime}$. $\qed$ \noindent \textbf{Proof of Proposition 2}: Suppose we have two observationally equivalent models such that $\vect{\Gamma}_t =\mathbf{L}\vect{\Omega}_t\mathbf{L}'+ \vect{\Sigma}_t= {\mathbf{L}}^*{\vect{\Omega}}^*_t{\mathbf{L}}^{*\prime}+ {\vect{\Sigma}}^*_t$. Under Assumption~\ref{ass2}, Lemma~\ref{lemma2} implies that the common and the idiosyncratic variance components can be separately identified, i.e., $\mathbf{L}^*\vect{\Omega}^*_t\mathbf{L}^{*\prime} =\mathbf{L}\vect{\Omega}_t\mathbf{L}'$ and $\vect{\Sigma}_t^* = \vect{\Sigma}_t$. 
For notational convenience, let $\vect{\omega}_t=\text{vecd}(\vect{\Omega}_t)=(\omega_{1,t}, \ldots, \omega_{r,t})'$ and $\vect{\sigma}_t=\text{vecd}(\vect{\Sigma}_t)=(\sigma_{1,t}, \ldots, \sigma_{n,t})'$. Consider the first identity $\mathbf{L}\vect{\Omega}_t\mathbf{L}'= \mathbf{L}^*\vect{\Omega}^*_t\mathbf{L}^{*\prime}$. By Lemma 1, the necessary and sufficient condition to solve the system of equations for $\vect{\Omega}_t^*$ is \begin{equation} \mathbf{L}^*\mathbf{D}^*\vect{\Omega}_t\mathbf{D}^{*\prime}\mathbf{L}^{*\prime}- \mathbf{L} \vect{\Omega}_t\mathbf{L}'=\vect{0} \label{id1-eq1} \end{equation} where $\mathbf{D}^* = [\mathbf{L}^{*}]^{+}\mathbf{L}$ and $[\mathbf{L}^{*}]^{+}=(\mathbf{L}^{*\prime}\mathbf{L}^*)^{-1}\mathbf{L}^{*\prime}$ since $\mathbf{L}^*$ has full column rank. Let $l_{ij}$ and $l^d_{ij}$ denote the $(i,j)$ elements of $\mathbf{L}$ and of the product $\mathbf{L}^*\mathbf{D}^*$, respectively. Equation~\eqref{id1-eq1} implies that $\sum_{k=1}^r\omega_{k,t}(l^d_{ik}l^d_{jk}-l_{ik}l_{jk})=0$, $i, j=1,\ldots, n$. Under the assumption that the elements in $\vect{\omega}_t$ are linearly independent (i.e., the only solution to $\vect{\delta}'\vect{\omega}_t = 0 $ for all $t$ is $\vect{\delta}=\vect{0}$), we must have $\mathbf{L}^*\mathbf{D}^*={\pm}\mathbf{L}$. Hence, $\mathbf{L}^*$ is obtained once $\mathbf{D}^*$ is determined. We consider $\mathbf{L}^*\mathbf{D}^*=\mathbf{L}$. As will be clear later, the same conclusion applies for the case $\mathbf{L}^*\mathbf{D}^*=-\mathbf{L}$. We next turn to the determination of $\mathbf{D}^*$. Since $\mathbf{L}^*$ has full column rank, again by Lemma~1, the unique solution to $\mathbf{L}^*\vect{\Omega}^*_t\mathbf{L}^{*\prime} = \mathbf{L}\vect{\Omega}_t\mathbf{L}'$ is $\vect{\Omega}^*_t = \mathbf{D}^*\vect{\Omega}_t\mathbf{D}^{*\prime}$. In particular, since $\vect{\Omega}_t^*$ is diagonal, we have $\sum_{l=1}^r\omega_{l,t}d^*_{il}d^*_{jl}=0$ for $i,j=1,\ldots, r, i\neq j$ and $t=1,\ldots,T$.
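The linear-independence argument invoked here can be checked numerically: simulated paths stand in for $\vect{\omega}_t$, and the full-column-rank check mirrors the claim that the resulting homogeneous system admits only the trivial solution (dimensions below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
T, r = 50, 3

# Stack T draws of linearly independent volatility paths omega_t row-wise.
Omega_paths = np.abs(rng.standard_normal((T, r)))

# Full column rank means the homogeneous system Omega_paths @ b = 0
# admits only the trivial solution b = 0.
assert np.linalg.matrix_rank(Omega_paths) == r
smallest_sv = np.linalg.svd(Omega_paths, compute_uv=False)[-1]
assert smallest_sv > 1e-10  # no nontrivial null vector
```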
These restrictions can be more succinctly expressed as $\dot{\vect{\Omega}}_T \mathbf{b}_{ij} = \mathbf{0}$, where $\mathbf{b}_{ij}=(d^*_{i1}d^*_{j1},\ldots,d^*_{ir}d^*_{jr})'$ and $\dot{\vect{\Omega}}_T = (\vect{\omega}_{1},\ldots,\vect{\omega}_{T})'$ is $T\times r$. Given that the rank of $\dot{\vect{\Omega}}_T$ is $r$ when the processes in $\vect{\omega}_t$ are linearly independent for $t=1,\ldots, T$, the only solution to such a set of $T$ homogeneous linear equations is $\mathbf{b}_{ij} = \mathbf{0}$ irrespective of $i$ and $j$. Therefore, each column of $\mathbf{D}^*$ contains at most one nonzero element (otherwise for some column $k$ there exist nonzero elements $d_{ik}$ and $d_{jk}$ with $i\neq j$ such that $d_{ik} d_{jk} \neq 0$, contradicting $\mathbf{b}_{ij} = \mathbf{0}$). In this scenario, similar to \citet{BB20}, we can write $\mathbf{D}^*=\mathbf{P}_{\pm}\mathbf{P}_{r}\mathbf{S}_{r}$, where $\mathbf{P}_{r}$ is one of the $r!$ permutation matrices, $\mathbf{P}_{\pm} = \diag(\pm 1, \ldots, \pm 1)$ is a reflection matrix that corresponds to one of the $2^r$ ways to switch the signs of the $r$ columns, and $\mathbf{S}_{r}$ is an arbitrary diagonal scaling matrix of dimension $r\times r$. Next, we show that $\mathbf{S}_{r}$ must be an identity matrix. Using the fact that $\vect{\Omega}^*_t = \mathbf{D}^*{\vect{\Omega}}_t\mathbf{D}^{*\prime}$, we can write the observationally equivalent factors as $\mathbf{f}^*_t=\mathbf{D}^*\mathbf{f}_t$. Without loss of generality, we consider solely the scaling effect, i.e., $\mathbf{D}^*=\mathbf{S}_{r} = \diag(s_1,\ldots,s_r)$. Now, $\mathbf{f}^*_t \sim\distn{N}(\mathbf{0}, {\vect{\Omega}}_t^*)$, where \begin{align} \vect{\Omega}_t^* & =\mathbf{S}_{r}\text{diag}(\text{e}^{h_{n+1,t}},\ldots, \text{e}^{h_{n+r,t}})\mathbf{S}_{r}^{'} \notag \\ & = \text{diag}(\text{e}^{h_{n+1,t}+2 \log s_1 },\ldots, \text{e}^{h_{n+r,t}+2\log s_r}) \notag \\ & \equiv \text{diag}(\text{e}^{h^*_{n+1,t}},\ldots, \text{e}^{h^*_{n+r,t}}). 
\label{standardization} \end{align} Since we standardize the unconditional variances of the log stochastic volatility processes to be one, we must have $\mathbb E h^*_{n+j,t} = \mathbb E [h_{n+j,t}+2\log s_j] = 0+2\log s_j =0$ for $j=1,\ldots,r$, which implies that $s_j = 1$. Thus, $\mathbf{S}_{r} = \mathbf{I}_r$, and the only form $\mathbf{D}^*$ can take is $\mathbf{D}^*=\mathbf{P}_{\pm}\mathbf{P}_{r}$. \qed \noindent \textbf{Proof of Proposition 3}: The proposition is equivalent to the claim that the only feasible factor loadings submatrix $\mathbf{L}_1^*$ under the assumptions must satisfy $\mathbf{L}_1^*=\mathbf{L}_1 \mathbf{D}_1$, where $\mathbf{D}_1=\mathbf{P}_{\pm}\mathbf{P}_{r_1}$. In what follows, we prove the claim by using a similar approach as in the proof of Proposition 2. First notice that since $\mathbf{L}$ satisfies Assumption~\ref{ass2}, any observationally equivalent model must satisfy ${\mathbf{L}}^*{\vect{\Omega}}^*_t{\mathbf{L}}^{*\prime}=\mathbf{L}\vect{\Omega}_t\mathbf{L}'$. Applying the same argument as in the proof of Proposition 2, the solution ${\mathbf{L}}^*$ must be in the form ${\mathbf{L}}={\mathbf{L}}^*\mathbf{D}^*$, and it follows that $\vect{\Omega}^*_t=\mathbf{D}^*{\vect{\Omega}}_t\mathbf{D}^{*\prime}$. Partition $\mathbf{D}^*$ conformably as $\mathbf{D}^*=(\mathbf{D}^*_1, \mathbf{D}^*_2)$, where $\mathbf{D}^*_1$ is $r\times r_1$ and $\mathbf{D}^*_2$ is $r\times r_2$. We thus have \begin{equation} \label{part1} \vect{\Omega}^*_t=(\mathbf{D}^*_1, \mathbf{D}^*_2) \begin{pmatrix} \vect{\Omega}_{1t} & \mathbf{0} \\ \mathbf{0} & \mathbf{I}_{r_2} \end{pmatrix}\begin{pmatrix} \mathbf{D}^{*\prime}_1 \\ \mathbf{D}^{*\prime}_2 \end{pmatrix}=\mathbf{D}^*_1 \vect{\Omega}_{1t} \mathbf{D}^{*\prime}_1+\mathbf{D}^*_2 \mathbf{D}^{*\prime}_2. 
\end{equation} Again, since ${\vect{\Omega}}_t^*$ is diagonal, the off-diagonal elements of $\mathbf{D}^*_1 \vect{\Omega}_{1t} \mathbf{D}^{*\prime}_1+\mathbf{D}^*_2 \mathbf{D}^{*\prime}_2$ must be zero, i.e., $\sum_{l_1=1}^{r_1}{\omega}_{l_1,t}d^*_{il_1}d^*_{jl_1}+\sum_{l_2=r_1+1}^{r}d^*_{il_2}d^*_{jl_2}=0$ for $j>i$ and $t=1,\ldots,T$, where ${{\omega}}_{l_1,t}$ is the $l_1$-th element in ${\vect{\omega}}_{t}=({{\omega}}_{1,t},\ldots,{{\omega}}_{r_1,t},1,\ldots,1)'$. For a given pair $(i,j)$, these restrictions can be expressed as $\ddot{\vect{\Omega}}_T \mathbf{b}_{ij}=\mathbf{0}_T$, where $\ddot{\vect{\Omega}}_T=(\dot{\vect{\omega}}_{1},\ldots,\dot{\vect{\omega}}_{T})'$ is a $T\times (r_1+1)$ matrix, $\dot{\vect{\omega}}_{t}=({{\omega}}_{1,t},\ldots,{{\omega}}_{r_1,t},1)'$ for $t=1,\ldots,T$, and $\mathbf{b}_{ij}=(d^*_{i1}d^*_{j1},\ldots,d^*_{ir_1}d^*_{jr_1},\sum_{l_2=r_1+1}^{r}d^*_{il_2}d^*_{jl_2})'$. Given that the rank of $\ddot{\vect{\Omega}}_T$ is $r_1+1$ when the processes in $\dot{\vect{\omega}}_t$ are linearly independent, the only solution to such a set of $T$ homogeneous linear equations is $\mathbf{b}_{ij}=\mathbf{0}_{r_1+1}$ irrespective of $i$ and $j$. Therefore, applying the same argument as before, the first $r_1$ restrictions in $\mathbf{b}_{ij}=\mathbf{0}_{r_1+1}$---i.e., $d^*_{i1}d^*_{j1}= \cdots =d^*_{ir_1}d^*_{jr_1}=0$---imply that each column of $\mathbf{D}_1^*$ contains at most one element that is different from 0. Next, we show that any nonzero elements can only be in the upper $r_1$ rows of $\mathbf{D}_1^*$, which in turn makes the lower $r_2$ rows a zero submatrix.
To that end, we partition $\mathbf{D}^*$ conformably and write \eqref{part1} as: \begin{align*} \vect{\Omega}^*_t & = \begin{pmatrix} \mathbf{D}_{11}^* & \mathbf{D}_{12}^* \\ \mathbf{D}_{21}^* & \mathbf{D}_{22}^* \end{pmatrix} \begin{pmatrix} \vect{\Omega}_{1t} & \mathbf{0} \\ \mathbf{0} & \mathbf{I}_{r_2} \end{pmatrix}\begin{pmatrix} \mathbf{D}_{11}^{*\prime} & \mathbf{D}_{21}^{*\prime} \\ \mathbf{D}_{12}^{*\prime} & \mathbf{D}_{22}^{*\prime} \end{pmatrix} \\ & =\begin{pmatrix} \mathbf{D}_{11}^*\vect{\Omega}_{1t}\mathbf{D}_{11}^{*\prime}+\mathbf{D}_{12}^*\mathbf{D}_{12}^{*\prime} & \mathbf{D}_{11}^*\vect{\Omega}_{1t}\mathbf{D}_{21}^{*\prime}+\mathbf{D}_{12}^*\mathbf{D}_{22}^{*\prime} \\ \mathbf{D}_{21}^*\vect{\Omega}_{1t}\mathbf{D}_{11}^{*\prime}+\mathbf{D}_{22}^*\mathbf{D}_{12}^{*\prime} & \mathbf{D}_{21}^*\vect{\Omega}_{1t}\mathbf{D}_{21}^{*\prime}+\mathbf{D}_{22}^*\mathbf{D}_{22}^{*\prime} \end{pmatrix}. \end{align*} Since $\vect{\Omega}^*_t$ is diagonal, we must have \begin{align} \mathbf{D}_{21}^*\vect{\Omega}_{1t}\mathbf{D}_{21}^{*\prime}+\mathbf{D}_{22}^*\mathbf{D}_{22}^{*\prime} &= \mathbf{I}_{r_2} \label{eq0:1} \\ \mathbf{D}_{11}^*\vect{\Omega}_{1t}\mathbf{D}_{21}^{*\prime}+\mathbf{D}_{12}^*\mathbf{D}_{22}^{*\prime} &= \mathbf{0} \label{eq0:2} \\ \mathbf{D}_{21}^*\vect{\Omega}_{1t}\mathbf{D}_{11}^{*\prime}+\mathbf{D}_{22}^*\mathbf{D}_{12}^{*\prime} &= \mathbf{0} \notag \end{align} and $\mathbf{D}_{11}^*\vect{\Omega}_{1t}\mathbf{D}_{11}^{*\prime}+\mathbf{D}_{12}^*\mathbf{D}_{12}^{*\prime}$ is diagonal. Using exactly the same argument as in the analysis of \eqref{part1}, the requirement that $\mathbf{D}_{11}^*\vect{\Omega}_{1t}\mathbf{D}_{11}^{*\prime}+\mathbf{D}_{12}^*\mathbf{D}_{12}^{*\prime}$ be diagonal implies a set of $T$ homogeneous linear equations of dimension $r_1+1$. It follows that each column of $\mathbf{D}_{11}^*$ contains at most one nonzero element.
Since earlier we have proved the same result for $\mathbf{D}^*_1=(\mathbf{D}^{*\prime}_{11},\mathbf{D}^{*\prime}_{21})'$, it must be the case that all the nonzero elements are in $\mathbf{D}^{*}_{11}$, i.e., $\mathbf{D}^{*}_{21}=\mathbf{0}$. Otherwise, there is at least one row in $\mathbf{D}_{11}^*$ whose elements are all 0, say row $k$ with $k\leq r_1$, which implies that \begin{align} [\vect{\Omega}^*_t]_{(k,k)}&= [\mathbf{D}_{11}^*\vect{\Omega}_{1t}\mathbf{D}_{11}^{*\prime}]_{(k,k)}+[\mathbf{D}_{12}^*\mathbf{D}_{12}^{*\prime}]_{(k,k)} \notag \\ &=\sum_{l_1=1}^{r_1}{\omega}_{l_1,t}d^*_{kl_1}d^*_{kl_1} + \sum_{l_2=r_1+1}^{r}d^{*2}_{kl_2}=0+\sum_{l_2=r_1+1}^{r}d^{*2}_{kl_2} = \text{constant}, \label{one} \end{align} where $[\mathbf{A}]_{(i,j)}$ denotes the $(i,j)$-th element of $\mathbf{A}$. It is clear that \eqref{one} violates the assumption that the elements of $(\text{vecd}(\vect{\Omega}_{1t}^*)', 1)'$ are linearly independent for all $t$. Now, using the fact that $\mathbf{D}^{*}_{21}=\mathbf{0}$ in \eqref{eq0:1}, it follows that $\mathbf{D}_{22}^*$ is an orthogonal matrix. Next, using the fact that $\mathbf{D}^{*}_{21}=\mathbf{0}$ in \eqref{eq0:2}, we have $\mathbf{D}^*_{12}=\mathbf{0}$ since the orthogonal matrix $\mathbf{D}^*_{22}$ is invertible. Subsequently, the $(1,1)$-th block of $\vect{\Omega}_t^*$ reduces to $\mathbf{D}_{11}^*\vect{\Omega}_{1t}\mathbf{D}_{11}^{*\prime}$. From the earlier conclusion that each column of $\mathbf{D}_{11}^*$ has at most one nonzero element and the standardization requirement as shown in \eqref{standardization}, it is clear that $\mathbf{D}_{11}^*$ must be of the form $\mathbf{P}_{\pm}\mathbf{P}_{r_1}$.
To summarize, we have shown that \[ \mathbf{L}^*=(\mathbf{L}^*_1, \mathbf{L}^*_2) = \mathbf{L} \mathbf{D}^{*\prime}= (\mathbf{L}_1, \mathbf{L}_2)\begin{pmatrix} \mathbf{P}_{\pm}\mathbf{P}_{r_1} & \mathbf{0} \\ \mathbf{0} & \mathbf{D}_{22}^{*\prime} \end{pmatrix} = (\mathbf{L}_1\mathbf{P}_{\pm}\mathbf{P}_{r_1}, \mathbf{L}_2\mathbf{D}_{22}^{*\prime}), \] where $\mathbf{D}_{22}^{*}$ is an orthogonal matrix of dimension $r_2$. $\qed$ \noindent \textbf{Proof of Corollary 1}: The proof follows directly from the proof of Proposition 3. More specifically, under the assumption $r_1=r-1$, $\mathbf{D}_{22}^{*}$ is an orthogonal matrix of dimension~1. Thus the only admissible $\mathbf{D}_{22}^{*}$ is $\pm 1$. So the full matrix $\mathbf{D}^{*}$ is also of the form $\mathbf{P}_{\pm}\mathbf{P}_{r}$. $\qed$ \newpage \section*{Appendix B: Estimation Details} In this appendix we provide the estimation details for fitting the model in~\eqref{eq:yt}--\eqref{eq:ht2}. More specifically, posterior draws can be obtained by sampling sequentially from the following distributions: \begin{enumerate} \item $p(\mathbf{f} \,|\, \mathbf{y}, \vect{\beta}, \mathbf{L}, \mathbf{h}, \vect{\mu}, \vect{\phi},\vect{\sigma}^2) = p(\mathbf{f} \,|\, \mathbf{y}, \vect{\beta}, \mathbf{L}, \mathbf{h})$; \item $p(\vect{\beta},\mathbf{L} \,|\, \mathbf{y}, \mathbf{f}, \mathbf{h}, \vect{\mu}, \vect{\phi},\vect{\sigma}^2) = \prod_{i=1}^n p(\vect{\beta}_i,\mathbf{l}_i \,|\, \mathbf{y}_{i,\boldsymbol{\cdot}}, \mathbf{f}, \mathbf{h}_{i,\boldsymbol{\cdot}})$; \item $p(\mathbf{h} \,|\, \mathbf{y}, \mathbf{f},\vect{\beta}, \mathbf{L}, \vect{\mu}, \vect{\phi},\vect{\sigma}^2) = \prod_{i=1}^{n+r} p(\mathbf{h}_{i,\boldsymbol{\cdot}} \,|\, \mathbf{y}, \mathbf{f}, \vect{\beta}, \mathbf{L}, \vect{\mu}, \vect{\phi},\vect{\sigma}^2)$; \item $p(\vect{\sigma}^2 \,|\, \mathbf{y}, \mathbf{f}, \vect{\beta}, \mathbf{L}, \mathbf{h}, \vect{\mu}, \vect{\phi}) = \prod_{i=1}^{n+r} p(\sigma_i^2 \,|\, \mathbf{h}_{i,\boldsymbol{\cdot}}, 
\mu_i, \phi_i)$; \item $p(\vect{\mu} \,|\, \mathbf{y}, \mathbf{f}, \vect{\beta}, \mathbf{L}, \mathbf{h}, \vect{\phi},\vect{\sigma}^2) = \prod_{i=1}^{n} p(\mu_i \,|\, \mathbf{h}_{i,\boldsymbol{\cdot}}, \phi_i, \sigma^2_i) $; \item $p(\vect{\phi} \,|\, \mathbf{y}, \mathbf{f}, \vect{\beta}, \mathbf{L}, \mathbf{h}, \vect{\mu},\vect{\sigma}^2) = \prod_{i=1}^{n+r} p(\phi_i \,|\, \mathbf{h}_{i,\boldsymbol{\cdot}}, \mu_i, \sigma^2_i) $. \end{enumerate} In Section~\ref{s:estimation} of the main text we describe the implementation details of Steps 1 and 2. Below we give the details of the remaining steps. \textbf{Step 3}: Sample $\mathbf{h}$. Again, given the latent factors $\mathbf{f}$, the VAR becomes $n$ unrelated regressions and we can sample each vector of log-volatilities $\mathbf{h}_{i,\boldsymbol{\cdot}} = (h_{i,1},\ldots, h_{i,T})'$ separately. More specifically, we can directly apply the auxiliary mixture sampler of \citet*{KSC98} in conjunction with the precision sampler of \citet{CJ09} to sample from $(\mathbf{h}_{i,\boldsymbol{\cdot}} \,|\, \mathbf{y}, \mathbf{f},\vect{\beta},\mathbf{L},\vect{\mu},\vect{\phi},\vect{\sigma}^2)$ for $i=1,\ldots, n+r$. For a textbook treatment, see, e.g., chapter 19 of \citet{CKPT19}. \textbf{Step 4}: Sample $\vect{\sigma}^2$. This step can be done easily, as the elements of $\vect{\sigma}^2$ are conditionally independent and, for $i=1,\ldots, n+r$, each follows an inverse-gamma distribution: \[ (\sigma_i^2 \,|\, \mathbf{h}_{i,\boldsymbol{\cdot}},\mu_i,\phi_i) \sim \distn{IG}(\nu_{i}+T/2, \widetilde{S}_i), \] where $ \widetilde{S}_i = S_i + [(1-\phi_i^2)(h_{i,1}-\mu_i)^2 + \sum_{t=2}^T(h_{i,t}-\mu_i-\phi_i(h_{i,t-1}-\mu_i))^2]/2$, with the understanding that $\mu_i=0$ for $i>n$. \textbf{Step 5}: Sample $\vect{\mu}$.
It is also straightforward to implement this step, as the elements of $\vect{\mu}$ are conditionally independent and, for $i=1,\ldots, n$, each follows a normal distribution: \[ (\mu_i \,|\, \mathbf{h}_{i,\boldsymbol{\cdot}}, \phi_i, \sigma^2_i)\sim \distn{N}(\hat{\mu}_i, K_{\mu_i}^{-1}), \] where \begin{align*} K_{\mu_i} & = V_{\mu_i}^{-1} + \frac{1}{\sigma_i^2}\left[ 1-\phi_i^2 + (T-1)(1-\phi_i)^2\right] \\ \hat{\mu}_i & = K_{\mu_i}^{-1}\left[V_{\mu_i}^{-1}\mu_{0,i} + \frac{1}{\sigma_i^2}\left( (1-\phi_i^2)h_{i,1} + (1-\phi_i)\sum_{t=2}^T(h_{i,t}-\phi_ih_{i,t-1})\right)\right]. \end{align*} \textbf{Step 6}: To sample $\phi_i, i=1,\ldots, n+r$, note that \[ p(\phi_i \,|\, \mathbf{h}_{i,\boldsymbol{\cdot}},\mu_i,\sigma^2_i)\propto p(\phi_i)g(\phi_i)\text{e}^{-\frac{1}{2\sigma^2_i}\sum_{t=2}^T(h_{i,t}-\mu_i-\phi_i(h_{i,t-1}-\mu_i))^2}, \] where $ g(\phi_i) = (1-\phi_i^2)^{\frac{1}{2}}\text{e}^{-\frac{1}{2\sigma_i^2}(1-\phi_i^2)(h_{i,1}-\mu_i)^2}$ and $p(\phi_i)$ is the truncated normal prior, with the understanding that $\mu_i=0$ for $i>n$. The conditional density $p(\phi_i \,|\, \mathbf{h}_{i,\boldsymbol{\cdot}},\mu_i,\sigma^2_i)$ is non-standard, but a draw from it can be obtained by using an independence-chain Metropolis-Hastings step with proposal distribution $\distn{N}(\hat{\phi}_i, K_{\phi_i}^{-1}) 1(|\phi_i|<1)$, where \begin{align*} K_{\phi_i} & = V_{\phi_i}^{-1} + \frac{1}{\sigma_i^2}\sum_{t=2}^{T}(h_{i,t-1}-\mu_i)^2\\ \hat{\phi}_i & = K_{\phi_i}^{-1}\left[V_{\phi_i}^{-1}\phi_{0,i} + \frac{1}{\sigma_i^2}\sum_{t=2}^{T}(h_{i,t-1}-\mu_i) (h_{i,t}-\mu_i) \right]. \end{align*} Then, given the current draw $\phi_i$, a proposal $\phi_i^*$ is accepted with probability $\min(1,g(\phi_i^*)/g(\phi_i))$; otherwise the Markov chain stays at the current state $\phi_i$.
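A minimal Python sketch of this Metropolis-Hastings step for a single $\phi_i$ follows; the function name and the prior hyperparameter values (`phi0`, `V_phi`) are our own illustrative choices, not from any package:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_phi(h, mu, sigma2, phi_current, phi0=0.97, V_phi=0.1**2):
    """One independence-chain MH update for phi_i (truncated normal prior)."""
    hd = h - mu  # demeaned log-volatilities h_{i,t} - mu_i
    # Proposal precision and mean, as in the display above.
    K_phi = 1.0 / V_phi + np.sum(hd[:-1] ** 2) / sigma2
    phi_hat = (phi0 / V_phi + np.sum(hd[:-1] * hd[1:]) / sigma2) / K_phi

    def g(phi):
        # Contribution of the initial condition h_{i,1}.
        return np.sqrt(1.0 - phi**2) * np.exp(
            -(1.0 - phi**2) * hd[0] ** 2 / (2.0 * sigma2))

    # Draw from the N(phi_hat, 1/K_phi) proposal truncated to (-1, 1).
    while True:
        phi_prop = phi_hat + rng.standard_normal() / np.sqrt(K_phi)
        if abs(phi_prop) < 1.0:
            break
    # Accept with probability min(1, g(prop) / g(current)).
    if rng.uniform() < min(1.0, g(phi_prop) / g(phi_current)):
        return phi_prop
    return phi_current
```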
\newpage \section*{Appendix C: Technical Details on Integrated Likelihood Evaluation} In this appendix we provide the technical details for evaluating the integrated likelihood outlined in Section \ref{ss:intlike}. Recall that the integrated likelihood can be written as \begin{equation}\label{eq:intlike2} p(\mathbf{y} \,|\,\vect{\beta}, \mathbf{L},\vect{\mu},\vect{\phi},\vect{\sigma}^2) = \int p(\mathbf{y}\,|\,\vect{\beta},\mathbf{L},\mathbf{h}) p(\mathbf{h} \,|\, \vect{\mu},\vect{\phi},\vect{\sigma}^2) \text{d}\mathbf{h}, \end{equation} where the first term in the integrand has the following expression: \[ p(\mathbf{y}\,|\,\vect{\beta},\mathbf{L},\mathbf{h}) = (2\pi)^{-\frac{Tn}{2}}\prod_{t=1}^T|\mathbf{L}\vect{\Omega}_t\mathbf{L}'+\vect{\Sigma}_t|^{- \frac{1}{2}} \text{e}^{-\frac{1}{2}(\mathbf{y}_t - (\mathbf{I}_n\otimes\mathbf{x}_t')\vect{\beta})'(\mathbf{L}\vect{\Omega}_t\mathbf{L}'+\vect{\Sigma}_t)^{-1} (\mathbf{y}_t - (\mathbf{I}_n\otimes\mathbf{x}_t')\vect{\beta})}. \] Next, we derive the joint density of the log-volatilities $p(\mathbf{h} \,|\, \vect{\mu},\vect{\phi},\vect{\sigma}^2)$. To that end, stack the state equations \eqref{eq:ht1}-\eqref{eq:ht2} over $t=1,\ldots, T$: \[ \mathbf{H}_{\vect{\phi}}\mathbf{h} = \mathbf{H}_{\vect{\phi}} \mathbf{m}_{\vect{\mu}} + \mathbf{u}^h, \quad \mathbf{u}^h\sim\distn{N}(\mathbf{0},\mathbf{S}_{\vect{\sigma}^2}), \] where $\mathbf{S}_{\vect{\sigma}^2} = \text{diag}(\sigma_1^2/(1-\phi_1^2), \ldots, \sigma_{n+r}^2/(1-\phi_{n+r}^2),\sigma_1^2,\ldots, \sigma_{n+r}^2,\ldots, \sigma_1^2, \ldots, \sigma_{n+r}^2)$ and \[ \mathbf{m}_{\vect{\mu}} = \mathbf{1}_T\otimes\begin{pmatrix} \vect{\mu} \\ \mathbf{0} \end{pmatrix}, \quad \mathbf{H}_{\vect{\phi}} = \begin{pmatrix} \mathbf{I}_{n+r} & \mathbf{0} & \cdots & \mathbf{0} \\ - \diag(\vect{\phi}) & \mathbf{I}_{n+r} & \ddots & \vdots \\ \vdots & \ddots & \ddots & \vdots \\ \mathbf{0} & \cdots & - \diag(\vect{\phi}) & \mathbf{I}_{n+r} \end{pmatrix}.
\] Or equivalently \[ \mathbf{h} = \mathbf{m}_{\vect{\mu}} + \mathbf{H}_{\vect{\phi}}^{-1}\mathbf{u}^h, \quad \mathbf{H}_{\vect{\phi}}^{-1}\mathbf{u}^h\sim\distn{N}(\mathbf{0},(\mathbf{H}_{\vect{\phi}}'\mathbf{S}_{\vect{\sigma}^2}^{-1}\mathbf{H}_{\vect{\phi}})^{-1}), \] since the determinant of the square matrix $\mathbf{H}_{\vect{\phi}}$ is one, so that $\mathbf{H}_{\vect{\phi}}$ is invertible. It follows that $(\mathbf{h} \,|\, \vect{\mu},\vect{\phi},\vect{\sigma}^2)\sim\distn{N}(\mathbf{m}_{\vect{\mu}},(\mathbf{H}_{\vect{\phi}}'\mathbf{S}_{\vect{\sigma}^2}^{-1}\mathbf{H}_{\vect{\phi}})^{-1})$ with log-density \begin{equation*} \begin{split} \log p(\mathbf{h} \,|\, \vect{\mu},\vect{\phi},\vect{\sigma}^2) & = -\frac{T(n+r)}{2}\log(2\pi) - \frac{T}{2}\sum_{i=1}^{n+r}\log\sigma_i^2 + \frac{1}{2}\sum_{i=1}^{n+r}\log(1-\phi_i^2) \\ & \quad - \frac{1}{2}(\mathbf{h}-\mathbf{m}_{\vect{\mu}})'\mathbf{H}_{\vect{\phi}}'\mathbf{S}_{\vect{\sigma}^2}^{-1}\mathbf{H}_{\vect{\phi}}(\mathbf{h}-\mathbf{m}_{\vect{\mu}}). \end{split} \end{equation*} Next, we introduce an importance sampling estimator to evaluate the integral in \eqref{eq:intlike2}. The ideal zero-variance importance sampling density in this case is the conditional density of $\mathbf{h}$ given the data and other parameters but marginal of $\mathbf{f}$, i.e., $p(\mathbf{h} \,|\, \mathbf{y}, \vect{\beta}, \mathbf{L}, \vect{\mu}, \vect{\phi}, \vect{\sigma}^2)$. But this density cannot be directly used as an importance sampling density as it is non-standard. We instead approximate it using a Gaussian density, which is then used as the importance sampling density. \subsection*{An EM Algorithm to Obtain the Mode of $p(\mathbf{h} \,|\, \mathbf{y}, \vect{\beta}, \mathbf{L}, \vect{\mu}, \vect{\phi}, \vect{\sigma}^2)$} We first develop an EM algorithm to find the maximizer of the log marginal density $\log p(\mathbf{h} \,|\, \mathbf{y}, \vect{\beta}, \mathbf{L}, \vect{\mu}, \vect{\phi}, \vect{\sigma}^2)$.
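Before turning to the EM steps, note that the banded precision $\mathbf{H}_{\vect{\phi}}'\mathbf{S}_{\vect{\sigma}^2}^{-1}\mathbf{H}_{\vect{\phi}}$ makes the prior log-density of $\mathbf{h}$ cheap to evaluate. A SciPy sparse-matrix sketch (all variable names are ours; the mean vector is assumed to already carry zeros in the factor block):

```python
import numpy as np
from scipy import sparse

def log_prior_h(h, mu_vec, phi, sigma2):
    """Evaluate log p(h | mu, phi, sigma^2) via the banded precision matrix.

    h is T x m (m = n + r, stacked by time); mu_vec is the length-m vector of
    unconditional means (zeros in the factor block); phi, sigma2 are length m."""
    T, m = h.shape
    # H_phi = I - (lag-one block shift) x diag(phi), as in the display above.
    H_phi = sparse.eye(T * m, format="csc") - sparse.kron(
        sparse.eye(T, k=-1), sparse.diags(phi), format="csc")
    # Innovation variances: stationary variance at t = 1, sigma^2 afterwards.
    s = np.concatenate([sigma2 / (1.0 - phi**2), np.tile(sigma2, T - 1)])
    S_inv = sparse.diags(1.0 / s)
    dev = (h - mu_vec).ravel()  # h - m_mu, stacked by time
    quad = dev @ (H_phi.T @ S_inv @ H_phi @ dev)
    # |H_phi| = 1, so the log-determinant of the covariance is sum(log s).
    return -0.5 * (T * m * np.log(2.0 * np.pi) + np.sum(np.log(s)) + quad)
```

For a single series this reduces to the usual AR(1) likelihood, which provides a quick consistency check.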
To implement the E-step, we compute the following conditional expectation for an arbitrary vector $\breve{\mathbf{h}}\in\mathbb{R}^{T(n+r)}$: \[ \mathcal{Q}(\mathbf{h} \,|\, \breve{\mathbf{h}}) = \mathbb E_{\mathbf{f}|\breve{\mathbf{h}}}\left[ \log p(\mathbf{h}, \mathbf{f} \,|\, \mathbf{y}, \vect{\beta}, \mathbf{L}, \vect{\mu}, \vect{\phi}, \vect{\sigma}^2)\right], \] where the expectation is taken with respect to $p(\mathbf{f} \,|\, \mathbf{y}, \vect{\beta}, \mathbf{L}, \breve{\mathbf{h}}, \vect{\mu}, \vect{\phi}, \vect{\sigma}^2) = p(\mathbf{f} \,|\, \mathbf{y}, \vect{\beta}, \mathbf{L}, \breve{\mathbf{h}})$. As discussed in Section~\ref{s:estimation} of the main text, the latent factors $\mathbf{f}_1,\ldots, \mathbf{f}_T$ are conditionally independent given the data and model parameters. In fact, for $t=1,\ldots, T$, they have the following Gaussian distributions: \[ (\mathbf{f}_t \,|\,\mathbf{y}, \vect{\beta}, \mathbf{L}, \breve{\mathbf{h}}) \sim \distn{N}(\hat{\mathbf{f}}_t,\mathbf{K}_{\mathbf{f}_t}^{-1}), \] where \[ \mathbf{K}_{\mathbf{f}_t} = \breve{\vect{\Omega}}^{-1}_t + \mathbf{L}'\breve{\vect{\Sigma}}^{-1}_t\mathbf{L}, \quad \hat{\mathbf{f}}_t = \mathbf{K}_{\mathbf{f}_t}^{-1}\mathbf{L}'\breve{\vect{\Sigma}}^{-1}_t(\mathbf{y}_t- (\mathbf{I}_n\otimes \mathbf{x}_t')\vect{\beta}). \] Note that here we use $\breve{\mathbf{h}} = (\breve{\mathbf{h}}_{1}^{y'},\breve{\mathbf{h}}_{1}^{f'}, \ldots,\breve{\mathbf{h}}_{T}^{y'},\breve{\mathbf{h}}_{T}^{f'})' = (\breve{h}_{1,1},\ldots,\breve{h}_{n+r,1},\ldots, \breve{h}_{1,T},\ldots,\breve{h}_{n+r,T})'$ to construct $\breve{\vect{\Sigma}}_t = \text{diag}(\text{e}^{\breve{\mathbf{h}}_{t}^{y}}) = \text{diag}(\text{e}^{\breve{h}_{1,t}},\ldots, \text{e}^{\breve{h}_{n,t}})$ and $\breve{\vect{\Omega}}_t = \text{diag}(\text{e}^{\breve{\mathbf{h}}_{t}^{f}}) = \text{diag}(\text{e}^{\breve{h}_{n+1,t}},\ldots, \text{e}^{\breve{h}_{n+r,t}})$ instead of $\mathbf{h}$.
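These conditional moments are straightforward to compute in a few lines; a sketch (the function name is ours, and the regression residuals $\mathbf{y}_t-(\mathbf{I}_n\otimes\mathbf{x}_t')\vect{\beta}$ are assumed precomputed):

```python
import numpy as np

def e_step(y_tilde, L, h_y, h_f):
    """Conditional mean and precision of f_t given data and log-volatilities.

    y_tilde: n-vector of residuals y_t - (I_n kron x_t') beta;
    h_y, h_f: log-volatilities of the idiosyncratic and factor blocks at t."""
    Sigma_inv = np.diag(np.exp(-h_y))       # diag(e^{-h^y_t})
    Omega_inv = np.diag(np.exp(-h_f))       # diag(e^{-h^f_t})
    K_f = Omega_inv + L.T @ Sigma_inv @ L   # precision K_{f_t}
    f_hat = np.linalg.solve(K_f, L.T @ Sigma_inv @ y_tilde)
    return f_hat, K_f
```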
Then, an explicit expression of $\mathcal{Q}(\mathbf{h} \,|\, \breve{\mathbf{h}})$ can be derived as follows: \begin{align*} \mathcal{Q}(\mathbf{h}\,|\, \breve{\mathbf{h}}) = & -\frac{1}{2}(\mathbf{h}-\mathbf{m}_{\vect{\mu}})'\mathbf{H}_{\vect{\phi}}'\mathbf{S}_{\vect{\sigma}^2}^{-1}\mathbf{H}_{\vect{\phi}}(\mathbf{h}-\mathbf{m}_{\vect{\mu}}) -\frac{1}{2}\mathbf{1}_{T(n+r)}'\mathbf{h} \\ & -\frac{1}{2}\sum_{t=1}^T \mathbb E_{\mathbf{f}|\breve{\mathbf{h}}} \left[\mathbf{f}_t'\vect{\Omega}^{-1}_t \mathbf{f}_t + (\vect{\epsilon}_t-\mathbf{L}\mathbf{f}_t)'\vect{\Sigma}^{-1}_t (\vect{\epsilon}_t-\mathbf{L}\mathbf{f}_t)\right] + c_1 \\ = & -\frac{1}{2}(\mathbf{h}-\mathbf{m}_{\vect{\mu}})'\mathbf{H}_{\vect{\phi}}'\mathbf{S}_{\vect{\sigma}^2}^{-1}\mathbf{H}_{\vect{\phi}}(\mathbf{h}-\mathbf{m}_{\vect{\mu}}) -\frac{1}{2} \mathbf{1}_{T(n+r)}'\mathbf{h} -\frac{1}{2}\sum_{t=1}^T\text{tr}\left(\diag(\text{e}^{-\mathbf{h}_t^f})(\hat{\mathbf{f}}_t\hat{\mathbf{f}}_t'+\mathbf{K}_{\mathbf{f}_t}^{-1})\right) \\ & -\frac{1}{2}\sum_{t=1}^T\text{tr}\left(\diag(\text{e}^{-\mathbf{h}_t^y})\left((\vect{\epsilon}_t-\mathbf{L}\hat{\mathbf{f}}_t)(\vect{\epsilon}_t-\mathbf{L}\hat{\mathbf{f}}_t)' + \mathbf{L}\mathbf{K}_{\mathbf{f}_t}^{-1}\mathbf{L}'\right)\right) + c_1, \end{align*} where $\vect{\epsilon}_t = \mathbf{y}_t- (\mathbf{I}_n\otimes \mathbf{x}_t')\vect{\beta}$ and $c_1$ is a constant not dependent on $\mathbf{h}$. In the M-step, we maximize the function $\mathcal{Q}(\mathbf{h} \,|\, \breve{\mathbf{h}}) $ with respect to $\mathbf{h}$. This can be done using the Newton-Raphson method \citep[see, e.g.,][]{handbook11}. To compute the gradient and Hessian of $\mathcal{Q}(\mathbf{h} \,|\, \breve{\mathbf{h}})$, let $\hat{z}_{i,t}^y$ denote the $i$-th diagonal element of $(\vect{\epsilon}_t-\mathbf{L}\hat{\mathbf{f}}_t)(\vect{\epsilon}_t-\mathbf{L}\hat{\mathbf{f}}_t)' + \mathbf{L}\mathbf{K}_{\mathbf{f}_t}^{-1}\mathbf{L}'$, $i=1,\ldots,n$. 
Similarly, let $\hat{z}_{j,t}^f$ denote the $j$-th diagonal element of $(\hat{\mathbf{f}}_t\hat{\mathbf{f}}_t'+\mathbf{K}_{\mathbf{f}_t}^{-1})$, $j=1,\ldots, r$. Finally, define $\hat{\mathbf{z}} = (\hat{\mathbf{z}}_1',\ldots, \hat{\mathbf{z}}_T')'$, where $\hat{\mathbf{z}}_t = (\hat{z}_{1,t}^y,\ldots, \hat{z}_{n,t}^y, \hat{z}_{1,t}^f, \ldots, \hat{z}_{r,t}^f)'$. Then, we can rewrite $\mathcal{Q}(\mathbf{h} \,|\, \breve{\mathbf{h}}) $ more compactly as \[ \mathcal{Q}(\mathbf{h}\,|\, \breve{\mathbf{h}}) = -\frac{1}{2}(\mathbf{h}-\mathbf{m}_{\vect{\mu}})'\mathbf{H}_{\vect{\phi}}'\mathbf{S}_{\vect{\sigma}^2}^{-1}\mathbf{H}_{\vect{\phi}}(\mathbf{h}-\mathbf{m}_{\vect{\mu}}) -\frac{1}{2} \mathbf{1}_{T(n+r)}'\mathbf{h} -\frac{1}{2}\hat{\mathbf{z}}' \text{e}^{-\mathbf{h}}. \] Hence, the gradient is given by \[ \mathbf{g}_\mathcal{Q} = -\mathbf{H}_{\vect{\phi}}'\mathbf{S}_{\vect{\sigma}^2}^{-1}\mathbf{H}_{\vect{\phi}}(\mathbf{h}-\mathbf{m}_{\vect{\mu}}) -\frac{1}{2}\left(\mathbf{1}_{T(n+r)} -\text{e}^{-\mathbf{h}}\odot \hat{\mathbf{z}}\right), \] and the Hessian is \begin{equation}\label{eq:Hess_Q} \mathbf{H}_\mathcal{Q} = - \mathbf{H}_{\vect{\phi}}'\mathbf{S}_{\vect{\sigma}^2}^{-1}\mathbf{H}_{\vect{\phi}} - \frac{1}{2}\diag\left(\text{e}^{-\mathbf{h}}\odot \hat{\mathbf{z}}\right), \end{equation} where $\odot$ denotes the entry-wise product. Since $\mathbf{H}_{\vect{\phi}}$ is invertible and $\mathbf{S}_{\vect{\sigma}^2}$ is positive definite, the matrix $\mathbf{H}_{\vect{\phi}}'\mathbf{S}_{\vect{\sigma}^2}^{-1}\mathbf{H}_{\vect{\phi}}$ is positive definite; together with the fact that the diagonal elements of $\diag\left(\text{e}^{-\mathbf{h}}\odot \hat{\mathbf{z}}\right)$ are nonnegative, this implies that the Hessian $\mathbf{H}_\mathcal{Q}$ is negative definite for all $\mathbf{h}\in\mathbb{R}^{T(n+r)}$. This guarantees fast convergence of the Newton-Raphson method. In addition, the Hessian is a band matrix. This property can be used to further speed up computations with sparse and band matrix routines.
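A single Newton-Raphson update exploiting the sparse prior precision can be sketched as follows (the helper name and argument layout are illustrative, not from any package):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def newton_step(h, m_mu, prior_prec, z_hat):
    """One Newton-Raphson update for maximizing Q(h | h_breve).

    prior_prec is the sparse matrix H_phi' S^{-1} H_phi; z_hat stacks the
    conditional second moments computed in the E-step."""
    w = np.exp(-h) * z_hat                        # e^{-h} (x) z_hat
    grad = -prior_prec @ (h - m_mu) - 0.5 * (1.0 - w)
    hess = -(prior_prec + 0.5 * sparse.diags(w))  # banded, negative definite
    return h - spsolve(sparse.csc_matrix(hess), grad)
```

Iterating the update on a scalar toy problem drives the gradient to zero, matching the global concavity argument above.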
Given the E- and M-steps above, the EM algorithm can be implemented as follows. We initialize the algorithm with $\mathbf{h} = \mathbf{h}^{(0)}$ for some constant vector $\mathbf{h}^{(0)}$. At the $j$-th iteration, we obtain $\mathbf{g}_\mathcal{Q}$ and $\mathbf{H}_\mathcal{Q}$, where $\hat{\mathbf{f}}_t$ and $\mathbf{K}_{\mathbf{f}_t}, t=1,\ldots, T,$ are evaluated using $ \mathbf{h}^{(j-1)}$. Then, we compute \[ \mathbf{h}^{(j)} = \mathop{\rm argmax}_{\mathbf{h}} \mathcal{Q}(\mathbf{h}\,|\, \mathbf{h}^{(j-1)}) \] using the Newton-Raphson method. We repeat the E- and M-steps until some convergence criterion is met, e.g., the norm between consecutive $\mathbf{h}^{(j)}$ is less than a pre-determined tolerance value. At the end of the EM algorithm, we obtain the mode of the density $p(\mathbf{h} \,|\, \mathbf{y}, \vect{\beta}, \mathbf{L}, \vect{\mu}, \vect{\phi}, \vect{\sigma}^2)$, which is denoted by $\hat{\mathbf{h}}$. We summarize the EM algorithm in Algorithm~\ref{alg:EM}. \begin{algorithm}[H] \caption{EM algorithm to obtain the mode of $p(\mathbf{h} \,|\, \mathbf{y}, \vect{\beta}, \mathbf{L}, \vect{\mu}, \vect{\phi}, \vect{\sigma}^2)$.} \label{alg:EM} Suppose we have an initial guess $\mathbf{h}^{(0)}$ and error tolerance levels $\epsilon_1$ and $\epsilon_2$, say, $\epsilon_1 = \epsilon_2 = 10^{-4}$. The EM algorithm consists of iterating the following steps for $j=1,2,\ldots$: \begin{enumerate} \item E-Step: Given the current value $\mathbf{h}^{(j-1)}$, compute $\mathbf{K}_{\mathbf{f}_t}$, $\hat{\mathbf{f}_t}, t=1,\ldots, T,$ and $\hat{\mathbf{z}}$ \item M-Step: Maximize $\mathcal{Q}(\mathbf{h} \,|\, \mathbf{h}^{(j-1)})$ with respect to $\mathbf{h}$ by the Newton-Raphson method. 
That is, set $\mathbf{h}^{(0,j-1)} = \mathbf{h}^{(j-1)}$ and iterate the following steps for $k=1,2,\ldots$: \begin{enumerate} \item Compute $\mathbf{g}_\mathcal{Q}$ and $\mathbf{H}_\mathcal{Q}$ using $\mathbf{K}_{\mathbf{f}_t}$, $\hat{\mathbf{f}_t}, t=1,\ldots, T,$ and $\hat{\mathbf{z}}$ obtained in the E-step, and set $\mathbf{h} = \mathbf{h}^{(k-1,j-1)}$ \item Update $\mathbf{h}^{(k,j-1)} = \mathbf{h}^{(k-1,j-1)} - \mathbf{H}_\mathcal{Q}^{-1}\mathbf{g}_\mathcal{Q}$ \item If, for example, $ \|\mathbf{h}^{(k,j-1)}-\mathbf{h}^{(k-1,j-1)}\| < \epsilon_1$, terminate the iteration and set $\mathbf{h}^{(j)} = \mathbf{h}^{(k,j-1)}$. \end{enumerate} \item Stopping condition: if, for example, $\|\mathbf{h}^{(j)}-\mathbf{h}^{(j-1)}\| < \epsilon_2$, terminate the algorithm. \end{enumerate} \end{algorithm} \subsection*{Computing the Hessian of $\log p(\mathbf{h} \,|\, \mathbf{y}, \vect{\beta}, \mathbf{L}, \vect{\mu}, \vect{\phi}, \vect{\sigma}^2)$} After obtaining the mode $\hat{\mathbf{h}}$ of the log density $\log p(\mathbf{h} \,|\, \mathbf{y}, \vect{\beta}, \mathbf{L}, \vect{\mu}, \vect{\phi}, \vect{\sigma}^2)$, next we compute the Hessian evaluated at $\hat{\mathbf{h}}$. Here we describe two approaches to do so. In the first approach, we provide an approximation of the Hessian using the EM algorithm. The resulting matrix is banded and is guaranteed to be negative definite. In the second approach, we directly compute the Hessian of $\log p(\mathbf{h} \,|\, \mathbf{y}, \vect{\beta}, \mathbf{L}, \vect{\mu}, \vect{\phi}, \vect{\sigma}^2)$. In our experience the two approaches give very similar results, but the first approach is more numerically stable. In what follows, we start with the first approach. 
Note that by Bayes' theorem, we have \[ p(\mathbf{h} \,|\, \mathbf{y}, \vect{\beta}, \mathbf{L}, \vect{\mu}, \vect{\phi}, \vect{\sigma}^2) = \frac{p(\mathbf{h},\mathbf{f} \,|\, \mathbf{y}, \vect{\beta}, \mathbf{L}, \vect{\mu}, \vect{\phi}, \vect{\sigma}^2)} {p(\mathbf{f} \,|\, \mathbf{h}, \mathbf{y}, \vect{\beta}, \mathbf{L}, \vect{\mu}, \vect{\phi}, \vect{\sigma}^2)}. \] If we take the log of both sides and then take expectation with respect to $p(\mathbf{f} \,|\, \mathbf{h}, \mathbf{y}, \vect{\beta}, \mathbf{L})$, we obtain the identity \begin{equation}\label{eq:h_QH} \log p(\mathbf{h} \,|\, \mathbf{y}, \vect{\beta}, \mathbf{L}, \vect{\mu}, \vect{\phi}, \vect{\sigma}^2) = \mathcal{Q}(\mathbf{h} \,|\, \mathbf{h}) + \mathcal{H}(\mathbf{h} \,|\, \mathbf{h}), \end{equation} where $\mathcal{H}(\mathbf{h} \,|\, \mathbf{h}) = -\mathbb E_{\mathbf{f}| \mathbf{h}}\left[ \log p(\mathbf{f} \,|\, \mathbf{h}, \mathbf{y}, \vect{\beta}, \mathbf{L}, \vect{\mu}, \vect{\phi}, \vect{\sigma}^2)\right] = -\mathbb E_{\mathbf{f}| \mathbf{h}}\left[ \log p(\mathbf{f} \,|\, \mathbf{h}, \mathbf{y}, \vect{\beta}, \mathbf{L}, \vect{\mu})\right]$. It follows that the Hessian of $\log p(\mathbf{h} \,|\, \mathbf{y}, \vect{\beta}, \mathbf{L}, \vect{\mu}, \vect{\phi}, \vect{\sigma}^2)$ evaluated at $\hat{\mathbf{h}}$ is simply the sum of the Hessians of $\mathcal{Q}$ and $\mathcal{H}$ with $\mathbf{h} = \hat{\mathbf{h}}$. Note that the Hessian of $\mathcal{Q}(\mathbf{h} \,|\, \hat{\mathbf{h}})$ comes out as a by-product of the EM algorithm; an analytical expression is given in \eqref{eq:Hess_Q}. We use it as an approximation of the Hessian of $\mathcal{Q}(\mathbf{h} \,|\, \mathbf{h})$ evaluated at $\mathbf{h} = \hat{\mathbf{h}}$. 
Next, we derive an analytical expression for $\mathcal{H}(\mathbf{h} \,|\, \mathbf{h})$: \begin{align*} \mathcal{H}(\mathbf{h} \,|\, \mathbf{h}) & = -\mathbb E_{\mathbf{f}| \mathbf{h}}\left[ \log p(\mathbf{f} \,|\, \mathbf{h}, \mathbf{y}, \vect{\beta}, \mathbf{L}, \vect{\mu}) \right] \\ & = \frac{Tr}{2}\log(2\pi) -\frac{1}{2}\sum_{t=1}^T \log|\mathbf{K}_{\mathbf{f}_t}| + \frac{1}{2} \sum_{t=1}^T \mathbb E_{\mathbf{f}_t|\mathbf{h}}\left[(\mathbf{f}_t-\hat{\mathbf{f}}_t)'\mathbf{K}_{\mathbf{f}_t} (\mathbf{f}_t-\hat{\mathbf{f}}_t)\right] \\ & = -\frac{1}{2}\sum_{t=1}^T\log|\mathbf{L}'\text{diag}(\text{e}^{-\mathbf{h}_t^y})\mathbf{L} + \text{diag}(\text{e}^{-\mathbf{h}_t^f})|+ c_2 \\ & = -\frac{1}{2}\sum_{t=1}^T\log|\mathbf{W}'\text{diag}(\text{e}^{-\mathbf{h}_t})\mathbf{W}| + c_2, \end{align*} where $c_2$ is a constant not dependent on $\mathbf{h}$ and $\mathbf{W} = \begin{pmatrix} \mathbf{L} \\ \mathbf{I}_r \end{pmatrix}$. In the above derivation we have used the fact that under $p(\mathbf{f}_t \,|\, \mathbf{h}, \mathbf{y}, \vect{\beta}, \mathbf{L}, \vect{\mu})$, the quadratic form $(\mathbf{f}_t-\hat{\mathbf{f}}_t)'\mathbf{K}_{\mathbf{f}_t}(\mathbf{f}_t-\hat{\mathbf{f}}_t)$ is a chi-squared random variable with $r$ degrees of freedom, so its expectation does not depend on $\mathbf{h}$ (and is thus absorbed into the constant $c_2$). To compute the Hessian of $\mathcal{H}$, we first note that \begin{align*} \frac{\partial}{\partial h_{i,t}}\mathbf{K}_{\mathbf{f}_t} & = \frac{\partial}{\partial h_{i,t}} \mathbf{W}'\text{diag}(\text{e}^{-\mathbf{h}_t})\mathbf{W} = \frac{\partial}{\partial h_{i,t}} \sum_{j=1}^{n+r} \text{e}^{-h_{j,t}}\mathbf{w}_j\mathbf{w}_j' = -\text{e}^{-h_{i,t}}\mathbf{w}_i\mathbf{w}_i', \\ \frac{\partial}{\partial h_{i,s}}\mathbf{K}_{\mathbf{f}_t} & = 0, \quad s\neq t, \end{align*} where $\mathbf{w}_j'$ is the $j$-th row of $\mathbf{W}$.
Next, using standard results of matrix differentiation, we obtain \begin{align*} \frac{\partial}{\partial h_{i,t}} \mathcal{H}(\mathbf{h} \,|\, \mathbf{h}) & = -\frac{1}{2}\text{tr}\left(\mathbf{K}_{\mathbf{f}_t}^{-1} \frac{\partial\mathbf{K}_{\mathbf{f}_t}}{\partial h_{i,t}}\right) = \frac{1}{2}\text{e}^{-h_{i,t}}\mathbf{w}_i'\mathbf{K}_{\mathbf{f}_t}^{-1}\mathbf{w}_i, \\ \frac{\partial^2}{\partial h_{i,t}^2} \mathcal{H}(\mathbf{h} \,|\, \mathbf{h}) & = -\frac{1}{2}\left(\text{e}^{-h_{i,t}}\mathbf{w}_i'\mathbf{K}_{\mathbf{f}_t}^{-1}\mathbf{w}_i + \text{e}^{-h_{i,t}}\mathbf{w}_i' \mathbf{K}_{\mathbf{f}_t}^{-1}\frac{\partial \mathbf{K}_{\mathbf{f}_t}}{\partial h_{i,t}} \mathbf{K}_{\mathbf{f}_t}^{-1}\mathbf{w}_i\right) \\ & = -\frac{1}{2}\text{e}^{-h_{i,t}}\mathbf{w}_i' \mathbf{K}_{\mathbf{f}_t}^{-1}\mathbf{w}_i( 1 - \text{e}^{-h_{i,t}}\mathbf{w}_i'\mathbf{K}_{\mathbf{f}_t}^{-1}\mathbf{w}_i), \\ \frac{\partial^2}{\partial h_{i,t}\partial h_{j,t}} \mathcal{H}(\mathbf{h} \,|\, \mathbf{h}) & = \frac{1}{2} \text{e}^{-(h_{i,t}+h_{j,t})}\mathbf{w}_i' \mathbf{K}_{\mathbf{f}_t}^{-1}\mathbf{w}_j\mathbf{w}_j' \mathbf{K}_{\mathbf{f}_t}^{-1}\mathbf{w}_i, \quad i\neq j, \\ \frac{\partial^2}{\partial h_{i,t}\partial h_{j,s}} \mathcal{H}(\mathbf{h} \,|\, \mathbf{h}) & = 0, \quad s\neq t. \end{align*} Hence, the Hessian is block diagonal (and hence banded). More specifically, the Hessian of $\mathcal{H}(\mathbf{h} \,|\, \mathbf{h})$ can be written in the following matrix form \[ \mathbf{H}_\mathcal{H} = -\frac{1}{2}\mathbf{Z}' \odot (\mathbf{I}_{T(n+r)} - \mathbf{Z}), \] where $\mathbf{Z} = \text{diag}(\mathbf{Z}_1,\ldots, \mathbf{Z}_T)$ with $\mathbf{Z}_t = \text{diag}(\text{e}^{-\mathbf{h}_t})\mathbf{W}\mathbf{K}_{\mathbf{f}_t}^{-1} \mathbf{W}'.$ Finally, let $\mathbf{H}_\mathcal{Q}$ denote the Hessian of $\mathcal{Q}(\mathbf{h}\,|\,\mathbf{h})$ evaluated at $\mathbf{h} = \hat{\mathbf{h}}$. 
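The closed-form block Hessian $\mathbf{H}_\mathcal{H} = -\tfrac{1}{2}\mathbf{Z}' \odot (\mathbf{I} - \mathbf{Z})$ can be cross-checked, one time block at a time, against central finite differences of $-\tfrac{1}{2}\log|\mathbf{W}'\text{diag}(\text{e}^{-\mathbf{h}_t})\mathbf{W}|$. A minimal sketch (our own; small arbitrary dimensions and values):

```python
import numpy as np

# Sanity check of the per-block Hessian of H(h|h) against finite differences.
# Dimensions, loadings, and log-volatilities are arbitrary stand-ins.
rng = np.random.default_rng(1)
n, r = 4, 2
L = rng.standard_normal((n, r))
W = np.vstack([L, np.eye(r)])           # W = (L', I_r)'
h = 0.3 * rng.standard_normal(n + r)    # one time block h_t

def g(hv):
    # one summand of H(h|h): -0.5 * log|W' diag(e^{-h_t}) W|
    K = W.T @ np.diag(np.exp(-hv)) @ W
    return -0.5 * np.linalg.slogdet(K)[1]

# analytic form: -0.5 * Z' .* (I - Z) with Z = diag(e^{-h_t}) W K^{-1} W'
K = W.T @ np.diag(np.exp(-h)) @ W
Z = np.diag(np.exp(-h)) @ W @ np.linalg.inv(K) @ W.T
H_analytic = -0.5 * Z.T * (np.eye(n + r) - Z)

# central finite differences of g
m, step = n + r, 1e-4
I = np.eye(m)
H_fd = np.empty((m, m))
for i in range(m):
    for j in range(m):
        ei, ej = step * I[i], step * I[j]
        H_fd[i, j] = (g(h + ei + ej) - g(h + ei - ej)
                      - g(h - ei + ej) + g(h - ei - ej)) / (4 * step**2)

max_err = np.max(np.abs(H_analytic - H_fd))
```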
Then, the negative Hessian of the log marginal density of $\mathbf{h}$ evaluated at $\mathbf{h} = \hat{\mathbf{h}}$ is simply $\mathbf{K}_{\mathbf{h}} = - (\mathbf{H}_\mathcal{Q} + \mathbf{H}_\mathcal{H})$, which is used as the precision matrix of the Gaussian approximation. Note that since both $\mathbf{H}_\mathcal{Q}$ and $\mathbf{H}_\mathcal{H}$ are band matrices, so is $\mathbf{K}_{\mathbf{h}}$. The second approach directly computes the Hessian of the log marginal density: \begin{align*} \log p(\mathbf{h} \,|\, \mathbf{y}, \vect{\beta}, \mathbf{L}, \vect{\mu}, \vect{\phi},\vect{\sigma}^2) & = c_3 + \log p(\mathbf{y} \,|\, \vect{\beta}, \mathbf{h}, \mathbf{L}) + \log p(\mathbf{h} \,|\, \vect{\mu}, \vect{\phi},\vect{\sigma}^2) \\ & = c_4 \underbrace{-\frac{1}{2}\sum_{t=1}^T \log |\mathbf{L}\vect{\Omega}_t\mathbf{L}'+\vect{\Sigma}_t|}_{T_1(\mathbf{h})} \underbrace{-\frac{1}{2}\sum_{t=1}^T \vect{\epsilon}_t'(\mathbf{L}\vect{\Omega}_t\mathbf{L}'+\vect{\Sigma}_t)^{-1}\vect{\epsilon}_t}_{T_2(\mathbf{h})} \\ & \qquad \underbrace{-\frac{1}{2}(\mathbf{h}-\mathbf{m}_{\vect{\mu}})'\mathbf{H}_{\vect{\phi}}'\mathbf{S}_{\vect{\sigma}^2}^{-1}\mathbf{H}_{\vect{\phi}}(\mathbf{h}-\mathbf{m}_{\vect{\mu}})}_{T_3(\mathbf{h})}, \end{align*} where $\vect{\epsilon}_t=\mathbf{y}_t - (\mathbf{I}_n\otimes\mathbf{x}_t')\vect{\beta}$, and $c_3$ and $c_4$ are constants not dependent on $\mathbf{h}$. Next, we derive the Hessians of the functions $T_1, T_2$ and $T_3$. Let $\tilde{\mathbf{W}} = \begin{pmatrix} \mathbf{L}' \\ \mathbf{I}_n \end{pmatrix}$. Then, $\mathbf{G}_t \equiv \mathbf{L}\vect{\Omega}_t\mathbf{L}'+\vect{\Sigma}_t = \tilde{\mathbf{W}}'\text{diag}(\text{e}^{\mathbf{h}_t})\tilde{\mathbf{W}}$. 
Using a derivation similar to that of $\mathbf{H}_\mathcal{H}$ in the EM algorithm, it is easy to see that the Hessian of $T_1(\mathbf{h})$ is given by: \[ \mathbf{H}_{T_1(\mathbf{h})} = -\frac{1}{2}\tilde{\mathbf{Z}}' \odot (\mathbf{I}_{T(n+r)} - \tilde{\mathbf{Z}}), \] where $\tilde{\mathbf{Z}} = \text{diag}(\tilde{\mathbf{Z}}_1,\ldots, \tilde{\mathbf{Z}}_T)$ with $\tilde{\mathbf{Z}}_t = \text{diag}(\text{e}^{\mathbf{h}_t})\tilde{\mathbf{W}}\mathbf{G}_{t}^{-1} \tilde{\mathbf{W}}'$. It is also clear that the Hessian of $T_3(\mathbf{h})$ is simply \[ \mathbf{H}_{T_3(\mathbf{h})} =-\mathbf{H}_{\vect{\phi}}'\mathbf{S}_{\vect{\sigma}^2}^{-1}\mathbf{H}_{\vect{\phi}}. \] Next, we derive the Hessian of $T_2(\mathbf{h})$. First note that \begin{align*} \frac{\partial}{\partial h_{i,t}}\mathbf{G}_t & = \frac{\partial}{\partial h_{i,t}} \tilde{\mathbf{W}}'\text{diag}(\text{e}^{\mathbf{h}_t})\tilde{\mathbf{W}} = \frac{\partial}{\partial h_{i,t}} \sum_{j=1}^{n+r} \text{e}^{h_{j,t}}\tilde{\mathbf{w}}_j\tilde{\mathbf{w}}_j' = \text{e}^{h_{i,t}}\tilde{\mathbf{w}}_i\tilde{\mathbf{w}}_i', \\ \frac{\partial}{\partial h_{i,s}}\mathbf{G}_t & = 0, \quad s\neq t, \end{align*} where $\tilde{\mathbf{w}}_j'$ is the $j$-th row of $\tilde{\mathbf{W}}$. 
Next, using standard results of matrix differentiation, we obtain \begin{align*} \frac{\partial}{\partial h_{i,t}} T_2(\mathbf{h}) & = \frac{1}{2} \vect{\epsilon}_t' \mathbf{G}_t^{-1}\frac{\partial \mathbf{G}_t}{\partial h_{i,t}} \mathbf{G}_t^{-1}\vect{\epsilon}_t=\frac{1}{2} \text{e}^{h_{i,t}}\vect{\epsilon}_t' \mathbf{G}_t^{-1}\tilde{\mathbf{w}}_i\tilde{\mathbf{w}}_i' \mathbf{G}_t^{-1}\vect{\epsilon}_t=\frac{1}{2} \text{e}^{h_{i,t}}(\vect{\epsilon}_t' \mathbf{G}_t^{-1}\tilde{\mathbf{w}}_i)^2, \\ \frac{\partial^2}{\partial h_{i,t}^2} T_2(\mathbf{h}) & = \frac{1}{2}\left(\text{e}^{h_{i,t}}(\vect{\epsilon}_t' \mathbf{G}_t^{-1}\tilde{\mathbf{w}}_i)^2 - 2\text{e}^{h_{i,t}}(\vect{\epsilon}_t' \mathbf{G}_t^{-1}\tilde{\mathbf{w}}_i)\vect{\epsilon}_t' \mathbf{G}_t^{-1}\frac{\partial \mathbf{G}_t}{\partial h_{i,t}} \mathbf{G}_t^{-1}\tilde{\mathbf{w}}_i \right) \\ & = \frac{1}{2}\text{e}^{h_{i,t}}(\vect{\epsilon}_t' \mathbf{G}_t^{-1}\tilde{\mathbf{w}}_i)^2\left( 1 - 2\text{e}^{h_{i,t}} \tilde{\mathbf{w}}_i' \mathbf{G}_t^{-1}\tilde{\mathbf{w}}_i\right), \\ \frac{\partial^2}{\partial h_{i,t}\partial h_{j,t}} T_2(\mathbf{h}) & = -\text{e}^{(h_{i,t}+h_{j,t})}(\vect{\epsilon}_t' \mathbf{G}_t^{-1}\tilde{\mathbf{w}}_i)(\vect{\epsilon}_t' \mathbf{G}_t^{-1}\tilde{\mathbf{w}}_j)(\tilde{\mathbf{w}}_j' \mathbf{G}_t^{-1}\tilde{\mathbf{w}}_i), \quad i\neq j, \\ \frac{\partial^2}{\partial h_{i,t}\partial h_{j,s}} T_2(\mathbf{h}) & = 0, \quad s\neq t. \end{align*} More specifically, the Hessian of $T_2(\mathbf{h})$ can be written in the following matrix form \[ \mathbf{H}_{T_2(\mathbf{h})} = \frac{1}{2}\breve{\mathbf{Z}}' \odot (\mathbf{I}_{T(n+r)} - 2\tilde{\mathbf{Z}}), \] where $\breve{\mathbf{Z}} = \text{diag}(\breve{\mathbf{Z}}_1,\ldots, \breve{\mathbf{Z}}_T)$ with $\breve{\mathbf{Z}}_t = \text{diag}(\text{e}^{\mathbf{h}_t})\tilde{\mathbf{W}}\mathbf{G}_{t}^{-1}\vect{\epsilon}_t\vect{\epsilon}_t'\mathbf{G}_{t}^{-1} \tilde{\mathbf{W}}'$. 
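The matrix form $\mathbf{H}_{T_2(\mathbf{h})} = \tfrac{1}{2}\breve{\mathbf{Z}}' \odot (\mathbf{I} - 2\tilde{\mathbf{Z}})$ can likewise be verified per time block by finite differences. A minimal sketch (our own; all quantities are arbitrary small-dimensional stand-ins):

```python
import numpy as np

# Finite-difference check of the Hessian of one summand of T_2.
rng = np.random.default_rng(2)
n, r = 4, 2
L = rng.standard_normal((n, r))
Wt = np.vstack([L.T, np.eye(n)])        # W-tilde = (L, I_n)' as defined above
h = 0.3 * rng.standard_normal(n + r)    # one time block h_t
eps_t = rng.standard_normal(n)          # innovation epsilon_t

def t2(hv):
    # -0.5 * eps_t' G_t^{-1} eps_t with G_t = W-tilde' diag(e^{h_t}) W-tilde
    G = Wt.T @ np.diag(np.exp(hv)) @ Wt
    return -0.5 * eps_t @ np.linalg.solve(G, eps_t)

G = Wt.T @ np.diag(np.exp(h)) @ Wt
Gi = np.linalg.inv(G)
Zt = np.diag(np.exp(h)) @ Wt @ Gi @ Wt.T                                  # Z-tilde_t
Zb = np.diag(np.exp(h)) @ Wt @ Gi @ np.outer(eps_t, eps_t) @ Gi @ Wt.T    # Z-breve_t
H_analytic = 0.5 * Zb.T * (np.eye(n + r) - 2 * Zt)

m, step = n + r, 1e-4
I = np.eye(m)
H_fd = np.empty((m, m))
for i in range(m):
    for j in range(m):
        ei, ej = step * I[i], step * I[j]
        H_fd[i, j] = (t2(h + ei + ej) - t2(h + ei - ej)
                      - t2(h - ei + ej) + t2(h - ei - ej)) / (4 * step**2)

max_err = np.max(np.abs(H_analytic - H_fd))
```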
Finally, the Hessian from direct computation is simply $\mathbf{H}_{\rm Direct}= \mathbf{H}_{T_1(\mathbf{h})}+ \mathbf{H}_{T_2(\mathbf{h})} + \mathbf{H}_{T_3(\mathbf{h})}$. Since $\mathbf{H}_{T_1(\mathbf{h})}, \mathbf{H}_{T_2(\mathbf{h})}$ and $\mathbf{H}_{T_3(\mathbf{h})}$ are all band matrices, so is $\mathbf{H}_{\rm Direct}$. \newpage \section*{Appendix D: Additional Monte Carlo Results} \label{app:D} In this appendix we present results on two artificial data experiments to illustrate the estimation accuracy of the VAR-FSV model under DGPs with and without stochastic volatility. In the first experiment, we generate a dataset from the VAR-FSV in \eqref{eq:yt}--\eqref{eq:ht2} with $n=10$, $T=500$, $r=3$ factors and $p=4$ lags. We then estimate the model using the posterior sampler outlined in Section~\ref{s:estimation}. The first dataset is generated as follows. First, the intercepts are drawn independently from the uniform distribution on the interval $(-10,10)$, i.e., $\distn{U}(-10, 10)$. For the VAR coefficients, the diagonal elements of the first VAR coefficient matrix are iid $\distn{U}(0,0.5)$ and the off-diagonal elements are from $\distn{U}(-0.2,0.2)$; all other elements of the $j$-th ($j > 1$) VAR coefficient matrices are iid $\distn{N}(0,0.1^2/j^2).$ All elements of the factor loadings matrix are iid standard normal: $\mathbf{L}_{ij} \sim \mathcal{N}(0, 1)$ for $i=1,\ldots,n$ and $j=1,\ldots,r$. Finally, for the log-volatility processes, we set $\mu_i=-1, \phi_i=0.98$ and $\sigma_i=0.1$ for $i=1,\ldots,n$, and $\phi_{n+j} = 0.98$, $\sigma_{n+j}=0.1$ for $j=1,\ldots,r$. The results of the artificial data experiments are reported in Figures~\ref{fig:sim1_coef}-\ref{fig:sim1_h}. It is evident from the figures that the posterior sampler works well and the posterior means track the true values closely. 
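A sketch of how the first DGP could be simulated is given below; the random seed and array layout are our own choices, and only the distributions follow the description above:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, p = 10, 3, 4

a0 = rng.uniform(-10, 10, size=n)                      # intercepts ~ U(-10, 10)

A = [np.empty((n, n)) for _ in range(p)]
A[0][:] = rng.uniform(-0.2, 0.2, size=(n, n))          # off-diagonals of A_1
np.fill_diagonal(A[0], rng.uniform(0, 0.5, size=n))    # diagonals of A_1 ~ U(0, 0.5)
for j in range(2, p + 1):                              # A_j, j > 1: N(0, 0.1^2/j^2)
    A[j - 1][:] = rng.normal(0, 0.1 / j, size=(n, n))

Lmat = rng.standard_normal((n, r))                     # loadings iid N(0, 1)

# log-volatility AR(1) parameters as described above
mu = np.full(n, -1.0)
phi = np.full(n + r, 0.98)
sigma = np.full(n + r, 0.1)
```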
\begin{figure}[H] \begin{center} \includegraphics[width=.95\textwidth]{Fig-coeff-hetero.eps} \end{center} \caption{Scatter plots of the posterior means of the factor loadings (left panel), intercepts (middle panel) and VAR coefficients (right panel) against the true values from a DGP with stochastic volatility.} \label{fig:sim1_coef} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=.6\textwidth]{Fig-f-hetero.eps} \end{center} \caption{Time series plots of the posterior means of the factors $\mathbf{f}_{i,\cdot}$, $i=1,2,3,$ from a DGP with stochastic volatility.} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=.6\textwidth]{Fig-herr-hetero.eps} \end{center} \caption{Time series plots of the posterior means of the stochastic volatilities $\mathbf{h}_{i,\cdot}$, $i=1,5,10,$ from a DGP with stochastic volatility.} \label{fig:sim1_h} \end{figure} In the second experiment, we generate data from the same VAR-FSV but with several stochastic volatility components turned off. In particular, we set $h_{i,t} = 0$ for $i=1,2,3,4,5$, and $h_{n+j,t} = 0$ for $j=2,3$. That is, the idiosyncratic errors of the first five variables, as well as the last two factors, are homoscedastic. We then fit the data using the (mis-specified) fully heteroscedastic model. The results are reported in Figures~\ref{fig:sim2_coef}-\ref{fig:sim2_h}. When the DGP does not have full stochastic volatility, some elements of the factor loading matrix $\mathbf{L}$ are harder to pin down, since they are not point-identified. But it is interesting to note that the estimates of the stochastic volatility are still able to track the true values fairly closely. The estimates of the VAR coefficients are also close to the true values. 
\begin{figure}[H] \begin{center} \includegraphics[width=.95\textwidth]{Fig-coeff-homo.eps} \end{center} \caption{Scatter plots of the posterior means of the factor loadings (left panel), intercepts (middle panel) and VAR coefficients (right panel) against the true values from a DGP with partial stochastic volatility.} \label{fig:sim2_coef} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=.6\textwidth]{Fig-f-homo.eps} \end{center} \caption{Time series plots of the posterior means of the factors $\mathbf{f}_{i,\cdot}$, $i=1,2,3,$ from a DGP with partial stochastic volatility.} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=.6\textwidth]{Fig-herr-homo.eps} \end{center} \caption{Time series plots of the posterior means of the stochastic volatilities $\mathbf{h}_{i,\cdot}$, $i=1,5,10,$ from a DGP with partial stochastic volatility.} \label{fig:sim2_h} \end{figure} \newpage \section*{Appendix E: Data} This appendix provides the details of the raw data used to construct the variables in the empirical application. In particular, Table~\ref{tab:var} lists the variables and their sources. The sample period is from 1985:Q1 to 2013:Q2. \begin{table}[H] \centering \caption{Description of variables used in the empirical application.} \label{tab:var} \resizebox{\textwidth}{!}{\begin{tabular}{lll} \hline\hline Variable & Description & Source \\ \hline GDP & Log of real GNP/GDP & Federal Reserve Bank of Philadelphia \\ \rowcolor{lightgray} GDP Deflator & Log of price index of GNP/GDP & Federal Reserve Bank of Philadelphia \\ 3-month treasury bill & 3-month treasury bill rate & Federal Reserve Bank of St. Louis \\ \rowcolor{lightgray} Investment & Log of real gross private domestic investment & Federal Reserve Bank of St. 
Louis \\ S\&P 500 & Log of S\&P 500 & Yahoo Finance \\ \rowcolor{lightgray} Total credit & Log of loans to non-financial private sector & Board of Governors of the Federal \\ \rowcolor{lightgray} & & Reserve System \\ Mortgages & Log of home mortgages of households and & Federal Reserve Bank of St. Louis \\ & non-profit organizations & \\ \rowcolor{lightgray} Real personal consumption expenditures & Log of real personal consumption expenditures & Federal Reserve Bank of St. Louis \\ Real estate value & Log of real estate at market value of households & Federal Reserve Bank of St. Louis \\ & and non-profit organizations & \\ \rowcolor{lightgray} Corporate bond yield & Moody's baa corporate bond yield & Federal Reserve Bank of St. Louis \\ 10-year treasury note & 10-year treasury constant maturity rate & Federal Reserve Bank of St. Louis \\ \rowcolor{lightgray} Federal funds rate & Federal funds rate & Federal Reserve Bank of St. Louis \\ Mortgage rate & 30-year fixed rate mortgage average & Federal Reserve Bank of St. Louis \\ \rowcolor{lightgray} CPI & Log of consumer price index & Federal Reserve Bank of St. Louis \\ PCE & Log of price index of personal consumption & Federal Reserve Bank of St. Louis \\ & expenditure & \\ \rowcolor{lightgray} Employment & Log of employment level & Federal Reserve Bank of St. Louis \\ All employees: manufacturing & Log of all employees in the manufacturing sector & Federal Reserve Bank of St. Louis \\ \rowcolor{lightgray} Industrial production & Log of industrial production index & Federal Reserve Bank of St. Louis \\ Industrial production: final products & Log of industrial production: final products index & Federal Reserve Bank of St. Louis \\ \rowcolor{lightgray} 1-year treasury bill & 1-year treasury constant maturity rate & Federal Reserve Bank of St. 
Louis \\ Dow Jones Industrial Average & Log of Dow Jones Industrial Average index & Google Finance \\ \rowcolor{lightgray} Nasdaq Composite & Log of Nasdaq Composite & Federal Reserve Bank of St. Louis \\ \hline\hline \end{tabular} } \end{table} \newpage \section*{Appendix F: Structural Analysis Tools} In this appendix we provide details on various structural analysis tools for the VAR-FSV similar to those designed for the structural VAR. In what follows, we describe methods to construct structural impulse response functions, forecast error variance decompositions and historical decompositions. To derive expressions of responses of $\mathbf{y}_t$ to a one-time impulse in $\mathbf{f}_t$, we first rewrite the VAR($p$) in \eqref{eq:yt} as an equivalent VAR(1) as follows: \[ \mathbf{Y}_t = \mathbf{A}_0 + \mathbf{A} \mathbf{Y}_{t-1} +\mathbf{E}_t, \] where \[ \mathbf{Y}_t =\begin{pmatrix} \mathbf{y}_t \\ \mathbf{y}_{t-1} \\ \vdots \\ \mathbf{y}_{t-p+1} \\ \end{pmatrix}, \ \ \mathbf{A}_0 =\begin{pmatrix} \mathbf{a}_0 \\ \mathbf{0} \\ \vdots \\ \mathbf{0} \\ \end{pmatrix}, \ \ \mathbf{E}_t =\begin{pmatrix} \vect{\epsilon}_t \\ \mathbf{0} \\ \vdots \\ \mathbf{0} \end{pmatrix}, \ \ \mathbf{A} = \begin{pmatrix} \mathbf{A}_1 & \mathbf{A}_2 & \cdots & \mathbf{A}_{p-1} & \mathbf{A}_p\\ \mathbf{I}_n & \mathbf{0} & \cdots & \mathbf{0} & \mathbf{0}\\ \mathbf{0} & \mathbf{I}_n & \mathbf{0} & \cdots & \mathbf{0}\\ \vdots & \ddots & \ddots & \ddots & \vdots \\ \mathbf{0} & \cdots & \mathbf{0} & \mathbf{I}_n & \mathbf{0} \end{pmatrix}. \] By successive substitution for $\mathbf{Y}_{t-s}$, this VAR(1) has a vector moving average representation \citep[see, e.g., Section 4.1 of][]{KL17}: \begin{equation} \mathbf{Y}_t = \sum_{s=0}^{\infty} \mathbf{A}^{s} \mathbf{A}_0+ \sum_{s=0}^{\infty} \mathbf{A}^{s} \mathbf{E}_{t-s}= (\mathbf{I}_{np}-\mathbf{A})^{-1} \mathbf{A}_0+ \sum_{s=0}^{\infty} \mathbf{A}^{s} \mathbf{E}_{t-s}. 
\label{eq:yt3} \end{equation} Left-multiplying~\eqref{eq:yt3} by $\mathbf{J}\equiv \left(\mathbf{I}_n, \mathbf{0}_{n\times n(p-1)}\right)$, which is of dimension $n \times np$, we have \begin{align*} \mathbf{y}_t & = (\mathbf{I}_n-\mathbf{A}_1-\cdots-\mathbf{A}_p)^{-1} \mathbf{a}_0+ \sum_{s=0}^{\infty} \mathbf{J}\mathbf{A}^{s}\mathbf{J}' \mathbf{J} \mathbf{E}_{t-s} \\ & = \vect{\mu}_{\mathbf{y}}+ \sum_{s=0}^{\infty} \vect{\Phi}_s \vect{\epsilon}_{t-s}, \end{align*} where $\vect{\mu}_{\mathbf{y}} = (\mathbf{I}_n-\mathbf{A}_1-\cdots-\mathbf{A}_p)^{-1} \mathbf{a}_0$ and $\vect{\Phi}_s=\mathbf{J}\mathbf{A}^{s}\mathbf{J}' = [\mathbf{A}^{s}]_{1:n,1:n}$ is the first $n\times n$ block of $\mathbf{A}^{s}$. Note that the coefficient matrices $\vect{\Phi}_s$ can also be calculated recursively as \[ \vect{\Phi}_0 = \mathbf{I}_n, \quad \text{and} \quad \vect{\Phi}_s = \sum_{j=1}^{s} \vect{\Phi}_{s-j} \mathbf{A}_j, \quad s=1,2,\ldots \] with $\mathbf{A}_j = \mathbf{0}$ for $j>p$. Finally, using the factor structure in \eqref{eq:epsilont} and standardizing the factors via $\tilde{\mathbf{f}}_{t} = \vect{\Omega}_t^{-\frac{1}{2}}\mathbf{f}_t$ with $\vect{\Omega}_t = \text{diag}(\text{e}^{h_{n+1,t}},\ldots, \text{e}^{h_{n+r,t}})$ so that $\tilde{\mathbf{f}}_{t} \sim\distn{N}(\mathbf{0},\mathbf{I}_r)$, we thus obtain \begin{equation}\label{eq:VMA} \mathbf{y}_t = \vect{\mu}_{\mathbf{y}} + \sum_{s=0}^{\infty} \vect{\Phi}_s\mathbf{L} \vect{\Omega}_{t-s}^{\frac{1}{2}}\tilde{\mathbf{f}}_{t-s} + \sum_{s=0}^{\infty} \vect{\Phi}_s\mathbf{u}_{t-s}^y. \end{equation} Since the latent factors act as structural shocks in our setup, in what follows we analyze how the system responds to unit shocks in $\tilde{\mathbf{f}}_{t}$. More specifically, we use the expression in~\eqref{eq:VMA} to derive structural impulse responses, forecast error variance decompositions and historical decompositions. 
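As a numerical cross-check (our own sketch, not from the paper), the companion-matrix expression $\vect{\Phi}_s = [\mathbf{A}^s]_{1:n,1:n}$ and the lag recursion can be verified to agree on random illustrative coefficients:

```python
import numpy as np

# Cross-check: Phi_s from the companion form vs. the lag recursion.
rng = np.random.default_rng(3)
n, p = 3, 2
A_list = [0.2 * rng.standard_normal((n, n)) for _ in range(p)]  # illustrative A_1, A_2

# companion matrix A (np x np): first block row (A_1 ... A_p), identity shift below
A = np.zeros((n * p, n * p))
A[:n, :] = np.hstack(A_list)
A[n:, :-n] = np.eye(n * (p - 1))

def phi_companion(s):
    # Phi_s = [A^s]_{1:n,1:n}
    return np.linalg.matrix_power(A, s)[:n, :n]

# recursion: Phi_0 = I, Phi_s = sum_{j=1}^{s} Phi_{s-j} A_j, with A_j = 0 for j > p
Phi = [np.eye(n)]
for s in range(1, 6):
    Phi.append(sum(Phi[s - j] @ A_list[j - 1] for j in range(1, min(s, p) + 1)))

max_err = max(float(np.max(np.abs(Phi[s] - phi_companion(s)))) for s in range(6))
```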
\subsection*{Structural Impulse Responses} We first derive an expression for the response of $y_{i,t+l}$, the $i$-th element in $\mathbf{y}_{t+l}$, to unit shocks in $\tilde{\mathbf{f}}_t = (\tilde{f}_{1,t},\ldots, \tilde{f}_{r,t})'$ $l$ periods ago: \[ \theta_{ij,l,t} \equiv \frac{\partial y_{i,t+l}}{\partial \tilde{f}_{j,t}}, \] so that for each pair $(l,t)$, $\vect{\Theta}_{l,t} = (\theta_{ij,l,t})$ is of dimension $n\times r$. By differentiating \eqref{eq:VMA} with respect to $\tilde{\mathbf{f}}_t$, it is straightforward to see that the impulse response functions are given by: \[ \vect{\Theta}_{l,t} = \vect{\Phi}_l \mathbf{L} \vect{\Omega}_t^{\frac{1}{2}} = \left[\mathbf{A}^{l}\right]_{1:n,1:n}\mathbf{L} \vect{\Omega}_t^{\frac{1}{2}}. \] Since the variances of the latent factors are time-varying, these impulse response functions are technically also time-varying. However, the effect of the variance only scales the responses proportionally---the size of a unit shock changes over time. In practice we report only the impulse responses at a particular time $t$, e.g., $t=T$ with $\vect{\Theta}_{l,T}=\vect{\Phi}_l \mathbf{L} \vect{\Omega}_T^{\frac{1}{2}}$. \subsection*{Forecast Error Variance Decompositions} Next, we develop an expression to account for the proportion of the forecast error variance or the mean squared prediction error (MSPE) that is due to the variation in the latent factors. To that end, let $\mathbf{y}_{t+l\,|\, t}$ denote the optimal conditional forecast of $\mathbf{y}_{t+l}$ given the information up to time $t$. Then, using~\eqref{eq:VMA} it follows that \[ \mathbf{y}_{t+l}-\mathbf{y}_{t+l\,|\, t}=\sum_{s=0}^{l-1} \vect{\Phi}_s \mathbf{L} \vect{\Omega}_{t+l-s}^{\frac{1}{2}}\tilde{\mathbf{f}}_{t+l-s}+\sum_{s=0}^{l-1} \vect{\Phi}_s\mathbf{u}_{t+l-s}^y. 
\] If we define $\vect{\Xi}_{l,t}=\vect{\Phi}_l\vect{\Sigma}_t^{\frac{1}{2}}$, then the MSPE can be expressed as \begin{align*} \text{MSPE}_t(l) & = \mathbb E[(\mathbf{y}_{t+l}-\mathbf{y}_{t+l\,|\, t})(\mathbf{y}_{t+l}-\mathbf{y}_{t+l\,|\, t})'] \\ & =\sum_{s=0}^{l-1} \vect{\Phi}_s \mathbf{L} \vect{\Omega}_{t+l-s}^{\frac{1}{2}}\mathbb E[\tilde{\mathbf{f}}_{t+l-s}\tilde{\mathbf{f}}_{t+l-s}'] \vect{\Omega}_{t+l-s}^{\frac{1}{2}}\mathbf{L}' \vect{\Phi}_s'+ \sum_{s=0}^{l-1} \vect{\Phi}_s\mathbb E[\mathbf{u}_{t+l-s}^y\mathbf{u}_{t+l-s}^{y\prime}]\vect{\Phi}_s' \\ & =\sum_{s=0}^{l-1} \vect{\Theta}_{s,t+l-s} \vect{\Theta}_{s,t+l-s}' + \sum_{s=0}^{l-1} \vect{\Xi}_{s,t+l-s} \vect{\Xi}_{s,t+l-s}' \equiv \text{MSPE}^{\mathbf{f}}_t(l)+\text{MSPE}^{\mathbf{u}}_t(l), \end{align*} where we have used the assumption that $\mathbf{u}_{t}$ and $\tilde{\mathbf{f}}_{s}$ are mutually independent for all leads and lags. Hence, we have decomposed the MSPE into two components: one that can be attributed to the latent factors and the other to the idiosyncratic shocks. Since in our setup both the variances of the factors and the idiosyncratic shocks are time varying, the expression for MSPE depends on $t$. In practice, we focus on $t=T$ and compute $\text{MSPE}_T(l)$. Let $\theta_{ij,s,T+l-s}$ denote the $(i,j)$ element of $\vect{\Theta}_{s,T+l-s}$. Then, the contribution of the $j$-th factor to the MSPE of $y_{i,t}$, $i=1,\ldots,n$, at horizon $l$ is \[ \text{MSPE}^{j,\mathbf{f}}_{i,T}(l) =\sum_{s=0}^{l-1} \theta_{ij,s,T+l-s}^2. \] Hence, we can further decompose the MSPE of $y_{i,t}$ attributed to the factors as \[ \text{MSPE}^{\mathbf{f}}_{i,T}(l) = \sum_{j=1}^r \text{MSPE}^{j,\mathbf{f}}_{i,T}(l)=\sum_{j=1}^r \left(\sum_{s=0}^{l-1} \theta_{ij,s,T+l-s}^2 \right). 
\] It follows that the ratio $\text{MSPE}^{j,\mathbf{f}}_{i,T}(l)/\text{MSPE}^{\mathbf{f}}_{i,T}(l)$ measures the contribution of the $j$-th factor in forecasting the $i$-th variable at time $T$, $l$ periods ahead, as a fraction of the MSPE attributed to the factors. \subsection*{Historical Decompositions} Next, we develop expressions for historical decompositions. To that end, let $\dot{\mathbf{y}}_t$ denote the demeaned ${\mathbf{y}}_t$, i.e., $\dot{\mathbf{y}}_t = \mathbf{y}_t -\vect{\mu}_{\mathbf{y}}$. Then, it follows from~\eqref{eq:VMA} that one may approximate $\dot{\mathbf{y}}_t$ using \[ \hat{\dot{\mathbf{y}}}_t = \sum_{s=0}^{t-1} \vect{\Phi}_s \mathbf{L} {\mathbf{f}}_{t-s} + \sum_{s=0}^{t-1}\vect{\Phi}_s \mathbf{u}_{t-s}^y. \] For a covariance-stationary system, the approximation error becomes negligible for a sufficiently large $t$. To quantify how much the $j$-th factor explains the historically observed fluctuation in the $i$-th variable, let \[ \hat{\dot{y}}_{i,t}^{(j),\mathbf{f}}=\sum_{s=0}^{t-1}\left(\mathbf{e}_{n,i}'\vect{\Phi}_s \mathbf{L}\mathbf{e}_{r,j}\right) \left({\mathbf{e}_{r,j}'\mathbf{f}}_{t-s}\right), \quad \hat{\dot{y}}_{i,t}^{\mathbf{u}}=\sum_{s=0}^{t-1}\mathbf{e}_{n,i}'\vect{\Phi}_s \mathbf{u}_{t-s}^y, \] where $\mathbf{e}_{m,k}$ denotes the $m\times 1$ vector with a 1 in the $k$-th coordinate and 0 elsewhere. Then, the historical decomposition of the $i$-th element of $\hat{\dot{\mathbf{y}}}_t$ can be expressed as \[ \hat{\dot{y}}_{i,t}=\sum_{j=1}^r \hat{\dot{y}}_{i,t}^{(j),\mathbf{f}}+\hat{\dot{y}}_{i,t}^{\mathbf{u}}. \] Hence, we have expressed $\hat{\dot{y}}_{i,t}$ as the summation of $r+1$ terms: the variations in $y_{i,t}$ that can be attributed to the $r$ factors and an additional `residual' term. 
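By construction this decomposition is exact: summing the $r$ factor contributions and the residual term recovers $\hat{\dot{\mathbf{y}}}_t$. A minimal numerical sketch (random stand-in quantities, not estimates from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
n, r, t = 3, 2, 6
Lmat = rng.standard_normal((n, r))
Phi = [0.5 * rng.standard_normal((n, n)) for _ in range(t)]  # stand-in MA matrices
f = rng.standard_normal((t, r))     # factors f_1, ..., f_t
u = rng.standard_normal((t, n))     # idiosyncratic shocks u_1, ..., u_t

# total approximation: sum_{s=0}^{t-1} Phi_s L f_{t-s} + Phi_s u_{t-s}
y_hat = sum(Phi[s] @ (Lmat @ f[t - 1 - s]) + Phi[s] @ u[t - 1 - s] for s in range(t))

# per-factor contributions and the residual term
contrib = [sum((Phi[s] @ Lmat[:, [j]]).ravel() * f[t - 1 - s, j] for s in range(t))
           for j in range(r)]
resid = sum(Phi[s] @ u[t - 1 - s] for s in range(t))

recon = sum(contrib) + resid
max_err = np.max(np.abs(y_hat - recon))
```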
\section{Introduction\label{sec:intro}} Thixotropic phenomena are observed in fluids with a ``structure'' that evolves with time, and such fluids are common both in industry and in daily life. Common examples include flow batteries \cite{Helal_2014,Helal_2016,Narayanan,Wang_JoR}, crude oils \cite{DimitriouMcKinley_SM2014}, food materials \cite{Glicerina2015}, blood \cite{Jin_Blood2011,Armstrong2020}, and colloidal suspensions \cite{DullaertMewis_ModelThixo2005,Alessandrini1982,Beris_starJNNFM2008,Burgos2001,Kelessidis2008}, among numerous others \cite{MewisReview1979,Barnes_ThixoReview1997,MewisWagnerReview2009,LarsonWei-JoR2019}. As defined by Mewis and Wagner \cite{MewisWagnerReview2009,MewisWagner_book2012}, thixotropy is ``the continuous decrease of viscosity with time when flow is applied to a sample that has been previously at rest, and the subsequent recovery of viscosity when the flow is discontinued''. Real materials show both thixotropy and viscoelasticity, and microstructural buildup can cause an increase in the viscosity as well as the elastic modulus \cite{Joshi_JOR2021}. In many cases thixotropy may be distinguished from viscoelasticity based on timescales: viscoelastic timescales are typically shorter than thixotropic timescales \cite{WeiSolomonLarson_SE-JOR2016,Larson_ConstEq-JOR2015}. Materials with short thixotropic timescales (e.g.\ $\lesssim\mathcal{O}(1)$~s) are typically labeled as \emph{not} thixotropic, but the degree of thixotropy of a material depends not only on the characteristic timescales, but also on the amount of change in a rheological property of interest, e.g.\ shear viscosity or elastic modulus, or more generally a change in state of stress. Stress changes, and timescales over which these changes occur, must both be considered to quantify and compare the degree of thixotropy across different materials. 
Additionally, thixotropic materials involve a range of microstructural lengthscales and associated timescales, and this makes analyzing the rheology non-trivial. Much like viscoelastic materials, these are characterized by a large diversity of elementary units with multiple characteristic timescales \cite{Tschoegl_book1989,Metzler1996}. There is no universal \emph{predictive} model for thixotropy. Therefore, to enable comparison between materials, we introduce a universal \emph{descriptive} model. Our objective is to develop methods, irrespective of the material tested or the underlying predictive constitutive model, to quantify thixotropy using observed stress changes ($\Delta\sigma$) and timescales ($\tau_{\rm char}$). This concept is illustrated in Fig.~\ref{fig:intro_Ashby}. \begin{figure}[!ht] \centering \includegraphics[width=0.8\linewidth,trim={0 0 5cm 0},clip]{thixo-manu-fig1} \caption{\label{fig:intro_Ashby}Proposed Ashby-style mapping for comparing thixotropic responses. Even with a chosen experimental protocol, the dependencies of two relevant thixotropic properties on the experimental conditions makes the parameter space high-dimensional.} \end{figure} The simple idea of using $\Delta\sigma$ and $\tau_{\rm char}$ actually involves underlying high-dimensionality, noted in Fig.~\ref{fig:intro_Ashby}. Both thixotropic breakdown and buildup processes must be considered, and the associated kinetics warrant the use of a distribution of timescales to describe the data, eventually reduced to obtain a secondary, average metric of $\tau_{\rm char}$. The amount of thixotropic change depends on the experimental protocol used; the four most common ones are shown in Fig.~\ref{fig:intro_signals}. Using shear stress to study the changes (and hence $\Delta\sigma$) allows us to include information about viscosity (rate-controlled tests) or modulus (oscillatory tests). 
Lastly, both $\Delta\sigma$ and $\tau_{\rm char}$ are functions of the specific experimental conditions native to the chosen protocol. In step shear rate tests, $\Delta\sigma$ and $\tau_{\rm char}$ are functions of the initial ($\dot{\gamma}_{\rm i}$) and final ($\dot{\gamma}_{\rm f}$) shear rates; in step strain-amplitude (oscillatory) tests, they are functions of the initial ($\gamma_{\rm i}$) and final ($\gamma_{\rm f}$) shear strain amplitudes. Fig.~\ref{fig:intro_Ashby} is thus a two-dimensional projection of this high-dimensional space onto a plane. On this plane we can still vary $\dot{\gamma}_{\rm i}$ and $\dot{\gamma}_{\rm f}$. Other metrics in addition to $\Delta\sigma$ and $\tau_{\rm char}$ may also exist, as noted in Fig.~\ref{fig:intro_Ashby}. Here we introduce the concept of thixotropic spectra as a description of thixotropic effects; such spectra are revealed by step changes rather than by hysteresis loops. Many other step tests are available (Fig.~\ref{fig:intro_signals}), but we focus our attention on step shear rate since this can distinguish thixotropy and viscoelasticity \cite{MewisWagner_book2012}. It may be a step down ($\dot{\gamma}_{\rm f} < \dot{\gamma}_{\rm i}$) or step up ($\dot{\gamma}_{\rm f} > \dot{\gamma}_{\rm i}$) test, and the transient stress evolution is used to study thixotropic structure buildup or breakdown. Fig.~\ref{fig:intro_signals}(a2) and (b2) show typical stress signals for step down (buildup) and step up (breakdown) rate tests respectively. In step down tests, the stress may first show a viscoelastic relaxation (decrease) at very short times, which may or may not be observable \cite{MewisReview1979,MewisWagnerReview2009}. This may be followed by thixotropic recovery at long times \cite{MewisReview1979,MewisWagnerReview2009}. 
In step up tests, the stress typically shows a viscoelastic stress increase at short times, followed by thixotropic decay at long times \cite{MewisReview1979,MewisWagnerReview2009}. For a given test, the data is analyzed to obtain necessary information about $\Delta\sigma(\dot{\gamma}_{\rm i},\dot{\gamma}_{\rm f})$ and $\tau_{\rm char}(\dot{\gamma}_{\rm i},\dot{\gamma}_{\rm f})$ while keeping in mind the dimensionality of the problem. The analysis method is dictated by the protocol, both of which are described in detail in the following section. \begin{figure}[!ht] \centering \includegraphics[scale=0.15,trim={0 0 0 0},clip]{thixo-manu-fig2-v5} \caption{\label{fig:intro_signals}Example input protocols where a thixotropic spectrum of timescales can be applied. (a1) Step down in steady shear ($\dot{\gamma}_{\rm f}<\dot{\gamma}_{\rm i}$), (b1) step up in steady shear ($\dot{\gamma}_{\rm f}>\dot{\gamma}_{\rm i}$), (c1) step down in oscillatory shear amplitude ($\gamma_{\rm f}<\gamma_{\rm i}$), (d1) step up in oscillatory shear amplitude ($\gamma_{\rm f}>\gamma_{\rm i}$). Typical response functions (stresses and moduli) are shown in (a2)-(d2).} \end{figure} It must be noted that our approach only describes the thixotropic component of the stress response; it requires the viscoelastic component to be sub-dominant or negligible. As needed, the early-time viscoelastic response (as shown schematically in Fig.~\ref{fig:intro_signals}(a2),~(b2)) can be fit to a viscoelastic spectrum superimposed on the thixotropic one. This will truncate the range of observable thixotropic timescales. In our work, we only fit the thixotropic part following any initial viscoelastic response. 
\section{Theory\label{sec:theory}} \subsection{Flow conditions and rheometry protocols\label{subsec:theory-protocol}} We propose describing data in step tests as a superposition of stress components evolving over a spectrum of thixotropic buildup or breakdown timescales as a universal way to quantify thixotropic dynamics. The transient thixotropic stress signal $\sigma$ evolving over time $t$, expressed as a generalized discrete spectrum, is \begin{subequations}\label{eq:discspec-intro} \begin{align} \sigma^+(t;\mathbb{P}) &= \sigma_0 + \sum\limits_{i=1}^N \sigma^+_i \left( 1 - e^{-t/\tau^+_i} \right) \label{eq:discspec-intro-recovery},\\ \sigma^-(t;\mathbb{P}) &= \sigma_{\rm ss} + \sum\limits_{i=1}^N \sigma^-_i e^{-t/\tau^-_i} \label{eq:discspec-intro-breakdown}, \end{align} \end{subequations} where ``$+$'' is used for recovery (step down) and ``$-$'' for breakdown (step up). Subscripts ``0'' and ``ss'' refer to initial $(t=0)$ and steady state $(t\rightarrow\infty)$ values respectively. We assume the basis functions to be \emph{exponential} for each individual mode. It is shown here for $N$ discrete stress modes with mode strengths $\sigma_i$ and associated timescales $\tau_i$. These modes can be either for recovery or breakdown, and superscripts are used to distinguish the two as $\tau^+_i$ or $\tau^-_i$; one would omit the ``$\pm$'' superscripts in generalized discussions. $\mathbb{P}$ is the set of experimental parameters for a given protocol (e.g.\ $\mathbb{P} = \{\dot{\gamma}_{\rm i},\dot{\gamma}_{\rm f}\}$ in step rate tests, where a steady state at $\dot{\gamma}_{\rm i}$ has been achieved before the step). The distribution of $\sigma_i$ over $\tau_i$ can be reduced further to generate low-dimensional summarizing metrics, as discussed in \S~\ref{subsubsec:spectra-moments}. The key idea is to represent a signal as the sum (or integral) of multiple component signals, each with its own characteristic timescale of evolution. 
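As an illustration of the recovery form of the discrete spectrum, the sketch below evaluates a hypothetical two-mode example and checks its limiting values, $\sigma^+(0)=\sigma_0$ and $\sigma^+(t\to\infty)\to\sigma_0+\sum_i\sigma_i^+$ (the mode strengths and timescales are made up):

```python
import numpy as np

# Hypothetical two-mode recovery spectrum (illustrative numbers only)
sigma0 = 1.0                         # initial stress sigma_0
sig = np.array([0.5, 2.0])           # mode strengths sigma_i^+
tau = np.array([0.1, 10.0])          # mode timescales tau_i^+

def stress_recovery(t):
    # sigma^+(t) = sigma_0 + sum_i sigma_i^+ * (1 - exp(-t / tau_i^+))
    t = np.asarray(t, dtype=float)[..., None]
    return sigma0 + np.sum(sig * (1.0 - np.exp(-t / tau)), axis=-1)

t_grid = np.array([0.0, 1.0, 100.0])
vals = stress_recovery(t_grid)
# vals starts at sigma_0, increases monotonically, and approaches sigma_0 + sum(sig)
```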
The framework can be applied to step changes of unidirectional or oscillatory shear forcing; the input scheduling for each is shown in Fig.~\ref{fig:intro_signals}. Step rate tests are most commonly used for studying thixotropy \cite{MewisWagnerReview2009}. The exact form of Eq.~\ref{eq:discspec-intro} applied to step rate tests is \begin{subequations}\label{eq:spectra-steprate} \begin{align} \sigma^+(t;\dot{\gamma}_{\rm i},\dot{\gamma}_{\rm f}) &= \sigma_0 + \sum\limits_{i=1}^N \sigma^+_i \left( 1 - e^{-t/\tau^+_i} \right) \label{eq:spectra-steprate-down},\\ \sigma^-(t;\dot{\gamma}_{\rm i},\dot{\gamma}_{\rm f}) &= \sigma_{\rm ss} + \sum\limits_{i=1}^N \sigma^-_i e^{-t/\tau^-_i} \label{eq:spectra-steprate-up}, \end{align} \end{subequations} for recovery and breakdown, respectively. We assume that the system has reached steady state during $\dot{\gamma}_{\rm i}$, and that the time at which the step from $\dot{\gamma}_{\rm i}$ to $\dot{\gamma}_{\rm f}$ occurs does not matter; $t = 0$ is measured from the beginning of the $\dot{\gamma}_{\rm f}$ step. The same idea can be applied to oscillatory shear with a step in strain amplitude. From the oscillatory signal, a chosen stress amplitude component evolves as \begin{subequations} \begin{align}\label{eq:spectra-stepstrain} \sigma^+(t;\omega,\gamma_{\rm i},\gamma_{\rm f}) &= \gamma_{\rm f} G_0(\omega) + \gamma_{\rm f}\sum\limits_{i=1}^N G^+_i(\omega) \left( 1 - e^{-t/\tau^+_i} \right),\\ \sigma^-(t;\omega,\gamma_{\rm i},\gamma_{\rm f}) &= \gamma_{\rm f} G_{\rm ss}(\omega) + \gamma_{\rm f}\sum\limits_{i=1}^N G^-_i(\omega) e^{-t/\tau^-_i}, \end{align} \end{subequations} for recovery and breakdown, respectively. The step in strain amplitude occurs between oscillatory shear inputs $\gamma(t) = \gamma_{\rm i} \sin (\omega t)$ and $\gamma(t) = \gamma_{\rm f} \sin (\omega t)$. Once again, we assume that the oscillatory stress has reached steady state during $\gamma_{\rm i}$.
The stress amplitude could be the total stress $\sigma$, the component in phase with strain $\sigma^\prime$, or the component in phase with strain rate $\sigma^{\prime\prime}$, in which case the corresponding $G$-based parameters would be $|G^*|$, $G^\prime$ (as shown in Fig.~\ref{fig:intro_signals}), or $G^{\prime\prime}$, respectively. Other oscillatory measures also fit in this framework (e.g.\ orthogonal superposition, where the moduli measured in a direction orthogonal to rotational shear, $G^\prime_\perp$ and $G^{\prime\prime}_\perp$, can be used with step rate inputs \cite{Mewis_OSP,Wang_JoR}). Other generalizations of Eq.~\ref{eq:discspec-intro} are possible. For instance, stress in step rate tests can be normalized to define the material function $\eta_i(\tau_i) \equiv \sigma_i(\tau_i)/\dot{\gamma}_{\rm f}$, and Eq.~\ref{eq:discspec-intro} can be reframed in terms of $\eta_i^+$ and $\eta_i^-$. We use stresses throughout our work to keep the analysis general and applicable to both strain and strain rate controlled tests. The following section details the mathematical framework for obtaining the spectra. \subsection{Thixotropic spectra: discrete and continuous\label{subsec:spectra-spectra}} To derive the continuous spectrum representation, consider thixotropic recovery as an example, Eq.~\ref{eq:discspec-intro-recovery}. The transient contribution can be written in integral form, using the properties of the Dirac delta function, as \cite{LucaRHEMAOS_2018} \begin{align}\label{eq:disc-cont1} \sum\limits_{i=1}^N \sigma^+_i \left( 1-e^{-t/\tau^+_i} \right) = \int\limits_{0}^{\infty}\left[ \lim_{N\rightarrow\infty} \sum\limits_{i=1}^{N} \sigma^+_i \cdot \left( 1-e^{-t/\tau^+} \right) \cdot \delta \left( \tau^+-\tau^+_i \right) \right] {\rm d}\tau^+.
\end{align} Regrouping the terms, we get \begin{align}\label{eq:disc-cont2} \sum\limits_{i=1}^N \sigma^+_i \left(1-e^{-t/\tau^+_i} \right) = \int\limits_{0}^{\infty}\left[ \lim_{N\rightarrow\infty} \sum\limits_{i=1}^{N} \sigma^+_i \cdot \delta \left( \tau^+-\tau^+_i \right) \right] \left( 1-e^{-t/\tau^+} \right) {\rm d}\tau^+. \end{align} The grouping in square brackets, in the limit of the number of modes $N\rightarrow\infty$, is the continuous spectrum \cite{Tschoegl_book1989,LucaRHEMAOS_2018} $X^+\left(\tau^+\right)$, \begin{align}\label{eq:line-contspec-recovery} X^+\left(\tau^+\right) \equiv \lim_{N\rightarrow\infty} \sum\limits_{i=1}^{N} \sigma^+_i \cdot \delta \left( \tau^+-\tau^+_i \right), \end{align} where $\delta(\tau - \tau_i)$ is the shifted Dirac delta function, with SI units of $\rm{s}^{-1}$ \cite{Tschoegl_book1989}. $X^+\left(\tau^+\right)$ is the distribution of thixotropic recovery modes over a domain of recovery timescales $\tau^+$, each mode having spectral strength $\sigma^+_i$ and associated timescale $\tau^+_i$, i.e.\ each mode contributes a small increment of \emph{recovered} stress at its \emph{recovery} timescale. By analogy, from Eq.~\ref{eq:discspec-intro-breakdown}, the continuous spectrum of breakdown timescales is \begin{align}\label{eq:line-contspec-breakdown} X^-\left(\tau^-\right) \equiv \lim_{N\rightarrow\infty} \sum\limits_{i=1}^{N} \sigma^-_i \cdot \delta \left( \tau^- - \tau^-_i \right). \end{align} Integration over the entire domain of $\tau$ yields the total stress from all modes. This establishes an equivalence between the discrete and continuous spectra, e.g.\ for recovery \begin{align}\label{eq:disc-cont3} \lim_{N\rightarrow\infty} \sum\limits_{i=1}^N \sigma^+_i \left( 1-e^{-t/\tau^+_i} \right) = \int\limits_{0}^{\infty} X^+\left(\tau^+\right) \cdot \left( 1-e^{-t/\tau^+} \right) {\rm d}\tau^+. \end{align} $X^+\left(\tau^+\right)$ is a distribution over timescales, such that an increment in stress is $\delta\sigma^+ = X^+\left(\tau^+\right)\delta\tau^+$.
$X$ thus has SI units of Pa~s$^{-1}$. It can also be thought of as a spectrum distributed over logarithmically-spaced time increments, such that $\delta\sigma^+ = \Xi^+\left(\tau^+\right)\delta\ln\tau^+$, which gives \begin{subequations} \begin{align} X^+\left(\tau^+\right) \delta\tau^+ &= \Xi^+\left(\tau^+\right) \delta\ln\tau^+,\\ \implies X^+\left(\tau^+\right) &= \frac{\Xi^+\left(\tau^+\right)}{\tau^+}, \label{eq:stress-to-visc-spec} \end{align} \end{subequations} and $\Xi^+\left(\tau^+\right)$ has units of Pa. It is useful to recast $X^+\left(\tau^+\right)$ as $\Xi^+\left(\tau^+\right)$ for direct comparison between the discrete and continuous spectra plotted on log-scale $\tau^+$. Note that such comparison is only possible when the discrete spectrum $\sigma^+_i$ is log-spaced in $\tau^+_i$ \cite{LucaRHEMAOS_2018}, which we use in this work (see the following section for details). The equivalence between $\sigma^+_i$ (discrete) and $\Xi^+$ (continuous) spectra, using Eq.~\ref{eq:stress-to-visc-spec} in Eq.~\ref{eq:disc-cont3}, is \begin{subequations}\label{eq:disc-cont4} \begin{align} \lim_{N\rightarrow\infty} \sum\limits_{i=1}^N \sigma^+_i \left( 1-e^{-t/\tau^+_i} \right) &= \int\limits_{0}^{\infty} \frac{\Xi^+\left(\tau^+\right)}{\tau^+} \cdot \left( 1-e^{-t/\tau^+} \right) {\rm d}\tau^+ \\ &= \int\limits_{-\infty}^{\infty} \Xi^+\left(\tau^+\right) \cdot \left( 1-e^{-t/\tau^+} \right) {\rm d}\ln{\tau^+}, \end{align} \end{subequations} noting the change of variables to ${\rm d}\ln\tau^+$ changes the lower limit of integration. Here, we have focused on recovery with basis $(1-e^{-t/\tau^+_i})$. Breakdown is represented in the same manner, just by changing $(1-e^{-t/\tau^+_i})$ to $e^{-t/\tau^-_i}$ (see Table~\ref{ch2:tab:thixo-VE-compare}). To summarize, we can now generalize the discrete spectrum of Eq.~\ref{eq:discspec-intro} as continuous spectra. 
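The discrete--continuous equivalence of Eq.~\ref{eq:disc-cont4} can be checked numerically. A minimal sketch, assuming an illustrative log-normal $\Xi^+$ (the shape later given in Eq.~\ref{eq:logn}) and a log-spaced discrete approximation $\sigma^+_i \approx \Xi^+(\tau^+_i)\,\Delta\ln\tau^+_i$:

```python
import numpy as np

# Illustrative log-normal continuous spectrum (parameters assumed for the sketch).
Xi_m, tau_m, theta = 10.0, 1.0, 1.0

def Xi(ln_tau):
    return Xi_m * np.exp(-0.5 * ((ln_tau - np.log(tau_m)) / theta) ** 2)

# Continuous side: integrate Xi * (1 - exp(-t/tau)) over d(ln tau) on a fine grid.
ln_tau = np.linspace(-10.0, 10.0, 20001)
d_cont = ln_tau[1] - ln_tau[0]
tau = np.exp(ln_tau)

def sigma_cont(t):
    return np.sum(Xi(ln_tau) * (1.0 - np.exp(-t / tau))) * d_cont

# Discrete side: log-spaced modes with sigma_i = Xi(ln tau_i) * d(ln tau_i).
ln_tau_i = np.linspace(-10.0, 10.0, 101)
d_disc = ln_tau_i[1] - ln_tau_i[0]
tau_i = np.exp(ln_tau_i)
sigma_i = Xi(ln_tau_i) * d_disc

def sigma_disc(t):
    return np.sum(sigma_i * (1.0 - np.exp(-t / tau_i)))
```

The two representations agree at any $t$, and the long-time plateau of either equals the total stress $\int\Xi^+\,{\rm d}\ln\tau^+$ (here $\Xi_{\rm m}\sqrt{2\pi}\,\theta$ for the log-normal).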
For step down in shear rate, from Eq.~\ref{eq:spectra-steprate-down}, the stress recovery is \begin{align}\label{eq:contspec-down} \sigma^+(t;\dot{\gamma}_{\rm i},\dot{\gamma}_{\rm f}) = \sigma_0 + \int\limits_{-\infty}^{\infty} \Xi^+\left(\tau^+;\dot{\gamma}_{\rm i},\dot{\gamma}_{\rm f}\right) \cdot \left( 1 - e^{-t/\tau^+} \right) {\rm d}\ln\tau^+, \end{align} and from Eq.~\ref{eq:spectra-steprate-up}, the stress breakdown is \begin{align}\label{eq:contspec-up} \sigma^-(t;\dot{\gamma}_{\rm i},\dot{\gamma}_{\rm f}) = \sigma_{\rm ss} + \int\limits_{-\infty}^{\infty} \Xi^-\left(\tau^-;\dot{\gamma}_{\rm i},\dot{\gamma}_{\rm f}\right) \cdot e^{-t/\tau^-} {\rm d}\ln\tau^-. \end{align} \subsection{Parameterized continuous spectra\label{subsec:theory-contspec}} The particular shape of the continuous distribution $\Xi(\tau)$ is unspecified in the general theory. Many parameterized shapes are possible, in analogy with viscoelastic spectra: BSW, Lorentzian, fractional Maxwell, and Weibull distributions, and their numerous variations, to name a few \cite{LucaRHE_TSS2019}. One may fit either discrete or continuous spectra based on experimental observations (data fitting) or on theoretical considerations from microstructure-based models. In the past, stretched exponential forms of stress recovery/decay have been widely used to phenomenologically describe thixotropic transients in step shear tests \cite{DullaertMewis_structkinetics2006,MewisWagnerReview2009,WeiSolomonLarson_SE-JOR2016,WeiSolomonLarson-JOR2018,LarsonWei-JoR2019}. Wei \emph{et al.}, in a series of papers, used a structure-kinetics-based constitutive model to study thixotropy in aggregating systems, where they employed a stretched exponential distribution of thixotropic structure parameters over a domain of characteristic thixotropic rate constants \cite{WeiSolomonLarson_SE-JOR2016,WeiSolomonLarson-JOR2018}.
This idea of a distribution of structure parameters over characteristic rate constants is similar to our approach, except that we directly describe the timescales (rather than assuming a kinetic rate equation) and allow any shape of the distribution (rather than assuming stretched exponential) of stress modes to directly quantify thixotropic observations. One could use the stretched exponential form to fit recovery/breakdown data, and our framework enables its interpretation as a thixotropic continuous spectrum \cite{Johnston_SE-PRB2006,Santos2005}. Given this common practice, we also show fits to the stretched exponential recovery $(+)$ or decay $(-)$, given by \begin{subequations} \begin{align}\label{eq:strexp_stress} \sigma^+(t) &= \sigma_0 + \sigma^+_{\rm se} \left[ 1 - e^{- \left( t/\tau^+_{\rm se} \right)^\beta} \right],\\ \sigma^-(t) &= \sigma_{\rm ss} + \sigma^-_{\rm se} e^{- \left( t/\tau^-_{\rm se} \right)^\beta}, \end{align} \end{subequations} where $\sigma_{\rm se}$ is the total amount of stress change, happening over a timescale $\tau_{\rm se}$. The stretched exponential function can be related to an underlying continuous distribution of single-exponential modes \cite{Johnston_SE-PRB2006,Santos2005,WeiSolomonLarson_SE-JOR2016,WeiSolomonLarson-JOR2018}. It can therefore be written as a continuous distribution in our framework, given by \cite{Johnston_SE-PRB2006,Santos2005} (see Supplementary Information \S~1 for derivation) \begin{align}\label{eq:strexp_spectrum} \Xi\left(\tau\right) &\equiv \sigma_{\rm se} \frac{1}{\pi} \frac{\tau_{\rm se}}{\tau} \int\limits_{0}^{\infty} e^{-u^{\beta} \cos \left( \pi\beta/2 \right)} \cdot \cos\left[ \frac{\tau_{\rm se}}{\tau}u - u^\beta \sin \left( \frac{\pi\beta}{2} \right) \right] {\rm d}u. \end{align} The mathematical derivation takes care to distinguish between $X(\tau)$ and $\Xi(\tau)$. We will use $\Xi$ to compare to discrete thixotropic spectra with log-spaced $\tau_i$. 
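A stretched exponential fit of Eq.~\ref{eq:strexp_stress} can be sketched with a standard nonlinear least-squares routine; the synthetic data and starting guesses below are illustrative, not measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def strexp_recovery(t, sigma_0, sigma_se, tau_se, beta):
    """Stretched-exponential recovery, the '+' branch of Eq. (strexp_stress)."""
    return sigma_0 + sigma_se * (1.0 - np.exp(-(t / tau_se) ** beta))

# Synthetic, noiseless single-exponential recovery (true beta = 1, tau = 5 s).
t = np.linspace(0.01, 50.0, 200)
data = 10.0 + 30.0 * (1.0 - np.exp(-t / 5.0))

popt, _ = curve_fit(strexp_recovery, t, data,
                    p0=[8.0, 25.0, 3.0, 0.8], bounds=(0.0, np.inf))
sigma_0_fit, sigma_se_fit, tau_se_fit, beta_fit = popt
```

For a noiseless single-exponential input the fit should recover $\beta\approx1$, a useful sanity check before fitting real transients, where $\beta<1$ signals a broad underlying spectrum.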
Note that using the stretched exponential $\Xi(\tau)$ to describe step shear data directly is \emph{not} equivalent to the multi-lambda kinetic rate equation used by Wei \emph{et al.} Among the many possible shapes of $\Xi(\tau)$, we also consider a log-normal distribution. It is given by \cite{LucaRHEMAOS_2018,LucaRHE_TSS2019} \begin{align}\label{eq:logn} \Xi(\tau) = \Xi_{\rm m} \exp \left[ - \frac{1}{2} \left(\frac{\ln\tau - \ln\tau_{\rm m}}{\theta} \right)^2 \right], \end{align} where $\Xi_{\rm m}$, $\tau_{\rm m}$, and $\theta$ are parameters of the distribution pertaining to the peak value (strength of stress change), the log-median relaxation timescale (mean timescale of change), and the standard deviation of the spectrum (breadth of the distribution), respectively. This distribution captures the key ideas of thixotropic spectra (a dominant timescale and breadth of distribution) using just three parameters. It will be used to build intuition for comparing discrete and continuous spectra. \subsection{Reduced parameters: moments and timescales\label{subsubsec:spectra-moments}} The moments of the discrete distribution $(\tau_i,\sigma_i)$ and the continuous distribution $\Xi\left(\tau\right)$ are defined as \cite{Tschoegl_book1989,LucaRHEMAOS_2018} \begin{subequations}\label{eq:moments-define} \begin{align} M_n &\equiv \sum_{i=1}^{N} \tau_i^n \cdot \sigma_i,\\ M_n &\equiv \int\limits_{0}^{\infty} \tau^n \cdot X\left(\tau\right) {\rm d}\tau \equiv \int\limits_{-\infty}^{\infty} \tau^n \cdot \Xi\left(\tau\right) {\rm d}\ln\tau, \end{align} \end{subequations} respectively, for $n \in \mathbb{Z}$. Based on these moments, meaningful low-dimensional metrics, such as characteristic timescales of recovery (or breakdown), can be defined. The general definition of the $n$-th mean timescale of a distribution is \cite{Tschoegl_book1989,LucaRHEMAOS_2018} \begin{align}\label{eq:moments-taun-define} \tau_n &\equiv \frac{M_n}{M_{n-1}}.
\end{align} We propose the use of Ashby-style diagrams \cite{Ashby_book2011} (\S~\ref{sec:lowdim}) for plotting the quantities defined here. One meaningful quantity is $M_0$, the net change in stress during the recovery/breakdown process. We denote this total change in stress as $\Delta\sigma$, \begin{subequations}\label{eq:moments-M0-deltasigma} \begin{align} \Delta\sigma &= M_0 \equiv \sum_{i=1}^{N} \sigma_i,\\ \Delta\sigma &= M_0 \equiv \int\limits_{-\infty}^{\infty} \Xi\left(\tau\right) {\rm d}\ln\tau, \end{align} \end{subequations} respectively, for the discrete and continuous distributions. The average timescales $\tau_n$ are also meaningful. In terms of the spectra, Eq.~\ref{eq:moments-taun-define} is \begin{subequations}\label{eq:taun} \begin{align} \tau_n = \frac{M_n}{M_{n-1}} &\equiv \dfrac{\sum\limits_{i=1}^N \tau_i^n \cdot \sigma_i}{\sum\limits_{i=1}^N \tau_i^{n-1} \cdot \sigma_i},\\ \tau_n = \frac{M_n}{M_{n-1}} &\equiv \dfrac{\int\limits_{-\infty}^{\infty} \tau^n \cdot \Xi\left(\tau\right) {\rm d}\ln\tau}{\int\limits_{-\infty}^{\infty} \tau^{n-1} \cdot \Xi\left(\tau\right) {\rm d}\ln\tau}, \end{align} \end{subequations} respectively, for the discrete and continuous distributions. From this, we can define three important quantities: the first and second mean timescales, and a polydispersity of the timescales, as \begin{subequations} \label{eq:moments-tau-define} \begin{align} \tau_1 &\equiv \frac{M_1}{M_0}, \label{eq:moments-tau-define-tau1}\\ \tau_2 &\equiv \frac{M_2}{M_1}, \label{eq:moments-tau-define-tau2}\\ \rm{PDI} &\equiv \frac{\tau_2}{\tau_1} = \frac{M_2M_0}{M_1^2}. \label{eq:moments-tau-define-PDI} \end{align} \end{subequations} There are other ways to define mean timescales. 
For example, one may define mean timescales in log space, such as \begin{align}\label{eq:moments-tau-define-log} \ln\tau_{n,{\rm log}} &\equiv \dfrac{\int\limits_{-\infty}^{\infty} (\ln\tau)^n \cdot \Xi\left(\tau\right) {\rm d}\ln\tau}{\int\limits_{-\infty}^{\infty} (\ln\tau)^{n-1} \cdot \Xi\left(\tau\right) {\rm d}\ln\tau}. \end{align} In essence, the definition of mean timescales is a matter of choice. We have defined the mean timescales in Eq.~\ref{eq:moments-tau-define} to mirror the definitions used for linear viscoelastic spectra \cite{Tschoegl_book1989,LucaRHEMAOS_2018}, and we shall use these to populate Ashby charts for thixotropy data. Defining the mean timescales in this manner is akin to a weighted arithmetic average of timescales, while Eq.~\ref{eq:moments-tau-define-log} is meaningful for certain distributions such as the log-normal (Eq.~\ref{eq:logn}), for which mean timescales defined in log space equal the log-mean timescale of the distribution, $\ln\tau_{n,{\rm log}} = \ln\tau_{\rm m}$. Eq.~\ref{eq:spectra-steprate-down}, \ref{eq:spectra-steprate-up} (discrete spectra) and \ref{eq:contspec-down}, \ref{eq:contspec-up} (continuous spectra), in conjunction with Eq.~\ref{eq:moments-M0-deltasigma}~and~\ref{eq:taun} to obtain the reduced metrics, convey the core idea of this work. There are similarities between the spectra used in viscoelasticity and the thixotropic spectra used here. In linear viscoelasticity, the stress relaxation shear modulus can be expressed in discrete or continuous forms as $G(t) = \sum\limits_{i = 1}^N G_i \cdot e^{-t/\tau_i}$, or $G(t) = \int\limits_0^\infty Q(\tau) \cdot e^{-t/\tau} {\rm d}\tau = \int\limits_{-\infty}^\infty H(\tau) \cdot e^{-t/\tau} {\rm d}\ln\tau$, where $Q(\tau)$ and $H(\tau)$ are the modulus-weighted and viscosity-weighted viscoelastic relaxation spectra, respectively \cite{LucaRHEMAOS_2018}.
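The reduced metrics of Eq.~\ref{eq:moments-M0-deltasigma} and \ref{eq:taun} are direct sums over a discrete spectrum. A minimal sketch, with an illustrative two-mode spectrum:

```python
import numpy as np

def moment(n, tau_i, sigma_i):
    """n-th moment of a discrete spectrum, M_n = sum_i tau_i**n * sigma_i."""
    tau_i, sigma_i = np.asarray(tau_i, float), np.asarray(sigma_i, float)
    return np.sum(tau_i ** n * sigma_i)

def reduced_metrics(tau_i, sigma_i):
    """Total stress change, mean timescales, and polydispersity index."""
    M0, M1, M2 = (moment(n, tau_i, sigma_i) for n in (0, 1, 2))
    tau1, tau2 = M1 / M0, M2 / M1
    return {"delta_sigma": M0, "tau1": tau1, "tau2": tau2, "PDI": tau2 / tau1}

# Two equal-strength modes a decade apart in timescale (illustrative values).
m = reduced_metrics(tau_i=[1.0, 10.0], sigma_i=[5.0, 5.0])
# delta_sigma = 10 Pa; tau1 = 5.5 s; tau2 = 101/11 s; PDI = 202/121.
```

Note that $\tau_1$ weights each mode by its stress strength, so equal-strength modes a decade apart give a PDI well above 1, the single-timescale limit.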
A summary of thixotropic spectra and the relevant basis functions, with parallels to viscoelastic spectra, is given in Table~\ref{ch2:tab:thixo-VE-compare}. \begin{table}[!ht] \caption{Thixotropic spectra with parallels to viscoelastic spectra. Definitions of $Q(\tau)$ and $H(\tau)$ following Martinetti \emph{et al.} \cite{Tschoegl_book1989,LucaRHEMAOS_2018,LucaRHE_TSS2019}} \label{ch2:tab:thixo-VE-compare} \begin{tabular}{l|lllll} \hline Property & Data & Basis function & Interpretation & Discrete & Continuous (on ${\rm d}\tau$)\\ \hline Thixo, $\dot{\gamma}_{\rm f}>\dot{\gamma}_{\rm i}$& $\sigma^-(t;\dot{\gamma}_{\rm i},\dot{\gamma}_{\rm f})$ & $\sigma^-_i e^{-t/\tau^-_i}$ & Breakdown & $\sigma^-_i(\tau^-_i)$ & $X^-(\tau^-)$ \\ Thixo, $\dot{\gamma}_{\rm f}<\dot{\gamma}_{\rm i}$& $\sigma^+(t;\dot{\gamma}_{\rm i},\dot{\gamma}_{\rm f})$ & $\sigma^+_i \left(1-e^{-t/\tau^+_i}\right)$ & Recovery & $\sigma^+_i(\tau^+_i)$ & $X ^+(\tau^+)$ \\ \hline Viscoelastic\cite{DPL_vol1} & $G(t;\gamma_0)$ & $G_i e^{-t/\tau_i}$ & Decay & $G_i(\tau_i)$ & $Q(\tau)$\\ (relaxation) & $\eta^+(t;\dot{\gamma}_0)$ & $\eta_i \left(1-e^{-t/\tau_i}\right)$ & Growth & $\eta_i(\tau_i) = \tau_i G_i(\tau_i)$ & $H(\tau) = \tau Q(\tau)$\\ \hline \end{tabular} \end{table} The reduced parameters from using thixotropic spectra, $\Delta\sigma$ and $\tau_n$, obtained from Eq.~\ref{eq:moments-taun-define},~\ref{eq:moments-M0-deltasigma}, can be used to describe any thixotropic data or constitutive model response. Using $\Delta\sigma$ and $\tau_n$, we can now populate Fig.~\ref{fig:intro_Ashby} to report and compare the degree of thixotropy in different materials. Such results are presented in \S~\ref{sec:lowdim}. The application of this framework to step shear data is illustrated through a theoretical example in the following section. 
\subsection{Illustration: relation between continuous, discrete, and reduced metrics\label{subsec:spectra-application}} \begin{figure}[!ht] \begin{minipage}[!ht]{0.49\textwidth} (a) \centering \includegraphics[scale=0.39,trim={0 0 0 0},clip]{thixo_demo_stress_v3} \end{minipage} \begin{minipage}[!ht]{0.49\textwidth} (b) \centering \includegraphics[scale=0.39,trim={0 0 0 0},clip]{thixo_demo_spec} \end{minipage} \caption{\label{fig:stress-spec_demo}Log-normal thixotropic recovery spectrum compared to discrete spectrum approximations. Simulated data with $\sigma_0 = 10$~Pa, $\Xi_{\rm m} = 10$~Pa, $\tau_{\rm m} = 1$~s, $\theta = 1$, in Eq.~\ref{eq:logn}. (a) Stress response to step down in shear rate, (b) co-plot of each spectrum with important reduced metrics shown.} \end{figure} Using the log-normal $\Xi^+(\tau^+)$ of Eq.~\ref{eq:logn} with parameters $\Xi_{\rm m} = 10$~Pa, $\tau_{\rm m} = 1$~s, $\theta = 1$, and $\sigma_0 = 10$~Pa, in Eq.~\ref{eq:contspec-down}, we can produce the typical stress recovery shape in Fig.~\ref{fig:stress-spec_demo}(a). This is similar to the schematic in Fig.~\ref{fig:intro_signals}(a2). The shape of $\Xi^+(\tau^+)$ is shown as the solid line in Fig.~\ref{fig:stress-spec_demo}(b). The log-median timescale $\tau_{\rm m} = 1$~s is associated with the transients shown in Fig.~\ref{fig:stress-spec_demo}(a), while the peak $\Xi_{\rm m} = 10$~Pa is related to the amount of stress change. The PDI of the timescales is related to $\theta$, the spread of the log-normal spectrum \cite{LucaRHEMAOS_2018}. From $\Xi^+$, we obtain the generalized low-dimensional metrics, namely the moments and characteristic timescales, using Eq.~\ref{eq:moments-M0-deltasigma} and \ref{eq:taun}. The timescales $\tau^+_1$ and $\tau^+_2$ for the spectra are shown on the plot. Note that $\tau^+_2 > \tau^+_1$. Also note that $\tau^+_1 = 1.6~{\rm s} > \tau_{\rm m} = 1~{\rm s}$. This is because we calculate mean timescales in $\tau$ space, and not $\ln\tau$ space. 
If we were to use Eq.~\ref{eq:moments-tau-define-log}, we would get $\tau^+_{1,{\rm log}} = \tau_{\rm m}$. But, as mentioned earlier, this is a matter of choice, and we use mean timescales defined in $\tau$ space, similar to those in viscoelasticity. We also show high-density and low-density discrete distributions, co-plotted in Fig.~\ref{fig:stress-spec_demo}(b). The high-density (50 modes) and low-density (5 modes) distributions $(\sigma^+_i)$ are discrete representations of $\Xi^+$ (obtained by fitting $\sigma^+_i$ over a set domain of $\tau^+_i$ to match the $\sigma^+(t)$ in Fig.~\ref{fig:stress-spec_demo}(a) resulting from the continuous distribution). Note that $\sigma^+_i$ lies below $\Xi^+$ for a sufficiently large number of modes, whereas for fewer modes the values of $\sigma^+_i$ lie above $\Xi^+$. This is a consequence of the fact that each distribution must result in the same stress signal when integrated (or summed): as more modes are added, the value of each mode must decrease for the sum to produce the same stress response. The plot also shows a single data point $(\tau^+_1,\Delta\sigma^+)$, the pair of reduced, summarizing metrics obtained using Eq.~\ref{eq:moments-M0-deltasigma} and \ref{eq:taun}; in essence, it is a single-mode representation of the polydisperse system. This point can be plotted in the thixotropic Ashby charts. It lies above all other distributions in Fig.~\ref{fig:stress-spec_demo}(b), as expected. It can be visualized as a single-mode stress response $\sigma^+(t) = \sigma_0 + \Delta\sigma^+\left(1-e^{-t/\tau^+_1}\right)$, shown in Fig.~\ref{fig:stress-spec_demo}(a). This clearly shows that $\tau^+_1 \neq \tau_{\rm m}$, and that $\tau^+_1$ is indeed a mean timescale of the original stress response, since the $(\tau^+_1,\Delta\sigma^+)$ mode passes through the more complex $\sigma^+(t)$ response.
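The distinction between the two averages can be checked numerically for the log-normal spectrum of this illustration ($\tau_{\rm m}=1$~s, $\theta=1$): the $\tau$-space mean follows $\tau_1=\tau_{\rm m}e^{\theta^2/2}\approx1.65$~s, consistent with the value quoted above, while the log-space mean of Eq.~\ref{eq:moments-tau-define-log} recovers $\tau_{\rm m}$. A sketch:

```python
import numpy as np

# Log-normal spectrum of the illustration: Xi_m = 10 Pa, tau_m = 1 s, theta = 1.
Xi_m, tau_m, theta = 10.0, 1.0, 1.0
ln_tau = np.linspace(-12.0, 12.0, 20001)   # wide ln(tau) grid; d(ln tau) cancels in ratios
tau = np.exp(ln_tau)
Xi = Xi_m * np.exp(-0.5 * ((ln_tau - np.log(tau_m)) / theta) ** 2)

# tau-space mean, Eq. (tau1): M1 / M0 with M_n = integral of tau**n * Xi d(ln tau).
tau_1 = np.sum(tau * Xi) / np.sum(Xi)

# log-space mean, Eq. (log) with n = 1: Xi-weighted average of ln(tau).
ln_tau_1_log = np.sum(ln_tau * Xi) / np.sum(Xi)
```

Here `tau_1` evaluates to $e^{1/2}\approx1.65$~s while `exp(ln_tau_1_log)` returns $\tau_{\rm m}=1$~s, making the choice of averaging space explicit.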
It must be noted that the stress generated from $(\tau^+_1,\Delta\sigma^+)$ is not equivalent to a single-mode discrete spectrum fit of the $\sigma^+(t)$ data. The illustrative example in Fig.~\ref{fig:stress-spec_demo} provides intuition for the breadth of a thixotropic spectrum. Here, ${\rm PDI} = \tau^+_2/\tau^+_1 = 2.08$, quantifying the deviation from a single timescale. It shows that discrete spectra can provide excellent descriptions of $\sigma^+(t)$, and that the log-spaced shape of $\sigma_i(\tau_i)$ will be similar to the shape of any underlying $\Xi(\tau)$ representation (if the $\tau_i$ are equally spaced in $\ln\tau_i$). In what follows with experimental data, we use discrete spectra, log-spaced in $\tau_i$, to fit $\sigma^+(t)$ data and obtain reduced metrics for a number of thixotropic materials. The materials used and the rheometry and fitting methods employed are described in the next section, and the fitting results are shown in \S~\ref{sec:results-fits-disc}. \section{Experimental: materials and methods\label{sec:matmeth}} \subsection{Materials\label{subsec:materials}} We test two common yield-stress fluids: a 4.0~wt\% suspension of Laponite RD colloidal clay particles in water, which forms a sparse physical gel when dispersed and is typically identified as a thixotropic material \cite{MewisWagnerReview2009}; and a 1.0~wt\% suspension of Carbopol 940 microgel particles in water, a model yield-stress fluid formed of jammed, swollen, crosslinked polyacrylic acid microgel particles, known to exhibit negligible thixotropy \cite{Piau_cpol2007,Divoux_PRL2013}. These suspensions were prepared using methods from the literature \cite{BCB_PhysFluids2015,SSRHE_JFM2020}. We also use step shear data for two model thixotropic yield-stress fluids from the open literature: a 3.23~vol\% carbon black suspension in naphthenic oil \cite{DullaertMewis_structkinetics2006}, and a 2.9~vol\% fumed silica suspension in paraffin oil with PIB \cite{ArmWag_JOR2016}.
Both form thixotropic colloidal dispersions when suspended in their respective media. We show reduced metrics for all the above materials in the Ashby diagrams, but for brevity only show detailed step shear rate data for Laponite in the next section. \subsection{Rheometry methods\label{subsec:methods}} The data for all materials are either a step down from an initial shear rate of $\dot{\gamma}_{\rm i} = 5.00~\rm{s}^{-1}$ to various lower final shear rates $\dot{\gamma}_{\rm f} = 0.25,~0.50,~1.00,~2.50~{\rm s}^{-1}$, or a step up from $\dot{\gamma}_{\rm i} = 0.10~\rm{s}^{-1}$ to various higher shear rates $\dot{\gamma}_{\rm f} = 0.50,~1.00,~2.50,~5.00~{\rm s}^{-1}$ (as shown in Fig.~\ref{fig:intro_signals}(a)~and~(b), respectively). Data for fumed silica and carbon black were used directly from the published literature. For Laponite and Carbopol, the materials were sheared between 600 grit sandpaper-covered parallel plate geometries (uncorrected for parallel plate shear rate inhomogeneity), 25~mm in diameter, at two different gaps (750 and 1000~$\mu{\rm m}$) to verify the absence of wall slip \cite{RHE_baddata}, at $25^\circ$C, on an ARES-G2 strain-controlled, separated motor-transducer rheometer (TA Instruments). \subsection{Fitting methods\label{subsec:fitting}} The model fits were done using iterative regression algorithms that in their most general form minimize a weighted residual sum of squares, ${\rm RSS} = \sum\limits_{i=1}^d \left[ y_i - f(x_i) \right]^2/w_i$, where $y_i$ are the experimental data (shear stress $\sigma(t)$) and $f(x_i)$ are the model predictions over the control variable (time $t$). The residuals were weighted by the experimental data, $w_i = y_i$, which implicitly assumes an error proportional to the data \cite{Singh_2019}. For the stretched exponential fits, the data are fit to Eq.~\ref{eq:strexp_stress}, and Eq.~\ref{eq:strexp_spectrum} is used to interpret the fit as a continuous spectrum.
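The weighted residual sum of squares used in the fits, with $w_i = y_i$, is straightforward to implement; the numbers below are illustrative.

```python
import numpy as np

def weighted_rss(y, f, w=None):
    """RSS = sum_i (y_i - f_i)**2 / w_i. Default w_i = y_i weights residuals
    by the data, which implicitly assumes error proportional to the signal."""
    y, f = np.asarray(y, float), np.asarray(f, float)
    w = y if w is None else np.asarray(w, float)
    return np.sum((y - f) ** 2 / w)

# Illustrative stresses (Pa) and model predictions; residuals are (1, -1, 2).
y = np.array([10.0, 20.0, 40.0])
f = np.array([11.0, 19.0, 42.0])
# weighted_rss(y, f) = 1/10 + 1/20 + 4/40 = 0.25
```

Relative to an unweighted RSS, this weighting prevents the large-stress portion of a transient from dominating the fit of the low-stress tail.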
For the discrete spectrum fits, obtaining the underlying distribution is an ill-posed, inverse problem, and we employed Tikhonov regularization \cite{Tikhonov_1979,Kont_2010,Kontogiorgos,WeiSolomonLarson-JOR2018} to obtain the optimal set of $\sigma_i$ for a pre-distributed, log-spaced set of $\tau_i$ values (using open-source MATLAB scripts available online \cite{PCH}). \section{Spectra from experimental data\label{sec:results-fits-disc}} We first show results for step up tests, followed by step down tests. We co-plot the fits to the discrete spectra and the stretched exponential for the transient stress data, and hence also co-plot the discrete and continuous spectra (corresponding to the underlying spectra for the stretched exponential). The fit parameters for the stretched exponential for each dataset ($\sigma_{\rm se}$ and $\tau_{\rm se}$) are shown in Supplementary Information, \S~2. \subsection{Step up in shear rate\label{subsec:results-fits-disc-up}} Fig.~\ref{fig:disc_up_stress} shows step up in shear rate data for 4.0~wt\% Laponite suspension in water, where (a) shows the transient stress data, and fits to the discrete spectrum (Eq.~\ref{eq:spectra-steprate-up}, solid lines) and the stretched exponential (Eq.~\ref{eq:strexp_stress}, dashed lines). The data in (a) suggest that the timescales of breakdown are $\mathcal{O}(10)~{\rm s}$, and the amount of stress change increases from $\approx 20$ to $100~{\rm Pa}$ as $\dot{\gamma}_{\rm f}$ increases. The effect of thixotropy is most extreme for $\dot{\gamma}_{\rm f} = 5.00~{\rm s}^{-1}$. 
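A minimal sketch of the Tikhonov-regularized inversion described above (the fits in this work used the cited MATLAB scripts; the Python version below, with synthetic two-mode data and an illustrative regularization strength $\lambda$, only conveys the idea): the kernel $A_{ji}=e^{-t_j/\tau_i}$ maps mode strengths to the breakdown signal, and $\min_{\sigma}\|A\sigma-y\|^2+\lambda\|\sigma\|^2$ is solved as an augmented least-squares system.

```python
import numpy as np

t = np.linspace(0.05, 50.0, 200)
tau_i = np.logspace(-1, 2, 30)          # pre-distributed, log-spaced timescales

# Synthetic breakdown data from two known modes (steady-state stress subtracted).
y = 20.0 * np.exp(-t / 1.0) + 10.0 * np.exp(-t / 10.0)

# Kernel of the breakdown basis: A[j, i] = exp(-t_j / tau_i).
A = np.exp(-t[:, None] / tau_i[None, :])

# Tikhonov regularization via an augmented least-squares system:
# [A; sqrt(lam) I] sigma = [y; 0]  <=>  min ||A sigma - y||^2 + lam ||sigma||^2.
lam = 1e-2                              # illustrative regularization strength
A_aug = np.vstack([A, np.sqrt(lam) * np.eye(tau_i.size)])
y_aug = np.concatenate([y, np.zeros(tau_i.size)])
sigma_i, *_ = np.linalg.lstsq(A_aug, y_aug, rcond=None)

y_fit = A @ sigma_i                     # reconstructed breakdown signal
```

Production inversions typically also choose $\lambda$ systematically (e.g.\ by an L-curve) and may impose smoothness or non-negativity on $\sigma_i$; this sketch omits those refinements.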
\begin{figure}[!ht] \begin{minipage}[!ht]{0.49\textwidth} (a) \centering \includegraphics[scale=0.39,trim={0 0 0 0},clip]{lapo_up_stress} \end{minipage} \begin{minipage}[!ht]{0.49\textwidth} (b) \centering \includegraphics[scale=0.39,trim={0 0 0 0},clip]{lapo_up_spec} \end{minipage} \caption{\label{fig:disc_up_stress}Step up in shear rate data for 4.0~wt\% Laponite suspension in water, with $\dot{\gamma}_{\rm i}=0.1~\rm{s}^{-1}$. (a) Transient stress decay fit to a discrete thixotropic spectrum (solid lines) and to a stretched exponential (dashed lines) form of breakdown. (b) Co-plots of the discrete spectrum (circles) and the continuous spectrum interpretation of the stretched exponential (solid lines) for each value of $\dot{\gamma}_{\rm f}$. The $\bigstar$ shows the summarizing metric for the discrete spectrum at the most extreme case of thixotropy, corresponding to $\dot{\gamma}_{\rm f}=5.00~\rm{s}^{-1}$.} \end{figure} Thixotropic spectra help quantify these observations. The existence of multiple modes of breakdown, evidenced by the changing concavity in the $\sigma^-(t)$ data, is captured well by the discrete spectrum. The multi-modal nature of the thixotropic breakdown is exemplified in Fig.~\ref{fig:disc_up_stress}(b), where the discrete spectrum $\sigma^-_i(\tau^-_i)$ shows multiple peaks at distinct timescales, $\tau^-_i$. This feature is missing from $\Xi^-\left(\tau^-\right)$ for the single-mode stretched exponential. The summarizing metrics of the discrete spectra follow a systematic trend; both timescales and amount of stress change increase as $\dot{\gamma}_{\rm f}$ increases from 0.50 to 5.00~s$^{-1}$ ($\tau^-_1 = 1.06,~2.23,~2.56,~3.91$~s, and $\Delta\sigma^- = 18.52,~32.64,~64.77,~102.78$~Pa). The degree of thixotropy therefore is the most extreme for the highest $\dot{\gamma}_{\rm f}$, and the corresponding values of $(\tau^-_1,\Delta\sigma^-)$ are co-plotted with the spectra in Fig.~\ref{fig:disc_up_stress}(b). 
Obtaining such simple yet meaningful descriptions is the key result of this work. Similar features are retained for step down in shear data as shown in the next section, although the evidence is not conclusive if one only looks at the stress data; one must also look at the underlying spectrum to truly visualize the distribution of characteristic timescales during transient thixotropic processes. \subsection{Step down in shear rate\label{subsec:results-fits-disc-down}} \begin{figure}[!ht] \begin{minipage}[!ht]{0.49\textwidth} (a) \centering \includegraphics[scale=0.39,trim={0 0 0 0},clip]{lapo_down_stress} \end{minipage} \begin{minipage}[!ht]{0.49\textwidth} (b) \centering \includegraphics[scale=0.39,trim={0 0 0 0},clip]{lapo_down_spec} \end{minipage} \caption{\label{fig:disc_down_stress}Step down in shear rate data for 4.0~wt\% Laponite suspension in water, with $\dot{\gamma}_{\rm i}=5.0~\rm{s}^{-1}$. (a) Transient stress growth fit to a discrete thixotropic spectrum (solid lines) and to a stretched exponential (dashed lines) form of buildup. (b) Co-plots of the discrete spectrum (circles) and the continuous spectrum interpretation of the stretched exponential (solid lines) for each value of $\dot{\gamma}_{\rm f}$. The $\bigstar$ shows the summarizing metric for the discrete spectrum at the most extreme case of thixotropy, corresponding to $\dot{\gamma}_{\rm f}=0.25~\rm{s}^{-1}$.} \end{figure} Fig.~\ref{fig:disc_down_stress} shows step down in shear data for 4.0~wt\% Laponite suspension in water, where (a) shows the transient stress data, and fits to the discrete spectrum (Eq.~\ref{eq:spectra-steprate-down}, solid lines) and the stretched exponential (Eq.~\ref{eq:strexp_stress}, dashed lines). The difference between the discrete \emph{versus} continuous distributions is not apparent from the stress data; only from (b) do we see the existence of multiple modes of recovery, which a single-timescale stretched exponential function cannot capture. 
As for the step up data, the multi-modal nature of the thixotropic recovery is exemplified in Fig.~\ref{fig:disc_down_stress}(b), which is captured by $\sigma^+_i(\tau^+_i)$ but missed by $\Xi^+\left(\tau^+\right)$. The effect of $\dot{\gamma}_{\rm f}$ is more prominent for the recovery spectra, where the peaks in $\sigma^+_i$ shift toward longer timescales, indicating that the stress recovery takes longer at lower shear rates. This trend is also observed in the stretched exponential $\Xi^+$. This is reflected in the summarizing metrics, which again follow a systematic trend; both timescale and amount of stress recovered increase as $\dot{\gamma}_{\rm f}$ decreases from 2.50 to 0.25~s$^{-1}$ ($\tau_1^+ = 2.84,~5.20,~5.21,~6.14$~s, and $\Delta\sigma^+ = 18.99,~33.63,~36.83,~42.38$~Pa). The degree of thixotropy is the most extreme for the lowest $\dot{\gamma}_{\rm f}$, and the corresponding values of $(\tau_1^+,\Delta\sigma^+)$ are co-plotted with the spectra in Fig.~\ref{fig:disc_down_stress}(b). Such a monotonic trend of increasing recovery timescales with decrease in shear rate is also predicted by classical thixotropic constitutive equations \cite{Goodeve1938,Moore1959,MewisReview1979,LarsonWei-JoR2019} (see Supplementary Information, ~\S~3 for an example), and it is indeed a satisfying if not surprising result that the same trend exists in this dataset. Note that during any thixotropic process, breakdown or recovery, both shear-induced hydrodynamic forces/stresses (which tend to break up particle aggregates) and Brownian motion due to thermal fluctuations (which helps particles find minima in the interparticle potential energy landscape and assist floc growth) contribute to the net resultant transient stress evolution, i.e.\ the $\sigma(t)$ data shown in the plots. 
The relative strength of these two effects is often compared using the P\'eclet number \cite{MewisWagner_book2012}, defined as \begin{align}\label{eq:Peclet} {\rm Pe} \equiv \frac{6\pi\eta_{\rm m}\dot{\gamma}_{\rm c}a^3}{k_{\rm B}T} \sim \frac{\text{advective transport rate}}{\text{diffusive transport rate}}, \end{align} where $\eta_{\rm m}$ is the viscosity of the suspending medium, $a$ is the primary particle size, and $T$ is the absolute temperature of the system. For our step rate tests, the characteristic rate of shear in the system is $\dot{\gamma}_{\rm c} = \dot{\gamma}_{\rm f}$. The strength of Brownian motion is independent of shear rate. At high $\dot{\gamma}_{\rm f}$, shear-induced stresses dominate, and the effect of intrinsic particle interactions and Brownian motion is limited. This is the case during step up tests, and advective hydrodynamic stresses dictate the rate of structure breakdown. On the other hand, in step down tests, the thixotropic structure buildup is primarily influenced by interparticle forces and Brownian motion, and less so by advective hydrodynamics, at very low values of $\dot{\gamma}_{\rm f}$ such that $\rm Pe \ll 1$. The resultant recovery processes take much longer and the thixotropic transients are more easily observable. We therefore choose to compare the thixotropic properties of Laponite to other fluids by looking at recovery in step down in shear data in \S~\ref{sec:lowdim}. We show step down in shear data for other materials in the next subsection. Also note that this limit of $\rm Pe \ll 1$ for $\dot{\gamma}_{\rm f} \rightarrow 0$ is a key feature to report, because it represents the most extreme limit of thixotropic recovery.
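As a rough sanity check on the $\rm Pe \ll 1$ claim, the P\'eclet number can be evaluated for order-of-magnitude inputs. The particle size and medium viscosity below are illustrative assumptions (water-like medium, clay-platelet-scale particles), not measured values from this work.

```python
import math

# Illustrative order-of-magnitude inputs (assumed, not from this work):
eta_m = 1.0e-3        # Pa s, viscosity of suspending medium (~water)
a = 15e-9             # m, primary particle size (clay-platelet scale)
gdot_f = 0.25         # 1/s, lowest final shear rate in the step tests
k_B = 1.380649e-23    # J/K, Boltzmann constant
T = 298.0             # K, room temperature

# Pe = 6 pi eta_m gdot_c a^3 / (k_B T), with gdot_c = gdot_f for step tests
Pe = 6 * math.pi * eta_m * gdot_f * a**3 / (k_B * T)
print(f"Pe = {Pe:.2e}")  # far below 1: Brownian-dominated recovery
```

For nanoscale primary particles at these low final shear rates, $\rm Pe$ comes out many orders of magnitude below unity, consistent with Brownian motion dominating the structure recovery.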
Using the simplest, classical thixotropic constitutive equations \cite{Goodeve1938,Moore1959,MewisReview1979}, each timescale mode $\tau^+_i$ can be mapped to a kinetic aggregation rate constant in this limit, which helps connect our descriptions directly with physical quantities used in thixotropic constitutive modeling. The details are shown in Supplementary Information, \S~3. \subsection{Step down in shear rate for three model materials\label{subsec:results-fits-disc-down-others}} \begin{figure}[!ht] \begin{tabular}{ccc} \begin{minipage}[!ht]{0.32\textwidth} \begin{center} (a) carbon black \includegraphics[scale=0.25,trim={0 0 0 0},clip]{CB_down_stress} \end{center} \end{minipage} \begin{minipage}[!ht]{0.32\textwidth} \begin{center} (b) fumed silica \includegraphics[scale=0.25,trim={0 0 0 0},clip]{fc_down_disc-strexp_stress} \end{center} \end{minipage} \begin{minipage}[!ht]{0.32\textwidth} \begin{center} (c) Carbopol \includegraphics[scale=0.25,trim={0 0 0 0},clip]{cpol_down_stress} \end{center} \end{minipage} \end{tabular} \vspace{\floatsep} \begin{tabular}{ccc} \begin{minipage}[!ht]{0.32\textwidth} \begin{center} \includegraphics[scale=0.25,trim={0 0 0 0},clip]{CB_down_spec} \end{center} \end{minipage} \begin{minipage}[!ht]{0.32\textwidth} \begin{center} \includegraphics[scale=0.25,trim={0 0 0 0},clip]{fc_down_disc-strexp_spec} \end{center} \end{minipage} \begin{minipage}[!ht]{0.32\textwidth} \begin{center} \includegraphics[scale=0.25,trim={0 0 0 0},clip]{cpol_down_spec} \end{center} \end{minipage} \end{tabular} \caption{\label{fig:down-disc-others}Step down in shear data for (a) 3.23~vol\% carbon black (from Dullaert \emph{et al.} \cite{DullaertMewis_structkinetics2006}), (b) 2.9~vol\% fumed silica (from Armstrong \emph{et al.} \cite{ArmWag_JOR2016}), (c) 1~wt\% Carbopol. The top row shows the transient stress in recovery for each material, the fit lines correspond to the discrete (solid lines) and stretched exponential continuous (dashed lines) spectra. 
The bottom row shows the corresponding discrete (circles) and stretched exponential continuous (solid lines) spectra, similar to the data shown for Laponite in Fig.~\ref{fig:disc_down_stress}. The $\bigstar$ in each spectrum plot shows the summarizing metric for the most extreme case of thixotropy, corresponding to $\dot{\gamma}_{\rm f}=0.25~\rm{s}^{-1}$.} \end{figure} In addition to Laponite, we analyzed thixotropic recovery data for three other materials listed in \S~\ref{subsec:materials}. Fig.~\ref{fig:down-disc-others} shows the data for each material as separate columns, with the transient stress data and fit spectra for each, similar to Fig.~\ref{fig:disc_down_stress}. The features observed for both the stretched exponential $\Xi^+$ and discrete $\sigma^+_i$ in recovery for Laponite are similarly observed for these materials. The thixotropic timescales range from $<1~{\rm s}$ (e.g.\ carbon black at $\dot{\gamma}_{\rm f}=2.5~\rm{s}^{-1}$) to $>10~{\rm s}$ (e.g. fumed silica at $\dot{\gamma}_{\rm f}=0.25~\rm{s}^{-1}$). For carbon black and silica, the amount of stress change relative to the steady state value increases as $\dot{\gamma}_{\rm f}$ decreases, while this ratio is much smaller for Carbopol at all shear rates; this is quantified with $\Delta\sigma^+/\sigma_{\infty}$, shown in Fig.~\ref{fig:Ashby_all}. We also observe the existence of distinct peaks in the spectra, in addition to a spread in timescales, for Laponite, carbon black, and Carbopol, whereas we observe a broad spectrum and an absence of distinct, separate peaks in the fumed silica data. The polydispersity of thixotropic modes is captured in the spectral method developed here, including the ability to see distinct groups of modes via peaks in the distributions. The reduced summarizing metrics for Fig.~\ref{fig:disc_down_stress}, \ref{fig:down-disc-others} are shown in the following section. 
\section{Low-dimensional descriptions from spectra\label{sec:lowdim}} Fig.~\ref{fig:Ashby_all} is an Ashby-style co-plot \cite{Ashby_book2011} of reduced, low-dimensional metrics of thixotropic recovery spectra from Figs.~\ref{fig:disc_down_stress}~and~\ref{fig:down-disc-others}. This allows direct comparison between the four different materials using $\Delta\sigma^+/\sigma_{\infty}$ \emph{versus} $\tau^+_1$, where $\Delta\sigma^+$ is the total stress recovered under $\dot{\gamma}_{\rm f}$, and is obtained from Eq.~\ref{eq:moments-M0-deltasigma}. The recovered stress is normalized by the steady state stress at long times, defined as $\sigma_{\infty} \equiv \lim_{t\rightarrow\infty}\sigma(t)$. Since $\sigma_{\infty}$ is used as the reference stress and the recovery is quantified as a percentage of this value, we get \begin{align} 0 \leq \frac{\Delta\sigma^+}{\sigma_{\infty}} \leq 1, \end{align} and this allows for comparison of thixotropic recovery for different materials. $\tau^+_1$ is the first mean timescale of recovery, obtained by using $n=1$ in Eq.~\ref{eq:taun}. \begin{figure}[!ht] \begin{minipage}[!ht]{0.49\textwidth} (a) \centering \includegraphics[scale=0.39,trim={0 0 0 0},clip]{Ashby_all} \end{minipage} \begin{minipage}[!ht]{0.49\textwidth} (b) \centering \includegraphics[scale=0.39,trim={0 0 0 0},clip]{Ashby_all-max} \end{minipage} \caption{\label{fig:Ashby_all}Ashby co-plots for thixotropic recovery, based on data in Figs.~\ref{fig:disc_down_stress}, \ref{fig:down-disc-others}. (a) Reduced metrics $(\tau^+_1,\Delta\sigma^+/\sigma_\infty)$ are obtained from discrete spectrum fits to step down in shear data for different fluids. The symbols get smaller as the final shear rate decreases ($\dot{\gamma}_{\rm f} = 2.50,~1.00,~0.50,~0.25~{\rm s}^{-1}$) for each material. 
(b) The mean timescales $\tau^+_1,\tau^+_2$ (visualizing ${\rm PDI}=\tau^+_2/\tau^+_1$) for the most thixotropic response for each material (observed at $\dot{\gamma}_{\rm f} = 0.25~{\rm s}^{-1}$), plotted against $\Delta\sigma^+/\sigma_\infty$.} \end{figure} The closer the value of $\Delta\sigma^+/\sigma_{\infty}$ is to 1, the greater is the change in stress due to thixotropic recovery relative to the final state of the sample, and the greater the significance of thixotropic effects in the material for a given set of $\left( \dot{\gamma}_{\rm i},\dot{\gamma}_{\rm f} \right)$. Among these data, there is a single value of $\dot{\gamma}_{\rm i} = 5~{\rm s}^{-1}$, and only $\dot{\gamma}_{\rm f}$ is varied. Accordingly, we see the effect of changing the shear history in Fig.~\ref{fig:Ashby_all}(a), where smaller symbols correspond to a lower $\dot{\gamma}_{\rm f}$, approaching the Brownian-dominated recovery limit. As $\dot{\gamma}_{\rm f}$ is decreased, $\Delta\sigma^+/\sigma_{\infty}$ increases, as observed in Fig.~\ref{fig:Ashby_all}, and hence the degree of thixotropy also increases in terms of the stress recovered. But as mentioned in \S~\ref{sec:intro}, one must consider both stresses and timescales to judge the degree of thixotropy in a material. The longer the timescale $\tau^+_1$, the longer the thixotropic transients last, and the greater the significance of thixotropic effects in the material, once again for a given set of $\left( \dot{\gamma}_{\rm i},\dot{\gamma}_{\rm f} \right)$. The same trend with changing $\dot{\gamma}_{\rm f}$ is observed in $\tau^+_1$ as with $\Delta\sigma^+/\sigma_{\infty}$: as $\dot{\gamma}_{\rm f}$ is decreased, $\tau^+_1$ increases, and thus the degree of thixotropy is most significant at the lowest $\dot{\gamma}_{\rm f}$ (Fig.~\ref{fig:disc_down_stress}, Fig.~\ref{fig:down-disc-others}). This is observed for all four materials shown in Fig.~\ref{fig:Ashby_all}(a).
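The reduced metrics themselves are simple to compute once a discrete spectrum is in hand. The sketch below assumes moment-based definitions of the form $M_n=\sum_i \sigma^+_i (\tau^+_i)^n$, $\Delta\sigma^+=M_0$, $\tau^+_n=M_n/M_{n-1}$, and ${\rm PDI}=\tau^+_2/\tau^+_1$; these forms are consistent with the text's usage but are assumptions for the numbered equations, and the spectrum values are illustrative.

```python
# Reduced metrics from a discrete recovery spectrum {(sigma_i, tau_i)}.
# Assumed moment-based forms: M_n = sum(sigma_i * tau_i**n),
# Delta_sigma = M_0, tau_n = M_n / M_(n-1), PDI = tau_2 / tau_1.
# (Illustrative spectrum values, not fitted data from this work.)
spectrum = [(10.0, 1.0), (10.0, 3.0)]   # (sigma_i [Pa], tau_i [s])

def moment(n):
    """n-th moment of the discrete spectrum."""
    return sum(s * t**n for s, t in spectrum)

delta_sigma = moment(0)          # total stress recovered
tau_1 = moment(1) / moment(0)    # first mean timescale
tau_2 = moment(2) / moment(1)    # second mean timescale
pdi = tau_2 / tau_1              # polydispersity of the spectrum
print(delta_sigma, tau_1, tau_2, round(pdi, 3))
```

Because $\tau^+_2$ weights longer timescales more heavily, the gap between $\tau^+_2$ and $\tau^+_1$ (and hence $\rm PDI > 1$) grows with the breadth of the spectrum, which is exactly what panel (b) visualizes.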
We see that the timescales are comparable across all materials ($\approx 1-10$~s), but the fractional stress recovery is very different. Considering the lowest $\dot{\gamma}_{\rm f}$ for each, Carbopol has $\Delta\sigma^+/\sigma_{\infty} = 0.09$, and is therefore the least thixotropic, even though its $\tau^+_1 = 5.41$~s is similar to that of the other materials. Laponite is more thixotropic than Carbopol, while carbon black and fumed silica are the most thixotropic. The trends in Fig.~\ref{fig:Ashby_all}(a) form a compact summary and are consistent with basic thixotropic constitutive models. As mentioned in \S~\ref{sec:results-fits-disc}, the monotonic trend in $\tau^+_1$ with $\dot{\gamma}_{\rm f}$ is in accordance with classical structural kinetics-based thixotropic models in the literature \cite{MewisWagnerReview2009,LarsonWei-JoR2019} (see Supplementary Material, \S~3). Structure recovery through Brownian motion proceeds slowly, and at lower $\dot{\gamma}_{\rm f}$, the effect of Brownian dynamics is more prominent. The increase in $\Delta\sigma^+/\sigma_{\infty}$ with decreasing $\dot{\gamma}_{\rm f}$ is also physically meaningful; as $\dot{\gamma}_{\rm f}$ decreases, $\rm Pe$ decreases, Brownian dynamics dominate over advective hydrodynamics, and more structure can build up (or equivalently, less structure is broken down due to shear-induced stresses). It is interesting that the two quantities used here qualitatively depend on $\dot{\gamma}_{\rm f}$ in the same way. The Ashby plots are simple to interpret and use: the further away a material lies from the origin, the more significant is the effect of thixotropy in the material (moving from the gray to the white region). Fig.~\ref{fig:Ashby_all}(a) does not indicate the breadth of underlying thixotropic spectra, but showing both $\tau^+_1$ and $\tau^+_2$, as in Fig.~\ref{fig:Ashby_all}(b) for the minimum $\dot{\gamma}_{\rm f}$ data, reveals this.
The more distinct $\tau^+_1$ and $\tau^+_2$ are, the more polydisperse the thixotropic recovery spectrum, as quantified by ${\rm PDI} \equiv \tau^+_2/\tau^+_1$ from Eq.~\ref{eq:moments-tau-define-PDI} (${\rm PDI} = 3.31,~1.77,~1.49,~1.41$ for Carbopol, Laponite, carbon black, and fumed silica respectively). This representation in Fig.~\ref{fig:Ashby_all}(b) is general, a description applicable to any material, even those with different underlying constitutive behavior. It is therefore a powerful tool that is useful for comparison and selection across different microstructural classes of thixotropic materials. This technique has already been applied by the authors in studying drop impact dynamics of thixotropic yield-stress fluids \cite{SSRHE_JFM2020,SSRHE_PRF2021}, where reduced thixotropic metrics obtained from step shear data were used to select a material (Laponite) with a long recovery timescale and significant stress changes, which have important consequences in their drop impact stick-splash outcomes \cite{SSRHE_PRF2021}. \section{Conclusions\label{sec:conclusions}} In this work, we have developed a general framework for thixotropic spectra to quantitatively describe thixotropy using a distribution of recovery/breakdown timescales in step tests. The moments and means of the distributions are used to generate low-dimensional metrics for quantifying the degree of thixotropy in the material, and these descriptions are both model-independent and material-agnostic. Alternative reduced metrics for quantifying thixotropic timescales have been used by Divoux \emph{et al.} \cite{Divoux_PRL2013} and Jamali \emph{et al.} \cite{Jamali_PRL2019} using areas of thixotropic hysteresis loops.
However, the methods developed in our work employ faster experimental protocols, largely isolate the effect of viscoelastic transients from thixotropic ones, can be applied to not only unidirectional but also oscillatory shear tests, and most importantly, separately describe recovery and breakdown and reveal spectral distributions. We mention the analogy of the methods developed here with viscoelastic relaxation spectra in Table~\ref{ch2:tab:thixo-VE-compare}. Similarly, for stress-controlled step tests, an analogy may be drawn in which the strain rate response is expressed in terms of retardation spectra distributed over retardation timescales. The methodology proposed in this work is not free from limitations, experimental or analytical. Rheometer motors that impose strains/strain rates on samples via angular displacements have a finite response time to ramp up or down $(t_{\rm mot})$, and step changes are not instantaneous. The shortest duration of change between $\dot{\gamma}_{\rm i}$ and $\dot{\gamma}_{\rm f}$ is limited by the inertia and response time of the rotor-geometry assembly. The quickest step changes are essentially ramps that at best occur over $t_{\rm mot} \sim \mathcal{O}(10)$~ms, and therefore transients in $\sigma(t;\dot{\gamma}_{\rm i},\dot{\gamma}_{\rm f})$ for $t \lesssim t_{\rm mot}$ cannot be observed. The credibility of short-time data is also limited by viscoelastic waves in the sample \cite{RHE_baddata}. Consequently, the range of $\tau$ and $\tau_i$ used for fitting spectra to stress data has to be truncated. The temporal location and value of $\sigma_0$ are thus also influenced by $t_{\rm mot}$. This in turn makes the value of $\Delta\sigma/\sigma_{\infty}$, and therefore the reduced descriptions used in the Ashby charts, susceptible to the limitations of the instrument used to collect the data. Specific details of the shape of the distribution are lost upon reduction into its moments.
It is possible to generate completely different distributions and stress responses from the same set of moments, one prominent example being Anscombe's quartet \cite{Anscombe}. This also typifies the ill-posed nature of the fitting problem. Multiple distributions can give the same stress response, and therefore the same means and moments. Inverting the summation in Eq.~\ref{eq:discspec-intro} to obtain $\sigma_i$ is an ill-posed problem, similar to those faced with viscoelastic spectra. The approach of using a specific analytically defined continuous spectrum to fit to stress data and obtain $\Xi(\tau)$ may be a workaround, especially when employed with Bayesian inference methods and credibility metrics, as has been applied to LVE \cite{FreundEwoldt}, MAOS \cite{LucaRHEMAOS_2018}, and thixotropic constitutive model selection \cite{ArmWag_JOR2016}. Future work will benefit from this methodology of thixotropic spectra and Ashby diagrams. The high-density discrete spectra $\sigma_i$ can be employed to obtain dominant timescale modes, and a low-density superposition of continuous spectra centered around these timescales could be used to fit the data to very high accuracy. The method developed here is not predictive, but only descriptive. Yet, the value of descriptive methods cannot be overstated, e.g.\ material properties which are extremely useful in rheological analysis despite not being predictive. Our method also gives insight into thixotropic properties of fluids that might inform predictive models; for instance, revealing or suggesting shapes of underlying spectra for multi-mode constitutive equations, which can lead to better predictions. Shapes of the discrete spectra can also be used to inform the shape of parameterized continuous distributions of $\Xi(\tau)$. These make the fitting process less computationally intensive. 
The concept of thixotropic spectra also provides interpretation to commonly-used fit functions, such as the stretched exponential (Eq.~\ref{eq:strexp_stress},~\ref{eq:strexp_spectrum}). Lastly, the methodology developed here can be applied to any material system that evolves with time to obtain summarizing metrics that describe the data in a simple yet meaningful way. We now have a way to judge ``how thixotropic'' a material is in terms of both timescale and amount of change, which is important for dimensionless groups involving thixotropic and aging systems in general (e.g.\ the Mutation number \cite{Winter_1994,Winter_2018,GeriGHM_PRX2018}), and for rheological design requirements involving thixotropy \cite{ChaiRHE_ARFM} in applications such as 3D printing \cite{Poole_SM2019}, spray coating, fire suppression \cite{BCB_GFM2015,SSRHE_GFM2018}, flow batteries \cite{Helal_2014,Helal_2016,Narayanan}, food processing, and other uses not yet imagined. \section{Supplementary Material\label{sec:SI}} See Supplementary Information for more information about the continuous spectrum for the stretched exponential and the parameters for the step down data fits for all materials to the stretched exponential model. \section{Acknowledgements\label{sec:acknowledgements}} This project was funded by the National Science Foundation CAREER Award, CBET-1351342, and the Joint Center for Energy Storage Research (JCESR), U.S.\ Department of Energy. S.S.\ thanks Y.\ Wang, N.\ Ramlawi, and Dr.\ C.\ Saengow for helpful discussions.
\section{Introduction} Crowdsourcing is common in human-in-the-loop learning systems wherein data for a task is obtained through the services of a large number of people. This data could be used for training a machine learning algorithm, or could be used independently to make various decisions. Across a wide range of sectors such as marketing, advertising or industrial design, crowdsourcing has made a significant impact. Crowdsourcing is also making its way into more critical fields such as healthcare \cite{swan}, and is proving to be a faster alternative than conventional methods for predicting the spread of infectious diseases \cite{churana}, for diagnosis and treatment \cite{meyer}, and other healthcare applications. In machine learning systems, crowdsourcing can aid in several aspects such as in: producing data, debugging and checking of models \cite{wu}, for active learning \cite{yan}, and to improve human-computer interaction in multi-agent systems \cite{abhigna}. Data creation, perhaps, remains one of the most common purposes of crowdsourcing. This includes using humans-in-the-loop for curating the data, for pre-processing and cleaning the data, and for generating labels. In most cases, generating labels is a straightforward task (e.g., object classification, face recognition, parts of speech tagging, etc.). However, in some applications (e.g., evaluating aesthetics of an image, assessing quality of machine-generated music, etc.), there could be ambiguity about the ground truth due to the subjective nature of the task. In such applications, eliciting ground truth from noisy human evaluations becomes a challenge. There has been an active line of research focusing on addressing the aforementioned challenge of eliciting ground truth in highly subjective tasks \cite{anca,felt,subramanian,giancola,procaccia}. 
Strategies vary from a simple majority voting consensus to more sophisticated techniques such as multi-annotator statistical models, and prior knowledge models. Motivated by these works, we consider a subjective task --- that of assessing {\it experienced emotions} --- to compare the performance of a simple voting consensus scheme with an optimal aggregation methodology \cite{procaccia}. This task offers an excellent scenario to study some challenges associated with human evaluations and to analyze how human evaluation compares to assessments from machine learning systems. Specifically, our problem setting and research questions are as described next. \subsection{Setting} We consider a conversational user interface (CUI) that is designed to support textual conversations between people needing emotional support (i.e. the users) and trained counselors who offer listening and support on the backend of the conversation platform (i.e. the human listeners). First, depending on the type of issue discussed between the user and the human listener, the conversation could involve varying amounts of emotional content. Some of these emotions might be explicitly expressed in the conversation while others are only felt or experienced within the user \cite{ochs}. For example, a user may not be in touch with feeling scared and may instead express anger. Knowing the experienced emotions of a user can help in understanding their internal states and in addressing their concerns better. In short, the problem we consider is the assessment of experienced emotions of emotionally distressed users based on their textual conversations with human listeners. We deliberately choose experienced emotion assessment as this is more subjective than expressed emotion assessment. This problem is a highly subjective task. Each evaluator judges the user's experienced emotion based on their personal experiences, socio-cultural backgrounds, and introspective abilities \cite{mcarthy}. 
Furthermore, collecting information about their experienced emotions from the distressed users themselves is highly unreliable due to the following reasons: \begin{itemize} {\item {\it Limited introspective capabilities of users:} A recent study involving over 800 studies of self-awareness indicated that emotionally distressed people have limited self-introspection abilities, and response biases \cite{eurich}. } \item {{\it Transient nature of problems:} Many emotional conditions are short lived in terms of their time duration, intensity, frequency of occurrence, and often go unnoticed \cite{morris}. People with such problems appear normal in most ways and thus it is hard to even recognize that they have a problem. } \item { {\it Social stigma:} People are often less comfortable in opening up about such problems as they fear a social stigma associated with the treatment \cite{ahmedani}.} \end{itemize} Thus, experienced emotion assessment is a highly subjective task with no objective ground truth available. \subsection{Research Questions (RQs)} The broad goal of our study is to explore if crowdsourcing is always helpful in extracting reliable information in highly subjective scenarios. Specifically, does crowdsourcing clarify or confound for highly subjective tasks? We explore this question by considering the problem setting described earlier. Within that setting, we investigated the following questions: \begin{itemize} \item {RQ1: Are some optimum evaluator aggregation strategies such as \cite{procaccia} better than simple majority voting consensus for highly subjective tasks such as the one considered? } \item {RQ2: For the task of experienced emotion assessment, how does a machine-learning algorithm that is not explicitly modeled to handle evaluator subjectivities compare with some common crowdsourcing aggregation strategies?} \end{itemize} \subsection{ Key Findings} We list some findings below. A detailed description can be found in Section 5.
\begin{itemize} \item {For the task considered, a simple voting consensus scheme is more or less as effective as an optimal aggregation strategy. } \item {For the task of choosing the top experienced emotion, the machine's result matched the human evaluators' results for 75\% of the instances. For the task of choosing the top 3 experienced emotions (i.e. rankings) among the 6 emotions, the machine's evaluation matched the human evaluation for roughly 50\% of instances.} \end{itemize} \section{Related Works} We review related work pertaining to crowdsourcing for emotional/mental healthcare and crowdsourcing for subjective tasks. \subsection{Crowdsourcing for Mental Healthcare} Mental health problems have become very common globally. A recent study reported that in the United States alone, about 56\% of adults with mental health conditions do not receive the treatment they need \cite{nguyen}. This, in turn, has triggered an interest in developing crowdsourcing platforms for treating mental health conditions. Yet, there are only a few works that look at crowdsourcing techniques for addressing mental health conditions. In \cite{morris}, the author presents an online intervention called Panoply that administers emotion-regulatory support. In another work by \cite{naslund}, the authors surveyed the effect of randomized trials using online crowdsourced methods for recruitment, intervention delivery and data collection in people with mental conditions such as schizophrenia. As can be noted from these illustrations, most crowdsourcing platforms for healthcare focus on physiological conditions or well-defined mental conditions. In this work, we study the performance of crowdsourced evaluations for the assessment of emotional health conditions.
In \cite{rainer}, the authors analyze the benefit of using crowdsourcing for estimating the media playout in a multimedia system. In \cite{ghadirayam}, the authors evaluate the performance of crowdsourced data for assessing picture quality. In \cite{hobfeld}, the authors survey methods for assessing the quality of crowdsourced data for multimedia quality of experience. The authors in \cite{checco} propose an elegant technique to aggregate individual responses for interval data. However, this method is not applicable to nominal or ordinal scales, which is the focus of this study. The authors of \cite{alfaro} propose a method wherein the evaluators are asked to compare items to obtain top k lists. In a similar vein, the authors in \cite{procaccia} propose pairwise comparisons followed by estimation of a minimum feedback arc set of the tournament to optimally aggregate the uncertain preferences. We evaluate the feasibility of such methods in our study and show that pairwise comparisons are not very efficient for the problem considered. \section{Data Collection} The dataset consists of four distinct user conversations with a single human listener (HL), and was collected using a CUI focused on connecting users with HLs. The total duration of the conversations was over two hours. The participants consented to the use of anonymized conversations for research and presentation purposes. Conversations between users and HLs dealt with topics such as relationship issues, work stress, etc. Each conversation was divided into transcripts. A transcript is defined as a part of the conversation wherein the user is continuously engaged in expressing themselves for more than three minutes at a stretch, independent of any interleaving HL responses. As such conversations are mostly about personal issues, user privacy preferences constrain data collection. Not all users are comfortable sharing their data.
This is a major bottleneck in being able to accumulate larger datasets of this nature that involve sensitive personal data. Many users abruptly left the CUI as the conversations progressed towards their personal issues. Additionally, not all transcripts specifically deal with emotionality; some transcripts consisted of the user and HL getting to know each other, engaging in back and forth small talk before any actual issues surfaced. Thus, although there were over fifty transcripts and 16 users in the dataset, emotion-related content was only available for twelve transcripts across 4 different users. We report results on this subset. \vspace{0.2in} {\bf Illustrative Transcript:} \\ {\it User:} Hi, can you please help me with anxiety. \\ {\it HL:} I'm sorry you're feeling anxious. Can you tell me more about it? \\ {\it User:} I have no self-confidence and have a girlfriend who I really like. I can't cope thinking she is going to find someone better. I am drinking to kill the anxiety. \\ {\it HL:} It sounds like you're feeling really anxious about your girlfriend staying with you. That sounds really difficult.\\ {\it User:} She is out with work tonight and a colleague who she dated for a bit is there. I don't know how to cope. \\ {\it HL:} It sounds like you're feeling really anxious that she is out with other people including her ex. And you not being there with her is making you feel worse. I'm sorry - that's a really hard feeling. \\ {\it User:} Can you help? \\ {\it HL:} I can listen to you. And I really am sorry that you're feeling so anxious. Maybe you can tell me more about your relationship and why you are feeling insecure.\\ {\it User:} I am an insecure person. I am a good-looking guy, always get chatted up, but I have no confidence.\\ \section{Amazon Mechanical Turk Survey} We conducted a survey on Amazon Mechanical Turk (AMT) to investigate how human evaluators assessed experienced emotions in the conversation snippets.
All 195 participants were US residents, about 62\% were male and 38\% female. Eighty one percent of them had attended college. Seventy percent of them were aged between 25 and 49. First, participants were asked a screening question regarding prior active listening experiences (i.e. participation in any of the following relevant fields: counseling, psychology, psychiatry, nursing, caregiving, non-violent communication class, active listening training, mediation). We eliminated 79 people who did not have any relevant experience. In order to ensure that the participants had basic active listening abilities, we conducted another screen. Participants were given three conversation snippets and were instructed that each conversation snippet was an excerpt of a longer conversation between someone who was seeking counseling for their problems (i.e. a user) and someone who had offered to listen to their problems (i.e. a human listener). Note that these three conversation snippets did not correspond to the conversations in our dataset, but were designed for screening. They were asked to read each conversation snippet and answer the question: What is the primary emotion that the user is expressing in this conversation snippet? They were presented with six options: Angry, Happy, Sad, Scared, Surprised, and Worried, and asked to select one. For these three test cases, the answers were designed to be relatively easy and therefore seven participants who answered incorrectly were excluded from the study. The final part of the survey consisted of presenting actual transcripts from our dataset. They were asked to read each transcript and answer the following question, with the associated guidance: {\it Which emotions might the user be experiencing in this transcript text? To answer this, we would like you to look beyond the text and infer what you think the user might be experiencing. 
It can be a little tricky to see how an experienced emotion is different from an expressed emotion, so here is a hypothetical example --- in some cultures, some people may not be comfortable directly expressing anger, so they might express sadness instead.} First, participants were asked to choose the top experienced emotion from the 6 emotion choices mentioned earlier. Second, they were also asked to rank the top 3 experienced emotions given in the order of their likelihoods (from most likely experienced to least likely experienced). We did not ask the Turkers to rank all 6 because of the cognitive burden it imposes on the Turker (which, in turn, would lead to the increased probability of providing noisy assessments). That is, ranking all 6 emotion choices would be too difficult a task. For example, if surprised or happy are never mentioned or implied in the text, it is not possible to rank them either. \section{ Analysis of Research Questions} We analyze the results of the AMT survey in the context of the research questions (RQs) described earlier. \subsection{RQ1: Simple Voting Consensus vs Optimal Aggregation} The goal here is to compare the performance of an optimum voting aggregation strategy \cite{procaccia} with that of a simple majority voting consensus. We provide an overview of the method employed, the results obtained, and discuss the implications. \subsubsection{Method} While many methods have been proposed to effectively aggregate subjective evaluations, they are for specific tasks as outlined in Section 2.2. The closest pertinent one for our purposes is the method suggested in \cite{procaccia} wherein the authors propose to extract reliable responses through ranking preferences. We wish to evaluate the efficacy of this method for the task of estimating experienced emotions.
We compare this method with a simple voting consensus scheme wherein the emotion with the highest number of votes/preferences is considered to be representative of the most experienced emotion. As is evident, the method suggested in \cite{procaccia} is only applicable to the case of ordinal data. An uncertain evaluator response can be viewed as a distribution over rankings. A confident evaluator will report a single emotion whereas a highly uncertain evaluator will report all emotions. As elaborated in \cite{procaccia}, this can be formulated as the popular NP-hard problem of finding the minimum feedback arc set of a tournament. Specifically, the set of possible experienced emotions constitutes the vertices of a directed graph. The weights $w_{ab}$ of this directed graph are determined by the number of evaluators preferring emotion $a$ to emotion $b$ as the experienced emotion. This directed graph with two weighted edges between each pair of vertices (one in each direction) is called a weighted tournament. The minimum feedback arc set of a tournament (also known as minimum feedback ranking) is the problem of finding the ranking of the vertices such that the sum of weights of edges that disagree with the ranking provided by the evaluators is minimized. In other words, this is the same as the popular voting rule known as the Kemeny rule \cite{lv}, which finds a ranking that minimizes the sum of Kendall Tau (KT) distances from the input rankings. The authors in \cite{procaccia} show that the minimum feedback ranking of a tournament with weights defined in this manner minimizes the expected sum of Kendall Tau distances from the evaluator preferences. The method suggested in \cite{procaccia} requires the weights $w_{ab}$ for all possible emotions. It is to be noted that the evaluators only ranked the top 3 experienced emotions. However, without loss of generality, the emotions not chosen by an evaluator are considered to have a lower preference/ranking for that evaluator.
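The weighted-tournament construction and Kemeny-style minimum-feedback ranking just described can be sketched in Python; with only six emotions, brute force over all $6! = 720$ orderings is feasible. The ballot format and the tie-handling for unranked emotions below are our illustrative assumptions, not the exact implementation used in the study.

```python
from itertools import permutations

EMOTIONS = ["angry", "happy", "sad", "scared", "surprised", "worried"]

def tournament_weights(ballots):
    """w[(a, b)] = number of evaluators strictly preferring a over b.

    Each ballot is an ordered list of up to 3 emotions (most likely first);
    emotions absent from a ballot are treated as tied below the ranked ones.
    """
    w = {(a, b): 0 for a in EMOTIONS for b in EMOTIONS if a != b}
    for ballot in ballots:
        ranked = list(ballot)
        unranked = [e for e in EMOTIONS if e not in ranked]
        for i, a in enumerate(ranked):
            for b in ranked[i + 1:] + unranked:  # a beats later-ranked & unranked
                w[(a, b)] += 1
    return w

def kemeny_ranking(ballots):
    """Brute-force Kemeny: minimize the total weight of pairwise disagreements."""
    w = tournament_weights(ballots)
    def cost(order):
        # Sum the weight of every evaluator preference that the order violates.
        return sum(w[(b, a)]
                   for i, a in enumerate(order)
                   for b in order[i + 1:])
    return min(permutations(EMOTIONS), key=cost)

def plurality_top3(ballots):
    """Simple voting consensus: count mentions per emotion, take the top 3."""
    votes = {e: 0 for e in EMOTIONS}
    for ballot in ballots:
        for e in ballot:
            votes[e] += 1
    return sorted(EMOTIONS, key=lambda e: -votes[e])[:3]
```

For six alternatives the exhaustive search is instant; for larger candidate sets one would switch to the integer-programming or approximation methods discussed in the voting literature.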
Also, in obtaining individual evaluator rankings across all the 6 emotions under consideration, we assume equal ranks for all the emotions not ranked/preferred by an individual evaluator (since the evaluator is asked to pick 3 out of the 6 choices). We use a simple voting consensus method for comparison with the aforementioned method. The more votes an emotion garners, the higher its ranking. In this manner, the top 3 emotions are determined. \subsubsection{Results and Discussion} Surprisingly, for the aforementioned task, the top 3 emotions as determined by the optimal aggregation method were the same as those obtained by the simple voting consensus method. The above result indicates that in highly subjective scenarios, a simple voting consensus is as effective as a ranking scheme (if not better). This is somewhat intuitive. When someone is highly uncertain, their uncertainty increases when they are asked to perform additional tasks. Selecting an emotion is easier than selecting and ranking them. So, the extra task of ranking brings in additional uncertainty. \subsection{RQ2: Performance of a machine learning algorithm that is uninformed about human subjectivities} The goal here is to compare the performance of an objective assessment from a machine learning algorithm (i.e. the algorithm is uninformed about human prejudices) in the subjective evaluation task under study. We designed a machine learning algorithm without explicitly modeling the subjective prejudices of the human evaluators. As a reference, a brief summary of the algorithm is provided here.
\subsubsection{Method} Motivated by the fact that Bayesian methods have been successful in modeling several aspects of human cognition \cite{grifthis}, we propose a novel Bayesian framework that fuses information about expressed emotion probabilities (which may be computed from existing emotion recognition methods) and sentiment embeddings \cite{tang} to compute probabilities of experienced emotions specific to individual users. We represent the probability of an emotion $i$ being experienced as $P(emo_{experienced}=i)$. An emotion recognition algorithm is run on the user's texts to determine the probability of all expressed emotions. Specifically, we construct a dictionary of synsets or synonyms for an emotion and based on the number of times a word appears in the conversation, a probability of an expressed emotion is calculated. We represent the probability of an emotion $j$ being expressed as $P(emo_{expressed}=j)$. Next, we use sentiment embeddings \cite{tang} to measure similarities between words depicting a pair of emotions. This information is computed across several people and through large datasets that contain emotional content (e.g. blogs, news articles, etc.). This is thus reflective of the general relatedness between two emotion-indicating words. These similarity measures are then normalized. Specifically, this constitutes the likelihood probability of expressing one emotion given that another emotion is experienced and is denoted by $P_l(emo_{expressed}=j | emo_{experienced}=i)$. Let $r_{emo-i,emo-j}$ be the measure of relatedness between two emotions ($i$ and $j$) \cite{tang}. We compute the likelihood probability $P_l(emo_{expressed}=j | emo_{experienced}=i)$ by normalizing the similarities $r_{emo-i,emo-j}$ over the space of all possible emotions $k$ that are observed, \begin{equation} P_l(emo_{expressed}=j | emo_{experienced}=i)= \frac{r_{emo-i,emo-j}}{\sum_{k} r_{emo-i,emo-k}}. \end{equation}
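A minimal numerical sketch of this likelihood construction, using a hypothetical $6\times 6$ matrix of relatedness scores in place of actual sentiment-embedding similarities: each row is normalized to yield the likelihoods, and an assumed (here uniform) experienced-emotion distribution is pushed through the total-probability sum to obtain expressed-emotion probabilities.

```python
import numpy as np

# Hypothetical relatedness scores r[i, j] between experienced emotion i and
# expressed emotion j (in practice these come from sentiment embeddings).
rng = np.random.default_rng(0)
r = rng.random((6, 6))

# Row-normalize the similarities so that, for each experienced emotion i,
# the likelihoods P_l(expressed = j | experienced = i) sum to one.
L = r / r.sum(axis=1, keepdims=True)

# Total probability: the expressed-emotion distribution implied by a given
# experienced-emotion distribution x.
x = np.full(6, 1 / 6)   # assumed uniform prior over experienced emotions
t = L.T @ x             # P(expressed = j) = sum_i P_l(j | i) * P(i)
```

Since each row of `L` is a proper conditional distribution and `x` is a probability vector, `t` is again a probability vector.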
Using the law of total probability, the probability of expressing an emotion $j$, i.e., $P(emo_{expressed}=j)$, can be written as \begin{equation} P(emo_{expressed}=j)=\sum_{i} P_l(emo_{expressed}=j | emo_{experienced}=i)\, P(emo_{experienced}=i). \end{equation} Note that the experienced emotion probabilities, i.e., $P(emo_{experienced}=i)$ for all possible emotion state variables are of interest to us. Without loss of generality, let us assume that there are $m$ possible expressed emotion states and $n$ possible experienced emotion states and that $n>m$ (our framework applies to cases where $n \le m$ as well). If the probabilities of all the experienced emotions of interest are denoted by the $n$ dimensional column vector $\bf{x}$, the likelihood probabilities for all possible $m \cdot n$ emotion pairs by the matrix $\bf{L}$, and the probabilities of all expressed emotions by the $m$ dimensional column vector $\bf{t}$, then the resulting system of equations can be written as a constrained optimization problem as follows: \begin{equation} \min ||\bf{Lx-t}||_2^2 \end{equation} subject to the constraints $||\bf{x}||_1=1$ and $x_i \ge 0$ for all $i$. The aforementioned problem can be solved by applying the Karush-Kuhn-Tucker (KKT) conditions \cite{kkt} as follows: \begin{equation} D||{\bf Lx-t}||_2^2 +\lambda D({\bf x^T 1} -1) +{\bf \mu}=0 \end{equation} In eq. (4), ${\bf 1}$ is an $n$ dimensional column vector of ones, $\lambda$ is the Lagrange multiplier, $D$ denotes the derivative with respect to ${\bf x}$, and ${\bf \mu} = [\mu_{1}, \mu_{2}, \ldots, \mu_{n}]$ is the vector of KKT multipliers satisfying the complementary slackness condition $\mu_i x_i = 0$ with $\mu_i \le 0$ for all $i$. \subsubsection{Results and Discussion} For the task of choosing the top experienced emotion, the machine's result matched the human evaluators' results for 75\% of the transcripts. For the task of choosing the top 3 experienced emotions (i.e.
rankings) among the 6 emotions, the machine's evaluation matched the human evaluation for roughly 50\% of transcripts. As is evident from the algorithm, the machine provides probabilities for possible experienced emotions. The set of probable experienced emotions chosen by the machine was always smaller than the set of probable emotions chosen by the human evaluators. As a result, the uncertainty associated with the machine evaluation was lower than the uncertainty associated with human evaluation. The machine's assessment is based on the training data. As described in Section 5.3.1, likelihood probabilities are computed using large volumes of public text corpora. Such corpora are a shared influence on all of us, and thus the likelihood probabilities are reflective of the general correlations between emotions that we are routinely exposed to. As a result, the top experienced emotion chosen by the machine matches the top experienced emotion as chosen by the human evaluators as well. However, the distribution of possible experienced emotions was more spread out for the human evaluation. This could be due to the biases/perceptions of individual evaluators (evaluators evaluate the transcripts based on their personal experiences and prejudices). Moreover, some evaluators could have limited introspective abilities. Fatigue and lack of concentration in the task could be another reason for the flat distribution. This distribution of human evaluation thus contains valuable information. The mean of the distribution can inform us about the general beliefs (as is reflected by the top choices). Additionally, the tail end of the distribution could give us information about the beliefs and biases of individual Turkers. \section{Conclusions} In this work, we investigated the effectiveness of some crowdsourcing methods in the highly subjective task of estimating experienced emotions of distressed users. The study revealed many interesting results.
First, we found that a simple voting consensus is as effective as an optimal aggregation method for the task considered. Second, we found that a machine learning algorithm that is not explicitly modeled to characterize evaluator biases is as reliable as the human evaluation in terms of assessing the most dominant experienced emotions. We believe a comparison of human and machine evaluation can also help in distinguishing aspects such as general beliefs and evaluator-specific beliefs. \vskip -0.1in
\section{Introduction} \label{sec:intro} Neurons are responsible for encoding information in the central nervous system. Lower-level functions are often concentrated in specific parts of the brain, processing information received from neurons throughout the body, often by means of signaling nerves, which are bundles of axons that reach out from the neurons. Higher-level functions depend on considerably larger and more complex networks of neurons with various types of feedback loops. The chemical connections between neurons are handled via synapses, where neurotransmitters are extruded into the extracellular space by the presynaptic neuron. The neurotransmitters form a part of a chemical process which may initiate a potential wave into the postsynaptic neuron --- this wave is known as the \emph{action potential}. During the action potential, the membrane potential quickly rises and falls, and the resulting signal propagates along the cell \cite{purves_neuroscience_2012}. The process that underlies this propagation is the regulation of ion concentrations in both the intracellular cytoplasm and the extracellular space caused by integral membrane proteins called \emph{ion channels}. There are many variants of ion channel proteins, whose functions have only recently become better understood through studies using experimental techniques such as X-ray crystallography \cite{rees_crystallographic_2000}. To our current understanding, ion channels reside in one of many conformational states, which are either \emph{closed} (non-conducting) or \emph{open} (conducting). If the conformational state is open, ions are allowed to pass through pathways called pores from the extra- to the intracellular space, or in the opposite direction. If the conformational state is closed, ions are blocked from entering the channel \cite{hille}.
Ion channels become activated either in response to the binding of a chemical ligand or, in the case of so-called voltage-gated ion channels, in response to voltage changes across the membrane \cite{hille,purves_neuroscience_2012}. In this work, we focus on voltage-gated ion channels, which are important for the initiation and the propagation of action potentials along the neuronal fiber. Mathematical models of neurons were initially formed by measuring electrically induced responses of a neuron using techniques such as the voltage- or current clamp. From those experiments, transition properties and parameters of single channel gating could be identified \cite{hille}. Historically, models of firing neurons were formulated as ordinary differential equations (ODEs). \review{However, in some specific cases experimental as well as theoretical findings suggest that the gating of ion channels is more accurately described by a stochastic process.} The variance of the gating process, also known as the \emph{channel noise}, is thought to be important for information processing in the dendrites and to explain different phenomena regarding action potential initiation and propagation \cite{schneidman_ion_1998, white_channel_2000,faisal_noise_2008, cannon_stochastic_2010}. \review{For example, it has been shown that intrinsic noise is essential for the existence of subthreshold oscillations in stellate cells \cite{dorval_channel_2005}, that it contributes to irregular firing of cortical interneurons \cite{stiefel_origin_2013}, and that it can explain firing correlation in auditory nerves \cite{moezzi_ion_2016}.} Another important aspect of neuronal modeling is the investigation of action potentials propagating outward from the neuron into the extracellular space. Simulations of such extracellular fields are one of the important methods used in computational neuroscience \cite{einevoll_modelling_2013}.
Common usage includes the validation of experimental methods such as EEG and extracellular spike recordings, or the modeling of physiological phenomena which cannot be easily investigated empirically \cite{einevoll_modeling_2010}. In this paper we present a novel three-stage multiscale model\review{ing framework} consisting of the following components: \begin{inparaenum}[(a)] \item on the microscale, the gating process of the ion channels is governed by a continuous-time discrete-state Markov chain, \item on the intermediate scale, the current-balance and the cable equation, which are responsible for the action potential initiation and propagation, are integrated in time as an ODE system, \item on the macroscale, the propagation of the trans-membrane current into an electrical field in extracellular space is achieved using partial differential equations (PDEs). \end{inparaenum} In \S\ref{sec:scales} the three modeling layers are explained in some detail. The numerical method via split-step modeling is summarized in \S\ref{sec:num}, where we also explain how the spatial coupling is achieved efficiently. We illustrate our method by some relevant examples in \S\ref{sec:experiments} and offer a concluding discussion in \S\ref{sec:conclusions}. \section{Neuronal modeling at different scales} \label{sec:scales} We now describe the modeling \review{framework} at the individual scales, including the associated modeling assumptions. The microscale physics in the form of continuous-time Markov chains (CTMC) \review{for the ion channels and the associated ion currents} are discussed in \S\ref{sec:microscale}, the ODE-model for the action potential \review{along the neuronal geometry} in \S\ref{sec:mesoscale}, and the PDE-model of the extracellular electric field via Maxwell's equations in \S\ref{sec:macroscale}. For convenience, a schematic explanation of the modeling framework is found in Figure~\ref{fig:scheme_overview}.
\begin{figure}[H] \centering \includegraphics[width=1\textwidth]{fig/scheme} \caption{\review{Schematic overview of the proposed multiscale modeling framework. The CTMC solution on the microscale depends on the membrane voltage that is computed on the mesoscale. This coupling is bidirectional; the mesoscale solution depends on the microscopic stochastic ion current. The macroscopic solution considered here depends solely on the mesoscopic membrane current.}} \label{fig:scheme_overview} \end{figure} \subsection{Microscale: ion channel gating} \label{sec:microscale} The gating of voltage-dependent ion-channels is modeled by a Markov process. Hodgkin and Huxley \cite{hodgkin_quantitative_1952} first proposed that the gating is governed by a set of gating variables. We assume that each gating variable takes values in a discrete state space and that a single combination of gating variables corresponds to an open and conducting ion channel \cite{hille}. \begin{figure} \centering \schemestart $m_0h_0$ \arrow(A--B){<=>[$3 \alpha_m$][$\beta_m$]} $m_1h_0$ \arrow(B--C){<=>[$2 \alpha_m$][$2\beta_m$]} $m_2h_0$ \arrow(C--D){<=>[$\alpha_m$][$3\beta_m$]} $m_3h_0$ \arrow(@A--E){<=>[$\alpha_h$][$\beta_h$]}[-90] $m_0h_1$ \arrow(@B--F){<=>[$\alpha_h$][$\beta_h$]}[-90] $m_1h_1$ \arrow(@C--G){<=>[$\alpha_h$][$\beta_h$]}[-90] $m_2h_1$ \arrow(@D--H){<=>[$\alpha_h$][$\beta_h$]}[-90] $\mathbf{m_3h_1}$ \arrow(@E--@F){<=>[$3 \alpha_m$][$\beta_m$]} \arrow(@F--@G){<=>[$2 \alpha_m$][$2 \beta_m$]} \arrow(@G--@H){<=>[$\alpha_m$][$3\beta_m$]} \schemestop \caption{Kinetic scheme for the $m_3h_1$ sodium channel gating proposed by Hodgkin and Huxley \cite{hodgkin_quantitative_1952}. Only the gating state $m_3h_1$ represents an open ion channel, for all other states the ion channel is closed.} \label{fig:scheme} \end{figure} The gating states and the transitions between them can be written in the form of a kinetic scheme. 
An example for the voltage-gated $m_3h_1$ sodium channel is depicted in Figure~\ref{fig:scheme}. \review{The notation for the channel states indicates that there are two involved gating variables $m$ and $h$, where variable $m$ takes one of four states and, independently, variable $h$ takes one of two states. In total we thus arrive at 8 states with a total of 10 reversible transitions, see Figure~\ref{fig:scheme}. The total number of channels in the open state $m_3h_1$ implies a certain conductivity as will be detailed below.} The transition rates between the states depend on the membrane voltage and on the state itself. When these transitions take place in a microscopic environment where molecular noise is present, a continuous-time Markov chain (CTMC) is the most suitable model. \begin{figure}[htb!] \centering \schemestart $S_1$ \arrow(A--B){<=>} $S_2$ \arrow(B--C){<=>} $S_3$ \arrow(C--D){<=>} $S_4$ \arrow(@A--E){<=>}[-90] $S_5$ \arrow(@B--F){<=>}[-90] $S_6$ \arrow(@C--G){<=>}[-90] $S_7$ \arrow(@D--H){<=>}[-90] $\mathbf{S_8}$ \arrow(@E--@F){<=>} \arrow(@F--@G){<=>} \arrow(@G--@H){<=>} \schemestop \caption{An equivalent scheme to Figure~\ref{fig:scheme}, where each combination of gating variables corresponds to a discrete state of a Markov chain.} \label{fig:simple_scheme} \end{figure} We rewrite such schemes in a general, yet more manageable notation as follows. Assign the different combinations of gating variables an individual state $S_i$, $i = 1,2,\ldots,M_{\mbox{\scriptsize{states}}}$. For the sodium scheme in Figure~\ref{fig:scheme} there are 8 states and only the state $S_8$ corresponds to an open and conducting ion channel. 
The resulting equivalent scheme is shown in Figure~\ref{fig:simple_scheme} and an exemplary set of transitions and rates is \begin{align} \label{eq:S3toS4} S_3 &\longrightarrow S_4 \text{, rate: } \alpha_m(V_m), \\ S_3 &\longrightarrow S_2 \text{, rate: } 2\beta_m(V_m), \\ S_3 &\longrightarrow S_7 \text{, rate: } \alpha_h(V_m), \end{align} where $V_m$ is the membrane potential at the location of the ion channel. For this particular example, \review{the transition rates used by Hodgkin and Huxley are \cite{hodgkin_quantitative_1952}} \begin{align} \alpha_m(V_m) &= \frac{0.1(V_m+40)}{1-e^{-(V_m+40)/10}}, \label{eq:rate1} \\ \beta_m(V_m) &= 4e^{-(V_m+65)/18}, \label{eq:rate3} \\ \alpha_h(V_m) &= 0.07e^{-(V_m+65)/20}, \\ \beta_h(V_m) &= \frac{1}{1+e^{-(V_m+35)/10}}.\label{eq:rate2} \end{align} \review{The rates \eqref{eq:rate1}--\eqref{eq:rate2} were originally obtained by empirical fitting to experimental data and are formulated relative to the resting potential of the particular neuron model. We consider a small neural compartment $a$, obtained from a discretization of the neuronal fiber into small enough segments such that the potential is approximately constant within each compartment. Let there be} a total of $N^a$ ion channels of the considered type (e.g.~the $m_3h_1$ sodium channel) on the compartmental surface. At any instant in time $t$, let $s_i$ be the number of channels in state $S_i$, $i = 1,2,\ldots,M_{\mbox{\scriptsize{states}}}$. The scheme in Figure~\ref{fig:simple_scheme} now directly translates into Markovian transition rules for the states $s \in \mathbf{Z}_+^8$, \review{i.e.~the ion-channel counts}, thus defining a CTMC $(t,s) \in \mathbf{R}_+ \times \mathbf{Z}_+^8$. To model the electrophysiological properties of the ion channel, we define \review{the single open channel conductance $\gamma$}, as well as the neuronal membrane area $A^a$ such that the area density of ion channels is $\varrho^a = N^a/A^a$. 
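As a quick illustrative check (not part of the modeling framework itself), the voltage-dependent rates \eqref{eq:rate1}--\eqref{eq:rate2} can be evaluated numerically to recover the well-known steady-state gating values at the resting potential; the integration time step and resting potential below are illustrative choices.

```python
import math

# Hodgkin-Huxley transition rates for the m and h gating variables,
# with the membrane potential V in mV.
def alpha_m(V): return 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
def beta_m(V):  return 4 * math.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * math.exp(-(V + 65) / 20)
def beta_h(V):  return 1 / (1 + math.exp(-(V + 35) / 10))

def steady_state(alpha, beta, V):
    """Fixed point of do/dt = alpha (1 - o) - beta o, i.e. alpha/(alpha+beta)."""
    return alpha(V) / (alpha(V) + beta(V))

V_rest = -65.0                                 # assumed resting potential (mV)
m_inf = steady_state(alpha_m, beta_m, V_rest)  # approx. 0.053
h_inf = steady_state(alpha_h, beta_h, V_rest)  # approx. 0.596
open_fraction = m_inf ** 3 * h_inf             # macroscopic open ratio m^3 h
```

The tiny resulting open fraction is consistent with the sodium channel being almost entirely closed at rest.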
If $O^a$ is the count of the open ion channels, e.g., for the above example this is $O^a = S^a_8$, \review{we arrive at the total conductance} \begin{align} G^a = \frac{O^a}{N^a} \varrho^a A^a \gamma. \label{eq:stochcurrent} \end{align} Next, the trans-membrane current produced by the ion channel is given by Ohm's law, \begin{align} I^a = G^a (V^a_m - E), \label{eq:ioncurrent} \end{align} where $E$ is the reverse potential of the ion channel, that is, the membrane potential under which the trans-membrane current $I^a$ is zero, \review{and where $V^a_m$ is the membrane potential in compartment $a$.} In the macroscopic setting proposed by Hodgkin and Huxley \cite{hodgkin_quantitative_1952}, the transitions between the gating states happened according to first order reaction kinetics. The fraction of closed channels $(1-o^a)$ becomes open with rate $\alpha$ and the fraction of open channels $o^a$ becomes closed with rate $\beta$. \review{Note that $\alpha$ and $\beta$ may be voltage-dependent for specific types of ion-channels, as in \eqref{eq:S3toS4}--\eqref{eq:rate2}.} The resulting ODE for each macroscopic gating variable $o^a$ can then be written as \begin{align} \frac{do^a}{dt} = \alpha (1 - o^a) - \beta o^a. \end{align} Under the macroscopic formulation, the ratio of open channels is approximated as \begin{align} \label{eq:macroion} \frac{O^a}{N^a} \approx \prod_{i=1}^{M}\limits [o^a_i]^{p_i}, \end{align} \review{where $M$ is the number of gating particles for the particular ion channel model, and $p_i$ is the exponent of the $i$th gating variable $o_i$.} \review{As a concrete example, the sodium channel $m_3h_1$ depicted in Figure~\ref{fig:scheme} is gated by gating variables $m$ and $h$, which in the macroscopic formulation have an exponent of three and one, respectively. 
In relation to \eqref{eq:macroion} this means that $o_1=m$ with $p_1=3$, and $o_2=h$ with $p_2=1$, where $m$ and $h$ are here just symbols denoting the concentration of the gating variables.} The ratio of open channels is then approximated as \begin{align} \frac{O_{Na}}{N_{Na}} \approx m^3 h^1, \label{eq:detratio} \end{align} and the deterministic gating function for each variable obeys \begin{align} \label{eq:detgating1} \frac{dm}{dt} &= \alpha_{m} (1 - m) - \beta_{m} m, \\ \frac{dh}{dt} &= \alpha_{h} (1 - h) - \beta_{h} h, \label{eq:detgating2} \end{align} where $\alpha$ and $\beta$ are the voltage-dependent transition rates defined in \eqref{eq:rate1}--\eqref{eq:rate2}. \review{Solving \eqref{eq:detgating1}--\eqref{eq:detgating2}} and using \eqref{eq:ioncurrent} thus constitutes the classical Hodgkin-Huxley ODE model for the ionic current. By instead evolving the Markov chain defined implicitly in Figure~\ref{fig:simple_scheme} and using \eqref{eq:stochcurrent}--\eqref{eq:ioncurrent}, a stochastic current is defined. \review{This stochastic model is the result of the microphysical assumption of discrete ion states obeying Markovian (memoryless) transition rules. For large enough numbers of ion channels, a transition into the corresponding deterministic model can be expected under rather broad conditions \cite{Markovappr}. When a fine enough compartmentalization of a particular neuron is made, however, the stochastic model can be expected to be more realistic in the sense that the effects missing in a corresponding deterministic model cannot be ignored.} We now proceed to discuss the appropriate model for evolving the resulting current, be it stochastic or deterministic, over the neuronal morphology. \subsection{Intermediate scale: current-balance and cable equation} \label{sec:mesoscale} On the intermediate scale, we solve for the current-balance equation of the neuron, describing the relation between the membrane voltage and the different current sources.
We first take the ionic current sources from the microscale into account. We next model the capacitance of the membrane, a so-called passive neuronal property. As the neuronal morphology is divided into several compartments we also include a term for the propagation of voltage between the compartments, resembling the cable equation. \review{As before,} denote the compartments by the index $a \in \{1,\ldots,M_{\mbox{\scriptsize{compartments}}}\}$. For each connection between compartments, an entry is made in the adjacency matrix $\mathcal{A}$ \cite{cuntz2010one}. Typically, a compartment is connected to two or three neighbors. If a compartment is connected to only a single neighbor, it represents one of the end-points of the neuron. An example is found in Figure~\ref{fig:neuron_model}. \begin{figure}[htb!] \begin{center} \includegraphics[width=0.45\textwidth]{fig/tree} \includegraphics[width=0.45\textwidth]{fig/dA} \end{center} \caption{Geometry of a neuron and the associated adjacency matrix $\mathcal{A}$ (figure adapted from~\cite{cuntz2010one}). \review{A nonzero entry in $\mathcal{A}$ implies a connection between the corresponding compartments.}} \label{fig:neuron_model} \end{figure} The current flowing in each compartment is composed from different components. The current source of compartment $a$ is the total axial current with respect to its connected compartments $b\in\mathcal{B}(a)$. 
\review{Note that as $\mathcal{A}$ represents a directed graph, the connected component $b\in \mathcal{B}(a)$ represents in-neighbors that are connected via an edge in the direction to vertex $a$, as well as out-neighbors which are connected by an edge outwards of vertex $a$.} According to Ohm's law the current equals \begin{align} \label{eq:conduct} I^a_{axial} = \sum_{b\in\mathcal{B}(a)} G^a_b(V^a_m - V^b_m) = \sum_{b\in\mathcal{B}(a)} G^a_b V^a_m - \sum_{b\in\mathcal{B}(a)} G^a_b V^b_m, \end{align} where $G^a_b$ is the conductance between compartments $a$ and $b$, and where $V^a_m$ is the trans-membrane potential of compartment $a$. We then add the ionic currents computed at the microscale. For the sake of generality we extend the previous notation of a single type of ion channel into $c$ types, $\mathcal{T}=\{T^1,\ldots,T^c\}$, acting simultaneously. Generalizing \eqref{eq:ioncurrent} we arrive at the total ionic current \begin{align} I^a_{ionic}=\sum_T G^a_T (V^a_m-E_T). \label{eq:ion_middle} \end{align} \review{ We now deviate slightly from the discussion and first consider a space-continuous solution obtained by solving for the propagation of the potential on a line oriented along the $x$-axis. Adding the axial current flux $I_{axial}$, the total trans-membrane current flux $I_m$, and a possibly additional external current source $I_{inj}$ gives \begin{align} \label{eq:interscale0} C_m \frac{d}{dt} V_{m} &= I_{inj}-I_m-I_{axial}, \end{align} where $C_m$ is the capacitance of the neuronal membrane. We have that the gradient of the inter-compartmental leakage current $I_L$ is \begin{equation} \label{eq:intercompartment_current} \frac{\partial I_L}{\partial x} = -I_m. \end{equation} The leakage current accounts for the ions that diffuse out of the intracellular space due to the natural permeability of the neuronal membrane. } \review{ Given a suitable resting potential, Ohm's law is \begin{equation} I_L = G_L \frac{\partial V_m}{\partial x}. 
\label{eq:ohms_law} \end{equation} Substituting $I_m$ from \eqref{eq:interscale0} using \eqref{eq:intercompartment_current}--\eqref{eq:ohms_law}, we get what is commonly referred to as the \emph{cable equation}, \begin{equation} G_{L}\frac{\partial ^{2}V_m}{\partial x^{2}} = C_{m}\frac{\partial V_m}{\partial t} - I_{inj} + I_{axial}. \label{eq:cable_equation} \end{equation} Given the parabolic character of this equation, the Crank--Nicolson scheme \cite{crank_practical} is a natural choice when designing numerical methods. } \review{Turning now to the compartment-discrete version,} the leakage current in each compartment \review{$a$} is given as \begin{align} \label{eq:leak} I^a_L = G^a_L (V^a_m-E^a_L), \end{align} where the membrane leakage conductivity $G^a_L$ and the leak reverse potential $E^a_L$ are regarded as constants. The total trans-membrane current flux now becomes \begin{align} \label{eq:membraneflux} I^a_m = I^a_L + I^a_{ionic} = (G^a_L + \sum_T G^a_T)V^a_m - (G^a_LE^a_L + \sum_T G^a_T E_T). \end{align} \review{In order to solve for the membrane potential in each compartment $a$, we start from a discrete version of \eqref{eq:interscale0}, using \eqref{eq:conduct} and \eqref{eq:membraneflux},} \begin{align} \label{eq:interscale} C^a_m \frac{d}{dt} V^{a}_{m} &= I^a_{inj}-I^a_m-I^a_{axial} = I^a_{inj}-(I^a_L+\sum_T I^a_T)-I^a_{axial} \\ \nonumber &= I^a_{inj}+(G^a_LE^a_L+\sum_T G^a_T E_T)-(G^a_L + \sum_T G^a_T)V^a_{m} \\ \label{eq:membrane_ode} &\phantom{=} -\sum_b G^a_b V^a_{m} + \sum_b G^a_b V^b_{m}. \end{align} To summarize, on the intermediate scale we insert the \review{stochastic conductivity $G^a_T$ computed from \eqref{eq:stochcurrent}} into \eqref{eq:ion_middle} and solve the current-balance equation \eqref{eq:interscale} for each of the neural compartments. We simultaneously solve for the trans-membrane current defined in \eqref{eq:membraneflux}, to be coupled into the macroscopic field --- next to be discussed.
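A minimal sketch of one implicit (backward-Euler) time step for the compartment-discrete current-balance equation \eqref{eq:interscale}, written in the linear form $C\,dV/dt = b - AV$ with the membrane and axial conductances lumped into $A$. All parameter values in the toy example below are hypothetical; with leak conductance only and no injected current, a compartment chain should relax to the leak reversal potential.

```python
import numpy as np

def backward_euler_step(V, dt, Cm, G_axial, G_mem, ge_rev, I_inj):
    """One implicit-Euler step of the compartmental current-balance ODE.

    V       -- membrane potentials per compartment
    G_axial -- symmetric matrix of inter-compartmental conductances G^a_b
    G_mem   -- total membrane conductance per compartment, G_L + sum_T G_T
    ge_rev  -- lumped reversal term per compartment, G_L E_L + sum_T G_T E_T
    """
    # Assemble C dV/dt = b - A V from the current-balance equation.
    A = np.diag(G_mem + G_axial.sum(axis=1)) - G_axial
    b = I_inj + ge_rev
    lhs = np.diag(Cm / dt) + A
    rhs = (Cm / dt) * V + b
    return np.linalg.solve(lhs, rhs)

# Toy example: a 3-compartment chain with leak only; with no injected
# current the potentials relax to the leak reversal potential E_L.
G_axial = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
Cm = np.ones(3)
G_L, E_L = 0.3, -65.0
V = np.array([-80.0, -65.0, -40.0])
for _ in range(2000):
    V = backward_euler_step(V, 0.1, Cm, G_axial,
                            G_mem=np.full(3, G_L),
                            ge_rev=np.full(3, G_L * E_L),
                            I_inj=np.zeros(3))
```

The implicit step is unconditionally stable for this linear system, which is convenient when the stochastic conductances make the effective time constants vary between steps.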
\subsection{Macroscale: extracellular field potentials} \label{sec:macroscale} To simulate spatially non-homogeneous distributions of electrical fields produced by single neurons and neuronal networks, we use an electrostatic formulation of Maxwell's equations to be discretized with finite elements. We solve for the \emph{electric field intensity} $\mathbf{E}$ in terms of the \emph{electric scalar potential} $V$, \begin{equation} \label{eq:gauge} \mathbf{E} = -\nabla V. \end{equation} The relevant dynamic form of the continuity equation with current sources $Q_{j}$ is given by \begin{equation} \label{eq:cont} \nabla \cdot \mathbf{J} = -\frac{\partial \rho}{\partial t}+Q_{j}, \end{equation} with $\mathbf{J}$ and $\rho$ the \emph{current density} and \emph{electric charge density}, respectively. Further constitutive relations include \begin{equation} \label{eq:const1} \mathbf{D} = \varepsilon_{0} \varepsilon_{r} \mathbf{E}, \end{equation} and \emph{Ohm's law} \begin{equation} \label{eq:const2} \mathbf{J} = \sigma \mathbf{E}, \end{equation} in which $\mathbf{D}$ denotes the \emph{electric flux density}. Finally, \emph{Gauss' law} states that \begin{equation} \nabla \cdot \mathbf{D} = \rho. \end{equation} Upon taking the divergence of \eqref{eq:const2} and using the continuity equation \eqref{eq:cont} we get \begin{equation} \nabla \cdot \review{(\sigma \mathbf{E})} = -\frac{\partial \rho}{\partial t}+Q_{j}. \end{equation} Rewriting the electric charge density using Gauss' law together with the constitutive relation \eqref{eq:const1} and finally applying the gauge condition \eqref{eq:gauge} twice we arrive at the time-dependent potential formulation \begin{equation} \label{eq:form} -\nabla \cdot \left(\sigma \nabla V+ \varepsilon_{0}\varepsilon_{r} \frac{\partial}{\partial t} \nabla V \right) = Q_{j}. \end{equation} The values for the electric conductivity $\sigma$ and the relative permittivity $\varepsilon_{r}$ were obtained from \cite{bedard_modeling_2004}. 
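For intuition, in the quasi-static limit (dropping the capacitive term $\varepsilon_0\varepsilon_r\frac{\partial}{\partial t}\nabla V$) the formulation \eqref{eq:form} reduces to $-\nabla \cdot (\sigma \nabla V) = Q_j$. The following one-dimensional finite-difference sketch of that reduced problem, with one grounded node and an insulating far boundary, is purely illustrative and much simpler than the finite-element discretization actually used; all names are invented for the example.

```python
import numpy as np

def solve_potential_1d(sigma, Q, h):
    """Solve -d/dx(sigma dV/dx) = Q on a uniform grid of spacing h.

    Ground (V = 0, Dirichlet) at the left node, homogeneous Neumann
    (electric isolation) at the right node, mirroring the boundary
    conditions of the extracellular problem in one dimension.
    """
    n = len(Q)
    A = np.zeros((n, n))
    s = 0.5 * (sigma[:-1] + sigma[1:])  # conductivity at cell faces
    for i in range(1, n - 1):
        A[i, i - 1] = -s[i - 1]
        A[i, i] = s[i - 1] + s[i]
        A[i, i + 1] = -s[i]
    A[0, 0] = 1.0                         # Dirichlet ground: V[0] = 0
    A[-1, -1], A[-1, -2] = s[-1], -s[-1]  # Neumann: dV/dx = 0
    b = Q * h * h
    b[0] = 0.0
    b[-1] = 0.0
    return np.linalg.solve(A, b)
```

The grounding row is what makes the system nonsingular; without it the discrete potential, like the continuous one, would only be determined up to a constant.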
The source currents $Q_{j}$ are computed from the compartmental model described in \S\ref{sec:mesoscale}. Specifically, in compartment $a$, we put \begin{align} \label{eq:source_currents} Q_j(t) &= \frac{I_m^a(t)}{A^a}, \end{align} where $I_m^a(t)$ is obtained by solving \eqref{eq:membraneflux} and where we recall that $A^a$ is the area of the neuronal membrane in compartment $a$. The boundary conditions here are homogeneous Neumann conditions (electric isolation) everywhere except \review{in} a single point which we take to be ground ($V = 0$). In all our simulations this point was placed at the axis of rotation of the enclosing cylindrical extracellular space, and directly underneath the neuronal geometry. This procedure ensures that the formulation has a unique solution; without it the potential would only be determined up to a constant. \section{Numerical modeling} \label{sec:num} The processes at the three scales, that is, the microphysics of the ion channel gating, the neuron current at the intermediate scale, and the propagation of the electric field, all take place in continuous time. Also, all processes formally affect each other through two-way couplings. In \S\ref{sec:micro_meso} we discuss the numerical coupling of the ion channel gating and the cable equation via a split-step strategy. In \S\ref{sec:meso_macro} the propagation of the electric potential into extracellular space is summarized and here we also explain how the often rather complicated neuronal geometry is handled. To simplify matters, we will disregard the usually much weaker coupling from the external electric field back to the gating process. \subsection{Numerical coupling of firing processes} \label{sec:micro_meso} To get to grips with the details of the coupling between the ion channel gating process and the propagation of the action potential along the neuron we need a more compact notation as follows.
In compartment $a$, denote by $\mathbb{X}^a(t)$ the \emph{gating state}, that is, the number of channels being in each of the different states at time $t$. For our sodium channel example we have $\mathbb{X}^a(t) = [s_1^a,s_2^a,\ldots,s_8^a](t)^T$. The CTMC may be written compactly as \begin{align} \label{eq:micro} d\mathbb{X}^a(t) &= \mathbb{S} \boldsymbol{\mu}^a(\mathbb{X}^a,V_m^a; \, dt), \\ \intertext{for $a = 1,\ldots,M_{\mbox{\scriptsize{compartments}}}$, and where $\mathbb{S}$ is an $M_{\mbox{\scriptsize{states}}} \times N_{\mbox{\scriptsize{transitions}}}$ matrix of integer transition coefficients. The dependency on the state $(\mathbb{X}^a,V_m^a)$ is implicit in the \emph{random counting measure} $\boldsymbol{\mu}^a$ associated with an $N_{\mbox{\scriptsize{transitions}}}$-dimensional Poisson process. As a concrete example, the transition $S_3 \longrightarrow S_4$ in \eqref{eq:S3toS4} satisfies} \label{eq:Poisson_intensity} \mathbb{E}[\boldsymbol{\mu}^a_i(\mathbb{X}^a,V_m^a; \, dt)] &= \mathbb{E}[\mathbb{X}^a_3 \, \alpha_m(V_m^a)] \, dt, \\ \intertext{with} \mathbb{S}_{3,i} = -\mathbb{S}_{4,i} = -1, \end{align} that is, with the understanding that \eqref{eq:S3toS4} is the $i$th transition according to some \review{given} ordering. \review{Since the process is a Poisson process,} \eqref{eq:Poisson_intensity} expresses independent exponentially distributed waiting times of intensity $\alpha_m(V_m^a)$ for each of the $\mathbb{X}^a_3$ possible transitions of type $S_3 \longrightarrow S_4$. The propagation of the action potential can similarly be written in more compact ODE notation, \begin{align} \label{eq:meso} dV_m^a(t) &= G(\mathbb{X}^a,V_m^a,\{V_m^b\}_{b \in \mathcal{B}(a)}) \, dt, \end{align} where $G$ is just the current-balance equation \eqref{eq:interscale} and which depends on the state $(\mathbb{X}^a,V_m^a)$ via \eqref{eq:stochcurrent}, \eqref{eq:ion_middle}, and \eqref{eq:membraneflux}, respectively. 
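As a concrete illustration of the frozen-voltage stochastic update of \eqref{eq:micro}, the following sketch advances a toy two-state channel $C \rightleftharpoons O$ with Gillespie's direct method while $V_m$ is held fixed. The rate expressions and all names here are invented for the example and are not the Hodgkin--Huxley fits used later in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def ssa_step(x, V, S, propensities, dt):
    """Advance the gating CTMC over [0, dt] with the voltage frozen at V.

    x is the state-count vector X^a, S the stoichiometry matrix, and
    propensities(x, V) returns the current transition intensities
    (Gillespie's direct method).
    """
    t = 0.0
    while True:
        a = propensities(x, V)
        a0 = a.sum()
        if a0 <= 0.0:
            break
        tau = rng.exponential(1.0 / a0)  # waiting time to next event
        if t + tau > dt:
            break
        t += tau
        j = rng.choice(len(a), p=a / a0)  # which transition fires
        x = x + S[:, j]
    return x

# Toy two-state channel C <-> O with a voltage-dependent opening rate.
S = np.array([[-1, 1],    # columns: C -> O, O -> C
              [ 1, -1]])
alpha = lambda V: 0.1 * np.exp(V / 40.0)  # illustrative rate, not an HH fit
beta = 0.2
prop = lambda x, V: np.array([alpha(V) * x[0], beta * x[1]])

x = np.array([100, 0])  # 100 closed channels
x = ssa_step(x, V=-65.0, S=S, propensities=prop, dt=1.0)
```

Note that each fired transition updates the counts by one column of $\mathbb{S}$, so the total number of channels is conserved, mirroring the role of the stoichiometry matrix in \eqref{eq:micro}.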
Eqs.~\eqref{eq:micro} and \eqref{eq:meso} form a coupled CTMC-ODE model which falls under the scope of \emph{Piecewise Deterministic Markov Processes (PDMPs)} and for which numerical methods have been investigated \cite{hybridMarkov}. A highly accurate implementation is possible through event-detection in traditional ODE-solvers. This, however, has a performance drawback since the ODE-solver must continuously determine to sufficient accuracy \emph{what} micro-event happens \emph{when}. \review{In the experiments in \S\ref{sec:experiments} we have chosen to rely on} a simple split-step strategy as follows. Given a discrete time-step $\Delta t$, and $t_{n+1} = t_n+\Delta t$, \begin{align} \label{eq:micron} \mathbb{X}^a_{n+1} &= \mathbb{X}^a_{n}+\int_{t_n}^{t_{n+1}} \mathbb{S} \boldsymbol{\mu}^a(\mathbb{X}^a(s),V_{m,n}^a; \, ds), \\ \label{eq:meson} V_{m,n+1}^a &= V_{m,n}^a+\int_{t_n}^{t_{n+1}} G(\mathbb{X}^a_{n+1},V_m^a(s),\{V_m^b(s)\}_{b \in \mathcal{B}(a)}) \, ds. \end{align} That is, \eqref{eq:micron} evolves the CTMC \eqref{eq:micro} keeping the voltage potential fixed at its value from the previous time-step $t_n$. Similarly, \eqref{eq:meson} evolves the ODE \eqref{eq:meso} while keeping the state of the ion channels fixed at the time-step $t_{n+1}$. Importantly, the usually more expensive stochastic simulation in \eqref{eq:micron} is fully \emph{decoupled} as it only depends on the state of the compartment $a$. The global step is achieved separately by solving the connected ODE in \eqref{eq:meson}, and is usually quite fast. The approximation \eqref{eq:micron}--\eqref{eq:meson} can be understood as a split-step method and may also be analyzed as such \cite{jsdesplit,jsdevarsplit}. The \emph{order} of the approximation can then be expected to be $1/2$ in the root mean-square sense, \begin{align} \mathbb{E}[\|\mathbb{X}_n-\mathbb{X}(t_n)\|^2+\|V_{m,n}-V_m(t_n)\|^2] \le C \Delta t. 
\end{align} Although it appears to be difficult to increase the stochastic order of this approximation, the \emph{accuracy} is likely to improve by turning to symmetric Strang-type splitting methods for the stochastic part \review{\cite{jsdesplit}}, possibly also adopting a higher order scheme for the ODE-part. However, the efficiency of such an approach ultimately depends on the strength of the nonlinear feedback terms and is difficult to analyze \textit{a priori}. In the present proof-of-concept context we aim at a convergent and consistent coupling for which \eqref{eq:micron}--\eqref{eq:meson} are a suitable starting point. We thus postpone the investigation of more advanced integration methods for another occasion. \subsection{Spatial extension of the firing process} \label{sec:meso_macro} The modeling in \S\ref{sec:mesoscale} did not take into account a possible free-space potential $V_{ext}$, external to the cell. The necessary modifications to incorporate this are as follows. For $x \in \Omega \subset \mathbf{R}^3$, let $V_{ext}(x)$ be a given external potential and denote by $V_{ext}^a$ the value at compartment $a$. Then replace \eqref{eq:membraneflux} with \begin{align} I^a_m &= I^a_L + I^a_{ionic} + I^a_{ext} \label{eq:membraneflux2} \\ &= (G^a_L + \sum_T G^a_T)V^a_m - (G^a_LE^a_L + \sum_T G^a_T E_T) + G^a_m V^a_{ext}, \nonumber \end{align} where $G^a_m$ is the membrane transneuronal conductivity. With this modification, \eqref{eq:interscale} propagates the effect of the given external field $V_{ext}(x)$ along the neuron. In \S\ref{sec:macroscale} we discussed how the effect of a trans-membrane current propagates into extracellular space as an electric potential $V$ (cf.~\eqref{eq:form}--\eqref{eq:source_currents}). With the previous discussion in mind, the following two-way coupling thus emerges: the solution of the current-balance equation \eqref{eq:interscale} yields a current source, which feeds into \eqref{eq:source_currents}.
In turn, this implies an external potential $V_{ext} := V$, obtained by solving \eqref{eq:form}, finally to be inserted into \eqref{eq:membraneflux2} above. The full model coupling thus arrived at takes the schematic form \begin{align} \mbox{\S\ref{sec:microscale}} \xrightleftharpoons{\hphantom{MM}} \mbox{\S\ref{sec:mesoscale}} \xrightleftharpoons{\hphantom{MM}} \mbox{\S\ref{sec:macroscale}}. \end{align} It is certainly possible to devise a numerical method using a similar split-step strategy as in \S\ref{sec:micro_meso} for this full coupling. \review{As mentioned before,} to simplify we shall assume that the feedback from the external field onto the neuron is weak, such that $V_{ext} \approx 0$ in \eqref{eq:membraneflux2} and we thus consider the simplified model \review{(see also Figure~\ref{fig:scheme_overview})} \begin{align} \mbox{\S\ref{sec:microscale}} \xrightleftharpoons{\hphantom{MM}} \mbox{\S\ref{sec:mesoscale}} \xrightarrow{\hphantom{MM}} \mbox{\S\ref{sec:macroscale}}. \end{align} \review{This assumption is valid whenever the simulation consists of a small number of neighboring neurons, but becomes inaccurate if a large number of nearly parallel neurons is considered. See \cite{anastassiou_ephaptic_2011, buszaki_review_2012} for a further discussion.} It follows that the field potential $V$ can be computed ``offline''. That is, this problem may be solved in isolation using a pre-recorded current source $I_m^a(t)$ obtained from the split-step method described in \S\ref{sec:micro_meso}. The three-dimensional neuronal geometry was constructed in Comsol Multiphysics with the help of the interface to Matlab (``LiveLink'') by morphological additions and Boolean unification of simple geometric objects, representing neuronal compartments. The TREES Toolbox~\cite{cuntz2010one} was used to conveniently access the geometrical properties of single compartments over the Matlab interface.
\review{More specifically, the geometry was constructed by parsing the adjacency graph $\mathcal{A}$. Starting from the initial node of the graph, this procedure makes sure that each compartment is connected to the same neighbors as in the numerical model for the axial current flow in \eqref{eq:conduct}.} In an initial attempt, we aimed to represent the 3D geometry as an exact counterpart of the compartmental model, where each compartment is understood as a cylinder with a certain length and diameter. If the direction of the main axes of any two joining cylinders differed, a sphere was added in between the cylinders, followed by a removal of all interior boundaries. Although this created a direct volumetric representation of the neuronal compartments, the approach is difficult to generalize to neuronal branches with a more complicated connectivity. The reason is that the triangulation of the final object becomes extremely difficult to achieve as the mesh engine insists on fully resolving the curvature. See Figure~\ref{fig:mesh} for an example of a problematic mesh, emerging at the intersection of a cylinder and a sphere. In a second and more successful attempt, three-dimensional curves made up of line segments were constructed for each compartment. This simplifies the meshing process immensely, since the extracellular mesh is not constrained by high-curvature cylindrical boundaries. Implicit here is the assumption that the neuron is very thin compared to the external length-scale of practical interest. \review{This is valid as the diameter of a dendritic structure is $\approx 1 \mu m$ and the considered extracellular space is typically in the range of several $mm$ \cite{einevoll_modelling_2013}.} In Figure~\ref{fig:morpho} we show examples of both approaches.
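The curve-based construction amounts to turning the compartment tree into a set of line segments. A minimal, hypothetical sketch of this step follows (the actual construction goes through Comsol's LiveLink interface; the function name and data layout are invented for the example).

```python
def compartments_to_segments(parent, xyz):
    """Convert a compartment tree into 3D line segments for meshing.

    parent[i] gives the parent compartment of i (-1 at the root) and
    xyz[i] the coordinate of compartment i's distal endpoint; every
    non-root compartment becomes one segment joining its parent's
    endpoint to its own, reproducing the adjacency of the
    compartmental model.
    """
    segments = []
    for i, p in enumerate(parent):
        if p >= 0:  # skip the root, which has no incoming segment
            segments.append((xyz[p], xyz[i]))
    return segments

# A tiny Y-shaped morphology: root, trunk, and two branches.
parent = [-1, 0, 1, 1]
xyz = [(0.0, 0.0, 0.0), (0.0, 0.0, 50.0),
       (20.0, 0.0, 90.0), (-20.0, 0.0, 90.0)]
segments = compartments_to_segments(parent, xyz)
```

Each resulting segment is then mapped to an edge of the extracellular mesh, onto which the line current source of \eqref{eq:source_currents} is placed.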
\begin{figure}[H] \centering \includegraphics[width=0.6\textwidth]{fig/mesh.png} \caption{Example of a problematic mesh at the intersection of two cylindrical compartments.} \label{fig:mesh} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.45\textwidth]{fig/secondTree.png} \includegraphics[width=0.42\textwidth]{fig/linetree.png} \caption{Example of a neuronal morphology created with cylindrical objects \textit{(left)}, and with curves \textit{(right)}.} \label{fig:morpho} \end{figure} \section{Experiments} \label{sec:experiments} We devote this section to some feasibility experiments of the method proposed. In \S\ref{sec:micromeso_exps} we look in some detail at the coupling of the microscopic and intermediate scales. Using the stochastic currents so computed, the induced electrical field is propagated outside the neuron in \S\ref{sec:full_exps}. \subsection{Micro-meso coupling} \label{sec:micromeso_exps} \review{In this section we provide numerical experiments of the coupling between the microscopic scale, introduced in \S \ref{sec:microscale}, and the mesoscopic scale, introduced in \S \ref{sec:mesoscale}.} We simulate the classical squid model proposed by Hodgkin and Huxley \cite{hodgkin_quantitative_1952}, which is also a part of a widely used benchmark for neuronal simulators \cite{rallpackpaper}. The model includes two types of voltage-gated ion channels, selective for $Na^+$ and $K^+$ ions, which in our case are both modeled as continuous-time Markov processes (cf.~\S\ref{sec:microscale}). The kinetic gating scheme for the $Na^+$ channel is shown in Figure~\ref{fig:scheme}. The $K^+$ channel follows a similar scheme, but has only four discrete gating states $S_1, \ldots, S_4$, of which only the state $S_4$ is open. The transition rates in this case are voltage-dependent as follows \cite{hodgkin_quantitative_1952} \begin{align} \label{eq:rate_n_a} \alpha_n(V_m) &= \frac{0.01(V_m+10)}{e^{(V_m+10)/10}-1}, \\ \beta_n(V_m) &= 0.125e^{(V_m/80)}.
\label{eq:rate_n_b} \end{align} \review{Note the similar form of \eqref{eq:rate_n_a} to \eqref{eq:rate1} and \eqref{eq:rate_n_b} to \eqref{eq:rate3}, respectively, which is due to using the same empirical fitting procedure.} The density of the ion channels is $\varrho_K = 30~\mu$m$^{-2}$ and $\varrho_{Na} = 330~\mu$m$^{-2}$, respectively \cite{hille}. The single-channel conductances are $\gamma^K = 360 / 30~pS$ and $\gamma^{Na} = 1200 / 330~pS$, while the reversal potentials are $E_K = -77~mV$ and $E_{Na} = 50~mV$ \cite{rallpackpaper}. The parameters for the intermediate scale are as follows. The specific membrane capacitance is $c_m = 1~\mu F (cm)^{-2}$, the resting potential $E_r = -65~mV$, the cytoplasm resistivity $\varrho_c = 100~\Omega cm$, the specific leak conductance $g_L = 40000^{-1}~S(cm)^{-2}$ and the leak reversal potential $E_L = -65~mV$ \cite{rallpackpaper}. We let the model be confined to a cylindrical geometry with a length of $1~mm$ and a diameter of $1~\mu m$. The cylinder is compartmentalized into $M_{\mbox{\scriptsize{compartments}}}$ sub-cylinders of equal length and diameter. The root node is stimulated by a current injection of $0.1~nA$. In practice we use Gillespie's \textit{``Direct Method''} \cite{gillespie_exact_1977} to evolve \eqref{eq:micron} and the Crank--Nicolson scheme \cite{crank_practical} to solve \eqref{eq:meson}, relying on the fact that the latter is linear in $V_m^a$. \review{In Figure~\ref{fig:stoch_hh} we show the numerical solution of the coupled model, overlaid with the deterministic solution where ion channels are described by ODEs as in \eqref{eq:detgating1}--\eqref{eq:detgating2}. We show three traces of the membrane voltage $V_m$ in one neuronal compartment over time, where the injected current has been varied from a lower value ($0.045~nA$) to a larger value ($0.1~nA$).
The dynamics of the stochastic model for the lower current injections clearly differ from the dynamics of the deterministic one, where a single spike or a train of spikes can be triggered in the stochastic representation, while no spike can be obtained in the deterministic one. At the higher current injection, we observe a trace with similar characteristics for both model representations, but with an increasingly different phase shift.} \review{In Figure~\ref{fig:stoch_isi} we show a numerical convergence study of the coupled model, concerning two method parameters. We show how the interspike interval (ISI) changes as a function of the coupling time step $\Delta t$, as well as the discretization of the geometry $\Delta x$. The ISI is defined as the duration between the peaks of two spikes. As the neuronal firing is now a stochastic process, the ISI will be a distribution, and hence we present the first and second moments. We find that, for the study of the coupling time step, the ISI appears to be well resolved at $\Delta t = 10^{-1}~ms$ for the spatial discretization presented. For the spatial discretization, we find that the ISI distributions do not significantly differ for a voxel length of under $1~\mu m$.} \begin{figure}[H] \centering \includegraphics[width=1\textwidth]{fig/spike_comparison} \caption{\review{Membrane voltage behavior around threshold current. The accuracy of the deterministic solver was verified with the reference solution of Rallpack 3. In all cases the discretization is $\Delta t=0.05$ and $\Delta x = 0.05$. Upper: injected current is $I_{inj} = 0.045$ $nA$, middle: injected current is $I_{inj} = 0.063$ $nA$, lower: injected current is $I_{inj} = 0.1$ $nA$.}} \label{fig:stoch_hh} \end{figure} \begin{figure}[H] \centering \includegraphics[width=1\textwidth]{fig/isi_convergence} \caption{\review{Top: The mean interspike interval (ISI) $\pm$ standard error as a function of the time step $\Delta t$.
The compartment length is here $\Delta x = 0.02$. Bottom: The ISI as a function of the compartment length $\Delta x$ at a time step $\Delta t = 0.05$. The red line represents the interspike interval from the \textit{Rallpack 3} reference solution \cite{rallpackpaper}. The number of trajectories in all runs is $N=40$. }} \label{fig:stoch_isi} \end{figure} \subsection{Three-scale coupling} \label{sec:full_exps} For this example we took the morphological description of a pyramidal CA1 neuron from \cite{morse_abnormal_2010}, which contains about 1500 compartments. We re-sampled the geometry in order to aggregate short compartments of sizes less than $50~\mu m$ into larger compartments, leading to a final representation consisting of approximately 400 compartments. \review{Although the reduction might alter the properties of the model and more elaborate protocols for compartment reduction could be used \cite{hendrickson_capabilities_2011, marasco_using_2013}, we ignore the induced discretization error here since the purpose of the model is to demonstrate the overall numerical method.} We solved \eqref{eq:micron} and \eqref{eq:meson} over the morphology, incorporating the active properties described in \S\ref{sec:micromeso_exps}. Next, we scaled the transmembrane currents $I_{m}(t)$ (cf.~\eqref{eq:source_currents}) and mapped this to the corresponding curve segment as a current source $Q_j(t)$ \cite{ComsolACDC}. It can be noted in passing that we here assume that transmembrane currents are the only source of changes in the extracellular potential, which is not the case in a real neuron, as for example synaptic calcium-mediated currents are suspected to contribute to a large fraction of the extracellular signature \cite{buszaki_review_2012}.
Equipped with the source currents \eqref{eq:source_currents}, the formulation \eqref{eq:form} is efficiently solved by Comsol's \emph{``Time discrete solver''}, which is based on the observation that the variable $W := \Delta V$ satisfies a simple ODE. Solving for $W$ in an independent manner up to some time $t$, it is then straightforward to solve a single static PDE to arrive at the potential $V$ itself. For the Time discrete solver the time-step was set to $\Delta t$ as used in the split-step method, thus ensuring a correct transition to the macroscopic scale. A tetrahedral mesh \cite{JinEM_FEM} was applied to discretize space (using the ``finest'' mesh setting; resolution of curvature 0.2, resolution of narrow regions 0.8). The simulations were verified against coarser mesh settings in order to ensure a practically converged solution. The result of the simulation is visualized in Figure~\ref{fig:plane}. We have inserted four ``point probes'' at a radial distance of $1000~\mu m$ to the neuron, measuring the extracellular voltage at single points. The electric potential thus monitored by the probes is shown in Figure~\ref{fig:probes}. 
\begin{figure} \centering {\includegraphics[width=35mm, angle=90]{fig/3.png}} {\includegraphics[width=35mm, angle=90]{fig/4.png}} {\includegraphics[width=35mm, angle=90]{fig/7.png}} {\includegraphics[width=35mm, angle=90]{fig/8.png}} {\includegraphics[width=35mm, angle=90]{fig/9.png}} \caption{Solution of the full three-scale model \review{framework}: propagation of an extracellular action-potential into homogeneous extracellular space.} \label{fig:plane} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.8\textwidth]{fig/probes} \caption{Electric potential (mV) in the four point probes placed around the neuron.} \label{fig:probes} \end{figure} \section{Conclusions} \label{sec:conclusions} In this paper we have proposed a model \review{framework for} neuronal firing processes consisting of three layers: on the microscale, ion channel gating is modeled as a continuous-time Markov chain. On the intermediate scale, the currents produced by open channels are integrated into the current-balance equation proposed by Hodgkin and Huxley. Finally, on the macroscale, the outward currents of neurons are propagated into extracellular space, simulating the emission of an extracellular potential. We have also described a numerical approach to the coupling of the different scales and indicated through computational results the feasibility of the overall approach. \review{To date, several exact and approximate simulation methods \cite{fox_stochastic_1997, cannon_stochastic_2010, linaro_accurate_2011,goldwyn_stochastic_2011} of stochastically gated ion channel models have been proposed. However, to our knowledge, the coupling between the microscopic gating layer and the mesoscopic layer describing the action potential initiation and propagation has not yet been rigorously studied.
In this paper, we provide the formulation of the coupling as a split-step method and moreover numerically observe the convergence of the method with respect to the size of the coupling time-step as well as the spatial discretization. We show that the distribution of interspike intervals may be significantly affected by the choice of the coupling time-step, but does not appear to depend strongly on the spatial discretization. Finally, we discuss the theory and practice of incorporating the macroscopic scale into the model, including the spatial representation of neuronal compartments and appropriate numerical procedures for the simulation of the local field potential (LFP) propagation.} While it has often been taken for granted that ion channel gating occurs deterministically, accumulating research evidence indicates that the presence of stochasticity significantly influences the neuronal behavior \cite{schneidman_ion_1998, white_channel_2000, faisal_noise_2008}. In turn, neurons respond with a high level of variability to the repeated presentation of identical stimuli. This leads to remarkable difficulties in studying the link between single cell biophysical properties and their function in larger neuronal networks, both in health and disease. Additionally, a recent study \cite{cannon_stochastic_2010} has shown that stochastic ion channel gating largely differs not only between different neuronal cell types, but also locally between different parts of the neuron. Given the technical difficulties of assessing the single-channel properties of smaller neuronal compartments such as axons and dendrites, developing reliable mathematical models is essential to tackle this problem. \review{Several studies \cite{schneidman_ion_1998, white_channel_2000,faisal_noise_2008, cannon_stochastic_2010,stiefel_origin_2013,moezzi_ion_2016} have demonstrated how incorporation of channel noise into the Hodgkin and Huxley equations could reproduce multiple realistic neuronal behaviors.
Although the scope of this work is not to mimic any particular experimental problem, but rather to provide a modeling framework that could be used by a diverse group of scientists in the future, we foresee multiple interesting applications. For example, it was shown previously \cite{klink_ionic_1993, white_noise_1998,white_channel_2000} that the stochastic nature of voltage-gated ion channels in medial entorhinal cortex stellate cells is crucial for generating subthreshold oscillations in the theta ($\approx 8$~Hz) frequency range. These intrinsic oscillations in stellate cells were suggested to generate theta oscillations in the in vivo LFP \cite{white_noise_1998}. It is well established that theta oscillatory activity in vivo provides a temporal window in which spatial and declarative memories are formed \cite{hartley_space_2014}. Thus, our framework could be used to test the link between the stochastic nature of single channels in specific cell types and their effect at both the cellular and the network level.} \section*{Acknowledgment} PB and SE were supported by the Swedish Research Council within the UPMARC Linnaeus center of Excellence. SM was supported by the Olle Engkvist post-doctoral fellowship. \section*{Availability and reproducibility} \review{The computational results can be reproduced within the upcoming release 1.4 of the URDME open-source simulation framework, available for download at \url{www.urdme.org}.} \bibliographystyle{abbrvnat}
\section{Introduction} Consider a germ of linear analytic differential system \begin{equation} \label{equation} x^{k+1}Y^\prime=A(x)Y,\;\;\;x\in\mathbb{C},\;Y\in\mathbb{C}^n, \end{equation} with a non resonant irregular singularity of Poincar\'e rank $k$ at $0$. Then $A(x)$ is a matrix of germs of holomorphic functions at the origin and the eigenvalues of $A(0)$ are distinct. Without loss of generality we can suppose that $A(0)$ is diagonal. There exists a unique formal normalizing series tangent to the identity $Y=\hat{H}(x)Z= (\mathrm{id} + O(x))Z$ bringing \eqref{equation} to the diagonal normal form \begin{equation} \label{normal_form} x^{k+1} Z^\prime= (D_0+ D_1x+ \dots+D_kx^k)Z, \end{equation} with $D_i$ diagonal and $D_0=A(0)$. The normal form has a canonical diagonal fundamental matrix solution that we call $W(x)$. However, generically, the normalizing series $\hat{H}$ is divergent. Nevertheless, there exist $2k$ sectors $S_j$ of opening greater than $\frac{\pi}{k}$ (see Figure~\ref{sectors}) on which there exist unique normalizing holomorphic functions $H_j(x)$ that are asymptotic to $\hat H(x)$ on $S_j$. This defines a fundamental matrix solution $W_j(x)= H_j(x)W(x)$ of \eqref{equation} over each $S_j$. \begin{figure}\begin{center} \includegraphics[width=6cm]{secteurs}\caption{Four sectors when $k=2$. The bold lines are the separating rays.} \label{sectors}\end{center}\end{figure} In the abundant literature on the subject (see for instance \cite{S}, \cite{IY} and \cite{R2}), it is often assumed that the eigenvalues of $A(0)$ satisfy the following inequality, a hypothesis that can be realized by means of a rotation in $x$ and a permutation of the coordinates in $Y$. \begin{equation} \label{hypothesis} \mathfrak{R}(\lambda_1)>\mathfrak{R}(\lambda_2)>\dots>\mathfrak{R}(\lambda_n).
\end{equation} Under this hypothesis, the columns $\{w_{1,j}, \dots, w_{n,j}\}$ of each $W_j$, which form a basis of the solution space, are ordered with respect to flatness: \begin{equation}\begin{cases} w_{1,j'} \prec \dots \prec w_{n,j'}, & \mathrm{on} \: S_{2j}\cap S_{2j+1},\quad \mathrm{for} \:j'=2j,2j+1,\\ w_{1,j'} \succ \dots \succ w_{n,j'}, & \mathrm{on} \: S_{2j-1}\cap S_{2j},\quad \mathrm{for} \:j'=2j-1,2j,\end{cases} \label{order_flatness} \end{equation} where indices are $\text{mod} \: 2k$. This comes from the fact that \begin{equation} w_{\ell,j}(x)= \exp\left(-\frac{\lambda_\ell}{kx^k}\right) v_{\ell,j}(x)\label{asympt_expansion}\end{equation} for some vector function $v_{\ell,j}(x)=O(1)$ on $S_j$. On the intersection $S_j\cap S_{j+1}$, the bases represented by $W_j$ and $W_{j+1}$ coexist and are linked by a matrix $C(j)\in GL(n,\mathbb C)$: \begin{equation} W_{j+1}=W_jC(j),\end{equation} where indices are modulo $2k$. The $C(j)$ are called \emph{Stokes matrices}. Generically, more precisely when $\hat{H}$ is divergent, some of the $C(j)$ are not diagonal. This is called the \emph{Stokes phenomenon}: the Stokes matrices measure the obstruction to have \eqref{equation} analytically equivalent to its normal form. Because of \eqref{order_flatness} we have that $C(j)$ is upper (resp. lower) triangular for $j$ even (resp. odd). When $x$ sweeps a sector $S_j$, with increasing argument, the relative order of flatness of the $w_{\ell,j}$ changes. The change occurs on the \emph{separating rays}, which are the half-lines determined by the condition $\mathfrak{R}\left(\frac{\lambda_\ell-\lambda_{\ell'}}{x^k}\right)=0$. Hence, there are $2k$ separating rays for each pair of eigenvalues $(\lambda_\ell,\lambda_{\ell'})$, one in each sector $S_j$. Of course, several pairs of eigenvalues can have the same separating rays.
\\ In the presentation above, the choice of a rotation in $x$ corresponds to choosing a \emph{starting ray} $e^{i\theta} \mathbb R^+$ in $x$-space so that all eigenvalues have distinct projections on $e^{ik\theta}\mathbb R$. We say that the direction $e^{ik\theta}\mathbb R^+$ is \emph{non critical} in the eigenvalue space. When this direction is $\mathbb R^+$, the sector $S_1$ is chosen so that all separating rays inside $S_1$ have positive arguments. Hence, when starting on $\mathbb R^+$, we cross them when we turn in the positive direction. This choice is non canonical. We could have chosen another non critical direction. The starting ray $e^{i\theta} \mathbb R^+$ in $x$-space yields an order of the projections of the eigenvalues on the line $e^{ik\theta}\mathbb R$ oriented in the direction of $e^{ik\theta}$ (which will induce an order of flatness on the $\exp\left(-\frac{\lambda_\ell}{kx^k}\right)$ on $e^{i\theta}\mathbb R^+$), and the half-line $e^{i\theta}\mathbb R^+$ is called a \emph{non separating ray}. The coordinates of $Y$ are then permuted so that the order of flatness is in decreasing order. As mentioned above, there is no canonical way of choosing $\theta$. The \emph{separating rays} are the directions $e^{i\phi}$, for which $\mathfrak{R}\left(\left(\lambda_\ell-\lambda_{\ell'}\right)e^{-ik\phi}\right)=0$. They divide the set of non separating rays $e^{i\theta}\mathbb R^+$ into a finite number of connected components. When constructing a sector $S_j$ containing a starting ray $e^{i\theta}\mathbb R^+$, we enlarge it with increasing argument, until it contains exactly one separating ray for each pair of eigenvalues. The other sectors are built in the same way with starting rays $e^{i\left(\theta+\frac{\ell \pi}{k}\right)}\mathbb R^+$, $\ell=1,\dots, 2k-1$. We describe how the collection of Stokes matrices associated to a starting ray $e^{i\theta}\mathbb R^+$ changes when we cross a separating ray. The change is nontrivial.
On a separating ray $e^{i\phi}\mathbb R^+$, some blocks of consecutive eigenvalues have identical projections on the critical ray $e^{ik\phi}\mathbb R^+$. The orders of the projections of the eigenvalues of each block are opposite on the two sides of the critical ray. If we are considering an upper (resp. lower) triangular Stokes matrix $C$, then the new upper (resp. lower) Stokes matrix constructed after crossing the separating ray is obtained as $P^{-1}U^{-1}CVP$, where $U$ and $V$ are block diagonal matrices with blocks of the lower (resp. upper) adjacent Stokes matrices on both sides of $C$, for the eigenvalues that have changed order, and identity blocks elsewhere, and where $P$ is a permutation matrix representing the new order of eigenvalues. The precise statement will be given below after we have introduced the necessary notation. We illustrate the theorem on an example in $\mathbb C^3$ for $k=1$. \section{The main theorem} Before stating the theorem, let us introduce some notation adequate to our purpose. Indeed, we will need to change the order of the eigenvalues in all subsets of eigenvalues that project on a unique point on a critical ray $e^{i\psi} \mathbb{R}^+$. \begin{notation} \begin{enumerate} \item $I_\ell$ and $J_\ell$ represent respectively the $\ell\times \ell$ identity matrix and the matrix with $1$ on the anti-diagonal and $0$ elsewhere. \item Let $n= s_1+ r_2+ s_3+ r_4+ \dots+ s_{2m-1}+ r_{2m}+s_{2m+1}$ with $s_{2i+1}\in \mathbb N$ and $r_{2i}\geq 2$. We let \begin{equation}P_{s_1,r_2, \dots, s_{2m+1}}=\mathrm{diag}(I_{s_1}, J_{r_2}, \dots, J_{r_{2m}}, I_{s_{2m+1}}).\label{def_P} \end{equation} Calling $\overline{m}$ the ordered generalized partition of $n$ given by \begin{equation}\overline{m}= (s_1,r_2, \dots, s_{2m+1}),\label{def:m}\end{equation} we will also use the shortened notation $P_{\overline{m}}$. (Note that $P_{\overline{m}}=P_{\overline{m}}^{-1}$.)
\end{enumerate} \end{notation} \begin{definition} \begin{enumerate} \item Let $f$ and $g$ be meromorphic functions on a neighbourhood of $0$ and $R$ be an open ray (i.e. $R=e^{i\theta}\mathbb R^+$). We say that $f$ is \emph{flatter} than $g$ on $R$, and write $f\prec g$, or $g\succ f$, if $f/g\rightarrow0$ as $x\rightarrow0$ along $R$. If $S$ is a sector, then $f\prec g$ on $S$ if it is the case for every ray in $S$. \item Similarly, let $w(x)= (w^1(x), \dots, w^n(x))$ and $\overline{w}(x)=(\overline{w}^1(x), \dots, \overline{w}^n(x))$ be two vectors, the coordinates of which are holomorphic on a sector $S$. We say that $w$ is flatter than $\overline{w}$ on $R$ (resp. $S$), and write $w\prec \overline{w}$ if, for all $\ell$, $w^\ell\prec \overline{w}^\ell$ on $R$ (resp. $S$).\end{enumerate} \end{definition} \begin{definition} In a system \eqref{equation} with $A(0)= \mathrm{diag}(\Lambda)$, where $\Lambda=(\lambda_1,\dots,\lambda_n)$, the \emph{separating rays} are given by the solutions to \begin{eqnarray} \mathfrak{R}\left(\frac{\lambda_p-\lambda_q}{x^{k}}\right)=0. \end{eqnarray}\end{definition} \begin{definition} A ray $e^{i\psi}\mathbb R^+$ is a \emph{critical ray} if several eigenvalues have equal projections on the line $e^{i\psi}\mathbb R$.\end{definition} \begin{remark} If $e^{i\psi}\mathbb R^+$ is a critical ray, then $e^{i\left(\frac{\psi+j\pi}{k}\right)}\mathbb R^+$, $j=0, \dots, 2k-1$ are its associated separating rays. The critical rays are in the complex plane of eigenvalues, while the separating rays are in the $x$-plane. In particular, a critical ray is a separating ray when $k=1$.\end{remark} Along the separating rays, the order of the solutions given by their respective order of flatness changes, and this happens nowhere else. This means that in a sector containing none of these rays, the ordering of solutions by their flatness is constant.
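The permutation matrices $P_{\overline m}=\mathrm{diag}(I_{s_1},J_{r_2},\dots,J_{r_{2m}},I_{s_{2m+1}})$ of \eqref{def_P} can be assembled mechanically, and the involution property $P_{\overline{m}}=P_{\overline{m}}^{-1}$ checked numerically. A minimal numpy sketch (the helper name \texttt{P\_matrix} is ours):

```python
import numpy as np

def P_matrix(m_bar):
    """Block-diagonal matrix diag(I_{s1}, J_{r2}, ..., I_{s_{2m+1}}):
    identity blocks at even positions of m_bar, anti-diagonal flips J at odd ones."""
    n = sum(m_bar)
    P = np.zeros((n, n))
    pos = 0
    for idx, size in enumerate(m_bar):
        block = np.eye(size) if idx % 2 == 0 else np.fliplr(np.eye(size))
        P[pos:pos + size, pos:pos + size] = block
        pos += size
    return P
```

For instance, $P_{0,3,0}=J_3$, the $3\times3$ anti-diagonal flip appearing in the $k=1$, $n=3$ example below.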
Also, since $n$ is finite, it is possible to enumerate the separating rays as $R_1,R_2,\dots,R_{2ku}$, where $R_j$ has argument $\phi_j\in[0,2\pi)$ and the $\phi_j$ are increasing. Note that hypothesis \eqref{hypothesis} implies that $\mathbb{R}^+$ is not a separating ray. Therefore it is used as a starting point to build the Stokes matrices. \begin{definition} Let $R=e^{i\phi}\mathbb R^+$ be a ray and $\mathrm{pr} (\lambda_j)$ be the signed length of the projection of the eigenvalue $\lambda_j$ on its associated ray $\overline{R}=e^{ik\phi}\mathbb R^+$ (i.e. $\mathrm{pr}(\lambda_j)=\mathfrak{R}(\lambda_je^{-ik\phi})$). We say that the \emph{order of eigenvalues on the ray $R$} is given by $\overline{m}= (s_1,r_2, \dots, s_{2m+1})$ if the subsets of eigenvalues corresponding to the indices $r_j$ have equal projections, more precisely: $$\begin{cases} \mathrm{pr}(\lambda_j)\geq \mathrm{pr}(\lambda_{j+1}), &\text{for all} \quad j,\\ \mathrm{pr}(\lambda_j)= \mathrm{pr}(\lambda_{j+1})& \text{if and only if } \quad j\in \sum_{i=1}^{\ell-1} r_{2i} +\sum_{i=1}^{\ell}s_{2i-1}+ [1, r_{2\ell}-1] \:\text{ for some }\: \ell,\end{cases}$$ (see Figure~\ref{projection}). Note that when $R$ is not a separating ray, then $m=0$ and $s_1=n$. Also, the order of eigenvalues corresponds to the respective order of flatness of the $\exp\left(-\frac{\lambda_j}{kx^k}\right)$ on $R$.\end{definition} \begin{figure}\begin{center} \includegraphics[width=7cm]{projection}\caption{The projections of the eigenvalues on a critical ray. Here, $\overline{m}=(2,2,2,3,1,3,1)$.} \label{projection}\end{center}\end{figure} \subsection{Statement of the theorem} \begin{theorem}\label{thm} We consider a system \eqref{equation} satisfying hypothesis \eqref{hypothesis}, and its Stokes matrices $C(j)$, $j=1, \dots, 2k,$ corresponding to the choice of $\mathbb{R}^+$ as starting ray. Let $\phi_1<\dots<\phi_{2ku}$ be the angles of the separating rays.
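The signed projections $\mathrm{pr}(\lambda_j)=\mathfrak{R}(\lambda_je^{-ik\phi})$ and the resulting generalized partition $\overline m$ can be sketched as follows, assuming the eigenvalues are listed in decreasing order of projection; the function name \texttt{order\_partition} and the numerical tolerance are our own illustrative choices:

```python
import numpy as np

def order_partition(eigs, phi, k, tol=1e-9):
    """Given eigenvalues sorted by decreasing signed projection
    pr(lam) = Re(lam * e^{-i*k*phi}), return the generalized partition
    m_bar = (s1, r2, s3, ...): runs of equal projections become r-blocks,
    the eigenvalues with distinct projections in between form s-blocks."""
    pr = [np.real(l * np.exp(-1j * k * phi)) for l in eigs]
    assert all(pr[i] >= pr[i + 1] - tol for i in range(len(pr) - 1)), "not sorted"
    # group consecutive equal projections into runs
    runs, start = [], 0
    for i in range(1, len(pr) + 1):
        if i == len(pr) or abs(pr[i] - pr[start]) > tol:
            runs.append(i - start)
            start = i
    # interleave: runs of length 1 merge into s-blocks, runs >= 2 are r-blocks
    m_bar, s = [], 0
    for r in runs:
        if r == 1:
            s += 1
        else:
            m_bar.extend([s, r])
            s = 0
    m_bar.append(s)
    return tuple(m_bar)
```

On a non-separating ray all projections are distinct, so the sketch returns $(n,)$, matching $m=0$, $s_1=n$ in the definition.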
Let $e^{i\theta}\mathbb R^+$, with $\theta\in(\phi_1,\phi_2)$, be a new starting ray such that the new sectors can be chosen as $\widetilde{S}_j=e^{i\theta} S_j$ (see Figure~\ref{turned_sectors}), and let $\widetilde{C}(j)$ be the new Stokes matrices associated to the collection of sectors $\widetilde{S}_j$. We suppose that the order of eigenvalues on the separating ray $R_1=e^{i\phi_1}\mathbb R^+$ is given by $\overline{m}= (s_1,r_2, \dots, s_{2m+1})$. Using the block notation $C(j)_{i,\ell}$ with $i,\ell\in \{1, \dots, 2m+1\}$, where the sizes of the blocks correspond to the partition of $n$ given by $\overline{m}$, the $\widetilde{C}(j)$ can be chosen as \begin{align} \begin{split} \widetilde{C}(j)&=P_{\overline{m}}\cdot \mathrm{diag}\left(I_{s_1}, C(j)_{2,2}^{-1}, I_{s_3}, \dots , C(j)_{2m,2m}^{-1}, I_{s_{2m+1}}\right) \\ &\qquad\cdot C(j)\cdot\mathrm{diag}\left(I_{s_1}, C(j+1)_{2,2}, I_{s_3}, \dots , C(j+1)_{2m,2m}, I_{s_{2m+1}}\right)\cdot P_{\overline{m}}.\end{split} \end{align} \end{theorem} \begin{figure}\begin{center} \includegraphics[width=6cm]{secteurs_tournes}\caption{The sectors $S_i$ (with black boundary) and $\widetilde{S}_i$ (with grey boundary) when $k=2$. The other lines are the separating rays.} \label{turned_sectors}\end{center}\end{figure} \begin{example}[Explicit computation in the case $k=1$ and $n=3$] Consider the case where $R_1,R_2,R_3$ are distinct, with $R_j=e^{i\phi_j}\mathbb R^+$. On $R_1$, let us suppose that the projections of $\lambda_2$ and $\lambda_3$ coincide. On $R_2$, the projections of $\lambda_1$ and $\lambda_3$ coincide, and on $R_3$ the projections of $\lambda_1$ and $\lambda_2$ coincide. Take starting rays $e^{i\theta_\ell}\mathbb R^+$, $\ell\in\{0,1,2,3\}$ such that \begin{eqnarray} 0=\theta_0<\phi_1<\theta_1<\phi_2<\theta_2<\phi_3<\theta_3=\pi. \end{eqnarray} Let us write $C(1)=C^+= (c_{ij}^+)$ and $C(2)= C^-= (c_{ij}^-)$.
Then choosing $R=e^{i\theta_\ell}\mathbb{R}^+$ as the starting ray, one gets a pair of Stokes matrices {\small \begin{equation} \begin{array}{r|cc} R & C(1)=C^+ & C(2)=C^-\\ \hline &&\\ \mathbb{R}^+ & \begin{pmatrix} c_{11}^+& c_{12}^+ & c_{13}^+ \\ 0 & c_{22}^+ & c_{23}^+ \\ 0 & 0 & c_{33}^+ \end{pmatrix} & \begin{pmatrix} c_{11}^- & 0 & 0 \\ c_{21}^- & c_{22}^- & 0 \\ c_{31}^- & c_{32}^- & c_{33}^- \\ \end{pmatrix} \\ &&\\ e^{i\theta_1}\mathbb{R}^+ & \begin{pmatrix} c_{11}^+ & c_{13}^+c_{33}^- & c_{22}^-c_{12}^++c_{32}^-c_{13} ^+\\ 0 & c_{33}^- & c_{32}^- \\ 0 & 0 & c_{22}^- \end{pmatrix} & \begin{pmatrix} c_{11}^- & 0 & 0 \\ \frac{c_{22}^-c_{31}^--c_{32}^-c_{21}^-}{c_{22}^-c_{33}^-} & c_{33}^+ & 0 \\ \frac{c_{21}^-}{c_{22}^-} & c_{23}^+ & c_{22}^+ \end{pmatrix} \\ &&\\ e^{i\theta_2}\mathbb{R}^+ & \begin{pmatrix} c_{33}^+ & \frac{c_{22}^-c_{31}^--c^-_{21}c^-_{32}}{c^-_{22}c^-_{33}} & \frac{c^-_{32}}{c^-_{33}} \\ 0 & c_{11}^- & \frac{c^-_{22}c^+_{12}}{c^+_{11}} \\ 0 & 0 & c_{22}^- \end{pmatrix} & \begin{pmatrix} c^-_{33} & 0 & 0 \\ c^-_{33}c^+_{13} & c^+_{11} & 0 \\ c^-_{33}\left(\frac{c^-_{21}c^+_{13}}{c^-_{22}}+c^+_{23}\right) & \frac{c^-_{21}c^+_{11}}{c^-_{22}} & c^+_{22} \end{pmatrix} \\ &&\\ -\mathbb{R}^+ & \begin{pmatrix} c_{33}^+ & \frac{c^-_{32}c^+_{22}}{c^-_{33}} & \frac{c^-_{31}c^+_{11}}{c^-_{33}} \\ 0 & c_{22}^+ & \frac{c^-_{21}c^+_{11}}{c^-_{22}} \\ 0 & 0 & c_{11}^+ \end{pmatrix} & \begin{pmatrix} c^-_{33} & 0 & 0 \\ \frac{c^-_{33}c^+_{23}}{c^+_{22}} & c^-_{22} & 0 \\ \frac{c^-_{33}c^+_{13}}{c^+_{11}} & \frac{c^-_{22}c^+_{12}}{c^+_{11}} & c^-_{11} \end{pmatrix} \end{array} \end{equation}} Let us call the Stokes matrices $\overline{C}^\pm$ for $\theta_3=\pi$. One would expect that $\overline{C}^\mp$ would be equal to $P_{0,3,0}C^\pm P_{0,3,0}$. This is not the case, but the difference comes from the fact that the matrices are only determined up to diagonal matrices. 
Indeed, \begin{align}\begin{split} D\overline{C}^+D_*^{-1}&=\begin{pmatrix} c^-_{33} & 0 & 0 \\ 0 & c^-_{22} & 0 \\ 0 & 0 & c^-_{11} \end{pmatrix} \begin{pmatrix} c_{33}^+ & \frac{c^-_{32}c^+_{22}}{c^-_{33}} & \frac{c^-_{31}c^+_{11}}{c^-_{33}} \\ 0 & c_{22}^+ & \frac{c^-_{21}c^+_{11}}{c^-_{22}} \\ 0 & 0 & c_{11}^+ \end{pmatrix} \begin{pmatrix} c^+_{33} & 0 & 0 \\ 0 & c^+_{22} & 0 \\ 0 & 0 & c^+_{11} \end{pmatrix}^{-1}\\ &=\begin{pmatrix} c^-_{33} & c^-_{32} & c^-_{31} \\ 0& c^-_{22} & c^-_{21}\\ 0 &0 & c^-_{11} \end{pmatrix}=P_{0,3,0}\,C^-\,P_{0,3,0}. \end{split} \end{align} Similarly we can show that $D_*\overline{C}^-D^{-1}= P_{0,3,0} \,C^+\,P_{0,3,0}$. \end{example} \subsection{Proof of the theorem} The first step is the reduction to the case $m=1$ (see \eqref{def:m} for the definition of $m$). This comes from the fact that the phenomena at each block of eigenvalues having equal projections on the critical ray $\overline{R}_1= e^{ik\phi_1}\mathbb R^+$ are independent. Suppose for instance that the theorem is proved when $m=1$, and consider the case $m=2$. Consider a perturbation of the system in which we multiply by $e^{i\varepsilon}$, for some small $\varepsilon$, the eigenvalues of the second block of eigenvalues which have equal projection on the critical ray $\overline{R}_1$. Then, when $\varepsilon$ is real, small and nonzero, we have two critical rays $\overline{R}_1$ and $\overline{R}_1'= e^{i\varepsilon}\overline{R}_1$, and two separating rays $R_1$ and $R_1'= e^{i\frac{\varepsilon}k}R_1$. For nonzero $\varepsilon$, the passage from the starting ray $\mathbb R^+$ to the starting ray $e^{i\theta} \mathbb R^+$ is obtained by applying Theorem~\ref{thm} twice: when $\varepsilon>0$, we first consider the change in the Stokes matrices when passing $R_1$ using Theorem~\ref{thm}; then we change $x\mapsto xe^{-i(\phi_1+\frac{\varepsilon}2)}$ and pass $R_1'$ using Theorem~\ref{thm} a second time. When $\varepsilon<0$, the passages are in the reverse order.
The two passages commute and the final result is independent of the sign of $\varepsilon$. Moreover, the construction of the Stokes matrices shows that they depend analytically on the eigenvalues. Hence, in the limit $\varepsilon\to 0$, the passage is the composition of the passages for each block of eigenvalues. The same reasoning can be done for any $m\geq2$. Hence, from now on, we treat the case $m=1$, i.e. $n= s_1+r_2+s_3$ and the eigenvalues $\lambda_j$ with $j\in[s_1+1, s_1+r_2]$ have equal projections on the critical ray $\overline{R}_1=e^{ik\phi_1}\mathbb R^+$. Let $$W(x)= \text{diag} (\omega_1(x), \dots , \omega_n(x))$$ be the diagonal fundamental matrix solution of the normal form of \eqref{equation} at 0. Hypothesis \eqref{hypothesis} implies that \begin{eqnarray} \omega_1\prec \omega_2\prec ...\prec \omega_n \end{eqnarray} on $\mathbb{R}^+$ and everywhere on $S_1\cap S_{2k}$. As a matter of fact, this ordering is precisely why we take \eqref{hypothesis} as a hypothesis. This order is respected on every thin intersection $S_{2j}\cap S_{2j+1}$ and is completely reversed on $S_{2j+1}\cap S_{2j+2}$. A direct consequence is that the Stokes matrices are alternately upper and lower triangular.\\ \\ This whole construction depends on the choice of $\mathbb{R}^+$ as our starting ray, but this choice is not canonical. \subsubsection{New order on the eigenvalues} We start by describing the changes induced by choosing $\widetilde{R}=e^{i\theta}\mathbb{R}^+$ instead of $\mathbb{R}^+$ as starting ray commanding the order of the eigenvalues. \begin{lemma} Let $R_1=e^{i\phi_1}\mathbb R^+$ be the first separating ray by order of increasing argument. Let us suppose that on the associated critical ray $\overline{R}_1=e^{ik\phi_1}\mathbb R^+$ precisely the following signed projections are equal \begin{equation}\mathfrak{R}(e^{-ik\phi_1} \lambda_{j_1})= \dots = \mathfrak{R}(e^{-ik\phi_1} \lambda_{j_{r_2}}),\label{equal_signed_proj}\end{equation} and the others are distinct.
If $j_1=s_1+1$, then $j_2=s_1+2, \dots,j_{r_2}=s_1+r_2$. Moreover, for $\phi_1<\theta<\phi_2$, the new order of the eigenvalues on $e^{i\theta}\mathbb R^+$ is obtained by completely reversing the order of the eigenvalues at positions $s_1+1$ to $s_1+r_2$ and leaving the others as they were. \end{lemma} \begin{proof} The signed projections on $e^{ik\theta}\mathbb{R}^+$ depend continuously on $\theta$, implying that if $j_1=s_1+1$, then $j_2=s_1+2, \dots,j_{r_2}=s_1+r_2$. Let us call $f(\theta,\lambda_j) = \mathfrak{R}(e^{-ik\theta}\lambda_j)$. Then $\frac{\partial}{\partial \theta} f(\theta,\lambda_j)= k\mathfrak{I} (e^{-ik\theta}\lambda_j).$ Since the $\lambda_\ell$ are distinct, and because \eqref{equal_signed_proj} is satisfied, it follows that the $\mathfrak{I} (e^{-ik\phi_1}\lambda_{j_i})$, $i=1,\dots,r_2$, are distinct. Since $\mathfrak{R}(e^{-ik\theta} \lambda_{j_1})>\dots > \mathfrak{R}(e^{-ik\theta} \lambda_{j_{r_2}})$ for $0\leq \theta<\phi_1$, then $\mathfrak{I} (e^{-ik\phi_1}\lambda_{j_1})< \dots <\mathfrak{I}(e^{-ik\phi_1}\lambda_{j_{r_2}})$, from which the conclusion follows. \end{proof} \begin{corollary} Let us suppose that on $R_1$, $m$ subsets of consecutive eigenvalues have equal projections. Then for $ \phi_1<\theta<\phi_2$, the new order of the eigenvalues on $\widetilde{R}$ is obtained from the order in \eqref{hypothesis} by completely reversing the order in each group. \end{corollary} \subsubsection{New sectors $\widetilde{S}_j$} Under \eqref{hypothesis}, let $0\leq \phi_1< \dots <\phi_N<2\pi $ be the angles of the separating rays, and let $$\delta= \min\{\phi_2-\phi_1, \dots, \phi_N-\phi_{N-1},\phi_1,2\pi -\phi_N\}>0.$$ The sectors $S_j$ can be chosen so that $$S_j=\left\{x\: ;\: |x|<r, \arg(x) \in\left( \frac{(j-1)\pi}{k}-\frac{\delta}4,\frac{j\pi}{k}+\frac{\delta}4\right)\right\}.$$ This definition allows simply defining the new sectors as \begin{equation} \widetilde{S}_j=e^{i\left(\phi_1+\frac{\delta}{2}\right)}S_j.
\end{equation} \subsubsection{New Stokes matrices} Let us now describe the Stokes matrices of the system using the starting ray $\widetilde{R}= e^{i\theta}\mathbb R^+$, i.e. the sectors $\widetilde{S}_j$, which we will denote by $\widetilde{C}(j)$. For that purpose we need to find, for each $j$, a new fundamental matrix solution $\widetilde{W}_j$ on $\widetilde{S}_j$, which exhibits the correct order of flatness on $\widetilde{S}_{j-1}\cap \widetilde{S}_{j}$ and $\widetilde{S}_{j}\cap \widetilde{S}_{j+1}$. Then we will have \begin{equation}\widetilde{C}(j)= \widetilde{W}_j^{-1}\widetilde{W}_{j+1}.\label{obtaining_Stokes}\end{equation} We claim that such a new fundamental matrix solution can be taken as \begin{equation} \widetilde{W}_j=W_j\begin{pmatrix} I_{s_1}& 0 & 0 \\ 0 & C(j)_{2,2} & 0 \\ 0 & 0 & I_{s_3} \end{pmatrix}P_{s_1,r_2,s_3}.\label{new_Stokes} \end{equation} Without loss of generality we can suppose that $j=1$, the other cases being similar. In this case, we need only prove that $\widetilde{W}_1$ is a fundamental matrix solution, the columns of which satisfy \begin{equation}\begin{cases} \widetilde{w}_1\prec \dots \prec\widetilde{w}_n, &\text{on}\:\:\widetilde{S}_{2k}\cap\widetilde{S}_{1},\\ \widetilde{w}_1\succ \dots\succ\widetilde{w}_n, &\text{on}\:\:\widetilde{S}_{1}\cap\widetilde{S}_{2}.\end{cases} \label{right_order} \end{equation} The proof uses the following facts: \begin{itemize} \item We know that such a fundamental matrix solution exists. This comes from the sectorial normalization theorem (\cite{S}) for the system \eqref{equation} after a change $x\mapsto e^{-i\theta} x$. \item Moreover, we know that a matrix $\widetilde{W}_j$, the columns of which satisfy \eqref{right_order}, is unique up to right multiplication by a diagonal matrix.
\end{itemize} Hence, once we show that the choice \eqref{new_Stokes} is the only possible choice (up to right multiplication by a diagonal matrix) that could meet the constraints \eqref{right_order}, we are sure that it indeed satisfies them. We discuss what occurs when we cross a separating ray. We say that we are \emph{before} (resp. \emph{after}) the separating ray $e^{i\phi}\mathbb R^+$ if we are in a region $\arg(x)<\phi$ (resp. $\arg(x)>\phi$). Also, note that each sector $S_j$ or $\widetilde{S}_j$ contains exactly one separating ray for each pair of eigenvalues. For instance, since $R_1$ is the separating ray inside $S_1$ and $\widetilde{S}_{2k}$ for any pair of eigenvalues within $\{\lambda_{s_1+1}, \dots, \lambda_{s_1+r_2}\}$, then $e^{i\left(\phi_1+ \frac{(j-1)\pi}{k}\right)}\mathbb R^+=e^{\frac{i(j-1)\pi}{k}}R_1$ is a separating ray for the same pair of eigenvalues inside $S_j$, and also inside the new sector $\widetilde{S}_{j-1}$. (This is a particular case of the general fact that if $R_p$ is some separating ray for some subset of eigenvalues inside a sector $S_\ell$, then $e^{\frac{is\pi}{k}}R_p$ is a separating ray for the same subset of eigenvalues inside $S_{\ell+s}$.) It is straightforward that the solutions $\widetilde{w}_\ell=w_\ell$ for $\ell=1,\dots,s_1$, are adequate because $R_1$ is a separating ray only for pairs of eigenvalues among $\{\lambda_{s_1+1},\dots,\lambda_{s_1+r_2}\}$. Hence, $\widetilde{w}_1\prec \dots \prec\widetilde{w}_{s_1}$ on $\widetilde{S}_{2k}\cap\widetilde{S}_{1}$ since it is the case on $S_{2k}\cap S_1$. Also, in $\widetilde{S}_1 \cap \widetilde{S}_2$, we have $\widetilde{w}_1\succ \dots\succ\widetilde{w}_{s_1}$ since we passed one separating ray for each pair of eigenvalues among $\lambda_1, \dots, \lambda_{s_1}$. Similarly, it is straightforward that the solutions $\widetilde{w}_\ell=w_\ell$ for indices $\ell=s_1+r_2+1,\dots,n$ are adequate.
Moreover, from \eqref{asympt_expansion} it is clear that on $\widetilde{S}_{2k}\cap\widetilde{S}_{1}$, for $ s_1+1\leq j\leq s_1+r_2$, \begin{equation}\widetilde{w}_1\prec\widetilde{w}_2 \prec \dots\prec\widetilde{w}_{s_1} \prec \widetilde{w}_{j}\prec \widetilde{w}_{s_1+r_2+1} \prec\dots\prec \widetilde{w}_n.\label{partial_asympt}\end{equation} This comes from the fact that $\widetilde{S}_{2k}\cap\widetilde{S}_{1}\subset S_1$, from \eqref{asympt_expansion}, and from the fact that we have only crossed the separating ray $R_1$. We have the asymptotic order reverse to \eqref{partial_asympt} on $\widetilde{S}_1 \cap \widetilde{S}_2$. Indeed, $\widetilde{S}_1 \cap \widetilde{S}_2\subset S_2$ is located after the separating ray $e^{i\frac{\pi}{k}}R_1$ (the second separating ray for the pairs of eigenvalues in the block), and before the second separating rays for the other pairs of eigenvalues. It remains to compare the solutions $\widetilde{w}_\ell$ for $\ell\in\{s_1+1,\dots,s_1+r_2\}$. If $C(1)=\left( c_{l,m}\right)_{l,m=1}^n$, then the linear combination \begin{equation} \overline{w}_\ell=w_{s_1+1}c_{s_1+1,\ell}+\dots+w_{s_1+r_2}c_{s_1+r_2,\ell}\label{def_w_ell} \end{equation} provides vectors that have exactly the same order of flatness with respect to the $w_j$ for $j\notin\{s_1+1, \dots, s_1+r_2\}$. We claim that they are ordered as: \begin{equation}\begin{cases} \overline{w}_{s_1+1}\succ \dots \succ\overline{w}_{s_1+r_2}, &\text{on}\:\:\widetilde{S}_{2k}\cap\widetilde{S}_{1},\\ \overline{w}_{s_1+1}\prec \dots\prec\overline{w}_{s_1+r_2}, &\text{on}\:\:\widetilde{S}_{1}\cap\widetilde{S}_{2}.\end{cases} \label{claim}\end{equation} Hence, reordering the vectors by letting $$ \widetilde{w}_{s_1+\ell} =\overline{w}_{s_1+r_2+1-\ell}$$ for $\ell\in\{1, \dots, r_2\}$, which corresponds to applying the permutation matrix $P_{0,r_2,0}$ to the part of the fundamental matrix solution corresponding to $w_{s_1+1}, \dots, w_{s_1+r_2}$ (i.e.
$P_{s_1,r_2,s_3}$ to the full $n\times n$ fundamental matrix solution), yields the theorem. Let us now prove the claim \eqref{claim}. The first part on $\widetilde{S}_{2k}\cap\widetilde{S}_{1}$ follows from \eqref{asympt_expansion} and the fact that we are after $R_1$. To derive the second conclusion, let $\{\widehat{w}_1, \dots ,\widehat{w}_n\}$ be the basis given by the fundamental matrix solution $W_2$ on $S_2$. Then the order of flatness of $\overline{w}_{s_1+1}, \dots,\overline{w}_{s_1+r_2}$ on $\widetilde{S}_{1}\cap\widetilde{S}_{2}$ is the same as that of $\widehat{w}_{s_1+1}, \dots ,\widehat{w}_{s_1+r_2} $ on $S_2\cap S_3$ because we passed $e^{\frac{\pi i}{k} }R_1$. Indeed, the difference $\widehat{w}_\ell-\overline{w}_\ell$ is a linear combination of the $w_i$ for $i>s_1+r_2$: $$\widehat{w}_\ell-\overline{w}_\ell = \sum_{i>s_1+r_2}b_{i\ell} w_i,$$ since $C(1)$ is lower triangular. From \eqref{asympt_expansion}, all these $w_i$ are flatter than $\overline{w}_\ell$ defined in \eqref{def_w_ell} after we have crossed a separating ray associated to them, which is the case on $\widetilde{S}_{1}\cap\widetilde{S}_{2}$. Hence $w_i\prec\overline{w}_\ell$ on $\widetilde{S}_1\cap\widetilde{S}_2$. This yields $\widehat{w}_{s_1+1}\prec \dots\prec\widehat{w}_{s_1+r_2}$ on $\widetilde{S}_{1}\cap\widetilde{S}_{2}\subset S_2$ since we are in $S_2$ and after $e^{\frac{\pi i}{k} }R_1$. Hence $\overline{w}_{s_1+1}\prec \dots\prec\overline{w}_{s_1+r_2}$ on $\widetilde{S}_{1}\cap\widetilde{S}_{2}$. \hfill $\Box$
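Finally, the conjugation formula of Theorem~\ref{thm} can be checked numerically. The sketch below (helper names \texttt{new\_stokes} and \texttt{\_perm} are ours) assembles $P_{\overline{m}}\cdot\mathrm{diag}(\dots,C(j)_{2i,2i}^{-1},\dots)\cdot C(j)\cdot\mathrm{diag}(\dots,C(j+1)_{2i,2i},\dots)\cdot P_{\overline{m}}$ for a partition $\overline m$; on the $k=1$, $n=3$ example above with $\overline m=(1,2,0)$ it reproduces the matrices tabulated for the starting ray $e^{i\theta_1}\mathbb R^+$.

```python
import numpy as np

def _perm(m_bar):
    # block-diagonal P_{m_bar}: identity blocks (even positions), flips J (odd positions)
    n = sum(m_bar)
    P = np.zeros((n, n))
    pos = 0
    for idx, s in enumerate(m_bar):
        P[pos:pos + s, pos:pos + s] = np.eye(s) if idx % 2 == 0 else np.fliplr(np.eye(s))
        pos += s
    return P

def new_stokes(C_j, C_next, m_bar):
    """Transformation of the theorem: conjugate C(j) on the left by the inverses
    of its own diagonal r-blocks, on the right by the diagonal r-blocks of
    C(j+1), then by P_{m_bar} on both sides."""
    n = sum(m_bar)
    L, R = np.eye(n), np.eye(n)
    pos = 0
    for idx, s in enumerate(m_bar):
        if idx % 2 == 1:  # r-block: C(j)_{2i,2i}^{-1} on the left, C(j+1)_{2i,2i} on the right
            L[pos:pos + s, pos:pos + s] = np.linalg.inv(C_j[pos:pos + s, pos:pos + s])
            R[pos:pos + s, pos:pos + s] = C_next[pos:pos + s, pos:pos + s]
        pos += s
    P = _perm(m_bar)
    return P @ L @ C_j @ R @ P
```

With $C(1)=C^+$ upper and $C(2)=C^-$ lower triangular as in the example, \texttt{new\_stokes(Cp, Cm, (1, 2, 0))} and \texttt{new\_stokes(Cm, Cp, (1, 2, 0))} reproduce, entry by entry, the row of the table corresponding to $e^{i\theta_1}\mathbb{R}^+$.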
\section{Introduction} \IEEEPARstart{S}{ign} language is a basic communication tool between hearing people and hearing-impaired people, or among hearing-impaired people themselves\cite{wei2020semantic}. The transmission of sign language information includes not only gestures and hand shapes, but also facial expressions and body postures. Sign language also has its own vocabulary, like spoken language. When people communicate in sign language, vocabulary information is conveyed through a single gesture or a group of gestures, which serve as the bridge of information\cite{rastgoo2021sign}\cite{elakkiya2021machine}.\par In the early days, video-based sign language recognition was used to identify isolated sign language words: a video clip corresponds to one sign language word, without considering the continuity of sign language. To recognize sign language more accurately, researchers studied continuous sign language recognition (CSLR), whose purpose is to convert a sign language video into a sequence of sign language words. Because frame-level annotation is extremely costly when creating continuous sign language video datasets, such datasets often have only video-level annotations and no frame-level annotations, which is why researchers usually regard CSLR as weakly supervised learning\cite{koller2017re}. Given that CSLR datasets have no frame-level annotations, many deep learning models have been proposed and applied in CSLR\cite{han2022sign}\cite{khedkar2021analysis}\cite{adaloglou2021comprehensive}\cite{wadhawan2021sign}.
According to the methods of spatiotemporal feature extraction in these models, we divide the deep learning-based models for CSLR into two classes: one is the spatial-temporal hierarchical model, which first extracts feature information from frame-level images, then extracts temporal feature information from the resulting frame-level feature sequences, and finally performs identification and classification. The other is the non-spatial-temporal hierarchical model, which directly extracts spatial and temporal feature information from the video for identification and classification.\par \begin{figure*} \begin{center} \includegraphics[width=3.5in]{pdf/figures1.pdf}\\ \caption{Example of Autocorrelation Matrix Visualization Plot for Frame-Level Feature Sequences of Video Data.}\label{fig:ljxy1} \end{center} \end{figure*} In the spatial-temporal hierarchical model, the extraction of spatial and temporal information from continuous sign language data is performed in series. Usually, a convolutional neural network (CNN)\cite{krizhevsky2012imagenet} is used for spatial embedding, extracting high-dimensional feature information from low-dimensional image information, and then a recurrent neural network (RNN)\cite{cui2019deep}, a Transformer\cite{de2020sign} or a Long Short-Term Memory (LSTM) network\cite{tran2020overall} is used for processing in the temporal dimension to obtain high-dimensional sparse semantic information for final recognition and classification. The spatial-temporal hierarchical model is simple, has few parameters and a clear layered structure, and is currently the mainstream direction of model research in CSLR. However, because the spatial and temporal information are extracted and fused separately, there will inevitably be a loss of spatial and temporal information in this process, which affects the final recognition accuracy.
In the non-spatial-temporal hierarchical model, the spatial and temporal information extraction is carried out in parallel, that is, the data is processed in the spatial and temporal dimensions at the same time, usually using 3D-CNN\cite{li2020spatio}\cite{huang2018attention}, 2+1D-CNN\cite{koishybay2021continuous}, spatial-temporal Transformer\cite{liu2021st} and other methods. The non-spatial-temporal hierarchical model can obtain more spatial-temporal information, but it has many parameters and is cumbersome. Since reducing the computational load inevitably causes some loss of accuracy, this paper aims to reduce the computational load and improve real-time performance while keeping the accuracy loss within an acceptable range. This paper mainly focuses on the deep learning-based spatial-temporal hierarchical CSLR model. In our research on the spatial-temporal hierarchical model, we found that: 1) The images between adjacent frames have high similarity in content; as shown in Figure 1, after feature extraction of frame-level images, adjacent feature vectors in the generated autocorrelation matrix have high similarity. 2) In the spatial-temporal hierarchical model, most of the computation of the model is concentrated on the extraction of video frame-level features, and the feature information extraction for each frame of the video is independent of the others. Therefore, this paper reduces the computational complexity of the model by reconstructing sparse data into dense feature sequences while keeping the original spatial-temporal hierarchical model unchanged.\par In this paper, a temporal super-resolution network (TSRNet) is proposed to reconstruct a sparse feature sequence into a dense feature sequence. It mainly includes two branches, a detailed descriptor and a rough descriptor, and the two types of features obtained from the two branches are fused to form the reconstructed features.
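Finding 1) can be checked directly: for a $(T,d)$ matrix of frame-level features, the autocorrelation matrix visualized in Figure 1 has large entries near the diagonal when adjacent frames are similar. A minimal numpy sketch, using cosine similarity as the (assumed) similarity measure:

```python
import numpy as np

def cosine_autocorrelation(feats):
    """Pairwise cosine similarity of the rows of a (T, d) frame-level
    feature sequence; entry (i, j) compares the features of frames i and j."""
    F = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return F @ F.T
```

Wide high-similarity bands around the diagonal of this matrix are what motivates sparse sampling followed by dense reconstruction.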
The CSLR model based on this network builds on the spatial-temporal hierarchical CSLR model MSTNet\cite{zhu2022multi} proposed in our previous research. First, the source video data is sparsely sampled to reduce the computational load of the frame-level feature extraction part in the spatial-temporal hierarchical model; then dense frame-level feature sequence reconstruction is performed through the TSRNet, and after the time-series feature extraction part the CTC loss\cite{graves2006connectionist} is used for training optimization. The overall model is trained by the self-generating adversarial method proposed in this paper: the TSRNet is regarded as the generator, and the frame-level processing part and the time series processing part are regarded as the discriminator. The training process is divided into two steps. In addition, this paper also proposes WERD as a new criterion for evaluating the effectiveness of the proposed network: the error rate between the estimated WER, obtained from the reconstructed frame-level feature sequence, and the reference WER, obtained from the complete original frame-level feature sequence, is taken as WERD.\par The three main contributions of this paper are as follows:\par \begin{itemize} \item[$\bullet$] A TSRNet is proposed, which greatly reduces the computational complexity of the original spatial-temporal hierarchical CSLR model, and the network can be flexibly inserted into any spatial-temporal hierarchical CSLR model. \item[$\bullet$] WERD is proposed as a new criterion to unify the measurement of model accuracy loss under different benchmarks. \item[$\bullet$] A self-generating adversarial training method is proposed to reduce the final error.
\end{itemize} \section{Related work} This section reviews related research from two aspects: existing CSLR methods, and related video super-resolution methods.\par \subsection {Continuous Sign Language Recognition} Video-based CSLR is the translation of continuous sign language videos into understandable written phrases or spoken words. In the early days of CSLR, methods such as the Hidden Markov Model (HMM) were usually used for recognition, but with the development of deep learning, CNNs were introduced into CSLR and combined with HMMs to form a "CNN+HMM" hybrid model\cite{koller2017re}\cite{koller2016deep}. Koller et al.\cite{koller2018deep} embedded a CNN into an HMM end-to-end while interpreting the CNN output in a Bayesian framework; the hybrid CNN-HMM combines the strong discriminative ability of CNNs with the sequence modeling ability of HMMs. Camgoz et al.\cite{koller2019weakly} follow a hybrid approach to embed a powerful CNN-LSTM model into each HMM stream to discover attributes that by themselves lack sufficient discriminative power for recognition. With the emergence of RNNs and CTC, the hybrid "CNN+RNN+CTC" model, using an RNN instead of an HMM, became widely used in continuous sign language recognition\cite{wei2020semantic}\cite{al2021deep}. The hybrid model mainly uses a 2D-CNN for frame-level feature extraction, then uses an RNN for time series processing, and finally uses CTC for training and decoding. Huang et al.\cite{huang2021boundary} developed a novel boundary-adaptive encoder-based approach for sign language recognition combined with window attention, achieving competitive results on popular benchmarks. Gao et al.\cite{gao2021rnn} proposed an efficient RNN transducer-based approach for Chinese sign language processing, and designed a multi-level visual-level transcription network with frame-level, lexical-level and phrase-level BiLSTMs to explore multi-scale visual-semantic features.
Min et al.\cite{min2021visual} proposed visual alignment constraints to make CSLR networks end-to-end trainable by enforcing the feature extractor to predict with more alignment supervision, addressing the overfitting problem of CTC in sign language recognition. These methods can be classified as spatial-temporal hierarchical models and are the most widely used CSLR methods.\par In addition, some non-spatial-temporal hierarchical models such as "3D-CNN" and "2+1D-CNN" are also used in CSLR. Although the hybrid model of "CNN+RNN+CTC" can effectively recognize continuous sign language, the extraction of spatial and temporal features is separated. In order to extract spatial-temporal features more effectively, the 3D-CNN method is applied in CSLR\cite{sharma2021asl}\cite{han2022efficient}. Ariesta et al.\cite{ariesta2018sentence} proposed a deep-learning sentence-level sign language recognition method combining a 3D-CNN and a Bi-RNN. Specifically, a 3D-CNN is used to extract features from each video frame, a Bi-RNN is used to extract unique features from the sequential behavior of video frames, and then a possible sentence is generated. Han et al.\cite{han2022sign} used a "2+1D-CNN" for feature extraction and proposed a lightweight spatiotemporal channel attention module, consisting of two sub-modules, channel-temporal attention and spatiotemporal attention; combining squeeze-and-excitation attention with self-attention enables the network to focus on important information in the spatial, temporal, and channel dimensions.\par This paper mainly conducts research on the basis of the spatial-temporal hierarchical model. First, frame-level feature extraction is performed, then feature reconstruction is performed through a TSRNet, and then time-series feature extraction is performed.
Finally, CTC loss is used for recognition and classification.\par \subsection {Video Super Resolution} Video super-resolution is an extension of the image super-resolution task: it restores low-resolution video frames to high-resolution ones and can exploit inter-frame information during restoration to obtain better performance. Video super-resolution methods can be divided into two classes according to whether adjacent frames are aligned with the target frame: alignment methods and non-alignment methods. Alignment methods include motion compensation and deformable convolution methods, while non-alignment methods include spatial and spatial-temporal non-alignment methods\cite{liu2022video}\cite{song2021multi}\cite{song2022learning}\cite{zhu2021video}. These methods all aim to restore the spatial resolution of the video. There is another line of work aimed at restoring the temporal resolution of the video, namely video frame interpolation. Unlike methods that restore a video from low to high spatial resolution, it must make full use of inter-frame information to restore the current frame in the time dimension. Cheng et al.\cite{cheng2021multiple} proposed an enhanced deformable separable network for video frame interpolation that processes adaptive offsets, kernels, masks, and biases learned from information in non-local neighborhoods; it has fewer parameters and improves the performance of kernel-based methods. Kalluri et al.\cite{kalluri2020flavr} proposed a fully end-to-end trainable flow-free method for multi-frame video interpolation, which implicitly infers nonlinear motion trajectories and complex occlusions from unlabeled videos and greatly simplifies the training, testing, and deployment of frame interpolation models.
In this study, our proposed method is most similar to video frame interpolation: it uses inter-frame feature information to recover adjacent features in the temporal dimension by interpolation.\par \begin{figure*} \begin{center} \includegraphics[width=5in]{pdf/figures2.pdf}\\ \caption{Overall architecture diagram of the continuous sign language recognition model via temporal super-resolution network.}\label{fig:ljxy2} \end{center} \end{figure*} \section{Methodology} The overall architecture of the CSLR model via TSRNet proposed in this paper is shown in Figure 2. The model consists of three main parts: frame-level feature extraction, time series processing, and the TSRNet. In the training stage, the input sign language video first passes through the frame-level feature extraction part to obtain the frame-level feature sequence. After down-sampling, a dense frame-level feature sequence is reconstructed by the TSRNet. The final time series features are then obtained through the time series processing part, and CTC loss is used for training optimization to obtain the sign language recognition results. The entire training phase uses our proposed self-generating adversarial training method, in which the temporal super-resolution network is regarded as the generator and the frame-level and temporal processing parts are regarded as the discriminator. The network is trained using the down-sampled frame-level feature sequence as the input of the temporal super-resolution network, and training is divided into two steps: the first step trains the spatial-temporal hierarchical model, and the second step trains the TSRNet. In the testing stage, the position of down-sampling differs from that in the training stage.
At test time, the input sign language video is down-sampled directly, and the frame-level feature sequence is then obtained through the frame-level feature extraction part. A dense frame-level feature sequence is reconstructed by the temporal super-resolution network TSRNet and passed through the time series processing part; the resulting temporal features are decoded with CTC to obtain the final sign language recognition result.\par The CSLR model via temporal super-resolution network is based on our previous work MSTNet\cite{zhu2022multi}. The frame-level feature extraction part and the time-series feature extraction part of the model are consistent with MSTNet; that is, frame-level features are based on ResNet-34, and time-series features are extracted using the "1DCNN+Transformer" coding structure. To further reduce the computational cost of the model and improve its real-time performance, this paper proposes the TSRNet. The details of the TSRNet are introduced in Section A, and the proposed training and testing methods are described in detail in Section C.\par \subsection {Temporal Super-Resolution Network} The temporal super-resolution network mainly includes a detail descriptor and a coarse descriptor, as shown in Figure 3. The main branch is the detail descriptor: for the frame-level feature sequence with a unified channel dimension, a detailed description of the feature sequence is obtained through several 1-D residual blocks and a 1-D transposed convolution. The other branch is the coarse descriptor, which directly up-samples the frame-level feature sequence to obtain a coarse description. The input of the temporal super-resolution network is the sparse frame-level feature sequence extracted by the spatial-temporal hierarchical model, and the output is the feature sequence obtained by fusing the detailed and coarse features.
The output feature sequence is the reconstructed dense frame-level feature sequence, which serves as the input of the time-series processing part of the subsequent spatial-temporal hierarchical model.\par \begin{figure*} \begin{center} \includegraphics[width=5in]{pdf/figures3.pdf}\\ \caption{Framework of the Temporal Super-Resolution Network.}\label{fig:ljxy3} \end{center} \end{figure*} For an input sign language video $V=(x_1,x_2,...,x_T)=\{{x_t|_1^T\in \mathbb{R}^{T\times c\times h\times w}}\}$ containing $T$ frames, where $x_t$ is the $t$-th frame of the video, $h\times w$ is the size of $x_t$, and $c$ is the number of channels, $V$ passes through the frame-level feature extractor $F_s$ of the spatial-temporal hierarchical model to obtain the feature expression:\par \begin{equation} f_1 = F_s(V)\in \mathbb{R}^{T\times c_1} \end{equation} where $c_1$ is the number of channels after feature extraction.\par The dense frame-level feature sequence $f_1$ is down-sampled by a factor of $n$ to obtain a sparse frame-level feature sequence, and the time and channel dimensions are then exchanged to obtain the feature sequence $f_2\in \mathbb{R}^{c_1\times T_1}$, where $T_1=T/n$.\par The sparse feature sequence $f_2$ is the input of the temporal super-resolution network and is fed to its two branches, the detail descriptor and the coarse descriptor.\par In the detail descriptor branch, the number of channels of the sparse feature sequence is first increased by a 1D-CNN, followed by batch normalization and an activation function, yielding the sparse feature sequence $f_3\in \mathbb{R}^{c_2\times T_1}$. The activation function $\sigma$ is ReLU, and $c_2$ is the number of channels after the dimension increase.
This dimension-raising process can be described as:\par \begin{equation} f_3 = \sigma(BN(\text{1D-CNN}(f_2)))\in \mathbb{R}^{c_2\times T_1} \end{equation} Denoting the entire dimension-raising process by $F_{1DCNN-Relu}$, equation (2) can be written as:\par \begin{equation} f_3 = F_{1DCNN-Relu}(f_2)\in \mathbb{R}^{c_2\times T_1} \end{equation} Then, detailed features are extracted from the sparse feature sequence $f_3$ through several 1D-ResBlocks to obtain the sparse feature sequence $f_3^{'}$, whose number of channels and time length remain the same as those of $f_3$:\par \begin{equation} f_3^{'} = F_{Res_m}(f_3)\in \mathbb{R}^{c_2\times T_1} \end{equation} where $F_{Res_m}$ denotes a stack of $m$ 1D-ResBlocks.\par After that, a 1-dimensional transposed convolution is applied to the sparse feature sequence $f_3^{'}$ in the time dimension, up-sampling it by a factor of $n$ to obtain the dense feature sequence $f_4\in \mathbb{R}^{c_3\times T}$; the number of channels is also increased, where $c_3$ is the number of channels after this step and $T$ is the original video frame length.
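For intuition, factor-$n$ temporal up-sampling can be illustrated in plain Python on a toy channels-by-time feature sequence. Nearest-neighbour repetition is used here as a simple stand-in for the learned 1-D transposed convolution; the function name is ours, not from the paper's code:

```python
def upsample_time(features, n):
    """Up-sample a c x T1 feature sequence to c x (n * T1) along time
    by repeating each time step n times (nearest-neighbour style)."""
    return [[v for v in row for _ in range(n)] for row in features]

# Toy sequence with c1 = 2 channels and T1 = 2 time steps.
f_sparse = [[1.0, 2.0],
            [3.0, 4.0]]
f_dense = upsample_time(f_sparse, 3)  # now c1 x T with T = 6
```

A learned transposed convolution additionally mixes neighbouring time steps with trainable weights, but the shape arithmetic ($T_1 \rightarrow n \cdot T_1$) is the same.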
For $f_4$, the process of formula (4) is performed again: features are extracted through $k$ 1D-ResBlocks to obtain the dense feature sequence $f_4^{'}=F_{Res_k}(f_4)\in \mathbb{R}^{c_3\times T}$, whose number of channels and time length remain unchanged.\par $f_4^{'}$ then passes through a 1D-CNN for dimensionality reduction, restoring the number of channels to that of the input feature sequence, followed by batch normalization, which yields the dense feature sequence $f_5$:\par \begin{equation} f_5 = BN(\text{1D-CNN}(f_4^{'}))\in \mathbb{R}^{c_1\times T} \end{equation} Denoting the entire dimensionality-reduction process by $F_{1DCNN}$, equation (5) can be written as:\par \begin{equation} f_5 = F_{1DCNN}(f_4^{'})\in \mathbb{R}^{c_1\times T} \end{equation} In the coarse descriptor branch, only nearest-neighbor interpolation $F_{nearest}$ is applied to the sparse feature sequence $f_2$, up-sampling it by a factor of $n$ in the time dimension to produce the dense feature sequence $f_2^{'}\in \mathbb{R}^{c_1\times T}$:\par \begin{equation} f_2^{'} =F_{nearest}(f_2)\in \mathbb{R}^{c_1\times T} \end{equation} Finally, the dense feature sequences $f_2^{'}$ and $f_5$ obtained from the two branches are fused, and the final dense feature sequence is obtained through the activation function; $f_{Result}$ is the dense frame-level feature sequence reconstructed by the temporal super-resolution network:\par \begin{equation} f_{Result} =\sigma(f_2^{'}+f_5) \end{equation} The ResBlock, an important component of our proposed TSRNet, was originally proposed by He et al.\cite{he2016deep} in 2016 to address the problem of network degradation, i.e., that accuracy decreases as the network deepens. The ResBlock effectively alleviates this problem and allows the number of network layers to increase dramatically. Initially, ResBlock was used in classification tasks.
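The residual computation that $F_{Res_m}$ and $F_{Res_k}$ stack can be sketched in plain Python. This toy version applies a fixed smoothing kernel per channel in place of the learned 1-D convolutions, so it only illustrates the skip-connection structure; all names are illustrative:

```python
def conv1d_same(row, kernel):
    """'Same'-padded 1-D convolution of a single channel (length preserved)."""
    k, pad = len(kernel), len(kernel) // 2
    padded = [0.0] * pad + list(row) + [0.0] * pad
    return [sum(kernel[j] * padded[t + j] for j in range(k))
            for t in range(len(row))]

def resblock_1d(features, kernel=(0.25, 0.5, 0.25)):
    """y = ReLU(conv(x) + x), applied channel-wise on a c x T sequence."""
    return [[max(0.0, c + x) for c, x in zip(conv1d_same(row, kernel), row)]
            for row in features]
```

Because the output adds the convolution result back onto the input, the identity path lets gradients bypass the convolution entirely, which is what mitigates the degradation problem described above.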
Due to its excellent performance, it was introduced into other related fields and achieved excellent results. In ConvNeXt, Liu et al.\cite{liu2022convnet} verified that replacing ordinary convolution with depth-wise convolution in the ResBlock has little effect on model performance while greatly reducing the number of parameters. This paper introduces the ResBlock structure into our model; because we deal with 1D temporal data, we use 1D depth-wise convolution in the 1D-ResBlock, whose structure is shown in Figure 4.\par \begin{figure} \begin{center} \includegraphics[width=1in]{pdf/figures4.pdf}\\ \caption{Structure of 1D-ResBlock.}\label{fig:ljxy4} \end{center} \end{figure} The mapping of a 1D-ResBlock layer is: \begin{equation} y =\sigma(BN(F_d(x)+x)) \end{equation} where $x$ is the input data, $y$ is the output data, and $F_d(\cdot)$ is the 1D depth-wise convolution.\par \subsection {Connectionist Temporal Classification} CSLR is a weakly supervised learning problem: the input video is an unsegmented sequence and lacks a strict correspondence between video frames and the label sequence. After the input video sequence is encoded, CTC is well suited as the decoder. CTC was originally designed for speech recognition, mainly to perform end-to-end temporal classification of unsegmented data and to address the mismatch between input and output sequence lengths. In recent years, it has often been used in CSLR.
In the CSLR model via TSRNet proposed in this paper, CTC loss is used in the training stage to optimize the model on the final dense feature sequence, and CTC decoding is applied in the testing stage to the final time series features to obtain the sign language recognition result.\par CTC introduces a blank label $\{-\}$ to mark unclassified labels during decoding, that is, anything in the input video clip that does not correspond to a word in the sign language vocabulary; this allows the input and output sequences to be matched, and dynamic programming is used for decoding\cite{min2021visual}.\par For an input video $V$ of $T$ frames, the per-frame labels are denoted $\pi=(\pi_1,\pi_2,...,\pi_T)$, where $\pi_t\in v\cup \{-\}$ and $v$ is the sign language vocabulary. The posterior probability of the label sequence is:\par \begin{equation} p(\pi|V)=\prod_{t=1}^{T}p(\pi_t|V)=\prod_{t=1}^{T}Y_{t,\pi_t} \end{equation} where $Y_{t,\pi_t}$ denotes the output probability of label $\pi_t$ at time step $t$. For a given sentence-level label $s=(s_1,s_2,...,s_L)$, where $L$ is the number of words in the sentence, CTC defines a many-to-one mapping $B$ that removes blank labels and duplicate labels from a path (e.g., $B(-g-re-e-n-)=B(-gr-e-en-)=green$). The conditional probability of label $s$ is then the sum of the probabilities of all corresponding paths:\par \begin{equation} p(s|V)=\sum_{\pi\in B^{-1}(s)}p(\pi|V) \end{equation} where $B^{-1}(s)=\{\pi|B(\pi)=s\}$ is the inverse mapping of $B$.
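The mapping $B$ (merge adjacent duplicate labels, then drop blanks) is easy to state in code; a small sketch with an illustrative function name:

```python
BLANK = "-"

def ctc_collapse(path):
    """CTC mapping B: merge adjacent duplicates, then remove blanks."""
    merged = [p for i, p in enumerate(path) if i == 0 or p != path[i - 1]]
    return [p for p in merged if p != BLANK]

# Both example paths from the text collapse to the same label sequence.
assert ctc_collapse(list("-g-re-e-n-")) == list("green")
assert ctc_collapse(list("-gr-e-en-")) == list("green")
```

Note that the blank between the two `e` labels is what allows the repeated letter to survive the merge step.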
CTC loss is defined as the negative log-likelihood of the conditional probability of $s$:\par \begin{equation} L_{CTC}=-\ln p(s|V) \end{equation} The final sign language recognition result is obtained by CTC decoding after normalization by the Softmax function.\par \begin{figure} \begin{center} \includegraphics[width=2.5in]{pdf/figures5.pdf}\\ \caption{The spatial-temporal hierarchical model architecture.}\label{fig:ljxy5} \end{center} \end{figure} \subsection {Model Training and Testing Process} The TSRNet proposed in this paper is a super-resolution model. Super-resolution models are commonly trained with an L1 or L2 loss, which measures the accuracy of the estimated value by its distance to the reference value, i.e., the gap at the level of individual data points\cite{dong2015image}\cite{wang2019end}. However, what this paper reconstructs is a feature vector, and considering only the element-wise gap is bound to give unsatisfactory results: each feature vector is a whole that represents integrated high-dimensional information, and its values may be very small after multiple stages of feature extraction, so the similarity between vectors must also be considered. This paper draws on the training method of generative adversarial networks (GAN)\cite{radford2015unsupervised}, applies it to our model training, and proposes a self-generating adversarial training method to train the temporal super-resolution network, which greatly reduces the final error rate. The training and testing processes of the self-generating adversarial training method are described in detail below.\par \emph{Training:} During training, we use the self-generating adversarial training method to train the temporal super-resolution network. We regard the temporal super-resolution network as the generator and the spatial-temporal hierarchical model as the discriminator.
First, the original sign language video is fed into the frame-level feature extraction part of the spatial-temporal hierarchical model to obtain the frame-level feature sequence, and its down-sampled version is used as the input of the temporal super-resolution network for training. Training is divided into two steps: the first step trains the spatial-temporal hierarchical model, and the second step trains the TSRNet.\par \begin{figure*} \begin{center} \includegraphics[width=5in]{pdf/figures6.pdf}\\ \caption{Overall model architecture after inserting the temporal super-resolution network.}\label{fig:ljxy6} \end{center} \end{figure*} Step 1: Train the spatial-temporal hierarchical model, as shown in Figure 5. The original sign language video is used as the input of the CNN to extract frame-level features, and the processed time-series features are then obtained through the time-series processing module. Finally, with the video-level phrase annotations as labels, the model is trained with CTC loss, and the resulting model is used as the discriminator.\par Step 2: Train the TSRNet, as shown in Figure 6. First, the TSRNet is inserted between the frame-level feature extraction part and the time-series feature extraction part of the spatial-temporal hierarchical model; the parameters of the spatial-temporal hierarchical model trained in the first step are frozen, and only the TSRNet parameters are trained. Then, with the original sign language video as input, the frame-level feature sequence is obtained through the frame-level feature extraction part and sparsified according to the set down-sampling factor. The sparse frame-level feature sequence is then input into the temporal super-resolution network for reconstruction, yielding the reconstructed dense frame-level feature sequence.
Finally, this sequence is input into the time-series processing part of the spatial-temporal hierarchical model to obtain the final time-series features, which are trained with CTC loss according to the phrase annotations.\par To increase robustness, proportional random sampling is used during sparsification. Assuming the down-sampling factor is 4 and the frame-level feature sequence has length $T$, the dense frame-level feature sequence is sliced with a width of 4 into $n$ segments, where $n=T/4$. A random feature vector is then taken from each segment, and these vectors form the sparse feature sequence that is the input of the TSRNet.\par \emph{Testing:} During testing, the difference from training is the location of down-sampling. First, the input video is directly down-sampled to obtain sparse video data, which is fed to the frame-level extraction part of the spatial-temporal hierarchical model to obtain frame-level features. A dense frame-level feature sequence is then reconstructed by the TSRNet and input into the temporal processing part of the spatial-temporal hierarchical model. Finally, the final recognition result is obtained from the resulting temporal features through CTC decoding, as shown in the test part of Figure 2.\par \section{Experiment} In this section, we conduct comprehensive experiments on two widely used sign language recognition datasets to verify the effectiveness of the proposed model, and a series of ablation experiments demonstrate the effect of each of its components. The evaluation criteria proposed in this paper are described in detail in Section C.\par \subsection {Dataset} RWTH-PHOENIX-Weather-2014 (RWTH) dataset\cite{koller2015continuous}: RWTH is recorded from public weather broadcasts on German television.
All presenters wear dark clothing and perform sign language in front of a clean background. The videos in this dataset are recorded by 9 different presenters and contain a total of 6841 different signed sentences (with 77321 sign language word instances and a vocabulary of 1232 words). All videos are preprocessed to a resolution of $210\times 260$ and a frame rate of 25 frames per second (FPS). The dataset is officially divided into 5,672 training samples, 540 validation samples, and 629 test samples.\par Chinese Sign Language (CSL) dataset\cite{huang2018video}: CSL is captured with a Microsoft Kinect camera and contains 100 sentences of everyday Chinese language; each sentence is demonstrated 5 times by each of 50 presenters, with a vocabulary size of 178. The video resolution is $1280\times 720$ and the frame rate is 30 FPS. The performance diversity of this dataset is richer because the presenters wear different clothes and demonstrate with different speeds and ranges of motion. In the absence of an official split, we divide the CSL dataset into a training set and a test set according to an 8:2 rule.
The training set accounts for 80\% and the test set for 20\%, that is, 20,000 training samples and 5,000 test samples; the sentences in the training and test sets are the same, but the presenters are different.\par \begin{table*}[!htbp] \centering \caption{Comparison of the experimental results of TSRNet on the RWTH dataset and the other two methods} \label{tab:aStrangeTable1} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c} \hline \multirow{3}{*}{Down-sampling factor}& \multicolumn{4}{c|}{TSRNet}& \multicolumn{4}{c|}{Nearest neighbor interpolation}& \multicolumn{4}{c}{Linear interpolation}\\ \cline{2-13} & \multicolumn{2}{c|}{WER(\%)}& \multicolumn{2}{c|}{WERD(\%)}& \multicolumn{2}{c|}{WER(\%)}& \multicolumn{2}{c|}{WERD(\%)}& \multicolumn{2}{c|}{WER(\%)}& \multicolumn{2}{c}{WERD(\%)}\\ \cline{2-13} & dev& test& dev& test& dev& test& dev& test& dev& test& dev& test\\ \hline 1& 20.3& 21.4& 0& 0& 20.3& 21.4& 0& 0& 20.3& 21.4& 0& 0\\ \hline 2& 20.7& 21.5& 1.9& 0.5& 21.8& 22.3& 7.0& 4.3& 22.9& 23.4& 12.3& 9.5\\ \hline 3& 21.1& 22.2& 3.8& 3.8& 24.8& 25.5& 21.1& 19.3& 26.2& 26.4& 27.4& 23.4\\ \hline 4& 23.4& 24.7& 14.7& 15.6& 26.3& 27.4& 27.8& 27.8& 29.5& 30.3& 41.2& 40.0\\ \hline 5& 25.4& 25.3& 23.8& 18.4& 30.5& 30.2& 45.1& 39.6& 33.9& 33.5& 57.0& 52.0\\ \hline 6& 28.2& 28.9& 36.0& 34.3& 33.5& 33.9& 55.7& 53.4& 38.3& 38.4& 69.5& 67.0\\ \hline 7& 31.1& 31.5& 47.4& 44.7& 38.5& 38.9& 70.0& 68.3& 44.4& 44.0& 81.7& 79.2\\ \hline 8& 35.3& 34.9& 61.4& 56.7& 43.8& 43.4& 80.8& 78.1& 51.1& 50.3& 89.9& 88.0\\ \hline \end{tabular} \end{table*} \subsection {Implementation Rules} In the overall model of this paper, the Adam optimizer\cite{kingma2014adam} is used for training, the initial learning rate and weight factor are set to $10^{-4}$, and the batch size is 2. During model training, random cropping and random flipping are used for data augmentation.
For random cropping, each video frame is first resized to $256\times 256$ and then randomly cropped to $224\times 224$ to fit the input shape. For random flipping, the flip probability is set to 0.5. Flipping and cropping are applied to whole video sequences. In addition, temporal augmentation randomly lengthens or shortens the video sequence by up to $\pm 20\%$. During model testing, only center cropping is used, and the beam search algorithm with a beam width of 10 is used in the final CTC decoding stage. For the RWTH dataset, the training phase lasts 30 epochs, and the learning rate is reduced by 80\% at the 10th and 20th epochs. For the CSL dataset, the training phase lasts 15 epochs, and the learning rate is reduced by 80\% at the 5th and 10th epochs. The experiments are run on an RTX 2080Ti graphics card with 12G of dedicated GPU memory, 8G of CPU memory, and 4 CPU cores.\par \subsection {Judgment Criteria} The word error rate (WER) is widely used as the criterion for CSLR\cite{koller2015continuous}. It is the minimum number of insertion, substitution, and deletion operations required to convert the recognized sequence into the standard reference sequence; a lower WER means better recognition performance. It is defined as follows:\par \begin{equation} WER=100\%\times \frac{ins+del+sub}{sum} \end{equation} where "ins" is the number of words to be inserted, "del" the number of words to be deleted, "sub" the number of words to be replaced, and "sum" the total number of words in the label.\par For evaluating the model in this paper, WER alone represents recognition performance but cannot accurately capture the gap between the recognition results of our model and those of the original model.
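Eq. (13) is the standard word-level edit distance between the recognized and reference sequences; a reference implementation in plain Python (function and variable names are ours):

```python
def wer(hyp, ref):
    """Word error rate in percent: (ins + del + sub) / reference length."""
    h, r = hyp.split(), ref.split()
    # dp[i][j] = edit distance between h[:i] and r[:j]
    dp = [[0] * (len(r) + 1) for _ in range(len(h) + 1)]
    for i in range(len(h) + 1):
        dp[i][0] = i
    for j in range(len(r) + 1):
        dp[0][j] = j
    for i in range(1, len(h) + 1):
        for j in range(1, len(r) + 1):
            dp[i][j] = min(dp[i - 1][j - 1] + (h[i - 1] != r[j - 1]),  # sub
                           dp[i - 1][j] + 1,                           # del
                           dp[i][j - 1] + 1)                           # ins
    return 100.0 * dp[len(h)][len(r)] / len(r)
```

For example, `wer("a x c", "a b c")` is one substitution over three reference words, i.e. one third of 100\%.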
Therefore, this paper proposes WERD as a further performance measure. The frame-level feature sequence reconstructed by the model and the complete original frame-level feature sequence are both passed through the time-series processing part of the spatial-temporal hierarchical model and the final CTC decoding to obtain recognition results, whose WERs give the estimated WER and the reference WER, respectively; the error rate between them serves as a further evaluation criterion:\par \begin{equation} WERD=100\%\times \frac{1-1.1^{-(WER_E-WER_R)}}{1+1.1^{-(WER_E-WER_R)}} \end{equation} where $WER_E$ is the estimated WER and $WER_R$ the reference WER. Because the estimate approximates the reference from above, $WER_E$ is greater than or equal to $WER_R$. The smaller the WERD, the closer the estimate is to the reference, meaning that the dense feature sequence reconstructed by the model is more similar to the original feature sequence; ideally, $WERD=0$, i.e., the estimated sequence equals the original sequence. When experimenting on the CSL dataset, we treat a single Chinese character as a word.\par \subsection {Experimental Results} In this paper, MSTNet from our previous study is used as the base model, and the TSRNet is inserted on top of it. The original data of the RWTH and CSL datasets are down-sampled by different factors as the input of the overall model, and the feature sequence is reconstructed using the proposed TSRNet. The experimental results are presented in Table \RNum{1} and Table \RNum{2}, respectively, together with comparisons against the other two data recovery methods.
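Eq. (14) maps the gap between the estimated and reference WER through a scaled logistic, so it is 0 for a perfect match and approaches 100\% as the gap grows. In code (the function name is ours), it reproduces the tabulated entries, e.g. a dev WER of 20.7\% against the 20.3\% reference gives a WERD of 1.9\%:

```python
def werd(wer_e, wer_r):
    """WERD of Eq. (14), in percent; wer_e and wer_r are WER percentages."""
    q = 1.1 ** (-(wer_e - wer_r))
    return 100.0 * (1.0 - q) / (1.0 + q)

assert werd(20.3, 20.3) == 0.0                  # identical WERs -> 0
assert round(werd(20.7, 20.3), 1) == 1.9        # factor-2 dev entry, Table I
```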
To further analyze the experimental data, the results in Tables \RNum{1} and \RNum{2} are also plotted as curves, as shown in Figures 7 and 8.\par \begin{table*}[!htbp] \centering \caption{Comparison of the experimental results of TSRNet on the CSL dataset and the other two methods} \label{tab:aStrangeTable2} \begin{tabular}{c|c|c|c|c|c|c} \hline \multirow{2}{*}{Down-sampling factor}& \multicolumn{2}{c|}{TSRNet}& \multicolumn{2}{c|}{Nearest neighbor interpolation}& \multicolumn{2}{c}{Linear interpolation}\\ \cline{2-7} & WER(\%)& WERD(\%)& WER(\%)& WERD(\%)& WER(\%)& WERD(\%)\\ \hline 1& 0.7& 0& 0.7& 0& 0.7& 0\\ \hline 2& 0.7& 0& 0.7& 0& 0.7& 0\\ \hline 3& 0.7& 0& 0.8& 0.5& 0.8& 0.5\\ \hline 4& 0.7& 0& 0.9& 1.0& 0.9& 1.0\\ \hline 5& 0.7& 0& 0.9& 1.0& 1.0& 1.4\\ \hline 6& 0.8& 0.5& 1.3& 2.9& 1.4& 3.3\\ \hline 7& 0.9& 1.0& 1.8& 5.2& 2.1& 6.7\\ \hline 8& 1.1& 1.9& 2.6& 9.0& 3.1& 11.4\\ \hline 9& 1.5& 3.8& 3.6& 13.7& 4.6& 18.4\\ \hline 10& 1.9& 5.7& 5.4& 22.0& 6.9& 28.7\\ \hline 11& 2.5& 8.6& 7.9& 32.6& 10.0& 41.6\\ \hline 12& 3.8& 14.7& 11.1& 45.9& 13.6& 54.7\\ \hline 13& 5.1& 20.7& 14.4& 57.4& 17.0& 65.1\\ \hline 14& 6.9& 28.7& 17.6& 66.7& 21.3& 75.4\\ \hline 15& 8.4& 35.1& 20.4& 73.5& 24.0& 80.3\\ \hline 16& 10.3& 42.8& 24.0& 80.3& 28.3& 86.6\\ \hline \end{tabular} \end{table*} \begin{figure*} \begin{minipage}{0.5\textwidth} \includegraphics[width=3.5in,height=2.33in]{pdf/figures7.pdf} \caption{Comparison of the WER of TSRNet on the RWTH dataset and the other two methods.} \label{fig:ljxy7} \end{minipage} \hfill \begin{minipage}{0.5\textwidth} \includegraphics[width=3.5in,height=2.33in]{pdf/figures8.pdf} \caption{Comparison of the WER of TSRNet on the CSL dataset and the other two methods.} \label{fig:ljxy8} \end{minipage} \end{figure*} From Table \RNum{1} and Table \RNum{2}, it can be seen that when the down-sampled data are reconstructed with different methods at the same down-sampling factor, the final WERD obtained by our proposed temporal super-resolution
network is smaller than that of the nearest neighbor interpolation method and the linear interpolation method, and the larger the down-sampling factor, the more obvious the advantage of our method.\par For the RWTH dataset, the reference WER on the validation and test sets, i.e., the WER obtained without down-sampling, is 20.3\% and 21.4\%, respectively. When the down-sampling factor is 2, the WERD obtained by our method on the validation and test sets is 1.9\% and 0.5\%, respectively; with nearest neighbor interpolation it is 7.0\% and 4.3\%, an increase of 5.1\% and 3.8\% over our method, and with linear interpolation it is 12.3\% and 9.5\%, an increase of 10.4\% and 9\%. When the down-sampling factor is 3, the errors on the validation and test sets with nearest neighbor interpolation increase by 17.3\% and 15.5\% over our method, and with linear interpolation by 23.6\% and 19.6\%. When the down-sampling factor is 4, the errors with nearest neighbor interpolation increase by 13.1\% and 12.2\% over our method, and with linear interpolation by 26.5\% and 24.4\%.
As the down-sampling factor grows further, the error keeps increasing.\par \begin{table*}[!htbp] \centering \caption{Overall performance of continuous sign language recognition model via temporal super-resolution network} \label{tab:aStrangeTable3} \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline Down-sampling factor& 1& 2& 3& 4& 5& 6& 7& 8\\ \hline \multicolumn{9}{c}{Frame-level feature extraction part}\\ \hline Params& $23\times 10^6$& $23\times 10^6$& $23\times 10^6$& $23\times 10^6$& $23\times 10^6$& $23\times 10^6$& $23\times 10^6$& $23\times 10^6$\\ \hline FLOPs(GFlops)& 734.74& 367.37& 242.47& 183.69& 149.95& 121.23& 102.87& 91.84\\ \hline MemR+W(GByte)& 14.94& 7.47& 4.93& 3.73& 2.99& 2.46& 2.09& 1.87\\ \hline \multicolumn{9}{c}{Temporal Super-Resolution Network}\\ \hline Params& -& $61\times 10^6$& $61\times 10^6$& $61\times 10^6$& $61\times 10^6$& $61\times 10^6$& $61\times 10^6$& $61\times 10^6$\\ \hline FLOPs(GFlops)& -& 11.57& 11.41& 11.25& 11.19& 11.37& 11.28& 11.09\\ \hline MemR+W(GByte)& -& 0.36& 0.35& 0.34& 0.34& 0.34& 0.34& 0.33\\ \hline \multicolumn{9}{c}{Time-series feature extraction part}\\ \hline Params& $97\times 10^6$& $97\times 10^6$& $97\times 10^6$& $97\times 10^6$& $97\times 10^6$& $97\times 10^6$& $97\times 10^6$& $97\times 10^6$\\ \hline FLOPs(GFlops)& 1.79& 1.79& 1.79& 1.79& 1.79& 1.79& 1.79& 1.79\\ \hline MemR+W(GByte)& 0.035& 0.035& 0.035& 0.035& 0.035& 0.035& 0.035& 0.035\\ \hline \multicolumn{9}{c}{Total}\\ \hline Params& $120\times 10^6$& $181\times 10^6$& $181\times 10^6$& $181\times 10^6$& $181\times 10^6$& $181\times 10^6$& $181\times 10^6$& $181\times 10^6$\\ \hline FLOPs(GFlops)& 736.53& 380.73& 253.88& 196.73& 159.93& 134.39& 115.94& 104.72\\ \hline MemR+W(GByte)& 14.94& 7.87& 5.31& 4.11& 3.36& 2.83& 2.46& 2.24\\ \hline Params MEM(MB)& 458.75& 691.63& 691.63& 691.63& 691.63& 691.63& 691.63& 691.63\\ \hline Run Time(ms)& 221.57& 137.71& 100.78& 73.5& 63.93& 58.74& 52.08& 48.56\\ \hline \end{tabular} \end{table*} \begin{figure*} \begin{minipage}{0.5\textwidth}
\includegraphics[width=3.5in,height=2.33in]{pdf/figures9.pdf} \caption{The relationship between the down-sampling factor and the overall model computation.} \label{fig:ljxy9} \end{minipage} \hfill \begin{minipage}{0.5\textwidth} \includegraphics[width=3.5in,height=2.33in]{pdf/figures10.pdf} \caption{The relationship between the down-sampling factor and the overall model running time.} \label{fig:ljxy10} \end{minipage} \end{figure*} For the CSL dataset, the reference WER on the test set is 0.7\%. When the down-sampling factor is 2, 3, 4, or 5, the WERD obtained by our method on the test set is 0, i.e., there is no error. This is because the frame rate of CSL is 30 FPS and the presenters sign slowly, so this continuous sign language dataset contains considerable redundancy. When the down-sampling factor is 6, the WERD obtained by our method on the test set is 0.5\%, and the errors obtained by the nearest neighbor and linear interpolation methods increase by 2.4\% and 2.8\%, respectively, over our method. When the down-sampling factor is 7, the errors obtained by the nearest neighbor and linear interpolation methods increase by 4.2\% and 5.7\%, respectively, over our method; the gap keeps widening as the factor increases.\par As can be seen from the curves in Figures 7 and 8, as the down-sampling factor increases, the WER obtained on the two datasets with all three methods also increases, and the WER obtained with our TSRNet is much smaller than with the nearest neighbor and linear interpolation methods, proving the effectiveness of the TSRNet.\par As can be seen from Figure 7, for the RWTH dataset, the WER curve of TSRNet rises abruptly after a down-sampling factor of 3, which indicates that the down-sampling factor of 3 is an inflection point.
From the experimental results in Table \RNum{1}, it can be seen that the WERD obtained on the validation set and the test set with a down-sampling factor of 2 is 1.9\% and 0.5\% higher, respectively, than with a factor of 1; with a factor of 3 it increases by 1.9\% and 3.3\%, respectively, compared with a factor of 2; and with a factor of 4 it increases by 11.4\% and 11.7\%, respectively, compared with a factor of 3. This is consistent with the conclusion drawn from Figure 7 that a down-sampling factor of 3 is an inflection point, showing that TSRNet performs best on the RWTH dataset with a down-sampling factor of 3.\par As can be seen from Figure 8, for the CSL dataset the WER curve of TSRNet rises sharply once the down-sampling factor exceeds 11, indicating that a factor of 11 is an inflection point. From the experimental results in Table \RNum{2}, it can be seen that with a down-sampling factor of 10 the WERD obtained on the test set is 1.9\% larger than with a factor of 9; with a factor of 11 the WERD increases by 2.9\% compared with a factor of 10; and with a factor of 12 it increases by 6.1\% compared with a factor of 11. This is consistent with the conclusion drawn from Figure 8 that a down-sampling factor of 11 is an inflection point, showing that TSRNet performs best on the CSL dataset with a down-sampling factor of 11.\par To further analyze the effectiveness of TSRNet, this paper evaluates the overall model under different down-sampling factors from three aspects: computation, parameter count and running time. The original data, i.e., videos of 200 frames with image size $224\times 224$, are input into the overall model.
The calculation amount, parameter amount and running time of the model under different down-sampling factors are shown in Table \RNum{3}. The running time is the average of five consecutive runs of the model.\par \begin{table*}[!htbp] \centering \caption{Overall model performance comparison of inserting TSRNet into different spatial-temporal hierarchical models} \label{tab:aStrangeTable4} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c} \hline \multirow{3}{*}{Down-sampling factor}& \multicolumn{4}{c|}{CNN+BiLSTM+CTC}& \multicolumn{4}{c|}{VAC\cite{min2021visual}}& \multicolumn{4}{c}{MSTNet\cite{zhu2022multi}}\\ \cline{2-13} & \multicolumn{2}{c|}{WER(\%)}& \multicolumn{2}{c|}{WERD(\%)}& \multicolumn{2}{c|}{WER(\%)}& \multicolumn{2}{c|}{WERD(\%)}& \multicolumn{2}{c|}{WER(\%)}& \multicolumn{2}{c}{WERD(\%)}\\ \cline{2-13} & dev& test& dev& test& dev& test& dev& test& dev& test& dev& test\\ \hline 1& 26.1& 26.7& 0& 0& 21.8& 22.8& 0& 0& 20.3& 21.4& 0& 0\\ \hline 4& 29.8& 30.4& 17.5& 17.5& 25.1& 26.2& 15.6& 16.1& 23.4& 24.7& 14.7& 15.6\\ \hline \end{tabular} \end{table*} \begin{table*}[!htbp] \centering \caption{Overall model performance comparison using different types of 1D-ResBlock} \label{tab:aStrangeTable5} \begin{tabular}{c|c|c|c} \hline \multirow{2}{*}{Types of 1D-ResBlock}& \multicolumn{2}{c|}{WER(\%)}& \multirow{2}{*}{The amount of parameters of TSRNet}\\ \cline{2-3} & dev& test& \\ \hline A& 23.7& 24.9& $350\times 10^6$\\ \hline B& 23.4& 24.9& $230\times 10^6$\\ \hline C& 23.4& 24.7& $61\times 10^6$\\ \hline D& 23.8& 24.7& $61\times 10^6$\\ \hline \end{tabular} \end{table*} As can be seen from Table \RNum{3}, introducing TSRNet on top of MSTNet increases the parameter count by $61\times 10^6$, and the parameter count then remains unchanged as the down-sampling factor increases.
As for the amount of computation, it keeps decreasing as the down-sampling factor increases. The computation is positively correlated with the running time: as the computation decreases, the running time also decreases, as shown in Figures 9 and 10.\par As seen in the previous experiments, TSRNet performs best on the RWTH dataset with a down-sampling factor of 3; Figures 9 and 10 show that at this factor the overall model computation is reduced by 65.53\% and the running time by 54.52\%.\par The above experiments show that for a spatial-temporal hierarchical model the computation is mainly concentrated in the frame-level feature extraction part, while most of the parameters are concentrated in the time-series feature extraction part. The introduction of TSRNet greatly reduces the computational cost of frame-level feature extraction, but increases the overall number of model parameters.\par \subsection{Ablation Experiment} In this section, we conduct ablation experiments on the RWTH dataset to further verify the effectiveness of each component of the model.
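Returning briefly to Table \RNum{3}: the computation and running-time reductions quoted above for a down-sampling factor of 3 can be reproduced directly from the factor-1 and factor-3 entries of its ``Total'' rows:

```python
# Recomputing the quoted reductions from the "Total" rows of Table III
# (down-sampling factors 1 and 3, RWTH setting).
flops = {1: 736.53, 3: 253.88}     # total FLOPs (GFLOPs)
runtime = {1: 221.57, 3: 100.78}   # run time (ms)

flops_reduction = 100 * (flops[1] - flops[3]) / flops[1]
time_reduction = 100 * (runtime[1] - runtime[3]) / runtime[1]
print(f"computation reduced by {flops_reduction:.2f}%")  # 65.53%
print(f"run time reduced by {time_reduction:.2f}%")      # 54.52%
```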
WER and WERD are used as metrics in the ablation experiments, and the down-sampling factor is set to 4.\par \emph{1) Experiment 1:} Overall model performance when inserting TSRNet into different spatial-temporal hierarchical models. The proposed TSRNet is inserted into different spatial-temporal hierarchical models as a sub-network to reduce the overall model computation and running time at an acceptable error rate. In this paper, TSRNet is inserted into three different spatial-temporal hierarchical models, and the WER and WERD of the overall model are reported in Table \RNum{4}. The two compared models in Table \RNum{4}, ``CNN+BiLSTM+CTC'' and ``VAC''~\cite{min2021visual}, were retrained by us.\par It can be seen from Table \RNum{4} that, for the same down-sampling factor, inserting TSRNet into different spatial-temporal hierarchical models yields little variation in the WERD of the overall model. This indicates that TSRNet generalizes well across spatial-temporal hierarchical models, so the overall performance remains stable.\par \emph{2) Experiment 2:} Overall model performance using different types of 1D-ResBlock. This paper introduces the 1D-ResBlock in TSRNet, but the overall model performance differs depending on the type of 1D-ResBlock used. We conduct experiments with four different types of 1D-ResBlock and report the overall-model WER and the parameter count of TSRNet for each type in Table \RNum{5}. The four types of 1D-ResBlock used are shown in Figure 11.\par \begin{figure} \begin{center} \includegraphics[width=3.5in]{pdf/figures11.pdf}\\ \caption{Four different types of 1D-ResBlock.}\label{fig:ljxy11} \end{center} \end{figure} Table \RNum{5} shows that the overall model with 1D-ResBlock type C performs best.
When the 1D-ResBlock type is C or D, the number of parameters is smaller than for types A and B, because types C and D use the depth-wise convolution 1D-DCONV. For types C and D, which have the same number of parameters, type C reduces the WER by 1.7\% on the validation set compared to type D.\par \emph{3) Experiment 3:} Overall model performance using different down-sampling methods. When down-sampling the data, this paper uses proportional random sampling. The overall model WER obtained with different down-sampling methods is shown in Table \RNum{6}.\par \begin{table*}[!htbp] \centering \caption{Overall model performance comparison using different down-sampling methods} \label{tab:aStrangeTable6} \begin{tabular}{c|c|c} \hline \multirow{2}{*}{Down-sampling method}& \multicolumn{2}{c}{WER(\%)}\\ \cline{2-3} & dev& test\\ \hline Equally spaced sampling& 23.4& 25.0\\ \hline Proportional random sampling& 23.4& 24.7\\ \hline Random sampling& 26.0& 26.5\\ \hline \end{tabular} \end{table*} \begin{table*}[!htbp] \centering \caption{Overall model performance comparison using different training methods} \label{tab:aStrangeTable7} \begin{tabular}{c|c|c} \hline \multirow{2}{*}{Training method}& \multicolumn{2}{c}{WER(\%)}\\ \cline{2-3} & dev& test\\ \hline Conventional super-resolution network training method& 24.9& 26.1\\ \hline Self-generating adversarial training method& 23.4& 24.7\\ \hline \end{tabular} \end{table*} It can be seen from Table \RNum{6} that the proportional random sampling method yields the lowest overall model WER and the best performance. Compared with equally spaced sampling, the WER on the test set is reduced by 1.2\%, and compared with random sampling, the WER on the validation and test sets is reduced by 11.1\% and 7.3\%, respectively.
Proportional sampling preserves the integrity of the video data as much as possible, while random sampling gives the overall model better generalization.\par \emph{4) Experiment 4:} Overall model performance using different training methods. The overall model in this paper is trained with our proposed self-generating adversarial training method. We compare it with the conventional super-resolution network training method; the WER of the overall model is shown in Table \RNum{7}. The conventional super-resolution training method here refers to the following: the dense feature sequence is down-sampled and input into TSRNet, an L2 loss is used to train the reconstructed dense feature sequence against the reference dense feature sequence, and the trained model is then inserted into the spatial-temporal hierarchical model to obtain the WER of the overall model.\par As can be seen from Table \RNum{7}, the overall model WER obtained with our self-generating adversarial training method is reduced by 6.4\% and 5.7\% on the validation and test sets, respectively, compared with the conventional super-resolution training method, proving the effectiveness of the self-generating adversarial training method. The conventional super-resolution training method only considers gaps at the data level while ignoring gaps at the semantic level. For CSLR, semantic-level information plays an extremely important role, and it is better recovered by our self-generating adversarial training method.\par \section{Conclusion} A deep learning-based spatial-temporal hierarchical continuous sign language recognition model uses dense sampling when extracting information from raw videos.
However, as the video length increases, the amount of computation increases dramatically, making the model unsuitable for processing long videos in practical applications and limiting its real-time use. In response to this problem, this paper proposes a new TSRNet to reduce the computational complexity of the CSLR model and improve real-time performance. The CSLR model via TSRNet mainly consists of three parts: frame-level feature extraction, time-series feature extraction and TSRNet. TSRNet sits between the frame-level feature extraction part and the time-series feature extraction part, and mainly includes two branches: the detail descriptor and the rough descriptor. The sparse frame-level features are fused with the features obtained by the two designed branches to form the reconstructed dense frame-level feature sequence, and the CTC loss is used for training and optimization after the time-series feature extraction part. A self-generating adversarial training method is proposed to train the model: the temporal super-resolution network is regarded as the generator, while the frame-level and time-series processing parts are regarded as the discriminator, which better restores semantic-level information and greatly reduces the model error rate. In addition, this paper proposes WERD as a new criterion that unifies the measurement of model accuracy loss under different benchmarks. Experiments on two large-scale sign language datasets demonstrate the effectiveness of the proposed model, which greatly reduces the overall computational load and improves real-time performance within an acceptable range of accuracy loss.
Moreover, TSRNet can be flexibly inserted into any spatial-temporal hierarchical model.\par CSLR aims to solve the communication problem between hearing-impaired and hearing people, and the computational load and memory footprint of the model need to meet real-time requirements. The network proposed in this paper improves the real-time performance of the original model, but two problems remain: it only optimizes the computation of the frame-level spatial feature extraction part, and it increases the memory footprint relative to the original model. In addition, the proposed model only targets spatial-temporal hierarchical CSLR models and does not cover non-hierarchical ones. Therefore, how to design a more lightweight, more real-time and more general model is a problem worth studying.\par \section*{Acknowledgment} This work was supported in part by the Development Project of Ship Situational Intelligent Awareness System, China under Grant MC-201920-X01, in part by the National Natural Science Foundation of China under Grant 61673129. \par \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} For $s\in\mathbb{C}$ with $\mathrm{Re}(s)>0$, the gamma function is defined by \begin{equation} \Gamma(s)=\int_{0}^{\infty}e^{-t}t^{s-1}dt,\quad (\mathrm{see}\ [5]).\label{1} \end{equation} From \eqref{1}, we note that $\Gamma(s+1)=s\Gamma(s)$ and $\Gamma(n+1)=n!$, $(n\in\mathbb{N})$, (see [1,2,3]). \par As is well known, the Beta function is defined for $\mathrm{Re}\,\alpha>0,\, \mathrm{Re}\,\beta>0$ by \begin{equation} B(\alpha,\beta)=\int_{0}^{1}t^{\alpha-1}(1-t)^{\beta-1}dt\label{2},\quad (\mathrm{see}\ [1,5]). \end{equation} From \eqref{2} we note that \begin{equation} B(\alpha,\beta)=\frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)},\quad (\mathrm{see}\ [5]).\label{3} \end{equation} The Harmonic numbers are defined by \begin{equation} H_{n}=1+\frac{1}{2}+\frac{1}{3}+\cdots+\frac{1}{n}, \quad (n\ge 1), \end{equation} and more generally, for any $r\in\mathbb{N}$, the Harmonic numbers of order $r$ are given by \begin{displaymath} H_{n}^{(r)}=1+\frac{1}{2^{r}}+\frac{1}{3^{r}}+\cdots+\frac{1}{n^{r}},\quad (n\ge 1). \end{displaymath} \par In this note, we derive a finite summation formula in \eqref{A} and an infinite summation formula in \eqref{B}, where $f_{n,r}(0)$ is determined by the recurrence relation in \eqref{C} and given in terms of Harmonic numbers of order $j$, for $j=1,2,\dots,r$. Our results are illustrated for $r=3$ and $r=4$ in the Example below. It is amusing that the infinite sum in \eqref{B}, involving Harmonic numbers of order $\le r$, boils down to $(-1)^{r}(r+1)!$. The formulas are derived from several definite integrals in an elementary way. After considering the two summation formulas in \eqref{A} and \eqref{B} for $r=1$ and $r=2$, we will prove the general results in Theorem 1. \begin{theorem} Let $n, r$ be positive integers.
Then we have \begin{align} &(-1)^{r}r!\sum_{k=0}^{n}\binom{n}{k}(-1)^{k}\frac{1}{(2k+1)^{r+1}}=\frac{2^{2n}}{(2n+1)\binom{2n}{n}}f_{n,r}(0), \label{A}\\ &\sum_{n=1}^{\infty}\frac{2^{2n-1}}{n(2n+1)\binom{2n}{n}}f_{n,r}(0)=(-1)^{r}(r+1)!, \label{B} \end{align} where $f_{n,s}(x)$ are determined by the recurrence relation \begin{align} f_{n,s+1}(x)=\beta_{n}(x)f_{n,s}(x)+\frac{d}{dx}f_{n,s}(x), \,\,(s \ge 1),\,\, f_{n,1}(x)=\beta_{n}(x)=-\sum_{k=0}^{n}(x+2k+1)^{-1}, \label{C} \end{align} so that $f_{n,s}(x)$ is a polynomial in $x$ involving $\beta_{n}(x), \beta_{n}^{(1)}(x),\cdots,\beta_{n}^{(s-1)}(x)$, with \begin{align} \beta_{n}^{(j)}(0)=(-1)^{j-1}j!\big(H_{2n+1}^{(j+1)}-\frac{1}{2^{j+1}}H_{n}^{(j+1)}\big),\quad(j \ge 0).\label{D} \end{align} \end{theorem} \noindent{\bf{Example:}}\\ (a) \begin{align*} &-3!\sum_{k=0}^{n}\binom{n}{k}(-1)^{k}\frac{1}{(2k+1)^{4}}=\frac{2^{2n}}{(2n+1)\binom{2n}{n}}\\ & \times \left\{-(H_{2n+1}-\frac{1}{2}H_{n})^{3}-3(H_{2n+1}-\frac{1}{2}H_{n})(H_{2n+1}^{(2)}-\frac{1}{4}H_{n}^{(2)})-2(H_{2n+1}^{(3)}-\frac{1}{8}H_{n}^{(3)}) \right\}, \\ &\sum_{n=1}^{\infty}\frac{2^{2n-1}}{n(2n+1)\binom{2n}{n}}\left\{-(H_{2n+1}-\frac{1}{2}H_{n})^{3}-3(H_{2n+1}-\frac{1}{2}H_{n})(H_{2n+1}^{(2)}-\frac{1}{4}H_{n}^{(2)})-2(H_{2n+1}^{(3)}-\frac{1}{8}H_{n}^{(3)}) \right\} \\ &=-4!. \end{align*} (b) \begin{align*} & 4!\sum_{k=0}^{n}\binom{n}{k}(-1)^{k}\frac{1}{(2k+1)^{5}}=\frac{2^{2n}}{(2n+1)\binom{2n}{n}} \left\{(H_{2n+1}-\frac{1}{2}H_{n})^{4}+6(H_{2n+1}-\frac{1}{2}H_{n})^{2}(H_{2n+1}^{(2)}-\frac{1}{4}H_{n}^{(2)}) \right. \\ & \quad\quad\quad\quad\quad\quad\quad\quad \left. +3(H_{2n+1}^{(2)}-\frac{1}{4}H_{n}^{(2)})^{2}+8(H_{2n+1}-\frac{1}{2}H_{n})(H_{2n+1}^{(3)}-\frac{1}{8}H_{n}^{(3)})+6(H_{2n+1}^{(4)}-\frac{1}{16}H_{n}^{(4)}) \right\}, \\ & 5!=\sum_{n=1}^{\infty}\frac{2^{2n-1}}{n(2n+1)\binom{2n}{n}}\left\{(H_{2n+1}-\frac{1}{2}H_{n})^{4}+6(H_{2n+1}-\frac{1}{2}H_{n})^{2}(H_{2n+1}^{(2)}-\frac{1}{4}H_{n}^{(2)}) \right.
\\ & \quad\quad\quad\quad\quad\quad\quad\quad \left. +3(H_{2n+1}^{(2)}-\frac{1}{4}H_{n}^{(2)})^{2}+8(H_{2n+1}-\frac{1}{2}H_{n})(H_{2n+1}^{(3)}-\frac{1}{8}H_{n}^{(3)})+6(H_{2n+1}^{(4)}-\frac{1}{16}H_{n}^{(4)}) \right\}. \end{align*} \section{Derivation of Summation Formulas} First, we observe that \begin{equation} \begin{aligned} \int_{0}^{1}(1-x^{2})^{n}dx&=\sum_{k=0}^{n}\binom{n}{k}(-1)^{k}\int_{0}^{1}x^{2k}dx \\ &=\sum_{k=0}^{n}\binom{n}{k}(-1)^{k}\frac{1}{2k+1}. \end{aligned} \label{5} \end{equation} On the other hand, \begin{align} \int_{0}^{1}(1-x^{2})^{n}dx&=\frac{1}{2}\int_{0}^{1}(1-y)^{n}y^{-\frac{1}{2}}dy=\frac{1}{2}\int_{0}^{1}(1-y)^{n+1-1}\cdot y^{\frac{1}{2}-1}dy \label{6}\\ &=\frac{1}{2}B\bigg(n+1,\frac{1}{2}\bigg)=\frac{1}{2}\frac{\Gamma(n+1)\Gamma\big(\frac{1}{2}\big)}{\big(n+\frac{1}{2}\big)\big(n-\frac{1}{2}\big)\big(n-\frac{3}{2}\big)\cdots\frac{1}{2}\Gamma\big(\frac{1}{2}\big)}\nonumber \\ &=\frac{2^{n}n!}{(2n+1)(2n-1)(2n-3)\cdots 1}=\frac{2^{2n}n!n!}{(2n+1)(2n)!}=\frac{2^{2n}}{(2n+1)\binom{2n}{n}}.\nonumber \end{align} Thus, from \eqref{5} and \eqref{6} we note that \begin{equation} \binom{2n}{n}\sum_{k=0}^{n}\binom{n}{k}\frac{(-1)^{k}}{2k+1}=\frac{2^{2n}}{2n+1}.\label{7} \end{equation} From \eqref{7}, we have \begin{equation} \sum_{n=1}^{\infty}\frac{2^{2n}}{n(2n+1)\binom{2n}{n}}=\sum_{n=1}^{\infty}\frac{1}{n}\int_{0}^{1}(1-x^{2})^{n}dx=-2\int_{0}^{1}\log x\ dx=2.\label{8} \end{equation} Let \begin{equation} \begin{aligned} F(x)&=\int_{0}^{1}(1-t^{2})^{n}t^{x}dt=\sum_{k=0}^{n}\binom{n}{k}(-1)^{k}\int_{0}^{1}t^{2k+x}dt=\sum_{k=0}^{n}\binom{n}{k}(-1)^{k}\frac{1}{2k+x+1}.\label{9} \end{aligned} \end{equation} Note that \begin{equation} \begin{aligned} F^{\prime}(x)=\frac{d}{dx} \int_{0}^{1}(1-t^{2})^{n}t^{x}dt=\int_{0}^{1}(1-t^{2})^{n}t^{x}\log t dt=-\sum_{k=0}^{n}\binom{n}{k}(-1)^{k}\frac{1}{(2k+1+x)^{2}}.\label{10} \end{aligned} \end{equation} On the other hand, \begin{align}
\int_{0}^{1}(1-t^{2})^{n}t^{x}dt&=\frac{1}{2}\int_{0}^{1}(1-y)^{n}y^{\frac{x+1}{2}-1}dy=\frac{1}{2}B\bigg(n+1,\frac{x+1}{2}\bigg) \label{11}\\ &=\frac{1}{2}\frac{\Gamma(n+1)\Gamma\big(\frac{x+1}{2}\big)}{\Gamma\big(n+1+\frac{x+1}{2}\big)}=\frac{1}{2}\frac{n!\Gamma\big(\frac{x+1}{2}\big)}{\big(n+\frac{x+1}{2}\big)\big(n+\frac{x-1}{2}\big)\big(n+\frac{x-3}{2}\big)\cdots\big(\frac{x+1}{2}\big)\Gamma\big(\frac{x+1}{2}\big)}\nonumber \\ &=\frac{n!2^{n}}{(2n+x+1)(2n+x-1)(2n+x-3)\cdots(x+1)}=\frac{n!2^{n}}{\prod_{k=0}^{n}(x+2k+1)}.\nonumber \end{align} Thus, we have \begin{equation} \begin{aligned} F^{\prime}(x)&=\frac{d}{dx}F(x)=\frac{d}{dx}\int_{0}^{1}(1-t^{2})^{n}t^{x}dt=\frac{d}{dx}\bigg(\frac{n!2^{n}}{\prod_{k=0}^{n}(x+2k+1)}\bigg)\\ &=-\bigg(\frac{n!2^{n}}{\prod_{k=0}^{n}(x+2k+1)}\bigg)\bigg(\sum_{k=0}^{n}\frac{1}{x+2k+1}\bigg). \end{aligned} \label{12} \end{equation} Thus, by \eqref{12}, we get \begin{equation} F^{\prime}(0)=-\bigg(\frac{n!n!2^{2n}}{(2n+1)(2n)!}\bigg)\bigg(\sum_{k=0}^{n}\frac{1}{2k+1}\bigg)=-\frac{2^{2n}}{(2n+1)\binom{2n}{n}}\bigg(H_{2n+1}-\frac{1}{2}H_{n}\bigg).\label{13} \end{equation} From \eqref{10} and \eqref{13}, we note that \begin{align} \binom{2n}{n}\sum_{k=0}^{n}\binom{n}{k}(-1)^{k}\frac{1}{(2k+1)^{2}}=\frac{2^{2n}}{2n+1}\bigg(H_{2n+1}-\frac{1}{2}H_{n}\bigg).\label{14} \end{align} By \eqref{14}, we have \begin{align} \sum_{n=1}^{\infty}\frac{2^{2n}}{n(2n+1)\binom{2n}{n}}\big(H_{2n+1}-\frac{1}{2}H_{n}\big)&=\sum_{n=1}^{\infty}\frac{1}{n}\sum_{k=0}^{n}\binom{n}{k}\frac{(-1)^{k}}{(2k+1)^{2}}\label{15}\\ &=\sum_{n=1}^{\infty}\frac{1}{n}\int_{0}^{1}\int_{0}^{1}(1-x^{2}y^{2})^{n}dxdy\nonumber\\ &=-2\int_{0}^{1}\int_{0}^{1}(\log x+\log y)dxdy=4.\nonumber \end{align} Thus, we have \begin{align} \sum_{n=1}^{\infty}\frac{2^{2n}}{n(2n+1)\binom{2n}{n}}\bigg(H_{2n+1}-\frac{1}{2}H_{n}\bigg)=4.\label{16} \end{align} From \eqref{9}, we note that \begin{align}
F^{\prime\prime}(x)=\frac{d^{2}}{dx^{2}}F(x)&=\frac{d^{2}}{dx^{2}}\int_{0}^{1}(1-t^{2})^{n}t^{x}dt\label{17} \\ &=\int_{0}^{1}(1-t^{2})^{n}\big(\log t\big)^{2}t^{x}dt\nonumber \\ &=2!\sum_{k=0}^{n}\binom{n}{k}(-1)^{k}\frac{1}{(2k+1+x)^{3}}.\nonumber \end{align} Hence, by \eqref{17}, we get \begin{equation} F^{\prime\prime}(x)=\int_{0}^{1}(1-t^{2})^{n}\big(\log t\big)^{2}t^{x}dt=2!\sum_{k=0}^{n}\binom{n}{k}(-1)^{k}\frac{1}{(2k+1+x)^{3}}.\label{18} \end{equation} From \eqref{11} and \eqref{12}, we have \begin{equation} \begin{aligned} F^{\prime\prime}(x)&=\frac{d^{2}}{dx^{2}}\int_{0}^{1}(1-t^{2})^{n}t^{x}dt \\ &=\frac{n!2^{n}}{(2n+1+x)(2n+x-1)\cdots(x+1)}\bigg\{\bigg(\sum_{k=0}^{n}\frac{1}{2k+x+1}\bigg)^{2}+\sum_{k=0}^{n}\frac{1}{(2k+x+1)^{2}}\bigg\}. \end{aligned}\label{19} \end{equation} Thus, we note that \begin{equation} \begin{aligned} F^{\prime\prime}(0)& =\frac{n!2^{n}}{(2n+1)(2n-1)(2n-3)\cdots 1}\bigg\{\bigg(\sum_{k=0}^{n}\frac{1}{2k+1}\bigg)^{2}+\sum_{k=0}^{n}\frac{1}{(2k+1)^{2}}\bigg\}\\ &=\frac{n!n!2^{2n}}{(2n+1)(2n)!}\bigg\{\big(H_{2n+1}-\frac{1}{2}H_{n}\big)^{2}+\bigg(H_{2n+1}^{(2)}-\frac{1}{4}H_{n}^{(2)}\bigg)\bigg\}. \end{aligned}\label{20} \end{equation} Therefore, by \eqref{17} and \eqref{20}, we get \begin{align} 2!\binom{2n}{n}\sum_{k=0}^{n}\binom{n}{k}\frac{(-1)^{k}}{(2k+1)^{3}}=\frac{2^{2n}}{(2n+1)}\bigg\{\bigg(H_{2n+1}-\frac{1}{2}H_{n}\bigg)^{2}+\bigg(H_{2n+1}^{(2)}-\frac{1}{4}H_{n}^{(2)}\bigg) \bigg\}.\label{21} \end{align} Note that \begin{equation} \begin{aligned} \sum_{n=1}^{\infty}\frac{1}{n}\bigg\{\sum_{k=0}^{n}\binom{n}{k}\frac{(-1)^{k}}{(2k+1)^{3}}\bigg\}&=\sum_{n=1}^{\infty}\frac{1}{n}\int_{0}^{1}\int_{0}^{1}\int_{0}^{1}(1-x^{2}y^{2}z^{2})^{n}dxdydz\\ &=-2\int_{0}^{1}\int_{0}^{1}\int_{0}^{1}(\log x+\log y+\log z)dxdydz=6. 
\end{aligned}\label{22} \end{equation} From \eqref{21} and \eqref{22}, we have \begin{align} \sum_{n=1}^{\infty}\frac{2^{2n-1}}{n(2n+1)\binom{2n}{n}}\bigg\{\bigg(H_{2n+1}-\frac{1}{2}H_{n}\bigg)^{2}+ \bigg(H_{2n+1}^{(2)}-\frac{1}{4}H_{n}^{(2)}\bigg) \bigg\} =6. \label{23} \end{align} Now, we begin to prove Theorem 1. Let $F(x)=\int_{0}^{1}(1-t^2)^{n}t^{x} dt$ be as in \eqref{9}.\\ Let $r$ be a positive integer. Then repeated integration by parts gives us \begin{align} F^{(r)}(x)&=\bigg(\frac{d}{dx}\bigg)^{r}\int_{0}^{1}(1-t^2)^{n}t^{x} dt =\int_{0}^{1}(1-t^2)^{n}(\log t)^{r}t^{x} dt \nonumber \\ &=\sum_{k=0}^{n}\binom{n}{k}(-1)^{k}\int_{0}^{1}(\log t)^{r}t^{x+2k} dt \nonumber\\ &=\sum_{k=0}^{n}\binom{n}{k}(-1)^{k}\frac{-r}{x+2k+1}\int_{0}^{1}(\log t)^{r-1}t^{x+2k} dt \label{24}\\ &=\sum_{k=0}^{n}\binom{n}{k}(-1)^{k}\frac{-r}{x+2k+1}\cdots\frac{-1}{x+2k+1}\int_{0}^{1}t^{x+2k} dt \nonumber\\ &=(-1)^{r}r!\sum_{k=0}^{n}\binom{n}{k}(-1)^{k}\frac{1}{(x+2k+1)^{r+1}}.\nonumber \end{align} As we saw in \eqref{11} and \eqref{12}, $F(x)$ is alternatively expressed by \begin{align} F(x)=\int_{0}^{1}(1-t^2)^{n}t^{x} dt=\frac{n!2^{n}}{\prod_{k=0}^{n}(x+2k+1)},\,\,\,F(0)=\frac{2^{2n}}{(2n+1)\binom{2n}{n}},\label{25} \end{align} and its derivative is given by \begin{align} F^{(1)}(x)=F(x)\beta_{n}(x),\quad \beta_{n}=\beta_{n}(x)=-\sum_{k=0}^{n}(x+2k+1)^{-1}.\label{26} \end{align} Repeated differentiations give us \begin{align} &F^{(2)}(x)=F(x)(\beta_{n}^{2}+\beta_{n}^{(1)}),\,\,F^{(3)}(x)=F(x)(\beta_{n}^{3}+3\beta_{n}\beta_{n}^{(1)}+\beta_{n}^{(2)}), \nonumber\\ &F^{(4)}(x)=F(x)(\beta_{n}^{4}+6\beta_{n}^{2}\beta_{n}^{(1)}+3(\beta_{n}^{(1)})^{2}+4\beta_{n}\beta_{n}^{(2)}+\beta_{n}^{(3)}), \label{27}\\ &F^{(5)}(x)=F(x)\left(\beta_{n}^{5}+10\beta_{n}^{3}\beta_{n}^{(1)}+10\beta_{n}^{2}\beta_{n}^{(2)}+15\beta_{n}(\beta_{n}^{(1)})^{2} \right.\nonumber \\ & \left.\quad\quad\quad\quad\quad\quad\quad+5\beta_{n}\beta_{n}^{(3)}
+10\beta_{n}^{(1)}\beta_{n}^{(2)}+\beta_{n}^{(4)}\right),\dots.\nonumber \end{align} In general, we let $F^{(s)}(x)=F(x)f_{n,s}(x)$, for $s \ge 1$. Then further differentiation gives us \begin{align*} F^{(s+1)}(x)=F(x)(\beta_{n}(x)f_{n,s}(x)+\frac{d}{dx}f_{n,s}(x)). \end{align*} Thus we get the recurrence relation for $\left\{f_{n,s}(x)\right\}_{s=1}^{\infty}$: \begin{align} f_{n,s+1}(x)=\beta_{n}(x)f_{n,s}(x)+\frac{d}{dx}f_{n,s}(x), \,\,(s \ge 1),\,\, f_{n,1}(x)=\beta_{n}(x).\label{28} \end{align} Now, from \eqref{24} and \eqref{25}, we obtain \begin{align} (-1)^{r}r!\sum_{k=0}^{n}\binom{n}{k}(-1)^{k}\frac{1}{(x+2k+1)^{r+1}}=\frac{n!2^{n}}{\prod_{k=0}^{n}(x+2k+1)}f_{n,r}(x). \label{29} \end{align} Letting $x=0$ in \eqref{29}, we get \begin{align} \sum_{k=0}^{n}\binom{n}{k}(-1)^{k}\frac{1}{(2k+1)^{r+1}}=\frac{(-1)^{r}}{r!}\frac{2^{2n}}{(2n+1)\binom{2n}{n}}f_{n,r}(0).\label{30} \end{align} Multiplying both sides of \eqref{30} by $\frac{1}{n}$ and summing over $n$, we get \begin{align} \frac{(-1)^{r}}{r!}\sum_{n=1}^{\infty}\frac{2^{2n}}{n(2n+1)\binom{2n}{n}}f_{n,r}(0)&=\sum_{n=1}^{\infty}\frac{1}{n}\sum_{k=0}^{n}\binom{n}{k}(-1)^{k}\frac{1}{(2k+1)^{r+1}} \nonumber \\ &=\sum_{n=1}^{\infty}\frac{1}{n}\int_{0}^{1} \cdots \int_{0}^{1}(1-x_{1}^{2}x_{2}^{2}\cdots x_{r+1}^{2})^{n} dx_{1}dx_{2}\cdots dx_{r+1}\label{31}\\ &=-2\int_{0}^{1} \cdots \int_{0}^{1}(\log x_1 +\cdots+\log x_{r+1})dx_{1} \cdots dx_{r+1}\nonumber\\ &=2(r+1).\nonumber \end{align} Finally, we note that $f_{n,s}(x)$ is a polynomial in $x$ involving $\beta_{n}(x), \beta_{n}^{(1)}(x),\cdots,\beta_{n}^{(s-1)}(x)$, as we can see from the recurrence relation in \eqref{28}. From \eqref{26}, we see that \begin{align} \beta_{n}^{(j)}(0)=(-1)^{j-1}j!\sum_{k=0}^{n}(2k+1)^{-(j+1)}=(-1)^{j-1}j!\big(H_{2n+1}^{(j+1)}-\frac{1}{2^{j+1}}H_{n}^{(j+1)}\big).\label{32} \end{align} Combining \eqref{28} with \eqref{30}--\eqref{32}, we obtain our main result in Theorem 1.
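Identity \eqref{A} together with the recurrence \eqref{28} can also be checked numerically. The following sketch (illustrative only, not part of the proof) represents $\beta_n$ by its truncated Taylor expansion at $0$ and verifies \eqref{A} with exact rational arithmetic for small $n$ and $r$:

```python
from fractions import Fraction
from math import comb, factorial

def beta_jet(n, deg):
    # Taylor coefficients of beta_n(x) = -sum_{k=0}^n 1/(x+2k+1) at x = 0,
    # up to degree `deg`: the coefficient of x^j is (-1)^(j+1) sum_k (2k+1)^-(j+1)
    return [(-1) ** (j + 1) * sum(Fraction(1, (2 * k + 1) ** (j + 1))
                                  for k in range(n + 1))
            for j in range(deg + 1)]

def f_value(n, r):
    # f_{n,1} = beta_n;  f_{n,s+1} = beta_n f_{n,s} + f_{n,s}'   (recurrence (28))
    beta = beta_jet(n, r - 1)
    f = beta[:]
    for _ in range(r - 1):
        prod = [sum(beta[i] * f[j - i] for i in range(j + 1))
                for j in range(len(f))]                     # truncated product
        deriv = [Fraction(j) * f[j] for j in range(1, len(f))]
        f = [prod[j] + deriv[j] for j in range(len(deriv))]  # one degree shorter
    return f[0]          # f_{n,r}(0)

# check identity (A) for small n and r
for n in range(1, 6):
    for r in range(1, 5):
        lhs = (-1) ** r * factorial(r) * sum(
            Fraction(comb(n, k) * (-1) ** k, (2 * k + 1) ** (r + 1))
            for k in range(n + 1))
        rhs = Fraction(2 ** (2 * n), (2 * n + 1) * comb(2 * n, n)) * f_value(n, r)
        assert lhs == rhs
print("identity (A) verified for n <= 5, r <= 4")
```

For instance, $f_{1,1}(0)=\beta_1(0)=-\tfrac{4}{3}$ and $f_{1,2}(0)=\beta_1(0)^2+\beta_1^{(1)}(0)=\tfrac{26}{9}$, in agreement with the recurrence.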
\section{Introduction} In 1896 Heffter, \cite{H}, introduced his now famous first difference problem: partition the set $\{1,\dots,3m\}$ into $m$ triples $\{a,b,c\}$ such that either $a+b=c$ or $a+b+c$ is divisible by $6m+1$. However, it was not until 1939 that Peltesohn, \cite{Pe}, showed that a solution exists whenever $m\neq 3$. A key interest in this problem is that solutions to Heffter's first difference problem yield cyclic Steiner triple systems; see \cite{CD}. A natural extension to this question is: can we identify a set of $m$ subsets $\{x_1,\dots,x_s\}\subset \{-ms,\dots, -1,1,\dots,ms\}$ such that the sum of the entries in each subset is divisible by $2ms+1$ and, further, if $x$ occurs in one of the subsets then $-x$ does not occur in any of the subsets? We call the set of $m$ such subsets a {\em Heffter system}. Two Heffter systems, $H_1=\{H_{11},\dots, H_{1m}\}$, $|H_{1i}|=s$ for $i=1,\dots,m$, and $H_2=\{H_{21},\dots, H_{2n}\}$, $|H_{2j}|=t$ for $j=1,\dots,n$, where $ms=nt$, are said to be {\em orthogonal} if for all $i,j$, $|H_{1i}\cap H_{2j}|\leq 1$. As observed by Dinitz and Mattern \cite{DM}, a pair of orthogonal Heffter systems is equivalent to a {\em Heffter array} $H(m,n;s,t)$, which is an $m\times n$ array of integers, such that: \begin{itemize} \item each row contains $s$ filled cells and each column contains $t$ filled cells; \item the elements in every row and column sum to $0$ in ${\mathbb Z}_{2ms+1}$; and \item for each integer $1\leq x\leq ms$, either $x$ or $-x$ appears in the array. \end{itemize} Henceforth the set of integers $\{0,1,\dots ,n-1\}$ is denoted by $[n]$. In the current paper the rows and columns of an $m\times n$ array will be indexed by $[m]$ and $[n]$, respectively. A Heffter array is {\em square} if $m = n$ and necessarily $s = t$, and is denoted $H(n;t)$. The following is an example of a pair of orthogonal Heffter systems that are equivalent to a Heffter array $H(6,12;8,4)$, given by Archdeacon in \cite{A}.
\begin{example} Let $m=6$ and $s=8$. Then for each set \begin{eqnarray*} \begin{array}{ll} \{-1 ,2 ,5 ,-6 , -25,26 ,29 ,-30\},& \{3 ,-4 ,-7 ,8 , 27 ,-28,-31, 32\},\\ \{ 9,-10, -13,14 ,33 ,-34, -37,38\},& \{-11, 12, 15 ,-16,-35,36 , 39 ,-40\},\\ \{-17, 18,21 ,-22, -41,42 ,45 ,-46\}\ \mbox{\rm and} & \{ 19,-20,-23,24 , 43 ,-44,-47,48 \}, \end{array} \end{eqnarray*} its elements sum to zero. Also for each $x\in \{1,\dots ,48\}$, precisely one of $x$ or $-x$ occurs in a subset. Thus these 6 subsets form a Heffter system. Let $n=12$ and $t=4$. Then again for each of the following sets \begin{eqnarray*} \begin{array}{lll} \{ 9,-11,-17,19\},&\{-10,12,18,-20 \},&\{ -1,3,21,-23 \},\\ \{2,-4,-22,24 \},&\{ 5,-7,-13,15\},&\{ -6,8,14,-16 \},\\ \{33,-35,-41,43\},& \{-34,36,42,-44\},& \{-25,27,45,-47\},\\ \{26,-28,-46,48\},& \{29,-31,-37,39\}\ \mbox{\rm and} & \{-30,32,38,-40\}, \end{array} \end{eqnarray*} its elements sum to zero. Also for each $x\in \{1,\dots ,48\}$, precisely one of $x$ or $-x$ occurs in a subset. Thus these 12 subsets form a Heffter system. These two systems are orthogonal and thus we have equivalence with the following Heffter array $H(6,12;8,4)$. \begin{scriptsize} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline & &-1 &2 &5 &-6 & & &-25&26 &29 &-30\\ \hline & &3 &-4 &-7 &8 & & &27 &-28&-31& 32\\ \hline 9 &-10& & &-13&14 &33 &-34& & &-37&38\\ \hline -11 & 12& & &15 &-16&-35&36 & & &39 &-40\\ \hline -17 & 18&21 &-22& & &-41&42 &45 &-46& &\\ \hline 19 &-20&-23&24 & & &43 &-44&-47&48 & &\\ \hline \end{tabular} \end{center} \end{scriptsize} \end{example} A cycle decomposition of a complete graph is the edge-disjoint decomposition of its edges into fixed length cycles. It was Archdeacon \cite{A} who first showed that a Heffter array, together with a certain ordering of its elements, yields a biembedding of a pair of cycle decompositions of the complete graph onto an orientable surface. 
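The three defining conditions of a Heffter array can be checked mechanically for the $H(6,12;8,4)$ above. In the sketch below (illustrative only), empty cells are encoded as $0$, which is unambiguous since $0$ never occurs as an entry:

```python
# Illustrative check of the defining conditions of the H(6,12;8,4) example;
# 0 encodes an empty cell (0 never occurs as an entry of a Heffter array).
H = [
    [  0,   0,  -1,   2,   5,  -6,   0,   0, -25,  26,  29, -30],
    [  0,   0,   3,  -4,  -7,   8,   0,   0,  27, -28, -31,  32],
    [  9, -10,   0,   0, -13,  14,  33, -34,   0,   0, -37,  38],
    [-11,  12,   0,   0,  15, -16, -35,  36,   0,   0,  39, -40],
    [-17,  18,  21, -22,   0,   0, -41,  42,  45, -46,   0,   0],
    [ 19, -20, -23,  24,   0,   0,  43, -44, -47,  48,   0,   0],
]
m, n, s, t = 6, 12, 8, 4
mod = 2 * m * s + 1                                   # 2ms + 1 = 97
rows = [[x for x in row if x != 0] for row in H]
cols = [[H[r][c] for r in range(m) if H[r][c] != 0] for c in range(n)]
assert all(len(row) == s for row in rows)             # s filled cells per row
assert all(len(col) == t for col in cols)             # t filled cells per column
assert all(sum(row) % mod == 0 for row in rows)       # row sums are 0 mod 2ms+1
assert all(sum(col) % mod == 0 for col in cols)       # column sums are 0 mod 2ms+1
support = sorted(abs(x) for row in rows for x in row)
assert support == list(range(1, m * s + 1))           # exactly one of x, -x appears
print("all Heffter array conditions hold")
```

In fact every row and column of this array sums to $0$ over $\mathbb{Z}$, so it is an integer Heffter array in the sense defined below.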
Since then a number of papers have appeared on the connection between Heffter arrays and the biembedding of cycle decompositions as well as a number of papers studying more general biembeddings of the complete graph. See, for example, the papers \cite{A,BDF,CDP,CMPP,DM,GG,GrM,GM,M,V} and \cite{ItalE}. We next describe the above orderings. Given a row $r$ of a Heffter array $H(m,n;s,t)$, if there exists a cyclic ordering $\phi_r=(a_0,a_1,\dots ,a_{s-1})$ of the entries of row $r$ such that, for $i=0,\dots, s-1$, the partial sums $$\alpha_i=\displaystyle\sum_{j=0}^{i} a_j \md{2ms+1}$$ are all distinct, we say that $\phi_r$ is {\em simple}. A simple ordering of the entries of a column may be defined similarly. If every row and column of a Heffter array $H(m,n;s,t)$ has a simple ordering, we say that the array is {\em simple}. The existence of a simple $H(m,n;s,t)$ implies the existence of {\em orthogonal} decompositions ${\mathcal C}$ and ${\mathcal C}'$ of the graph $K_{2ms+1}$ into $s$-cycles and $t$-cycles (respectively); that is, any cycle from ${\mathcal C}$ shares at most one edge with any cycle from ${\mathcal C}'$ \cite{A}. Orthogonal cycle systems of the complete graph are studied in \cite{AHL}, \cite{CY} and \cite{CY2}. Observe that if $s,t\leq 5$, then any $H(m,n;s,t)$ is simple. The composition of the cycles $\phi_r$, for $r\in [m]$, is a permutation, denoted here $\omega_r$, on the entries of the Heffter array. Similarly we may define the permutation $\omega_c$ as the composition of the cycles $\phi_c$, for $c\in [n]$. If the permutation $\omega_r\circ\omega_c$ can be written as a single cycle of length $ms=nt$, we say that $\omega_r$ and $\omega_c$ are {\em compatible} orderings for the Heffter array. Archdeacon \cite{A} proved the following theorem, showing that a Heffter array with a pair of compatible and simple orderings can be used to construct an embedding of the complete graph $K_{2ms+1}$ on a surface.
\begin{theorem}\label{Archdeacon} {\rm \cite{A}} Suppose there exists a Heffter array $H(m, n; s, t)$ with orderings $\omega_r$ of the symbols in the rows of the array and $\omega_c$ on the symbols in the columns of the array, where $\omega_r$ and $\omega_c$ are both simple and compatible. Then there exists a face $2$-colourable embedding of $K_{2ms+1}$ on an orientable surface such that the faces of one colour are cycles of length $s$ and the faces of the other colour are cycles of length $t$. Moreover, in such an embedding the vertices may be labelled with the elements of $\mathbb{Z}_{2ms+1}$ such that the permutation $x \rightarrow x + 1$ preserves the faces of each colour. \end{theorem} If we relax the condition of simplicity in the above theorem, we still have a biembedding on an orientable surface but the faces collapse into smaller ones (and the cycles become circuits). On the other hand if we relax only the condition of compatibility, we have an embedding onto a pseudosurface rather than a surface, but ${\mathcal C}$ and ${\mathcal C}'$ remain orthogonal. To date, the existence of Heffter arrays with orderings that are {\em both} compatible and simple is known in only a few specific cases: $H(3,n;n,3)$ \cite{DM}; $H(5,n;n,5)$ with $n\leq 100$ \cite{DM}; $H(n;t)$, $nt\equiv 3$ \md{4} and $t\in \{3,5,7,9\}$ \cite{ADDY, DW, CMPP}. Ignoring orderings, in \cite{ABD} it was shown that an $H(m,n;n,m)$ exists for all possible values of $m$ and $n$. The spectrum for square Heffter arrays has been completely determined in \cite{ADDY}, \cite{DW} and \cite{CDDY}. \begin{theorem} There exists an $H(n; k)$ if and only if $3 \leq k \leq n$. \label{mainthm} \end{theorem} For the sake of ease of description, Heffter arrays often possess some extra properties that we now describe. A Heffter array is called an {\em integer} Heffter array if the sum of each row and column is $0$ in ${\mathbb Z}$.
Suppose that a simple cyclic ordering $\phi_r=(a_1,a_2,\dots ,a_s)$ of a row $r$ of a Heffter array has the property that whenever entry $a_i$ lies in cell $(r,c)$ and entry $a_{i+1}$ lies in cell $(r,c')$, then $c<c'$. That is, the ordering for the row $r$ is taken from left to right across the array. We say that $\phi_r$ is the {\em natural} ordering for the rows and define a natural column ordering in a similar way from top to bottom. If the natural ordering for every row and column is also a simple ordering, we say that the Heffter array is {\em globally simple}. We focus on square Heffter arrays in this paper and can now state our main results. \begin{theorem} If $p>0$ and $n\geq 4p$ then there exists a globally simple integer Heffter array $H(n;4p)$. \label{main1} \end{theorem} We prove Theorem \ref{main1} in Section 2. \begin{corollary} If $p>0$ and $n\geq 4p$, there exists a pair of orthogonal decompositions of $K_{8np+1}$ into cycles of length $4p$. \end{corollary} \begin{theorem} Let $n\equiv 1 \md{4}$, $p>0$ and $n\geq 4p+3$. Then there exists a globally simple integer Heffter array $H(n;4p+3)$. \label{main2} \end{theorem} \begin{theorem} Let $n\equiv 0\md{4}$. Then there exist constants $c$ and $n_0$ such that if $n\geq n_0$ and $n-4p\geq c\log^2{n}$, then there is a globally simple Heffter array $H(n;4p+3)$. \label{main3} \end{theorem} We prove Theorems \ref{main2} and \ref{main3} in Sections 3 and 4. \begin{corollary} If either {\rm (a)} $n\equiv 1 \md{4}$, $p>0$ and $n\geq 4p+3$, or {\rm (b)} $n\equiv 0\md{4}$ and $n\gg 4p$, there exists a pair of orthogonal decompositions of $K_{2n(4p+3)+1}$ into cycles of length $4p+3$. \end{corollary} Even though not explicitly stated, the partial sums (given by the natural ordering) will also be distinct $\md{2nk+2}$. As shown in \cite{CMPP}, our results thus also yield orthogonal cycle decompositions of the complete graph of order $2nk+2$ minus a $1$-factor.
The following are useful conventions and results which will be used throughout the paper. It is important to be aware that row and column indices are {\em always} calculated modulo $n$, while entries of arrays are {\em always} evaluated as integers. The {\em support} of an array $A$ is the set of absolute values of the entries of $A$, denoted $s(A)$. In what follows, for a partially filled array $A=[A(i,j)]$ we use $A(i,j)$ to denote the entry in cell $(i,j)$ of array $A$. The cells of an $n\times n$ array can be partitioned into $n$ disjoint {\em diagonals} $D_d$, $d\in [n]$, where \begin{eqnarray*} D_d:=\{(i+d,i)\mid i\in [n]\}. \end{eqnarray*} Let the entry in row $a$ and column $a$ of diagonal $D_i$ be denoted by $d_i(r_a)$ and $d_i(c_a)$, respectively, with these values defined to be $0$ when there is no entry. For a given row $a$ we define $\Sigma(x)=\sum_{i=0}^x d_i(r_a)$ and for a given column $a$ we define $\overline{\Sigma}(x)=\sum_{i=0}^x d_i(c_a)$. For a given row $a$, the values of $\Sigma(x)$ such that $d_x(r_a)$ is non-zero are called the {\em row partial sums} for $a$. For a given column $a$, the values of $\overline{\Sigma}(x)$ such that $d_x(c_a)$ is non-zero are called the {\em column partial sums} for $a$. Thus to show an array is globally simple, it suffices to show that the row partial sums are distinct (modulo $2nk+1$) for each row $a$ and that the column partial sums are distinct (modulo $2nk+1$) for each column $c$. To aid the reader, we will often refer to the following straightforward observations. \begin{remark} Let $m,x_1,x_2,\alpha_1,\alpha_2,\beta_1,\beta_2$ be integers and $m>0$.
Then for: \begin{eqnarray} -m\leq x_1,x_2\leq m,&& x_1\equiv x_2\md{2m+1}\Rightarrow x_1=x_2; \label{mods}\\ 0\leq x_1,x_2< m,&& x_1\equiv x_2\md{m} \Rightarrow x_1=x_2; \label{modl}\\ -\frac{m}{2}<\alpha_1,\alpha_2<\frac{m}{2},&& \beta_1m+\alpha_1=\beta_2m+\alpha_2\Rightarrow \beta_1=\beta_2\mbox{ and }\alpha_1=\alpha_2; \label{n}\\ -m<x_1<0<x_2<m,&& x_1\equiv x_2\md{m} \Rightarrow x_2=m+x_1. \label{mod=} \end{eqnarray} \end{remark} \section{Globally simple integer $H(n;4p)$ constructions} In this section we prove Theorem \ref{main1}. That is, we construct a globally simple Heffter array $H(n;4p)$ for each $n$ and $p\geq3$ such that $n\geq 4p$. Note that a globally simple $H(n;8)$ was constructed in \cite{CMPP} and it is easy to see that all Heffter arrays $H(n;4)$ are globally simple. We will divide this section according to the parity of $p$. Throughout this section $k=4p$. \subsection{$p$ is odd} Let $p>1$ be odd and $I=[\frac{p-1}{2}]$. We remind the reader that throughout this paper, rows and columns are evaluated modulo $n$, while entries are always evaluated as integers.
For $x\in [n]$ and $i\in I$ define the array $A$ to have the following entries: \begin{eqnarray*} 4i+1+kx&\mbox{ in cell}&(4i+x,x)\in D_{4i}\nonumber,\\ -(4i+2)-k(x+1\md{n}) &\mbox{ in cell}&(4i+1+x,x)\in D_{4i+1},\\ -(k-(4i+3))-kx &\mbox{ in cell}&(4i+2+x,x)\in D_{4i+2},\\ k-(4i+4)+k(x+1\md{n})&\mbox{ in cell}&(4i+3+x,x)\in D_{4i+3}, \end{eqnarray*}\vspace{-0.7cm}\begin{eqnarray*} 2p-1+kx &\mbox{ in cell}&(2p-2+x,x)\in D_{2p-2},\\ -2p-k(x+1\md{n}) &\mbox{ in cell}&(2p-1+x,x)\in D_{2p-1}, \end{eqnarray*}\vspace{-0.7cm}\begin{eqnarray*} -(2p-2-4i)-kx &\mbox{ in cell}&(2p+4i+x,x)\in D_{2p+4i},\\ 2p-3-4i+k(x+1 \md{n})&\mbox{ in cell}&(2p+4i+1+x,x)\in D_{2p+4i+1},\\ 2p+4+4i+kx &\mbox{ in cell}&(2p+4i+2+x,x)\in D_{2p+4i+2}, \\ -(2p+5+4i)-k(x+1\md{n}) &\mbox{ in cell}&(2p+4i+3+x,x)\in D_{2p+4i+3}, \end{eqnarray*}\vspace{-0.7cm}\begin{eqnarray*} -(2p+1+kx) &\mbox{ in cell}&(k-2+x,x)\in D_{k-2},\\ k+k(x+1\md{n}) &\mbox{ in cell}&(k-1+x,x)\in D_{k-1}.\\ \end{eqnarray*} \begin{example} A globally simple Heffter array $H(17;12)$ ($n=17$ and $p=3$). 
\begin{scriptsize} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline 1& & & & & & 96&-91&-119&118&135&-136&-162&161&188&-189&-2\\ \hline -14&13& & & & & & 108&-103&-131&130&147&-148&-174&173&200&-201\\ \hline -9&-26&25& & & & & & 120&-115&-143&142&159&-160&-186&185&8\\ \hline 20&-21&-38&37& & & & & & 132&-127&-155&154&171&-172&-198&197\\ \hline 5&32&-33&-50&49& & & & & & 144&-139&-167&166&183&-184&-6\\ \hline -18&17&44&-45&-62&61& & & & & & 156&-151&-179&178&195&-196\\ \hline -4&-30&29&56&-57&-74&73& & & & & & 168&-163&-191&190&3\\ \hline 15&-16&-42&41&68&-69&-86&85& & & & & & 180&-175&-203&202\\ \hline 10&27&-28&-54&53&80&-81&-98&97& & & & & & 192&-187&-11\\ \hline -23&22&39&-40&-66&65&92&-93&-110&109& & & & & & 204&-199\\ \hline -7&-35&34&51&-52&-78&77&104&-105&-122&121& & & & & &12 \\ \hline 24&-19&-47&46&63&-64&-90&89&116&-117&-134&133& & & & & \\ \hline &36&-31&-59&58&75&-76&-102&101&128&-129&-146&145& & & & \\ \hline & &48&-43&-71&70&87&-88&-114&113&140&-141&-158&157& & & \\ \hline & & &60&-55&-83&82&99&-100&-126&125&152&-153&-170&169& & \\ \hline & & & &72&-67&-95&94&111&-112&-138&137&164&-165&-182&181& \\ \hline & & & & &84&-79&-107&106&123&-124&-150&149&176&-177&-194&193 \\ \hline \end{tabular} \end{center} \end{scriptsize} \end{example} \subsubsection{Support of the array when $p$ is odd} Observe that for each $\alpha\in[k]$, $\displaystyle s(D_{\alpha})=\{ S_\alpha+kx| x\in [n]\}$ in $A$ where $S_\alpha$ satisfies: $ \begin{array}{ll} \{S_{4i} | i\in I\}=\{1,5,\dots ,2p-5\},&\{S_{4i+2} | i\in I\}=\{2p+3,2p+7,\dots,4p-3\},\\ \{S_{4i+1}| i\in I\}=\{2,6,\dots,2p-4\},&\{S_{4i+3} | i\in I\}=\{2p+2,2p+6,\dots,4p-4\},\\ \{S_{2p+4i} | i\in I\}=\{4,8,\dots,2p-2\},&\{S_{2p+4i+2} | i\in I\}=\{2p+4,2p+8,\dots,4p-2\}, \\ \{S_{2p+4i+1}| i\in I\}=\{3,7,\dots,2p-3\},&\{S_{2p+4i+3} | i\in I\}=\{2p+5,2p+9,\dots,4p-1\}, \\ \end{array}$ $\begin{array}{llll} S_{2p-2}=2p-1,&S_{4p-2}=2p+1,& S_{2p-1}=2p,& S_{4p-1}=4p.\\ \end{array} $ Hence it is easy to 
see that $s(A)=\{1,2,\dots,nk\}$. \subsubsection{Distinct column partial sums when $p$ is odd} Recall that $k=4p$. In this section we will show that in the array $A$, $|\overline{\Sigma}(\alpha)|\leq nk$ for all $\alpha\in [k]$. Hence by (\ref{mods}), $$\overline{\Sigma}(\alpha_1)\equiv \overline{\Sigma}(\alpha_2)\md{2nk+1} \Rightarrow \overline{\Sigma}(\alpha_1)=\overline{\Sigma}(\alpha_2).$$ We then show that $\overline{\Sigma}(\alpha_1)\not=\overline{\Sigma}(\alpha_2)$ for $\alpha_1\neq \alpha_2$ by comparing these values modulo $k$. Hence we obtain the required result $\overline{\Sigma}(\alpha_1)\not\equiv \overline{\Sigma}(\alpha_2)\md{2nk+1}$. First observe that for each column $a$ and $i\in I$: $\displaystyle \sum_{j=0}^3 d_{4i+j}(c_a)=-2$, and $\displaystyle \sum_{j=0}^3 d_{2p+4i+j}(c_a)=-2.$ Now the partial column sums for each column $a$ can be calculated as follows: \begin{eqnarray*} \overline{\Sigma}(4i)&=&4i+1+ak-2i=2i+1+ak<nk\\ &\Rightarrow&\{\overline{\Sigma}(4i)\md{k} \mid i\in I\}=\{1,3,\dots, p-2\}.\\ \overline{\Sigma}(4i+1)&=&2i+1+ak-(4i+2)-k(a+1\md{n})\\ &=&-(2i+1)+ka-k(a+1\md{n}),\\ &\Rightarrow&\{\overline{\Sigma}(4i+1)\md{k} \mid i\in I\}=\{3p+2,\dots, 4p-3, 4p-1\}.\\ \overline{\Sigma}(4i+2)&=&-(2i+1)+ka-k(a+1\md{n})-k+(3+4i)-ak\\ &=&2i+2-k-k(a+1\md{n}),\\ &\Rightarrow&\{\overline{\Sigma}(4i+2)\md{k} \mid i\in I\}=\{2,4,\dots,p-1\}.\\ \overline{\Sigma}(4i+3)&=&2i+2-k-k(a+1\md{n})+k-(4+4i)+k(a+1\md{n})\\ &=&-2i-2,\\ &\Rightarrow&\{\overline{\Sigma}(4i+3)\md{k} \mid i\in I\}=\{3p+1,\dots, 4p-4,4p-2\}.\\ \overline{\Sigma}(2p-2)&=&-2(p-3)/2-2+2p-1+ak=p+ak.\\ \overline{\Sigma}(2p-1)&=&p+ak-2p-k(a+1\md{n})=-p+ak-k(a+1\md{n}).\\ \overline{\Sigma}(2p+4i)&=&-p+ak-k(a+1\md{n})-2p+2+4i-ak-2i\\ &=&-3p+2i+2-k(a+1\md{n}),\\ &\Rightarrow&\{\overline{\Sigma}(2p+4i)\md{k} \mid i\in I\}=\{p+2,\dots, 2p-3,2p-1\}.\\ \overline{\Sigma}(2p+4i+1)&=&-3p+2i+2-k(a+1\md{n})+2p-3-4i+k(a+1\md{n})\\ &=&-p-(2i+1),\\ &\Rightarrow&\{\overline{\Sigma}(2p+4i+1)\md{k}\mid i\in I\}=\{2p+2,\dots ,3p-3,
3p-1\}.\\ \overline{\Sigma}(2p+4i+2)&=&-p-(2i+1)+2p+4+4i+ak=p+2i+3+ak,\\ &\Rightarrow&\{\overline{\Sigma}(2p+4i+2)\md{k}\mid i\in I\}=\{p+3,p+5,\dots,2p\}.\\ \overline{\Sigma}(2p+4i+3)&=&p+2i+3+ak-2p-5-4i-k(a+1\md{n}),\\ &=&-p-2(i+1)+ak-k(a+1\md{n})\\ &\Rightarrow&\{\overline{\Sigma}(2p+4i+3)\md{k}\mid i\in I\}=\{2p+1,\dots ,3p-4,3p-2\}.\\ \overline{\Sigma}(k-2)&=&-p-2((p-3)/2+1)+ak-k(a+1\md{n})-2p-1-ak\\ &=&-k-k(a+1\md{n})\neq 0. \\ \overline{\Sigma}(k-1)&=&0. \end{eqnarray*} One can easily check from the above calculations that for column $a$, $\overline{\Sigma}(\alpha_1)\neq \overline{\Sigma}(\alpha_2) \md{k}$ for all $\alpha_1,\alpha_2\in[k-1]$ and $\alpha_1\neq \alpha_2$. Furthermore $\overline{\Sigma}(k-2)=-d_{k-1}(c_a)\neq 0$. Also it is not hard to check that for all $\alpha\in[k]$, $|\overline{\Sigma}(\alpha)|\leq nk$. Hence all the column partial sums are distinct $\md{2kn+1}$. \subsubsection{Distinct row partial sums when $p$ is odd} As elements in $s(D_\alpha)$ are all congruent modulo $k$ in $A$, we have $d_{\alpha}(r_a)\equiv d_\alpha(c_b) \md{k}$ for all $a,b\in[n], \alpha\in [k]$. Hence $\displaystyle{ \sum_{j=0}^\alpha d_j(r_a)\equiv \sum_{j=0}^\alpha d_j(c_b)\md{k}}$. Now as the partial column sums up to and including diagonal $4p-2$ are distinct modulo $k$, the partial sums of rows up to and including diagonal $4p-2$ are distinct modulo $k$. To use the same argument as above, we thus just need to show that $|\Sigma(\alpha)|\leq nk$ for each row $a$ and $\alpha\in [k]$. First observe that for each $0\leq j\leq 2p-2$, $d_{2j}(r_a)$ and $d_{2j+1}(r_a)$ are in the form: $d_{2j}(r_a)=\alpha +s_jk(a-\beta\md{n})$ and $d_{2j+1}(r_a)=-\alpha-1 -s_jk(a-\beta\md{n})$ where $\alpha$ and $\beta$ are integers and $s_j\in\{1,-1\}$. Hence $d_{2j}(r_a)+d_{2j+1}(r_a)=-1$ and $\Sigma(2j+1)=-(j+1)$, for each $0\leq j\leq 2p-2$. Now $d_{4i}(r_a),d_{2p-2}(r_a), d_{2p+4i+2}(r_a)\geq 0$.
Hence $-nk\leq \Sigma(2\alpha)=-\alpha+d_{2\alpha}(r_a)\leq nk$ for $\alpha\in\{2i,p-1,p+2i+1|i\in I\}.$ Finally \begin{eqnarray*} \Sigma(4i+2)&=&-(2i+1)-k+4i+3-k(a-4i-2\md{n})\\ &=&2i+2-k-k(a-4i-2\md{n})\geq -nk,\\ \Sigma(2p+4i)&=&-(p+2i)-2p+2+4i-k(a-4i-2p\md{n})\\ &=&-3p+2i+2-k(a-4i-2p\md{n})\geq -nk,\\ \Sigma(4p-2)&=&-(2p-1)-2p-1-k(a-k+2\md{n})\\ &=&-k-k(a-k+2\md{n})\geq -nk, \\ \Sigma(4p-1)&=&0. \end{eqnarray*} \subsection{$p$ is even} Let $p>2$ be even and $I=[\frac{p-2}{2}]$. For $x\in [n]$ and $i\in I$ define the array $A$ to have the following entries: \begin{eqnarray*} 4i+1+kx&\mbox{ in cell}&(4i+x,x)\in D_{4i}\nonumber,\\ -(4i+2)-k(x+1\md{n}) &\mbox{ in cell}&(4i+1+x,x)\in D_{4i+1},\\ -(k-(4i+3))-kx &\mbox{ in cell}&(4i+2+x,x)\in D_{4i+2},\\ k-(4i+4)+k(x+1\md{n})&\mbox{ in cell}&(4i+3+x,x)\in D_{4i+3}, \end{eqnarray*}\vspace{-0.7cm}\begin{eqnarray*} 2p-3+kx &\mbox{ in cell}&(2p-4+x,x)\in D_{2p-4},\\ -2p+2-k(x+1\md{n}) &\mbox{ in cell}&(2p-3+x,x)\in D_{2p-3}, \end{eqnarray*}\vspace{-0.7cm}\begin{eqnarray*} -(2p-4i)-kx &\mbox{ in cell}&(2p+4i-2+x,x)\in D_{2p+4i-2},\\ 2p-1-4i+k(x+1 \md{n})&\mbox{ in cell}&(2p+4i-1+x,x)\in D_{2p+4i-1},\\ 2p+2+4i+kx &\mbox{ in cell}&(2p+4i+x,x)\in D_{2p+4i}, \\ -(2p+3+4i)-k(x+1\md{n}) &\mbox{ in cell}&(2p+4i+1+x,x)\in D_{2p+4i+1}, \end{eqnarray*}\vspace{-0.7cm}\begin{eqnarray*} -4-kx &\mbox{ in cell}&(k-6+x,x)\in D_{k-6},\\ 3+k(x+1 \md{n})&\mbox{ in cell}&(k-5+x,x)\in D_{k-5},\\ k-2+kx &\mbox{ in cell}&(k-4+x,x)\in D_{k-4}, \\ -(k-1)-k(x+1\md{n}) &\mbox{ in cell}&(k-3+x,x)\in D_{k-3},\\ \\ -(2p+1+kx) &\mbox{ in cell}&(k-2+x,x)\in D_{k-2},\\ k+k(x+1\md{n}) &\mbox{ in cell}&(k-1+x,x)\in D_{k-1},\\ \end{eqnarray*} \begin{example} A globally simple Heffter array $H(17;16)$ ($n=17$ and $p=4$).
\begin{scriptsize} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline 1& &64 &-57 &-95 &94 & 115&-91&-119&118&183&-184&-214&213&252&-253&-2\\ \hline -18&17& &80 &-73 &-111&110& 131&-103&-131&130&199&-200&-230&229&268&-269\\ \hline -13&-34&33& &96 &-89 &-127 &126 & 147&-115&-143&142&215&-216&-246&245&12\\ \hline 28&-29&-50&49& &112 &-105 &-143 &142&163&-127&-155&154&231&-232&-262&261\\ \hline 5&44&-45&-66&65& &128 &-121 &-159 &158& 179&-139&-167&166&247&-248&-6\\ \hline -22&21&60&-61&-82&81& &144 &-137 &-175 &174 & 195&-151&-179&178&263&-264\\ \hline -8&-38&37&76&-77&-98&97& &160 & -153&-191&190 & 211&-163&-191&190&7\\ \hline 23&-24&-54&53&92&-93&-114&113& &176 &-169& -207&206 & 227&-175&-203&202\\ \hline 10&39&-40&-70&69&108&-109&-130&129& &192 &-185&-223 &222 & 243&-187&-11\\ \hline -27&26&55&-56&-86&85&124&-125&-146&145& &208 &-201&-239 &238 & 259&-199\\ \hline -4&-43&42&71&-72&-102&101&140&-141&-162&161& &224 & -217&-255 &254 &3 \\ \hline 19&-20&-59&58&87&-88&-118&117&156&-157&-178&177& &240 &-233&-271 &270 \\ \hline 14 &35&-36&-75&74&103&-104&-134&133&172&-173&-194&193& &256 &-249 &-15 \\ \hline -31 &30 &51&-52&-91&90&119&-120&-150&149&188&-189&-210&209& &272 &-265 \\ \hline -9 &-47 &46 &67&-68&-107&106&135&-136&-166&165&204&-205&-226&225& &16 \\ \hline 32 &-25&-63 &62 &83&-84&-123&122&151&-152&-182&181&220&-221&-242&241& \\ \hline &48 &-41&-79 &78&99&-100&-139&138&167&-168&-198&197&236&-237&-258&257 \\ \hline \end{tabular} \end{center} \end{scriptsize} \end{example} \subsubsection{Support when $p$ is even} Observe that for each $\alpha\in[k]$, $\displaystyle s(D_{\alpha})= \{S_\alpha +kx | x\in [n]\}$ in $A$ where $S_\alpha$ satisfies: $\begin{array}{ll} \{S_{4i}|i\in I\}=\{1,5,\dots ,2p-7\},&\{S_{4i+2}|i\in I\}=\{2p+5,2p+9,\dots, 4p-3\}, \\ \{S_{4i+1}|i\in I\}=\{2,6,\dots ,2p-6\},& \{S_{4i+3}|i\in I\}=\{2p+4,2p+8,\dots, 4p-4\},\\ \{S_{2p+4i-2}|i\in I\}=\{8,12,\dots, 2p\},&\{S_{2p+4i}|i\in I\}=\{2p+2,2p+6,\dots, 4p-6\},\\ 
\{S_{2p+4i-1}|i\in I\}=\{7,11,\dots, 2p-1\},&\{S_{2p+4i+1}|i\in I\}=\{2p+3,2p+7,\dots, 4p-5\},\\ \end{array}$ $ \begin{array}{llll} S_{2p-4}=2p-3,&S_{4p-2}=2p+1,& S_{2p-3}=2p-2,&S_{4p-1}=4p,\\ S_{4p-6}=4,&S_{4p-4}=4p-2,& S_{4p-5}=3,& S_{4p-3}=4p-1.\\ \end{array}$ Hence it is easy to see that $s(A)=\{1,2,\dots,nk\}$. \subsubsection{Distinct column partial sums when $p$ is even} First observe that for each column $a$ and $i\in I$: $\displaystyle \sum_{j=0}^3 d_{4i+j}(c_a)=-2$, and $\displaystyle \sum_{j=0}^3 d_{2p+4i-2+j}(c_a)=-2.$ Similarly to the previous subsection, the column partial sums for each column $a$ can be calculated as follows: \begin{eqnarray*} \overline{\Sigma}(4i)&=&2i+1+ka \Rightarrow\{\overline{\Sigma}(4i)\md{k}\mid i\in I\}=\{1,3,\dots, p-3\},\\ \overline{\Sigma}(4i+1)&=&-(2i+1)+ka-k(a+1\md{n})\\ &\Rightarrow&\{\overline{\Sigma}(4i+1)\md{k}\mid i\in I\}=\{3p+3,\dots, 4p-3, 4p-1\},\\ \overline{\Sigma}(4i+2)&=&2i+2-k-k(a+1\md{n})\\ &\Rightarrow&\{\overline{\Sigma}(4i+2)\md{k}\mid i\in I\}=\{2,4,\dots,p-2\},\\ \overline{\Sigma}(4i+3)&=&-2i-2\Rightarrow\{\overline{\Sigma}(4i+3)\md{k}\mid i\in I\}=\{3p+2,\dots, 4p-4,4p-2\},\\ \overline{\Sigma}(2p-4)&=&-2(p-4)/2-2+2p-3+ak=p-1+ka,\\ \overline{\Sigma}(2p-3)&=&-p+1+ka-k(a+1\md{n}),\\ \overline{\Sigma}(2p+4i-2)&=&-3p+2i+1-k(a+1\md{n})\\ &\Rightarrow&\{\overline{\Sigma}(2p+4i-2)\md{k} \mid i\in I\}=\{p+1,p+3,\dots ,2p-3\},\\ \overline{\Sigma}(2p+4i-1)&=&-p-2i,\\ &\Rightarrow&\{\overline{\Sigma}(2p+4i-1)\md{k} \mid i\in I\}=\{2p+4,\dots ,3p-2, 3p\},\\ \overline{\Sigma}(2p+4i)&=&-p-2i+2p+4i+ak+2=p+2i+2+ka\\ &\Rightarrow&\{\overline{\Sigma}(2p+4i)\md{k} \mid i\in I\}=\{p+2,p+4,\dots, 2p-2\},\\ \overline{\Sigma}(2p+4i+1)&=&-p-2i-1+ka-k(a+1\md{n})\\ &\Rightarrow&\{\overline{\Sigma}(2p+4i+1)\md{k} \mid i\in I\}=\{2p+3,\dots ,3p-3,3p-1\},\\ \overline{\Sigma}(k-6)&=&-(2p+1)-k(a+1\md{n}),\\ \overline{\Sigma}(k-5)&=&-2p+2,\\ \overline{\Sigma}(k-4)&=& 2p+ka,\\ \overline{\Sigma}(k-3)&=&-2p+1+ka-k(a+1\md{n}),\\ \overline{\Sigma}(k-2)&=&-k-k(a+1\md{n})\neq 0,\\
\overline{\Sigma}(k-1)&=&0. \end{eqnarray*} One can easily check from the above calculations that for column $a$, $\overline{\Sigma}(\alpha_1)\neq \overline{\Sigma}(\alpha_2)\md{k}$ for all $\alpha_1,\alpha_2\in[k-1]$ and $\alpha_1\neq \alpha_2$. It is also straightforward to check that $|\overline{\Sigma}(\alpha)|\leq nk$. Hence all the column partial sums are distinct modulo $2kn+1$. \subsubsection{Distinct row partial sums when $p$ is even} Similarly to the case when $p$ is odd, we just need to show that $|\Sigma(\alpha)|\leq nk$ for each row $a$ and $\alpha\in [k]$. As before, $d_{2j}(r_a)+d_{2j+1}(r_a)=-1$ and $\Sigma(2j+1)=-(j+1)$, for each $0\leq j\leq 2p-2$. Now $d_{4i}(r_a),d_{2p-4}(r_a), d_{2p+4i}(r_a), d_{k-4}(r_a)\geq 0$ for $i\in I$. Hence $-nk\leq \Sigma(2\alpha)=-\alpha+d_{2\alpha}(r_a)\leq nk$ for $\alpha\in\{2i,p-2,p+2i,2p-2|i\in I\}.$ Finally, \begin{eqnarray*} \Sigma(4i+2)&=&-(2i+1)-k+4i+3-k(a-4i-2\md{n}),\\ &=&2i+2-k-k(a-4i-2\md{n})\geq -nk,\\ \Sigma(2p+4i-2)&=&-(p+2i-1)-2p+4i-k(a-4i-2p+2\md{n}),\\ &=&-3p+2i+1-k(a-4i-2p+2\md{n})\geq -nk,\\ \Sigma(k-6)&=&-(2p-3)-4-k(a-k+6\md{n})\geq -nk,\\ \Sigma(k-2)&=&-(2p-1)-2p-1-k(a-k+2\md{n}) \\ &=&-k-k(a-k+2\md{n})\geq -nk,\\ \Sigma(k-1)&=&0. \end{eqnarray*} This proves Theorem \ref{main1}. \section{Support shifted globally simple Heffter arrays}\label{thmex} The array $A$ is defined to be a {\em support shifted Heffter array} $H(n;4p,\gamma)$ if it satisfies the following properties: \begin{itemize} \item[{\bf P1.}] Every row and every column of $A$ has $4p$ filled cells. \item[{\bf P2.}] $s(A)=\{\gamma n+1,\dots,(4p+\gamma)n\}$. \item[{\bf P3.}] Elements in every row and every column sum to $0$. \item[{\bf P4.}] Partial sums are distinct in each row and each column of $A$ modulo $2(4p+\gamma)n+1$. \end{itemize} A related generalization of Heffter arrays is studied in \cite{ItalE2}. Note that a support shifted Heffter array $H(n;4p,0)$ is in fact an integer Heffter array $H(n;4p)$.
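Properties P1--P4 can be verified mechanically for any explicit array. The following Python sketch (ours; the function and variable names are illustrative only) checks them for a candidate $n\times n$ array given as a list of lists with \verb|None| marking the empty cells, and is tested here on a small hand-checked integer Heffter array $H(4;4)$, i.e.\ the case $p=1$, $\gamma=0$.

```python
# Illustrative sketch (not from the paper): check Properties P1-P4 for a
# candidate support shifted Heffter array H(n; 4p, gamma).
def is_support_shifted(A, p, gamma):
    n, k = len(A), 4 * p
    mod = 2 * (k + gamma) * n + 1
    rows = [[e for e in row if e is not None] for row in A]
    cols = [[A[i][j] for i in range(n) if A[i][j] is not None]
            for j in range(n)]
    # P1: every row and every column has 4p filled cells
    if any(len(line) != k for line in rows + cols):
        return False
    # P2: the support is {gamma*n + 1, ..., (4p + gamma)*n}
    support = sorted(abs(e) for row in rows for e in row)
    if support != list(range(gamma * n + 1, (k + gamma) * n + 1)):
        return False
    # P3, P4: zero sums and distinct natural partial sums mod 2(4p+gamma)n + 1
    for line in rows + cols:
        if sum(line) != 0:
            return False
        if len({sum(line[:i + 1]) % mod for i in range(k)}) != k:
            return False
    return True

# A hand-checked integer Heffter array H(4; 4), i.e. H(4; 4, 0) with p = 1:
H44 = [[1, -2, -15, 16],
       [-3, 4, 13, -14],
       [-5, 6, 11, -12],
       [7, -8, -9, 10]]
print(is_support_shifted(H44, p=1, gamma=0))  # -> True
```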
In the following section we let $\gamma=3$ and we merge the support shifted Heffter array $H(n;4p,3)$ constructed below with a Heffter array $H(n;3)$ to obtain Heffter arrays $H(n;4p+3)$. In this section we write our results generally in terms of $\gamma$ in case the following theorem is useful for future research. \begin{theorem}\label{main} Let $p>0$, $n\geq 4p$, and $\gamma > 0$. If there exists $2p-1\leq \alpha\leq n-1-2p$ with $\mbox{gcd}(n,\alpha)=1$ then there exists a globally simple support shifted Heffter array $A=H(n;4p,\gamma)$ where the non-empty cells are precisely on the diagonals $D_i$ for $i\in[4p-1]\cup\{2p+\alpha\}$. \end{theorem} The proof of Theorem \ref{main} will be broken into sections. In Subsection \ref{definition} we will define an array $A$ that has $4p$ entries per row and column, with the right support, thus verifying that $A$ satisfies Properties P1 and P2. Then in Subsection \ref{rowcolsums} we will show that each row and column of $A$ sums to $0$, thus verifying $A$ satisfies Property P3. Finally in Subsections \ref{partrowsum}, \ref{partcolsum} and \ref{partcolsum0} we will verify that $A$ satisfies Property P4 by showing, respectively, that the row partial sums are distinct, the partial sums for the non-zero columns are distinct and then finally the partial sums for column $0$ are distinct modulo $2(4p+\gamma)n+1$. \begin{remark} Throughout Section \ref{thmex} it will be assumed that $p>0$, $n\geq 4p$, and $\gamma> 0$, $2p-1\leq \alpha\leq n-1-2p$ and gcd$(\alpha,n)=1$. We will define $I=[p]$, $2I=\{2i\mid i\in I\}$, $J=[p-1]$, $\mathbb{D}=\{0,1,\dots,4p-2,2p+\alpha\}$ and $T=\mathbb{D}\setminus 2I.$ Further we remind the reader that row and column numbers will be calculated modulo $n$ with residues from $[n]$, while entries are calculated as integers.
\end{remark} \subsection{Definition of the array $A$}\label{definition} Let $A=[A(i,j)]$ be an $n\times n$ array with filled cells defined by the $4p$ diagonals $$D_{2i}, D_{2i+1}, D_{2p}, D_{2p+1+2j}, D_{2p+2+2j},D_{2p+\alpha},$$ where $ i\in I$ and $ j\in J$, and with entries for each $x\in [n]$: \begin{eqnarray} (\gamma+2)n+4in-2x&\mbox{ in cell}&(2i-x,-x)\in D_{2i}\nonumber,\nonumber\\ -\gamma n-4in-1-2x &\mbox{ in cell}&(2i+1+x,x)\in D_{2i+1},\nonumber \\ -(4p+\gamma)n+2x&\mbox{ in cell}&(2p-\alpha x,-\alpha x)\in D_{2p},\nonumber\label{A}\\ (4p+\gamma-6)n-4jn+1+2x&\mbox{ in cell}&(2p+1+2j-x,-x)\in D_{2p+1+2j},\nonumber\\ -(4p+\gamma-4)n+4jn+2x&\mbox{ in cell}&(2p+2+2j+x,x)\in D_{2p+2+2j},\nonumber\\ (4p+\gamma-2)n+1+2x&\mbox{ in cell}&(2p+\alpha+\alpha x,\alpha x)\in D_{2p+\alpha}\nonumber. \end{eqnarray} It is useful to note that the set $\mathbb{D}$ contains the indices for the non-empty diagonals of $A$. Then for each $i\in I$ and $j\in J$: \begin{eqnarray} s(D_{2i}\cup D_{2i+1}) &=& \{\gamma n+4in+1,\dots,(\gamma+2)n+4in\};\nonumber\\ s(D_{2p+1+2j}\cup D_{2p+2+2j}) & = & \{(4p+\gamma-6)n-4jn+1,\dots,(4p+\gamma-4)n-4jn\}\nonumber \\ & = & \{(\gamma+2)n+4\epsilon n+1,\dots,(\gamma+4)n+4\epsilon n\}\ (\epsilon=p-2-j);\nonumber\\ s(D_{2p}\cup D_{2p+\alpha}) & = & \{(4p+\gamma)n-2n+1,\dots,(4p+\gamma)n\}.\nonumber \end{eqnarray} Hence $s(A)=\{\gamma n+1,\dots,(4p+\gamma)n\}$. \begin{example} \label{appie} Here we display a support shifted Heffter array $H(17;12,3)$ (the array $A$ above) illustrating Theorem $\ref{main}$ with $\alpha=6$.
\begin{scriptsize} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline 85&&&&&252&&-105&104&-169&168&-253&-212&213&-148&149&-84\\ \hline -52&53&&&&&224&&-103&102&-167&166&-225&-214&215&-150&151\\ \hline 153&-54&55&&&&&230&&-101&100&-165&164&-231&-216&217&-152\\ \hline -120&121&-56&57&&&&&236&&-99&98&-163&162&-237&-218&219\\ \hline 221&-122&123&-58&59&&&&&242&&-97&96&-161&160&-243&-220\\ \hline -188&189&-124&125&-60&61&&&&&248&&-95&94&-159&158&-249\\ \hline -255&-190&191&-126&127&-62&63&&&&&254&&-93&92&-157&156\\ \hline 154&-227&-192&193&-128&129&-64&65&&&&&226&&-91&90&-155\\ \hline -187&186&-233&-194&195&-130&131&-66&67&&&&&232&&-89&88\\ \hline 86&-185&184&-239&-196&197&-132&133&-68&69&&&&&238&&-87\\ \hline -119&118&-183&182&-245&-198&199&-134&135&-70&71&&&&&244&\\ \hline &-117&116&-181&180&-251&-200&201&-136&137&-72&73&&&&&250\\ \hline 222&&-115&114&-179&178&-223&-202&203&-138&139&-74&75&&&&\\ \hline &228&&-113&112&-177&176&-229&-204&205&-140&141&-76&77&&&\\ \hline &&234&&-111&110&-175&174&-235&-206&207&-142&143&-78&79&&\\ \hline &&&240&&-109&108&-173&172&-241&-208&209&-144&145&-80&81&\\ \hline &&&&246&&-107&106&-171&170&-247&-210&211&-146&147&-82&83\\ \hline \end{tabular} \end{center} \end{scriptsize} \end{example} To prove that the array is globally simple we must verify that all sequential partial sums are distinct. For the above example we give the row and column partial sums in the Appendix, where we have listed the row (column) number and each partial sum beginning with the entry in diagonal $D_0$. These partial sums are considered modulo $2(4p+\gamma)n+1=511$ so it is important to check carefully when the absolute value of the partial sums exceeds $255$. \subsection{Row sums and column sums}\label{rowcolsums} For a given row $a\in [n]$ and all $i\in I$, there exists $x_1,x_2\in [n]$ such that $a=2i-x_1\md{n}$ and $a=2i+1+x_2\md{n}$. 
Thus $x_1+x_2+1=0\md{n}$ and so $x_1+x_2=n-1.$ Consequently for all $ i\in I$, \begin{eqnarray} d_{2i}(r_a)+d_{2i+1}(r_a)&=&(\gamma+2)n+4in-2x_1-\gamma n-4in-1-2x_2\nonumber\\ &=&2n-1-2(n-1)=1.\label{RowiSum} \end{eqnarray} Similarly, for a given row $a$ and for all $j\in J$, there exists $ x_1,x_2\in [n]$ such that $a=2p+1+2j-x_1\md{n}$ and $a=2p+2+2j+x_2\md{n}$, implying $x_1+x_2+1=0\md{n}$, and so $x_1+x_2=n-1$. Consequently for all $j\in J$, \begin{eqnarray} d_{2p+2j+1}(r_a)+d_{2p+2j+2}(r_a)&=&(4p+\gamma-6-4j)n+1+2x_1-(4p+\gamma-4-4j)n+2x_2\nonumber\\ &=&-1.\label{RowjSum} \end{eqnarray} Finally for $ x_1,x_2\in [n]$, $a=2p-\alpha x_1\md{n}$ and $a=2p+\alpha+\alpha x_2\md{n}$ implies that $\alpha(x_1+x_2+1)=0\md{n}$. Since gcd$(\alpha,n)=1$, $x_1+x_2+1=0\md{n}$ and again $x_1+x_2=n-1$. Hence \begin{eqnarray} d_{2p}(r_a)+d_{2p+\alpha}(r_a)&=& -(4p+\gamma)n+2x_1+(4p+\gamma-2)n+1+2x_2\nonumber\\ &=&-1.\label{RowAlphaSum} \end{eqnarray} Therefore as required, the sum of the entries in row $a$ of $A$ is $(1\times p) +(-1\times(p-1))-1=0$. In column $a=0$ the sum of the entries is \begin{eqnarray}\label{col0sum} &&\displaystyle (d_{2p}(c_0)+d_{2p+\alpha}(c_0))+\sum_{i=0}^{p-1} \left(d_{2i}(c_0)+d_{2i+1}(c_0)\right)+\sum_{j=0}^{p-2} \left(d_{2p+2j+1}(c_0)+d_{2p+2j+2}(c_0)\right)\nonumber \\ &=&\displaystyle -(2n-1)+\sum_{i=0}^{p-1} (2n-1)+\sum_{j=0}^{p-2} -(2n-1)=0. \end{eqnarray} For a given column $a\neq 0$, there exists $ x_1,x_2\in [n]$ such that $a=-x_1\md{n}$ and $a=x_2\md{n}$ or equivalently $x_1=n-a$ and $x_2=a$. Thus for all $i\in I$ and for all $j\in J$ \begin{eqnarray} d_{2i}(c_a)+d_{2i+1}(c_a)&=&(\gamma+2)n+4in-2(n-a)-(\gamma n+4in+1)-2a=-1,\label{ColiSum}\\ d_{2p+2j+1}(c_a)+d_{2p+2j+2}(c_a)&=&(4p+\gamma-6-4j)n+1+2(n-a)-(4p+\gamma-4-4j)n+2a\nonumber\\ &=&1.\label{ColjSum} \end{eqnarray} Furthermore setting $a=-\alpha x_1\md{n}$ and $a=\alpha x_2\md{n}$ we see that $0=\alpha(x_1+x_2)\md{n}$. Now since gcd$(\alpha,n)=1$, $x_1+x_2=0\md{n}$ and so $x_1+x_2=n$.
Hence \begin{eqnarray}\label{csum-2p+alpha} d_{2p}(c_a)+d_{2p+\alpha}(c_a)= -(4p+\gamma)n+2x_1+(4p+\gamma-2)n+1+2x_2=1. \end{eqnarray} Hence, the sum of the entries in column $a\neq 0$ of $A$ is $(-1\times p)+(1\times(p-1))+1=0$. \subsection{Distinct partial sums for rows}\label{partrowsum} For a given row $a$ we will calculate $\Sigma(x)=\sum_{i=0}^x d_i(r_a)$, for each $x\in \mathbb{D}$, and show that $\Sigma(x_1)\neq \Sigma(x_2)\md{2(4p+\gamma)n+1}$ for each pair of distinct $x_1,x_2\in \mathbb{D}$ (Note that these sums cover the entries of the non-empty diagonals). Recall that Equations \eqref{RowiSum} and \eqref{RowjSum} give $d_{2i}(r_a)+d_{2i+1}(r_a)=1$ and $d_{2p+2j+1}(r_a)+d_{2p+2j+2}(r_a)=-1$, for all $i\in I$ and for all $j\in J$. Then using the definition of the array $A$ we may evaluate and determine bounds for $\Sigma(x)$ as follows. \begin{eqnarray*} \gamma n+1<& \Sigma(2i)=d_{2i}(r_a)+i&< (4p+\gamma-2)n+p,\\ 0<&\Sigma(2i+1)=i+1 &< p+1,\\ -(4p+\gamma)n<& \Sigma(2p)=d_{2p}(r_a)+p &< -(4p+\gamma-2)n+p-1,\\ \Sigma(2p)<&\Sigma(2p+2j+1)=d_{2p}(r_a)+p + d_{2p+2j+1}(r_a)-j&<0,\\ -(4p+\gamma)n<&\Sigma(2p+2j+2)=d_{2p}(r_a)+ p- (j+1)&<\Sigma(2p),\\ &\Sigma(2p+\alpha)= 0.& \end{eqnarray*} Also, for all $i^\prime\in I\setminus\{p-1\}$ and for all $j^\prime\in J\setminus\{p-2\}$, $$\Sigma(2(i^\prime+1))-\Sigma(2i^\prime)=d_{2i^\prime+2}(r_a)+i^\prime+1-d_{2i^\prime}(r_a)-i^\prime$$ $$=4n+1-2(2i^\prime+2-a\md{n})+2(2i^\prime-a\md{n})\geq 4n-3 \mbox{\rm \quad and}$$ $$\Sigma(2p+2(j^\prime+1)+1)-\Sigma(2p+2j^\prime+1)=d_{2p+2j^\prime+3}(r_a)-(j^\prime+1)-d_{2p+2j^\prime+1}(r_a)+j^\prime$$ $$=-4n-1+2(2p+2j^\prime+3-a\md{n})-2(2p+2j^\prime+1-a\md{n})\leq -4n+3.$$ Thus for $i\in I$ and $j\in J$, the function $f(i)=\Sigma(2i)$ is strictly increasing and the function $g(j)=\Sigma(2p+2j+1)$ is strictly decreasing.
Hence \begin{eqnarray} &&\Sigma(4p-2)<\Sigma(4p-4)<\dots<\Sigma(2p+2)<\Sigma(2p)<-(4p+\gamma-3)n, \nonumber\\ &&\Sigma(2p)<\Sigma(4p-3)<\Sigma(4p-5)<\dots<\Sigma(2p+1)<0,\label{Rowparsum4p}\\ &&0< \Sigma(1) <\Sigma(3)<\dots<\Sigma(2p-1)<p+1 <\gamma n < \Sigma(0)<\Sigma(2)<\dots<\Sigma(2p-2). \nonumber \end{eqnarray} Furthermore, for all $x\in \mathbb{D}$, the row partial sums $|\Sigma(x)|\leq (4p+\gamma)n$, and so by (\ref{mods}) $\Sigma(x)\equiv \Sigma(y)\md{2(4p+\gamma)n+1}$ if and only if $\Sigma(x)=\Sigma(y)$. Hence for all $i\in I$ and for all $j\in J$ the partial sums calculated on row $a$ are all distinct modulo $2(4p+\gamma)n+1$. \subsection{Distinct partial sums for non-zero columns}\label{partcolsum} Similarly to above, we calculate $\overline{\Sigma}(x)=\sum_{i=0}^x d_i(c_a)$, for each $ x\in \mathbb{D}$ and show that $\overline{\Sigma}(x_1)\neq \overline{\Sigma}(x_2)\md{2(4p+\gamma)n+1}$ for each pair of distinct $x_1,x_2\in \mathbb{D}$. Equations \eqref{ColiSum} and \eqref{ColjSum} imply that $d_{2i}(c_a)+d_{2i+1}(c_a)=-1$ and $d_{2p+2j+1}(c_a)+d_{2p+2j+2}(c_a)=1$, for all $i\in I$ and for all $j\in J$. Then using the definition of the array $A$ we may evaluate and determine bounds for $\overline{\Sigma}(x)$ as follows.
\begin{eqnarray*} \gamma n<&\overline{\Sigma}(2i)=d_{2i}(c_a)-i&<(4p+\gamma-2)n, \\ -p\leq& \overline{\Sigma}(2i+1)=-(i+1)&< 0,\\ -(4p+\gamma)n-p\leq& \overline{\Sigma}(2p)=d_{2p}(c_a)-p &\leq-(4p+\gamma-2)n-p-2,\\ -(4p-2)n<& \overline{\Sigma}(2p+2j+1)=&\\ &d_{2p}(c_a)-p+d_{2p+2j+1}(c_a)+j&<-2n-p,\\ \overline{\Sigma}(2p)<&\overline{\Sigma}(2p+2j+2)=d_{2p}(c_a)-p+(j+1)&<-(4p+\gamma-2)n,\\ &\overline{\Sigma}(2p+\alpha)= 0.& \end{eqnarray*} Furthermore, for all $i^\prime\in I\setminus\{p-1\}$ and for all $j^\prime\in J\setminus\{p-2\}$, $$\overline{\Sigma}(2(i^\prime+1))-\overline{\Sigma}(2i^\prime)=d_{2i^\prime+2}(c_a)-(i^\prime+1)-d_{2i^\prime}(c_a)+i^\prime= 4n-1\mbox{\rm \quad and}$$ $$\overline{\Sigma}(2p+2(j^\prime+1)+1)-\overline{\Sigma}(2p+2j^\prime+1)=d_{2p+2j^\prime+3}(c_a)+j^\prime+1-d_{2p+2j^\prime+1}(c_a)-j^\prime=-4n+1.$$ Thus for $i\in I$ and $j\in J$ the function $f(i)=\overline{\Sigma}(2i)$ is strictly increasing and $g(j)=\overline{\Sigma}(2p+2j+1)$ is strictly decreasing. Hence \begin{eqnarray} &&-(4p+\gamma+1)n<\overline{\Sigma}(2p)<\overline{\Sigma}(2p+2)<\dots<\overline{\Sigma}(4p-2)<\overline{\Sigma}(4p-3)\nonumber\\ &&\overline{\Sigma}(4p-3) <\overline{\Sigma}(4p-5)<\dots<\overline{\Sigma}(2p+1)<-n<\nonumber\\ &&\overline{\Sigma}(2p-1)<\dots<\overline{\Sigma}(3)<\overline{\Sigma}(1)<0<\gamma n < \overline{\Sigma}(0)<\overline{\Sigma}(2)<\dots<\overline{\Sigma}(2p-2).\label{Colparsum4p} \end{eqnarray} Thus for column $a\neq 0$ and each $x\in \mathbb{D}$, the partial sum $|\overline{\Sigma}(x)|< (4p+\gamma+1)n$. Further for $x\in \mathbb{D}\setminus F$, where $F=\{2p,2p+2,\dots,4p-2\}$, $|\overline{\Sigma}(x)|\leq(4p+\gamma-2)n<(4p+\gamma)n$. Thus for all $x,y\in \mathbb{D}\setminus F$, $\overline{\Sigma}(x)\equiv \overline{\Sigma}(y)\md{2(4p+\gamma)n+1}$ if and only if $\overline{\Sigma}(x)=\overline{\Sigma}(y)$ by (\ref{mods}). 
Furthermore, for all $x\in F$, $2(4p+\gamma)n+1+\overline{\Sigma}(x)>(4p+\gamma)n-p+1>(4p+\gamma-2)n>\overline{\Sigma}(y)$ for all $y\in \mathbb{D}$. Hence, in column $a\neq 0$ the partial sums calculated on diagonals $D_{x}$, $x\in {\mathbb D}$, are distinct modulo $2(4p+\gamma)n+1$. \subsection{Distinct partial sums for column zero}\label{partcolsum0} From Section \ref{definition}, for $i\in I$ and $j\in J$ the entries in column $0$ are \begin{eqnarray*} d_{2i}(c_0) &=& \gamma n+4in+2n,\\ d_{2i+1}(c_0) &=& -\gamma n-4in-1,\\ d_{2p}(c_0) &=& -4pn-\gamma n,\\ d_{2p+1+2j}(c_0) &=& 4pn+\gamma n-6n-4jn+1,\\ d_{2p+2+2j}(c_0) &=& -4pn-\gamma n+4n+4jn,\\ d_{2p+\alpha}(c_0) &=& 4pn+\gamma n-2n+1, \end{eqnarray*} and thus $d_{2i}(c_0)+d_{2i+1}(c_0)=2n-1$ and $d_{2p+2j+1}(c_0)+d_{2p+2j+2}(c_0)=-(2n-1)$. Thus, for $i\in I$ and $j\in J$, the partial sums may be calculated and bounded as follows. \begin{eqnarray*} \overline{\Sigma}(2i)&=&(\gamma+2+4i)n+(2n-1)i=(\gamma+2+6i)n-i\\ &\Rightarrow&0< \overline{\Sigma}(0)< \overline{\Sigma}(2)<\dots<\overline{\Sigma}(2p-2)<2(4p+\gamma)n+1,\\ \overline{\Sigma}(2i+1)&=&(2n-1)(i+1)=2(i+1)n-(i+1)\\ &\Rightarrow&0<2n-1=\overline{\Sigma}(1)<\overline{\Sigma}(3)<\dots <\overline{\Sigma}(2p-1)=(2n-1)p,\\ \overline{\Sigma}(2p)&=&(2n-1)p-(4p+\gamma)n=-(2p+\gamma)n-p\\ &\Rightarrow& \overline{\Sigma}(2p)<0, \\ \overline{\Sigma}(2p+2j+1)&=&(p-j)(2n-1)-(4p+\gamma)n +(4p+\gamma-6)n-4jn+1\\ &=&(2p-6j-6)n-(p-j)+1\\ &\Rightarrow& -(4p-6)n-1=\overline{\Sigma}(4p-3)<\dots< \overline{\Sigma}(2p+1) <2pn,\\ \overline{\Sigma}(2p+2j+2)&=& (p-j-1)(2n-1)-(4p+\gamma)n= -(2p+2j+\gamma+2)n-(p-j-1) \\ &\Rightarrow&-(4p+\gamma-2)n-1= \overline{\Sigma}(4p-2)<\dots<\overline{\Sigma}(2p+2)<\overline{\Sigma}(2p),\\ \overline{\Sigma}(2p+\alpha)&=&0. 
\end{eqnarray*} Note that for all $x\in \mathbb{D}$ and $y\in \mathbb{D}\setminus 2I$: \begin{eqnarray} && \overline{\Sigma}(x)=\beta n+\epsilon\mbox{\rm \ for some\ } \beta,\epsilon\in \mathbb{Z}\mbox{\ with\ }-\frac{n}{4}\leq-p\leq\epsilon\leq p\leq \frac{n}{4},\label{star4}\\ && |\overline{\Sigma}(x)|\leq 2(4p+\gamma)n+1, \label{star1}\\ && |\overline{\Sigma}(y)|\leq (4p+\gamma)n. \label{star3a} \end{eqnarray} We will proceed by checking a number of cases individually. In what follows we will make extensive use of (\ref{star4}) and (\ref{n}). For all $ i_1,i_2,i\in I$ and $ j_1,j_2,j\in J$: \begin{itemize} \item[1(i)] Suppose that $\overline{\Sigma}(2i_1)=\overline{\Sigma}(2i_2+1)\md{2(4p+\gamma )n+1}$. Then (\ref{modl}) and (\ref{star1}) imply $(2+\gamma+6i_1)n-i_1= 2(i_2+1)n-(i_2+1)$. Hence $i_1=i_2+1$ and $2+\gamma+6i_1=2i_2+2$. But then $\gamma=-4i_2-6$. This contradicts $\gamma>0$. \item[1(ii)] Suppose that $\overline{\Sigma}(2i)=\overline{\Sigma}(2p)\md{2(4p+\gamma )n+1}$. Then by (\ref{mod=}) \begin{eqnarray*} (2+\gamma+6i)n-i&=&2(4p+\gamma)n+1-(2p+\gamma)n-p\\ \Rightarrow \quad (2+\gamma+6i)n-i&=&(6p+\gamma)n+1-p. \end{eqnarray*} This implies $i=p-1$ and $2+6i=6p$, leading to the contradiction $-4=0.$ \item[1(iii)] Suppose that $\overline{\Sigma}(2i)=\overline{\Sigma}(2p+2j+1)\md{2(4p+\gamma )n+1}$. Then \begin{eqnarray*} (2+\gamma+6i)n-i&=& (2p-6j-6)n-(p-j)+1 \mbox{ or} \\ - 2(4p+\gamma)n-1+(2+\gamma+6i)n-i&=& (2p-6j-6)n-(p-j)+1. \end{eqnarray*} The former case implies $i=p-j-1$ and so $2+\gamma+6p-6j-6=2p-6j-6$ but then $\gamma=-4p-2$ which contradicts $\gamma\geq 0$. In the latter case we have $(-8p-\gamma+6i+2)n-(i+1)=(2p-6j-6)n-(p-j)+1$, which implies $p-2=i+j$ and so $\gamma=-10p+6(i+j)+8=-10p+6p-12+8=-4p-4$, a contradiction. \item[1(iv)] Suppose that $\overline{\Sigma}(2i)=\overline{\Sigma}(2p+2j+2)\md{2(4p+\gamma )n+1}$.
Then by (\ref{mod=}) \begin{eqnarray*} (2+\gamma+6i)n-i&=& 2(4p+\gamma)n+1 -(2p+2j+\gamma+2)n-(p-j-1)\\ \Rightarrow \quad (2+\gamma+6i)n-i&=&(6p+\gamma-2j-2)n-(p-j-2). \end{eqnarray*} This implies $i=p-j-2$ and $2+6i=6p-2j-2$ or equivalently $2+6(p-j-2)=6p-6j-2$ and so $-10=-2$, a contradiction. \\ \\ \noindent For the remaining cases we will use (\ref{mods}) together with (\ref{star3a}). \\ \item[2(i)] $\overline{\Sigma}(2i+1)>0$ and $\overline{\Sigma}(2p),\overline{\Sigma}(2p+2j+2)<0$ and so we have $\overline{\Sigma}(2i+1)\not\equiv\overline{\Sigma}(2p)$ $\md{2(4p+\gamma )n+1}$ and $\overline{\Sigma}(2i+1)\not\equiv \overline{\Sigma}(2p+2j+2)\md{2(4p+\gamma )n+1}$. \item[2(ii)] Suppose that $\overline{\Sigma}(2i+1)\equiv \overline{\Sigma}(2p+2j+1)\md{2(4p+\gamma )n+1}$. Then we have $(2p-6j-6)n-(p-j)+1=(2i+2)n-(i+1)$. So $p=j+i+2$ and $2p-6j-2i-8=0$ which implies $2(i+j+2)-6j-2i-8=0$ and then $j=-1$, a contradiction. \item[3(i)] Suppose that $\overline{\Sigma}(2p)\equiv \overline{\Sigma}(2p+2j+1)\md{2(4p+\gamma )n+1}$. Then $-(2p+\gamma)n-p= (2p-6j-6)n-(p-j)+1$. So $-p=-p+j+1$ but then $j=-1$. \item[3(ii)] Suppose that $\overline{\Sigma}(2p)\equiv \overline{\Sigma}(2p+2j+2)\md{2(4p+\gamma )n+1}$. Then $-(2p+\gamma)n-p= -(2p+2j+\gamma+2)n-(p-j-1)$. So $p=p-j-1$ but then $j=-1$. \item[4(i)] Suppose that $\overline{\Sigma}(2p+2j_1+1)\equiv \overline{\Sigma}(2p+2j_2+2)\md{2(4p+\gamma )n+1}$. Then $(2p-6j_1-6)n-(p-j_1)+1=-(2p+2j_2+\gamma+2)n-(p-j_2-1)$. So $-(p-j_1)+1=-p+j_2+1$ and $4p-6j_1+2j_2-4+\gamma=0$. Then $j_1=j_2$ and $\gamma=-4(p-j_1-1)<0$. \item[4(ii)] Suppose that $\overline{\Sigma}(2p+2j+1)\equiv 0\md{2(4p+\gamma)n+1}$ then $-p+j+1=0$ which implies $j=p-1>p-2$, a contradiction. \end{itemize} Hence for column $0$ all the partial sums are distinct modulo $2(4p+\gamma)n+1$. This proves Theorem \ref{main}. 
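As an illustrative numerical sanity check (not part of the proof), the column-$0$ entry formulas stated above can be evaluated directly and the resulting partial sums confirmed to be pairwise distinct modulo $2(4p+\gamma)n+1$. The helper below is ours; it assumes $I=\{0,\dots,p-1\}$, $J=\{0,\dots,p-2\}$, and parameters satisfying the standing hypotheses (e.g.\ $\gamma=3$, $n\geq 4p+3$, $2p+2\leq\alpha$).

```python
# Sanity check of Section "Distinct partial sums for column zero":
# build the entries d_x(c_0) from the stated formulas and verify that the
# partial sums are pairwise distinct modulo 2(4p+gamma)n+1.
def column0_entries(n, p, gamma, alpha):
    """Entries d_x(c_0), keyed by diagonal index x, with I = {0,...,p-1},
    J = {0,...,p-2} and the extra diagonal 2p+alpha."""
    g = gamma * n
    d = {}
    for i in range(p):                      # i in I
        d[2 * i] = g + 4 * i * n + 2 * n
        d[2 * i + 1] = -g - 4 * i * n - 1
    d[2 * p] = -4 * p * n - g
    for j in range(p - 1):                  # j in J
        d[2 * p + 1 + 2 * j] = 4 * p * n + g - 6 * n - 4 * j * n + 1
        d[2 * p + 2 + 2 * j] = -4 * p * n - g + 4 * n + 4 * j * n
    d[2 * p + alpha] = 4 * p * n + g - 2 * n + 1
    return d

def column0_partial_sums_distinct(n, p, gamma, alpha):
    d = column0_entries(n, p, gamma, alpha)
    m = 2 * (4 * p + gamma) * n + 1
    partial, total = [], 0
    for x in sorted(d):                     # alpha >= 2p+2, so 2p+alpha is last
        total += d[x]
        partial.append(total % m)
    return total == 0 and len(set(partial)) == len(partial)
```

For instance, $(n,p,\gamma,\alpha)=(17,3,3,8)$ yields $d_0(c_0)=85$, $d_1(c_0)=-52$ and $d_2(c_0)=153$, matching column $0$ of the $H(17;15)$ example in the next section.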
\section{Globally simple Heffter arrays $H(n;4p+3)$} In this section we will merge a Heffter array $H(n;3)$ with the support shifted Heffter array $H(n;4p,3)$ given by Theorem \ref{main} to obtain a globally simple Heffter array $H(n;4p+3)$. First we need a suitable $H(n;3)$. \begin{theorem} {\rm \cite{ADDY}} Let $n\equiv 0,1\md{4}$. Then there exists a Heffter array $H(n;3)$, denoted by $L'$, that has the following properties: non-empty cells are only on diagonals $D_0$, $D_{n-1}$ and $D_{1}$; $s_{L'}(D_0)=\{1,\dots,n\}$; entries of $L'$ on $D_{n-1}$ are all positive and entries of $L'$ on $D_{1}$ are all negative. \label{addict} \end{theorem} \begin{theorem} \label{Ladder} Let $n\equiv 1\md{4}$. Then for each $0\leq \beta\leq n-5$ there exists a Heffter array $H(n;3)$, denoted by $L$, with the following properties: \begin{itemize} \item The non-empty cells are exactly on the diagonals $D_{\beta}$, $D_{\beta+2}$ and $D_{\beta+4}$, \item $s(D_{\beta+2})=\{1,\dots,n\}$, \item $s(D_\beta\cup D_{\beta+4})=\{n+1,\dots,3n\}$, \item entries on $D_{\beta}$ are all positive, \item entries on $D_{\beta+4}$ are all negative, \item the array defined by $M=[M(i,j)]$ where $M(i,j)=L(i+1,j+1)$, $i,j\in [n]$, retains the above properties. \end{itemize} \end{theorem} \begin{proof} Let $L'$ be a Heffter array $H(n;3)$ with the properties from Theorem \ref{addict} where $n\equiv 1\md{4}$. Now define $L(2(i+1)+\beta,2j)=L'(i,j)$ for all $i,j\in [n]$ where operations on coordinates are taken modulo $n$. As $n$ is odd, for any given $a,b\in [n]$ there exist unique $i,j\in [n]$ such that $2(i+1)+\beta\equiv a\md{n}$ and $2j\equiv b\md{n}$. Hence we may obtain $L$ by applying row and column permutations to $L'$. Therefore $L$ is also a Heffter array $H(n;3)$. Furthermore, the entries on $D_\beta$ of $L$ are exactly the entries on $D_{n-1}$ of $L'$; consequently entries on $D_\beta$ of $L$ are all positive.
Also the set of entries on $D_{\beta+2}$ of $L$ is exactly the set of entries on $D_{0}$ of $L'$; consequently $s_{L}(D_{\beta+2})=\{1,\dots,n\}$. Similarly the set of entries on $D_{\beta+4}$ of $L$ is exactly the set of entries on $D_{1}$ of $L'$; consequently entries on $D_{\beta+4}$ of $L$ are all negative. Finally, it is clear that the array $M$ retains the above properties, since each diagonal retains the same set of symbols under this transformation. \end{proof} \begin{figure} \begin{footnotesize} \begin{tabular}{cc} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline -8 &18 & & & & & & &-10\\\hline -19&-7 &26 & & & & & & \\\hline &-11&-6 &17 & & & & &\\\hline & &-20&-5 &25 & & & &\\\hline & & &-12&-9 &21 & & &\\\hline & & & &-16&3 &13 & &\\\hline & & & & &-24&2 &22&\\\hline & & & & & &-15&1 &14\\\hline 27 & & & & & & &-23&-4\\ \hline \end{tabular} & \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline & & & &-20& &-5 & &25\\\hline 27 & & & & &-23& &-4& \\\hline &21 & & & & &-12& &-9\\\hline -8 & &18 & & & & & -10 &\\\hline &3 & &13 & & & & &-16\\\hline -19& &-7 & &26 & & & &\\\hline &-24& &2 & &22 & & &\\\hline & &-11& &-6 & &17 & & \\\hline & & &-15& &1 & &14&\\\hline \end{tabular} \end{tabular} \caption{The arrays $L'$ (left) and $L$ (right) from Theorem \ref{Ladder} with $n=9$ and $\beta=1$.} \end{footnotesize} \end{figure} By similar reasoning to the previous theorem, we also have the following. \begin{corollary} \label{Ladder2} Let $n\equiv 0\md{4}$ and gcd$(n,\epsilon)=1$.
Then for each $0\leq \beta\leq n-2\epsilon-1$ there exists a Heffter array $H(n;3)$, denoted by $L$, with the following properties: \begin{itemize} \item The non-empty cells are exactly on the diagonals $D_{\beta}$, $D_{\beta+\epsilon}$ and $D_{\beta+2\epsilon}$, \item $s(D_{\beta+\epsilon})=\{1,\dots,n\}$ and $s(D_\beta\cup D_{\beta+2\epsilon})=\{n+1,\dots,3n\}$, \item entries on $D_{\beta}$ are all positive, \item entries on $D_{\beta+2\epsilon}$ are all negative, \item the array defined by $M=[M(i,j)]$ where $M(i,j)=L(i+1,j+1)$, $i,j\in [n]$, retains the above properties. \end{itemize} \end{corollary} \subsection{Globally simple $H(n;4p+3)$ when $n\equiv 1\md{4}$} \begin{theorem}\label{4p+3} Let $n\equiv 1\md{4}$, $p>0$ and $n\geq 4p+3$. Let $\alpha$ be an integer such that $2p+2\leq \alpha\leq n-2-2p$ and $\mbox{gcd}(n,\alpha)=1$. Let $\beta=2p+\alpha-3$ and let $L$ be a Heffter array $H(n;3)$ based on $\beta$ satisfying the properties of Theorem $\ref{Ladder}$. Then the union of arrays $L$ and the support shifted Heffter array $H(n;4p,3)$ (given by Theorem $\ref{main}$) is a globally simple Heffter array $H(n;4p+3)$ where the entries are on the set of diagonals $D_i$ such that $i$ is in ${\mathbb D}=\{0,1,\dots ,4p-2,2p+\alpha\}\cup \{2p+\alpha-3,2p+\alpha-1,2p+\alpha+1\}$. \end{theorem} \begin{proof} First we will assume that there exists $\alpha$ coprime to $n$ such that $2p+2\leq \alpha\leq n-2-2p$. Next, construct the array $A$ as in Theorem \ref{main} with $\gamma=3$. Then we will merge this array with $L$ as constructed in Theorem \ref{Ladder} with $\beta=2p+\alpha-3$ to get a Heffter array $H(n;4p+3)$ that will be globally simple, which we denote by $B$. Note that since $n\equiv 1\md{4}$, such an $L$ exists.
Define $B(i,j)= A(i,j)$ if $i-j\not\in\{2p+\alpha-3,2p+\alpha-1,2p+\alpha+1\},$ $B(i,j)= L(i,j)$ if $i-j\in\{2p+\alpha-3,2p+\alpha-1,2p+\alpha+1\}.$ Hence we are positioning diagonals $D_{2p+\alpha-3}$, $D_{2p+\alpha-1}$ and $D_{2p+\alpha+1}$ of $L$ to the empty diagonals $D_{2p+\alpha-3}$, $D_{2p+\alpha-1}$ and $D_{2p+\alpha+1}$ of $A$. \begin{example} The array $B$ (a Heffter array $H(17;15)$) when $n=17$, $p=3$ and $\alpha=8$. \begin{scriptsize} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline 85&&\textbf{-33}&244&\textbf{13}&&\textbf{20}&-105&104&-169&168&-245&-212&213&-148&149&-84\\ \hline -52&53&&\textbf{-24}&240&\textbf{-4}&&\textbf{28}&-103&102&-167&166&-241&-214&215&-150&151\\ \hline 153&-54&55&&\textbf{-49}&236&\textbf{12}&&\textbf{37}&-101&100&-165&164&-237&-216&217&-152\\ \hline -120&121&-56&57&&\textbf{-41}&232&\textbf{-3}&&\textbf{44}&-99&98&-163&162&-233&-218&219\\ \hline 221&-122&123&-58&59&&\textbf{-32}&228&\textbf{11}&&\textbf{21}&-97&96&-161&160&-229&-220\\ \hline -188&189&-124&125&-60&61&&\textbf{-25}&224&\textbf{-2}&&\textbf{27}&-95&94&-159&158&-225\\ \hline -255&-190&191&-126&127&-62&63&&\textbf{-48}&254&\textbf{10}&&\textbf{38}&-93&92&-157&156\\ \hline 154&-251&-192&193&-128&129&-64&65&&\textbf{-42}&250&\textbf{-1}&&\textbf{43}&-91&90&-155\\ \hline -187&186&-247&-194&195&-130&131&-66&67&&\textbf{-31}&246&\textbf{9}&&\textbf{22}&-89&88\\ \hline 86&-185&184&-243&-196&197&-132&133&-68&69&&\textbf{-26}&242&\textbf{8}&&\textbf{26}&-87\\ \hline -119&118&-183&182&-239&-198&199&-134&135&-70&71&&\textbf{-47}&238&\textbf{17}&&\textbf{30}\\ \hline \textbf{35}&-117&116&-181&180&-235&-200&201&-136&137&-72&73&&\textbf{-51}&234&\textbf{16}&\\ \hline &\textbf{46}&-115&114&-179&178&-231&-202&203&-138&139&-74&75&&\textbf{-39}&230&\textbf{-7}\\ \hline \textbf{15}&&\textbf{19}&-113&112&-177&176&-227&-204&205&-140&141&-76&77&&\textbf{-34}&226\\ \hline 
222&\textbf{-6}&&\textbf{29}&-111&110&-175&174&-223&-206&207&-142&143&-78&79&&\textbf{-23}\\ \hline \textbf{-50}&252&\textbf{14}&&\textbf{36}&-109&108&-173&172&-253&-208&209&-144&145&-80&81&\\ \hline &\textbf{-40}&248&\textbf{-5}&&\textbf{45}&-107&106&-171&170&-249&-210&211&-146&147&-82&83\\ \hline \end{tabular} \end{center} \end{scriptsize} \end{example} We know that $s(A)=\{\gamma n+1,\dots, (4p+\gamma)n\}$, $s(L)=\{1,\dots,3n\}$ and $\gamma=3$ so $s(B)=\{1,\dots,(4p+3)n\}$. Also as row and column sums of both $A$ and $L$ are $0$, it is easy to see that the row and column sums of $B$ are $0$. Now we just need to show that row partial sums and column partial sums of $B$ are distinct. We will use the notation $\Sigma_A(x)$ and $\overline{\Sigma}_A(x)$ to denote the partial sum in the array $A$ as given in the previous section, and $\Sigma_{B}(x)$ and $\overline{\Sigma}_{B}(x)$ to denote the partial sum in the array $B$ as constructed here. Firstly, $\Sigma_B(i)=\Sigma_A(i)$ and $\overline{\Sigma}_B(i)=\overline{\Sigma}_A(i)$ for all $0\leq i\leq 4p-2$, for all rows and columns of $B$, so row partial sums and column partial sums are distinct modulo $2(4p+3)n+1$ from diagonal $0$ to $4p-2$. Consider row $a$. First note that $L(a,a-2p-\alpha+3)+L(a,a-2p-\alpha+1)+L(a,a-2p-\alpha-1)=0$, $L(a,a-2p-\alpha+3)>0$ and $L(a,a-2p-\alpha-1)<0$. It was shown in Section \ref{partrowsum} that $\Sigma_B(4p-2)=d_{2p}(r_a)+1$. Hence we have \begin{eqnarray} \Sigma_B(2p+\alpha-3)&=&d_{2p}(r_a)+1+L(a,a-2p-\alpha+3)<0,\nonumber\\ \Sigma_B(2p+\alpha-1)&=&d_{2p}(r_a)+1-L(a,a-2p-\alpha-1)<0,\nonumber\\ \Sigma_B(2p+\alpha)&=&-L(a,a-2p-\alpha-1)>0,\nonumber\\ \Sigma_B(2p+\alpha+1)&=&0.\nonumber \end{eqnarray} By Theorem \ref{Ladder}, it follows that \begin{eqnarray} n+1\leq\Sigma_B(2p+\alpha)\leq 3n \label{RangeL}\end{eqnarray} so by the inequality (\ref{Rowparsum4p}), $\Sigma_B(2p+\alpha)\neq \Sigma_B(i)$ for each $0\leq i\leq 4p-2$.
Now, from the definition of the array $A$, any entry in diagonal $D_{4p-3}$ is greater than $5n$. Thus, from Section \ref{partrowsum}, $\Sigma_B(4p-3)=d_{2p}(r_a)+d_{4p-3}(r_a)+2>d_{2p}(r_a)+5n$. So, \begin{eqnarray*} \Sigma_B(2p) & = & d_{2p}(r_a)+p <d_{2p}(r_a)+n+2 \\ & \leq & \Sigma_B(2p+\alpha-3), \Sigma_B(2p+\alpha-1)\leq d_{2p}(r_a)+1+3n< \Sigma_B(4p-3) \end{eqnarray*} so by inequality (\ref{Rowparsum4p}) $\Sigma_B(2p+\alpha-3)\neq \Sigma_B(i)$ and $\Sigma_B(2p+\alpha-1)\neq \Sigma_B(i)$ for all $0\leq i\leq 4p-2$. Next consider column $a\neq 0$. We have $L(2p+a+\alpha-3,a)+L(2p+a+\alpha-1,a)+L(2p+a+\alpha+1,a)=0$, with the first of these terms positive and the final term negative. Also it was shown in Section \ref{partcolsum} that $\overline{\Sigma}_B(4p-2)=d_{2p}(c_a)-1$. So \begin{eqnarray} \overline{\Sigma}_B(2p+\alpha-3)&=&d_{2p}(c_a)-1+L(2p+a+\alpha-3,a)<0,\nonumber\\ \overline{\Sigma}_B(2p+\alpha-1)&=&d_{2p}(c_a)-1-L(2p+a+\alpha+1,a)<0,\nonumber\\ \overline{\Sigma}_B(2p+\alpha)&=&-L(2p+a+\alpha+1,a)>0,\nonumber\\ \overline{\Sigma}_B(2p+\alpha+1)&=&0.\nonumber \end{eqnarray} Similarly to (\ref{RangeL}), \begin{eqnarray} n+1\leq\overline{\Sigma}_B(2p+\alpha)\leq 3n. \label{RangeLc}\end{eqnarray} Thus for each column $a\neq 0$ and $0\leq i\leq 4p-2$ we have $\overline{\Sigma}_B(2p+\alpha)\neq \overline{\Sigma}_B(i)$ by inequalities (\ref{RangeLc}) and (\ref{Colparsum4p}). Also \begin{eqnarray*} \overline{\Sigma}_B(4p-2) & = & d_{2p}(c_a)-1 <d_{2p}(c_a)+n \\ & \leq & \overline{\Sigma}_B(2p+\alpha-3), \overline{\Sigma}_B(2p+\alpha-1)\leq d_{2p}(c_a)-1+3n<\overline{\Sigma}_B(4p-3) \end{eqnarray*} so by inequality (\ref{Colparsum4p}) $\overline{\Sigma}_B(2p+\alpha-3)\neq \overline{\Sigma}_B(i)$ and $\overline{\Sigma}_B(2p+\alpha-1)\neq \overline{\Sigma}_B(i)$ for all $0\leq i\leq 4p-2$. Finally consider column $0$.
By Theorem \ref{Ladder}, the Heffter array $L$ may be replaced by the array $M=[M(i,j)]$ where $M(i,j)=L(i+1,j+1)$, $i,j\in [n]$, while retaining the properties we have so far required. In effect, we may thus apply this transformation to the diagonals $D_{2p+\alpha-3}$, $D_{2p+\alpha-1}$ and $D_{2p+\alpha+1}$ without changing the rest of the array, and without changing the validity of the above arguments. Since $n\geq 9$, we may thus assume that $\{L(2p+\alpha-3,0),-L(2p+\alpha+1,0)\}\cap\{2n-1,2n-(2p+1)/3\}=\emptyset$. By Section \ref{partcolsum0} we have: \begin{eqnarray*} \overline{\Sigma}_B(2i)&=&(5+6i)n-i,\nonumber\\ \overline{\Sigma}_B(2i+1)&=&(2n-1)(i+1)=2(i+1)n-(i+1),\nonumber\\ \overline{\Sigma}_B(2p)&=&-(2p+3)n-p<0,\nonumber\\ \overline{\Sigma}_B(2p+2j+1)&=&(2p-6j-6)n-(p-j)+1,\nonumber\\ \overline{\Sigma}_B(2p+2j+2)&=& -(2p+2j+5)n-(p-j-1)<0, \nonumber\\ \overline{\Sigma}_B(2p+\alpha-3)&=&-(4p+1)n-1+L(2p+\alpha-3,0)<0,\nonumber\\ \overline{\Sigma}_B(2p+\alpha-1)&=&-(4p+1)n-1-L(2p+\alpha+1,0)<0,\nonumber\\ \overline{\Sigma}_B(2p+\alpha)&=&-L(2p+\alpha+1,0)>0,\nonumber\\ \overline{\Sigma}_B(2p+\alpha+1)&=&0\nonumber. \end{eqnarray*} \begin{itemize} \item[1(i)] $\overline{\Sigma}_B(2i)=(5+6i)n-i\geq 5n$ so by (\ref{star1}), (\ref{RangeLc}) and (\ref{modl}), $\overline{\Sigma}_B(2p+\alpha)\neq \overline{\Sigma}_B(2i)$ for all $i\in I$. \item[1(ii)] $\overline{\Sigma}_B(2i+1)=(2n-1)(i+1)$ then $\overline{\Sigma}_B(2p+\alpha)\neq \overline{\Sigma}_B(2i+1)\md{2(4p+3)n+1}$ by (\ref{star3a}), (\ref{RangeLc}) and (\ref{mods}) as $-L(2p+\alpha+1,0)\neq 2n-1$. \item[1(iii)] $\overline{\Sigma}(2p),\overline{\Sigma}(2p+2j+2)<0$ hence $\overline{\Sigma}(2p+\alpha)\neq \overline{\Sigma}(2p)\md{2(4p+3)n+1}$, $\overline{\Sigma}(2p+\alpha)\neq \overline{\Sigma}(2p+2j+2)\md{2(4p+3)n+1}$ for all $j\in J$ by (\ref{star3a}), (\ref{RangeLc}) and (\ref{mods}). \item[1(iv)] Suppose that $\overline{\Sigma}(2p+2j+1)=\overline{\Sigma}(2p+\alpha)\md{2(4p+3)n+1}$.
Then by (\ref{mods}) and (\ref{star3a}) $$(2p-6j-6)n-(p-j)+1=-L(2p+\alpha+1,0).$$ Now by (\ref{n}), (\ref{star4}) and (\ref{RangeLc}), $2p-6j-6=2$ which implies $p-4=3j$, hence $j=(p-4)/3$ and $-L(2p+\alpha+1,0)=2n-(2p+1)/3$, contradicting our assumption on $L$. \item[2(i)] Suppose that $\overline{\Sigma}(2i)=\overline{\Sigma}(2p+\alpha-3)\md{2(4p+3)n+1}$ for some $i\in I$. Then $\overline{\Sigma}(2i)=(5+6i)n-i=2(4p+3)n+1 -(4p+1)n-1+L(2p+\alpha-3,0)=2(4p+3)n+1+\overline{\Sigma}_B(2p+\alpha-3)$ hence $ -(4p-6i)n-i=L(2p+\alpha-3,0)$ which implies $-(4p-6i)=2$ and so $L(2p+\alpha-3,0)=2n-(2p+1)/3$, a contradiction. \item[2(ii)] $\overline{\Sigma}(2i+1)>0$ for all $i\in [p]$ so $\overline{\Sigma}(2p+\alpha-1)\neq \overline{\Sigma}(2i+1)\md{2(4p+3)n+1}$ and $\overline{\Sigma}(2p+\alpha-3)\neq \overline{\Sigma}(2i+1)\md{2(4p+3)n+1}$. \item[2(iii)] Suppose that $\overline{\Sigma}(2p)\equiv\overline{\Sigma}(2p+\alpha-3)\md{2(4p+3)n+1}$ then $-(4p+1)n-1+L(2p+\alpha-3,0)=-(2p+3)n-p$ so $L(2p+\alpha-3,0)=(2p-2)n-p+1$. Then $2p-2=2$ hence $p=2$ and $L(2p+\alpha-3,0)=2n-1$, a contradiction. \item[2(iv)] Suppose that $\overline{\Sigma}(2p+2j+1)\equiv\overline{\Sigma}(2p+\alpha-3)\md{2(4p+3)n+1}$ for some $j\in J$. Then by (\ref{star1}) and (\ref{star3a}), \begin{eqnarray*} (2p-6j-6)n-(p-j)+1&=&-(4p+1)n-1+L(2p+\alpha-3,0)\\ \Rightarrow \quad (6p-6j-5)n&=&p-j-2+L(2p+\alpha-3,0)\leq (p-2)+3n<4n \end{eqnarray*} and also by (\ref{RangeLc}) $n+1\leq p-j-2+L(2p+\alpha-3,0)$ so $1< 6p-6j-5<4$. Hence $6< 6(p-j)<9$, a contradiction. \item[2(v)] Suppose that $\overline{\Sigma}(2p+2j+2)\equiv \overline{\Sigma}(2p+\alpha-3)\md{2(4p+3)n+1}$ for some $j\in J$. Then \begin{eqnarray*} -(2p+2j+5)n-(p-j-1)&=&-(4p+1)n-1+L(2p+\alpha-3,0)\\ \Rightarrow \quad (2p-2j-4)n&=&p-j-2+L(2p+\alpha-3,0)\leq (p-2)+3n<4n \end{eqnarray*} and also $n+1\leq p-j-2+L(2p+\alpha-3,0)$ so $1< 2(p-j-2)<4$. Hence $1=p-j-2$ then $j=p-3$ which implies $L(2p+\alpha-3,0)=2n-1$, a contradiction. \end{itemize} Note that all parts of item 2 can be similarly verified for $\overline{\Sigma}(2p+\alpha-1)$.
This proves Theorem \ref{4p+3}. \end{proof} Finally, to prove Theorem \ref{main2}, it remains to choose an $\alpha$ that satisfies the conditions of Theorem \ref{4p+3}. Let $n=4h+1$ and $\alpha=2h$; then gcd$(n,\alpha)=1$. Now since $n\geq 4p+3$, we have $h\geq p+1$, so $2p+2\leq \alpha=2h\leq n-2h\leq n-2-2p$ and we are done. \subsection{Globally simple $H(n;4p+3)$ when $n\equiv 0\md{4}$} Finally, it remains to prove Theorem \ref{main3}. Using Theorem \ref{Ladder2}, we can construct a suitable Heffter array $H(n;3)$ which merges with the support shifted Heffter array $H(n;4p,3)$ from Theorem \ref{main}, similarly to Theorem \ref{4p+3}. In this process the diagonals of the Heffter array $H(n;3)$ become diagonals $D_{2p+\alpha-\epsilon-1}$, $D_{2p+\alpha-1}$ and $D_{2p+\alpha+\epsilon-1}$ in the Heffter array $H(n;4p+3)$. Then, so long as $2p+\alpha-\epsilon-1> 4p-2$ and $2p+\alpha+\epsilon-1<n$, the partial sums will have all the same properties as in the $n\equiv 1\md{4}$ construction. We thus have the following theorem. \begin{theorem} Let $n\equiv 0\md{4}$, $p\geq 0$ and $n\geq 4p+3$. Suppose $\epsilon\leq (n-4p)/2$ is coprime to $n$. If there exists an integer $\alpha$ coprime to $n$ such that $2p+\epsilon \leq \alpha\leq n-\epsilon-2p$, then there exists a globally simple Heffter array $H(n;4p+3)$. \end{theorem} For example, if $n\equiv 0\md{4}$ but $n\not\equiv0\md{12}$ and $n\geq 4p+8$, then choosing $\epsilon=3$ and $\alpha=n/2-1$ yields a globally simple $H(n;4p+3)$. The {\em Jacobsthal function} $j(n)$ is defined to be the smallest $m$ such that every sequence of $m$ consecutive integers contains an integer coprime to $n$. It was shown in \cite{Iw} that $j(n)=O(\log^2{n})$. Thus for $n-4p$ sufficiently large we can choose $\epsilon=O(\log^2{n})$ and $\alpha=2p+O(\log^2{n})$ which are each coprime to $n$ and satisfy the inequalities of the above theorem. This proves Theorem \ref{main3}.
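The existence condition in the last theorem is easy to check computationally. The following search helper is ours and purely illustrative; it simply scans for a pair $(\epsilon,\alpha)$ satisfying the coprimality and range conditions of the theorem.

```python
from math import gcd

def find_eps_alpha(n, p):
    """Return a pair (eps, alpha) with gcd(n, eps) = gcd(n, alpha) = 1,
    eps <= (n - 4p)/2 and 2p + eps <= alpha <= n - eps - 2p,
    or None if no such pair exists (illustrative helper, not from the paper)."""
    for eps in range(1, (n - 4 * p) // 2 + 1):
        if gcd(n, eps) != 1:
            continue
        for alpha in range(2 * p + eps, n - eps - 2 * p + 1):
            if gcd(n, alpha) == 1:
                return eps, alpha
    return None
```

For instance, `find_eps_alpha(20, 3)` returns $(\epsilon,\alpha)=(1,7)$; the choice $(\epsilon,\alpha)=(3,n/2-1)$ suggested in the text is also admissible for $n=20$, $p=3$.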
\section{Conclusion and Future Work} As shown in \cite{ADDY} and \cite{DW}, an integer Heffter array $H(n;k)$ exists if and only if $nk\equiv 0,3 \md{4}$. In this paper we have shown the existence of an integer Heffter array $H(n;k)$ which is globally simple whenever (a) $k\equiv 0\md{4}$; (b) $n\equiv 1\md{4}$ and $k\equiv 3\md{4}$; or (c) $n\equiv 0\md{4}$, $k\equiv 3\md{4}$ and $n\gg k$. In future work we will show that in most cases (in particular when $n$ is prime), the array $H(n;4p+3)$ given in Section 4 has an ordering which is both simple and compatible. As discussed in the introduction, this will yield biembeddings of cycle systems on orientable surfaces. We will also give lower bounds on the number of such non-isomorphic biembeddings. \vspace{5mm} \noindent{\bf Acknowledgment:} The fourth author would like to acknowledge support from TUBITAK 2219 and the School of Mathematics and Physics, The University of Queensland, through the awarding of an Ethel Raybould Visiting Fellowship.
\section{Introduction} Structural optimization is a research field developing concepts for the design of efficient structures. It was pioneered by \citet{Michell_1904}, who showed that minimum-weight truss structures under a~single load case are fully-stressed, and the optimal trajectories of their bars align with the principal stress directions. Hence, optimal designs can contain an infinite number of bars in general. This drawback, which hinders their manufacturability, was overcome in the work of \citet{Dorn1964} by introducing the ground structure approach, effectively discretizing the continuum into a~finite set of potential nodes and their interconnections of finite elements. Because the dimensionality of these potential elements is lower than that of the continuum and the presence and sizing of each element are investigated, this setting is referred to as discrete topology optimization. \begin{figure*}[!t] \begin{subfigure}{0.25\linewidth} \begin{tikzpicture} \scaling{2} \point{a}{0.000000}{0.000000} \notation{1}{a}{\circled{$1$}}[below right=0mm] \point{b}{1.000000}{0.500000} \notation{1}{b}{\circled{$2$}}[above right=0mm] \point{c}{0.000000}{1.000000} \notation{1}{c}{\circled{$3$}}[above right=0mm] \beam{2}{a}{b} \notation{4}{a}{b}[$1$] \beam{2}{b}{c} \notation{4}{b}{c}[$2$] \support{3}{a}[270] \support{3}{c}[270] \point{d1}{0.000000}{-0.250000} \point{d2}{1.000000}{-0.250000} \dimensioning{1}{d1}{d2}{-1.000000}[$1.0$] \point{e1}{1.250000}{0.000000} \point{e2}{1.250000}{0.500000} \dimensioning{2}{e1}{e2}{-0.75}[$0.5$] \point{e3}{1.250000}{1.000000} \dimensioning{2}{e2}{e3}{-0.75}[$0.5$] \load{1}{b}[0][1.0][0.0] \notation{1}{b}{$1.6$}[right=9mm] \load{1}{b}[90][-0.625][0.0] \notation{1}{b}{$1.0$}[below=5.5mm] \end{tikzpicture} \caption{} \end{subfigure}% \hfill\begin{subfigure}{0.22\linewidth} \includegraphics[width=\linewidth]{include/motivation2_feasibleset} \caption{} \end{subfigure}% \hfill\begin{subfigure}{0.22\linewidth} 
\includegraphics[width=\linewidth]{include/motivation2_relaxation1} \caption{} \end{subfigure}% \hfill\begin{subfigure}{0.22\linewidth} \includegraphics[width=\linewidth]{include/motivation2_relaxation2} \caption{} \end{subfigure} \caption{(a) Boundary conditions for the motivating problem, (b) the sublevel set $c \le 5$ of its feasible design space, and its (c) first and (d) second convex outer approximations constructed by the moment-sum-of-squares approach. Variables $a_1$ and $a_2$ stand for the cross-section areas of the two elements and $c$ denotes the corresponding compliance (assuming moments of inertia $I_i = a_i^2$, $i \in \{1,2\}$).} \label{fig:motivation} \end{figure*} Tremendous progress has been made for the case of trusses. While \citet{Dorn1964} developed a linear programming formulation for the single-load-case plastic design, \citet{BENDSOE_1991} and \citet{Achtziger_1992} introduced a convex displacement-based elastic-design quadratic program that additionally allowed for multiple load cases. Its dual, which incorporates the cross-section variables explicitly, was shown by \citet{Lobo_1998} and \citet{Ben_Tal_2001} to be a second-order conic program. This latter formulation handles multiple load cases with stress constraints efficiently \citep{Tyburec_2020}. Convexity prevails even for fundamental free-vibration constraints, in which case semidefinite programming can be used \citep{Achtziger_2008, Tyburec_2019}. A completely different situation holds for bending-resistant structures with continuous design variables. To the best of our knowledge, no convex formulation has been established so far, and, therefore, solely local optimization techniques have been used. Among these, \citet{Saka_1980} developed a sequential linear programming approach to design minimum-weight framed structures, and \citet{Wang_2006} improved its solution efficiency by using sequential quadratic programming instead.
Another relaxation-based sequential semidefinite programming method was proposed by \citet{Yamada_2015} to deal with vibration problems. Nonlinear programming \citep{Fredricson_2003}, Optimality Criteria (OC) \citep{Khan_1984,Chan_1995}, the Method of Moving Asymptotes (MMA) \citep{Svanberg1987,Fredricson_2005}, and meta-heuristics \citep{An_2017} are other commonly used alternatives. Except for our earlier conference paper \citep{Tyburec_2020b}, which is a very preliminary version of this manu\-script, the only global approach that can be found in the literature considers a discrete setting of the problem, in which case the cross-sections are selected from a predefined catalog, allowing the use of the branch-and-bound method \citep{Kanno_2016}.
Standard local optimization techniques such as OC, MMA, and the \textsc{Matlab} built-in optimizer \texttt{fmincon} all converge\footnote{For OC and MMA, we adopted the commonly-used starting point of uniform mass distribution, i.e., $a_1=a_2=0.2\sqrt{5}$. For \texttt{fmincon}, the default starting point was used.} to the optimized compliance $c = 2.895$ and the corresponding areas $a_1 = 0.652$ and $a_2 = 0.242$. However, the globally optimal design possesses compliance $c^* = 2.719$, which requires $a_1^* = 0.4 \sqrt{5}$ and $a_2^* = 0$. In particular, this problem exhibits three local optima, the third being $a_1 = 0$, $a_2 = 0.4\sqrt{5}$ and $c=4.429$, see Fig.~\ref{fig:motivation}b. Thus, the global optimum may be reached by examining a~few different starting points in the optimizers. Such a~procedure, however, cannot assure global optimality, nor can it assess the quality of the optimized designs with respect to the global optimum. When the number of structural elements increases, design-dependent loads are considered, and higher-order polynomials for the moments of inertia are used, finding globally-optimal minimum-compliance designs becomes extremely challenging.
A comparison of these bounds then assesses the design quality, and their equality establishes a simple sufficient condition of global optimality. We further show that when a unique global optimum exists, we can expect the occurrence of a~bound equality. Fortunately, this situation occurs quite often when the design domain lacks structural and boundary-condition symmetries. This paper is organized as follows. In Section \ref{sec:mom-sos}, we introduce polynomial optimization and the moment-sum-of-squares hierarchy. Section \ref{sec:nsdp} develops a~non-linear semidefinite programming formulation for topology optimization of frame structures, which we modify subsequently for the moment-sum-of-squares hierarchy in Section \ref{sec:eff}. Section \ref{sec:ub} reveals how to correct the lower-bound designs generated by the hierarchy to obtain feasible upper bounds, and Section \ref{sec:opt} introduces a sufficient condition of global $\varepsilon$-optimality as well as a~zero optimality gap for the case of a unique global optimum. Section~\ref{sec:shells} outlines the required changes in notation to allow for thickness optimization of shell structures. These results are then illustrated on five selected optimization problems in Section~\ref{sec:examples}. We finally summarize our contributions in Section~\ref{sec:conclusion}.
Suppose we aim to solve an optimization problem of the form \begin{subequations}\label{eq:po} \begin{alignat}{2} f^* = \inf_{\mathbf{x}}\, & f(\mathbf{x})\\ \mathrm{s.t.}\; & \mathbf{G} (\mathbf{x}) &{}\succeq{} 0,\label{eq:po_pmi} \end{alignat} \end{subequations} where $f (\mathbf{x}): \mathbb{R}^n \rightarrow \mathbb{R}$ is a real polynomial function and $\mathbf{G} (\mathbf{x}): \mathbb{R}^n \rightarrow \mathbb{S}^{m}$ is a~real polynomial mapping, so that $\forall i, j: G_{i,j}(\mathbf{x}) = G_{j,i}(\mathbf{x})$ are real polynomial functions of $\mathbf{x}$. The degree of these polynomials is at most $k \in \mathbb{N}$. The symbol $\mathbb{S}^{m}$ denotes the space of real symmetric square matrices of size $m$, and ``$\succeq$'' denotes the L\"{o}wner partial order, i.e., $\mathbf{G}(\mathbf{x})$ in \eqref{eq:po_pmi} is required to be positive semidefinite. Hence, we call \eqref{eq:po_pmi} a polynomial matrix inequality (PMI) in what follows and denote its feasible set by $\mathcal{K}(\mathbf{G})$.

Clearly, the nonlinear semidefinite program \eqref{eq:po} covers a~variety of convex optimization problems as special cases, including linear and quadratic programming or linear semidefinite programming, see, e.g., \citep[Section 4.2]{Ben_Tal_2001}. Although these instances can be solved in polynomial time, hence efficiently, \eqref{eq:po} is $\mathcal{NP}$-hard in general. This can be seen, for example, by a~reduction from binary programming, in which case the main diagonal of $\mathbf{G}(\mathbf{x})$ contains both $x_i^2-x_i$ and $x_i - x_i^2$ terms for all $i \in \{1\dots n\}$, forcing $x_i \in \{0,1\}$.
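To make the reduction concrete, the following short numerical check (our own sketch with a single scalar variable, not part of the original development) confirms that a diagonal PMI containing both $x^2-x$ and $x-x^2$ admits only the binary points:

```python
import numpy as np

def binary_diag_feasible(x, tol=1e-12):
    """Diagonal G(x) = diag(x^2 - x, x - x^2) is PSD iff both entries are
    non-negative, which forces x^2 - x = 0, i.e., x in {0, 1}."""
    G = np.diag([x**2 - x, x - x**2])
    return bool(np.all(np.linalg.eigvalsh(G) >= -tol))

# scan a grid of candidate points; only the binary values survive
grid = np.linspace(-1.0, 2.0, 301)
feasible = [x for x in grid if binary_diag_feasible(x)]
print(sorted(round(x, 6) for x in feasible))  # → [0.0, 1.0]
```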
Despite the fact that the admissible set of $\mathbf{x}$ is generally non-convex, \eqref{eq:po}~admits an equivalent reformulation as a~convex optimization problem over the finite-dimensional cone $C_k(\mathcal{K}(\mathbf{G}))$ of polynomials of degree at most $k$ that are non-negative on $\mathcal{K}(\mathbf{G})$, i.e., \begin{equation}\label{eq:non-neg_cone} \begin{aligned} f^* &= \sup_{\lambda} \{\lambda: f(\mathbf{x})-\lambda \ge 0, \forall \mathbf{x} \in \mathcal{K}(\mathbf{G})\}\\ &= \sup_{\lambda}\, (f-\lambda), \mathrm{s.t.\;} (f-\lambda) \in C_k(\mathcal{K}(\mathbf{G})). \end{aligned} \end{equation} Unfortunately, no simple and tractable description of the cone $C_k(\mathcal{K}(\mathbf{G}))$ is known. To introduce an approach that allows solving \eqref{eq:non-neg_cone}, we first adopt the following notation. Let $\mathbf{x} \mapsto \mathbf{b}_k (\mathbf{x})$ denote the canonical basis of the space of polynomials of degree at most $k$, \begin{equation} \begin{multlined} \mathbf{b}_k(\mathbf{x}) = \left(1\;\; x_1\;\; x_2\;\; \dots\;\; x_n\;\; x_1^2\;\; x_1 x_2\;\; \dots\;\; x_1 x_n\right.\\ \left. x_2^2\;\; x_2 x_3\;\; \dots\;\; x_n^2\;\; x_1^3\;\; \dots\;\; x_n^k \right). \end{multlined} \end{equation} Then, any polynomial $p(\mathbf{x})$ of degree at most $k$ can be written as \begin{equation} p (\mathbf{x}) = \mathbf{q}^\mathrm{T} \mathbf{b}_k (\mathbf{x}), \end{equation} in which $\mathbf{q}$ denotes the vector of coefficients associated with the basis $\mathbf{b}_k(\mathbf{x})$. \begin{definition}\label{def:sos} The polynomial matrix $\bm{\Sigma}(\mathbf{x}): \mathbb{R}^n \rightarrow \mathbb{S}^m$ is a~(matrix) sum-of-squares (SOS) if there exists a polynomial matrix $\mathbf{H}(\mathbf{x}): \mathbb{R}^n \rightarrow \mathbb{R}^{m \times o}$, $o \in \mathbb{N}$, such that % \begin{equation} \bm{\Sigma} (\mathbf{x})= \mathbf{H}(\mathbf{x})\left[\mathbf{H}(\mathbf{x})\right]^\mathrm{T}, \;\forall \mathbf{x} \in \mathbb{R}^n.
\end{equation} \end{definition} Let $\langle \cdot, \cdot \rangle$ denote the standard inner product on matrices, let $\bm{\alpha} \in \mathbb{N}^{n}$ with $\mathbf{1}^\mathrm{T}\bm{\alpha} \le k$ be a multi-index associated with the basis $\mathbf{b}_k (\mathbf{x})$, and let $\mathbf{y} \in \mathbb{R}^{\lvert \mathbf{b}_k(\mathbf{x}) \rvert }$ collect the moments (of probability measures supported on $\mathcal{K}(\mathbf{G})$) indexed in $\mathbf{b}_k (\mathbf{x})$. In what follows, we adopt the following notation for the elements of $\mathbf{y}$: \begin{equation} y_{\bm{\alpha}} = y_{\prod_{i=1}^n x_i^{\alpha_i}} \text{ is associated with } \prod_{i=1}^{n}x_i^{\alpha_i}. \end{equation} For example, when $\bm{\alpha} = (0\;\;0\;\;1\;\;2)^\mathrm{T}$, $y_{0012} = y_{x_3^1 x_4^2}$ corresponds to the monomial $x_3^1 x_4^2 \in \mathbf{b}_k(\mathbf{x})$, where $k \ge 3$. \begin{assumption}\label{ass:comp}\citep{Henrion_2006} Assume that there exist an SOS polynomial $\mathbf{x} \mapsto p_0(\mathbf{x})$ and an SOS polynomial matrix $\mathbf{x} \mapsto \mathbf{R}(\mathbf{x})$ such that the superlevel set $\{\mathbf{x} \in \mathbb{R}^n : p_0(\mathbf{x}) + \langle \mathbf{R}(\mathbf{x}),\mathbf{G}(\mathbf{x})\rangle \geq 0\}$ is compact. \end{assumption} Note that Assumption \ref{ass:comp} is an algebraic certificate of compactness of the feasible set of problem \eqref{eq:po}.
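The indexing of moments by monomials can be made concrete with a short helper (our own sketch, not part of the paper): it enumerates all exponent tuples $\bm{\alpha}$ with $\mathbf{1}^\mathrm{T}\bm{\alpha} \le k$ and recovers the monomial associated with a given $\bm{\alpha}$:

```python
from itertools import combinations_with_replacement
from math import comb

def monomial_exponents(n, k):
    """All exponent tuples alpha with 1^T alpha <= k for n variables,
    ordered by total degree as in the basis b_k(x)."""
    exps = []
    for deg in range(k + 1):
        for combo in combinations_with_replacement(range(n), deg):
            alpha = [0] * n
            for i in combo:
                alpha[i] += 1
            exps.append(tuple(alpha))
    return exps

def monomial_name(alpha):
    """Human-readable monomial for a multi-index, e.g. (0,0,1,2) -> 'x3^1 x4^2'."""
    terms = [f"x{i+1}^{p}" for i, p in enumerate(alpha) if p > 0]
    return " ".join(terms) if terms else "1"

# the space of degree-<=3 polynomials in 4 variables has dimension C(4+3,3) = 35
basis = monomial_exponents(n=4, k=3)
assert len(basis) == comb(4 + 3, 3)
assert (0, 0, 1, 2) in basis          # the example multi-index from the text
print(monomial_name((0, 0, 1, 2)))    # → x3^1 x4^2
```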
When Assumption \ref{ass:comp} holds, the dual of \eqref{eq:non-neg_cone} can be written equivalently as an infinite-dimensional generalized problem of moments, which admits the finite-dimensional truncation \begin{subequations}\label{eq:truncmoment} \begin{alignat}{2} f^{(r)} = \min_{\mathbf{y}}\, && \mathbf{q}^\mathrm{T} \mathbf{y}\qquad\qquad\;\;\\ \mathrm{s.t.}\;&& y_0 &= 1,\\ && \mathbf{M}_k (\mathbf{y}) &\succeq 0,\\ && \mathbf{M}_{k-d} (\mathbf{G}(\mathbf{x}) \mathbf{y}) &\succeq 0, \end{alignat} \end{subequations} in which $2r \ge k$, $r$ is the relaxation degree, and $d$ stands for the maximum degree of the polynomials in $\mathbf{G}(\mathbf{x})$. In addition, $\mathbf{M}_k(\mathbf{y})$ and $\mathbf{M}_{k-d} (\mathbf{G}(\mathbf{x})\mathbf{y})$ are the truncated moment and localizing matrices associated with $\mathbf{y}$ and $\mathbf{G}(\mathbf{x})$; for their precise definition, we refer the reader to \citep{Henrion_2006}. These matrices are linear in $\mathbf{y}$, hence \eqref{eq:truncmoment} is a linear semidefinite program.

Because \eqref{eq:truncmoment} is a~finite-dimensional convex relaxation of \eqref{eq:po}, we have $f^{(r)} \le f^*$, $\forall r \in \mathbb{N}$. Moreover, these relaxations become tighter with increasing $r$, making the sequence $\left(f^{(r)}\right)_{r \in \mathbb{N}}$ monotonically non-decreasing and convergent to $f^*$. \begin{theorem}\citep[Theorem 2.2]{Henrion_2006}\label{th:convergence} Let Assumption~\ref{ass:comp} be satisfied. Then, $f^{(r)} \uparrow f^*$ as $r \rightarrow \infty$ in \eqref{eq:truncmoment}. \end{theorem} Moreover, all globally optimal solutions of \eqref{eq:po} can be extracted from \eqref{eq:truncmoment} based on the flat extension theorem of \citet{Curto_1996}.
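The extraction principle can be illustrated directly: for an atomic (Dirac) measure at a single point, the moment matrix equals $\mathbf{b}_k(\mathbf{x}^*)\,\mathbf{b}_k(\mathbf{x}^*)^\mathrm{T}$ and is therefore PSD of rank one; for a measure with several atoms, the rank equals the number of (generic) atoms. A minimal numpy sketch with arbitrary illustrative points:

```python
import numpy as np

def b1(x):
    """Degree-1 monomial basis (1, x1, x2) evaluated at a point."""
    return np.concatenate(([1.0], x))

def moment_matrix(points, weights):
    """M_1(y) for the discrete measure sum_i w_i * delta_{x_i}."""
    return sum(w * np.outer(b1(p), b1(p)) for p, w in zip(points, weights))

# one atom: rank-one PSD moment matrix (the single-minimizer situation)
M_single = moment_matrix([np.array([0.4, 1.3])], [1.0])
# two atoms: rank two
M_double = moment_matrix([np.array([0.4, 1.3]), np.array([-1.0, 0.2])],
                         [0.5, 0.5])

print(np.linalg.matrix_rank(M_single), np.linalg.matrix_rank(M_double))  # → 1 2
```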
Indeed, finite convergence occurs when \begin{equation}\label{eq:rank} s = \mathrm{Rank}(\mathbf{M}_k(\mathbf{y}^*)) = \mathrm{Rank}(\mathbf{M}_{k-d}(\mathbf{y}^*)), \end{equation} where $\mathbf{y}^*$ denotes the vector of optimal moments, and $s$ stands for the minimum number of distinct global minimizers \citep[Theorem 2.4]{Henrion_2006}. \paragraph{Example} We illustrate the process of building the (Lasserre) moment-sum-of-squares hierarchy on an elementary example, which can be interpreted as minimizing the compliance $c$ of a~single element of unit length and stiffness $a^2$ loaded by a~force $\overline{f}$: \begin{subequations} \begin{alignat}{2} \min_{a,c}\; && c\qquad\;\;\,\\ \mathrm{s.t.}\; && \begin{pmatrix} c & \overline{f} \\ \overline{f} & a^2 \end{pmatrix} &\succeq 0,\\ && \overline{V} - a &\ge 0,\\ && a &\ge 0. \end{alignat} \end{subequations} In the first relaxation, $r=1$, $\mathbf{y} = \begin{pmatrix} y_{00} & y_{10} & y_{01} & y_{20} & y_{11} & y_{02}\end{pmatrix}^\mathrm{T}$ is indexed in the monomial basis $\mathbf{b}_1 (a,c) = \begin{pmatrix}1 & c & a & c^2 & c a & a^2\end{pmatrix}^\mathrm{T}$. Then, the associated relaxation reads \begin{subequations} \begin{alignat}{2} \min_{\mathbf{y}} && y_{10}\qquad\qquad\;\,\\ \mathrm{s.t.} && \begin{pmatrix} y_{10} & \overline{f} \\ \overline{f} & y_{02} \end{pmatrix} &\succeq 0,\\ && \overline{V} - y_{01} &\ge 0,\\ && y_{01} &\ge 0,\\ && y_{00} &= 1,\\ && \begin{pmatrix} y_{00} & y_{10} & y_{01}\\ y_{10} & y_{20} & y_{11}\\ y_{01} & y_{11} & y_{02} \end{pmatrix} &\succeq 0.
\end{alignat} \end{subequations} For $r=2$, we have $\mathbf{y} = (y_{00}\;\,\allowbreak y_{10}\;\,\allowbreak y_{01}\;\,\allowbreak y_{20}\;\,\allowbreak y_{11}\;\,\allowbreak y_{02}\;\,\allowbreak y_{30}\;\,\allowbreak y_{21}\;\,\allowbreak y_{12}\;\,\allowbreak y_{03}\;\,\allowbreak y_{40}\;\,\allowbreak y_{31}\;\,\allowbreak y_{22}\;\,\allowbreak y_{13}\;\,\allowbreak y_{04})^\mathrm{T}$ indexed in $\mathbf{b}_2(a,c) = (1\;\,\allowbreak c\;\,\allowbreak a\;\,\allowbreak c^2\;\,\allowbreak c a\;\,\allowbreak a^2\;\,\allowbreak c^3\;\,\allowbreak c^2 a\;\,\allowbreak c a^2\;\,\allowbreak a^3\;\,\allowbreak c^4\;\,\allowbreak c^3 a\;\,\allowbreak c^2 a^2\;\,\allowbreak c a^3\;\,\allowbreak a^4)^\mathrm{T}$. The corresponding relaxation is written as \begin{subequations} \begin{alignat}{2} \min_{\mathbf{y}}\; && y_{10}\qquad\qquad\qquad\qquad\qquad\qquad\qquad\;\;\\ \mathrm{s.t.}\; && \begin{pmatrix} y_{10} & \overline{f} & y_{20} & \overline{f}y_{10} & y_{11} & \overline{f} y_{01} \\ \overline{f} & y_{02} & \overline{f}y_{10} & y_{12} & \overline{f} y_{01} & y_{03}\\ y_{20} & \overline{f}y_{10} & y_{30} & \overline{f}y_{20} & y_{21} & \overline{f} y_{11} \\ \overline{f}y_{10} & y_{12} & \overline{f}y_{20} & y_{22} & \overline{f} y_{11} & y_{13}\\ y_{11} & \overline{f}y_{01} & y_{21} & \overline{f}y_{11} & y_{12} & \overline{f} y_{02} \\ \overline{f}y_{01} & y_{03} & \overline{f}y_{11} & y_{13} & \overline{f} y_{02} & y_{04} \end{pmatrix} &\succeq 0,\hspace{-4mm}\\ && \begin{pmatrix} \overline{V} - y_{01} & \overline{V}y_{10} - y_{11} & \overline{V}y_{01} - y_{02}\\ \overline{V}y_{10} - y_{11} & \overline{V}y_{20} - y_{21} & \overline{V}y_{11} - y_{12}\\ \overline{V}y_{01} - y_{02} & \overline{V}y_{11} - y_{12} & \overline{V}y_{02} - y_{03} \end{pmatrix} &\succeq 0,\hspace{-4mm}\\ && \begin{pmatrix} y_{01} & y_{11} & y_{02}\\ y_{11} & y_{21} & y_{12}\\ y_{02} & y_{12} & y_{03} \end{pmatrix} &\succeq 0,\hspace{-4mm}\\ && y_{00} &= 1,\hspace{-4mm}\\ && \begin{pmatrix} y_{00}
& y_{10} & y_{01} & y_{20} & y_{11} & y_{02}\\ y_{10} & y_{20} & y_{11} & y_{30} & y_{21} & y_{12}\\ y_{01} & y_{11} & y_{02} & y_{21} & y_{12} & y_{03}\\ y_{20} & y_{30} & y_{21} & y_{40} & y_{31} & y_{22}\\ y_{11} & y_{21} & y_{12} & y_{31} & y_{22} & y_{13}\\ y_{02} & y_{12} & y_{03} & y_{22} & y_{13} & y_{04} \end{pmatrix} &\succeq 0.\hspace{-4mm} \end{alignat} \end{subequations} When solved, this relaxation allows extracting the global solution $a^*=\overline{V}$ and $c^*=\overline{f}^2/\overline{V}^2$.

\section{Methodology}

Topology optimization of discrete structures naturally relies on the ground structure approach \citep{Dorn1964}: a~discretized design domain composed of a~fixed set of $n_\mathrm{n} \in \mathbb{N}$ nodes and a~set of $n_\mathrm{e} \in \mathbb{N}$ admissible finite elements connecting them. Here, we employ the simplest two-node Euler-Bernoulli frame elements that adopt linear shape functions to interpolate the longitudinal displacements and cubic shape functions to interpolate the lateral displacements and rotations. Other element types can be adopted as well; see Section \ref{sec:examples_cantilever} for applications to the Timoshenko beam element and the MITC4 shell element. Each of these finite elements (indexed by $i$) must be supplied with a non-negative cross-section area $a_i \in \mathbb{R}_{\ge 0}$ and an area moment of inertia $I_i \in \mathbb{R}_{\ge 0}$, both of which are to be found in the optimization process. In this contribution, we assume, for convenience, that the moment of inertia is a second- or third-order polynomial function of the cross-section area, \begin{equation}\label{eq:inertia} I_i(a_i) = c_\mathrm{II} a_i^2 + c_\mathrm{III} a_i^3, \end{equation} with $c_\mathrm{II}, c_\mathrm{III} \in \mathbb{R}_{\ge 0}$ being fixed constants. When $a_i = 0$, and hence $I_i(a_i) = 0$, the finite element vanishes and does not contribute to the load transfer.
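As a quick sanity check of \eqref{eq:inertia}, the sketch below (with illustrative placeholder coefficients of our own; e.g., $c_\mathrm{II}=1/12$, $c_\mathrm{III}=0$ corresponds to a square cross-section) verifies that the inertia vanishes together with the area and is monotone in it:

```python
def inertia(a, c2, c3):
    """Area moment of inertia I(a) = c2*a^2 + c3*a^3, mirroring eq. (inertia);
    the non-negative coefficients c2, c3 are placeholders."""
    return c2 * a**2 + c3 * a**3

# a vanished element (a = 0) carries no bending stiffness
assert inertia(0.0, c2=1.0 / 12.0, c3=0.0) == 0.0

# I(a) is strictly increasing in a whenever a coefficient is positive
areas = [0.1, 0.5, 1.0, 2.0]
vals = [inertia(a, c2=0.2, c3=0.05) for a in areas]
assert all(v1 < v2 for v1, v2 in zip(vals, vals[1:]))
```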
Different topology optimization formulations exist, accommodating specific needs of particular applications. Here, we consider the problem of finding the minimum-compliance design under multiple load cases \eqref{eq:original_compliance} while satisfying the linear-elastic equilibrium equations \eqref{eq:original_equilibrium} and limiting the material volume from above by $\overline{V} \in \mathbb{R}_{>0}$ \eqref{eq:original_volume}. Physical admissibility of the resulting designs is ensured by the non-negative cross-section areas \eqref{eq:original_areas}. Combining these ingredients establishes the basic elastic-design formulation \begin{subequations}\label{eq:original} \begin{alignat}{2} &\min_{\mathbf{a}, \mathbf{u}_1,\dots,\mathbf{u}_{n_\mathrm{lc}}}\, & \sum_{j=1}^{n_\mathrm{lc}} \omega_j \mathbf{f}_j(\mathbf{a})^\mathrm{T} \mathbf{u}_j\;\;\label{eq:original_compliance}\\ &\quad\;\;\mathrm{s.t.} & \mathbf{K}_j(\mathbf{a}) \mathbf{u}_j - \mathbf{f}_j(\mathbf{a}) &{}={}\mathbf{0},\;\forall j \in \{1\dots n_\mathrm{lc}\},\label{eq:original_equilibrium}\hspace{-6mm}\\ && \overline{V} - \bm{\ell}^\mathrm{T} \mathbf{a} &{}\ge{}0,\label{eq:original_volume}\\ && \mathbf{a} &{}\ge{}\mathbf{0},\label{eq:original_areas} \end{alignat} \end{subequations} in which $\bm{\omega} \in \mathbb{R}_{>0}^{n_\mathrm{lc}}$ are positive weights associated with the $n_\mathrm{lc}$ load cases, and $\bm{\ell} \in \mathbb{R}_{\ge 0}^{n_\mathrm{e}}$ stands for the column vector of element lengths. Further, $\mathbf{f}_j (\mathbf{a}) \in \mathbb{R}^{n_{\mathrm{dof},j}}$ and $\mathbf{u}_j \in \mathbb{R}^{n_{\mathrm{dof},j}}$ denote the force and displacement column vectors of the $j$-th load case, $n_{\mathrm{dof},j} \in \mathbb{N}$ stands for the associated number of degrees of freedom, and $\mathbf{K}_j(\mathbf{a}) \in \mathbb{R}^{n_{\mathrm{dof},j} \times n_{\mathrm{dof},j}}$ is the corresponding symmetric positive semidefinite stiffness matrix.
For these stiffness matrices, we require $\forall \mathbf{a}>\mathbf{0}: \mathbf{K}_j(\mathbf{a}) \succ 0$ to exclude rigid body motions. Using the finite element method, $\mathbf{K}_j(\mathbf{a})$ is assembled as \begin{equation}\label{eq:assembly_K} \mathbf{K}_j (\mathbf{a}) = \mathbf{K}_{j,0} + \sum_{i=1}^{n_\mathrm{e}} \left[ \mathbf{K}_{j,i}^{(1)} a_i + \mathbf{K}_{j,i}^{(2)} a_i^2 + \mathbf{K}_{j,i}^{(3)} a_i^3 \right], \end{equation} with $\mathbf{K}_{j,0} \succeq 0$ standing for a~design-independent stiffness (such as that of fixed structural elements), $\mathbf{K}_{j,i}^{(1)} \succeq 0$ being the unit-cross-section-area membrane stiffness of the $i$-th element in the $j$-th load case, and $\mathbf{K}_{j,i}^{(2)} \succeq 0$ together with $\mathbf{K}_{j,i}^{(3)} \succeq 0$ being the corresponding bending stiffness counterparts associated with the unit cross-section area. The force column vector $\mathbf{f}_j$ is assumed in the form \begin{equation}\label{eq:assembly_f} \mathbf{f}_j (\mathbf{a}) = \mathbf{f}_{j,0} + \sum_{i=1}^{n_\mathrm{e}} \left[ \mathbf{f}_{j,i}^{(1)} a_i \right], \end{equation} where $\mathbf{f}_{j,0}$ stands for the design-independent load and $\mathbf{f}_{j,i}^{(1)}$ are the design-dependent loads such as self-weight. One can also add higher-order terms to \eqref{eq:assembly_f} to handle non-zero displacement boundary conditions.

The formulation \eqref{eq:original} is nonlinear and lacks convexity in general. The non-convexity comes not only from the polynomial entries in the stiffness matrix \eqref{eq:assembly_K}, but also from its possible singularity caused by zero cross-section areas \eqref{eq:original_areas}.

\subsection{Semidefinite programming formulation for topology optimization of frame structures}\label{sec:nsdp}

An approach to simplify \eqref{eq:original} relies on eliminating the displacement variables $\mathbf{u}_j$ from the problem formulation.
In the nested approach, which is commonly used in topology optimization, the cross-section areas are bounded from below by a strictly positive $\varepsilon \in \mathbb{R}_{>0}$, allowing for a~computation of $\left[\mathbf{K}(\mathbf{a})\right]^{-1}$. Recall that $\mathbf{K}(\mathbf{a}) \succ 0$ for all $\mathbf{a}>\mathbf{0}$. The optimization procedure then traditionally adopts, e.g., the Method of Moving Asymptotes (MMA) \citep{Svanberg1987}, or the Optimality Criteria (OC) method \citep{Rozvany_1989}. Notice that $\varepsilon \rightarrow 0$ results in a~high condition number of $\mathbf{K}(\mathbf{a})$ and that larger values of $\varepsilon$ may impair the quality of optimized designs, as a sizing optimization is then solved instead of the original topology optimization.

In contrast, here, we eliminate $\mathbf{u}_j$ and allow the cross-section areas to truly attain zero. Because $\left[\mathbf{K}(\mathbf{a})\right]^{-1}$ may not exist in our case, we rely on the Moore-Penrose pseudo-inverse $\left[\mathbf{K}(\mathbf{a})\right]^\dagger$ instead. Its role in enforcing the equilibrium conditions is clarified in the next lemma. \begin{lemma}\label{prop:pinv} Consider the equation $\mathbf{K}_j(\mathbf{a}) \mathbf{u}_j = \mathbf{f}_j(\mathbf{a}) - \mathbf{r}_j$ with $\mathbf{u}_j = \mathbf{K}_j(\mathbf{a})^{\dagger} \mathbf{f}_j(\mathbf{a})$ and a residual vector $\mathbf{r}_j \in \mathbb{R}^{n_{\mathrm{dof},j}}$. Then, $\mathbf{r}_j=\mathbf{0}$ if and only if $\mathbf{f}_j(\mathbf{a}) \in \mathrm{Im}(\mathbf{K}_j (\mathbf{a}))$. \end{lemma} \begin{proof} Let $\mathbf{f}_j (\mathbf{a}) = \mathbf{v}_j (\mathbf{a}) + \mathbf{w}_j (\mathbf{a})$, in which $\mathbf{v}_j(\mathbf{a}) \in \mathrm{Im}\left(\mathbf{K}_j(\mathbf{a})\right)$, and $\mathbf{w}_j(\mathbf{a}) \in \mathrm{Ker}\left(\mathbf{K}_j(\mathbf{a})\right)$.
Then, because $\mathbf{K}_j(\mathbf{a}) \mathbf{K}_j(\mathbf{a})^{\dagger}$ is an orthogonal projector onto the range of $\mathbf{K}_j(\mathbf{a})$, we obtain $\mathbf{K}_j(\mathbf{a}) \mathbf{K}_j(\mathbf{a})^{\dagger} \mathbf{f}_j(\mathbf{a}) = \mathbf{v}_j(\mathbf{a})$. Clearly, when $\mathbf{f}_j(\mathbf{a}) \in \mathrm{Im}(\mathbf{K}_j (\mathbf{a}))$, we have $\mathbf{v}_j(\mathbf{a}) = \mathbf{f}_j(\mathbf{a})$ with $\mathbf{w}_j(\mathbf{a}) = \mathbf{0}$, implying that $\mathbf{r}_j = \mathbf{0}$. In the case of $\mathbf{f}_j(\mathbf{a}) \notin \mathrm{Im}(\mathbf{K}_j (\mathbf{a}))$, $\mathbf{w}_j (\mathbf{a}) \neq \mathbf{0}$, showing that $\mathbf{r}_j = \mathbf{w}_j (\mathbf{a})$. \end{proof} Lemma \ref{prop:pinv} allows us to eliminate the displacement variables and write the optimization problem \eqref{eq:original} only in terms of the cross-section areas $\mathbf{a}$ as \begin{subequations}\label{eq:fpinv} \begin{alignat}{2} \min_{\mathbf{a}}\; && \sum_{j=1}^{n_\mathrm{lc}} \omega_j \mathbf{f}_j(\mathbf{a})^\mathrm{T} \left[\mathbf{K}_j(\mathbf{a}) \right]^{\dagger} \mathbf{f}_j (\mathbf{a})\label{eq:fpinv_compliance}\\ \mathrm{s.t.}\; && \overline{V} - \bm{\ell}^\mathrm{T} \mathbf{a} &{}\ge{} 0,\label{eq:fpinv_volume}\\ && \mathbf{a} &{}\ge{} \mathbf{0},\label{eq:fpinv_areas}\\ && \forall j \in \{1\dots n_\mathrm{lc}\}: \quad \mathbf{f}_j(\mathbf{a}) &{}\in{} \mathrm{Im}(\mathbf{K}_j(\mathbf{a})).\label{eq:fpinv_image} \end{alignat} \end{subequations} Notice that \eqref{eq:fpinv_image} essentially eliminates the nonphysical case in which $\mathbf{K}_j(\mathbf{a}) = \left[\mathbf{K}_j (\mathbf{a})\right]^\dagger = \mathbf{0}$ would produce zero compliance. Because $\left[ \mathbf{K}_j(\mathbf{a})\right]^\dagger \succeq 0, \forall j \in \{1\dots n_\mathrm{lc}\}$, and $\bm{\omega}>\mathbf{0}$ by definition, \eqref{eq:fpinv_compliance} is bounded from below by $0$.
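Lemma \ref{prop:pinv}, and hence the role of the image constraint \eqref{eq:fpinv_image}, can be verified numerically; the sketch below substitutes an arbitrary singular PSD matrix of our own choosing for $\mathbf{K}_j(\mathbf{a})$:

```python
import numpy as np

K = np.array([[1.0, -1.0], [-1.0, 1.0]])   # singular PSD "stiffness"; Ker spanned by (1, 1)
Kdag = np.linalg.pinv(K)                   # Moore-Penrose pseudo-inverse

def residual(f):
    """r = f - K u with u = K^+ f; zero iff f lies in Im(K) (Lemma)."""
    u = Kdag @ f
    return f - K @ u

f_in = np.array([2.0, -2.0])    # in Im(K): proportional to (1, -1)
f_out = np.array([1.0, 1.0])    # in Ker(K), hence outside Im(K)

print(np.allclose(residual(f_in), 0.0), np.allclose(residual(f_out), 0.0))
# → True False
```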
Thus, we introduce slack variables $\mathbf{c} \in \mathbb{R}_{\ge0}^{n_\mathrm{lc}}$ collecting the to-be-minimized upper bounds on the compliances of the individual load cases, and rewrite \eqref{eq:fpinv} equivalently as \begin{subequations}\label{eq:fpinvs} \begin{alignat}{2} \min_{\mathbf{a}, \mathbf{c}}\, && \bm{\omega}^\mathrm{T} \mathbf{c}\qquad\qquad\qquad\qquad\;\;\; \label{eq:fpinvs_compliance}\\ \mathrm{s.t.}\, && c_j - \mathbf{f}_j(\mathbf{a})^\mathrm{T} \left[\mathbf{K}_j(\mathbf{a}) \right]^{\dagger} \mathbf{f}_j (\mathbf{a}) &{}\ge{}0, \forall j \in \{1\dots n_\mathrm{lc}\},\label{eq:fpinvs_equilibrium}\hspace{-12mm}\\ && \overline{V} - \bm{\ell}^\mathrm{T} \mathbf{a} &{}\ge{}0,\label{eq:fpinvs_volume}\\ && \mathbf{a} &{}\ge{}\mathbf{0},\label{eq:fpinvs_areas}\\ && \forall j \in \{1\dots n_\mathrm{lc}\}: \quad \mathbf{f}_j(\mathbf{a}) &{}\in{} \mathrm{Im}(\mathbf{K}_j(\mathbf{a})).\label{eq:fpinvs_image} \end{alignat} \end{subequations} To derive a nonlinear semidefinite programming formulation, let us now recall the generalized Schur complement lemma: \begin{lemma}\label{lemma:schur} \citep[Theorem 16.1]{Gallier_2011} Let $\mathbf{A}$ and $\mathbf{C}$ be symmetric square matrices, $\mathbf{B}$ have appropriate dimensions, and $\mathbf{I}$ denote an identity matrix. Then, the following conditions are equivalent: \begin{enumerate} \item $\begin{pmatrix} \mathbf{A} & \mathbf{B}^\mathrm{T}\\ \mathbf{B} & \mathbf{C} \end{pmatrix} \succeq 0$, \item $\mathbf{C} \succeq 0$, $\mathbf{A}- \mathbf{B}^\mathrm{T} \mathbf{C}^\dagger \mathbf{B} \succeq 0$, $(\mathbf{I}-\mathbf{C}\mathbf{C}^\dagger)\mathbf{B} = \mathbf{0}$.
\end{enumerate} \end{lemma} Since we already have $\mathbf{K}_j (\mathbf{a}) \succeq 0$ by definition and $c_j-\left[\mathbf{f}_j(\mathbf{a})\right]^\mathrm{T} \left[\mathbf{K}_j(\mathbf{a})\right]^\dagger \mathbf{f}_j (\mathbf{a}) \ge 0$ in \eqref{eq:fpinvs_equilibrium}, to use Lemma \ref{lemma:schur} it suffices to show that \begin{equation}\label{eq:schur_eq} (\mathbf{I}-\mathbf{K}_j(\mathbf{a})\left[\mathbf{K}_j(\mathbf{a})\right]^\dagger)\mathbf{f}_j (\mathbf{a}) = \mathbf{0}. \end{equation} \begin{proposition}\label{prop:image} The condition \eqref{eq:schur_eq} is equivalent to $\mathbf{f}_j(\mathbf{a}) \in \mathrm{Im}(\mathbf{K}_j(\mathbf{a}))$. \end{proposition} \begin{proof} First, consider $\mathbf{f}_j (\mathbf{a}) \in \mathrm{Im}(\mathbf{K}_j (\mathbf{a}))$. Then, $\mathbf{f}_j (\mathbf{a}) = \mathbf{K}_j (\mathbf{a}) \mathbf{u}_j$ for some displacement vector $\mathbf{u}_j$. After inserting it into the left-hand side of \eqref{eq:schur_eq}, we have % \begin{equation} \left( \mathbf{K}_j(\mathbf{a}) - \mathbf{K}_j(\mathbf{a}) \left[\mathbf{K}_j(\mathbf{a})\right]^\dagger \mathbf{K}_j(\mathbf{a}) \right) \mathbf{u}_j = \mathbf{0}, \end{equation} % which holds for all such $\mathbf{u}_j$ as $\mathbf{K}_j(\mathbf{a}) \left[\mathbf{K}_j(\mathbf{a})\right]^\dagger \mathbf{K}_j(\mathbf{a}) = \mathbf{K}_j(\mathbf{a})$ by the definition of the Moore-Penrose pseudo-inverse \citep[Lemma 14.1]{Gallier_2011}. Otherwise, consider $\mathbf{f}_j (\mathbf{a}) \notin \mathrm{Im}(\mathbf{K}_j (\mathbf{a}))$ and let $\tilde{\mathbf{u}} = \left[\mathbf{K}_j (\mathbf{a})\right]^\dagger \mathbf{f}_j (\mathbf{a})$. Then, $\mathbf{K}_j (\mathbf{a}) \tilde{\mathbf{u}} = \mathbf{f}_j (\mathbf{a}) - \mathbf{r}_j$ for some $\mathbf{r}_j \in \mathrm{Ker}(\mathbf{K}_j(\mathbf{a}))$, $\mathbf{r}_j \neq \mathbf{0}$ by Lemma \ref{prop:pinv}.
Thus, the left-hand side of \eqref{eq:schur_eq} simplifies to % \begin{equation} \begin{multlined} \mathbf{f}_j (\mathbf{a}) - \mathbf{K}_j (\mathbf{a}) \left[\mathbf{K}_j (\mathbf{a})\right]^{\dagger} \mathbf{f}_j (\mathbf{a}) = \\ =\mathbf{f}_j (\mathbf{a}) -\mathbf{K}_j (\mathbf{a})\tilde{\mathbf{u}} = \mathbf{r}_j \neq \mathbf{0}, \end{multlined} \end{equation} % which completes the proof. \end{proof} Finally, Proposition \ref{prop:image} and Lemma \ref{lemma:schur} facilitate an equivalent reformulation of the optimization problem \eqref{eq:fpinvs} as a~nonlinear semidefinite program \begin{subequations}\label{eq:nsdp} \begin{alignat}{3} & \min_{\mathbf{a}, \mathbf{c}} \;& \bm{\omega}^\mathrm{T} \mathbf{c} \qquad\qquad\qquad\label{eq:nsdp_obj}\\ & \;\mathrm{s.t.}\;\; & \begin{pmatrix} c_j & -\mathbf{f}_j(\mathbf{a})^\mathrm{T}\\ -\mathbf{f}_j(\mathbf{a}) & \mathbf{K}_j(\mathbf{a}) \end{pmatrix}&{}\succeq{} 0,\; \forall j \in \{1\dots n_\mathrm{lc}\},\hspace{-4mm}\label{eq:pmi}\\ && \overline{V} - \bm{\ell}^\mathrm{T} \mathbf{a} &{}\ge{}0,\label{eq:nsdp_vol}\\ && \mathbf{a} &{}\ge{} \mathbf{0},\label{eq:nsdp_areas} \end{alignat} \end{subequations} in which only the constraint \eqref{eq:pmi} lacks convexity. Importantly, all constraints are polynomial functions of $\mathbf{a}$, forming therefore a semi-algebraic feasible set.

\subsection{Efficient polynomial reformulation}\label{sec:eff}

The optimization problem \eqref{eq:nsdp} constitutes a minimization of a linear function over a semi-algebraic set, allowing for a~solution using the moment-sum-of-squares hierarchy, as briefly discussed in Section \ref{sec:mom-sos}. However, the efficiency of the hierarchy can be improved by modifying \eqref{eq:nsdp} to provide tighter feasible sets of the relaxed problems and to reduce numerical issues via scaling of the design variables. These modifications are outlined in the following paragraphs.
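Before turning to these modifications, the Schur-complement equivalence between \eqref{eq:fpinvs_equilibrium} and the matrix block of \eqref{eq:pmi} can be cross-checked numerically; the singular PSD matrix and the load below are arbitrary illustrative choices of our own:

```python
import numpy as np

K = np.array([[2.0, 0.0], [0.0, 0.0]])   # singular PSD stiffness
f = np.array([1.0, 0.0])                 # f in Im(K)
compliance = f @ np.linalg.pinv(K) @ f   # f^T K^+ f = 0.5

def block_psd(c, tol=1e-10):
    """PSD test of the Schur-complement block [[c, -f^T], [-f, K]]."""
    M = np.block([[np.array([[c]]), -f[None, :]], [-f[:, None], K]])
    return bool(np.all(np.linalg.eigvalsh(M) >= -tol))

# the block is PSD exactly when c >= f^T K^+ f (and f lies in Im(K))
print(block_psd(compliance), block_psd(compliance - 1e-3))  # → True False
```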
\subsubsection{Compactness of the feasible set}

We start by enforcing compactness of the feasible set of the optimization problem \eqref{eq:nsdp} for two reasons. First, compactness is required for Theorem \ref{th:convergence}, in the form of Assumption~\ref{ass:comp}. Second, compactness also allows tightening the feasible sets of relaxed problems, notably improving numerical performance. \begin{proposition}\label{prop:bounds} Assume that $\mathbf{a}^*$ and $\mathbf{c}^*$ are optimal cross-section areas and compliances associated with the optimization problem \eqref{eq:nsdp}. Then, $\forall i \in \{1\dots n_\mathrm{e}\}: 0 \le a_i^* \le \overline{a}_i$ with $\overline{a}_i = \overline{V}/\ell_i$ and $\forall j \in \{1\dots n_\mathrm{lc}\}: 0 \le c_j^* \le \overline{c}/\omega_j$, where $\overline{c} = \sum_{j=1}^{n_\mathrm{lc}} \left[ \omega_j \mathbf{f}_j(\hat{\mathbf{a}})^\mathrm{T} \left[\mathbf{K}_j(\hat{\mathbf{a}})\right]^{-1} \mathbf{f}_j(\hat{\mathbf{a}}) \right]$ with $\hat{\mathbf{a}} = \mathbf{1} \overline{V}/\sum_{i=1}^{n_\mathrm{e}} \ell_i$. \end{proposition} \begin{proof} The cross-section areas are non-negative by definition \eqref{eq:nsdp_areas}. Therefore, \eqref{eq:nsdp_vol} represents a conic combination and none of the structural elements can occupy a~larger volume than the volume bound $\overline{V}$, $\forall i \in \{1\dots n_\mathrm{e}\}: a_i^* \le \overline{V}/\ell_i$. The compliance variables are placed on the main diagonal of the polynomial matrix inequality (PMI) \eqref{eq:pmi} and are hence non-negative, $c_j^* \ge 0$. Then, because $\bm{\omega} > \mathbf{0}$, the conic combination $\bm{\omega}^\mathrm{T} \mathbf{c}^*$ is an upper bound for its summands, $\omega_j c_j^* \le \bm{\omega}^\mathrm{T} \mathbf{c}^*$.
Moreover, since $\hat{\mathbf{a}}$ uniquely determines the compliances $\hat{\mathbf{c}}$, $\hat{c}_j = \mathbf{f}_j (\hat{\mathbf{a}})^\mathrm{T}\mathbf{K}_j(\hat{\mathbf{a}})^{-1}\mathbf{f}_j(\hat{\mathbf{a}})$, the pair $(\hat{\mathbf{a}}, \hat{\mathbf{c}})$ is a feasible solution to \eqref{eq:pmi}--\eqref{eq:nsdp_areas}, so that we also have $\bm{\omega}^\mathrm{T} \mathbf{c}^* \le \overline{c} = \bm{\omega}^\mathrm{T} \hat{\mathbf{c}}$. Consequently, $\forall j \in \{1\dots n_\mathrm{lc}\}: \omega_j c_j^* \le \overline{c}$. \end{proof} Among the bounds in Proposition \ref{prop:bounds}, only the compliance upper bounds are not enforced in the formulation \eqref{eq:nsdp}. Indeed, for any fixed $\mathbf{a}>\mathbf{0}$, arbitrarily large $\mathbf{c}$ remains feasible in \eqref{eq:nsdp}, so that Assumption \ref{ass:comp} is not satisfied. To make the design space bounded, we add the (redundant) upper-bound compliance constraint from Proposition \ref{prop:bounds}, or, alternatively, an upper bound obtained by solving the convex truss topology optimization problem instead, see Appendix~\ref{app:tto}. Subsequently, we arrive at the optimization problem \begin{subequations}\label{eq:nsdpC} \begin{alignat}{2} \min_{\mathbf{a}, \mathbf{c}}\; && \bm{\omega}^\mathrm{T} \mathbf{c} \qquad\qquad\qquad\label{eq:nsdpC_obj}\\ \mathrm{s.t.}\; && \begin{pmatrix} c_j & -\mathbf{f}_j(\mathbf{a})^\mathrm{T}\\ -\mathbf{f}_j(\mathbf{a}) & \mathbf{K}_j(\mathbf{a}) \end{pmatrix}&{}\succeq{} 0,\; \forall j \in \{1\dots n_\mathrm{lc}\},\label{eq:ndspC_pmi}\hspace{-3mm}\\ && \overline{V}-\bm{\ell}^\mathrm{T} \mathbf{a} &{}\ge{}0,\label{eq:nsdpC_vol}\\ && \overline{c}-\bm{\omega}^\mathrm{T} \mathbf{c} &{}\ge{}0,\\ && \mathbf{a}&{}\ge{}\mathbf{0},\label{eq:nsdpC_areas} \end{alignat} \end{subequations} for which we have the following result: \begin{proposition}\label{prop:compact} The feasible set of \eqref{eq:nsdpC} is compact.
\end{proposition} \begin{proof} The feasible set is bounded based on Proposition \ref{prop:bounds}. Moreover, the points $\mathbf{a}$ and $\mathbf{c}$ satisfying conditions \eqref{eq:nsdpC_vol}--\eqref{eq:nsdpC_areas} form a~closed set. Thus, it suffices to show that \eqref{eq:ndspC_pmi} defines a closed set as well. The entries in \eqref{eq:ndspC_pmi} are polynomial, hence continuous, functions, and the cone of positive semidefinite matrices is closed, so the points $\mathbf{a}$ and $\mathbf{c}$ satisfying \eqref{eq:ndspC_pmi} indeed form a closed set. Boundedness and closedness imply compactness because we are in a finite-dimensional space. \end{proof}

\subsubsection{Scaling and box constraints}

Having established box constraints in formulation \eqref{eq:nsdpC}, we can scale the domains of all variables to $\left[-1,1\right]$. This scaling reduces numerical issues that may arise during the problem solution. To this end, we set \begin{subequations} \begin{align} &c_j = \frac{1}{2 \omega_j} \left(c_{\mathrm{s},j} + 1\right) \overline{c}, &\forall j &\in \{1\dots n_\mathrm{lc}\},\label{eq:scaled_c}\\ &a_i = 0.5 \left(a_{\mathrm{s},i} + 1\right) \overline{a}_i,\quad &\forall i &\in \{1\dots n_\mathrm{e}\},\label{eq:scaled_a} \end{align} \end{subequations} where $\mathbf{a}_\mathrm{s}$ and $\mathbf{c}_\mathrm{s}$ are the scaled cross-section area and compliance variables. In addition, we explicitly insert the box constraints into the optimization problem formulation to tighten the feasible sets of the relaxed problems.
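The change of variables \eqref{eq:scaled_c}--\eqref{eq:scaled_a} is a simple affine map; a minimal sketch (with a placeholder bound $\overline{a}_i$ of our own choosing) verifying the target range and the round trip:

```python
def scale_a(a, a_bar):
    """Map a in [0, a_bar] to a_s in [-1, 1], inverting eq. (scaled_a)."""
    return 2.0 * a / a_bar - 1.0

def unscale_a(a_s, a_bar):
    """a = 0.5 * (a_s + 1) * a_bar, eq. (scaled_a)."""
    return 0.5 * (a_s + 1.0) * a_bar

a_bar = 2.5   # placeholder upper bound, playing the role of V_bar / l_i
for a in [0.0, 1.0, 2.5]:
    a_s = scale_a(a, a_bar)
    assert -1.0 <= a_s <= 1.0                       # scaled variable lies in [-1, 1]
    assert abs(unscale_a(a_s, a_bar) - a) < 1e-12   # the map is invertible
```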
There are multiple ways to write these box constraints $\mathbf{a}_\mathrm{s}, \mathbf{c}_\mathrm{s} \in \left[-1,1\right]$, e.g., \begin{subequations} \begin{align} \begin{aligned}\label{eq:csc_lin} -1 \le a_{\mathrm{s},i} &\le 1, \quad \forall i \in \{1\dots n_\mathrm{e}\},\\ -1 \le c_{\mathrm{s},j} &\le 1, \quad \forall j \in \{1\dots n_\mathrm{lc}\}, \end{aligned}\\ \begin{aligned}\label{eq:csc_all} a_{\mathrm{s},i}^2 &\le 1, \quad \forall i \in \{1\dots n_\mathrm{e}\},\\ c_{\mathrm{s},j}^2 &\le 1, \quad \forall j \in \{1\dots n_\mathrm{lc}\}, \end{aligned}\\ \begin{aligned}\label{eq:csc_all2} a_{\mathrm{s},i}^4 &\le 1, \quad \forall i \in \{1\dots n_\mathrm{e}\},\\ c_{\mathrm{s},j}^4 &\le 1, \quad \forall j \in \{1\dots n_\mathrm{lc}\}. \end{aligned} \end{align} \end{subequations} Although equivalent in what they enforce, their numerical performance in the moment-sum-of-squares hierarchy varies considerably; we refer the reader to the recent note of \citet{Anjos2020}. Here, we use the quadratic bounds \eqref{eq:csc_all} when the third-order terms $\mathbf{K}_{j,i}^{(3)} a_i^3$ are absent, and the quartic bounds \eqref{eq:csc_all2} otherwise.
Then, the optimization problem reads \begin{subequations}\label{eq:nsdpSC} \begin{alignat}{2} \min_{\mathbf{a}_\mathrm{s}, \mathbf{c}_\mathrm{s}}\; &&\sum_{j=1}^{n_\mathrm{lc}} \left[ 0.5 \left(c_{\mathrm{s},j} + 1\right) \overline{c}\right]\qquad\qquad\quad\;\; \label{nsdpSC_obj}\hspace{-6mm}\\ \mathrm{s.t.}\; && \begin{alignedat}{1} \begin{pmatrix} \frac{1}{2 \omega_j} \left(c_{\mathrm{s},j} + 1\right) \overline{c} & -\mathbf{f}_j(\mathbf{a}_\mathrm{s})^\mathrm{T}\\ -\mathbf{f}_j(\mathbf{a}_\mathrm{s}) & \mathbf{K}_j(\mathbf{a}_\mathrm{s}) \end{pmatrix}\succeq{} 0,\\ \forall j \in \{1\dots n_\mathrm{lc}\}, \end{alignedat} \label{eq:pmiSC}\hspace{-6mm}\\ && 2 - n_\mathrm{e} - \mathbf{1}^\mathrm{T} \mathbf{a}_{\mathrm{s}} {}\ge{}0,\label{eq:volSC}\hspace{-6mm}\\ && 1 - \bm{\omega}^\mathrm{T} \mathbf{c}_\mathrm{s} {}\ge{}0,\label{eq:compSumSC}\hspace{-6mm}\\ && \text{bound constraints } \eqref{eq:csc_all} \text{ or } \eqref{eq:csc_all2}.\hspace{-6mm}\label{eq:boxSC} \end{alignat} \end{subequations} Because the feasible set of \eqref{eq:nsdpSC} is compact by Proposition \ref{prop:compact}, one may be tempted to add a redundant polynomial inequality constraint to satisfy Assumption \ref{ass:comp}. However, the assumption is already satisfied in our case. \begin{proposition}\label{prop:archimedean} The optimization problem \eqref{eq:nsdpSC} satisfies Assumption \ref{ass:comp}. \end{proposition} \begin{proof} Let $\mathbf{G}(\mathbf{a}_\mathrm{s},\mathbf{c}_\mathrm{s})$ be a block-diagonal matrix with the blocks \eqref{eq:pmiSC}--\eqref{eq:boxSC} and let $\mathbf{H}$ be a sparse matrix of the same dimensions with the structure % \begin{equation} \mathbf{H} = \begin{pmatrix} \mathbf{0} & \mathbf{0}\\ \mathbf{0} & \mathbf{I} \end{pmatrix}, \end{equation} % in which the identity matrix $\mathbf{I} \in \mathbb{S}^{n_\mathrm{e} + n_\mathrm{lc}}$ matches the positions of \eqref{eq:boxSC} in $\mathbf{G}(\mathbf{a}_\mathrm{s},\mathbf{c}_\mathrm{s})$.
Clearly, $\mathbf{H}$ is an SOS matrix because $\mathbf{H} = \mathbf{H} \mathbf{H}^\mathrm{T}$; recall Definition \ref{def:sos}. Then, if \eqref{eq:csc_all} is used, $p_{\eqref{eq:csc_all}} = \langle \mathbf{H}\mathbf{H}^\mathrm{T}, \mathbf{G}(\mathbf{a}_\mathrm{s},\mathbf{c}_\mathrm{s}) \rangle = n_\mathrm{e}+n_\mathrm{lc} - \sum_{i=1}^{n_\mathrm{e}}a_{\mathrm{s},i}^2 - \sum_{j=1}^{n_\mathrm{lc}}c_{\mathrm{s},j}^2$, so that the level set $\{\mathbf{a}_\mathrm{s} \in \mathbb{R}^{n_\mathrm{e}}, \mathbf{c}_\mathrm{s} \in \mathbb{R}^{n_\mathrm{lc}}\;\vert\; p_{\eqref{eq:csc_all}} \ge 0\}$ is compact. For \eqref{eq:csc_all2}, $p_{\eqref{eq:csc_all2}} = n_\mathrm{e}+n_\mathrm{lc} - \sum_{i=1}^{n_\mathrm{e}}a_{\mathrm{s},i}^4 - \sum_{j=1}^{n_\mathrm{lc}}c_{\mathrm{s},j}^4$, showing that the level set $\{\mathbf{a}_\mathrm{s} \in \mathbb{R}^{n_\mathrm{e}}, \mathbf{c}_\mathrm{s} \in \mathbb{R}^{n_\mathrm{lc}}\;\vert\; p_{\eqref{eq:csc_all2}} \ge 0\}$ is also compact. \end{proof} \begin{remark}\label{rem:tigher} The higher-order constraints in \eqref{eq:boxSC} are tighter in the moment representation, i.e., \eqref{eq:csc_all} are tighter than \eqref{eq:csc_lin} and \eqref{eq:csc_all2} are tighter than \eqref{eq:csc_all}. \end{remark} To see this, assume that $\left(y_0, y_1, y_2, y_3, y_4\right)$ are the moments associated with the canonical basis of the vector space of polynomials of degree at most four, $(1, x, x^2, x^3, x^4)$, where $x$ stands for any component of $\mathbf{a}_\mathrm{s}$ or $\mathbf{c}_\mathrm{s}$. Then, in the first relaxation of the moment-sum-of-squares hierarchy, the fourth-order constraint $1-x^4 \ge 0$ becomes \begin{equation}\label{eq:4} y_0 - y_4 \ge 0, \end{equation} the quadratic constraint $1-x^2 \ge 0$ yields \begin{equation} y_0 - y_{2} \ge 0, \label{eq:1} \end{equation} and the box constraint $-1 \le x \le 1$ provides \begin{equation} -y_0 \le y_1 \le y_0, \label{eq:box} \end{equation} with $y_0 = 1$.
Moreover, the localizing matrix of the entire optimization problem contains principal submatrices \begin{subequations}\label{eq:lmat} \begin{align} \begin{pmatrix} y_0 & y_1 \\ y_1 & y_2 \end{pmatrix} &\succeq 0,\label{eq:lmat_a}\\ \begin{pmatrix} y_0 & y_2 \\ y_2 & y_4 \end{pmatrix} &\succeq 0,\label{eq:lmat_b}\\ \begin{pmatrix} y_2 & y_3 \\ y_3 & y_4 \end{pmatrix} &\succeq 0,\label{eq:lmat_c} \end{align} \end{subequations} that must be positive semi-definite as the entire localizing matrix is. Notice that \eqref{eq:lmat_b} and \eqref{eq:lmat_c} are not present in the degree-one relaxation, which can, therefore, be used only if the bending stiffness is a polynomial of degree at most two. Thus, we omit them in the reasoning behind the relaxation tightness for \eqref{eq:csc_lin} and \eqref{eq:csc_all}. In the lowest, degree-two, relaxation the fourth-order constraints \eqref{eq:csc_all2} provide $y_4 \le 1$ from Eq. \eqref{eq:4} and $y_2 \ge 0$ with $y_4 \ge 0$ based on \eqref{eq:lmat}. Moreover, the determinants of \eqref{eq:lmat} must be non-negative, implying that $y_1^2 \le y_2$, $y_3^2 \le y_2 y_4$, and $y_2^2 \le y_4$. A combination of these inequalities then results in $0 \le y_1^4 \le y_2^2 \le y_4 \le 1$ and $0 \le y_3^2 \le y_2 y_4 \le 1$. For the quadratic constraints \eqref{eq:csc_all}, $y_2 \le 1$ from Eq. \eqref{eq:1} and $y_2 \ge 0$ because of Eq. \eqref{eq:lmat}. Writing the determinant of \eqref{eq:lmat} then provides us with $y_1^2 \le y_2$. Consequently, we observe that $0 \le y_1^2 \le y_2 \le 1$. Notice that this inequality is automatically satisfied in the preceding case. In the case of pure box constraints \eqref{eq:csc_lin}, we only have $0 \le y_1^2 \le 1$, Eq. \eqref{eq:box}, and $y_1^2 \le y_2$, Eq.~\eqref{eq:lmat}. Note that there is no upper bound for $y_2$, which can attain arbitrarily large values in the first relaxation. 
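The derived chains can also be checked empirically (an illustrative sketch only): moments of any probability measure supported on $[-1,1]$ must satisfy the positive semi-definiteness conditions of the principal submatrices above as well as both inequality chains:

```python
import numpy as np

# Moments y_k = E[x^k] of random probability measures on [-1, 1] must obey
# the 2x2 principal-submatrix PSD conditions and the derived chains
# 0 <= y1^4 <= y2^2 <= y4 <= 1 and y3^2 <= y2*y4.
rng = np.random.default_rng(1)
for _ in range(100):
    x = rng.uniform(-1.0, 1.0, 50)   # atoms of a discrete measure
    w = rng.random(50)
    w /= w.sum()                     # probability weights, so y_0 = 1
    y1, y2, y3, y4 = (float(np.sum(w * x**k)) for k in range(1, 5))
    # the three 2x2 principal submatrices are positive semidefinite
    for m in ([[1.0, y1], [y1, y2]],
              [[1.0, y2], [y2, y4]],
              [[y2, y3], [y3, y4]]):
        assert np.linalg.eigvalsh(np.array(m)).min() >= -1e-12
    # non-negative determinants yield the inequality chains
    assert 0.0 <= y1**4 <= y2**2 <= y4 <= 1.0
    assert y3**2 <= y2 * y4
```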
From the mechanical point of view, the missing upper bound on $y_2$ allows for arbitrarily large rotational stiffnesses $\mathbf{K}_{j,i}^{(2)} a_i^2$ of the elements. These observations then allow us to show feasibility of the first-order moments for \eqref{eq:volSC}--\eqref{eq:boxSC}: \begin{proposition}\label{prop:feasiblemom} Let $\mathbf{y}^*_{\mathbf{c}^1}$ and $\mathbf{y}^*_{\mathbf{a}^1}$ be the first-order moments associated with the variables $\mathbf{c}_\mathrm{s}$ and $\mathbf{a}_\mathrm{s}$ obtained from a solution to any relaxation of \eqref{eq:nsdpSC} using the moment-sum-of-squares hierarchy. Then, these moments satisfy \begin{subequations} \begin{align} 2 - n_\mathrm{e} - \mathbf{1}^\mathrm{T} \mathbf{y}^*_{\mathbf{a}^1} &\ge 0,\label{eq:fom1}\\ 1 - \bm{\omega}^\mathrm{T} \mathbf{y}^*_{\mathbf{c}^1} &\ge 0,\label{eq:fom2}\\ 1 - \left(y^*_{a_i^1}\right)^2 & \ge 0 \quad\text{if \eqref{eq:csc_all} is used},\label{eq:fom3}\\ 1 - \left(y^*_{a_i^1}\right)^4 & \ge 0 \quad\text{if \eqref{eq:csc_all2} is used},\\ 1 - \left(y^*_{c_j^1}\right)^2 & \ge 0 \quad\text{if \eqref{eq:csc_all} is used},\\ 1 - \left(y^*_{c_j^1}\right)^4 & \ge 0 \quad\text{if \eqref{eq:csc_all2} is used}.\label{eq:fom4} \end{align} \end{subequations} \end{proposition} \begin{proof} \eqref{eq:fom1} and \eqref{eq:fom2} hold trivially by construction of the hierarchy. \eqref{eq:fom3}--\eqref{eq:fom4} follow from Remark \ref{rem:tigher}. \end{proof} \subsection{Recovering feasible upper-bound solutions}\label{sec:ub} In Proposition \ref{prop:feasiblemom}, we have shown that the first-order moments obtained by solving any relaxation of the moment-sum-of-squares hierarchy satisfy all the constraints of \eqref{eq:nsdpSC} except for \eqref{eq:pmiSC}. This section is therefore devoted to the question of how to ``correct'' these moments to produce feasible upper bounds to the original problem \eqref{eq:nsdp} and provide a natural sufficient condition of global optimality.
We start by proving the following essential result: \begin{proposition}\label{prop:inimage} Let $\mathbf{y}^*_{\mathbf{c}^1}$ and $\mathbf{y}^*_{\mathbf{a}^1}$ be the first-order moments associated with the variables $\mathbf{c}_\mathrm{s}$ and $\mathbf{a}_\mathrm{s}$ obtained from a solution to any relaxation of \eqref{eq:nsdpSC} using the moment-sum-of-squares hierarchy and let $\tilde{a}_i = 0.5(y_{a_i^1}^{*}+1)\overline{a}$, $\forall i \in \{1\dots n_\mathrm{e}\}$, be the corresponding cross-section areas. Then, % \begin{equation}\label{eq:propinimage} \mathbf{f}_{j,0} + \sum_{i=1}^{n_\mathrm{e}} \mathbf{f}_{j,i} \tilde{a}_i \in \mathrm{Im}\left(\mathbf{K}_{j,0} + \sum_{i=1}^{n_\mathrm{e}}\sum_{k=1}^3 \mathbf{K}_{j,i}^{(k)} \tilde{a}_i^k\right). \end{equation} \end{proposition} \begin{proof} In the lowest relaxation of the moment-sum-of-squares hierarchy, the PMI constraint \eqref{eq:pmiSC} becomes % \begin{equation} \begin{pmatrix} \frac{1}{2 \omega_j} \left(y_{c_j^1}^{*} + 1\right) \overline{c} & -\mathbf{f}_j\left(\mathbf{y}^{*}_{\mathbf{a}^1}\right)^\mathrm{T}\\ -\mathbf{f}_j\left(\mathbf{y}^{*}_{\mathbf{a}^1}\right) & \mathbf{K}_j\left(\mathbf{y}^{*}_{\mathbf{a}^1}, \mathbf{y}^{*}_{\mathbf{a}^2}, \mathbf{y}^{*}_{\mathbf{a}^3}\right) \end{pmatrix} \succeq 0, \end{equation} % where $\mathbf{y}^*_{\mathbf{a}^2}$ and $\mathbf{y}^*_{\mathbf{a}^3}$ are the second- and third-order moments associated with $\mathbf{a}_\mathrm{s}$ and, with a slight abuse of notation, $\mathbf{K}_j\left(\mathbf{y}^{*}_{\mathbf{a}^1}, \mathbf{y}^{*}_{\mathbf{a}^2}, \mathbf{y}^{*}_{\mathbf{a}^3}\right)$ and $\mathbf{f}_j \left(\mathbf{y}^{*}_{\mathbf{a}^1}\right)$ are the stiffness matrix and the force column vector constructed from the moments $\mathbf{y}$.
Using Lemma \ref{lemma:schur} and Proposition \ref{prop:image}, we observe that % \begin{equation} \mathbf{f}_j(\mathbf{y}^{*}_{\mathbf{a}^1}) \in \mathrm{Im}\left(\mathbf{K}_j\left(\mathbf{y}^{*}_{\mathbf{a}^1}, \mathbf{y}^{*}_{\mathbf{a}^2}, \mathbf{y}^{*}_{\mathbf{a}^3}\right)\right). \end{equation} % Because \eqref{eq:propinimage} involves solely degree-one moments and $\forall \mathbf{a}>\mathbf{0}: \mathbf{K}_j(\mathbf{a}) \succ 0$ was our initial assumption, it remains to show that the combination of $a_i = 0$ with $I_i > 0$ cannot occur for any $i$, because that would result in a lower rank of $\mathbf{K}_j (\tilde{\mathbf{a}})$ when compared with $\mathbf{K}_j\left(\mathbf{y}^{*}_{\mathbf{a}^1}, \mathbf{y}^{*}_{\mathbf{a}^2}, \mathbf{y}^{*}_{\mathbf{a}^3}\right)$. To this end, let $a_i = 0$, which is equivalent to $y_{a_i^1}^{*} = -1$. Then, the non-negative determinants of \eqref{eq:lmat_a} and \eqref{eq:lmat_b}, together with the inequalities \eqref{eq:csc_all} or \eqref{eq:csc_all2}, imply that $y_{a_i^2}^{*} = y_{a_i^4}^{*} = 1$. Moreover, the determinant of the principal submatrix % \begin{equation}\label{eq:mom2} \begin{pmatrix} 1 & y_{a_i^1}^{*} & y_{a_i^2}^{*}\\ y_{a_i^1}^{*} & y_{a_i^2}^{*} & y_{a_i^3}^{*}\\ y_{a_i^2}^{*} & y_{a_i^3}^{*} & y_{a_i^4}^{*} \end{pmatrix} \succeq 0 \end{equation} % of the moment matrix must also be non-negative. Inserting $y_{a_i^2}^{*} = y_{a_i^4}^{*} = 1$ with $y_{a_i^1}^{*}=-1$ into \eqref{eq:mom2} yields a unique feasible $y_{a_i^3}^{*} = -1$. Thus, we write the moment of inertia in terms of the scaled cross-section areas \eqref{eq:scaled_a}. After inserting the moments derived above, we obtain % \begin{multline} I_i = 0.25 c_\mathrm{II} \overline{a}^2 \left( y_{a_i^2}^{*} + 2 y_{a_i^1}^{*} + 1\right) + \\ 0.125 c_\mathrm{III} \overline{a}^3 \left(y_{a_i^3}^{*} + 3 y_{a_i^2}^{*} + 3 y_{a_i^1}^{*} + 1\right) = 0.
\end{multline} \end{proof} Using Proposition \ref{prop:inimage}, we can correct $\mathbf{c}$ based on $\mathbf{y}_{\mathbf{a}^1}^{*}$ to provide a feasible solution to \eqref{eq:nsdp}. \begin{theorem}\label{th:feasible} Let $\mathbf{y}_{\mathbf{c}^1}^{*}$ and $\mathbf{y}_{\mathbf{a}^1}^{*}$ be the first-order moments associated with the variables $\mathbf{c}_\mathrm{s}$ and $\mathbf{a}_\mathrm{s}$ obtained from a solution to any relaxation of \eqref{eq:nsdpSC} using the moment-sum-of-squares hierarchy. Then, % \begin{subequations} \begin{align} \tilde{a}_i &= 0.5 (y_{a_i^1}^{*} + 1) \overline{a}_i,\;&\forall i \in \{1\dots n_\mathrm{e}\},\\ \tilde{c}_j &= \left[\mathbf{f}_j (\tilde{\mathbf{a}})\right]^\mathrm{T} \mathbf{K}_j^{\dagger}(\tilde{\mathbf{a}}) \mathbf{f}_j(\tilde{\mathbf{a}}), \;&\forall j \in \{1\dots n_\mathrm{lc}\}\label{eq:corrcompl} \end{align} \end{subequations} % is a feasible (upper-bound) solution to \eqref{eq:nsdp}. \end{theorem} \begin{proof} Based on Proposition \ref{prop:feasiblemom}, $\tilde{\mathbf{a}}$ satisfies the constraints imposed on the cross-section areas. By correcting the compliance variables according to \eqref{eq:corrcompl}, the equilibrium equation (and so the PMI \eqref{eq:pmi}) is satisfied due to Proposition \ref{prop:inimage}. Consequently, all the constraints of \eqref{eq:nsdp} are satisfied by the pair $\tilde{\mathbf{a}}$, $\tilde{\mathbf{c}}$, showing that $\bm{\omega}^\mathrm{T} \mathbf{c}^* \le \bm{\omega}^\mathrm{T} \tilde{\mathbf{c}} < \infty$. \end{proof} We wish to emphasize that Theorem \ref{th:feasible} establishes feasibility of the upper bounds with respect to \eqref{eq:nsdp}; such upper bounds may still violate the compliance bound constraint \eqref{eq:compSumSC}. Thus, knowledge of $\bm{\omega}^\mathrm{T} \mathbf{c}^*$ does not assure convergence of the lowest relaxation to the optimal cross-section areas.
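The correction \eqref{eq:corrcompl} can be illustrated by a small numerical sketch, with random data standing in for $\mathbf{K}_j(\tilde{\mathbf{a}})$ and $\mathbf{f}_j(\tilde{\mathbf{a}})$ (so this is not the actual finite-element assembly): whenever $\mathbf{f} \in \mathrm{Im}(\mathbf{K})$, setting $c = \mathbf{f}^\mathrm{T} \mathbf{K}^{\dagger} \mathbf{f}$ renders the PMI block positive semi-definite:

```python
import numpy as np

# Compliance correction via the pseudoinverse: for PSD (possibly singular) K
# with f in Im(K), c = f^T K^+ f makes [[c, -f^T], [-f, K]] positive
# semidefinite (generalized Schur-complement argument).
rng = np.random.default_rng(2)
B = rng.standard_normal((5, 3))
K = B @ B.T                        # PSD stiffness surrogate, rank 3 (singular)
f = K @ rng.standard_normal(5)     # guarantees f lies in the image of K

c = f @ np.linalg.pinv(K) @ f      # corrected compliance
pmi = np.block([[np.array([[c]]), -f[None, :]],
                [-f[:, None], K]])
assert np.linalg.eigvalsh(pmi).min() >= -1e-9
```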
\subsection{Certificate of global $\varepsilon$-optimality}\label{sec:opt} Because the hierarchy generates a sequence of lower bounds and we have just shown in Theorem \ref{th:feasible} how to compute upper bounds in each relaxation, we naturally arrive at a simple sufficient condition of global $\varepsilon$-optimality. \begin{theorem}\label{th:suff} Let $\mathbf{y}_{\mathbf{c}^1}^{*}$ and $\mathbf{y}_{\mathbf{a}^1}^{*}$ be the first-order moments associated with the variables $\mathbf{c}_\mathrm{s}$ and $\mathbf{a}_\mathrm{s}$ obtained from a solution to any relaxation of \eqref{eq:nsdpSC} using the moment-sum-of-squares hierarchy. Then, % \begin{equation}\label{eq:suff} \bm{\omega}^\mathrm{T}\tilde{\mathbf{c}} - \sum_{j=1}^{n_\mathrm{lc}}\left[ 0.5 (y_{c_j^1}^{*}+1) \overline{c} \right] \le \varepsilon \end{equation} % is a sufficient condition of global $\varepsilon$-optimality. \end{theorem} Theorem \ref{th:suff} is very simple to verify computationally, significantly simpler than the traditional rank-based certificate of global optimality \eqref{eq:rank}, e.g., \citep{Henrion_2006}. However, \eqref{eq:suff} fails to be a necessary condition. Indeed, the optimality gap $\varepsilon$ may remain strictly positive even when the hierarchy has converged according to \eqref{eq:rank}, in the case of multiple globally optimal solutions. Then, the optimal first-order moments $\mathbf{y}$ are not unique; for instance, they may correspond to any convex combination of the global optima; we refer to Section \ref{sec:failed} for a specific example. A stronger result holds, however, when the optimization problem possesses a~unique global optimum. To show this, we first prove that, with an increasing relaxation degree $r$, the feasible space of the relaxations converges to the convex hull of that of the initial (non-convex) problem.
\begin{proposition}\label{prop:hull} Let $\mathcal{K}^{(r)}$ be the feasible set of the first-order moments in the $r$-th relaxation of the moment-sum-of-squares hierarchy of \eqref{eq:nsdpSC}. Then, $\mathcal{K}^{(r)} \downarrow \text{conv}(\mathcal{K})$ as $r\rightarrow \infty$, where $\mathcal{K}$ is the intersection of \eqref{eq:pmiSC}--\eqref{eq:boxSC}. \end{proposition} \begin{proof} Let $f(\mathbf{a}_\mathrm{s},\mathbf{c}_\mathrm{s})$ be an arbitrary affine function. Based on Proposition \ref{prop:archimedean}, Assumption \ref{ass:comp} holds for \eqref{eq:nsdpSC} independently of the objective function. Hence, minimizing $f(\mathbf{a}_\mathrm{s}, \mathbf{c}_\mathrm{s})$ over $\mathcal{K}^{(r)}$ yields lower bounds that converge monotonically from below to the minimum of $f(\mathbf{a}_\mathrm{s}, \mathbf{c}_\mathrm{s})$ over $\mathcal{K}$ as $r\rightarrow \infty$ due to Theorem \ref{th:convergence}. Because $f(\mathbf{a}_\mathrm{s}, \mathbf{c}_\mathrm{s})$ is arbitrary and the sets $\mathcal{K}^{(r)}$ are nested convex outer approximations, $\mathcal{K}^{(r)} \downarrow \text{conv}(\mathcal{K})$ as $r\rightarrow \infty$. \end{proof} Finally, we can prove that the hierarchy eventually attains a~zero optimality gap. \begin{theorem}\label{th:zero} If there is a unique global solution to \eqref{eq:nsdpSC}, then % \begin{equation} \bm{\omega}^\mathrm{T}\tilde{\mathbf{c}} - \sum_{j=1}^{n_\mathrm{lc}}\left[ 0.5 (y_{c_j^1}^{(r)*}+1) \overline{c} \right] = 0 \end{equation} % as $r\rightarrow \infty$. \end{theorem} \begin{proof} Assuming $r \rightarrow \infty$, optimization of \eqref{nsdpSC_obj} over $\mathcal{K}^{(r)}$ is equivalent to optimization of \eqref{nsdpSC_obj} over $\text{conv}(\mathcal{K})$ by Proposition \ref{prop:hull}. Because $\mathcal{K}$ is compact, its convex hull must also be compact. Hence, it can be equivalently expressed as the convex hull of the limit points of $\mathcal{K}$ that are denoted by $\mathbf{d}_1, \mathbf{d}_2, \dots$, i.e., % \begin{equation} \text{conv}(\mathcal{K}) = \text{conv}(\cup_{i=1}^{\infty} \{\mathbf{d}_i\}).
\end{equation} % Because we assume a unique global optimum when optimizing over $\mathcal{K}$, there must be a unique limit point $\mathbf{d}^*$ associated with this optimum. The linear objective \eqref{nsdpSC_obj} then attains its minimum over $\text{conv}(\mathcal{K})$ solely at $\mathbf{d}^*$, so the first-order moments converge to $\mathbf{d}^*$ and the upper bound recovered by Theorem \ref{th:feasible} matches the lower bound, yielding a zero optimality gap. \end{proof} \begin{remark} Although Theorem \ref{th:zero} relies on $r \rightarrow \infty$, a finite (and fairly small) $r$ sufficed in all our test cases to reach the zero optimality gap. Moreover, this equality of bounds occurred whenever the hierarchy converged based on the rank test \eqref{eq:rank}. It might be possible, therefore, to strengthen Theorem \ref{th:zero} to a finite termination result. \end{remark} \subsection{Global topology optimization of shell structures}\label{sec:shells} Until now, solely frame structures have been considered. However, the optimization formulations \eqref{eq:nsdp} and \eqref{eq:nsdpSC} allow for simple modifications to optimize other discrete structures such as shells. Let $\mathbf{t} \in \mathbb{R}_{\ge 0}^{n_\mathrm{e}}$ be the vector of shell element thicknesses. Then, the formulation \eqref{eq:nsdp} becomes \begin{subequations}\label{eq:nsdpSH} \begin{alignat}{5} \min_{\mathbf{t}, \mathbf{c}}\; && \bm{\omega}^\mathrm{T} \mathbf{c} \qquad\qquad\qquad\;\;\label{eq:nsdp_objSH}\\ \mathrm{s.t.}\;\; && \begin{pmatrix} c_j & -\mathbf{f}_j(\mathbf{t})^\mathrm{T}\\ -\mathbf{f}_j(\mathbf{t}) & \mathbf{K}_j(\mathbf{t}) \end{pmatrix}\;&&\succeq&&\; 0,&&\; \forall j \in \{1\dots n_\mathrm{lc}\},\hspace{-4mm}\label{eq:pmiSH}\\ && \overline{V} - \mathbf{s}^\mathrm{T} \mathbf{t} \;&&\ge&&\;0,\label{eq:nsdp_volSH}\\ && \mathbf{t}\;&&\ge&&\; \mathbf{0},\label{eq:nsdp_areasSH} \end{alignat} \end{subequations} where $\mathbf{s} \in \mathbb{R}_{>0}^{n_\mathrm{e}}$ is a vector of the surface areas of individual shell elements, and $\mathbf{K}_j(\mathbf{t})$ is assembled as \begin{equation} \mathbf{K}_j (\mathbf{t}) = \mathbf{K}_{j,0} + \sum_{i=1}^{n_\mathrm{e}}\left[ \mathbf{K}_{j,i}^{(1)} t_i + \mathbf{K}_{j,i}^{(3)} t_i^3 \right].
\end{equation} Because the design variables can be bounded very similarly to Proposition \ref{prop:bounds} and scaled, all proven results hold true. \begin{figure*}[t] \centering \begin{subfigure}{0.15\linewidth} \begin{tikzpicture} \centering \scaling{1.25} \point{a}{0.000000}{0.000000} \notation{1}{a}{\circled{$1$}}[below right=0mm] \point{b}{1.000000}{1.000000} \notation{1}{b}{\circled{$2$}}[below=1mm] \point{c}{0.000000}{2.000000} \notation{1}{c}{\circled{$3$}}[above right=0mm] \beam{2}{a}{b} \notation{4}{a}{b}[$1$] \beam{2}{b}{c} \notation{4}{b}{c}[$2$] \support{3}{a}[270] \support{3}{c}[270] \point{d1}{0.000000}{-0.500000} \point{d2}{1.000000}{-0.500000} \dimensioning{1}{d1}{d2}{-1.000000}[$1$] \point{e1}{1.250000}{0.000000} \point{e2}{1.250000}{1.000000} \dimensioning{2}{e1}{e2}{-0.7500000}[$1$] \point{e3}{1.250000}{2.000000} \dimensioning{2}{e2}{e3}{-0.75000000}[$1$] \load{1}{b}[0][-1.0][0.0] \notation{1}{b}{$1$}[above=10mm] \load{1}{b}[90][1.0][0.0] \notation{1}{b}{$1$}[left=9mm] \end{tikzpicture} \caption{} \end{subfigure}% \hfill\begin{subfigure}{0.1\linewidth} \vspace{9.6mm} \centering \begin{tikzpicture}[scale=0.75] \scaling{0.15} \point{a}{0}{0}; \point{b}{5}{0}; \point{c}{5}{10}; \draw[black, fill=gray, fill opacity=0.2] (0.0,0.0) rectangle ++(1,2); \dimensioning{1}{a}{b}{-0.75}[$0.3$]; \dimensioning{2}{b}{c}{1.75}[$h_i$]; \end{tikzpicture} \vspace{11mm} \caption{} \end{subfigure}% \hfill\begin{subfigure}{0.17\linewidth} \vspace{4mm}\includegraphics[width=\linewidth]{include/multipleGlobalOptima2_feasibleset.png} \vspace{4.5mm} \caption{} \end{subfigure}% \hfill\begin{subfigure}{0.17\linewidth} \vspace{4mm}\includegraphics[width=\linewidth]{include/multipleGlobalOptima2_relaxation2.png} \vspace{4.5mm} \caption{} \end{subfigure} \hfill\begin{subfigure}{0.17\linewidth} \vspace{4mm}\includegraphics[width=\linewidth]{include/multipleGlobalOptima2_relaxation3.png} \vspace{4.5mm} \caption{} \end{subfigure}% \hfill\begin{subfigure}{0.17\linewidth} 
\vspace{4mm}\includegraphics[width=\linewidth]{include/multipleGlobalOptima2_relaxation4.png} \vspace{4.5mm} \caption{} \end{subfigure} \caption{Frame structure composed of two elements: (a) boundary conditions, (b) the cross-section parametrization, and the sub-level set $\bm{\omega}^\mathrm{T} \mathbf{c}\le 10$ of the (c) feasible space and of the (d) second, (e) third, and (f) fourth outer approximations with the associated lower and upper bounds. Variables $a_1$ and $a_2$ stand for the cross-section areas of the two elements, $\bm{\omega}^\mathrm{T} \mathbf{c}$ denotes the corresponding weighted compliance of the two load cases (assuming the moments of inertia $I_i = 25/27 a_i^3$, $i \in \{1,2\}$), and $h_i$ is the cross-section height.} \label{fig:mg} \end{figure*} \section{Sample problems}\label{sec:examples} This section investigates global topology optimization of selected small-scale structural design problems using the proposed strategy solved numerically by the \textsc{Mosek} optimizer \citep{mosek}. These examples demonstrate the strengths and weaknesses of the presented approach: a certificate of global $\varepsilon$-optimality using Theorem \ref{th:suff} and extraction\footnote{For rank computation, we treated the eigenvalues with the absolute value smaller than $10^{-8}$ as zero.} of all guaranteed globally optimal solutions based on the flat extension theorem \citep{Curto_1996}, but also higher computational demands when compared to selected local optimization techniques: OC and MMA adopting the nested approach, see, e.g., \citep{Bendsoe_2004}, \textsc{Matlab}'s built-in optimizer \texttt{fmincon} solving \eqref{eq:original} directly, and the non-linear semidefinite programming (NSDP) formulation \eqref{eq:nsdp} solved by the \textsc{Penlab} optimizer \citep{Fiala2013}. Except for the nested approaches, all optimization problems were modeled using the \textsc{Yalmip} toolbox \citep{Lofberg2004}.
Our implementation and the corresponding source codes written in \textsc{Matlab} can be accessed at \citep{tyburec_marek_2020_4048828}. The first three examples involve two finite elements only to allow visualization of the feasible sets and provide intuition about the solution approach. In the later part, we investigate the influence of finite element types on the optimal design and increase the number of elements to evaluate scalability of the approach. All computations were performed on a personal laptop with $16$~GB of RAM and an Intel\textsuperscript{\textregistered} Core\texttrademark~i5-8350U CPU. Times of the individual optimization approaches are measured to allow a simple comparison of the computational demands. \subsection{Structure possessing multiple global optima}\label{sec:multiple} As the first problem, we consider a frame structure composed of two Euler-Bernoulli frame elements, see Fig.~\ref{fig:mg}a. Two loads are applied, each of them acting as a separate load case, weighted equally by $\bm{\omega} = \mathbf{1}$. Both these frame elements possess Young's modulus $E = 1$, and their overall volume is bounded by $\overline{V}=0.816597322$ from above\footnote{Fewer digits may prevent the solver from reaching all three global optima. Although an analytical formula for this specific $\overline{V}$ can be derived, we omit it for the sake of brevity.}. In accordance with Fig.~\ref{fig:mg}b, the elements $i \in \{1,2\}$ have rectangular cross-sections with areas $a_i = 0.3h_i$. Then, $I_i = \frac{1}{40}h_i^3$, which implies that $c_\mathrm{II} = 0$ and $c_\mathrm{III} = 25/27$ in Eq.~\eqref{eq:inertia}. The feasible domain of the optimization problem shown in Fig.~\ref{fig:mg}c reveals that there are three global optima with the objective function value $7.738$, corresponding to the following cases: (i) $a_1^* = \overline{V}/\sqrt{2}$ and $a_2^* = 0$, (ii) $a_1^* = 0$ and $a_2^* = \overline{V}/\sqrt{2}$, and (iii) $a_1^* = a_2^* = \overline{V} \sqrt{2}/4$.
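These optima can be sanity-checked by a short computation (assuming, per Fig.~\ref{fig:mg}a, that both elements have length $\sqrt{2}$): all three designs exhaust the volume bound exactly, and the rectangular parametrization is consistent with $c_\mathrm{III} = 25/27$:

```python
import numpy as np

# Both elements span the diagonals (0,0)-(1,1) and (1,1)-(0,2), so each has
# length sqrt(2) and the structural volume is V = sqrt(2) * (a1 + a2).
V_bar = 0.816597322
L = np.sqrt(2.0)
optima = [(V_bar / np.sqrt(2.0), 0.0),
          (0.0, V_bar / np.sqrt(2.0)),
          (V_bar * np.sqrt(2.0) / 4.0, V_bar * np.sqrt(2.0) / 4.0)]
for a1, a2 in optima:
    assert np.isclose(L * (a1 + a2), V_bar)

# Rectangular cross-section: a = 0.3 h and I = h^3 / 40 imply I = (25/27) a^3.
h = 1.7                      # arbitrary positive cross-section height
a = 0.3 * h
assert np.isclose(h**3 / 40.0, (25.0 / 27.0) * a**3)
```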
All these solutions are extracted in the fourth relaxation of the moment-sum-of-squares hierarchy (Fig. \ref{fig:mg}f), which converged based on the rank condition \eqref{eq:rank} with the rank equal to $s=3$ and also based on Theorem~\ref{th:suff}, $\varepsilon = 6 \times 10^{-9}$. Notice that in all the relaxations, the upper bounds recovered by Theorem~\ref{th:feasible} are global minima, Figs.~\ref{fig:mg}d-\ref{fig:mg}f. Because all local minima are also global, all tested optimization algorithms converge to the optimal objective function value, see Table \ref{tab:multiple2}. Among these algorithms, OC and MMA exhibited the best performance in terms of computational time. \begin{table}[!b] \centering\setlength{\tabcolsep}{5pt} \begin{tabular}{lccccc} method & $a_1$ & $a_2$ & LB & UB & time [s]\\ \hline OC & $0.289$ & $0.289$ & - & $7.738$ & $0.009$ \\ MMA & $0.289$ & $0.289$ & - & $7.738$ & $0.011$ \\ \texttt{fmincon} & $0.289$ & $0.289$ & - & $7.738$ & $0.113$\\ NSDP & $0.289$ & $0.289$ & - & $7.738$ & $0.409$\\ PO$^{(2)}$, Th.~\ref{th:suff} & $0.289$ & $0.289$ & $5.065$ & $7.738$ & $0.023$\\ PO$^{(3)}$, Th.~\ref{th:suff} & $0.289$ & $0.289$ & $7.647$ & $7.738$ & $0.071$\\ PO$^{(4)}$, Th.~\ref{th:suff} & $0.289$ & $0.289$ & $7.738$ & $7.738$ & $0.438$ \end{tabular} \caption{Different optimization methods applied to the first optimization problem. LB denotes lower bound, UB abbreviates feasible upper bounds, and PO stands for polynomial optimization. The entries $a_i$ denote cross-section areas of the $i$-th element, or the areas constructed from the first-order moments in the case of PO.} \label{tab:multiple2} \end{table} \subsection{Irreducible positive optimality gap}\label{sec:failed} Let us now modify the optimization problem described in the preceding section by fixing the volume bound to some $\overline{V} \in (0.816597322, 2.73603242)$. 
Whilst the boundary points of this open interval match the cases when three global optima occur, the interval interior removes $a_1 = a_2 = \overline{V} \sqrt{2}/4$ from the set of globally optimal solutions. In what follows, we set $\overline{V}$ to the center of the interval. \begin{figure}[!b] \centering \begin{subfigure}{0.33\linewidth} \includegraphics[width=\linewidth]{include/sufficientConditionFailed2_feasibleset.png} \caption{} \end{subfigure}% \hfill\begin{subfigure}{0.33\linewidth} \includegraphics[width=\linewidth]{include/sufficientConditionFailed2_relaxation2.png} \caption{} \end{subfigure}% \hfill\begin{subfigure}{0.33\linewidth} \includegraphics[width=\linewidth]{include/sufficientConditionFailed2_relaxation3.png} \caption{} \end{subfigure} \caption{Frame structure possessing a~non-zero optimality gap. The sub-level set $\bm{\omega}^\mathrm{T} \mathbf{c}\le 4$ of the (a) feasible space and of the (b) second, and (c) third outer approximations with the associated lower- and upper-bounds. 
Variables $a_1$ and $a_2$ stand for the cross-section areas of the two elements and $\bm{\omega}^\mathrm{T} \mathbf{c}$ denotes the corresponding weighted compliance of the two load cases (assuming the moments of inertia $I_i = 25/27 a_i^3$, $i \in \{1,2\}$).} \label{fig:sufficient} \end{figure} \begin{table}[!t] \centering\setlength{\tabcolsep}{5pt} \begin{tabular}{lccccc} method & $a_1$ & $a_2$ & LB & UB & time [s]\\ \hline OC & $0.628$ & $0.628$ & - & $2.161$ & $0.003$ \\ MMA & $0.628$ & $0.628$ & - & $2.161$ & $0.004$ \\ \texttt{fmincon} & $0.628$ & $0.628$ & - & $2.161$ & $0.049$\\ NSDP & $0.628$ & $0.628$ & - & $2.161$ & $0.200$\\ PO$^{(2)}$, Th.~\ref{th:suff} & $0.628$ & $0.628$ & $0.936$ & $2.161$ & $0.022$\\ PO$^{(3)}$, Th.~\ref{th:suff} & $0.628$ & $0.628$ & $1.640$ & $2.161$ & $0.063$\\ \multirow{2}{*}{PO$^{(3)}$, Eq.~\eqref{eq:rank}} & $1.256$ & $0.000$ & $1.640$ & $1.640$ & $0.063$\\ & $0.000$ & $1.256$ & $1.640$ & $1.640$ & $0.063$ \end{tabular} \caption{Different optimization methods applied to the second optimization problem. LB denotes lower bound, UB abbreviates feasible upper bounds, and PO stands for polynomial optimization. 
The entries $a_i$ denote cross-sectional areas of the $i$-th element, or the areas constructed from the first-order moments in the case of PO.} \label{tab:sufficient} \end{table} \begin{figure*}[b] \centering \begin{subfigure}{0.15\linewidth} \begin{tikzpicture} \centering \scaling{1.25} \point{a}{0.000000}{0.000000} \notation{1}{a}{\circled{$1$}}[below right=0mm] \point{b}{1.000000}{1.000000} \notation{1}{b}{\circled{$2$}}[below=1mm] \point{c}{0.000000}{2.000000} \notation{1}{c}{\circled{$3$}}[above right=0mm] \beam{2}{a}{b} \notation{4}{a}{b}[$1$] \beam{2}{b}{c} \notation{4}{b}{c}[$2$] \support{3}{a}[270] \support{3}{c}[270] \point{d1}{0.000000}{-0.500000} \point{d2}{1.000000}{-0.500000} \dimensioning{1}{d1}{d2}{-1.000000}[$1$] \point{e1}{1.250000}{0.000000} \point{e2}{1.250000}{1.000000} \dimensioning{2}{e1}{e2}{-0.7500000}[$1$] \point{e3}{1.250000}{2.000000} \dimensioning{2}{e2}{e3}{-0.75000000}[$1$] \load{1}{b}[0][-1.0][0.0] \notation{1}{b}{$1$}[above=10mm] \load{1}{b}[90][1.0][0.0] \notation{1}{b}{$1$}[left=9mm] \end{tikzpicture} \caption{} \end{subfigure}% \hfill\begin{subfigure}{0.1\linewidth} \vspace{4mm} \centering \begin{tikzpicture}[scale=1] \scaling{0.2} \point{a}{0}{0}; \point{a1}{0}{1}; \point{a2}{0}{9}; \point{a3}{0}{10}; \point{b}{5}{0}; \point{b1}{5}{1}; \point{b2}{5}{9}; \point{b3}{5}{10}; \point{e1}{2}{1}; \point{e2}{2}{9}; \point{f1}{3}{1}; \point{f2}{3}{9}; \point{c}{5}{10}; \point{A1}{1}{0.5}; \point{A2}{2.5}{2.5}; \point{A}{1}{4}; \point{B}{-1}{4}; \draw (B) -- node[above]{$t_{\mathrm{p},i}$} (A); \draw[black, fill=gray, fill opacity=0.2] (a)--(b)--(b1)--(f1)--(f2)--(b2)--(b3)--(a3)--(a2)--(e2)--(e1)--(a1)--(a); \draw [{Latex}-](A1) -- (A); \draw [{Latex}-](A2) -- (A); \dimensioning{1}{a}{b}{-0.75}[$5 t_{\mathrm{p},i}$]; \dimensioning{2}{b}{c}{1.5}[$10 t_{\mathrm{p},i}$]; \end{tikzpicture} \vspace{4.6mm} \caption{} \end{subfigure}% \hfill\begin{subfigure}{0.17\linewidth} 
\vspace{4mm}\includegraphics[width=\linewidth]{include/selfweight2_feasibleset.png} \vspace{4mm} \caption{} \end{subfigure}% \hfill\begin{subfigure}{0.17\linewidth} \vspace{4mm}\includegraphics[width=\linewidth]{include/selfweight2_relaxation1.png} \vspace{4mm} \caption{} \end{subfigure} \hfill\begin{subfigure}{0.17\linewidth} \vspace{4mm}\includegraphics[width=\linewidth]{include/selfweight2_relaxation2.png} \vspace{4mm} \caption{} \end{subfigure}% \hfill\begin{subfigure}{0.17\linewidth} \vspace{4mm}\includegraphics[width=\linewidth]{include/selfweight2_relaxation3.png} \vspace{4mm} \caption{} \end{subfigure} \caption{Frame structure with self-weight: (a) boundary conditions, (b) the cross-section parametrization, and the sub-level set $c\le 140$ of the (c) feasible space and of the (d) first, (e) second, and (f) third outer approximations with the associated lower and upper bounds. Variables $a_1$ and $a_2$ stand for the cross-section areas of the two elements, $c$ denotes the corresponding compliance (assuming the moments of inertia $I_i = 41/54 a_i^2$, $i \in \{1,2\}$), and $t_{\mathrm{p},i}$ stands for the flange and web thickness.} \label{fig:sw} \end{figure*} Solving this modified optimization problem with the moment-sum-of-squares hierarchy produces the sequence of lower and upper bounds shown in Fig.~\ref{fig:sufficient}. Although the hierarchy exhibited finite convergence based on the rank condition \eqref{eq:rank} with $s=2$, the corresponding optimality gap remains strictly positive ($\varepsilon = 0.521$) and cannot be reduced in the subsequent relaxations. Clearly, all outer convex approximations must contain all convex combinations of their limit points. Hence, if the limit points are the global optima, their convex combinations also attain the globally optimal objective function value. Therefore, they are also optimal for the associated relaxation, but may be infeasible for the original problem.
Depending on the optimization algorithm and its settings, one can either reach a lower bound that is actually feasible for the original problem (as was the case in Section \ref{sec:multiple}), i.e., a zero optimality gap, or a~positive optimality gap that cannot be further reduced, which is the case here. For this particular problem, all local optimization techniques, using their default starting points and settings, missed the global optima, see Table~\ref{tab:sufficient}. In fact, they approached the feasible upper-bound that was provided by Theorem \ref{th:feasible}. \subsection{Frame structure with self-weight} For the previous examples, $\mathbf{f}(\mathbf{a}) = \mathbf{f}$ was constant, so that the optimum designs utilized the entire available volume $\overline{V}$. In these cases, the volume inequality constraint could have been changed to equality, and, therefore, one design variable eliminated. However, such a~procedure cannot be applied when design-dependent loads are present. \begin{table}[t] \centering\setlength{\tabcolsep}{5pt} \begin{tabular}{lccccc} method & $a_1$ & $a_2$ & LB & UB & time [s]\\ \hline OC & $0.022$ & $0.166$ & - & $70.442$ & $1.129$\\ MMA & $0.022$ & $0.166$ & - & $70.442$ & $0.935$\\ NSDP & $0.707$ & $0.000$ & - & $85.846$ & $1.448$\\ PO$^{(1)}$, Th.~\ref{th:suff} & $0.050$ & $0.119$ & $48.246$ & $74.171$ & $0.006$\\ PO$^{(2)}$, Th.~\ref{th:suff} & $0.034$ & $0.220$ & $68.328$ & $71.594$ & $0.015$\\ PO$^{(3)}$, Th.~\ref{th:suff} & $0.022$ & $0.166$ & $70.442$ & $70.442$ & $0.058$ \end{tabular} \caption{Different optimization methods applied to the optimization problem with self-weight. LB denotes lower bound, UB abbreviates feasible upper bounds, and PO stands for polynomial optimization. 
The entries $a_i$ denote the cross-sectional areas of the $i$-th element, or the areas constructed from the first-order moments in the case of PO.} \label{tab:sw} \end{table} To visualize this, let our third illustration be the single-load-case frame structure in Fig.~\ref{fig:sw}a, composed of two frame elements with $E = 1$ and I-shaped cross-sections, Fig.~\ref{fig:sw}b, parametrized by the thickness $t_{\mathrm{p},i}$. The overall volume is bounded from above by $\overline{V} = 1$. The self-weight acts in the vertical direction and is parametrized by the material density $\rho = 10$. For the considered cross-sections, we have $a_i = 18 t_{\mathrm{p},i}^2$ and $I_i = 246 t_{\mathrm{p},i}^4$. Hence, $c_{\mathrm{II}} = 41/54$ and $c_\mathrm{III} = 0$ in Eq.~\eqref{eq:inertia}. The feasible domain of this optimization problem, Fig.~\ref{fig:sw}c, reveals three local optima, one of which is the global solution. Computation of the optimum by the moment-sum-of-squares hierarchy required three relaxations, Figs.~\ref{fig:sw}d--\ref{fig:sw}f, which converged based on both the rank condition \eqref{eq:rank} with $s=1$ and Theorem~\ref{th:suff} with $\varepsilon = -7 \times 10^{-8}$; the slightly negative value of $\varepsilon$ is due to the numerical accuracy of the optimizer. Notice also that the upper bounds based on Theorem \ref{th:feasible} are of very high quality, see Table~\ref{tab:sw}. Among the local optimization techniques, only OC and MMA were able to arrive at the global optimum, see Table~\ref{tab:sw}; NSDP approached the worst local optimum, and \texttt{fmincon} failed even to find a feasible solution.
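The stated I-section parametrization can be checked for internal consistency: with $a = 18 t_{\mathrm{p}}^2$ and $I = 246 t_{\mathrm{p}}^4$, eliminating the thickness indeed gives $I = (41/54)\,a^2$.

```python
from fractions import Fraction

# With a(t) = 18*t**2 and I(t) = 246*t**4, eliminating t yields
# I = c_II * a**2 with c_II = 246 / 18**2 exactly.
c_II = Fraction(246) / Fraction(18) ** 2
assert c_II == Fraction(41, 54)  # matches the stated coefficient
```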
\subsection{Different element types on a cantilever beam}\label{sec:examples_cantilever} \begin{figure}[t] \newcommand{\snewpoint}[4]{\dpoint{#1m}{#2}{0}{0};\dpoint{#1a}{#2}{-#3}{-#4};\dpoint{#1b}{#2}{-#3}{#4};\dpoint{#1c}{#2}{#3}{#4};\dpoint{#1d}{#2}{#3}{-#4};\dpoint{#1am}{#2}{0}{-#4};\dpoint{#1bm}{#2}{0}{#4};} \newcommand{\surfcube}[2]{\dbeam{3}{#1a}{#1b};\dbeam{2}{#1b}{#1c};\dbeam{2}{#1c}{#1d};\dbeam{3}{#1d}{#1a};\dbeam{3}{#2a}{#2b};\dbeam{2}{#2b}{#2c};\dbeam{2}{#2c}{#2d};\dbeam{3}{#2d}{#2a};\dbeam{3}{#1a}{#2a};\dbeam{2}{#1b}{#2b};\dbeam{2}{#1c}{#2c};\dbeam{2}{#1d}{#2d};} \newcommand{\surflastcube}[2]{\dbeam{3}{#1a}{#1b};\dbeam{2}{#1b}{#1c};\dbeam{2}{#1c}{#1d};\dbeam{3}{#1d}{#1a};\dbeam{2}{#2a}{#2b};\dbeam{2}{#2b}{#2c};\dbeam{2}{#2c}{#2d};\dbeam{2}{#2d}{#2a};\dbeam{3}{#1a}{#2a};\dbeam{2}{#1b}{#2b};\dbeam{2}{#1c}{#2c};\dbeam{2}{#1d}{#2d};} \begin{tikzpicture} \snewpoint{a}{0.0}{0.2}{1.0}; \snewpoint{b}{0.9}{0.2}{1.0}; \snewpoint{c}{1.8}{0.2}{1.0}; \snewpoint{d}{2.7}{0.2}{1.0}; \snewpoint{e}{3.6}{0.2}{1.0}; \snewpoint{f}{4.5}{0.2}{1.0}; \dpoint{ax}{-1.2}{1}{0}; \dpoint{bx}{-1}{1}{0}; \support{3}{aam}[270]; \support{3}{abm}[270]; \support{3}{am}[270]; \surfcube{a}{b}; \surfcube{b}{c}; \surfcube{c}{d}; \surfcube{d}{e}; \surflastcube{e}{f}; \dlineload{1}{yz}{fbm}{fam}[.75][.75]; \dlineload{1}{xz}{fbm}{fam}[1.3][1.3]; \ddimensioning{xz}[-1.2]{aa}{ba}{.5}[$1$][1.2]; \ddimensioning{xz}[-1.2]{ba}{ca}{.5}[$1$][1.2]; \ddimensioning{xz}[-1.2]{ca}{da}{.5}[$1$][1.2]; \ddimensioning{xz}[-1.2]{da}{ea}{.5}[$1$][1.2]; \ddimensioning{xz}[-1.2]{ea}{fa}{.5}[$1$][1.2]; \ddimensioning{xz}[-1.75]{aa}{fa}{.5}[$5$][1.75]; \ddimensioning{zx}[1.5]{aa}{ab}{.5}[$1$][1.5]; \dnotation{1}{fm}{$\cos(30^\circ)$}[below right=4mm and -2mm]; \dnotation{1}{fm}{$\sin(30^\circ)$}[above right=7mm and 4mm]; \dnotation{4}{am}{bm}[$1$]; \dnotation{4}{bm}{cm}[$2$]; \dnotation{4}{cm}{dm}[$3$]; \dnotation{4}{dm}{em}[$4$]; \dnotation{4}{em}{fm}[$5$]; \dscaling{3}{0.5} \setaxis{3}[$A$][$B$][$C$][$x$][$y$][$z$] 
\setaxis{4}[right][below left][below]; \daxis{3}{0}[ax][bx][0][200.0][180]; \end{tikzpicture} \caption{Boundary conditions of the cantilever beam design problem.} \label{fig:cantilever} \end{figure} \begin{figure}[b] \includegraphics[width=\linewidth]{include/conv} \caption{Convergence of the moment-sum-of-squares hierarchy for the cantilever problem with three finite element types. Variable $c$ denotes compliance, $r$ stands for the relaxation degree, and LB and UB abbreviate the lower and upper bound.} \label{fig:cantilever_convergence} \end{figure} The generality of the developed approach is illustrated on a~cantilever beam/plate design problem, Fig.~\ref{fig:cantilever}. The cantilever is $5$ units long and $1$ unit wide, and the thicknesses of its $5$ finite elements are to be found in the optimization. The beam is made of a linear-elastic material with Young's modulus $E=1$ and Poisson's ratio $\nu=0.25$. The structure is subjected to a distributed tip load of magnitude $1$ applied at a $30^\circ$ angle with respect to the midline/midsurface. We optimize the frame/shell thicknesses $t_i$ (of rectangular cross-sections) while satisfying the volume bound $\overline{V}=10$. The shear correction factor is set to $5/6$ where appropriate. In what follows, we compare the optimization results of the cantilever problem for three finite element types: Euler-Bernoulli and Timoshenko frame elements, and the quadrilateral Mixed Interpolation of Tensorial Components (MITC4) shell element~\citep{Bathe_1986}. For both frame element types, we have $c_\mathrm{II} = 0$ and $c_\mathrm{III} = I_i(a_i)/a_i^3 = 1/12$ in Eq.~\eqref{eq:inertia}, whereas $c_\mathrm{II} = 0$ and $c_\mathrm{III} = 1$ for the MITC4 element.
\begin{table}[t] \centering\setlength{\tabcolsep}{5pt} \begin{tabular}{cccc} & Euler-Bernoulli & Timoshenko & MITC4\\ \hline $a_1^*$ & $2.775$ & $2.724$ & $2.754$\\ $a_2^*$ & $2.454$ & $2.414$ & $2.462$\\ $a_3^*$ & $2.086$ & $2.060$ & $2.091$\\ $a_4^*$ & $1.639$ & $1.643$ & $1.651$\\ $a_5^*$ & $1.047$ & $1.159$ & $1.041$\\ $c^*$ & $12.025$ & $12.922$ & $13.734$\\ time [s] & $65.363$ & $58.633$ & $967.086$\\ Th. \ref{th:suff}, $\varepsilon$ & $-2\times 10^{-9}$ & $-9 \times 10^{-10}$ & $-2 \times 10^{-9}$\\ Eq.~\ref{eq:rank}, $s$ & $1$ & $1$ & $1$ \end{tabular} \caption{Globally optimal thicknesses $a_1^*,\dots,a_5^*$ and compliances $c^*$ for the cantilever problem for three element types: Euler-Bernoulli and Timoshenko frame elements, and the MITC4 shell element. Variables $\varepsilon$ and $s$ denote the optimality gap in Theorem \ref{th:suff} and the rank of the moment matrices according to \eqref{eq:rank}, respectively.} \label{tab:optimal solutions} \end{table} \begin{figure*}[t] \begin{subfigure}{0.375\linewidth} \centering \begin{tikzpicture} \scaling{2.3}; \point{a}{0}{0}; \point{b}{1}{0}; \point{c}{2}{0}; \point{d}{0}{1}; \point{e}{1}{1}; \point{f}{2}{1}; \point{g}{0}{2}; \point{h}{1}{2}; \point{i}{2}{2}; \beam{2}{a}{b}; \beam{2}{a}{e}; \beam{2}{a}{f}; \beam{2}{a}{h}; \beam{2}{b}{c}; \beam{2}{b}{d}; \beam{2}{b}{e}; \beam{2}{b}{f}; \beam{2}{b}{g}; \beam{2}{c}{d}; \beam{2}{c}{e}; \beam{2}{c}{f}; \beam{2}{c}{h}; \beam{2}{d}{e}; \beam{2}{e}{f}; \beam{2}{d}{h}; \beam{2}{e}{g}; \beam{2}{e}{h}; \beam{2}{g}{h}; \beam{2}{f}{g}; \beam{2}{f}{h}; \support{3}{a}[270]; \support{3}{d}[270]; \support{3}{g}[270]; \load{1}{c}[90][0.7][-0.7]; \load{1}{f}[90][1][0.12]; \notation{1}{c}{$2$}[below right=2mm]; \notation{1}{f}{$3.5$}[above right=2mm]; \draw[->] (0.0,0)--(1,0); \node at (0.6, 0.15) {$x$}; \draw[->] (0.0,0)--(0,1); \node at (0.15, 0.6) {$y$}; \dimensioning{1}{a}{b}{-1.2}[$1$] \dimensioning{1}{b}{c}{-1.2}[$1$] \dimensioning{2}{a}{d}{-0.8}[$1$] 
\dimensioning{2}{d}{g}{-0.8}[$1$] \notation{1}{a}{\circled{$1$}}[align=center]; \notation{1}{b}{\circled{$2$}}[align=center]; \notation{1}{c}{\circled{$3$}}[align=center]; \notation{1}{d}{\circled{$4$}}[align=center]; \notation{1}{e}{\circled{$5$}}[align=center]; \notation{1}{f}{\circled{$6$}}[align=center]; \notation{1}{g}{\circled{$7$}}[align=center]; \notation{1}{h}{\circled{$8$}}[align=center]; \end{tikzpicture} \caption{} \end{subfigure}% \hfill\begin{subfigure}{0.15\linewidth} \centering \begin{tikzpicture}[scale=1] \scaling{0.2} \point{a}{0}{0}; \point{a1}{4}{0}; \point{a2}{5}{0}; \draw[black, fill=gray, fill opacity=0.2] (0,0) circle (1); \draw[black, fill=white, fill opacity=1.0] (0,0) circle (0.8); \dimensioning{1}{a}{a1}{1.2}[\raisebox{2mm}{$4r_i$}]; \dimensioning{1}{a1}{a2}{1.2}[\raisebox{2mm}{$r_i$}]; \end{tikzpicture} \caption{} \end{subfigure}% \hfill\begin{minipage}{0.43\linewidth} \begin{subfigure}{0.4\linewidth} \centering \includegraphics[width=0.88\linewidth]{include/frame22_po1} \caption{} \end{subfigure}% \hfill\begin{subfigure}{0.4\linewidth} \centering \includegraphics[width=0.88\linewidth]{include/frame22_po2} \caption{} \end{subfigure}\\ \begin{subfigure}{0.4\linewidth} \centering \includegraphics[width=0.88\linewidth]{include/frame22_fmincon} \caption{} \end{subfigure}% \hfill\begin{subfigure}{0.4\linewidth} \centering \includegraphics[width=0.88\linewidth]{include/frame22_nsdp} \caption{} \end{subfigure} \end{minipage} \caption{(a) Ground structure of the $22$-elements frame optimization problem, (b) cross-section parameterized by $r_i$, and optimized designs of compliances: (c) $c=3276.3$ obtained by PO$^{(1)}$, (d) $c^* = 1668.6$ resulting from PO$^{(2)}$ and OC, (e) $c = 1697.7$ reached by MMA and \texttt{fmincon}, and (f) $c = 1741.1$ optimized by NSDP.} \label{fig:10} \end{figure*} The moment-sum-of-squares hierarchy required three steps (degree-four relaxation) to converge in all three cases, Fig.~\ref{fig:cantilever_convergence}, 
and approached very similar optimal thicknesses, Table~\ref{tab:optimal solutions}. As expected, the lowest compliance is provided by the Euler-Bernoulli frame elements, which neglect shear effects. These effects are accounted for in the Timoshenko frame elements, thereby increasing the optimal compliance value. A further generalization occurs when using the MITC4 shell elements, which not only consider the effects of shear, but also incorporate the effects induced by bending about the $z$ axis, recall Fig.~\ref{fig:cantilever}. Therefore, the optimal compliance associated with the MITC4 elements is the highest. These results thus reveal the importance of using an appropriate finite element type for a particular problem, as neglecting a~physical phenomenon may result in a~suboptimal design. Moreover, this influence on optimal minimum-energy designs can be rigorously studied by the proposed approach. \subsection{22-element frame structure} Our final example investigates topology optimization of the $22$-element frame structure shown in Fig.~\ref{fig:10}a. Two loads are applied at nodes $3$ and $6$ in a single load case. In addition, we set $E=1$ and $\overline{V} = 0.5$. All structural elements possess a thin-walled circular cross-section with the outer radius $5 r_i$ and the wall thickness $r_i$, Fig.~\ref{fig:10}b. Hence, $a_i = 9 \pi r_i^2$ and $I_i = 46.125 \pi r_i^4$, so that $c_\mathrm{II} = 46.125/(81\pi)$ and $c_\mathrm{III} = 0$.
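The stated parametrization of the thin-walled circular section can again be checked for internal consistency; the annulus area follows from the outer radius $5r_i$ and the wall thickness $r_i$, and the stated $c_\mathrm{II}$ reproduces the stated moment of inertia.

```python
import math

# Consistency check of the stated parametrization: a(r) = 9*pi*r**2 and
# I(r) = 46.125*pi*r**4 imply I = c_II * a**2 with c_II = 46.125/(81*pi).
r = 0.37  # arbitrary positive radius
a = 9 * math.pi * r**2
I = 46.125 * math.pi * r**4
c_II = 46.125 / (81 * math.pi)
assert math.isclose(I, c_II * a**2, rel_tol=1e-12)

# The area matches that of an annulus with inner radius 4r and outer 5r:
a_annulus = math.pi * ((5 * r) ** 2 - (4 * r) ** 2)
assert math.isclose(a_annulus, a, rel_tol=1e-12)
```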
\begin{table}[b] \centering\setlength{\tabcolsep}{3pt} \begin{tabular}{lccc} method & LB & UB & time [s]\\ \hline OC & - & $1668.585$ & $2.454$\\ MMA & - & $1697.749$ & $13.816$\\ \texttt{fmincon} & - & $1697.665$ & $0.650$ \\ NSDP & - & $1741.062$ & $4.406$\\ PO$^{(1)}$, Th.~\ref{th:suff}, Eq.~\eqref{eq:rank} & $1062.105$ & $3276.294$ & $0.103$\\ PO$^{(2)}$, Th.~\ref{th:suff}, Eq.~\eqref{eq:rank} & $1668.584$ & $1668.584$ & $1492.842$ \end{tabular} \caption{Different optimization methods applied to the $22$-element frame structure design problem. LB denotes lower bounds, UB abbreviates feasible upper bounds, and PO stands for polynomial optimization.} \label{tab:22} \end{table} The moment-sum-of-squares hierarchy requires two relaxations to achieve a guaranteed global optimum, based both on Theorem \ref{th:suff} with $\varepsilon = 2\times 10^{-5}$ and on Eq.~\eqref{eq:rank} with $s = 1$. However, even the second relaxation is fairly computationally expensive (see Table \ref{tab:22}), prohibiting the solution of higher relaxations of similarly-sized problems on standard hardware. Evaluation of the local optimization algorithms revealed that only OC converged to the global optimum ($c^* = 1668.6$, shown in Fig.~\ref{fig:10}d). The remaining optimization approaches reached local optima of comparable performance but considerably different topologies: MMA and \texttt{fmincon} converged to the design shown in Fig.~\ref{fig:10}e with $c=1697.7$, and NSDP reached the design in Fig.~\ref{fig:10}f with $c=1741.1$. \section{Conclusions}\label{sec:conclusion} Our contribution has addressed a fundamental question in structural design: how to find globally optimal minimum-compliance bending-resistant structures in discrete topology optimization with continuous design variables.
For the cases of frame and shell structures, multiple loading conditions, and design-dependent loads, we have formulated this optimization problem as a~(non-linear) semidefinite program constrained by a polynomial matrix inequality. The feasible space of this optimization problem forms a~semialgebraic set; hence, powerful results on polynomial optimization---the moment-sum-of-squares hierarchy---facilitate computation of the global solutions. This hierarchy generates a sequence of tightening outer convex approximations in the space of moments of probability measures, so that the first-order moments converge monotonically to the convex hull of the original problem. Therefore, a~non-decreasing sequence of lower bounds is established. Using the first-order moments only, we have shown that a sequence of feasible upper bounds can be obtained by a simple correction. Consequently, because lower and upper bounds are available in each relaxation, the quality of the upper-bound design can be assessed, establishing a~sufficient condition of global $\varepsilon$-optimality. This condition is very simple to check and complements the traditional rank-based certificate of global optimality, e.g., \citep{Henrion_2006}. Our condition fails to be necessary because the first-order moments are not unique when the considered optimization problem possesses multiple global optima, potentially leaving a strictly positive optimality gap. For the case of a unique global optimum, we have shown that the hierarchy eventually attains a zero optimality gap as the relaxation number approaches infinity. We note here that the multiplicity of global minima can mostly be avoided in practice by exploiting the symmetry of the structure and the boundary conditions. These theoretical results have been illustrated on five problems, which indicate the merits and weaknesses of the presented strategy.
First, all of our test problems exhibited a~rapid convergence of the hierarchy, allowing for the extraction of all global solutions based on the flat extension theorem of \citet{Curto_1996}. However, the computational complexity is currently fairly high, also when compared to the investigated local optimization techniques, limiting the computation of proven global optima to small-scale optimization problems. Yet, even for middle-scale problems, the hierarchy still provides a~sequence of upper bounds of reasonable quality, and, especially, a certificate of their $\varepsilon$-optimality. In the future, we plan to extend these results in multiple directions. First, we believe that Theorem \ref{th:zero} can be strengthened to certify a zero optimality gap when the hierarchy converges in a finite number of steps. Second, the computational demands could be decreased by exploiting structural sparsity via clique-based chordal decomposition in the spirit of~\citep{Kim_2010,Kocvara_2020}. Other research directions may explore eigenvalue constraints and optimization~\citep{Achtziger_2008,Tyburec_2019}, the minimum-weight setting, or, eventually, investigate the performance of the hierarchy for different topology optimization formulations. \begin{acknowledgement} We thank Edita Dvo{\v{r}}{\'{a}}kov{\'{a}} for providing us with her implementation of the \textsc{Mitc4} shell elements \citep{Dvorakova2015}. Marek Tyburec, Jan Zeman, and Martin Kru{\v{z}}{\'{i}}k acknowledge the support of the Czech Science Foundation project No.~19-26143X. \end{acknowledgement} \section*{Compliance with ethical standards} {\small\noindent \textbf{Conflict of interest}\hspace{1mm} The authors declare that they have no conflict of interest.} {\small\noindent \textbf{Replication of results}\hspace{1mm} Source codes are available at \citep{tyburec_marek_2020_4048828}.}
\section{Introduction} Because of the rapid global spread of COVID-19 and the cooperation of medical institutions worldwide, a tremendous amount of public data --- more data than ever before for a single virus --- has been made available for researchers~\cite{gisaid_website_url,ali2021spike2vec,ali2021classifying}. This ``big data'' opens up new opportunities to analyse the behavior of this virus~\cite{leung2020big,leung2020machine}. Despite these opportunities, the huge size of the data poses a challenge for its processing on smaller systems~\cite{ali2021spike2vec}. On the one hand, this creates scalability issues; on the other hand, it creates the problem of high dimensionality (the curse of dimensionality)~\cite{Ali2020ShortTerm,ali2019short}. Since such data was not previously available to the research community at this magnitude and ease of access, new and more sophisticated methods are required to extract useful information from this big data. At the same time, a shortage of medical resources may occur when such a severe pandemic happens and the surging number of patients exceeds the capacity of the clinical system. This situation has arisen in many countries and regions during the continuous outbreaks of the COVID-19 pandemic, and clinicians have had to make the tough decision of which individual patients have a higher chance of recovery and should receive a limited amount of medical care. Even more difficult is the decision of which patients have little to no chance of survival, regardless of treatment level, and should hence be forgone for the sake of optimizing the use of limited resources for others who still have a chance. In addition, patients with different levels of severity and urgency of symptoms require the medical system to create a complete plan to provide various treatments in the proper order~\cite{abdulkareem2021realizing}.
Hence, a clinical decision support system is of utmost importance to optimize the use of the limited medical resources and thus save more lives overall~\cite{abdulkareem2021realizing}. In order to develop such a high-quality clinical decision support system, it is necessary to build a model that can predict the possible complications of patients, assessing the likelihood that they will survive under certain levels of care. Machine learning (ML) based algorithms have proven to perform well in terms of classification and clustering. Therefore, we work on building machine learning models that can scale to larger datasets and reduce the runtime by selecting the proper attributes. Since ML models take a feature vector representation as input~\cite{ali2021predicting,grover2016node2vec}, designing such vectors while preserving a maximum amount of information is a challenging task~\cite{yang2018multi}. Moreover, when the size of the data becomes this large, even the scalability of these models becomes an issue. Much work has been done on genomic data of COVID-19 patients~\cite{kuzmin2020machine,ali2021effective,melnyk2021alpha}. One major challenge in this case is the conversion of genomic sequences into fixed-length numerical feature vectors so that they can be given as input to the underlying ML classifiers for prediction purposes. In this paper, we propose an algorithm that efficiently predicts patient mortality with high accuracy as a function of many different factors. This problem can help doctors to prescribe medications and design strategies in advance that can help to save the highest number of lives. Our contributions can be summarised as follows: \begin{enumerate} \item We propose a pipeline to efficiently predict patient mortality as a function of a few dozen factors. We show that with basic information about a patient (gender, race, exposure, etc.), we can predict in advance the likelihood of mortality in the future.
We also predict whether a patient is COVID-19 positive or negative using attributes like red blood cells and hemoglobin. \item We show that our model is scalable to larger datasets (achieving accuracies $>$90\%). \item From our results, it is evident that the proposed model (using efficient feature selection) outperforms the baselines (without any feature selection) in terms of prediction accuracy and runtime. \item We show the importance of each attribute by measuring the information gain of the attributes with respect to the class labels. This study can help doctors and other relevant authorities to focus on specific attributes rather than dealing with all the information at once, which can be difficult for humans to fathom. \end{enumerate} The rest of the paper is organised as follows: Section~\ref{sec_related_work} contains a literature review for the problem. Our proposed model is given in Section~\ref{sec_proposed_approach}. Dataset statistics and experimental details are given in Section~\ref{sec_experimental_setup}. We show results and their comparisons in Section~\ref{sec_results}. We discuss the importance of different attributes in Section~\ref{sec_attribute_importance}. Finally, in Section~\ref{sec_conclusion}, we conclude the paper. \section{Related Work}\label{sec_related_work} Machine learning based models that take fixed-length feature vectors as input have been successfully applied (for data analytics tasks) in many domains such as graph analytics~\cite{ali2021predicting,AHMAD2020Combinatorial}, smart grids~\cite{ali2019short,Ali2020ShortTerm}, electromyography (EMG)~\cite{ullah2020effect}, and text classification~\cite{Shakeel2020LanguageIndependent,Shakeel2020Multi,Shakeel2019MultiBilingual}. It is important to perform an objective evaluation of the underlying model rather than just a subjective evaluation~\cite{hassan2021locally}. Many data science methodologies have been applied to objectively analyze COVID-19 data and provide support to the medical system. The synergy between data scientists and the biomedical communities is helpful to improve the detection of diseases and illnesses, as well as the prediction of possible complications. The authors of~\cite{leung2020data} developed a framework to predict cancer trends accurately; this type of framework could also be used for the analysis of other clinical data. S. Ali et al.~\cite{ali2021k} use spike sequences to classify the variants of COVID-19 infected humans. An effective approach to cluster the spike sequences based on the virus's variants is presented in~\cite{ali2021effective}. Several studies discuss different data mining techniques to study the behavior of COVID-19~\cite{li2020using,albahri2020role,leung2020big}.
The authors of~\cite{fung2021predictive} use neural networks, taking advantage of few-shot learning and autoencoders, to perform predictive analysis on COVID-19 data. A study predicting the clinical outcomes of patients, indicating whether they are more likely to recover from coronavirus or are in danger of death, is performed in~\cite{leung2020machine}. They presented an online analytical processing (OLAP) tool, which can help researchers learn more about the confirmed cases and mortality of COVID-19 by applying machine learning methods to the big COVID-19 dataset. \section{Proposed Approach}\label{sec_proposed_approach} Most machine learning (ML) models take fixed-length feature vectors as input to perform different tasks such as classification and clustering. We design a fixed-length feature vector representation, which includes the values of different attributes of the clinical data. One important point to mention here is that not all the features in the vectors are important in terms of predicting the class labels. Therefore, it is necessary to apply feature selection, not only to improve the predictive performance of the underlying classifiers (by removing unnecessary features), but also to improve the training runtime. The feature selection methods that we use in this paper are discussed below. \subsection{Feature Selection Methods} We use different supervised and unsupervised feature selection methods to improve the runtime of the underlying classifiers and also to improve the predictive performance. For supervised models, we use Boruta (shadow features)~\cite{kursa2010feature} and Ridge Regression (RR)~\cite{hoerl1975ridge}. For unsupervised methods, we use an approximate kernel approach called Random Fourier Features (RFF)~\cite{rahimi2007random}. \subsubsection{Boruta (shadow features)} The main idea of Boruta is that features do not compete among themselves but rather compete with randomized versions of themselves.
Boruta captures non-linear relationships and interactions using the random forest algorithm. It then extracts the importance of each feature (with respect to the class label) and keeps only the features that are above a specific importance threshold. To compute the importance of the features, it performs the following procedure: from the original feature set, it creates dummy features (shadow features) by randomly shuffling each feature. The shadow features are then combined with the original features to obtain a new dataset, which has twice the number of features of the original data. Using random forest, it computes the importance of the original features and the shadow features separately. The importance of each original feature is then compared with a threshold, defined as the highest feature importance recorded among the shadow features. A feature from the original feature set is selected if its importance (computed using random forest) is greater than this threshold. In Boruta, a feature is useful only if it is capable of doing better than the best randomized feature. Note that we use two datasets in this paper, namely Clinical Data1 and Clinical Data2 (see Section~\ref{sec_dataset_statistics} for details regarding the datasets). For Clinical Data1, Boruta selected $11$ features out of $19$ and removed Year, Gender, Race, Case Positive Specimen Interval, Case Onset Interval, Exposure, Current Status, and Symptom Status. For Clinical Data2, Boruta selected $7$ features from $18$ features in total. The selected features are Red Blood Cells, Platelets, Hematocrit, Monocytes, Leukocytes, Eosinophils, and Proteina C reativa mg/dL. \subsubsection{Ridge Regression} Ridge Regression (RR) is a supervised algorithm for parameter estimation that is used to address the collinearity problem that frequently arises in multiple linear regression \cite{mcdonald2009ridge}.
Its main idea is to introduce a bias term in order to improve the variance, which shows the better generalization capability of RR compared to simple linear regression. RR is less sensitive to data points that lie far away from the others, and it tries to make the regression line more horizontal. RR is useful for feature selection because it gives insights into which independent variables are not very important (their slopes can be reduced close to zero). The unimportant independent variables are then removed to reduce the dimensionality of the overall dataset. The objective function of ridge regression is the following: \begin{equation} \min(\text{sum of squared residuals} + \alpha \times \text{slope}^2) \end{equation} where $\alpha \times \text{slope}^2$ is called the penalty term. \subsubsection{Random Fourier Features (RFF)} A popular approach to classification is using kernel-based algorithms, which compute a similarity matrix that can be used as input for traditional classification algorithms such as support vector machines. However, the pairwise computation of the kernel matrix is an expensive task. To make this task efficient, the so-called kernel trick is used. \begin{definition}[Kernel Trick] The kernel trick works by taking dot products between pairs of input points, avoiding the need to explicitly map the input data to a high-dimensional feature space. \end{definition} The main idea of the kernel trick is the following: \textit{any positive definite function $f(x,y)$, where $x,y \in \mathcal{R}^d$, defines an inner product and a lifting $\phi$ so that the inner product between lifted data points can be computed quickly}~\cite{rahimi2007random}. More formally: \begin{equation} \langle \phi (x), \phi (y) \rangle = f(x,y) \end{equation} The main problem with kernel methods is that, for large datasets, they suffer from large initial computational and storage costs.
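As a concrete sketch of the randomized remedy introduced next, the exact Gaussian kernel evaluation can be replaced by an inner product of low-dimensional random features; the bandwidth $\sigma = 1$ and the feature count $D$ below are our own illustrative choices.

```python
import numpy as np

# Sketch (assumed hyper-parameters): random features z(x) such that
# z(x).dot(z(y)) approximates the Gaussian kernel f(x, y) = exp(-||x-y||^2/2),
# avoiding the quadratic cost/storage of the exact kernel matrix.
rng = np.random.default_rng(0)
d, D = 5, 4000                       # input dimension, number of random features
W = rng.normal(size=(d, D))          # random frequencies for sigma = 1
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def z(x):
    return np.sqrt(2.0 / D) * np.cos(x @ W + b)

x, y = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-np.linalg.norm(x - y) ** 2 / 2.0)
approx = z(x) @ z(y)
print(abs(exact - approx))           # small, shrinking as O(1/sqrt(D))
```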
To solve these problems, we use an approximate kernel method called Random Fourier Features (RFF)~\cite{rahimi2007random}. RFF maps the given data to a low-dimensional randomized feature space (a Euclidean inner product space). More formally: \begin{equation}\label{eq_rff_1} z: \mathbb{R}^d \rightarrow \mathbb{R}^D \end{equation} RFF approximates the inner product between a pair of transformed points. More formally: \begin{equation}\label{eq_z_value} f(x,y) = \langle \phi (x), \phi (y) \rangle \approx z(x)' z(y) \end{equation} In Equation~\eqref{eq_z_value}, $z$ is low dimensional (unlike the lifting $\phi$). In this way, we can transform the original input data with $z$. Now, $z$ is an approximate low-dimensional embedding of the original data, and we can use $z$ as the input for different classification algorithms. \subsection{Classification Algorithms} For classification, we use Support Vector Machine (SVM), Naive Bayes (NB), Multi-Layer Perceptron (MLP), K-Nearest Neighbors (KNN), Random Forest (RF), Logistic Regression (LR), and Decision Tree (DT). All algorithms are used with default parameters. The value of K for KNN is taken as $5$ (using the standard validation set approach~\cite{validationSetApproach}). We also use a deep learning model, the Keras classifier, for classification. For this model, we use a sequential constructor. We create a fully connected network with one hidden layer that contains $p$ neurons, where $p$ is equal to the length of the feature vector. We use the ``rectifier'' activation function in the hidden layer and the ``softmax'' activation function in the output layer. We also use the efficient Adam gradient descent optimization algorithm with the ``sparse categorical crossentropy'' loss function (used for multi-class classification problems), which computes the crossentropy loss between the labels and predictions. The batch size and number of epochs are taken as $100$ and $10$, respectively, for training the model.
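Returning to the RFF map $z$ of Equation~\eqref{eq_z_value}, the following hedged sketch (pure Python, for the Gaussian kernel; not the exact implementation used in our experiments) draws random weights $w_j\sim N(0,I)$ and offsets $b_j\sim U[0,2\pi]$, and checks that $z(x)'z(y)$ approximates $f(x,y)$:

```python
import math
import random

def rff_map(x, weights, offsets):
    # z: R^d -> R^D with z(x)_j = sqrt(2/D) * cos(<w_j, x> + b_j),
    # approximating the Gaussian kernel f(x, y) = exp(-||x - y||^2 / 2).
    D = len(weights)
    scale = math.sqrt(2.0 / D)
    return [scale * math.cos(sum(wi * xi for wi, xi in zip(w, x)) + b)
            for w, b in zip(weights, offsets)]

random.seed(1)
d, D = 3, 4000
# w_j is sampled from the Fourier transform of the Gaussian kernel (a Gaussian),
# and b_j uniformly from [0, 2*pi], as in Rahimi and Recht's construction.
weights = [[random.gauss(0.0, 1.0) for _ in range(d)] for _ in range(D)]
offsets = [random.uniform(0.0, 2.0 * math.pi) for _ in range(D)]

x = [0.1, 0.2, 0.3]
y = [0.3, 0.1, 0.0]
exact = math.exp(-sum((a - b) ** 2 for a, b in zip(x, y)) / 2.0)
approx = sum(zx * zy for zx, zy in zip(rff_map(x, weights, offsets),
                                       rff_map(y, weights, offsets)))
assert abs(exact - approx) < 0.1  # z(x)'z(y) tracks f(x, y)
```

Once the data are mapped through $z$, any of the classifiers above can be trained on the $D$-dimensional features, avoiding the $n\times n$ kernel matrix entirely.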
Since the deep learning model does not require any feature selection, we use the original data without any feature selection as input to the Keras classifier. \begin{remark} We use ``sparse categorical crossentropy'' instead of simple ``categorical crossentropy'' because we are using integer labels rather than a one-hot representation of the labels. \end{remark} \section{Experimental Setup}\label{sec_experimental_setup} In this section, we describe our datasets in detail. All experiments are performed on a Core i5 system running the Windows operating system, with 32GB of memory and a 2.4 GHz processor. The algorithms are implemented in Python. Our code and preprocessed datasets are available online\footnote{\url{https://github.com/sarwanpasha/COVID_19_Clinical_Data_Analytics}}. \subsection{Dataset Statistics}\label{sec_dataset_statistics} In this paper, we use clinical data from two different sources. The description of both datasets is given below. \subsection{Clinical Data1} We use the COVID-19 Case Surveillance dataset (which we call Clinical Data1 for reference), which is publicly available on the website of the Centers for Disease Control and Prevention (CDC), USA\footnote{\url{https://data.cdc.gov/Case-Surveillance/COVID-19-Case-Surveillance-Public-Use-Data-with-Ge/n8mc-b4w4/data}}. After preprocessing (removing missing values), we obtained records for $95984$ patients. The attributes in the dataset are the following: \begin{enumerate} \item \textbf{Year:} The earlier of the year of the Clinical Date (the date related to the illness or specimen collection) and the year of the Date Received by CDC. \item \textbf{Month:} The earlier of the month of the Clinical Date (the date related to the illness or specimen collection) and the month of the Date Received by CDC (see Figure~\ref{fig_month_year} for the month and year distributions). \begin{figure}[h!]
\centering \includegraphics[scale = 0.6]{Figures/month_year_distribution.png} \caption{Month and Year attribute distribution.} \label{fig_month_year} \end{figure} \item \textbf{State of residence:} This attribute shows the name of the state (of the United States of America) in which the patient lives (see Figure~\ref{fig_states_distribution} for the distribution of states). \begin{figure}[h!] \centering \includegraphics[scale = 0.6]{Figures/states_distribution.png} \caption{State of residence distribution.} \label{fig_states_distribution} \end{figure} \item \textbf{State FIPS code:} Federal Information Processing Standards (FIPS) code for different states. \item \textbf{County of residence:} Name of the County. \item \textbf{County fips code:} Federal Information Processing Standards (FIPS) code for different Counties. \item \textbf{Age group:} Age groups of patients, which include 0 - 17 years, 18 - 49 years, 50 - 64 years, and 65 + years. \item \textbf{Gender:} Female, Male, Other, Unknown. \item \textbf{Race:} American Indian/Alaska Native, Asian, Black, Multiple/Other, Native Hawaiian/Other Pacific Islander, White, Unknown (see Table~\ref{tbl_race_attribute} for the distribution of values for the race attribute). \begin{table}[h!] \centering \begin{tabular}{lc} \hline Race & Count \\ \hline\hline American Indian/ Alaska Native & 94 \\ Asian & 3067 \\ Black & 8806 \\ Multiple/Other & 1833 \\ Native Hawaiian/Other Pacific Islander & 859 \\ Unknown & 3081 \\ White & 78244\\ \hline \end{tabular} \caption{Race attribute distribution.} \label{tbl_race_attribute} \end{table} \item \textbf{Ethnicity:} Hispanic, Non-Hispanic, Unknown. \item \textbf{Case positive specimen interval:} Weeks between the earliest date and the date of first positive specimen collection. \item \textbf{Case onset interval:} Weeks between the earliest date and the date of symptom onset.
\item \textbf{Process:} Under what process was the case first identified, e.g., Clinical evaluation, Routine surveillance, Contact tracing of case patient, Multiple, Other, Unknown (see Table~\ref{tbl_process_attribute}). \begin{table}[h!] \centering \begin{tabular}{lc} \hline Process & Count \\ \hline\hline Clinical evaluation & 43768 \\ Contact tracing of case patient & 6813 \\ Laboratory reported & 11848 \\ Multiple & 22595 \\ Other & 556 \\ Other detection method (specify) & 164 \\ Provider reported & 212 \\ Routine physical examination & 22 \\ Routine surveillance & 8641 \\ Unknown & 1365 \\ \hline \end{tabular} \caption{Process attribute distribution.} \label{tbl_process_attribute} \end{table} \item \textbf{Exposure:} In the $14$ days prior to illness onset, did the patient have any of the following known exposures: domestic travel, international travel, cruise ship or vessel travel as a passenger or crew member, workplace, airport/airplane, adult congregate living facility (nursing, assisted living, or long-term care facility), school/university/childcare center, correctional facility, community event/mass gathering, animal with confirmed or suspected COVID-19, other exposure, contact with a known COVID-19 case? Possible values for this attribute are Yes and Unknown. \item \textbf{Current status:} What is the current status of this person? Possible values are Laboratory-confirmed case, Probable case. \item \textbf{Symptom status:} What is the symptom status of this person? Possible values are Asymptomatic, Symptomatic, Unknown, Missing. \item \textbf{Hospital:} Was the patient hospitalized? Possible values are Yes, No, Unknown. \item \textbf{ICU:} Was the patient admitted to an intensive care unit (ICU)? Possible values are Yes, No, Unknown. \item \textbf{Death/Deceased:} This attribute highlights whether the patient died as a result of this illness. The possible values are ``Yes'', ``No'', and ``Unknown''.
\item \textbf{Underlying Conditions:} This attribute highlights whether the patient has one or more medical conditions and risk behaviors. These conditions include diabetes mellitus, hypertension, severe obesity (BMI greater than $40$), cardiovascular disease, chronic renal disease, chronic liver disease, chronic lung disease, other chronic diseases, immunosuppressive condition, autoimmune condition, current smoker, former smoker, substance abuse or misuse, disability, psychological/psychiatric, pregnancy, other. The possible values for this attribute are ``Yes'' and ``No''. \end{enumerate} The distributions of values for different attributes are shown in Figure~\ref{fig_all_pie_charts}. \begin{figure}[h!] \centering \includegraphics[scale = 0.6]{Figures/all_pie_charts.pdf} \caption{Pie charts for the distribution of different attributes values} \label{fig_all_pie_charts} \end{figure} To check if there is any natural clustering in Clinical Data1, we use the t-distributed stochastic neighbor embedding (t-SNE) approach~\cite{van2008visualizing}. We map the input data to a 2D real-vector representation using t-SNE, taking the Deceased attribute (for Clinical Data1) as the class label (see Figure~\ref{fig_t_sne}). We can observe in the figure that there is no visible clustering corresponding to the different values of the Deceased attribute. All values (No, Yes, and Unknown) are scattered around the whole plot. This behavior shows that performing any ML task on such data will not directly give good results (since the data are not well grouped). \begin{figure}[h!] \centering \includegraphics[scale = 0.58]{Figures/clinical_data_tnse_plot.png} \caption{t-SNE plot for deceased attribute of Clinical Data1.} \label{fig_t_sne} \end{figure} \subsection{Clinical Data2} We obtained Clinical Data2 from~\cite{alakus2020comparison}. This study used a laboratory dataset of patients with COVID-19 at the Israelita Albert Einstein Hospital in Sao Paulo, Brazil.
Patient samples were collected to identify who was infected with COVID-19 at the beginning of $2020$. The laboratory dataset contains information on $608$ patients with $18$ laboratory findings. In this dataset, $520$ patients had no findings and $80$ were patients with COVID-19. The attributes are Red blood Cells, Hemoglobin, Platelets, Hematocrit, Aspartate transaminase, Lymphocytes, Monocytes, Sodium, Urea, Basophils, Creatinine, Serum Glucose, Alanine transaminase, Leukocytes, Potassium, Eosinophils, Proteina C reativa mg/dL, Neutrophils, and SARS-Cov-2 exam result (positive or negative). All the attributes (other than ``SARS-Cov-2 exam result'') contain integer values. \subsection{Evaluation Metrics} To measure the performance of the underlying machine learning classifiers, we use different evaluation metrics, namely average accuracy, precision, recall, weighted and macro F1, and the Area Under the Receiver Operating Characteristic Curve (ROC AUC). We also compute the training runtime of all ML models to see which model is best in terms of runtime. \section{Results and Discussion}\label{sec_results} The results for Clinical Data1 are given in Table~\ref{tbl_clinical_data_1}. For classifying the Deceased attribute, we can see that all methods are able to classify the label with very high accuracy (accuracy $>90\%$ in most cases). Note that the feature-selection-based models are not only better in terms of prediction accuracy but also outperform, in terms of training runtime, the setting in which we do not use any feature selection approach (No Feat. Selec.). Also, the Boruta feature selection model outperforms all other feature selection approaches. In terms of training runtime, the Logistic Regression classifier with RFF performs better than the other classifiers. \begin{table}[h!] \centering \begin{tabular}{ccp{0.5cm}p{0.5cm}p{0.5cm}p{1.1cm}p{1.1cm}p{0.5cm}|p{1.1cm}} \hline & & Acc. & Prec. & Recall & F1 (Weighted) & F1 (Macro) & ROC AUC & Train Time (Sec.)
\\ \hline \hline \multirow{6}{*}{No Feat. Selec.} & NB & 0.78 & 0.93 & 0.78 & 0.83 & 0.49 & 0.80 & 0.19 \\ & MLP & 0.94 & 0.93 & 0.94 & 0.93 & 0.59 & 0.66 & 35.28 \\ & KNN & 0.94 & 0.93 & 0.94 & 0.93 & 0.60 & 0.69 & 4.71 \\ & RF & 0.94 & 0.94 & 0.94 & 0.94 & 0.64 & 0.71 & 4.88 \\ & LR & 0.93 & 0.87 & 0.93 & 0.90 & 0.32 & 0.50 & 1.38 \\ & DT & 0.93 & 0.93 & 0.93 & 0.93 & 0.62 & 0.73 & 0.37 \\ \hline \multirow{6}{*}{Boruta} & NB & 0.83 & 0.94 & 0.83 & 0.87 & 0.54 & \textbf{0.81} & 0.149 \\ & MLP & 0.94 & 0.93 & 0.94 & 0.93 & 0.58 & 0.66 & 22.76 \\ & KNN & 0.94 & 0.94 & 0.94 & 0.94 & 0.62 & 0.70 & 1.814 \\ & RF & \textbf{0.95} & \textbf{0.94} & \textbf{0.95} & \textbf{0.94} & \textbf{0.64} & 0.72 & 3.346 \\ & LR & 0.93 & 0.89 & 0.93 & 0.90 & 0.33 & 0.50 & 0.968 \\ & DT & 0.94 & 0.94 & 0.94 & 0.94 & 0.64 & 0.73 & 0.227 \\ \hline \multirow{6}{*}{RR} & NB & 0.84 & 0.93 & 0.84 & 0.87 & 0.45 & 0.72 & \textbf{0.129} \\ & MLP & 0.93 & 0.87 & 0.93 & 0.90 & 0.32 & 0.50 & 5.658 \\ & KNN & 0.93 & 0.92 & 0.93 & 0.92 & 0.48 & 0.60 & 1.660 \\ & RF & 0.94 & 0.93 & 0.94 & 0.93 & 0.51 & 0.64 & 2.214 \\ & LR & 0.93 & 0.87 & 0.93 & 0.90 & 0.32 & 0.50 & 0.338 \\ & DT & 0.94 & 0.93 & 0.94 & 0.93 & 0.51 & 0.64 & 0.154 \\ \hline \multirow{6}{*}{RFF} & NB & 0.93 & 0.87 & 0.93 & 0.90 & 0.32 & 0.50 & 0.144 \\ & MLP & 0.93 & 0.89 & 0.93 & 0.90 & 0.32 & 0.50 & 24.22 \\ & KNN & 0.93 & 0.91 & 0.93 & 0.92 & 0.45 & 0.58 & 3.280 \\ & RF & 0.94 & 0.93 & 0.94 & 0.93 & 0.56 & 0.64 & 27.87 \\ & LR & 0.93 & 0.87 & 0.93 & 0.90 & 0.32 & 0.50 & 0.261 \\ & DT & 0.91 & 0.92 & 0.91 & 0.91 & 0.51 & 0.65 & 1.461 \\ \hline \multirow{1}{*}{Keras Class.} & - & 0.93 & 0.87 & 0.93 & 0.90 & 0.32 & 0.50 & 11.582 \\ \hline \end{tabular} \caption{Classification Results for Clinical Data1. Best values are shown in bold. 
In terms of training time, each classifier's runtime is compared separately and the best value for each is shown in bold.} \label{tbl_clinical_data_1} \end{table} The results for Clinical Data2 are given in Table~\ref{tbl_clinical_data_2}. For classifying whether a patient is COVID-19 positive or negative, we can see that the random forest classifier with the Boruta feature selection approach outperforms all other feature selection methods as well as the deep learning model. In terms of runtime, the logistic regression classifier with RFF outperforms the other approaches. \begin{remark} We note that the deep learning model is slightly worse than the traditional classifiers on Clinical Data1, while performing worst on Clinical Data2. This is because deep learning models are usually ``data hungry'' and require much more data to efficiently learn the patterns in the data. Since we have a small number of data points in both datasets, the deep learning model is unable to beat the traditional classification algorithms. \end{remark} \begin{table}[h!] \centering \begin{tabular}{ccp{0.5cm}p{0.5cm}p{0.5cm}p{1.1cm}p{1.1cm}p{0.5cm}|p{1.1cm}} \hline & & Acc. & Prec. & Recall & F1 (Weighted) & F1 (Macro) & ROC AUC & Train Time (Sec.) \\ \hline \hline \multirow{6}{*}{No Feat.
Selec.}& NB & 0.89 & 0.88 & 0.89 & 0.88 & 0.71 & 0.70 & 0.025 \\ & MLP & 0.86 & 0.85 & 0.86 & 0.85 & 0.65 & 0.64 & 1.327 \\ & KNN & 0.88 & 0.87 & 0.88 & 0.87 & 0.68 & 0.66 & 0.013 \\ & RF & 0.85 & 0.79 & 0.85 & 0.82 & 0.49 & 0.50 & 0.178 \\ & LR & 0.87 & 0.86 & 0.87 & 0.86 & 0.65 & 0.63 & 0.013 \\ & DT & 0.81 & 0.81 & 0.81 & 0.81 & 0.56 & 0.56 & 0.01 \\ \hline \multirow{6}{*}{Boruta} & NB & 0.83 & 0.89 & 0.83 & 0.85 & 0.71 & 0.79 & 0.01 \\ & MLP & 0.87 & 0.89 & 0.87 & 0.88 & 0.75 & 0.78 & 1.621 \\ & KNN & 0.86 & 0.85 & 0.86 & 0.86 & 0.66 & 0.66 & 0.015 \\ & RF & \textbf{0.91} & \textbf{0.90} & \textbf{0.91} & \textbf{0.90} & \textbf{0.77} & \textbf{0.74} & 0.125 \\ & LR & 0.87 & 0.88 & 0.87 & 0.88 & 0.73 & 0.74 & 0.01 \\ & DT & 0.85 & 0.86 & 0.85 & 0.86 & 0.68 & 0.69 & 0.007 \\ \hline \multirow{6}{*}{RR} & NB & 0.83 & 0.80 & 0.83 & 0.81 & 0.57 & 0.56 & 0.016 \\ & MLP & 0.85 & 0.84 & 0.85 & 0.85 & 0.67 & 0.66 & 1.024 \\ & KNN & 0.85 & 0.84 & 0.85 & 0.84 & 0.66 & 0.64 & 0.01 \\ & RF & 0.85 & 0.83 & 0.85 & 0.84 & 0.65 & 0.64 & 0.137 \\ & LR & 0.87 & 0.84 & 0.87 & 0.84 & 0.61 & 0.59 & 0.009 \\ & DT & 0.82 & 0.82 & 0.82 & 0.82 & 0.62 & 0.62 & 0.009 \\ \hline \multirow{6}{*}{RFF} & NB & 0.89 & 0.78 & 0.89 & 0.83 & 0.47 & 0.50 & 0.022 \\ & MLP & 0.77 & 0.79 & 0.77 & 0.78 & 0.48 & 0.48 & 1.565 \\ & KNN & 0.86 & 0.80 & 0.86 & 0.83 & 0.50 & 0.51 & 0.019 \\ & RF & 0.88 & 0.78 & 0.88 & 0.83 & 0.47 & 0.50 & 0.163 \\ & LR & 0.89 & 0.78 & 0.89 & 0.83 & 0.47 & 0.50 & \textbf{0.008} \\ & DT & 0.73 & 0.80 & 0.73 & 0.76 & 0.50 & 0.52 & 0.009 \\ \hline \multirow{1}{*}{Keras Class.} & - & 0.83 & 0.76 & 0.83 & 0.79 & 0.48 & 0.50 & 10.928 \\ \hline \end{tabular} \caption{Classification Results for Clinical Data2. Best values are shown in bold. 
In terms of training time, each classifier's runtime is compared separately and the best value for each is shown in bold.} \label{tbl_clinical_data_2} \end{table} \section{Importance of Attributes}\label{sec_attribute_importance} To evaluate the importance of the attributes, we compute the importance of each attribute with respect to the class label (for Clinical Data1). For this purpose, we computed the Information Gain (IG) between each attribute and the true class label. The IG is defined as follows: \begin{equation} IG(Class, attribute) = H(Class) - H(Class~|~attribute) \end{equation} \begin{equation} H = \sum_{ i \in Class} -p_i \log p_i \end{equation} where $H$ is the entropy and $p_i$ is the probability of class $i$. The IG values for each attribute are given in Figure~\ref{fig_attribute_correlation}. \begin{figure}[h!] \centering \includegraphics{Tikz_Figures/information_gain.tikz} \caption{Information Gain of different attributes with respect to the class label (Deceased attribute) for Clinical Data1.} \label{fig_attribute_correlation} \end{figure} What is particularly interesting is that the State and County codes are two major predictors of patient outcome. This is likely due to the current vaccination situation in the US, which varies quite widely from county to county~\cite{nyt_url}. \section{Conclusion} \label{sec_conclusion} We propose an efficient model for the classification of COVID-19 patients using efficient feature selection methods and machine learning classification algorithms. We show that with Boruta for feature selection, simple classification algorithms like random forest can beat the deep learning model when the dataset size is not too big. We also show the importance of each attribute in Clinical Data1 by computing the information gain value of each attribute with respect to the class label.
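The information gain computation defined in Section~\ref{sec_attribute_importance} can be sketched as follows (a minimal self-contained version; the toy attribute and labels below are illustrative and not taken from Clinical Data1):

```python
import math
from collections import Counter

def entropy(labels):
    # H = -sum_i p_i * log2(p_i) over the class distribution (in bits).
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(attribute, labels):
    # IG(Class, attribute) = H(Class) - H(Class | attribute), where the
    # conditional entropy averages H over the values of the attribute.
    n = len(labels)
    conditional = 0.0
    for value in set(attribute):
        subset = [lab for a, lab in zip(attribute, labels) if a == value]
        conditional += (len(subset) / n) * entropy(subset)
    return entropy(labels) - conditional

# Toy example: an attribute that fully determines the label has
# IG equal to H(Class), here 1 bit.
attribute = ["A", "A", "B", "B"]
labels = ["No", "No", "Yes", "Yes"]
assert abs(information_gain(attribute, labels) - 1.0) < 1e-12
```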
In the future, we will collect more data and apply more sophisticated deep learning models such as LSTMs and GRUs to improve the predictive performance. We will also use other factors, such as weather, along with the clinical data to further improve the classification results. These results have many practical implications. The most direct real-world application of the machine learning model is to provide support to medical doctors during the COVID-19 pandemic. By predicting the risk level of individual patients, our model enables clinicians to allocate limited medical resources wisely, in real time, especially during periods of medical shortage, and to provide immediate treatment to the most vulnerable groups. With the help of the risk prediction system, clinicians learn which individual patients may be in danger of death and can thus conduct personalized preventive treatment in due time. Moreover, our research can be used to build a general clinical decision support system that serves not only COVID-19 but also other potential future pandemics. The patterns found in these data may also help biologists design effective vaccines or vaccination strategies. Finally, these methodologies can be applied in future studies on big data and machine learning in the broader sense. \bibliographystyle{spmpsci}
\section{Introduction} Let $\mathcal{L}_n$ be a configuration of $n$ lines $L_1, \cdots, L_n$ in the complex projective plane $\p^2$ and let $D$ be a smooth conic in the same plane. We assume that $\mathcal{L}_n\cap D$ consists of $2n$ distinct points. From a point $A_1$ on $L_1$ (lying neither on the other lines nor on $D$) we draw a tangent line to $D$. This line cuts $L_2$ in a point $A_2$. In the same way we define successively $$A_3\in L_3, \cdots, A_n\in L_n, A_{n+1}\in L_{n-1}, \cdots , A_{2n}\in L_2.$$ The second tangent line to $D$ from $A_{2n}$ meets $L_1$ in $A_{2n+1}$. Now two situations can occur: $A_{2n+1}=A_1$ or $A_{2n+1}\neq A_1$. The second case is clearly the general one. \smallskip If $A_{2n+1}=A_1$ the polygon $(A_1A_2)\cup \cdots \cup (A_{2n}A_{2n+1})$ is simultaneously inscribed in $\mathcal{L}_n$ and circumscribed around $D$. Our aim is to prove that in this case there are infinitely many polygons with $2n$ sides simultaneously inscribed in $\mathcal{L}_n$ and circumscribed around $D$ (see theorem \ref{produitdedroites}). It means that the existence of such polygons does not depend on the initial point but only on $D$ and $\mathcal{L}_n$. \smallskip On the contrary, if $A_{2n+1}\neq A_1$ the polygon will never close after $2n$ steps, for any initial point on $\mathcal{L}_n$. \smallskip This kind of result is called a porism and it is of the same nature as the Steiner porism (see \cite{BB}, thm.7.3) for circles tangent to two given circles or the Poncelet porism for two conics (see \cite{GH}, \cite{BKOR} or \cite{Po}). \smallskip To prove this porism we \textit{dualize} the situation and consider the dual polygon inscribed in $D$. To be more explicit, let us assume that $A_{2n+1}=A_1$. Then the situation can be dualized in the following way. Any line $L_i$ in the configuration is the polar line of a point $c_i$ (its pole). Any tangent line $(A_iA_{i+1})$ is the polar line of a point $x_{i,i+1} \in D$ for $1\le i\le 2n$.
For $1\le j\le n$, the lines $(x_{j-1,j}x_{j,j+1})$ and $(x_{j+n-1,j+n}x_{j+n,j+n+1})$ meet in $c_j$. In this way we obtain an inscribed polygon $(x_{1,2}x_{2,3}) \cup \cdots \cup (x_{2n,2n+1}x_{1,2})$ with $2n$ sides passing through $n$ fixed points $c_1, \cdots, c_n$ (this inscribed polygon corresponds to the choice of a $2n$-cycle among the permutations of $2n$ points). \begin{figure}[h!] \centering \includegraphics[height=8.5cm]{octagon2.pdf} \caption{Octagon inscribed in $D$ and its dual inscribed in $4$ lines and circumscribed around $D$. } \end{figure} This inscribed polygon leads us to study Fr\'egier involutions, which are particular automorphisms of $D$. Indeed we verify that the porism (see theorem \ref{produitdedroites}) holds when the product of the $n$ Fr\'egier involutions giving the inscribed polygon is itself a Fr\'egier involution. \smallskip So first of all we recall the definition and the main properties of these automorphisms. Then we study products of involutions. More precisely, we ask when a product of $n$ involutions is still an involution.\\ For $n=2$, the answer is very well known (see proposition \ref{two}).\\ For $n=3$, we prove that the product is an involution if and only if the centers are aligned (see proposition \ref{uvw2}). It is another way to set out the so-called Pascal theorem\footnote{Another way but probably not a new one. Given its simplicity, this argument has certainly already been written somewhere.}.\\ For $n\ge 4$, we propose a new proof of a generalization of Pascal's theorem due to M\"obius (see theorem \ref{mob}). We also prove that the product of $2n+1$ involutions is an involution when their centers are aligned (see proposition \ref{alignes}). \smallskip Projective duality gives us the dual versions of all these results, like Brianchon's theorem for instance (see theorem \ref{dmob} and proposition \ref{dalignes}).
\smallskip In the last section we prove our Poncelet type theorem for lines (see theorem \ref{produitdedroites}) and we conclude with an explicit computation in the case of two lines simultaneously inscribed and circumscribed around a smooth conic. \section{Fr\'egier's involutions} The group $ \mathrm{PGL}(2,\C)$ acts on $\p^1=\p^1(\C)$ in the usual way. \begin{defi} An element $g\in \mathrm{PGL}(2,\C)$ which is not the identity $I$ on $\p^1$ is called an involution if $g^2= I$. \end{defi} Considering $ \mathrm{GL}(2,\C)$ as the space of $2\times 2$ invertible matrices, it is clear that any $g\in \mathrm{PGL}(2,\C)$ has at most two fixed points ($g$ has three fixed points if and only if $g= I$). Moreover, when $g$ is an involution it is easy to verify that it has exactly two fixed points and that it is determined by these fixed points. \smallskip The Veronese embedding $\p^1 \stackrel{v_2}\hookrightarrow \p^2$ induces an embedding of groups $\mathrm{PGL}(2,\C) \subset \mathrm{PGL}(3,\C)$. Let $g\in \mathrm{PGL}(2,\C)$ be an involution on $\p^1$. The corresponding transformation $g$ in $\p^2$ has two fixed points on the smooth conic $D=v_2(\p^1)$: the images of the fixed points on $\p^1$. \\ Now let us consider the intersection point $x\notin D$ of the two tangent lines at the fixed points of $g$. A general line through $x$ cuts $D$ in two points. The map exchanging these points is an involution on $D$ (i.e. on $\p^1$). Its fixed points are the intersection points of $D$ with the polar line of $x$. Such an involution is called a \textit{Fr\'egier involution} on $D$ with center $x$. Since an involution is determined by its fixed points, this involution is $g$. We have verified that: \begin{pro} Any involution on $D\subset \p^2$ is a Fr\'egier involution. \end{pro} \subsection{Product of two and three involutions} Strictly speaking we should not write product but composition of involutions.
Anyway, from now on, since for matrices composition becomes a product, we will write product in all cases. Moreover we will denote by $uv$ the product (i.e. the composition) of two involutions $u$ and $v$, and by $u^n$ the product of $u$ with itself $n$ times. The following proposition is classical and easy to prove. \begin{pro} \label{two} Let $u$ and $v$ be two involutions with distinct fixed points. Then $ uv$ is involutive if and only if the two fixed points of $u$ and the two fixed points of $v$ form a harmonic division of $D$. \end{pro} For three involutions we recognize Pascal's theorem. \begin{pro} \label{uvw2} Let $u$, $v$, and $w$ be three involutions with distinct fixed points and let $x_u$, $x_v$ and $x_w$ be their respective centers. Then, $$ uvw\,\,\textrm{is involutive}\,\,\Leftrightarrow x_u, x_v\,\,\textrm{and}\,\, x_w \,\,\textrm{are aligned}.$$ \end{pro} \begin{proof}\footnote{This proposition, its proof and its corollary appear in a book project with Giorgio Ottaviani (see \cite{wykno}).} Assume that the three centers are aligned on a line $L$. The line $L$ is not tangent to $D$ because the three involutions do not have a common fixed point. Then let $\{x,y\}= L\cap D$. We verify that $uvw(x)=y$ and $uvw(y)=x$. The automorphism $uvw$ has at least one fixed point $z\in D$. Now the three points $x,y$ and $z$ are fixed points of $(uvw)^2$. \begin{figure}[h!] \centering \includegraphics[height=6.5cm]{pascal.pdf} \caption{Three involutions with aligned centers. } \end{figure} It means that $ uvw$ is an involution on $D$. \smallskip Conversely, assume that $uvw$ is involutive. Let $x\in D$ be such that $v(x)=w(x)\neq x$. Call $L$ the line joining $x$ to $v(x)$, i.e. passing through $x_{v}$ and $x_{w}$. From $v(x)=w(x)\neq x$ and the assumption $uvw=wvu$, we find four fixed points of $wv$: $x, v(x), u(x), uv(x)$. The automorphism $wv$ has at most two distinct fixed points. The first two are distinct. Consider the third one, $u(x)$.
If $u(x)=v(x)$ we have $u(x)=v(x)=w(x)$ and we are done; hence we may assume $u(x)=x$. Consider the fourth one, $uv(x)$. If $uv(x)=x$ we have again $u(x)=v(x)=w(x)$ and we are done; hence we may assume $uv(x)=v(x)$. It follows that $x, v(x)$ are fixed points of both $u$ and $uvw$, which is a contradiction since an involution is uniquely determined by its fixed points. \end{proof} \begin{cor}[Pascal's theorem] Let $p_1, p_2, p_3, q_3, q_2, q_1$ be six (ordered) points on a smooth conic $D$. Let $x_{ij}$, $i<j$, be the intersection point of the two lines joining $p_i$ to $q_j$ and $p_j$ to $q_i$. Then the three points $x_{12}, x_{13}$ and $x_{23}$ are aligned. \end{cor} \begin{proof} We denote by $u$ the involution defined by $x_{13}$, by $v$ the one defined by $x_{23}$ and by $w$ the last one, defined by $x_{12}$. Then, by following the lines, we verify that $$(uvw)(p_1)=q_1, (uvw)(q_1)=p_1.$$ Let $z$ be a fixed point of $uvw$. Then $z,p_1,q_1$ are fixed points of $ (uvw)^2$. Since an element of $\textrm{PGL}(2,\C)$ that has at least three fixed points is the identity, we have proved that $ uvw$ is an involution. The result now follows from proposition \ref{uvw2}. \end{proof} \begin{rem} \label{coropascal} The center of the product of three involutions $u_1, u_2$ and $u_3$ with aligned centers (on a line $L$) also belongs to $L$. Indeed let us define $v=u_1 u_2 u_3$; since the centers are aligned, $v$ is also an involution. Then $u_1 u_2 u_3v=I$. This implies $u_1=u_2u_3 v$, i.e. that the product of $u_2,u_3$ and $v$ is an involution. According to proposition \ref{uvw2} their centers are aligned. \end{rem} \subsection{Product of $n\ge 4$ involutions and M\"obius theorem} We show that the product of an odd number of involutions with aligned centers is still an involution. \begin{rem} We cannot expect an equivalence as in proposition \ref{uvw2}, but only an implication.
Indeed let us give a product of five involutions, with three centers on a line $L$ and two centers on another line $L'$, that is also an involution. So, let us consider a line $L$ and three points $x_1,x_2,x_3$ on it. We associate three involutions $u_1,u_2, u_3$ to these centers. The product $w=u_1 u_2 u_3$ is also an involution by proposition \ref{uvw2}. Moreover, according to remark \ref{coropascal}, its center $x$ belongs to $L$. Now we introduce another line $L'$ passing through $x$, and two other points $x_4,x_5$ on $L'$. Let us call $u_4, u_5$ the associated involutions. Then $$ u_1 u_2 u_3 u_4 u_5=w u_4 u_5,$$ and $w u_4 u_5$ is an involution because the centers $x,x_4,x_5$ are aligned. \end{rem} \begin{pro} \label{alignes} Let $u_1, \cdots, u_{2n+1}$ be involutions on $D\subset \p^2$ with respective centers $c_1,\cdots, c_{2n+1}$. If $c_1,\cdots, c_{2n+1}$ are aligned then $\prod_{i=1}^{2n+1}u_{i}$ is an involution. \end{pro} \begin{proof} Let $v=\prod_{i=1}^{2n+1}u_{i} $. We recall that the fixed points of $v$ are also fixed points of $v^2$. The automorphism $v$ possesses at least one fixed point $x$. Let $L$ be the line of centers and $L\cap D =\lbrace y,z \rbrace$. These two points are exchanged by $v$ and hence are not fixed points of $v$. The points $x,y,z$ are fixed points of $v^2$, so $v^2=I$. \end{proof} \begin{rem} The proposition is not valid for an even number of involutions. Indeed, since two points are always aligned, it is clearly not valid for two involutions. Consider now three involutions $u_1,u_2,u_3$ with aligned centers. Call $w$ their product. Let $u_4$ be another involution with its center on the previous line of centers. Then the product $w u_4$ is an involution if and only if the four fixed points form a harmonic division. But if we move the center of $u_4$ on $L$ the cross-ratio changes. So for a general point on $L$ the product will not be involutive.
\end{rem} In general, when the product of involutions is an involution, we are not able to say anything pertinent about the position of their centers. But when the product of $n$ involutions is still an involution and at least $n-1$ centers are aligned, M\"obius proved that all the centers are aligned (see \cite{Mo}, page 219). We prove this theorem again in the terminology of Fr\'egier involutions. The formulation below is the one given in Adler's article (see \cite{A}, thm. 1). \begin{thm}[M\"obius theorem] \label{mob} Let $x_1,y_1, \cdots, x_n, y_n$ be points on a smooth conic. Consider the intersection points $a_{j}=(x_jx_{j+1})\cap (y_jy_{j+1})$, $j=1,\cdots,n-1$ and $$ a_n= \left \{ \begin{array}{ccc} (x_ny_{1})\cap (y_nx_{1}) & \mathrm{if} & n=2m+1 \\ (x_nx_{1})\cap (y_ny_{1}) & \mathrm{if} & n=2m. \end{array} \right.$$ If all of these points except possibly one are collinear then the same is true for the remaining point. \end{thm} \begin{proof} As we have seen before (in proposition \ref{uvw2}), the result is true for three involutions, since the product of three involutions is an involution if and only if the centers are aligned. Moreover, as we said in remark \ref{coropascal}, the center of the product is also aligned with the three others. \begin{figure}[h!] \centering \includegraphics[height=8.5cm]{mobius.pdf} \caption{Four aligned centers. } \end{figure} We only need to prove the result for four involutions. Indeed let us verify that we can reduce the general case ($n\ge 4$) to three or four involutions. Consider $n> 4$ involutions. Any group of three consecutive terms $u_{i-1}, u_i, u_{i+1}$ for $i=2, \cdots, n-2$ gives a new involution with center on the same line. So when $n$ is odd, after reduction there remain three involutions $u\, u_{n-1} u_{n}$. Since the product is an involution the centers are aligned. But the centers of $u$ and $u_{n-1}$ are already on the line $L$. Then the center of $u_n$ is also on $L$.
When $n$ is even, four involutions $u\, u_{n-2} u_{n-1} u_{n}$ remain after reduction, where $u$ is obtained by reducing $u_1\cdots u_{n-3}$, with the first three centers on $L$. \smallskip So let us consider the case $u_1 u_{2} u_{3} u_{4}=v$ with $v$ an involution. We have $u_1 v=u_2 u_3 u_4$. The center $x_4$ belongs to $(x_2x_3)$ if and only if $u_2 u_3 u_4$ is an involution, i.e. if and only if $u_1v$ is an involution. To prove it, let us show that $u_1 v=v u_1$. Since $x_1,x_2,x_3$ are aligned, the product $u_1u_2u_3$ is involutive, so $u_1u_2u_3=u_3u_2u_1$. Then $u_2u_3=u_1u_3u_2u_1$ and $u_1v=(u_2u_3)u_4=(u_1u_3u_2u_1)u_4$. Since $u_3u_2u_1=u_1u_2u_3$ we obtain $$u_1v=(u_1u_3u_2u_1)u_4=u_1(u_1u_2u_3)u_4=u_1v.$$ \end{proof} \subsection{Projective duality and involutions} All the results obtained above can be \textit{dualized} by considering polar lines of points and poles of lines with respect to the smooth conic $D$. In this way any polygon inscribed in $D$ induces a circumscribed polygon (with the same number of sides) around $D$. Although M\"obius himself certainly \textit{dualized} the theorem, we state this dual version below. \begin{thm}[dual M\"obius] \label{dmob} Let $L_1\cup \cdots \cup L_{2n}$ be a polygon circumscribed about the smooth conic $D$. If the diagonals joining $L_i\cap L_{i+1}$ to $L_{i+n}\cap L_{i+1+n}$ for $i=1, \cdots, n-1$ are concurrent lines then the same is true for the remaining diagonal joining $L_{2n}\cap L_{1}$ to $L_{n}\cap L_{n+1}$. \end{thm} \begin{rem} For $n=3$ it is Brianchon's theorem. \end{rem} Proposition \ref{alignes} implies that one can construct an inscribed polygon in $D$ when $2n+1$ aligned points $c_1,\cdots, c_{2n+1}$ (not on $D$) are given. Indeed let $x$ be a point on $D$ and let us take successively its images by the involutions $u_i$ associated to the centers $c_i$: $$x, u_1(x), (u_2 u_1)(x) , \cdots , y=(u_{2n+1} \cdots u_1)(x).
$$ Then let us take successively the images of $y$ by the involutions $u_i$: $$u_1(y), (u_2 u_1)(y) , \cdots , (u_{2n+1} \cdots u_1)(y). $$ Since the product is involutive the process stops and we have: $$(u_{2n+1} \cdots u_1)(y)=(u_{2n+1} \cdots u_1)^2(x)=x.$$ In other words, from a general point on $D$ we can draw by this method an inscribed polygon with $4n+2$ sides. Dualizing this statement, we verify the following proposition: \begin{pro}[dual version of proposition \ref{alignes}] \label{dalignes} Let us consider $2n+1$ concurrent lines $L_i$ meeting a smooth conic $D$ in $4n+2$ distinct points. Then take any point $P_1$ on $L_1$ and draw a tangent to $D$ from this point. This tangent cuts $L_2$ in one point $P_2$. Let us draw successively $P_i\in L_i$ for $1\le i\le 2n+1$ and $P_{2n+1+j}\in L_j$ for $1\le j\le 2n+1$. Then the line $(P_1P_{4n+2})$ is tangent to $D$. \end{pro} \section{A Poncelet theorem for lines} The following theorem is a consequence neither of the well-known Poncelet closure theorem (except when the configuration consists of two lines) nor of Darboux's theorem (the configuration is not a Poncelet curve associated to $D$, as described in \cite{Tr}). We say that a polygon with $2n$ sides joining $2n$ vertices is well inscribed in a configuration $\mathcal{L}_n$ of $n$ lines when each line of the configuration contains exactly two vertices. \begin{thm} \label{produitdedroites} Let $\mathcal{L}_n$ be a configuration of $n$ lines and $D$ a smooth conic in $\p^2$. If there exists a polygon with $2n$ sides well inscribed in $\mathcal{L}_n$ and circumscribed around $D$ then there are infinitely many such polygons. In particular a general point of $\mathcal{L}_n$ is a vertex of such a polygon. \end{thm} \begin{proof} The given polygon with $2n$ sides well inscribed in $\mathcal{L}_n$ and circumscribed around $D$ corresponds by duality (polarity) to a polygon inscribed in $D$.
It gives $2n$ points on $D$ linked by $2n$ lines that are the polar lines of the considered $2n$ vertices in $\mathcal{L}_n$. These $2n$ lines meet two by two in $n$ points $L_1^{\vee}, \cdots, L_n^{\vee}$ (poles of the $n$ lines of the configuration). \smallskip Let us show that the product $v=(u_n\cdots u_1)$ of the $n$ involutions $u_1, \cdots , u_n$ with respective centers $L_1^{\vee}, \cdots, L_n^{\vee}$ is involutive. Let $x_1$ be an intersection point of $L_1\cap D$ and $x_2=v(x_1)$. Following the sides of this inscribed polygon, we have $v^2(x_{1})=x_{1}$. We have, in the same way, $v^2(x_{2})=x_{2}$. Since the inscribed polygon has $2n$ vertices and not only $n$, these two fixed points of the automorphism $v^2$ do not coincide; indeed they are exchanged by $v$. Let $x$ be a fixed point of the product $v$. This point $x$ is also a fixed point of $v^2$. Since $x_{1}$ and $x_{2}$ are exchanged by $v$ they do not coincide with $x$. Hence $v^2$ has at least three fixed points, i.e. $v^2= I$. \smallskip Then, a polygon constructed from a general point $p\in D$ by joining the vertices $$\lbrace p,u_1(p),(u_2 u_1)(p),\cdots , (u_{n} \cdots u_1)(p), \cdots , (u_{n-1} \cdots u_1 u_{n} \cdots u_1)(p) \rbrace $$ is inscribed in $D$ and corresponds by duality to a polygon well inscribed in $\mathcal{L}_n$ and circumscribed around $D$. \end{proof} \subsection{Poncelet theorem for singular conics} Assume that $u_{2i}=u$ and $u_{2i+1}=v$ and let $x$ and $y$ be their respective centers. We can consider the two polar lines $L_x=x^{\vee}$ and $L_y=y^{\vee}$. Then theorem \ref{produitdedroites} (with $u_{2i}=u$ and $u_{2i+1}=v$) implies that $(uv)^n=I$ if and only if there exists a polygon with $2n$ sides inscribed in $L_x\cup L_y$ and circumscribed around $D$. In that case, since a union of two lines is a conic, this is a consequence of Poncelet's theorem (see \cite{Va}, thm. 2.2). \begin{figure}[h!]
\centering \includegraphics[height=4.5cm]{uv4id.pdf} \caption{Octagon inscribed in $L_x\cup L_y$. } \end{figure} This situation can be described in an elementary way. Since $\mathrm{PGL(2,\C)}$ acts transitively on triplets of points in $\p^1$, we can prescribe three of the four fixed points of $u$ and $v$. Then we obtain a convenient matrix description. Let us first introduce a family of polynomials\footnote{Quite similar to the Fibonacci polynomials.} on the affine line: \smallskip $P_0(x) =1, P_1(x)=x$ and for $n\ge 2$, $P_n(x)=xP_{n-1}(x)-P_{n-2}(x)$. \smallskip We can now give a simple characterization for a union of two lines to be Poncelet associated to a smooth conic. \begin{pro} Let $\{ 1,-1\}$ be the fixed points of $u$ and $\{0,2/x\}$ be the fixed points of $v$. Then, $$ (uv)^n= I \Leftrightarrow P_{n-1}(x)=0\,\, \mathrm{and} \,\, P_{n-2}(x)\neq 0.$$ \end{pro} \begin{proof} Representative matrices of $u$ and $v$ are $$M_u=\left( \begin{array}{cc} 0 & 1\\ 1 & 0 \end{array} \right) \,\, \mathrm{and}\,\, M_v=\left( \begin{array}{cc} 1 & 0\\ x & -1 \end{array} \right).$$ An easy induction shows that $$ (M_uM_v)^{n}=\left( \begin{array}{cc} xP_{n-1}(x)-P_{n-2}(x) & -P_{n-1}(x)\\ P_{n-1}(x) & -P_{n-2}(x) \end{array} \right), $$ and this matrix is a nonzero multiple of the identity matrix if and only if $$P_{n-1}(x)=0.$$ Note that $P_{n-1}$ and $P_{n-2}$ cannot vanish simultaneously, since $\det (M_uM_v)^{n}=1$. \end{proof}
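The matrix identity used in the proof is easy to check by machine. The following sketch (plain Python with exact integer arithmetic; an illustration only, not part of the original argument) verifies the closed form for $(M_uM_v)^n$ against the recursion defining $P_n$, and illustrates the case $n=3$, where $P_2(x)=x^2-1$ forces $x=\pm 1$.

```python
# Sanity check: verify that
#   (M_u M_v)^n = [[x*P_{n-1} - P_{n-2}, -P_{n-1}], [P_{n-1}, -P_{n-2}]]
# for the recursion P_0 = 1, P_1 = x, P_n = x*P_{n-1} - P_{n-2}.

def mat_mul(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def mat_pow(A, n):
    R = [[1, 0], [0, 1]]
    for _ in range(n):
        R = mat_mul(R, A)
    return R

def P(x, n):
    """Values P_0(x), ..., P_n(x) of the Fibonacci-like polynomials."""
    vals = [1, x]
    for _ in range(2, n + 1):
        vals.append(x * vals[-1] - vals[-2])
    return vals

def check(x, n):
    Mu = [[0, 1], [1, 0]]
    Mv = [[1, 0], [x, -1]]
    p = P(x, n)
    expected = [[x * p[n - 1] - p[n - 2], -p[n - 1]],
                [p[n - 1], -p[n - 2]]]
    return mat_pow(mat_mul(Mu, Mv), n) == expected

# The identity holds for every integer x and every n >= 2 tested here.
assert all(check(x, n) for x in range(-5, 6) for n in range(2, 12))

# Example: P_2(x) = x^2 - 1, so (uv)^3 = I exactly for x = +1 or x = -1.
A = mat_mul([[0, 1], [1, 0]], [[1, 0], [1, -1]])   # x = 1
assert mat_pow(A, 3) == [[-1, 0], [0, -1]]          # -I, the identity in PGL(2)
```

Since the entries are integers, equality of matrices can be tested exactly, with no floating-point tolerance.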
\section{Introduction} \label{ch:intro} Continuum mechanics provides an efficient theoretical framework for modeling materials science phenomena. To characterize the behavior of materials, \emph{constitutive relations} serve as an input to the continuum theory. These constitutive models have functional forms which must be consistent with material frame-indifference and the laws of thermodynamics and include parameters that are fitted to reproduce experimental observations. With the advent of modern computing power, atomistic simulations through ``numerical experiments'' offer the potential for studying different materials and arriving at their constitutive laws from first principles. This could make it possible to design new materials and to improve the properties of existing materials in a systematic fashion. To use the data obtained from an atomistic simulation to build a constitutive law, which is framed in the language of continuum mechanics, it is necessary to understand the connection between continuum fields and the underlying microscopic dynamics. Another arena where the connection between continuum and atomistic concepts is important is the field of \emph{multiscale modeling}. This discipline involves the development of computational tools for studying problems where two or more length and/or time scales play a major role in determining macroscopic behavior. A prototypical example is fracture mechanics where the behavior of a crack is controlled by atomic-scale phenomena at the crack-tip, while at the same time long-range elastic stress fields are set up in the body. Many advances have been made in the area of multiscale modeling in recent years. 
Some common atomistic/continuum coupling methods are quasicontinuum \cite{tadmor1996,shenoy1999}, coupling of lengthscales \cite{rudd2000}, cluster quasicontinuum \cite{knap2001}, bridging domain \cite{xiao2004}, coupled atomistics and discrete dislocations \cite{shilkrot2004}, and heterogeneous multiscale methods \cite{e2007}, to name just a few. Refer to \cite{tadmor2009} for a comparison of some prominent atomistic/continuum coupling multiscale methods. In a multiscale method, a key issue involves the transfer of information between the discrete model and the continuum model. It is therefore of practical interest to understand how to construct definitions of continuum fields for an atomistic system, to ensure a smooth transfer of information between the discrete and continuum domains. In this paper, we focus on just one aspect of the continuum-atomistic connection, namely the interpretation of the (Cauchy) stress tensor in a discrete system. This question has been explored from many different perspectives for nearly two hundred years and this has led to various definitions that do not appear to be consistent with each other. As a result, the ``correct'' definition for the stress tensor has been a subject of great debate and controversy. We begin with a brief historical survey. \subsubsection*{A brief history of microscopic definitions for the stress tensor} Interest in microscopic definitions for the stress tensor dates back at least to Cauchy in the 1820s \cite{Cauchy1828a,Cauchy1828b} with his aim to define stress in a crystalline solid. Cauchy's original definition emerges from the intuitive idea of identifying stress with the force per unit area carried by the bonds that cross a given surface. A comprehensive derivation of Cauchy's approach is given in Note B of Love's classic book on the theory of elasticity \cite{love}. 
Since this approach is tied to the particular surface being considered, it actually constitutes a definition for the {\em traction} (or {\em stress vector}) and not for the stress tensor. The first definition of stress as a tensorial quantity follows from the works of Clausius \cite{clausius1870} and Maxwell \cite{maxwell1870,maxwell1874} in the form of the \emph{virial theorem}. Clausius states the virial theorem as \begin{quotation} \emph{The mean vis viva of a system is equal to its virial.} \end{quotation} By ``vis viva'' (literally ``living force''), Clausius means kinetic energy, while the term ``virial'' comes from the Latin ``vis'' (pl. ``vires'') meaning force. The virial theorem leads to a definition for pressure in a gas. Maxwell \cite{maxwell1870,maxwell1874} extended Clausius' work and showed the existence of a tensorial version of the virial theorem (see Appendix \ref{ch:virial}). The {\em virial stress} resulting from the virial theorem is widely used even today in many atomistic simulations due to its simple form and ease of computation. Unlike Cauchy's original definition for stress, the virial stress includes a contribution due to the kinetic energy of the particles. This discrepancy was addressed by Tsai \cite{tsai1979}, who extended the definition given by Cauchy to finite temperature by taking into consideration the momentum flux passing through the surface. Let us refer to this stress vector as the \emph{Tsai traction}. An alternative approach for defining the stress tensor was pioneered in the landmark paper of Irving and Kirkwood \cite{ik1950}. Irving and Kirkwood derived the equations of hydrodynamics from the principles of non-equilibrium classical statistical mechanics and in the process established a pointwise definition for various continuum fields including the stress tensor.
Although their work was indeed noteworthy, the stress tensor obtained involved a series expansion of the Dirac delta distribution which is not mathematically rigorous. Continuing their work, Noll \cite{noll1955} proved two lemmas, which allowed him to avoid the use of the Dirac delta distribution, and thus arrive at a closed-form expression for the stress tensor which does not involve a series expansion. We refer to the procedure introduced by Irving and Kirkwood and extended by Noll as the \emph{Irving--Kirkwood--Noll procedure}. Schofield and Henderson \cite{schofield1982} highlighted the non-uniqueness present in the stress tensor derived by Irving and Kirkwood, and pointed out that it could result in a non-symmetric stress tensor. There have been several attempts to improve on the Irving and Kirkwood procedure. In particular, Lutsko \cite{lutsko1988} reformulated this procedure in Fourier space. An error in Lutsko's derivation was corrected by Cormier et al. \cite{cormier2001}. Due to the stochastic nature of the Irving and Kirkwood stress, many difficulties arise when one tries to use their expression in atomistic simulations. To avoid these difficulties, Hardy and co-workers \cite{hardy1982,hardy2002} and independently Murdoch \cite{murdoch1982,murdoch1993,murdoch1994,murdoch2003,murdoch2007} developed a new approach that bypasses the mathematical complexity of the Irving and Kirkwood procedure. This is done by defining continuum fields as direct spatial averages of the discrete equations of motion using weighting functions with compact support. In particular, this approach leads to the so-called \emph{Hardy stress} \cite{hardy1982} often used in molecular dynamics simulations. Murdoch in \cite{murdoch2007} provides an excellent description of the spatial averaging approaches currently being used and discusses the non-uniqueness of the stress tensor resulting from the spatial averaging procedure. 
We refer to the direct spatial averaging approach as the \emph{Murdoch--Hardy procedure}. Another approach, which leads to a stress tensor very similar to that obtained by Irving and Kirkwood, is the reformulation of elasticity theory using peridynamics \cite{silling2000}. Lehoucq and Silling \cite{lehoucq2008} have recently shown that Noll's solution is a minimum solution in a variational sense. Morante et al. \cite{morante2006} proposed a new approach for defining the stress tensor using the invariance of the partition function under infinitesimal canonical point transformations. However, their approach is limited to equilibrium statistical mechanics and involves taking derivatives of delta distributions. We can summarize the ``state of the art'' for the microscopic definition of the stress tensor as follows. There are currently at least three definitions for the stress tensor which are commonly used in atomistic simulations: the virial stress, the Tsai traction, and the Hardy stress \cite{zim2004}. The importance of the Irving and Kirkwood formulation is recognized, however, it is not normally used in practice and its connection with the other stress definitions is not commonly understood. The difference between \emph{pointwise} stress measures and temporal and/or spatially-averaged quantities is often not fully appreciated. The result is that the connection between the Cauchy stress tensor defined in continuum mechanics and its analogue, defined for a discrete system, remains controversial and continues to be a highly-debated problem. \subsubsection*{A unified framework for the microscopic definition for the stress tensor} In this paper, a unified framework based on the Irving--Kirkwood--Noll procedure is established which \emph{leads to all of the major stress definitions} discussed above and identifies additional possible definitions.
Since all of the definitions are obtained from a common framework, the connections between them can be explored and analyzed, and the uniqueness of the stress tensor can be established. An overview of the approach and the organization of the paper are described below. Before turning to the general framework, we begin in \sref{ch:canonical} with a derivation of the virial stress tensor within the framework of equilibrium statistical mechanics using the technique of canonical transformations. Although this derivation is quite different from the Irving--Kirkwood--Noll procedure, it provides insight into how the geometric ideas of mechanics can be used to derive the stress tensor. It also provides a limit to which the general non-equilibrium stress tensor must converge under equilibrium conditions in the thermodynamic limit. This is used later to establish the uniqueness of the stress tensor obtained from our general unified framework. Next, we turn to the construction of the new unified framework. In \sref{ch:phase}, we extend the Irving--Kirkwood--Noll procedure \cite{ik1950,noll1955}, originally derived for pair potential interactions, to multi-body potentials. Due to the invariance of the potential energy function with respect to the Euclidean group, it can be shown that any multi-body potential can be expressed as a function of distances between particles. When expressed in this form, we note that for a system of more than 4 particles, this function is only defined on a manifold, since the $N(N-1)/2$ distances between $N$ particles in $\real{3}$ are not independent for $N\ge5$. To apply the Irving--Kirkwood--Noll procedure to multi-body potentials, we recognize that the potential energy function must be {\em extended} from this manifold to a higher-dimensional Euclidean space as a continuously differentiable function.
We show that if such an extension exists, then an infinite number of equivalent extensions can be constructed using \emph{Cayley-Menger determinants}, which describe the constraints that the distances between particles embedded in $\real{3}$ must satisfy. Then for multi-body potentials that possess continuously differentiable extensions (which is the case for most practical interatomic potentials), we establish the key result that due to the balance of linear and angular momentum, \emph{the force on a particle in a discrete system can always be decomposed as a sum of central forces between particles}, i.e., forces that are parallel to the lines connecting the particles. In other words, the \emph{strong law of action and reaction} is always satisfied for such multi-body potentials. We show that, although the net force on a particle calculated using \emph{any} extension is the same, its decomposition into central forces is generally different for different extensions. Using this result we show that the pointwise stress tensor resulting from the Irving--Kirkwood--Noll procedure is non-unique and symmetric. We also show that a generalization of Noll's lemmas \cite{noll1955} to \emph{non-straight bonds} gives a non-symmetric stress tensor that may be important for particles with internal structure, such as liquid crystals. The {\em macroscopic} stress tensor corresponding to the pointwise stress tensor described above is obtained in \sref{ch:spatial} through a procedure of spatial averaging. The connection between this stress and the stress tensors obtained via the direct spatial averaging procedure introduced by Murdoch \cite{murdoch1982,murdoch1993,murdoch1994,murdoch2003,murdoch2007} and Hardy \cite{hardy1982} is explored, and in the process the Murdoch--Hardy procedure is systematized and generalized to multi-body potentials using the results of \sref{ch:phase}.
The non-uniqueness of the stress tensor inherent in the Murdoch--Hardy procedure is studied and a general class of possible definitions under this procedure is identified. The connection between the non-uniqueness in the Murdoch--Hardy procedure and the non-uniqueness mentioned in \sref{ch:phase} is addressed. In \sref{ch:compare}, various stress definitions including the Hardy stress, the Tsai traction and the virial stress are shown to be special cases of the macroscopic stress tensor derived from the extended Irving--Kirkwood--Noll procedure in \sref{ch:spatial}. The original definitions for these measures are generalized in this manner to multi-body potentials. The existence of different extensions for the potential energy function, which led to the non-uniqueness of the pointwise stress tensor discussed in \sref{ch:phase}, also results in the non-uniqueness of these definitions. However, it is shown that the difference in the macroscopic stress tensor resulting from this non-uniqueness tends to zero in the \emph{thermodynamic limit}\footnote{\label{foot:tdlimit} The thermodynamic limit is the state obtained as the number of particles, $N$, and the volume, $V$, of the system tend to infinity in such a way that the ratio $N/V$ is constant.}. Another source of non-uniqueness explored in this section is that given any definition for the stress tensor, a new definition, which also satisfies the balance of linear momentum, can be obtained by adding to it an arbitrary tensor field with zero divergence. It is shown that in the thermodynamic limit the macroscopic stress tensor obtained in \sref{ch:spatial} converges to the virial stress derived in \sref{ch:canonical}. To address practical aspects of the different definitions obtained within the unified framework, \sref{ch:experiment} describes several ``numerical experiments'' involving molecular dynamics and lattice statics.
These simulations are designed to examine the behavior of these stress definitions, including their convergence with averaging domain size and their symmetry properties. Our conclusions and directions for future research are presented in \sref{ch:conclusions}. \subsubsection*{Notation} In this paper, vectors are denoted by lower case letters in bold font and tensors of higher order are denoted by capital letters in bold font. The tensor product of two vectors is denoted by the symbol ``$\otimes$'' and the inner product of two vectors is denoted by a dot ``$\cdot$''. The inner product of two second-order tensors is denoted by ``:''. A second-order tensor operating on a vector is denoted by juxtaposition, e.g., $\bm{T}\bm{v}$. The gradient of a vector field, $\bm{v}(\bm{x})$, is denoted by $\nabla_{\bm{x}} \bm{v}(\bm{x})$, which in indicial notation is given by $[ \nabla_{\bm{x}} \bm{v} ]_{ij} = \partial v_i/\partial x_j$. The divergence of a tensor field, $\bm{T}(\bm{x})$, is denoted by $\operatorname{div}_{\bm{x}} \bm{T}(\bm{x})$. The divergence of a vector field is defined as the trace of its gradient. The divergence of a second-order tensor field in indicial notation (with Einstein's summation convention) is given by $[ \operatorname{div}_{\bm{x}} \bm{T} ]_i = \partial T_{ij}/\partial x_j $. The notation described above is followed unless otherwise explicitly stated. \section{Stress in an equilibrium system} \label{ch:canonical} In this section, we obtain expressions for the Cauchy stress in an equilibrium system using the technique of canonical transformations. The basic philosophy behind canonical transformations is explained in the next section. \subsection{Canonical transformations} \label{sec:can_trans} Consider a system consisting of $N$ point masses whose behavior is governed by classical mechanics.
Let $\bm{q}_\alpha(t)$ and $\bm{p}_\alpha(t)$ ($\alpha = 1,2,\dots,N)$ denote the generalized coordinates and momenta of the system.\footnote{In a general theory of canonical transformations, $\bm{q}_\alpha$ and $\bm{p}_\alpha$ need not denote the actual position and momentum of particle $\alpha$.} For brevity, we sometimes use $\bm{q}(t)$ and $\bm{p}(t)$ to denote the vectors $(\bm{q}_1(t),\bm{q}_2(t),\dots,\bm{q}_N(t))$ and $(\bm{p}_1(t), \bm{p}_2(t),\dots, \bm{p}_N(t))$, respectively. The time evolution of the system can be studied through three well-known approaches, referred to as the {\em Newtonian formulation}, the {\em Lagrangian formulation}, and the {\em Hamiltonian formulation}. The first approach is used in molecular dynamics simulations, while the latter two approaches are more elegant and can sometimes be used to obtain useful information from systems in the absence of closed-form solutions. In the Lagrangian formulation, a system is characterized by the vector $\bm{q}(t)$ and a Lagrangian function $\mathcal{L}$, given by \begin{equation} \mathcal{L}(\bm{q},\dot{\bm{q}};t) = \mathcal{T}(\dot{\bm{q}}) - \mathcal{V}(\bm{q}), \end{equation} where $\mathcal{T}$ is the kinetic energy of the system, $\mathcal{V}$ is the potential energy of the system, and $\dot{\bm{q}}(t)$ represents the time derivative of $\bm{q}(t)$. It is useful to think of $\bm{q}$ as a point in a $3N$-dimensional {\em configuration space}. The time evolution of $\bm{q}(t)$ in configuration space is described by a variational principle called \emph{Hamilton's principle}. 
Hamilton's principle states that the time evolution of $\bm{q}(t)$ corresponds to the extremum of the action integral defined as a functional of $\bm{q}$ by \begin{equation} \mathcal{A}[\bm{q}] = \int_{t_1}^{t_2} \mathcal{L}(\bm{q},\dot{\bm{q}};t) \, dt, \label{eqn:action} \end{equation} where $t_1$, $t_2$, $\bm{q}(t_1)$ and $\bm{q}(t_2)$ are held fixed with respect to the class of variations being considered \cite[Section V.1]{lanczos}. In mathematical terms, we require that \begin{equation} \delta \mathcal{A} = 0, \label{eqn:var_lag} \end{equation} while keeping the ends fixed as described above. The Euler--Lagrange equation arising from \eref{eqn:var_lag} is \begin{equation} \frac{d}{dt} \left (\frac{\partial \mathcal{L}}{\partial \bmd{q}_\alpha} \right) - \frac{\partial \mathcal{L}}{\partial \bm{q}_\alpha} = \bm{0}. \label{eqn:lagrange} \end{equation} The Lagrangian formulation is commonly used as a calculation tool in solving simple problems. Next, we note that the Lagrangian is the Legendre transform of the Hamiltonian $\mathcal{H}$, \cite[Section VI.2]{lanczos}, \begin{equation} \mathcal{L}(\bm{q},\dot{\bm{q}}; t) = \sup_{\bm{p}} [ \bm{p} \cdot \bmd{q} - \mathcal{H}(\bm{p},\bm{q};t) ]. \end{equation} The Hamiltonian is the total energy of the system. Using the Hamiltonian, equation~\eref{eqn:var_lag} can be rewritten as \begin{equation} \delta \int_{t_1}^{t_2} \left [ \bm{p} \cdot \bmd{q} - \mathcal{H}(\bm{p},\bm{q};t) \right ] \, dt = 0. \label{eqn:var_hamilton} \end{equation} Note that in \eref{eqn:var_lag}, the variation is only with respect to $\bm{q}$, whereas in \eref{eqn:var_hamilton}, the functional depends on the functions $\bm{q}$ and $\bm{p}$, and variations are taken with respect to both $\bm{q}$ and $\bm{p}$ independently. In both cases, $t_1$, $t_2$, $\bm{q}(t_1)$ and $\bm{q}(t_2)$ are held fixed. 
The variational principle given in \eref{eqn:var_hamilton} is commonly referred to as the \emph{modified Hamilton's principle} \cite{goldstein} or simply as the ``Hamiltonian formulation''. The advantage of the Hamiltonian formulation lies not in its use as a calculation tool, but rather in the deeper insight it affords into the formal structure of mechanics. The Euler--Lagrange equations associated with \eref{eqn:var_hamilton} are \begin{align} \bmd{q}_\alpha &= \nabla_{\bm{p}_\alpha} \mathcal{H},\label{eqn:ham1}\\ \bmd{p}_\alpha &= -\nabla_{\bm{q}_\alpha} \mathcal{H},\label{eqn:ham2} \end{align} commonly called Hamilton's equations. The above equations are also referred to as the \emph{canonical equations of motion}\footnote{The term ``canonical'' in this context has nothing to do with the canonical ensemble of statistical mechanics. The terminology was introduced by Jacobi to indicate that Hamilton's equations constitute the simplest form of the equations of motion.}. It is important to note that the Hamiltonian formulation is more general than the Lagrangian formulation, since it accords the coordinates and momenta independent status, thus providing the analyst with far greater freedom in selecting generalized coordinates. We now think of $(\bm{q},\bm{p})$ as a point in a $6N$-dimensional \emph{phase space}, as opposed to the $3N$-dimensional configuration space of the Lagrangian formulation. The choice of $\bm{q}$ and $\bm{p}$ is not arbitrary, however, since the selected variables must satisfy the canonical equations of motion. For this reason $\bm{q}$ and $\bm{p}$ are called \emph{canonical variables}. The requirement that the generalized coordinates and momenta must be canonical means that new sets of generalized coordinates can be derived from a given set through a special kind of transformation defined below.
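As a concrete illustration of the canonical equations of motion (a minimal numerical sketch, not part of the formal development), consider a one-dimensional harmonic oscillator with $\mathcal{H}=p^2/2+q^2/2$ (unit mass and stiffness). Integrating $\dot{q}=\partial\mathcal{H}/\partial p$, $\dot{p}=-\partial\mathcal{H}/\partial q$ with a symplectic (semi-implicit) Euler step reproduces the exact solution $q(t)=\cos t$, $p(t)=-\sin t$ and conserves the energy to $O(\Delta t)$.

```python
import math

# Illustration only: integrate Hamilton's equations
#   q' = dH/dp,  p' = -dH/dq   for H = p^2/2 + q^2/2  (m = k = 1)
# with the symplectic (semi-implicit) Euler scheme and compare with the
# exact solution q(t) = cos t, p(t) = -sin t.
def integrate(q0, p0, dt, steps):
    q, p = q0, p0
    for _ in range(steps):
        p -= q * dt        # p' = -dH/dq = -q
        q += p * dt        # q' =  dH/dp =  p
    return q, p

q, p = integrate(1.0, 0.0, 1e-4, 10_000)   # evolve up to t = 1
assert abs(q - math.cos(1.0)) < 1e-3
assert abs(p + math.sin(1.0)) < 1e-3
# The symplectic scheme conserves the energy to O(dt):
assert abs((q * q + p * p) / 2 - 0.5) < 1e-3
```

Updating $p$ before $q$ is what makes the scheme symplectic; swapping the two lines for a plain forward Euler step would cause the energy to drift systematically.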
\spnewtheorem{definitions}{Definition} \begin{definitions} Any transformation of generalized coordinates that preserves the canonical form of Hamilton's equations is said to be a canonical transformation.\footnote{This definition suffices for our purpose, but a more correct definition can be found in \cite{arnold} using \emph{differential forms}.} \end{definitions} The construction of canonical transformations is facilitated by the introduction of \emph{generating functions} as explained below. \subsubsection*{Generating functions} Consider two sets of canonical variables $(\bm{Q},\bm{P})$ and $(\bm{q},\bm{p})$, related to each other through a canonical transformation given by \begin{equation} \bm{Q} = \bm{Q}(\bm{q},\bm{p},t), \qquad \bm{P} = \bm{P}(\bm{q},\bm{p},t). \label{eqn:trans_pq_PQ} \end{equation} Since the variables are canonical, they satisfy the modified Hamilton's principle in \eref{eqn:var_hamilton}, \begin{align} \delta \int_{t_1}^{t_2} \left [ \bm{p} \cdot \bmd{q} - \mathcal{H}(\bm{p},\bm{q};t) \right ] \, dt &= 0, \label{eqn:actionvarone} \\ \delta \int_{t_1}^{t_2} \left [ \bm{P} \cdot \bmd{Q} - \hat{\mathcal{H}}(\bm{P},\bm{Q};t) \right ] \, dt &= 0, \label{eqn:actionvartwo} \end{align} where $\hat{\mathcal{H}}$ is defined later as part of the canonical transformation. The integrands of \eref{eqn:actionvarone} and \eref{eqn:actionvartwo} can therefore only differ by a quantity whose variation after integration is identically zero. A possible solution is \begin{equation} \delta \int_{t_1}^{t_2} \left [ \bm{p} \cdot \bmd{q} - \bm{P} \cdot \bmd{Q} - (\mathcal{H} - \hat{\mathcal{H}}) \right ] \, dt = \delta \int_{t_1}^{t_2} \frac{dG}{dt} \, dt, \end{equation} where $G$ is an arbitrary scalar function of the canonical variables and time, with continuous second derivatives. The integral on the right is only evaluated at fixed integration bounds and its variation is zero. 
This is not obvious since there is no restriction on the variation of the momenta at the ends. We assume this to be true to avoid the introduction of differential forms. For a mathematically rigorous argument refer to \cite[Section 45]{arnold}\footnote{Briefly, the proof is based on the symmetry present in the geometry of any Hamiltonian system, commonly called \emph{symplectic geometry}.}. The difference between the integrands of \eref{eqn:actionvarone} and \eref{eqn:actionvartwo} therefore satisfies \begin{equation} dG - \bm{p} \cdot d\bm{q} + \bm{P} \cdot d\bm{Q} + (\mathcal{H} - \hat{\mathcal{H}})dt = 0. \label{eqn:diff_G_1} \end{equation} Now, consider the case where $G = G_1(\bm{q},\bm{Q},t)$. The total differential of $G$ is then \begin{equation} dG = \nabla_{\bm{q}} G_1 \cdot d\bm{q} + \nabla_{\bm{Q}} G_1 \cdot d\bm{Q} + \frac{\partial G_1}{\partial t} dt. \label{eqn:diff_G_3} \end{equation} Substituting \eref{eqn:diff_G_3} into \eref{eqn:diff_G_1} gives \begin{equation} \left ( \nabla_{\bm{q}} G_1 - \bm{p} \right ) \cdot d\bm{q} + \left ( \nabla_{\bm{Q}} G_1 + \bm{P} \right ) \cdot d\bm{Q} + \left ( \frac{\partial G_1}{\partial t} + \mathcal{H} - \hat{\mathcal{H}} \right ) dt = 0. \end{equation} Since $\bm{q}$, $\bm{Q}$ and $t$ are independent, the above equation is satisfied provided that \begin{equation} \bm{p}_\alpha = \frac{\partial G_1}{\partial \bm{q}_\alpha}, \qquad \bm{P}_\alpha = -\frac{\partial G_1}{\partial \bm{Q}_\alpha}, \qquad \hat{\mathcal{H}} = \mathcal{H} + \frac{\partial G_1}{\partial t}. \label{eqn:tranform_1} \end{equation} The above relations define the canonical transformation. Since $G_1$ generates the transformation, it is commonly called the \emph{generating function} of the canonical transformation. Note that if $G_1$ does not depend on time $t$, then $\hat{\mathcal{H}} = \mathcal{H}$. Generating functions of the form $G = G_1(\bm{q},\bm{Q},t)$ do not generate all possible canonical transformations.
In general, there are four primary classes of generating functions where the functional dependence is $(\bm{q},\bm{Q})$, $(\bm{q},\bm{P})$, $(\bm{p},\bm{Q})$ and $(\bm{p},\bm{P})$.\footnote{In addition to these four classes of transformation, it is possible to have a mixed dependence, where each degree of freedom can belong to a different class \cite{goldstein}.} We have already encountered the first class, where $G = G_1(\bm{q},\bm{Q},t)$. The remaining classes can be obtained from the first through Legendre transformations. Consider, for example, the following definition: \begin{equation} G = G_3(\bm{p},\bm{Q},t) + \bm{q} \cdot \bm{p}. \end{equation} The total differential of this expression is \begin{equation} dG = \nabla_{\bm{p}} G_3 \cdot d\bm{p} + \nabla_{\bm{Q}} G_3 \cdot d\bm{Q} + \frac{\partial G_3}{\partial t} dt + \bm{q} \cdot d\bm{p} + \bm{p} \cdot d\bm{q}. \end{equation} Substituting the above equation into \eref{eqn:diff_G_1} gives \begin{equation} \left ( \nabla_{\bm{p}} G_3 + \bm{q} \right ) \cdot d\bm{p} + \left( \nabla_{\bm{Q}} G_3 + \bm{P} \right ) \cdot d\bm{Q} + \left( \frac{\partial G_3}{\partial t} + \mathcal{H} - \hat{\mathcal{H}} \right ) dt = 0, \end{equation} which leads to the following canonical transformation: \begin{equation} \bm{q}_\alpha = -\frac{\partial G_3}{\partial \bm{p}_\alpha}, \qquad \bm{P}_\alpha = -\frac{\partial G_3}{\partial \bm{Q}_\alpha}, \qquad \hat{\mathcal{H}} = \mathcal{H} + \frac{\partial G_3}{\partial t}. \label{eqn:transform_2} \end{equation} The other two classes of transformation can be derived in a similar way. Finally, an important property of a canonical transformation is that it preserves the volume of any element in phase space, i.e., $d\bm{q}d\bm{p}=d\bm{Q}d\bm{P}$ \cite[page 402]{goldstein}. This means that for a change of variables between $(\bm{p},\bm{q})$ and $(\bm{P},\bm{Q})$, the Jacobian of the transformation is unity.
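The unit-Jacobian property is easy to check numerically on a small example. The following sketch (an assumed illustration, not taken from the text) applies it to the shear map $Q = q + p$, $P = p$ for a single degree of freedom, which is canonical, and verifies by finite differences that the Jacobian determinant of $(q,p)\mapsto(Q,P)$ equals one:

```python
# Illustrative check (assumed example, not from the text): the map
#   Q = q + p,  P = p
# is canonical for one degree of freedom, and canonical maps have a unit
# Jacobian, so the phase-space volume element satisfies dq dp = dQ dP.

def transform(q, p):
    """A simple canonical 'shear' transformation."""
    return q + p, p  # (Q, P)

q0, p0, h = 0.4, -0.9, 1e-6

# Finite-difference Jacobian d(Q,P)/d(q,p).
Qq = (transform(q0 + h, p0)[0] - transform(q0 - h, p0)[0]) / (2 * h)
Qp = (transform(q0, p0 + h)[0] - transform(q0, p0 - h)[0]) / (2 * h)
Pq = (transform(q0 + h, p0)[1] - transform(q0 - h, p0)[1]) / (2 * h)
Pp = (transform(q0, p0 + h)[1] - transform(q0, p0 - h)[1]) / (2 * h)

# For one degree of freedom this determinant also equals the Poisson
# bracket {Q, P}, which must be 1 for a canonical transformation.
det_J = Qq * Pp - Qp * Pq
print(det_J)
```

The same check can be run on any candidate transformation; a determinant different from one immediately rules it out as canonical.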
\subsection{A derivation of the stress tensor under equilibrium conditions} In this section, we use the method of canonical transformations to derive an expression for the Cauchy stress tensor. In continuum mechanics, a body $\mathcal{B}$ is identified with a regular region of Euclidean space $\mathcal{E}$, referred to as the reference configuration. Any point $\bm{X} \in \mathcal{B}$ is referred to as a material point. The body $\mathcal{B}$ is deformed via a smooth, one-to-one mapping $\bm{\varphi}:\mathcal{E} \to \mathcal{E}$, which maps each $\bm{X} \in \mathcal{B}$ to a point, \begin{equation} \bm{x} = \bm{\varphi}(\bm{X}), \end{equation} in the deformed configuration,\footnote{We adopt the continuum mechanics convention of denoting variables in the reference configuration with upper-case letters, and variables in the deformed configuration with lower-case letters.} where we have assumed that the deformation is independent of time. The deformation gradient $\bm{F}$ is defined as \begin{equation} \bm{F}(\bm{X}) = \nabla_{\bm{X}} \bm{\varphi}. \end{equation} The mapping $\bm{\varphi}$ is assumed to satisfy the condition that $\det \bm{F}$ is strictly positive. The Cauchy stress, $\bm{\sigma}$, is defined by \cite{malvern} \begin{equation} \label{eqn:cauchy_psi} \bm{\sigma}(T,\bm{F}) = \frac{1}{\det \bm{F}}\nabla_{\bm{F}} \psi \, \bm{F}^{{\rm T}}, \end{equation} where $\psi(T,\bm{F})$ is the Helmholtz free energy density function relative to the reference configuration. We restrict our attention to a conservative elastic body. A system in thermodynamic equilibrium\footnote{\label{foot:tdequil} A system is said to be in a state of thermodynamic equilibrium when all of its properties are independent of time and all of its intensive properties are independent of position \cite{weiner}. To stress this, the term {\em uniform state of thermodynamic equilibrium} is sometimes used to describe this state.} can by definition only support a uniform state of deformation.
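To illustrate \eref{eqn:cauchy_psi}, the following sketch (an assumed toy model, not from the text; the free energy used here is a compressible neo-Hookean form chosen purely for illustration) evaluates $\bm{\sigma}$ by finite differencing $\psi$ with respect to $\bm{F}$ and confirms that the resulting stress is symmetric:

```python
import numpy as np

# Assumed toy free energy (compressible neo-Hookean type, illustration only):
#   psi(F) = mu/2 (tr(F^T F) - 3) - mu ln(det F) + lam/2 (ln det F)^2.
# The Cauchy stress follows from sigma = (1/det F) (d psi / d F) F^T.
mu, lam = 1.0, 2.0

def psi(F):
    J = np.linalg.det(F)
    return (0.5 * mu * (np.trace(F.T @ F) - 3.0)
            - mu * np.log(J) + 0.5 * lam * np.log(J) ** 2)

def cauchy(F, h=1e-6):
    """sigma = (1/det F) (d psi/d F) F^T via central finite differences."""
    dpsi_dF = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            dF = np.zeros((3, 3)); dF[i, j] = h
            dpsi_dF[i, j] = (psi(F + dF) - psi(F - dF)) / (2 * h)
    return (dpsi_dF @ F.T) / np.linalg.det(F)

F = np.array([[1.10, 0.05, 0.00],
              [0.00, 0.95, 0.02],
              [0.03, 0.00, 1.08]])  # det F > 0
sigma = cauchy(F)
asym = np.max(np.abs(sigma - sigma.T))  # should be ~0 (symmetric stress)
```

For an isotropic, frame-indifferent $\psi$ the symmetry of $\bm{\sigma}$ comes out automatically, mirroring the continuum-mechanics requirement noted later in the text.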
Therefore, our material system is deformed via the affine mapping\footnote{To understand this mapping, consider a system of $N$ particles with positions $\bm{q}_\alpha$ ($\alpha=1,2,\dots,N$) confined to a parallelepiped container defined by the three linearly independent vectors $\bm{l}_1$, $\bm{l}_2$ and $\bm{l}_3$, which need not be orthogonal. This selection is done for convenience and does not limit the generality of the derivation as explained below. The position of a particle in the container can be expressed in terms of scaled coordinates $\xi^\alpha_i\in[0,1]$ as \begin{equation*} \bm{q}_\alpha = \xi^\alpha_i\bm{l}_i, \tag{*} \label{eq:atomposnorm} \end{equation*} where Einstein's summation convention is applied to spatial indices. The deformation of the container is defined relative to a reference configuration where the cell vectors are $\bm{L}_1$, $\bm{L}_2$ and $\bm{L}_3$. The current and reference cell vectors are related through an affine mapping defined by $\bm{F}$, \begin{equation*} \bm{l}_i = \bm{F} \bm{L}_i. \tag{**} \label{eq:cellaffine} \end{equation*} Equations \eref{eq:atomposnorm} and \eref{eq:cellaffine} can be combined to relate the position $\bm{q}_\alpha$ of particle $\alpha$ in the deformed configuration with its position in the reference configuration $\bm{Q}_\alpha$, \begin{equation*} \bm{q}_\alpha = \xi^\alpha_i(\bm{F}\bm{L}_i) = \bm{F}(\xi^\alpha_i\bm{L}_i) = \bm{F}\bm{Q}_\alpha. \tag{***} \label{eq:posaffinemap} \end{equation*} This is exactly the mapping defined in \eref{eqn:map}. It provides a direct relationship between the positions of particles in the reference configuration and their position in the deformed configuration. Note that the assumed (parallelepiped) shape of the container does not enter into the relation, $\bm{q}_\alpha=\bm{F}\bm{Q}_\alpha$, which means that this relation holds for a container of any shape. 
It is important to note that \eref{eq:posaffinemap} does {\em not} impose a kinematic constraint that dictates the position of particle $\alpha$ in the deformed configuration based on its position in the reference configuration (as does the Cauchy--Born rule used in multiscale methods \cite{tadmor2009}). We will see later that this relation is merely used as a change of variables, where, instead of integrating over the deformed configuration with the variables $\bm{q}$, the integration is carried out over a given reference configuration using the variables $\bm{Q}$. In both cases the same result is obtained. However, by using the referential variables the dependence on the deformation gradient is made explicit. } \begin{equation} \label{eqn:map} \bm{q}_\alpha = \bm{F} \bm{Q}_\alpha. \end{equation} It is clear that if we enforce this mapping on our system, with no change in the momentum coordinates, then the newly obtained variables will not satisfy Hamilton's equations. Therefore, any change of variables must be governed by a canonical transformation. The following generating function provides the desired canonical transformation: \begin{equation} G_3(\bm{p},\bm{Q}) = -\sum_{\alpha} \bm{p}_{\alpha} \cdot \bm{F} \bm{Q}_{\alpha}. \end{equation} Substituting this generating function into \eref{eqn:transform_2} gives \begin{equation} \label{eqn:transform_3} \bm{q}_\alpha = -\frac{\partial G_3}{\partial \bm{p}_\alpha} = \bm{F} \bm{Q}_\alpha, \qquad \bm{P}_\alpha = -\frac{\partial G_3}{\partial \bm{Q}_\alpha} = \bm{F}^{{\rm T}}\bm{p}_\alpha, \qquad \hat{\mathcal{H}} = \mathcal{H}. \end{equation} The first relation in the above equation is the desired transformation in \eref{eqn:map}. The second relation is the corresponding transformation that the momentum degrees of freedom must satisfy, so that the new set of coordinates $(\bm{Q},\bm{P})$ is canonical.
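The first two relations in \eref{eqn:transform_3} can be verified numerically. The sketch below (an assumed illustration, not from the text) builds $G_3(\bm{p},\bm{Q}) = -\bm{p}\cdot\bm{F}\bm{Q}$ for a single particle and checks $\bm{q} = -\partial G_3/\partial\bm{p} = \bm{F}\bm{Q}$ and $\bm{P} = -\partial G_3/\partial\bm{Q} = \bm{F}^{\rm T}\bm{p}$ by central finite differences:

```python
import numpy as np

# Assumed setup: one particle, a random near-identity deformation gradient F,
# and the generating function G3(p, Q) = -p . (F Q) from the text.
rng = np.random.default_rng(0)
F = np.eye(3) + 0.1 * rng.standard_normal((3, 3))  # deformation gradient
p = rng.standard_normal(3)                          # deformed-config momentum
Q = rng.standard_normal(3)                          # reference position

def G3(p, Q):
    return -p @ (F @ Q)

# Central finite-difference gradients of G3.
h = 1e-6
dG3_dp = np.array([(G3(p + h*e, Q) - G3(p - h*e, Q)) / (2*h) for e in np.eye(3)])
dG3_dQ = np.array([(G3(p, Q + h*e) - G3(p, Q - h*e)) / (2*h) for e in np.eye(3)])

q_from_G = -dG3_dp   # should equal F @ Q   (first relation)
P_from_G = -dG3_dQ   # should equal F.T @ p (second relation)
err_q = np.max(np.abs(q_from_G - F @ Q))
err_P = np.max(np.abs(P_from_G - F.T @ p))
```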
The third relation refers to the Hamiltonian of the system, which is assumed to be given by \begin{equation} \mathcal{H}(\bm{p},\bm{q}) = \sum_{\alpha=1}^N \frac{\bm{p}_\alpha\cdot\bm{p}_\alpha}{2m_\alpha} + \mathcal{V}(\bm{q}_1,\dots,\bm{q}_N), \label{eq:ham} \end{equation} where $\mathcal{V}$ denotes the potential energy of the system. Expressed in terms of the reference variables, \eref{eq:ham} becomes \begin{align} \hat{\mathcal{H}}(\bm{P},\bm{Q},\bm{F}) &= \mathcal{H}(\bm{p}(\bm{Q},\bm{P},\bm{F}), \bm{q}(\bm{Q},\bm{P},\bm{F})) \notag \\ &= \sum_{\alpha=1}^{N} \frac{\bm{F}^{-{\rm T}} \bm{P}_\alpha \cdot \bm{F}^{-{\rm T}} \bm{P}_\alpha}{2m_\alpha} + \mathcal{V}(\bm{F} \bm{Q}_1,\dots,\bm{F} \bm{Q}_N). \end{align} We now proceed to derive the expression for the Cauchy stress tensor using \eref{eqn:cauchy_psi}. The Helmholtz free energy density for the canonical ensemble is given by \cite{huang} \begin{equation} \psi(T,\bm{F}) = -\frac{k_B T \ln{Z}}{V_0}, \label{eqn:psi} \end{equation} where $k_B$ is Boltzmann's constant, $T$ is the absolute temperature, $V_0$ is the volume of the body in the reference configuration, and $Z(T,\bm{F})$ is the \emph{partition function} defined as \begin{equation} Z(T,\bm{F}) := \frac{1}{N! h^{3N}} \int_{\Gamma_0} e^{-\hat{\mathcal{H}}/k_B T} \, d\bm{P} d\bm{Q}, \label{eqn:part_func} \end{equation} where $h$ is Planck's constant and $\Gamma_0$ denotes the phase space in the reference configuration. With this definition, the statistical mechanics phase average of a function $A(\bm{P},\bm{Q})$ in the canonical ensemble is \begin{equation} \langle A \rangle(T,\bm{F}) = \int_{\Gamma_0} A(\bm{P},\bm{Q}) W_{\rm{c}}(\bm{P},\bm{Q},T,\bm{F}) \, d\bm{P} \, d\bm{Q}, \label{eqn:phaseave} \end{equation} where \begin{equation} \label{eqn:w_canonical} W_{\rm{c}}(\bm{P},\bm{Q},T,\bm{F}) = \frac{1}{N! h^{3N} Z} e^{-\hat{\mathcal{H}}(\bm{P},\bm{Q},\bm{F})/k_B T} \end{equation} is the canonical distribution function.
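As a sanity check on the canonical phase average \eref{eqn:phaseave} (an added toy example, not part of the original text), the sketch below evaluates $\langle \mathcal{H}\rangle$ for a one-dimensional harmonic oscillator by direct quadrature over phase space and recovers the equipartition value $k_B T$:

```python
import math

# Assumed toy system: 1D harmonic oscillator H = p^2/(2m) + k q^2/2.
# With the canonical distribution W_c ~ exp(-H/kT)/Z, the phase average
# <H> should equal k_B T (kT/2 kinetic + kT/2 potential by equipartition).
m, k, kT = 2.0, 3.0, 0.8

def H(p, q):
    return p * p / (2 * m) + 0.5 * k * q * q

# Crude midpoint quadrature over a box wide enough for the Gaussian tails;
# the constant prefactor 1/(N! h^{3N}) cancels in the normalized average.
n, L = 400, 12.0
dp = dq = 2 * L / n
Z = 0.0
H_avg = 0.0
for i in range(n):
    p = -L + (i + 0.5) * dp
    for j in range(n):
        q = -L + (j + 0.5) * dq
        w = math.exp(-H(p, q) / kT)
        Z += w
        H_avg += H(p, q) * w
H_avg /= Z   # normalized canonical average of H
```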
Substituting \eref{eqn:psi} and \eref{eqn:part_func} into \eref{eqn:cauchy_psi}, we obtain \begin{equation} \label{eqn:cauchy_psi_2} \bm{\sigma} = -\frac{k_B T}{(\det \bm{F}) V_0 Z} \nabla_{\bm{F}} Z \bm{F}^{{\rm T}} = \frac{1}{V} \left \langle \nabla_{\bm{F}} \hat{\mathcal{H}} \right \rangle \bm{F}^{{\rm T}}, \end{equation} where in the last step we have used the identity $V = (\det \bm{F}) V_0$ and where \begin{align} \nabla_{\bm{F}} Z &= \frac{\partial}{\partial \bm{F}} \left [ \frac{1}{N! h^{3N}} \int_{\Gamma_0} e^{-\hat{\mathcal{H}}/k_B T} \, d\bm{Q} d\bm{P} \right ] \notag \\ &= -\frac{1}{k_B T N! h^{3N}} \int_{\Gamma_0} \nabla_{\bm{F}} \hat{\mathcal{H}} e^{-\hat{\mathcal{H}}/k_B T} \, d\bm{Q} d\bm{P}. \end{align} Next, we compute $\nabla_{\bm{F}} \hat{\mathcal{H}}$. In our derivation, we make use of indicial notation and the Einstein summation rule. To accommodate the spatial indices, we move the particle label $\alpha$ to the superscript position. Following this adjustment, we have \begin{equation} \label{eqn:dH_dF} \frac{\partial \hat{\mathcal{H}}}{\partial F_{iJ}} = \frac{\partial}{\partial F_{iJ}} \left [ \sum_{\alpha} \frac{p_{k}^{\alpha} p_{k}^{\alpha}}{2m^\alpha} + \mathcal{V}(\bm{q}^1,\dots,\bm{q}^N) \right ] = \sum_\alpha \left [ \frac{1}{m^\alpha}\frac{\partial p_{k}^{\alpha}}{\partial F_{iJ}} p_{k}^{\alpha} + \frac{\partial \mathcal{V}}{\partial q_{k}^\alpha} \frac{\partial q_{k}^{\alpha}}{\partial F_{iJ}} \right ].
\end{equation} From \eref{eqn:transform_3}, we have \begin{align} \frac{\partial q_{k}^{\alpha}}{\partial F_{iJ}} &= \frac{\partial}{\partial F_{iJ}} (F_{kL} Q_{L}^{\alpha}) = \delta_{ik} Q_{J}^{\alpha}, \label{eqn:dr_dF} \\ \frac{\partial p_{k}^{\alpha}}{\partial F_{iJ}} &= \frac{\partial}{\partial F_{iJ}} (\inv{F}_{Lk} P_{L}^{\alpha}) = -\inv{F_{Jk}} \inv{F_{Li}} P_{L}^{\alpha} = -\inv{F_{Jk}}p^\alpha_{i}, \label{eqn:dp_dF} \end{align} where in \eref{eqn:dp_dF}, we have used the following identity: \begin{equation} \frac{\partial \inv{F_{Lk}}}{\partial F_{iJ}} = -\inv{F_{Li}} \inv{F_{Jk}}. \end{equation} Substituting \eref{eqn:dr_dF} and \eref{eqn:dp_dF} into \eref{eqn:dH_dF}, we have \begin{equation} \frac{\partial \hat{\mathcal{H}}}{\partial F_{iJ}} = -\sum_{\alpha} \left [\frac{p_{i}^{\alpha} \inv{F_{Jk}} p_{k}^\alpha}{m^\alpha} + f^{\rm int}_{\alpha,i} Q_{J}^\alpha \right ], \end{equation} where $\bm{f}^{\rm int}_\alpha = -\partial \mathcal{V}/\partial \bm{q}_\alpha$ is the internal force, defined in the deformed configuration, on particle $\alpha$.\footnote{There is a subtle point here. Since we are using the canonical ensemble, the Hamiltonian $\mathcal{H}$ neglects the interaction term of the system with the surrounding ``heat bath''. This means that the potential energy $\mathcal{V}$ in $\mathcal{H}$ only includes the {\em internal} energy of the system and, therefore, its derivative with respect to the position of particle $\alpha$ gives the force $\bm{f}^{\rm int}_{\alpha}$ on this particle due to its interactions with other particles in the system.} In direct notation, we have \begin{equation} \nabla_{\bm{F}} \hat{\mathcal{H}} = -\sum_\alpha \left [\frac{\bm{p}_\alpha \otimes \inv{\bm{F}} \bm{p}_\alpha}{m_\alpha} + \bm{f}^{\rm int}_\alpha \otimes \bm{Q}_\alpha \right ].
\end{equation} Substituting the above equation into \eref{eqn:cauchy_psi_2} and using \eref{eqn:transform_3}, we obtain an expression for the Cauchy stress: \begin{equation} \bm{\sigma}(T,\bm{F}) = -\frac{1}{V} \sum_\alpha \left \langle \frac{\bm{p}_\alpha \otimes \bm{p}_\alpha}{m_\alpha} + \bm{f}^{\rm int}_\alpha \otimes \bm{q}_\alpha \right \rangle, \label{eqn:cauchy_canonical} \end{equation} where the phase averaging is now performed with respect to the variables $\bm{p}$ and $\bm{q}$. The switch from phase averaging over $\bm{P}$ and $\bm{Q}$ in \eref{eqn:phaseave} to $\bm{p}$ and $\bm{q}$ above can be made because canonical transformations preserve the volume element in phase space, as explained at the end of \sref{sec:can_trans}. The expression in \eref{eqn:cauchy_canonical} for the Cauchy stress tensor is called the \emph{virial stress}. A simpler derivation of the virial stress, based on time averages, is given in Appendix \ref{ch:virial}. Although the derivation here made use of the canonical ensemble, it is expected to apply to any ensemble in the thermodynamic limit (see footnote~\ref{foot:tdlimit} on page~\pageref{foot:tdlimit}), where all ensembles are equivalent. Continuum mechanics also tells us that the Cauchy stress tensor is symmetric, something that is not evident from the above equation. Discussion of the symmetry of the stress tensor, which hinges on an important property of $\bm{f}^{\rm int}_\alpha$, is postponed to \sref{ch:compare}. \medskip The virial stress defined above corresponds to the macroscopic stress tensor only under conditions of thermodynamic equilibrium in the thermodynamic limit. We now show that this expression for the stress tensor, as well as all other expressions in common use, can be derived as limiting cases of a more general formulation which begins with the Irving--Kirkwood--Noll procedure. We refer to this as the ``unified framework'' for the stress tensor.
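A minimal numerical sketch of \eref{eqn:cauchy_canonical} is given below (an assumed toy system, not from the text: a small cluster with the pair potential $\phi(r) = r^4$, with a single configuration standing in for the phase average). It evaluates the instantaneous virial stress and confirms that, for central pair forces, the result is symmetric:

```python
import numpy as np

# Toy evaluation of the virial stress
#   sigma = -(1/V) sum_alpha [ p⊗p/m + f⊗q ]
# for one configuration. For a central pair potential the pair (a,b)
# contributes f_a⊗q_a + f_b⊗q_b = -phi'(r)/r (q_b-q_a)⊗(q_b-q_a),
# which is symmetric, so the potential part of sigma is symmetric.
rng = np.random.default_rng(2)
N, m, V = 5, 1.0, 10.0
q = rng.standard_normal((N, 3))   # positions
p = rng.standard_normal((N, 3))   # momenta

def phi_prime(r):                 # derivative of the toy potential phi(r) = r^4
    return 4.0 * r**3

# Internal forces f_a = -dV/dq_a for V = sum_{a<b} phi(|q_a - q_b|).
f = np.zeros((N, 3))
for a in range(N):
    for b in range(N):
        if a == b:
            continue
        d = q[a] - q[b]
        r = np.linalg.norm(d)
        f[a] += -phi_prime(r) * d / r

sigma = np.zeros((3, 3))
for a in range(N):
    sigma -= (np.outer(p[a], p[a]) / m + np.outer(f[a], q[a])) / V

sigma_pot = -sum(np.outer(f[a], q[a]) for a in range(N)) / V
asym = np.max(np.abs(sigma_pot - sigma_pot.T))  # ~0 for central forces
```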
\section{Continuum fields as phase averages} \label{ch:phase} In this section, we discuss the Irving and Kirkwood procedure \cite{ik1950}, which laid the foundation for the microscopic definition of continuum fields for non-equilibrium systems. This work was later extended by Walter Noll \cite{noll1955}\footnote{An English translation of this article appears in the current issue of the {\em Journal of Elasticity}.}, who showed how closed-form analytical solutions can be obtained for the definition of certain continuum fields, which otherwise involved a non-rigorous\footnote{The derivation is non-rigorous in the sense that expressing the stress tensor as a series expansion is only possible when the probability density function, which is used in the derivation, is an analytic function of the spatial variables \cite{noll1955}.} series expansion of the Dirac delta distribution in the original procedure. We refer to the procedure proposed by Noll in \cite{noll1955} as the \emph{Irving--Kirkwood--Noll procedure}. The derivation presented in this section largely follows that of Noll \cite{noll1955}, but extends it to more general atomistic models. Consider a system $\mathcal{M}$ modeled as a collection of $N$ point masses/particles, each particle labeled by an index $\alpha$ $(\alpha=1,2,\ldots,N)$. We use the terms ``particle'' and ``atom'' interchangeably. The position, mass and velocity of particle $\alpha$ are denoted by $\bm{x}_\alpha$, $m_\alpha$ and $\bm{v}_\alpha$, respectively. The complete microscopic state of the system at any instant of time is given by the positions and velocities of all the particles in $\real{3}$. Hence, the state of the system at time $t$ may be represented by a point $\bm{\Xi}(t)$ in a $6N$-dimensional phase space\footnote{The usual convention is to represent the phase space via positions and momenta of the particles. For convenience, in this section, we represent the phase space via positions and velocities of the particles.}.
Let $\Gamma$ denote the phase space. Any point $\bm{\Xi}(t) \in \Gamma$ can then be represented as \begin{align} \bm{\Xi}(t) &= (\bm{x}_1(t),\bm{x}_2(t),\dots,\bm{x}_N(t);\bm{v}_1(t),\bm{v}_2(t),\dots,\bm{v}_N(t)) \notag \\ &=: (\bm{x}(t);\bm{v}(t)). \label{eqn:define_X} \end{align} In reality, the microscopic state of the system is never known to us; the only observables are the macroscopic fields defined in continuum mechanics. We identify the continuum fields with macroscopic observables obtained in a two-step process: (1) a pointwise field is obtained as a statistical mechanics phase average; (2) a macroscopic field is obtained as a spatial average over the pointwise field. The phase averaging in step (1) is done with respect to a probability density function $W:\Gamma \times \mathbb{R}^+ \rightarrow \mathbb{R}^+$ of class $C^1$ defined on all of phase space for all $t$ ($W_{\rm{c}}$, defined in \eref{eqn:w_canonical}, is an example of a stationary (time-independent) probability density function defined for the canonical ensemble). The explicit dependence of $W$ on time $t$ indicates that our system need not be in thermodynamic equilibrium. As discussed in \sref{ch:canonical}, the evolution of $\bm{\Xi}(t)$ in the phase space is given by the following set of $2N$ first-order equations (Hamilton's equations of \eref{eqn:ham1}--\eref{eqn:ham2}): \begin{subequations} \label{eqn:hamilton} \begin{align} \dot{\bm{p}} &= -\nabla_{\bm{x}}\mathcal{H}, \\ \dot{\bm{x}} &= \nabla_{\bm{p}}\mathcal{H}, \end{align} \end{subequations} where $\bm{p} := (\bm{p}_1,\bm{p}_2,\dots,\bm{p}_N)$, $\bm{p}_\alpha$ denotes the momentum of each particle, and $\mathcal{H}(\bm{p},\bm{x})$ is the Hamiltonian of the system. The basic idea behind the original Irving and Kirkwood procedure is to prescribe/derive microscopic definitions for continuum fields, such that they are consistent with the balance laws of mass, momentum and energy.
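Before invoking the conservation of phase-space volume, it is instructive to see it in the simplest case. The sketch below (an assumed example, not from the text) uses the exact flow of a one-dimensional harmonic oscillator, $H = p^2/2m + kx^2/2$, and checks that the Jacobian determinant of the flow map is one, a fact formalized by Liouville's theorem below:

```python
import math

# Assumed example: exact flow of H = p^2/(2m) + k x^2/2 with w = sqrt(k/m),
#   x(t) =  x0 cos(wt) + p0 sin(wt)/(m w),
#   p(t) = -m w x0 sin(wt) + p0 cos(wt).
# The flow map (x0, p0) -> (x(t), p(t)) is linear; its Jacobian determinant
# is cos^2(wt) + sin^2(wt) = 1, so phase-space volume is preserved.
m, k, t = 2.0, 5.0, 0.37
w = math.sqrt(k / m)

# Jacobian of the flow map.
J = [[math.cos(w * t),        math.sin(w * t) / (m * w)],
     [-m * w * math.sin(w * t), math.cos(w * t)]]
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
print(det)  # 1 up to rounding
```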
To arrive at these definitions, we repeatedly use the following theorem, commonly referred to as \emph{Liouville's theorem}, which relates to the conservation of volume in phase space. As a system evolves, the phase space $\Gamma$ is mapped into itself at every instant of time, and this mapping is governed by \eref{eqn:hamilton}. If $g_t$ denotes this mapping, then Liouville's theorem essentially says that for any subset $U$ of $\Gamma$, the volume of $U$ remains invariant under the mapping $g_t$. This can be formally stated as follows. \spnewtheorem*{liouville}{Liouville's Theorem}{\bfseries}{\itshape} \begin{liouville} For any $U \subseteq \Gamma$, volume is preserved under the one-parameter group of transformations of phase space, $g_t:U \rightarrow \Gamma$, given by the mapping \[ (\bm{x}(0),\bm{p}(0)) \mapsto (\bm{x}(t),\bm{p}(t)), \] where $\bm{x}(t)$ and $\bm{p}(t)$ are solutions of Hamilton's system of equations {\rm \eref{eqn:hamilton}}, i.e., \begin{equation} \label{eqn:volume} \operatorname{vol}(U) = \operatorname{vol}(g_tU). \end{equation} \end{liouville} \begin{proof} Let $\dot{\overline{\operatorname{vol}(g_tU)}}$ denote the material time derivative of $\operatorname{vol}(g_tU)$ in the sense that $\bm{\Xi}(0)$ is held fixed while performing this differentiation. Then we have \[ \dot{\overline{\operatorname{vol}(g_tU)}} = \dot{\overline{\int_{g_tU} d\bm{\Xi}(t)}} = \int_{U} (\dot{\overline{\det \bm{F}}}) d \bm{\Xi}_0, \] where $\bm{F}(\bm{\Xi}_0,t):= \nabla_{\bm{\Xi}_0} \bm{\Xi}(\bm{\Xi}_0,t)$, $\bm{\Xi}(\bm{\Xi}_0,t) = g_t(\bm{\Xi}_0)$ and $\bm{\Xi}_0 = \bm{\Xi}(0)$. Using the fact that \[ \dot{\overline{\det \bm{F}}} = (\det \bm{F}) \operatorname{tr}(\dot{\bm{F}}\bm{F}^{-1}), \] we obtain \begin{equation} \dot{\overline{\operatorname{vol}(g_tU)}} = \int_{U} (\det \bm{F}) \operatorname{tr}(\dot{\bm{F}}\bm{F}^{-1}) d\bm{\Xi}_0. \label{eqn:liouville_proof_1} \end{equation} Let \begin{equation} \dot{\bm{\Xi}}(\bm{\Xi}) := \left.
\frac{d\bm{\Xi}}{dt} \right |_{\bm{\Xi}_0 = g_t^{-1}(\bm{\Xi})}. \end{equation} By the chain rule, we have \begin{equation} \nabla \dot{\bm{\Xi}} = \left. \frac{d (\nabla \bm{\Xi})}{dt} \right |_{\bm{\Xi}_0 = g_t^{-1}(\bm{\Xi})} \nabla_{\bm{\Xi}} \bm{\Xi}_0 = \dot{\bm{F}}\bm{F}^{-1}. \end{equation} Therefore $\operatorname{div} \, \dot{\bm{\Xi}} = \operatorname{tr}(\dot{\bm{F}}\bm{F}^{-1})$. Equation \eref{eqn:liouville_proof_1} can now be rewritten as \begin{equation} \dot{\overline{\operatorname{vol}(g_tU)}} = \int_{U} (\det \bm{F}) (\operatorname{div} \, \dot{\bm{\Xi}})\mid_{\bm{\Xi}(t) = g_t(\bm{\Xi}_0)} d\bm{\Xi}_0. \end{equation} But from \eref{eqn:define_X} and \eref{eqn:hamilton} we also have \[ \operatorname{div} \dot{\bm{\Xi}} = \operatorname{div}_{\bm{x}} \dot{\bm{x}} + \operatorname{div}_{\bm{p}} \dot{\bm{p}} = \operatorname{div}_{\bm{x}} (\nabla_{\bm{p}} \mathcal{H}) - \operatorname{div}_{\bm{p}}(\nabla_{\bm{x}} \mathcal{H}) = 0. \] Therefore $\dot{\overline{\operatorname{vol}(g_tU)}} = 0$ for arbitrary $t$. Thus \eref{eqn:volume} holds.\qed \end{proof} Let $W(\bm{\Xi};t)$ denote the probability density function defined on $g_t(\Gamma)$. Hence, we have \begin{equation} \int_{g_tU} W(\bm{\Xi}(t);t) d \bm{\Xi}(t) = \int_{U} W(\bm{\Xi}_0;0) (\det \bm{F}) d \bm{\Xi}_0. \end{equation} As a consequence of Liouville's theorem, $\det \bm{F} = 1$. Therefore \begin{equation} \label{eqn:corr} \frac{d}{dt}\int_{g_tU} W(\bm{\Xi}(t);t) d \bm{\Xi}(t) = 0. \end{equation} Since \eref{eqn:corr} holds for all $U \subseteq \Gamma$, we have $\dot{W}(\bm{\Xi}(t);t) = 0$. Hence, the time evolution of the probability density function is given by \begin{equation} \label{eqn:liouville} \frac{\partial W}{\partial t} + \sum_{\alpha=1}^{N} \left [ \bm{v}_\alpha \cdot \nabla_{\bm{x}_\alpha}W + \dot{\bm{v}}_\alpha \cdot \nabla_{\bm{v}_\alpha}W \right ] = 0.
\end{equation} The above equation can be rewritten as \begin{equation} \frac{\partial W}{\partial t} + \sum_{\alpha=1}^{N} \left [ \bm{v}_\alpha \cdot \nabla_{\bm{x}_\alpha}W - \frac{\nabla_{\bm{x}_\alpha} \mathcal{V} }{m_\alpha} \cdot \nabla_{\bm{v}_\alpha}W \right ] = 0, \label{eqn:useful_liouville} \end{equation} where, as before, $\mathcal{V}(\bm{x}_1,\bm{x}_2,\dots,\bm{x}_N)$ denotes the potential energy of the system. Equation~\eref{eqn:useful_liouville} is called \emph{Liouville's equation}. \subsection{Phase averaging} \label{sec:phase} Under the Irving--Kirkwood--Noll procedure, pointwise fields are defined as phase averages. This phase averaging is expressed via weighted marginal densities. For example, the pointwise mass density field is defined as \begin{equation} \rho(\bm{x},t) := \sum_{\alpha} m_\alpha \int_{\real{{3N}} \times \real{{3N}}} W \delta (\bm{x}_\alpha - \bm{x}) \, d\bm{x} d\bm{v}, \label{eqn:define_density_delta} \end{equation} where the integral represents a marginal density defined on $\real{3}$, $\delta$ denotes the Dirac delta distribution, and $\sum_\alpha$ denotes summation from $\alpha=1$ to $N$. To avoid the Dirac delta distribution, and for greater clarity, we adopt Noll's notation as originally used in \cite{noll1955}. Hence, \eref{eqn:define_density_delta} can be rewritten as \begin{align} \rho(\bm{x},t) &= \sum_{\alpha} m_\alpha \int W \, d\bm{x}_1 \dots d\bm{x}_{\alpha-1} d\bm{x}_{\alpha+1}\dots d\bm{x}_N d\bm{v} \notag \\ &=: \sum_{\alpha} m_\alpha \left \langle W \mid \bm{x}_\alpha = \bm{x} \right \rangle, \label{eqn:define_density} \end{align} where $\left \langle W \mid \bm{x}_\alpha = \bm{x} \right \rangle$ denotes an integral of $W$ over all of its arguments except $\bm{x}_\alpha$, with $\bm{x}_\alpha$ substituted by $\bm{x}$. Now consider the continuum velocity field. Unlike the definition of the pointwise density field, which is unambiguous, the pointwise velocity field can be defined in several different ways.
It may seem more natural to define the continuum velocity in an analogous fashion to the density field, i.e., \begin{equation} \bm{v}(\bm{x},t) = \frac{\sum_{\alpha} \left \langle W \bm{v}_\alpha \mid \bm{x}_\alpha = \bm{x} \right \rangle}{\sum_{\alpha} \left \langle W \mid \bm{x}_\alpha = \bm{x} \right \rangle}. \label{eqn:define_velocity_alt} \end{equation} Alternatively, the pointwise velocity field can be defined via the momentum density field, $\bm{p}(\bm{x},t)$, as follows: \begin{align} \bm{p}(\bm{x},t) &:= \sum_{\alpha} m_\alpha \left \langle W \bm{v}_\alpha \mid \bm{x}_\alpha = \bm{x} \right \rangle,\label{eqn:define_mom_density}\\ \bm{v}(\bm{x},t) &:= \frac{\bm{p}(\bm{x},t)}{\rho(\bm{x},t)}. \label{eqn:define_velocity} \end{align} Note that definitions \eref{eqn:define_velocity_alt} and \eref{eqn:define_velocity} are equivalent for a single species material, but are not so in general. The definition given by \eref{eqn:define_velocity} is the one used in practice. There are two reasons for this. First, the definition in \eref{eqn:define_velocity} makes more physical sense since, following spatial averaging, it associates the continuum velocity with the velocity of the center of mass of the system of particles. Second, the definition in \eref{eqn:define_velocity} satisfies the continuity equation as shown in \sref{sec:continuity}, whereas \eref{eqn:define_velocity_alt} does not.\footnote{It would be interesting to explore how the equation of continuity fails for the definition in \eref{eqn:define_velocity_alt} by identifying the regions that act as sinks and sources. This is difficult to do for a general $N$ particle system because the continuity equation quickly becomes unwieldy. Even for the much simpler case of a two-particle system, the answer is not trivial. 
A quick examination shows that the distribution of sinks and sources depends not only on the ratio of the masses but also on the probability density function $W$.} \subsection{Regularity assumptions for the probability density function} The integrals appearing in the definitions \eref{eqn:define_density}, \eref{eqn:define_mom_density} and \eref{eqn:define_velocity} converge only if $W$ satisfies appropriate decay conditions. The following two conditions are sufficient for the convergence of all the integrals and the validity of the results in this section \cite{noll1955}: \begin{enumerate} \item There exists a $\delta>0$ such that the function \label{cond_1} \begin{equation} W(\bm{\Xi};t) \prod_{\alpha=1}^{N} \vnorm{\bm{x}_\alpha}^{3+\delta} \prod_{\beta=1}^{N} \vnorm{\bm{v}_\beta}^{6+\delta} \label{eqn:w_decay} \end{equation} and its first derivatives are bounded by a constant that only depends on time. \item $\mathcal{V}(\bm{x}_1,\bm{x}_2,\dots,\bm{x}_N)$ is a bounded $C^1$ function of the particle positions, with bounded first derivatives.\footnote{If any two particles overlap, we would normally expect $\mathcal{V} \to \infty$. By specifying additional decay conditions for $W$, the case of unbounded $\mathcal{V}$ can be handled. For simplicity, we assume $\mathcal{V}$ to be bounded.} \label{cond_2} \end{enumerate} Conditions (\ref{cond_1}) and (\ref{cond_2}) ensure the convergence of all the integrals considered in this section and justify the interchange of integration and differentiation.
Furthermore, let $\bm{G}(\bm{\Xi};t)$ be any vector or tensor-valued function of class $C^1$ defined on the phase space for all $t$, and which, for suitable functions $g(t)$ and $h(t)$, satisfies the condition \begin{equation} \sup_{\bm{x}_1,\dots,\bm{x}_N \in \real{3}} \max\left(\vnorm{\bm{G}},\vnorm{\operatorname{div}_{\bm{v}_\alpha} \bm{G}},\vnorm{\operatorname{div}_{\bm{x}_\alpha}\bm{G}}\right) < g(t) \prod_{\beta=1}^{N}\vnorm{\bm{v}_\beta}^3 + h(t), \end{equation} where $\vnorm{\cdot}$ denotes the norm induced by the inner product. Since the space of all tensors has a natural inner product defined as \begin{equation} \bm{S} : \bm{T} = \operatorname{tr}(\bm{S}^{{\rm T}} \bm{T}), \end{equation} we have $\vnorm{\bm{S}} = \sqrt{\bm{S} : \bm{S}}$. Under these conditions on $\bm{G}(\bm{\Xi};t)$, we have\footnote{If $\bm{G}$ is a second-order tensor or higher, then the dot product indicates the tensor operating on a vector. Note that in \eref{eqn:reg}, in the interest of brevity, we are breaking our notation of denoting a second-order tensor operating on a vector by juxtaposition.} \begin{subequations} \label{eqn:reg} \begin{align} \int_{\real{3}}\bm{G} \cdot \nabla_{\bm{x}_\alpha}W \, d\bm{x}_\alpha &= -\int_{\real{3}} W \operatorname{div}_{\bm{x}_\alpha} \bm{G} \, d\bm{x}_\alpha,\label{eqn:reg_1} \\ \int_{\real{3}}\bm{G} \cdot \nabla_{\bm{v}_\alpha}W \, d\bm{v}_\alpha &= -\int_{\real{3}} W \operatorname{div}_{\bm{v}_\alpha} \bm{G} \, d\bm{v}_\alpha. \label{eqn:reg_2} \end{align} \end{subequations} The above identities are repeatedly used in deriving the equation of continuity and the equation of motion in the following sections. \subsection{Equation of continuity} \label{sec:continuity} Let us demonstrate that the pointwise fields defined in \sref{sec:phase} satisfy the equation of continuity.
The equation of continuity from continuum mechanics is given by \cite{malvern} \begin{equation} \label{eqn:continuity} \frac{\partial \rho}{\partial t} + \operatorname{div}_{\bm{x}}(\rho \bm{v}) = 0. \end{equation} From \eref{eqn:define_density} we have \[ \frac{\partial \rho} {\partial t} (\bm{x},t) = \sum_{\alpha}m_\alpha \left \langle \left. \frac{\partial W}{\partial t} \right | \bm{x}_\alpha = \bm{x} \right \rangle. \] Using Liouville's equation in \eref{eqn:useful_liouville}, we have \[ \frac{\partial \rho} {\partial t} (\bm{x},t) = \sum_\alpha m_\alpha \left \langle \left. \sum_\beta \left ( -\bm{v}_\beta \cdot \nabla_{\bm{x}_\beta}W + \frac{\nabla_{\bm{x}_\beta} \mathcal{V}}{m_\beta}\cdot \nabla_{\bm{v}_\beta}W \right ) \right | \bm{x}_\alpha=\bm{x} \right \rangle. \] Now, consider the summand on the right-hand side of the above equation for a fixed $\alpha$. From \eref{eqn:reg_2}, it is clear that $\left \langle \left. \frac{\nabla_{\bm{x}_\beta} \mathcal{V}}{m_\beta}\cdot \nabla_{\bm{v}_\beta}W \right | \bm{x}_\alpha=\bm{x} \right \rangle=0$, for $\beta=1,2,\dots,N$, and from \eref{eqn:reg_1}, we also have $\left \langle \bm{v}_\beta \cdot \nabla_{\bm{x}_\beta}W \mid \bm{x}_\alpha=\bm{x} \right \rangle = 0$, for $\beta \ne \alpha$. Therefore the above equation simplifies to \[ \frac{\partial \rho} {\partial t} (\bm{x},t) = -\sum_{\alpha} m_\alpha \left \langle \bm{v}_\alpha \cdot \nabla_{\bm{x}_\alpha} W\mid \bm{x}_\alpha =\bm{x} \right \rangle. \] Using the identity \begin{equation} \label{eqn:id1} \operatorname{div}_{\bm{x}}(a \bm{w}) = \nabla_{\bm{x}} a \cdot \bm{w}, \end{equation} where $a(\bm{x})$ is any $C^1$ scalar function of $\bm{x}$, and $\bm{w}$ is any vector independent of $\bm{x}$, we obtain \[ \frac{\partial \rho} {\partial t} (\bm{x},t) = -\sum_{\alpha} m_\alpha \operatorname{div}_{\bm{x}} \left \langle W \bm{v}_\alpha \mid \bm{x}_\alpha = \bm{x} \right \rangle.
\] Using \eref{eqn:define_mom_density} and \eref{eqn:define_velocity} for the definition of the pointwise momentum density field, we have \[ \frac{\partial \rho} {\partial t} (\bm{x},t) + \operatorname{div}_{\bm{x}} (\rho \bm{v}) = 0, \] which is the continuity equation. We have established that the definitions given in \eref{eqn:define_density_delta} and \eref{eqn:define_mom_density} identically satisfy conservation of mass. \subsection{Equation of motion} \label{sec:s_motion} The equation of motion from continuum mechanics is given by \cite{malvern} \begin{equation} \label{eqn:motion} \frac{\partial(\rho \bm{v})}{\partial t} + \operatorname{div}_{\bm{x}}(\rho \bm{v} \otimes \bm{v}) = \operatorname{div}_{\bm{x}}\bm{\sigma} + \bm{b}. \end{equation} Here we identify $\bm{\sigma}$ with the pointwise stress tensor. From \eref{eqn:define_mom_density}, we have \[ \frac{\partial \bm{p}}{\partial t}(\bm{x},t) = \sum_{\alpha} m_\alpha \left \langle \left. \bm{v}_\alpha \frac{\partial W}{\partial t} \right | \bm{x}_\alpha = \bm{x} \right \rangle. \] Again, using \eref{eqn:useful_liouville}, we obtain \begin{align} \frac{\partial \bm{p}}{\partial t} (\bm{x},t) &= \sum_{\alpha} m_\alpha \left \langle \left. \bm{v}_\alpha \sum_\beta \left (-\nabla_{\bm{x}_\beta} W \cdot \bm{v}_\beta + \frac{\nabla_{\bm{x}_\beta} \mathcal{V}}{m_\beta} \cdot \nabla_{\bm{v}_\beta} W \right ) \right | \bm{x}_\alpha = \bm{x} \right \rangle \notag \\ &= \sum_\alpha m_\alpha \sum_\beta \left \langle \left. -\left ( \bm{v}_\alpha \otimes \bm{v}_\beta \right ) \nabla_{\bm{x}_\beta} W + \left ( \bm{v}_\alpha \otimes \frac{\nabla_{\bm{x}_\beta} \mathcal{V}}{m_\beta} \right ) \nabla_{\bm{v}_\beta} W \right | \bm{x}_\alpha = \bm{x} \right \rangle. \label{eqn:motion_alt} \end{align} Now, consider the summand on the right-hand side of the above equation for fixed $\alpha$ and $\beta$.
Using \eref{eqn:reg_1}, we have $\left \langle ( \bm{v}_\alpha \otimes \bm{v}_\beta ) \nabla_{\bm{x}_\beta} W \mid \bm{x}_\alpha = \bm{x} \right \rangle = \bm{0}$, for $\beta \ne \alpha$. From \eref{eqn:reg_2}, we have $\left \langle (\bm{v}_\alpha \otimes \nabla_{\bm{x}_\beta} \mathcal{V} ) \nabla_{\bm{v}_\beta}W \mid \bm{x}_\alpha = \bm{x} \right \rangle = \bm{0}$, for $\beta \ne \alpha$, and for $\beta = \alpha$, we have \[ \left \langle (\bm{v}_\alpha \otimes \nabla_{\bm{x}_\alpha} \mathcal{V} ) \nabla_{\bm{v}_\alpha} W\mid \bm{x}_\alpha = \bm{x} \right \rangle = -\left \langle \nabla_{\bm{x}_\alpha} \mathcal{V} W \mid \bm{x}_\alpha = \bm{x} \right \rangle, \] using the fact that $\operatorname{div}_{\bm{u}}(\bm{u} \otimes \bm{w}) = \bm{w}$, for any vector $\bm{u}$ and for any vector $\bm{w}$ independent of $\bm{u}$. Therefore \eref{eqn:motion_alt} simplifies to \begin{equation} \frac{\partial \bm{p}}{\partial t} (\bm{x},t) = -\sum_{\alpha} m_\alpha \left \langle (\bm{v}_\alpha \otimes \bm{v}_\alpha) \nabla_{\bm{x}_\alpha} W \mid \bm{x}_\alpha = \bm{x} \right \rangle - \sum_{\alpha} \left \langle W \nabla_{\bm{x}_\alpha} \mathcal{V} \mid \bm{x}_\alpha = \bm{x} \right \rangle. \label{eqn:motion_1} \end{equation} Using the identity, \begin{equation} \operatorname{div}_{\bm{x}}(a \bm{T})= \bm{T}\nabla_{\bm{x}}a, \label{eqn:id2} \end{equation} where $a(\bm{x})$ is any $C^1$ scalar function of $\bm{x}$, and $\bm{\bm{T}}$ is any tensor independent of $\bm{x}$, we can rewrite \eref{eqn:motion_1} as \begin{equation} \frac{\partial \bm{p}}{\partial t} (\bm{x},t) = -\operatorname{div}_{\bm{x}} \sum_{\alpha} m_\alpha \left \langle (\bm{v}_\alpha \otimes \bm{v}_\alpha) W \mid \bm{x}_\alpha = \bm{x} \right \rangle - \sum_{\alpha} \left \langle W \nabla_{\bm{x}_\alpha} \mathcal{V} \mid \bm{x}_\alpha = \bm{x} \right \rangle. 
\label{eqn:motion_2} \end{equation} Now, note that the term $ \bm{v}_\alpha \otimes \bm{v}_\alpha $ can be written as \\ \begin{align} \bm{v}_\alpha \otimes \bm{v}_\alpha &= (\bm{v}_\alpha - \bm{v}) \otimes (\bm{v}_\alpha - \bm{v}) + \bm{v} \otimes \bm{v}_\alpha + \bm{v}_\alpha \otimes \bm{v} - \bm{v} \otimes \bm{v} \notag \\ &= \bm{v}_\alpha^{\rm{rel}} \otimes \bm{v}_\alpha^{\rm{rel}} + \bm{v} \otimes \bm{v}_\alpha + \bm{v}_\alpha \otimes \bm{v} - \bm{v} \otimes \bm{v}, \label{eqn:id3} \end{align} where $\bm{v}_\alpha^{\rm{rel}}$ is the velocity of particle $\alpha$ relative to the pointwise velocity field. Consider the first term on the right-hand side of \eref{eqn:motion_2}. Substituting \eref{eqn:id3} into this expression we have, \begin{align} &-\operatorname{div}_{\bm{x}} \sum_{\alpha} m_\alpha \left \langle (\bm{v}_\alpha \otimes \bm{v}_\alpha) W \mid \bm{x}_\alpha = \bm{x} \right \rangle \notag\\ &= -\sum_{\alpha} m_\alpha \operatorname{div}_{\bm{x}} \left \langle (\bm{v}_\alpha ^{\rm{rel}} \otimes \bm{v}_\alpha ^{\rm{rel}}) W \mid \bm{x}_\alpha = \bm{x} \right \rangle - \operatorname{div}_{\bm{x}} \sum_\alpha \big [ \bm{v} \otimes m_\alpha \left \langle \bm{v}_\alpha W \mid \bm{x}_\alpha = \bm{x} \right \rangle \notag\\ &\qquad + m_\alpha \left \langle \bm{v}_\alpha W \mid \bm{x}_\alpha = \bm{x} \right \rangle \otimes \bm{v} - m_\alpha \left \langle W \mid \bm{x}_\alpha = \bm{x} \right \rangle \bm{v} \otimes \bm{v} \big ] \notag \\ &=-\operatorname{div}_{\bm{x}} \sum_{\alpha} m_\alpha \left \langle (\bm{v}_\alpha^{\rm{rel}} \otimes \bm{v}_\alpha^{\rm{rel}}) W \mid \bm{x}_\alpha = \bm{x} \right \rangle - \operatorname{div}_{\bm{x}}(\rho \bm{v} \otimes \bm{v}), \label{eqn:motion_3} \end{align} where we have used \eref{eqn:define_density}, \eref{eqn:define_mom_density} and \eref{eqn:define_velocity} in the last step. 
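As a quick numerical sanity check of the dyadic identity \eref{eqn:id3} (the sample vectors below are arbitrary choices of ours, not values from the derivation), one can verify it componentwise:

```python
import numpy as np

# Check v_a (x) v_a = v_rel (x) v_rel + v (x) v_a + v_a (x) v - v (x) v  (eqn:id3)
rng = np.random.default_rng(0)
v_alpha = rng.standard_normal(3)   # particle velocity (arbitrary sample)
v = rng.standard_normal(3)         # pointwise velocity field value at x
v_rel = v_alpha - v                # velocity relative to the continuum field

lhs = np.outer(v_alpha, v_alpha)
rhs = (np.outer(v_rel, v_rel) + np.outer(v, v_alpha)
       + np.outer(v_alpha, v) - np.outer(v, v))
assert np.allclose(lhs, rhs)
```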
Substituting \eref{eqn:motion_3} into \eref{eqn:motion_2}, we obtain \begin{align} \frac{\partial \bm{p}}{\partial t} (\bm{x},t) + \operatorname{div}_{\bm{x}}(\rho \bm{v} \otimes \bm{v}) = &-\sum_{\alpha} m_\alpha \operatorname{div}_{\bm{x}} \left \langle (\bm{v}_\alpha^{\rm{rel}} \otimes \bm{v}_\alpha^{\rm{rel}}) W \mid \bm{x}_\alpha = \bm{x} \right \rangle \notag \\ &- \sum_{\alpha} \left \langle W \nabla_{\bm{x}_\alpha} \mathcal{V} \mid \bm{x}_\alpha = \bm{x} \right \rangle. \label{eqn:motion_4} \end{align} The left-hand sides of \eref{eqn:motion_4} and \eref{eqn:motion} are identical. Therefore, the right-hand sides must also be equal. Hence \begin{equation} \operatorname{div}_{\bm{x}}\bm{\sigma} + \bm{b} = -\sum_{\alpha} m_\alpha \operatorname{div}_{\bm{x}} \left \langle (\bm{v}_\alpha^{\rm{rel}} \otimes \bm{v}_\alpha ^{\rm{rel}}) W \mid \bm{x}_\alpha = \bm{x} \right \rangle - \sum_{\alpha} \left \langle W \nabla_{\bm{x}_\alpha} \mathcal{V} \mid \bm{x}_\alpha = \bm{x} \right \rangle. \label{eqn:motion_5} \end{equation} To proceed, we divide the potential energy $\mathcal{V}(\bm{x}_1,\bm{x}_2,\dots,\bm{x}_N)$ into two parts: \begin{enumerate} \item An \emph{external} part, $\mathcal{V}_{\rm{ext}}$, associated with long-range interactions such as gravity or electromagnetic fields. \item An \emph{internal} part, $\mathcal{V}_{\rm{int}}$, associated with short-range particle interactions. \end{enumerate} It is natural to associate $\mathcal{V}_{\rm{ext}}$ with the body force field $\bm{b}$ in \eref{eqn:motion_5}. We therefore define $\bm{b}(\bm{x},t)$ as \begin{equation} \label{eqn:body} \bm{b}(\bm{x},t) := - \sum_{\alpha} \left \langle W \nabla_{\bm{x}_\alpha} \mathcal{V}_{\rm{ext}} \mid \bm{x}_\alpha = \bm{x} \right \rangle. 
\end{equation} Substituting \eref{eqn:body} into \eref{eqn:motion_5}, we have \begin{equation} \operatorname{div}_{\bm{x}} \bm{\sigma} = -\sum_{\alpha} m_\alpha \operatorname{div}_{\bm{x}} \left \langle (\bm{v}_\alpha ^{\rm{rel}} \otimes \bm{v}_\alpha^{\rm{rel}}) W \mid \bm{x}_\alpha = \bm{x} \right \rangle - \sum_{\alpha} \left \langle W \nabla_{\bm{x}_\alpha} \mathcal{V}_{\rm{int}} \mid \bm{x}_\alpha = \bm{x} \right \rangle. \label{eqn:motion_6} \end{equation} From \eref{eqn:motion_6}, we see that the pointwise stress tensor has two contributions: \begin{equation} \label{eqn:stress_split} \bm{\sigma}(\bm{x},t) = \bm{\sigma}_{\rm{k}}(\bm{x},t) + \bm{\sigma}_{\rm{v}}(\bm{x},t), \end{equation} where $\bm{\sigma}_{\rm{k}}$ and $\bm{\sigma}_{\rm{v}}$ are, respectively, the \emph{kinetic} and \emph{potential} parts of the pointwise stress. The kinetic part is given by \begin{equation} \bm{\sigma}_{\rm{k}}(\bm{x},t) = -\sum_{\alpha} m_\alpha \left \langle (\bm{v}_\alpha^{\rm{rel}} \otimes \bm{v}_\alpha ^{\rm{rel}}) W \mid \bm{x}_\alpha = \bm{x} \right \rangle. \label{eqn:stress_kinetic} \end{equation} It is evident that the kinetic part of the stress tensor is symmetric. The presence of a kinetic contribution to the stress tensor appears at odds with the continuum definition of stress that is stated solely in terms of the forces acting between different parts of the body. This discrepancy has led to controversy in the past about whether the kinetic term belongs in the stress definition \cite{zhou2003}. The confusion is related to the difference between absolute velocity and relative velocity defined in \eref{eqn:id3} \cite{tsai1979}. The kinetic stress reflects the momentum flux associated with the vibrational kinetic energy portion of the internal energy. 
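To see that the kinetic stress is a genuine momentum flux rather than an artifact, consider a Monte Carlo sketch for a monatomic ideal gas, for which the single-species analogue of \eref{eqn:stress_kinetic} should reduce to an isotropic pressure $-nk_BT\bm{I}$. The mass, temperature, density, and sample size below are hypothetical choices of ours:

```python
import numpy as np

# Monte Carlo sketch: kinetic stress of a monatomic ideal gas (units with k_B = 1).
rng = np.random.default_rng(2)
m, T, n = 1.0, 0.5, 2.0                                      # sample parameters
v_rel = rng.normal(0.0, np.sqrt(T / m), size=(200_000, 3))   # Maxwell-Boltzmann

# Single-species estimate: sigma_k = -n m <v_rel (x) v_rel>
sigma_k = -n * m * np.einsum('ki,kj->ij', v_rel, v_rel) / len(v_rel)

# The result is an isotropic pressure, sigma_k ~ -n T I (i.e., p = n T)
assert np.allclose(sigma_k, -n * T * np.eye(3), atol=0.05)
```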
Continuing with \eref{eqn:motion_6}, the potential part of the stress must satisfy the following differential equation: \begin{equation} \operatorname{div}_{\bm{x}} \bm{\sigma}_{\rm{v}}(\bm{x},t) = \sum_{\alpha} \left \langle W \bm{f}^{\rm int}_\alpha \mid \bm{x}_\alpha = \bm{x} \right \rangle, \label{eqn:stress_force_differential} \end{equation} where \begin{equation} \label{eqn:def_fi} \bm{f}^{\rm int}_\alpha := -\nabla_{\bm{x}_\alpha} \mathcal{V}_{\rm{int}}, \end{equation} is the force on particle $\alpha$ due to internal interactions. Equation~\eref{eqn:stress_force_differential} needs to be solved in order to obtain an explicit form for $\bm{\sigma}_{\rm{v}}$. In the original paper of Irving and Kirkwood \cite{ik1950}, this was done by applying a Taylor expansion to the Dirac delta distribution appearing in the right-hand side of the equation. In contrast, Noll showed that a closed-form solution for $\bm{\sigma}_{\rm{v}}$ can be obtained by recasting the right-hand side in a different form and applying a lemma proved in \cite{noll1955}. We proceed with Noll's approach, except we place no restriction on the nature of the interatomic potential energy $\mathcal{V}_{\rm{int}}$. The potential energy considered in \cite{ik1950} and \cite{noll1955} is limited to pair potentials. \subsubsection*{General interatomic potentials} In general, the internal part of the potential energy, also called the \emph{interatomic potential energy}, depends on the positions of all particles in the system: \begin{equation} \mathcal{V}_{\rm{int}} = \widehat{\mathcal{V}}_{\rm int}(\bm{x}_1,\bm{x}_2,\dots,\bm{x}_N), \end{equation} where the ``hat'' indicates that the functional dependence is on absolute particle positions (as opposed to distances later on). 
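For concreteness, one standard example of such a function (a choice of ours for illustration, not an assumption of the derivation) is the Lennard-Jones pair potential,
\[
\widehat{\mathcal{V}}_{\rm int}(\bm{x}_1,\dots,\bm{x}_N) = \sum_{\alpha<\beta} 4\epsilon \left[ \left( \frac{\sigma_{\rm LJ}}{\vnorm{\bm{x}_\alpha - \bm{x}_\beta}} \right)^{12} - \left( \frac{\sigma_{\rm LJ}}{\vnorm{\bm{x}_\alpha - \bm{x}_\beta}} \right)^{6} \right],
\]
where $\epsilon$ and $\sigma_{\rm LJ}$ are the usual energy and length parameters (we write $\sigma_{\rm LJ}$ to avoid a clash with the stress tensor $\bm{\sigma}$). This function depends on the particle positions only through their separations, so it automatically satisfies the invariance principle stated next.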
We assume that $\widehat{\mathcal{V}}_{\rm int}:\mathbb{R}^{3N} \to \mathbb{R}$ is a continuously differentiable function.\footnote{Note that this assumption may fail in systems undergoing first-order magnetic or electronic phase transformations.} This function must satisfy the following invariance principle: \label{page:invariance} \begin{quote} The internal energy of a material system is invariant with respect to the Euclidean group $\mathcal{G} := \{\bm{x} \mapsto \bm{Q}\bm{x}+\bm{c} \mid \bm{x} \in \real{3},\bm{Q} \in O(3),\bm{c} \in \real{3}\}$, where $O(3)$ denotes the full orthogonal group. \end{quote} To exploit this invariance, let us consider the action of $\mathcal{G}$ on $\mathbb{R}^{3N}$, i.e., the action of any combination of translation and rotation (proper or improper), which is represented by an element $g:\bm{x} \mapsto \bm{Q}\bm{x}+\bm{c}$ in $\mathcal{G}$, on any configuration of $N$ particles represented by a vector $(\bm{x}_1,\dots,\bm{x}_N) \in \mathbb{R}^{3N}$: \begin{align} \label{eqn:group_action} g \cdot (\bm{x}_1,\dots,\bm{x}_N) = (\bm{Q}\bm{x}_1 + \bm{c},\dots,\bm{Q}\bm{x}_N+\bm{c}). \end{align} This action partitions $\mathbb{R}^{3N}$ into disjoint equivalence classes (orbits) \cite{dummit}, which we now describe. For any $\bm{u}=(\bm{x}_1,\dots,\bm{x}_N) \in \mathbb{R}^{3N}$, let $\mathcal{O}_{\bm{u}} \subset \mathbb{R}^{3N}$ denote the equivalence class of $\bm{u}$, defined as\footnote{The notation ``$\{g\cdot \bm{u} \mid g \in \mathcal{G}\}$'' should be read as ``the set of all $g\cdot \bm{u}$, such that $g$ is in the Euclidean group $\mathcal{G}$''.} \begin{align} \mathcal{O}_{\bm{u}} := \{g\cdot \bm{u} \mid g \in \mathcal{G}\}, \label{eqn:orbit} \end{align} where $g \cdot \bm{u}$ denotes the action of $g$ on $\bm{u}$ defined in \eref{eqn:group_action}. In other words, $\mathcal{O}_{\bm{u}}$ represents the set of all configurations which are related to the configuration $\bm{u}$ by a rigid body motion and/or reflection.
Due to the invariance of the potential energy, we can view the function $\mathcal{V}_{\rm int}$ as a function on the set of equivalence classes, i.e., \begin{align} \overline{\mathcal{V}}_{\rm int}(\mathcal{O}_{\bm{u}}) = \widehat{\mathcal{V}}_{\rm int}(\bm{u}), \label{eqn:pot_orbit} \end{align} because \begin{align} \widehat{\mathcal{V}}_{\rm int}(\bm{v}) = \widehat{\mathcal{V}}_{\rm int}(\bm{u}) \qquad \forall \bm{v} \in \mathcal{O}_{\bm{u}}. \end{align} Now, consider a set $\mathcal{S} \subset \mathbb{R}^{N(N-1)/2}$, defined as \begin{align} \mathcal{S} := \{(r_{12},r_{13}, \dots,& r_{1N}, r_{23}, \dots, r_{(N-1)N}) \mid \notag \\ &r_{\alpha \beta} = \vnorm{\bm{x}_\alpha - \bm{x}_\beta}, (\bm{x}_1,\dots,\bm{x}_N) \in \mathbb{R}^{3N}\}. \end{align} In other words, the set $\mathcal{S}$ consists of all possible $N(N-1)/2$-tuples of real numbers which correspond to the distances between $N$ particles in $\real{3}$.\footnote{\label{fn:phys_dist}The key here is that not all $N(N-1)/2$ combinations of real numbers constitute a valid set of physical distances. The distances must satisfy certain geometric constraints in order to be physically meaningful as explained below.} In technical terms, the coordinates of any point in $\mathcal{S}$ are said to be \emph{embeddable} in $\real{3}$. Note that $\mathcal{S}$ is a proper subset of $\mathbb{R}^{N(N-1)/2}$ as it consists of only those $N(N-1)/2$-tuple distances which satisfy certain geometric constraints. In fact, the set $\mathcal{S}$ represents a $(3N-6)$-dimensional manifold in $\mathbb{R}^{N(N-1)/2}$, commonly referred to as the \emph{shape space}. Let $\phi$ be the mapping taking a point in configuration space to the corresponding set of distances in $\mathcal{S}$, i.e., $\phi:\mathbb{R}^{3N} \to \mathcal{S}:(\bm{x}_1,\dots,\bm{x}_N) \mapsto (r_{12},\dots,r_{(N-1)N})$, where $r_{\alpha \beta}$ from here onwards is used to denote $\vnorm{\bm{x}_\alpha-\bm{x}_\beta}$. 
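As an aside, the invariance of $\phi$ on each orbit, and the fact that not every tuple of positive numbers is embeddable, are easy to check numerically. The configurations and distance values below are arbitrary samples of ours:

```python
import numpy as np

def phi(x):
    """Map an N x 3 configuration to its distance tuple (r12, r13, ..., r_(N-1)N)."""
    N = len(x)
    return np.array([np.linalg.norm(x[a] - x[b])
                     for a in range(N) for b in range(a + 1, N)])

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 3))            # a sample 4-particle configuration

# A rigid-body motion g : x -> Qx + c, with Q orthogonal (from a QR factorization)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
c = rng.standard_normal(3)
g_x = x @ Q.T + c

assert np.allclose(phi(x), phi(g_x))       # phi is constant on each orbit O_u

# Not every tuple of positive numbers lies in S: (1, 1, 3) violates the
# triangle inequality, so no three points in R^3 realize these distances.
r12, r13, r23 = 1.0, 1.0, 3.0
assert r23 > r12 + r13
```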
Since the Euclidean group preserves distances, it immediately follows that the map \begin{align} \label{eqn:bijection} \bar{\phi}:\{\text{Equivalence classes}\} \to \mathcal{S}, \end{align} defined as $\bar{\phi}(\mathcal{O}_{\bm{u}}) = \phi(\bm{u})$, is a bijection (one-to-one and onto mapping) from the set of equivalence classes to the set $\mathcal{S}$.\footnote{$\bar{\phi}$ is surjective (onto) by the definition of $\mathcal{S}$. The proof that it is injective (one-to-one) is similar to the proof of the \emph{basic invariance theorem} for the simultaneous invariants of vectors due to Cauchy, which can be found in \cite[Section 11]{truesdell}.} This essentially means that for every set of equivalent configurations, i.e., configurations related to each other by a rigid body motion and/or reflection, there exists a unique $N(N-1)/2$-tuple of distances and vice versa. From \eref{eqn:pot_orbit} and \eref{eqn:bijection}, it immediately follows that the potential energy of the system can be completely described by a function $\breve{\mathcal{V}}_{\rm int}:\mathcal{S} \to \mathbb{R}$, defined as \begin{align} \breve{\mathcal{V}}_{\rm int}(\bm{s}) := \overline{\mathcal{V}}_{\rm int}(\bar{\phi}^{-1}(\bm s)) \qquad \forall \bm{s} \in \mathcal{S}. \label{eqn:pot_shape} \end{align} We now restrict our discussion to those systems for which there exists a continuously differentiable extension of $\breve{\mathcal{V}}_{\rm int}$, defined on the shape space, to $\mathbb{R}^{N(N-1)/2}$.\footnote{The extension is necessary since $\breve{\mathcal{V}}_{\rm int}$ is defined in \eref{eqn:pot_shape} only on the set $\mathcal{S}$. We need to extend the definition to {\em all} points in $\mathbb{R}^{N(N-1)/2}$, whether they correspond to a set of physical distances or not, in order to be able to compute derivatives as explained later in the text. 
This issue has been overlooked in the past (see for example \cite{delph2005}), which leads to the conclusion that the stress tensor is always symmetric. It turns out that this conclusion is correct (at least for point masses without internal structure), but the reasoning is more involved as we show later.} This is justifiable because all interatomic potentials used in practice, for a system of $N$ particles, are either continuously differentiable functions on $\mathbb{R}^{N(N-1)/2}$, or can easily be extended to one. For example, the pair potential and the embedded-atom method (EAM) potential \cite{eam} are continuously differentiable functions on $\mathbb{R}^{N(N-1)/2}$, while the Stillinger-Weber \cite{stillinger1985} and the Tersoff \cite{tersoff1988} potentials can be easily extended to $\mathbb{R}^{N(N-1)/2}$ by expressing the angles appearing in them as a function of distances between particles. Therefore, we assume that there exists a continuously differentiable function $\mathcal{V}_{\rm int}:\mathbb{R}^{N(N-1)/2} \to \mathbb{R}$, such that the restriction of $\mathcal{V}_{\rm int}$ to $\mathcal{S}$ is equal to $\breve{\mathcal{V}}_{\rm int}$: \begin{align} \mathcal{V}_{\rm int} (\bm{s}) = \breve{\mathcal{V}}_{\rm int}(\bm{s}) \quad \forall \bm{s}=(r_{12},\dots,r_{(N-1)N}) \in \mathcal{S}. \label{eqn:restriction} \end{align} An immediate question that arises is whether this extension is unique in a neighborhood of $\bm{s} \in \mathcal{S}$. Note that for $3 \le N \le 4$, $3N-6 = N(N-1)/2$. Therefore, for $3 \le N \le 4$, for every point $\bm{s} \in \mathcal{S}$, there exists a neighborhood in $\mathbb{R}^{N(N-1)/2}$ which lies in $\mathcal{S}$. However, for $N>4$, there may be multiple extensions of $\breve{\mathcal{V}}_{\rm{int}}$. As noted above, the reason we are considering an extension is to define the partial derivative of the potential energy with respect to each coordinate of a point in $\mathbb{R}^{N(N-1)/2}$.
This will be used later to define the stress tensor. For example, the partial derivative of $\mathcal{V}_{\rm int}(\zeta_{12},\dots,\zeta_{(N-1)N})$ with respect to $\zeta_{12}$ at any point $\bm{s}=(r_{12},\dots,r_{(N-1)N}) \in \mathcal{S}$, defined as \begin{align} \label{eqn:diff_extension} \frac{\partial \mathcal{V}_{\rm int}}{\partial \zeta_{12}}(\bm{s}) = \lim_{\epsilon \to 0} \frac{\mathcal{V}_{\rm int}(r_{12}+\epsilon,\dots,r_{(N-1)N}) -\mathcal{V}_{\rm int}(r_{12},\dots,r_{(N-1)N})}{\epsilon}, \end{align} requires us to evaluate the function at non-embeddable points. It will be shown later that the quantity evaluated in \eref{eqn:diff_extension} may differ for different extensions. On the other hand, $\nabla_{\bm{x}_\alpha} \mathcal{V}_{\rm int}$ is uniquely defined for any extension. This is because \begin{align} \nabla_{\bm{x}_\alpha} \mathcal{V}_{\rm int}(\bm{s}) &= \nabla_{\bm{x}_\alpha} \breve{\mathcal{V}}_{\rm int}(\bm{s}) \notag \\ &= \nabla_{\bm{x}_\alpha}\overline{\mathcal{V}}_{\rm int}(\bar{\phi}^{-1}(\bm{s})) \notag \\ &= \nabla_{\bm{x}_\alpha}\widehat{\mathcal{V}}_{\rm int}(\bm{u}), \label{eqn:force_equality} \end{align} where $\bar{\phi}^{-1}(\bm{s})=\mathcal{O}_{\bm{u}}$, which implies $\phi(\bm{u})=\bm{s}$,\footnote{Note that the vector $\bm{u}$ appearing in \eref{eqn:force_equality} can be replaced by any $\bm{v} \in \mathcal{O}_{\bm{u}}$.} and we have used \eref{eqn:restriction}, \eref{eqn:pot_shape} and \eref{eqn:pot_orbit} in the first, second, and last equalities, respectively. We next address the possibility of having multiple extensions for the potential energy by studying the various constraints that the distances between particles have to satisfy in order to be embeddable in $\real{3}$. We demonstrate, through a simple example, how multiple extensions for the potential energy can lead to a non-unique decomposition of the force on a particle, which in turn leads to a non-unique pointwise stress tensor.
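The dimension counting used above is simple arithmetic; the following sketch tabulates, for $N \ge 3$, the excess of the number of distance coordinates over the shape-space dimension $3N-6$, i.e., the codimension of the shape space in $\mathbb{R}^{N(N-1)/2}$. It is this excess that makes alternate extensions possible for $N > 4$:

```python
# Codimension of the shape space: number of distance coordinates N(N-1)/2
# minus the shape-space dimension 3N - 6 (valid for N >= 3).
excess = {N: N * (N - 1) // 2 - (3 * N - 6) for N in range(3, 8)}
print(excess)  # -> {3: 0, 4: 0, 5: 1, 6: 3, 7: 6}
```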
\subsubsection*{Central-force decomposition and the possibility of alternate extensions} \label{page:altext} We will now show that the force on a particle can always be decomposed as a sum of central forces. The force on a particle due to internal interactions is defined in \eref{eqn:def_fi}. Therefore, for any configuration $\bm{u} \in \mathbb{R}^{3N}$, we have \begin{align} \label{eqn:force_alpha} \bm{f}^{\rm int}_\alpha(\bm{u}) &= -\nabla_{\bm{x}_\alpha} \widehat{\mathcal{V}}_{\rm{int}}(\bm{u}). \end{align} Using \eref{eqn:force_equality}, \eref{eqn:force_alpha} takes the form \begin{align} \bm{f}^{\rm int}_\alpha(\bm{u}) &= -\nabla_{\bm{x}_\alpha} \mathcal{V}_{\rm{int}}(\bm{s}) \vert_{\bm{s}=\phi(\bm{u})} \notag \\ &= \sum_{\substack{\beta \\ \beta \ne \alpha}} \bm{f}_{\alpha\beta}(\bm{u}), \label{eqn:f_decomp_general} \end{align} where $\bm{s}=\phi(\bm{u})=(r_{12},\dots,r_{(N-1)N})$ and \begin{equation} \label{eqn:define_fij} \bm{f}_{\alpha \beta}(\bm{u}) := \left \{ \begin{array}{ll} \frac{\partial\mathcal{V}_{\rm int}}{\partial \zeta_{\alpha\beta}}(\phi(\bm{u})) \frac{\bm{x}_\beta - \bm{x}_\alpha}{r_{\alpha \beta}} & \mbox{if $\alpha<\beta$}, \\ \frac{\partial\mathcal{V}_{\rm int}}{\partial \zeta_{\beta\alpha}}(\phi(\bm{u})) \frac{\bm{x}_\beta - \bm{x}_\alpha}{r_{\alpha \beta}} & \mbox{if $\alpha>\beta$}, \end{array} \right. \end{equation} is the contribution to the force on particle $\alpha$ due to the presence of particle $\beta$. Note that $\bm{f}_{\alpha\beta}$ is parallel to the direction $\bm{x}_\beta - \bm{x}_\alpha$ and satisfies $\bm{f}_{\alpha \beta}=-\bm{f}_{\beta \alpha}$. 
We therefore note the important result that the \emph{internal force on a particle, for any interatomic potential that has a continuously differentiable extension, can always be decomposed as a sum of central forces, i.e., forces parallel to directions connecting the particle to its neighbors}.\footnote{ The result that the force on a particle, modeled using any interatomic potential with a continuously differentiable extension, can be decomposed as a sum of central forces may seem strange to some readers. This may be due to the common confusion in the literature of using the term ``central-force models'' to refer to simple pair potentials. In fact, we see that due to the invariance requirement stated on Page~\pageref{page:invariance}, {\em all} interatomic potentials (including those with explicit bond angle dependence) that can be expressed as a continuously differentiable function as described in the text, are central-force models. By this we mean that the force on any particle (say $\alpha$) can be decomposed as a sum of terms, $\bm{f}_{\alpha\beta}$, aligned with the vectors joining particle $\alpha$ with its neighbors and satisfying action and reaction. The difference between the general case and that of a pair potential is that for a pair potential, $\vnorm{\bm{f}_{\alpha\beta}}$ depends {\em only} on the distance $r_{\alpha\beta}$ between the particles, whereas for a general potential, the dependence is on a larger set of distances, $\vnorm{\bm{f}_{\alpha\beta}}=\frac{\partial \mathcal{V}_{\rm int}}{\partial \zeta_{\alpha\beta}}(r_{12},r_{13},\dots,r_{(N-1)N})$, i.e., $\vnorm{\bm{f}_{\alpha\beta}}$ depends on the {\em environment} of the ``bond'' between $\alpha$ and $\beta$. For this reason, $\bm{f}_{\alpha\beta}$ for a pair potential is a property of particles $\alpha$ and $\beta$ alone and can be physically interpreted as the ``force exerted on particle $\alpha$ by particle $\beta$''.
Whereas, in the more general case of arbitrary interatomic potentials, the physical significance of the interatomic force is less clear and at best we can say that $\bm{f}_{\alpha\beta}$ is the ``contribution to the force on particle $\alpha$ due to the presence of particle $\beta$''. } We will see later in \sref{sec:stronglaw} that the central-force decomposition is the only physically-meaningful partitioning of the force. The remaining question is how different potential energy extensions affect the force decomposition in \eref{eqn:f_decomp_general}. We have already established in \eref{eqn:force_equality} and \eref{eqn:force_alpha} that the force $\bm{f}_\alpha^{\rm int}$ is independent of the particular extension used. However, we show below that the individual terms in the decomposition, $\bm{f}_{\alpha\beta}$, are {\em not} unique. These terms depend on the manner in which the potential energy, defined on the shape space, is extended to its neighborhood in the higher-dimensional Euclidean space. In order to construct different extensions, we use the geometric constraints that the distances have to satisfy in order for them to be embeddable in $\real{3}$.\footnote{We thank Ryan Elliott for suggesting this line of thinking.} The nature of these constraints is studied in the field of \emph{distance geometry}, which describes the geometry of sets of points in terms of the distances between them (see Appendix \ref{sec:geometry}). One of the main results of this theory is that the constraints are given by \emph{Cayley-Menger determinants}, which are related to the volume of a simplex formed by $N$ points in an $(N-1)$-dimensional space. For simplicity, let us restrict our discussion to one dimension. It is easy to see that in one dimension the number of independent coordinates is $N-1$ and for $N>2$ the number of interatomic distances exceeds the number of independent coordinates.
Therefore, let the material system $\mathcal{M}$ consist of three point masses interacting in one dimension. The standard pair potential representation for this system, which is an extension of the potential energy to the higher-dimensional Euclidean space, is given by \begin{equation} \mathcal{V}_{\rm{int}}(\zeta_{12},\zeta_{13},\zeta_{23}) = \mathcal{V}_{12}(\zeta_{12}) + \mathcal{V}_{13}(\zeta_{13}) + \mathcal{V}_{23}(\zeta_{23}). \label{eqn:pair_pot} \end{equation} Since the calculation gets unwieldy, let us consider the special case when the particles are arranged to satisfy $x_1<x_2<x_3$, for which $r_{13}=r_{12}+r_{23}$. Using \eref{eqn:f_decomp_general}, the internal force, $f_1^{\rm{int}}$, evaluated at this configuration, is decomposed as \begin{align} f_1^{\rm{int}}(r_{12},r_{13},r_{23}) = -\frac{d\mathcal{V}_{\rm{int}}}{dx_1} &= -\frac{d\mathcal{V}_{12}}{dx_1}-\frac{d\mathcal{V}_{13}}{dx_1} \notag \\ &= \mathcal{V}'_{12}(r_{12}) + \mathcal{V}'_{13}(r_{13}) \notag \\ & =: f_{12} + f_{13}. \label{eqn:define_fij_pair} \end{align} We now provide an alternate extension to the standard pair potential representation given in \eref{eqn:pair_pot}. The Cayley-Menger determinant corresponding to a cluster of three points (see \eref{eq:CMD4}) is identically equal to zero at every point on the shape space. This is because the shape space corresponds to a configuration of three collinear points, and the area of the triangle formed by three collinear points is zero. Thus, we have \begin{align} \chi(r_{12},r_{13},r_{23}) &= (r_{12}-r_{13}-r_{23})(r_{23}-r_{12}-r_{13}) \notag \\ & \quad \times (r_{13}-r_{23}-r_{12})(r_{12}+r_{13}+r_{23}) \notag \\ &= 0. 
\label{eqn:cayley_1d} \end{align} Using the identity in \eref{eqn:cayley_1d}, an alternate extension $\mathcal{V}^{\mathcal{A}}_{\rm{int}}$ is constructed: \begin{align} \mathcal{V}^{\mathcal{A}}_{\rm{int}}(\zeta_{12},\zeta_{13},\zeta_{23}) &= \mathcal{V}_{\rm int}(\zeta_{12},\zeta_{13},\zeta_{23}) + \chi(\zeta_{12},\zeta_{13},\zeta_{23}). \label{eqn:pot_rep_2} \end{align} Note that $\mathcal{V}^{\mathcal{A}}_{\rm{int}}$ is indeed an extension because from \eref{eqn:cayley_1d} it is clear that $\mathcal{V}^{\mathcal{A}}_{\rm{int}}$ is equal to $\mathcal{V}_{\rm{int}}$ at every point on the shape space of the system, and it is continuously differentiable because $\chi(\zeta_{12},\zeta_{13},\zeta_{23})$, being a polynomial, is infinitely differentiable. Let us now see how the internal force, $f_1^{\rm int}$, for the special configuration considered in this example, is decomposed using the new extension: \begin{align} f_1^{\rm{int}} = -\frac{d\mathcal{V}_{\rm{int}}^{\mathcal{A}}}{dx_1} &= -\frac{d\mathcal{V}_{\rm{int}}}{dx_1} - \frac{d\chi}{dx_1} \notag \\ &= \left (\mathcal{V}'_{12} - \frac{\partial \chi}{\partial \zeta_{12}}(\bm{s}) \frac{\partial \zeta_{12}}{\partial x_1}(\bm{s}) \right ) + \left( \mathcal{V}'_{13} - \frac{\partial \chi}{\partial \zeta_{13}}(\bm{s}) \frac{\partial \zeta_{13}}{\partial x_1}(\bm{s}) \right ) \notag \\ &= \left (f_{12}-8r_{12}r_{23}(r_{12} + r_{23}) \right ) + \left ( f_{13}+8r_{12}r_{23}(r_{12} + r_{23}) \right ) \notag \\ &=: \tilde{f}_{12} + \tilde{f}_{13}. \label{eqn:f1_decomp} \end{align} It is clear from \eref{eqn:define_fij_pair} and \eref{eqn:f1_decomp} that the central-force decomposition is not the same for the two representations, i.e., $f_{12} \ne \tilde{f}_{12}$ and $f_{13} \ne \tilde{f}_{13}$; however, the force on particle 1, $f_1^{\rm int}$, is the same in both cases, as expected.
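To make the non-uniqueness concrete, here is a small numerical sketch of this example. The distances and the quadratic pair functions $\mathcal{V}_{ij}(r) = (r-1)^2$ are our own sample choices; the sketch verifies that $\chi$ vanishes on the collinear shape space, recovers the derivative terms $\mp 8r_{12}r_{23}(r_{12}+r_{23})$ by finite differences, and confirms that the two decompositions differ term by term while summing to the same $f_1^{\rm int}$:

```python
import numpy as np

def chi(r12, r13, r23):
    """Factorized Cayley-Menger expression for three points (eqn:cayley_1d)."""
    return ((r12 - r13 - r23) * (r23 - r12 - r13)
            * (r13 - r23 - r12) * (r12 + r13 + r23))

# Collinear configuration x1 < x2 < x3; the distances are sample values.
r12, r23 = 0.9, 1.2
r13 = r12 + r23
assert abs(chi(r12, r13, r23)) < 1e-12        # chi vanishes on the shape space

# Partial derivatives of chi off the shape space, by central differences
h = 1e-6
dchi_dr12 = (chi(r12 + h, r13, r23) - chi(r12 - h, r13, r23)) / (2 * h)
dchi_dr13 = (chi(r12, r13 + h, r23) - chi(r12, r13 - h, r23)) / (2 * h)
R = 8 * r12 * r23 * (r12 + r23)
assert np.isclose(dchi_dr12, -R) and np.isclose(dchi_dr13, R)

# Pair potential V_ij(r) = (r - 1)^2, so f_1j = V'_1j(r_1j) = 2 (r_1j - 1)
f12, f13 = 2 * (r12 - 1), 2 * (r13 - 1)
# Decomposition induced by the alternate extension (eqn:f1_decomp)
f12_alt, f13_alt = f12 - R, f13 + R

assert not np.isclose(f12, f12_alt)                # the terms differ...
assert np.isclose(f12 + f13, f12_alt + f13_alt)    # ...but f_1^int is unchanged
```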
It is interesting to note that $\mathcal{V}^{\mathcal{A}}_{\rm int}$ is \emph{not} a pair potential (based on the definition of a pair potential), but it is equivalent to a pair potential, i.e., it agrees with a pair potential on the shape space. Thus, the set of continuously differentiable extensions of a given interatomic potential function forms an equivalence class. It is not clear at this stage if these equivalence classes can be fully expressed in terms of the Cayley-Menger determinant constraints. Although the above example is quite elementary, this process can be extended to an arbitrary number of particles in three dimensions. Any given potential can be altered to an equivalent potential by adding a function of the Cayley-Menger determinants corresponding to any cluster of 5 or 6 particles (see Appendix \ref{sec:geometry}). This function must be continuously differentiable and equal to zero when all of its arguments are zero. For example, a new representation in three dimensions can be constructed by adding a linear combination of the Cayley-Menger determinants: \begin{equation} \mathcal{V}_{\rm int}^* = \mathcal{V}_{\rm int}(\zeta_{12}, \dots, \zeta_{(N-1)N}) + \sum_{k=1}^m \lambda_k \chi_k, \label{eqn:3drep} \end{equation} where there are $m$ constraints defined by the Cayley-Menger determinants $\chi_k$, and $\lambda_k$ are constants.\footnote{Note that \eref{eqn:3drep} has the same form as a Lagrangian with the $\lambda$ terms playing the role of Lagrange multipliers. For a static minimization problem, we seek to minimize $\mathcal{V}_{\rm int}^*$, without violating the physical constraints relating the distances to each other. (This is equivalent to minimizing $\widehat{\mathcal{V}}_{\rm int}$ with respect to the positions of particles.)
Thus, the original constrained minimization of $\mathcal{V}_{\rm int}$ is replaced by the problem of finding the saddle points of $\mathcal{V}_{\rm int}^*$.} From this point on, we abuse our notation slightly and write for any $\bm{s}=(r_{12},\dots,r_{(N-1)N})\in\mathcal{S}$: \begin{align} \frac{\partial \mathcal{V}_{\rm int}}{\partial r_{\alpha \beta}} \quad \text{for} \quad \frac{\partial \mathcal{V}_{\rm int}}{\partial \zeta_{\alpha \beta}}(\bm{s}). \end{align} Also, we assume that there exists a continuously differentiable extension whenever we write $\bm{f}_{\alpha\beta}$, and we sometimes refer to a continuously differentiable extension simply as an extension. \subsubsection*{Derivation of the pointwise stress tensor} We now return to the differential equation in \eref{eqn:stress_force_differential} for the potential part of the pointwise stress tensor. Substituting the force decomposition given in \eref{eqn:f_decomp_general}, corresponding to a continuously differentiable extension, into \eref{eqn:stress_force_differential}, we obtain \begin{equation} \label{eqn:stress_force_differential_*} \operatorname{div}_{\bm{x}} \bm{\sigma}_{\rm{v}}(\bm{x},t) = \sum_{\substack{\alpha,\beta \\ \alpha \ne \beta}} \langle W \bm{f}_{\alpha \beta} \mid \bm{x}_\alpha = \bm{x} \rangle. \end{equation} Using the identity \begin{equation} \left \langle \bm{f}_{\alpha\beta} W \mid \bm{x}_\alpha = \bm{x} \right \rangle = \int_{\real{3}} \left \langle \bm{f}_{\alpha\beta} W \mid \bm{x}_\alpha = \bm{x}, \bm{x}_\beta = \bm{y} \right \rangle \, d \bm{y}, \end{equation} equation \eref{eqn:stress_force_differential_*} takes the form \begin{equation} \label{eqn:stress_force_differential_**} \operatorname{div}_{\bm{x}} \bm{\sigma}_{\rm{v}}(\bm{x},t) = \sum_{\substack{\alpha,\beta \\ \alpha \ne \beta}} \int_{\real{3}} \left \langle W \bm{f}_{\alpha\beta} \mid \bm{x}_\alpha = \bm{x}, \bm{x}_\beta = \bm{y} \right \rangle \, d\bm{y}.
\end{equation} We now note that, being anti-symmetric, the integrand in the right-hand side of the above equation satisfies all the necessary conditions for the application of Lemma \ref{lem1} given in Appendix A. Conditions (1) and (2) in Appendix A are satisfied through the regularity conditions on $W$. Therefore, using Lemma \ref{lem1}, which was proved by Noll in \cite{noll1955}, we have \begin{align} \label{eqn:stress_force_general} &\bm{\sigma}_{\rm{v}}(\bm{x},t) \\ &= \frac{1}{2} \sum_{\substack{\alpha,\beta \\ \alpha \neq \beta}} \int_{\real{3}} \int_{s=0}^{1} \left \langle -\bm{f}_{\alpha\beta} W \mid \bm{x}_\alpha=\bm{x}+s\bm{z},\bm{x}_\beta = \bm{x} - (1-s)\bm{z} \right \rangle \, ds \otimes \bm{z} \, d\bm{z} \notag \\ &= \frac{1}{2} \sum_{\substack{\alpha,\beta \\ \alpha \neq \beta}} \int_{\real{3}} \frac{\bm{z} \otimes \bm{z}}{\vnorm{\bm{z}}} \int_{s=0}^{1} \left \langle \frac{\partial\mathcal{V}_{\rm int}}{\partial r_{\alpha\beta}} W \mid \bm{x}_\alpha=\bm{x}+ s\bm{z},\bm{x}_\beta = \bm{x} - (1-s)\bm{z} \right \rangle \, ds \, d\bm{z}, \end{align} where in passing to the second line, we have used \eref{eqn:define_fij} and the identity $\bm{x}_\alpha-\bm{x}_\beta = \bm{x}+s\bm{z} - [\bm{x}-(1-s)\bm{z}] = \bm{z}$. For the special case of a pair potential, $\partial\mathcal{V}_{\rm int}/\partial r_{\alpha\beta}=\mathcal{V}'_{\alpha\beta}(r_{\alpha\beta})$, and \eref{eqn:stress_force_general} reduces to the expression originally given in \cite{noll1955}. \begin{figure} \centering \includegraphics[scale=0.4]{fig3_2} \caption{A schematic diagram helping to explain the vectors appearing in the pointwise potential stress expression in \eref{eqn:stress_force_general}. The bond $\alpha$--$\beta$ is defined by the vector $\bm{z}$. 
When $s=0$, atom $\alpha$ is located at point $\bm{x}$, and when $s=1$, atom $\beta$ is located at $\bm{x}$.} \label{fig:stressbond} \end{figure} \medskip The expression for the potential part of the pointwise stress tensor in \eref{eqn:stress_force_general} is a general result applicable to all interatomic potentials. We make some important observations regarding this expression below: \begin{enumerate} \item Although the expression for $\bm{\sigma}_{\rm{v}}$ appears complex, it is conceptually quite simple. $\bm{\sigma}_{\rm{v}}$ at a point $\bm{x}$ is the superposition of the expectation values of the forces in all possible bonds passing through $\bm{x}$. The variable $\bm{z}$ selects a bond length and direction, and the variable $s$ slides the bond through $\bm{x}$ from end to end (see \fref{fig:stressbond}). \item $\bm{\sigma}_{\rm{v}}$ is symmetric. This is clear because the term $\bm{z}\otimes\bm{z}$ is symmetric. Since the kinetic part of the stress in \eref{eqn:stress_kinetic} is also symmetric, the conclusion is that the \emph{pointwise stress tensor is symmetric for all interatomic potentials}. \item Since $\bm{\sigma}_{\rm{v}}$ depends on the nature of the force decomposition, and different extensions of a given potential energy can result in different force decompositions, we conclude that the pointwise stress tensor is \emph{non-unique} for all interatomic potentials (including the pair potential). We show in \sref{sec:unique_macro_stress} that the difference between any two pointwise stress tensors, resulting from different extensions of the interatomic potential energy, tends to zero as the volume of the domain over which these pointwise quantities are spatially averaged tends to infinity. Therefore, as expected, the macroscopic stress tensor, which is defined in the thermodynamic limit (see footnote~\ref{foot:tdlimit} on page~\pageref{foot:tdlimit}), is always unique and is independent of the potential energy extension.
\item Another source of non-uniqueness is that any expression of the form $\bm{\sigma}_{\rm{v}}+\tilde{\bm{\sigma}}$, where $\operatorname{div}_{\bm{x}} \tilde{\bm{\sigma}} = {\bf 0}$, also satisfies the balance of linear momentum and is therefore also a solution. We address this issue in \sref{sec:hardy}, where we show that in the thermodynamic limit under equilibrium conditions, the spatially averaged counterpart to $\bm{\sigma}_{\rm{v}}$ converges to the virial stress derived in \sref{ch:canonical}. \end{enumerate} The above results hinge on the use of the central-force decomposition in \eref{eqn:f_decomp_general}. One may wonder whether other \emph{non-central} decompositions exist, and if so, why they are discarded. This is discussed in the next section. \subsection{Non-central-force decompositions and the strong law of action and reaction} \label{sec:stronglaw} In the previous section, we showed that, as a consequence of the invariance of the potential energy with respect to the Euclidean group, for any interatomic potential with a continuously differentiable extension, the force on a particle can always be represented as a sum of central forces. In this section, we show that other \emph{non-central-force decompositions} are possible; however, these violate the \emph{strong law of action and reaction}, which we prove below, and therefore do not constitute physically-meaningful force decompositions. \subsubsection*{A proposal for a non-symmetric stress tensor for a three-body potential} As an example, let us now consider the case of a three-body potential. For simplicity, we assume that the potential only has three-body terms and all particles are identical.
Under these conditions, the internal potential energy is \begin{equation} \label{eqn:pot_rep_3body} \mathcal{V}_{\rm{int}} = \sum_{\substack{\alpha,\beta,\gamma \\ \alpha < \beta < \gamma}} \widehat{\mathcal{V}}(\bm{x}_{\alpha},\bm{x}_{\beta},\bm{x}_{\gamma}), \end{equation} where $\widehat{\mathcal{V}}(\bm{x}_\alpha,\bm{x}_\beta,\bm{x}_\gamma)$ is the potential energy of an isolated cluster, $\{\alpha,\beta,\gamma\}$, and $\sum_{\substack{\alpha,\beta,\gamma \\ \alpha < \beta <\gamma}}$ represents a triple sum. We know that a central-force decomposition can be obtained by following the procedure outlined in the previous section and that this leads to a symmetric pointwise stress tensor in \eref{eqn:stress_force_general}. Alternatively, a \emph{non-symmetric} three-body stress tensor is derived as follows. To keep things simple, we derive the stress tensor for a system containing only three particles. Rewriting \eref{eqn:pot_rep_3body} for this case, we have \begin{equation} \label{eqn:summation_split} \mathcal{V}_{\rm{int}} = \widehat{\mathcal{V}}(\bm{x}_1,\bm{x}_2,\bm{x}_3) = \sum_{\alpha=1}^3 \phi_\alpha, \end{equation} where \begin{equation} \label{eqn:def_phi_i} \phi_\alpha= \frac{1}{3} \widehat{\mathcal{V}}(\bm{x}_1,\bm{x}_2,\bm{x}_{3}) \end{equation} is the potential energy assigned to particle $\alpha$, equal to one-third of the total potential energy. Substituting \eref{eqn:summation_split} into \eref{eqn:stress_force_differential}, we obtain \begin{align} \operatorname{div}_{\bm{x}} \bm{\sigma}_{\rm{v}}(\bm{x},t) &= - \sum_{\alpha,\beta} \left \langle W \nabla_{\bm{x}_\alpha} \phi_\beta \mid \bm{x}_\alpha = \bm{x} \right \rangle \notag \\ &=- \sum_{\substack{\alpha,\beta \\ \alpha \ne \beta}} \left \langle W \nabla_{\bm{x}_\alpha} \phi_\beta \mid \bm{x}_\alpha = \bm{x} \right \rangle - \sum_{\alpha} \left \langle W \nabla_{\bm{x}_\alpha} \phi_\alpha \mid \bm{x}_\alpha = \bm{x} \right \rangle. 
\label{eqn:stress_force_differential_3body_*} \end{align} Since the cluster of three particles is isolated, the net force on the cluster due to internal interactions is zero. Therefore, from \eref{eqn:summation_split}, we have \begin{equation} \label{eqn:postulate} \nabla_{\bm{x}_\alpha}\phi_\alpha = -\sum_{\beta \ne \alpha} \nabla_{\bm{x}_\beta}\phi_\alpha. \end{equation} Using this relation, equation \eref{eqn:stress_force_differential_3body_*} simplifies to \begin{equation} \label{eqn:stress_force_differential_3body_**} \operatorname{div}_{\bm{x}} \bm{\sigma}_{\rm{v}}(\bm{x},t) = - \sum_{\substack{\alpha,\beta \\ \alpha \ne \beta}} \left \langle W (\nabla_{\bm{x}_\alpha} \phi_\beta - \nabla_{\bm{x}_\beta} \phi_\alpha) \mid \bm{x}_\alpha = \bm{x} \right \rangle. \end{equation} Let \begin{equation} \label{eqn:f_ij_3body} \bar{\bm{f}}_{\alpha\beta} := \nabla_{\bm{x}_\beta} \phi_\alpha - \nabla_{\bm{x}_\alpha} \phi_\beta. \end{equation} Now, using the identity \[ \left \langle \bar{\bm{f}}_{\alpha\beta} W \mid \bm{x}_\alpha = \bm{x} \right \rangle = \int_{\real{3}} \left \langle \bar{\bm{f}}_{\alpha\beta} W \mid \bm{x}_\alpha = \bm{x}, \bm{x}_\beta = \bm{y} \right \rangle \, d \bm{y}, \] and the definition given in \eref{eqn:f_ij_3body}, equation \eref{eqn:stress_force_differential_3body_**} takes the form \begin{equation} \label{eqn:stress_force_differential_3body_***} \operatorname{div}_{\bm{x}} \bm{\sigma}_{\rm{v}}(\bm{x},t) = \sum_{\substack{\alpha,\beta \\ \alpha \ne \beta}} \int_{\real{3}} \left \langle W \bar{\bm{f}}_{\alpha\beta} \mid \bm{x}_\alpha = \bm{x}, \bm{x}_\beta = \bm{y} \right \rangle \, d\bm{y}. \end{equation} Let us now study the definition of $\bar{\bm{f}}_{\alpha\beta}$ given in \eref{eqn:f_ij_3body}. 
From \eref{eqn:def_phi_i} we have \begin{equation} \bar{\bm{f}}_{\alpha\beta} = -\nabla_{\bm{x}_\alpha} \phi_\beta + \nabla_{\bm{x}_\beta} \phi_\alpha = \frac{1}{3} \left [ - \frac{\partial \widehat{\mathcal{V}}}{\partial \bm{x}_\alpha} + \frac{\partial \widehat{\mathcal{V}}}{\partial \bm{x}_\beta} \right ] = \frac{1}{3} (\bm{f}^{\rm int}_\alpha - \bm{f}^{\rm int}_\beta). \label{eqn:phi_ij_antisym} \end{equation} The above equation suggests how the force $\bm{f}^{\rm int}_\alpha$ is decomposed. For example, $\bm{f}^{\rm int}_1$ is decomposed as \begin{equation} \bm{f}^{\rm int}_1 = \bar{\bm{f}}_{12} + \bar{\bm{f}}_{13} = \frac{1}{3}(\bm{f}^{\rm int}_1 - \bm{f}^{\rm int}_2) + \frac{1}{3}(\bm{f}^{\rm int}_1 - \bm{f}^{\rm int}_3). \end{equation} Rearranging this relation gives \begin{equation} \bm{f}^{\rm int}_1 + \bm{f}^{\rm int}_2 + \bm{f}^{\rm int}_3 = \bm{0}, \end{equation} which is true since the cluster $\{1,2,3\}$ is isolated. \begin{figure} \centering \subfigure[]{\includegraphics[scale=0.6]{fig3_3a} \label{fig:force}} \subfigure[]{\includegraphics[scale=0.6]{fig3_3b} \label{fig:force_decomp}} \caption{(a) shows the force on each particle in a system consisting of 3 particles which interact through the 3-body potential given in \eref{eqn:summation_split}. Since the cluster is isolated, we have $\bm{f}_1^{\rm{int}} + \bm{f}_2^{\rm{int}} +\bm{f}_3^{\rm{int}}=\bm{0}$. (b) shows the decomposition of each force $\bm{f}_{\alpha}^{\rm{int}}$ into terms $\bar{\bm{f}}_{\alpha\beta}$ satisfying $\bar{\bm{f}}_{\alpha\beta}=-\bar{\bm{f}}_{\beta\alpha}$, but not necessarily parallel to the line joining particles $\alpha$ and $\beta$.} \label{fig:force_3body} \end{figure} From \eref{eqn:phi_ij_antisym} it is clear that $\bar{\bm{f}}_{\alpha\beta}$ is anti-symmetric with respect to its arguments.
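The algebra of this equal-split decomposition is easy to check numerically. The following sketch (ours, not part of the original derivation) verifies, for arbitrary forces summing to zero, that the terms $\bar{\bm{f}}_{\alpha\beta} = \frac{1}{3}(\bm{f}^{\rm int}_\alpha - \bm{f}^{\rm int}_\beta)$ are anti-symmetric and reconstruct each $\bm{f}^{\rm int}_\alpha$:

```python
import numpy as np

# Sanity check (illustrative): for any three forces summing to zero,
# the equal-split terms fbar_{ab} = (f_a - f_b)/3 are antisymmetric
# and reconstruct each total force f_a.
rng = np.random.default_rng(0)
f = rng.standard_normal((3, 3))
f[2] = -f[0] - f[1]                      # isolated cluster: sum_a f_a = 0

fbar = {(a, b): (f[a] - f[b]) / 3.0
        for a in range(3) for b in range(3) if a != b}

# Antisymmetry: fbar_{ab} = -fbar_{ba}
assert all(np.allclose(fbar[a, b], -fbar[b, a]) for (a, b) in fbar)

# Reconstruction: f_a = sum over b != a of fbar_{ab}
# (follows because sum_b f_b = 0, so 2 f_a - f_b - f_c = 3 f_a)
for a in range(3):
    assert np.allclose(sum(fbar[a, b] for b in range(3) if b != a), f[a])
```

Note that the check uses only the condition $\sum_\alpha \bm{f}^{\rm int}_\alpha = \bm{0}$; nothing about the direction of $\bar{\bm{f}}_{\alpha\beta}$ is constrained, which is precisely the issue examined next.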
Therefore, the integrand on the right-hand side of \eref{eqn:stress_force_differential_3body_***} satisfies all the necessary conditions for the application of Lemma \ref{lem1} given in Appendix \ref{ch:noll}. Conditions (1) and (2) in Appendix \ref{ch:noll} are satisfied through the regularity conditions on $W$. Therefore, using Lemma \ref{lem1}, we have \begin{align} \label{eqn:stress_general} \bar{\bm{\sigma}}_{\rm{v}}(\bm{x},t) &= \frac{1}{2} \sum_{\substack{\alpha,\beta \\ \alpha \ne \beta}} \int_{\real{3}} \int_{s=0}^{1} \notag \\ & \left \langle -( \nabla_{\bm{x}_\beta} \phi_\alpha - \nabla_{\bm{x}_\alpha} \phi_\beta ) W \mid \bm{x}_\alpha = \bm{x} + s\bm{z}, \bm{x}_\beta = \bm{x} - (1-s) \bm{z} \right \rangle \, ds\otimes \bm{z} \, d\bm{z}. \end{align} The stress $\bar{\bm{\sigma}}_{\rm{v}}$ is non-symmetric in general because $\bar{\bm{f}}_{\alpha\beta}$, defined in \eref{eqn:phi_ij_antisym}, need not be parallel to the line joining particles $\alpha$ and $\beta$, as shown in \fref{fig:force_3body}. We therefore have two expressions for the stress for the same three-body potential: the symmetric expression in \eref{eqn:stress_force_general} and the non-symmetric expression in \eref{eqn:stress_general}. We show next that the non-central-force decomposition that led to the non-symmetric stress tensor is not physically meaningful since it violates the strong law of action and reaction. \subsubsection*{Weak and strong laws of action and reaction\footnote{This derivation is due to Roger~Fosdick \cite{fosdick}.}} The following derivation hinges on the fact that in a material system the balance laws of linear and angular momentum must be satisfied for any part of the body. Consider a system of $N$ particles with masses $m_\alpha$ $(\alpha=1,\dots,N)$.
The total force on particle $\alpha$ is \begin{equation} \bm{f}_{\alpha} = \bm{f}^{\rm{ext}}_{\alpha} + \sum_{\substack{\beta \\ \beta \ne \alpha}} \bm{f}_{\alpha\beta}, \end{equation} where $\bm{f}^{\rm{ext}}_{\alpha}$ is the external force on particle $\alpha$, and, as above, $\bm{f}_{\alpha\beta}$ is the contribution to the force on particle $\alpha$ due to the presence of particle $\beta$. No assumptions are made regarding the terms $\bm{f}_{\alpha\beta}$ or the interatomic potential from which they are derived. A ``part'' $\wp_t$ of the system consists of $K \le N$ particles. We suppose $\bm{x}_0$ is a fixed point in space. Let $\bm{F}^{\rm{ext}}(\wp_t)$ denote the total force on the part $\wp_t$ external to the part. Let $\bm{M}^{\rm{ext}}(\wp_t; \bm{x}_0)$ denote the total external moment on $\wp_t$ about $\bm{x}_0$. Let $\bm{L}(\wp_t)$ be the linear momentum of the part $\wp_t$ and $\bm{H}(\wp_t; \bm{x}_0)$ be the angular momentum of $\wp_t$ about $\bm{x}_0$. We adopt the following balance laws, valid for all parts of the system:\footnote{The view that the balance of linear momentum and the balance of angular momentum are fundamental laws of mechanics lies at the basis of continuum mechanics. See, for example, Truesdell's article ``Whence the Law of Moment and Momentum?'' in \cite{truesdell_essays}.} \begin{align} \bm{F}^{\rm{ext}}(\wp_t) &= \frac{d\bm{L}}{dt}(\wp_t), \label{eqn:force_balance}\\ \bm{M}^{\rm{ext}}(\wp_t;\bm{x}_0) &= \frac{d\bm{H}}{dt}(\wp_t;\bm{x}_0).\label{eqn:mom_balance} \end{align} We now show that, by applying these balance laws to particular parts of the system, the strong law of action and reaction can be established. As a first observation, let $\wp_t$ consist of the single particle $\alpha$.
The external force and linear momentum for $\wp_t = \{\alpha\}$ are \begin{align} \bm{F}^{\rm{ext}}(\{\alpha\}) &= \bm{f}^{\rm{ext}}_{\alpha}(t) + \sum_{\substack{\gamma \\ \gamma \ne \alpha}}\bm{f}_{\alpha\gamma}(t), \\ \bm{L}(\{\alpha\}) &= m_{\alpha}\dot{\bm{x}}_{\alpha}(t) \qquad \text{(no sum)}. \end{align} The balance of linear momentum in \eref{eqn:force_balance} requires \begin{equation} \label{eqn:newton_law_1} \bm{f}^{\rm{ext}}_{\alpha} + \sum_{\substack{\gamma \\ \gamma \ne \alpha}} \bm{f}_{\alpha\gamma} = m_{\alpha} \ddot{\bm{x}}_{\alpha}. \end{equation} The external moment and angular momentum of $\wp_t$ are \begin{equation} \bm{M}^{\rm{ext}}(\{\alpha\};\bm{x}_0) = (\bm{x}_{\alpha}(t) - \bm{x}_0) \times (\bm{f}^{\rm{ext}}_{\alpha} + \sum_{\substack{\gamma \\ \gamma \ne \alpha}} \bm{f}_{\alpha\gamma}) = m_{\alpha}(\bm{x}_{\alpha}(t) - \bm{x}_0) \times \ddot{\bm{x}}_{\alpha}(t), \end{equation} where we have used \eref{eqn:newton_law_1}, and \begin{equation} \bm{H}(\{\alpha\};\bm{x}_0) = (\bm{x}_\alpha-\bm{x}_0) \times m_{\alpha} \dot{\bm{x}}_{\alpha}(t). \end{equation} The balance of angular momentum in \eref{eqn:mom_balance} is satisfied identically, since \begin{align*} m_{\alpha}(\bm{x}_{\alpha}(t) - \bm{x}_0) \times \ddot{\bm{x}}_\alpha(t) &= \frac{d}{dt} \left [ (\bm{x}_{\alpha}(t) - \bm{x}_0) \times m_{\alpha} \dot{\bm{x}}_{\alpha}(t) \right ] \\ &= \dot{\bm{x}}_{\alpha}(t) \times m_{\alpha} \dot{\bm{x}}_{\alpha}(t) + (\bm{x}_{\alpha}(t) - \bm{x}_0) \times m_{\alpha} \ddot{\bm{x}}_{\alpha}(t) \\ &= m_{\alpha} (\bm{x}_{\alpha}(t) - \bm{x}_0) \times \ddot{\bm{x}}_{\alpha}(t). \end{align*} \smallskip As a second observation, let $\wp_t$ consist of the two particles $\alpha$ and $\beta$.
The external force and linear momentum are \begin{align} \bm{F}^{\rm{ext}}(\{\alpha,\beta\}) &= \bm{f}^{\rm{ext}}_{\alpha} + \bm{f}^{\rm{ext}}_{\beta} + \sum_{\substack{\gamma \\ \gamma \ne \alpha, \beta}} ( \bm{f}_{\alpha\gamma} + \bm{f}_{\beta\gamma}), \\ \bm{L}(\{\alpha,\beta\}) &= m_{\alpha} \dot{\bm{x}}_{\alpha} + m_{\beta} \dot{\bm{x}}_{\beta}. \end{align} The balance of linear momentum in \eref{eqn:force_balance} requires \begin{equation} \bm{f}^{\rm{ext}}_{\alpha} + \bm{f}^{\rm{ext}}_{\beta} + \sum_{\substack{\gamma \\ \gamma \ne \alpha, \beta}}( \bm{f}_{\alpha\gamma} + \bm{f}_{\beta\gamma}) = m_{\alpha} \ddot{\bm{x}}_{\alpha} + m_{\beta} \ddot{\bm{x}}_{\beta}. \end{equation} Subtracting \eref{eqn:newton_law_1}, written for each of particles $\alpha$ and $\beta$, from this relation gives \begin{equation} \sum_{\substack{\gamma \\ \gamma \ne \alpha, \beta}}(\bm{f}_{\alpha\gamma} + \bm{f}_{\beta\gamma}) - \sum_{\substack{\gamma \\ \gamma \ne \alpha}} \bm{f}_{\alpha\gamma} - \sum_{\substack{\gamma \\ \gamma \ne \beta}} \bm{f}_{\beta\gamma} = \bm{0}, \end{equation} from which \begin{equation} \bm{f}_{\alpha\beta} + \bm{f}_{\beta\alpha} = \bm{0}. \label{eqn:f_antisym} \end{equation} This relation is called the \emph{weak law of action and reaction} \cite{goldstein}. It shows that $\bm{f}_{\alpha\beta} = -\bm{f}_{\beta\alpha}$, but does not guarantee that $\bm{f}_{\alpha\beta}$ lies along the line connecting particles $\alpha$ and $\beta$.
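To see concretely that the weak law does not force centrality, the following numerical sketch (ours, not from the original text) uses a hypothetical three-body energy $\widehat{\mathcal{V}} = \vnorm{\bm{x}_1-\bm{x}_2}^2 \, \vnorm{\bm{x}_1-\bm{x}_3}^2$, chosen only for illustration, together with the equal-split decomposition $\bar{\bm{f}}_{\alpha\beta} = \frac{1}{3}(\bm{f}^{\rm int}_\alpha - \bm{f}^{\rm int}_\beta)$ of \eref{eqn:phi_ij_antisym}; the resulting $\bar{\bm{f}}_{12}$ satisfies $\bar{\bm{f}}_{12} = -\bar{\bm{f}}_{21}$ but is not parallel to $\bm{x}_1 - \bm{x}_2$:

```python
import numpy as np

# Illustrative check (not from the text): the equal-split decomposition
# fbar_{ab} = (f_a - f_b)/3 satisfies the weak law, yet fbar_{ab} need
# not lie along x_a - x_b.  Toy three-body energy (hypothetical):
#   V = |x1 - x2|^2 * |x1 - x3|^2
x1 = np.array([0., 0., 0.])
x2 = np.array([1., 0., 0.])
x3 = np.array([0., 1., 0.])
a, b = x1 - x2, x1 - x3
A, B = a @ a, b @ b
f1 = -(2 * B * a + 2 * A * b)            # f_alpha = -grad_alpha V
f2 = 2 * B * a
f3 = 2 * A * b
assert np.allclose(f1 + f2 + f3, 0)      # isolated cluster

fbar12 = (f1 - f2) / 3.0
fbar21 = (f2 - f1) / 3.0
assert np.allclose(fbar12, -fbar21)      # weak law holds

# fbar12 is NOT parallel to x1 - x2: the cross product is nonzero.
assert not np.allclose(np.cross(x1 - x2, fbar12), 0)
```

Such a non-central term, while consistent with everything derived so far, is exactly what the argument below rules out on physical grounds.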
\smallskip Next, the external moment and angular momentum of $\wp_t$ are \begin{align} \bm{M}^{\rm{ext}}&(\{\alpha,\beta\};\bm{x}_0) \notag \\ &= (\bm{x}_{\alpha} - \bm{x}_0) \times ( \bm{f}^{\rm{ext}}_{\alpha} + \sum_{\substack{\gamma \\ \gamma \ne \alpha, \beta}} \bm{f}_{\alpha\gamma} ) + (\bm{x}_{\beta} -\bm{x}_0) \times ( \bm{f}^{\rm{ext}}_{\beta} + \sum_{\substack{\gamma \\ \gamma \ne \alpha, \beta}} \bm{f}_{\beta\gamma} ) \notag \\ &= (\bm{x}_{\alpha} - \bm{x}_0) \times (m_\alpha \ddot{\bm{x}}_\alpha - \bm{f}_{\alpha\beta}) + (\bm{x}_{\beta} - \bm{x}_0) \times (m_\beta \ddot{\bm{x}}_\beta- \bm{f}_{\beta\alpha}), \end{align} where we have used \eref{eqn:newton_law_1}, and \begin{equation} \bm{H}(\{\alpha,\beta\};\bm{x}_0) = (\bm{x}_\alpha-\bm{x}_0) \times m_{\alpha} \dot{\bm{x}}_{\alpha} + (\bm{x}_\beta-\bm{x}_0) \times m_{\beta} \dot{\bm{x}}_{\beta}. \end{equation} The balance of angular momentum in \eref{eqn:mom_balance} requires \begin{align} (\bm{x}_\alpha &- \bm{x}_0) \times (m_\alpha \ddot{\bm{x}}_\alpha - \bm{f}_{\alpha\beta}) + (\bm{x}_\beta- \bm{x}_0) \times (m_\beta \ddot{\bm{x}}_\beta- \bm{f}_{\beta\alpha}) \notag \\ &= \frac{d}{dt} \left [ (\bm{x}_{\alpha} - \bm{x}_0) \times m_{\alpha} \dot{\bm{x}}_{\alpha} + (\bm{x}_{\beta} - \bm{x}_0) \times m_{\beta} \dot{\bm{x}}_{\beta} \right ] \notag \\ &= \dot{\bm{x}}_{\alpha} \times m_{\alpha} \dot{\bm{x}}_{\alpha} + (\bm{x}_{\alpha} - \bm{x}_0) \times m_{\alpha} \ddot{\bm{x}}_{\alpha} + \dot{\bm{x}}_{\beta} \times m_{\beta} \dot{\bm{x}}_{\beta} + (\bm{x}_{\beta} - \bm{x}_0) \times m_{\beta} \ddot{\bm{x}}_{\beta} \notag \\ &= (\bm{x}_{\alpha} - \bm{x}_0) \times m_\alpha \ddot{\bm{x}}_{\alpha} + (\bm{x}_{\beta} - \bm{x}_0) \times m_\beta \ddot{\bm{x}}_{\beta}, \end{align} which simplifies to \[ (\bm{x}_\alpha - \bm{x}_0) \times \bm{f}_{\alpha\beta} + (\bm{x}_\beta- \bm{x}_0) \times \bm{f}_{\beta\alpha} = \bm{0}, \] and, after using \eref{eqn:f_antisym}, we obtain \begin{equation} (\bm{x}_\alpha - \bm{x}_\beta) \times \bm{f}_{\alpha\beta} = \bm{0}. \end{equation} This shows that $\bm{f}_{\alpha\beta}$ must be {\em parallel} to the line joining particles $\alpha$ and $\beta$. This is the \emph{strong law of action and reaction}. We have shown that this law must hold for any force decomposition in order for the balances of linear and angular momentum to hold for any subset of a system of particles. \subsubsection*{The possibility of non-symmetric stress} Based on the proof given above for the strong law of action and reaction, we argue that only force decompositions that satisfy the strong law of action and reaction provide a physically-meaningful definition for $\bm{f}_{\alpha\beta}$. For example, the definition in \eref{eqn:f_ij_3body} is not physical because if it were used to compute the external moment acting on a sub-system of particles, as is done above, the balance of angular momentum would be violated. For this reason, this decomposition and the corresponding non-symmetric stress in \eref{eqn:stress_general} are discarded. The conclusion is that \emph{a pointwise stress tensor for a discrete system of point masses without internal structure has to be symmetric}. In the next section, we discuss the possibility of expanding the class of solutions resulting from the Irving--Kirkwood--Noll procedure in a way that makes it possible to obtain non-symmetric stress tensors for systems where the point particles have internal structure. This involves a relaxation of the assumption that the ``bonds'' connecting two particles are necessarily straight. \subsection{Generalized non-symmetric pointwise stress for particles with internal structure} \label{sec:gen_stress} In \sref{sec:s_motion}, we saw that the Irving--Kirkwood--Noll procedure, when applied to multi-body potentials, results in a symmetric, non-unique pointwise stress tensor.
We now seek to find additional solutions to \eref{eqn:stress_force_differential_**}, which are not obtained using the standard Irving--Kirkwood--Noll procedure. In arriving at \eref{eqn:stress_force_general} using Lemma \ref{lem1}, we can see that the contribution to the potential part of the stress at position $\bm{x}$ is due to all possible bonds, \emph{assumed to be straight lines}, that pass through $\bm{x}$. The question that naturally arises is to what extent this assumption can be weakened. In other words, can Lemma \ref{lem1} be generalized in a suitable manner so that non-straight bonds can be accommodated? Such a possibility was first discussed by Schofield and Henderson \cite{schofield1982}, who used the Irving and Kirkwood approach with a series expansion of the Dirac-delta distribution. It will be shown in this section, using Noll's more rigorous approach, that solutions giving rise to non-straight bonds are possible. From a physical standpoint, non-straight bonds are possible for systems with internal degrees of freedom. An example would be the dipole-dipole interactions between water molecules resulting from the electrical dipole of each molecule. The possibility of internal degrees of freedom was already raised by Kirkwood in his 1946 paper \cite{kirkwood1946}. The idea is to relate the shape of the non-straight bonds to the additional physics associated with the internal degrees of freedom. This issue will be further explored in future work. For now, we only investigate the possible existence of additional solutions other than that given by \eref{eqn:stress_force_general}. We begin by describing the shape of a bond in a more precise way through the following definition.
\begin{definitions} The ``path of interaction'' between any two interacting particles $\alpha$ and $\beta$ is the unique contour that connects $\alpha$ and $\beta$, such that there is a non-zero contribution to the potential part of the pointwise stress, $\bm{\sigma}_{\rm{v}}$, at any point on this contour. \end{definitions} In this section, the terms \emph{bond} and \emph{path of interaction} are used synonymously. Therefore, for the case of the pointwise stress tensor in \eref{eqn:stress_force_general}, the path of interaction is given by the straight line joining $\alpha$ and $\beta$. It is shown in Appendix \ref{ch:noll} that, under certain restrictions on the path of interaction, Lemma \ref{lem1} can be generalized to Lemma \ref{lem2}, also given in Appendix \ref{ch:noll}. Roughly speaking, these restrictions are given by the following conditions: \begin{enumerate} \item \label{item:bond_shape_1} The shape of the bond connecting particles $\alpha$ and $\beta$ depends only on the distance between $\alpha$ and $\beta$.\footnote{\label{foot:curveshape} This is in essence a constitutive postulate similar to the assumption in pair potentials that the energy depends only on the distance between particles. A more general dependence of the shape of the path of interaction on the environment of a pair of particles might be possible, but is not pursued here. See also Definition~\ref{def:poi} in Appendix~\ref{ch:noll} and the following discussion.} \item For any two pairs of particles $(\alpha,\beta)$ and $(\gamma,\delta)$ separated by the same distance, the bonds $\alpha$--$\beta$ and $\gamma$--$\delta$ are related by a rigid body motion.
In addition, if $\bm{x}_\alpha - \bm{x}_\beta = \bm{x}_\gamma - \bm{x}_\delta$, then this rigid body motion involves only translation.\label{item:bond_shape_2} \end{enumerate} From condition \ref{item:bond_shape_1}, it is clear that the shape of the bonds can be described by contours $\bm{\Upsilon}_{l}:[0,1] \to \real{3}$, for $l>0$, with $\bm{\Upsilon}_l(0)=(0,0,0)$ and $\bm{\Upsilon}_l(1)=(l,0,0)$, along with some mild restrictions. Hence, defining the contours $\bm{\Upsilon}_l$ for $l>0$ is equivalent to defining the paths of interaction between any two points in $\real{3}$. For a precise definition of $\bm{\Upsilon}_l$ and the paths of interaction, see Appendix \ref{ch:noll}. Since all the necessary conditions for the application of Lemma \ref{lem2} are the same as those for Lemma \ref{lem1}, we can use Lemma \ref{lem2} in the Irving--Kirkwood--Noll procedure instead of Lemma \ref{lem1}. In doing so, we arrive at a definition for the generalized pointwise stress tensor $\bm{\sigma}_{\rm{v}}^{\rm{G}}$, for given paths of interaction with the above-mentioned properties. This is given by \begin{align} \label{eqn:stress_force_schofield} &\bm{\sigma}_{\rm{v}}^{\rm{G}}(\bm{x},t) = \notag \\ & \frac{1}{2} \sum_{\substack{\alpha,\beta \\ \alpha \neq \beta}} \int_{\real{3}} \int_{s=0}^{1} \left \langle \bm{f}_{\alpha\beta} W \mid \bm{x}_\alpha=\bm{x}_{\perp}+s\bm{z},\bm{x}_\beta = \bm{x}_{\perp} - (1-s)\bm{z} \right \rangle \otimes \bm{Q}_{\bm{z}} \bm{\Upsilon}'_{\vnorm{\bm{z}}}(s) \, ds \, d\bm{z}, \end{align} where $\bm{f}_{\alpha\beta}$ is defined in \eref{eqn:define_fij}, and $\bm{x}_{\perp}(s,\bm{x},\bm{z}) = \bm{x} - s\bm{z}-\bm{Q}_{\bm{z}}\bm{\Upsilon}_{\vnorm{\bm{z}}}(s)$ with $\bm{Q}_{\bm{z}} \in \bm{SO}(3)$. Here, $\bm{x}_\perp$ represents the projection of $\bm{x}$ onto the line joining the endpoints, $\bm{x}_\alpha = \bm{x}_\perp + s\bm{z}$ and $\bm{x}_\beta = \bm{x}_\perp - (1-s)\bm{z}$, of the path of interaction being considered.
$\bm{Q}_{\bm{z}}$ represents the rotation part of the rigid body motion described in condition \ref{item:bond_shape_2} that maps the contour $\bm{\Upsilon}_{\vnorm{\bm{x}_\alpha - \bm{x}_\beta}}$ to the path of interaction that connects $\bm{x}_\alpha$ and $\bm{x}_\beta$. \medskip Equation \eref{eqn:stress_force_schofield} is a general expression for the potential part of the pointwise stress tensor, of which \eref{eqn:stress_force_general} is a special case. We discuss several key features of this expression below: \begin{enumerate} \item Equation \eref{eqn:stress_force_schofield} is unique only up to the choice of paths of interaction for a given potential energy extension. It is a more general result than \eref{eqn:stress_force_general}, since \eref{eqn:stress_force_general} can be obtained from \eref{eqn:stress_force_schofield} by assuming that the path of interaction between any two points is the straight line connecting them. For this special case, it is easy to see that $\bm{x}_{\perp} = \bm{x}$ and $\bm{Q}_{\bm{z}} \bm{\Upsilon}'_{\vnorm{\bm{z}}}(s) = -\bm{z}$. \item $\bm{\sigma}_{\rm{v}}^{\rm{G}}$ is in general non-symmetric, whereas the stress obtained through the standard Irving--Kirkwood--Noll procedure is always symmetric for any multi-body potential with an extension. Since the kinetic part of the stress tensor $\bm{\sigma}_{\rm{k}}$ (see \eref{eqn:stress_kinetic}) is symmetric, it follows that the total pointwise stress tensor obtained from the generalized expression is generally non-symmetric. Therefore, under the present setting, the balance of angular momentum is satisfied only through the presence of couple stresses. This suggests that non-straight bonds might correspond to systems with particles having internal degrees of freedom.
\begin{figure} \centering \subfigure[]{\includegraphics[totalheight=0.2\textheight]{fig3_4a}} \\ \subfigure[]{\includegraphics[totalheight=0.2\textheight]{fig3_4b}} \\ \subfigure[]{\includegraphics[totalheight=0.2\textheight]{fig3_4c}} \caption{A schematic diagram helping to explain the vectors appearing in the inner integral of \eref{eqn:stress_force_schofield} for a given point $\bm{x}$. The integral in \eref{eqn:stress_force_schofield} is an integral over all possible paths of interaction that pass through the point $\bm{x}$. The inner integral with respect to $s$, with $\bm{z}$ fixed, is an integral over those paths, where $\bm{z}$ is the vector joining their endpoints. Frame (a) shows a path of interaction when $s=0$. As $s$ is increased, the path ``slides'' through $\bm{x}$. Frame (b) shows the path for an arbitrary $s$ in the interval $[0,1]$. The endpoints are represented by $\bm{x}_{\perp}+s\bm{z}$ and $\bm{x}_{\perp}-(1-s)\bm{z}$. Frame (c) shows the position of the path for $s=1$.} \label{fig:bond} \end{figure} \item Since both \eref{eqn:stress_force_general} and \eref{eqn:stress_force_schofield} are valid definitions for the potential part of the pointwise stress, the question of which one to choose depends on the presence of internal degrees of freedom in each particle. In the absence of internal degrees of freedom, only straight bonds are possible due to symmetry. The issue of the pointwise stress being unique only up to a divergence-free tensor-valued function is partially addressed here, since the expression given by the difference between the two definitions is divergence-free. \item The expression in \eref{eqn:stress_force_schofield} is very similar to \eref{eqn:stress_force_general}. The pointwise stress at $\bm{x}$ is a superposition of the expectation values of the forces in all possible \emph{bonds}/\emph{paths of interaction} passing through $\bm{x}$.
The vector $\bm{z}$ selects the orientation and length of the vector connecting the two ends of the bond, and $s$ slides the bond through $\bm{x}$ from end to end as shown in \fref{fig:bond}. \end{enumerate} \subsection{Definition of the pointwise traction vector} \label{sec:traction_vector} In this section, we derive the formula for the pointwise traction vector $\bm{t}(\bm{x},\bm{n};t)$ defined on the surface passing through $\bm{x}$, with normal $\bm{n}$ at time $t$. The following derivation is based on \cite{noll1955} and can easily be extended to curved paths of interaction. As usual, let $\mathcal{M}$ denote our material system. Let $\Omega \subset \real{3}$ be a domain in three-dimensional space with continuously differentiable surface $\mathcal{S}$, representing a part of the body. By this definition, each of the $N$ point masses described by $\mathcal{M}$ either belongs to $\Omega$ or to the space surrounding $\Omega$, denoted by $\Omega^{\rm{c}}$. Let $\bm{f}$ denote the force exerted by the particles in $\Omega^{\rm{c}}$ on particles in $\Omega$. We note that in continuum mechanics $\bm{f}$ is related to $\bm{t}$ by \begin{equation} \bm{f}(t) = \int_{\mathcal{S}} \bm{t}(\bm{x},\bm{n},t) \, d\mathcal{S}(\bm{x}), \label{eqn:continuum_force_1} \end{equation} where $\bm{n}(\bm{x})$ is the outer normal at $\bm{x} \in \mathcal{S}$. Using the Cauchy relation, $\bm{t}(\bm{x},\bm{n},t) = \bm{\sigma}(\bm{x},t) \bm{n}$, we obtain \begin{equation} \label{eqn:continuum_force_2} \bm{f}(t) = \int_{\mathcal{S}} \bm{\sigma} \bm{n} \, d\mathcal{S}(\bm{x}).
\end{equation} Now, note that the net force exerted by $\Omega^{\rm{c}}$ on $\Omega$ due to particle interaction, denoted by $\bm{f}_{\rm{v}}(t)$, is given by \begin{equation} \label{eqn:discrete_force_1} \bm{f}_{\rm{v}}(t) = \sum_{\substack{\alpha,\beta \\ \alpha \neq \beta}} \int_{\bm{u} \in \Omega} \int_{\bm{v} \in \Omega^{\rm{c}}} \left \langle \bm{f}_{\alpha\beta} W \mid \bm{x}_\alpha = \bm{u}, \bm{x}_\beta = \bm{v} \right \rangle \, d\bm{u} \, d\bm{v}, \end{equation} where $\bm{f}_{\alpha\beta}$ is defined in \eref{eqn:define_fij}. Since the integrand in \eref{eqn:discrete_force_1} satisfies all the conditions for the application of the lemmas in Appendix \ref{ch:noll}, we can now use a special case of Lemma \ref{lem3} by restricting to straight bonds.\footnote{Specifically, for straight bonds, we set $\bm{x}_\perp = \bm{x}$ and $\bm{Q}_{\bm{z}}\bm{\Upsilon}'_{\vnorm{\bm{z}}}(s) = -\bm{z}$.} We therefore have \begin{align} \label{eqn:discrete_force_2} &\bm{f}_{\rm{v}}(t) = \frac{1}{2} \sum_{\substack{\alpha,\beta \\ \alpha \neq \beta}} \notag \\ & \int_{\mathcal{S}} \int_{\real{3}} \int_{s= 0}^{1} \left \langle -\bm{f}_{\alpha\beta} W \mid \bm{x}_\alpha = \bm{x} + s\bm{z}, \bm{x}_\beta = \bm{x} - (1-s) \bm{z} \right \rangle (\bm{z} \cdot \bm{n}) \, ds\, d\bm{z} \, d\mathcal{S}(\bm{x}). \end{align} We now note that $\bm{f}_{\rm{v}}$ in \eref{eqn:discrete_force_2} exactly satisfies \begin{equation} \label{eqn:discrete_force_3} \bm{f}_{\rm{v}}(t) = \int_{\mathcal{S}} \bm{\sigma}_{\rm{v}} \bm{n} \, d\mathcal{S}(\bm{x}), \end{equation} where $\bm{\sigma}_{\rm{v}}$ is given by \eref{eqn:stress_force_general}. It is therefore clear that $\bm{f}_{\rm{v}}$ describes the potential part of the interaction force $\bm{f}$.
Hence, it is natural to assign a potential part of the pointwise traction vector, $\bm{t}_{\rm{v}}$ to $\bm{f}_{\rm{v}}$, given by \begin{align} \bm{t}_{\rm{v}}&(\bm{x},\bm{n};t) := \bm{\sigma}_{\rm{v}} \bm{n} \notag \\ &= \frac{1}{2} \sum_{\substack{\alpha,\beta \\ \alpha \neq \beta}} \int_{\real{3}} \int_{s= 0}^{1} \left \langle -\bm{f}_{\alpha\beta} W \mid \bm{x}_\alpha = \bm{x} + s\bm{z}, \bm{x}_\beta = \bm{x} - (1-s) \bm{z} \right \rangle (\bm{z} \cdot \bm{n}) \, ds\, d\bm{z}. \label{eqn:discrete_traction_force} \end{align} The above formula is conceptually quite simple. \emph{It gives the measure of the force per unit area of all the bonds that cross the surface, where the force is calculated with respect to a surface measure (see {\rm \eref{eqn:discrete_force_2}})}. Using this viewpoint, we motivate the definitions for the macroscopic traction vector and the stress tensor, when we incorporate spatial averaging in the next section. It is now natural to assign the kinetic contribution to the force across the surface to the kinetic part of the pointwise stress tensor. Subtracting \eref{eqn:discrete_force_3} from \eref{eqn:continuum_force_2}, we obtain the kinetic contribution to the force across a surface, \[ \bm{f}_{\rm{k}}(t) := \int_{\mathcal{S}} (\bm{\sigma} - \bm{\sigma}_{\rm{v}}) \bm{n} \, d\mathcal{S}(\bm{x}) = \int_{\mathcal{S}} \bm{\sigma}_{\rm{k}} \bm{n} \, d\mathcal{S}(\bm{x}). \] Therefore the kinetic contribution to the pointwise traction vector $\bm{t}_{\rm{k}}$ is given by \begin{align} \bm{t}_{\rm{k}}(\bm{x},\bm{n};t) &:= \bm{\sigma}_{\rm{k}} \bm{n} \notag \\ &= -\sum_\alpha m_\alpha \left \langle \bm{v}_\alpha^{\rm{rel}} (\bm{v}_{\alpha}^{\rm{rel}} \cdot \bm{n}) W \mid \bm{x}_\alpha = \bm{x} \right \rangle. \label{eqn:discrete_traction_kinetic} \end{align} Finally, we note that the definitions of $\bm{t}_{\rm{v}}$ and $\bm{t}_{\rm{k}}$ are functions of $\bm{x}$ and $\bm{n}$ alone. 
Hence, this result is related to the work of Fosdick and Virga \cite{fosdick1989}, who give a variational proof for the stress theorem of Cauchy in the continuum version. In that work the traction vector is allowed to depend on the unit normal and the surface gradient and is shown to be independent of the surface gradient. \medskip The fields defined and derived in this section are pointwise quantities. In the next section, expressions for macroscopic fields are obtained by spatially averaging the pointwise fields over an appropriate macroscopic domain. \section{Spatial averaging} \label{ch:spatial} In the previous section, the Irving--Kirkwood--Noll procedure was used to construct pointwise fields from the underlying discrete microscopic system using the principles of classical statistical mechanics. Although the resulting fields resemble the continuum mechanics fields and satisfy the continuum conservation equations, they are not macroscopic continuum fields. For example, the pointwise stress field in \eref{eqn:stress_force_general}, at sufficiently low temperature, will be highly non-uniform, exhibiting a criss-cross pattern with higher stresses along bond directions, even when macroscopically the material is nominally under uniform or even zero stress. To measure the fields derived in the previous section in an experiment, one needs a probe which can extract data only from a single point of interest in space. Since this is not possible practically, there is no way we can correlate the experimental data with theoretical predictions. Therefore a true macroscopic quantity is by necessity an average over some spatial region surrounding the continuum point where it is nominally defined.\footnote{We do not include time averaging, because this is indirectly performed due to the presence of $W$. 
The reasoning for this comes from the \emph{frequentist's} interpretation of probability, wherein the probability of a state is equal to the fraction of the total time spent by the system in that state.} Thus, if $f(\bm{x},t)$ is an Irving--Kirkwood--Noll pointwise field, such as density or stress, the corresponding macroscopic field ${f}_w(\bm{x},t)$ is given by \begin{equation} \label{eqn:define_se_0} f_w(\bm{x},t) = \int_{\real{3}} w(\bm{y} - \bm{x}) f(\bm{y},t) \, d\bm{y}, \end{equation} where $w(\bm{r})$ is a weighting function representing the properties of the probe and its lengthscale. The important thing to note is that due to the linearity of the phase averaging in the Irving--Kirkwood--Noll procedure, the averaged macroscopic function ${f}_w(\bm{x},t)$ satisfies the same balance equations as does the pointwise measure $f(\bm{x},t)$. \subsubsection*{Weighting function} The weighting function $w(\bm{r})$ is an $\mathbb{R}^+$-valued function with compact support so that $w(\bm{r})=0$ for $\vnorm{\bm{r}} > \lambda$, where $\lambda$ is a microstructural lengthscale. The weighting function has units of $\rm{volume}^{-1}$ and must satisfy the normalization condition \begin{equation} \label{eqn:normal} \int_{\real{3}} w(\bm{r}) d\bm{r} = 1. \end{equation} This condition ensures that the correct macroscopic stress is obtained when the pointwise stress is uniform. For a spherically-symmetric distribution, $w(\bm{r}) = \hat{w}(r)$, where $r=\vnorm{\bm{r}}$. The normalization condition in this case is \[ \int_{0}^{\infty} \hat{w}(r)4 \pi r^2 dr = 1. \] The simplest choice for $\hat{w}(r)$ is a spherically-symmetric uniform distribution over a specified radius $r_w$, given by \begin{equation} \label{eqn:constant_w} \hat{w}(r) = \left \{ \begin{array}{ll} 1/V_w & \mbox{if $r \leq r_w$},\\ 0 & \mbox{otherwise},\end{array} \right. \end{equation} where $V_w = \frac{4}{3}\pi r_{w}^{3}$ is the volume of the sphere. This function is discontinuous at $r=r_w$. 
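The normalization condition \eref{eqn:normal} can be checked numerically for the uniform weighting \eref{eqn:constant_w}; the following is a minimal sketch using the radial form of the condition (the grid parameters are arbitrary):

```python
import math

def w_uniform(r, r_w=1.0):
    # Spherically-symmetric uniform weighting of radius r_w, eq. (constant_w).
    V_w = 4.0 / 3.0 * math.pi * r_w ** 3
    return 1.0 / V_w if r <= r_w else 0.0

# Midpoint-rule evaluation of  int_0^infty  w(r) 4 pi r^2 dr  (truncated at r = 2).
n, r_max = 100_000, 2.0
dr = r_max / n
total = sum(w_uniform((i + 0.5) * dr) * 4.0 * math.pi * ((i + 0.5) * dr) ** 2 * dr
            for i in range(n))
print(round(total, 6))  # -> 1.0
```

Note that the midpoint grid never lands on the jump in $\hat{w}$ at $r=r_w$, so the discontinuity noted above does not affect the quadrature.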
If this is a concern, a ``mollifying'' function that smoothly takes $w(r)$ to zero at $r_w$ over some desired range can be added \cite{murdoch2007}.\footnote{An example of a mollifying function is given later in equation \eref{eqn:weight_molly}.} Another possible choice for $\hat{w}(r)$ is a Gaussian function \cite{hardy1982} \begin{equation} \label{eqn:gaussian_w} \hat{w}(r) = \pi^{-\frac{3}{2}} r_{w}^{-3} \exp \left [ -r^2/r_{w}^{2} \right ]. \end{equation} This function does not have compact support. However, it decays rapidly with distance so that a numerical cutoff can be imposed where its value drops below a specified tolerance. Another possibility is a quartic spline used in meshless method applications (where it is called a \emph{kernel function} \cite{belytchko1996}), \begin{equation} \hat{w}(r) = \left \{ \begin{array}{ll} \frac{105}{16 \pi r_{w}^{3}}(1+3\frac{r}{r_w})(1- \frac{r}{r_w})^3 & \mbox{if $r \leq r_w$},\\ 0 & \mbox{otherwise}.\end{array} \right. \label{eqn:qspline_w} \end{equation} This spline has the advantage that it goes smoothly to zero at $r=r_w$, i.e., $\hat{w}(r_w)=0$, $\hat{w}'(r_w)=0$, and $\hat{w}''(r_w)=0$. \fref{fig:weight} shows the plots of the three weighting functions given above. \begin{figure} \centering \includegraphics[totalheight=0.2\textheight]{fig4_1} \caption{Three weighting functions for spatial averaging: uniform weighting (solid line) in \eref{eqn:constant_w}; Gaussian weighting (dashed line) in \eref{eqn:gaussian_w}; quartic spline weighting (dash-dot line) in \eref{eqn:qspline_w}. 
Note that the areas under the curves are not equal because the normalization in \eref{eqn:normal} is according to volume.} \label{fig:weight} \end{figure} \subsection{Spatial averaging and macroscopic fields} \label{sec:spatial_average} Continuum fields such as density and momentum density fields are defined using \eref{eqn:define_se_0} as the ensemble average via the probability density function $W$, followed by a spatial average via the weight function $w$ as follows: \begin{align} \rho_w(\bm{x},t) &:= \sum_{\alpha} m_\alpha \int_{\real{3}}w(\bm{y}-\bm{x})\langle W\mid \bm{x}_\alpha=\bm{y} \rangle \, d\bm{y}, \label{eqn:define_se_1} \\ \bm{p}_w(\bm{x},t) &:= \sum_{\alpha} m_\alpha \int_{\real{3}}w(\bm{y}-\bm{x}) \langle W \bm{v}_\alpha \mid \bm{x}_\alpha = \bm{y} \rangle \, d\bm{y}. \label{eqn:define_se_2} \end{align} It is straightforward to show that using definitions, \eref{eqn:define_se_1} and \eref{eqn:define_se_2}, the macroscopic version of the generalized pointwise stress tensor given by \eref{eqn:stress_force_schofield} divides into potential and kinetic parts as, \begin{align} \bm{\sigma}_{w,\rm{v}}(\bm{x},t) &= \frac{1}{2} \sum_{\substack{\alpha,\beta \\ \alpha \neq \beta}} \int_{\real{3}}{w(\bm{y} - \bm{x})} \int_{\real{3}} \int_{s=0}^{1} \notag \\ &\langle \bm{f}_{\alpha\beta} W \mid \bm{x}_\alpha=\bm{y}_{\perp} + s \bm{z},\bm{x}_\beta = \bm{y}_{\perp} - (1-s)\bm{z} \rangle \otimes \bm{Q}_{\bm{z}} \bm{\Upsilon}'_{\vnorm{\bm{z}}}(s) \, ds \, d\bm{z} \, d\bm{y} \label{eqn:stress_force_w}, \end{align} where $\bm{f}_{\alpha\beta}$ is defined in \eref{eqn:define_fij}, $\bm{y}_{\perp} = \bm{y}-s\bm{z} - \bm{Q}_{\bm{z}} \bm{\Upsilon}_{\vnorm{\bm{z}}}(s)$, $\bm{Q}_{\bm{z}} \in \bm{SO}(3)$, and \begin{equation} \bm{\sigma}_{w,\rm{k}}(\bm{x},t) = -\sum_{\alpha} \int_{\real{3}} w(\bm{y} - \bm{x}) m_\alpha \langle (\bm{v}_{\alpha}^{\rm{rel}} \otimes \bm{v}_{\alpha}^{\rm{rel}}) W \mid \bm{x}_\alpha = \bm{y} \rangle \, d\bm{y} \label{eqn:stress_kinetic_w}. 
\end{equation} We now intend to express the potential part of stress in a more convenient form. This is done by two consecutive changes of variables. Under the assumption that $\bm{Q}_{\bm{z}}$ and $\bm{\Upsilon}_{\vnorm{\bm{z}}}$ are differentiable with respect to $\bm{z}$ and $\vnorm{\bm{z}}$, respectively, the Jacobian of the transformation $(s,\bm{y},\bm{z}) \mapsto (s,\bm{y}_{\perp},\bm{z})$ is unity. Therefore, \begin{align} \bm{\sigma}_{w,\rm{v}}(\bm{x},t) &= \frac{1}{2} \sum_{\substack{\alpha,\beta \\ \alpha \neq \beta}} \int_{\real{3}}{w(\bm{y} - \bm{x})} \int_{\real{3}} \int_{s=0}^{1} \notag \\ &\langle \bm{f}_{\alpha\beta} W \mid \bm{x}_\alpha=\bm{y}_{\perp}+s\bm{z},\bm{x}_\beta = \bm{y}_{\perp} - (1-s)\bm{z} \rangle \otimes \bm{Q}_{\bm{z}} \bm{\Upsilon}'_{\vnorm{\bm{z}}} \, ds\, d\bm{z} \, d\bm{y}_{\perp}, \label{eqn:stress_force_w_1} \end{align} where $\bm{y} = \bm{y}(s,\bm{y}_{\perp},\bm{z})$. A second change of variables is introduced as follows \begin{equation} \bm{y}_{\perp} + s\bm{z} = \bm{u},\qquad \bm{y}_{\perp} - (1-s)\bm{z} = \bm{v}, \label{eqn:var_change_1} \end{equation} which implies, \begin{equation} \bm{z}=\bm{u}-\bm{v},\qquad \bm{y}_{\perp}=(1-s)\bm{u} + s\bm{v}. \label{eqn:var_change_2} \end{equation} The Jacobian of the transformation is \begin{equation} J = \det \left [ \begin{array} {cc} \nabla_{\bm{u}} \bm{z} & \nabla_{\bm{v}} \bm{z} \\ \nabla_{\bm{u}} \bm{y}_{\perp} & \nabla_{\bm{v}} \bm{y}_{\perp} \end{array} \right ] = \det \left [ \begin{array}{cc} \bm{I} & -\bm{I} \\ (1-s) \bm{I} & s\bm{I} \end{array} \right] = 1. 
\label{eqn:jacobian} \end{equation} Using \eref{eqn:var_change_1}, \eref{eqn:var_change_2} and \eref{eqn:jacobian} to rewrite \eref{eqn:stress_force_w_1}, we obtain \begin{equation} \bm{\sigma}_{w,\rm{v}}(\bm{x},t) = \frac{1}{2} \sum_{\substack{\alpha,\beta \\ \alpha \neq \beta}} \int_{\real{3} \times \real{3}} \langle -\bm{f}_{\alpha\beta} W \mid \bm{x}_\alpha=\bm{u},\bm{x}_\beta=\bm{v} \rangle \otimes \bm{\mathfrak{b}}(\bm{x};\bm{u},\bm{v}) \, d\bm{u} \, d\bm{v}, \label{eqn:stress_force_w_hardy} \end{equation} where \begin{equation} \label{eqn:bond_vector} \bm{\mathfrak{b}}(\bm{x};\bm{u},\bm{v}) := -\int_{s=0}^{1} w(\hat{\bm{y}}-\bm{x}) \bm{Q}_{\bm{u}-\bm{v}} \bm{\Upsilon}'_{\vnorm{\bm{u} - \bm{v}}} \, ds \end{equation} is called the \emph{bond vector}, with \begin{equation} \notag \hat{\bm{y}}(s,\bm{u},\bm{v}) = \bm{y}(s, \bm{y}_{\perp}(s,\bm{u},\bm{v}),\bm{z}(\bm{u},\bm{v})). \end{equation} For the special case of straight bonds, we have \[ \hat{\bm{y}} = (1-s) \bm{u} + s \bm{v} \quad\text{and}\quad \bm{Q}_{\bm{u}-\bm{v}} \bm{\Upsilon}'_{\vnorm{\bm{u}-\bm{v}}}(s) = -(\bm{u}-\bm{v}). \] Therefore the bond vector simplifies to \begin{align} \bm{\mathfrak{b}}(\bm{x};\bm{u},\bm{v}) &=(\bm{u} - \bm{v}) \int_{s=0}^{1} w((1-s) \bm{u} + s\bm{v} - \bm{x}) \, ds \notag \\ &= (\bm{u} - \bm{v}) b(\bm{x};\bm{u},\bm{v}), \label{eqn:bond_function} \end{align} where $b(\bm{x};\bm{u},\bm{v})$ is commonly referred to as the \emph{bond function}. The geometrical significance of the bond function is explained in \fref{fig:bond_function}. \begin{figure} \centering \includegraphics[totalheight=0.2\textheight]{fig4_2} \caption{The bond function $b(\bm{x};\bm{u},\bm{v})$ is the integral of the weighting function centered at $\bm{x}$ along the line connecting points $\bm{u}$ and $\bm{v}$. The graph shows the result for a quartic spline weighting function. 
The bond function is the area under the curve.} \label{fig:bond_function} \end{figure} For the special case of straight bonds, equation \eref{eqn:stress_force_w_hardy} simplifies to \begin{equation} \label{eqn:stress_force_w_hardy_straight} \bm{\sigma}_{w,\rm{v}}(\bm{x},t) = \frac{1}{2} \sum_{\substack{\alpha,\beta \\ \alpha \neq \beta}} \int_{\real{3} \times \real{3}} \langle -\bm{f}_{\alpha\beta} W \mid \bm{x}_\alpha=\bm{u},\bm{x}_\beta=\bm{v} \rangle \otimes (\bm{u}-\bm{v}) b(\bm{x};\bm{u},\bm{v}) \, d\bm{u} \, d\bm{v}. \end{equation} The expressions for the potential and kinetic parts of the spatially-averaged stress tensor in equations \eref{eqn:stress_kinetic_w} and \eref{eqn:stress_force_w_hardy_straight} are our main result and constitute the general definitions for the macroscopic stress computed for a discrete system. It will be shown in \sref{ch:compare} that these relations reduce to the Hardy stress tensor \cite{hardy1982} under suitable approximations. The issue of the uniqueness of the stress tensor (in the sense that any divergence-free field can be added to it) is deferred to \sref{sec:unique_macro_stress}. \subsection{Comparison with the Murdoch--Hardy procedure} \label{sec:murdoch_proc} An alternative procedure of defining continuum fields to the one described above, due to Murdoch \cite{murdoch1982,murdoch1993,murdoch1994} and Hardy \cite{hardy1982}, only involves spatial averaging. We refer to this approach as the \emph{Murdoch--Hardy procedure}. Under the Murdoch--Hardy procedure, continuum fields are defined as direct spatial averages of microscopic variables without incorporating statistical mechanics ideas. Therefore, the Murdoch--Hardy procedure is purely deterministic in nature. 
For example, the density and momentum density fields at a particular instant of time, corresponding to a given weighting function $w$, are defined as \begin{subequations} \label{eqn:define_s} \begin{align} \tilde{\rho}_w(\bm{x},t) &:= \sum_{\alpha} m_\alpha w( \bm{x}_\alpha(t) - \bm{x}), \label{eqn:define_s_1} \\ \tilde{\bm{p}}_w(\bm{x},t) &:= \sum_{\alpha} m_\alpha \bm{v}_\alpha(t) w( \bm{x}_\alpha(t) - \bm{x} ), \label{eqn:define_s_2} \end{align} \end{subequations} respectively, where $\bm{x}_\alpha$ and $\bm{v}_\alpha$ are deterministic quantities. We denote spatially-averaged variables obtained from the Murdoch--Hardy procedure with a superposed tilde to distinguish them from quantities obtained in \sref{sec:spatial_average}. Equation \eref{eqn:define_s} is used to ``smear'' a discrete system to form a continuum. The reasoning for abandoning statistical mechanics is the lack of knowledge of the ensemble of the system as explained by Murdoch and Bedeaux in \cite{murdoch1993}: \begin{quotation} Physical interpretations of any given ensemble average clearly depends on the definition of the ensemble $\dots$ for example, if a container is filled to a given level with water and then poured onto a surface, the lack of precision with which the pouring is effected may result in many different macroscopic flows. Here no single description is available within deterministic continuum mechanics: in this case the ensemble (defined in terms of the water molecules and limited knowledge of how the pouring takes place) relations involve averages associated with all possible flows. Clearly relations involving ensemble averages are associated with a much greater variety of behavior than is describable in terms of deterministic continuum mechanics. \end{quotation} We share the same concern regarding the ambiguity in the definition of an ensemble. 
For example, in an experiment where an austenite-martensite phase transformation occurs, the resulting micro-structure consists of a complex spatial configuration of martensitic variants, and this depends largely on the microscopic details of the system, such as cracks, lattice defects, etc. Therefore, in this case, macroscopic variables cannot completely describe the ensemble of interest. To avoid this difficulty, Murdoch proposes a time average in place of ensemble average. Nevertheless, it should be noted that from classical statistical mechanics, the ensemble of interest and its corresponding distribution exists \emph{in principle}. Therefore the framework described in \sref{sec:spatial_average} is a correct framework in which to phrase the problem. A practical calculation can then be performed, for example, by replacing the ensemble averages with time averages in a molecular dynamics calculation (see \sref{ch:compare}). We stress the importance of writing a continuum field variable as an ensemble average followed by spatial average, rather than a spatial average followed by a time average, as is done in the Murdoch--Hardy procedure, because it helps to give a unified picture of all the previous definitions for continuum fields and stress in particular. This is discussed in the next section. It is interesting to note that by relaxing the connection with statistical mechanics, the Murdoch--Hardy procedure allows for a much wider class of definitions for the stress tensor \cite{murdoch2007} in addition to the non-uniqueness characterized so far, due to the presence of multiple extensions for the potential energy and allowing non-straight bonds. In this section we intend to systematize this procedure. The source of non-uniqueness resulting from multiple definitions of the stress tensor is studied, thus helping us to identify a much larger class of possible definitions. 
In this new systematic approach, the steps involved in the Murdoch--Hardy procedure are as follows: \begin{enumerate} \item Develop a continuum system by smearing out the discrete system using \eref{eqn:define_s}. \item Introduce a \emph{non-local} constitutive law for the continuum that is consistent with the discrete version of force balance given later in \eref{eqn:balance}. \item For each constitutive law, define a stress tensor, which satisfies the equation of motion for the continuum. \end{enumerate} To understand the above three steps, we explore the Murdoch--Hardy procedure in more detail. The continuity equation is satisfied in a trivial way \cite{murdoch2003}. We now look at the equation of motion. \subsubsection*{Equation of motion} The motion of particle $\alpha$ is governed by Newton's second law, \begin{equation} \label{eqn:balance} \sum_{\substack{\beta \\ \beta \ne \alpha}} \bm{f}_{\alpha\beta}^{\rm d}(t) + \bm{b}_\alpha^{\rm d}(t) = m_\alpha \dot{\bm{v}}_\alpha(t), \end{equation} where $\bm{f}_{\alpha\beta}^{\rm d}(t) := \bm{f}_{\alpha\beta}(\bm{u}(t))$, with $\bm{f}_{\alpha\beta}(\bm{u})$ denoting the terms in the central-force decomposition obtained from a multi-body potential with an extension (see \sref{sec:s_motion}), and $\bm{b}_{\alpha}^{\rm d}(t)$ is defined as \begin{equation} \notag \bm{b}_\alpha^{\rm d}(t) := -\nabla_{\bm{x}_\alpha} \mathcal{V}_{\rm{ext}}(\bm{x}_1(t),\dots,\bm{x}_N(t)). \end{equation} The superscript ``$\rm d$'' in \eref{eqn:balance} and the above equation is used to stress the fact that these quantities are deterministic in nature. Equation \eref{eqn:balance} is a force balance equation for the discrete system. We now design an analogous force balance equation for the smeared continuum defined by \eref{eqn:define_s}, such that \eref{eqn:balance} always holds. 
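As a basic sanity check on \eref{eqn:balance}: since the central-force terms satisfy $\bm{f}_{\alpha\beta} = -\bm{f}_{\beta\alpha}$, an isolated system ($\bm{b}_\alpha = \bm{0}$) conserves its total momentum $\sum_\alpha m_\alpha \bm{v}_\alpha$. A minimal sketch, assuming a hypothetical one-dimensional harmonic pair force and velocity-Verlet integration (all parameter values are illustrative):

```python
# Two particles in 1D with a hypothetical harmonic bond (illustrative parameters);
# with b_alpha = 0, eq. (balance) reduces to m_alpha dv_alpha/dt = f_alpha.
m1, m2, k, L = 1.0, 2.0, 5.0, 1.0
x, v = [0.0, 1.3], [0.1, -0.2]     # initially stretched bond
dt, steps = 1e-3, 1000

def forces(x):
    f1 = k * ((x[1] - x[0]) - L)   # force on particle 1 from the bond
    return [f1, -f1]               # Newton's third law: f_21 = -f_12

p0 = m1 * v[0] + m2 * v[1]         # initial total momentum
f = forces(x)
for _ in range(steps):             # velocity-Verlet time stepping
    v = [v[0] + 0.5 * dt * f[0] / m1, v[1] + 0.5 * dt * f[1] / m2]
    x = [x[0] + dt * v[0], x[1] + dt * v[1]]
    f = forces(x)
    v = [v[0] + 0.5 * dt * f[0] / m1, v[1] + 0.5 * dt * f[1] / m2]

p = m1 * v[0] + m2 * v[1]
print(abs(p - p0) < 1e-9)  # -> True: antisymmetric pair forces conserve momentum
```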
For the sake of simplicity in notation, from here onwards we use $\bm{f}_{\alpha\beta}$ to denote both $\bm{f}_{\alpha\beta}(\bm{u})$ and $\bm{f}_{\alpha\beta}^{\rm d}(t)$, whenever it is clear from the context. The same goes for the use of $\bm{b}_\alpha(t)$ for $\bm{b}_{\alpha}^{\rm d}(t)$. \subsubsection*{Force balance for the smeared continuum} Multiplying \eref{eqn:balance} by $w(\bm{x}_\alpha-\bm{x})$ and summing over all particles, we have \begin{equation} \label{eqn:weighted_balance_1} \tilde{\bm{f}}_w + \tilde{\bm{b}}_w = \sum_{\alpha} m_\alpha \dot{\bm{v}}_\alpha w(\bm{x}_\alpha - \bm{x}), \end{equation} where \begin{align} \label{eqn:f_w_1} \tilde{\bm{f}}_w(\bm{x},t) &:= \sum_{\substack{\alpha,\beta \\ \alpha \neq \beta}} \bm{f}_{\alpha\beta}(t)w(\bm{x}_\alpha(t) - \bm{x}), \\ \tilde{\bm{b}}_w(\bm{x},t) &:= \sum_{\beta} \bm{b}_\beta(t) w(\bm{x}_\beta(t) - \bm{x}). \label{eqn:b_w} \end{align} To arrive at a form similar to the equation of motion of continuum mechanics given in \eref{eqn:motion}, equation \eref{eqn:weighted_balance_1} is rewritten as \begin{align} \tilde{\bm{f}}_w + \tilde{\bm{b}}_w &= \frac{\partial}{\partial t} \sum_{\alpha} m_\alpha w(\bm{x}_\alpha-\bm{x}) \bm{v}_\alpha - \sum_{\alpha} m_\alpha \bm{v}_\alpha (\nabla w(\bm{x}_\alpha - \bm{x}) \cdot \bm{v}_\alpha) \notag \\ &= \frac{\partial \tilde{\bm{p}}_w}{\partial t} + \operatorname{div}_{\bm{x}} \sum_{\alpha} m_\alpha w(\bm{x}_\alpha - \bm{x}) \bm{v}_\alpha \otimes \bm{v}_\alpha. \label{eqn:weighted_balance_*} \end{align} Similar to \eref{eqn:define_velocity}, we define the continuum velocity as \begin{equation} \label{eqn:cont_velocity} \tilde{\bm{v}}_w(\bm{x},t) := \frac{\tilde{\bm{p}}_w(\bm{x},t)}{\tilde{\rho}_w(\bm{x},t)}, \end{equation} and the relative velocity of a particle with respect to the continuum velocity as \begin{equation} \label{eqn:rel_velocity} \tilde{\bm{v}}_{\alpha}^{\rm{rel}}(\bm{x},t) := \bm{v}_{\alpha}(t) - \tilde{\bm{v}}_w(\bm{x},t). 
\end{equation} Using \eref{eqn:define_s} and \eref{eqn:rel_velocity}, we obtain \begin{align} \sum_{\alpha} m_\alpha \tilde{\bm{v}}_{\alpha}^{\rm{rel}} w(\bm{x}_{\alpha}(t) - \bm{x}) &= \tilde{\bm{p}}_w(\bm{x},t) - \tilde{\rho}_w(\bm{x},t) \tilde{\bm{v}}_w(\bm{x},t) \notag \\ &= \bm{0}, \notag \end{align} the last equality being true by the definition \eref{eqn:cont_velocity}. From \eref{eqn:rel_velocity} and the above equation, it follows that \begin{equation} \notag \sum_{\alpha} m_\alpha w(\bm{x}_\alpha - \bm{x}) \bm{v}_{\alpha} \otimes \bm{v}_\alpha = \sum_{\alpha} m_\alpha w(\bm{x}_\alpha - \bm{x}) \tilde{\bm{v}}_{\alpha}^{\rm{rel}} \otimes \tilde{\bm{v}}_\alpha^{\rm{rel}} + \tilde{\rho}_w \tilde{\bm{v}}_w \otimes \tilde{\bm{v}}_w. \end{equation} Substituting this into \eref{eqn:weighted_balance_*} and rearranging, we have \[ \tilde{\bm{f}}_w - \operatorname{div}_{\bm{x}} \sum_{\alpha} m_\alpha (\tilde{\bm{v}}_\alpha^{\rm{rel}} \otimes \tilde{\bm{v}}_\alpha^{\rm{rel}}) w(\bm{x}_\alpha - \bm{x}) + \tilde{\bm{b}}_w = \frac{\partial \tilde{\bm{p}}_w}{\partial t} + \operatorname{div}_{\bm{x}} (\tilde{\rho}_w \tilde{\bm{v}}_w \otimes \tilde{\bm{v}}_w). \] Comparing the above equation with the equation of motion, \eref{eqn:motion}, we have \begin{equation} \label{eqn:weighted_balance_**} \operatorname{div}_{\bm{x}}\tilde{\stress}_{w}(\bm{x},t) = \tilde{\bm{f}}_w(\bm{x},t) - \operatorname{div}_{\bm{x}} \sum_{\alpha} m_\alpha (\tilde{\bm{v}}_\alpha^{\rm{rel}} \otimes \tilde{\bm{v}}_\alpha^{\rm{rel}}) w(\bm{x}_\alpha - \bm{x}), \end{equation} where $\tilde{\stress}_w$ is the stress tensor corresponding to the weighting function $w$. 
From \eref{eqn:weighted_balance_**} it is clear that the kinetic part and the potential part of the stress tensor, $\tilde{\stress}_{w,\rm{k}}$ and $\tilde{\stress}_{w,\rm{v}}$, respectively, are given by \begin{subequations} \label{eqn:pde_w_stress} \begin{align} \tilde{\stress}_{w,\rm{k}}(\bm{x},t) &= - \sum_{\alpha} m_\alpha (\tilde{\bm{v}}_\alpha^{\rm{rel}} \otimes \tilde{\bm{v}}_\alpha^{\rm{rel}}) w(\bm{x}_\alpha - \bm{x}), \label{eqn:w_stress_kinetic} \\ \operatorname{div}_{\bm{x}}\tilde{\stress}_{w,\rm{v}}(\bm{x},t) &= \tilde{\bm{f}}_w(\bm{x},t) \label{eqn:pde_w_stress_force}. \end{align} \end{subequations} Any solution to \eref{eqn:pde_w_stress_force} is a valid candidate for the definition of $\tilde{\stress}_{w,\rm{v}}$. Murdoch \cite{murdoch2007} proposes several possible candidates, and highlights the possibility of having multiple definitions. To understand the connection between the different possible definitions, we look back at \eref{eqn:weighted_balance_1} and \eref{eqn:f_w_1}. Equation \eref{eqn:weighted_balance_1} is a force balance equation for any ``continuum particle'' at $\bm{x}$, and $\tilde{\bm{f}}_w$, defined in \eref{eqn:f_w_1}, is the force per unit volume acting on it. It is not immediately clear from \eref{eqn:f_w_1} how two continuum particles at positions $\bm{x}$ and $\bm{y}$ interact with each other. This interaction can be given by a non-local constitutive law. The main idea is to recast \eref{eqn:f_w_1} as \begin{equation} \label{eqn:interaction} \tilde{\bm{f}}_w(\bm{x},t) = \int_{\real{3}} \bm{g}(\bm{x},\bm{y},t) \, d\bm{y}, \end{equation} for some $\bm{g}(\bm{x},\bm{y},t)$, which we call the \emph{generator of the non-local constitutive law}. This function describes the interaction between the continuum particles at $\bm{x}$ and $\bm{y}$. To satisfy Newton's third law, we also need $\bm{g}$ to be anti-symmetric with respect to its arguments $\bm{x}$ and $\bm{y}$. 
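The antisymmetry requirement is precisely what guarantees that the internal interactions transmit no net force: for any $\bm{g}$ with $\bm{g}(\bm{x},\bm{y},t) = -\bm{g}(\bm{y},\bm{x},t)$, the double integral of $\bm{g}$ over a symmetric domain vanishes. A minimal one-dimensional numeric sketch with a hypothetical antisymmetric generator (the functional form is illustrative only):

```python
import math

# Hypothetical antisymmetric generator in 1D: g(x, y) = -g(y, x) by construction.
def g(x, y):
    return (x - y) * math.exp(-(x - y) ** 2)

# Midpoint-rule double integral of g over the symmetric box [-5, 5]^2.
n, a = 400, 5.0
h = 2.0 * a / n
pts = [-a + (i + 0.5) * h for i in range(n)]
total = sum(g(x, y) for x in pts for y in pts) * h * h

print(abs(total) < 1e-8)  # -> True: the contributions of (x, y) and (y, x) cancel pairwise
```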
Unfortunately the representation given in \eref{eqn:interaction} is not unique and, since every choice of $\bm{g}$ leads to a different stress definition, this is one of the sources of non-uniqueness in the definition for the stress tensor in the Murdoch--Hardy procedure. We describe two different constitutive laws, which lead to the Hardy stress and the doubly-averaged stress (DA\ stress)\footnote{Murdoch \cite{murdoch2007} refers to this stress as ``Noll's choice''. To avoid confusion with the stress derived through the Irving--Kirkwood--Noll procedure in \sref{ch:phase}, we name it the ``DA\ stress''.} \cite{murdoch1994}. \medskip For the case of Hardy stress, the generator $\bm{g}^{\rm{H}}$ is given by the equation \begin{equation} \label{eqn:g_1} \bm{g}^{\rm{H}}(\bm{x},\bm{y},t) = \sum_{\substack{\alpha,\beta \\ \alpha \neq \beta}} \bm{f}_{\alpha\beta} w(\bm{x}_\alpha - \bm{x}) \delta(\bm{x}_\beta - \bm{x}_\alpha + \bm{x} - \bm{y}), \end{equation} where $\delta$ denotes the Dirac delta distribution. \medskip The generator $\bm{g}^{\rm{D}}$ for the DA\ stress is given by \begin{equation} \label{eqn:g_2} \bm{g}^{\rm{D}}(\bm{x},\bm{y},t) = \sum_{\substack{\alpha,\beta \\ \alpha \neq \beta}} \bm{f}_{\alpha\beta} w(\bm{x}_\alpha - \bm{x}) w(\bm{x}_\beta - \bm{y}). 
\end{equation} \begin{figure} \begin{center} \subfigure[]{\label{fig:hardy}\includegraphics[totalheight=0.2\textheight]{fig4_3a}} \subfigure[]{\label{fig:murdoch}\includegraphics[totalheight=0.2\textheight]{fig4_3b}} \subfigure[]{\label{fig:admal}\includegraphics[totalheight=0.2\textheight]{fig4_3c}} \end{center} \caption{A continuum particle $\bm{x}$ interacts with: (a) only that continuum particle at $\bm{y}$, which is identically oriented to $\bm{x}_\beta$ as $\bm{x}$ is oriented to $\bm{x}_\alpha$, when the interaction is given by $\bm{g}^{\rm{H}}$; (b) any continuum particle in the shaded region, when the interaction is given by $\bm{g}^{\rm{D}}$; (c) any continuum particle on the shaded line, when the interaction is given by $\bm{g}^{\rm{HD}}$.} \label{fig:generators} \end{figure} \smallskip \fref{fig:generators} shows the interaction between two continuum particles, with positions $\bm{x}$ and $\bm{y}$, that are in a neighborhood of two interacting particles $\alpha$ and $\beta$ respectively and not in a neighborhood of any other particle in the system. In this setup, it is clear from the generator for the Hardy stress, given in \eref{eqn:g_1}, that two continuum particles at $\bm{x}$ and $\bm{y}$ interact only when $\bm{y}-\bm{x}_\beta = \bm{x} - \bm{x}_\alpha$, as shown in \fref{fig:hardy}. On the other hand, there is no such restriction on the generator for the DA\ stress, described by \eref{eqn:g_2} (see \fref{fig:murdoch}).\footnote{Note that for the DA\ stress, the force between two continuum particles at $\bm{x}$ and $\bm{y}$ is \emph{not} parallel to $\bm{x} - \bm{y}$ in general. This is not a violation of the strong law of action and reaction, because the strong law only applies to discrete systems. 
It has been used in this derivation by requiring that $\bm{f}_{\alpha\beta} = -\bm{f}_{\beta\alpha}$ and $\bm{f}_{\alpha\beta} \times (\bm{x}_\alpha - \bm{x}_\beta) = \bm{0}$.} Although at this point there is no systematic way of suggesting additional possible generators, we can suggest a third generator, $\bm{g}^{\rm{HD}}$, which has properties that lie in between $\bm{g}^{\rm{H}}$ and $\bm{g}^{\rm{D}}$. As shown in \fref{fig:admal}, when interaction is governed by $\bm{g}^{\rm{HD}}$, a continuum particle $\bm{x}$ interacts with $\bm{y}$ only when $\bm{y}$ lies on the line passing through $\bm{x}$ and parallel to $\bm{x}_\alpha - \bm{x}_\beta$. In all three cases, the interaction force is always directed along the vector $\bm{x}_\alpha - \bm{x}_\beta$. Therefore, by \eref{eqn:interaction}, we have three different integral representations for $\tilde{\bm{f}}_w$ with generators $\bm{g}^{\rm{D}}$, $\bm{g}^{\rm{H}}$, $\bm{g}^{\rm{HD}}$. Now, in each of the integral representations of $\tilde{\bm{f}}_w$ given by \eref{eqn:g_1} and \eref{eqn:g_2}, the integrand satisfies all the necessary conditions for the application of Lemma \ref{lem1} or Lemma \ref{lem2} in Appendix \ref{ch:noll}.\footnote{The integrand should be continuously differentiable for Noll's lemma to be applicable. Although $\bm{g}^{\rm{H}}$ is not continuously differentiable due to the presence of the Dirac delta distribution, this does not hinder us from applying the lemma since we can replace the Dirac delta distribution by an appropriate infinitely differentiable delta sequence and take a limit. 
See Appendix \ref{sec:hardy_limit} for a rigorous derivation of this.\label{fn:mollifier}} For instance, using Lemma \ref{lem1} we obtain an expression for the potential part of the stress tensor, given by \begin{equation} \label{eqn:w_stress_force} \tilde{\stress}_{w,\rm{v}}(\bm{x},t) = -\frac{1}{2} \int_{\real{3}} \left [ \int_{s= 0}^{1} \bm{g}(\bm{x}+s\bm{z}, \bm{x}-(1-s)\bm{z}, t) \, ds\right] \otimes \bm{z} \, d\bm{z}. \end{equation} Substituting \eref{eqn:g_1} into \eref{eqn:w_stress_force}, we have the potential part of the Hardy stress: \begin{align} \tilde{\stress}_{w,\rm{v}}^{\rm{H}} &= -\frac{1}{2} \sum_{\substack{\alpha,\beta \\ \alpha \neq \beta}} \int_{\real{3}} \left [ \int_{s= 0}^{1} \bm{f}_{\alpha\beta} w(\bm{x}_\alpha-\bm{x} - s \bm{z}) \delta(\bm{x}_\beta - \bm{x}_\alpha + \bm{z}) \, ds \right ] \otimes \bm{z} \, d\bm{z} \notag \\ &= \frac{1}{2} \sum_{\substack{\alpha,\beta \\ \alpha \neq \beta}} \int_{s= 0}^{1} [-\bm{f}_{\alpha\beta} w((1-s)\bm{x}_\alpha + s\bm{x}_\beta-\bm{x}) \otimes (\bm{x}_\alpha - \bm{x}_\beta)] \, ds. \label{eqn:hardy_stress} \end{align} Substituting \eref{eqn:g_2} into \eref{eqn:w_stress_force} we have the potential part of the DA\ stress: \begin{equation} \label{eqn:murdoch_stress} \tilde{\stress}_{w,\rm{v}}^{\rm{DA}} = \frac{1}{2} \sum_{\substack{\alpha,\beta \\ \alpha \ne \beta}} \int_{z \in \real{3}} \int_{s=0}^{1} [-\bm{f}_{\alpha\beta} w(\bm{x}_\alpha - \bm{x} - s\bm{z}) w(\bm{x}_\beta - \bm{x} + (1-s)\bm{z}) \otimes \bm{z}] \, ds \, d\bm{z}, \end{equation} which was derived by Murdoch \cite{murdoch1994}. The conclusion is that the non-uniqueness of the generator in the systematized Murdoch--Hardy procedure leads to a non-unique definition for the stress tensor. 
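To make \eref{eqn:hardy_stress} concrete, consider a single pair interacting through a hypothetical central force, evaluated with the uniform weighting \eref{eqn:constant_w}. The two ordered pairs $(1,2)$ and $(2,1)$ contribute equally, cancelling the prefactor $1/2$, and when the whole bond lies inside the averaging sphere the bond integral $\int_0^1 w \, ds$ equals $1/V_w$, so the potential part of the Hardy stress reduces to $-\bm{f}_{12} \otimes (\bm{x}_1 - \bm{x}_2)/V_w$. A hedged numeric sketch (all values illustrative):

```python
import math

r_w = 1.0
V_w = 4.0 / 3.0 * math.pi * r_w ** 3

def w(p):
    # Uniform spherical weighting of eq. (constant_w), evaluated at a 3D point p.
    return 1.0 / V_w if math.sqrt(sum(c * c for c in p)) <= r_w else 0.0

# Hypothetical central pair force f_12 = k (x_1 - x_2); the bond straddles x = 0.
x1, x2, k = (0.4, 0.0, 0.0), (-0.4, 0.0, 0.0), 2.0
r12 = [a - b for a, b in zip(x1, x2)]
f12 = [k * c for c in r12]

# Bond integral int_0^1 w((1-s) x_1 + s x_2 - x) ds at x = 0, by the midpoint rule.
n = 1000
b = sum(w([(1 - (i + 0.5) / n) * p + ((i + 0.5) / n) * q
           for p, q in zip(x1, x2)]) for i in range(n)) / n

# Potential part of the Hardy stress at x = 0 (both ordered pairs included).
sigma = [[-f12[i] * r12[j] * b for j in range(3)] for i in range(3)]

print(abs(b - 1.0 / V_w) < 1e-12)                        # -> True
print(abs(sigma[0][0] + f12[0] * r12[0] / V_w) < 1e-12)  # -> True
```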
Further sources of non-uniqueness can be introduced by having a different force decomposition corresponding to a different potential extension, or using curved paths of interaction instead of straight bonds and applying Lemma \ref{lem2}, which is a generalization of Lemma \ref{lem1} in Appendix \ref{ch:noll}. We do not pursue this generalization further here. \medskip It is important to point out that the systematized Murdoch--Hardy procedure presented here does \emph{not} describe all possible solutions to \eref{eqn:pde_w_stress_force}. An example of a solution that cannot be obtained via our systematized Murdoch--Hardy procedure is the following definition suggested in \cite{murdoch2007}: \begin{equation} \tilde{\stress}_{w,\rm{v}}^*(\bm{x},t):=\sum_{\substack{\alpha,\beta \\ \alpha \neq \beta}} \bm{f}_{\alpha\beta} \otimes (\bm{x} - \bm{x}_\alpha) \hat{a}(\vnorm{\bm{x}-\bm{x}_\alpha}), \label{eqn:mur2} \end{equation} where $\hat{a}(u) := \frac{1}{u^3} \int_{0}^{u} s^2 \hat{w}(s) \, ds$. As is pointed out in \cite{murdoch2007}, the expression in \eref{eqn:mur2} is not a physically-relevant definition for stress due to the following test case. Consider a stationary deformed body at zero temperature (i.e., where the particles occupy fixed positions without vibrating). In this case, the net force acting on any particle is zero. Since $\bm{f}_{\alpha\beta}$ is the only term in the summand of \eref{eqn:mur2} which depends on $\beta$, \eref{eqn:mur2} is equivalent to \begin{equation} \tilde{\stress}_{w,\rm{v}}^*(\bm{x},t):=\sum_\alpha (\sum_{\substack{\beta \\ \beta \ne \alpha}} \bm{f}_{\alpha\beta}) \otimes (\bm{x} - \bm{x}_\alpha) \hat{a}(\vnorm{\bm{x}-\bm{x}_\alpha}). \end{equation} In our case, $\sum_{\beta}\bm{f}_{\alpha\beta} = \bm{f}_\alpha = \bm{0}$ for each particle $\alpha$ in the \emph{interior} of the body, i.e., far from the surface compared to the interatomic distance. 
Hence, the only non-zero contribution to the stress is due to those particles close to the surface on which the net force due to other particles is non-zero. Moreover, although $\hat{a}(u)$ decays to zero as $u$ increases, $\hat{a}(u) \ne 0$ for all $u \ne 0$. Thus, there is a non-zero stress at every point $\bm{x} \in \real{3}$ (even outside the body!) due to particles close to the surface of the body, which is obviously not physically reasonable. Nevertheless, $\tilde{\stress}_{w,\rm{v}}^*$ is still a mathematically valid definition since it satisfies the force balance equation \cite{murdoch2007}. However, it cannot be derived using the systematized Murdoch--Hardy procedure proposed here. Thus, the systematized Murdoch--Hardy procedure does not lead to all possible definitions that satisfy the force balance equation. \medskip Finally, it is also worth noting that the fact that all balance laws are satisfied under the Murdoch--Hardy procedure should not come as a surprise, since $w$ in \eref{eqn:define_s} serves the same purpose as $W$ in \eref{eqn:define_density}. In this view, $w$ is seen as a function defined on a phase space, although one that does not evolve according to a flow described by Hamilton's equations of motion, but still satisfies \eref{eqn:liouville}.\footnote{This is only a mathematical argument. No physical significance should be drawn from this analogy.} The corresponding flow in phase space is described as follows. Continuing with our notation introduced in \eref{eqn:define_X}, let $\bm{\Xi}(0)=(\bm{x}^{\rm s}(0),\bm{v}^{\rm s}(0))=(\bm{x}_1^{\rm s}(0),\dots,\bm{x}_N^{\rm s}(0),\bm{v}_1^{\rm s}(0),\dots,\bm{v}_N^{\rm s}(0))$ denote any arbitrary point in phase space. We add a superscript ``$\rm s$'' to stress the fact that an element in phase space is stochastic in nature.
Consider the flow in phase space given by the mapping \begin{align} \notag \bm{\Xi}(0) = (\bm{x}^{\rm s}(0),\bm{v}^{\rm s}(0)) \mapsto &(\bm{x}^{\rm s}(0)+\bm{x}(t)-\bm{x}(0), \bm{v}^{\rm s}(0)+\bm{v}(t)-\bm{v}(0)) \\ &=(\bm{x}^{\rm s}(t),\bm{v}^{\rm s}(t)) = \bm{\Xi}(t), \label{eqn:flowmap} \end{align} where the quantities $\bm{x}(t) = (\bm{x}_1(t),\cdots,\bm{x}_N(t)), \bm{v}(t) = (\bm{v}_1(t),\cdots,\bm{v}_N(t))$ denote the positions and velocities of the particles, which are assumed to be known. (Typically these quantities are obtained from a molecular dynamics simulation.) Therefore the Murdoch--Hardy procedure can be interpreted as a probabilistic model constructed from the data, $\bm{x}(t)$ and $\bm{v}(t)$, obtained from a deterministic model -- a molecular dynamics simulation. Note that $\bm{x}^{\rm s}$ and $\bm{v}^{\rm s}$ in \eref{eqn:flowmap} denote the positions and velocities of the particles in the probabilistic model. Then it is easy to see that if $W(\bm{\Xi};t)$ is given by \begin{equation} W(\bm{\Xi};t) = w(\bm{x}_1(t) - \bm{x}_1^{\rm s}) \cdots w(\bm{x}_N(t) - \bm{x}_N^{\rm s}), \label{eqn:W_w} \end{equation} then the definitions given by \eref{eqn:define_density}, \eref{eqn:define_mom_density} and \eref{eqn:define_s} are consistent, and $W$ satisfies Liouville's equation (see \eref{eqn:liouville}), which was used in deriving the balance equations in \sref{ch:phase}. Note that unlike \sref{ch:phase}, $W(\bm{\Xi};t)$ defined in \eref{eqn:W_w} is \emph{not} a probability density function. (Its integral over phase space diverges, since it is independent of $\bm{v}$.) The key difference between the two approaches is that all quantities in the Irving--Kirkwood--Noll procedure are probabilistic, while this is not true for the Murdoch--Hardy procedure, if the above probabilistic interpretation is adopted. For example, $\bm{f}_{\alpha\beta}$ in the Murdoch--Hardy procedure is deterministic.
Therefore the structure inherent in \eref{eqn:stress_force_differential_**} through the marginal densities is absent in the Murdoch--Hardy procedure, thus giving additional non-uniqueness. It is shown in \sref{ch:compare} that the Hardy stress can be derived using both approaches, while the DA\ stress is a result of the Murdoch--Hardy procedure alone. \subsection{Definition of the spatially-averaged traction vector} \label{sec:define_traction_weight} We close this section by defining the spatially-averaged traction vector, $\bm{t}_w(\bm{x},\bm{n};t)$, for a weighting function $w$, at a point $\bm{x}$ relative to a plane with normal $\bm{n}(\bm{x})$. One possibility is to adopt the Cauchy relation using the spatially-averaged stress tensor, \begin{equation} \label{eqn:traction_pw} \bm{t}_{w}(\bm{x},\bm{n};t):=\bm{\sigma}_{w}(\bm{x},t) \bm{n}. \end{equation} However, since $\bm{\sigma}_{w}$ is defined as a volume average, we immediately see that with this definition $\bm{t}_{w}$ depends not only on the bonds that cross the surface, but also on nearby bonds that do not cross it \cite{murdoch2003}. Hence, equation \eref{eqn:traction_pw} does not appear to be consistent with Cauchy's definition of traction. We therefore seek an alternative definition for the spatially-averaged traction vector. In \sref{sec:traction_vector}, we showed that the pointwise traction vector at a point on a surface is the expectation of the force per unit area of all the bonds that cross the surface, making it a property of the surface. We would like the spatially-averaged traction to have the same property. We therefore define it as an average over a \emph{surface} rather than over a volume as for the stress. For simplicity, we consider the weighting function $w_h$, defined to be constant on the averaging domain, which is taken to be a generalized cylinder of height $h$, with its axis parallel to $\bm{n}$ and enclosing $\bm{x}$. 
The traction $\bm{t}_w(\bm{x},\bm{n};t)$ is defined as \begin{equation} \label{eqn:discrete_traction_w} \bm{t}_{w}(\bm{x},\bm{n};t) := \lim_{h \to 0} \bm{\sigma}_{w_h}(\bm{x},t)\bm{n}, \end{equation} where $\bm{\sigma}_{w_h}$ is the stress associated with the weighting function, $w_h$. In a more general case, an arbitrary averaging domain can be collapsed onto a surface passing through $\bm{x}$, in many ways. Although this can be made mathematically more precise, we do not pursue that in this work. Definition \eref{eqn:discrete_traction_w} has a two-fold advantage over the definition in \eref{eqn:traction_pw}: \begin{enumerate} \item The traction vector is defined to be non-local on a surface, thus making it a property of the surface. This is physically more meaningful, and closer to the continuum definition. \item The above definition differs from the traction definition in \eref{eqn:traction_pw}, because only the bonds which cross the surface contribute to the traction field. \end{enumerate} In \sref{sec:tsai}, we use definition \eref{eqn:discrete_traction_w} to define the Tsai traction starting with the spatial averaging discussed in \sref{sec:spatial_average} and in this way establish a link between the Tsai traction and the Irving--Kirkwood--Noll procedure. \section{Derivation of different stress definitions and the issue of uniqueness} \label{ch:compare} In this section, we systematically derive various stress tensors commonly found in the literature from the methods developed in \sref{ch:phase} and \sref{ch:spatial}. The stress tensors discussed in this section are the Hardy, virial and DA\ stress tensors and the Tsai traction. \subsection{Hardy stress tensor} \label{sec:hardy} The Murdoch--Hardy procedure described in \sref{ch:spatial} was independently developed by Murdoch \cite{murdoch1982} and Hardy \cite{hardy1982}. The motivation for Hardy's study was to test the validity of the continuum description of phenomena in shock waves. 
The formulas suggested by Irving and Kirkwood were not useful due to the lack of knowledge regarding the probability density function and the infinite series expansion in the definition of the stress. As an alternative, Hardy used what we now term the ``Murdoch--Hardy procedure'' to propose an instantaneous definition for stress, for the special case of a pair potential, given by \begin{subequations} \label{eqn:hardy} \begin{align} \bm{\sigma}^{\rm H}_{\rm{v}}(\bm{x},t) &= \frac{1}{2} \sum_{\substack{\alpha,\beta \\ \alpha \neq \beta}} \frac{(\bm{x}_\alpha(t)-\bm{x}_\beta(t)) \otimes (\bm{x}_\alpha(t)-\bm{x}_\beta(t))}{\vnorm{\bm{x}_\alpha-\bm{x}_\beta}} \mathcal{V}'_{\alpha\beta} b(\bm{x};\bm{x}_\alpha,\bm{x}_\beta), \label{eqn:hardy_force}\\ \bm{\sigma}^{\rm H}_{\rm{k}}(\bm{x},t) &= -\sum_{\alpha} w(\bm{x}_\alpha(t) - \bm{x}) m_\alpha \bm{v}_{\alpha}^{\rm{rel}}(t) \otimes \bm{v}_{\alpha}^{\rm{rel}}(t), \label{eqn:hardy_kinetic} \end{align} \end{subequations} where $b$ is the bond function defined in \eref{eqn:bond_function} and $\bm{v}_\alpha^{\rm{rel}}$ is the velocity of particle $\alpha$ with respect to the continuum velocity, as defined in \eref{eqn:cont_velocity}. To simplify the notation, the explicit dependence of $\bm{x}_\alpha$ and $\bm{v}^{\rm{rel}}_\alpha$ on time is dropped from here onwards. Equations \eref{eqn:hardy_force} and \eref{eqn:hardy_kinetic} may look familiar. They are similar to the spatially-averaged generalized stress in \eref{eqn:stress_kinetic_w} and \eref{eqn:stress_force_w_hardy_straight} (for the special case of a pair potential). If, in these relations, the ensemble average is replaced by a time average, we obtain a time-averaged Hardy stress.
However, in performing such an operation, we must note the following: \begin{enumerate} \item Under conditions of thermodynamic equilibrium (see footnote~\ref{foot:tdequil} on page~\pageref{foot:tdequil}), ensemble averages can be replaced by time averages provided that the system is assumed to be ergodic. Strictly speaking this time average should be done for infinite time, but for practical reasons we are restricted to finite time. \item The Hardy stress tensor is valid under non-equilibrium conditions assuming that the system is in {\em local thermodynamic equilibrium}\footnote{Local thermodynamic equilibrium is a weaker condition than uniform thermodynamic equilibrium (see footnote \ref{foot:tdequil}). The assumption is that the microscopic domain associated with each continuum particle is locally in a state of uniform thermodynamic (or at least metastable) equilibrium. This is the reason why concepts like temperature can be defined as field variables in continuum mechanics. See for example \cite{evansmorriss}.} at all points at every instant of time. This is plausible only when there is a clear separation of time scales between the microscopic equilibration time scale $\tau$ and macroscopic times. Here, $\tau$ is not being defined rigorously. Roughly speaking, $\tau$ must be sufficiently small so that macroscopic observables do not vary appreciably over it. 
\end{enumerate} Under these assumptions, we may replace ensemble averages with time averages in \eref{eqn:stress_kinetic_w} and \eref{eqn:stress_force_w_hardy_straight} to obtain \begin{subequations} \label{eqn:hardy_stress_tavg} \begin{align} \bm{\sigma}_{w,\rm{k}}(\bm{x},t) &= -\frac{1}{\tau} \sum_{\alpha} \int_{t}^{t+\tau} w(\bm{x}_\alpha - \bm{x}) m_\alpha \bm{v}_{\alpha}^{\rm{rel}} \otimes \bm{v}_{\alpha}^{\rm{rel}} dt, \label{eqn:hardy_stress_kin_tavg} \\ \bm{\sigma}_{w,\rm{v}}(\bm{x},t) &= \frac{1}{2\tau}\sum_{\substack{\alpha,\beta \\ \alpha \neq \beta}} \int_{t}^{t+\tau} [-\bm{f}_{\alpha\beta} \otimes (\bm{x}_\alpha-\bm{x}_\beta) b(\bm{x};\bm{x}_\alpha,\bm{x}_\beta)] dt \label{eqn:hardy_stress_force_tavg}, \end{align} \end{subequations} where $\bm{f}_{\alpha\beta}$, corresponding to a given potential extension, is defined in \eref{eqn:define_fij}, and $\tau$ represents a microscopic time scale. We see that the Hardy stress is obtained through a rigorous process beginning with the statistical mechanics concepts introduced in \sref{ch:phase}. From here on, we will denote the stress in \eref{eqn:hardy_stress_tavg} as the ``Hardy stress'', although we note that this definition constitutes a generalization of the original Hardy stress to arbitrary potentials and includes time averaging. Incidentally, the Hardy stress can also be derived from the systematized Murdoch--Hardy procedure described in \sref{sec:murdoch_proc}. The kinetic part of the Hardy stress is the same as that obtained in the Murdoch--Hardy procedure. The potential part of the Hardy stress is derived using the generator $\bm{g}^{\rm{H}}$ given in \eref{eqn:g_1}. This was done in \sref{ch:spatial} (see \eref{eqn:hardy_stress}). Note that the Hardy stress tensor is symmetric. One could modify this to a general form by choosing an arbitrary path of interaction, thus leading to a non-symmetric form (see \sref{sec:murdoch_proc}). 
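A minimal numerical sketch of the time-averaged Hardy stress \eref{eqn:hardy_stress_tavg} follows; the trajectory-frame data structure and all function names are ours (illustrative only), and the time integrals are replaced by means over uniformly sampled frames.

```python
import numpy as np

def hardy_stress_time_avg(x, frames, masses, w):
    """Sketch of the time-averaged Hardy stress.  Each frame is a dict with
    'pos' (N x 3 positions), 'vrel' (N x 3 velocities relative to the
    continuum velocity) and 'fpairs', a list of (a, b, f_ab) over *ordered*
    pairs, where f_ab is the force on particle a due to particle b for the
    chosen potential extension."""
    def bond_fn(xa, xb, n_quad=64):
        # b(x; xa, xb) = int_0^1 w((1 - s) xa + s xb - x) ds (midpoint rule)
        s = (np.arange(n_quad) + 0.5) / n_quad
        return np.mean([w((1.0 - si) * xa + si * xb - x) for si in s])
    kin = np.zeros((3, 3))
    pot = np.zeros((3, 3))
    for fr in frames:
        for m, xa, v in zip(masses, fr['pos'], fr['vrel']):
            kin -= w(xa - x) * m * np.outer(v, v)
        for a, b, f_ab in fr['fpairs']:
            pot -= np.outer(f_ab, fr['pos'][a] - fr['pos'][b]) \
                   * bond_fn(fr['pos'][a], fr['pos'][b])
    n = len(frames)
    return kin / n, 0.5 * pot / n   # the 1/2 compensates for ordered pairs
```

The returned pair corresponds to \eref{eqn:hardy_stress_kin_tavg} and \eref{eqn:hardy_stress_force_tavg}; in practice the frames would be sampled from a molecular dynamics trajectory over the window $[t, t+\tau]$.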
Also note that the stress tensor resulting from the generator $\bm{g}^{\rm{HD}}$ (see \fref{fig:generators}) would be symmetric, because the interaction force between two continuum particles is always aligned with the line connecting them. It is very important to observe that under non-equilibrium conditions, where we assume a local thermodynamic equilibrium at every instant of macroscopic time, we may assume that the averaging domain centered at a position $\bm{x}$ moves with the continuum velocity $\bm{v}(\bm{x},t)$. This fact will be used in \sref{sec:tsai}. \subsection{Tsai traction} \label{sec:tsai} Cauchy's original definition of stress emerges from the concept of traction acting across the internal surfaces of a solid via the bonds that cross the surface. It is therefore natural to attempt to define traction at the atomic level in a similar vein in terms of the force in bonds intersecting a given plane. This approach actually goes back to Cauchy himself as part of his effort in the 1820s to define the stress in crystalline systems \cite{Cauchy1828a,Cauchy1828b}, which is described in detail in Note B in Love's classical book on the theory of elasticity \cite{love}. Cauchy's derivation is limited to zero temperature equilibrium where the atoms are stationary. This approach was extended by Tsai \cite{tsai1979} to the dynamical setting by also accounting for the momentum flux of atoms moving across the plane. The expression for the traction given in \cite{tsai1979} appears to be based on intuition. In this section, we show how the Tsai traction can be systematically derived from the Hardy stress tensor, which itself was derived from the generalized stress tensor defined in \sref{sec:spatial_average}. We will see that the potential part of Tsai's original definition agrees with the results of our unified framework. 
However, Tsai's expression for the kinetic part of the traction depends on the absolute velocity of the particles and therefore is not invariant with respect to Galilean transformations. We show below that the correct expression for the Tsai traction vector $\bm{t}_{w}(\bm{x},\bm{n};t)$ across a plane $P$ with normal $\bm{n}$ is \begin{align} \bm{t}_{w}(\bm{x},\bm{n};t) &= -\frac{1}{A\tau} \int_{t}^{t+\tau} \sum_{\alpha\beta \cap P} \bm{f}_{\alpha\beta} \frac{(\bm{x}_\alpha - \bm{x}_\beta) \cdot \bm{n}} {\abs{(\bm{x}_\alpha-\bm{x}_\beta) \cdot \bm{n}}} \,dt \notag \\ &-\frac{1}{A\tau} \sum_{\alpha \leftrightarrow P} \frac{m_\alpha \bm{v}_\alpha^{\rm{rel}}(t_\leftrightarrow)( \bm{v}_{\alpha}^{\rm{rel}}(t_\leftrightarrow) \cdot \bm{n} ) }{\abs{\bm{v}_{\alpha}^{\rm{rel}}(t_\leftrightarrow) \cdot \bm{n} }}, \label{eqn:tsai_traction} \end{align} where $\tau$ indicates the microscopic time scale, $\sum_{\alpha\beta \cap P}$ indicates the summation over all bonds $\alpha-\beta$ crossing the plane $P$, $\sum_{\alpha \leftrightarrow P}$ indicates summation over all particles that cross $P$ in the time interval $[t,t+\tau]$,\footnote{A particle is counted multiple times if it crosses the plane multiple times.} $\bm{v}_{\alpha}^{\rm{rel}}$ denotes the local relative velocity of particle $\alpha$, and $t_\leftrightarrow$ indicates the time at which the particle crosses the plane. The correct form for $\bm{v}_{\alpha}^{\rm{rel}}$ is not immediately obvious. Below, we derive equation \eref{eqn:tsai_traction} and obtain an explicit expression for $\bm{v}_{\alpha}^{\rm{rel}}$. \medskip We start with the Hardy stress in \eref{eqn:hardy_stress_tavg}.
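As a concrete illustration of the bond term derived later in this section, the sketch below evaluates the potential (bond) part of the instantaneous Tsai traction for a plane through the origin: each crossing bond contributes its force weighted by the sign of its orientation relative to $\bm{n}$. All names are illustrative, and $\bm{f}_{\alpha\beta}$ follows our convention of the force on $\alpha$ due to $\beta$.

```python
import numpy as np

def tsai_traction_potential(bonds, n, A=1.0):
    """Potential (bond) part of the instantaneous Tsai traction across a
    plane through the origin with unit normal n and area A.  Every bond
    alpha-beta crossing the plane contributes -f_ab * sign((xa - xb).n) / A.
    `bonds` is a list of (xa, xb, f_ab) with each bond listed once."""
    t = np.zeros(3)
    for xa, xb, f_ab in bonds:
        sa, sb = np.dot(xa, n), np.dot(xb, n)  # signed distances to the plane
        if sa * sb < 0.0:                       # the bond crosses the plane
            t -= f_ab * np.sign(sa - sb)
    return t / A
```

For a single attractive bond straddling the plane (force on the particle on the $-\bm{n}$ side pointing toward $+\bm{n}$), this gives $\bm{t}\cdot\bm{n}>0$, i.e., tension, as expected from Cauchy's convention.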
Recall from \sref{sec:define_traction_weight} that if the averaging domain is taken to be a generalized cylinder $\mathcal{C}_h$ of height $h$, the spatially-averaged traction field, $\bm{t}_w(\bm{x},\bm{n};t)$, on a surface passing through $\bm{x}$ with normal $\bm{n}$ is \begin{align} \bm{t}_w(\bm{x},\bm{n};t) = \lim_{h \to 0} \bm{\sigma}_{w_h} \bm{n} &= \lim_{h \to 0} (\bm{\sigma}_{w_h,\rm{v}} \bm{n} + \bm{\sigma}_{w_h,\rm{k}} \bm{n}) \label{eqn:traction} \\ &=: \bm{t}_{w,\rm{v}} + \bm{t}_{w,\rm{k}}. \notag \end{align} Using \eref{eqn:hardy_stress_tavg}, we rewrite the potential part and kinetic part of \eref{eqn:traction} as \begin{subequations} \begin{align} \bm{t}_{w,\rm{v}}(\bm{x},\bm{n};t) &= \frac{1}{2\tau} \lim_{h \to 0} \int_{t}^{t+\tau} \sum_{\substack{\alpha,\beta \\ \alpha \neq \beta}} [-\bm{f}_{\alpha\beta} \otimes (\bm{x}_\alpha - \bm{x}_\beta) b_h(\bm{x};\bm{x}_\alpha,\bm{x}_\beta)] dt \label{eqn:traction_contact},\\ \bm{t}_{w,\rm{k}}(\bm{x},\bm{n};t) &= -\frac{1}{\tau} \lim_{h \to 0} \int_{t}^{t+\tau} \sum_{\alpha} m_\alpha w(\bm{x_\alpha} - \bm{x};h) \bm{v}_{\alpha}^{\rm{rel}}(t;h) ( \bm{v}_{\alpha}^{\rm{rel}}(t;h) \cdot \bm{n} ) dt \label{eqn:traction_kin}, \end{align} \end{subequations} where $b_h$ denotes the bond function for a generalized cylinder of height $h$. Also note the dependence of $\bm{v}_{\alpha}^{\rm{rel}}$ on $h$ in \eref{eqn:traction_kin}. Let us first consider the potential part of the traction in \eref{eqn:traction_contact}. As $h$ approaches zero, the generalized cylinder will no longer contain complete bonds. 
Assuming a constant weighting function, the bond function $b_h$ equals the fraction of the length of the bond lying within the generalized cylinder per unit volume: \begin{equation} b_h(\bm{x};\bm{x}_\alpha,\bm{x}_\beta) = \frac{1}{hA}\frac{h}{\abs{\left (\bm{x}_\alpha - \bm{x}_\beta \right) \cdot \bm{n}}} =\frac{1}{A \abs{\left (\bm{x}_\alpha - \bm{x}_\beta \right) \cdot \bm{n}}}, \end{equation} for any bond $\alpha-\beta$ crossing the cylinder. Therefore \eref{eqn:traction_contact} takes the form \begin{equation} \bm{t}_{w,\rm{v}}(\bm{x},\bm{n};t) = \frac{1}{A\tau} \int_{t}^{t+\tau} \sum_{\alpha\beta \cap P} \left [-\bm{f}_{\alpha\beta} \frac{(\bm{x}_\alpha - \bm{x}_\beta) \cdot \bm{n}}{\abs{(\bm{x}_\alpha - \bm{x}_\beta) \cdot \bm{n}}}\right ] \, dt. \end{equation} Note that the $1/2$ factor is dropped because the summation $\sum_{\alpha\beta \cap P}$ counts each bond only once, whereas the double sum over $\alpha$ and $\beta$ counts each bond twice. This is the first term in \eref{eqn:tsai_traction}. Turning to the kinetic part of the traction in \eref{eqn:traction_kin}, we interchange the summation and integral to obtain \begin{equation} \bm{t}_{w,\rm{k}}(\bm{x},\bm{n};t) = -\frac{1}{\tau} \lim_{h \to 0} \sum_{\alpha \in \mathcal{C}_h} \int_{t_1(\alpha;h)}^{t_2(\alpha;h)} m_\alpha w(\bm{x_\alpha} - \bm{x};h) \bm{v}_{\alpha}^{\rm{rel}}(t;h) ( \bm{v}_{\alpha}^{\rm{rel}}(t;h) \cdot \bm{n} ) dt, \end{equation} where $t_1(\alpha;h)$ and $t_2(\alpha;h)$ are the times of entry and exit of particle $\alpha$, respectively, from a cylinder of height $h$. The summation in the above equation is over all particles that are in the generalized cylinder during the time interval $[t,t+\tau]$, with a particle counted $k$ times if it enters and exits the cylinder $k$ times.
Multiplying and dividing the above equation by $t_2(\alpha;h)-t_1(\alpha;h)$, and substituting the constant value $w = 1/(hA)$ of the weighting function inside the cylinder, we have \begin{align} \bm{t}_{w,\rm{k}}(\bm{n}) &= -\frac{1}{A\tau} \lim_{h \to 0} \sum_{\alpha \in \mathcal{C}_h} \frac{t_2(\alpha;h)-t_1(\alpha;h)}{h} \frac{\int_{t_1(\alpha;h)}^{t_2(\alpha;h)} m_\alpha \bm{v}_{\alpha}^{\rm{rel}}(t;h) ( \bm{v}_{\alpha}^{\rm{rel}}(t;h) \cdot \bm{n} ) dt}{t_2(\alpha;h)-t_1(\alpha;h)} \notag \\ & = -\frac{1}{A\tau} \sum_{\alpha \leftrightarrow P}\lim_{h \to 0} \frac{t_2(\alpha;h)-t_1(\alpha;h)}{h} m_\alpha \bm{v}_{\alpha}^{\rm{rel}}(t_\leftrightarrow) ( \bm{v}_{\alpha}^{\rm{rel}}(t_\leftrightarrow) \cdot \bm{n}), \label{eqn:twk} \end{align} where we have used the Lebesgue differentiation theorem \cite{folland} in the last equality. Note that the interchange of limit and summation in the above step is valid since we can assume that the summation for any $\mathcal{C}_h$ is a finite summation which is physically meaningful. Since the averaging domain moves with a continuum velocity we note that \begin{equation} \lim_{h \to 0} \frac{t_2(\alpha;h)-t_1(\alpha;h)}{h} = \frac{1}{\abs{\bm{v}_{\alpha}^{\rm{rel}}(t_\leftrightarrow) \cdot \bm{n}}}. \label{eqn:toh} \end{equation} In words, this equality states that the net time spent by particle $\alpha$ in the cylinder, divided by its height, is equal to the inverse of the magnitude of the velocity of particle $\alpha$ along the axis of the cylinder. This is correct in the limit $h\to 0$, where particles only enter and exit the cylinder at its ends.
Substituting \eref{eqn:toh} into \eref{eqn:twk}, we have \begin{align} \bm{t}_{w,\rm{k}}(\bm{n}) &= -\frac{1}{A\tau} \sum_{\alpha \leftrightarrow P} \frac{m_\alpha \bm{v}_\alpha^{\rm{rel}}(t_\leftrightarrow)( \bm{v}_{\alpha}^{\rm{rel}}(t_\leftrightarrow) \cdot \bm{n} ) }{\vert \bm{v}_{\alpha}^{\rm{rel}}(t_{\leftrightarrow}) \cdot \bm{n} \vert} \notag \\ &= -\frac{1}{A\tau} \sum_{\alpha \leftrightarrow P} m_\alpha \bm{v}_\alpha^{\rm{rel}}(t_{\leftrightarrow}) \textrm{sign}( \bm{v}_{\alpha}^{\rm{rel}}(t_\leftrightarrow) \cdot \bm{n} ). \end{align} This is the second term in \eref{eqn:tsai_traction}. Note that \begin{equation} \bm{v}_{\alpha}^{\rm{rel}}(t) = \lim_{h \to 0}\bm{v}_{\alpha}^{\rm{rel}}(t;h) = \bm{v}_\alpha(t) - \lim_{h \to 0}\bm{v}(\bm{x};h). \end{equation} Hence, we have implicitly assumed that $\lim_{h \to 0}\bm{v}(\bm{x};h)$ is well-defined for our averaging domain (plane $P$) which is a limit of the generalized cylinder $\mathcal{C}_h$. In the following calculation, we show that this limit is well-defined and derive its exact form. We know that for a generalized cylinder \begin{align} \bm{v}(\bm{x};h) &:= \frac{\frac{1}{\tau} \int_{t}^{t+\tau} \sum_{\alpha} m_\alpha w(\bm{x}_\alpha(t) - \bm{x};h) \bm{v}_\alpha(t) dt}{\frac{1}{\tau}\int_{t}^{t+\tau} \sum_{\beta} m_\beta w(\bm{x}_\beta(t)-\bm{x};h) dt} \notag\\ &= \frac{ \sum^{'}_{\alpha} \int_{t_1(\alpha;h)}^{t_2(\alpha;h)} m_\alpha \bm{v}_\alpha(t) dt}{\sum^{'}_{\beta} \int_{t_1(\beta;h)}^{t_2(\beta;h)} m_\beta dt} \notag \\ &= \frac{\sum^{'}_{\alpha} m_\alpha \left [ \bm{x}_\alpha(t_2(\alpha;h)) - \bm{x}_\alpha(t_1(\alpha;h)) \right ]}{\sum^{'}_{\beta} m_\beta \left [ t_2(\beta;h) - t_1(\beta;h) \right ]},\label{eqn:cont_velocity_h} \end{align} where $\sum'$ indicates summation over those particles that cross $P$ in the time interval $[t,t+\tau]$, including multiple entries and exits.
Considering the limit $h \to 0$ of the $\alpha$-th partial fraction of the last equation, with numerator and denominator divided by $t_2(\alpha;h) - t_1(\alpha;h)$, we have \begin{equation} \lim_{h \to 0} \frac{ m_\alpha \frac{\bm{x}_\alpha(t_2(\alpha;h)) - \bm{x}_\alpha(t_1(\alpha;h))} {t_2(\alpha;h) - t_1(\alpha;h)} } { \sum_{\beta}^{'} m_\beta \frac{t_2(\beta;h) - t_1(\beta;h)} {t_2(\alpha;h) - t_1(\alpha;h)} } = \frac{m_\alpha \bm{v}_\alpha( t_\leftrightarrow)} {\sum^{'}_{\beta} m_\beta \abs{ \bm{v}_\alpha(t_\leftrightarrow) \cdot \bm{n} / \bm{v}_\beta(t_\leftrightarrow) \cdot \bm{n} }}, \label{partialfraction} \end{equation} using the fact that in the limit $h \to 0$, $(t_2(\beta;h) - t_1(\beta;h))/(t_2(\alpha;h) - t_1(\alpha;h))$, which is the ratio of the times spent by particles $\beta$ and $\alpha$ in one of their sojourns into the cylinder, is equal to the inverse ratio of their normal velocities. Using \eref{partialfraction} and taking the limit $h \to 0$ of \eref{eqn:cont_velocity_h}, we obtain \begin{equation} \bm{v}(\bm{x}) = \lim_{h \to 0} \bm{v}(\bm{x};h) = \sum_{\alpha \leftrightarrow P} \frac{m_\alpha \bm{v}_\alpha(t_\leftrightarrow)} {\sum^{'}_{\beta} m_\beta \abs{ \bm{v}_\alpha(t_\leftrightarrow) \cdot \bm{n} / \bm{v}_\beta(t_\leftrightarrow) \cdot \bm{n}}}. \end{equation} Note that the above expression for the continuum velocity is far from intuitive. One might expect the continuum velocity to be the average velocity of particles crossing the surface, but this is not true. It is clear from the above equation that the averaging is not trivial. \medskip From the relationship between the Tsai traction in \eref{eqn:tsai_traction} and the Hardy stress tensor in \eref{eqn:hardy_stress_tavg}, it is apparent that the Tsai traction is a more local quantity than the Hardy stress tensor. The Tsai traction performs better than the Hardy stress in systems with free surfaces.
This was studied by Cheung and Yip \cite{cheung1991} for a one-dimensional case, in which the virial and Tsai stresses are compared (the virial stress is a special case of the Hardy stress as shown in the next section). The Tsai traction definition can be used to evaluate the stress tensor at a point by evaluating the traction on three perpendicular planes.\footnote{For example, if the normals to the planes are aligned with the axes of a Cartesian coordinate system with basis vectors $\bm{e}_i$, then $\bm{t}(\bm{e}_1)$ would give the components $\sigma_{11}$, $\sigma_{21}$, $\sigma_{31}$, $\bm{t}(\bm{e}_2)$ would give the components $\sigma_{12}$, $\sigma_{22}$, $\sigma_{32}$, and $\bm{t}(\bm{e}_3)$ would give the components $\sigma_{13}$, $\sigma_{23}$, $\sigma_{33}$.} However, it is not clear from the perspective put forward by Tsai \cite{tsai1979} whether the resulting stress tensor would be symmetric or even well-defined, i.e., it is not clear if another choice of planes will give suitably transformed components of the same stress tensor. Our derivation suggests that a stress tensor constructed from the Tsai traction should be well-defined and symmetric, at least in a weak sense, since it is a limit of the Hardy stress, which has these properties. The numerical experiments presented in \sref{ch:experiment} suggest that the Tsai traction is invariant with respect to the position of the Tsai plane $P$ and the resulting stress tensor is symmetric. \subsection{Virial stress tensor} \label{sec:virial} In this section, we show that the virial stress tensor derived in \sref{ch:canonical} and in Appendix \ref{ch:virial} can be re-derived from the time-averaged version of the Hardy stress given in \eref{eqn:hardy_stress_tavg}. The expression for the virial stress tensor is obtained from \eref{eqn:hardy_stress_tavg} as a special case for a weighting function which is constant on its support.
The bond function, $b$, in \eref{eqn:hardy_stress_tavg} is evaluated approximately using its definition \eref{eqn:bond_function} by only counting those bonds $\alpha-\beta$ that lie entirely within the averaging domain and neglecting the bonds that cross its boundary. Hence, $b(\bm{x};\bm{x}_\alpha,\bm{x}_\beta)$ is given by \begin{equation} \label{eqn:bond_function_virial} b(\bm{x};\bm{x}_\alpha,\bm{x}_\beta) = \left \{ \begin{array}{ll} 1/\operatorname{vol}(\Omega_{\bm{x}}) & \mbox{if bond $\alpha-\beta \in \Omega_{\bm{x}}$}, \\ 0 & \mbox{otherwise}, \end{array} \right. \end{equation} where $\Omega_{\bm{x}}$ denotes the averaging domain centered at $\bm{x}$. Substituting \eref{eqn:bond_function_virial} into \eref{eqn:hardy_stress_tavg}, we have \begin{equation} \bm{\sigma}(\bm{x},t) = \frac{1}{\tau \operatorname{vol}(\Omega_{\bm{x}})} \int_{t}^{t+\tau} \Bigg [ -\sum_{\alpha \in \Omega_{\bm{x}}} m_\alpha \bm{v}_{\alpha}^{\rm{rel}} \otimes \bm{v}_{\alpha}^{\rm{rel}} + \frac{1}{2} \sum_{\substack{\alpha,\beta \in \Omega_{\bm{x}} \\ \alpha \ne \beta}} [-\bm{f}_{\alpha\beta} \otimes (\bm{x}_\alpha - \bm{x}_\beta)] \Bigg ] \,dt , \label{eqn:virial_stress_tavg} \end{equation} which is identical to \eref{eqn:virial_2} in Appendix \ref{ch:virial}. It is clear from this that the virial stress tensor is only an approximation and tends to the Hardy stress as the volume of the averaging domain is increased. This is because the ratio of the measure of bonds that cross the surface to those which are inside the averaging domain decreases as the size of the domain increases. The difference between the virial stress tensor and the Tsai traction was analytically calculated for a one-dimensional chain by Tsai (see \cite{tsai1979}). Since taking the averaging domain size to infinity is equivalent to taking the thermodynamic limit in this context, the Hardy and virial stress expressions become identical in this limit.
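At a single instant (before time averaging), the virial expression \eref{eqn:virial_stress_tavg} reduces to a simple sum. The sketch below assumes the inputs have already been filtered to the averaging domain; all names are illustrative.

```python
import numpy as np

def virial_stress(pos, vrel, masses, fpairs, volume):
    """Instantaneous virial stress over an averaging domain of the given
    volume.  `pos`, `vrel`, `masses` describe the particles inside the
    domain; `fpairs` lists (a, b, f_ab) over *ordered* pairs whose bond lies
    entirely inside the domain (f_ab = force on a due to b)."""
    sig = np.zeros((3, 3))
    for m, v in zip(masses, vrel):
        sig -= m * np.outer(v, v)                     # kinetic contribution
    for a, b, f_ab in fpairs:
        sig -= 0.5 * np.outer(f_ab, pos[a] - pos[b])  # potential contribution
    return sig / volume
```

For a central-force system the potential term is symmetric bond by bond, so the instantaneous virial stress is symmetric.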
Since the virial theorem was also derived in \sref{ch:canonical} for the case of equilibrium statistical mechanics, it follows that the Irving--Kirkwood--Noll procedure is consistent with the results of equilibrium statistical mechanics in the thermodynamic limit. \subsection{DA\ stress tensor} It was seen in \sref{sec:murdoch_proc} that the DA\ stress tensor, defined in \eref{eqn:murdoch_stress}, is derived using an appropriate generator in the systematized Murdoch--Hardy procedure. However, unlike the Hardy stress, the DA\ stress cannot be derived from the Irving--Kirkwood--Noll procedure. It is also worth noting that the stress tensor given by \eref{eqn:murdoch_stress} is in general non-symmetric, and only under very special conditions yields a symmetric tensor \cite{murdoch2007}. \subsection{Uniqueness of the macroscopic stress tensor} \label{sec:unique_macro_stress} Three possible sources of non-uniqueness for the stress tensor have been identified in our discussion: \begin{enumerate} \item Given that there are multiple potential extensions (see page~\pageref{page:altext}), different force decompositions are possible and hence different pointwise stress tensors can be obtained. \item For a given pointwise stress tensor, a new pointwise stress, which also satisfies the balance of linear momentum, can be obtained by adding on an arbitrary tensor field with zero divergence. \item The generalization of the Irving--Kirkwood--Noll procedure in \sref{sec:gen_stress} to arbitrary ``paths of interaction'' leads to the possibility of non-symmetric expressions for the pointwise stress tensor. \end{enumerate} We address the first two issues in this section. The third source of non-uniqueness is only possible in systems where the discrete particles making up the system possess internal structure, such as internal polarization or spin. For systems of discrete particles without internal structure, only straight bonds are possible due to symmetry arguments.
We leave the discussion of particles with internal structure to future work. \subsubsection*{Uniqueness and potential energy extensions} The first source of non-uniqueness of the stress tensor is related to the potential energy extension discussed in \sref{sec:s_motion}. We show below that the macroscopic stress tensor, calculated as a spatial average of the pointwise stress tensor with constant weighting function, is always unique in the thermodynamic limit (see footnote~\ref{foot:tdlimit} on page~\pageref{foot:tdlimit}), i.e., the difference between the spatially-averaged pointwise stress tensors resulting from two different extensions tends to zero as the volume of the averaging domain is increased. The discussion below is restricted to a 5-body potential for concreteness; the argument extends easily to any interatomic potential. We first show that the contribution due to any cluster of $5$ particles within the averaging domain is zero. Without loss of generality, we may assume that our system consists of $5$ particles interacting with an interatomic potential energy given by \begin{equation} \mathcal{V}_{\rm{int}} = \widehat{\mathcal{V}}(\bm{x}_1,\dots,\bm{x}_5). \end{equation} Let $\mathcal{V}_{\rm{int}}(\zeta_{12},\dots,\zeta_{45})$ and $\mathcal{V}^*_{\rm{int}}(\zeta_{12},\dots,\zeta_{45})$ be two different extensions of $\mathcal{V}_{\rm int}$ from the shape space $\mathcal{S}$ to $\mathbb{R}^{10}$ (see \sref{sec:s_motion}), and for any $\bm{s}=(r_{12},\dots,r_{45}) \in \mathcal{S}$, let \begin{align} \bm{f}_{\alpha\beta}(\bm{x}_1,\dots,\bm{x}_5) &:= \frac{\partial \mathcal{V}_{\rm{int}}}{\partial \zeta_{\alpha\beta}}(\bm{s}) \frac{\bm{x}_\beta - \bm{x}_\alpha}{r_{\alpha\beta}}, \\ \bm{f}^*_{\alpha\beta} (\bm{x}_1,\dots,\bm{x}_5)&:= \frac{\partial \mathcal{V}^*_{\rm{int}}}{\partial \zeta_{\alpha\beta}}(\bm{s})\frac{\bm{x}_\beta - \bm{x}_\alpha}{r_{\alpha\beta}}, \end{align} be their corresponding force decompositions.
Let $\bm{\sigma}$ and $\bm{\sigma}^*$ denote the resulting pointwise stress tensors in the Irving--Kirkwood--Noll procedure from $\mathcal{V}_{\rm{int}}$ and $\mathcal{V}^*_{\rm{int}}$, respectively. Let $\Omega_{\bm{x}}$ denote the averaging domain\footnote{For simplicity assume that the averaging domain is convex.} centered at $\bm{x}$ that is used to calculate the Hardy stress tensor. Using \eref{eqn:hardy_stress_force_tavg} and noting that all the bonds lie within $\Omega_{\bm{x}}$, the difference between the Hardy stress tensors resulting from these two representations, for the special case of a constant weighting function, is given by \begin{equation} \label{eqn:delta_stress} \Delta \bm{\sigma}(\bm{x},t) := \bm{\sigma}_w - \bm{\sigma}^*_w = \frac{1}{2\tau \operatorname{vol}(\Omega_{\bm{x}})}\sum_{\substack{\alpha,\beta \\ \alpha \neq \beta}} \int_{t}^{t+\tau} [-\Delta \bm{f}_{\alpha\beta} \otimes (\bm{x}_\alpha-\bm{x}_\beta)] dt, \end{equation} where $\Delta \bm{f}_{\alpha\beta} := \bm{f}_{\alpha\beta} - \bm{f}^*_{\alpha\beta}$. \begin{figure} \centering \includegraphics[scale=0.7]{fig5_1} \caption{A cluster of $5$ particles that lies completely inside the averaging domain does not contribute to the ambiguity in the stress tensor.} \label{fig:delta_sigma} \end{figure} We would like to show that $\Delta \bm{\sigma} \bm{n}_1=\bm{0}$, where $\bm{n}_1$ is the normal vector as shown in \fref{fig:delta_sigma}. The essential idea is to interchange the integration and summation in \eref{eqn:delta_stress} and split the terms appearing in the summation into fractions, such that each fraction yields a zero contribution to $\Delta\bm{\sigma}\bm{n}_1$. In order to show this, we partition the averaging domain into regions such that no region contains a particle in its interior and the partition surfaces are perpendicular to the normal (see \fref{fig:delta_sigma}). Let $h_P$ denote the width of the partition $P$.
Using \eref{eqn:delta_stress}, we can now write $\Delta \bm{\sigma} \bm{n}_1$ as \begin{align} \Delta \bm{\sigma} \bm{n}_1 &= \frac{1}{2 \tau \operatorname{vol}(\Omega_{\bm{x}})} \int_t^{t+\tau} \sum_{P} \sum_{\alpha\beta \cap P} \left [-\Delta \bm{f}_{\alpha\beta} (\bm{x}_\alpha - \bm{x}_\beta) \cdot \bm{n}_1\frac{h_P}{\abs{(\bm{x}_\alpha - \bm{x}_\beta) \cdot \bm{n}_1}} \right], \notag \\ &= \frac{1}{2 \tau \operatorname{vol}(\Omega_{\bm{x}})} \int_t^{t+\tau} \sum_{P} h_P \Delta\bm{F}_P, \end{align} where $\sum_{\alpha\beta \cap P}$ denotes the summation over the bonds crossing partition $P$, and \begin{equation} \Delta\bm{F}_P = \sum_{\alpha\beta \cap P} \left [-\Delta \bm{f}_{\alpha\beta} \frac{(\bm{x}_\alpha - \bm{x}_\beta) \cdot \bm{n}_1}{\abs{(\bm{x}_\alpha - \bm{x}_\beta) \cdot \bm{n}_1}} \right ] \end{equation} is the \emph{net} force on particles on one side of the partition due to particles on the other side. Since both representations give the same total force on each particle, the force difference, or net force, on each particle is zero and therefore, $\Delta\bm{F}_P=\bm{0}$. For example, for the partition shown in the figure, \begin{equation} \Delta\bm{F}_P = -2(\Delta \bm{f}_{51} + \Delta \bm{f}_{43}). \end{equation} Since $\Delta \bm{f}_{45} = - \Delta \bm{f}_{54}$, we have \begin{align} \Delta\bm{F}_P &= -2(\Delta \bm{f}_{51} + \Delta \bm{f}_{43} + \Delta \bm{f}_{45} + \Delta \bm{f}_{54}) \notag \\ &= -2(\Delta \bm{f}_{43} + \Delta \bm{f}_{45}) - 2(\Delta \bm{f}_{51} + \Delta \bm{f}_{54}) \notag \\ &= -2\Delta \bm{f}^{\rm int}_4 - 2\Delta \bm{f}^{\rm int}_5 = \bm{0} + \bm{0} = \bm{0}. \end{align} Hence, $\Delta\bm{\sigma} \bm{n}_1 = \bm{0}$. Undertaking a similar argument in the other directions, we see that $\Delta\bm{\sigma} \bm{n}_i=\bm{0}$. These results together imply that $\Delta\bm{\sigma}=\bm{0}$. 
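The partition argument can be checked numerically. The difference field $\Delta\bm{f}_{\alpha\beta}$ between two valid decompositions is antisymmetric and has zero per-particle sums; a bond ``cycle'' (a hypothetical example chosen purely for illustration) is the simplest field with these properties, and the net force it transmits across every partition plane vanishes, exactly as argued above:

```python
import numpy as np

# five particles on a line (coordinates along the partition normal; made up)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])

# Difference between two valid force decompositions: antisymmetric in (a, b)
# with zero row sums, because both decompositions yield the same total force
# on every particle.  A bond "cycle" is the simplest such field (a
# hypothetical example, used only for illustration).
df = np.zeros((5, 5))
for a, b in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]:
    df[a, b], df[b, a] = 1.0, -1.0

assert np.allclose(df.sum(axis=1), 0.0)      # equal net force per particle

def delta_F(p):
    """Net force difference transmitted across a partition plane at p."""
    return sum(df[a, b]
               for a in range(5) for b in range(5)
               if x[a] < p < x[b])            # oriented crossing bonds

# the partition argument: Delta F_P vanishes for every partition
for p in (0.5, 1.5, 2.5, 3.5):
    assert delta_F(p) == 0.0
```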
Given this, we can conclude that any cluster of particles that lies entirely within the averaging domain does not contribute to the spatial average of the difference between two stress definitions. Consequently, the only non-zero contribution comes from those clusters whose bonds cross the boundary of the averaging domain. Since this contribution scales with the surface area of the averaging domain while the average is taken over its volume, it tends to zero as the volume tends to infinity. \subsubsection*{Uniqueness and the addition of a divergence-free field to the stress} The second source of non-uniqueness of the stress tensor involves the addition to it of a divergence-free field. This issue is partly addressed by the result (shown in \sref{sec:virial}) that the spatially-averaged pointwise stress converges to the virial stress in the thermodynamic limit (see footnote~\ref{foot:tdlimit} on page~\pageref{foot:tdlimit}). Consider the pointwise stress, $\bm{\sigma}$, obtained through the Irving--Kirkwood--Noll procedure, which satisfies the balance of linear momentum, and a new pointwise stress, $\hat{\bm{\sigma}}=\bm{\sigma}+\tilde{\bm{\sigma}}$, where $\operatorname{div}_{\bm{x}}\tilde{\bm{\sigma}}=\bm{0}$. Clearly, $\hat{\bm{\sigma}}$ also satisfies the balance of linear momentum and is therefore also a valid solution. The spatially-averaged stress obtained from the new definition is \begin{equation} \hat{\bm{\sigma}}_w(\bm{x},t) = \int_{\real{3}} w(\bm{y} - \bm{x}) \hat{\bm{\sigma}}(\bm{y},t) \, d\bm{y} = \int_{\real{3}} w(\bm{y} - \bm{x}) (\bm{\sigma}(\bm{y},t)+\tilde{\bm{\sigma}}(\bm{y},t))\, d\bm{y}. \label{eqn:tildesigw} \end{equation} We showed in \sref{sec:virial} that in the thermodynamic limit, the spatially-averaged pointwise stress, $\bm{\sigma}$, converges to the virial stress. We also expect $\hat{\bm{\sigma}}_w$ to equal the virial stress in this limit (since any macroscopic stress must converge to this value under equilibrium conditions).
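The surface-area scaling invoked earlier in this subsection can be illustrated with a toy count on a simple cubic lattice with nearest-neighbor bonds (an assumption made only for illustration): among the bonds touching a cubic averaging domain of side $L$, the fraction crossing its boundary works out to $2/(L+1)$, which vanishes as the domain grows.

```python
# Fraction of nearest-neighbor bonds touching a cubic averaging domain of
# side L (simple cubic lattice, unit spacing -- a toy model) that cross its
# boundary.  Analytically the fraction is 2/(L+1), vanishing for large L.
def crossing_fraction(L):
    inside = lambda s: all(1 <= c <= L for c in s)   # inner L^3 block of sites
    crossing = total = 0
    for i in range(L + 2):
        for j in range(L + 2):
            for k in range(L + 2):
                for dx, dy, dz in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
                    b = (i + dx, j + dy, k + dz)
                    if max(b) > L + 1:               # bond leaves the lattice
                        continue
                    n_in = inside((i, j, k)) + inside(b)
                    if n_in >= 1:                    # bond touches the domain
                        total += 1
                        if n_in == 1:                # bond crosses the boundary
                            crossing += 1
    return crossing / total

fracs = [crossing_fraction(L) for L in (4, 8, 16)]
assert fracs[0] > fracs[1] > fracs[2]                # decays with domain size
assert all(abs(f - 2 / (L + 1)) < 1e-12 for f, L in zip(fracs, (4, 8, 16)))
```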
Therefore, \eref{eqn:tildesigw} reduces to \begin{equation} \lim_{\rm TD} \int_{\real{3}} w(\bm{y} - \bm{x}) \tilde{\bm{\sigma}}(\bm{y},t)\, d\bm{y}=\bm{0}, \label{eqn:tildesigconstraint} \end{equation} where $\lim_{\rm TD}$ refers to the thermodynamic limit. Equation~\eref{eqn:tildesigconstraint} places a strong constraint on allowable forms for $\tilde{\bm{\sigma}}$, the implications of which are left for future work. \section{Numerical Experiments} \label{ch:experiment} In this section, we describe several numerical experiments, involving molecular dynamics and lattice statics simulations, conducted to capture differences in the spatially-averaged stress measures derived in \sref{ch:compare}. We consider the Hardy stress defined in \eref{eqn:hardy_stress_tavg}, the Tsai traction defined in \eref{eqn:tsai_traction}, the virial stress defined in \eref{eqn:virial_stress_tavg} and the DA\ stress defined in \eref{eqn:murdoch_stress}. We will sometimes refer to these as the ``microscopic definitions'' or the ``microscopically-based stress tensors''. \subsection{Experiment 1} \label{sec:exp_1} We begin with the study of the kinetic part of the stress tensor. From the discussion in \sref{ch:compare}, it is clear that unlike the definition for the potential part of the stress tensor, there is no ambiguity in the definition for the kinetic part of stress. However, the kinetic part of the stress may appear to be at odds with the continuum definition of stress that is stated solely in terms of the forces acting between different parts of the body. The need for the kinetic part of stress becomes apparent when considering an ideal gas, where the potential interaction term is zero by definition and therefore it is the kinetic term that is wholly responsible for the transmission of pressure. \begin{figure} \centering \includegraphics[totalheight=0.3\textheight]{fig6_1} \caption{The virial pressure as a function of time is plotted for an isolated cube of aluminum at $300\rm{K}$. 
The total pressure/virial pressure is the sum of the kinetic and potential pressures.} \label{fig:isolated} \end{figure} To demonstrate that the kinetic term in the stress tensor does indeed exist, we perform the following constant energy molecular dynamics simulation of an isolated cube. The cube, consisting of $4000$ aluminum atoms in a face-centered cubic (fcc) arrangement ($10\times10\times10$ unit cells), is floating freely in a vacuum. The atoms interact according to an EAM potential for aluminum due to Ercolessi and Adams \cite{ercolessi1994}: \begin{align} \mathcal{V}^{\rm EAM}_{\rm int} = \frac{1}{2}\sum_{\substack{\alpha,\beta \\ \alpha \ne \beta}} \mathcal{V}_{\alpha\beta}(r_{\alpha\beta}) + \sum_\alpha \mathcal{U}_\alpha(\rho_\alpha), \quad \rho_\alpha=\sum_{\substack{\beta \\ \beta \ne \alpha}} f_\beta(r_{\alpha\beta}). \end{align} Here $\mathcal{U}_\alpha$, called the \emph{embedding function}, is the energy required to embed particle $\alpha$ in the electron density, $\rho_\alpha$, due to the surrounding particles, and $f_\beta(r_{\alpha\beta})$ is the electron density of particle $\beta$ at $\bm{x}_\alpha$. The initial positions of the atoms are randomly perturbed by a small amount relative to their zero temperature equilibrium positions and the system is evolved by integrating the equations of motion. The initial perturbation is adjusted so that the temperature of the cube is about $300\rm{K}$ (small fluctuations in temperature are expected since temperature is not controlled in the simulation). Since the block is unconstrained, we expect the stress, $\bm{\sigma}$, in the box and consequently the pressure, defined by $p = - \frac{1}{3}\operatorname{tr} \bm{\sigma}$, to be zero. 
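To make the structure of $\mathcal{V}^{\rm EAM}_{\rm int}$ concrete, the sketch below assembles the pair and embedding parts for a small cluster. The functional forms and positions are placeholders chosen for illustration; the actual Ercolessi--Adams potential is defined by fitted tabulated functions.

```python
import math

# Toy EAM functional forms (assumptions for illustration only):
pair  = lambda r: math.exp(-2.0 * r)      # pair term V_ab(r)
edens = lambda r: math.exp(-r)            # electron density f_b(r)
embed = lambda rho: -math.sqrt(rho)       # embedding function U(rho)

def eam_energy(positions):
    n = len(positions)
    d = lambda a, b: math.dist(positions[a], positions[b])
    # pair part: (1/2) sum over ordered pairs with a != b
    e_pair = 0.5 * sum(pair(d(a, b))
                       for a in range(n) for b in range(n) if b != a)
    # embedding part: U(rho_a), with rho_a summed over the neighbors of a
    e_embed = sum(embed(sum(edens(d(a, b)) for b in range(n) if b != a))
                  for a in range(n))
    return e_pair + e_embed

pos = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (0.0, 1.5, 0.0)]  # illustrative cluster
E = eam_energy(pos)
assert E < 0.0            # cohesive for this toy parameterization
```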
The virial expression for calculating the pressure follows from \eref{eqn:virial_1} as \begin{equation} \label{eqn:virial_pressure} p= \frac{1}{3V}\Bigg [ \sum_\alpha m_\alpha \ol{\vnorm{\bm{v}_\alpha}^2} - \frac{1}{2} \sum_{\substack{\alpha,\beta \\ \alpha \ne \beta}} \ol{\vnorm{\bm{f}_{\alpha\beta}} r_{\alpha\beta}} \Bigg ], \end{equation} where \begin{align} \bm{f}_{\alpha\beta} = \frac{\partial \mathcal{V}^{\rm EAM}_{\rm int}}{\partial r_{\alpha\beta}} \frac{\bm{x}_\beta-\bm{x}_\alpha}{r_{\alpha\beta}}. \end{align} The three curves shown in \fref{fig:isolated} are the potential and kinetic parts of the pressure and the total pressure as a function of time, calculated using \eref{eqn:virial_pressure}. As expected, the total pressure tends to zero as the system equilibrates. However, the potential and kinetic parts are \emph{non-zero}, converging to values that are equal and opposite such that their sum is zero. More interestingly, the kinetic part is not insignificant for our system. This clearly shows that the kinetic part cannot be neglected even when considering solid systems. This can be quantified by noting that the kinetic part in \eref{eqn:virial_pressure} is simply the temperature per unit volume given by the equipartition theorem \cite{huang}, $k_B T = 2 \mathcal{T}/3N$, where $\mathcal{T}$ is the kinetic energy. Therefore \eref{eqn:virial_pressure} reduces to \begin{equation} p= \frac{1}{V}\Bigg [ Nk_BT - \frac{1}{6} \sum_{\substack{\alpha,\beta \\ \alpha \ne \beta}} \ol{\mathcal{V}'_{\alpha\beta} r_{\alpha\beta}} \Bigg ]. \end{equation} For example, at 300~K, $\ensuremath{k_{\rm B}} T = 0.02585~\rm{eV}$. The lattice spacing for the system considered is equal to $4.032 \text{\AA}$. Hence, the volume per atom is $V/N = 4.032^3/4 = 16.387~\text{\AA}^3$ and the kinetic pressure is $1.577~\rm{meV}/\text{\AA}^3$. This translates to $252.394$ MPa, which is a considerable stress.
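The back-of-the-envelope estimate above can be reproduced directly; the eV-to-GPa conversion factor used below is the standard value (assumed here, and responsible for a sub-MPa difference from the quoted figure):

```python
# Back-of-the-envelope check of the kinetic pressure quoted in the text.
kB_T = 0.02585                 # eV, k_B T at 300 K
a = 4.032                      # Angstrom, Al lattice constant
vol_per_atom = a**3 / 4.0      # fcc: 4 atoms per cubic unit cell

p_kin = kB_T / vol_per_atom    # eV/A^3
# standard conversion: 1 eV/A^3 = 160.2 GPa (assumed value)
p_MPa = p_kin * 160.2176 * 1.0e3

assert abs(vol_per_atom - 16.387) < 0.01
assert abs(p_kin * 1.0e3 - 1.577) < 0.01     # meV/A^3
assert abs(p_MPa - 252.4) < 1.0              # MPa, matches the quoted value
```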
\subsection{Experiment 2} \label{sec:exp2} It is clear from \sref{sec:exp_1}, that the kinetic stress is a sizable quantity and cannot be neglected. In this experiment, we further explore the interplay between the potential and kinetic parts of the stress. \begin{figure} \centering \subfigure[]{\label{fig:midway}\includegraphics[totalheight=0.25\textheight]{fig6_2a}} \subfigure[]{\label{fig:ontop}\includegraphics[totalheight=0.25\textheight]{fig6_2b}} \subfigure[]{\label{fig:al_4000_periodic}\includegraphics[totalheight=0.25\textheight]{fig6_2c}} \caption{The effect of the position of the Tsai plane on the potential and kinetic parts of stress. Frames (a) and (b) show schematic diagrams of a two-dimensional triangular lattice with (a) the Tsai plane positioned midway between the lattice planes and (b) the Tsai plane positioned almost on top of a lattice plane. The open circles correspond to the ideal lattice positions. The black circles are atoms that are shown in mid-vibration relative to the lattice site as indicated by the arrow. The Tsai plane is indicated by a vertical dashed line. The bonds crossing it appear as dotted lines. Frame (c) shows the plot of the kinetic part of stress $\sigma^{\rm{k}}_{11}$, potential part of stress $\sigma^{\rm{v}}_{11}$ and the total stress $\sigma_{11}$, as a function of the normalized position $s_P = (x_P - x_L)/\Delta x$ of the Tsai plane P, where $x_P$ is the position of $P$, $x_L$ is the position of the lattice plane, and $\Delta x$ is the spacing between the lattice planes.} \end{figure} Consider a crystalline solid at a relatively low temperature under uniform stress. The atoms will vibrate about their mean positions with an amplitude that is small relative to the nearest-neighbor spacing. Now imagine placing a Tsai plane $P$ between two crystal lattice planes and measuring the traction across it. 
If $P$ is midway between the lattice planes (see \fref{fig:midway}), we expect that relatively few atoms will cross $P$ and that consequently the kinetic stress will be small or even zero. In contrast, if $P$ is close to the lattice plane there will be many such crossings and the kinetic stress will be large in magnitude. This seems to suggest that the traction will change as a function of the position of $P$, which would be incorrect since the system is under uniform stress. The reason that this does not occur is that every time an atom crosses $P$, the bonds connected with it reverse directions with respect to $P$, changing a positive contribution to the contact stress to a negative one and vice versa (see the bonds connected with atom $A$ in \fref{fig:midway} and \fref{fig:ontop}). This effect on the potential part of the stress exactly compensates for the change in magnitude of the kinetic stress leaving the total stress constant. This is demonstrated numerically in \fref{fig:al_4000_periodic}. This graph shows the results obtained from a molecular dynamics simulation of the system described in \sref{sec:exp_1}, with periodic boundary conditions. The periodic length of the box is set based on the zero temperature lattice spacing. Consequently, upon heating by a temperature change of $\Delta T$, a compressive stress is built up in the box according to \begin{equation} \label{eqn:strain} \bm{\epsilon} = \bm{s}:\bm{\sigma} + \bm{I}\alpha_T \Delta T = \bm{0}, \end{equation} where $\bm{s}$ is the elastic compliance tensor and $\alpha_T$ is the coefficient of thermal expansion. Inverting this relation for an fcc crystal with cubic symmetry oriented along the crystallographic axes, we have \begin{equation} \label{eqn:constitutive} \sigma_{11} = \sigma_{22} = \sigma_{33} = -(c_{11} + 2c_{12})\alpha_T \Delta T = \sigma, \end{equation} with the rest of the stress components zero. In \eref{eqn:constitutive}, $c_{ij}$ are the elastic constants of the material.
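A quick numerical check of this inversion, using the Ercolessi--Adams values that are substituted in the text, confirms the constrained thermal stress of about $-1.2$ GPa:

```python
# Ercolessi-Adams aluminum constants quoted in the text
c11, c12 = 118.1, 62.3         # GPa
alpha_T  = 1.6e-5              # 1/K
dT       = 310.0               # K

# hydrostatic thermal stress in the fully constrained cubic crystal
sigma = -(c11 + 2.0 * c12) * alpha_T * dT    # GPa
assert abs(sigma - (-1.2)) < 0.01
```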
Substituting in the appropriate values for Ercolessi-Adams EAM aluminum \cite{ercolessi1994} ($c_{11}=118.1$ GPa, $c_{12} = 62.3$ GPa, $\alpha_T = 1.6 \times 10^{-5} \rm{K}^{-1}$) and $\Delta T = 310 \rm{K}$ gives $\sigma=-1.2$ GPa. We see that the total stress in \fref{fig:al_4000_periodic} is constant regardless of the position of the Tsai plane and equal to the expected value of $-1.2$ GPa computed above. However, the kinetic and potential parts change dramatically. When the Tsai plane is away from the lattice planes ($s_P = \pm 0.1$), the kinetic stress is zero and the entire stress is given by the potential part of the stress. As the Tsai plane moves closer to a lattice plane ($\abs{s_P} \to 0$), the kinetic stress becomes more negative (increasing in magnitude) and the potential part of stress increases in exactly the right amount to compensate for this effect. When the Tsai plane is right on top of a lattice plane ($s_P=0$), both the kinetic stress and potential stress are maximum in magnitude, but their sum remains equal to the constant total stress. This is a striking demonstration of the interplay between the kinetic and potential parts of the stress tensor. \subsection{Experiments 3 and 4} In this section, the predictions of the microscopically-based stress tensors are compared with analytical solutions from elasticity theory for two simple boundary-value problems. This is a revealing test, since stress is a continuum concept and therefore the microscopic definitions should reproduce the results of a continuum theory under the same conditions. We perform two numerical experiments. In each experiment, an atomistic boundary-value problem is set up, and the values computed from the discrete system are compared with the ``exact'' result computed from elasticity theory for the same problem using material properties predicted by the interatomic potential used in the atomistic calculations.
The numerical experiments are conducted at zero temperature since there is no controversy regarding the form of the kinetic stress, which is the same for all stress definitions. Therefore, a comparison at zero temperature is sufficient to probe the differences between the stress measures, at least under equilibrium conditions. The properties we are interested in studying are: \begin{enumerate} \item Symmetry of the stress tensor. \item Convergence of the stress tensor to the continuum value with the size of the averaging domain (a three-dimensional volume in the case of virial, Hardy and DA\ stresses and a plane in the case of the Tsai traction). \end{enumerate} \subsubsection*{Interatomic model} The numerical experiments in this section are carried out using a Lennard-Jones potential. The exact choice of material parameters is unimportant, since the objective of the experiment is to compare the values obtained from the microscopically-based stress for the discrete system with the ``exact'' values obtained from the continuum elasticity theory for the same material. The Lennard-Jones parameters, $\epsilon$ and $\sigma$, are therefore arbitrarily set to 1. The potential has the following form: \begin{equation} \label{eqn:lennard_jones} \phi(r) = 4 \left [ \frac{1}{r^{12}} - \frac{1}{r^6} \right ] - 0.0078r^2 +0.0651. \end{equation} Note that the above equation has been rendered dimensionless by expressing lengths in units of $\sigma$ and energy in units of $\epsilon$. As seen in the above equation, the Lennard-Jones potential is modified by the addition of a quadratic term. This is done to ensure that $\phi(r_{\rm{cut}}) = 0$ and $\phi'(r_{\rm{cut}})=0$, where $r_{\rm{cut}} = 2.5$ denotes the cutoff radius for the potential. We refer to this as the ``modified Lennard-Jones potential''. The ground state of this potential is an fcc crystal with a lattice constant of $a=1.556$ and elastic constants $c_{11}=87.652$, $c_{12}=c_{44}=50.379$.
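The role of the quadratic correction is easy to verify; the sketch below evaluates $\phi$ and $\phi'$ at the cutoff and confirms that both vanish to within the precision of the rounded coefficients:

```python
def phi(r):
    # modified Lennard-Jones potential in reduced units
    return 4.0 * (r**-12 - r**-6) - 0.0078 * r**2 + 0.0651

def dphi(r):
    return 4.0 * (-12.0 * r**-13 + 6.0 * r**-7) - 2.0 * 0.0078 * r

r_cut = 2.5
# the quadratic term shifts both the energy and the force to (nearly)
# zero at the cutoff, avoiding a discontinuity there
assert abs(phi(r_cut)) < 1e-3
assert abs(dphi(r_cut)) < 1e-3
```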
The conventional elastic moduli associated with the cubic elastic constants are \cite{lekhnitskii}: \begin{align} E&=(c_{11}^2 + c_{11}c_{12}-2c_{12}^2) / (c_{11} + c_{12}) = 50.877,\\ \mu &= c_{44} = 50.379, \\ \nu &= c_{12}/(c_{11}+c_{12}) = 0.365, \end{align} where $E$ is Young's modulus, $\mu$ is the shear modulus, and $\nu$ is Poisson's ratio. (In the above, elastic constants are given in units of $\epsilon/\sigma^3$. Poisson's ratio is dimensionless.) \subsubsection*{Experiment 3: Dependence of the microscopically-based stress on the averaging domain size} The main aim of this experiment is to study the dependence of the stress given by various definitions (Hardy, Tsai, virial and DA) on the size of the averaging domain. We consider the special case of uniform uniaxial loading with $\sigma_{11}=1$ (all other stress components zero). Our system is a cube of $10\times10\times10$ unit cells ($4000$ atoms) with periodic boundary conditions applied in all directions. To impose the uniaxial loading, the periodic lengths $l_i$ $(i=1,2,3)$ in the three directions are modified according to the linear elastic solution for uniform straining: \begin{align} l_1 &= 10a(1+\sigma_{11}/E) = 10.197a,\\ l_2 = l_3 &= 10a(1-\nu \sigma_{11}/E) = 9.928a. \end{align} We then compute the stress at the center of the periodic cell, while increasing the size of the averaging domain. In comparing the different stress definitions, the domain size is set by the Tsai plane, which is taken to be a square normal to the 1-direction with the same dimension $w$ in the 2 and 3-directions. The averaging domain for the Hardy, virial and DA\ stresses is a sphere of diameter $d=w$.
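The moduli and strained periodic lengths quoted above follow directly from the cubic constants; a quick check (lengths computed in units of the lattice constant $a$, with $\sigma_{11}=1$ in reduced units):

```python
c11, c12, c44 = 87.652, 50.379, 50.379   # cubic constants, units of eps/sigma^3

E  = (c11**2 + c11 * c12 - 2.0 * c12**2) / (c11 + c12)   # Young's modulus
mu = c44                                                  # shear modulus
nu = c12 / (c11 + c12)                                    # Poisson's ratio
assert abs(E - 50.877) < 0.01 and abs(nu - 0.365) < 0.001

# strained periodic lengths in units of the lattice constant a
sigma11 = 1.0
l1 = 10.0 * (1.0 + sigma11 / E)
l2 = 10.0 * (1.0 - nu * sigma11 / E)
assert abs(l1 - 10.197) < 0.001 and abs(l2 - 9.928) < 0.001
```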
The weighting function, $w(r)$, for the Hardy stress is taken to be constant with a suitable mollifying function, \begin{equation} \label{eqn:weight_molly} w(r) = \left \{ \begin{array}{ll} c & \mbox{if $r<R-\epsilon$}\\ \frac{1}{2}c\left[ 1- \cos \left( \frac{R-r}{\epsilon}\pi \right ) \right ] & \mbox{if $R-\epsilon<r<R$} \\ 0 & \mbox{otherwise} \end{array} \right., \end{equation} where $c$ is chosen appropriately to normalize $w$. The results are presented in \fref{fig:window}, where the stress $\sigma_{11}$ is plotted as a function of the normalized domain size, $s=w/10a$. (Recall that the applied value is $\sigma_{11}=1$.) We make the following qualitative observations based on these results: \begin{figure} \centering \includegraphics[totalheight=0.27\textheight]{fig6_3} \caption{Plot showing the dependence of $\sigma_{11}$, calculated using different definitions on the averaging domain size. The variable $s$ represents the ratio of the domain size to the length of the system ($10$ unit cells).} \label{fig:window} \end{figure} \begin{enumerate} \item The Hardy stress converges to the exact value most quickly of all stress definitions and has the least noise. \item The normal stress computed from the Tsai traction oscillates about the exact value with a fluctuation amplitude that decays rather slowly with domain size. The oscillations reflect the symmetry of the crystal as new bonds enter the calculation with increasing plane size. \item The virial stress is always smaller than the Hardy and Tsai stresses since it does not take into account the bonds that cross out of the averaging domain. It appears to be converging towards the exact value, but convergence is slow and even at the maximum domain size studied, the virial stress still has a significant error. \item The DA\ stress is much smaller than all other stresses due to greater averaging. 
\end{enumerate} \subsubsection*{Experiment 4: A plate with a hole under tension} We now consider one of the classical elasticity boundary-value problems: an infinite plate with a hole subjected to uniaxial tension $\sigma_{\infty}$ at infinity. This is traditionally named the \emph{Kirsch} problem for an isotropic material model. Our objective is to compare the microscopically-based stresses computed for a discrete system set up for the Kirsch problem with the exact solution. A complication in making this comparison is that the fcc Lennard-Jones material we are considering is crystalline with cubic symmetry and is not isotropic. We must therefore compare the discrete solution with the more general solution for the Kirsch problem from the theory of elasticity for anisotropic media \cite{lekhnitskii}. For anisotropic materials, the stress concentration\footnote{The stress concentration is defined as the ratio of the maximum stress to the applied stress $\sigma_\infty$. The maximum stress for the Kirsch problem occurs at the circumference of the hole.} at the hole is no longer $3$ (as it is for an isotropic material), but depends on the elastic constants of the material. For the elastic constants of the Lennard-Jones model in \eref{eqn:lennard_jones}, we obtain a stress concentration of $2.408$. In addition to the overall stress concentration, the analytical solution provides the complete stress field about the hole. We can therefore compare the microscopically-based stress fields with the continuum result. In order to model an infinite elastic space, we consider a large square plate oriented along the crystallographic axes consisting of $367{,}590$ atoms, with a hole of radius $25a$, where $a$ is the lattice constant. The plate is constructed by stacking $100\times100\times10$ unit cells and excluding the atoms that lie within the radius of the hole.
The relatively large system size helps to ensure that the variation of the continuum stress is small on the lengthscale of the lattice spacing and minimizes boundary effects near the hole. The atoms interact according to the modified Lennard-Jones potential given in \eref{eqn:lennard_jones}. \begin{figure} \centering \subfigure{\includegraphics[totalheight=0.130\textheight]{fig6_4_01}} \subfigure{\includegraphics[totalheight=0.130\textheight]{fig6_4_02}} \subfigure{\includegraphics[totalheight=0.130\textheight]{fig6_4_03}} \\ \subfigure{\includegraphics[totalheight=0.130\textheight]{fig6_4_04}} \subfigure{\includegraphics[totalheight=0.130\textheight]{fig6_4_05}} \subfigure{\includegraphics[totalheight=0.130\textheight]{fig6_4_06}} \\ \subfigure{\includegraphics[totalheight=0.130\textheight]{fig6_4_07}} \subfigure{\includegraphics[totalheight=0.130\textheight]{fig6_4_08}} \subfigure{\includegraphics[totalheight=0.130\textheight]{fig6_4_09}} \\ \subfigure{\includegraphics[totalheight=0.130\textheight]{fig6_4_10}} \subfigure{\includegraphics[totalheight=0.130\textheight]{fig6_4_11}} \subfigure{\includegraphics[totalheight=0.130\textheight]{fig6_4_12}} \caption{Normalized $\sigma_{11}$ component of the stress along the $x_1=0$ line for an anisotropic plate with a hole subjected to uniaxial tension in the 1-direction. The $x_2$ coordinate is normalized by the height $h$ of the atomistic model. The exact solution for an infinite plate obtained from anisotropic linear elasticity (black solid line) is compared with the results obtained from the three microscopic definitions in the three columns: Hardy, Tsai and virial. The four rows correspond to four different averaging domains constituting $1\%$, $2.5\%$, $5\%$ and $10\%$ of $h$.} \label{fig:11_1} \end{figure} As before, the averaging domain size is set by the length and width of the Tsai plane, with the Hardy, virial and DA\ stress using a sphere of diameter equal to the length of the Tsai plane. 
The system is loaded to $\sigma_{11}=\sigma_\infty$ by displacing the atoms according to the exact solution from continuum mechanics for linear elastic anisotropic media \cite{lekhnitskii}. The applied stress is sufficiently small, so that the assumption of material linearity and the small strain approximation inherent in the elasticity theory provide a good approximation for the behavior of the system. After the atoms are displaced, the stress tensor is evaluated on a uniformly-distributed grid of points on the mid-plane of the plate located at $x_3=0$. A grid of $100\times100$ points is chosen to evaluate the virial, Tsai and Hardy expressions and a grid of $30\times30$ is chosen to evaluate the DA\ stress tensor. A coarser grid is used for the DA\ stress due to the higher computational cost of this calculation. First, we consider the $\sigma_{11}$ component of the stress along the $x_1=0$ line, where we expect the maximum value at the hole surface. The results are plotted in \fref{fig:11_1}, which shows a comparison between the exact value and the three microscopic stress definitions (Hardy, Tsai and virial) for four different averaging domains ranging from $1\%$ to $10\%$ of the height $h$ of the atomistic model. We see that the Hardy and Tsai stresses faithfully follow the exact solution, but then drop off as their averaging domain overlaps with the hole. This drop-off reflects the fact that the microscopically-based stress measures are bulk expressions. The smaller the averaging domain, the closer the microscopic measures can approach the exact stress concentration at the hole surface; however, this increased fidelity comes at the cost of significantly larger fluctuations about the exact value. The virial stress is identically zero for the smallest averaging domain because it is too small to contain complete bonds. For the same reason, the Hardy stress experiences very large fluctuations and a nearly constant average value.
For larger averaging domains, the Hardy stress has smaller fluctuations than the other stress definitions. The reason that the drop-off effect described above is so pronounced in this simulation is that the system is very small by continuum standards. If instead of a hole with a radius of $25a$, we studied a plate with a hole $100$ or $1000$ times larger, using the same-sized averaging domain, the spatially-averaged expressions would get much closer to the correct value before dropping off over the same lengthscale as seen in \fref{fig:11_1}. However, microscopic stress measures are often computed for small system sizes and therefore the difficulties presented in the figure are typical of realistic atomistic simulations. \begin{figure} \centering \subfigure[]{\label{fig:11_exact}\includegraphics[totalheight=0.14\textheight]{fig6_5a}} \\ \subfigure[]{\label{fig:11_virial}\includegraphics[totalheight=0.14\textheight]{fig6_5b}} \subfigure[]{\label{fig:11_tsai}\includegraphics[totalheight=0.14\textheight]{fig6_5c}} \subfigure[]{\label{fig:11_hardy}\includegraphics[totalheight=0.14\textheight]{fig6_5d}} \subfigure[]{\label{fig:11_murdoch}\includegraphics[totalheight=0.14\textheight]{fig6_5e}} \caption{Color density plots of $\sigma_{11}$ are plotted on a common scale: (a) exact (b) virial (c) Tsai (d) Hardy (e) DA. Results plotted for an averaging domain size of $10\%$ of the height of the model.} \label{fig:11} \end{figure} Next, we explore the stress field over the entire plane. The color density plots given in \fref{fig:11} show the variation of $\sigma_{11}$ in the mid-plane of the plate. It can be seen that the stress within the hole is zero.
Comparing the plots for $\sigma_{11}$ of the virial stress (\fref{fig:11_virial}), Tsai stress (\fref{fig:11_tsai}), Hardy stress (\fref{fig:11_hardy}) and the DA\ stress (\fref{fig:11_murdoch}) with the exact solution (\fref{fig:11_exact}), we see that the first three definitions capture the overall variation in $\sigma_{11}$, whereas the DA\ stress does not. However, it is clear that the microscopically-based stress in all of the cases is smeared relative to the stress given by the exact solution and none reach the exact stress concentration of $2.408$. This is a result of the averaging procedure involved in all the definitions as explained above. Although the DA\ stress tensor plotted in \fref{fig:11_murdoch}\footnote{\fref{fig:11_murdoch} and \fref{fig:shear_murdoch} are generated from a much coarser grid compared to the other plots due to the computational expense of the DA\ stress definition.} captures the variations in the field, it is much smaller in magnitude compared to the exact solution. This is because of the greater degree of averaging involved in the DA\ stress tensor. Overall, the Hardy stress is less noisy than the virial or Tsai definitions due to the smoothing afforded by the weighting function. (This is hard to see in the figure.) The stress computed from the Tsai traction, in particular, is more noisy since the averaging is limited to a plane compared with the volume averaging of the Hardy and virial definitions. However, this more localized definition enables the Tsai stress to approach the exact stress concentration most closely of all of the microscopic definitions. Similar results were observed by Cheung and Yip \cite{cheung1991} for the stress near a free surface. 
\begin{figure} \centering \subfigure[]{\label{fig:11_virial_e}\includegraphics[totalheight=0.17\textheight]{fig6_6a}} \subfigure[]{\label{fig:11_tsai_e}\includegraphics[totalheight=0.17\textheight]{fig6_6b}} \subfigure[]{\label{fig:11_hardy_e}\includegraphics[totalheight=0.17\textheight]{fig6_6c}} \caption{Color density plots of error in $\sigma_{11}$, defined as the absolute value of $(\sigma_{11} - \sigma_{11}^{\rm{exact}}) / \sigma_{11}^{\rm{exact}}$, where $\sigma_{11}$ is the stress calculated using (a) virial (b) Tsai and (c) Hardy stress definitions.} \label{fig:11_e} \end{figure} The relative error in $\sigma_{11}$ for the three microscopic definitions is shown in \fref{fig:11_e}. Of the three definitions, the stress computed from the Tsai traction is generally more accurate, followed by the Hardy stress and then the virial stress. As noted above, the Tsai stress does particularly well in capturing the variations in the stress field close to the hole where the fact that it is localized in one direction is particularly helpful. 
\begin{figure} \centering \subfigure[]{\label{fig:12_tsai}\includegraphics[totalheight=0.17\textheight]{fig6_7a}} \subfigure[]{\label{fig:21_tsai}\includegraphics[totalheight=0.17\textheight]{fig6_7b}} \subfigure[]{\label{fig:12_exact}\includegraphics[totalheight=0.17\textheight]{fig6_7c}} \caption{Color density plots of the shear stress components computed from the Tsai traction, (a) $\sigma_{12}$ and (b) $\sigma_{21}$, and (c) the exact shear stress, plotted on a common scale.} \label{fig:12} \end{figure} \begin{figure} \centering \subfigure[]{\label{fig:12_murdoch}\includegraphics[totalheight=0.19\textheight]{fig6_8a}} \subfigure[]{\label{fig:21_murdoch}\includegraphics[totalheight=0.19\textheight]{fig6_8b}} \caption{Color density plots of the DA\ shear stress components, (a) $\sigma_{12}$ and (b) $\sigma_{21}$, plotted on a common scale.} \label{fig:shear_murdoch} \end{figure} It is also interesting to examine the shear stress components. \fref{fig:12} shows the exact result from the continuum solution and the $\sigma_{12}$ and $\sigma_{21}$ components of the stress tensor computed from the Tsai traction from two different planes, one normal to the $1$ direction and the other normal to the $2$ direction. We see that the Tsai stress reproduces the exact distribution and appears generally symmetric. This suggests that the symmetry of the Hardy stress is preserved while taking the limit to arrive at the Tsai traction. The DA\ stress tensor is in general non-symmetric \cite{murdoch2007}, but from \fref{fig:12_murdoch} and \fref{fig:21_murdoch} we observe that in this case, $\sigma_{12}$ and $\sigma_{21}$ appear similar. Interestingly, in contrast to the normal stress, the magnitude of shear stress is captured by the DA\ stress definition, at least for the case studied here. The reason for this is not obvious. \medskip Overall, we can summarize our results as follows. Of the three definitions studied, the Hardy stress is generally preferred. 
It tends to be the smoothest and provides good accuracy away from surfaces as long as the lengthscale over which the continuum fields vary is large relative to the atomic spacing. In situations where either of those conditions breaks down, the Tsai traction provides a better localized measure of stress. The virial stress is less accurate than both. From a computational standpoint, the virial stress has the advantage of being easiest to compute. The evaluation of the bond function in the Hardy stress makes it slightly more expensive to compute, but comparable to the virial stress. The Tsai traction is the most difficult and time-consuming to compute, since it requires the detection of bonds and atoms that cross a given plane during the averaging process. Furthermore, this evaluation must be performed for three separate planes in order to obtain the full stress tensor in three dimensions. \section{Summary and future work} \label{ch:conclusions} In this paper, we provide a unified interpretation and possible generalization of all commonly used definitions for the stress tensor for a discrete system of particles. The macroscopic stress in a system under conditions of thermodynamic equilibrium is derived using the ideas of canonical transformations within the framework of classical statistical mechanics. The stress in non-equilibrium systems is obtained in a two-step procedure: \begin{enumerate} \item The Irving--Kirkwood--Noll procedure \cite{ik1950,noll1955} is applied to obtain a \emph{pointwise} (microscopic) stress tensor. The stress consists of a kinetic part $\bm{\sigma}_{\rm k}$ and a potential part $\bm{\sigma}_{\rm v}$.
The potential part of the stress is obtained for multi-body potentials which have a continuously differentiable extension from the shape space of the system to a higher-dimensional Euclidean space.\footnote{Most practical interatomic potentials satisfy this condition.} This generalizes the original Irving--Kirkwood--Noll approach that was limited to pair potentials. This generalization is based on the important result that for any multi-body potential with a continuously differentiable extension, the force on a particle in a discrete system can always be expressed as a sum of \emph{central} forces. In other words, the \emph{strong law of action and reaction} is always satisfied. \item The pointwise stress obtained in the first step is spatially averaged to obtain the macroscopic stress. \end{enumerate} This two-step procedure provides a general unified framework from which various stress definitions can be derived, including \emph{all} of the main definitions commonly used in practice. In particular, it is shown that the two-step procedure leads directly to the stress tensor derived by Hardy in \cite{hardy1982}. The traction of Cauchy and Tsai \cite{Cauchy1828a,Cauchy1828b,tsai1979} is obtained from the Hardy stress in the limit that the averaging domain is collapsed to a plane. The virial stress of Clausius and Maxwell \cite{clausius1870,maxwell1870,maxwell1874} is an approximation of the Hardy stress tensor for a uniform weighting function where bonds that cross the averaging domain are neglected. The Hardy stress and virial stress become identical in the thermodynamic limit. In this manner, clear connections are established between all of the major stress definitions in use today. The unified framework described above yields a \emph{symmetric} stress tensor for \emph{all} interatomic potentials which have an extension, when used with the standard Irving--Kirkwood--Noll procedure.
However, there are materials in nature, such as liquid crystals, which can have non-symmetric stress tensors. In order to explore the possibility of non-symmetric stress, the Irving--Kirkwood--Noll procedure is generalized to \emph{curved paths of interaction} as suggested in \cite{schofield1982}. This involves the generalization of Noll's lemmas in \cite{noll1955}, originally derived for straight bonds, to arbitrary curved paths as defined in this paper. These generalized lemmas lead to a non-symmetric stress tensor when applied within the Irving--Kirkwood--Noll procedure. It is postulated that curved paths of interaction may be important in systems with internal degrees of freedom, such as liquid crystals and objective structures \cite{james2006}. This is left for future work. One of the key points addressed in this paper is the uniqueness of the stress tensor. Three possible sources of non-uniqueness of the stress tensor are identified and addressed: \begin{enumerate} \item Different pointwise stress definitions can be obtained for different potential energy extensions. This is demonstrated through a simple one-dimensional example. We also show that regardless of the uniqueness of the pointwise stress tensor, the \emph{macroscopic} stress tensor obtained through a procedure of spatial averaging is unique since the difference resulting from alternative pointwise stress tensors tends to zero as the volume of the averaging domain is increased. \item The pointwise stress tensor is obtained by solving the balance of linear momentum, $\operatorname{div}_{\bm{x}} \bm{\sigma}_{\rm{v}} = \bm{h}(\bm{x})$, where $\bm{\sigma}_{\rm{v}}$ is the potential part of the stress tensor, and $\bm{h}(\bm{x})$ is a known function. The Irving--Kirkwood--Noll procedure leads to a closed-form solution to this problem. However, an arbitrary tensor field $\tilde{\bm{\sigma}}$ with zero divergence can be added to $\bm{\sigma}_{\rm{v}}$ without violating the balance of linear momentum.
We argue that in the thermodynamic limit, the non-equilibrium stress obtained through our unified two-step process must converge to the virial stress of equilibrium statistical mechanics. This is similar to the argument made by Wajnryb et al. \cite{dahler1995}. This condition is satisfied by the general stress expression that we obtain. Any divergence-free stress $\tilde{\bm{\sigma}}$ added to this stress must therefore also disappear under equilibrium conditions. This greatly restricts the allowable forms of $\tilde{\bm{\sigma}}$. \item The generalization of the Irving--Kirkwood--Noll procedure from straight bonds to arbitrary curved paths of interaction implies the existence of multiple stress tensors for a given system. However, the existence of curved bonds implies the existence of internal structure for the discrete particles, a possibility already discussed by Kirkwood in \cite{kirkwood1946}. For a system of point masses without internal structure, only straight bonds are possible due to symmetry arguments, and therefore this source of non-uniqueness is removed. The general case of non-symmetric stress must be addressed within the context of an appropriate multipolar theory as discussed by Pitteri \cite{Pitteri1990}. We leave this to future work. \end{enumerate} In addition to the unified framework described above which is based on the Irving--Kirkwood--Noll procedure, we also investigated the Murdoch--Hardy procedure \cite{hardy1982,murdoch1982} of defining continuum fields as direct spatial averages of microscopic quantities. We demonstrate that this approach can be systematized by adopting a non-local continuum perspective and introducing suitable generator functions. The various stress definitions resulting from the Murdoch--Hardy procedure, such as the Hardy, virial and the ``double-averaged'' stress (suggested by Murdoch in \cite{murdoch1994}) can be derived from this unified framework. 
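The observation that any divergence-free field $\tilde{\bm{\sigma}}$ leaves the balance of linear momentum unchanged can be illustrated with a short symbolic check. The sketch below is not part of the derivation in this paper; the two-dimensional "stream function" fields and the function name are arbitrary illustrative choices. Each row of the constructed tensor is of the form $(\partial\phi/\partial y,\,-\partial\phi/\partial x)$ and is therefore identically divergence-free.

```python
import sympy as sp

x, y = sp.symbols('x y')

# Build a 2-D tensor field sigma_tilde whose i-th row is
# (d phi_i/dy, -d phi_i/dx) for arbitrary "stream functions" phi_i;
# each such row is automatically divergence-free.
phi1 = sp.sin(x) * sp.cos(y)   # arbitrary illustrative choice
phi2 = x**2 * y                # arbitrary illustrative choice
sigma_tilde = sp.Matrix([[sp.diff(phi1, y), -sp.diff(phi1, x)],
                         [sp.diff(phi2, y), -sp.diff(phi2, x)]])

def row_divergence(T):
    """(div T)_i = d T_{i1}/dx + d T_{i2}/dy."""
    return sp.Matrix([sp.diff(T[i, 0], x) + sp.diff(T[i, 1], y)
                      for i in range(T.rows)])

# Adding sigma_tilde to any stress field leaves div(sigma) -- and hence
# the balance of linear momentum -- unchanged.
assert row_divergence(sigma_tilde) == sp.Matrix([0, 0])
```

This is exactly the freedom that the equilibrium-limit argument above is used to restrict.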
Although we share the concern regarding the ambiguity of the probability density functions used in the Irving--Kirkwood--Noll procedure that led Murdoch to develop the direct spatial averaging approach \cite{murdoch1993}, we feel that since these probability density functions exist \emph{in principle}, the Irving--Kirkwood--Noll formalism is the correct framework in which to phrase the problem, with approximations introduced later to derive practical expressions. Finally, numerical experiments involving molecular dynamics and lattice statics simulations are conducted to study the various stress definitions derived in this paper. It is generally observed that the Hardy stress definition appears to be most accurate and converges most quickly with the averaging domain size. In situations where a more localized measure of stress is needed, such as near surfaces or defects, the Tsai traction can be used instead. The virial stress is less accurate than the other two definitions and converges most slowly with averaging domain size. Its main advantage is its simple form and low computational cost. One of the most interesting results, which requires further study, comes from Experiment~2 of a crystalline system under uniform hydrostatic stress. \fref{fig:al_4000_periodic} shows that although the potential and kinetic parts of the Tsai traction depend strongly on the position of the Tsai plane between two adjacent lattice planes, the total stress remains constant. This calculation provides a striking demonstration of the interplay between the kinetic and potential parts of the stress tensor. \vfill
\section{Introduction} Random walk on networks is a hot topic in the realm of complex network study \cite{Noh-2004}-\cite{Lam-2012} because of its large number of applications, both theoretical and practical, such as the study of transport-limited reactions \cite{O-B-2011}, target search \cite{Michael-2006}, and disease spreading on relationship networks among individuals \cite{Jia-2018}, to name just a few. Thus, it has attracted considerable attention in the past years from various fields, including applied mathematics, theoretical computer science and statistical physics \cite{Carletti-2020}-\cite{Berenbrink-2010}. In the language of mathematics, a random walk on a network $\mathcal{G}(\mathcal{V},\mathcal{E})$ (defined in detail later) is a simple dynamic process in which a walker at its current position $u\in\mathcal{V}$ hops in the next step to a randomly chosen vertex $v\in\mathcal{V}$ with probability $$P_{u\rightarrow v}=\begin{cases}1/k_{u}, & \text{if $u$ is adjacent to $v$},\\ 0, & \text{otherwise,}\end{cases}$$ in which $k_{u}$ is the number of edges incident with vertex $u$. In other words, a stationary distribution $f:\mathcal{V}\rightarrow \mathbb{R}$ with $\sum_{v}f(v)=1$ satisfies $$f(v)=\sum_{u:\,u\sim v}\frac{1}{k_{u}}f(u),$$ where the symbol $u\sim v$ indicates that vertex $u$ is adjacent to $v$ \cite{Chung-1998}. Along this line of research, we continue to study random walks on some networks of significant interest by considering many relevant topological parameters, in order to better understand how the underlying structure affects dynamic behaviors of this type. Note that the terms graph and network are used interchangeably throughout this paper. In fact, there is a long history of investigating random walks on an arbitrary graph \cite{Shuji-2015}-\cite{Ibe-2013}.
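The transition rule above can be sketched in a few lines of code; the graph, vertex labels and function names below are purely illustrative and not part of the models studied in this paper.

```python
import random

# Adjacency list of a small illustrative graph (a path on four vertices).
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

def step_probability(u, v):
    """P_{u -> v} = 1/k_u if u is adjacent to v, and 0 otherwise."""
    return 1.0 / len(graph[u]) if v in graph[u] else 0.0

def random_walk(start, steps, seed=0):
    """Simulate the walk: each move picks a neighbour uniformly at random."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(steps):
        path.append(rng.choice(graph[path[-1]]))
    return path

# Each row of the transition matrix sums to one.
assert all(abs(sum(step_probability(u, v) for v in graph) - 1.0) < 1e-12
           for u in graph)
```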
As is known, a fundamental task in studying random walks is to estimate some structural parameters, for instance, the mean first-passage time $\overline{\mathcal{F}}$ (defined in detail later). In theory, the closed-form solution for the mean first-passage time $\overline{\mathcal{F}}$ on a graph $\mathcal{G}(\mathcal{V},\mathcal{E})$ may be obtained using $$\overline{\mathcal{F}}=\frac{2|\mathcal{E}|}{|\mathcal{V}|-1}\sum_{i=2}^{|\mathcal{V}|}\frac{1}{\lambda_{i}}$$ where $|\mathcal{E}|$ and $|\mathcal{V}|$ represent the size and order of graph $\mathcal{G}(\mathcal{V},\mathcal{E})$ in the jargon of graph theory \cite{Bondy-2008}, respectively, and $\lambda_{i}$ is the $i$th eigenvalue of the corresponding Laplacian matrix $\mathbf{L}_{\mathcal{G}}$. On the other hand, this typical method becomes intractable when it is not easy to determine the Laplacian spectrum of the network in question. One reason for this is that it can be difficult even to establish the corresponding Laplacian matrix. Such example networks are ubiquitous in the research community, for instance, the famous Vicsek fractal \cite{Vicsek-1983}. This triggers the relevant research and inspires scholars to develop effective approaches, suitable for the different types of networks under consideration, in order to get around the dilemma above. An instructive example is the so-called self-similarity based method for heterogeneous networks \cite{Sheng-2019}. Therefore, in this paper, we build up feasible techniques for the purpose of determining the analytic solution to the mean first-passage time on some networks, in particular, tree networks. In general, the tree models that will be discussed are significantly difficult to analyze with the typical methods when considering random walks in more general cases, as will be shown later.
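The spectral expression above can be checked numerically on a small graph by comparing it against a direct computation of first-passage times from the standard absorbing-state linear system. The sketch below, with illustrative function names and a path graph as the test case, is only a sanity check of the formula, not part of the methods developed later.

```python
import numpy as np

def mfpt_spectral(A):
    """MFPT from the Laplacian spectrum: (2|E|/(|V|-1)) * sum_{i>=2} 1/lambda_i.
       Note that A.sum() equals 2|E| for an undirected adjacency matrix."""
    n = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A
    lam = np.sort(np.linalg.eigvalsh(L))
    return A.sum() / (n - 1) * np.sum(1.0 / lam[1:])

def mfpt_direct(A):
    """MFPT by solving, for each target v, the linear system
       h_u = 1 + sum_{w ~ u} h_w / k_u  with  h_v = 0."""
    n = A.shape[0]
    k = A.sum(axis=1)
    total = 0.0
    for v in range(n):
        idx = [u for u in range(n) if u != v]
        P = A[np.ix_(idx, idx)] / k[idx, None]   # transitions among non-target states
        h = np.linalg.solve(np.eye(n - 1) - P, np.ones(n - 1))
        total += h.sum()
    return total / (n * (n - 1))

# The two computations agree on a path graph with five vertices.
A = np.diag(np.ones(4), 1); A = A + A.T
assert np.isclose(mfpt_spectral(A), mfpt_direct(A))
```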
Trees, the simplest connected networks, have also been well studied in the area of random walks \cite{Graham-1990}-\cite{Ma-2020-1}, and some structural parameters have been reported. For instance, the mean first-passage time $\overline{\mathcal{F}}_{\mathcal{T}}$ on an arbitrary tree $\mathcal{T}$ satisfies $$2|\mathcal{V}_{\mathcal{T}}|\leq\overline{\mathcal{F}}_{\mathcal{T}}\leq\frac{1}{3}|\mathcal{V}_{\mathcal{T}}|^{2}.$$ Note that we have neglected negligible lower-order terms in the above inequality in the large graph size limit. Nonetheless, the problem of how to analytically calculate a concrete formula for the mean first-passage time on trees with intriguing structural features is of considerable interest, both theoretically and experimentally. This is because a great number of real-world applications are associated with some specific trees \cite{Borah-2014}-\cite{Bartolo-2016}. As a topological measure, the mean first-passage time can be chosen to quantify structural properties, including network robustness, of those tree models. Particularly, the well-known Vicsek fractal is often used to describe the underlying structure of some dendrimers and regular hyperbranched polymers \cite{Borah-2014}. The deterministic uniform growth tree is frequently adopted to model epidemic spreading in a population \cite{Moon-1974}. The famous $T$-graph \cite{Redner-2001} and other variants, including the Peano basin fractal \cite{Bartolo-2016}, have been widely utilized in physics and geosciences. Motivated by this, we aim to study random walks on stochastic uniform growth trees, which are a class of more general tree models, and then derive the corresponding solution for the mean first-passage time. To this end, we first propose a principled framework for generating the anticipated tree models. As an immediate result, the tree models mentioned above are grouped into the proposed framework.
In other words, the goal of this study is to treat those trees from a more comprehensive and systematic viewpoint. In view of the applications of those tree models across wide ranges of science \cite{Borah-2014}-\cite{Bartolo-2016}, exact formulae for the mean first-passage time on them have been obtained using methods mainly based on spectral theory \cite{Kemeny-1976,Biggs-1974}. It is worth noticing that the models previously discussed all have deterministic structures. Even so, it is not easy to calculate the mean first-passage time using the typical methods mentioned above. Meanwhile, in most cases, almost all seeds used to create those tree models are specific and simple trees, for instance, an edge or a star. If an arbitrary tree is selected to serve as the seed, the pre-existing methods might not be suitable for calculating an analytic solution for the mean first-passage time. Additionally, when randomness is introduced into the development of tree models of this kind, the typical techniques become prohibitively difficult and may even fail to work. To address this issue, we develop some more convenient combinatorial approaches, and finally obtain what we seek. Our contributions are as follows. (1) A principled framework for creating stochastic uniform growth trees is proposed. More concretely, it consists of three ingredients: a vertex-based uniform growth mechanism, an edge-based uniform growth mechanism, and a mixture uniform growth mechanism. As will become clear, some previous models are contained in this framework completely. Meanwhile, based on this framework, we uncover some close relationships between those pre-existing models, which were previously unknown. (2) Using a combinatorial method, we derive a formula connecting the Wiener index $\mathcal{W}$ to the mean first-passage time $\overline{\mathcal{F}}$ in a tree.
With this formula, the analytic solutions to the mean first-passage time for random walks on all stochastic uniform growth trees built in this work are obtained precisely. As a consequence, many earlier published results in the literature are covered by the formulae derived here. More importantly, no complicated computations are involved in the process of determining the solutions. (3) With the concept of network criticality, we find that the underlying topological structure of a stochastic uniform growth tree plays a key role in studying random walks on it. At the same time, the scaling relations between the quantity $\overline{\mathcal{F}}$ and the vertex number on all stochastic uniform growth trees proposed herein are also analytically obtained. They can fairly be regarded as generalizations of the previously reported results. The rest of this paper is organized as follows. In Section 2, we introduce some conventional notation from graph theory. In Section 3, we propose some widely-studied graphic operations to establish a principled framework for generating potential candidate models as the objectives of this paper. Some example networks are listed for the purpose of better understanding the proposed framework. Next, the main results are shown in Section 4. More specifically, a formal connection between the Wiener index $\mathcal{W}$ and the mean first-passage time $\overline{\mathcal{F}}$ in a tree is built in a combinatorial manner. This enables us to derive the analytic solutions to the mean first-passage time for random walks on all the above-mentioned models in a more convenient way. Accordingly, some detailed discussions are provided in order to understand how the underlying structure affects dynamic behaviors of this kind. Finally, we draw conclusions in the last section.
\section{Definitions and notations} It is a convention in graph theory to denote a graph by $\mathcal{G}(\mathcal{V},\mathcal{E})$, which contains a set $\mathcal{V}$ of vertices and a set $\mathcal{E}$ of edges running between vertices. Accordingly, the symbols $|\mathcal{V}|$ and $|\mathcal{E}|$ represent the vertex number and edge number, respectively. $\mathcal{N}_{u}$ is used to indicate the neighboring set of vertex $u$. At the same time, let the notation $[a,b]$ denote the collection of integers that are no less than $a$ and no greater than $b$. \subsection{Matrix representation of graph} Roughly speaking, it is convenient to interpret a graph $\mathcal{G}(\mathcal{V},\mathcal{E})$ via its adjacency matrix $\mathbf{A}=(a_{uv})$, which is defined in the following form $$a_{uv}=\left\{\begin{aligned}&1, \quad\text{vertex $u$ is connected to $v$ via an edge}\\ &0,\quad\text{otherwise}. \end{aligned}\right. $$ It is clear that this kind of representation encapsulates some basic information about the underlying structure of the graph itself. For instance, the degree $k_{u}$ of vertex $u$ is given by $k_{u}=\sum_{v=1}^{|\mathcal{V}|}a_{uv}$. To make further progress, the diagonal degree matrix, denoted by $\mathbf{D}$, is defined as follows: the $i$th diagonal entry is $k_{i}$, while all non-diagonal entries are zero, that is, $\mathbf{D}=\text{diag}[k_{1},k_{2},\dots,k_{|\mathcal{V}|}]$. Based on this, the corresponding Laplacian matrix, denoted by $\mathbf{L}$, is expressed as $\mathbf{L}=\mathbf{D}-\mathbf{A}$. Accordingly, the normalized version is $\mathbf{\mathcal{L}}=\mathbf{I}-\mathbf{D}^{-1}\mathbf{A}$, where $\mathbf{I}$ is an identity matrix of the proper dimension and $\mathbf{D}^{-1}$ indicates the inverse of matrix $\mathbf{D}$.
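The matrix representations above can be assembled directly; a minimal sketch follows, in which the four-vertex star used as a test graph is an illustrative choice.

```python
import numpy as np

# Adjacency matrix of an illustrative four-vertex star (vertex 0 is the center).
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]], dtype=float)

k = A.sum(axis=1)                               # degrees k_u = sum_v a_{uv}
D = np.diag(k)                                  # diagonal degree matrix
L = D - A                                       # combinatorial Laplacian L = D - A
L_norm = np.eye(len(k)) - np.linalg.inv(D) @ A  # normalized version I - D^{-1} A

assert np.allclose(L.sum(axis=1), 0)            # Laplacian rows sum to zero
assert np.allclose(L_norm.sum(axis=1), 0)
```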
For our purpose, let $\lambda_{1},\lambda_{2},\cdots,\lambda_{|\mathcal{V}|}$ denote the $|\mathcal{V}|$ eigenvalues of the Laplacian matrix $\mathbf{L}$, arranged in non-decreasing order, $0=\lambda_{1}\leq\lambda_{2}\leq\cdots\leq\lambda_{|\mathcal{V}|}$; for a connected graph, $\lambda_{i}>0$ for all $i\geq2$. \subsection{Shortest path length} Given a pair of vertices, say $u$ and $v$, in graph $\mathcal{G}(\mathcal{V},\mathcal{E})$, the distance $d_{uv}$, also commonly called the shortest path length, is the edge number of any shortest path joining vertices $u$ and $v$. The diameter is the maximum among all the distances in the graph. The sum over the distances of all possible vertex pairs in graph $\mathcal{G}(\mathcal{V},\mathcal{E})$, denoted by $\mathcal{W}$ and also called the Wiener index, is by definition given by \begin{equation}\label{eqa:MF-2-1-1} \mathcal{W}=\frac{1}{2}\sum_{u\in \mathcal{V}}\sum_{v\in \mathcal{V}}d_{uv}. \end{equation} Meanwhile, we can define the mean shortest path length $\overline{\mathcal{W}}$ in the following form \begin{equation}\label{eqa:MF-2-1-2} \overline{\mathcal{W}}=\frac{\mathcal{W}}{|\mathcal{V}|(|\mathcal{V}|-1)/2}. \end{equation} \subsection{Random walks on graph} When considering random walks on a network, an important and fundamental parameter is the first-passage time ($FPT$) for an arbitrarily given pair of vertices $u$ and $v$. By definition, the first-passage time from vertex $u$ to $v$, denoted by $\mathcal{F}_{u\rightarrow v}$, is the expected time taken by a walker starting out from vertex $u$ to first reach its destination vertex $v$. As before, for network $\mathcal{G}(\mathcal{V},\mathcal{E})$ as a whole, we can define two analogs relevant to $FPT$ for random walks, i.e., \begin{equation}\label{eqa:MF-2-2-1} \mathcal{F}=\sum_{u}\sum_{v}\mathcal{F}_{u\rightarrow v}, \end{equation} and \begin{equation}\label{eqa:MF-2-2-2} \overline{\mathcal{F}}=\frac{\mathcal{F}}{|\mathcal{V}|(|\mathcal{V}|-1)}.
\end{equation} It is worth mentioning that, in general, the quantity $\mathcal{F}_{u\rightarrow v}$ is not necessarily equal to $\mathcal{F}_{v\rightarrow u}$ when a walker performs random walks on a network. Thus, the factor of $2$ is eliminated in the preceding two equations compared with Eqs.(\ref{eqa:MF-2-1-1}) and (\ref{eqa:MF-2-1-2}). The parameter $\overline{\mathcal{F}}$ is often called the mean first-passage time ($MFPT$) for the sake of simplicity. If the network under discussion is a tree $\mathcal{T}$, there is an expression for the $MFPT$ based on the eigenvalues of the Laplacian matrix $\mathbf{L_{\mathcal{T}}}$, which is given by \begin{equation}\label{eqa:MF-2-2-3} \overline{\mathcal{F}}_{\mathcal{T}}=2\sum_{i=2}^{|\mathcal{V}_{\mathcal{T}}|}\frac{1}{\lambda_{i}} \end{equation} where the symbol $|\mathcal{V}_{\mathcal{T}}|$ represents the total number of vertices in tree $\mathcal{T}$. As seen in the literature \cite{Vicsek-1983,Sheng-2019}, most previous work concerning the quantity $MFPT$ on trees with intriguing properties, such as fractal features, is based on Eq.(\ref{eqa:MF-2-2-3}). Generally speaking, studying the mean first-passage time via Laplacian spectra is a standard technique for all trees. \subsection{Electric network} For a given graph $\mathcal{G}(\mathcal{V},\mathcal{E})$, we can construct a corresponding electrical network, referred to as $\mathcal{G}^{\dagger}(\mathcal{V}^{\dagger},\mathcal{E}^{\dagger})$, by replacing each edge in $\mathcal{E}$ with a unit resistor. For any two distinct vertices $u$ and $v$ in $\mathcal{V}^{\dagger}$, the effective resistance $\Omega_{uv}$ between them is defined as the potential difference between $u$ and $v$ when a unit current from $u$ to $v$ is maintained. In the case where $u$ is identical to $v$, $\Omega_{uv}$ is equal to zero. As is known, effective resistance is in fact a measure of distance.
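The quantities defined so far can be computed directly on a small example: the sketch below evaluates the Wiener index of Eq.(\ref{eqa:MF-2-1-1}) by breadth-first search and the tree $MFPT$ of Eq.(\ref{eqa:MF-2-2-3}) from the Laplacian spectrum. The star tree used here is an illustrative choice; for it, $\mathcal{W}=9$ and $\overline{\mathcal{F}}=4.5$.

```python
from collections import deque
import numpy as np

# Star tree S_3: center 0 joined to leaves 1, 2, 3 (illustrative choice).
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
n = len(adj)

def bfs_dist(s):
    """All shortest-path distances from s by breadth-first search."""
    d = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

# Wiener index W = (1/2) sum_u sum_v d_{uv}
W = sum(sum(bfs_dist(u).values()) for u in adj) / 2

# Tree MFPT from the Laplacian spectrum: 2 * sum_{i>=2} 1/lambda_i
A = np.zeros((n, n))
for u, nbrs in adj.items():
    for v in nbrs:
        A[u, v] = 1.0
lam = np.sort(np.linalg.eigvalsh(np.diag(A.sum(axis=1)) - A))
mfpt = 2 * np.sum(1.0 / lam[1:])
```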
So, the sum over the effective resistances $\Omega_{uv}$ of all possible vertex pairs can be expressed as \begin{equation}\label{eqa:MF-2-4-1} \mathcal{R}_{\mathcal{G}^{\dagger}}=\sum_{u\in \mathcal{V}^{\dagger}}\sum_{v\in \mathcal{V}^{\dagger}}\Omega_{uv}. \end{equation} The quantity $\mathcal{R}_{\mathcal{G}^{\dagger}}$ is commonly called the Kirchhoff index of the electrical network $\mathcal{G}^{\dagger}(\mathcal{V}^{\dagger},\mathcal{E}^{\dagger})$. Analogously, the Kirchhoff index $\mathcal{R}_{\mathcal{G}^{\dagger}}$ can also be calculated in terms of the eigenvalues of the Laplacian matrix $\mathbf{L_{\mathcal{G}}}$ of its underlying graph $\mathcal{G}(\mathcal{V},\mathcal{E})$, as below \begin{equation}\label{eqa:MF-2-4-2} \mathcal{R}_{\mathcal{G}^{\dagger}}=2|\mathcal{V}|\sum_{i=2}^{|\mathcal{V}|}\frac{1}{\lambda_{i}}. \end{equation} The mean effective resistance $\overline{\mathcal{R}_{\mathcal{G}^{\dagger}}}$ in $\mathcal{G}^{\dagger}(\mathcal{V}^{\dagger},\mathcal{E}^{\dagger})$, called the network criticality, reads \begin{equation}\label{eqa:MF-2-4-3} \overline{\mathcal{R}_{\mathcal{G}^{\dagger}}}=\frac{\mathcal{R}_{\mathcal{G}^{\dagger}}}{|\mathcal{V}|(|\mathcal{V}|-1)}=\frac{2}{|\mathcal{V}|-1}\sum_{i=2}^{|\mathcal{V}|}\frac{1}{\lambda_{i}}. \end{equation} $\overline{\mathcal{R}_{\mathcal{G}^{\dagger}}}$ quantifies the robustness of $\mathcal{G}(\mathcal{V},\mathcal{E})$ as a communication network: a smaller value of $\overline{\mathcal{R}_{\mathcal{G}^{\dagger}}}$ implies that network $\mathcal{G}(\mathcal{V},\mathcal{E})$ is more robust. For brevity, we use the single symbol $\mathcal{G}(\mathcal{V},\mathcal{E})$ to represent both a graph and its corresponding electrical network in the remainder of this paper. \section{Graphic operation and Framework} Here, we propose some graphic operations to be used later. It is worth noting that the operations below can be further generalized in many ways.
Some simple generalized versions are shown in the rest of this section. Nonetheless, we aim to clarify the ideas behind these operations from the perspective of methodology. That is to say, it suffices to consider only the graphic operations defined below. For ease of exposition, let us first define four vectors, say $\overrightarrow{m}_{\mu}$, $\overrightarrow{n}_{\nu}$, $\overrightarrow{p}_{\mu}$ and $\overrightarrow{q}_{\nu}$, in the following form $$\overrightarrow{m}_{\mu}=(m_{1},m_{2},\cdots,m_{\mu}),\qquad \overrightarrow{n}_{\nu}=(n_{1},n_{2},\cdots,n_{\nu}),\qquad \overrightarrow{p}_{\mu}=(p_{1},p_{2},\cdots,p_{\mu}),\qquad \overrightarrow{q}_{\nu}=(q_{1},q_{2},\cdots,q_{\nu}).$$ Additionally, we require that the last two vectors, $\overrightarrow{p}_{\mu}$ and $\overrightarrow{q}_{\nu}$, meet the following criteria $$\forall i \quad p_{i}\geq0,\quad \text{and}\quad ||\overrightarrow{p}_{\mu}||_{1}=1;\qquad \forall j \quad q_{j}\geq0,\quad \text{and}\quad ||\overrightarrow{q}_{\nu}||_{1}=1$$ where $||\bullet||_{1}$ is the $\ell_{1}$-norm. Clearly, vectors $\overrightarrow{p}_{\mu}$ and $\overrightarrow{q}_{\nu}$ lie in the $\mu$-dimensional and $\nu$-dimensional probability simplices, respectively. Next, each entry $m_{i}$ in vector $\overrightarrow{m}_{\mu}$ is an arbitrary positive integer. At the same time, we demand $m_{i}\neq m_{j}$ for an arbitrary pair of distinct indices $i$ and $j$. A similar requirement holds for vector $\overrightarrow{n}_{\nu}$. \begin{figure} \centering \includegraphics[height=3cm]{MF-2020-FIG-1.jpg}\\ \vskip0.5cm {\small Fig.1. (Color online) The diagrams of the three operations. For convenience, we only show the deterministic versions corresponding to the three operations. In particular, the seed is an edge $uv$. When executing Operation-I, we assume that $\mu=3$ and $m_{i}=1$ for all $i\in[1,3]$, as shown in the leftmost panel.
Similarly, assuming that $\mu=\nu=1$, $m_{1}=2$ and $n_{1}=1$ in Operation-II, the middle panel shows an example. In the third operation, as shown in the rightmost panel, we use $\mu=4$ and $m_{i}=1$ for all $i\in[1,4]$. Note that red edges represent the edges newly created while implementing an operation.} \end{figure} \textbf{Definition 1} For a given graph $\mathcal{G}(\mathcal{V},\mathcal{E})$, \emph{Operation-I} attaches $\mu$ paths to each vertex $v$, each having length $m_{i}-1$ with probability $p_{i}$. This operation is called the \emph{Vertex-based uniform growth mechanism} ($VUGM$) mainly because it is applied to each vertex without considering the structural properties of the vertex itself, such as its degree. \emph{Example 1} When the seed is an edge and each path degenerates into an isolated vertex, the resulting graph is in fact the deterministic uniform growth tree, denoted by $\mathcal{Y}_{I}(t)$, after iteratively applying Operation-I $t$ times. Trees of this kind have been well studied due to their prevalent real-world applications \cite{Masuda-2017}, for instance, modeling epidemic spread. \textbf{Definition 2} Given a graph $\mathcal{G}(\mathcal{V},\mathcal{E})$, we can insert the centres of $m_{i}$ star-like graphs\footnote[1]{A star-like graph $\mathcal{G}(\mathcal{V},\mathcal{E})$ is one that shares a topological structure similar to the star graph \cite{Bondy-2008}. That is to say, there exists a vertex acting as the center of the star-like graph $\mathcal{G}(\mathcal{V},\mathcal{E})$, and this central vertex is attached to some paths. Vividly, each path is also viewed as a ``tentacle" for brevity. An illustrative example is plotted in the Appendix.} into each edge $uv$ with probability $p_{i}$, where each star-like graph has $\nu$ ``tentacles". Here, each ``tentacle" has $n_{j}$ vertices with probability $q_{j}$, independently.
Such a manipulation is defined as \emph{Operation-II}, which is also viewed as the \emph{Edge-based uniform growth mechanism} ($EUGM$). \emph{Example 2} Similarly, if the seed is an edge and $m_{i}$ and $n_{j}$ are equal to $1$ for all $i$ and $j$, we obtain the well-known $T$-graph, denoted by $\mathcal{Y}_{II}(t)$, by iteratively executing Operation-II $t$ times. Some structural properties of the $T$-graph have been discussed in \cite{Kahng-1989,Agliari-2008}, particularly because it may serve as a simple model illustrating the inhomogeneity and scale-invariance of many disordered materials in physics. \textbf{Definition 3} Considering a graph $\mathcal{G}(\mathcal{V},\mathcal{E})$ whose largest vertex degree $k_{max}$ is no greater than the parameter $\mu$, we first insert $m_{i}$ new vertices into each edge incident with vertex $v$ with probability $p_{i}$. Next, two cases need to be considered: (1) If the degree $k_{v}$ of vertex $v$ is equal to $\mu$, then there is nothing more to do; (2) Otherwise, i.e., if $k_{v}<\mu$, we connect $\mu-k_{v}$ paths to vertex $v$, in which each newly added path has $m_{i}$ vertices with probability $p_{i}$. The procedure above is referred to as \emph{Operation-III}, which is also called the \emph{mixture uniform growth mechanism} ($MUGM$). \emph{Example 3} We select a star $S_{\mu}$ on $\mu+1$ vertices as the seed, assume that each entry $m_{i}$ in vector $\overrightarrow{m}_{\mu}$ is equal to $1$, and obtain the famous Vicsek fractal $V_{1}^{\mu}(t)$ after running Operation-III $t$ times. The key parameters pertaining to the Vicsek fractal $V_{1}^{\mu}(t)$, such as the mean first-passage time, have been widely studied in published papers including Ref.\cite{Zhang-2010}. One of the most important reasons for this is that the Vicsek fractal $V_{1}^{\mu}(t)$ can be utilized to describe the underlying structure of some polymers in chemistry.
Figure 1 shows some examples that help clarify the details of the operations introduced above. As mentioned, the goal of this paper is to study stochastic uniform growth trees, so the initial graph (the seed) is always an arbitrary tree $\mathcal{T}$; the seeds selected in Examples 1-3 are simply trees with specific structural properties. In the following, we establish a principled framework $\Upsilon$ based on the three types of operations stated in Definitions 1-3.

\textbf{Framework $\Upsilon$} At $t=0$, an arbitrary tree $\mathcal{T}$ is chosen as the seed and the four vectors are defined as above. At $t=1$, one of the three operations is applied to tree $\mathcal{T}$; the resulting tree is denoted by $\mathcal{T}(1)$. More specifically, applying Operation-I yields tree $\mathcal{T}_{I}(1)$, and similarly for the other two operations. At $t\geq2$, the new generation $\mathcal{T}(t)$ is obtained from the preceding tree $\mathcal{T}(t-1)$ by implementing the same operation as in the previous time step. We illustrate the framework $\Upsilon$ in Fig.2 in order to expound the concrete procedure. This framework outputs three distinct families of stochastic uniform growth trees, namely $\mathcal{T}_{I}(t)$, $\mathcal{T}_{II}(t)$ and $\mathcal{T}_{III}(t)$. In principle, each added path in Operation-I may be replaced by an arbitrary tree, which leads to even more general stochastic uniform growth trees; analogous generalizations can also be adopted in the other two operations. Due to space limitations, we omit them here. It is worth noticing that the goal of this work is to provide a guideline for creating stochastic uniform growth trees. In what follows, we focus on structural parameters of the resulting tree networks.
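Although the discussion is kept purely analytic, the framework $\Upsilon$ is straightforward to realize programmatically. The following minimal Python sketch is our own illustration (the adjacency-list representation and the function names are assumptions, not notation from the text); it implements Operation-I and iterates it $t$ times:

```python
import random

def operation_I(adj, mu, lengths, probs, rng):
    """One application of VUGM: attach mu paths to every existing vertex;
    each path independently consists of m_i vertices, where m_i is drawn
    from `lengths` with the corresponding probabilities `probs`."""
    old = len(adj)                       # vertices present before this step
    for v in range(old):
        for _ in range(mu):
            m = rng.choices(lengths, probs)[0]
            prev = v
            for _ in range(m):           # grow one path of m new vertices
                adj.append([prev])
                adj[prev].append(len(adj) - 1)
                prev = len(adj) - 1
    return adj

def grow(seed_adj, mu, lengths, probs, t, seed=0):
    """Framework Upsilon with Operation-I: apply VUGM t times to the seed."""
    rng = random.Random(seed)
    adj = [list(nbrs) for nbrs in seed_adj]
    for _ in range(t):
        operation_I(adj, mu, lengths, probs, rng)
    return adj

# Deterministic special case of Example 1: the seed is an edge and every
# path is a single vertex (m_1 = 1 with probability 1), producing Y_I(t)
# with h*(1 + mu)^t vertices.
tree = grow([[1], [0]], 3, [1], [1.0], 2)
print(len(tree))                         # 2 * (1 + 3)^2 = 32
```

Since every new vertex is attached by exactly one edge, the output always remains a tree.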
\begin{figure}
\centering
\includegraphics[height=4cm]{MF-2020-FIG-2.jpg}\\
\vskip0.5cm
{\small Fig.2. (Color online) Diagram of the principled framework $\Upsilon$ for creating stochastic uniform growth trees.}
\end{figure}

\section{Main Results}

The goal of this section is to present our main results. In particular, we determine the analytic solutions to the mean first-passage time for random walks on the three different kinds of stochastic uniform growth trees output by framework $\Upsilon$. As an immediate consequence, the previously published formulas for the deterministic versions are recovered by substituting the corresponding parameters into our results. First of all, let us recall a lemma.

\textbf{Lemma 1 \cite{Ma-2020-1}} For a designated vertex $v$ in a graph $\mathcal{G}(\mathcal{V},\mathcal{E})$, the sum $\mathcal{F}_{v}$ of first-passage times over all vertices $u$ in its neighbor set $\mathcal{N}_{v}$ obeys
\begin{equation}\label{Section-3-1-0}
\mathcal{F}_{v}=\sum_{u\in\mathcal{N}_{v}}\mathcal{F}_{u\rightarrow v}=2|\mathcal{E}|-k_{v}
\end{equation}
where $k_{v}$ represents the degree of vertex $v$ (see Eq.(14) in \cite{Ma-2020-1} for more details). From Eq.(\ref{Section-3-1-0}) we can obtain a more useful result for a tree $\mathcal{T}$, stated in the next proposition, which enables us to establish a proof of theorem 4.

\textbf{Proposition 2} Given an edge $uv$ in a tree $\mathcal{T}$, the first-passage times $\mathcal{F}_{u\rightarrow v}$ and $\mathcal{F}_{v\rightarrow u}$ are given by
\begin{equation}\label{Section-3-2-0}
\mathcal{F}_{u\rightarrow v}=2|\mathcal{E}_{u}|+1,\qquad \mathcal{F}_{v\rightarrow u}=2|\mathcal{E}_{v}|+1
\end{equation}
where $|\mathcal{E}_{v}|$ is the total number of edges in the $v$-root tree and $|\mathcal{E}_{u}|$ is defined analogously for the $u$-root tree. Here, deleting edge $uv$ from tree $\mathcal{T}$ yields two subtrees: the subtree containing vertex $v$ is called the $v$-root tree, and the other one the $u$-root tree.
\textbf{\emph{Proof}} We validate Eq.(\ref{Section-3-2-0}) by induction on the number of edges. First of all, proposition 2 is trivial when tree $\mathcal{T}$ is a single edge, so we may assume that the number of edges in tree $\mathcal{T}$ is greater than $1$. It is also clear that proposition 2 holds whenever either of the two vertices $u$ and $v$ is a leaf. For the sake of simplicity, we only prove the first equation; the other one can be checked in a similar manner. Assume now that the first equation is true for every root tree with fewer than $|\mathcal{E}_{u}|$ edges. By definition, the quantity $\mathcal{F}_{u\rightarrow v}$ is expressed as
\begin{equation}\label{Section-3-2-1}
\mathcal{F}_{u\rightarrow v}=\frac{1}{k_{u}}+\frac{1}{k_{u}}\sum_{u_{i}(\neq v)\in \mathcal{N}_{u}}(\mathcal{F}_{u_{i}\rightarrow v}+1).
\end{equation}
Since in a tree every walk from $u_{i}$ to $v$ must pass through $u$, Eq.(\ref{Section-3-2-1}) can be rewritten as
\begin{equation}\label{Section-3-2-2}
\mathcal{F}_{u\rightarrow v}=\frac{1}{k_{u}}+\frac{1}{k_{u}}\sum_{u_{i}(\neq v)\in \mathcal{N}_{u}}(\mathcal{F}_{u_{i}\rightarrow u}+\mathcal{F}_{u\rightarrow v}+1).
\end{equation}
Obviously, the number of edges $|\mathcal{E}_{u_{i}}|$ of the $u_{i}$-root tree is strictly smaller than $|\mathcal{E}_{u}|$. By the induction hypothesis, we have
\begin{equation}\label{Section-3-2-3}
\mathcal{F}_{u\rightarrow v}=\frac{1}{k_{u}}+\frac{1}{k_{u}}\sum_{u_{i}(\neq v)\in \mathcal{N}_{u}}\left[(2|\mathcal{E}_{u_{i}}|+1)+\mathcal{F}_{u\rightarrow v}+1\right].
\end{equation}
Multiplying both sides of Eq.(\ref{Section-3-2-3}) by $k_{u}$ yields
\begin{equation}\label{Section-3-2-4}
k_{u}\mathcal{F}_{u\rightarrow v}=1+\sum_{u_{i}(\neq v)\in \mathcal{N}_{u}}\left[(2|\mathcal{E}_{u_{i}}|+1)+\mathcal{F}_{u\rightarrow v}+1\right].
\end{equation}
Next, noting that the sum contains $k_{u}-1$ terms, we have
\begin{equation}\label{Section-3-2-5}
\begin{aligned}\mathcal{F}_{u\rightarrow v}&=k_{u}+\sum_{u_{i}(\neq v)\in \mathcal{N}_{u}}(2|\mathcal{E}_{u_{i}}|+1)\\
&=2\sum_{u_{i}(\neq v)\in \mathcal{N}_{u}}(|\mathcal{E}_{u_{i}}|+1)+1
\end{aligned}.
\end{equation}
Since, by the structure of the tree, $|\mathcal{E}_{u}|=\sum_{u_{i}(\neq v)\in \mathcal{N}_{u}}(|\mathcal{E}_{u_{i}}|+1)$, Eq.(\ref{Section-3-2-5}) completes the proof of Eq.(\ref{Section-3-2-0}).

\textbf{Corollary 3} For an arbitrary pair of vertices $u$ and $v$ in a tree $\mathcal{T}$, the commute time $\mathcal{F}_{v\leftrightarrow u}$ is given by
\begin{equation}\label{Section-3-3-0}
\mathcal{F}_{v\leftrightarrow u}=2|\mathcal{E}_{\mathcal{T}}|d_{uv}
\end{equation}
in which $d_{uv}$ is the distance between them. This is an immediate consequence of proposition 2, and we thus omit its proof here. Note that a more general version of the commute time $\mathcal{F}_{v\leftrightarrow u}$ on graphs has been derived using spectral techniques \cite{Guex-2015}. To make further progress, based on Eq.(\ref{Section-3-3-0}), we can establish a connection between the Wiener index $\mathcal{W}_{\mathcal{T}}$ and the mean first-passage time $\overline{\mathcal{F}}_{\mathcal{T}}$, as shown in the following theorem.

\textbf{Theorem 4} For random walks on a tree $\mathcal{T}$, the two structural parameters $\mathcal{W}_{\mathcal{T}}$ and $\overline{\mathcal{F}}_{\mathcal{T}}$ satisfy
\begin{equation}\label{Section-3-4-0}
2\mathcal{W}_{\mathcal{T}}=|\mathcal{V}_{\mathcal{T}}|\overline{\mathcal{F}}_{\mathcal{T}}.
\end{equation}
Since the proof is straightforward, we omit it here. Note also that a more general consequence corresponding to Eq.(\ref{Section-3-4-0}) was obtained by means of spectral techniques in \cite{Chandra-1989} (see theorem 2.1 in \cite{Chandra-1989} for more details).
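These identities are easy to confirm numerically. The sketch below is our own illustration (not part of the derivation; function names are hypothetical): it computes exact first-passage times on a small tree by solving the defining linear system with Gaussian elimination and checks proposition 2, corollary 3 and theorem 4 on it.

```python
def hitting_times(adj, target):
    """Exact mean first-passage times F_{x -> target} of the simple random
    walk on a graph given as adjacency lists, obtained by solving
        F_x = 1 + (1/deg x) * sum_{y ~ x} F_y,   F_target = 0,
    with plain Gaussian elimination (no external libraries)."""
    n = len(adj)
    idx = [x for x in range(n) if x != target]
    pos = {x: i for i, x in enumerate(idx)}
    m = len(idx)
    A = [[0.0] * (m + 1) for _ in range(m)]   # augmented system (I - P) F = 1
    for i, x in enumerate(idx):
        A[i][i] = 1.0
        A[i][m] = 1.0
        for y in adj[x]:
            if y != target:
                A[i][pos[y]] -= 1.0 / len(adj[x])
    for c in range(m):                        # elimination with partial pivoting
        p = max(range(c, m), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, m):
            f = A[r][c] / A[c][c]
            for k in range(c, m + 1):
                A[r][k] -= f * A[c][k]
    F = [0.0] * m
    for r in range(m - 1, -1, -1):            # back substitution
        F[r] = (A[r][m] - sum(A[r][k] * F[k] for k in range(r + 1, m))) / A[r][r]
    return {x: F[pos[x]] for x in idx}

# Test tree: path 0-1-2 plus a leaf 3 hanging on vertex 1, so |E| = 3.
adj = [[1], [0, 2, 3], [1], [1]]
# Proposition 2: F_{0->1} = 2|E_0|+1 = 1 and F_{1->0} = 2|E_1|+1 = 5.
assert abs(hitting_times(adj, 1)[0] - 1.0) < 1e-9
assert abs(hitting_times(adj, 0)[1] - 5.0) < 1e-9
# Corollary 3: commute time F_{0<->2} = 2|E| d_{02} = 2*3*2.
print(round(hitting_times(adj, 2)[0] + hitting_times(adj, 0)[2], 9))  # 12.0
# Theorem 4: the mean F over ordered pairs equals 2W/|V|; here W = 9.
tot = sum(sum(hitting_times(adj, v).values()) for v in range(len(adj)))
print(round(tot / (len(adj) * (len(adj) - 1)), 9))                    # 4.5
```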
Obviously, this implies that we can derive the analytic solution to the mean first-passage time for random walks on any of the stochastic uniform growth trees provided that the Wiener index of the corresponding trees is easy to obtain. We state in advance that, for convenience and brevity, in the rest of this paper we select a tree $\mathcal{T}$ on $h$ vertices as the seed for the candidates created through framework $\Upsilon$, and denote by $\mathcal{W}_{\mathcal{T}}$ the corresponding Wiener index of tree $\mathcal{T}$. In what follows, we study the three distinct kinds of stochastic uniform growth trees, i.e., $\mathcal{T}_{I}(t)$, $\mathcal{T}_{II}(t)$ and $\mathcal{T}_{III}(t)$, and derive analytic solutions for several structural parameters, including the Wiener index and the mean first-passage time. It should be mentioned that, in view of the randomness of trees $\mathcal{T}_{I}(t)$, $\mathcal{T}_{II}(t)$ and $\mathcal{T}_{III}(t)$, the results obtained are expected values. On the other hand, we adopt a brief yet unambiguous presentation in the following discussions; for instance, we write ``solution of Wiener index" instead of ``expected solution of Wiener index".
\subsection{Tree $\mathcal{T}_{I}(t)$}

\textbf{Theorem 5} The solution of the Wiener index $\mathcal{W}_{\mathcal{T}_{I}(1)}$ of tree $\mathcal{T}_{I}(1)$ is given by
\begin{equation}\label{Section-4-1-0}
\begin{aligned}\mathcal{W}_{\mathcal{T}_{I}(1)}&=\mathcal{W}_{\mathcal{T}}\left(\mu\sum_{i=1}^{\mu}p_{i}m_{i}+1\right)^{2}+\mu\left(\mu\sum_{i=1}^{\mu}p_{i}m_{i}+1\right)\sum_{i=1}^{\mu}p_{i}\left( \begin{array}{c} m_{i}+1 \\ 2 \\ \end{array} \right)h^{2}\\
&\quad+\left\{\mu\sum_{i=1}^{\mu}p_{i}\sum_{l=2}^{m_{i}}\left( \begin{array}{c} l \\ 2 \\ \end{array} \right)+\left[\sum_{i=1}^{\mu}2p_{i}m_{i}\left( \begin{array}{c} \mu \\ 2 \\ \end{array} \right)-\mu^{2}\sum_{i=1}^{\mu}p_{i}m_{i}\right]\sum_{i=1}^{\mu}p_{i}\left( \begin{array}{c} m_{i}+1 \\ 2 \\ \end{array} \right)\right\}h \end{aligned}.
\end{equation}
For convenience, the formula above is referred to as the $\mathcal{W}$-polynomial for the Wiener index of tree $\mathcal{T}_{I}(1)$, whose variables are the parameters $\mathcal{W}_{\mathcal{T}}$ and $h$ of the seed $\mathcal{T}$. Roughly speaking, such a polynomial can be expressed as
\begin{equation}\label{Section-4-1-0-0}
f_{I}(\mathcal{W}_{\mathcal{T}},h)\triangleq a_{I}\mathcal{W}_{\mathcal{T}}+b_{I}h^{2}+c_{I}h.
\end{equation}

\textbf{\emph{Proof}} Before beginning our discussion, some necessary notation is introduced. We denote by $\Lambda_{u}^{I}$ the set of vertices added to tree $\mathcal{T}_{I}(1)$ by applying $VUGM$ to vertex $u$ of seed $\mathcal{T}$\footnote[2]{As an example, $\Lambda_{u}^{I}$ consists of the three green vertices connected to vertex $u$ in the leftmost panel of Fig.1.}. This means that all the newly created vertices belong to the set $\bigcup_{u\in\mathcal{T}}\Lambda_{u}^{I}$, while the set $V_{\mathcal{T}_{I}(1)}\setminus\bigcup_{u\in\mathcal{T}}\Lambda_{u}^{I}$ contains all the vertices previously belonging to tree $\mathcal{T}$.
For simplicity of presentation, we define $\Omega_{1}^{I}=V_{\mathcal{T}_{I}(1)}\setminus\bigcup_{u\in\mathcal{T}}\Lambda_{u}^{I}$. Based on this, the concrete demonstration proceeds in stages.

\emph{Case 1} For an arbitrary pair of vertices $u$ and $v$ in the vertex set $\Omega_{1}^{I}$, the distance $d'_{uv}$ between them remains unchanged after applying $VUGM$ to seed $\mathcal{T}$. Thus, it is straightforward to see that
\begin{equation}\label{Section-4-1-0-1}
\mathcal{W}_{\mathcal{T}_{I}(1)}(1)\triangleq\frac{1}{2}\sum_{u\in\Omega_{1}^{I}}\sum_{v\in\Omega_{1}^{I}}d'_{uv}=\mathcal{W}_{\mathcal{T}}.
\end{equation}
\emph{Case 2} Similarly, we can without difficulty obtain
\begin{equation}\label{Section-4-1-0-2}
\begin{aligned}\mathcal{W}_{\mathcal{T}_{I}(1)}(2)&\triangleq\sum_{u\in\Omega_{1}^{I}}\sum_{u_{\alpha}\in \Lambda^{I}_{u}}d'_{uu_{\alpha}}+\frac{1}{2}\sum_{u\in\Omega_{1}^{I}}\sum_{u_{\alpha}\in \Lambda^{I}_{u}}\sum_{u_{\beta}\in \Lambda^{I}_{u}}d'_{u_{\alpha}u_{\beta}}\\
&=h\left[\mu\sum_{i=1}^{\mu}p_{i}\left( \begin{array}{c} m_{i}+1 \\ 2 \\ \end{array} \right)+\sum_{i=1}^{\mu}2p_{i}m_{i}\left( \begin{array}{c} \mu \\ 2 \\ \end{array} \right)\sum_{i=1}^{\mu}p_{i}\left( \begin{array}{c} m_{i}+1 \\ 2 \\ \end{array} \right)+\mu\sum_{i=1}^{\mu}p_{i}\sum_{l=2}^{m_{i}}\left( \begin{array}{c} l \\ 2 \\ \end{array} \right)\right] \end{aligned},
\end{equation}
in which the summation over distances between an arbitrary pair of vertices $u_{\alpha}$ and $u_{\beta}$ in the set $\Lambda^{I}_{u}$ is written as $$\frac{1}{2}\sum_{u_{\alpha}\in \Lambda^{I}_{u}}\sum_{u_{\beta}\in \Lambda^{I}_{u}}d'_{u_{\alpha}u_{\beta}}.$$ At the same time, it is worth noticing that for $a<b$ we adopt the conventions $\left( \begin{array}{c} a \\ b \\ \end{array} \right)=0$ and $\sum_{x=b}^{a}x=0.$

\emph{Case 3} Now, let us focus on the derivation of the sum over distances of all possible vertex pairs $u$ in $\Omega_{1}^{I}$ and $v_{\alpha}$ in $\Lambda^{I}_{v}$, where $u$ is distinct from $v$.
Without loss of generality, let $\mathcal{P}_{uv_{\alpha}}$ denote the path joining vertex $u$ to $v_{\alpha}$ in tree $\mathcal{T}_{I}(1)$. Clearly, path $\mathcal{P}_{uv_{\alpha}}$ consists of two sub-paths, $\mathcal{P}_{uv}$ and $\mathcal{P}_{vv_{\alpha}}$, whence the identity $d'_{uv_{\alpha}}=d'_{uv}+d'_{vv_{\alpha}}.$ Accordingly, we can obtain
\begin{equation}\label{Section-4-1-0-3}
\begin{aligned}\mathcal{W}_{\mathcal{T}_{I}(1)}(3)&\triangleq\sum_{u\in\Omega_{1}^{I}}\sum_{v(\neq u)\in\Omega_{1}^{I}}\sum_{v_{\alpha}\in \Lambda^{I}_{v}}d'_{uv_{\alpha}}\\
&=2\mu\left[\left( \begin{array}{c} h \\ 2 \\ \end{array} \right)\sum_{i=1}^{\mu}p_{i}\left( \begin{array}{c} m_{i}+1 \\ 2 \\ \end{array} \right)+\sum_{i=1}^{\mu}p_{i}m_{i}\mathcal{W}_{\mathcal{T}}\right] \end{aligned}.
\end{equation}
\emph{Case 4} The remaining task is to calculate the distance $d'_{u_{\alpha}v_{\beta}}$ between vertices $u_{\alpha}$ and $v_{\beta}$ taken from two different sets $\Lambda^{I}_{u}$ and $\Lambda^{I}_{v}$, respectively. By analogy with the calculation of the quantity $\mathcal{W}_{\mathcal{T}_{I}(1)}(3)$ in the previous case, the sum over distances of this type is calculated to be
\begin{equation}\label{Section-4-1-0-4}
\begin{aligned}\mathcal{W}_{\mathcal{T}_{I}(1)}(4)&\triangleq\sum_{u\in\Omega_{1}^{I}}\sum_{v(\neq u)\in\Omega_{1}^{I}}\sum_{u_{\alpha}\in\Lambda^{I}_{u}}\sum_{v_{\beta}\in \Lambda^{I}_{v}}d'_{u_{\alpha}v_{\beta}}\\
&=\left(\mu\sum_{i=1}^{\mu}p_{i}m_{i}\right)^{2}\mathcal{W}_{\mathcal{T}}+2\mu\sum_{i=1}^{\mu}p_{i}m_{i}\left( \begin{array}{c} h \\ 2 \\ \end{array} \right)\left[\mu\sum_{i=1}^{\mu}p_{i}\left( \begin{array}{c} m_{i}+1 \\ 2 \\ \end{array} \right)\right] \end{aligned}.
\end{equation}
Armed with all four cases, we obtain Eq.(\ref{Section-4-1-0}) from the summation $\mathcal{W}_{\mathcal{T}_{I}(1)}=\sum_{i=1}^{4}\mathcal{W}_{\mathcal{T}_{I}(1)}(i)$ after some elementary algebra. $\hfill\Box$

In fact, theorem 5 provides us with an approach to determining the analytic solution to the Wiener index $\mathcal{W}_{\mathcal{T}_{I}(t)}$ of the stochastic uniform growth tree $\mathcal{T}_{I}(t)$. The only additional requirement is the number of vertices $|\mathcal{V}_{\mathcal{T}_{I}(t)}|$ of tree $\mathcal{T}_{I}(t)$, which is easily derived via the recursive relation $$|\mathcal{V}_{\mathcal{T}_{I}(t)}|=\left(1+\mu\sum_{i=1}^{\mu}p_{i}m_{i}\right)|\mathcal{V}_{\mathcal{T}_{I}(t-1)}|.$$ Using the initial condition $|\mathcal{V}_{\mathcal{T}_{I}(0)}|=|\mathcal{V}_{\mathcal{T}}|=h$, we obtain
\begin{equation}\label{Section-4-1-1}
|\mathcal{V}_{\mathcal{T}_{I}(t)}|=h\left(1+\mu\sum_{i=1}^{\mu}p_{i}m_{i}\right)^{t}.
\end{equation}
With the results mentioned above, we reach the next proposition.
\textbf{Proposition 6} The solution of the Wiener index $\mathcal{W}_{\mathcal{T}_{I}(t)}$ of tree $\mathcal{T}_{I}(t)$ is given by
\begin{equation}\label{Section-4-1-2}
\begin{aligned}\mathcal{W}_{\mathcal{T}_{I}(t)}&=\mathcal{W}_{\mathcal{T}}\left(\mu\sum_{i=1}^{\mu}p_{i}m_{i}+1\right)^{2t}+\mu t h^{2}\sum_{i=1}^{\mu}p_{i}\left( \begin{array}{c} m_{i}+1 \\ 2 \\ \end{array} \right)\left(1+\mu\sum_{i=1}^{\mu}p_{i}m_{i}\right)^{2t-1}\\
&+h\left\{\mu\sum_{i=1}^{\mu}p_{i}\sum_{l=2}^{m_{i}}\left( \begin{array}{c} l \\ 2 \\ \end{array} \right)+\left[\sum_{i=1}^{\mu}2p_{i}m_{i}\left( \begin{array}{c} \mu \\ 2 \\ \end{array} \right)-\mu^{2}\sum_{i=1}^{\mu}p_{i}m_{i}\right]\sum_{i=1}^{\mu}p_{i}\left( \begin{array}{c} m_{i}+1 \\ 2 \\ \end{array} \right)\right\}\Phi^{I}_{1} \end{aligned},
\end{equation}
in which the symbol $\Phi^{I}_{1}$ stands for $$\Phi^{I}_{1}=\frac{\left(1+\mu\sum_{i=1}^{\mu}p_{i}m_{i}\right)^{2t-1}-\left(1+\mu\sum_{i=1}^{\mu}p_{i}m_{i}\right)^{t-1}}{\mu\sum_{i=1}^{\mu}p_{i}m_{i}}.$$ Based on Eqs.(\ref{Section-4-1-0}) and (\ref{Section-4-1-1}), the proposition can be proved iteratively, and we thus omit the details here; note that, for consistency with theorem 5 at $t=1$, the first term inside the braces carries the factor $\mu$. Using the relation shown in Eq.(\ref{Section-3-4-0}), the closed-form solution to the quantity $\overline{\mathcal{F}}_{\mathcal{T}_{I}(t)}$ follows immediately from Eq.(\ref{Section-4-1-2}), as shown in the next theorem.
\textbf{Theorem 7} The analytic solution to the mean first-passage time $\overline{\mathcal{F}}_{\mathcal{T}_{I}(t)}$ for random walks on tree $\mathcal{T}_{I}(t)$ is given by
\begin{equation}\label{Section-4-1-3}
\begin{aligned}\overline{\mathcal{F}}_{\mathcal{T}_{I}(t)}&=\frac{2\mathcal{W}_{\mathcal{T}}}{h}\left(\mu\sum_{i=1}^{\mu}p_{i}m_{i}+1\right)^{t}+ 2\mu th\sum_{i=1}^{\mu}p_{i}\left( \begin{array}{c} m_{i}+1 \\ 2 \\ \end{array} \right)\left(1+\mu\sum_{i=1}^{\mu}p_{i}m_{i}\right)^{t-1}\\
&+2\left\{\mu\sum_{i=1}^{\mu}p_{i}\sum_{l=2}^{m_{i}}\left( \begin{array}{c} l \\ 2 \\ \end{array} \right)+\left[\sum_{i=1}^{\mu}2p_{i}m_{i}\left( \begin{array}{c} \mu \\ 2 \\ \end{array} \right)-\mu^{2}\sum_{i=1}^{\mu}p_{i}m_{i}\right]\sum_{i=1}^{\mu}p_{i}\left( \begin{array}{c} m_{i}+1 \\ 2 \\ \end{array} \right)\right\}\Phi^{I}_{2} \end{aligned},
\end{equation}
where we have made use of $$\Phi^{I}_{2}=\frac{\left(1+\mu\sum_{i=1}^{\mu}p_{i}m_{i}\right)^{t-1}-\left(1+\mu\sum_{i=1}^{\mu}p_{i}m_{i}\right)^{-1}}{\mu\sum_{i=1}^{\mu}p_{i}m_{i}}.$$ This is an immediate consequence of combining Eqs.(\ref{Section-4-1-2}) and (\ref{Section-4-1-1}) through Eq.(\ref{Section-3-4-0}), and hence we omit its proof. In the limit of large graph size, the scaling relation between the two structural parameters $|\mathcal{V}_{\mathcal{T}_{I}(t)}|$ and $\overline{\mathcal{F}}_{\mathcal{T}_{I}(t)}$ obeys
\begin{equation}\label{Section-4-1-4}
\overline{\mathcal{F}}_{\mathcal{T}_{I}(t)}=O\left(\Theta_{I} |\mathcal{V}_{\mathcal{T}_{I}(t)}|\right),\qquad\qquad \Theta_{I}=\frac{\mu t\sum_{i=1}^{\mu}p_{i}m_{i}(m_{i}+1)}{1+\mu\sum_{i=1}^{\mu}p_{i}m_{i}}.
\end{equation}
This implies that the mean first-passage time for random walks on the stochastic uniform growth tree $\mathcal{T}_{I}(t)$ grows linearly with the total number of vertices as $t\rightarrow \infty$, up to the factor $\Theta_{I}$, which itself grows only logarithmically in $|\mathcal{V}_{\mathcal{T}_{I}(t)}|$ since $t\propto\ln|\mathcal{V}_{\mathcal{T}_{I}(t)}|$. As mentioned above, a special member of the tree family $\mathcal{T}_{I}(t)$, namely the deterministic version $\mathcal{Y}_{I}(t)$, has been studied analytically before.
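Formulas of this kind are easy to cross-check numerically. The following sketch is our own illustration (a single-edge seed, $h=2$, $\mathcal{W}_{\mathcal{T}}=1$, and deterministic parameters, i.e., one fixed path length $m$ taken with probability one; the helper names are hypothetical): it builds $\mathcal{T}_{I}(t)$ by brute force, evaluates $2\mathcal{W}/|\mathcal{V}|$ as in Eq.(\ref{Section-3-4-0}), and compares it with the closed form, implemented with the third-term coefficient inherited from theorem 5.

```python
from collections import deque

def wiener(adj):
    """Wiener index: sum of distances over all unordered vertex pairs (BFS)."""
    total = 0
    for s in range(len(adj)):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total // 2

def grow_I(mu, m, t):
    """Deterministic Operation-I started from an edge: attach mu paths of
    m vertices each to every existing vertex, repeated t times."""
    adj = [[1], [0]]
    for _ in range(t):
        old = len(adj)
        for v in range(old):
            for _ in range(mu):
                prev = v
                for _ in range(m):
                    adj.append([prev])
                    adj[prev].append(len(adj) - 1)
                    prev = len(adj) - 1
    return adj

def mfpt_formula(mu, m, t):
    """Closed form for a single-edge seed (h = 2, W_T = 1) with p_m = 1."""
    h, W = 2, 1
    x = mu * m                          # mu * sum_i p_i m_i
    b2 = m * (m + 1) // 2               # binom(m_i + 1, 2)
    sum_l = sum(l * (l - 1) // 2 for l in range(2, m + 1))
    C = mu * sum_l + (2 * m * (mu * (mu - 1) // 2) - mu * mu * m) * b2
    phi2 = ((1 + x) ** (t - 1) - 1 / (1 + x)) / x
    return (2 * W / h) * (1 + x) ** t \
        + 2 * mu * t * h * b2 * (1 + x) ** (t - 1) + 2 * C * phi2

for mu, m, t in [(1, 1, 1), (2, 2, 1), (2, 2, 2), (3, 1, 2)]:
    adj = grow_I(mu, m, t)
    exact = 2 * wiener(adj) / len(adj)  # theorem 4: mean F = 2W/|V|
    assert abs(exact - mfpt_formula(mu, m, t)) < 1e-9
print("all cases match")
```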
Now, by using our formula in Eq.(\ref{Section-4-1-3}), the corresponding theoretical expression for the mean first-passage time follows directly, as shown in corollary 8.

\textbf{Corollary 8} The exact solution to the mean first-passage time $\overline{\mathcal{F}}_{\mathcal{Y}_{I}(t)}$ on the deterministic uniform growth tree $\mathcal{Y}_{I}(t)$ is
\begin{equation}\label{Section-4-1-5}
\overline{\mathcal{F}}_{\mathcal{Y}_{I}(t)}=(4\mu t+\mu-1)(1+\mu)^{t-1}+\frac{2}{1+\mu}.
\end{equation}
This is exactly the same as the previous result published in \cite{Wang-2012} (see Eq.(31) in \cite{Wang-2012} for more details), which supports the soundness of theorem 7.

\subsection{Tree $\mathcal{T}_{II}(t)$}

\textbf{Theorem 9} The solution of the Wiener index $\mathcal{W}_{\mathcal{T}_{II}(1)}$ of tree $\mathcal{T}_{II}(1)$ is given by
\begin{equation}\label{Section-4-2-0}
\begin{aligned}\mathcal{W}_{\mathcal{T}_{II}(1)}&=\mathcal{W}_{\mathcal{T}}\Phi_{4}^{II}\left(\Phi_{1}^{II}+1\right)^{2}+\left[\Phi_{1}^{II}\Phi_{3}^{II}-\Phi_{4}^{II}\left(\Phi_{1}^{II}\right)^{2}-\Phi_{4}^{II}\Phi_{1}^{II}+\Phi_{3}^{II}\right]h^{2}\\
&\quad+\left[2\Phi_{4}^{II}\left(\Phi_{1}^{II}\right)^{2}-3\Phi_{1}^{II}\Phi_{3}^{II}+\Phi_{4}^{II}\Phi_{1}^{II}+\Phi_{2}^{II}-\Phi_{3}^{II}\right]h+\left[2\Phi_{1}^{II}\Phi_{3}^{II}-\Phi_{4}^{II}\left(\Phi_{1}^{II}\right)^{2}-\Phi_{2}^{II}\right] \end{aligned}.
\end{equation}
Note that the concrete meanings of the symbols $\Phi_{i}^{II}$ ($i\in[1,4]$) are given in the proof below for the sake of argument. As before, the $\mathcal{W}$-polynomial for tree $\mathcal{T}_{II}(1)$ is written as
\begin{equation}\label{Section-4-2-0-0}
f_{II}(\mathcal{W}_{\mathcal{T}},h)\triangleq a_{II}\mathcal{W}_{\mathcal{T}}+b_{II}h^{2}+c_{II}h+d_{II}.
\end{equation}

\textbf{\emph{Proof}} Let us first recall $EUGM$: this time, the operation is applied to each edge of the seed $\mathcal{T}$.
For our purpose, we use $\Omega_{1}^{II}$ to denote the set of all vertices of the seed $\mathcal{T}$. It is then natural to group the vertices newly added by implementing $EUGM$ on an edge $uv$ of $\mathcal{T}$ into a vertex set $\Lambda_{\mathcal{E}_{uv}}^{II}$, where the symbol $\mathcal{E}_{uv}$ represents the specific path in the stochastic uniform growth tree $\mathcal{T}_{II}(1)$ whose two end-vertices are adjacent in seed $\mathcal{T}$, i.e., the path replacing $\mathcal{P}_{uv}$. All such paths $\mathcal{E}_{uv}$ are collected into a set $\mathcal{E}_{uv}^{II}$. Additionally, each vertex in $\Lambda_{\mathcal{E}_{uv}}^{II}$ is assigned a unique label $w_{uv}^{\alpha}$. With this notation, the vertex set $V_{\mathcal{T}_{II}(1)}$ is expressed as $\bigcup_{uv\in\mathcal{T}}\Lambda_{\mathcal{E}_{uv}}^{II}\bigcup \Omega_{1}^{II}$. We are now ready to provide a rigorous proof of Eq.(\ref{Section-4-2-0}). As will be explained below, our computations are carried out in stages.

\emph{Case 1} Using $EUGM$, $m_{i}$ vertices are inserted, with probability $p_{i}$, into each edge $uv$ of seed $\mathcal{T}$. As a consequence, it is not hard to see that
\begin{equation}\label{Section-4-2-0-1}
\mathcal{W}_{\mathcal{T}_{II}(1)}(1)\triangleq\frac{1}{2}\sum_{u\in\Omega_{1}^{II}}\sum_{v\in\Omega_{1}^{II}}d'_{uv}=\mathcal{W}_{\mathcal{T}}\left(\sum_{i=1}^{\mu}p_{i}m_{i}+1\right),
\end{equation}
where the summation $\sum_{i=1}^{\mu}p_{i}m_{i}+1$ will be abbreviated by the symbol $\Phi_{4}^{II}$ in the further arithmetic, for the purpose of simplifying the calculations.

\emph{Case 2} Following the case above, each of the $m_{i}$ newly inserted vertices is attached to $\nu$ paths, each of which independently has $n_{j}$ vertices with probability $q_{j}$.
By a computational manner similar to that used for case 2 of theorem 5, we can write
\begin{equation}\label{Section-4-2-0-2}
\begin{aligned}\mathcal{W}_{\mathcal{T}_{II}(1)}(2)&\triangleq\frac{1}{2}\sum_{\mathcal{E}_{uv}^{II}}\sum_{w_{uv}^{\alpha}\in\Lambda^{II}_{\mathcal{E}_{uv}}}\sum_{w_{uv}^{\beta}\in\Lambda^{II}_{\mathcal{E}_{uv}}}d'_{w_{uv}^{\alpha}w_{uv}^{\beta}}\\
&=(h-1)\sum_{i=1}^{\mu}p_{i}\sum_{l=2}^{m_{i}}\left( \begin{array}{c} l \\ 2 \\ \end{array} \right)\left(1+\nu\sum_{j=1}^{\nu}q_{j}n_{j}\right)^{2}+(h-1)\sum_{i=1}^{\mu}p_{i}m_{i}^{2}\left[\nu\sum_{j=1}^{\nu}q_{j}\left( \begin{array}{c} n_{j}+1 \\ 2 \\ \end{array} \right)\right]\\
&\quad+(h-1)\left[\sum_{i=1}^{\mu}p_{i}m_{i}\left( \begin{array}{c} \nu \\ 2 \\ \end{array} \right)+\nu^{2}\sum_{i=1}^{\mu}p_{i}\left( \begin{array}{c} m_{i} \\ 2 \\ \end{array} \right)\right]\left[2\sum_{j=1}^{\nu}q_{j}n_{j}\sum_{j=1}^{\nu}q_{j}\left( \begin{array}{c} n_{j}+1 \\ 2 \\ \end{array} \right)\right]\\
&\quad+(h-1)\sum_{i=1}^{\mu}p_{i}m_{i}\left[\nu\sum_{j=1}^{\nu}q_{j}\sum_{s=2}^{n_{j}}\left( \begin{array}{c} s \\ 2 \\ \end{array} \right)\right] \end{aligned}.
\end{equation}
As before, the sum of all ``coefficients" of the term $(h-1)$ in Eq.(\ref{Section-4-2-0-2}) will be denoted by the symbol $\Phi_{2}^{II}$ when we perform further computations.

\emph{Case 3} Now, we discuss the distance between a vertex $u$ in the set $\Omega_{1}^{II}$ and a vertex $w_{xy}^{\alpha}$ in the set $\bigcup_{xy\in\mathcal{T}}\Lambda_{\mathcal{E}_{xy}}^{II}$. Without loss of generality, we use $\mathcal{P}_{uw_{xy}^{\alpha}}$ to denote the path linking vertex $u$ with $w_{xy}^{\alpha}$ in the stochastic uniform growth tree $\mathcal{T}_{II}(1)$; path $\mathcal{P}_{uw_{xy}^{\alpha}}$ consists of the two sub-paths $\mathcal{P}_{ux}$ and $\mathcal{P}_{xw_{xy}^{\alpha}}$. Note that we suppose that vertex $y$ is always farther from vertex $u$ than vertex $x$ is.
This assumption has no influence on the subsequent derivations. In what follows, two sub-cases are encountered: (1) $w_{xy}^{\alpha}$ is one of the vertices inserted directly into edge $xy$ of seed $\mathcal{T}$; and (2) vertex $w_{xy}^{\alpha}$ lies on one of the $\nu$ paths attached to some vertex $\theta$\footnote[3]{Here, for convenience, we take the symbol $\theta$ to indicate some vertex in the set $\bigcup_{xy\in\mathcal{T}}\Lambda_{\mathcal{E}_{xy}}^{II}$ that is inserted into edge $xy$ of seed $\mathcal{T}$ directly through $EUGM$.} that is inserted into edge $xy$ of seed $\mathcal{T}$. In either sub-case, we replace sub-path $\mathcal{P}_{ux}$ with $\mathcal{P}_{uy}$ so as to build a connection between the quantities $\mathcal{W}_{\mathcal{T}_{II}(1)}(1)$ and $\mathcal{W}_{\mathcal{T}_{II}(1)}(3)$, as follows. In the first sub-case, the distance $d'_{uw_{xy}^{\alpha}}$ satisfies $d'_{uw_{xy}^{\alpha}}=d'_{uy}-d'_{yw_{xy}^{\alpha}}.$ In the other sub-case, the distance $d'_{uw_{xy}^{\alpha}}$ is given by $d'_{uw_{xy}^{\alpha}}=d'_{uy}-d'_{y\theta}+d'_{\theta w_{xy}^{\alpha}}.$ Based on the analysis above, we can obtain
\begin{equation}\label{Section-4-2-0-3}
\begin{aligned}\mathcal{W}_{\mathcal{T}_{II}(1)}(3)&\triangleq\sum_{u\in\Omega_{1}^{II}}\sum_{\mathcal{E}_{xy}^{II}}\sum_{w_{xy}^{\alpha}\in\Lambda^{II}_{\mathcal{E}_{xy}}}d'_{uw_{xy}^{\alpha}}\\
&=2\mathcal{W}_{\mathcal{T}_{II}(1)}(1)\left(\sum_{i=1}^{\mu}p_{i}m_{i}\right)\left(\nu\sum_{j=1}^{\nu}q_{j}n_{j}+1\right)\\
&\quad-2\left( \begin{array}{c} h \\ 2 \\ \end{array} \right)\left(\sum_{i=1}^{\mu}p_{i}m_{i}+1\right)\left(\sum_{i=1}^{\mu}p_{i}m_{i}\right)\left(\nu\sum_{j=1}^{\nu}q_{j}n_{j}+1\right)\\
&\quad+2\left( \begin{array}{c} h \\ 2 \\ \end{array} \right)\left[\sum_{i=1}^{\mu}p_{i}\left( \begin{array}{c} m_{i}+1 \\ 2 \\ \end{array} \right)\left(\nu\sum_{j=1}^{\nu}q_{j}n_{j}+1\right)+\sum_{i=1}^{\mu}p_{i}m_{i}\nu\sum_{j=1}^{\nu}q_{j}\left( \begin{array}{c} n_{j}+1 \\ 2 \\ \end{array} \right)\right] \end{aligned}.
\end{equation}
For ease of exposition, we introduce the two remaining symbols $\Phi_{1}^{II}$ and $\Phi_{3}^{II}$ announced earlier. More specifically, $$\Phi_{1}^{II}=\left(\sum_{i=1}^{\mu}p_{i}m_{i}\right)\left(\nu\sum_{j=1}^{\nu}q_{j}n_{j}+1\right),$$ along with $$\Phi_{3}^{II}=\sum_{i=1}^{\mu}p_{i}\left( \begin{array}{c} m_{i}+1 \\ 2 \\ \end{array} \right)\left(\nu\sum_{j=1}^{\nu}q_{j}n_{j}+1\right)+\sum_{i=1}^{\mu}p_{i}m_{i}\nu\sum_{j=1}^{\nu}q_{j}\left( \begin{array}{c} n_{j}+1 \\ 2 \\ \end{array} \right).$$

\emph{Case 4} The last task is to derive the analytic solution to the summation $\mathcal{W}_{\mathcal{T}_{II}(1)}(4)$ over distances between vertices $w_{uv}^{\alpha}$ and $w_{xy}^{\beta}$ in tree $\mathcal{T}_{II}(1)$, where the edge $uv$ is not identical to $xy$. Following the same reasoning as in cases 2 and 3, we omit the detailed demonstration and directly provide
\begin{equation}\label{Section-4-2-0-4}
\begin{aligned}\mathcal{W}_{\mathcal{T}_{II}(1)}(4)&\triangleq\sum_{uv(\neq xy)\in\mathcal{T}}\sum_{w_{uv}^{\alpha}\in\Lambda^{II}_{\mathcal{E}_{uv}}}\sum_{w_{xy}^{\beta}\in\Lambda^{II}_{\mathcal{E}_{xy}}}d'_{w_{uv}^{\alpha}w_{xy}^{\beta}}\\
&=\sum_{uv(\neq xy)\in\mathcal{T}}\sum_{w_{uv}^{\alpha}\in\Lambda^{II}_{\mathcal{E}_{uv}}}\sum_{w_{xy}^{\beta}\in\Lambda^{II}_{\mathcal{E}_{xy}}}\left[d'_{uy}-2\left(\sum_{i=1}^{\mu}p_{i}m_{i}+1\right)+d'_{vw_{uv}^{\alpha}}+d'_{xw_{xy}^{\beta}}\right]\\
&=\left[\mathcal{W}_{\mathcal{T}_{II}(1)}(1)-(h-1)\Phi_{4}^{II}\right]\left(\Phi_{1}^{II}\right)^{2}-2\Phi_{4}^{II}\left(\Phi_{1}^{II}\right)^{2}\left( \begin{array}{c} h-1 \\ 2 \\ \end{array} \right)+2\Phi_{4}^{II}\Phi_{3}^{II}\left( \begin{array}{c} h-1 \\ 2 \\ \end{array} \right) \end{aligned},
\end{equation}
in which the symbols used have the same meanings as above.
So far, substituting the results from Eqs.(\ref{Section-4-2-0-1})-(\ref{Section-4-2-0-4}) into the expression $\mathcal{W}_{\mathcal{T}_{II}(1)}=\sum_{i=1}^{4}\mathcal{W}_{\mathcal{T}_{II}(1)}(i)$ yields the same consequence as in Eq.(\ref{Section-4-2-0}), which completes the proof of theorem 9. $\hfill\Box$

Starting from an arbitrary tree $\mathcal{T}$ as the seed, the final graph $\mathcal{T}_{II}(t)$ is constructed recursively by executing $EUGM$ for $t$ steps. The vertex number $|\mathcal{V}_{\mathcal{T}_{II}(t)}|$ is then easy to calculate in an iterative way, namely
\begin{equation}\label{Section-4-2-1}
|\mathcal{V}_{\mathcal{T}_{II}(t)}|=(h-1)\left(1+\sum_{i=1}^{\mu}p_{i}m_{i}+\sum_{i=1}^{\mu}p_{i}m_{i}\nu\sum_{j=1}^{\nu}q_{j}n_{j}\right)^{t}+1.
\end{equation}
From now on, let us focus on the calculation of the Wiener index $\mathcal{W}_{\mathcal{T}_{II}(t)}$ of the stochastic uniform growth tree $\mathcal{T}_{II}(t)$. As stated previously, this issue can be addressed by solving the recurrence obtained from Eqs.(\ref{Section-4-2-0}) and (\ref{Section-4-2-1}). Thus, we omit the detailed derivation and state the final formula in the following proposition.
\textbf{Proposition 10} The solution of the Wiener index $\mathcal{W}_{\mathcal{T}_{II}(t)}$ of tree $\mathcal{T}_{II}(t)$ is given by
\begin{equation}\label{Section-4-2-2}
\begin{aligned}\mathcal{W}_{\mathcal{T}_{II}(t)}&=\mathcal{W}_{\mathcal{T}}\left[\Phi_{4}^{II}\left(1+\Phi_{1}^{II}\right)^{2}\right]^{t}+\left(\Psi_{1}^{II}+\Psi_{2}^{II}+\Psi_{3}^{II}\right)\frac{\left[\Phi_{4}^{II}\left(1+\Phi_{1}^{II}\right)^{2}\right]^{t}-1}{\Phi_{4}^{II}\left(1+\Phi_{1}^{II}\right)^{2}-1}\\
&+(h-1)\left(2\Psi_{1}^{II}+\Psi_{2}^{II}\right)\left(1+\Phi_{1}^{II}\right)^{t-1}\frac{\left[\Phi_{4}^{II}\left(1+\Phi_{1}^{II}\right)\right]^{t}-1}{\Phi_{4}^{II}\left(1+\Phi_{1}^{II}\right)-1}\\
&+\Psi_{1}^{II}(h-1)^{2}\left(1+\Phi_{1}^{II}\right)^{2(t-1)}\frac{\left(\Phi_{4}^{II}\right)^{t}-1}{\Phi_{4}^{II}-1}\\ \end{aligned},
\end{equation}
in which we have taken advantage of three additional symbols $\Psi_{i}^{II}$ for convenience, defined as follows $$\Psi_{1}^{II}=\Phi_{1}^{II}\Phi_{3}^{II}-\Phi_{4}^{II}\left(\Phi_{1}^{II}\right)^{2}-\Phi_{4}^{II}\Phi_{1}^{II}+\Phi_{3}^{II},$$ $$\Psi_{2}^{II}=2\Phi_{4}^{II}\left(\Phi_{1}^{II}\right)^{2}-3\Phi_{1}^{II}\Phi_{3}^{II}+\Phi_{4}^{II}\Phi_{1}^{II}+\Phi_{2}^{II}-\Phi_{3}^{II},\quad \text{and,}\quad \Psi_{3}^{II}=2\Phi_{1}^{II}\Phi_{3}^{II}-\Phi_{4}^{II}\left(\Phi_{1}^{II}\right)^{2}-\Phi_{2}^{II}.$$ Interestingly, these symbols satisfy
\begin{equation}\label{Section-4-2-2-1}
\Psi_{1}^{II}+\Psi_{2}^{II}+\Psi_{3}^{II}=0,
\end{equation}
so that the second term in Eq.(\ref{Section-4-2-2}) in fact vanishes. That is to say, only two of the three parameters $\Psi_{i}^{II}$ need to be derived when calculating the $\mathcal{W}$-polynomial of tree $\mathcal{T}_{II}(t)$ with respect to the parameters $\mathcal{W}_{\mathcal{T}}$ and $h$ of the seed $\mathcal{T}$.
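A quick sanity check of theorem 9 (our own sketch, not part of the derivation; function names are hypothetical): for the deterministic choice $m_{1}=1$, $\nu=1$, $n_{1}=1$, the symbols evaluate to $\Phi_{4}^{II}=2$, $\Phi_{1}^{II}=2$, $\Phi_{2}^{II}=1$, $\Phi_{3}^{II}=3$, hence $\Psi_{1}^{II}=-3$, $\Psi_{2}^{II}=0$, $\Psi_{3}^{II}=3$ (indeed summing to zero), and the one-step $\mathcal{W}$-polynomial reduces to $\mathcal{W}'=18\,\mathcal{W}_{\mathcal{T}}-3h^{2}+3$. This is compared with a brute-force computation below.

```python
from collections import deque

def wiener(adj):
    """Wiener index: sum of distances over all unordered vertex pairs,
    computed by a BFS from every vertex."""
    total = 0
    for s in range(len(adj)):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total // 2

def operation_II(adj, m, nu, n):
    """Deterministic EUGM: insert m vertices into every edge, and attach to
    each inserted vertex nu tentacles of n vertices each."""
    edges = [(u, v) for u in range(len(adj)) for v in adj[u] if u < v]
    new = [[] for _ in adj]
    def link(a, b):
        new[a].append(b)
        new[b].append(a)
    for u, v in edges:
        prev = u
        for _ in range(m):
            w = len(new)
            new.append([])
            link(prev, w)
            prev = w
            for _ in range(nu):          # tentacles hanging on vertex w
                p = w
                for _ in range(n):
                    x = len(new)
                    new.append([])
                    link(p, x)
                    p = x
        link(prev, v)
    return new

# W-polynomial for m = 1, nu = 1, n = 1:  W' = 18 W - 3 h^2 + 0*h + 3.
seed = [[1], [0, 2], [1, 3], [2]]        # path on h = 4 vertices, W = 10
grown = operation_II(seed, 1, 1, 1)
print(wiener(grown), 18 * wiener(seed) - 3 * len(seed) ** 2 + 3)   # 135 135
```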
As a special member of the stochastic uniform growth tree family $\mathcal{T}_{II}(1)$, the $m$-th order subdivision tree $\mathcal{T}^{m}$ is deterministic, and may be conveniently produced by setting $m_{i}=m$ for all $i\in[1,\mu]$ and $\nu=0$ in $EUGM$. This kind of tree has been widely studied in graph theory \cite{Bondy-2008}. Here, for our purpose, the closed-form solution to the corresponding Wiener index on trees of this type is immediately obtained from Eq.(\ref{Section-4-2-0}).

\textbf{Corollary 11} The closed-form solution to the Wiener index $\mathcal{W}_{\mathcal{T}^{m}}$ of the $m$-th order subdivision tree $\mathcal{T}^{m}$ is given by
\begin{equation}\label{Section-4-2-3}
\mathcal{W}_{\mathcal{T}^{m}}=(m+1)^{3}\mathcal{W}_{\mathcal{T}}-\frac{m(m+1)^{2}}{2}h^{2}+\left[\frac{m(m+1)^{2}}{2}+\sum_{l=2}^{m}\left( \begin{array}{c} l \\ 2 \\ \end{array} \right)\right]h-\sum_{l=2}^{m}\left( \begin{array}{c} l \\ 2 \\ \end{array} \right).
\end{equation}
In particular, setting $m=1$ reduces the $m$-th order subdivision tree $\mathcal{T}^{m}$ to the subdivision tree $\mathcal{T}'$, whose Wiener index $\mathcal{W}_{\mathcal{T}'}$ is calculated to equal $$\mathcal{W}_{\mathcal{T}'}=8\mathcal{W}_{\mathcal{T}}-2h(h-1),$$ which is identical to the result in our previous work \cite{Ma-2020} (see theorem 1 in \cite{Ma-2020} for more details). We are now in a position to derive analytically the formula of the mean first-passage time $\overline{\mathcal{F}}_{\mathcal{T}_{II}(t)}$ on the stochastic uniform growth tree $\mathcal{T}_{II}(t)$. In practice, this follows from the results in Eqs.(\ref{Section-4-2-1}) and (\ref{Section-4-2-2}) by virtue of Eq.(\ref{Section-3-4-0}). For brevity and convenience, we omit the concrete derivation; the final expression is written below.
\textbf{Theorem 12} The analytic solution to mean first-passage time $\overline{\mathcal{F}}_{\mathcal{T}_{II}(t)}$ for random walks on tree $\mathcal{T}_{II}(t)$ is given by \begin{equation}\label{Section-4-2-4} \begin{aligned}\overline{\mathcal{F}}_{\mathcal{T}_{II}(t)}&=\mathcal{W}_{\mathcal{T}}\frac{2\left[\Phi_{4}^{II}\left(1+\Phi_{1}^{II}\right)^{2}\right]^{t}}{(h-1)\left(1+\Phi_{1}^{II}\right)^{t}+1}+\frac{2\left(\Psi_{1}^{III}+\Psi_{2}^{III}+\Psi_{3}^{III}\right)}{(h-1)\left(1+\Phi_{1}^{II}\right)^{t}+1}\times\frac{\left[\Phi_{4}^{II}\left(1+\Phi_{1}^{II}\right)^{2}\right]^{t}-1}{\Phi_{4}^{II}\left(1+\Phi_{1}^{II}\right)^{2}-1}\\ &+2(h-1)\frac{\left(2\Psi_{1}^{II}+\Psi_{2}^{II}\right)\left(1+\Phi_{1}^{II}\right)^{t-1}}{(h-1)\left(1+\Phi_{1}^{II}\right)^{t}+1}\times\frac{\left[\Phi_{4}^{II}\left(1+\Phi_{1}^{II}\right)\right]^{t}-1}{\Phi_{4}^{II}\left(1+\Phi_{1}^{II}\right)-1}\\ &+2(h-1)^{2}\frac{\Psi_{1}^{II}\left(1+\Phi_{1}^{II}\right)^{2(t-1)}}{(h-1)\left(1+\Phi_{1}^{II}\right)^{t}+1}\times\frac{\left(\Phi_{4}^{II}\right)^{t}-1}{\Phi_{4}^{II}-1}\\ \end{aligned}. \end{equation} Besides, in the limit of large graph size, the result above exhibits power-law scaling in the variable $|V_{\mathcal{T}_{II}(t)}|$, $$\overline{\mathcal{F}}_{\mathcal{T}_{II}(t)}=O\left(|V_{\mathcal{T}_{II}(t)}|^{\Theta_{II}}\right),\qquad \Theta_{II}=1+\frac{\ln\left(\sum_{i=1}^{\mu}p_{i}m_{i}+1\right)}{\ln\left[1+\left(\sum_{i=1}^{\mu}p_{i}m_{i}\right)\left(\nu\sum_{j=1}^{\nu}q_{j}n_{j}+1\right)\right] }.$$ Obviously, parameter $\Theta_{II}$ is no larger than $2$; it tends to the constant $2$ when $\nu=0$. It has been shown in Section 3 that the famous $T$-graph $\mathcal{Y}_{II}(t)$ is the simplest member of the stochastic uniform growth tree family $\mathcal{T}_{II}(t)$. 
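As a concrete check of Theorem 12 in the $T$-graph case, one can build $\mathcal{Y}_{II}(t)$ directly (per our reading of the edge operation with $m_{i}=n_{1}=\nu=1$: subdivide each edge once and attach a pendant vertex to the midpoint), compute $\overline{\mathcal{F}}$ exactly via the commute-time identity for trees, $\overline{\mathcal{F}}=2\mathcal{W}/N$, and compare with the specialization of Eq.(\ref{Section-4-2-4}) to these parameters. A sketch with an ad hoc graph encoding:

```python
# Direct brute-force check of the T-graph mean first-passage time for
# t = 1, 2 against the closed form obtained by specializing Theorem 12.
from collections import deque
from fractions import Fraction

def grow(edges):
    # edge operation: u-v  ->  u-w-v plus a pendant p attached to w
    out, nxt = [], max(max(e) for e in edges) + 1
    for u, v in edges:
        w, p = nxt, nxt + 1
        out += [(u, w), (w, v), (w, p)]
        nxt += 2
    return out

def wiener(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    total = 0
    for s in adj:
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total // 2

edges = [(0, 1)]                       # seed: a single edge
for t in (1, 2):
    edges = grow(edges)
    N = len(edges) + 1                 # tree: |V| = |E| + 1 = 3^t + 1
    F_direct = Fraction(2 * wiener(edges), N)
    F_closed = Fraction(2, 3**t + 1) * (18**t
                 - Fraction(2 * (18**t - 3**t), 5)
                 - Fraction(18**t - 9**t, 3))
    assert F_direct == F_closed
print("T-graph MFPT matches the closed form for t = 1, 2")
```

For $t=1$ the construction gives the star $S_{3}$ with $\overline{\mathcal{F}}=9/2$, and for $t=2$ a $10$-vertex tree with $\overline{\mathcal{F}}=117/5$, both reproduced by the closed form.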
Several intriguing structural parameters of $\mathcal{Y}_{II}(t)$, including the mean first-passage time $\overline{\mathcal{F}}_{\mathcal{Y}_{II}(t)}$, have been studied by other methods in the past \cite{Agliari-2008} (see Eq.(13) in \cite{Agliari-2008} for more details). Here, we only need to substitute the initial conditions, namely, $\mathcal{W}_{\mathcal{T}}=1$, $h=2$, $m_{i}=1$ for $i\in[1,\mu]$, $\nu=1$ as well as $n_{1}=1$, into Eq.(\ref{Section-4-2-4}) in order to obtain the corresponding formula for quantity $\overline{\mathcal{F}}_{\mathcal{Y}_{II}(t)}$. \textbf{Corollary 13} The exact solution to mean first-passage time $\overline{\mathcal{F}}_{\mathcal{Y}_{II}(t)}$ on the well-known $T$-graph $\mathcal{Y}_{II}(t)$ is \begin{equation}\label{Section-4-2-5} \overline{\mathcal{F}}_{\mathcal{Y}_{II}(t)}=\frac{2}{3^{t}+1}\left(18^{t}-2\times\frac{18^{t}-3^{t}}{5}-\frac{18^{t}-9^{t}}{3}\right). \end{equation} To make further progress, if we suppose that in stochastic uniform growth tree $\mathcal{T}_{II}(t)$ the seed is still an edge and parameters $m_{i}=1$, $n_{j}=1$ for all $j$ ($j\in[1,\nu]$), then the resulting deterministic graph is the $\nu$-fractal tree $\mathcal{T}_{\nu}(t)$. In \cite{Lin-2010}, the mean first-passage time $\overline{\mathcal{F}}_{\mathcal{T}_{\nu}(t)}$ on tree $\mathcal{T}_{\nu}(t)$ has been reported using a spectral method (see Eq.(64) in \cite{Lin-2010} for more details). 
On the other hand, the corresponding formula $\overline{\mathcal{F}}_{\mathcal{T}_{\nu}(t)}$ can be obtained exactly by substituting the parameters related to tree $\mathcal{T}_{\nu}(t)$ into Eq.(\ref{Section-4-2-4}), as follows $$\overline{\mathcal{F}}_{\mathcal{T}_{\nu}(t)}=\frac{2}{(\nu+2)^{t}+1}\left\{\left[2(\nu+1)^{2}\right]^{t}-(\nu+1)(\nu+2)^{t}\frac{(2\nu+4)^{t}-1}{2\nu+3}-(\nu+2)^{2t-1}\left(2^{t}-1\right)\right\}.$$ \subsection{Tree $\mathcal{T}_{III}(t)$} \textbf{Theorem 14} The solution of Wiener index $\mathcal{W}_{\mathcal{T}_{III}(1)}$ of tree $\mathcal{T}_{III}(1)$ is \begin{equation}\label{Section-4-3-0} \begin{aligned}\mathcal{W}_{\mathcal{T}_{III}(1)} &=\mathcal{W}_{\mathcal{T}}\left(2\sum_{i=1}^{\mu}p_{i}m_{i}+1\right)\left(\mu\sum_{i=1}^{\mu}p_{i}m_{i}+1\right)^{2}+(\mu-2)\left(\mu\sum_{i=1}^{\mu}p_{i}m_{i}+1\right)\sum_{i=1}^{\mu}p_{i}\left( \begin{array}{c} m_{i}+1 \\ 2 \\ \end{array} \right)h^{2}\\ &+\left\{\left(\mu\sum_{i=1}^{\mu}p_{i}m_{i}+2\right)\sum_{i=1}^{\mu}p_{i}\left( \begin{array}{c} m_{i}+1 \\ 2 \\ \end{array} \right)+\mu\sum_{i=1}^{\mu}p_{i}\sum_{l=2}^{m_{i}}\left( \begin{array}{c} l \\ 2 \\ \end{array} \right)\right\}h \end{aligned}. \end{equation} Similarly, we can write the $\mathcal{W}$-polynomial for tree $\mathcal{T}_{III}(1)$ in the following form \begin{equation}\label{Section-4-3-0-0} f_{III}(\mathcal{W}_{\mathcal{T}},h)\triangleq a_{III}\mathcal{W}_{\mathcal{T}}+b_{III}h^{2}+c_{III}h. \end{equation} \textbf{\emph{Proof}} In fact, there exist some similarities between $VUGM$ and $MUGM$. For instance, a star-like subgraph with $\mu$ ``tentacles" is created for each vertex $u$ in seed $\mathcal{T}$ when performing the operation on vertex $u$. More specifically, this star-like subgraph contains vertex $u$ as its center together with the vertices newly added at vertex $u$. The latter vertices are grouped into set $\Lambda_{u}^{III}$ for convenience. 
As above, each vertex in set $\Lambda_{u}^{III}$ is labeled by a unique symbol $u_{\alpha}$. Additionally, we still make use of $\bigcup_{u\in\mathcal{T}}\Lambda_{u}^{III}$ to represent the set of all new vertices introduced into tree $\mathcal{T}_{III}(1)$ when applying $MUGM$ to every vertex in seed $\mathcal{T}$, and define $\Omega_{1}^{III}=\mathcal{V}_{\mathcal{T}_{III}(1)}\setminus\bigcup_{u\in\mathcal{T}}\Lambda_{u}^{III}$ to be the vertex set consisting of the vertices of seed $\mathcal{T}$. Now, let us turn to the calculation of the Wiener index of tree $\mathcal{T}_{III}(1)$. This is dealt with by a method similar to the one used previously. At the same time, it is noteworthy that some formulae below are stated without detailed derivation, since they follow the same reasoning as in the proof of Eq.(\ref{Section-4-1-0}); the reader is referred to subsection 4.1 for more details. \emph{Case 1} By definition, $2\sum_{i=1}^{\mu}p_{i}m_{i}$ vertices are inserted into each edge $uv$ of seed $\mathcal{T}$. This implies that the sum $\mathcal{W}_{\mathcal{T}_{III}(1)}(1)$ of distances over all pairs of distinct vertices in set $\Omega_{1}^{III}$ obeys the following formula \begin{equation}\label{Section-4-3-0-1} \mathcal{W}_{\mathcal{T}_{III}(1)}(1)\triangleq\frac{1}{2}\sum_{u\in\Omega_{1}^{III}}\sum_{v\in\Omega_{1}^{III}}d'_{uv}=\mathcal{W}_{\mathcal{T}}\left(2\sum_{i=1}^{\mu}p_{i}m_{i}+1\right). 
\end{equation} \emph{Case 2} For the pairs formed by a vertex $u\in\Omega_{1}^{III}$ and the vertices in $\Lambda_{u}^{III}$, together with the pairs inside each $\Lambda_{u}^{III}$, we have \begin{equation}\label{Section-4-3-0-2} \begin{aligned}\mathcal{W}_{\mathcal{T}_{III}(1)}(2)&\triangleq\sum_{u\in\Omega_{1}^{III}}\left(\sum_{u_{\alpha }\in\Lambda_{u}^{III}}d'_{uu_{\alpha}}+\frac{1}{2}\sum_{u_{\alpha}\in\Lambda_{u}^{III}}\sum_{u_{\beta}\in\Lambda_{u}^{III}}d'_{u_{\alpha}u_{\beta}}\right)\\ &=h\left[\mu\sum_{i=1}^{\mu}p_{i}\left( \begin{array}{c} m_{i}+1 \\ 2 \\ \end{array} \right)+\sum_{i=1}^{\mu}2p_{i}m_{i}\left( \begin{array}{c} \mu \\ 2 \\ \end{array} \right)\sum_{i=1}^{\mu}p_{i}\left( \begin{array}{c} m_{i}+1 \\ 2 \\ \end{array} \right)+\mu\sum_{i=1}^{\mu}p_{i}\sum_{l=2}^{m_{i}}\left( \begin{array}{c} l \\ 2 \\ \end{array} \right)\right] \end{aligned}. \end{equation} This is completely the same as Eq.(\ref{Section-4-1-0-2}). \emph{Case 3} Apart from some similarities between $VUGM$ and $MUGM$, there are a few differences, as will be stated shortly. Given a pair of vertices, say $u$ and $v_{\alpha}$ where $u$ differs from $v$, we denote by $\mathcal{P}_{uv_{\alpha}}$ the path connecting vertex $u$ to $v_{\alpha}$ in stochastic uniform growth tree $\mathcal{T}_{III}(1)$. In view of $MUGM$, we confirm that each of the $\mu-1$ ``tentacles" in the star-like subgraph whose center is vertex $v$ can be referred to as an expansion of path $\mathcal{P}_{uv}$. To put this another way, path $\mathcal{P}_{uv_{\alpha}}$ is obtained by joining paths $\mathcal{P}_{uv}$ and $\mathcal{P}_{vv_{\alpha}}$ at vertex $v$. This leads to the relation $d'_{uv_{\alpha}}=d'_{uv}+d'_{vv_{\alpha}}.$ On the other hand, the remaining ``tentacle" in the star-like subgraph is in fact a contraction of path $\mathcal{P}_{uv}$ itself. 
Specifically, path $\mathcal{P}_{uv_{\alpha}}$ is obtained from path $\mathcal{P}_{uv}$ by deleting $\mathcal{P}_{vv_{\alpha}}$, which results in the expression $d'_{uv_{\alpha}}=d'_{uv}-d'_{vv_{\alpha}}.$ Taken together, we derive the solution to the summation $\mathcal{W}_{\mathcal{T}_{III}(1)}(3)$ over the distances of all vertex pairs $u$ and $v_{\alpha}$ of this kind in tree $\mathcal{T}_{III}(1)$, as follows \begin{equation}\label{Section-4-3-0-3} \begin{aligned}\mathcal{W}_{\mathcal{T}_{III}(1)}(3)&\triangleq\frac{1}{2}\sum_{u\in\Omega_{1}^{III}}\sum_{v(\neq u)\in\Omega_{1}^{III}}\sum_{v_{\alpha}\in\Lambda_{v}^{III}}d'_{uv_{\alpha}}\\ &=2\mathcal{W}_{\mathcal{T}_{III}(1)}(1)\left(\mu\sum_{i=1}^{\mu}p_{i}m_{i}\right)+2(\mu-2)\left( \begin{array}{c} h \\ 2 \\ \end{array} \right)\sum_{i=1}^{\mu}p_{i}\left( \begin{array}{c} m_{i}+1 \\ 2 \\ \end{array} \right) \end{aligned}. \end{equation} \emph{Case 4} Finally, let us evaluate the contribution to Wiener index $\mathcal{W}_{\mathcal{T}_{III}(1)}$ from the distances between arbitrary pairs of vertices $u_{\alpha}$ and $v_{\beta}$, denoted by $\mathcal{W}_{\mathcal{T}_{III}(1)}(4)$, i.e., $$\mathcal{W}_{\mathcal{T}_{III}(1)}(4)\triangleq\sum_{u\in\Omega_{1}^{III}}\sum_{v(\neq u)\in\Omega_{1}^{III}}\sum_{u_{\alpha}\in\Lambda_{u}^{III}}\sum_{v_{\beta}\in\Lambda_{v}^{III}}d'_{u_{\alpha}v_{\beta}}.$$ By analogy with the previous cases, we can determine the analytic solution to quantity $\mathcal{W}_{\mathcal{T}_{III}(1)}(4)$ by first considering the two central vertices $u$ and $v$ corresponding to a given pair of vertices $u_{\alpha}$ and $v_{\beta}$. 
Specifically, for the star-like subgraph centered at $u$, there are $\mu-1$ ``tentacles" acting as expansions of path $\mathcal{P}_{uv}$, each pointing outward along the direction from end-vertex $v$ to $u$, and $1$ ``tentacle" acting as a contraction of path $\mathcal{P}_{uv}$, pointing inward along the opposite direction, namely from end-vertex $u$ to $v$. Taking into account the nature of $MUGM$, there are four different combinations arising from expansion and contraction along the two distinct directions. For brevity, we omit the concrete calculation for each combination and immediately write \begin{equation}\label{Section-4-3-0-4} \begin{aligned}\mathcal{W}_{\mathcal{T}_{III}(1)}(4)&=\mathcal{W}_{\mathcal{T}_{III}(1)}(1)\left((\mu-1)\sum_{i=1}^{\mu}p_{i}m_{i}\right)^{2}+2(\mu-1)\sum_{i=1}^{\mu}p_{i}m_{i}\left( \begin{array}{c} h \\ 2 \\ \end{array} \right)\left[(\mu-1)\sum_{i=1}^{\mu}p_{i}\left( \begin{array}{c} m_{i}+1 \\ 2 \\ \end{array} \right)\right]\\ &\quad+2\mathcal{W}_{\mathcal{T}_{III}(1)}(1)(\mu-1)\left(\sum_{i=1}^{\mu}p_{i}m_{i}\right)^{2}-2\left( \begin{array}{c} h \\ 2 \\ \end{array} \right)(\mu-1)\sum_{i=1}^{\mu}p_{i}m_{i}\sum_{i=1}^{\mu}p_{i}\left( \begin{array}{c} m_{i}+1 \\ 2 \\ \end{array} \right)\\ &\quad+2\sum_{i=1}^{\mu}p_{i}m_{i}\left( \begin{array}{c} h \\ 2 \\ \end{array} \right)\left[(\mu-1)\sum_{i=1}^{\mu}p_{i}\left( \begin{array}{c} m_{i}+1 \\ 2 \\ \end{array} \right)\right]\\ &\quad+\mathcal{W}_{\mathcal{T}_{III}(1)}(1)\left(\sum_{i=1}^{\mu}p_{i}m_{i}\right)^{2}-2\left( \begin{array}{c} h \\ 2 \\ \end{array} \right)\left[\left(\sum_{i=1}^{\mu}p_{i}m_{i}\right)\sum_{i=1}^{\mu}p_{i}\left( \begin{array}{c} m_{i}+1 \\ 2 \\ \end{array} \right)\right] \end{aligned}. 
\end{equation} After some elementary algebra, the above equation simplifies to \begin{equation}\label{Section-4-3-0-5} \mathcal{W}_{\mathcal{T}_{III}(1)}(4)=\mathcal{W}_{\mathcal{T}_{III}(1)}(1)\left(\mu\sum_{i=1}^{\mu}p_{i}m_{i}\right)^{2}+2(\mu^{2}-2\mu)\sum_{i=1}^{\mu}p_{i}m_{i}\left( \begin{array}{c} h \\ 2 \\ \end{array} \right)\left[\sum_{i=1}^{\mu}p_{i}\left( \begin{array}{c} m_{i}+1 \\ 2 \\ \end{array} \right)\right]. \end{equation} Putting everything together yields the result in Eq.(\ref{Section-4-3-0}), which completes the proof of Theorem 14. $\hfill\Box$ From Eq.(\ref{Section-4-3-0}), we can see that there is in essence a recursive relation between Wiener indices $\mathcal{W}_{\mathcal{T}_{III}(1)}$ and $\mathcal{W}_{\mathcal{T}}$. This means that the analytic solution to Wiener index $\mathcal{W}_{\mathcal{T}_{III}(t)}$ may be easily obtained in an iterative fashion after estimating the total number of vertices in stochastic uniform growth tree $\mathcal{T}_{III}(t)$. To this end, in view of the way tree $\mathcal{T}_{III}(t)$ grows, the vertex number $|\mathcal{V}_{\mathcal{T}_{III}(t)}|$ obeys \begin{equation}\label{Section-4-3-1} |\mathcal{V}_{\mathcal{T}_{III}(t)}|=h\left(1+\mu\sum_{i=1}^{\mu}p_{i}m_{i}\right)^{t}. \end{equation} Clearly, this is identical to the vertex number of tree $\mathcal{T}_{I}(t)$, another similarity between the two types of trees $\mathcal{T}_{I}(t)$ and $\mathcal{T}_{III}(t)$. As before, an iterative calculation readily produces the analytic formula of $\mathcal{W}_{\mathcal{T}_{III}(t)}$. 
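Theorem 14 can be spot-checked in the simplest deterministic case: seed a single edge ($\mathcal{W}_{\mathcal{T}}=1$, $h=2$) with $\mu=2$ tentacles of length $m=1$ per vertex. In our reading of $MUGM$, one tentacle of each endpoint is absorbed into the edge and one is left pendant, so the result is the path $P_{6}$, whose Wiener index is known in closed form. A sketch (the identification with $P_{6}$ and a second hand-computed case are our own assumptions):

```python
# Evaluate the right-hand side of Eq.(Section-4-3-0) in the
# deterministic case (all m_i = m, so sum_i p_i m_i = m) and compare
# with W(P6) = C(7,3) = 35, the Wiener index of the 6-vertex path.
from math import comb

def theorem14(W_T, h, mu, m):
    S = mu * m + 1                      # mu * sum p_i m_i + 1
    c2 = comb(m + 1, 2)                 # sum p_i * C(m_i + 1, 2)
    tail = sum(comb(l, 2) for l in range(2, m + 1))
    return (W_T * (2 * m + 1) * S**2
            + (mu - 2) * S * c2 * h**2
            + ((S + 1) * c2 + mu * tail) * h)

def wiener_path(n):
    # Wiener index of the path on n vertices: C(n+1, 3)
    return comb(n + 1, 3)

assert theorem14(1, 2, 2, 1) == wiener_path(6)   # both equal 35
# second case: seed P3 (W = 4, h = 3); the resulting 9-vertex
# caterpillar has Wiener index 120, computed by hand
assert theorem14(4, 3, 2, 1) == 120
print("Theorem 14 agrees with the brute-force values 35 and 120")
```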
\textbf{Proposition 15} The solution of Wiener index $\mathcal{W}_{\mathcal{T}_{III}(t)}$ of tree $\mathcal{T}_{III}(t)$ is \begin{equation}\label{Section-4-3-2} \begin{aligned}\mathcal{W}_{\mathcal{T}_{III}(t)} &=\mathcal{W}_{\mathcal{T}}\left(2\sum_{i=1}^{\mu}p_{i}m_{i}+1\right)^{t}\left(\mu\sum_{i=1}^{\mu}p_{i}m_{i}+1\right)^{2t}+(\mu-2)h^{2}\sum_{i=1}^{\mu}p_{i}\left( \begin{array}{c} m_{i}+1 \\ 2 \\ \end{array} \right)\left(\mu\sum_{i=1}^{\mu}p_{i}m_{i}+1\right)^{2t-1}\Phi^{III}_{1}\\ &+h\left\{\left(\mu\sum_{i=1}^{\mu}p_{i}m_{i}+2\right)\sum_{i=1}^{\mu}p_{i}\left( \begin{array}{c} m_{i}+1 \\ 2 \\ \end{array} \right)+\mu\sum_{i=1}^{\mu}p_{i}\sum_{l=2}^{m_{i}}\left( \begin{array}{c} l \\ 2 \\ \end{array} \right)\right\}\left(\mu\sum_{i=1}^{\mu}p_{i}m_{i}+1\right)^{t-1}\Phi^{III}_{2} \end{aligned}, \end{equation} where we make use of symbols $\Phi^{III}_{1}$ and $\Phi^{III}_{2}$, which are given in the following form $$\Phi^{III}_{1}=\frac{\left(2\sum_{i=1}^{\mu}p_{i}m_{i}+1\right)^{t}-1}{2\sum_{i=1}^{\mu}p_{i}m_{i}},\quad \Phi^{III}_{2}=\frac{\left[\left(\mu\sum_{i=1}^{\mu}p_{i}m_{i}+1\right)\left(2\sum_{i=1}^{\mu}p_{i}m_{i}+1\right)\right]^{t}-1}{\left(\mu\sum_{i=1}^{\mu}p_{i}m_{i}+1\right)\left(2\sum_{i=1}^{\mu}p_{i}m_{i}+1\right)-1}.$$ For brevity, we omit the details of the derivation. Meanwhile, based on Eq.(\ref{Section-3-4-0}), we convert the result in Eq.(\ref{Section-4-3-2}) into an expression for the mean first-passage time on stochastic uniform growth tree $\mathcal{T}_{III}(t)$, as follows. 
\textbf{Theorem 16} The analytic solution to mean first-passage time $\overline{\mathcal{F}}_{\mathcal{T}_{III}(t)}$ for random walks on tree $\mathcal{T}_{III}(t)$ is given by \begin{equation}\label{Section-4-3-3} \begin{aligned}\overline{\mathcal{F}}_{\mathcal{T}_{III}(t)} &=\frac{2\mathcal{W}_{\mathcal{T}}}{h}\left(2\sum_{i=1}^{\mu}p_{i}m_{i}+1\right)^{t}\left(\mu\sum_{i=1}^{\mu}p_{i}m_{i}+1\right)^{t}+2(\mu-2)h\sum_{i=1}^{\mu}p_{i}\left( \begin{array}{c} m_{i}+1 \\ 2 \\ \end{array} \right)\left(\mu\sum_{i=1}^{\mu}p_{i}m_{i}+1\right)^{t-1}\Phi^{III}_{1}\\ &+2\left\{\left(\mu\sum_{i=1}^{\mu}p_{i}m_{i}+2\right)\sum_{i=1}^{\mu}p_{i}\left( \begin{array}{c} m_{i}+1 \\ 2 \\ \end{array} \right)+\mu\sum_{i=1}^{\mu}p_{i}\sum_{l=2}^{m_{i}}\left( \begin{array}{c} l \\ 2 \\ \end{array} \right)\right\}\left(\mu\sum_{i=1}^{\mu}p_{i}m_{i}+1\right)^{-1}\Phi^{III}_{2} \end{aligned}. \end{equation} More generally, we are interested in the scaling behavior of quantity $\overline{\mathcal{F}}_{\mathcal{T}_{III}(t)}$ in the large graph size limit. On the basis of Eq.(\ref{Section-4-3-1}), when considering the case $t\rightarrow \infty$, there is a relationship $$\overline{\mathcal{F}}_{\mathcal{T}_{III}(t)}=O\left(|\mathcal{V}_{\mathcal{T}_{III}(t)}|^{\Theta_{III}}\right), \quad \Theta_{III}=1+\frac{\ln \left(2\sum_{i=1}^{\mu}p_{i}m_{i}+1\right)}{\ln\left(\mu\sum_{i=1}^{\mu}p_{i}m_{i}+1\right)},$$ suggesting that $\overline{\mathcal{F}}_{\mathcal{T}_{III}(t)}$ grows not linearly but as a power of the vertex number $|\mathcal{V}_{\mathcal{T}_{III}(t)}|$. Meanwhile, the power exponent $\Theta_{III}$ is strictly less than $2$ and gradually approaches unity as $\mu\rightarrow \infty$. As a case study, let us revisit the famous Vicsek fractal $V_{1}^{\mu}(t)$ in which the seed $\mathcal{T}$ is a star $S_{\mu}$, namely, $\mathcal{W}_{\mathcal{T}}=\mu^{2}$ and $h=|\mathcal{V}_{\mathcal{T}}|=\mu+1$. 
Substituting these pre-designated parameters into Eq.(\ref{Section-4-3-3}) yields the previously published result in \cite{Zhang-2010} (see Eq.(20) in \cite{Zhang-2010} for more details). To make our statement more self-contained, the closed-form solution is written in the next corollary. \textbf{Corollary 17} The exact solution to mean first-passage time $\overline{\mathcal{F}}_{V_{1}^{\mu}(t)}$ on the famous Vicsek fractal $V_{1}^{\mu}(t)$ is \begin{equation}\label{Section-4-3-4} \overline{\mathcal{F}}_{V_{1}^{\mu}(t)}=\frac{2\mu^{2}}{\mu+1}(3\mu+3)^{t}+(\mu-2)(\mu+1)^{t}(3^{t}-1)+\frac{(2\mu+4)\left[3^{t}(\mu+1)^{t}-1\right]}{(\mu+1)(3\mu+2)}. \end{equation} So far, we have completed the derivations of the Wiener index and mean first-passage time on three types of stochastic uniform growth trees. Compared to prior work focusing on deterministic versions, this study considers more general versions, and thus the formulas derived herein are more general. More importantly, the proposed method is more convenient for computing the quantities of interest than the commonly used methods in the literature \cite{Agliari-2008,Zhang-2010,Wang-2012,Lin-2010}. In addition, through a systematic study we observe some differences and similarities between these growth trees, which have not been reported in previous work, mainly because a single type of tree is usually selected as the object of study. This is helpful for understanding the underlying structures of these growth trees. It is worth mentioning that during the derivation, a few surprising results are found, for instance, Eq.(\ref{Section-4-2-2-1}). This enables us to reveal the effect of graph operations on the topological structure of growth trees, and further to create more intriguing networked models. Note also that if other, more complicated uniform growth trees are built and studied in the future, this work provides a guide for discussing many topological parameters, including the Wiener index and mean first-passage time, on those models. 
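Corollary 17 admits a direct brute-force cross-check in the smallest nontrivial case $\mu=2$, $t=1$. In our reading of $MUGM$ applied once to the seed $S_{2}$ (the path $P_{3}$), the result is a $9$-vertex caterpillar: the path $P_{7}$ with one extra pendant at each end. The mean first-passage time then follows from the tree identity $\overline{\mathcal{F}}=2\mathcal{W}/N$. A sketch with an ad hoc edge list:

```python
# Compare F = 2W/N on the explicitly built mu = 2, t = 1 Vicsek tree
# with the closed form of Corollary 17; both should equal 80/3.
from collections import deque
from fractions import Fraction

def wiener(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    total = 0
    for s in adj:
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total // 2

edges = ([(i, i + 1) for i in range(6)]      # path on vertices 0..6
         + [(0, 7), (6, 8)])                 # pendants at both ends
N = len(edges) + 1                           # 9 vertices
F_direct = Fraction(2 * wiener(edges), N)

mu, t = 2, 1
F_closed = (Fraction(2 * mu**2, mu + 1) * (3 * mu + 3)**t
            + (mu - 2) * (mu + 1)**t * (3**t - 1)
            + Fraction((2 * mu + 4) * (3**t * (mu + 1)**t - 1),
                       (mu + 1) * (3 * mu + 2)))
assert F_direct == F_closed == Fraction(80, 3)
print("Corollary 17 verified at mu = 2, t = 1:", F_closed)
```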
Besides, several example trees output by framework $\Upsilon$ can be selected to serve as candidate models for real-world networks \cite{Jurjiu-2003,Bartolo-2016,Furstenberg-2015,Markelov-2018}. Accordingly, the derived results can help one investigate the topological structures of those networks. \subsection{Network robustness and other structural parameters} Network criticality $\overline{\mathcal{R}_{\mathcal{G}^{\dagger}}}$, as a topological measure estimating the robustness of the underlying structure of a network $\mathcal{G}(\mathcal{V},\mathcal{E})$, has been widely studied in past years \cite{Jekel-2018,Tizghadam-2010}. It is easy to see that in tree $\mathcal{T}$, quantity $\overline{\mathcal{R}_{\mathcal{T}^{\dagger}}}$ is in fact equal to the average shortest path length $\overline{\mathcal{W}_{\mathcal{T}}}$. Therefore, we can analytically derive the corresponding solutions for such a parameter on all the stochastic uniform growth trees generated by framework $\Upsilon$. Due to space limitations, we omit the corresponding analytic expressions. On the other hand, we are interested in the scaling behavior of these parameters in the large graph size limit. \textbf{Theorem 18} As $t\rightarrow\infty$, the asymptotic formulae for network criticality, $\overline{\mathcal{R}_{\mathcal{T}_{I}(t)^{\dagger}}}$, $\overline{\mathcal{R}_{\mathcal{T}_{II}(t)^{\dagger}}}$ and $\overline{\mathcal{R}_{\mathcal{T}_{III}(t)^{\dagger}}}$, on the three classes of stochastic uniform growth trees are written as \begin{equation}\label{Section-4-4-0} \overline{\mathcal{R}_{\mathcal{T}_{I}(t)^{\dagger}}}=O(t),\quad \overline{\mathcal{R}_{\mathcal{T}_{II}(t)^{\dagger}}}=O\left(|V_{\mathcal{T}_{II}(t)}|^{\Theta_{II}-1}\right),\quad \text{and},\quad \overline{\mathcal{R}_{\mathcal{T}_{III}(t)^{\dagger}}}=O\left(|\mathcal{V}_{\mathcal{T}_{III}(t)}|^{\Theta_{III}-1}\right). 
\end{equation} These expressions are easily calculated and thus we omit the proofs here. From the above equation, we can see that tree $\mathcal{T}_{I}(t)$ is more robust than the other two types of stochastic uniform growth trees. One of the most important reasons for this is that tree $\mathcal{T}_{I}(t)$ has a smaller diameter than trees $\mathcal{T}_{II}(t)$ and $\mathcal{T}_{III}(t)$. In addition, trees $\mathcal{T}_{II}(t)$ and $\mathcal{T}_{III}(t)$ exhibit a fractal structure, whereas such a phenomenon is not observed in tree $\mathcal{T}_{I}(t)$. As pointed out in the literature \cite{Gennes-1982} (see Eq.(II.1) in \cite{Gennes-1982} for more details), for a fractal $\mathcal{G}(\mathcal{V},\mathcal{E})$ in question, there is an identity $$\Theta_{\mathcal{G}}=\frac{2}{d_{\mathcal{G}}},$$ where $d_{\mathcal{G}}$ is the spectral dimension of graph $\mathcal{G}(\mathcal{V},\mathcal{E})$ and $\Theta_{\mathcal{G}}$ satisfies $\overline{\mathcal{F}}_{\mathcal{G}}=O\left(|\mathcal{V}|^{\Theta_{\mathcal{G}}}\right)$. So, using the two parameters established above, $\Theta_{II}$ and $\Theta_{III}$, we obtain $$d_{\mathcal{T}_{II}(t)}=\frac{2\ln\left[1+\left(\sum_{i=1}^{\mu}p_{i}m_{i}\right)\left(\nu\sum_{j=1}^{\nu}q_{j}n_{j}+1\right)\right]}{\ln\left(\sum_{i=1}^{\mu}p_{i}m_{i}+1\right)+\ln\left[1+\left(\sum_{i=1}^{\mu}p_{i}m_{i}\right)\left(\nu\sum_{j=1}^{\nu}q_{j}n_{j}+1\right)\right]}<2,$$ and $$d_{\mathcal{T}_{III}(t)}=\frac{2\ln\left(\mu\sum_{i=1}^{\mu}p_{i}m_{i}+1\right)}{\ln\left(\mu\sum_{i=1}^{\mu}p_{i}m_{i}+1\right)+\ln \left(2\sum_{i=1}^{\mu}p_{i}m_{i}+1\right)}<2.$$ In view of the statement in \cite{Gennes-1982}, we point out that a walker starting from a given vertex of tree $\mathcal{T}_{II}(t)$ will return to that vertex almost surely, because the corresponding spectral dimension is no more than $2$. A similar conclusion also holds for tree $\mathcal{T}_{III}(t)$. 
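For the $T$-graph parameters ($\sum_{i}p_{i}m_{i}=1$, $\nu=1$, $\sum_{j}q_{j}n_{j}=1$), the identity $\Theta_{\mathcal{G}}=2/d_{\mathcal{G}}$ can be verified by plain arithmetic on the exponents quoted above; the value $d_{s}=2\ln 3/\ln 6\approx 1.226$ is the well-known spectral dimension of the $T$-fractal. A numerical sketch:

```python
# Consistency of Theta_II = 2 / d_s for the T-graph parameters.
from math import log, isclose

pm, nu, qn = 1.0, 1.0, 1.0               # sum p_i m_i, nu, sum q_j n_j
growth = 1 + pm * (nu * qn + 1)          # = 3 for the T-graph
Theta_II = 1 + log(pm + 1) / log(growth) # MFPT exponent from Theorem 12
d_s = 2 * log(growth) / (log(pm + 1) + log(growth))  # spectral dimension

assert isclose(Theta_II, 2 / d_s)
assert d_s < 2                            # recurrent random walk
assert isclose(d_s, 2 * log(3) / log(6))
print(f"T-graph: Theta_II = {Theta_II:.4f}, d_s = {d_s:.4f}")
```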
\section{Conclusion} To conclude, we consider random walks on tree networks and study some structural parameters of interest. First of all, we introduce three kinds of graph operations, i.e., $VUGM$, $EUGM$ and $MUGM$, and propose a principled framework $\Upsilon$. Based on this, we generate three families of stochastic uniform growth trees. As a consequence, some previously reported deterministic cases, including the $T$-graph and the Vicsek fractal, are fully contained in our framework. Next, we determine the analytic solution to the mean first-passage time on the stochastic uniform growth trees thus built. In view of the identity between Wiener index and mean first-passage time on trees given by Eq.(\ref{Section-3-4-0}), we first derive the corresponding formulae for the Wiener index on stochastic uniform growth trees in a more manageable combinatorial manner, rather than by commonly used methods such as spectral techniques. One of the most important reasons for this is that those typical methods become prohibitively difficult to execute even in specific cases where the seed is just an edge or a star, for instance. It should be emphasized that the formulae derived by us are more general, and thus cover the published results associated with deterministic cases. Last but not least, we distinguish the network robustness of all the stochastic uniform growth trees using network criticality. After that, we also analytically obtain the spectral dimensions of the two types of trees $\mathcal{T}_{II}(t)$ and $\mathcal{T}_{III}(t)$, and find that a walker starting from a given vertex of either $\mathcal{T}_{II}(t)$ or $\mathcal{T}_{III}(t)$ will return to that vertex almost surely as time goes on. We would like to stress that our work is only the tip of the iceberg; however, the insights shed by our methods can be beneficial to the study of random walks on other models \cite{Cohen-2016}-\cite{Alev-2020}. 
Meanwhile, some open questions remain; for instance, how to effectively determine the $\mathcal{W}$-polynomial on the graphs obtained from an arbitrary graph by utilizing the three kinds of graph operations introduced herein. \section{Acknowledgments} The authors would like to thank Xudong Luo for useful conversations. The research was supported by the National Key Research and Development Plan under grant 2020YFB1805400 and the National Natural Science Foundation of China under grant No. 61662066. \section*{Appendix} In Fig.3, we provide an illustrative example to clarify the terminologies introduced in footnote 1. \begin{figure} \centering \includegraphics[height=4cm]{MF-2020-FIG-3.jpg}\\ \vskip0.5cm {\small Fig.3. (Color online) The diagram of a star-like graph centered at vertex $u$. More specifically, vertex $u$ is the center. There are five tentacles, namely, paths $\mathcal{P}_{uu_{1}}$, $\mathcal{P}_{uu_{3}}$, $\mathcal{P}_{uu_{4}}$, $\mathcal{P}_{uu_{5}}$ and $\mathcal{P}_{uu_{6}}$.} \end{figure} \vskip 1cm {\footnotesize
\section{Motivations} In the low-energy regime, M-theory is $D=11$, $N=1$ supergravity. In the matrix model the fundamental degrees of freedom of M-theory are 0-branes (that is, Dirichlet particles). For this model to be a correct description of M-theory, it must then reproduce supergravity in the long-distance regime. In particular, 0-brane scattering amplitudes in $D=10$ must reproduce those of compactified (from $D=11$ down to 10) supergravity, for which the gravitons carry momentum in the compactified direction. Such a correspondence between amplitudes in these two different-looking theories plays an important role because it can be checked explicitly. It has now been successfully verified for the two- and three-graviton scattering amplitudes. \section{Two-graviton scattering} The scattering of two gravitons carrying momentum in a compactified direction has been studied several times in the literature~\cite{2grav}. The simplest way to compute it is by means of the effective lagrangian~\cite{BBPT} \begin{equation} L = - p_- \dot x^- = - p_- \frac{\sqrt{1 - h_{--} v^2} -1}{h_{--}}\, , \end{equation} where $h_{--} = f(r)/2 \pi R_{11}$ and $f(r) = 2 \kappa^2 M/7 \Omega\, r^7$ for the space-time of the shock wave generated by the graviton moving with momentum $p_- = N_2/R_{11}$. Actually, this is a special case of shock wave in which the 11th dimension has been smeared. By expanding in the relative velocity $v$, we find \begin{equation} L = - p_- \left\{ \frac{v^2}{2} + a_1 \: \frac{v^4}{r^7} + a_2 \: \frac{v^6}{r^{14}} \cdots \right\}\, , \end{equation} where the exact values of the coefficients $a_1$ and $a_2$ are known. 
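The velocity expansion above rests on the Taylor series of the kernel $g(x)=(\sqrt{1-x}-1)/x$ with $x=h_{--}v^{2}$, namely $g(x)=-\tfrac{1}{2}-\tfrac{x}{8}-\tfrac{x^{2}}{16}-\cdots$, with the $r$-dependence of $h_{--}$ then absorbed into the coefficients $a_{1}$, $a_{2}$. A numerical sketch of our own extracting the first coefficients (not the model-specific $a_i$ themselves):

```python
# Extract the leading Taylor coefficients of g(x) = (sqrt(1-x)-1)/x
# at small x by successively subtracting the known lower-order terms.
from math import sqrt

def g(x):
    return (sqrt(1 - x) - 1) / x

x = 1e-3
c0 = g(x)                                    # expect ~ -1/2
c1 = (g(x) - (-0.5)) / x                     # expect ~ -1/8
c2 = (g(x) - (-0.5) - (-0.125) * x) / x**2   # expect ~ -1/16

assert abs(c0 + 0.5) < 1e-3
assert abs(c1 + 0.125) < 1e-3
assert abs(c2 + 0.0625) < 1e-2
print(c0, c1, c2)
```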
The corresponding amplitude in matrix theory can be derived from the gauge fixed action, the bosonic part of which reads \begin{eqnarray} S &=& \int \mbox{d} t \: \:\mathop{\mbox{Tr}}\,\bigg(\dot a_0^2 + \dot x_i^2 + 4\,i\,\dot R_k\,[a_0, x_k] -[R_k, a_0]^2 - [R_k, x_j]^2\nonumber\\ &&+2\,i\,\dot x_k\,[a_0, x_k] + 2\,[R_k, a_0][a_0, x_k] -2\,[R_k, x_j][x_k, x_j] \nonumber\\ &&-[a_0,x_k]^2 - \frac{1}{2}[x_k, x_j]^2 \bigg), \label{action} \end{eqnarray} where $a_0$ and $x_k$ are hermitian matrices representing the fluctuations and $R_k$ is the background. The fermionic and ghost terms must also be included in addition to (\ref{action}) but are here omitted for simplicity. The units are such that \begin{equation} g_{\mbox{\rm \scriptsize YM}}=\left( R_{11}/ \lambda_{\mbox{\rm \scriptsize P}}^2 \right) ^{3/2}=1 \, , \end{equation} the quantities $R_{11}$, $\lambda_{\mbox{\rm \scriptsize P}}$ and $g_{\mbox{\rm \scriptsize YM}}$ being the compactification radius, the Planck length and the Yang-Mills coupling, respectively. The relevant gauge group depends on the process under study. It is the rank one (only one independent velocity) group $SU(2)$ in the two-body scattering. The corresponding computations at one- and two-loop level in matrix theory yield \begin{equation} a_1 = \frac{15}{16} \: \frac{N_1 N_2}{R^3 M^9} \quad \mbox{(one loop)~\cite{BBPT}} \end{equation} and \begin{equation} a_2 = \frac{225}{64} \: \frac{N_1^2 N_2}{R^5 M^{18}} \quad \mbox{(two loops)~\cite{BC}} \, , \end{equation} in agreement with what is found in supergravity. \section{Three-graviton scattering} The simplest way to obtain supergravity amplitudes is by means of string theory. Since it is a tree-level amplitude, it is consistent with conformal invariance in any dimensionality, in particular in $D=11$. 
We consider the {\it bona fide} superstring theory (where there is no tachyon) and the scattering amplitude of three ($11$-dimensional) gravitons, and look at suitable {\it pinching} limits, where only intermediate massless states are coupled to the external gravitons. Those states are themselves $11$-dimensional gravitons. We then compactify the $10^{\rm th}$ space dimension giving mass to the external gravitons, which will thus correspond to $10$-dimensional $D0$-branes. Keeping zero momentum transfer in the $10^{\rm th}$ dimension, the intermediate states remain massless and correspond to the various massless fields of $10$-dimensional supergravity. By considering only the part of the complete amplitude that is proportional to \begin{equation} \varepsilon_1 \cdot \varepsilon_1' \: \varepsilon_2 \cdot \varepsilon_2' \: \varepsilon_3 \cdot \varepsilon_3' \, , \end{equation} $\varepsilon$ being the external graviton polarization tensor, we obtain the amplitude $A_6$ for six graviton vertices~\cite{FIR}: \begin{eqnarray} A_6 & = & \varepsilon_1 \cdot \varepsilon_1' \: \varepsilon_2 \cdot \varepsilon_2' \: \varepsilon_3 \cdot \varepsilon_3' \: \frac{\kappa^4 (\alpha')^3}{4 \pi^3} \int \mbox{d} ^2 x\: \mbox{d} ^2 y\: \mbox{d} z^2 |1-y|^{-2 + \alpha' p_2'\cdot p_2} \nonumber \\ &&\times \: |y|^{\alpha' p_3\cdot p_2'} |1-x|^{\alpha' p_2\cdot p_1'} |x|^{\alpha' p_3\cdot p_1'} |1-z|^{\alpha' p_3'\cdot p_2} \nonumber \\ &&\times \: |z|^{-2 + \alpha' p_3\cdot p_3'} |z-x|^{\alpha' p_3'\cdot p_1'} |z-y|^{\alpha' p_3'\cdot p_2'} |x-y|^{\alpha' p_2'\cdot p_1'} \nonumber \\ &&\times \: \left\{ \frac{p_3' \cdot p_1' \: p_2' \cdot p_1'}{(y-x)(z-x)} + \frac{p_3 \cdot p_2' \: p_3' \cdot p_1'}{y(z-x)} - \frac{p_3' \cdot p_2' \: p_3 \cdot p_1'}{x(z-y)} \right. \nonumber \\ && \left. + \frac{p_2' \cdot p_3' \: p_2' \cdot p_1'}{(y-x)(z-y)} + \frac{p_3' \cdot p_2 \: p_2' \cdot p_1'}{(z-1)(y-z)} \right\} \wedge \Biggl\{ c.c. 
\Biggr\} \end{eqnarray} where $p_i = (E_i, {\bf p}_i-{\bf q}_i /2, M_i)$, $p_i' = (-E_i', - {\bf p}_i-{\bf q}_i /2, -M_i)$, $ p_i^2=0$, $E_i \simeq M_i + ({\bf p}_i-{\bf q}_i /2)^2/2M_i$ and $M_i=N_i/R_{11}$. Moreover, we have that $\sum_i {\bf q}_i = 0$ and $\sum_i {\bf p}_i \cdot {\bf q}_i = 0$. In the long-distance regime we are interested in we find that $A_6 = A_\vee + A_Y$ where \begin{eqnarray} A_\vee & = & 2 \: \kappa^4 \: \varepsilon_1 \cdot \varepsilon_1' \: \varepsilon_2 \cdot \varepsilon_2' \: \varepsilon_3 \cdot \varepsilon_3' \; \frac{1}{{\bf q}_1^2\: {\bf q}_2^2} \nonumber \\ && \times \left\{ ({\bf p}_3 - {\bf p}_2)^2 \: ({\bf p}_3 - {\bf p}_1)^2 \left[ ({\bf p}_2 - {\bf p}_1)^2 - ({\bf p}_3 - {\bf p}_1)^2 - ({\bf p}_3 - {\bf p}_2)^2 \right] \right. \nonumber \\ && -\: ({\bf p}_3 - {\bf p}_2)^2 \: ({\bf p}_3 - {\bf p}_1)^2 \left[ ({\bf p}_3 - {\bf p}_2)^2 \: \frac{ {\bf q}_2 \cdot ({\bf p}_3 - {\bf p}_1) } { {\bf q}_1 \cdot ({\bf p}_3 - {\bf p}_1)} \right. \nonumber \\ && \left. \left . + \: ({\bf p}_3 - {\bf p}_1)^2 \: \frac{ {\bf q}_1 \cdot ({\bf p}_3 - {\bf p}_2) } { {\bf q}_2 \cdot ({\bf p}_3 - {\bf p}_2)} \right] \right\} \: + \: \mbox{symmetric} \end{eqnarray} and \begin{eqnarray} A_Y & = & - 2 \: \kappa^4 \: \varepsilon_1 \cdot \varepsilon_1' \: \varepsilon_2 \cdot \varepsilon_2' \: \varepsilon_3 \cdot \varepsilon_3' \; \frac{1}{{\bf q}_1^2\: {\bf q}_2^2\: {\bf q}_3^2} \nonumber \\ & & \times \: \Biggl\{ ({\bf p}_2 - {\bf p}_3)^2 \Bigl[ {\bf q}_3 \cdot ({\bf p}_3 -{\bf p}_1) + {\bf q}_2 \cdot ({\bf p}_1 -{\bf p}_2) \Bigr] \Biggr. \nonumber \\ && \quad + \: ({\bf p}_3 - {\bf p}_1)^2 \Bigl[ {\bf q}_3 \cdot ({\bf p}_2 - {\bf p}_3) + {\bf q}_1 \cdot ({\bf p}_1 - {\bf p}_2) \Bigr] \nonumber \\ && \Biggl. 
\quad +\: ({\bf p}_1 - {\bf p}_2)^2 \Bigl[ {\bf q}_2 \cdot ({\bf p}_2 -{\bf p}_3) + {\bf q}_1 \cdot ({\bf p}_3 -{\bf p}_1) \Bigr] \Biggr\}^2 \end{eqnarray} Notice that $A_\vee = 0$ and $A_Y = 0$ whenever two of the three momenta are equal or the three momenta are parallel. $A_Y$ is subleading in the relevant regime and we can neglect it. In order to compare $A_\vee$ with matrix theory we consider the {\it eikonal expression}, where we integrate, over the time $t$ along the world-line trajectories, the Fourier transform \begin{equation} a_\vee = \int \frac{\mbox{d}^9{\bf q}_1 \mbox{d}^9{\bf q}_2}{(2\pi)^{18}} \: A_\vee \: \exp \Bigl[ i \: {\bf q}_1 \cdot ({\bf r}_1 - {\bf r}_3) + i \: {\bf q}_2 \cdot ({\bf r}_2 - {\bf r}_3)\Bigr] \, , \end{equation} where ${\bf r}_{i} = (v_i {\bf\hat n}_1 t +{\bf b}_{i})$, ${\bf b}_i\cdot{\bf\hat n}_1=0$ and $B\equiv |{\bf b}_1 -{\bf b}_2| \gg b \equiv |{\bf b}_2 -{\bf b}_3|$. We write the momenta in terms of the velocities as ${\bf p}_i =M_i {\bf v}_i$ while bearing in mind that $M_i\sim N_i$. We normalize the amplitude by dividing the result by the product of the $M_i$ and find~\cite{FFI} \begin{equation} \tilde{a}_\vee \sim \int \mbox{d} t\; \frac{N_1 N_2 N_3 v_{23}^2 v_{13}^2 v_{12}^2}{(v_{23}^2t^2 + B^2)^{7/2} (v_{12}^2t^2 + b^2)^{7/2}} \sim \frac{N_1 N_2 N_3 |v_{23}| v_{13}^2 v_{12}^2}{B^7 b^6} \end{equation} that is to be compared to matrix theory. A bit of controversy arose concerning the term $\tilde{a}_\vee$. It was thought to be impossible to reproduce in matrix theory~\cite{DR}. However, the argument was not correct, as first shown in~\cite{FFI}. The matrix theory computation is in this case based on the rank-two group $SU(3)$. We choose the background \begin{equation} R_1 =\pmatrix{v_1 t & 0 & 0 \cr 0 & v_2 t & 0 \cr 0 & 0 & v_3 t \cr} \qquad\hbox{and}\qquad R_k =\pmatrix{b_k^1 & 0 & 0 \cr 0 & b_k^2 & 0 \cr 0 & 0 & b_k^3 \cr}\quad k>1.
\end{equation} We can factor out the motion of the center of mass by imposing $v_1 + v_2 + v_3 = 0$ and $b_k^1 + b_k^2 + b_k^3 = 0$. We use a Cartan basis for $SU(3)$, where $H^1$ and $H^2$ denote the generators of the Cartan sub-algebra and $E_\alpha$ ($\alpha=\pm\alpha^1, \pm\alpha^2,\pm\alpha^3$) the roots. We also define the space vectors \begin{equation} {\bf R}^\alpha = \sum_{a=1,2}\alpha^a\mathop{\mbox{Tr}}\, \Big(H^a {\bf R}\Big) \, . \label{lim} \end{equation} With the standard choice of $H^a$ and $\alpha$, this definition singles out the relative velocities and impact parameters, e.g. $ R_1^{\alpha^1} = (v_2 - v_3)t\equiv v^{\alpha^1}t$ plus cyclic and, for $k>1$, $ R_k^{\alpha^1} = b_k^2 - b_k^3\equiv b_k^{\alpha^1}$ plus cyclic. According to the previous section we choose the relative distance of the first particle with the other two to be much larger than the relative distance of particle two and three, in other words, we set \begin{equation} |{\bf b}^{\alpha^2}|\approx|{\bf b}^{\alpha^3}|\approx B \gg |{\bf b}^{\alpha^1}|\approx b \quad \mbox{and} \quad B\, b \gg v \, . \label{regime} \end{equation} The propagators and vertices can be easily worked out from the gauge fixed action (\ref{action}), with two points worth stressing: first, the quadratic part (yielding the propagators) is diagonal in root space; second, contrary to the $SU(2)$ case, there are now vertices with three massive particles (corresponding to the three different roots). The second point is particularly crucial because it is from a diagram containing those vertices that we find the supergravity term. We find twenty real massless bosons and thirty massive complex bosons. We only need consider some of the latter to construct the diagram. 
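As an aside, the root projection in Eq.~(\ref{lim}) can be made fully explicit. With one standard normalization of the Cartan generators, $H^1 = \frac{1}{2}\,{\rm diag}(1,-1,0)$ and $H^2 = \frac{1}{2\sqrt{3}}\,{\rm diag}(1,1,-2)$ (conventions may differ from the ones used here by an overall factor), the root associated with the $2\leftrightarrow 3$ generator has components $\alpha^1 = (-\frac{1}{2},\frac{\sqrt{3}}{2})$, so that
\[
R_1^{\alpha^1} = -\frac{1}{2}\mathop{\mbox{Tr}}\,\Big(H^1 R_1\Big) + \frac{\sqrt{3}}{2}\mathop{\mbox{Tr}}\,\Big(H^2 R_1\Big)
= -\frac{(v_1-v_2)\,t}{4} + \frac{(v_1+v_2-2v_3)\,t}{4} = \frac{(v_2-v_3)\,t}{2} \propto v^{\alpha^1} t \, ,
\]
which indeed singles out the relative velocity of particles two and three.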
Writing $x_k = x_k^a H^a + x_k^\alpha E_\alpha$, with $x_k^{-\alpha} = x_k^{\alpha *}$, we define the propagators as \begin{equation} \langle x_k^{\alpha *}(t_1)x_k^{\alpha}(t_2) \rangle = \Delta\Big( t_1, t_2 \: \Big|\: (b^{\alpha})^2, v^{\alpha}\Big) \, . \end{equation} As for $x_1$, (the fluctuation associated to the background $R_1$), it mixes with the field $a_0$ (the fluctuation of the gauge potential). Writing $x_1^\alpha=z^\alpha+w^\alpha$ and $a_0^\alpha=i(z^\alpha-w^\alpha)$ yields \begin{eqnarray} \langle z^{\alpha *}(t_1)z^{\alpha}(t_2) \rangle &=& \Delta\Big(t_1, t_2 \: \Big| \: (b^{\alpha})^2+2v^{\alpha}, v^{\alpha}\Big) \nonumber \\ \langle w^{\alpha *}(t_1)w^{\alpha}(t_2) \rangle &=& \Delta\Big( t_1, t_2 \: \Big| \: (b^{\alpha})^2-2v^{\alpha}, v^{\alpha}\Big) \, , \end{eqnarray} where \begin{equation} \Delta_i = \int \mbox{d} s \: e^{-\beta_i^2 s} \sqrt{\frac{v^{\alpha^i}}{2\pi\sinh 2\, v^{\alpha^i} s}}\exp\left\{ {-h(v^{\alpha^i}, s)\:t^2 -k(v^{\alpha^i}, s)\:T^2}\right\} \end{equation} where $t=(t_1-t_2)/2$, $T=(t_1+t_2)/2$, $\beta_1^2 = b^2$, $\beta_2^2 = B^2 + 2 v_{13}$, $\beta_3^2=B^2$ and \begin{eqnarray} h(v^{\alpha^i}, s)&=&\frac{v^{\alpha^i}}{\sinh 2\,v^{\alpha^i}s} \Bigl( \cosh 2 \,v^{\alpha^i}s + 1 \Bigr)\nonumber\\ k(v^{\alpha^i}, s)&=&\frac{v^{\alpha^i}}{\sinh 2\,v^{\alpha^i}s} \Bigl( \cosh2\,v^{\alpha^i} s - 1 \Bigr) \, . \end{eqnarray} The vertex we need is contained in the term of the effective action~(\ref{action}) of type \begin{equation} -2\: \mathop{\mbox{Tr}}\, \Big( [R_1, x_j][x_1, x_j] \Big) \, , \end{equation} which gives a vertex with two massive bosons and a massless one and another one with all three massive bosons. 
Focusing on the second case and choosing a particular combination of the roots we obtain a term of the type \begin{equation} v^{\alpha^1}t\; z^{\alpha^2}x_j^{\alpha^1}x_j^{\alpha^3} \equiv v_{23}\:t\: z^{13}x_j^{23}x_j^{12} \, , \label{verti} \end{equation} and a similar term with $z^{\alpha}$ replaced by $w^{\alpha}$. The diagrams we have considered are two-loop diagrams in the bosonic sector---there are various similar diagrams which can give rise to the same behavior---and we have analyzed in detail one of them, the {\it setting-sun} diagram with all massive propagators, which only arises in the three-body problem. It can be written as \begin{equation} \tilde a_\ominus = (v^{\alpha^1})^2 \int \mbox{d}\, t \:\mbox{d}\, T \: \left( T^2 - t^2 \right) \Delta_1 \Delta_2\Delta_3 \, . \label{a} \end{equation} The appropriate powers of $N_i$ can be deduced---following~\cite{BBPT}---from the double-line notation in which the setting-sun diagram is of order $N^3$; this factor must be $N_1 N_2 N_3$ for the diagram to involve all three particles. Expanding (\ref{a}) in the limit (\ref{regime}) yields \begin{equation} \tilde{a}_\ominus \sim \frac{N_1 N_2 N_3 |v_{23}| v_{12}^2 v_{13}^2}{B^7b^6} \end{equation} which reproduces the behavior of the supergravity result, that is, $\tilde a_\ominus \sim \tilde a_\vee$. The same result can be obtained in the framework of an effective action in which the degrees of freedom related to the ``heavy'' modes (those exchanged at distance $B$) are integrated out and the action is discussed in terms of the ``light'' modes (exchanged at distance $b$). Claims about a vanishing result in such an effective-action approach~\cite{WT} are discussed and shown to be irrelevant for the three-graviton problem in~\cite{FFI2}. The preliminary result of~\cite{FFI} concerning a single diagram has been confirmed by the complete computation performed in~\cite{OY}.
They found perfect numerical agreement for three-graviton scattering in supergravity and matrix theory.
\section{\label{intr}Introduction} In this paper we present three programs in Mathematica related to the Phase Integral Approximation (PIA) \cite{genpia}. They produce expressions for the lowest order approximation and the higher order corrections. The first program generates the higher order corrections $Y_{2n}(x)$ pertinent to one ordinary differential equation of the Schr{\"o}dinger type. The second program gives the vectors $\mathbf{b}_m(x)$ and the third one the corrections $Y_m(x)$ and the vectors $\mathbf{s}_m(x)$. These quantities are pertinent to a set of two ordinary differential equations of the Schr{\"o}dinger type. The programs will be referred to as \ver be saved in \ver and run by using the standard Mathematica input command \ver The user-supplied input data should be saved in \ver \ver program will be used in computation. Each program opens a dialog in a Mathematica session in which the user is asked to specify the form of output from Mathematica (\ver \ver and answer a few questions specific to each program. The results produced by Mathematica will be saved as \ver \ver refer to \cite{genpia}. One should answer \ver \ver some trig functions, and \ver The factor $g(x)$ after the dialog question \ver \ver in the just printed expression for \ver see Eq.~(124). In the last case, the implied factors should also be included, e.g., $\sin x$ coming from $\tan x$ etc. All three programs deal with multiple sums, see Eqs.~(40) and (54).
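The brute-force evaluation of such constrained multiple sums is language independent: one enumerates all index tuples and keeps only those satisfying the constraint. As an illustration, a minimal Python sketch of the same idea follows; the dictionaries \texttt{Y} and \texttt{s} here are toy stand-ins, not the actual corrections.

```python
from itertools import product

def constrained_sum(Y, s, m):
    """Sum Y[a] Y[b] Y[g] Y[d] s[sig] over all index tuples with
    a + b + g + d + sig == m and sig >= 1, each index below m,
    mirroring the nested Do-loop construction used in the programs."""
    total = 0
    for a, b, g, d in product(range(m), repeat=4):
        for sig in range(1, m):
            if a + b + g + d + sig == m:
                total += Y[a] * Y[b] * Y[g] * Y[d] * s[sig]
    return total

# Toy stand-in values (purely illustrative):
Y = {0: 1, 1: 2}
s = {1: 3}
# For m = 2 the constraint forces sig = 1 and exactly one of a, b, g, d
# to equal 1, giving four terms Y[0]^3 Y[1] s[1] = 6 each.
print(constrained_sum(Y, s, 2))  # -> 24
```

The cost grows rapidly with $m$, but for the modest orders needed here this direct transcription of the formula is fast enough and, as noted above, minimizes programming errors.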
These sums are programmed in the simplest possible way, e.g., the sum \[ \sum_{\substack{ \alpha+\beta+\gamma+\delta+\sigma=m\\ \sigma \geq 1}} Y_{\alpha} Y_{\beta} Y_{\gamma} Y_{\delta} \, \mathbf{s}_{\sigma} \] present in Eq.~(54), where $0 \leq \alpha, \beta, \gamma, \delta, \sigma \leq m-1$, is programmed as \noindent \begin{verbatim} sum2 = 0; Do[ sum2 += If[ a + b + g + d + s == m, (*then*) Y[a] Y[b] Y[g] Y[d] sv[s], (*else*) 0 ], {a, 0, m - 1}, {b, 0, m - 1}, {g, 0, m - 1}, {d, 0, m - 1}, {s, 1, m - 1} ]; \end{verbatim} etc. This makes the programs as close as possible to the mathematical formulas, thereby eliminating programming errors. For the same reasons, the integral which defines the coordinate $(\mathbf{e},\mathbf{s}_m)$, see Eq.~(105), was not simplified by using Eqs.~(106)--(108), which would make the computation faster. However, this would make programming a bit more complicated and error-prone. In our computations, this type of optimization was not necessary. Our aim was to produce correct results in a reasonable time (seconds or minutes rather than hours). A strong test for the correctness of programming was the vanishing of the odd-order corrections, $Y_{2n-1}(x) \equiv 0$ (which required cancellation of many terms), see Sec.~VIIIA. Another check was the fact that in hermitian cases, the same results were produced in the simplified hermitian and non-hermitian theory, see Secs.~VIIIA--VIIIC and VIIIE. \newpage \section{Program to determine the corrections $Y_{2n}$ from the recurrence relation (40)} \noindent File \ver \begin{verbatim} (******************************************************************************* Calculation of the phase integral corrections Y2n from recurrence relations, Eq. (40) in [1], for a scalar case, i.e. for one ODE of the Schroedinger type: u''(x) + R(x) u(x) = 0. [1] A. A. Skorupski, "Phase Integral Approximation for coupled ODEs of the Schroedinger type", arXiv: 0710.5868, Sec. II.
******************************************************************************** ***************** Define type of output from Mathematica ********************* *) outpform = InputString["Output, TeX or Fortran form of results, o/t/f? "]; sc = If[ outpform == "o", OpenWrite["Y2n.res", FormatType -> OutputForm], If[ outpform == "t", OpenWrite["Y2n.resTeX", FormatType -> TeXForm], OpenWrite["Y2n.resFor", FormatType -> FortranForm ] ] ]; (**) outpYm = InputString[ "Simple fractions or Common denominator or in Y2n, s/c? "]; (**) WriteString[sc, "\n Formulas for corrections Y2n as functions of x or z (= zeta variable). \n"]; (* ****************** Define maximum value of n in Y[2 n] ********************* *) nmax = Input["nmax = ? "]; WriteString[sc, "\n nmax = "]; Write[ sc, nmax ]; (* ***************** Define type of input to Mathematica ********************* *) inptform = InputString["Input of R(x) and a(x) or General Y2n, i/g? "]; (**) t0 = TimeUsed[]; (**) If [ inptform == "i", (*then*) (* ********************** Define default input data *************************** *) (*** Parabolic Model ***) af = 0; R = coef (x^2 - x1^2); (* **************** Read new input data from file Y2n.dat ********************* *) << Y2n.dat; WriteString[sc, "\n Y2n[x] for \n"]; (* ************************* Write input data ********************************* *) WriteString[sc, "\n R[x] = \n"]; Write[sc, R]; WriteString[sc, "\n Auxiliary function a[x] = \n"]; Write[sc, af]; (**) (****************************************************************************) Qsq = R - af; dQsq = D[ Qsq, x ]; Qsqor1 = Qsq; ep0 = ( (5/16) (dQsq/Qsq)^2 - (1/4) D[ dQsq, x ]/Qsq + af )/Qsq; aux = Simplify[ ep0 ]; aux1 = Together[aux]; WriteString[sc, "\n eps0[x] = \n"]; Write[sc, aux1], (*else*) (********** Prepare quantities for general calculations **************) (**) ep0 = eps0[x]; Qsq = Qsqr[x]; xorz = InputString["x or zeta variable, x/z? 
"]; If[ xorz == "x", (*then*) WriteString[sc, "\n Y2n[x] as functions of eps0[x], Qsqr[x] = Q^2[x] and derivatives \n"]; Qsqor1 = Qsq, (*else*) Qsqor1 = 1; x = z; WriteString[sc, "\n Y2n[z] as functions of eps0[z] and derivatives \n"]; ] ]; (**) Qm2 = 1/Qsqor1; Y[0] = 1; (* ********************** Start iterations for Y[2 n] ************************* *) For[n = 1, n <= nmax, n++, sum1 = 0; sum2 = 0; sum3 = 0; m = 2 n; Do[ sum1 += If[ a + b == m, (*then*) Y[a] Y[b], (*else*) 0 ], {a, 0, m - 2, 2}, {b, 0, m - 2, 2} ]; Do[ sum2 += If[ a + b + g + d == m, (*then*) Y[a] Y[b] Y[g] Y[d], (*else*) 0 ], {a, 0, m - 2, 2}, {b, 0, m - 2, 2}, {g, 0, m - 2, 2}, {d, 0, m - 2, 2} ]; Do[ sum3 += If[ a + b == m - 2, (*then*) ep0 Y[a] Y[b] + (3/4) Qm2 D[Y[a], x] D[Y[b], x] - (1/2) Y[a] Qm2 (D[Y[b], {x, 2}] - (1/2) Qm2 D[ Qsqor1, x] D[Y[b], x] ), (*else*) 0 ], {a, 0, m - 2, 2}, {b, 0, m - 2, 2} ]; (**) Y[m] = (1/2) (sum1 - sum2 + sum3); (**) ]; t = TimeUsed[]; WriteString[sc, "\n CPU time used for computation (seconds) = "]; Write[sc, t - t0]; (* *********************** Simplify and write results *************************** *) For[n = 1, n <= nmax, n++, m = 2 n; WriteString[sc, "\n n = "]; Write[sc, n]; aux = Simplify[ Y[m] ]; aux1 = If[ outpYm == "c", Together[aux], Apart[aux, x] ]; WriteString[sc, "\n Y2n = \n"]; Write[ sc, aux1 ] ]; t = TimeUsed[]; WriteString[sc, "\n CPU time used for computation & simplification (seconds) = "]; Write[sc, t - t0]; \end{verbatim} End of file \ver The file that follows is an example of the data file for the program \ver uncommenting the definition of the functions $a(x)$ and $R(x)$: \ver the input data.\\[1ex] \noindent File \ver \begin{verbatim} (* Data pertinent to R[x] in the differential equation u''[x] + R[x] u[x] = 0. By default the auxiliary function af[x] = 0. 
For other choice include the data command: af = your_function[x]; *) (************************ Budden's Model: *********************************) (* af = 0; R = coef x/(x - p); *) (******************************************************************************) \end{verbatim} End of file \ver \section{Program to determine $\mathbf{b}_m$ from the recurrence relation (54)} \noindent File \ver \begin{verbatim} (******************************************************************************* Calculation of bv[m] from the recurrence relation, Eq. (54) in [1] [1] A. A. Skorupski, "Phase Integral Approximation for coupled ODEs of the Schroedinger type", arXiv: 0710.5868, Sec. III. ******************************************************************************** *) bvm = OpenWrite["bvm.res", FormatType -> OutputForm]; (**) WriteString[bvm, "\n Formulas for vectors bvm as functions of x or z (= zeta variable). \n"]; t0 = TimeUsed[]; Unprotect[Sqrt, Power]; Sqrt[x_^2] := x; i^n_ := (-1)^(n/2) /; EvenQ[n]; i^n_ := i (-1)^((n-1)/2) /; OddQ[n]; re[x_] := Coefficient[x, i, 0]; im[x_] := Coefficient[x, i, 1]; cc[x_] := re[x] - i im[x]; Potect [Sqrt, Power]; (* ******************** Define maximum value of m in bv[m] ******************** *) mmax = Input["\n mmax = ? "]; (**) xorz = InputString["\n x or zeta variable, x/z? "]; If[ xorz == "z", (*then*) Q[x_] := 1; x = z ]; Qm1 = Q[x]^(-1); Qm2 = Qm1^2; dQ = D[ Q[x], x ]; (**) Y[x, 0] = 1; bv[1] = i Qm1 D[sv[x, 0], x]; Y1eq0Q = InputString["\n Y[x, 1] = 0, y/n? 
"]; If[ Y1eq0Q == "y", (*then*) Y[x, 1] = 0 ]; (* ************************ Start iterations for bv[m] ************************ *) For[m = 2, m <= mmax + 1, m++, (**) sum1 = 0; sum2 = 0; sum3 = 0; sum4 = 0; sum5 = 0; sum6 = 0; Do[ sum1 += If[ a + b + s == m, (*then*) Y[x, a] Y[x, b] ( sv[x, s] + 2 (Y[x, s] sv[x, 0] - bv[s]) ), (*else*) 0 ], {a, 0, m - 1}, {b, 0, m - 1}, {s, 1, m - 1} ]; Do[ sum2 += If[ a + b + g + d + s == m, (*then*) Y[x, a] Y[x, b] Y[x, g] Y[x, d] sv[x, s], (*else*) 0 ], {a, 0, m - 1}, {b, 0, m - 1}, {g, 0, m - 1}, {d, 0, m - 1}, {s, 1, m - 1} ]; Do[ sum3 += If[ a + b == m, (*then*) Y[x, a] Y[x, b], (*else*) 0 ], {a, 1, m - 1}, {b, 1, m - 1} ]; Do[ sum4 += If[ a + b + g + d == m, (*then*) Y[x, a] Y[x, b] Y[x, g] Y[x, d], (*else*) 0 ], {a, 0, m - 1}, {b, 0, m - 1}, {g, 0, m - 1}, {d, 0, m - 1} ]; Do[ sum5 += If[ a + b + g + s == m - 1, (*then*) Y[x, a] Y[x, b] Y[x, g] Qm1 D[sv[x, s], x], (*else*) 0 ], {a, 0, m - 1}, {b, 0, m - 1}, {g, 0, m - 1}, {s, 0, m - 1} ]; Do[ sum6 += If[ a + b + s == m - 2, (*then*) Y[x, a] ( Y[x, b] ( Qm2 ( D[sv[x, s], {x, 2}] - Qm1 dQ D[sv[x, s], x ] ) + eps0[x] sv[x, s] ) - Qm2 D[Y[x, b], x] D[sv[x, s], x] - (1/2) Qm2 ( D[Y[x, b], {x, 2}] - Qm1 dQ D[Y[x, b], x ] ) sv[x, s] ) + (3/4) Qm2 D[Y[x, a], x] D[Y[x, b], x] sv[x, s], (*else*) 0 ], {a, 0, m - 2}, {b, 0, m - 2}, {s, 0, m - 2} ]; (**) bv[m] = (1/2) (sum1 - sum2 + ( sum3 - sum4 ) sv[x, 0] + i 2 sum5 + sum6); (**) ]; t = TimeUsed[]; WriteString[bvm, "\n CPU time used (seconds) = "]; Write[bvm, t - t0]; (**) (* *********************** Simplify and write results *************************** *) For[m = 1, m <= mmax, m++, WriteString[bvm, "\n m = "]; Write[bvm, m]; aux = Simplify[ bv[m] ]; aux1 = Together[aux]; WriteString[bvm, "\n bvm = \n"]; Write[ bvm, aux1 ]; (**) ]; \end{verbatim} End of file \ver \section{Program to determine $Y_m$ and $\mathbf{s}_m$ from recurrence relations for 2 ODEs with either hermitian or non-hermitian matrix, see [1], Sec. 
VI} \noindent File \ver \begin{verbatim} (******************************************************************************* Calculation of Y[m] and sv[m] from recurrence relations for 2 ODEs with either hermitian or non-hermitian matrix [1]. [1] A. A. Skorupski, "Phase Integral Approximation for coupled ODEs of the Schroedinger type", arXiv: 0710.5868, Sec. VI. ******************************************************************************** *) outpform = InputString["Output, TeX or Fortran form of results, o/t/f? "]; vc = If[ outpform == "o", (*then*) OpenWrite["Ymsvm.res", FormatType -> OutputForm], (*else*) If[ outpform == "t", (*then*) OpenWrite["Ymsvm.resTeX", FormatType -> TeXForm], (*else*) OpenWrite["Ymsvm.resFor", FormatType -> FortranForm] ] ]; t0 = TimeUsed[]; Unprotect[Sqrt, Power]; Sqrt[x_^2] := x; i^n_ := (-1)^(n/2) /; EvenQ[n]; i^n_ := i (-1)^((n-1)/2) /; OddQ[n]; re[x_] := Coefficient[x, i, 0]; im[x_] := Coefficient[x, i, 1]; cc[x_] := re[x] - i im[x]; Potect [Sqrt, Power]; (* *************** Define maximum value of m in Y[m] and sv[m] ****************** *) mmax = Input["mmax = ? "]; WriteString[vc, "\n mmax = "]; Write[vc, mmax]; mmxp1 = mmax + 1; (* *********************** Define default input data **************************** One must define the auxiliary function a(x, p) -> af and the matrix elements Rjk(x, p) -> Rjk, where p represents parameter(s). By default, the variable automatic = True. In that case, one must define a list of meaningful numerical replacements: parrepls = { x -> x0, p -> p0 } which is necessary to calculate automatically the variables: Delta given by Eq. (121), sqrtDel (= Sqrt[Delta]) and signQsq (= sign of Qsq given by Eq. (121)). If one puts automatic = False in the data file Ymsvm.dat, the definitions of Delta, sqrtDel and signQsq must be given in the file Ymsvm.dat. 
*) (*** Example A ***) af = 0; R11 = x Cos[x]^2 + Sin[x]^2; R12 = (x - 1) Cos[x] Sin[x]; R21 = (x - 1) Cos[x] Sin[x]; R22 = x Sin[x]^2 + Cos[x]^2; parrepls = { x -> 2 }; automatic = True; (* ***************** Read new input data from file Ymsvm.dat ******************** *) << Ymsvm.dat; (* ************************** Write input data ********************************** *) WriteString[vc, "\n R11 = \n"]; Write[vc, R11]; WriteString[vc, "\n R12 = \n"]; Write[vc, R12]; WriteString[vc, "\n R21 = \n"]; Write[vc, R21]; WriteString[vc, "\n R22 = \n"]; Write[vc, R22]; WriteString[vc, "\n af = \n"]; Write[vc, af]; If[ automatic, (*then*) WriteString[vc, "\n paramter replacement list = \n"]; Write[vc, parrepls], (*else*) WriteString[vc, "\n *** Non-automatic calculation *** \n"] ]; (**) G11 = R11 - af; G12 = R12; G21 = R21; G22 = R22 - af; (**) exptrig = InputString["Expand[ , Trig -> True ], y/n? "]; QsqmQ = InputString["Qsqr with minus or plus Sqrt[ Delta ], m/p? "]; WriteString[vc, "\n Qsqr with "]; If[ QsqmQ == "m", WriteString[vc, "minus "], WriteString[vc, "plus "] ]; WriteString[vc, "Sqrt[ Delta ] "]; (**) (* ************ Find eigenvalues and eigenvectors of the G matrix *************** *) If[ automatic, (*then*) Delta = (G11 - G22)^2 + 4 G12 G21 ]; If[ exptrig == "y", Delta = Expand[ Delta, Trig -> True ] ]; Delta = Factor[ Delta ]; WriteString[vc, "\n Delta = \n"]; Write[vc, Delta]; If[ automatic, (*then*) sqrtDel = Sqrt[ Delta ] ]; (**) Qsq = (1/2) (G11 + G22 + If[ QsqmQ == "m", (*then*) -sqrtDel, (*else*) sqrtDel ]); Qsq = Simplify[ Qsq ]; If[ exptrig == "y", Qsq = Expand[ Qsq, Trig -> True ] ]; (**) If[ automatic, (*then*) signQsq = If[ ( Qsq /. 
parrepls ) < 0, (*then*) -1, (*else*) 1, (*and if neither True or False then*) 1 ] ]; WriteString[vc, "\n signQsq = "]; Write[vc, signQsq]; WriteString[vc, "\n Qsq = \n"]; Write[vc, Qsq]; (**) (*** eps0 = Simplify[ (Qsq^(1/4) D[Qsq^(-1/4), {x, 2}] + af)/Qsq ]; ***) eps0 = Together[ Simplify[ ( (5/16) ( D[Qsq, x]/Qsq)^2 - (1/4) D[Qsq, {x, 2}]/Qsq + af )/Qsq ] ]; WriteString[vc, "\n eps0 = \n"]; Write[vc, eps0]; (* << eps0.mat; *) (**) Q = If[ signQsq < 0, (*then*) - i (- Qsq)^(1/2), (*else*) Qsq^(1/2) ]; WriteString[vc, "\n Q = \n"]; Write[vc, Q]; (**) Qm1 = Q^(-1); Qm2 = Qm1^2; dQ = D[ Q, x ]; (**) s02os01 = ( Qsq - G11 )/G12; If[ exptrig == "y", s02os01 = Expand[ s02os01, Trig -> True ] ]; s02os01 = Simplify[ s02os01 ]; WriteString[vc, "\n s02/s01 = \n"]; Write[vc, s02os01]; Print[ "s02/s01 = ", s02os01 ]; (**) fact = Input["Factor g[x] in eigenvector = ? "]; WriteString[vc, "\n Factor g[x] = \n"]; Write[vc, fact]; s0v1 = fact; s0v2 = fact s02os01; (**) asqr = s0v1 cc[s0v1] + s0v2 cc[s0v2]; If[ exptrig == "y", asqr = Expand[ asqr, Trig -> True ] ]; (**) normeigv = InputString["Normalized eigenvector, y/n? "]; If[ normeigv == "y", (*then*) ms0v = Sqrt[ Simplify[ asqr ] ]; s0v1 = s0v1/ms0v; s0v2 = s0v2/ms0v; asqr = 1; Print[ "\n(s0v(x), s0v'(x)) = " ]; intg = cc[s0v1] D[ s0v1, x ] + cc[s0v2] D[ s0v2, x ]; If[ exptrig == "y", intg = Expand[ intg, Trig -> True ] ]; intg = Simplify[intg]; (** Print["intg = ", intg]; **) If[ intg =!= 0, (*then*) (* Print[ "No, (s0v, s0v'(x)) =" ]; *) Print[ intg ]; intheta = InputString[ "Integrate (s0v(x), s0v'(x)) dx to calculate theta, y/n? 
"]; If[ intheta == "y", (*then*) theta = i Integrate[ intg, x ]; Print["theta = ", theta]; WriteString[vc, "\n theta = \n"]; Write[vc, theta]; phasf = Cos[theta] + i Sin[theta]; s0v1 = s0v1 phasf; s0v2 = s0v2 phasf ], (*else*) Print[0] ]; ]; s0v1 = Simplify[ s0v1 ]; s0v2 = Simplify[ s0v2 ]; s0v = { s0v1, s0v2 }; spv = { - cc[s0v2], cc[s0v1] }; (**) WriteString[vc, "\n s0v = \n"]; Write[vc, s0v]; WriteString[vc, "\n spv = \n"]; Write[vc, spv]; WriteString[vc, "\n *** sv_m = cp_m spv + c_m s0v, m = 1, 2, ..., mmax *** \n"]; (**) s0v1ms = s0v1 cc[s0v1]; s0v2ms = s0v2 cc[s0v2]; den = s0v1ms (G22 - Qsq) + s0v2ms (G11 - Qsq) - cc[s0v1] s0v2 G12 - s0v1 cc[s0v2] G21; If[ exptrig == "y", den = Expand[ den, Trig -> True ] ]; den = Apart[ Simplify[ den ] ]; WriteString[vc, "\n D = \n"]; Write[vc, den]; (**) coef = - 2 Qsq/den; (**) Y[0] = 1; sv[0] = s0v; (**) bv[1] = i Qm1 D[sv[0], x]; hrmthQ = InputString["Hermitian or Non-hermitian theory, h/n? "]; WriteString[vc, If[ hrmthQ == "h", " *** Hermitian ", " *** Non-hermitian "] ]; WriteString[vc, "theory *** \n"]; simthQ = If[ hrmthQ == "h", (*then*) InputString[ "Simplified, Fulling or Wronskian conserving theory, s/f/w? "], (*else*) "s" ]; wresQ = InputString["Write results, y/n? "]; sresQ = InputString["Simplify results, y/n? "]; If[ wresQ == "y", (*then*) outpYm = InputString[ "Simple fractions, Common denominator or NO output spec. in Ym, s/c/n? "]; outpsv = InputString[ "Simple fractions, Common denominator or NO output spec. in svm, s/c/n? "] ]; aprog = InputString["Append program, y/n? "]; (* ******************* Start iterations for Y[m] and sv[m] ********************** *) For[m = 2, m <= mmxp1, m++, m1 = m - 1; (**) cpf[m1] = coef { - s0v2, s0v1 } . bv[m1]; If[ exptrig == "y", cpf[m1] = Expand[ cpf[m1], Trig -> True ] ]; Print[ " m = ", m1 ]; intgnt[m1] = If[ simthQ == "s", 0, If[ OddQ[m1] && simthQ == "w", 2 re[ cpf[m1] { cc[D[s0v1, x]], cc[D[s0v2, x]] } . spv ], i 2 im[ cpf[m1] { cc[D[s0v1, x]], cc[D[s0v2, x]] } . 
spv ] ] ]; If[ exptrig == "y", intgnt[m1] = Expand[ intgnt[m1], Trig -> True ] ]; (**) If[ m1 > 1, (*then*) For[ alpha = 1, alpha <= m1-1, alpha++, intgnt[m1] -= If[ simthQ == "s", 0, If[ OddQ[alpha] && simthQ == "w", -1, 1 ] {cc[sv[alpha][[1]]], cc[sv[alpha][[2]]]} . D[sv[m1 - alpha], x ] ] ] ]; If[ exptrig == "y", intgnt[m1] = Expand[ intgnt[m1], Trig -> True ] ]; intgnt[m1] = Simplify[ intgnt[m1] ]; (* WriteString[vc, "\n m = \n"]; Write[vc, m1]; WriteString[vc, "\n integrant[m] = \n"]; Write[vc, intgnt[m1]]; *) (**) cf[m1] = If[ simthQ == "s", 0, Integrate[ intgnt[m1], x ] ]; sv[m1] = cpf[m1] spv + cf[m1] s0v; If[ exptrig == "y", sv[m1] = Expand[ sv[m1], Trig -> True ] ]; (*** sv[m1] = Simplify[ sv[m1] ]; ***) (**) Y[m1] = If[ hrmthQ == "h", (*then*) ( { cc[s0v1], cc[s0v2] } . bv[m1])/asqr, (*else*) (Qm2 cpf[m1] G12 asqr/(2 s0v1) + bv[m1][[1]])/s0v1 ]; If[ exptrig == "y", Y[m1] = Expand[ Y[m1], Trig -> True ] ]; (* WriteString[vc, "\n Y[m] = \n"]; Write[vc, Y[m1]]; *) (*** Y[m1] = Simplify[ Y[m1] ]; ***) (**) If[ m < mmxp1, (*then*) sum1 = 0; sum2 = 0; sum3 = 0; sum4 = 0; sum5 = 0; sum6 = 0; Do[ sum1 += If[ a + b + s == m, (*then*) Y[a] Y[b] ( sv[s] + 2 (Y[s] sv[0] - bv[s]) ), (*else*) 0 ], {a, 0, m - 1}, {b, 0, m - 1}, {s, 1, m - 1} ]; Do[ sum2 += If[ a + b + g + d + s == m, (*then*) Y[a] Y[b] Y[g] Y[d] sv[s], (*else*) 0 ], {a, 0, m - 1}, {b, 0, m - 1}, {g, 0, m - 1}, {d, 0, m - 1}, {s, 1, m - 1} ]; Do[ sum3 += If[ a + b == m, (*then*) Y[a] Y[b], (*else*) 0 ], {a, 1, m - 1}, {b, 1, m - 1} ]; Do[ sum4 += If[ a + b + g + d == m, (*then*) Y[a] Y[b] Y[g] Y[d], (*else*) 0 ], {a, 0, m - 1}, {b, 0, m - 1}, {g, 0, m - 1}, {d, 0, m - 1} ]; Do[ sum5 += If[ a + b + g + s == m - 1, (*then*) Y[a] Y[b] Y[g] Qm1 D[sv[s], x], (*else*) 0 ], {a, 0, m - 1}, {b, 0, m - 1}, {g, 0, m - 1}, {s, 0, m - 1} ]; Do[ sum6 += If[ a + b + s == m - 2, (*then*) Y[a] ( Y[b] ( Qm2 ( D[sv[s], {x, 2}] - Qm1 dQ D[sv[s], x ] ) + eps0 sv[s] ) - Qm2 D[Y[b], x] D[sv[s], x] - (1/2) Qm2 ( D[Y[b], {x, 
2}] - Qm1 dQ D[Y[b], x ] ) sv[s] ) + (3/4) Qm2 D[Y[a], x] D[Y[b], x] sv[s], (*else*) 0 ], {a, 0, m - 2}, {b, 0, m - 2}, {s, 0, m - 2} ]; (**) bv[m] = (1/2) (sum1 - sum2 + ( sum3 - sum4 ) sv[0] + i 2 sum5 + sum6)] (**) ]; t = TimeUsed[]; WriteString[vc, "\n CPU time used for computation (seconds) = "]; Write[vc, t - t0]; (**) If[ wresQ == "y", (*then*) (* ********************** Simplify and/or write results ************************* *) For[m = 1, m <= mmax, m++, WriteString[vc, "\n m = "]; Write[vc, m]; aux = If[ sresQ == "y", (*then*) Simplify[ Y[m] ], (*else*) Y[m] ]; aux1 = If[ outpYm == "c", Together[aux], If[ outpYm == "s", Apart[aux], aux ] ]; WriteString[vc, "\n Y_m = \n"]; Write[ vc, aux1 ]; (**) aux = cpf[m]; aux1 = If[ exptrig == "y", Expand[aux, Trig -> True], aux ]; aux2 = If[ sresQ == "y", (*then*) Simplify[aux1], (*else*) aux1 ]; aux3 = If[ outpsv == "c", Together[aux2], If[ outpsv == "s", Apart[aux2], aux2 ] ]; WriteString[vc, "\n cp_m = \n"]; Write[ vc, aux3 ]; (**) aux = cf[m]; aux1 = If[ exptrig == "y", Expand[aux, Trig -> True], aux ]; aux2 = If[ sresQ == "y", (*then*) Simplify[aux1], (*else*) aux1 ]; aux3 = If[ outpsv == "c", Together[aux2], If[ outpsv == "s", Apart[aux2], aux2 ] ]; WriteString[vc, "\n c_m = \n"]; Write[ vc, aux3 ] ] ]; (**) If[ aprog == "y", (*then*) << ap.mat ]; t = TimeUsed[]; WriteString[vc, "\n CPU time used for computation & simplification (seconds) = "]; Write[vc, t - t0]; \end{verbatim} End of file \ver The file that follows is an example of the data file for the program \ver uncommenting appropriate pieces, one can activate input data pertaining to examples given in \cite{genpia}, Sec.~VIII.\\[1ex] \noindent File \ver \begin{verbatim} (**** Data for program Ymsvm ****) (**) (*** Example B ***) (* af = 0; R11 = - ( x Cos[x]^2 + Sin[x]^2 ); R12 = (x - 1) Cos[x] Sin[x]; R21 = (x - 1) Cos[x] Sin[x]; R22 = - (x Sin[x]^2 + Cos[x]^2 ); parrepls = { x -> 2 }; *) (*** Example C.1 ***) (* af = 0; R11 = x Cos[x]^2 + Sin[x]^2; R12 = 
i (x - 1) Cos[x] Sin[x]; R21 = - i (x - 1) Cos[x] Sin[x]; R22 = x Sin[x]^2 + Cos[x]^2; parrepls = { x -> 2 }; *) (*** Example C.2 ***) (* af = 0; R11 = - ( x Cos[x]^2 + Sin[x]^2 ); R12 = i (x - 1) Cos[x] Sin[x]; R21 = - i (x - 1) Cos[x] Sin[x]; R22 = - ( x Sin[x]^2 + Cos[x]^2 ); parrepls = { x -> 2 }; *) (*** Example C.3 ***) (* af = 0; R11 = - ( x Cos[x]^2 + Sin[x]^2 ); R12 = (1 + i)/2^(1/2) (x - 1) Cos[x] Sin[x]; R21 = (1 - i)/2^(1/2) (x - 1) Cos[x] Sin[x]; R22 = - (x Sin[x]^2 + Cos[x]^2 ); parrepls = { x -> 2 }; *) (*** Example C.4 ***) (* af = 0; R11 = - ( x Cos[x]^2 + Sin[x]^2 ); R12 = (Cos[fi] + i Sin[fi]) (x - 1) Cos[x] Sin[x]; R21 = (Cos[fi] - i Sin[fi]) (x - 1) Cos[x] Sin[x]; R22 = - (x Sin[x]^2 + Cos[x]^2 ); (*fi = x;*) parrepls = { x -> 2 }; *) (*** Example D ***) (* af = 0; R11 = x Cos[x]^2 + Sin[x]^2; R12 = 2 i (x - 1) Cos[x] Sin[x]; R21 = - (1/2) i (x - 1) Cos[x] Sin[x]; R22 = x Sin[x]^2 + Cos[x]^2; parrepls = { x -> 2 }; *) (*** Example E ***) (* af = 0; R11 = h0[x] - h1[x]; R12 = h2[x]; R21 = h2[x]; R22 = h0[x] + h1[x]; automatic = False; r[x_] := Sqrt[ h1[x]^2 + h2[x]^2 ]; Delta = 4 r[x]^2; sqrtDel = 2 r[x]; signQsq = - 1; *) (*** Example X ***) (* af = 0; R11 = E1 - x^2; R12 = x; R21 = x; R22 = E2 - 4 x^2; parrepls = { x -> 3, E1 -> 1, E2 -> 2 }; *) \end{verbatim} End of file \ver The file that follows contains an example of a program that can be appended to \ver \ver This program is pertinent to \ver It computes and prints both the formulas in Eq.~(174) and all numerical results given in TABLE I in \cite{genpia}. 
For each of two eigenvalues (for the minus or plus sign in Eq.~(123)), the program \ver \ver \ver \ver the program would spend a very long time in an attempt to simplify $Y_2(x)$ and $c_2^{\bot}(x)$, which in any case are too complicated to be presented.\\[1ex] \noindent File \ver \begin{verbatim} (***** Appended computation for program Ymsvm, Example E *****) (**) aux = Together[ Y[1] ]; WriteString[ vc, "\n Y_1 = \n"]; Write[ vc, aux ]; (**) aux = Together[ cpf[1] ]; WriteString[vc, "\n cp_1 = \n"]; Write[ vc, aux ]; (**) d0[x_] := 1/(4 x^2) + 4/x^4 + 38/x^6 + 748/x^8; d1[x_] := 1/x^2 + 2/x^4 + 19/x^6 + 374/x^8; h0[x_] := -1 - k^2 + d0[x]; h1[x_] := 2 (om + 1/x^2); h2[x_] := -1 + d1[x]; (**) parrepls = { x -> 55, k -> 4/100, om -> 26041/10^7 }; WriteString[ vc, "\n"]; Write[ vc, parrepls ]; WriteString[ vc, "\n Q = "]; Write[ vc, N[ Q /. parrepls ] ]; WriteString[ vc, "\n eps0/2 = "]; Write[ vc, N[ eps0/2 /. parrepls ] ]; WriteString[ vc, "\n Y_1 = "]; Write[ vc, N[ Y[1] /. parrepls ] ]; WriteString[ vc, "\n Y_2 = "]; Write[ vc, N[ Y[2] /. parrepls ] ]; WriteString[ vc, "\n cp_1 = "]; Write[ vc, N[ cpf[1] /. parrepls ] ]; WriteString[ vc, "\n cp_2 = "]; Write[ vc, N[ cpf[2] /. parrepls ] ]; \end{verbatim} End of file \ver
\section{Introduction} Most lattice calculations of QCD in its non--perturbative regime and weak interactions use at present the quenched approximation, i.e. neglect the effect of virtual quark loops. Taking them into account considerably increases computing times. This means that presumably the quenched approximation will remain with us for quite a long time: even with computers much faster than those presently available, it will always offer the chance to make a low-cost exploratory calculation before embarking on a full QCD simulation. Simulations of quenched QCD would be much more useful if we had a real understanding of the effects of this approximation. Investigations in this direction have been made by several authors \cite{Morel,Sharpe,qCHPT}. At present we see one main approach that has proven to be the most systematic, and also to incorporate most of the useful ideas that have been proposed on the subject. This method is called quenched Chiral Perturbation Theory (qCHPT), and was originally proposed by Bernard and Golterman in Ref. \cite{qCHPT} for the purely strong sector (strong interactions in the presence of external fields). It has recently been extended to the heavy--light meson sector \cite{HLmeson}, to vector mesons \cite{Vector} and to the baryon sector \cite{lasha}. It has also been used in the context of non--leptonic weak interactions \cite{Sharpe,Kpipi}. Let us briefly review the main ideas behind this approach. The difficulty in controlling the quenched approximation comes from the fact that one is modifying the theory at the non--perturbative level. On the other hand we know that at low energy it is possible to define a perturbative scheme to study the strong interactions: this scheme is known as Chiral Perturbation Theory (CHPT).
In this framework the expansion parameter is given by the energy of the weakly interacting Goldstone bosons of the spontaneously broken chiral symmetry: these have a vanishing interaction at zero energy, as symmetry dictates. The chiral symmetry also imposes a set of relations between the coefficients of this expansion in different amplitudes. Those relations do not fully constrain the theory, which at each order of the expansion has a number of free constants. These constants incorporate the effect of the non--perturbative QCD dynamics. Under the assumption that in the quenched approximation the mechanism of spontaneous chiral symmetry breaking is preserved, one may attempt to construct a perturbative scheme for the quenched case, analogous to the one valid in the full QCD case. In this manner one would be able to calculate those effects of quenching that modify the perturbative, calculable part of the theory. On the other hand, the changes in the unconstrained low energy constants remain unknown, being due to the modifications which affect the non--perturbative QCD dynamics. This method has the advantage of introducing from the start a clear, useful separation between the non--perturbative dynamics of the fundamental theory and the perturbative, predictable dynamics of the Goldstone bosons. A peculiar aspect of the quenched approximation comes from the $U(1)$ axial anomaly of QCD. In the fundamental theory the would--be--Goldstone boson (the $\eta^\prime$) does not become massless in the chiral limit, since the axial anomaly generates a (heavy) mass for the singlet component at the level of the effective theory. Thus, in the real world the $\eta^\prime$ is heavy and decoupled from the octet of the pseudo--Goldstone bosons. In the quenched approximation this decoupling stops halfway: only one of the diagrams that are responsible for the decoupling of the $\eta^\prime$ survives. 
At the level of the effective theory this has important consequences: the singlet field remains light (degenerate with the Goldstone bosons) and has to be treated on the same footing as the octet fields. However, its two--point function develops a double pole and does not admit an interpretation as a propagator. Treating the singlet as a dynamical degree of freedom brings in new constants in the effective theory. One of them is a new mass scale (the singlet mass $m_0$) that is generated by the anomaly, and that does not vanish in the chiral limit. This mass appears in the numerator of the double--pole term in the singlet two--point function. As different authors have shown \cite{Sharpe,qCHPT}, this double pole is responsible for the presence of a new type of chiral logarithms (we denote them as quenched chiral logs) in loop corrections, of the form $m_0^2 \ln M_\pi^2$, as opposed to the standard ones $M_\pi^2 \ln M_\pi^2$. This is one of the main qualitative differences that arise from the quenched version of CHPT. So far, works in quenched CHPT have concentrated on specific processes, analyzing the changes induced in Goldstone boson loops and the size of the effect of quenched chiral logarithms. The aim of the present work is to perform a complete renormalization of the theory at the one--loop level, along the same lines as what was done by Gasser and Leutwyler in the ordinary CHPT case \cite{gl84, gl85}. This requires a calculation of all the ultraviolet divergent pieces of the generating functional and a definition of the Lagrangian at order $p^4$, the next--to--leading order. The advantages of the present analysis are the following: \begin{enumerate} \item[1)] The calculation of the divergences and renormalization can be done for a generic number of flavours $N$. As we have shown in Ref. \cite{pl} the $N$--dependence of the divergences can be used to verify the cancellation of quark loops in the effective theory. 
\item[2)] As in the standard case, the calculation of the divergences at the generating functional level provides a useful check for single amplitude calculations. This check is even more welcome in qCHPT where the number of graphs to be computed soon becomes very large. \item[3)] This calculation allows full control over the divergences due to singlet loops. In particular we will show that quenched chiral logarithms can be accounted for via a renormalization of the low--energy constant $B_0$ (which is proportional to the $\bar{q} q$ condensate). This constant appears in all other quantities through the pion mass squared, with the only exception of $\bar{q}q$ matrix elements, which have it as an explicit factor. \end{enumerate} After having performed the one--loop renormalization, we will devote our attention to the ultraviolet finite part of the one--loop corrections, by computing specific physical quantities at one loop. The relevance of the finite part of the loop corrections is in the fact that they may contain terms which diverge in the chiral limit like an inverse power of the quark mass. One can realize that this may happen by simply looking at the standard chiral power counting \cite{weinberg}, and taking into account the fact that in quenched CHPT a new vertex appears with chiral order zero (the vertex proportional to $m_0^2$). Power--like chiral divergences and quenched chiral logs are the crucial problem of the quenched version of CHPT: the effective theory is defined as an expansion around the chiral limit, and this limit is no longer well defined in the quenched case. On the other hand these divergences seem to be unavoidable in the present framework and it looks plausible that they are a direct consequence of the pathologies of quenched QCD. To clarify this very important point, direct evidence of these effects in lattice simulations of quenched QCD would be most welcome. 
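The difference between the two kinds of logarithms near the chiral limit is easy to visualize numerically. The sketch below (a purely illustrative aside, with an arbitrary value for $m_0^2$) shows that the standard correction $M_\pi^2\ln M_\pi^2$ vanishes as $M_\pi^2\to 0$, while the quenched chiral log $m_0^2\ln M_\pi^2$ grows without bound:

```python
import math

m0_sq = 0.5  # illustrative value of the singlet mass scale squared

def standard_log(M_sq):
    """Standard chiral logarithm M_pi^2 ln M_pi^2: vanishes in the chiral limit."""
    return M_sq * math.log(M_sq)

def quenched_log(M_sq):
    """Quenched chiral logarithm m_0^2 ln M_pi^2: diverges in the chiral limit."""
    return m0_sq * math.log(M_sq)

for M_sq in (1e-1, 1e-2, 1e-3, 1e-4, 1e-5):
    print(f"M^2 = {M_sq:.0e}:  standard = {standard_log(M_sq):+.4f},"
          f"  quenched = {quenched_log(M_sq):+.4f}")
```

The standard column tends to zero while the quenched column diverges, which is the numerical counterpart of the statement that the chiral limit is no longer well defined in the quenched case.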
In our analysis of various observables we will give the complete one--loop results. Our aim is not just to make predictions, or to compare with numbers produced in lattice simulations. Rather, we would like to show in detail how the quenched approximation distorts the matrix elements. For this reason we will only work in Minkowski space--time: all the formulae will be given with the idea that one should be able to easily see the difference from the corresponding ones calculated in standard CHPT. In particular we will stress the presence of terms divergent in the chiral limit and of unphysical threshold singularities in Minkowski space--time at infinite volume. This type of singularity has already been discussed in the literature \cite{bg,pipiq}, and has led to the conclusion that quenched CHPT makes sense only in Euclidean space--time. Despite this, we still prefer to calculate amplitudes in Minkowski space--time, considering them as formal expressions. As we just said, this will make the comparison to standard CHPT amplitudes easier; on the other hand, the modifications needed to go to Euclidean space--time can be easily implemented. The plan of the paper is as follows. In Section 2 we outline the main steps from CHPT to its quenched version. We give the leading order Lagrangian and define our notation, both for CHPT and quenched CHPT. In Section 3 we calculate the divergences of qCHPT to one loop using the background field method, while Section 4 contains the list of counterterms for a generic number of flavours $N$ and for $N=3$ and 2. This completes the renormalization of the theory at the one--loop level. In Section 5 we analyze a few quantities to one loop in the case of two degenerate flavours. These are the $\bar{q}q$ condensate, the pion mass and decay constant, the scalar and vector form factors of the pion, and the $\pi \pi$ scattering amplitude. In Section 6 we state our conclusions. There are also three appendices. 
In Appendix A we give a simple derivation of the divergent term proportional to $m_0^2$ in the quenched generating functional. In Appendix B we give the explicit $N$--dependence of the divergences in the non--leptonic weak interactions sector, and guess the divergences in the quenched case by simply dropping any $N$--dependence. Finally, in Appendix C we give the explicit expressions for the one--loop functions which enter the calculations. \renewcommand{\theequation}{\arabic{section}.\arabic{equation}} \setcounter{equation}{0} \section{From CHPT to its quenched version} In this section we introduce the standard notation of Chiral Perturbation Theory, which will also be used in its quenched version. We work in Minkowski space--time in both cases for ease of comparison. For further details on the derivation of the CHPT Lagrangian we refer the reader to the original works by Gasser and Leutwyler \cite{gl84,gl85}. The construction of the CHPT Lagrangian is based on the identification of the symmetry group of the QCD Lagrangian in the chiral limit, which, for $N$ flavours, is given by $U(N)_L\otimes U(N)_R$, and on the well-supported assumption that the symmetry of the subgroup $SU(N)_L\otimes SU(N)_R$ is spontaneously broken to $SU(N)_V$. The extension of this construction to the quenched case was proposed by Bernard and Golterman \cite{qCHPT} on the basis of an observation made by Morel \cite{Morel}. He observed that, formally, a Lagrangian corresponding to quenched QCD can be obtained by adding to the QCD Lagrangian a term which is totally analogous to that for quark fields, but which contains {\em ghost} spin--1/2 fields with the wrong (i.e. bosonic) statistics. The symmetry of the resulting Lagrangian in the chiral limit is larger than that of QCD and is given by the graded group: $U(N|N)_L\otimes U(N|N)_R$, describing transformations between $N$ physical flavours and $N$ ghost flavours. 
It is then assumed that in the quenched case as well the symmetry is spontaneously broken down to the diagonal subgroup $SU(N|N)_V$. As in standard QCD, the $U(1)_A$ symmetry is anomalous. \subsection{Standard CHPT} Chiral Perturbation Theory describes the dynamics of the octet of Goldstone boson fields (pions) of the spontaneously broken chiral symmetry of QCD. It is an expansion in powers of the energy of the Goldstone bosons and the light quark masses. The lowest order CHPT Lagrangian, i.e. at order $p^2$ and linear in the quark masses, can be written in the following form: \begin{equation} {\cal L}_2 = \frac{F^2}{4} \langle D_\mu U D^\mu U^\dagger + U^\dagger \chi + \chi^\dagger U \rangle = \frac{F^2}{4} \langle u_\mu u^\mu + \chi_+ \rangle \; , \label{LCHPT} \end{equation} where $\langle\ldots\rangle$ stands for the trace over flavour indices, $F$ is the bare pion decay constant and the fields are defined as follows: \begin{eqnarray} U= u^2 &=& \exp\left(\sqrt{2} i \phi/F \right) \; , \nonumber \\ D_\mu U &=& \partial_\mu U -i r_\mu U + i U l_\mu \; , \nonumber \\ \chi &=& 2 B_0(s+ip) \; ,\nonumber\\ u_\mu&=&iu^\dagger D_\mu U u^\dagger = u_\mu^\dagger \; , \nonumber \\ \chi_+ &=& u \chi^\dagger u + u^\dagger \chi u^\dagger \; . \label{DEFF} \end{eqnarray} The Lagrangian contains the external sources $s, p, v_\mu, a_\mu$, $r_\mu=v_\mu+a_\mu$, $l_\mu=v_\mu-a_\mu$, which are $N\times N$ matrices, with $N$ the number of flavours. The field $\phi$ is an $N\times N$ matrix that contains the Goldstone boson fields: $\phi =1/\sqrt{2} \sum_{i=1}^{N^2-1} \lambda_i \phi^i$. If desired, one may add to $\phi$ a singlet component, so that $\langle \phi \rangle = \phi_0$. In the presence of a singlet component the Lagrangian in Eq. (\ref{LCHPT}) is invariant under $U(N)_L\otimes U(N)_R$. Since in QCD the $U(1)_A$ subgroup is anomalous, the breaking pattern $U(N)_L\otimes U(N)_R\to SU(N)_L\otimes SU(N)_R\otimes U(1)_V$ is realized. 
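As a purely illustrative aside, the exponential parametrization of Eq. (\ref{DEFF}) can be checked numerically for $N=2$, taking the Pauli matrices as generators and arbitrary small field values: a hermitian, traceless $\phi$ always yields a unitary $U$ with unit determinant. A sketch (numpy only; the exponential is taken through the eigendecomposition of the hermitian $\phi$):

```python
import numpy as np

F = 0.093  # illustrative value of the decay constant

# Pauli matrices as the SU(2) generators lambda_i
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def U_field(phi_components):
    """U = exp(sqrt(2) i phi / F) with phi = (1/sqrt(2)) sum_i lambda_i phi^i."""
    phi = sum(c * s for c, s in zip(phi_components, sigma)) / np.sqrt(2)
    # phi is hermitian: exponentiate via its (real) eigenvalues
    w, V = np.linalg.eigh(phi)
    return V @ np.diag(np.exp(np.sqrt(2) * 1j * w / F)) @ V.conj().T

U = U_field([0.01, 0.02, -0.005])
print(np.allclose(U @ U.conj().T, np.eye(2)))   # unitarity of U
print(np.isclose(np.linalg.det(U), 1.0))        # det U = 1 since tr(phi) = 0
```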
The invariance under the residual unbroken group allows for the presence of extra functions of the singlet component $\phi_0$ only. A possible choice for this Lagrangian, compatible with $P,C,T$ and chiral invariance, is (see also \cite{gl85} for a different choice): \begin{equation} {\cal L}_2= V_1(\phi_0) \langle D_\mu U D^\mu U^\dagger\rangle +V_2(\phi_0) \langle U^\dagger \chi + \chi^\dagger U\rangle -V_0(\phi_0) +V_5(\phi_0) D_\mu\phi_0 D^\mu \phi_0 \; , \label{SCHPT} \end{equation} where all the functions $V_i$ are even and real functions of $\phi_0$. \subsection{Quenched CHPT} The modification needed to construct the quenched version of the CHPT Lagrangian in Eq. (\ref{SCHPT}) for a generic number of flavours $N$ consists of the extension of the chiral symmetry group $SU(N)_L\otimes SU(N)_R\otimes U(1)_V$ to the graded group $\left[ SU(N|N)_L\otimes SU(N|N)_R \right] \odot U(1)_V$, which enlarges the spectrum of the theory to include {\em ghost} states (the $\odot$ stands for the semidirect product of $U(1)_V$, which does not commute with transformations that exchange particles with ghosts). In the quenched case there are $N$ physical flavours and $N$ ghost flavours. Under the graded extension all the $N\times N$ matrices representative of the original $U(N)$ group are transformed into graded $2\times 2$ block matrices \[ A \rightarrow \left( \begin{array}{cc} A & B \\ C & D \end{array} \right) \; , \] whose components are in turn $N \times N$ matrices. The matrices $A$ and $D$ ($B$ and $C$) have bosonic (fermionic) character. The trace is then transformed into supertrace: \[ \mbox{tr} (A) \rightarrow \mbox{str} \left( \begin{array}{cc} A & B \\ C & D \end{array} \right) = \mbox{tr}(A)-\mbox{tr}(D) \; . 
\] The leading order Lagrangian of quenched CHPT can be written in full analogy to the standard CHPT case\footnote{To distinguish between a quenched CHPT quantity and its standard counterpart we use either capital letters (as in $\phi \rightarrow \Phi$) or, when this is not possible, the $s$ subscript (as in $U \rightarrow U_s$).}: \begin{eqnarray} {\cal L}_2&=&V_1(\Phi_0) \mbox{str} (D_\mu U_sD^\mu U_s^\dagger )+V_2(\Phi_0) \mbox{str} (\chi^\dagger_s U_s+U_s^\dagger\chi_s ) -V_0(\Phi_0) \nonumber\\ &&+V_5(\Phi_0) D_\mu\Phi_0 D^\mu \Phi_0 \; , \label{L2} \end{eqnarray} where again $V_i(\Phi_0)$ are even and real functions of the generalized singlet field $\Phi_0$. The graded meson field is defined through the usual exponential representation: \[ U_s = \exp (\sqrt{2} i\, \Phi/F) \; \; , \] where $F$ is the bare quenched pion decay constant and $\Phi$ is now a hermitian non traceless $2\times 2$ block matrix \[ \Phi = \left( \begin{array}{cc} \phi & \theta^\dagger \\ \theta & \tilde\phi \end{array} \right) \; , ~~~\mbox{str}(\Phi) = \Phi_0 = \phi_0-\tilde\phi_0 \; \; , \] which contains the new {\em ghost} states of the quenched spectrum. All the possible quenched meson states carry the quantum numbers of a two particle bound state made up with quarks $q$ or ghost--quarks $\tilde{q}$. On the diagonal sites it contains the physical pseudo--Goldstone boson matrix $\phi$ (i.e. the physical pions including the singlet component), with the quantum numbers of a $q\bar{q}$ pair, and the ghost field matrix $\tilde{\phi}$, with the quantum numbers of a $\tilde{q}\bar{\tilde{q}}$ pair. They are both of bosonic nature. In the off--diagonal sites are the {\em ghost} hybrid fields $\theta$ and $\theta^\dagger$, which carry the quantum numbers of a mixed $\tilde{q}\bar{q}$ and $q\bar{\tilde{q}}$ pair respectively, both of fermionic nature. This spectrum of meson states can be found also in the original derivation by Morel \cite{Morel}. 
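The supertrace is simple to implement, and a short numerical sketch (with arbitrary illustrative blocks) confirms two of its basic properties: it is invariant under block-diagonal (ghost-number preserving) similarity transformations, and it vanishes on the identity, reflecting the cancellation between the $N$ quark and $N$ ghost flavours:

```python
import numpy as np

def str_graded(M, N):
    """Supertrace of a 2N x 2N graded block matrix: tr(A) - tr(D)."""
    return np.trace(M[:N, :N]) - np.trace(M[N:, N:])

N = 2
rng = np.random.default_rng(0)
Phi = rng.normal(size=(2 * N, 2 * N))  # illustrative graded matrix

# Block-diagonal invertible transformation (no particle-ghost mixing)
S = np.zeros((2 * N, 2 * N))
S[:N, :N] = rng.normal(size=(N, N)) + 3 * np.eye(N)
S[N:, N:] = rng.normal(size=(N, N)) + 3 * np.eye(N)

# Invariance under similarity, and str(1) = N - N = 0
print(np.isclose(str_graded(S @ Phi @ np.linalg.inv(S), N), str_graded(Phi, N)))
print(np.isclose(str_graded(np.eye(2 * N), N), 0.0))
```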
Morel calculated the functional integral over the quark and ghost--quark fields (in the leading large--$d$ expansion and strong gauge coupling limit) and obtained exactly the meson spectrum of quenched CHPT, with mesons of composite nature, given by bilinears of quarks/ghost--quarks at the same lattice site. The covariant derivative over the field $U_s$ is defined as $D^\mu U_s = \partial^\mu U_s - i r_s^\mu U_s+iU_s l_s^\mu$, where $r_s^\mu (l_s^\mu )$ is the right(left)--handed external source of the graded group. The field $\chi_s=2B_0(s_s+ip_s)$ contains the external scalar ($s_s$) and pseudoscalar ($p_s$) sources analogously to the ordinary CHPT case. All the external fields $r_s^\mu , l_s^\mu , s_s, p_s$ are generalizations of the standard external fields, in order to make the Lagrangian in Eq. (\ref{L2}) locally invariant under the graded group $\left[ SU_L(N|N)\otimes SU_R(N|N) \right] \odot U(1)_V$. Since we are not interested in studying matrix elements containing the spurious fields as external legs, we will always use the standard external sources only. With this reduction a generic graded source reads as follows: \[ j_s = \left( \begin{array}{cc} j & 0 \\ 0 & 0 \end{array} \right) \; ,\; j= p,v_\mu, a_\mu \; \; . \] For the scalar external source we must recall that it is defined to contain the quark mass matrix ${\cal M}$, which has to be the same both for the quarks and the ghosts: \[ s_s=\left( \begin{array}{cc} {\cal M } + \delta s & 0 \\ 0 & {\cal M} \end{array} \right) \; \; . \] In what follows the quark mass matrix will be taken proportional to the unit matrix: ${\cal M} = m_q { \bf 1}$. All the Goldstone bosons and their ghost counterparts will have the same mass: $M^2=2 B_0 m_q$. We have adopted the usual CHPT notation and call $M^2$ the lowest order term in the expansion of the mass of the pions in powers of quark masses: \begin{equation} M_\pi^2 = 2 B_0 m_q + O(m_q^2) = M^2 + O(m_q^2) \; \; . 
\end{equation} Finally, we expand the functions $V_i(\Phi_0)$ in powers of $\Phi_0$: \begin{eqnarray} V_0(\Phi_0) &=& \frac{m_0^2}{2N_c} \Phi_0^2 +O(\Phi_0^4) \; \; , \nonumber \\ V_{1,2}(\Phi_0) &=& \frac{F^2}{4}+{1\over 2}v_{1,2}\,\Phi_0^2 + O(\Phi_0^4) \; \; , \nonumber \\ V_5(\Phi_0) &=& \frac{\alpha}{2N_c} + O(\Phi_0^2) \; \; , \end{eqnarray} and we shall always work with number of colours $N_c=3$. \renewcommand{\theequation}{\arabic{section}.\arabic{equation}} \setcounter{equation}{0} \section{One--loop divergences} To calculate the ultraviolet divergent part of the quenched generating functional to one loop we use the background field method, i.e. expand the action around the classical solution, which is determined by the external sources through the classical equations of motion. We write the field $U_s$ as: \[ U_s= u_s ~e^{i \Xi}~ u_s \; \; , \] where $\bar{U_s}=u_s^2 $ is the classical solution to the equations of motion. In the absence of spurious external sources it reduces to \[ u_s=\left( \begin{array}{cc} { u } & 0 \\ 0 & {\bf 1} \end{array} \right) \; \; . \] We decompose the fluctuation $\Xi$ similarly to the field $\Phi$ and write: \[ \Xi = \left( \begin{array}{cc} \xi & \zeta^\dagger \\ \zeta & \tilde\xi \end{array} \right) \;\; , \; \; \; \; \mbox{str}(\Xi)= \sqrt{N}(\xi_0-\tilde\xi_0) \; \; , \] (note that with this normalization the $\xi_0$ and $\tilde\xi_0$ have a proper kinetic term). The matrix fields $\xi$ and $\zeta$ are decomposed as follows \begin{equation} \xi =\sum_{a=0}^{N^2-1} \hat\lambda_a \xi^a , \; \; \zeta = \sum_{a=0}^{N^2-1} \hat\lambda_a \zeta^a \; , \end{equation} and the fields $\tilde\xi$ and $\zeta^\dagger$ analogously, where $\hat\lambda_a = \lambda_a/\sqrt{2}, \; \; a=1,\ldots,N^2-1$, and $\hat\lambda_0= {\bf 1}/\sqrt{N}$. 
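The normalization chosen above, $\hat\lambda_a = \lambda_a/\sqrt{2}$ and $\hat\lambda_0={\bf 1}/\sqrt{N}$, makes the generator basis orthonormal with respect to the trace, $\langle \hat\lambda_a \hat\lambda_b \rangle = \delta_{ab}$, which is what produces properly normalized kinetic terms for the fluctuations. A quick numerical check for $N=2$, where the $\lambda_a$ are the Pauli matrices:

```python
import numpy as np

N = 2
pauli = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

# hat-lambda basis: identity/sqrt(N) for a = 0, lambda_a/sqrt(2) for a = 1..N^2-1
lam_hat = [np.eye(N, dtype=complex) / np.sqrt(N)] + [s / np.sqrt(2) for s in pauli]

# Gram matrix of traces: should be the (N^2 x N^2) identity
gram = np.array([[np.trace(a @ b).real for b in lam_hat] for a in lam_hat])
print(np.allclose(gram, np.eye(N * N)))
```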
Given their special character, it is useful to separate the singlet components of the $\xi$ and $\tilde\xi$ fields from the rest, and combine them into one vector: \[ X_0 = \left( \begin{array}{c} \xi_0 \\ \tilde\xi_0 \end{array} \right) \; \;. \] The remaining fields are put into components of the following vectors: \[ \xi^T = (\xi^1, \; \xi^2, \ldots, \; \xi^{N^2-1})\;, ~~~~ \zeta^\dagger = (\zeta^{\dagger \;0}, \; \zeta^{\dagger \;1}, \ldots, \; \zeta^{\dagger \; N^2-1}) \; . \] With this notation the action can be written as \begin{eqnarray} S[\Phi] &=& S[\bar{\Phi}] - \frac{F^2}{4} \int dx \left\{ X_0^T D_X X_0 + \xi^T D_\xi \xi + \xi_0 B^T \xi + \xi^T B \xi_0 + 2 \zeta^\dagger D_\zeta \zeta \right. \nonumber \\ &-& \left. \tilde{\xi}^T (\Box + M^2) \tilde{\xi}\right\} + O(\Xi^3) \; \; . \end{eqnarray} The explicit expressions for the various differential operators $D_{X,\xi,\zeta}$ and the matrix $B$ will be given below. The matrix $B$ induces a mixing between the singlet and non singlet component of the physical meson field. Notice also that the fields $\tilde\xi$ are completely decoupled from the rest: the integration over these degrees of freedom produces only an irrelevant constant. Before deriving the various contributions to the generating functional at one loop we shift the field $\xi$ in order to remove the mixing with the singlet component $\xi_0$. By performing the translation \[ \xi = \xi'- D_\xi^{-1} B \xi_0\; , \] one gets \begin{eqnarray} \xi^T D_\xi \xi + \xi_0 B^T \xi+ \xi^T B \xi_0 = \xi'^T D_\xi \xi'- \xi_0 B^T D_\xi^{-1} B \xi_0\; . \end{eqnarray} In this manner the action up to the quadratic fluctuations becomes a sum of quadratic differential forms diagonal in the fields $X_0, \; \xi, \; \tilde\xi , \; \zeta, \; \zeta^\dagger$. The price to pay is that now the differential operator acting on the singlet field $X_0$ has a nonlocal term. 
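The shift performed above is an ordinary completion of the square. As a sanity check, one can verify the identity symbolically in a finite-dimensional toy model, where $D_\xi$ is replaced by a symmetric $2\times 2$ matrix rather than a differential operator (all symbols below are illustrative):

```python
import sympy as sp

# Toy version of the quadratic form: D a symmetric 2x2 "operator",
# B the mixing vector, xi0 the singlet field, xi' the shifted fluctuation.
d11, d12, d22, b1, b2, xi0, x1, x2 = sp.symbols('d11 d12 d22 b1 b2 xi0 x1 x2')
D = sp.Matrix([[d11, d12], [d12, d22]])
B = sp.Matrix([b1, b2])
xi_p = sp.Matrix([x1, x2])

# The shift xi = xi' - D^{-1} B xi0 used in the text
xi = xi_p - D.inv() * B * xi0

mixed = (xi.T * D * xi + xi0 * B.T * xi + xi.T * B * xi0)[0]
diagonal = (xi_p.T * D * xi_p - xi0 * B.T * D.inv() * B * xi0)[0]

print(sp.cancel(mixed - diagonal))  # 0: the mixing term is removed exactly
```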
Denoting as $\bar{D}_X$ the new non local operator acting on $X_0$ after the shift, the quenched generating functional to one loop can be formally written as follows \begin{equation} e^{iZ^{\mbox{\tiny{qCHPT}}}_{\mbox{\tiny{1~loop}}}} = {\cal N} {\det D_\zeta\over (\det D_\xi )^{1\over 2}(\det \bar{D}_X)^{1\over 2}} \; \;. \label{ZQ} \end{equation} As we will see below, the non locality of $ \bar{D}_X$ will hardly make the calculation of the divergent part more complicated. \subsection{Integral over the $\xi$ fields} The differential operator $D_\xi^{ab}$ is defined as follows\footnote{We remind the reader that the indices $a,\;b$ run from 1 to $N^2-1$. The singlet components are treated separately.} \begin{eqnarray} D_\xi^{ab}\xi_b &=& d_\mu d^\mu\xi^a+\hat{\sigma}^{ab}\xi_b \; \; , \nonumber\\ d_\mu\xi^a &=& \partial_\mu\xi^a+\hat{\Gamma}_\mu^{ab}\xi_b \; \; , \label{D} \end{eqnarray} where \begin{equation} \hat{\Gamma}_\mu^{ab}=-\langle \Gamma_\mu [\hat\lambda^a,\hat\lambda^b ] \rangle \; \;, \; \; \; \; \hat{\sigma}^{ab}= -{1\over 4} \langle [u_\mu ,\hat\lambda^a ] [u^\mu ,\hat\lambda^b ] \rangle +{1\over 4} \langle \{\hat\lambda^a,\hat\lambda^b\}\chi_+ \rangle \; \; , \end{equation} and $\Gamma_{\mu }= 1/2 ( [u^\dagger ,\partial_\mu u ]-i u^\dagger r_\mu u -i u l_\mu u^\dagger )$ is the vector current connection of the covariant derivative over the dynamical fields. The ultraviolet divergent part of the integral over the $\xi$ fields can be derived in closed form by regularizing the determinant in $d$ dimensions and using standard heat--kernel techniques. The result reads: \begin{eqnarray} {i \over 2} \ln \det D_\xi &=& -\frac{1}{(4\pi)^2(d\!-\!4)} \int \! dx \left\{ {N \over 6} \langle \Gamma_{\mu \nu} \Gamma^{\mu \nu} \rangle + {1 \over 2}\left[ {1 \over 4} \langle u_\mu u_\nu \rangle \langle u^\mu u^\nu \rangle + {1 \over 8} \langle u_\mu u^\mu \rangle^2 \right. \right. 
\nonumber \\ &+& {N \over 8} \langle (u_\mu u^\mu)^2 \rangle + {N \over 4} \langle u_\mu u^\mu \chi_+ \rangle + {1 \over 4} \langle u_\mu u^\mu \rangle \langle \chi_+ \rangle \nonumber\\ &+& \left({N \over 8}-{1 \over {2N}} \right) \langle \chi_+^2 \rangle + \left({1 \over 8}+{1 \over {4N^2}} \right) \langle \chi_+ \rangle^2 \nonumber\\ &-& \left.\left. {1\over 2}\langle u_\mu \rangle\langle u^\mu \left( u_\nu u^\nu + \chi_+ \right) \rangle \right] \right\} +\ldots \; \; , \label{lndet1} \end{eqnarray} where the ellipsis stands for contributions which are finite in four dimensions. This result is the standard CHPT result derived in \cite{gl85}, where now we also keep terms proportional to $\langle u_\mu\rangle$ that are nonzero only in the presence of the singlet component. \subsection{Integral over the $\zeta$ fields} The differential operator $D_\zeta^{ab}$ is defined like in Eq. (\ref{D}), but with barred quantities, given by\footnote{Here the singlet component is included, and the indices $a,\;b$ run from 0 to $N^2-1$.} \begin{equation} \bar{\Gamma}_\mu^{ab} = -\langle\Gamma_\mu \hat\lambda^a \hat\lambda^b \rangle \; \;, \; \; \; \; \bar{\sigma}^{ab}= {1\over 4} \langle( u_\mu u^\mu +\chi_+ +4B_0{\cal M}) \hat\lambda^a \hat\lambda^b \rangle \; \; , \end{equation} where we recall that ${\cal M}$ is the quark mass matrix. The ultraviolet divergent part of the functional integral over the $\zeta$ fields can also be given in closed form using standard heat--kernel techniques. The result reads: \begin{equation} i\ln \det D_\zeta=\frac{-1}{(4\pi)^2(d\!-\!4)} \! \int \! dx \left[ {N \over 6} \langle \Gamma_{\mu \nu} \Gamma^{\mu \nu} \rangle +{N \over 16} \langle (u_\mu u^\mu + \chi_+ + 4 B_0 {\cal M} )^2 \rangle \right] +\ldots \; \; . \label{lndetg} \end{equation} As we remarked in Ref. \cite{pl}, the integral over the $\zeta$ fields completely removes the terms linear in $N$ in the divergences of standard CHPT to one loop. 
This dependence is not fully explicit in Eq. (\ref{lndet1}), since a factor $N$ is contained in the trace of $\chi_+$, when we expand this around $s={\cal M}$ and for ${\cal M}$ diagonal: \[ \langle \chi_+ \rangle = 2 N M^2 +O(\phi^2) \; \; . \] This result shows that the qCHPT scheme is coherent: the terms linear in $N$ can only be generated by quark loops, and these are supposed to be absent in the quenched approximation. \subsection{Integral over the $X_0$ fields} After the shift of the $\xi$ field the operator acting on $X_0$ can be written as: \begin{equation} X_0^T \overline{D}_X X_0 = X_0^T \left[ D_X-\frac{1}{2} (1+\tau_3) B^T D_\xi^{-1} B \right] X_0 \; \; , \end{equation} where \begin{eqnarray} D_X &=& D_X^0+A_X \; \; ,\nonumber \\ D_X^0&=& \tau_3(\Box +M^2)+\frac{N}{3}(1-\tau_1)(\alpha \Box +m_0^2) \; \; , \nonumber \\ A_X &=& \frac{1}{4N}(1+\tau_3) \langle \hat\chi_+ \rangle -N(1-\tau_1)\left ( v_1 \langle u_\mu u^\mu \rangle +v_2 \langle \hat\chi_+ \rangle \right ) +O(\Phi_0^2) \; \; , \nonumber \\ B^a &=& \frac{1}{2\sqrt{2N}} \langle \lambda^a \chi_+ \rangle \; \; , \label{DX} \end{eqnarray} and $\hat\chi_+=\chi_+-2M^2\bf{1}$, so that $\langle\hat\chi_+\rangle =\langle\chi_+ \rangle -2NM^2$. The expression of $D_X^0$, the ``free'' part of the differential operator, clearly shows that the theory has a problem here: it is not possible to diagonalize that operator, and we do not have two freely propagating normal fields $(\xi_0, \tilde\xi_0 )$. On the other hand this problem is welcome in this context, since it is thought to be the manifestation of the absence of quark loops in the singlet field propagator, at the level of the effective theory. In the language of Feynman diagrams this problem shows up as a double pole in the propagator of the singlet field, whose consequences on observables have been studied by several authors \cite{Sharpe,qCHPT,bg}. We adopt the usual point of view on this problem, i.e. 
assume that it has to be there, and proceed with the calculation of the divergent part of the generating functional. In this case we cannot apply straightforwardly the heat--kernel techniques, because the differential operator does not reduce to a diagonal Klein--Gordon operator when the external fields are put to zero. Therefore we just expand the logarithm of the differential operator, and isolate the ultraviolet divergent terms: \begin{eqnarray} \mbox{Tr} \ln \left(\overline{D}_X/D_X^0 \right) &=& \mbox{Tr}\left[ {D_X^0}^{-1} (\overline{D}_X\!-\!D_X^0) \right] \nonumber \\ &-&\frac{1}{2} \mbox{Tr}\left[ {D_X^0}^{-1} (\overline{D}_X\!-\!D_X^0){D_X^0}^{-1} (\overline{D}_X\!-\!D_X^0) \right] + \ldots \; . \label{logDX} \end{eqnarray} One can easily see that the ellipsis in (\ref{logDX}) contains ultraviolet finite terms only. We postpone a more detailed discussion of the infrared behaviour of Eq. (\ref{logDX}) to the end of this section. The inverse of the ``free'' operator $D_X^0$ is: \begin{equation} {D_X^0}^{-1} = G_0 \left[ \tau_3 - (1+\tau_1) \frac{N}{3} ( \alpha \Box + m_0^2 ) G_0 \right] \; \; , \label{DX0-1} \end{equation} where \begin{equation} ( \Box +M^2 )_x G_0(x-y) = \delta(x-y) \; \; , \end{equation} and \begin{equation} \overline{D}_X-D_X^0 = A_X-\frac{1}{2} (1+\tau_3) B^T D_\xi^{-1} B \; \; . \label{ABX} \end{equation} As we anticipated above, the overall effect of the shift made to remove the mixing between singlet and nonsinglet fields is easily accounted for. Expanding around the free part of $D_\xi^{-1}$ in the non local term one gets \begin{equation} B^T D_\xi^{-1} B = \frac{1}{4N} G_0 \left[ \langle \chi_+^2 \rangle -\frac{1}{N} \langle \chi_+ \rangle^2 \right] +O(G_0^2) \; \; . \end{equation} The term proportional to $O(G_0^2)$ can only yield ultraviolet finite contributions to Eq. (\ref{logDX}), while the $G_0$ term yields ultraviolet divergent contributions only to the first term of the expansion in Eq. (\ref{logDX}). 
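The expression for ${D_X^0}^{-1}$ in Eq. (\ref{DX0-1}) can be verified symbolically: going to momentum space ($\Box\to -p^2$), the operators become $2\times 2$ matrices in the $\tau$'s, and the product $D_X^0 \, {D_X^0}^{-1}$ collapses to the identity, the terms involving $(\alpha\Box+m_0^2)$ cancelling because $(1-\tau_1)(1+\tau_1)=1-\tau_1^2=0$. A sympy sketch:

```python
import sympy as sp

p2, M2, m0sq, alpha, N = sp.symbols('p2 M2 m0sq alpha N', positive=True)

tau1 = sp.Matrix([[0, 1], [1, 0]])
tau3 = sp.Matrix([[1, 0], [0, -1]])
I2 = sp.eye(2)

box = -p2                 # momentum space: Box -> -p^2
A = box + M2              # Klein-Gordon operator: (Box + M^2) G0 = 1
G0 = 1 / A                # scalar propagator
B = alpha * box + m0sq    # the (alpha Box + m0^2) insertion

# Free singlet operator and the claimed inverse
DX0 = tau3 * A + (N / 3) * (I2 - tau1) * B
DX0_inv = G0 * (tau3 - (I2 + tau1) * (N / 3) * B * G0)

check = (DX0 * DX0_inv - I2).applyfunc(sp.cancel)
print(check == sp.zeros(2, 2))
```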
This argument shows that the UV divergent part is local in the singlet sector as well, and can be given in closed form. The calculation of the ultraviolet divergent part of $\ln \det \overline{D}_X$ is now easy: we simply have to insert back Eqs. (\ref{DX0-1},\ref{ABX}) into Eq. (\ref{logDX}) and keep only the UV divergent parts. Having worked out the traces (over the $\tau$ matrices) we obtain: \begin{eqnarray} \frac{i}{2} \mbox{Tr} \ln \left( \overline{D}_X/D_X^0 \right) &=& -\frac{1}{(4\pi)^2 (d-4)} \left\{ \frac{1}{4N} \langle \chi_+^2 \rangle -\frac{1}{8N^2} \langle \chi_+ \rangle^2 \right. \nonumber \\ && + \frac{m_0^2}{6} \langle \chi_+ \rangle - \frac{\alpha}{12} \langle \chi_+^2 \rangle + \frac{\alpha^2}{72} \langle \hat\chi_+ \rangle^2 \nonumber \\ && -\left. \frac{1}{2} \langle \hat\chi_+ \rangle \left(v_1 \langle u_\mu u^\mu \rangle +v_2 \langle \hat\chi_+ \rangle \right) \right\} + \ldots \; \; , \label{Zsinglet} \end{eqnarray} where the ellipsis contains UV finite terms only. The terms proportional to inverse powers of $N$ exactly cancel those contained in Eq. (\ref{lndet1}), giving a result that is totally $N$--independent. The terms proportional to $m_0^2$ and powers of $\alpha$ are the effect of the double pole in the singlet propagator, and are also $N$--independent. Note that no mixed terms of the type $(m_0^2,\alpha )\times (v_1, v_2)$ can be produced in the divergent part. The term proportional to $m_0^2$ is a term already present in the $O(p^2)$ Lagrangian. To remove that divergence one has to add to the lowest order parameter $B_0$ in the ${\cal L}_2$ Lagrangian a $d$--dependent part proportional to $m_0^2$ that has a pole at $d=4$: \begin{equation} B_0 \rightarrow B_0\left[1 + {\mu^{d-4}\over 16 \pi^2} \frac{1}{d-4} \frac{2m_0^2}{3F^2}+b_0(\mu) \right]\; . 
\label{B0ren} \end{equation} This feature is completely new with respect to standard CHPT (in dimensional regularization), and stems from the fact that in the quenched theory we have a new mass scale that does not vanish in the chiral limit. After the divergence has been removed, we are left with a term of the form $m_0^2 \ln M^2 \langle \chi_+ \rangle$. This term contains all the one--loop quenched chiral logs that have been discussed at length in the literature. Our calculation shows that they can be fully accounted for by defining a renormalized constant $\overline{B}_0$: \begin{equation} B_0 \rightarrow \overline{B}_0 = B_0 \left( 1- \frac{m_0^2}{48 \pi^2 F^2} \ln \frac{M^2}{\mu^2} +b_0(\mu)\right) \; \; . \label{B0bar} \end{equation} Notice that since $B_0$ is independent of the quark masses, $\overline{B}_0$ becomes divergent in the chiral limit. To find evidence for these quenched chiral logs one should try to extract this quantity $\overline{B}_0$ from lattice data. As we will see, the quark condensate and the scalar form factor are two excellent candidates for this, since they are the simplest quantities which are explicitly proportional to $\overline{B}_0$. Other quantities will typically depend on $\overline{B}_0$ through the renormalized pion mass. This at one loop is given by: \begin{equation} M_\pi^2 = 2 \overline{B}_0 m_q +O(m_q^2) \; \; , \end{equation} and is not divergent in the chiral limit. These other quantities are therefore much less suitable for identifying the presence of quenched chiral logs. Of course what we have just said is valid in the specific sector we are studying here. To extend it to other sectors of the effective theory (like the non--leptonic weak interactions) requires further study. However we have a rather simple argument that shows that what has happened here will also happen in other sectors: the quenched chiral logs to one loop contribute to the redefinition of one of the constants appearing in the lowest order Lagrangian. 
In order not to interrupt the discussion here we relegate the argument to Appendix \ref{m0}. \subsection{Complete result} In this section we put together all the various pieces and give the complete result for the ultraviolet divergent part of the generating functional of qCHPT to one loop. The explicit expression for Eq. (\ref{ZQ}) is: \begin{eqnarray} Z^{\mbox{\tiny{qCHPT}}}_{\mbox{\tiny{1~loop}}} &=& -\frac{1}{(4\pi)^2(d\!-\!4)} \int dx \left[ {1 \over 8} \langle u_\mu u_\nu \rangle \langle u^\mu u^\nu \rangle + {1 \over 16} \langle u_\mu u^\mu \rangle^2 \right. \nonumber \\ && + {1\over 8} \left(1 - 4 v_1 \right) \langle u_\mu u^\mu \rangle \langle \hat\chi_+ \rangle + {1\over 16} \left(1 - 8 v_2 \right)\langle \hat\chi_+ \rangle^2 \nonumber \\ &&+{m_0^2\over 6}\langle \chi_+ \rangle +{\alpha^2\over 72} \langle \hat\chi_+ \rangle^2 -{\alpha\over 12} \langle \chi_+^2 \rangle \nonumber\\ && \left.- {1\over 4}\langle u_\mu \rangle \langle u^\mu \left(u_\nu u^\nu + \chi_+\right) \rangle \right] +\ldots \; \; . \label{ZZ} \end{eqnarray} The most striking feature of Eq. (\ref{ZZ}) is the complete flavour independence of the result. If we analyze in detail the modifications that the quenched approximation has produced to the divergent structure of the effective theory at the one--loop level, we come to the following list: \begin{enumerate} \item all the terms proportional to $N$ have been dropped; \item all the terms proportional to $1/N$ and $1/N^2$ have been dropped; \item new divergences proportional to the parameters present in the anomalous singlet sector have been produced. \end{enumerate} All these new parameters are dimensionless, with the only exception of $m_0$. The dimensionless parameters ($\alpha$ and $v_{1,2}$) generate divergences that have the structure of a chiral invariant term (since they do not break the chiral symmetry) of order $p^4$, for obvious dimensional reasons. 
For the same reasons $m_0^2$ generates divergences with the structure of a chiral invariant of order $p^2$. As shown in Appendix \ref{m0}, one can easily understand why it is only the mass term $\langle \chi_+ \rangle$ that is generated. As it turns out, the modifications listed in points 1. to 3. above find a very simple explanation: dropping the terms proportional to $N$ corresponds to dropping virtual quark loops. Dropping the terms proportional to $1/N$ and $1/N^2$ is a consequence of having a singlet degenerate in mass with the nonsinglet pseudoscalars. The new parameters in the singlet sector are required by the $U(1)_A$ anomaly, and the diseases in that sector are generated by the quenched approximation, as is well known. These simple conclusions suggest that one could have guessed all these modifications without doing any calculation. In fact, we provide an example of how one could try such a guess in Appendix \ref{NWI}, where we apply the same criteria to the generating functional of the non--leptonic weak interaction sector for the octet on--shell case (the complete analysis will be given elsewhere \cite{weak}), by going through the three steps we have enumerated above. \subsection{Chiral and threshold divergences} Quenched chiral logs are not the only problem generated by the presence of the double pole in the quenched version of the singlet propagator. As we will see in detail in Sec. 5 through several examples, this double pole also generates other kinds of divergences inside contributions which are ultraviolet finite. These divergences are of two types: powerlike chiral divergences, i.e. inverse powers of $M_\pi^2$, and unphysical threshold divergences. We find it instructive, before closing this section, to identify the terms in the generating functional which are responsible for them. Some of the terms (and in fact an infinite series of them) that we have neglected in Eq. 
(\ref{Zsinglet}) because they are ultraviolet finite, contain this kind of singularity. They can be given in closed form only if one stops at a given order in the expansion in powers of the field $\Phi$. Since in the following sections we are not going to analyze anything beyond the four--point function, we can stop at order $\Phi^4$, and identify explicitly the troublesome terms. They all come from the insertion of the double pole term of (\ref{DX0-1}) in the expansion (\ref{logDX}), and give the following contribution to the generating functional: \begin{eqnarray} \delta Z_{\mbox{\tiny{1 loop}}}^{\mbox{\tiny{qCHPT}}} &=& {(m_0^2-\alpha M^2)\over 24} \int \! dx dy~\tilde{I}_1(x-y) \langle\hat{\chi}_+(y)\hat{\chi}_+(x)\rangle \nonumber\\ &-& \alpha {(m_0^2-\alpha M^2)\over 72} \int \! dx dy~\tilde{I}_1(x-y) \langle\hat{\chi}_+(y)\rangle\langle\hat{\chi}_+(x)\rangle \nonumber\\ &-& {(m_0^2-\alpha M^2)^2\over 144} \int \! dx dy~\tilde{I}_2(x-y) \langle\hat{\chi}_+(y)\rangle\langle\hat{\chi}_+(x)\rangle\!+\!O(\!\Phi^6\!) \; . \label{IRdiv} \end{eqnarray} The functions $\tilde{I}_1(z), \tilde{I}_2(z)$ are defined in Appendix \ref{APP3}. At infinite volume and in Minkowski space--time their Fourier transforms ${I}_1(q^2), {I}_2(q^2)$ develop an imaginary part when $q^2 \geq 4 M_\pi^2 $ which diverges at $q^2 = 4 M_\pi^2$ (see Appendix \ref{APP3}). Moreover, their values at $q^2=0$ are inversely proportional to $M_\pi^2$ (again see Appendix \ref{APP3}): this is the origin of the powerlike chiral divergences that we will find in several observables in Sect. 5. The threshold singularities in particular make the theory meaningless in Minkowski space--time at infinite volume. In finite volume and in Euclidean space--time the same one--loop functions ${I}_{1,2}(q)$ have been evaluated at $q^2=4 M_\pi^2$ in Ref. \cite{bg}, where it was found that these functions give rise to {\em enhanced finite volume} corrections which are forbidden in a healthy Hamiltonian theory. 
As pointed out in Ref. \cite{bg}, this shows that qCHPT can only make sense in Euclidean space--time and in finite volume. \renewcommand{\theequation}{\arabic{section}.\arabic{equation}} \setcounter{equation}{0} \section{Lagrangian at order $p^4$} To complete the renormalization of the quenched theory at order $p^4$ one needs to add the most general chiral invariant Lagrangian at this order. As in the standard CHPT case, some of the couplings appearing in the order $p^4$ Lagrangian have a UV divergent part in such a way that all the one--loop divergences are removed. The most general chiral invariant Lagrangian at order $p^4$ in standard CHPT has been given by Gasser and Leutwyler \cite{gl85}. The extension to the graded symmetry version is not needed here, since we are not going beyond order $p^4$, and are not interested in having the spurious degrees of freedom as external particles\footnote{Moreover, we will not consider singlet fields as external particles. They require at least two more counterterms as shown by Eq. (\ref{ZZ}). }: we can use the standard CHPT Lagrangian right away. There is, however, a slight modification that we have to introduce. As we noted before, the trace of $\chi_+$ starts with a constant term proportional to $N$ in the degenerate mass case we are considering here. In the quenched version a linear dependence upon $N$ is forbidden, and therefore we must always substitute $\langle \chi_+ \rangle \rightarrow \langle \hat\chi_+ \rangle$. Apart from this modification, we have followed existing notations for the choice of the $O(p^4)$ Lagrangian, both in the $SU(3)$ and $SU(2)$ case. The $SU(3)$ choice is the standard Gasser and Leutwyler Lagrangian \cite{gl85}, while for $SU(2)$ we choose to use the Gasser--Sainio--\v Svarc Lagrangian \cite{gss}. An important point concerns the value of the counterterms: we observe that in the quenched case the counterterms do not depend on the number of flavours. 
Not only the divergent part, as we have explicitly shown in the previous section, but also the numerical value of the finite part of the counterterms does not change for different values of $N$. Therefore it is useful to identify them, and give them names, in the general $N$ case. For the more interesting cases of $N=3$ and $N=2$, because of trace relations, one will be able to access only certain combinations of them, as we will specify below. For general $N$ the Lagrangian at order $p^4$ is given by: \begin{equation} {\cal L}_4=\sum_{i=0}^{10} \Lambda^q_i P_i \; \; , \end{equation} where the eleven operators $P_i$ are listed in Table \ref{tab:L4} (we remind the reader that in the quenched case it is necessary to change $\langle \chi_+ \rangle \rightarrow \langle \hat\chi_+ \rangle$). These eleven chiral invariant operators contain, besides those defined in Eq. (\ref{DEFF}), the following new building blocks: \begin{eqnarray} f_{\pm\mu\nu}&=& ul_{\mu\nu}u^\dagger \pm u^\dagger r_{\mu\nu} u \; \; , \nonumber\\ \chi_{-}&=& u^\dagger\chi u^\dagger - u\chi^\dagger u \; \; . \end{eqnarray} To derive the results shown in Table \ref{tab:L4} the following relation is useful: \begin{equation} f_{+\mu\nu}= 2i\, \Gamma_{\mu\nu} -{i\over 2} [u_\mu ,u_\nu ] \; \; , \end{equation} and the identifications $\langle\chi_+^2\rangle = 1/2\, \langle \chi_+^2 +\chi_-^2\rangle$ and $\langle f_+^2\rangle = 1/2\, \langle f_+^2-f_-^2\rangle$ can be made up to contact terms which contain external sources only. \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline &&\multicolumn{2}{c|}{}&&\\ &$P_i$ &\multicolumn{2}{c|}{Coeff. of $-\frac{1}{(4\pi)^2(d-4)}$}& & \\ $i$ &for $SU(N)$&\multicolumn{2}{c|}{}&$SU(3)$&$SU(2)$\\ &&\multicolumn{2}{c|}{~~CHPT~~~~~~~~~qCHPT~~} && \\ &&\multicolumn{2}{c|}{\phantom{~~CHPT}~~~~~~~~$\langle \chi_+ \rangle \to \langle \hat{\chi}_+ \rangle$ } &&\\ \hline &&&&&\\ 0&$\langle u_\mu u_\nu u^\mu u^\nu \rangle$ &$\frac{N}{48}$&0& Eq. 
(\protect{\ref{CH1}}) & Eq. (\protect{\ref{CH2}})\\ &&&&&\\ 1&$\langle u_\mu u^\mu \rangle^2$ &$\frac{1}{16}$&$\frac{1}{16}$&$L_1$ & $\frac{1}{4}l_1$\\ &&&&&\\ 2&$\langle u_\mu u_\nu\rangle\langle u^\mu u^\nu\rangle$&$\frac{1}{8}$&$\frac{1}{8}$&$L_2$ &$\frac{1}{4}l_2$ \\ &&&&&\\ 3&$\langle u_\mu u^\mu u_\nu u^\nu \rangle$ &$\frac{N}{24}$&0&$L_3$& Eq. (\protect{\ref{CH2}}) \\ &&&&&\\ 4&$\langle u_\mu u^\mu\rangle\langle \chi_+\rangle$ &$\frac{1}{8}$&$\frac{1}{8}-{v_1 \over 2}$&$L_4$& $ {1\over 8} l_4$ \\ &&&&&\\ 5&$\langle u_\mu u^\mu \chi_+ \rangle$ &$\frac{N}{8}$&0&$L_5$ &Eq. (\protect{\ref{CH2}}) \\ &&&&&\\ 6&$\langle \chi_+ \rangle^2$ &$\frac{1}{16}+\frac{1}{8N^2}$&$\frac{1}{16}-{v_2\over 2}+ \frac{\alpha^2}{72} $& $L_6$ & $\frac{1}{16} l_3$\\ &&&&&\\ 7&$\langle \chi_- \rangle^2$ &0&0&$L_7$ & $-\frac{1}{16}l_7$\\ &&&&&\\ 8&$ \frac{1}{2} \langle \chi_+^2+ \chi_-^2 \rangle$&$\frac{N}{16}-\frac{1}{4N}$ &$-\frac{\alpha}{12}$ &$L_8$ & Eq. (\protect{\ref{CH2}}) \\ &&&&&\\ 9&$ -i \langle f_+^{\mu \nu} u_\mu u_\nu \rangle $ &$\frac{N}{12}$&0 &$L_9$ & $-\frac{1}{2} l_6$ \\ &&&&&\\ 10&$\frac{1}{4} \langle f_+^2 - f_-^2\rangle $ &$-\frac{N}{12}$&0&$L_{10}$ & $l_5$\\ &&&&&\\ \hline \end{tabular} \end{center} \protect\caption{List of terms of order $p^4$ for $N$ generic, $N=3$ and $N=2$. In the second and third columns we give the coefficient of the divergence coming from the one loop in the standard and quenched CHPT case. As we have indicated in the table, the invariants containing $\langle \chi_+ \rangle$ have to be changed with $\langle \chi_+ \rangle \to \langle \hat{\chi}_+ \rangle$ in the quenched case.} \label{tab:L4} \end{table} \subsection{$SU(3)$ Lagrangian at order $p^4$} For $N=3$ the Lagrangian at order $p^4$ reads as follows: \begin{equation} {\cal L}_4^{(N=3)}= \sum_{i=1}^{10} L_i^q P_i \; \; , \end{equation} where the operators $P_i$ are defined in Table \ref{tab:L4}. 
The $P_0$ operator is linearly dependent on the others through the following trace relation: \begin{equation} P_0=\frac{1}{2}P_1+P_2-2P_3 \; \; , \label{CH1} \end{equation} which implies: \begin{equation} L_1^q=\Lambda_1^q+{1\over 2} \Lambda_0^q \;, \; \; \; L_2^q=\Lambda_2^q+ \Lambda_0^q \;, \; \; \; L_3^q=\Lambda_3^q- 2 \Lambda_0^q \; \; . \end{equation} In order to reabsorb the divergences at one loop we define the $L_i^q$ in the following manner: \begin{eqnarray} L_i^q &=& L_i^{q \; r}(\mu )+ \Gamma_i^q \lambda \; \; , \nonumber \\ \lambda &=& \frac{\mu^{d-4}}{16 \pi^2}\left[\frac{1}{d-4}-\frac{1}{2}\left( \ln 4\pi +\Gamma^\prime(1)+1\right) \right] \; \; , \label{lbar3} \end{eqnarray} where $\mu$ is the renormalization scale, $\lambda$ contains the divergence at $d=4$, and the coefficients $\Gamma_i^q$ are given by \begin{eqnarray} &&\Gamma_1^q=\frac{1}{16}\; \; ,~~~~~~~\Gamma_2^q=\frac{1}{8}\; \; , ~~~~~~~~ \Gamma_4^q=\frac{1}{8}(1-4v_1)\; \; , \nonumber\\ &&\Gamma_6^q=\frac{1}{16}\left( 1-8v_2+\frac{2}{9}\alpha^2\right) \; \; , ~~~~~~~\Gamma_8^q=-\frac{\alpha}{12} \; \; , \label{gamma3} \end{eqnarray} while all the other $\Gamma_i^q$ vanish. \subsection{$SU(2)$ Lagrangian at order $p^4$} For $N=2$ the Lagrangian at order $p^4$ reads as follows: \begin{equation} {\cal L}_4^{(N=2)}= \sum_{i=1}^{7} l_i^q Q_i \; \; , \label{Lctr} \end{equation} where \begin{eqnarray} Q_1&=& \frac{1}{4} \langle u_\mu u^\mu \rangle^2 \; \; , \nonumber \\ Q_2&=& \frac{1}{4} \langle u_\mu u_\nu \rangle \langle u^\mu u^\nu \rangle \; \; , \nonumber \\ Q_3&=& \frac{1}{16} \langle \hat\chi_+\rangle^2 \; \; , \nonumber \\ Q_4 &=& \frac{1}{8} \langle u_\mu u^\mu \rangle \langle \hat\chi_+\rangle \; \; , \nonumber \\ Q_5 &=&\frac{1}{4} \langle f_+^2-f_-^2 \rangle \; \; , \nonumber \\ Q_6 &=& \frac{i}{2} \langle f_+^{\mu \nu} u_\mu u_\nu \rangle \; \; , \nonumber \\ Q_7 &=& -\frac{1}{16} \langle \chi_-\rangle^2 \; \; . 
\label{SU2} \end{eqnarray} To reduce the number of chiral invariants needed we have used the following relations: \begin{equation} P_0=-{1\over2} P_1+P_2 \; \; , ~~~~ P_3 = {1 \over 2} P_1 \; \; , ~~~~ P_5 = {1 \over 2} P_4 \; \; , ~~~~ P_8 = \frac{1}{2} \left( P_6+P_7 \right) \; \; , \label{CH2} \end{equation} which imply the following relations between the $N=3$ and $N=2$ counterterms: \begin{eqnarray} {1 \over 4} l_1^q=L_1^q+{1 \over 2} L_3^q \; \; , ~~ &{1 \over 4} l_2^q=L_2^q \; \; , &{1 \over 16} l_3^q= L_6^q + {1 \over 2} L_8^q \; \; , \nonumber \\ {1 \over 8} l_4^q = L_4^q + {1 \over 2} L_5^q \; \; , ~~~~ &-{1 \over 16} l_7^q = L_7^q+ {1 \over 2} L_8^q \; \; .& \end{eqnarray} Note that the trace relations have been written down using the invariant $\langle \chi_+ \rangle$, and must be reexpressed in terms of $\langle \hat{\chi}_+ \rangle$ in the quenched case. This generates a correction to the constants appearing in the ${\cal L}_2$ Lagrangian, see below. In order to reabsorb the divergences at one loop we define the $l_i^q$ in the following manner: \begin{equation} l_i^q = l_i^{q \; r}(\mu )+ \gamma_i^q \lambda \; \; , \end{equation} with: \begin{eqnarray} &&\gamma_1^q=\frac{1}{4} \; \; ,~~~~~~~~\gamma_2^q=\frac{1}{2} \; \; ,~~~~~~~~ \gamma_3^q=1-8v_2-\frac{2}{3}\alpha+\frac{2}{9}\alpha^2 \; \; ,\nonumber\\ &&~~~\gamma_4^q= 1-4v_1 \; \; ,~~~~~~~~\gamma_7^q = \frac{2}{3} \alpha \; \; , \label{gamma} \end{eqnarray} while all the other $\gamma_i^q$ vanish. For the analysis of the phenomenology we find it useful to introduce the scale--independent constants $\overline{l}_i^q$, defined as follows: \begin{equation} \overline{l}_i^q=\frac{32 \pi^2}{\gamma_i^q}l_i^{q \; r}(\mu ) -\ln\frac{M^2}{\mu^2} \; \; . \label{lbar} \end{equation} As we mentioned above, a complete renormalization at the one--loop level requires, in the quenched case, the renormalization of the order $p^2$ constant $B_0$ due to divergences proportional to $m_0^2$. 
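The scale independence of the $\overline{l}_i^q$ defined in Eq. (\ref{lbar}) can be checked numerically: the definition implies that $l_i^{q\;r}(\mu)$ runs with $dl_i^{q\;r}/d\ln\mu^2=-\gamma_i^q/(32\pi^2)$, so that the explicit $\ln(M^2/\mu^2)$ exactly compensates the running. The following minimal sketch illustrates this cancellation; all numerical values (`gamma`, `M2`, `lr1`, the two scales) are placeholders, not fitted constants:

```python
import math

def lbar(lr_mu, gamma, M2, mu2):
    """Scale-independent constant of Eq. (lbar)."""
    return 32 * math.pi**2 / gamma * lr_mu - math.log(M2 / mu2)

def run_lr(lr_mu1, gamma, mu1_2, mu2_2):
    """One-loop running of l^{q,r} implied by Eq. (lbar):
    d l^r / d ln(mu^2) = -gamma/(32 pi^2)."""
    return lr_mu1 - gamma / (32 * math.pi**2) * math.log(mu2_2 / mu1_2)

gamma, M2 = 0.5, 0.018          # e.g. gamma_2^q = 1/2; M2 of order M_pi^2 (GeV^2)
mu1_2, mu2_2 = 0.77**2, 1.0**2  # two renormalization scales squared (GeV^2)
lr1 = 3.1e-3                    # hypothetical value of l^r at the scale mu1

lr2 = run_lr(lr1, gamma, mu1_2, mu2_2)
# lbar comes out the same at both scales:
assert abs(lbar(lr1, gamma, M2, mu1_2) - lbar(lr2, gamma, M2, mu2_2)) < 1e-12
```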
In the present case, $\langle\chi_+^2\rangle$ has been eliminated with the use of the Cayley--Hamilton relations (\ref{CH2}) in favour of $\langle\hat{\chi}_+\rangle^2$ and $M^2\langle\chi_+\rangle$. The divergence proportional to the latter can also be reabsorbed in the renormalization of the $B_0$ parameter. Since $P_5=4 Q_4 +2 M^2 \langle u_\mu u^\mu \rangle$, the constant $F^2$ receives a finite correction proportional to $L_5^q$. For later convenience, we define here the renormalized constants at order $p^2$ in the two--flavour case, in such a way that they also include finite corrections: \begin{eqnarray} \bar{\cal L}_2&=& \frac{\bar{F}^2}{4} \langle u_\mu u^\mu + \bar{\chi}_+ \rangle \; \; , \nonumber \\ \bar{F}^{\tiny{N=2}}&=&F\left( 1+ 4 L_5^q {M^2 \over F^2} \right) \; \; , \nonumber \\ \bar{B}_0^{\tiny{N=2}}&=& B_0\left[1-{(m_0^2-2\alpha M^2)\over 48\pi^2F^2} \left(\ln {M^2\over \mu^2}+1 \right) \right. \nonumber \\ && \left. ~~~~~~~~ -\left(8 L_5^q+ {\alpha \over 48\pi^2}\right) {M^2 \over F^2} +b_0(\mu ) \right] \; \; , \label{B0BAR2} \end{eqnarray} where, with an obvious notation, $\bar{\chi}_+$ stands for the analogue of $\chi_+$ which contains $\bar{B}_0$ instead of $B_0$. \renewcommand{\theequation}{\arabic{section}.\arabic{equation}} \setcounter{equation}{0} \section{Analysis of various observables in quenched CHPT} \label{phen} In this section we make a complete one--loop analysis of several observables in quenched CHPT. The main reason for this is to study the problems generated by quenching in the finite part of the one--loop corrections, which we have not considered in the generating functional. As we will see, some of the finite corrections diverge in the chiral limit. The origin of these divergences can be traced back to the presence of the double pole in the singlet two--point function. 
The double pole carries in the numerator a new mass scale $m_0$ that does not vanish in the chiral limit, and hence modifies the chiral power counting valid in CHPT. The standard power counting goes as follows: the chiral order of a generic diagram is given by the simple formula \begin{equation} D_g = 4 L-2 I + \sum_d d N_d \; \; , \label{chiral1} \end{equation} where $D_g$ is the chiral dimension of a graph $g$ that has $L$ loops, $I$ internal lines, and $N_d$ vertices of chiral dimension $d$. The topological relation \begin{equation} L=I-V+1 \; \; , \end{equation} where $V=\sum_d N_d$ is the total number of vertices, can be used to obtain \begin{equation} D_g = 2 L + \sum_d (d-2) N_d + 2 \; \; . \label{chiral_order} \end{equation} Since in standard CHPT the lowest chiral dimension of a vertex is two, the chiral dimension of a graph is never smaller than two, and increases with the number of loops and vertices with chiral dimension bigger than two. In quenched CHPT the situation changes, and we have to allow for the presence of vertices with chiral order zero, i.e. the insertions on the singlet propagators proportional to $m_0^2$ (that is a constant in the chiral limit). In this case Eqs. (\ref{chiral1}) through (\ref{chiral_order}) are still valid, but due to the presence of terms with $d=0$, $D_g$ may now be smaller than two, and even negative. Naively one could conclude that $D_g$ could even be unbounded from below. However, one has to take into account the fact that virtual quark loops are forbidden: this puts a series of constraints on the type of graphs with $m_0^2$ insertions that are allowed. For example, it is not possible for two $m_0^2$ vertices to lie one after the other on the same line, nor can a standard vertex have all of its outgoing lines ending on $m_0^2$ vertices.\footnote{There is one exception to this, given by vertices with physical external sources. 
In this case disconnected quark loops are allowed, since they are not generated by the QCD determinant (see Section \ref{three}).} These constraints are such that $D_g$ comes out to be bounded from below, although it may be negative. The value of the lower bound depends on the observable -- we will see explicit examples below. In what follows we are going to analyze: the quark condensate, the pion mass and decay constant, the vector and scalar form factors of the pion, and the $\pi\pi$ scattering amplitude. Although these quantities (with the exception of the form factors) were already analyzed at the one--loop level in previous works \cite{Sharpe,qCHPT,bg}, we find it useful to present them here again, in view of the renormalization that we have performed at the level of the generating functional, and also of our definition of the Lagrangian at order $p^4$. We make the analysis in the case of two light flavours with degenerate masses. \subsection{Quark condensate, pion mass and decay constant} \label{onetwo} As anticipated in the previous section, the renormalized scalar quark condensate plays a crucial role among the quenched observables in the strong sector, since it contains an explicit dependence upon the quenched chiral logarithms through the $\bar{B}_0$ parameter (\ref{B0BAR2}) (everywhere in this section we shall use the $\bar{B}_0$ parameter as defined in Eq. (\ref{B0BAR2}), dropping the $N=2$ superscript). We shall see later in the case of the scalar form factor that all the $\bar{q} q$ matrix elements share the same feature. The renormalized scalar density to one loop in the two--flavour case is given by \begin{equation} \langle \bar{q}q\rangle_q = -F_\pi^2 \bar{B}_0 \left[1 + O(M^2) \right] \; \; , \label{QQBAR} \end{equation} where we have not written down explicitly the standard chiral corrections of order $M^2$. 
The problem with these corrections is that they contain contributions coming from counterterms of order $p^4$ that involve only external fields (we have not written them down in the previous section). These counterterms cannot be determined on a phenomenological basis: their presence in the expression of the quark condensate reflects the fact that, away from the chiral limit, this quantity cannot be defined unambiguously. We refer the reader to Ref. \cite{gl84} for a detailed discussion of this point. On the other hand, in the chiral limit, where this ambiguity disappears, the quark condensate diverges due to the quenched chiral logarithms inside $\bar{B}_0$. The pion decay constant to one loop is renormalized only by a finite amount in the quenched two--flavour case: $F_\pi=\bar{F}$, see Eq. (\ref{B0BAR2}). Notice that in the quenched three--flavour case there is no need to define an $\bar{F}$, but on the other hand $L_5^q$ directly contributes to $F_\pi$ in such a way that for $N=3$ and $N=2$ (as for any other $N$) one has the same pion decay constant, as expected\footnote{We thank Martin L\"uscher for pointing out an inconsistency on this point in the first version of the manuscript.}. The diagrams which renormalize the pion mass to one loop are shown in Fig. \ref{mpi}. \begin{figure}[th] \epsfxsize 9 cm \epsfysize 1.4 cm \begin{picture}(50,15) \end{picture} \epsffile{mpi.ps} \protect\caption{One--loop diagrams in quenched CHPT that contribute to the squared pion mass $M_\pi^2$. They are the meson tadpole, its ghost counterpart and the tadpole with one singlet vertex $(\times )$ insertion.} \protect\label{mpi} \end{figure} The meson tadpole and its ghost counterpart cancel each other, so that the renormalization of the quenched pion mass at one loop is provided by the tadpole with one singlet vertex insertion and its counterterm: \begin{equation} M_\pi^2 = 2\bar{B}_0 m_q \; \; , \label{MASS} \end{equation} where $m_q$ is the light quark mass. 
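The interplay between the divergence of $\bar{B}_0$ and the finiteness of the pion mass in the chiral limit can be illustrated numerically. The sketch below uses the simplified form of Eq. (\ref{B0bar}) with $b_0=0$ and $\alpha=0$; all parameter values are hypothetical placeholders (roughly GeV units), not fitted to data:

```python
import math

# Hypothetical inputs (GeV units), not fitted to data.
B0, F, m0, mu = 1.0, 0.093, 0.6, 0.77

def B0bar(M2):
    """Simplified renormalized B0 of Eq. (B0bar): only the quenched chiral log kept."""
    return B0 * (1.0 - m0**2 / (48 * math.pi**2 * F**2) * math.log(M2 / mu**2))

def Mpi2(mq):
    """One-loop pion mass squared, Eq. (MASS), with lowest-order M^2 = 2 B0 m_q."""
    return 2.0 * B0bar(2.0 * B0 * mq) * mq

mq_values = [1e-2, 1e-4, 1e-6, 1e-8]
# B0bar grows without bound as m_q -> 0 (the quenched chiral log) ...
assert all(B0bar(2*B0*b) > B0bar(2*B0*a) for a, b in zip(mq_values, mq_values[1:]))
# ... while M_pi^2 ~ m_q log m_q still vanishes in the chiral limit:
assert all(Mpi2(b) < Mpi2(a) for a, b in zip(mq_values, mq_values[1:]))
```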
As one can see, all the one--loop corrections, including the quenched chiral logarithm, have been reabsorbed in $\bar{B}_0$. Since $\bar{B}_0 m_q \sim m_q \log m_q$ when approaching the chiral limit, the renormalized pion mass squared tends to zero like $m_q \log m_q$. No divergence is produced by quenching in the behaviour of the renormalized pion mass in the chiral limit, although the way it approaches zero is different from that of standard CHPT. Once $M_\pi^2$ is fixed to its physical value no residual quenched chiral logarithms are left in the strong sector (with the mentioned exception of $\bar{q}q$ matrix elements). In Appendix \ref{NWI} it is shown that the same situation occurs in the weak $\Delta I=1/2$ sector, where additional quenched chiral logarithms can be reabsorbed in the renormalization of the weak mass term. \subsection{Scalar form factor} \label{three} The scalar form factor of the pion is defined by the matrix element of the $\bar{q}q$ density between two pion states \begin{equation} \langle \pi^i(p^\prime )|\bar{q}q|\pi^k(p)\rangle = \delta^{ik} F_S(t) \; \; , \label{SM} \end{equation} where $t=(p-p')^2$. In quenched CHPT the complete set of one--loop diagrams which contribute to $F^q_S(t)$ is shown in Fig. \ref{ffs}. \begin{figure}[ht] \epsfxsize 11 cm \epsfysize 9 cm \begin{picture}(50,15) \end{picture} \epsffile{ffs.ps} \protect\caption{One--loop diagrams in quenched CHPT which contribute to $F_S(t)$ (the box stands for the scalar source insertion). They are the ``standard'' meson loop diagrams (first line) to which one has to add the corresponding fermionic ghost loop diagrams, the singlet insertion diagrams (second line) and the diagrams with $v_1$, $v_2$ vertex insertions (third line).} \protect\label{ffs} \end{figure} An explicit calculation shows that the fermionic ghost loops do not fully cancel the corresponding meson loop diagrams. The reason for this mismatch is best understood within the quark--flow diagram picture. 
Here, the physical scalar source only couples to the quark lines and not to the ghost lines. The possible one--loop diagrams are the ones listed in Fig. \ref{ffsquark}. \begin{figure}[ht] \epsfxsize 13 cm \epsfysize 2 cm \begin{picture}(50,15) \end{picture} \epsffile{ffsquark.ps} \protect\caption{One--loop diagrams which contribute to $F_S(t)$ in the quark--flow diagram picture. Diagram $(b)$ remains present in the quenched approximation, while the others disappear.} \protect\label{ffsquark} \end{figure} Diagram $(b)$, where the scalar source is coupled to the internal disconnected closed quark line, has no corresponding ghost loop diagram: this is correct because the loop is not produced by the fermionic determinant, and must therefore be present also in the quenched approximation. The complete renormalized quenched scalar form factor can be written as follows \begin{eqnarray} {F}^q_S(t) &=&F_S^q(0)\left\{ 1+{\bar{J}(t)\over F_\pi^2}\left [ {1\over 2} \gamma_4^q \left ( t-2M_\pi^2\right ) + \gamma_3^q M_\pi^2\right ] +t{\gamma_4^q \over 32\pi^2 F_\pi^2}(\bar{l}_4^q-1) \right . \nonumber\\ &&-{2\over 3}{M_\pi^2\over F_\pi^2} \bar{I}_1(t)(m_0^2-\alpha M_\pi^2) \left ( 1-{2\over 3}\alpha \right ) \nonumber\\ &&\left . +{2\over 9} {M_\pi^2\over F_\pi^2} \bar{I}_2(t)(m_0^2-\alpha M_\pi^2)^2 \right\} +O(t^2) \; \; , \end{eqnarray} where the coefficients $\gamma_i^q$ have been defined in Eq. (\ref{gamma}) and $F_S^q(0)$ is given by \begin{eqnarray} {F}^q_S(0) &=&2\bar{B}_0\left\{ 1 -{ (m_0^2-\alpha M_\pi^2)\over 48\pi^2F_\pi^2} \left ( 1-{2\over 3}\alpha \right ) +{1\over 9} { (m_0^2-\alpha M_\pi^2)^2 \over 48\pi^2 F_\pi^2 M_\pi^2} \right . \nonumber\\ &&\left . +{M_\pi^2\over 16\pi^2F_\pi^2}\left [ \gamma_3^q\,(\bar{l}_3^q-1) - \gamma_4^q\,(\bar{l}_4^q-1) \right ]\right\} \; \; . \label{FS0} \end{eqnarray} Note that we are working in the degenerate mass case, so that no isospin breaking effect has been taken into account. 
In standard CHPT there is no isospin breaking correction to the scalar form factor at this order of the expansion. In passing, we note that also in the quenched case there is no isospin breaking contribution linear in $m_u-m_d$ to the pion scalar form factor, just as in CHPT, while an isospin breaking correction of order $(m_u-m_d)^2$ is produced via the $(\phi_0\, ,\phi_3)$ mixing for neutral pions by the chiral invariant operator $P_7$ in Table \ref{tab:L4}\footnote{Note that also the neutral pion mass $M_{\pi^0}^2$ gets next--to--leading corrections of order $O\left ((m_u-m_d)^2\right )$ from $P_7$ in the quenched case.}. The functions $\bar{J}(t)$, $\bar{I}_1(t)$ and $\bar{I}_2(t)$ are finite and are defined in Appendix \ref{APP3}. The two functions $\bar{I}_1(t)$ and $\bar{I}_2(t)$ are peculiar to quenched CHPT. They will also appear in the $\pi\pi$ scattering amplitude, where we shall analyze in some detail the various sicknesses from which they suffer. Here we have used their low--momentum expansion to define the scalar form factor at $t=0$. The scalar form factor is a good example to analyze the modifications produced by the quenched approximation to an observable at the one--loop level. First, the pion loops have been only partially cancelled; therefore the ordinary chiral logarithms and the one--loop function $\bar{J}(t)$ do appear in the same way as in standard CHPT, but with different coefficients (these coefficients may even vanish in particular cases, as happens for $M_\pi$ and $F_\pi$). Second, quenched chiral logarithms appear at one loop, but they can be reabsorbed in the renormalization of the ${B}_0$ parameter, as we have demonstrated in the previous section. Besides quenched chiral logs, the remaining finite loop corrections arising from the anomalous singlet sector and proportional to $m_0^2$ are even more problematic, since they have negative chiral dimension, as anticipated in the general discussion above. 
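The chiral dimension bookkeeping of Eq. (\ref{chiral_order}) can be made concrete in a few lines of code. The sketch below is our own illustrative helper (not part of any standard package): it counts the dimension of a graph from its loop number and vertex content, and checks that a one--loop graph with two ordinary $d=2$ vertices and two $d=0$ insertions of $m_0^2$ sits two chiral orders below a tree--level graph with a single $d=2$ vertex:

```python
def chiral_dimension(L, vertices):
    """Chiral dimension of a graph: D_g = 2L + sum_d (d-2) N_d + 2,
    where `vertices` maps the chiral dimension d of a vertex type to its
    multiplicity N_d (Eq. chiral_order)."""
    return 2 * L + sum((d - 2) * n for d, n in vertices.items()) + 2

# Tree-level graph: no loops, one d=2 vertex -> D = 2.
D_tree = chiral_dimension(L=0, vertices={2: 1})
assert D_tree == 2

# One-loop graph with two d=2 vertices and two m_0^2 (d=0) insertions on the
# internal singlet lines: D = 2 + 0 - 4 + 2 = 0.
D_loop = chiral_dimension(L=1, vertices={2: 2, 0: 2})
assert D_loop == 0

# Relative to tree level this graph has chiral dimension -2, i.e. it scales
# like 1/M_pi^2 in the chiral limit:
assert D_loop - D_tree == -2
```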
It is a simple exercise to calculate the chiral dimension of the one--loop diagram with two $m_0^2$ insertions on the two internal singlet lines (this is the central graph in the second line of Fig. \ref{ffs}): with respect to the tree level graph this has chiral dimension $-2$. These corrections diverge in the chiral limit like an inverse pion mass squared, see Eq. (\ref{FS0}). In fact, there is an infinite series of graphs that has the same chiral dimension: these graphs are obtained from this one by adding any even number of singlet lines (each one with one $m_0^2$ insertion) between the two vertices. Also the insertion of tadpoles and sunset diagrams with the maximum allowed number of $m_0^2$ insertions does not change the chiral dimension of the starting diagram. As far as we can see, this series of graphs is also the one with the lowest chiral dimension for the scalar form factor. This example shows that, even though the general formula (\ref{chiral_order}) allows $d=0$ vertices, in the quenched case the chiral dimension of amplitudes is bounded from below. It is also interesting to look at the slope of the scalar form factor at low momenta in the quenched case. This defines the scalar radius as follows \begin{equation} F_S^q(t)=F_S^q(0)\left [ 1+{t\over 6}\langle r^2\rangle_S^q+O(t^2)\right ] \; \; . \end{equation} The scalar radius in the quenched approximation at one loop is given by: \begin{eqnarray} \langle r^2\rangle^q_S&=& {1\over 16\pi^2 F_\pi^2}\left [ \gamma_3^q + \gamma_4^q\, (3 \bar{l}_4^q-4) -\left( 1-{2\over 3}\alpha \right) {(m_0^2-\alpha M_\pi^2) \over 3 M_\pi^2} \right .\nonumber\\ &&\left . ~~~~~~~~~~~~~~~~~~~ + {4\over 45}{(m_0^2-\alpha M_\pi^2)^2\over M_\pi^4} \right] \; \; . \end{eqnarray} In standard CHPT the scalar radius diverges in the chiral limit because of the presence of $t$--dependent chiral logarithms. It behaves like: \begin{equation} \langle r^2\rangle_S = -{3\over 8\pi^2 F_\pi^2}\,\ln M_\pi^2+\ldots \; \; . 
\end{equation} In the chiral limit the one--loop contribution to the quenched scalar radius diverges not just logarithmically as in the standard case, but like an inverse power of the pion mass: \begin{equation} \langle r^2\rangle^q_S\vert_{M_\pi\to 0} \sim {1\over 16\pi^2 F_\pi^2}\left [ {4\over 45}{m_0^4\over M_\pi^4}-{1\over 3}\left ( 1-{2\over 15}\alpha\right ) {m_0^2\over M_\pi^2} -3\gamma_4^q \log M_\pi^2\right ] + \ldots \; \; . \end{equation} The origin of this power--like divergence in the chiral limit is the same as that of the form factor at $t=0$. Here it is more severe simply because the definition of the radius implies a derivative with respect to $t$. It is interesting to note that in quenched CHPT the Feynman--Hellmann theorem \cite{gl84,FH} does not hold: \begin{equation} {F}^q_S(0) \neq {\partial M_\pi^2\over \partial m_q} \; \; , \end{equation} as one can easily verify by comparing Eq. (\ref{MASS}) and Eq. (\ref{FS0}). The origin of the violation of this theorem lies in the presence of diagram $(b)$ of Fig. \ref{ffsquark} in the quenched scalar form factor. This graph cannot be obtained by taking a derivative of $M_\pi^2$ with respect to $m_q$, since the quark loop is not present in $M_\pi^2$ and cannot be resurrected by a derivative. \subsection{Vector form factor} The vector form factor of the pion is defined in terms of the matrix element of the vector current $V_\mu^k = \bar{q}\gamma_\mu{\lambda^k\over 2} q$ between two pion states: \begin{equation} \langle \pi^i(p^\prime )| V_\mu^k|\pi^l(p)\rangle_q = i\epsilon^{ikl}(p_\mu +p_\mu^\prime ) F^q_V(t) \; \; , \end{equation} where $t=(p-p^\prime)^2$. The divergent contributions to $F^q_V(t)$ can be derived from the expression (\ref{ZZ}) of the quenched generating functional in the usual way. It is an easy exercise to show that these contributions are zero. 
In fact, the only chiral invariant which can give corrections at order $p^4$ is the operator number 9 in the list of Table \ref{tab:L4}, which has no divergent term in the quenched limit. In a Feynman diagram approach the graphs which contribute to one loop are shown in Fig. \ref{ffv}. \begin{figure}[t] \epsfxsize 11 cm \epsfysize 6 cm \begin{picture}(50,15) \end{picture} \epsffile{ffv.ps} \protect\caption{One--loop diagrams in quenched CHPT that contribute to $F^q_V(t)$ (the box stands for the vector source insertion). They are the ``standard'' meson loop diagrams (first line) and the corresponding fermionic ghost loop diagrams (second line). No singlet component can run in the loop.} \protect\label{ffv} \end{figure} The complete calculation gives zero, because of the systematic cancellation of each pion loop with the corresponding ghost loop. In addition, since no singlet component can run in the loop, there is no extra contribution coming from the anomalous singlet sector. The quenched vector form factor for $N=2$ can be written as follows \begin{equation} F^q_V(t) = 1-{l_6^q\over F_\pi^2} t + O(p^4)\; \; , \end{equation} where the finite counterterm $l_6^q$ is defined in Table \ref{tab:L4} and Eq. (\ref{SU2}). Again, no isospin breaking effects have been taken into account. In standard CHPT the Ademollo--Gatto theorem \cite{AD} guarantees that they are absent at this order. In quenched CHPT the theorem is also valid. Note that the counterterm $P_7$ cannot contribute at all to the vector current matrix element, while the new chiral invariant term $\langle u_\mu\rangle\langle u^\mu\chi_+\rangle$ induced by the dynamical singlet component gives $O\left( (m_u-m_d)^2\right )$ corrections to the decay amplitude $\pi^+\to \pi^0 e\nu$ via the $(\phi_0\, ,\phi_3 )$ mixing. Since the vector form factor does not receive contributions from singlets running inside the loop at the one--loop level, it does not show any divergence in the chiral limit. 
The situation, however, changes at two loops, where we have, among others, the graphs shown in Fig. \ref{ffv2l}. The most dangerous graph is the fish diagram with two $m_0^2$ insertions (the last of Fig. \ref{ffv2l}), which has chiral dimension zero with respect to the tree level. Again this is only the first example of a full series of graphs which have the same chiral dimension: they are obtained from the starting one by inserting any even number of singlet lines between the same two vertices as those of the two--loop fish diagram, or tadpoles and sunset diagrams all with the maximum allowed number of $m_0^2$ insertions. In this case there are no graphs that are more singular than these in the chiral limit. \begin{figure}[t] \epsfxsize 14 cm \epsfysize 3.2 cm \epsffile{ffv2l.ps} \protect\caption{Two--loop diagrams in quenched CHPT with the $m_0^2$ singlet vertex ($\times$) insertions that give divergent contributions to the e.m. charge radius in the chiral limit. They are tadpoles, which generate quenched chiral logs, and the fish diagrams, which also generate power--like divergences. For each diagram the chiral dimension with respect to the tree level is given.} \protect\label{ffv2l} \end{figure} The low energy representation of $F^q_V(t)$ also determines the electromagnetic charge radius of the pion in the quenched approximation \begin{equation} F^q_V(t) = 1+{t\over 6} \langle r^2\rangle^q_V +O(t^2) \; \; . \end{equation} In standard CHPT the presence of $t$-dependent chiral logarithms makes the electromagnetic charge radius diverge in the chiral limit \cite{gl84} \begin{equation} \langle r^2\rangle_V = -{1\over 16\pi^2F_\pi^2} \log M_\pi^2 +\ldots \; \; . \end{equation} The divergence of the electromagnetic charge radius in full QCD can be understood in a physically intuitive way. The charge distribution is cut off by the Yukawa potential $\sim e^{-M_\pi r}$ at large distances.
In the chiral limit $M_\pi$ goes to zero and the Yukawa potential is no longer effective: the charge distribution falls off like a power of the distance and the charge radius becomes infinite. The charge distribution of the pion cloud surrounding any particle gets modified by quenching. As a consequence, the behaviour of the charge radius in the chiral limit is modified. In the quenched case the one--loop contribution gives \begin{equation} \langle r^2\rangle^q_V = -{6 l_6^q\over F_\pi^2} \; \; , \end{equation} which stays finite in the chiral limit. The situation changes at two loops and higher: the graphs that we have discussed above, which have chiral dimension zero with respect to the tree level (like the two--loop fish diagram), do generate power--like divergences in the chiral limit. At two loops we are going to have a behaviour like: \begin{equation} \langle r^2\rangle_V^{q~{\mbox{\tiny{2 loop}}}}\vert_{M_\pi\to 0} \sim {1\over (16\pi^2 F_\pi^2)^2} \left ( d_1\,{(m_0^2/N_c)^2\over M_\pi^2} +d_2\, {m_0^2\over N_c}\, \ln M_\pi^2\right ) \; \; , \end{equation} where presumably also at this order the chiral logs could be reabsorbed in the renormalization of some order $p^4$ constants. \subsection{The $\pi\pi$ scattering amplitude} \label{pion} The $\pi\pi$ scattering amplitude is another example of an observable where one can find all the typical effects of quenching. Moreover, it is an interesting quantity by itself, since a comparison of the prediction for the two $S$--wave scattering lengths with existing lattice calculations \cite{fuku} is possible. The presence of ``standard'' chiral logs even in the quenched theory has to be interpreted as due to diagrams with pion loops that do not contain quark loops. For the $\pi \pi$ scattering amplitude an example is given in Fig. \ref{fig1}.
\begin{figure}[t] \epsfxsize 11 cm \epsfysize 3 cm \begin{picture}(50,15) \end{picture} \epsffile{graph.ps} \protect\caption{Two examples of pion loop graphs contributing to $\pi \pi$ scattering in the quark--flow diagram picture (all lines are quark lines). Diagram (a) does not contain quark loops, whereas diagram (b) does.} \label{fig1} \end{figure} The one--loop contributions to the $\pi\pi$ scattering amplitude in quenched CHPT come from the diagrams shown in Fig. \ref{pipi}. \begin{figure}[t] \epsfxsize 14 cm \epsfysize 11 cm \begin{picture}(30,15) \end{picture} \epsffile{pipia.ps} \protect\caption{One--loop diagrams in quenched CHPT which contribute to the $\pi\pi$ scattering amplitude in the two degenerate flavour case. They are the ``standard'' meson loop diagrams (first line) to which one has to add the corresponding fermionic ghost loop diagrams, the singlet vertex $(\times )$ insertion diagrams (second line) and the diagrams with one $v_1$, $v_2$ vertex insertion (third line).} \protect\label{pipi} \end{figure} The scattering amplitude at tree level is the same as in standard CHPT \begin{equation} A^{\mbox{\tiny{tree}}}(s,t,u) = {s-M^2\over F^2} \; \; , \end{equation} where $M$ and $F$ are the bare pion mass and decay constant. 
The renormalized scattering amplitude in quenched CHPT and in the two degenerate flavour case can be written as follows \begin{equation} A(s,t,u)={s-M_\pi^2\over F_\pi^2}+ B(s,t,u)+C(s,t,u)+O(p^6) \; \; , \end{equation} where \begin{eqnarray} B(s,t,u)&=& {\bar{J}(s)\over 4F_\pi^4} \left\{ s^2-16M_\pi^2 v_1 (s-2M_\pi^2) +4M_\pi^4\left ( \gamma_3^q -1\right ) \right\} \nonumber\\ && +{1\over 4F_\pi^4} \left\{ \bar{J}(t)\left(t-2M_\pi^2\right)^2 +\bar{J}(u) \left(u-2M_\pi^2\right)^2 \right\} \nonumber\\ && +I_1(s){2M_\pi^4\over 3F_\pi^4} \left( m_0^2-\alpha M_\pi^2\right) \left({2\over 3}\alpha -1\right) +I_2(s){2M_\pi^4\over 9F_\pi^4} \left( m_0^2-\alpha M_\pi^2\right)^2 \; \; , \nonumber\\ &&\nonumber\\ C(s,t,u)&=&{1\over 128\pi^2F_\pi^4}\left\{ {\vrule height1.08em width0em depth1.08em} s^2 (2\bar{l}_1^q+\bar{l}_2^q-3) +(t-u)^2 (\bar{l}_2^q-1) \right .\nonumber\\ && +8sM_\pi^2\left [1-\bar{l}_1^q + \gamma_4^q\, (\bar{l}_4^q -1)\right ] \nonumber\\ &&\left . +8 M_\pi^4\left [ \bar{l}_1^q-1+ \gamma_3^q\, (\bar{l}_3^q-1) -2 \gamma_4^q\, (\bar{l}_4^q-1) \right ] {\vrule height1.08em width0em depth1.08em} \right\} \; \; . \end{eqnarray} For a definition of the functions $\bar{J}(q^2)$, $I_1(q^2)$ and $I_2(q^2)$ see Appendix \ref{APP3}. The functions $I_1(s)$ and $I_2(s)$ arise from diagrams with one and two $m_0^2$ insertions on the two internal singlet lines in the $s$--channel respectively (see Fig. \ref{pipi}). Note that everything is expressed in terms of the renormalized squared pion mass $M_\pi^2$ given by Eq. (\ref{MASS}) and $F_\pi = \bar{F}$. Note also that any dependence upon quenched chiral logarithms has been again reabsorbed in the $\bar{B}_0$ parameter contained in the renormalized pion mass, as expected. The function $C(s,t,u)$ contains only polynomial contributions, while the invariant function $B(s,t,u)$ is the quenched analogue of the unitarity correction to the scattering amplitude in ordinary CHPT. 
It is important to note that unitarity is destroyed by the quenched approximation: the structure of the cuts in the one--loop amplitude is not related via unitarity to the real part of the tree level amplitude. Moreover, one can easily verify that the Fermi--Watson theorem, which relates, e.g., the imaginary part of the vector and scalar form factors to those of the corresponding partial waves of $\pi \pi$ scattering, is not valid in this case. In this particular example the violation of unitarity is also immediately seen in the presence of the finite functions $I_1(q^2)$ and $I_2(q^2)$, which are not generated in ordinary CHPT. They have a nonzero imaginary part for $s\geq 4M^2$ that has a singularity at $s=4 M^2$ (of the type $(s-4M^2)^{-1/2}$ and $(s-4M^2)^{-3/2}$, respectively, see Appendix \ref{APP3}), which is a pure quenching artifact. These singularities have already been identified in \cite{bg,pipiq}. Here we have rederived them in the $\alpha\neq 0$ case and inserted them into the complete formula for the amplitude. Interesting quantities to be extracted from the $\pi\pi$ scattering amplitude are the $S$--wave scattering lengths. In Ref. \cite{pl} we calculated the coefficients of the chiral logarithms which arise in the quenched case and made the comparison with standard CHPT. Here we give the complete expression of the $S$--wave scattering lengths in the isospin $I=0,2$ channels to one loop and comment on the anomalous behaviour of the isospin amplitude in the $I=0$ channel (which was already noted in Ref. \cite{bg}). The $I=0,2$ amplitudes are expressed in terms of the invariant amplitude $A(s,t,u)$ as follows \begin{eqnarray} T^0(s,t)&=&3A(s,t,u)+A(t,u,s)+A(u,s,t) \; \; , \nonumber\\ T^2(s,t)&=&A(t,u,s)+A(u,s,t) \; \; .
\end{eqnarray} The pion scattering lengths $a^I_l$ for a given isospin $I$ and angular momentum $l$ are defined by the behaviour of the partial wave amplitudes near threshold \begin{equation} {\rm Re}\, t^I_l(s) = q^{2l} \left\{ a^I_l +q^2 b^I_l +O(q^4)\right\} \; \; , \end{equation} which enter the expansion in partial waves of the isospin amplitude \begin{equation} T^I(s,t)= 32\pi \sum_{l=0}^\infty (2l+1) P_l(\cos\theta ) t^I_l(s) \; \; . \end{equation} For more details about the notation we refer the reader to Ref. \cite{gl84}. The scattering amplitude in the $I=0$ channel contains the amplitude $A(s,t,u)$ and therefore in the quenched case acquires a pathological threshold behaviour due to the presence of the functions $I_1(s)$ and $I_2(s)$. These functions do not contribute to the $I=2$ amplitude. On the other hand, the divergences at threshold present in the infinite volume case show up as ``enhanced'' finite volume corrections to the L\"uscher formula \cite{luescher}, which is used on the lattice to extract the scattering lengths; these finite volume corrections have been studied in Ref. \cite{bg}. We can formally define the quenched $I=0$ $S$--wave scattering length $a^0_0$ as the coefficient of the $(q^2)^0$ term in the expansion of the real part of the isospin amplitude $T^0(s,t)$ in partial waves. This gives us an idea of the size of normal one--loop corrections to the scattering length. The present definition is also equivalent to the one adopted in Ref. \cite{bg} in the analysis of the finite volume corrections. The quenched $S$--wave scattering length in the $I=2$ channel $a^2_0$ is defined in the usual way.
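At tree level the isospin decomposition above already fixes the leading values of the $S$--wave scattering lengths: evaluating $A(s,t,u)=(s-M_\pi^2)/F_\pi^2$ at threshold $s=4M_\pi^2$, $t=u=0$, where only the $l=0$ partial wave survives in $T^I=32\pi\,a_0^I$, gives $32\pi F_\pi^2\,a_0^0/M_\pi^2=7$ and $32\pi F_\pi^2\,a_0^2/M_\pi^2=-2$. A minimal numerical check (the values of $M_\pi$ and $F_\pi$ are illustrative; the tree-level ratios are independent of them):

```python
import math

# illustrative inputs in GeV; the tree-level ratios below do not depend on them
M, F = 0.140, 0.0924

def A(s, t, u):
    """Tree-level invariant amplitude A(s,t,u) = (s - M^2)/F^2."""
    return (s - M**2) / F**2

def T0(s, t, u):
    # I = 0 crossing combination: T^0 = 3A(s,t,u) + A(t,u,s) + A(u,s,t)
    return 3 * A(s, t, u) + A(t, u, s) + A(u, s, t)

def T2(s, t, u):
    # I = 2 crossing combination: T^2 = A(t,u,s) + A(u,s,t)
    return A(t, u, s) + A(u, s, t)

# at threshold s = 4M^2, t = u = 0, only the S wave survives: T^I = 32*pi*a_0^I
s_thr = 4 * M**2
print(round((F**2 / M**2) * T0(s_thr, 0.0, 0.0), 10))  # -> 7.0
print(round((F**2 / M**2) * T2(s_thr, 0.0, 0.0), 10))  # -> -2.0
```

The same crossing relations, applied to the full one--loop amplitude, produce the complete expressions given next.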
For the complete renormalized $S$--wave ``quenched scattering lengths'' at one loop we find: \begin{eqnarray} {32\pi F_\pi^2 \over M_\pi^2} a_0^0&=& 7 + {M_\pi^2\over 16\pi^2 F_\pi^2}\left\{ 7+ 5(\bar{l}_1^q+2\bar{l}_2^q) +\gamma_3^q \,(5\bar{l}_3^q+1)+2\gamma_4^q \,(\bar{l}_4^q-1) -48 v_1 \right\} \nonumber\\ &&-{(m_0^2-\alpha M_\pi^2)\over 48\pi^2F_\pi^2}\left ({2\over 3}\alpha -1\right ) +{5\over 9}{(m_0^2-\alpha M_\pi^2)^2\over 48\pi^2 M_\pi^2F_\pi^2} \; \; , \label{a00} \end{eqnarray} \begin{eqnarray} {32\pi F_\pi^2 \over M_\pi^2} a_0^2&=& -2 + {M_\pi^2\over 16\pi^2 F_\pi^2}\left\{ 2(\bar{l}_1^q+2\bar{l}_2^q -1) +2\gamma_3^q \, (\bar{l}_3^q-1) -4\gamma_4^q \, (\bar{l}_4^q-1) \right\}\nonumber\\ &&+{(m_0^2-\alpha M_\pi^2)\over 24\pi^2F_\pi^2}\left ({2\over 3}\alpha -1\right ) +{2\over 9}{(m_0^2-\alpha M_\pi^2)^2\over 48\pi^2 M_\pi^2F_\pi^2} \; . \label{a02} \end{eqnarray} The renormalized quenched scattering lengths depend upon four counterterms $\bar{l}_1^q,\ldots\bar{l}_4^q$ and the parameters of the anomalous singlet sector at leading order, $m_0^2$, $\alpha$, $v_1$ and $v_2$. The counterterms $\bar{l}_i^q$ carry the chiral logarithms $\bar{l}_i^q=-\log m +\ldots $. In ordinary CHPT the chiral logarithms are largely dominant in the one--loop corrections to the $S$--wave scattering lengths at the renormalization scale $\mu =1$ GeV \cite{gl83}. Here the main unknown is the value of the parameters $v_1,v_2$ of the singlet sector. The singlet parameters $m_0$ and $\alpha$ can be extracted from lattice calculations. Favoured values are listed e.g. in Ref. \cite{lat96}. With these values at hand we can do the following numerical exercise. Let us disregard for the moment the parameters $v_1$ and $v_2$ and limit the analysis to the contributions that are reasonably expected to be the dominant ones: 1) the singlet corrections in $m_0$ and $\alpha$ and 2) the standard chiral logarithms. 
With the definitions \begin{equation} \delta = {m_0^2\over 48\pi^2 F_\pi^2} \; \; ,~~~~~\epsilon = {M_\pi^2\over 48\pi^2 F_\pi^2} \; \; ,~~~~\bar{\delta} = {m_0^2-\alpha M_\pi^2 \over 48\pi^2 F_\pi^2}=\delta-\alpha\epsilon \; \; , \end{equation} the leading contributions to the scattering lengths are as follows: \begin{eqnarray} {32\pi F_\pi^2 \over M_\pi^2} a_0^{0}&=& 7 -\left( {2\over 3}\alpha -1\right) \bar{\delta} +{5\over 9}{\bar{\delta}^2 \over \epsilon} - 66 \epsilon \ln{M_\pi^2\over \mu^2} + \ldots \;\; , \nonumber \\ {32\pi F_\pi^2 \over M_\pi^2} a_0^{2}&=& -2 + \left( {2\over 3}\alpha -1 \right) 2\bar{\delta} +{2\over 9} {\bar{\delta}^2 \over \epsilon} - 12 \epsilon \ln {M_\pi^2\over \mu^2} + \ldots \;\; . \label{a0leading} \end{eqnarray} For the numerical calculations we use $F_\pi = 93$ MeV, $\delta = 0.15$ and $\alpha = 0.6$, and vary the pion mass between its physical value $M_\pi= 140$ MeV and $M_\pi=600$ MeV, which is presumably already outside a reasonable range of validity for ordinary CHPT. The chiral log is evaluated at $\mu=1$ GeV. The numerical results are given in Tables \ref{taba00} and \ref{taba02} for the $I=0$ and $I=2$ scattering lengths, respectively. We note that at the physical value of the pion mass the $\bar{\delta}^2 / \epsilon$ term is largely dominant in both cases: the divergence in the chiral limit produced by quenching is already felt at the physical pion mass. This also means that the whole framework is not very reliable in this range, since higher--loop effects may also produce modifications of the same chiral order (higher powers of $\delta$ with the same $1/\epsilon$ in front). At larger values of the pion mass, which are those typically used in lattice calculations, the situation changes and the standard chiral logarithms become dominant, as happens in standard CHPT.
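The entries of Tables \ref{taba00} and \ref{taba02} can be reproduced term by term from Eq. (\ref{a0leading}) with the input values quoted above; the short sketch below does this in plain Python:

```python
import math

F, MU = 93.0, 1000.0        # F_pi and renormalization scale mu, in MeV
DELTA, ALPHA = 0.15, 0.6    # singlet parameters delta and alpha from the text

def a00_terms(Mpi):
    """Leading contributions to (32 pi F^2/M^2) a_0^0, Eq. (a0leading)."""
    eps = Mpi**2 / (48 * math.pi**2 * F**2)
    dbar = DELTA - ALPHA * eps
    return (7.0,
            -(2 * ALPHA / 3 - 1) * dbar,           # delta-bar term
            (5.0 / 9.0) * dbar**2 / eps,           # delta-bar^2/eps term
            -66 * eps * math.log(Mpi**2 / MU**2))  # chiral-log term

def a02_terms(Mpi):
    """Leading contributions to (32 pi F^2/M^2) a_0^2, Eq. (a0leading)."""
    eps = Mpi**2 / (48 * math.pi**2 * F**2)
    dbar = DELTA - ALPHA * eps
    return (-2.0,
            (2 * ALPHA / 3 - 1) * 2 * dbar,
            (2.0 / 9.0) * dbar**2 / eps,
            -12 * eps * math.log(Mpi**2 / MU**2))

for Mpi in (140.0, 300.0, 600.0):
    print(Mpi, [round(x, 2) for x in a00_terms(Mpi)])
```

For $M_\pi=140$ MeV the $I=0$ contributions come out approximately $(7,\,0.09,\,2.5,\,1.2)$, matching the first row of Table \ref{taba00}; the $I=2$ rows of Table \ref{taba02} follow analogously from the second function.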
\begin{table}[thb] \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline &&&&\\ $M_\pi$ (MeV) & tree & $\bar{\delta}$ & $\bar{\delta}^2/\epsilon$ & $\epsilon \ln M_\pi^2$ \\ \hline &&&&\\ 140 & 7 & 0.09 & 2.5 & 1.2 \\ 300 & 7 & 0.08 & 0.47 & 3.5 \\ 600 & 7 & 0.06 & 0.06 & 5.9 \\ \hline \end{tabular} \end{center} \protect\caption{Numerical values of the leading contributions to $a_0^0$ quenched up to one loop for $M_\pi=140, \; 300, \; 600$ MeV, according to Eq. (\protect{\ref{a0leading}}).} \label{taba00} \end{table} \begin{table}[thb] \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline &&&&\\ $M_\pi$ (MeV) & tree & $\bar{\delta}$ & $\bar{\delta}^2/\epsilon$ & $\epsilon \ln M_\pi^2$ \\ \hline &&&&\\ 140 & -2 & -0.18 & 1.0 & 0.23 \\ 300 & -2 & -0.16 & 0.19 & 0.63 \\ 600 & -2 & -0.12 & 0.02 & 1.1 \\ \hline \end{tabular} \end{center} \protect\caption{Numerical values of the leading contributions to $a_0^2$ quenched up to one loop for $M_\pi=140, \; 300, \; 600$ MeV, according to Eq. (\protect\ref{a0leading}).} \label{taba02} \end{table} This picture, although at a semiquantitative level, suggests that quenched lattice calculations of the $S$--wave scattering lengths with a moderately high pion mass (like the ones in Ref. \cite{fuku}) should not be too far from those predicted by full CHPT. This conclusion is based on two observations: first, the standard chiral logarithms soon become dominant with respect to the dangerous quenching effects, and second, their coefficient happens not to be substantially changed by quenching \cite{pl}. The comparison between the standard CHPT prediction at one \cite{gl84} and two loops \cite{2lpipi}, and the lattice calculation \cite{fuku}, has been made in Ref. \cite{g}. \renewcommand{\theequation}{\arabic{section}.\arabic{equation}} \setcounter{equation}{0} \section{Summary and conclusions} In this paper we have analyzed the quenched version of Chiral Perturbation Theory at the one--loop level.
We have calculated the one--loop ultraviolet divergences of the theory at the level of the generating functional, and shown how one can reabsorb all those divergences by a proper definition of the counterterms. We have shown that even in the presence of the anomalous singlet sector the ultraviolet divergent part of the quenched generating functional can be calculated in closed form. We have closely followed the notation and methods of standard CHPT \cite{gl84} in order to identify as clearly as possible the changes produced by the quenched approximation in the formulation of the effective theory. \paragraph{} We have found a systematic cancellation of the flavour--number dependent terms inside the divergent part of the generating functional to one loop. As we anticipated in Ref. \cite{pl} the complete $N$--independence of quenched CHPT is welcome, since it shows that we understand the differences between standard CHPT and its quenched version. Let us recall that the calculation of the divergences to one loop in CHPT produces explicit $N$ dependence, in three different powers: $N$, $1/N$ and $1/N^2$. The terms linear in $N$ must be generated at the quark level by virtual quark loops: therefore they must be absent in the quenched theory. The terms with inverse powers of $N$ are generated by the decoupling of the singlet field from the octet of the Goldstone bosons. Since the decoupling does not take place in the quenched theory, also the inverse powers of $N$ disappear in qCHPT to one loop. A posteriori one could say that the changes that lead from standard CHPT to its quenched version could have been guessed by simply looking at the $N$ dependence of the generating functional to one loop. In fact this can still be done in other sectors of the effective theory that have not been fully analyzed yet. We give one example of this in Appendix \ref{NWI}, where we study the one--loop divergences in the sector of the on--shell non--leptonic weak interactions. 
\paragraph{} The quenched approximation produces a double pole in the singlet two--point function, which, however, is not allowed in a consistent quantum field theory, and is therefore the source of many sicknesses of quenched CHPT. As was shown already in Refs. \cite{Sharpe,qCHPT}, one of the consequences of this double pole is the appearance of a new kind of chiral logarithm in the one--loop corrections. Together with the standard $M^2 \ln M^2$ chiral logarithms, qCHPT has corrections of the form $m_0^2 \ln M^2$, which diverge in the chiral limit. The complete calculation of all the ultraviolet one--loop divergences in the generating functional has shown that the quenched chiral logs can be accounted for via a renormalization of the lowest order constant $B_0$ (which is proportional to the quark condensate). As a consequence, the renormalized $\bar{B}_0$ parameter diverges in the chiral limit, while the renormalized pion mass $M_\pi^2 = 2 \bar{B}_0 m_q$ does not. The use of the renormalized pion mass to express any other observable makes the quenched chiral logs disappear at one loop, with the only exception of $\bar{q} q$ matrix elements, which are proportional to the renormalized $\bar{B}_0$ parameter alone. Hence, $\bar{q}q$ matrix elements remain the unique place for discovering the presence of quenched chiral logs in quenched lattice calculations within the strong sector. \paragraph{} The double pole in the singlet two--point function also changes the standard chiral power counting, according to which diagrams with a higher number of loops are of higher chiral order. In the quenched case one may have graphs with any number of loops with the same chiral dimension, and the chiral order of an amplitude is no longer constrained to be positive: as a consequence, quenched CHPT has power--like divergences in the chiral limit. These divergences are in principle a very serious problem of the theory, although they seem to be an unavoidable consequence of the quenched approximation.
Since the graphs that have negative chiral dimension are also ultraviolet finite, their study requires the calculation of the UV finite part of the loop corrections. We have shown how they arise within the generating functional approach; at one loop and at order $\Phi^4$ they are given in Eq. (\ref{IRdiv}). We have therefore analyzed some physical quantities at one loop in the case of two degenerate light flavours: the scalar quark condensate, the pion mass, the scalar and vector form factors of the pion, and the $\pi\pi$ scattering amplitude. This has given us the opportunity to discuss in detail the changes induced by quenching in the UV finite part of the one--loop corrections. The main changes can be summarized by saying that unitarity is no longer satisfied, and that the double pole in the singlet two--point function produces singularities in the chiral limit, and also unphysical singularities at threshold. \paragraph{} The differences between CHPT and its quenched version are rather well understood, as the study of the flavour--number dependence of the generating functional at one loop also shows. The presence of the double pole in the singlet two--point function is also a rather direct consequence of the quenched approximation. This double pole has dramatic effects on the effective theory. However, it looks plausible that despite all these inconsistencies (or maybe because of them) quenched CHPT is the right tool to understand the effects of quenching in actual lattice calculations. The crucial check will be a detailed comparison of qCHPT predictions with the quark mass dependence of various quenched quantities on the lattice, and especially of the way they approach the chiral limit. We expect that further investigations in this direction will answer these questions.
\section*{Acknowledgements} We thank Roberto Petronzio and Juerg Gasser for many enlightening discussions, and Gerhard Ecker and Joachim Kambor for informative discussions about the weak non--leptonic sector. This work has been supported by Schweizerisches Nationalfonds and by the HCM, EEC--Contract No. CHRX--CT920026 (EURODA$\Phi$NE). \newpage
\section{\normalsize Introduction}\setcounter{equation}{0} Throughout this paper, we assume $G$ is a simple (i.e., finite, undirected, loopless, and without multiple edges) connected graph, whose vertex set is $V_G=\{v_1,v_2,\ldots,v_n\}$ and edge set is $E_G$. The \textit{order} of $G$ is the number $n=|V_G|$ of its vertices and its \textit{size} is the number $|E_G|$ of its edges. Denote by $\bar{G}$ the complement of $G$. Unless otherwise stated, we follow the traditional notation and terminology (see \cite{0009}). Given a graph $G$, its \textit{adjacency matrix} $A(G)$ is an $n\times n$ $0$-$1$ matrix whose $(i,j)$-entry is $1$ if and only if $v_i$ is adjacent to $v_j$ in $G$. Let $D(G)={\rm diag}(d_1,\ldots, d_n)$ be the diagonal matrix of vertex degrees in a graph $G$. The matrix $Q(G)=D(G)+A(G)$ is called the \textit{signless Laplacian matrix} of $G$ (see \cite{RD}). To track the gradual change of $A(G)$ into $Q(G),$ Nikiforov \cite{0007} introduced the \textit{$A_{\alpha}$-matrix} of a graph $G$, which is a convex combination of $D(G)$ and $A(G)$, that is, $$ A_{\alpha}(G)=\alpha D(G)+(1-\alpha)A(G),\ \ \ 0\leqslant \alpha\leqslant 1. $$ One attractive feature of $A_{\alpha}$-matrices is that we can determine many of their properties from those of adjacency matrices or signless Laplacian matrices. Notice that $A_{\alpha}(G)$ is real and symmetric. Hence its eigenvalues are real. For short, the $A_{\alpha}$-spectral radius of $G$ (i.e., the largest eigenvalue of $A_{\alpha}(G)$) is called the \textit{$A_{\alpha}$-index} of $G.$ Note that $$ A(G)=A_0(G),\ \ \ Q(G)=2A_{\frac{1}{2}}(G) \ \ \ \text{and}\ \ \ D(G)=A_1(G). $$ Recently, more and more researchers have focused on the $A_{\alpha}$-matrix of a graph. Nikiforov et al. \cite{Nik} gave some bounds on the $A_{\alpha}$-index of graphs, and they determined the unique tree with maximum (resp. minimum) $A_{\alpha}$-index among $n$-vertex trees. Nikiforov et al. \cite{005} and Xue et al.
\cite{Xue}, independently, gave three edge graft transformations on the $A_\alpha$-index. As applications, Xue et al. \cite{Xue} determined the graphs with maximum (resp. minimum) $A_{\alpha}$-index among all connected graphs with given diameter (resp. clique number). For more advances on the $A_{\alpha}$-spectra, we refer the reader to \cite{Chen,Huang,Li,LI2019,LI2020,001,Ni,Wang1,Wang,Xu2020} and the references cited therein. It is well-known that the spectrum of a graph $G$ consists of all the eigenvalues (including the multiplicities) of the corresponding matrix associated with $G.$ In the literature, one usually studies the adjacency spectrum, Laplacian spectrum, signless Laplacian spectrum and $A_\alpha$-spectrum of a graph $G$, which are denoted by ${\rm Spec}_A(G),\,{\rm Spec}_L(G),\,{\rm Spec}_Q(G)$ and ${\rm Spec}_{\alpha}(G),$ respectively. We say two graphs are \textit{cospectral} if they share the same spectrum. A graph $G$ is said to be \textit{determined by the spectrum} (DS for short) if, whenever $H$ is a graph such that $H$ and $G$ are cospectral, then $H$ is isomorphic to $G$ (here the matrix associated with $G$ should be clear in the context). In particular, for $\alpha\in[0,1),$ a graph $G$ is said to be \textit{determined by the generalized $A_\alpha$-spectrum} (or, DGA$_\alpha$S for short) if whenever $H$ is a graph such that ${\rm Spec}_{\alpha}(G) = {\rm Spec}_{\alpha}(H)$ and ${\rm Spec}_{\alpha}(\bar{G}) = {\rm Spec}_{\alpha}(\bar{H}),$ then $H$ is isomorphic to $G.$ ``Which kinds of graphs are DS?'' is a classical problem in spectral graph theory. The problem originates from chemistry and goes back more than 60 years. In 1956, G\"{u}nthard and Primas \cite{9} raised the question in a paper that relates the theory of graph spectra to H\"{u}ckel's theory from chemistry. Kac \cite{11} asked a similar question: ``Can one hear the shape of a drum?''. Fisher \cite{10} used the graph to model the shape of a drum.
Then the sound of the drum can be identified by the eigenvalues of the corresponding graph. However, it turns out that determining whether a graph is DS is usually a difficult problem. We refer the reader to van Dam and Haemers \cite{van,van2} for some background and known results. In the literature, many researchers studied the above problem in the context of the generalized $A_0$-spectrum and generalized $A_{\frac{1}{2}}$-spectrum (i.e., generalized adjacency spectrum and generalized $Q$-spectrum). Liu, Siemons and Wang \cite{Liu6} have constructed infinite families of graphs that are DG$A_0$S. Mao, Liu and Wang \cite{Mao3} gave a simple way to construct large DG$A_0$S graphs from small ones. Wang and Mao \cite{Wang4} presented a simple sufficient condition, under which they showed that $G\cup H$ is DG$A_0$S if and only if both $G$ and $H$ are DG$A_0$S. We make no attempt here to survey more important early contributions but instead refer the reader to \cite{A2,Liu5,0005,0001,0002,Wang01,0003,0004,Wang7} and the references cited therein. Assume that $G$ is a graph with $A_\alpha$-matrix $A_\alpha(G)$ for $\alpha\in [0,1).$ Throughout this paper, we assume that $\alpha$ is rational. Then let $c_\alpha$ be the smallest positive integer such that $A_{c_\alpha}(G):={c_\alpha} A_\alpha(G)$ is an integral matrix, that is, $c_\alpha$ is the smallest integer such that ${c_\alpha}\alpha$ and ${c_\alpha}(1-\alpha)$ are nonnegative integers. Define $W_{{\alpha}}(G)=[{\bf 1},A_{c_\alpha}{\bf 1},\ldots,A_{c_\alpha}^{n-1}{\bf 1}]$ to be the $A_{\alpha}$-walk matrix of $G$ for $\alpha\in[0,1)$, where ${\bf 1}$ denotes the all-ones vector and we always abbreviate $A_{c_\alpha}(G)$ to $A_{c_\alpha}$. It is easy to see that $2^{\lfloor\frac{n}{2}\rfloor}c_\alpha^{n-1}$ is a factor of $\det W_{{\alpha}}(G)$ (based on Lemma \ref{lem3.4} below).
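To make these definitions concrete, the sketch below builds $A_{c_\alpha}=aD(G)+bA(G)$ with $a=c_\alpha\alpha$, $b=c_\alpha(1-\alpha)$ and the walk matrix $W_{\alpha}(G)$ for a small example (an asymmetric graph on six vertices and $\alpha=1/3$, so $c_\alpha=3$, $a=1$, $b=2$; both choices are purely illustrative), then checks the divisibility of $\det W_{\alpha}(G)$ stated above and the fact that the columns $A_{c_\alpha}^k{\bf 1}$ $(k\geqslant 1)$ are divisible by $c_\alpha$ (since $A_{c_\alpha}{\bf 1}=(a+b)\,d(G)=c_\alpha\,d(G)$, the degree vector):

```python
import itertools
import math

# illustrative example: an asymmetric graph on n = 6 vertices
# (path 0-1-2-3-4-5 plus the chord (1,3)); alpha = 1/3 is also illustrative
n = 6
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (1, 3)]
A = [[0] * n for _ in range(n)]
for i, j in edges:
    A[i][j] = A[j][i] = 1
deg = [sum(row) for row in A]

# alpha = 1/3: c_alpha = 3, a = c_alpha*alpha = 1, b = c_alpha*(1-alpha) = 2
a, b, c_alpha = 1, 2, 3
Ac = [[a * deg[i] * (i == j) + b * A[i][j] for j in range(n)]
      for i in range(n)]

# columns of the A_alpha-walk matrix W = [1, Ac 1, ..., Ac^{n-1} 1]
cols, v = [], [1] * n
for _ in range(n):
    cols.append(v)
    v = [sum(Ac[i][j] * v[j] for j in range(n)) for i in range(n)]
W = [[cols[k][i] for k in range(n)] for i in range(n)]

def det(M):
    """Determinant via the Leibniz formula; adequate for one 6x6 matrix."""
    total = 0
    for p in itertools.permutations(range(len(M))):
        inv = sum(1 for x in range(len(p)) for y in range(x) if p[y] > p[x])
        total += (-1) ** inv * math.prod(M[i][p[i]] for i in range(len(M)))
    return total

d = det(W)
# 2^{floor(n/2)} * c_alpha^{n-1} divides det W_alpha(G), as claimed above
print(d % (2 ** (n // 2) * c_alpha ** (n - 1)) == 0)  # True
# every column Ac^k 1 with k >= 1 is divisible by c_alpha
print(all(x % c_alpha == 0 for k in range(1, n) for x in cols[k]))  # True
```

Dividing the last $n-1$ columns by $c_\alpha$ yields exactly the integral modified walk matrix introduced next.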
Furthermore, note that the rank of the $A_{\alpha}$-walk matrix $W_{{\alpha}}(G)$ is $1$ over $\mathbb{F}_{c_\alpha}$ if $c_\alpha>1.$ So, for $\alpha\in[0,1)$, we define the modified $A_{\alpha}$-walk matrix of $G$, written as $\tilde{W}_{{\alpha}}(G)$, to be $[{\bf 1},\frac{A_{c_\alpha}{\bf 1}}{c_\alpha},\ldots, \frac{A_{c_\alpha}^{n-1}{\bf 1}}{c_\alpha}].$ Recall that $A_{c_\alpha}(G){\bf 1} ={c_\alpha}A_\alpha(G){\bf 1}.$ It follows that $\tilde{W}_{{\alpha}}(G)$ is an integral matrix. Recently, Wang et al. \cite{0001,0002,0003,0004} and Qiu et al. \cite{0005} gave a simple arithmetic condition for a graph being DG$A_0$S and DG$A_{\frac{1}{2}}$S, respectively. In this paper, we generalize their results to the $A_\alpha$-matrix for $0\leqslant \alpha<1$. Our main result is given as follows. \begin{thm}\label{thm1.1} Let $G$ be a graph with order $n\,(n\geqslant 5)$ and let $\alpha\in[0,1).$ If $\frac{\det \tilde{W}_{{\alpha}}(G)}{2^{\lfloor\frac{n}{2}\rfloor}}$ is odd and square-free, and the rank of $\tilde{W}_{{\alpha}}(G)$ is full over $\mathbb{F}_p$ for each odd prime $p\mid {c_\alpha},$ then $G$ is DGA$_{\alpha}$S except for even $n$ and odd $c_\alpha\,(\geqslant 3)$. \end{thm} The main idea of the proof of Theorem \ref{thm1.1} follows from Qiu \cite{0005} and Wang \cite{0001,0002}. Together with some new ideas, we make the proof work. The remainder of the paper is organized as follows: In Section 2, we give some preliminary results that will be needed in the sequel. In Section 3, we present the proof of Theorem \ref{thm1.1}. In Section 4, we give examples of DGA$_\alpha$S graphs for some $\alpha\in[0,1)$. In Section 5, we give some concluding remarks and some further research problems. \section{\normalsize Preliminaries}\setcounter{equation}{0} In this section, we present some preliminary results which will be needed in the subsequent sections. If we focus on an integral matrix, its Smith Normal Form (SNF for short) is a very useful tool in our study.
An integral matrix $V$ is said to be \textit{unimodular} if $\det V =\pm1.$ We first introduce the following well-known theorem. \begin{thm}[\cite{0006}]\label{thm2.3} Let $M$ be an integral matrix of order $n$. Then there exist unimodular matrices $V_1$ and $V_2$ such that $M = V_1SV_2$, where $S=\text{\rm diag}(s_1, s_2,\ldots,s_n)$ is the SNF of $M$ with $s_i\mid s_{i+1}$ for all $i\in \{1,2,\ldots,n-1\},$ and $s_i$ is called the $i$-th elementary divisor of $M$. \end{thm} Notice that the SNF of a matrix can be computed efficiently (see \cite[P50]{0008}). The following lemma is the key result in the proof of our main result. \begin{lem}[\cite{0001}]\label{lem2.4} With the above notations, the system of congruence equations $M{\bf x}\equiv {\bf 0} \pmod{p^2}$ has a solution ${\bf x}\not\equiv{\bf 0} \pmod{p}$ if and only if $p^2 \mid s_n.$ \end{lem} Next, we will illustrate the main strategy to prove that a graph is DGA$_\alpha$S for $\alpha\in[0,1).$ The following lemma is the groundwork of our method, which gives a simple characterization of two graphs having the same generalized $A_\alpha$-spectrum. It is analogous to the result for the adjacency matrix in \cite{0004} and for the signless Laplacian matrix in \cite{0005}. For convenience, we always set $a:=c_\alpha\alpha$ and $b:=c_\alpha(1-\alpha)$ and denote by $O_{n}(\mathbb{Q})$ the set of all $n\times n$ rational orthogonal matrices. \begin{lem}\label{thm2.1} Let $G$ be a graph of order $n$ such that $\det W_{{\alpha}}(G)\neq0$ for $\alpha\in[0,1).$ Then there exists a graph $H$ such that $G$ and $H$ share the same generalized $A_{\alpha}$-spectrum if and only if there exists a unique matrix $U\in O_{n}(\mathbb{Q})$ satisfying that \[\label{eq:2.1} U^TA_{c_\alpha}(G)U=A_{c_\alpha}(H)\ \ \text{and}\ \ U{\bf 1}={\bf 1}. \] \end{lem} \begin{proof} \textit{Sufficiency.}\ Assume that there exists a matrix $U\in O_{n}(\mathbb{Q})$ satisfying \eqref{eq:2.1}.
It is routine to check that \[\label{eq:2.01} A_{c_\alpha}(\bar{G})=aD(\bar{G})+bA(\bar{G})=bJ+(a(n-1)-b)I-A_{c_\alpha}(G), \] where $I$ and $J$ denote the identity matrix and the all one's matrix, respectively. Notice that $U{\bf 1}={\bf 1}$ and $U\in O_{n}(\mathbb{Q}).$ Therefore, $$ U^TA_{c_\alpha}(\bar{G})U=U^T(bJ+(a(n-1)-b)I-A_{c_\alpha}(G))U=bJ+(a(n-1)-b)I-A_{c_\alpha}(H)=A_{c_\alpha}(\bar{H}). $$ Hence $G$ and $H$ are cospectral with respect to the generalized $A_{\alpha}$-spectrum. \textit{Necessity.}\ Note that ${\rm Spec}_\alpha(G)={\rm Spec}_\alpha(H)$ and ${\rm Spec}_\alpha(\bar{G})={\rm Spec}_\alpha(\bar{H}).$ Together with \eqref{eq:2.01}, one obtains that \[\label{eq:03} \det(\lambda I-A_{c_\alpha}(G))=\det(\lambda I-A_{c_\alpha}(H))\ \text{and}\ \det(\lambda I+bJ-A_{c_\alpha}(G))=\det(\lambda I+bJ-A_{c_\alpha}(H)) \] hold for all real $\lambda.$ Furthermore, it is routine to check that \begin{align}\notag \det(\lambda I+bJ-A_{c_\alpha}(G))&=\det(\lambda I-A_{c_\alpha}(G)+b{\bf 1}{\bf 1}^T)\\\notag &=\det(\lambda I-A_{c_\alpha}(G))\det(I+b(\lambda I-A_{c_\alpha}(G))^{-1}{\bf 1}{\bf 1}^T)\\\label{eq:01} &=(1+b{\bf 1}^T(\lambda I-A_{c_\alpha}(G))^{-1}{\bf 1})\det(\lambda I-A_{c_\alpha}(G)) \end{align} for all $\lambda\not\in \sigma(A_{c_\alpha}(G)),$ where $\sigma(M)$ denotes the set of all distinct eigenvalues of the matrix $M.$ Similarly, for all $\lambda\not\in \sigma(A_{c_\alpha}(H)),$ we have \[\label{eq:02} \det(\lambda I+bJ-A_{c_\alpha}(G))=(1+b{\bf 1}^T(\lambda I-A_{c_\alpha}(H))^{-1}{\bf 1})\det(\lambda I-A_{c_\alpha}(H)). \] Combining \eqref{eq:03}-\eqref{eq:02}, we have \[\label{eq:2.2} {\bf 1}^T(\lambda I-A_{c_\alpha}(G))^{-1}{\bf 1}={\bf 1}^T(\lambda I-A_{c_\alpha}(H))^{-1}{\bf 1}. \] Notice that $A_{c_\alpha}(G)$ is a real symmetric matrix. 
Hence one can choose linearly independent eigenvectors of $A_{c_\alpha}(G)$ forming an orthonormal basis of $\mathbb{R}^n.$ For each $\mu\in \sigma({A_{c_\alpha}}(G)),$ group the orthonormal eigenvectors corresponding to the eigenvalue $\mu$ into a matrix $P_{\mu}$. Without loss of generality, assume that $\sigma({A_{c_\alpha}}(G))=\sigma(A_{c_\alpha}(H))=\{\mu_1,\mu_2,\ldots,\mu_s\}.$ Therefore,
\begin{equation*}
A_{c_\alpha}(G)\left[P_{\mu_1}, P_{\mu_2}, \ldots, P_{\mu_s}\right]=\left[P_{\mu_1}, P_{\mu_2}, \ldots, P_{\mu_s}\right]\left[
\begin{array}{cccc}
\mu_1I_1 & & & \\
& \mu_2I_2 & & \\
& & \ddots & \\
& & & \mu_sI_s \\
\end{array}
\right],
\end{equation*}
where $I_i$ denotes the identity matrix whose order equals the multiplicity of $\mu_i$ for $1\leqslant i\leqslant s.$ Hence, for each $\lambda\not\in \sigma(A_{c_\alpha}(G)),$ we obtain
$$
(\lambda I-A_{c_\alpha}(G))^{-1}=\sum_{i=1}^s\frac{1}{\lambda-\mu_i}P_{\mu_i} P_{\mu_i}^T.
$$
By a similar discussion, one has, for each $\lambda\not\in \sigma(A_{c_\alpha}(H)),$ that
$$
(\lambda I-A_{c_\alpha}(H))^{-1}=\sum_{i=1}^s\frac{1}{\lambda-\mu_i}R_{\mu_i} R_{\mu_i}^T,
$$
where $R_{\mu_i}\,(1\leqslant i\leqslant s)$ denotes the matrix for $H$ corresponding to $P_{\mu_i}$ for $G.$ In view of \eqref{eq:2.2}, we obtain, for each $\lambda\not\in \sigma(A_{c_\alpha}(G)),$ that
$$
\sum_{i=1}^s\frac{\left\|P_{\mu_i}^T{\bf 1}\right\|^2}{\lambda-\mu_i}=\sum_{i=1}^s\frac{\left\|R_{\mu_i}^T{\bf 1}\right\|^2}{\lambda-\mu_i}.
$$
Comparing the residues at each $\mu_i,$ this implies that $\left\|P_{\mu_i}^T{\bf 1}\right\|=\left\|R_{\mu_i}^T{\bf 1}\right\|$ for all $i\in\{1,\ldots,s\}.$ Hence, there exists an orthogonal matrix $H_{\mu_i}$ such that $P_{\mu_i}^T{\bf 1}=H_{\mu_i} R_{\mu_i}^T{\bf 1}$ for all $i\in\{1,\ldots,s\}.$ Let
$$
U=[P_{\mu_1},P_{\mu_2},\ldots,P_{\mu_s}][ R_{\mu_1}H_{\mu_1}^T, R_{\mu_2}H_{\mu_2}^T,\ldots, R_{\mu_s}H_{\mu_s}^T]^T.
$$
It is straightforward to check that $U$ is an orthogonal matrix such that $U{\bf 1}={\bf 1}$ and $U^TA_{c_\alpha}(G)U=A_{c_\alpha}(H).$ Therefore, $U^TA_{c_\alpha}^k(G){\bf 1}=A_{c_\alpha}^k(H){\bf 1}$ for each $k\in\{0,1,\ldots,n-1\},$ which yields that $U^TW_{{\alpha}}(G)=W_{{\alpha}}(H).$ It follows from $\det W_{\alpha}(G)\neq0$ that $\det W_{\alpha}(H)\neq0,$ and thus $U=W_{{\alpha}}(G)W_{{\alpha}}(H)^{-1}$ is a rational orthogonal matrix satisfying \eqref{eq:2.1}. Now, we show the uniqueness of $U.$ Suppose to the contrary that there exist two distinct matrices $U_1,\,U_2\in O_n(\mathbb{Q})$ satisfying \eqref{eq:2.1}. Then we obtain $U_1^TW_{{\alpha}}(G)=U_2^TW_{{\alpha}}(G)=W_{{\alpha}}(H).$ Notice that $\det W_{{\alpha}}(G)\neq 0.$ Hence $U_1=U_2,$ a contradiction. This completes the proof.
\end{proof}
For $\alpha\in[0,1),$ we now define the following notation that will be used frequently in the sequel:
$$
\Gamma_\alpha(G)=\{U:U\in O_n(\mathbb{Q}),\,U^T{A_{c_\alpha}}(G)U ={A_{c_\alpha}}(H)\,\text{for some graph}\,H\, \text{and}\, U{\bf 1} = {\bf 1}\}.
$$
The next lemma extends the corresponding results for the adjacency matrix \cite{0004} and the signless Laplacian matrix \cite{0005} to the $A_\alpha$-matrix.
\begin{lem}\label{thm2.2}
Let $G$ be a graph and let $\alpha\in[0,1)$. If $\det W_{{\alpha}}(G)\neq 0,$ then $G$ is DGA$_{\alpha}$S if and only if each matrix in $\Gamma_\alpha(G)$ is a permutation matrix.
\end{lem}
\begin{proof}
\textit{Sufficiency.}\ Suppose to the contrary that $G$ is not DGA$_{\alpha}$S. Then there exists a graph $H\not\cong G$ such that $G$ and $H$ share the same generalized $A_\alpha$-spectrum. According to Lemma \ref{thm2.1}, there is a unique matrix $U\in O_n(\mathbb{Q})$ satisfying \eqref{eq:2.1}. Therefore, $U\in \Gamma_\alpha(G).$ Notice that $H\not\cong G.$ Hence $U$ is not a permutation matrix, which contradicts the assumption that each matrix in $\Gamma_\alpha(G)$ is a permutation matrix.
\textit{Necessity.}\ Suppose that there exists a matrix $U\in \Gamma_\alpha(G)$ which is not a permutation matrix, and let $H$ be a graph satisfying $U^T{A_{c_\alpha}}(G)U ={A_{c_\alpha}}(H).$ By Lemma \ref{thm2.1}, $G$ and $H$ share the same generalized $A_\alpha$-spectrum; since $G$ is DGA$_\alpha$S, this forces $H\cong G.$ Hence there exists a permutation matrix $P$ with $P^T{A_{c_\alpha}}(G)P={A_{c_\alpha}}(H)$ and $P{\bf 1}={\bf 1}.$ By the uniqueness in Lemma \ref{thm2.1}, $U=P,$ contradicting the choice of $U.$
\end{proof}
Further on we need the following definition.
\begin{defi}\label{defi1}
Let $U$ be a rational orthogonal matrix. The \textit{level} of $U$, denoted by $l(U)$ (or simply by $l$ if there is no danger of ambiguity), is the smallest positive integer $k$ such that $kU$ is an integral matrix.
\end{defi}
Obviously, $l(U)$ is the least common multiple of the denominators of the entries of $U$ (written in lowest terms). It is routine to check that a matrix $U\in O_{n}(\mathbb{Q})$ with $U{\bf 1}={\bf 1}$ is a permutation matrix if and only if $l(U)=1.$ In view of Lemma~\ref{thm2.2}, for a given graph $G$ and $\alpha\in[0,1),$ our main strategy for proving that $G$ is DGA$_{\alpha}$S is to show that $l(U)=1$ for all $U\in \Gamma_\alpha(G).$ In the remaining part of this section, we give a technical lemma, which also plays an important role in the proof of Theorem \ref{thm1.1}. In what follows, when there is no scope for ambiguity, we always put $A:=A(G),$ $D:=D(G),$ $A_{c_\alpha}:=A_{c_\alpha}(G)$ and $\tilde{W}_\alpha:=\tilde{W}_\alpha(G).$
\begin{lem}\label{lem2.5}
Let $G$ be a graph on $n$ vertices and let $\alpha\in[0,1)$. Then ${\bf 1}^TA_{c_\alpha}{\bf 1}\equiv 0 \pmod{2{c_\alpha}}$ and ${\bf 1}^TA_{c_\alpha}^k{\bf 1}\equiv 0 \pmod{2{c_\alpha^2}}$ for each integer $k\geqslant 2.$
\end{lem}
\begin{proof}
Notice that $D{\bf 1}=A{\bf 1}={\bf d},$ where ${\bf d}=(d_1,d_2,\ldots, d_n)^T$ and $d_i$ denotes the degree of the $i$-th vertex of $G.$ Hence, $A_{c_\alpha}{\bf 1}=(a A+bD){\bf 1}={c_\alpha}{\bf d}.$ Therefore,
$$
{\bf 1}^TA_{c_\alpha}{\bf 1}={c_\alpha}{\bf 1}^T{\bf d}={c_\alpha}\sum_{i=1}^nd_i=2{c_\alpha}|E_G|\equiv 0 \pmod{2{c_\alpha}}.
$$
Furthermore,
$$
{\bf 1}^TA_{c_\alpha}^2{\bf 1}=(A_{c_\alpha}{\bf 1})^T(A_{c_\alpha}{\bf 1})={c_\alpha^2}\sum_{i=1}^nd_i^2\equiv{c_\alpha^2}\sum_{i=1}^nd_i=2{c_\alpha^2}|E_G|\equiv0\pmod{2{c_\alpha^2}}.
$$
In what follows, we prove that ${\bf 1}^TA_{c_\alpha}^k{\bf 1}\equiv 0 \pmod{2{c_\alpha^2}}$ for each integer $k\geqslant 3.$ Recall that $A_{c_\alpha}{\bf 1}={c_\alpha}{\bf d}={c_\alpha}D{\bf 1}.$ Therefore,
$$
{\bf 1}^TA_{c_\alpha}^k{\bf 1}={c_\alpha^2}{\bf 1}^TD(aA+bD)^{k-2}D{\bf 1}\equiv{c_\alpha^2}\Tr(D(aA+bD)^{k-2}D)\pmod{2{c_\alpha^2}},
$$
where $\Tr(D(aA+bD)^{k-2}D)$ denotes the trace of the matrix $D(aA+bD)^{k-2}D.$ Hence, it suffices to show that $\Tr(D(aA+bD)^{k-2}D)\equiv0\pmod{2}$ for each integer $k\geqslant 3.$ For ease of expression, we define $\mathfrak{X}$ as the free monoid generated by $\{x, y\},$ and
$$
\mathfrak{X}_m=\{X\in \mathfrak{X}:\text{the length of}\, X\, \text{is}\, m\}.
$$
Define a mapping $\tau$ on $\mathfrak{X}_m$ by letting $X^{\tau}:=\tau(X)$ be the reversal of $X,$ that is, the $i$-th character of $X^{\tau}$ is the $(m+1-i)$-th character of $X$ for each $i\in \{1, 2,\ldots,m\}.$ Denote by $\underline{X}=M_1M_2\cdots M_m$ the product of the string of matrices in $X$, where $M_i=aA$ if the $i$-th character of $X$ is $x$, and $M_i=bD$ if the $i$-th character of $X$ is $y$, for $1\leqslant i\leqslant m.$ It is routine to check that $\underline{X}^T=\underline{X^{\tau}}$ and $X^{\tau}\in \mathfrak{X}_m$ is uniquely determined by $X.$ Using the above notations, one has
$$
\Tr(D(aA+bD)^{k-2}D)=\sum_{{X}\in \mathfrak{X}_{k-2}}\Tr(D\underline{X}D).
$$
Notice that $\Tr(D\underline{X}D)=\Tr((D\underline{X}D)^T)=\Tr(D\underline{X^{\tau}}D).$ Therefore,
$$
\sum_{{X}\in \mathfrak{X}_{k-2},\,X\neq X^{\tau}}\Tr(D\underline{X}D)\equiv 0\pmod{2}.
$$
Hence,
$$
\Tr(D(aA+bD)^{k-2}D)\equiv\sum_{{X}\in \mathfrak{X}_{k-2},\,X=X^{\tau}}\Tr(D\underline{X}D)\pmod{2}.
$$
Now, we proceed by distinguishing the parity of $k$.

{\bf Case 1.}\ $k$ is even.
Recall that $A{\bf 1}=D{\bf 1}.$ Hence, \begin{align*} \Tr(D\underline{X^{\tau}}AA\underline{X}D)&=\Tr(A\underline{X}DD\underline{X^{\tau}}A)\equiv {\bf 1}^TA\underline{X}DD\underline{X^{\tau}}A{\bf 1}\\ &={\bf 1}^TD\underline{X}DD\underline{X^{\tau}}D{\bf 1}\equiv \Tr(D\underline{X}DD\underline{X^{\tau}}D)\pmod{2}. \end{align*} It follows that \begin{align*} \Tr(D(aA+bD)^{k-2}D)&\equiv\sum_{{X}\in \mathfrak{X}_{k-2},\,X=X^{\tau}}\Tr(D\underline{X}D)=\sum_{X\in \mathfrak{X}_{\frac{k}{2}-2}}\Tr(D\underline{X^{\tau}}(aAaA+bDbD)\underline{X}D)\\ &=a^2\sum_{X\in \mathfrak{X}_{\frac{k}{2}-2}}\Tr(D\underline{X^{\tau}}AA\underline{X}D)+b^2\sum_{X\in \mathfrak{X}_{\frac{k}{2}-2}}\Tr(D\underline{X^{\tau}}DD\underline{X}D)\\ &=(a^2+b^2)\sum_{X\in \mathfrak{X}_{\frac{k}{2}-2}}\Tr(D\underline{X^{\tau}}DD\underline{X}D)\\ &\equiv(a+b)\sum_{X\in \mathfrak{X}_{\frac{k}{2}-2}}\Tr(D\underline{X^{\tau}}DD\underline{X}D)\pmod{2}. \end{align*} The last congruence equation follows from the fact that $t^2\equiv t\pmod{2}$ for each positive integer $t.$ If $a+b$ is even, then $\Tr(D(aA+bD)^{k-2}D)\equiv0\pmod{2},$ as desired. If $a$ is even and $b$ is odd, then $$ \Tr(D(aA+bD)^{k-2}D)\equiv \Tr(b^{k-2}D^k)=b^{k-2}\sum_{i=1}^n d_i^k\equiv b^{k-2}\sum_{i=1}^n d_i=2b^{k-2}|E_G|\equiv0\pmod{2}, $$ as desired. If $a$ is odd and $b$ is even, then \begin{align*} \Tr(D(aA+bD)^{k-2}D)&\equiv a^{k-2}\Tr(DA^{k-2}D)\equiv {a{\bf 1}^TDA^{k-2}D{\bf 1}}=a{\bf 1}^TA^{k}{\bf 1}\\ &={a\sum_{i,j}(A^{k-1})_{ij}(A)_{ij}}= 2a\sum_{1\leqslant i<j\leqslant n}(A^{k-1})_{ij}(A)_{ij}\equiv 0\pmod {2}, \end{align*} as desired. {\bf Case 2.}\ $k$ is odd. It is straightforward to check that \begin{align*} \Tr(D\underline{X^{\tau}}A\underline{X}D)&=\Tr(\underline{X}D D\underline{X^{\tau}}A)=\sum_{i,j}(\underline{X}D D\underline{X^{\tau}})_{ij}(A)_{ij}\\ &=2\sum_{1\leqslant i<j\leqslant n}(\underline{X}D D\underline{X^{\tau}})_{ij}(A)_{ij}\equiv0\pmod{2}. 
\end{align*} Thus, \begin{align*} \Tr(D(aA+bD)^{k-2}D)&\equiv\sum_{{X}\in \mathfrak{X}_{k-2},\,X=X^{\tau}}\Tr(D\underline{X}D)=\sum_{X\in \mathfrak{X}_{\frac{k-3}{2}}}\Tr(D\underline{X^{\tau}}(aA+bD)\underline{X}D)\\ &\equiv b\sum_{X\in \mathfrak{X}_{\frac{k-3}{2}}}\Tr(D\underline{X^{\tau}}D\underline{X}D)\pmod{2}. \end{align*} It is easy to see that $$ \Tr(D\underline{X^{\tau}}D\underline{X}D)\equiv\Tr(D\underline{X^{\tau}}DD\underline{X}D)=\Tr(D\underline{X}DD\underline{X^{\tau}}D) \equiv\Tr(D\underline{X}D\underline{X^{\tau}}D)\pmod{2}. $$ Then, it follows that \begin{align*} \Tr(D(aA+bD)^{k-2}D)&\equiv b\sum_{X\in \mathfrak{X}_{\frac{k-3}{2}}}\Tr(D\underline{X^{\tau}}D\underline{X}D)\equiv b\sum_{X\in \mathfrak{X}_{\frac{k-3}{2}},\,X=X^{\tau}}\Tr(D\underline{X^{\tau}}D\underline{X}D)\\ &\equiv b\sum_{X\in \mathfrak{X}_{\frac{k-3}{2}},\,X=X^{\tau}}\Tr(D\underline{X^{\tau}}DD\underline{X}D)\equiv b\sum_{X\in \mathfrak{X}_{\frac{k-3}{2}},\,X=X^{\tau}}({\bf 1}^TD\underline{X^{\tau}}D)(D\underline{X}D{\bf 1})\\ &\equiv b\sum_{X\in \mathfrak{X}_{\frac{k-3}{2}},\,X=X^{\tau}}{\bf 1}^TD\underline{X}D{\bf 1} \equiv b\Tr(D(aA+bD)^{\frac{k-3}{2}}D)\pmod{2}. \end{align*} Then by induction on $k$, we can show that $\Tr(D(aA+bD)^{k-2}D)\equiv0\pmod{2}.$ Combining Cases 1-2, we obtain that ${\bf 1}^TA_{c_\alpha}^k{\bf 1}\equiv 0 \pmod{2{c_\alpha^2}}$ for each integer $k\geqslant 3.$ This completes the proof. \end{proof} \section{\normalsize Proof of Theorem \ref{thm1.1}}\setcounter{equation}{0} In this section, we give the proof of Theorem \ref{thm1.1}. 
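The objects driving the proof are easy to experiment with. The following sketch (ours, purely for illustration) computes the level of a rational orthogonal matrix as in Definition \ref{defi1} and exhibits a non-permutation matrix in $O_4(\mathbb{Q})$ with row sums $1$, precisely the kind of matrix that the conditions of Theorem \ref{thm1.1} are designed to exclude from $\Gamma_\alpha(G)$:

```python
from fractions import Fraction
from math import lcm

def level(U):
    # l(U): smallest positive k with k*U integral, i.e. the lcm of the
    # denominators of the entries of U (in lowest terms)
    return lcm(*(x.denominator for row in U for x in row))

def is_orthogonal(U):
    n = len(U)
    return all(sum(U[i][k] * U[j][k] for k in range(n)) == (i == j)
               for i in range(n) for j in range(n))

def is_permutation(U):
    return all(x in (0, 1) for row in U for x in row) \
        and all(sum(row) == 1 for row in U) \
        and all(sum(col) == 1 for col in zip(*U))

# A classical level-2 rational orthogonal matrix with all row sums 1
h = Fraction(1, 2)
U = [[ h,  h,  h, -h],
     [ h,  h, -h,  h],
     [ h, -h,  h,  h],
     [-h,  h,  h,  h]]
assert is_orthogonal(U) and all(sum(row) == 1 for row in U)
assert level(U) == 2 and not is_permutation(U)
```

Row sums equal to $1$ encode the constraint $U{\bf 1}={\bf 1}$, and this $U$ witnesses that the level can exceed $1$ for matrices in $O_n(\mathbb{Q})$; a matrix with $U{\bf 1}={\bf 1}$ is a permutation matrix exactly when its level is $1$.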
In view of Lemma \ref{thm2.2}, it suffices to prove that, under the conditions of Theorem \ref{thm1.1}, every $U\in \Gamma_\alpha(G)$ has level $l(U)=1;$ equivalently, that no prime $p$ divides $l(U).$ In what follows, we will use the finite field notation $\mathbb{F}_p$ and mod $p$ (for a prime $p$) interchangeably, and denote by $\rank_p(M)$ the \textit{rank} of an integral matrix $M$ over $\mathbb{F}_p.$ For fixed $\alpha\in[0,1),$ let $\mathfrak{F}_n$ be the set of all graphs $G$ on $n$ vertices such that $\frac{\det \tilde{W}_{{\alpha}}(G)}{2^{\lfloor\frac{n}{2}\rfloor}}$ is odd and square-free and $\rank_p(\tilde{W}_{{\alpha}}(G))=n$ for each odd prime $p\mid{c_\alpha}.$ In order to complete the proof of Theorem \ref{thm1.1}, we need the following key results.
\begin{thm}\label{thm3.1}
Let $G\in \mathfrak{F}_n$ and let $\alpha\in[0,1).$ If $U\in \Gamma_\alpha(G)$ has level $l$ and $p$ is an odd prime, then $p\nmid l.$
\end{thm}
\begin{thm}\label{thm3.3}
Let $G\in \mathfrak{F}_n\,(n\geqslant 5)$ and let $\alpha\in[0,1).$ If $U\in \Gamma_\alpha(G)$ has level $l$, then $l$ is odd, unless $n$ is even and $c_\alpha\,(\geqslant 3)$ is odd.
\end{thm}
We postpone the proofs of Theorems \ref{thm3.1} and \ref{thm3.3} to the two subsections below. Granting them, we present the proof of Theorem \ref{thm1.1}:
\begin{proof}[\bf Proof of Theorem \ref{thm1.1}]\ Combining Theorems \ref{thm3.1} and \ref{thm3.3}, the result follows immediately.
\end{proof}
Putting $\alpha=0$ in Theorem \ref{thm1.1} gives $c_\alpha=1,$ and we immediately obtain the following corollary. It gives a simple arithmetic condition for determining whether a graph is determined by the generalized adjacency spectrum, which was obtained by Wang \cite{0002}.
\begin{cor}\label{cor1.01}
Let $G$ be a graph of order $n$. If $\frac{\det \tilde{W}_0(G)}{2^{\lfloor\frac{n}{2}\rfloor}}$ is odd and square-free, then $G$ is DGA$_0$S.
\end{cor}
Putting $\alpha=\frac{1}{2}$ in Theorem \ref{thm1.1} gives $c_\alpha=2,$ and we immediately obtain the following corollary. It gives a simple arithmetic condition for determining whether a graph is determined by the generalized $Q$-spectrum, which was obtained by Qiu, Ji and Wang \cite{0005}.
\begin{cor}\label{cor1.02}
Let $G$ be a graph of order $n$. If $\frac{\det \tilde{W}_\frac{1}{2}(G)}{2^{\lfloor\frac{n}{2}\rfloor}}$ is odd and square-free, then $G$ is DGA$_\frac{1}{2}$S.
\end{cor}
\subsection{\normalsize Proof of Theorem \ref{thm3.1}}
This subsection is devoted to the proof of Theorem \ref{thm3.1}. Before giving it, we present the following lemmas.
\begin{lem}\label{lem3.2}
Let $G$ be a graph and let $\alpha\in[0,1).$ Assume that $U\in \Gamma_\alpha(G)$ has level $l$ and that $p$ is a prime divisor of $l.$ Then for each integer $k\geqslant 0,$ there exists an integral column vector ${\bf v}\not\equiv{\bf 0}\pmod{p}$ satisfying that
\[\label{eq:3.2}
{\bf v}^TA_{c_\alpha}^k{\bf v}\equiv 0\pmod{p^2}\ \ \text{and}\ \ \tilde{W}_{{\alpha}}^T{\bf v}\equiv{\bf 0}\pmod{p}.
\]
\end{lem}
\begin{proof}
Let $H$ be a graph such that $U^TA_{c_\alpha}(G)U=A_{c_\alpha}(H)$ and let $\bar{U}=lU.$ Since $p\mid l$ and $l$ is minimal in Definition \ref{defi1}, we have $\bar{U}\not\equiv{\bf 0}\pmod{p};$ hence there exists a column ${\bf v}$ of $\bar{U}$ such that ${\bf v}\not\equiv{\bf 0}\pmod{p}.$ It is easy to check that $\bar{U}^T{A_{c_\alpha}^k}(G)\bar{U}=l^2{A_{c_\alpha}^k}(H)\equiv{\bf 0}\pmod{p^2}.$ It follows that ${\bf v}^TA_{c_\alpha}^k(G){\bf v}\equiv 0\pmod{p^2}$ for each integer $k\geqslant 0.$ Notice that $U{\bf 1}={\bf 1}.$ Hence, $U^TA_{c_\alpha}^k(G){\bf 1}=A_{c_\alpha}^k(H){\bf 1}$ for each integer $k\geqslant 0.$ It follows that $U^T\frac{A_{c_\alpha}^k(G){\bf 1}}{c_\alpha}=\frac{A_{c_\alpha}^k(H){\bf 1}}{c_\alpha}$ for each integer $k\geqslant 1,$ and therefore $U^T\tilde{W}_{{\alpha}}(G)=\tilde{W}_{{\alpha}}(H)$ is an integral matrix.
Hence, $\tilde{W}_{{\alpha}}(G)^T{\bf v}\equiv{\bf 0}\pmod{p}.$ This completes the proof.
\end{proof}
\begin{lem}\label{lem3.6}
Let $G$ be a graph with $\det \tilde{W}_{{\alpha}}(G)\neq 0$ and let $\alpha\in[0,1)$. If $U\in \Gamma_\alpha(G)$ with level $l,$ then $l\mid s_n,$ where $s_n$ is the $n$-th elementary divisor of $\tilde{W}_{{\alpha}}(G).$
\end{lem}
\begin{proof}
Let $H$ be a graph such that $U^T{A_{c_\alpha}}(G)U={A_{c_\alpha}}(H).$ In view of the proof of Lemma~\ref{lem3.2}, one has $U^T\tilde{W}_{{\alpha}}(G)=\tilde{W}_{{\alpha}}(H).$ That is to say, $U^T=\tilde{W}_{{\alpha}}(H)\tilde{W}_{{\alpha}}(G)^{-1}.$ By Theorem \ref{thm2.3}, there exist unimodular matrices $V_1$ and $V_2$ such that $\tilde{W}_{{\alpha}}(G)=V_1SV_2,$ where $S={\rm diag}(s_1,s_2,\ldots,s_n)$ and $s_i\mid s_{i+1}$ for each $i\in\{1,\ldots,n-1\}.$ Thus,
$$
s_nU^T=\tilde{W}_{{\alpha}}(H)V_2^{-1}{\rm diag}\left(\frac{s_n}{s_1},\frac{s_n}{s_2},\ldots,\frac{s_n}{s_n}\right)V_1^{-1}.
$$
Notice that $V_1^{-1}$ and $V_2^{-1}$ are integral matrices. Hence, $s_nU^T$ is an integral matrix. Therefore, $l\mid s_n.$ This completes the proof.
\end{proof}
Now, we are ready to prove Theorem \ref{thm3.1}.
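The coming proof applies Lemma \ref{lem2.4} to derive its final contradiction. As an illustrative sanity check (ours, not part of the argument), the lemma can be verified by brute force on small matrices that are already in Smith Normal Form, where the elementary divisors sit on the diagonal:

```python
from itertools import product

def has_nontrivial_solution(M, p):
    """Search for x with M x = 0 (mod p^2) but x != 0 (mod p)."""
    n, mod = len(M), p * p
    for x in product(range(p * p), repeat=n):
        if all(xi % p == 0 for xi in x):       # x = 0 (mod p): skip
            continue
        if all(sum(M[i][j] * x[j] for j in range(n)) % mod == 0
               for i in range(n)):
            return True
    return False

p = 3
# diag(1, 9): s_n = 9, so p^2 | s_n and a nontrivial solution exists
assert has_nontrivial_solution([[1, 0], [0, 9]], p)
# diag(1, 3): p^2 does not divide s_n = 3, so no such solution
assert not has_nontrivial_solution([[1, 0], [0, 3]], p)
```

In the proof below the lemma is applied in the contrapositive direction: exhibiting an integral vector ${\bf z}\not\equiv{\bf 0}\pmod{p}$ with $\tilde{W}_{{\alpha}}^T{\bf z}\equiv{\bf 0}\pmod{p^2}$ forces $p^2\mid s_n$ and hence $p^2\mid\det\tilde{W}_{{\alpha}}$, contradicting the square-freeness assumption.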
\begin{proof}[\bf Proof of Theorem \ref{thm3.1}]\ Firstly, we consider the case $p\mid c_\alpha.$ If $p\mid l,$ then Lemma \ref{lem3.6} gives $p\mid s_n,$ hence $p\mid \det \tilde{W}_\alpha$ and therefore $\rank_p(\tilde{W}_{{\alpha}})\neq n,$ which contradicts the definition of $\mathfrak{F}_n.$ Hence $p\nmid l$ whenever $p\mid c_\alpha.$ So, in what follows, it suffices to assume that $p\nmid c_\alpha.$ Suppose, to the contrary, that $p\mid l.$ In view of Theorem \ref{thm2.3}, there exist unimodular matrices $V_1$ and $V_2$ such that $\tilde{W}_{{\alpha}}=V_1SV_2,$ where $S={\rm diag}(s_1,s_2,\ldots,s_n)$ and $s_i\mid s_{i+1}$ for each $i\in\{1,\ldots,n-1\}.$ By Lemma \ref{lem3.6}, one has $p\mid s_n.$ Notice that $\det \tilde{W}_{{\alpha}}=\pm \det S.$ Therefore $p\mid \det \tilde{W}_{{\alpha}}.$ Based on the definition of $\mathfrak{F}_n,$ one has $p^2\nmid \det \tilde{W}_{{\alpha}}.$ Hence, $\rank_p(\tilde{W}_{{\alpha}})=n-1.$ In view of the proof of Lemma~\ref{lem3.2}, we know that each column of $\bar{U}$ satisfies \eqref{eq:3.2}. Thus, $\tilde{W}_{{\alpha}}^T\bar{U}\equiv{\bf 0}\pmod{p};$ since $\rank_p(\tilde{W}_{{\alpha}})=n-1,$ the kernel of $\tilde{W}_{{\alpha}}^T$ over $\mathbb{F}_p$ is one-dimensional, and $\bar{U}\not\equiv{\bf 0}\pmod{p}$ by the minimality of $l,$ so $\rank_p(\bar{U})=1.$ It follows that there exists an integral vector ${\gamma}$ such that ${\bf v}{ \gamma}^T\equiv \bar{U}\pmod{p},$ where ${\bf v}$ is the $j$-th column of $\bar{U}$ satisfying both ${\bf v}\not\equiv{\bf 0}\pmod{p}$ and \eqref{eq:3.2}.
Let $H$ be a graph such that $U^TA_{c_\alpha}(G)U=A_{c_\alpha}(H).$ Thus,
$$
A_{c_\alpha}(G){\bf v}=\bar{U}A_{c_\alpha}(H)_j\equiv {\bf v}(\gamma^TA_{c_\alpha}(H)_j)=\lambda_0{\bf v}\pmod{p},
$$
where $A_{c_\alpha}(H)_j$ denotes the $j$-th column of $A_{c_\alpha}(H)$ and $\lambda_0=\gamma^TA_{c_\alpha}(H)_j.$ Hence, $\rank_p(A_{c_\alpha}(G)-\lambda_0I)\neq n.$ We proceed by considering the following three cases.

{\bf Case 1.}\ $\rank_p(A_{c_\alpha}-\lambda_0I)= n-1.$

Notice that ${\bf v}^T(A_{c_\alpha}-\lambda_0I)\equiv{\bf 0}\pmod{p},$ ${\bf v}^T{\bf v}\equiv0\pmod{p^2}$ and ${\bf v}^T{\bf 1}\equiv0\pmod{p}.$ Hence, there exist integral vectors ${\bf y}$ and ${\bf u}$ satisfying ${\bf v}\equiv(A_{c_\alpha}-\lambda_0I){\bf y}\pmod{p}$ and ${\bf 1}\equiv(A_{c_\alpha}-\lambda_0I){\bf u}\pmod{p}.$ That is to say, ${\bf 1}=(A_{c_\alpha}-\lambda_0I){\bf u}+p\beta$ for some integral vector $\beta.$ It follows that
\begin{align}\label{eq:3.3}
\tilde{W}_{{\alpha}}=\left[{\bf 1},\frac{A_{c_\alpha}{\bf 1}}{c_\alpha},\ldots,\frac{A_{c_\alpha}^{n-1}{\bf 1}}{c_\alpha}\right]=(A_{c_\alpha}-\lambda_0I)X +p\left[\beta,\frac{A_{c_\alpha}\beta}{c_\alpha},\ldots,\frac{A_{c_\alpha}^{n-1}\beta}{c_\alpha}\right],
\end{align}
where $X=\left[{\bf u},\frac{A_{c_\alpha}{\bf u}}{c_\alpha},\ldots,\frac{A_{c_\alpha}^{n-1}{\bf u}}{c_\alpha}\right].$ Notice that $X$ need not be an integral matrix, but its entries have denominators dividing $c_\alpha,$ so reducing it modulo $p$ is meaningful since $p$ is an odd prime with $p\nmid c_\alpha$. Therefore,
$$
{c_\alpha}\frac{\tilde{W}_{{\alpha}}^T{\bf v}}{p}={c_\alpha}X^T\frac{(A_{c_\alpha}-\lambda_0I){\bf v}}{p}+\left[{c_\alpha}\beta,{A_{c_\alpha}\beta},\ldots,{A_{c_\alpha}^{n-1}\beta}\right]^T{\bf v}.
$$
Since $p\nmid c_\alpha,$ there exists an integer $s$ such that ${c_\alpha}\mid(p+s)$ and $p+s\equiv1\pmod{p};$ hence $\frac{p+s}{c_\alpha}$ is an integer with $\frac{p+s}{c_\alpha}\cdot{c_\alpha}\equiv1\pmod{p},$ that is, $\frac{p+s}{c_\alpha}\equiv \frac{1}{c_\alpha}\pmod{p}.$ Notice that $\frac{\tilde{W}_{{\alpha}}^T{\bf v}}{p}$ is an integral vector.
It follows that \[\label{eq:3.01} \frac{\tilde{W}_{{\alpha}}^T{\bf v}}{p}\equiv X^T\frac{(A_{c_\alpha}-\lambda_0I){\bf v}}{p}+{\bf v}^T\beta\left[1,\frac{(p+s)\lambda_0}{c_\alpha},\ldots,\frac{(p+s)\lambda_0^{n-1}}{c_\alpha}\right]^T\pmod{p}. \] By Lemma \ref{lem3.2} and ${\bf v}^T{\bf v}\equiv0\pmod{p^2},$ one has ${\bf v}^T\frac{(A_{c_\alpha}-\lambda_0I){\bf v}}{p}\equiv0\pmod{p}.$ Together with ${\bf v}^T(A_{c_\alpha}-\lambda_0I)\equiv{\bf 0}\pmod{p}$ and $\rank_p{(A_{c_\alpha}-\lambda_0I)}=n-1,$ we obtain that there exists an integral vector ${\bf x}$ such that $\frac{(A_{c_\alpha}-\lambda_0I){\bf v}}{p}\equiv(A_{c_\alpha}-\lambda_0I){\bf x}\pmod{p}.$ In view of \eqref{eq:3.3}, one has $$ \tilde{W}_{{\alpha}}\equiv (A_{c_\alpha}-\lambda_0I)X+p\left[\beta,\frac{(p+s)A_{c_\alpha}\beta}{c_\alpha},\ldots,\frac{(p+s)A_{c_\alpha}^{n-1}\beta}{c_\alpha}\right]\equiv(A_{c_\alpha}-\lambda_0I)X\pmod{p}. $$ Thus, \[\label{eq:3.02} X^T\frac{(A_{c_\alpha}-\lambda_0I){\bf v}}{p}\equiv X^T(A_{c_\alpha}-\lambda_0I){\bf x}\equiv\tilde{W}_{{\alpha}}^T{\bf x}\pmod{p}. \] Recall that ${\bf v}\equiv(A_{c_\alpha}-\lambda_0I){\bf y}\pmod{p}.$ It is routine to check that \begin{align*} \frac{{\bf 1}^TA_{c_\alpha}{\bf y}}{c_\alpha}&\equiv\frac{(p+s)\lambda_0}{c_\alpha}{\bf 1}^T{\bf y}+\frac{p+s}{c_\alpha}{\bf 1}^T{\bf v}\equiv\frac{(p+s)\lambda_0}{c_\alpha}{\bf 1}^T{\bf y}\pmod{p},\\ \frac{{\bf 1}^TA_{c_\alpha}^2{\bf y}}{c_\alpha}&\equiv\frac{(p+s)\lambda_0}{c_\alpha}{\bf 1}^TA_{c_\alpha}{\bf y}+\frac{p+s}{c_\alpha}{\bf 1}^TA_{c_\alpha}{\bf v}\equiv\frac{(p+s)\lambda_0^2}{c_\alpha}{\bf 1}^T{\bf y}\pmod{p}, \end{align*} \begin{align*} &\ \ \vdots\\ \frac{{\bf 1}^TA_{c_\alpha}^{n-1}{\bf y}}{c_\alpha}&\equiv\frac{(p+s)\lambda_0^{n-1}}{c_\alpha}{\bf 1}^T{\bf y}\pmod{p}. \end{align*} Hence, \[\label{eq:3.03} \tilde{W}_{{\alpha}}^T{\bf y}\equiv {\bf 1}^T{\bf y}\left[1,\frac{(p+s)\lambda_0}{c_\alpha},\ldots,\frac{(p+s)\lambda_0^{n-1}}{c_\alpha}\right]^T \pmod{p}. 
\]
Next, we show that ${\bf 1}^T{\bf y}\not\equiv0\pmod{p}.$ Suppose, to the contrary, that ${\bf 1}^T{\bf y}\equiv0\pmod{p};$ then $\tilde{W}_{{\alpha}}^T{\bf y}\equiv {\bf 0}\pmod{p}$ by \eqref{eq:3.03}. Recall that $\tilde{W}_{{\alpha}}^T{\bf v}\equiv {\bf 0}\pmod{p}$ and $\rank_p(\tilde{W}_{{\alpha}})=n-1,$ so the kernel of $\tilde{W}_{{\alpha}}^T$ over $\mathbb{F}_p$ is one-dimensional. It follows that ${\bf v}$ and ${\bf y}$ are linearly dependent over $\mathbb{F}_p.$ That is, there exist integers $m_1$ and $m_2,$ not both zero modulo $p,$ such that $m_1{\bf v}+m_2{\bf y}={\bf 0}$ over $\mathbb{F}_p.$ Left multiplying both sides by $A_{c_\alpha}-\lambda_0I$ and using $(A_{c_\alpha}-\lambda_0I){\bf v}\equiv{\bf 0}$ and $(A_{c_\alpha}-\lambda_0I){\bf y}\equiv{\bf v}\pmod{p}$ yields $m_2{\bf v}\equiv{\bf 0}\pmod{p}.$ Notice that ${\bf v}\not\equiv{\bf 0}\pmod{p}.$ So, $m_2\equiv0$ and therefore $m_1\equiv0\pmod{p},$ a contradiction. Thus, ${\bf 1}^T{\bf y}\not\equiv0\pmod{p}.$ Then there exists an integer $t$ such that ${\bf v}^T\beta\equiv t{\bf 1}^T{\bf y}\pmod{p}.$ Together with \eqref{eq:3.01}-\eqref{eq:3.03}, we obtain that
$$
\frac{\tilde{W}_{{\alpha}}^T{\bf v}}{p}\equiv \tilde{W}_{{\alpha}}^T{\bf x}+t\tilde{W}_{{\alpha}}^T{\bf y}\pmod{p},
$$
which is equivalent to
$$
\tilde{W}_{{\alpha}}^T({\bf v}-p{\bf x}-tp{\bf y})\equiv{\bf 0}\pmod{p^2}.
$$
In view of Lemma \ref{lem2.4}, one has $p^2\mid \det\tilde{W}_{{\alpha}},$ a contradiction.

{\bf Case 2.}\ $\rank_p(A_{c_\alpha}-\lambda_0I)= n-2.$

In this case, we proceed by considering the following claim.
\begin{claim}\label{c1}
$\rank_p([A_{c_\alpha}-\lambda_0I,{\bf v}])=n-1.$
\end{claim}
\begin{proof}[\bf Proof of Claim \ref{c1}]\ Notice that $\rank_p([A_{c_\alpha}-\lambda_0I,{\bf v}])\geqslant \rank_p(A_{c_\alpha}-\lambda_0I)=n-2.$ Suppose that $\rank_p([A_{c_\alpha}-\lambda_0I,{\bf v}])=n-2.$ Then ${\bf v}$ can be written as a linear combination of the column vectors of $A_{c_\alpha}-\lambda_0I$ over $\mathbb{F}_p$.
That is, there exists an integral vector ${\bf w}\,({\bf w}\not\equiv{\bf 0}\pmod{p})$ such that ${\bf v}\equiv (A_{c_\alpha}-\lambda_0I){\bf w}\pmod{p}.$ Recall that $A_{c_\alpha}{\bf v}\equiv \lambda_0{\bf v}\pmod{p}$ and $\bar{U}{\bf 1}=lU{\bf 1}=l{\bf 1}.$ Hence, for each positive integer $k,$
$$
\frac{{\bf 1}^TA_{c_\alpha}^k{\bf w}}{c_\alpha}\equiv\frac{{\bf 1}^TA_{c_\alpha}^{k-1}{\bf v}}{c_\alpha}+\frac{\lambda_0{\bf 1}^TA_{c_\alpha}^{k-1}{\bf w}}{c_\alpha}\equiv\frac{\lambda_0^k{\bf 1}^T{\bf w}}{c_\alpha}\pmod{p}.
$$
Since $\rank_p(A_{c_\alpha}-\lambda_0I)= n-2,$ there exists an integral vector ${\bf y},$ linearly independent of ${\bf v}$ over $\mathbb{F}_p$, such that $(A_{c_\alpha}-\lambda_0I){\bf y}\equiv{\bf 0}\pmod{p}.$ It is routine to check that ${\bf 1}^T{\bf y}\not\equiv0\pmod{p}.$ In fact, if ${\bf 1}^T{\bf y}\equiv0\pmod{p},$ then $\tilde{W}_\alpha^T{\bf y}\equiv{\bf 0}\pmod{p};$ since ${\bf v}$ and ${\bf y}$ would then be two linearly independent vectors in the kernel of $\tilde{W}_\alpha^T$ over $\mathbb{F}_p,$ this contradicts $\rank_p(\tilde{W}_\alpha)=n-1$. Now, we show that ${\bf v},\,{\bf y}$ and ${\bf w}$ are linearly independent over $\mathbb{F}_p.$ Otherwise, there exist integers $m_1,\,m_2$ and $m_3,$ not all zero modulo $p,$ such that $m_1{\bf v}+m_2{\bf y}+m_3{\bf w}={\bf 0}$ over $\mathbb{F}_p.$ Left multiplying both sides by $A_{c_\alpha}-\lambda_0I$ and using ${\bf v}\equiv(A_{c_\alpha}-\lambda_0I){\bf w}\pmod{p}$ gives us $m_3{\bf v}\equiv{\bf 0}\pmod{p}.$ Notice that ${\bf v}\not\equiv{\bf 0}\pmod{p}.$ Hence, $m_3\equiv0$ and so $m_1{\bf v}+m_2{\bf y}={\bf 0}$ over $\mathbb{F}_p,$ a contradiction. Let $\eta=({\bf 1}^T{\bf y}){\bf w}-({\bf 1}^T{\bf w}){\bf y}.$ Then $\eta\not\equiv{\bf 0}\pmod{p}$ and ${\bf 1}^T\eta\equiv0\pmod{p}.$ Moreover, for each integer $k\geqslant 1,$
$$
\frac{{\bf 1}^TA_{c_\alpha}^{k}\eta}{c_\alpha}\equiv({\bf 1}^T{\bf y})\frac{\lambda_0^k{\bf 1}^T{\bf w}}{c_\alpha}-({\bf 1}^T{\bf w})\frac{\lambda_0^k{\bf 1}^T{\bf y}}{c_\alpha}\equiv0\pmod{p}.
$$
It follows that $\tilde{W}_{{\alpha}}^T\eta\equiv{\bf 0}\pmod{p},$ a contradiction to the fact that $\rank_p(\tilde{W}_{{\alpha}})=n-1.$ This completes the proof of Claim \ref{c1}.
\end{proof}
Notice that ${\bf v}^T{\bf 1}\equiv0\pmod{p}$ and ${\bf v}^T[A_{c_\alpha}-\lambda_0I,{\bf v}]\equiv{\bf 0}\pmod{p}.$ By Claim \ref{c1}, there exist integral vectors ${\bf u},\,\beta$ and an integer $f$ such that ${\bf 1}=(A_{c_\alpha}-\lambda_0I){\bf u}+f{\bf v}+p\beta.$ Therefore,
\[\label{eq:3.4}
\tilde{W}_{{\alpha}}=(A_{c_\alpha}-\lambda_0I)X +f\left[{\bf v},\frac{A_{c_\alpha}{\bf v}}{c_\alpha},\ldots,\frac{A_{c_\alpha}^{n-1}{\bf v}}{c_\alpha}\right]+p\left[\beta,\frac{A_{c_\alpha}\beta}{c_\alpha},\ldots,\frac{A_{c_\alpha}^{n-1}\beta}{c_\alpha}\right],
\]
where $X=\left[{\bf u},\frac{A_{c_\alpha}{\bf u}}{c_\alpha},\ldots,\frac{A_{c_\alpha}^{n-1}{\bf u}}{c_\alpha}\right].$ Thus,
$$
{c_\alpha}\frac{\tilde{W}_{{\alpha}}^T{\bf v}}{p}={c_\alpha}X^T\frac{(A_{c_\alpha}-\lambda_0I){\bf v}}{p}+\frac{f}{p}\left[{c_\alpha} {\bf v},A_{c_\alpha}{\bf v},\ldots,A_{c_\alpha}^{n-1}{\bf v}\right]^T{\bf v}+\left[{c_\alpha}\beta,{A_{c_\alpha}\beta},\ldots,{A_{c_\alpha}^{n-1}\beta}\right]^T{\bf v}.
$$
Notice that $\frac{\tilde{W}_{{\alpha}}^T{\bf v}}{p}$ is an integral vector. It follows that
$$
\frac{\tilde{W}_{{\alpha}}^T{\bf v}}{p}\equiv X^T\frac{(A_{c_\alpha}-\lambda_0I){\bf v}}{p}+(\frac{f}{p}{\bf v}^T{\bf v}+{\bf v}^T\beta)\left[1,\frac{(p+s)\lambda_0}{c_\alpha},\ldots,\frac{(p+s)\lambda_0^{n-1}}{c_\alpha}\right]^T\pmod{p}.
$$
By Lemma \ref{lem3.2} and ${\bf v}^T{\bf v}\equiv0\pmod{p^2},$ one has ${\bf v}^T\frac{(A_{c_\alpha}-\lambda_0I){\bf v}}{p}\equiv0\pmod{p}.$ Together with Claim~\ref{c1} and ${\bf v}^T[A_{c_\alpha}-\lambda_0I,{\bf v}]\equiv{\bf 0}\pmod{p},$ there exist an integral vector ${\bf x}$ and an integer $m$ such that $\frac{(A_{c_\alpha}-\lambda_0I){\bf v}}{p}\equiv(A_{c_\alpha}-\lambda_0I){\bf x}+m{\bf v}\pmod{p}.$ Note that $(A_{c_\alpha}-\lambda_0 I){\bf y}\equiv{\bf 0}\pmod{p},$ where ${\bf y}$ is as in the proof of Claim \ref{c1}. Thus,
$$
\tilde{W}_{{\alpha}}^T{\bf y}=\left[{\bf 1},\frac{A_{c_\alpha}{\bf 1}}{c_\alpha},\ldots,\frac{A_{c_\alpha}^{n-1}{\bf 1}}{c_\alpha}\right]^T{\bf y}\equiv {\bf 1}^T{\bf y}\left[1,\frac{(p+s)\lambda_0}{c_\alpha},\ldots,\frac{(p+s)\lambda_0^{n-1}}{c_\alpha}\right]^T\pmod{p}.
$$
Recall that ${\bf 1}^T{\bf y}\not\equiv0\pmod{p}.$ Hence, there exists an integer $t$ satisfying $\frac{f}{p}{\bf v}^T{\bf v}+{\bf v}^T\beta+m{\bf u}^T{\bf v}-f{\bf v}^T{\bf x}\equiv t{\bf 1}^T{\bf y}\pmod{p}.$ It follows from \eqref{eq:3.4} that $\tilde{W}_{{\alpha}}^T\equiv X^T{(A_{c_\alpha}-\lambda_0I)}+f\left[{\bf v},\frac{A_{c_\alpha}{\bf v}}{c_\alpha},\ldots,\frac{A_{c_\alpha}^{n-1}{\bf v}}{c_\alpha}\right]\pmod{p}.$ Hence
\begin{align*}
\frac{\tilde{W}_{{\alpha}}^T{\bf v}}{p}&\equiv X^T\frac{(A_{c_\alpha}-\lambda_0I){\bf v}}{p}+(\frac{f}{p}{\bf v}^T{\bf v}+{\bf v}^T\beta)\left[1,\frac{(p+s)\lambda_0}{c_\alpha},\ldots,\frac{(p+s)\lambda_0^{n-1}}{c_\alpha}\right]^T\\
&\equiv X^T(A_{c_\alpha}-\lambda_0I){\bf x}+mX^T{\bf v}+(\frac{f}{p}{\bf v}^T{\bf v}+{\bf v}^T\beta)\left[1,\frac{(p+s)\lambda_0}{c_\alpha},\ldots,\frac{(p+s)\lambda_0^{n-1}}{c_\alpha}\right]^T\\
&\equiv\tilde{W}_{{\alpha}}^T{\bf x}+(\frac{f}{p}{\bf v}^T{\bf v}+{\bf v}^T\beta+m{\bf u}^T{\bf v}-f{\bf v}^T{\bf x})\left[1,\frac{(p+s)\lambda_0}{c_\alpha},\ldots,\frac{(p+s)\lambda_0^{n-1}}{c_\alpha}\right]^T\\
&\equiv\tilde{W}_{{\alpha}}^T{\bf x}+t{\bf 1}^T{\bf y}\left[1,\frac{(p+s)\lambda_0}{c_\alpha},\ldots,\frac{(p+s)\lambda_0^{n-1}}{c_\alpha}\right]^T\\
&\equiv\tilde{W}_{{\alpha}}^T{\bf x}+t\tilde{W}_{{\alpha}}^T{\bf y}\pmod{p}.
\end{align*}
Therefore,
$$
\tilde{W}_{{\alpha}}^T({\bf v}-p{\bf x}-tp{\bf y})\equiv{\bf 0}\pmod{p^2}.
$$
In view of Lemma \ref{lem2.4}, one has $p^2\mid \det\tilde{W}_{{\alpha}},$ a contradiction.

{\bf Case 3.}\ $\rank_p(A_{c_\alpha}-\lambda_0I)\leqslant n-3.$

In this case, there exist three vectors, say ${\bf v},\,{\bf w}$ and ${\bf y},$ linearly independent over $\mathbb{F}_p,$ such that $(A_{c_\alpha}-\lambda_0I){\bf v}=(A_{c_\alpha}-\lambda_0I){\bf w}=(A_{c_\alpha}-\lambda_0I){\bf y}={\bf 0}$ over $\mathbb{F}_p.$ Arguing as in the proof of Claim \ref{c1}, one checks that ${\bf 1}^T{\bf w}\not\equiv0\pmod{p}$ and ${\bf 1}^T{\bf y}\not\equiv0\pmod{p}.$ Let $\zeta=({\bf 1}^T{\bf y}){\bf w}-({\bf 1}^T{\bf w}){\bf y}.$ Then $\zeta\not\equiv{\bf 0}\pmod{p}$ and ${\bf 1}^T\zeta\equiv0\pmod{p}.$ Moreover, for each integer $k\geqslant 1,$
$$
\frac{{\bf 1}^TA_{c_\alpha}^{k}\zeta}{c_\alpha}\equiv({\bf 1}^T{\bf y})\frac{\lambda_0^k{\bf 1}^T{\bf w}}{c_\alpha}-({\bf 1}^T{\bf w})\frac{\lambda_0^k{\bf 1}^T{\bf y}}{c_\alpha}\equiv0\pmod{p}.
$$
It follows that $\tilde{W}_{{\alpha}}^T\zeta\equiv{\bf 0}\pmod{p},$ a contradiction to the fact that $\rank_p(\tilde{W}_{{\alpha}})=n-1.$ Combining Cases 1-3, Theorem \ref{thm3.1} follows immediately.
\end{proof}
\subsection{\normalsize Proof of Theorem \ref{thm3.3}}
In this subsection, we present the proof of Theorem \ref{thm3.3}. Before doing so, we need the following lemmas.
\begin{lem}\label{lem3.4}
Let $G$ be a graph and let $\alpha\in[0,1)$. Then $\rank_2(\tilde{W}_{{\alpha}})\leqslant \lceil\frac{n}{2}\rceil.$
\end{lem}
\begin{proof}
We proceed by distinguishing the parity of $n$.

{\bf Case 1.}\ $n$ is even.
In view of Lemma \ref{lem2.5}, one obtains that \begin{equation}\label{eq:4} \tilde{W}_{{\alpha}}^T\tilde{W}_{{\alpha}}=\left( \begin{array}{cccc} {\bf 1}^T{\bf 1} & \frac{{\bf 1}^TA_{c_\alpha}{\bf 1}}{c_\alpha} & \cdots & \frac{{\bf 1}^TA_{c_\alpha}^{n-1}{\bf 1}}{c_\alpha} \\ \frac{{\bf 1}^TA_{c_\alpha}{\bf 1}}{c_\alpha} & \frac{{\bf 1}^TA_{c_\alpha}^2{\bf 1}}{{c_\alpha^2}} & \cdots & \frac{{\bf 1}^TA_{c_\alpha}^{n}{\bf 1}}{{c_\alpha^2}} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{{\bf 1}^TA_{c_\alpha}^{n-1}{\bf 1}}{c_\alpha} & \frac{{\bf 1}^TA_{c_\alpha}^{n}{\bf 1}}{{c_\alpha^2}} & \cdots & \frac{{\bf 1}^TA_{c_\alpha}^{2n-2}{\bf 1}}{{c_\alpha^2}} \\ \end{array} \right)\equiv{\bf 0}\pmod{2}. \end{equation} Therefore, $2\,\rank_2(\tilde{W}_{{\alpha}})=\rank_2(\tilde{W}_{{\alpha}}^T)+\rank_2(\tilde{W}_{{\alpha}})\leqslant n$. It follows that $\rank_2(\tilde{W}_{{\alpha}})\leqslant \frac{n}{2}=\lceil\frac{n}{2}\rceil.$ {\bf Case 2.}\ $n$ is odd. Let $\bar{W}_{{\alpha}}:=\bar{W}_{{\alpha}}(G)=[2\times{\bf 1},\frac{A_{c_\alpha}{\bf 1}}{c_\alpha},\ldots,\frac{A_{c_\alpha}^{n-1}{\bf 1}}{c_\alpha}].$ Based on Lemma \ref{lem2.5}, we obtain that \begin{equation}\label{eq:5} \tilde{W}_{{\alpha}}^T\bar{W}_{{\alpha}}=\left( \begin{array}{cccc} 2\times{\bf 1}^T{\bf 1} & \frac{{\bf 1}^TA_{c_\alpha}{\bf 1}}{c_\alpha} & \cdots & \frac{{\bf 1}^TA_{c_\alpha}^{n-1}{\bf 1}}{c_\alpha} \\ \frac{2\times{\bf 1}^TA_{c_\alpha}{\bf 1}}{c_\alpha} & \frac{{\bf 1}^TA_{c_\alpha}^2{\bf 1}}{{c_\alpha^2}} & \cdots & \frac{{\bf 1}^TA_{c_\alpha}^{n}{\bf 1}}{{c_\alpha^2}} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{2\times{\bf 1}^TA_{c_\alpha}^{n-1}{\bf 1}}{c_\alpha} & \frac{{\bf 1}^TA_{c_\alpha}^{n}{\bf 1}}{{c_\alpha^2}} & \cdots & \frac{{\bf 1}^TA_{c_\alpha}^{2n-2}{\bf 1}}{{c_\alpha^2}} \\ \end{array} \right)\equiv{\bf 0}\pmod{2}. 
\end{equation} It is easy to see that $\rank_2(\bar{W}_{{\alpha}})\geqslant \rank_2(\tilde{W}_{{\alpha}})-1$ and $\rank_2(\tilde{W}_{{\alpha}}^T)+\rank_2(\bar{W}_{{\alpha}})\leqslant n.$ Hence, $\rank_2(\tilde{W}_{{\alpha}})\leqslant \frac{n+1}{2}=\lceil\frac{n}{2}\rceil.$ Combining Cases 1-2, we complete the proof. \end{proof} \begin{lem}\label{lem3.5} Let $G\in \mathfrak{F}_n$ and $\alpha\in[0,1).$ Then the SNF of $\tilde{W}_{{\alpha}}$ is $$ S={\rm diag}(\underbrace{1,\ldots,1}_{\lceil\frac{n}{2}\rceil},\underbrace{2,\ldots,2,2B}_{\lfloor\frac{n}{2}\rfloor}), $$ where $B$ is an odd and square-free integer. Furthermore, $\rank_2(\tilde{W}_{{\alpha}})=\lceil\frac{n}{2}\rceil.$ \end{lem} \begin{proof} Since $G\in \mathfrak{F}_n,$ the quotient $\frac{\det \tilde{W}_{{\alpha}}}{2^{\lfloor\frac{n}{2}\rfloor}}$ is odd and square-free. Then $\det \tilde{W}_{{\alpha}}=\pm 2^{\lfloor\frac{n}{2}\rfloor}p_1p_2\ldots p_s,$ where the $p_i$ are pairwise distinct odd primes. Therefore, the SNF of $\tilde{W}_{{\alpha}}$ is $$ S={\rm diag}(1,\ldots,1,2^{l_1},\ldots,2^{l_{t-1}},2^{l_t}B), $$ where $B=p_1p_2\ldots p_s.$ In view of Lemma \ref{lem3.4}, one has $\rank_2(\tilde{W}_{{\alpha}})\leqslant \lceil\frac{n}{2}\rceil$, which is equivalent to $n-t\leqslant \lceil\frac{n}{2}\rceil.$ It follows that $t\geqslant \lfloor\frac{n}{2}\rfloor.$ Note that $\det(\tilde{W}_{{\alpha}})=\pm \det(S).$ Hence, $l_1+l_2+\cdots+l_t=\lfloor\frac{n}{2}\rfloor.$ Thus, $l_1=l_2=\cdots=l_t=1$ and $t=\lfloor\frac{n}{2}\rfloor.$ So, $\rank_2(\tilde{W}_{{\alpha}})=\lceil\frac{n}{2}\rceil.$ This completes the proof. \end{proof} For ease of presentation, we introduce the following notation.
For a graph $G$ on $n$ vertices, let $\hat{W}_{{\alpha}}(G)$ be the matrix defined as follows: \begin{equation*} \hat{W}_{{\alpha}}:=\hat{W}_{{\alpha}}(G)= \left\{ \begin{aligned} &\left[{\bf 1},\frac{A_{c_\alpha}{\bf 1}}{c_\alpha},\ldots,\frac{A_{c_\alpha}^{\frac{n}{2}-1}{\bf 1}}{c_\alpha}\right],&\ \ \textrm{if $n$ is even;}\\ &\left[\frac{A_{c_\alpha}{\bf 1}}{c_\alpha},\frac{A_{c_\alpha}^2{\bf 1}}{c_\alpha},\ldots,\frac{A_{c_\alpha}^{\frac{n-1}{2}}{\bf 1}}{c_\alpha}\right],&\ \ \textrm{if $n$ is odd.} \end{aligned} \right. \end{equation*} \begin{lem}\label{lem3.7} Let $G\in \mathfrak{F}_n$. Then $\rank_2(\hat{W}_{{\alpha}})=\lfloor\frac{n}{2}\rfloor$ for $\alpha\in[0,1).$ \end{lem} \begin{proof} By Lemma \ref{lem3.5}, one has $\rank_2(\tilde{W}_{{\alpha}})=\lceil\frac{n}{2}\rceil.$ Let $t:=\lceil\frac{n}{2}\rceil.$ In order to prove our result, it suffices to show that the first $t$ columns of $\tilde{W}_{{\alpha}}$ are linearly independent over $\mathbb{F}_2.$ Suppose, to the contrary, that ${\bf 1},\frac{A_{c_\alpha}{\bf 1}}{c_\alpha},\ldots,\frac{A_{c_\alpha}^{t-1}{\bf 1}}{c_\alpha}$ are linearly dependent over $\mathbb{F}_2.$ That is, there exist $b_0,b_1,\ldots,b_{t-1}\in \mathbb{F}_2,$ not all zero, such that $$ b_0{\bf 1}+b_1\frac{A_{c_\alpha}{\bf 1}}{c_\alpha}+\cdots+b_{t-1}\frac{A_{c_\alpha}^{t-1}{\bf 1}}{c_\alpha}\equiv{\bf 0}\pmod{2}. $$ Let $m=\max\{i:0\leqslant i\leqslant t-1,b_i\neq0\}.$ Clearly, $0<m\leqslant t-1.$ Hence $$ \frac{A_{c_\alpha}^{m}{\bf 1}}{c_\alpha}=-b_m^{-1}b_0{\bf 1}-b_m^{-1}b_1\frac{A_{c_\alpha}{\bf 1}}{c_\alpha}-\cdots-b_m^{-1}b_{m-1}\frac{A_{c_\alpha}^{m-1}{\bf 1}}{c_\alpha}\ \text{over}\ \mathbb{F}_2.
$$ It follows that $\frac{A_{c_\alpha}^{m}{\bf 1}}{c_\alpha}$ can be written as a linear combination of ${\bf 1},\frac{A_{c_\alpha}{\bf 1}}{c_\alpha},\ldots, \frac{A_{c_\alpha}^{m-1}{\bf 1}}{c_\alpha}$ over $\mathbb{F}_2.$ Then $$ \frac{A_{c_\alpha}^{m}{\bf 1}}{c_\alpha}=-b_m^{-1}b_0{\bf 1}-b_m^{-1}b_1\frac{A_{c_\alpha}{\bf 1}}{c_\alpha}-\cdots-b_m^{-1}b_{m-1}\frac{A_{c_\alpha}^{m-1}{\bf 1}}{c_\alpha}+2\beta\ \text{over}\ \mathbb{Z} $$ for some integral vector $\beta.$ Moreover, $$ \frac{A_{c_\alpha}^{m+1}{\bf 1}}{c_\alpha}=-{c_\alpha}b_m^{-1}b_0\frac{A_{c_\alpha}{\bf 1}}{c_\alpha}-b_m^{-1}b_1\frac{A_{c_\alpha}^2{\bf 1}}{c_\alpha}-\cdots-b_m^{-1}b_{m-1}\frac{A_{c_\alpha}^{m}{\bf 1}}{c_\alpha}+2A_{c_\alpha}\beta\ \text{over}\ \mathbb{Z}. $$ Hence $$ \frac{A_{c_\alpha}^{m+1}{\bf 1}}{c_\alpha}=-{c_\alpha}b_m^{-1}b_0\frac{A_{c_\alpha}{\bf 1}}{c_\alpha}-b_m^{-1}b_1\frac{A_{c_\alpha}^2{\bf 1}}{c_\alpha}-\cdots-b_m^{-1}b_{m-1}\frac{A_{c_\alpha}^{m}{\bf 1}}{c_\alpha}\ \text{over}\ \mathbb{F}_2, $$ i.e., $\frac{A_{c_\alpha}^{m+1}{\bf 1}}{c_\alpha}$ can be written as a linear combination of ${\bf 1},\frac{A_{c_\alpha}{\bf 1}}{c_\alpha},\ldots, \frac{A_{c_\alpha}^{m-1}{\bf 1}}{c_\alpha}$ over $\mathbb{F}_2.$ By a similar discussion, we can show that for each integer $i\geqslant 0,$ the vector $\frac{A_{c_\alpha}^{m+i}{\bf 1}}{c_\alpha}$ can be written as a linear combination of ${\bf 1},\frac{A_{c_\alpha}{\bf 1}}{c_\alpha},\ldots, \frac{A_{c_\alpha}^{m-1}{\bf 1}}{c_\alpha}$ over $\mathbb{F}_2.$ Therefore, $\rank_2(\tilde{W}_{{\alpha}})\leqslant m\leqslant t-1,$ a contradiction. This completes the proof.
\end{proof} For a graph $G,$ let $\tilde{W}'_{{\alpha}}:=\tilde{W}'_{{\alpha}}(G)=\left[{\bf 1},\frac{A_{c_\alpha}^2{\bf 1}}{c_\alpha},\ldots,\frac{A_{c_\alpha}^{2n-2}{\bf 1}}{c_\alpha}\right],$ and let \begin{equation*} \hat{W}'_{{\alpha}}:=\hat{W}'_{{\alpha}}(G)= \left\{ \begin{aligned} &\left[{\bf 1},\frac{A_{c_\alpha}^2{\bf 1}}{c_\alpha},\ldots,\frac{A_{c_\alpha}^{n-2}{\bf 1}}{c_\alpha}\right],&\ \ \textrm{if $n$ is even;}\\ &\left[\frac{A_{c_\alpha}^2{\bf 1}}{c_\alpha},\frac{A_{c_\alpha}^4{\bf 1}}{c_\alpha},\ldots,\frac{A_{c_\alpha}^{n-1}{\bf 1}}{c_\alpha}\right],&\ \ \textrm{if $n$ is odd.} \end{aligned} \right. \end{equation*} \begin{lem}\label{lem3.8} Let $G\in \mathfrak{F}_n$. Then $\rank_2\left(\frac{\tilde{W}_{{\alpha}}^T\hat{W}'_{{\alpha}}}{2}\right)=\lfloor\frac{n}{2}\rfloor$ for $\alpha\in[0,1).$ \end{lem} \begin{proof} For even $n,$ in view of Lemma \ref{lem3.5}, one has $\det \left(\frac{\tilde{W}_{{\alpha}}^T\tilde{W}_{{\alpha}}}{2}\right)=\frac{(2^{\lfloor\frac{n}{2}\rfloor}B)^2}{2^n}=B^2,$ where $B$ is defined in Lemma \ref{lem3.5}. Notice that $B$ is odd. Hence, $\rank_2 \left(\frac{\tilde{W}_{{\alpha}}^T\tilde{W}_{{\alpha}}}{2}\right)=n.$ Therefore, $\rank_2\left(\frac{\tilde{W}_{{\alpha}}^T\hat{W}'_{{\alpha}}}{2}\right)$ equals the number of columns of $\hat{W}'_{{\alpha}},$ as desired. For odd $n,$ let $\bar{W}_{{\alpha}}:=\bar{W}_{{\alpha}}(G)=[2\times {\bf 1},\frac{A_{c_\alpha}{\bf 1}}{c_\alpha},\ldots,\frac{A_{c_\alpha}^{n-1}{\bf 1}}{c_\alpha}]$ be the matrix defined in Lemma~\ref{lem3.4}. In view of Lemma \ref{lem3.5}, one has $\det \left(\frac{\tilde{W}_{{\alpha}}^T\bar{W}_{{\alpha}}}{2}\right)=B^2.$ It follows that $\rank_2 \left(\frac{\tilde{W}_{{\alpha}}^T\bar{W}_{{\alpha}}}{2}\right)=n.$ Hence, $\rank_2\left(\frac{\tilde{W}_{{\alpha}}^T\hat{W}'_{{\alpha}}}{2}\right)$ is equal to the number of columns of $\hat{W}'_{{\alpha}},$ as desired. This completes the proof.
\end{proof} Now, we are ready to present the proof of Theorem \ref{thm3.3}. \begin{proof}[\bf Proof of Theorem \ref{thm3.3}]\ Suppose on the contrary that $l$ is even. By Lemma \ref{lem3.2}, there exists a column ${\bf v}\,({\bf v}\not\equiv{\bf 0}\pmod{2})$ of $lU$ satisfying that $\tilde{W}_{{\alpha}}^T{\bf v}\equiv{\bf 0}\pmod{2}$ and ${\bf v}^TA_{c_\alpha}^k{\bf v}\equiv0\pmod{4}$ for each integer $k\geqslant 0.$ In view of \eqref{eq:4}, \eqref{eq:5} and Lemma \ref{lem3.7}, we obtain that ${\bf v}$ can be written as a linear combination of the column vectors of $\hat{W}_{{\alpha}}$ over $\mathbb{F}_2,$ that is, there exist integral vectors ${\bf u}\,({\bf u}\not\equiv{\bf 0}\pmod{2})$ and $\beta$ such that ${\bf v}=\hat{W}_{{\alpha}}{\bf u}+2\beta.$ Therefore, for each integer $k\geqslant 0,$ \begin{align*} {\bf v}^TA_{c_\alpha}^k{\bf v}=(\hat{W}_{{\alpha}}{\bf u}+2\beta)^TA_{c_\alpha}^k(\hat{W}_{{\alpha}}{\bf u}+2\beta)\equiv {\bf u}^T\hat{W}_{{\alpha}}^TA_{c_\alpha}^k\hat{W}_{{\alpha}}{\bf u}\equiv0\pmod{4}. \end{align*} We proceed by distinguishing the parity of $n$. {\bf Case 1.}\ $n$ is even. In view of Lemma \ref{lem2.5}, one has for each integer $k\geqslant 0,$ \begin{equation*} \hat{W}_{{\alpha}}^TA_{c_\alpha}^k\hat{W}_{{\alpha}}=\left[ \begin{array}{cccc} {\bf 1}^TA_{c_\alpha}^k{\bf 1} & \frac{{\bf 1}^TA_{c_\alpha}^{1+k}{\bf 1}}{c_\alpha} & \cdots & \frac{{\bf 1}^TA_{c_\alpha}^{\frac{n}{2}-1+k}{\bf 1}}{c_\alpha} \\ \frac{{\bf 1}^TA_{c_\alpha}^{1+k}{\bf 1}}{c_\alpha} & \frac{{\bf 1}^TA_{c_\alpha}^{2+k}{\bf 1}}{{c_\alpha^2}} & \cdots & \frac{{\bf 1}^TA_{c_\alpha}^{\frac{n}{2}+k}{\bf 1}}{{c_\alpha^2}} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{{\bf 1}^TA_{c_\alpha}^{\frac{n}{2}-1+k}{\bf 1}}{c_\alpha} & \frac{{\bf 1}^TA_{c_\alpha}^{\frac{n}{2}+k}{\bf 1}}{{c_\alpha^2}} & \cdots & \frac{{\bf 1}^TA_{c_\alpha}^{n-2+k}{\bf 1}}{{c_\alpha^2}} \\ \end{array} \right]\equiv{\bf 0}\pmod{2}.
\end{equation*} Put $M:=(M_{ij})_{\frac{n}{2}\times\frac{n}{2}}=\hat{W}_{{\alpha}}^TA_{c_\alpha}^k\hat{W}_{{\alpha}}$ and ${\bf u}=[u_1,u_2,\ldots,u_{\frac{n}{2}}]^T.$ Hence, for each integer $k\geqslant 0,$ \begin{align} {\bf u}^T\hat{W}_{{\alpha}}^TA_{c_\alpha}^k\hat{W}_{{\alpha}}{\bf u}&=\sum_{1\leqslant i\leqslant {\frac{n}{2}}}M_{ii}u_i^2+2\sum_{1\leqslant i<j\leqslant {\frac{n}{2}}}M_{ij}u_iu_j\notag\\ &\equiv ({\bf 1}^TA_{c_\alpha}^k{\bf 1})u_1+\frac{{\bf 1}^TA_{c_\alpha}^{2+k}{\bf 1}}{{c_\alpha^2}}u_2+\cdots+\frac{{\bf 1}^TA_{c_\alpha}^{n-2+k}{\bf 1}}{{c_\alpha^2}}u_{\frac{n}{2}}\label{eq:3.09}\\ &=\left[{\bf 1}^TA_{c_\alpha}^k{\bf 1},\frac{{\bf 1}^TA_{c_\alpha}^{2+k}{\bf 1}}{{c_\alpha^2}},\ldots,\frac{{\bf 1}^TA_{c_\alpha}^{n-2+k}{\bf 1}}{{c_\alpha^2}}\right]{\bf u}\notag\\ &\equiv0\pmod{4},\notag \end{align} where the congruence in \eqref{eq:3.09} follows from the fact that $u_i^2\equiv u_i\pmod{2}$ for any $1\leqslant i\leqslant \frac{n}{2}$ and that every entry $M_{ij}$ is even. Therefore, \[\label{eq:3.5} \left[\frac{{\bf 1}^TA_{c_\alpha}^k{\bf 1}}{2},\frac{{\bf 1}^TA_{c_\alpha}^{2+k}{\bf 1}}{2{c_\alpha^2}},\ldots,\frac{{\bf 1}^TA_{c_\alpha}^{n-2+k}{\bf 1}}{2{c_\alpha^2}}\right]{\bf u}\equiv0\pmod{2}.
\] If $c_\alpha$ is even, then we define \begin{align*} M_1:&=\left[ \begin{array}{ccccc} \frac{{\bf 1}^T{\bf 1}}{2} & \frac{{\bf 1}^TA_{c_\alpha}^2{\bf 1}}{2{c_\alpha^2}} & \frac{{\bf 1}^TA_{c_\alpha}^4{\bf 1}}{2{c_\alpha^2}} & \cdots & \frac{{\bf 1}^TA_{c_\alpha}^{n-2}{\bf 1}}{2{c_\alpha^2}} \\ 0 & \frac{{\bf 1}^TA_{c_\alpha}^{3}{\bf 1}}{2{c_\alpha^2}} & \frac{{\bf 1}^TA_{c_\alpha}^{5}{\bf 1}}{2{c_\alpha^2}} & \cdots & \frac{{\bf 1}^TA_{c_\alpha}^{n-1}{\bf 1}}{2{c_\alpha^2}} \\ 0 & \frac{{\bf 1}^TA_{c_\alpha}^{4}{\bf 1}}{2{c_\alpha^2}} & \frac{{\bf 1}^TA_{c_\alpha}^{6}{\bf 1}}{2{c_\alpha^2}} & \cdots & \frac{{\bf 1}^TA_{c_\alpha}^{n}{\bf 1}}{2{c_\alpha^2}} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & \frac{{\bf 1}^TA_{c_\alpha}^{n+1}{\bf 1}}{2{c_\alpha^2}} & \frac{{\bf 1}^TA_{c_\alpha}^{n+3}{\bf 1}}{2{c_\alpha^2}} & \cdots & \frac{{\bf 1}^TA_{c_\alpha}^{2n-3}{\bf 1}}{2{c_\alpha^2}} \\ \end{array} \right]\\ &\equiv\left[ \begin{array}{ccccc} \frac{{\bf 1}^T{\bf 1}}{2} & \frac{{\bf 1}^TA_{c_\alpha}^2{\bf 1}}{2{c_\alpha^2}} & \frac{{\bf 1}^TA_{c_\alpha}^4{\bf 1}}{2{c_\alpha^2}} & \cdots & \frac{{\bf 1}^TA_{c_\alpha}^{n-2}{\bf 1}}{2{c_\alpha^2}} \\ \frac{{\bf 1}^TA_{c_\alpha}{\bf 1}}{2} & \frac{{\bf 1}^TA_{c_\alpha}^{3}{\bf 1}}{2{c_\alpha^2}} & \frac{{\bf 1}^TA_{c_\alpha}^{5}{\bf 1}}{2{c_\alpha^2}} & \cdots & \frac{{\bf 1}^TA_{c_\alpha}^{n-1}{\bf 1}}{2{c_\alpha^2}} \\ \frac{{\bf 1}^TA_{c_\alpha}^2{\bf 1}}{2} & \frac{{\bf 1}^TA_{c_\alpha}^{4}{\bf 1}}{2{c_\alpha^2}} & \frac{{\bf 1}^TA_{c_\alpha}^{6}{\bf 1}}{2{c_\alpha^2}} & \cdots & \frac{{\bf 1}^TA_{c_\alpha}^{n}{\bf 1}}{2{c_\alpha^2}} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \frac{{\bf 1}^TA_{c_\alpha}^{n-1}{\bf 1}}{2} & \frac{{\bf 1}^TA_{c_\alpha}^{n+1}{\bf 1}}{2{c_\alpha^2}} & \frac{{\bf 1}^TA_{c_\alpha}^{n+3}{\bf 1}}{2{c_\alpha^2}} & \cdots & \frac{{\bf 1}^TA_{c_\alpha}^{2n-3}{\bf 1}}{2{c_\alpha^2}} \\ \end{array} \right]\pmod{2}, \end{align*} the second congruence equation follows from Lemma 
\ref{lem2.5}. In view of \eqref{eq:3.5}, one has $M_1{\bf u}\equiv{\bf 0}\pmod{2}.$ Furthermore, we define \begin{align*} M_2:&=\left[ \begin{array}{ccccc} \frac{{\bf 1}^T{\bf 1}}{2} & 0 & 0 & \cdots & 0 \\ \frac{{\bf 1}^TA_{c_\alpha}{\bf 1}}{2{c_\alpha}} & \frac{{\bf 1}^TA_{c_\alpha}^{3}{\bf 1}}{2{c_\alpha^2}} & \frac{{\bf 1}^TA_{c_\alpha}^{5}{\bf 1}}{2{c_\alpha^2}} & \cdots & \frac{{\bf 1}^TA_{c_\alpha}^{n-1}{\bf 1}}{2{c_\alpha^2}} \\ 0 & \frac{{\bf 1}^TA_{c_\alpha}^{4}{\bf 1}}{2{c_\alpha^2}} & \frac{{\bf 1}^TA_{c_\alpha}^{6}{\bf 1}}{2{c_\alpha^2}} & \cdots & \frac{{\bf 1}^TA_{c_\alpha}^{n}{\bf 1}}{2{c_\alpha^2}} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & \frac{{\bf 1}^TA_{c_\alpha}^{n+1}{\bf 1}}{2{c_\alpha^2}} & \frac{{\bf 1}^TA_{c_\alpha}^{n+3}{\bf 1}}{2{c_\alpha^2}} & \cdots & \frac{{\bf 1}^TA_{c_\alpha}^{2n-3}{\bf 1}}{2{c_\alpha^2}} \\ \end{array} \right]\\ &\equiv\left[ \begin{array}{ccccc} \frac{{\bf 1}^T{\bf 1}}{2} & \frac{{\bf 1}^TA_{c_\alpha}^2{\bf 1}}{2{c_\alpha}} & \frac{{\bf 1}^TA_{c_\alpha}^4{\bf 1}}{2{c_\alpha}} & \cdots & \frac{{\bf 1}^TA_{c_\alpha}^{n-2}{\bf 1}}{2{c_\alpha}} \\ \frac{{\bf 1}^TA_{c_\alpha}{\bf 1}}{2{c_\alpha}} & \frac{{\bf 1}^TA_{c_\alpha}^{3}{\bf 1}}{2{c_\alpha^2}} & \frac{{\bf 1}^TA_{c_\alpha}^{5}{\bf 1}}{2{c_\alpha^2}} & \cdots & \frac{{\bf 1}^TA_{c_\alpha}^{n-1}{\bf 1}}{2{c_\alpha^2}} \\ \frac{{\bf 1}^TA_{c_\alpha}^2{\bf 1}}{2{c_\alpha}} & \frac{{\bf 1}^TA_{c_\alpha}^{4}{\bf 1}}{2{c_\alpha^2}} & \frac{{\bf 1}^TA_{c_\alpha}^{6}{\bf 1}}{2{c_\alpha^2}} & \cdots & \frac{{\bf 1}^TA_{c_\alpha}^{n}{\bf 1}}{2{c_\alpha^2}} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \frac{{\bf 1}^TA_{c_\alpha}^{n-1}{\bf 1}}{2{c_\alpha}} & \frac{{\bf 1}^TA_{c_\alpha}^{n+1}{\bf 1}}{2{c_\alpha^2}} & \frac{{\bf 1}^TA_{c_\alpha}^{n+3}{\bf 1}}{2{c_\alpha^2}} & \cdots & \frac{{\bf 1}^TA_{c_\alpha}^{2n-3}{\bf 1}}{2{c_\alpha^2}} \\ \end{array} \right] =\frac{\tilde{W}_{{\alpha}}^T\hat{W}'_{{\alpha}}}{2}\pmod{2}, \end{align*} the second congruence 
equation follows from Lemma \ref{lem2.5}. By Lemma \ref{lem3.8}, we obtain that $\rank_2\left(\frac{\tilde{W}_{{\alpha}}^T\hat{W}'_{{\alpha}}}{2}\right)=\frac{n}{2},$ that is, the column rank of $M_2$ is full over $\mathbb{F}_2.$ Hence, the last $\frac{n}{2}-1$ columns of $M_2$ are linearly independent over $\mathbb{F}_2.$ Therefore, the last $\frac{n}{2}-1$ columns of $M_1$ are linearly independent over $\mathbb{F}_2.$ It follows that $\rank_2(M_1)\geqslant \frac{n}{2}-1.$ By applying \eqref{eq:3.5} with $k=0$, we have $\frac{{\bf 1}^T{\bf 1}}{2}\equiv0\pmod{2}.$ Therefore, $\rank_2(M_1)= \frac{n}{2}-1.$ Then the solution space of $M_1{\bf u}\equiv{\bf 0}\pmod{2}$ has dimension $1$. It is routine to check that it is spanned by ${\bf u}\equiv[1,0,\ldots,0]^T\pmod{2}.$ Hence, ${\bf v}\equiv\hat{W}_{{\alpha}}{\bf u}\equiv {\bf 1}\pmod{2}.$ In view of Lemmas \ref{lem3.6} and \ref{lem3.5}, one has $l\mid2B.$ It follows from Theorem~\ref{thm3.1} that $l\mid2$ and therefore $l=2.$ Recall that $U$ is orthogonal and $U{\bf 1}={\bf 1}.$ Based on the choice of ${\bf v}$ and $\frac{{\bf 1}^T{\bf 1}}{2}\equiv0\pmod{2},$ we know that $n=4.$ However, there are only six connected graphs on $4$ vertices. It is routine to check that each of them is DGA$_\alpha$S. Hence $l=1,$ a contradiction. If $c_\alpha=1,$ then by \eqref{eq:3.5}, one has $M_2{\bf u}\equiv{\bf 0}\pmod{2}.$ Together with Lemma \ref{lem3.8}, we obtain ${\bf u}\equiv{\bf 0}\pmod{2},$ a contradiction. {\bf Case 2.}\ $n$ is odd. By an argument similar to that for even $n$, we can show that for each integer $k\geqslant 0,$ \[\label{eq:3.6} \left[\frac{{\bf 1}^TA_{c_\alpha}^{2+k}{\bf 1}}{2{c_\alpha^2}},\frac{{\bf 1}^TA_{c_\alpha}^{4+k}{\bf 1}}{2{c_\alpha^2}},\ldots,\frac{{\bf 1}^TA_{c_\alpha}^{n-1+k}{\bf 1}}{2{c_\alpha^2}}\right]{\bf u}\equiv0\pmod{2}.
\] Define \begin{align*} M_3:=\left[ \begin{array}{ccccc} \frac{{\bf 1}^TA_{c_\alpha}^2{\bf 1}}{2{c_\alpha}} & \frac{{\bf 1}^TA_{c_\alpha}^4{\bf 1}}{2{c_\alpha}} & \frac{{\bf 1}^TA_{c_\alpha}^6{\bf 1}}{2{c_\alpha}} & \cdots & \frac{{\bf 1}^TA_{c_\alpha}^{n-1}{\bf 1}}{2{c_\alpha}} \\ \frac{{\bf 1}^TA_{c_\alpha}^3{\bf 1}}{2{c_\alpha^2}} & \frac{{\bf 1}^TA_{c_\alpha}^{5}{\bf 1}}{2{c_\alpha^2}} & \frac{{\bf 1}^TA_{c_\alpha}^{7}{\bf 1}}{2{c_\alpha^2}} & \cdots & \frac{{\bf 1}^TA_{c_\alpha}^{n}{\bf 1}}{2{c_\alpha^2}} \\ \frac{{\bf 1}^TA_{c_\alpha}^4{\bf 1}}{2{c_\alpha^2}} & \frac{{\bf 1}^TA_{c_\alpha}^{6}{\bf 1}}{2{c_\alpha^2}} & \frac{{\bf 1}^TA_{c_\alpha}^{8}{\bf 1}}{2{c_\alpha^2}} & \cdots & \frac{{\bf 1}^TA_{c_\alpha}^{n+1}{\bf 1}}{2{c_\alpha^2}} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \frac{{\bf 1}^TA_{c_\alpha}^{n+1}{\bf 1}}{2{c_\alpha^2}} & \frac{{\bf 1}^TA_{c_\alpha}^{n+3}{\bf 1}}{2{c_\alpha^2}} & \frac{{\bf 1}^TA_{c_\alpha}^{n+5}{\bf 1}}{2{c_\alpha^2}} & \cdots & \frac{{\bf 1}^TA_{c_\alpha}^{2n-2}{\bf 1}}{2{c_\alpha^2}} \\ \end{array} \right] =\frac{\tilde{W}_{{\alpha}}^T\hat{W}'_{{\alpha}}}{2}. \end{align*} In view of \eqref{eq:3.6}, one has $M_3{\bf u}\equiv{\bf 0}\pmod{2}.$ By Lemma \ref{lem3.8}, we obtain that $\rank_2\left(\frac{\tilde{W}_{{\alpha}}^T\hat{W}'_{{\alpha}}}{2}\right)=\frac{n-1}{2},$ that is, the column rank of $M_3$ is full over $\mathbb{F}_2.$ Hence, ${\bf u}\equiv{\bf 0}\pmod{2},$ a contradiction. Combining Cases 1-2, one obtains that $l$ is odd. This completes the proof. \end{proof} \section{\normalsize Examples}\setcounter{equation}{0} In this section, we give two specific examples of DGA$_\alpha$S graphs which satisfy the conditions of Theorem~\ref{thm1.1} for some $\alpha\in[0,1)$.
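The determinant computations in the following examples can be reproduced with exact integer arithmetic. The Python sketch below (our own illustration; the function names are ours, and it assumes our reading of the notation, namely that $c_\alpha$ is the denominator of the rational $\alpha$ and $A_{c_\alpha}=c_\alpha\alpha D+c_\alpha(1-\alpha)A$, so that $A_{c_\alpha}{\bf 1}=c_\alpha{\bf d}$ and every column of $\tilde{W}_\alpha$ is integral) assembles the walk matrix and evaluates its determinant by fraction-free Bareiss elimination:

```python
from fractions import Fraction

def walk_matrix(A, alpha):
    """Rows of W~_alpha = [1, A_c 1/c, ..., A_c^{n-1} 1/c], assuming
    A_c = c*alpha*D + c*(1-alpha)*A with c the denominator of alpha.
    Since A_c 1 = c*d (d = degree vector), every column is integral."""
    n = len(A)
    a = Fraction(alpha)
    c = a.denominator
    deg = [sum(row) for row in A]
    Ac = [[int(c * a) * deg[i] * (i == j) + int(c * (1 - a)) * A[i][j]
           for j in range(n)] for i in range(n)]
    cols = [[1] * n, deg[:]]          # the k = 0 and k = 1 columns
    for _ in range(n - 2):            # A_c^k 1 / c = A_c^{k-1} d
        v = cols[-1]
        cols.append([sum(Ac[i][j] * v[j] for j in range(n)) for i in range(n)])
    return [[cols[j][i] for j in range(n)] for i in range(n)]  # transpose

def det_bareiss(M):
    """Exact determinant of an integer matrix (fraction-free elimination)."""
    M = [row[:] for row in M]
    n, sign, prev = len(M), 1, 1
    for k in range(n - 1):
        if M[k][k] == 0:              # pivot if necessary
            r = next((i for i in range(k + 1, n) if M[i][k]), None)
            if r is None:
                return 0
            M[k], M[r], sign = M[r], M[k], -sign
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                M[i][j] = (M[i][j] * M[k][k] - M[i][k] * M[k][j]) // prev
        prev = M[k][k]
    return sign * M[-1][-1]

# Sanity check on K_4 with alpha = 3/4 (so c = 4 and A_c = 3D + A).
# K_4 is regular, so every column of the walk matrix is a multiple of
# the all-ones vector and the determinant vanishes.
K4 = [[0 if i == j else 1 for j in range(4)] for i in range(4)]
W = walk_matrix(K4, Fraction(3, 4))
print(W[0], det_bareiss(W))
```

For the matrices in the examples below one would feed the listed $A(G)$ and the stated $\alpha$ into `walk_matrix`; the paper's own computations were carried out in \textit{Mathematica}.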
\begin{ex} {\rm Let the adjacency matrix of graph $G$ be given as follows: \begin{equation*} A(G)=\left( \begin{array}{cccccccccccccc} 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 1 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 0 \\ 1 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ \end{array} \right)_{14\times14}. \end{equation*} Hence, $D(G)={\rm diag}(4, 8, 3, 3, 5, 7, 5, 9, 7, 5, 6, 7, 7, 4).$ It can be computed directly using \textit{Mathematica} \cite{math} that $$ \det(\tilde{W}_{\frac{3}{4}}(G))=2^{7}\times5\times331\times143807\times545912603\times30283875584713\times77826853969408184689 $$ and $$ \det(\tilde{W}_{\frac{5}{6}}(G))={2^{7}}\times13\times31\times37\times327773499972443320387744582054393134299875049186710656493725761. 
$$ By Theorem \ref{thm1.1}, we know that $G$ is DGA$_{\frac{3}{4}}$S and DGA$_{\frac{5}{6}}$S.} \end{ex} \begin{ex} {\rm Let the adjacency matrix of graph $G$ be given as follows: \begin{equation*} A(G)=\left( \begin{array}{ccccccccccccc} 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 \\ 1 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 \\ 1 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 0 \\ 1 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ \end{array} \right)_{13\times13}. \end{equation*} Clearly, $D(G)={\rm diag}(7, 3, 3, 5, 7, 4, 9, 6, 5, 6, 7, 7, 3).$ It can be computed directly using \textit{Mathematica} \cite{math} that $$ \det(\tilde{W}_{\frac{2}{3}}(G))={2^{6}}\times5\times97\times1367\times10067\times118189\times132430201\times145112609, $$ and \begin{align*} \det(\tilde{W}_{\frac{10}{11}}(G))=&{2^{6}}\times3\times2567\times3251\times18593\times110574553\\ &\times19912837250380292202346041446775471026303813. \end{align*} By Theorem \ref{thm1.1}, we know that $G$ is DGA$_{\frac{2}{3}}$S and DGA$_{\frac{10}{11}}$S.} \end{ex} \section{\normalsize Concluding remarks} In this paper, we give a simple arithmetic criterion for determining whether a graph $G$ is DGA$_\alpha$S. Obviously, the main results in \cite{0005} and \cite{0002} (i.e., Corollary \ref{cor1.01} and Corollary \ref{cor1.02}) are the direct consequences of our results in this paper. Notice that we do not consider the case that $n$ is even and $c_\alpha(\geqslant 3)$ is odd in Theorem \ref{thm1.1}. 
Now, we pose the following conjecture. \begin{conj}\label{conj1} Let $G\in \mathfrak{F}_n$ and let $\alpha\in[0,1),$ where $n$ is even and $c_\alpha\,(\geqslant 3)$ is odd. Then $G$ is DGA$_{\alpha}$S. \end{conj} In view of Lemma~\ref{thm2.2}, we know that in order to prove this conjecture, it suffices to show that the condition in Conjecture \ref{conj1} implies $l(U)=1$ for all matrices $U\in \Gamma_\alpha(G),$ which is equivalent to proving that no prime $p$ divides $l(U).$ Based on Theorem \ref{thm3.1}, one obtains that any odd prime $p$ satisfies $p\nmid l(U)$ for all matrices $U\in \Gamma_\alpha(G).$ Hence, in order to prove Conjecture \ref{conj1}, it is sufficient to show that $l(U)$ is odd for each matrix $U\in \Gamma_\alpha(G).$ As our results show, the arithmetic properties of $\det \tilde{W}_\alpha(G)$ are closely related to whether a given graph is DGA$_\alpha$S for $\alpha\in[0,1)$. By comparing Theorem~\ref{thm1.1} with Corollaries~\ref{cor1.01}-\ref{cor1.02}, we pose the following open problem: \begin{pb} Let $G$ be a graph of order $n$ and let $\alpha\in[0,1).$ If $\frac{\det \tilde{W}_\alpha(G)}{2^{\lfloor\frac{n}{2}\rfloor}}$ is an odd and square-free integer, is $G$ necessarily DGA$_\alpha$S? \end{pb}
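The mod-$p$ rank computations that underpin Lemmas \ref{lem3.4}--\ref{lem3.8} (chiefly with $p=2$) are easy to experiment with when testing Conjecture \ref{conj1} on concrete graphs. A minimal Python helper (our own sketch; it assumes $p$ is prime, since inverses are taken by Fermat's little theorem):

```python
def rank_mod_p(M, p=2):
    """Rank of an integer matrix over F_p via Gaussian elimination
    (p is assumed prime; p = 2 is the case used in Section 3)."""
    M = [[x % p for x in row] for row in M]
    rows, cols = len(M), len(M[0]) if M else 0
    rank = 0
    for col in range(cols):
        # find a pivot at or below the current row
        piv = next((r for r in range(rank, rows) if M[r][col]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][col], p - 2, p)   # Fermat inverse; equals 1 when p = 2
        M[rank] = [(x * inv) % p for x in M[rank]]
        for r in range(rows):
            if r != rank and M[r][col]:
                f = M[r][col]
                M[r] = [(a - f * b) % p for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank
```

For instance, applied to the walk matrix of a graph in $\mathfrak{F}_n$, `rank_mod_p(W, 2)` should return $\lceil\frac{n}{2}\rceil$ by Lemma \ref{lem3.5}.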